\section*{Introduction}
The main motivation of this article is the study of rational transformations and homaloidal hypersurfaces over an algebraically closed field $\k$ of any characteristic. Recall that, given a homogeneous square free polynomial $f\in\k[x_0,\ldots, x_n]$, one defines the \emph{polar map} $\Phi_f:\P^{n}\dashrightarrow \P^{n}$ by sending $x\in\P^{n}$ to $\Big(f_0(x):\ldots:f_n(x)\Big)$ where $f_i=\frac{\partial f}{\partial x_i}$. The hypersurface $F=\lbrace f=0\rbrace \subset \P^{n}$ is called \emph{homaloidal} if $\Phi_f$ is birational. It was established by I.V.Dolgachev \cite[Theorem 4]{dolgachev2000polar} that the only homaloidal complex curves are the smooth conics, the unions of three general lines and the unions of a smooth conic with one of its tangent lines. Furthermore, it was noticed by A.V.D\'oria, S.H.Hassanzadeh and A.Simis \cite{dorHassSim2012polar} that a common property of these complex curves is that the \emph{base locus} of $\Phi_f$, i.e.\ the scheme of zeros of the \emph{jacobian ideal} $I=(f_0,\ldots,f_n)$, is a local complete intersection at each of its points.
\begin{pb}{\cite[Question 2.7]{dorHassSim2012polar}}\label{pbClassif} Let $f\in\k[x_0,x_1,x_2]$ be a square free homogeneous polynomial whose polar
map is birational. Is the jacobian ideal locally a
complete intersection at its minimal primes?
\end{pb}
In the spirit of studying the difference between characteristic zero and positive characteristic, we also consider the following \emph{reduction problem}. If $f=q_1^{\alpha_1}\ldots q_m^{\alpha_m}$ is not square free, or equivalently if $F$ is not reduced, the polar map $\Phi_f$ is defined by the moving part of the linear system generated by $f_0,\ldots,f_n$. Over the field of complex numbers, it was established by A.Dimca and S.Papadima \cite{dimcapap2003hypersurfacecomplements} that $\Phi_f$ is birational if and only if so is the polar map $\Phi_{f_{red}}$ associated to $f_{red}=q_1\ldots q_m$. Over a field of positive characteristic, this equivalence trivially fails: in characteristic $2$, for $f= x^2yz$, $\Phi_{f_{red}}$ is birational whereas $\Phi_f$ is not even dominant. This leads to the following problem.
\begin{pb}\label{pbRed}
Over a field of positive characteristic, given $\Phi_f$ dominant, is it birational if and only if so is $\Phi_{f_{red}}$?
\end{pb}
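To make the failure in characteristic $2$ noted above explicit (an elementary verification): for $f=x^2yz$ the partial derivatives are
\[f_x=2xyz=0,\qquad f_y=x^2z,\qquad f_z=x^2y,\]
so the image of $\Phi_f$ is contained in a hyperplane and $\Phi_f$ is not dominant. By contrast, $f_{red}=xyz$ gives $\Phi_{f_{red}}:(x:y:z)\dashrightarrow(yz:xz:xy)$, the standard Cremona involution, which is birational in every characteristic.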
Both problems can be considered from a unified point of view by studying more generally the relation between the \emph{topological degree} $d_t(\Phi)$ of a rational map $\Phi$ and the geometric properties of its base locus. In the polar case, i.e.\ when $\Phi=\Phi_f$ for a homogeneous square free polynomial $f\in\k[x_0,\ldots, x_n]$ of degree $d$, the base locus $Z=\lbrace f_0=\ldots=f_n=0\rbrace\subset\P^{n}$ of $\Phi_f$ coincides with the singular locus of the hypersurface $F=\lbrace f=0\rbrace$. Over $\mathbb{C}$, assuming that this singular locus is finite, the following relation was established by A.Dimca and S.Papadima \cite{dimcapap2003hypersurfacecomplements}:
\begin{equation}\label{eqMuIntro}\stepcounter{eqIntro}\tag{\theeqIntro}
d_t(\Phi_f)= (d-1)^n-\mu_f(Z)
\end{equation}
where $\mu_f(Z)$ is the global Milnor number of $F$ (see \cite{Milnor1968SingularPoints} or \Cref{UsMuTau}). Our main goal is to give an algebraic proof of this formula and to generalise it to the following setting.
Let $X$ be an $n$-dimensional smooth quasi-projective variety over $\k$ and let $\Phi:X\dashrightarrow \P^{n}$ be a rational map with zero-dimensional base locus $Z$ determined by an $(n+1)$-dimensional subspace $\tnV$ of global sections of a line bundle $\mathcal{L}$ over $X$. Our aim is to read off the topological degree $d_t(\Phi)$ of $\Phi$ from properties of the ideal sheaf $\mathcal{I}$ of $Z$, more precisely from the \emph{sheaf of relations} $\mathcal{E}$ defined as the kernel of the canonical evaluation map $ev:\O_X\otimes\tnV\rightarrow \mathcal{I}\otimes\mathcal{L}$.
In \Cref{SectionDT}, we study the projectivization $\pi_1:\mathbb{X}=\P(\mathcal{I})\rightarrow X$ of the symmetric algebra of $\mathcal{I}$. We show in particular that it decomposes as the union of the blow-up $\tilde{X}$ of $X$ at $\mathcal{I}$ and a \emph{torsion part} $\mathbb{T}_Z$ supported over $Z$. By construction the topological degree of $\Phi$ is equal to that of the restriction to $\tilde{X}$ of the lift $\pi_2:\mathbb{X}\rightarrow \P(\tnV)$ of $\Phi$. In other words $d_t(\Phi)=\deg\Big(c_1\restriction{(\O_{\mathbb{X}}(1)}{\tilde{X}})^n\Big)$.
In this context, we can also consider two other related notions of ``naive'' topological degrees: the degree $\deg\Big(c_1(\O_{\mathbb{X}}(1))^n\Big)$ of $\pi_2$ and the algebraic degree of $\Phi$ minus the length of $Z$. In \Cref{CosectionLongueur}~\ref{MuTauL} we show that the second one coincides with the degree of the $0$-cycle $[\mathbb{V}\Big(\cos(\mathcal{E})\Big)]$ associated to the scheme of zeros of a general cosection $\cos(\mathcal{E}):\mathcal{E}\rightarrow\O_X$ of $\mathcal{E}$. Our main result, proven in \Cref{SectionProof}, asserts in particular that these two naive topological degrees coincide. It also elucidates the relation between these degrees and the topological degree of $\Phi$:
\begin{thmIntro}\label{theorPivot2}
With the notation above, $\mathbb{X}$ is equidimensional of dimension $n$ and $[\mathbb{V}\Big(\cos(\mathcal{E})\Big)]=\pi_{1*}c_1\Big(\O_{\mathbb{X}}(1)\Big)^n$. As a consequence \begin{align*}
d_t(\Phi)=\deg\Big([\mathbb{V}(\cos(\mathcal{E}))]\Big)-\deg\Big(c_1\restriction{(\O_{\mathbb{X}}(1)}{\mathbb{T}_Z})^n\Big).
\end{align*}
\end{thmIntro}
Let us discuss briefly why this theorem is a generalization of \eqref{eqMuIntro}. This summarizes the content of \Cref{SectionEA}. Recall that another classical invariant of the singularities of a hypersurface $F=\lbrace f=0\rbrace$ is the global Tjurina number $\tau_f(Z)$ of $F$ (see \cite{Milnor1968SingularPoints} or \Cref{UsMuTau}). Actually, both the Milnor and Tjurina numbers depend on the scheme structure of the singular locus, and, in this sense, they can also be defined for zero-dimensional subschemes $Z$ unrelated to singular hypersurfaces. With this in mind, we can formulate the following result.
\begin{corIntro}
Formula \eqref{eqMuIntro} holds for any $0$-dimensional subscheme $Z$ defined by $n+1$ global sections of a line bundle $\L$ over a smooth quasi-projective $n$-variety $X$ and over any algebraically closed field.
\end{corIntro}
This corollary follows from the observation that Tjurina numbers compute the degree of $c_1\Big(\O_{\mathbb{X}}(1)\Big)^n$ whereas Milnor numbers compute the degree of $c_1\Big(\O_{\tilde{X}}(1)\Big)^n$. As an immediate application we recover the identity \eqref{eqMuIntro} from the equalities \[\deg\Big([\mathbb{V}(\cos(\mathcal{E}))]\Big)=(d-1)^n-\tau(Z) \hspace{1em}\text{and}\hspace{1em}\deg\Big(c_1\restriction{(\O_{\mathbb{X}}(1)}{\mathbb{T}_Z})^n\Big)=\mu(Z)-\tau(Z)\] where $\tau(Z)$ and $\mu(Z)$ are the generalised Tjurina and Milnor numbers.
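Spelling out the substitution (an elementary check): plugging these two equalities into the formula of \Cref{theorPivot2} gives
\[d_t(\Phi_f)=\Big((d-1)^n-\tau(Z)\Big)-\Big(\mu(Z)-\tau(Z)\Big)=(d-1)^n-\mu(Z),\]
so the Tjurina numbers cancel and only the Milnor number survives, as in \eqref{eqMuIntro}.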
\Cref{ExApplic} presents applications when $X=\mathbb{P}^2$ in which case $\mathcal{E}$ is locally free of rank $2$. The first application is motivated by the following situation. Tjurina numbers appear in a natural way in the classification of complex \emph{free curves}. These are the plane curves $F=\lbrace f=0\rbrace$ of degree $d$ whose jacobian ideal sheaves $\mathcal{I}$ have a locally free resolution of the form:
\begin{equation*}
\begin{tikzcd}[row sep=3em,column sep=0.5cm,minimum width=2em]
0 \ar{r}& \O_{\P^2}(3-2d)\oplus \O_{\P^2}(-d)\ar{r}& \O_{\P^2}(1-d)^{3} \ar{r}& \mathcal{I} \ar{r}&0.
\end{tikzcd}
\end{equation*}
It was established by A.A.du Plessis and C.T.C.Wall in \cite{duplessisWall19991higlysingular} that these curves are characterized by the following identity:
\begin{equation}\label{eqTauIntro}\stepcounter{eqIntro}\tag{\theeqIntro}
d-2=(d-1)^2-\tau_f(Z).\end{equation}
A first application is a generalisation of the numerical characterization \eqref{eqTauIntro} in arbitrary characteristic to locally free sheaves of rank $2$:
\begin{thmIntro}
Let $\mathcal{E}$ be the sheaf of relations of an ideal sheaf generated by three homogeneous polynomials of degree $d-1\geq 0$ in $3$ variables. Then \begin{equation*}\label{eqTauLibre}d-2 \leq (d-1)^2-\tau(Z)\end{equation*} and equality occurs if and only if $\mathcal{E}\simeq \O_{\mathbb{P}^2}(-1)\oplus \O_{\mathbb{P}^2}(2-d)$.
\end{thmIntro}
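When $\mathcal{E}\simeq \O_{\mathbb{P}^2}(-1)\oplus \O_{\mathbb{P}^2}(2-d)$, the equality can be checked directly by a Chern class computation (an elementary verification, granting the equality $\deg\Big([\mathbb{V}(\cos(\mathcal{E}))]\Big)=(d-1)^2-\tau(Z)$ recalled above): since $\mathcal{E}$ is locally free, $[\mathbb{V}(\cos(\mathcal{E}))]=c_2(\mathcal{E}^\vee)$ and
\[c\big(\mathcal{E}^\vee\big)=(1+h)\big(1+(d-2)h\big)=1+(d-1)h+(d-2)h^2,\]
where $h$ is the hyperplane class of $\mathbb{P}^2$, so that $(d-1)^2-\tau(Z)=\deg c_2(\mathcal{E}^\vee)=d-2$.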
As a second application in \Cref{ExApplic}, we answer \Cref{pbClassif} negatively:
\begin{propIntro}
The curve $F=\mathbb{V}\Big((x_1^2+x_0x_2)x_0(x_1^2+x_0x_2+x_0^2)\Big)$ is homaloidal if and only if the base field $\k$ has characteristic $3$.
\end{propIntro}
We also answer \Cref{pbRed} negatively by producing an explicit homaloidal curve in characteristic $101$ whose polar map has topological degree $3$ whereas the polar map of its reduction has topological degree $5$.
The explicit computations given in this paper were made using basic functions of Macaulay2 and the Cremona package, which also runs on Macaulay2 \cite{stagliano2017Mac2Pack}. The corresponding code is available on request.
\section{Topological degrees via the symmetric algebra}\label{SectionDT}
We first recall some facts about the symmetric and the Rees algebras (or blow-up algebra) of an ideal before giving the definition of \emph{topological degree} and \emph{naive topological degrees}.
\subsection{Rees and symmetric algebras}\label{subSecReesSym}
Given an ideal sheaf $\mathcal{I}$ on a smooth variety $X$ of dimension $n$, recall that the blow-up $\tilde{X}$ of $X$ at $\mathcal{I}$ is the Proj of the \emph{Rees algebra} \[\mathcal{R}(\mathcal{I})=\O_X\oplus \mathcal{I} t\oplus \mathcal{I}^2t^2\oplus \cdots =\underset{i=0}{\overset{\infty}{\oplus}}\mathcal{I}^it^i\subset \O_X[t]\] of $\mathcal{I}$. Denoting $Z=\mathbb{V}(\mathcal{I})$, we also say that $\tilde{X}$ is the blow-up of $X$ along $Z$. We denote by $\textnormal{S}(\mathcal{I})=\oplus_{i\geq 0}\textnormal{S}^i(\mathcal{I})$ the \emph{symmetric algebra} of $\mathcal{I}$ and by $\mathbb{X}$ the projectivization $\P(\mathcal{I})=\Proj(\Sy(\mathcal{I}))$ of $\mathcal{I}$ with its bundle map $\pi_1:\mathbb{X}\rightarrow X$.
The natural surjection $q:\S(\mathcal{I})\rightarrow \mathcal{R}(\mathcal{I})$ defines a closed embedding of $\tilde{X}$ in $\mathbb{X}$. When $q$ is an isomorphism, $\mathcal{I}$ is said to be of \emph{linear type} \cite{Vasconcelos2005Int}. This is the case for instance when $\mathcal{I}$ is locally generated by a regular sequence \cite[Example 1.2]{Vasconcelos2005Int}.
Otherwise the images by $\pi_1$ of the irreducible components of $\mathbb{X}$ different from $\tilde{X}$ are contained in the support of $Z$. Indeed, over the set $U=X\backslash Z$, we have $\mathcal{I}_U=\O_U$, so that $\restriction{\tilde{X}}{U}=\restriction{\mathbb{X}}{U}=\pi_1^{-1}(U)$. This justifies the following definition:
\begin{defi}\label{defCompTorsion}
An irreducible component of $\mathbb{X}$ different from $\tilde{X}$ is called a \emph{torsion component} of $\mathbb{X}$. The union of the torsion components is called the \emph{torsion part} of $\mathbb{X}$, denoted by $\mathbb{T}_Z$.
\end{defi}
The following lemma provides a description of the torsion components supported over the generic points of the irreducible components of $Z$. Namely, letting $Z_i$ be an irreducible component of \( Z_{red} \), we consider $A=\O_{X,Z_i}$ and denote by $I$ the image of $\mathcal{I}$ in $A$.
\begin{lemma}\label{lemmaLCI}
Let $X=\Spec(A)$ be the spectrum of a regular local ring essentially of finite type with maximal ideal $\mathfrak{m}$ and residue field $\kappa(\mathfrak{m})$. Let $I\subset \mathfrak{m}$ be an $\mathfrak{m}$-primary ideal minimally generated by $r+1$ elements which do not form a regular sequence. Then $\mathbb{X}$ is the union of $\tilde{X}$ and a unique other irreducible component contained in $\pi_1^{-1}(\mathfrak{m})$ whose reduction is isomorphic to $\P_{\kappa(\mathfrak{m})}^r$.
\end{lemma}
\begin{proof}
Let:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=0.5em,column sep=2em,minimum width=2em]
{
\node(c){$A^m$}; & \node(a){$A^{r+1}$}; &\node(){}; &\node(d){$I$}; & \node(e){$0$}; \\};
\path[-stealth]
(c) edge node[above]{$M$} (a)
(a) edge node[above]{$(\phi_0\;\ldots\;\phi_r)$} (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
be a minimal presentation of $I$. Then $\mathbb{X}=\P(I)$ is isomorphic to the closed subscheme of $\P_A^r$ defined by the entries of the row matrix $(y_0\;\ldots\;y_r)\cdot M$ \cite[A.III.69.4]{Bourbaki2007Algebre}. Since $A$ is local and $\lbrace\phi_0,\ldots,\phi_r\rbrace$ is a minimal set of generators of $I$, all the entries of $M$ are elements of $\mathfrak{m}$ \cite[19.4]{eisenbud1995algebra}. It follows that $\pi_1^{-1}(\lbrace \mathfrak{m} \rbrace )\simeq \P_{\kappa(\mathfrak{m})}^r$. The exceptional divisor of $\tilde{X}$ above the point $\mathfrak{m}$ is then the intersection of $\tilde{X}$ with this component $\P_{\kappa(\mathfrak{m})}^r$.
\end{proof}
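The following standard example, supplied here for concreteness, illustrates \Cref{lemmaLCI}. Take $A=\k[x,y]_{(x,y)}$ and $I=\mathfrak{m}^2=(x^2,xy,y^2)$, which is $\mathfrak{m}$-primary and minimally generated by $r+1=3$ elements that do not form a regular sequence. The two syzygies $y\cdot x^2-x\cdot xy=0$ and $y\cdot xy-x\cdot y^2=0$ give
\[\Sy(I)=A[T_0,T_1,T_2]\big/\big(yT_0-xT_1,\; yT_1-xT_2\big),\]
whereas the Rees algebra is cut out by the further equation $T_0T_2-T_1^2$ coming from $(x^2)(y^2)=(xy)^2$. Both syzygy relations vanish modulo $\mathfrak{m}$, so $\pi_1^{-1}(\lbrace\mathfrak{m}\rbrace)=\P^2_{\kappa(\mathfrak{m})}$: here $\mathbb{X}$ is the union of the blow-up $\tilde{X}$, whose exceptional fibre is the conic $\lbrace T_0T_2=T_1^2\rbrace$, and the torsion component $\P^2_{\kappa(\mathfrak{m})}$, as predicted with $r=2$.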
From a practical point of view, it might be difficult to determine from a given presentation how the torsion components vary, as illustrated by the following example:
\begin{ex}
Consider the ideal $I$ of $A=\k[x,y,z]$ given by the following resolution:
\begin{small}
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=0.5em,column sep=0.5cm,minimum width=2em]
{
\node(b){$0$}; &\node(c){$A^3$};&\node(){}; &\node(){}; & \node(a){$A^4$}; &\node(){}; &\node(d){$I$}; & \node(e){$0$}; \\};
\path[-stealth]
(b) edge (c)
(c) edge node[above]{\begin{scriptsize}
$\begin{pmatrix}
0 & xz & y^2 \\
0 & x & xy \\
x & y & y \\
y & z & x \\ \end{pmatrix}$
\end{scriptsize} }(a)
(a) edge node[above]{$(\phi_0\;\ldots\; \phi_3)$} (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
\end{small}
Above the line $\lbrace x=y=0\rbrace$ in $X=\Spec(A)$, the torsion component is $\lbrace x=y=0\rbrace\times \P_\k^2$ but above the point $\lbrace x=y=z=0\rbrace$, the torsion component is $\lbrace x=y=z=0\rbrace\times \P_\k^3$.
\end{ex}
This motivates why, in what follows, we assume that $\mathbb{V}(\mathcal{I})$ is zero-dimensional.
In the case when $\mathcal{I}\otimes \mathcal{L}$ is generated by $n+1$ sections for some line bundle $\mathcal{L}$ over $X$, let \begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=0.5em,column sep=2em,minimum width=2em]
{
\node(c){$\mathcal{F}$}; &\node(a){$\O_X^{n+1}$}; &\node(){}; &\node(d){$\mathcal{I}\otimes\mathcal{L}$}; &\node(e){$0$}; \\};
\path[-stealth]
(c) edge node[above]{$s$} (a)
(a) edge node[above]{$(\phi_0\;\ldots\; \phi_n)$} (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
be a locally free presentation of $\mathcal{I}\otimes\mathcal{L}$. The map $s$ can be interpreted dually as the data of $n+1$ sections of $\mathcal{F}^\vee$. Recall that $\Fitt_{n}\mathcal{I}$ is the ideal sheaf locally generated by the entries of $s$, so that its zero scheme is the common vanishing locus of these $n+1$ sections, and that $\mathbb{V}(\Fitt_{n}\mathcal{I})\subset Z$ \cite[Chapter 20]{eisenbud1995algebra}. With this definition, we have:
\begin{cor}\label{LCI}
Let $X$ be a smooth variety of dimension $n$ and let $\mathcal{I}$ be an ideal sheaf over $X$. Denoting $Z=\mathbb{V}(\mathcal{I})$, assume that $\codim(Z)=n$ and that $\mathcal{I}\otimes \mathcal{L}$ is generated by $n+1$ sections for some line bundle $\mathcal{L}$ over $X$. Then the images of the torsion components of $\mathbb{X}$ in $X$ are precisely the points of the subscheme $\mathbb{V}(\Fitt_{n}\mathcal{I})$. Moreover, each torsion component with its reduced structure is isomorphic to $\mathbb{P}^n$.
\end{cor}
\begin{proof}
Since $Z$ is zero-dimensional, any $z\in Z$ is in an affine open set $U=\text{Spec}(A)$ of $X$ over which $\mathcal{L}$ is trivial. So over $U$, let
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=0.5em,column sep=2em,minimum width=2em]
{
\node(c){$\O_U^m$}; &\node(a){$\O_U^{n+1}$}; &\node(d){$\restriction{\mathcal{I}}{U}$}; &\node(e){$0$}; \\};
\path[-stealth]
(c) edge node[above]{$M$} (a)
(a) edge (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
be a presentation of $\mathcal{I}$.
By \cite[Proposition 20.6]{eisenbud1995algebra}, the scheme $\mathbb{V}(\Fitt_{n}\mathcal{I})$ is the subscheme of $Z$ consisting of the points $z$ at which $\mathcal{I}_z$ cannot be generated by $n$ elements. By \Cref{lemmaLCI}, only two situations can occur. Either $Z$ is a local complete intersection at $z$, i.e.\ $\mathcal{I}_z$ is generated by a local regular sequence, in which case the Rees algebra and the symmetric algebra coincide locally, as explained before \Cref{defCompTorsion}.
Or $Z$ is not a local complete intersection at $z$; then, since $\codim\Big(\mathbb{V}(\mathcal{I})\Big)=n$, $\mathbb{V}(\Fitt_{n}\mathcal{I})$ is exactly the locus where $(\phi_0,\ldots ,\phi_n)$ fails to be a local regular sequence. But the ideal of $\mathbb{X}$ in $\P^n_U$ is generated by the entries of $(y_0\hspace{0.1cm}\ldots\hspace{0.1cm} y_n)\cdot M$, where $(y_0,\ldots,y_n)$ are the coordinates of the second factor, so $\mathbb{V}(\Fitt_{n}\mathcal{I})$ is exactly the set of points $z\in Z$ such that, set-theoretically, $\pi_1^{-1}(\lbrace z\rbrace)=\P^n_z$.
\end{proof}
\begin{NotNum}\label{NotationTorsion}
For every $z\in Z$, we let $\mathbb{T}_z$ be the scheme-theoretic fibre of the restriction of $\pi_1$ to $\mathbb{T}_Z$. By \Cref{LCI}, $\mathbb{T}_z$ is set-theoretically equal to $\P^n_z$ so $T_z=[\mathbb{T}_z]\cdot c_1\Big(\O_{\mathbb{X}}(1)\Big)^n$ is a $0$-cycle on $\mathbb{X}$. We denote by $T_Z$ the $0$-cycle $\underset{z\in Z}{\sum} T_z$.
\end{NotNum}
\subsection{Geometric interpretation of the torsion}
From now on, our setting is as follows: $X$ is a smooth $n$-variety over $\k$, $\mathcal{L}$ is a line bundle over $X$ and $\tnV$ is an $(n+1)$-dimensional subspace of $\text{H}^0(X,\L)$. We denote by $\Phi:X\dashrightarrow \P(\tnV)\simeq \P^{n}$ the associated rational map. Recall that the base ideal sheaf $\mathcal{I}$ of $\Phi$ is the image of the evaluation map $ev:\tnV\otimes \L^\vee\rightarrow \O_X$. By the universal property of the blow-up, $\tilde{X}$ is isomorphic to the graph $\Gamma$ of $\Phi$, that is, the closure in $X\times \P^{n}$ of the graph of the restriction of $\Phi$ to its domain of definition.
Let $p_1: \P_X^{n}=\P(\tnV\otimes \O_X)\rightarrow X$ be the structure map and let
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=0.5em,column sep=2em,minimum width=2em]
{
\node(c){$\mathcal{F}$}; &\node(a){$\O_X^{n+1}$}; &\node(d){$\mathcal{I}\otimes\mathcal{L}$}; &\node(e){$0$}; \\};
\path[-stealth]
(c) edge node[above]{$s$} (a)
(a) edge node[above]{$ev$} (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
be a locally free presentation of $\mathcal{I}\otimes \mathcal{L}$. The map $ev$ determines a closed embedding $\P(\mathcal{I}\otimes \mathcal{L})\hookrightarrow \P_X^{n}$ as the zero scheme of the global section $\sigma\in \tnH^0(\P_X^{n},\O_{\P_X^{n}}(1)\otimes p^*_1\mathcal{F}^\vee)$ deduced from the composition of $p^*_1 s$ with the canonical surjection $\tnV\otimes \O_{\P_X^{n}}\rightarrow \O_{\P_X^{n}}(1)$. Since $\P(\mathcal{I}\otimes \mathcal{L})\simeq\P(\mathcal{I})$ \cite[II.7.9]{hartshorne1977algebraic}, this provides a closed embedding $\mathbb{X}\hookrightarrow \P_X^{n}$.
Summing up, we have the following commutative diagram \eqref{Diag1}:
\begin{equation}\label{Diag1}\stepcounter{Diag}\tag{D\theDiag}
\begin{tikzcd}[row sep=0.73cm,column sep=1.8cm,minimum width=2em]
& \P_X^{n} \ar[dddl,"p_1" description]\ar[dddr,"p_2"description]& \\
& \mathbb{X} \ar[u,hook,"\iota"description]\ar[ddl,"\pi_1"description]\ar[ddr,"\pi_2"description]& \\
& \tilde{X} \ar[u,hook]\ar[dl,"\sigma_1"description]\ar[dr,"\sigma_2"description]& \\
X \ar[rr, dashed, "\Phi"]& & \P^{n}
\end{tikzcd}
\end{equation}
Recall that the \emph{topological degree} of a dominant rational map $\Phi:X\dashrightarrow Y$ between irreducible varieties $X$ and $Y$ of the same dimension is defined as the degree $d_t(\Phi)=[\textnormal{Frac}(X):\textnormal{Frac}(Y)]$ of the induced extension between their respective fields of rational functions. In our setting, $Y=\P^n$ and $d_t(\Phi)$ can be interpreted alternatively as the degree of the $0$-cycle $c_1\Bigl(\restriction{\O_{\P_X^{n}}(1)}{\tilde{X}}\Bigr)^n$ on $\tilde{X}$. Since $\sigma_1$ is birational we thus have:
\[d_t(\Phi)=\deg\Big(c_1(\restriction{\O_{\P_X^{n}}(1)}{\tilde{X}})^n\Big)=\deg\Big(\sigma_{1*}(c_1(\restriction{\O_{\P_X^{n}}(1)}{\tilde{X}})^n)\Big).\]
By \Cref{LCI}, $c_1(\restriction{\O_{\P_X^{n}}(1)}{\mathbb{X}})^n$ is also a $0$-cycle on $\mathbb{X}$ so we can set the following definition.
\begin{defi}\label{VraiDegTopNaif}
With the notation in \eqref{Diag1} the degree of $c_1(\restriction{\O_{\P_X^{n}}(1)}{\mathbb{X}})^n$ is called the \emph{first naive topological degree} of $\Phi$.
\end{defi}
Intuitively, the difference between the first naive topological degree and the actual topological degree reflects a difference between the symmetric algebra and the Rees algebra, see \Cref{theorMuTau} below for a precise statement.
Now, let $\mathcal{E}$ be the kernel of the evaluation map $ev:\O_{X}^{n+1}\rightarrow\mathcal{I}\otimes\mathcal{L}$ and let $\alpha:\O_X^{n+1}\rightarrow \O_X$ be a generic map. Since $\mathcal{E}$ has rank $n$, the zero locus $\mathbb{V}(\cos_\alpha)$ of the composition $\cos_{\alpha}=\alpha\circ \gamma$ is a $0$-dimensional subscheme of $X$. \begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=1em,column sep=2em,minimum width=2em]
{
\node(a){$0$}; &\node(b){$\mathcal{E}$}; &\node(c){$\O_X^{n+1}$}; & \node(d){$\mathcal{I}\otimes \mathcal{L}$}; & \node(e){$0$}; \\
\node(){}; &\node(){}; &\node(z){$\O_X$}; & \node(){}; & \node(){}; \\};
\path[-stealth]
(a) edge (b)
(b) edge node [above]{$\gamma$} (c)
(c) edge node [above] {$ev $} (d)
(d) edge (e)
(c) edge node[right]{$\alpha$}(z)
(b) edge node[below]{$\cos_{\alpha}$}(z);
\end{tikzpicture}
\end{center}
In the proof of \Cref{theorPivot2}, we will establish in particular that the cycle class $[\mathbb{V}(\cos_\alpha)]$ of $\mathbb{V}(\cos_\alpha)$ is independent of the choice of a generic map $\alpha$, so, anticipating this, we set the following definition.
\begin{defi}\label{DegTopNaif}
The \emph{second naive topological degree} of $\Phi$ is the degree of the $0$-cycle $[\mathbb{V}\Big(\cos(\mathcal{E})\Big)]$ associated with a generic cosection $\cos(\mathcal{E})$ of $\mathcal{E}$.
\end{defi}
\begin{rem} If $\mathcal{E}$ is locally free, $[\mathbb{V}\Big(\cos(\mathcal{E})\Big)]$ simply coincides with the top Chern class $c_n(\mathcal{E}^\vee)$ of $\mathcal{E}^\vee$. This is no longer true when $\mathcal{E}$ is not locally free. For instance the sheaf $\mathcal{E}$ of relations of the ideal sheaf $\mathcal{I}=(x_1^2-x_1x_3 , x_2^2-x_2x_3 , x_1x_2 , x_0x_3 )$ of $\P^3$ satisfies $c_3(\mathcal{E}^\vee)=4$ whereas $\deg\Big( [\mathbb{V}(\cos(\mathcal{E}))]\Big)=2$ as we can check from the resolution of $\mathcal{E}$:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=1cm,column sep=0.75cm,minimum width=2em]
{
\node(a){$0$}; &\node(c){$\O_{\P^3}(-3)^2$}; &\node(d){$\O_{\P^3}(-1)^2\oplus\O_{\P^3}(-2)^3$}; &\node(e){$\mathcal{E}$}; & \node(f){$0$.}; \\
};
\path[-stealth]
(a) edge (c)
(c) edge (d)
(d) edge (e)
(e) edge (f);
\end{tikzpicture}
\end{center}
\end{rem}
\section{Proof of Theorem \ref{theorPivot2}}\label{SectionProof}
Recall the setting of \Cref{theorPivot2}: we assume that $n\geq 2$, that $\codim(Z)=n$ and that the map $\Phi$ is dominant.
By definition, the first naive topological degree is the length of the zero-dimensional scheme $W$ cut out by a general section of $\O_{\mathbb{X}}(1)^{\oplus n}$. Our strategy to prove \Cref{theorPivot2} is now to push forward the following exact sequence:
\begin{equation}\label{suiteFond}\stepcounter{SExactes}\tag{E\theSExactes}
\begin{tikzcd}[row sep=1cm,column sep=0.75cm,minimum width=2em]
0 \ar{r}& \mathcal{K} \ar{r}& \O_{\mathbb{X}}^n \ar{r}& \O_{\mathbb{X}}(1) \ar{r}& \O_W(1) \ar{r}& 0
\end{tikzcd}
\end{equation}
where $\mathcal{K}$ is by definition the kernel of the map $\O_{\mathbb{X}}^n\rightarrow\O_{\mathbb{X}}(1)$.
So, applying $\pi_{1*}$ to \eqref{suiteFond} and assuming that $\mathrm{R}^1\pi_{1*}\Bigl(\mathcal{K}\Bigr)=\mathrm{R}^1\pi_{1*}\Bigl(\mathcal{I}_W(1)\Bigr)=0$, we have
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=3em,column sep=2.5em,minimum width=2em]
{
\node(c){$\O_X^n$}; &\node(d){$\pi_{1*}\O_{\mathbb{X}}(1)$}; &\node(e){$\pi_{1*}\O_W(1)$}; & \node(f){$0$.}; \\};
\path[-stealth]
(c) edge (d)
(d) edge (e)
(e) edge (f);
\end{tikzpicture}
\end{center}
We emphasize that $\mathcal{I}$ is not locally free, so $\pi_{1*}(\O_{\mathbb{X}}(1))$ might a priori be different from $\mathcal{I}$ (see the Stacks Project, Section 26.21 on projective bundles, \href{https://stacks.math.columbia.edu/tag/01OA}{Example 26.21.2}). We will however prove that the two coincide in our situation.
We use the same notation for the sheaves and their push-forwards by $\mathbb{X}\overset{\iota}{\hookrightarrow}\P_X^{n}$. Thus, the strategy is to prove that $\mathrm{R}^1p_{1*}\Bigl(\mathcal{K}\Bigr)=\mathrm{R}^1p_{1*}\Bigl(\mathcal{I}_W(1)\Bigr)=0$ and that $p_{1*}\Bigl(\O_{\mathbb{X}}(1)\Bigr)=\mathcal{I}\otimes \mathcal{L}$ in order to get the sequence:
\begin{equation}\label{exSeq5}\stepcounter{SExactes}\tag{E\theSExactes}
\begin{tikzcd}
\O_X^n \ar{r}& \mathcal{I}\otimes \mathcal{L} \ar{r}& p_{1*}\O_W(1) \ar{r}& 0.
\end{tikzcd}
\end{equation}
As we will explain below, $[p_{1*}\O_W(1)]$ will turn out to be precisely the cycle $[\mathbb{V}(\cos_\alpha)]$, which by definition fits into the following exact sequence:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=3em,column sep=2em,minimum width=2em]
{
\node(b){$\mathcal{E}$}; &\node(c){$\O_X$}; &\node(d){$\O_{ \mathbb{V}(\cos_{\alpha})}$}; & \node(e){$0$.}; \\};
\path[-stealth]
(b) edge node[above]{$\cos_{\alpha}$} (c)
(c) edge (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
This will eventually establish \Cref{theorPivot2}.
\subsection{Cohomological preliminaries}
\begin{lemma}\label{Vanishing}\label{thmSubLin}
The following vanishings hold:
\begin{enumerate}[label=\rm{\it(\roman*)}]
\item\label{Vanishing1} $\mathrm{R}^1p_{1*}\mathcal{I}_{\mathbb{X}}(1)=0$,
\item\label{Vanishing2} $\mathrm{R}^{i+1}p_{1*}\O_{\mathbb{X}}(-i)=0$ for every $i\in\lbrace 0,\ldots, n-1\rbrace$,
\item\label{Vanishing3} $\mathrm{R}^{i}p_{1*}\O_{\mathbb{X}}(-i)=0$ for every $i\in\lbrace 1,\ldots, n-1\rbrace$.
\end{enumerate}
\end{lemma}
\begin{proof}
Under the assumption that $\dim(Z)=0$, by \cite[Corollary 2.9]{Big2018ResSymALg}, the ideal $\mathcal{I}_{\mathbb{X}}$ has a locally free resolution of the following form:
\begin{equation*}\label{resSym}\tag{\text{$\mathcal{G}_{\bullet}$}}
\begin{tikzcd}[row sep=3em,column sep=1em,minimum width=2em]
0 \ar{r}& \mathcal{G}_{n+1} \ar{r}& \mathcal{G}_n \ar{r}& \ldots \ar{r}& \mathcal{G}_2 \ar{r}& \mathcal{G}_1 \ar{r}& \mathcal{I}_{\PI} \ar{r}& 0
\end{tikzcd}
\end{equation*}
where $\mathcal{G}_i=\underset{j=1}{\overset{i}{\oplus}}p^*\mathcal{G}_{ij}\otimes \O_{\P_X^{n}}(-j)$ when $i\in\lbrace 1,\ldots,n\rbrace$ and $\mathcal{G}_{n+1}= p^*\mathcal{G}'_{n+1}\otimes \O_{\P_X^{n}}(-1)$ for some locally free sheaves $\mathcal{G}_{ij}$ and $\mathcal{G}'_{n+1}$ over $X$.
Now, a diagram chase in \eqref{resSym} shows that $\mathrm{R}^1p_{1*}\mathcal{I}_{\PI}(1)=0$ provided that $\mathrm{R}^{k}p_{1*}\Big(\mathcal{G}_k(1)\Big)=0$ for all $ k\in\lbrace 1,\ldots,n+1\rbrace$. By the K\"unneth formula, these vanishings hold if:
\begin{itemize}
\item $\tnH^{k}\Bigl(\mathbb{P}^n,\O_{\mathbb{P}^n}(-j+1)\Bigr)=0$ for all $k\in\lbrace 1,\ldots,n\rbrace$ and all $j\in\lbrace 1,\ldots,k\rbrace$,
\item $\tnH^{n+1}\Bigl(\mathbb{P}^n,\O_{\mathbb{P}^n}(-2)\Bigr)=0$.
\end{itemize}
The only non-trivial case to check is when $k=n$. But:
\begin{center}
$\tnH^{n}\Bigl(\mathbb{P}^n,\O_{\mathbb{P}^n}(-j+1)\Bigr)\simeq \tnH^{0}\Bigl(\mathbb{P}^n,\O_{\mathbb{P}^n}(j-n-2)\Bigr)^\vee=0$
\end{center}
because $j\leq n$.
For \ref{Vanishing2} and \ref{Vanishing3}, since $\O_{\mathbb{X}}=\O_{\P_X^{n}}/\mathcal{I}_{\mathbb{X}}$, the assertions follow from the same argument after twisting the complex \eqref{resSym} by $\O_{\P_X^{n}}(-i)$ for every $i\in\lbrace 0,\ldots, n-1\rbrace$.
\end{proof}
\begin{lemma}\label{imageDirecte3}
We have $p_{1*}\Bigl(\O_{\mathbb{X}}(1)\Bigr)=\mathcal{I}\otimes \mathcal{L}$.
\end{lemma}
\begin{proof}
First, $\O_{\P_X^{n}}(1)$ being the relative ample line bundle of the projective bundle $\P_X^{n}=\P\Big(\O_X^{n+1}\Big)$, we have $p_{1*}\O_{\P_X^{n}}(1)=\O_X^{n+1}$.
Moreover, $\mathcal{I}_{\mathbb{X}}(1)$ is the image of the canonical map $p^*_{1}\mathcal{E}\rightarrow \O_{\P_X^{n}}(1)$; letting $\H$ be the kernel of the induced surjection $p^*_1\mathcal{E}\rightarrow\mathcal{I}_{\mathbb{X}}(1)$, we write the exact sequence:
\begin{equation*}
\begin{tikzcd}[row sep=3em,column sep=1em,minimum width=2em]
0 \ar{r}& \H \ar{r}& p^*_1 \mathcal{E} \ar{r}& \mathcal{I}_{\mathbb{X}}(1) \ar{r}& 0.
\end{tikzcd}
\end{equation*}
Since $p_{1*}p^*_1\mathcal{E}\simeq \mathcal{E}$ and $\mathrm{R}^1p_{1*}p^*_1\mathcal{E}=0$, applying $p_{1*}$ to this exact sequence, we get:
\begin{equation}\label{pfX1}\tag{a}
\begin{tikzcd}[row sep=3em,column sep=1em,minimum width=2em]
0 \ar{r}& p_{1*}\H \ar{r}& \mathcal{E} \ar{r}& p_{1*}\mathcal{I}_{\mathbb{X}}(1) \ar{r}& \mathrm{R}^1 p_{1*}\H \ar{r}& 0.
\end{tikzcd}
\end{equation}
Also, since we proved that $\mathrm{R}^1p_{1*}\mathcal{I}_{\mathbb{X}}(1)=0$, applying $p_{1*}$ to the canonical exact sequence
\begin{equation*}
\begin{tikzcd}[row sep=3em,column sep=1em,minimum width=2em]
0 \ar{r}& \mathcal{I}_{\mathbb{X}}(1) \ar{r}& \O_{\P_X^{n}}(1) \ar{r}& \O_{\mathbb{X}}(1) \ar{r}& 0
\end{tikzcd}
\end{equation*}
we get
\begin{equation*}\label{pfX2}\tag{b}
\begin{tikzcd}[row sep=3em,column sep=1em,minimum width=2em]
0 \ar{r}& p_{1*}\mathcal{I}_{\mathbb{X}}(1) \ar{r}& \O_X^{n+1} \ar{r}& p_{1*}\O_{\mathbb{X}}(1) \ar{r}& 0.
\end{tikzcd}
\end{equation*}
The exact sequences \eqref{pfX1} and \eqref{pfX2} fit into the following commutative diagram:
\begin{equation*}
\begin{tikzcd}[row sep=0.8em,column sep=1em,minimum width=2em]
& 0 \ar{d}& & & & \\
& p_{1*}\H \ar{d}& & 0 \ar{d}& & \\
& \mathcal{E} \ar{d}\arrow[rr, "\simeq"]& & \mathcal{E} \ar{d}& &\\
0 \ar{r}& p_{1*}\mathcal{I}_{\mathbb{X}}(1) \ar{rr}\ar{d}& & \O_X^{n+1} \ar{r}\ar{d}& p_{1*}\O_{\mathbb{X}}(1) \ar{r}\arrow[d,phantom,"{\rotatebox{90}{=}}"]& 0 \\
0 \ar{r} &\mathrm{R}^1p_{1*}\H \ar{rr}\ar{d}& & \mathcal{I}_Z\otimes\L \ar{r}\ar{d}& p_{1*}\O_{\mathbb{X}}(1) \ar{r}& 0 \\
& 0 & & 0 & & \\
\end{tikzcd}
\end{equation*}
where \eqref{pfX1} is the left column, \eqref{pfX2} is the central row and the map $\mathcal{I}_Z\otimes\L\rightarrow p_{1*}\O_{\mathbb{X}}(1)$ in the bottom row is the canonical morphism associated to the projectivization of $\mathcal{I}_Z$. This morphism is an isomorphism over $X\backslash Z$, and therefore $\mathcal{I}_Z\otimes \L\rightarrow p_{1*}\O_{\mathbb{X}}(1)$ is injective because $\mathcal{I}_Z$ is torsion free. Hence $p_{1*}\H\simeq 0\simeq \mathrm{R}^1p_{1*}\H$ and $p_{1*}\O_{\mathbb{X}}(1)\simeq \mathcal{I}_Z\otimes\L$.
\end{proof}
\subsection{Degree of cycles}
As above, let $W\subset \mathbb{X}$ be the intersection of $\mathbb{X}$ with $n$ general relative hyperplanes of $\P_X^{n}$ so that $[W]=c_1\Big(\O_{\mathbb{X}}(1)\Big)^n$.
\begin{proof}[Proof of Theorem \ref{theorPivot2}]\label{proofPivot}
Consider the following exact sequence:
\begin{equation}\label{Koszul}\tag{Kz}
\begin{tikzcd}[row sep=0.5em,column sep=1em,minimum width=2em]
0 \ar{r}& \mathcal{K} \ar{r}& \O_{\mathbb{X}}^n \ar{dr}\arrow[rr, "{\beta'}"]& & \O_{\mathbb{X}}(1) \ar{r}& \O_W(1) \ar{r}& 0 \\
& & & \mathcal{I}_W(1) \ar{ur}\ar{dr}& & & \\
& & 0 \ar{ur}& & 0 & & \\
\end{tikzcd}
\end{equation}
We claim that $$\mathrm{R}^1p_{1*}\Bigl(\mathcal{I}_W(1)\Bigr)=\mathrm{R}^1p_{1*}\Bigl(\mathcal{K}\Bigr)=0.$$
Indeed by \Cref{LCI}, $\mathbb{X}$ decomposes as the union of $\tilde{X}$, the blow-up of $X$ at $\mathcal{I}$, and the torsion part $\mathbb{T}_Z$, possibly empty, whose reduced structure is $\P_{Z'}^n$ for a subset $Z'\subset Z$.
So the Koszul complex provides a resolution
\begin{equation*}
\begin{tikzcd}[column sep=0.5cm,minimum width=2em]
0 \ar{r}& \O_{\mathbb{X}}(-n+1) \ar{r}& \ldots \ar{r}& \O_{\mathbb{X}}(-1)^{\binom{n}{2}} \ar{r}& \O_{\mathbb{X}}^n \ar{r}& \mathcal{I}_W(1) \ar{r}& 0
\end{tikzcd}
\end{equation*}
of $\mathcal{I}_W(1)$ and the desired vanishings follow from \Cref{Vanishing}~\ref{Vanishing2}.
Since $p_{1*}\O_{\mathbb{X}}^n\simeq\O_X^n$, $p_{1*}\O_{\mathbb{X}}(1)\simeq\mathcal{I}\otimes\mathcal{L}$, and $\mathrm{R}^1p_{1*}\Bigl(\mathcal{I}_W(1)\Bigr)=\mathrm{R}^1p_{1*}(\mathcal{K})=0$, pushing the exact sequence \eqref{Koszul} forward by $p_1$ yields the following commutative diagram:
\begin{equation}\stepcounter{Diag}\tag{D\theDiag}\label{serpent}
\begin{tikzcd}[row sep=1em,column sep=1.5em,minimum width=2em]
& & 0 \ar{d}& & p_{1*}\mathcal{K} \ar{d}& \\
& & \O_X^n \arrow[d, "{\beta}"]\arrow[rr,phantom, "="]& &\O_X^n \arrow[d, "{p_{1*}\beta'}"]& \\
0\ar{r}& \mathcal{E} \ar{r}\arrow[d,phantom,"\rotatebox{90}{=}"]& \O_X^{n+1} \ar[rr,"(\phi_0\;\ldots\;\phi_n)"]\arrow[d,"\alpha"] & &\mathcal{I}\otimes\mathcal{L} \ar{d}\ar{r}& 0 \\
& \mathcal{E} \ar[r,"{\cos_{\alpha}}"]& \O_X \ar{d}\ar{rr}& & p_{1*}\O_W(1)\ar{r}\ar{d}& 0 \\
& &0 & & 0 & \\
\end{tikzcd}
\end{equation}
where $\alpha$ is the cokernel map of the vertical map $\beta:\O_X^n\rightarrow \O_X^{n+1}$. Hence $p_{1*}\mathcal{K}=\ker(\cos_{\alpha})$ and \eqref{serpent} implies that
\begin{align*}
p_{1*}(\O_W)\simeq\O_{\mathbb{V}(\cos_{\alpha})}
\end{align*}
as in \Cref{DegTopNaif}. So, in the end, $[\mathbb{V}(\cos_{\alpha})]=[p_{1*}W]$. Since any generic map $\alpha$ as in \eqref{exSeq5} can be obtained as the cokernel of a generic map $\beta:\O_X^n\rightarrow \O_X^{n+1}$, the class $[\mathbb{V}(\cos_\alpha)]$ does not depend on the choice of the generic map $\alpha$, so that we can write $[\mathbb{V}\Big(\cos(\mathcal{E})\Big)]$ for a generic cosection $\cos(\mathcal{E})$.
The fact that $\deg(W)=\deg(p_{1*}W)$ comes from the decomposition of $W$. Indeed, $\mathbb{X}$ decomposes into the graph $\tilde{X}$ and the torsion part $\mathbb{T}_Z$ supported on $\P^n_{\Fitt_n(Z)}$. Hence, we have the equality
\begin{center}
$[W]=[\mathbb{X}]\cdot c_1\Big(\O_{\P_X^{n}}(1)\Big)^n=[\tilde{X}]\cdot c_1\Big(\O_{\P_X^{n}}(1)\Big)^n+[\mathbb{T}_Z]\cdot c_1\Big(\O_{\P_X^{n}}(1)\Big)^n$.
\end{center}
Since $\tilde{X}$ is irreducible and $\sigma_1:\tilde{X}\rightarrow X$ is birational, we have \[\deg\Big([\tilde{X}]\cdot c_1\Big(\O_{\P_X^{n}}(1)\Big)^n\Big)=\deg\Bigl(\sigma_{1*}([\tilde{X}]\cdot c_1\Big(\O_{\P_X^{n}}(1)\Big)^n)\Bigr).\] Moreover, as a consequence of \Cref{theorPivot2} we have:
\begin{center}
$d_t(\Phi)=\deg\Big([\mathbb{V}(\cos(\mathcal{E}))] \Big)-\deg\Big(p_{1*}([\mathbb{T}_Z]\cdot c_1\Big(\O_{\P_X^{n}}(1)\Big)^n)\Big)$.
\end{center}
\end{proof}
\section{Measure of the difference between Rees and symmetric algebras}\label{SectionEA}
We relate now the topological degree and the naive topological degree to the notions of Milnor and Tjurina numbers. For the rest of this section, $\mathcal{I}$ is the ideal of a rational map $\Phi=(\phi_0:\ldots:\phi_n)$ associated to an $(n+1)$-dimensional subspace $\tnV$ of $\tnH^0(X,\L)$ where $\L$ is a line bundle over $X$. We denote by $Z$ the base scheme $\mathbb{V}(\mathcal{I})$ in $X$ and we assume that $\dim(Z)=0$.
\subsection{Generalized Milnor and Tjurina numbers}
\begin{Not}
We temporarily set $\delta^n=\deg\Big(c_1(\L)^n\Big)$, which has to be understood as $\delta=\deg\Big(c_1(\L)\Big)$ when $X$ is the projective space $\P^n$.
\end{Not}
\begin{defi}\label{defTjurina} \label{defMilnor}
With notation as in \Cref{NotationTorsion}, for every $z\in Z$, put:
\begin{itemize}
\item $\tau(Z,z)=\lgth(\O_{Z,z})$
\item $\mu(Z,z)=\begin{cases}\tau(Z,z)\hspace{0.3cm}\text{if }Z\text{ is a local complete intersection at }z,\\
\tau(Z,z)+\deg(T_z)\hspace{0.3cm}\text{otherwise.}
\end{cases}$
\end{itemize}
We let $\tau(Z)=\underset{z\in Z}{\sum}\tau(Z,z)$ and $\mu(Z)=\underset{z\in Z}{\sum}\mu(Z,z)$.
\end{defi}
As a direct application of \Cref{theorPivot2}, we obtain:
\begin{prop}\label{CosectionLongueur} \label{theorMuTau}
The following equalities hold:
\begin{enumerate}[label=\rm{\it(\roman*)}]
\item \label{MuTauL} $\deg\Big( [\mathbb{V}(\cos(\mathcal{E}))]\Big) = \delta^n-\tau(Z)$
\item \label{MuTauT} $d_t(\Phi) = \delta^n-\mu(Z) $
\item \label{MuTauDiff} $\deg\Big( [\mathbb{V}(\cos(\mathcal{E}))]\Big)-d_t(\Phi)=\mu(Z)-\tau(Z)= \deg(T)=\deg(p_{1*}T)$.
\end{enumerate}
\end{prop}
\begin{proof}
Looking back at the diagram \eqref{serpent}, we see that $\mathbb{V}( s_{\alpha})$ has the following presentation:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=1em,column sep=1em,minimum width=2em]
{
\node(a){$\O_{X}^{n}$};&\node(){}; &\node(){}; &\node(){}; &\node(){}; &\node(f){$\mathcal{I}\otimes\L$}; &\node(d){$\O_{\mathbb{V}(s_{\alpha})}$};& \node(e){$0$}; \\};
\path[-stealth]
(a) edge node[above]{{\small $s_{\alpha}=(\underset{i=0}{\overset{n}{\sum}}a_{i1}\phi_i\;\ldots\; \underset{i=0}{\overset{n}{\sum}}a_{in}\phi_i)$}} (f)
(f) edge (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
where $(a_{ij})_{0\leq i\leq n,1\leq j\leq n}$ is a generic $(n+1)\times n$ matrix with entries in the field $\k$. Since, by definition, $[\mathbb{V}(\cos(\mathcal{E}))]= [\mathbb{V}( s_{\alpha})]$, we have $\deg\Big([\mathbb{V}(\cos(\mathcal{E}))]\Big)=\lgth(\O_{\mathbb{V}( s_{\alpha})})=\delta^n-\tau(Z)$ by definition of $\tau(Z)$.
The equalities \ref{MuTauT} and \ref{MuTauDiff} follow in the same way from the definition of $\mu(Z)$ and $\tau(Z)$ and from the decomposition of $\mathbb{X}$ as the union of $\tilde{X}$ and $\mathbb{T}_Z$.
\end{proof}
We now explain how to practically compute the number $\mu(Z)$.
\begin{prop}\label{generalCasualMilnor} Let $(a_{ij})_{0\leq i\leq n,1\leq j\leq n}$ be an $(n+1)\times n$ generic matrix with entries in the field $\k$. Then, denoting by $(\underset{i=0}{\overset{n}{\sum}}a_{i1}\phi_i,\ldots, \underset{i=0}{\overset{n}{\sum}}a_{in}\phi_i)_z$ the localisation at $z$, we have:
\[\mu(Z,z)=\lgth(\O_{X,z}/(\underset{i=0}{\overset{n}{\sum}}a_{i1}\phi_i,\ldots, \underset{i=0}{\overset{n}{\sum}}a_{in}\phi_i)_z).\]
\end{prop}
\begin{proof}
Recall that $d_t(\Phi)$ can be computed in the following way. A generic point $y\in\mathbb{P}^n$ is the intersection of $n$ general hyperplanes $L_j:\underset{i=0}{\overset{n}{\sum}}a_{ij}x_i=0$, that is, it is given by the data of a generic $(n+1)\times n$ matrix $N=(a_{ij})$ with entries in the field $\k$. The preimage of $y$ under $\Phi$ is contained in the scheme $\mathbb{F}'=\mathbb{V}(\underset{i=0}{\overset{n}{\sum}}a_{i1}\phi_i,\ldots, \underset{i=0}{\overset{n}{\sum}}a_{in}\phi_i)$. Hence, to compute the topological degree of $\Phi$, it remains to remove the points of $\mathbb{F}'$ lying in the base locus. Since the formation of the symmetric algebra commutes with base change, we can suppose that $Z$ consists of a single point $z$.
Denoting by $\mathbb{F}$ the scheme-theoretic preimage of $y$, we get \[d_t(\Phi)=\lgth(\mathbb{F})=\delta^n-\lgth(\O_{X,z}/(\underset{i=0}{\overset{n}{\sum}}a_{i1}\phi_i,\ldots, \underset{i=0}{\overset{n}{\sum}}a_{in}\phi_i)_z),\] which implies that $\lgth(\O_{X,z}/(\underset{i=0}{\overset{n}{\sum}}a_{i1}\phi_i,\ldots, \underset{i=0}{\overset{n}{\sum}}a_{in}\phi_i)_z)=\tau(Z,z)+\deg(T_z)=\mu(Z,z).$
\end{proof}
\begin{rem} From a more computational point of view, letting \[\mathbb{F}'=\mathbb{V}(\underset{i=0}{\overset{n}{\sum}}a_{i1}\phi_i,\ldots, \underset{i=0}{\overset{n}{\sum}}a_{in}\phi_i)\] be as in the proof of \Cref{generalCasualMilnor}, the preimage of $y$ is equal to the scheme \[\mathbb{F}=\mathbb{V}(\underset{z\in Z}{\cap}[(\underset{i=0}{\overset{n}{\sum}}a_{i1}\phi_i,\ldots, \underset{i=0}{\overset{n}{\sum}}a_{in}\phi_i):(\underset{i=0}{\overset{n}{\sum}}a_{i1}\phi_i,\ldots, \underset{i=0}{\overset{n}{\sum}}a_{in}\phi_i)_z])\]
where, given two ideals $J$ and $J'$ of a ring $R$, we let $[J:J']$ be the \emph{ideal quotient} (see \cite[page 15]{eisenbud1995algebra}).
\end{rem}
\subsection{The polar case}\label{polarCase}
In the polar case, $X$ is the projective space $\P^{n}$ over $\k$.
\begin{defi}
Let $F=\lbrace f=0\rbrace$ be a hypersurface in $\mathbb{P}^n$ where $f$ is a homogeneous polynomial of degree $d$ in $\k[x_0,\cdots, x_n]$. Let $f_i=\frac{\partial f}{\partial x_i}$ and $\mathcal{I}=(f_0,\ldots, f_n)$ be the ideal sheaf in $\O_{\P^n}$ generated by the partial derivatives of $f$, called the \emph{jacobian ideal} of $f$. Recall that we call the map $\Phi_f$ associated to $\mathcal{I}$ the \emph{polar map}.
The topological degree of $\Phi_f$ is called the \emph{polar degree of} $F$.
\end{defi}
In order to use the Euler identity, we suppose in the sequel that the characteristic of the base field does not divide the degree of the polynomial $f$ defining the hypersurface $F$. We also always assume that the zero locus $Z=\mathbb{V}(\mathcal{I})$ of the jacobian ideal of $F$ is zero-dimensional.
We recall the classical definition of Milnor and Tjurina numbers.
\begin{defi}\label{UsMuTau} Let $z\in Z=\mathbb{V}(\mathcal{I})$ and, via a change of coordinates, suppose that $z=(1:0:\ldots:0)$. We write $g_{\flat}\in \k[x_1,\ldots,x_n]$ for the usual dehomogenisation of a homogeneous polynomial $g\in \k[x_0,\cdots, x_n]$ in the chart $\lbrace x_0\neq 0\rbrace $.
The \emph{local Tjurina number} at $z$, denoted by $\tau_f(Z,z)$ is defined as \[ \tau_f(Z,z)=\lgth\Bigl(\O_{\k^n,z}/(f_{\flat},(f_{\flat})_1,\ldots,(f_{\flat})_n)\Bigr) \hspace{0.5cm}\text{where }(f_{\flat})_i=\frac{\partial f_{\flat}}{\partial x_i}.\]
The \emph{local Milnor number} at $z$, denoted by $\mu_f(Z,z)$, is defined as \[ \mu_f(Z,z)=\lgth\Bigl(\O_{\k^n,z}/((f_{\flat})_1,\ldots,(f_{\flat})_n)\Bigr) \hspace{0.5cm}\text{where }(f_{\flat})_i=\frac{\partial f_{\flat}}{\partial x_i}.\]
The \emph{global Tjurina number} of $F$, denoted by $\tau_f(Z)$ (resp. \emph{global Milnor number} of $F$, denoted by $\mu_f(Z)$) is the sum $\sum \tau_f(Z,z)$ (resp. $\sum \mu_f(Z,z)$) over all $z\in Z$.
\end{defi}
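Over $\k=\mathbb{Q}$, these colengths can be computed by counting the standard monomials of a Gr\"obner basis of the corresponding ideal. The following sketch is only an illustration, not part of the argument: the helper names (\texttt{colength}, \texttt{milnor}, \texttt{tjurina}) are ours, and we assume the dehomogenised ideal vanishes only at the origin of the chart, so that the affine colength coincides with the local one. It computes $\mu_f$ and $\tau_f$ for the ordinary cusp $f_{\flat}=x^2-y^3$.

```python
from itertools import combinations_with_replacement, count
from sympy import Mul, groebner, symbols

x, y = symbols('x y')

def colength(polys, gens):
    """k-vector-space dimension of k[gens]/(polys) for a zero-dimensional
    ideal, obtained by counting standard monomials of a Groebner basis."""
    G = groebner(polys, *gens, order='grevlex')
    dim = 0
    for d in count(0):
        # a monomial is standard iff its normal form modulo G is itself
        level = [m for m in (Mul(*c) for c in combinations_with_replacement(gens, d))
                 if G.reduce(m)[1] == m]
        if d > 0 and not level:
            # standard monomials are closed under division, so we can stop
            break
        dim += len(level)
    return dim

def milnor(f, gens):
    """Local Milnor number: colength of the ideal of partial derivatives."""
    return colength([f.diff(g) for g in gens], gens)

def tjurina(f, gens):
    """Local Tjurina number: colength of (f, partial derivatives)."""
    return colength([f] + [f.diff(g) for g in gens], gens)

cusp = x**2 - y**3  # dehomogenised ordinary cusp, singular only at the origin
print(milnor(cusp, (x, y)), tjurina(cusp, (x, y)))  # 2 2
```

For the cusp both numbers equal $2$, in accordance with the quasi-homogeneity criterion of \cite{saito1980theory}.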
We explain now how the numbers $\mu(Z)$ and $\tau(Z)$ defined in \Cref{defMilnor} coincide with the usual Milnor and Tjurina numbers given in \Cref{UsMuTau}.
\begin{prop}\label{propEgMuTau} Let $F=\lbrace f=0\rbrace$ be a reduced hypersurface in $\mathbb{P}^n$ where $f$ is a homogeneous polynomial in $\k[x_0,\cdots, x_n]$ of degree $d$, and let $z\in Z=\mathbb{V}(\mathcal{I})$. Then:
\[ \tau(Z,z)=\tau_f(Z,z)\hspace{0.5cm}\text{and}\hspace{0.5cm}\mu(Z,z)=\mu_f(Z,z). \]
\end{prop}
\begin{proof}
Via a change of coordinates, we can suppose $z=(1:0:\ldots:0)$. The dehomogenisation of the Euler identity in the chart $\lbrace x_0\neq 0\rbrace$ is:
\[ df_{\flat}= (f_0)_{\flat}+\underset{i= 1}{\overset{n}{\sum}} x_i(f_i)_{\flat}\]
and $(f_i)_{\flat}=(f_{\flat})_i$ for $1\leq i\leq n$. Since $d$ is invertible in $\k$, the resulting equality of ideals
\begin{center}
$((f_0)_{\flat},\ldots,(f_n)_{\flat})=(f_{\flat}, (f_{\flat})_1,\ldots,(f_{\flat})_n)$
\end{center}
implies that $\tau(Z,z)=\tau_f(Z,z)$.
For the Milnor number, we let $A=(a_{ij})_{0\leq i\leq n,1\leq j\leq n}$ be a generic $(n+1)\times n$ matrix with entries in the field $\k$. By \Cref{generalCasualMilnor}, \[\mu(Z,z)=\lgth\Bigl(\O_{\mathbb{P}^n,z}/(\underset{i=0}{\overset{n}{\sum}}a_{i1}f_i,\ldots, \underset{i=0}{\overset{n}{\sum}}a_{in}f_i)_z\Bigr).\]
By localisation at $z$, we have that $\mu(Z,z) =\lgth(\O_{M_A})$ where $M_A$ is defined as the cokernel of the following composition map:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=0em,column sep=2em,minimum width=2em]
{
&\node(c){$\O_{z}^{n}$}; & \node(a){$\O_{z}^{n+1}$}; &\node(){}; &\node(f){$\O_z$}; &\node(d){$\O_{M_A}$};& \node(e){$0$}; \\};
\path[-stealth]
(c) edge node[above]{$A$} (a)
(a) edge node[above]{{\footnotesize $ (f_0 \hspace{0.1 cm}\ldots \hspace{0.1 cm} f_n)_z $}} (f)
(f) edge (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
whereas $\mu_f(Z,z)=\lgth(\O_{M})$ where $M$ is defined as the cokernel of the following composition map:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=0em,column sep=2.3em,minimum width=2em]
{
&\node(c){$\O_{z}^{n}$}; &\node(){}; &\node(a){$\O_{z}^{n+1}$}; &\node(){}; &\node(f){$\O_z$}; &\node(d){$\O_{M}$};& \node(e){$0$.}; \\};
\path[-stealth]
(c) edge node[above]{}(a)
(a) edge node[above]{{\footnotesize $ (f_0 \hspace{0.1 cm}\ldots \hspace{0.1 cm} f_n)_z $}} (f)
(f) edge (d)
(d) edge (e);
\begin{tiny}
\matrix (m) [row sep=0cm,%
column sep=0cm,%
minimum width=2em,%
matrix of math nodes,%
left delimiter = (,%
right delimiter = )] at (-2.9,1)
{%
\node(a11){0}; &\node(a12){}; &\node(a13){}; &\node(a14){0}; \\
\node(a21){1}; &\node(a22){}; &\node(a23){}; &\node(a24){}; \\
\node(a31){0}; &\node(a32){}; &\node(a33){}; &\node(a34){}; \\
\node(a41){}; &\node(a42){}; &\node(a43){}; &\node(a44){0}; \\
\node(a51){0}; &\node(a52){}; &\node(a53){0}; &\node(a54){1}; \\
};
\draw[loosely dotted] (a11)-- (a44);
\draw[loosely dotted] (a11)-- (a14);
\draw[loosely dotted] (a14)-- (a44);
\draw[loosely dotted] (a21)-- (a54);
\draw[loosely dotted] (a31)-- (a53);
\draw[loosely dotted] (a31)-- (a51);
\draw[loosely dotted] (a51)-- (a53);
\end{tiny}
\end{tikzpicture}
\end{center}
But, since $\rk(A)=n$, we have $\lgth(\O_{M_A})=\lgth(\O_M)$.
\end{proof}
When $\tau(Z,z)=\mu(Z,z)$ at a point $z\in Z$, the singularity of $F$ at $z$ is also called quasi-homogeneous, after \cite{saito1980theory}. As an application of the previous proposition, we recover a result originally proved over the field $\mathbb{C}$ in \cite{dimcapap2003hypersurfacecomplements}.
\begin{prop}\label{propDimPap}
Let $F=\lbrace f=0\rbrace \subset \mathbb{P}^n$ be a reduced hypersurface of degree $d$ over an algebraically closed field $\k$. Let $\Phi_f=(f_0:\ldots:f_n)$ be the polar map of $f$ and assume that $\mathbb{V}(f_0,\ldots,f_n)$ is finite.
Then \[d_t(\Phi_f)=(d-1)^n-\mu_f(Z).\]
\end{prop}
\begin{proof}
Since the partial derivatives $f_i$ have degree $d-1$, so that $\delta^n=(d-1)^n$, and since $\mathbb{V}(f_0,\ldots,f_n)$ is finite, \Cref{propDimPap} follows from \Cref{propEgMuTau} and \Cref{CosectionLongueur}~\ref{MuTauT}.
\end{proof}
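For illustration, over an algebraically closed field of characteristic zero (the values of $\mu_f$ below are the classical ones), \Cref{propDimPap} recovers the well-known polar degrees of plane cubics:

```latex
% Polar degrees of plane cubics via d_t(\Phi_f) = (d-1)^2 - \mu_f(Z), with d = 3:
\begin{itemize}
\item a smooth cubic: $\mu_f(Z)=0$, so $d_t(\Phi_f)=2^2-0=4$;
\item a nodal cubic: one node with $\mu_f(Z,z)=1$, so $d_t(\Phi_f)=4-1=3$;
\item a cuspidal cubic: one cusp with $\mu_f(Z,z)=2$, so $d_t(\Phi_f)=4-2=2$;
\item three general lines $\mathbb{V}(x_0x_1x_2)$: three nodes, so
  $d_t(\Phi_f)=4-3=1$, i.e.\ the curve is homaloidal.
\end{itemize}
```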
\section{Examples and applications in the plane}\label{ExApplic}
In this section, $X$ is the projective plane $\mathbb{P}^2$. Letting $\L$ be a line bundle $\O_{\P^2}(\delta)$ for some $\delta\geq 1$, we consider the sections $\phi_0,\phi_1,\phi_2$ associated to the map $\Phi$ as homogeneous polynomials of degree $\delta$. We assume that the base ideal $\mathcal{I}=(\phi_0,\phi_1,\phi_2)$ has codimension $2$.
As above, $\mathcal{E}$ is defined as the kernel of the evaluation map as in the following exact sequence:
\begin{equation}\label{exSeq1Plan}\stepcounter{SExactes}\tag{E\theSExactes}
\begin{tikzcd}
0 \ar{r}& \mathcal{E} \ar{r}& \O_{\mathbb{P}^2}^{3} \ar[rr,"{(\phi_0\;\phi_1\;\phi_2)}"]& &\mathcal{I}(\delta) \ar{r}& 0.
\end{tikzcd}
\end{equation}
Since $\mathcal{E}$ is reflexive of rank $2$, it is locally free \cite{hartshorne1980stable}.
For $i\in\lbrace 1,2\rbrace$, we denote by $c_i(\mathcal{E})$ the $i$-th Chern class of $\mathcal{E}$. The class $[\mathbb{V}\big(\cos(\mathcal{E})\big)]$ of the vanishing locus of a generic cosection of $\mathcal{E}$ is equal to the second Chern class $c_2(\mathcal{E}^\vee)$ of $\mathcal{E}^\vee$, and $c_2(\mathcal{E}^\vee)=c_2(\mathcal{E})$.
From now on, we identify Chern classes with their degree in $\mathbb{Z}$.
\subsection{Free and nearly free sheaves of relations}\label{SubFree}
For this subsection, $\O$ stands for $\O_{\mathbb{P}^2}$.
\begin{defi}
A vector bundle $\mathcal{F}$ of rank $2$ over $\mathbb{P}^2$ is said to be \emph{free} of exponents $(d_1,d_2)\in\mathbb{N}^{*2}$ if $\mathcal{F}\simeq \mathcal{O}(-d_1)\oplus \mathcal{O}(-d_2)$.
It is said to be \emph{nearly free} of exponents $(d_1,d_2)$ if it has a graded free resolution of the form:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=3em,column sep=2em,minimum width=2em]
{
\node(a){$0$}; &\node(b){$\O(-d_2-1)$}; &\node(c){$\O(-d_1)\oplus \O(-d_2)^2$}; &\node(u){}; & \node(d){$\mathcal{F}$}; & \node(e){$0$.}; \\};
\path[-stealth]
(a) edge (b)
(b) edge (c)
(c) edge (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
\end{defi}
\begin{defi}[\cite{dimcaSti2015NearlyFreeDivRatCuspPlaneCurves}]\label{remdeffreecurves}
In the case where $\phi_0=f_0$, $\phi_1=f_1$, $\phi_2=f_2$ are the partial derivatives of a given squarefree polynomial $f\in \k[x_0,x_1,x_2] $, the curve $F=\lbrace f=0 \rbrace $ is called free (resp. nearly-free) if $\mathcal{E}$ in \eqref{exSeq1Plan} is free (resp. nearly-free).
\end{defi}
If $\phi_0,\phi_1,\phi_2$ are the partial derivatives of a squarefree homogeneous polynomial $f$, a result of A.A. du Plessis and C.T.C. Wall \cite{duplessisWall19991higlysingular} identifies in particular the curves $F=\lbrace f=0\rbrace$ of a given degree $d$ with maximal possible global Tjurina number: these are the free curves of exponents $(1,d-2)$. By \Cref{theorMuTau}, the maximality of this Tjurina number is equivalent to the minimality of the second Chern class $c_2(\mathcal{E})$ of the vector bundle $\mathcal{E}$ associated to $F$. The following theorem is thus a generalisation of the result of du Plessis-Wall. We emphasize that in this case $c_1(\mathcal{E})$ is negative and $c_2(\mathcal{E})$ is positive.
\begin{thm}\label{propFibre}
Let $\mathcal{E}$ as in \eqref{exSeq1Plan}, then:
\begin{enumerate}
\item \label{propFibre1} $-c_1(\mathcal{E})\leq c_2(\mathcal{E})+1$ and equality holds if and only if $\mathcal{E}$ is free of exponents $\Big(1,c_2(\mathcal{E})\Big)$,
\item \label{propFibre2} in the case $c_1(\mathcal{E})\leq -5$, $\mathcal{E}$ is nearly free of exponents $\Big(1,c_2(\mathcal{E})\Big)$ if and only if $-c_1(\mathcal{E})=c_2(\mathcal{E})$.
\end{enumerate}
\end{thm}
\begin{proof}
We denote by $c_1$ and $c_2$ respectively the first and second Chern classes $c_1(\mathcal{E})$ and $c_2(\mathcal{E})$ of $\mathcal{E}$. We let $c=-1-c_1\geq 0$ and \[m=\min\lbrace t\in\mathbb{Z}\text{ , }\textnormal{H}^0\Bigl(\mathbb{P}^2,\mathcal{E}(t)\Bigr)\neq 0 \rbrace.\]
\begin{itemize}
\item[\ref{propFibre1}]Assume that $c_2\leq c$. We are going to show that the only possibility is that $c_2=c$ and $m=1$. First, $m>0$: otherwise a nonzero section $s\in \tnH^0(\mathbb{P}^2,\mathcal{E})$ would give $\mathcal{E}\simeq \O\oplus\O(-1-c)$, contradicting the fact that $c_2>0$.
Now, let $s\in \tnH^0\Bigl(\mathbb{P}^2,\mathcal{E}(m)\Bigr)$ be a non zero section. Since $m$ is minimal, we have the following exact sequence:
\begin{equation}\label{suiteExSection}\stepcounter{SExactes}\tag{E\theSExactes}
\begin{tikzcd}[row sep=1em,column sep=2em,minimum width=2em]
0 \ar{r}& \O(-m) \ar{r}& \mathcal{E} \ar{r}& \mathcal{I}_L(m-1-c) \ar{r}& 0
\end{tikzcd}
\end{equation}
where $L\subset\mathbb{P}^2$ is a $0$-dimensional subscheme of length $l\geq 0$. A direct computation shows that $l=c_2-m(c+1-m)\geq 0$, and since $c_2\leq c$, we have
\begin{equation}\label{ineqLong}
c(1-m)\geq m(1-m).
\end{equation}
So
\begin{itemize}
\item[(i)] if $m=1$, then $l=0$, i.e.\ $\mathcal{I}_L(m-1-c)=\O(m-1-c)$ and the sequence \eqref{suiteExSection} splits showing that $\mathcal{E}\simeq \O(-1)\oplus\O(-c)$,
\item[(ii)] if $m\geq 2$ then $m\geq c$.
\end{itemize}
Now, assume by contradiction that $m\geq 2$. First, it follows from the Riemann-Roch formula that:
\[\chi\Big(\mathcal{E}(1)\Big)=\dfrac{8-2c_2-3c+c^2}{2}\geq \dfrac{8-5c+c^2}{2}.\]
Hence $\chi\Big(\mathcal{E}(1)\Big)>0$ for all $c$. On the other hand, since $m\geq 2$, by (ii) $m\geq c$, and we have $\textnormal{H}^0\Bigl(\mathbb{P}^2,\mathcal{E}(1)\Bigr)=\textnormal{H}^2\Bigl(\mathbb{P}^2,\mathcal{E}(1)\Bigr)=0$: the first vanishing holds since $1<m$, and the second follows via the Serre duality isomorphism $\textnormal{H}^2\Bigl(\mathbb{P}^2,\mathcal{E}(1)\Bigr)\simeq\textnormal{H}^0\Bigl(\mathbb{P}^2,\mathcal{E}(c-3)\Bigr)^\vee$ together with $c-3<m$. These two vanishings contradict the fact that $\chi\Big(\mathcal{E}(1)\Big)>0$. Summing up, if $c_2\leq c$, the only possibility is $c_2=c$, and then $\mathcal{E}\simeq\O(-1)\oplus\O(-c)$, which completes the proof of \ref{propFibre1}.
\item[\ref{propFibre2}] A direct computation shows that if $\mathcal{E}$ is nearly free of exponents $(1,c_2)$, then $c_2=c+1=-c_1$. Now, we assume that $c_2=c+1$ and that $c\geq 4$, and we show that $\mathcal{E}$ is nearly free of exponents $(1,c_2)$.
From the inequality \eqref{ineqLong}, we obtain:
\begin{itemize}
\item[(i)] $m\geq 3$ implies $m\geq c$ and thus $\textnormal{H}^0\Big(\mathbb{P}^2,\mathcal{E}(c-1)\Big)=\textnormal{H}^0\Big(\mathbb{P}^2,\mathcal{E}(1)\Big)=0$. Then, the Riemann-Roch formula implies that
\begin{center}
$\chi\Big(\mathcal{E}(1)\Big)=\frac{(c-2)(c-3)}{2},$
\end{center} hence $\chi\Big(\mathcal{E}(1)\Big)>0$ for $c\geq 4$. As above this leads to a contradiction and so this case does not occur.
\item[(ii)] $m=2$ implies $c\leq 3$, a case excluded by the assumption $c\geq 4$.
\item[(iii)] $m=1$ implies that $l=1$ where $l$ is the length of the scheme $L$ as in the exact sequence \eqref{suiteExSection}. Now, using the resolution of a point $p$ in $\mathbb{P}^2$, we get the following diagram:
\begin{footnotesize}
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=1.5em,column sep=2em,minimum width=2em]
{
\node(a){};&\node(a){};&\node(a){};&\node(u){$0$};&\node(a){};\\
\node(b){$0$ };&\node(e){$ \mathcal{O}(-1)$}; & \node(f){ $\mathcal{E}$}; & \node(g){$\mathcal{I}_p(-c)$}; &\node(h){$0$}; \\
\node(x){};&\node(x){};&\node(x){};&\node(v){$\O(-1-c)^2$};&\node(x){};\\
\node(y){};&\node(y){};&\node(y){};&\node(w){$\O(-2-c)$};&\node(y){};\\
\node(z){};&\node(z){};&\node(z){};&\node(t){$0$};&\node(z){};\\};
\path[-stealth]
(b) edge (e)
(e) edge node[auto]{$\alpha$} (f)
(f) edge (g)
(g) edge (h)
(w) edge (v)
(t) edge (w)
(v) edge (g)
(g) edge (u)
(v) edge[dash pattern=on 2pt off 2pt] node[auto]{$\beta$} (f);
\end{tikzpicture}
\end{center}
\end{footnotesize}
where the existence of $\beta$ is provided by the vanishing of $\mathcal{E}\textnormal{xt}^1(\mathcal{O}(-1-c)^2,\mathcal{O}(-1))$ (see also \cite{MarValles2017NFCurvesBundle} for more details in this direction). Since $\mathcal{E}$ is locally free of rank $2$, the complex \eqref{exSeqDefNF} provides a locally free resolution of $\mathcal{E}$ showing that $\mathcal{E}$ is nearly free of exponents $(1,c+1)=(1,c_2)$, that is, $\mathcal{E}$ has the resolution:
\begin{equation}\label{exSeqDefNF}\stepcounter{SExactes}\tag{E\theSExactes}
\begin{tikzcd}[row sep=1em,column sep=1.5em,minimum width=2em]
0 \ar{r}& \O(-c-2) \ar{r}& \O(-1)\oplus\O(-c-1)^2 \ar{r}& \mathcal{E} \ar{r}& 0.
\end{tikzcd}
\end{equation}
\end{itemize}
\end{itemize}
\end{proof}
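To illustrate the equality case of \Cref{propFibre}~\ref{propFibre1} on a classical example, take the union of three general lines $f=x_0x_1x_2$: the partial derivatives $(x_1x_2,x_0x_2,x_0x_1)$ admit two independent linear syzygies, checked directly:

```latex
% Two independent linear syzygies among the partials of f = x_0 x_1 x_2:
\begin{align*}
x_0\cdot(x_1x_2)-x_1\cdot(x_0x_2)&=0,\\
x_0\cdot(x_1x_2)-x_2\cdot(x_0x_1)&=0.
\end{align*}
```

Hence $\mathcal{E}\simeq\O(-1)\oplus\O(-1)$ is free of exponents $(1,1)$, with $-c_1(\mathcal{E})=2=c_2(\mathcal{E})+1$; accordingly $d_t(\Phi)=c_2(\mathcal{E})=1$ and the curve is homaloidal, in agreement with the classification of \cite{dolgachev2000polar}.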
As an application we recover \cite[Corollary 2.6]{dorHassSim2012polar} but with a different proof. Recall that $\mathcal{I}$ is said to be of \emph{linear type} if $\mathbb{X}=\tilde{X}$, see the beginning of \Cref{subSecReesSym}.
\begin{cor}\label{theorCentral}
If $\mathcal{I}=(\phi_0,\phi_1,\phi_2)$ is of linear type then the associated map $\Phi$ is birational only if $\delta\leq 2$.
\end{cor}
\begin{proof}
Indeed, letting $\mathcal{E}$ be as in \eqref{exSeq1Plan}, the linear type assumption gives $c_2(\mathcal{E})=d_t(\Phi)$. Since $c_1(\mathcal{E})=-\delta$, \Cref{propFibre}~\ref{propFibre1} gives $\delta\leq c_2(\mathcal{E})+1=d_t(\Phi)+1$, so $d_t(\Phi)=1$ forces $\delta\leq 2$.
\end{proof}
\subsection{Homaloidal curves}\label{SubHomalo}
Now, let $\Phi_f$ be the polar map from $\mathbb{P}^2$ to $\mathbb{P}^2$ associated to a reduced plane curve $F=\lbrace f=0\rbrace\subset\mathbb{P}^2$ as in \Cref{polarCase}. In this case, \Cref{theorCentral} says that, if the singular locus of the curve $F$ is a local complete intersection, the curve is homaloidal only if $d\leq 3$. This partially extends the result of \cite{dolgachev2000polar} to any algebraically closed field.
Now, recall that for any singular point $z$ of the curve $F$, the \textit{conductor invariant} $\delta_z$ is defined as the length of the quotient module $\tilde{\O}_{F,z}/\O_{F,z}$ where $ \tilde{\O}_{F,z}$ is the normalisation of the local ring $\O_{F,z}$. The number of local branches of $F$ at $z$ is denoted by $r_z$.
The combination of the Jung-Milnor formula over $\mathbb{C}$: \[\tau(Z,z)\leq\mu(Z,z)=2\delta_z-r_z+1\] and the formula for the arithmetic genus of a plane curve \cite[Part 3, Lemma 3 and Lemma 4]{dolgachev2000polar} gives the following relation:
\begin{align*}
\sum(r_z-1)&\leq 2\underset{i=1}{\overset{h}{\sum}}(1-g_i)+c_2(\mathcal{E})-(d+1).
\end{align*}
Now, if $F$ satisfies $c_2(\mathcal{E}) = d-2$, this inequality becomes:
\begin{align*}
\sum(r_z-1)&\leq 2\underset{i=1}{\overset{h}{\sum}}(1-g_i)-3
\end{align*}
where $h$ is the number of irreducible components $F_i$ of $F$ and $g_i$ is the genus of the normalization of $F_i$.
But $r_z\geq 1$, so the left-hand side is nonnegative; hence $\underset{i=1}{\overset{h}{\sum}}(1-g_i)\geq 2$ and, since each term $1-g_i$ is at most $1$, we get $h>1$.
A direct consequence is the following proposition which elucidates a part of the structure of the curves with the smallest possible $c_2(\mathcal{E})$ identified in \Cref{propFibre}.
\begin{prop}\label{propIrreductibles} Suppose that the field $\k$ is $\mathbb{C}$. Let $F=\lbrace f=0\rbrace\subset\mathbb{P}^2_{\mathbb{C}}$ be a reduced plane curve of degree $d$ and let $\mathcal{I}$ be the ideal sheaf generated by the partial derivatives of $f$ and $\mathcal{E}$ be as in \eqref{exSeq1Plan}.
If $d=c_2(\mathcal{E})+2$ then $F$ is reducible.
\end{prop}
This gives in particular another proof of the result in \cite[Th. 2.5 (iv)]{dimcasti2015freedivratcuspplanecurves}.
\subsubsection{In characteristic 3, a homaloidal curve of degree $5$}\label{exChar3Homalo}
In \cite{dolgachev2000polar}, the classification of complex homaloidal plane curves relies on the analysis of the Jung-Milnor formula. In \cite{BouGreMar2012InvarHyperSingPosChar}, the authors showed that the Jung-Milnor formula applies over a field of characteristic $p>0$ provided that $F$ has no \emph{wild vanishing cycle} (see \cite{BouGreMar2012InvarHyperSingPosChar} for a definition), and in \cite{Duc2016InvPosChar}, a sufficient condition for an irreducible curve $F$ to have no wild vanishing cycle is to have degree $d$ such that $d(d-1)<p$. Roughly speaking, for every $d$ such that the characteristic $p$ is much greater, the classification of homaloidal curves of degree $d$ remains the same as over $\mathbb{C}$. The following proposition shows that the classification differs when the degree is big enough compared to the characteristic, and answers \Cref{pbClassif} in the negative in positive characteristic.
\begin{prop}\label{exChar3}
The curve $F=\mathbb{V}\Big((x_1^2+x_0x_2)x_0(x_1^2+x_0x_2+x_0^2)\Big)$ is homaloidal if and only if the base field $\k$ has characteristic $3$, in which case the inverse of the polar map is
\begin{center}$\Psi=(-x_1^2x_2^2-x_0x_2^3-x_2^4:x_1^3x_2+x_0x_1x_2^2+x_1x_2^3:x_1^4+x_0x_1^2x_2+x_0x_2^3).$
\end{center}
\end{prop}
\begin{proof}
The curve $F$ is defined over $\mathbb{Z}$ hence over $\mathbb{F}_p$ for every $p$. The resolution of the jacobian ideal $\mathcal{I}$ over $\mathbb{Z}$ is as follows:
\begin{equation}\label{resIZ}\stepcounter{SExactes}\tag{R\theSExactes}
\begin{tikzcd}[ampersand replacement=\&,row sep=3em,column sep=0.95em,minimum width=2em]
0 \ar{r}\& \O(-1)\oplus\O(-3) \arrow[rrrrrrrrrrrrrr, "{\begin{pmatrix}
0 & 2x_0^3+4x_0x_1^2+4x_0^2x_2 \\ x_0 & -x_1^3 \\ -2x_1 & -6x_0x_1^2-8x_0^2x_2-8x_1^2x_2-6x_0x_2^2
\end{pmatrix}}"] \& \& \& \& \& \& \& \& \& \& \& \& \& \& \O^3 \ar{r}\& \mathcal{I}(4) \ar{r}\& 0
\end{tikzcd}
\end{equation} where we denote $\O$ for the sheaf $\O_{\mathbb{P}^2}$.
We observe that for every prime $p\neq 2$ the reduction modulo $p$ of \eqref{resIZ} provides a resolution of $\mathcal{I}_p=\mathcal{I}\otimes_{\mathbb{Z}} \mathbb{F}_p$. In every characteristic $p\geq3$, $\Fitt_2\mathcal{I}_p=(x_0,x_1)$, so $\mathcal{I}_p$ is not a local complete intersection at the point $z=(0:0:1)\in\mathbb{P}^2$ and $\P(\mathcal{I}_p)$ has a torsion component above $z$.
Moreover, in characteristic other than $2$, the resolution of $\mathbb{X}_p=\P(\mathcal{I}_p)$ embedded in $\mathbb{P}^n\times\mathbb{P}^n$ is as follows:
\begin{equation*}
\begin{tikzcd}[row sep=3em,column sep=2em,minimum width=2em]
0 \ar{r}& \O(-4,-2) \ar{r}& \O(-1,-1)\oplus\O(-3,-1) \ar{r}& \mathcal{I}_{\mathbb{X}_p} \ar{r}& 0
\end{tikzcd}
\end{equation*}
where $\O$ stands for $\O_{\P^{n}\times\P^{n}}$ and the second entry of each twist refers to the second factor of the product $\P^{n}\times\P^{n}$. From this resolution, we can compute that $\tau(Z,z)=13$ in every characteristic other than $2$.
In characteristic $3$, $\mathcal{I}_3$ has the following resolution:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=3em,column sep=1.5em,minimum width=2em]
{
\node(a){$0$}; &\node(c){$\mathcal{O}(-1)\oplus\O(-3)$};&\node(u){};&\node(u){}; & \node(d){$\mathcal{O}^3$};& \node(e){$\mathcal{I}_3(4)$}; & \node(f){$0$.}; \\};
\path[-stealth]
(a) edge (c)
(c) edge node[above]{\begin{scriptsize}
$\begin{pmatrix}
0 & x_0^3-x_0x_1^2-x_0^2x_2 \\ x_0 & x_1^3 \\ x_1 & -x_0^2x_2-x_1^2x_2
\end{pmatrix}$
\end{scriptsize}} (d)
(d) edge (e)
(e) edge (f);
\end{tikzpicture}
\end{center}
The difference in characteristic $3$ comes from the multiplicity of the torsion component in $\mathbb{X}_3$. Indeed, the torsion component $\mathbb{T}_Z$ has the following resolution over $\mathbb{Z}$:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=5em,column sep=2em,minimum width=2em]
{
\node(a){$0$}; &\node(c){$\O(-2,0)$}; & \node(d){$\O(-1,0)^2$}; & \node(e){$\mathcal{I}_{\mathbb{T}_Z}$}; & \node(f){$0$}; \\};
\path[-stealth]
(a) edge (c)
(c) edge (d)
(d) edge (e)
(e) edge (f);
\end{tikzpicture}
\end{center}
whereas in characteristic $3$, it has resolution:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=5em,column sep=2em,minimum width=2em]
{
\node(a){$0$}; & \node(b){$\O(-3,-1)$}; & \node(c){$\begin{matrix}
\O(-3,0)^2 \\ \oplus \\ \O(-2,-1)^2\end{matrix} $}; & \node(d){$\begin{matrix}\O(-2,0)^3 \\ \oplus \\ \O(-1,-1) \end{matrix}$}; & \node(e){$\mathcal{I}_{\mathbb{T}_{Z_3}}$}; & \node(f){$0$.}; \\};
\path[-stealth]
(a) edge (b)
(b) edge (c)
(c) edge (d)
(d) edge (e)
(e) edge (f);
\end{tikzpicture}
\end{center}
To sum up, $\mu(Z,z) = 15$ and $d_t(\Phi)=1$ in characteristic $3$, whereas $\mu(Z,z) = 14$ and $d_t(\Phi)=2$ in any characteristic different from $2$ and $3$. In characteristic $3$, the polar map can be written \begin{center}$\Phi_f=(x_1^4+x_0^3x_2+x_0x_1^2x_2:-x_0^3x_1+x_0x_1^3+x_0^2x_1x_2:x_0^4-x_0^2x_1^2-x_0^3x_2)$ \end{center}
and a direct computation checks that $\Psi$ is indeed the inverse of $\Phi_f$.
\end{proof}
\begin{rem}
What we did was to increase the multiplicity of the torsion component by specializing the resolution of $\mathcal{I}$ over $\mathbb{Z}$ modulo a prime $p$ for which some monomials of the presentation matrix vanish (here $p=3$ works). We emphasize that in characteristic $3$ the torsion part $\mathbb{T}_Z$ is not equal scheme-theoretically to $\P^n_{\Fitt_n\mathcal{I}}$, whereas it is in larger characteristic. It is not clear whether such an example is sporadic or not.
\end{rem}
\subsubsection{The reduction problem in positive characteristic}
The analysis of the presentation of the jacobian ideal also gives an easy way to construct examples of non-reduced plane curves in positive characteristic for which the topological degree is not preserved by reduction. It suffices to compute the presentation matrix of the jacobian ideal and to adjust the characteristic of the field in order to modify the first syzygy matrix.
The next proposition answers \Cref{pbRed}. We emphasize that, in the examples we consider, none of the exponents divides the characteristic of the field and that the characteristic $101$ does not play a particular role in comparison to other primes.
\begin{prop} Let $k$ be an algebraically closed field of characteristic $101$.
\begin{enumerate}[\rm{\it(\roman*)}]
\item\label{redprob1} The curve $\mathbb{V}\Big(z(y^3+x^2z)\Big)$ has polar degree $2$ whereas $\mathbb{V}\Big(z^{50}(y^3+x^2z)^{51}\Big)$ has polar degree $1$.
\item\label{redprob2} The curve $\mathbb{V}\Big((y^3+x^2z)(y^2+xz)\Big)$ has polar degree $5$ whereas the curve $\mathbb{V}\Big((y^3+x^2z)^{31}(y^2+xz)^4\Big)$ has polar degree $3$.
\end{enumerate}
\end{prop}
\begin{proof} Both curves are defined over $\mathbb{Z}$ and, as in the proof of \Cref{exChar3}, the idea is to reduce the resolution of their jacobian ideal over $\mathbb{Z}$ modulo the prime $p=101$ to get a resolution over $\mathbb{F}_p$. We give the complete argument for \Cref{redprob1}; \Cref{redprob2} is similar and left to the reader. As in the proof of \Cref{exChar3}, $\mathcal{I}_p$ stands for $\mathcal{I}\otimes_{\mathbb{Z}} \mathbb{F}_p$.
The jacobian ideal of $\mathbb{V}\Big(z(y^3+x^2z)\Big)$ has the resolution
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=3em,column sep=2em,minimum width=2em]
{
\node(a){$0$}; & \node(b){$\O(-1)\oplus \O(-2)$}; & \node(c){$\O^{3}$}; & \node(d){$\mathcal{I}_{101}(3)$}; & \node(e){$0$,}; \\};
\path[-stealth]
(a) edge (b)
(b) edge (c)
(c) edge node [above] {$\Phi_{red}$} (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
$\mathcal{I}_{\mathbb{X}_{101}}$ has the following resolution:
\begin{small}
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=3em,column sep=2em,minimum width=2em]
{
\node(b){$0$}; & \node(c){$\O(-3,-2)$}; & \node(d){$\begin{matrix}\O(-1,-1) \\ \oplus \\ \O(-2,-1) \end{matrix}$}; & \node(e){$\mathcal{I}_{\mathbb{X}_{101}}$}; & \node(f){$0$.}; \\};
\path[-stealth]
(b) edge (c)
(c) edge (d)
(d) edge (e)
(e) edge (f);
\end{tikzpicture}
\end{center}
\end{small}
There is no torsion component above the point $z=(1:0:0)$ and so the corresponding polar map has topological degree $2$.
On the other hand, the jacobian ideal of the curve $\mathbb{V}\Big(z^{50}(y^3+x^2z)^{51}\Big)$ has the resolution
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=3em,column sep=2em,minimum width=2em]
{
\node(a){$0$}; & \node(b){$\O(-1)\oplus \O(-2)$}; & \node(c){$\O^{3}$}; & \node(d){$\mathcal{I}_{101}(3)$}; & \node(e){$0$}; \\};
\path[-stealth]
(a) edge (b)
(b) edge (c)
(c) edge node [above] {$\Phi $} (d)
(d) edge (e);
\end{tikzpicture}
\end{center}
and $\mathcal{I}_{\mathbb{X}_{101}}$ has the following resolution:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=3em,column sep=2em,minimum width=2em]
{
\node(b){$0$}; & \node(c){$\O(-3,-2)$}; & \node(d){$\begin{matrix}\O(-1,-1) \\ \oplus \\ \O(-2,-1)\end{matrix}$}; & \node(e){$\mathcal{I}_{\mathbb{X}_{101}}$}; & \node(f){$0$.}; \\};
\path[-stealth]
(b) edge (c)
(c) edge (d)
(d) edge (e)
(e) edge (f);
\end{tikzpicture}
\end{center}
There is a torsion component above the point $z=(1:0:0)$, as can be seen from the resolution of $\tilde{X}$:
\begin{center}
\begin{tikzpicture}
\matrix (m) [row sep=3em,column sep=2em,minimum width=2em]
{
\node(b){$0$}; & \node(c){$\O(-2,-2)^2$}; & \node(d){$\begin{matrix}\O(-1,-1) \\ \oplus \\ \O(-2,-1)\\\oplus \\ \O(-1,-2) \end{matrix}$}; & \node(e){$\mathcal{I}_{\tilde{X}}$}; & \node(f){$0$.}; \\};
\path[-stealth]
(b) edge (c)
(c) edge (d)
(d) edge (e)
(e) edge (f);
\end{tikzpicture}
\end{center}
The polar map of the latter curve is given by \[(x:y:z)\mapsto (xz^2:-49y^2z:50y^3)\] and its inverse is $(x:y:z)\mapsto(-37xz^2:-3y^2z:y^3).$
\end{proof}
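The final claim of the proof can be checked symbolically. The following sketch (Python with sympy; an illustration, not part of the original argument) verifies that composing the claimed inverse with the polar map $(x:y:z)\mapsto(xz^2:-49y^2z:50y^3)$ returns $(x:y:z)$ up to a common factor, modulo $101$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
p = 101  # the characteristic from the proposition

# Polar map of z^50*(y^3+x^2*z)^51 in characteristic 101, as stated above
X, Y, Z = x*z**2, -49*y**2*z, 50*y**3
# Claimed inverse, applied to the image of (x : y : z)
comp = (-37*X*Z**2, -3*Y**2*Z, Y**3)

# Psi o Phi should equal (x : y : z) up to a common factor, modulo p
ratios = [sp.cancel(c / v) for c, v in zip(comp, (x, y, z))]
diffs = [sp.expand(ratios[0] - r) for r in ratios[1:]]
ok = all(int(cf) % p == 0
         for d in diffs for cf in sp.Poly(d, x, y, z).coeffs())
print(ok)  # True
```

Over the integers the three ratios differ, but their differences have all coefficients divisible by $101$, so the composition is the identity of $\P^2$ over $\mathbb{F}_{101}$.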
\bibliographystyle{alpha}
\section{Introduction}
A \textit{meromorphic rank $2$ connection} $(E,\nabla)$ on a projective manifold $X$
is the datum of a rank $2$ vector bundle $E$ equipped with a $\mathbb{C}$-linear
morphism $\nabla \colon E \ra E \otimes \Omega^1_X (D)$ satisfying
the Leibniz rule
\begin{equation*}
\nabla (f \cdot s) = f \cdot \nabla(s) + df \otimes s
\end{equation*}
for any local section $s$ and any local function $f$.
Here $D$ is the polar divisor of the connection $\nabla$.
The connection $\nabla$ is \textit{flat} when the curvature vanishes,
that is, $\nabla \circ \nabla=0$.
For a flat meromorphic rank $2$ connection, we can define its monodromy representation.
When $\det(E) = \mathcal{O}_X$ and the trace connection $\mathrm{tr} (\nabla)$
is the trivial connection on $\mathcal{O}_X$,
we say that $(E,\nabla)$ is an \textit{$\mathfrak{sl}_2$-connection}.
A connection $(E,\nabla)$ is called \textit{regular}
if local $\nabla$-horizontal sections
have moderate growth near the polar divisor $D$
(for details, see \cite[Chap.\ II, Definition 4.2]{Deligne}).
In this paper, we introduce
a family, parametrized by $\boldsymbol{\lambda} \in \mathbb{C}^n$,
of meromorphic $\mathfrak{sl}_2$-connections
$\nabla_{\boldsymbol{\lambda}} = d + A_{\boldsymbol{\lambda}}$ on the trivial bundle
$\mathcal{O}_{\mathbb{P}^n}\oplus\mathcal{O}_{\mathbb{P}^n}$
over $\mathbb{P}^n$ with $n \ge 2$,
with an explicit connection matrix $A_{\boldsymbol{\lambda}}$.
\subsection{The explicit expression of $\nabla_{\boldsymbol{\lambda}}$}\label{2019.7.10.22.19}
The explicit connection matrix $A_{\boldsymbol{\lambda}}$ is described as follows.
Let $[x:y:z_1:\ldots:z_{n-2}:t]$ be the homogeneous coordinates of $\mathbb{P}^n$.
Set $f(x,y,t):=x^2 + y^2 +t^2 -2 (x y +yt + t x)$.
For $\boldsymbol{\lambda}= (\lambda_0 , \ldots, \lambda_{n-1}) \in \mathbb{C}^n$,
we define rational $1$-forms on $\mathbb{P}^n$ as follows:
\begin{equation*}
\begin{aligned}
\alpha_0(x,y) &:= -
\frac{(2 \lambda_0 + \lambda_1)dx - (2 \lambda_1 + \lambda_0)dy}{2} -
\frac{ \lambda_1 ( y-1)}{2}\frac{ dx}{x} +
\frac{ \lambda_0 ( x-1)}{2}\frac{ dy}{y} ,\\
\alpha_1(x,y) &:= - \frac{1}{4}\frac{ d f(x,y,1)}{f(x,y,1)}, \quad
\alpha_2(x,y):= -\frac{\alpha_0(x,y)}{f(x,y,1)},
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\alpha^i_0(x,y, z_i) &:=\lambda_{i+1} \left( d z_i-
\frac{ z_id (f(x,y,1)-z_i^2) }{2(f(x,y,1)-z_i^2)} \right), \quad
\alpha^i_2(x,y, z_i):= -\frac{\alpha^i_0(x,y, z_i)}{f(x,y,1)} ,
\end{aligned}
\end{equation*}
which are described by the affine coordinates $[x:y:z_1:\ldots:z_{n-2}:1]$.
We define a connection matrix $A_{\boldsymbol{\lambda}}$ as
\begin{equation*}
\begin{aligned}
A_{\boldsymbol{\lambda}}=
\begin{pmatrix}
{\mathcal A}_{11} & {\mathcal A}_{12} \\
-{\mathcal A}_{21} &- {\mathcal A}_{11}
\end{pmatrix} + \sum_{i=1}^{n-2}
\begin{pmatrix}
{\mathcal A}_{11}^i & {\mathcal A}_{12}^i \\
-{\mathcal A}_{21}^i &- {\mathcal A}_{11}^i
\end{pmatrix},
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
{\mathcal A}_{11} &:=
(x-1) \alpha_2 (x,y)
+ \alpha_1(x,y) +\frac{1}{2}\frac{d y}{y} ,\\
{\mathcal A}_{11}^i &:= (x-1) \alpha^i_2 (x,y, z_i), \\
{\mathcal A}_{12} &:= \frac{dx + (x-1)^2\alpha_2 (x,y)
+ 2 (x -1)\alpha_1(x,y) + \alpha_0(x,y) }{y} , \\
{\mathcal A}_{12}^i &:=
\frac{(x-1)^2\alpha^i_2 (x,y, z_i) + \alpha^{i}_0(x,y, z_i) }{y},\\
{\mathcal A}_{21}&:= y \alpha_2 (x,y), \\
{\mathcal A}_{21}^i &:= y \alpha^i_2 (x,y, z_i) ,
\end{aligned}
\end{equation*}
in the affine coordinates $[x:y:z_1:\ldots:z_{n-2}:1]$.
\subsection{Main results}
Let $\mathcal{Q}_0$ and $\mathcal{Q}_i$ be the divisors on $\mathbb{P}^n$ defined by
$\mathcal{Q}_0:= ( f (x, y ,t) =0 )$ and
$\mathcal{Q}_i:= ( f (x, y ,t) - z_i^2 =0 )$, $i=1,\ldots,n-2$,
respectively.
Let $D_n$ be the divisor on $\mathbb{P}^n$ defined by
\begin{equation*}
D_n := ( x=0 ) + ( y=0 ) + ( t=0 ) +
\mathcal{Q}_0 + \mathcal{Q}_1 + \cdots + \mathcal{Q}_{n-2}.
\end{equation*}
From the explicit expression of $\nabla_{\boldsymbol{\lambda}}$, it follows that
all $\nabla_{\boldsymbol{\lambda}}$ share the same polar divisor $D_n$.
Note that the conic $\mathcal{Q}_0$ plays a special role:
it is tangent to the conic $\mathcal{Q}_i$ inside the coordinate hyperplane $(z_i=0)$
for $i=1,\ldots,n-2$,
and it is tangent to the three coordinate hyperplanes $(x=0)$, $(y=0)$ and $(t=0)$.
\begin{theo}\label{2019.7.10.21.43}
\textit{
For each $\boldsymbol{\lambda}$, the connection $\nabla_{\boldsymbol{\lambda}}$
is flat and has at worst regular singularities.
}
\end{theo}
We say that two connections $(E,\nabla)$ and $(E',\nabla')$
are \textit{birationally equivalent} when there is a birational bundle transformation
$\phi \colon E \dashrightarrow E'$ that conjugates the two operators $\nabla$
and $\nabla'$.
We say that two connections $(E,\nabla)$ and $(E',\nabla')$
are \textit{projectively equivalent} if the induced $\mathbb{P}^1$-bundles
coincide $\mathbb{P}(E)=\mathbb{P}(E')$, and if moreover $\nabla$ and $\nabla'$
induce the same projective connection $\mathbb{P}(\nabla)= \mathbb{P}(\nabla')$.
\begin{theo}\label{2019.7.10.15.33}
\textit{Via a generically finite Galois morphism
$f \colon \mathbb{P}^n \rightarrow \mathbb{P}^n$, for each $\boldsymbol{\lambda}$,
the pull-back connection $f^*\nabla_{\boldsymbol{\lambda}}$ on the trivial bundle
is projectively birationally equivalent to a split connection of the form
\begin{equation*}
d+ \begin{pmatrix}
\omega & 0 \\
0 &-\omega
\end{pmatrix}
\end{equation*}
with $\omega$ a rational closed $1$-form on $\mathbb{P}^n$.
}
\end{theo}
In our case, the generically finite Galois morphism $f$ has degree two.
Loray, Pereira, and Touzet have proved
the structure theorem of flat meromorphic $\mathfrak{sl}_2$-connections
on projective manifolds in \cite{LPT}
(see also \cite{CS}).
By Theorem \ref{2019.7.10.15.33}, each $\nabla_{\boldsymbol{\lambda}}$ is of the first of
the three possible types of flat meromorphic $\mathfrak{sl}_2$-connections
on projective manifolds in the sense of Loray, Pereira, and Touzet \cite[Theorem E]{LPT}.
Since the connection $\nabla_{\boldsymbol{\lambda}}$ is flat for each $\boldsymbol{\lambda}$,
we can define its monodromy representation
$\pi_1 (\mathbb{P}^n \setminus D_n)
\rightarrow
\mathrm{SL}_2(\mathbb{C})$.
Let $\boldsymbol{D}_{\infty}$ be the infinite dihedral group:
\begin{equation*}
\boldsymbol{D}_{\infty}:= \left\langle
\begin{pmatrix}
0 & \alpha \\
-\alpha^{-1} & 0
\end{pmatrix},
\begin{pmatrix}
\beta & 0 \\
0 & \beta^{-1}
\end{pmatrix}\
\middle| \
\alpha , \beta \in \mathbb{C}^*
\right\rangle \le \mathrm{SL}_2 (\mathbb{C}).
\end{equation*}
For the monodromy representation of $\nabla_{\boldsymbol{\lambda}}$,
we have the following.
\begin{theo}\label{2019.7.10.22.03}
\textit{
For generic $\boldsymbol{\lambda}$,
the monodromy representation of $\nabla_{\boldsymbol{\lambda}}$ is conjugated to an explicit
representation
\begin{equation*}
\rho_{\boldsymbol{\lambda}} \colon \pi_1 (\mathbb{P}^n \setminus D_n)
\longrightarrow
\mathrm{SL}_2(\mathbb{C}),
\end{equation*}
which is virtually abelian, i.e. abelian after a finite cover of $\mathbb{P}^n \setminus D_n$,
and takes values in the infinite dihedral group $\boldsymbol{D}_{\infty}$.
}
\end{theo}
\subsection{Algebraic Garnier solution}
The $(2n-2)$-variable \textit{Garnier system} $\mathcal{G}_{2n-2}$
is the completely integrable Hamiltonian system
\begin{equation*}
\mathcal{G}_{2n-2} \colon
\left\{
\begin{aligned}
\frac{\partial \rho_j}{\partial t_i} &= -\frac{\partial K_i}{\partial \nu_j} &
i,j= 1,\ldots, 2n-2 \\
\frac{\partial \nu_j}{\partial t_i} &= \frac{\partial K_i}{\partial \rho_j} &
i,j= 1,\ldots, 2n-2,
\end{aligned}
\right.
\end{equation*}
where
\begin{equation*}
K_i= -\frac{\Lambda(t_i)}{T'(t_i)}
\left[ \sum^{2n-2}_{k=1} \frac{T(\nu_k)}{(\nu_k-t_i) \Lambda'(\nu_k)}
\left\{ \rho_k^2 -\sum^{2n}_{m=1} \frac{\theta_m-\delta_{im}}{\nu_k - t_m} \rho_k
+ \frac{\kappa}{\nu_k(\nu_k -1)} \right\}
\right]
\end{equation*}
with $t_{2n-1}=0$, $t_{2n}=1$,
$\kappa:= \frac{1}{4}\left\{ (\sum^{2n}_{m=1} \theta_m -1)^2 - (\theta_{\infty}^2 +1) \right\}$,
$\Lambda(t):= \prod^{2n-2}_{k=1} (t-\nu_k)$ and
$T(t):= \prod^{2n}_{k=1} (t-t_k)$
(see \cite{G1}, \cite{G2}, and \cite{Okamoto}).
Here $\theta_m$ ($m=1,\ldots,2n,\infty$) are the constant parameters defined by
\begin{equation*}
\begin{aligned}
\theta_1&= \frac{1}{2}, & \theta_2&= \frac{1}{2}, &
\theta_{2i+1}&= \lambda_{i+1},&
\theta_{2i+2}&= \lambda_{i+1}\ (i=1,\ldots,n-2) \\
\theta_{2n-1}&= \lambda_{1},&
\theta_{2n}&= \lambda_{0}-1,&
\theta_{\infty}&= \lambda_{0}+\lambda_{1}.
\end{aligned}
\end{equation*}
To give a solution of the Garnier system $\mathcal{G}_{2n-2}$,
we consider the Fuchsian system with $2n + 1$ regular singularities
at $0,1,t_1 , \ldots , t_{2n-2}, \infty$:
\begin{equation}\label{2018.5.22.12.11}
d+
\tilde{H}_{2n-1}\frac{d\tilde{x}}{\tilde{x}} +
\tilde{H}_{2n}\frac{d\tilde{x}}{\tilde{x}-1} +
\sum_{i=1}^{2n-2} \tilde{H}_{i}\frac{d\tilde{x}}{\tilde{x}-t_i} ,
\end{equation}
where $\tilde{H}_{i}$ ($i=1, \ldots , 2n$) are
$2\times 2$ matrices independent of $\tilde{x}$ and
$t_i \neq t_j$ ($i \neq j$).
We assume that $\tilde{H}_{2n+1}:=-\sum_{i=1}^{2n} \tilde{H}_{i}$ is a diagonal matrix and
the eigenvalues of $\tilde{H}_{i}$ ($i=1, \ldots , 2n+1$) are
as in Table \ref{2018.5.17.17.04}.
\begin{table}[htb]
\caption{The eigenvalues of the residue matrices
($i=1,\ldots , n-2$).}
\begin{tabular}{c|ccccccc}\label{2018.5.17.17.04}
Residue matrices & $\tilde{H}_{1}$ &
$\tilde{H}_{2}$ & $\tilde{H}_{2i+1}$ & $\tilde{H}_{2i+2}$
& $\tilde{H}_{2n-1}$ & $\tilde{H}_{2n}$ & $\tilde{H}_{2n+1}$ \\\hline
Eigenvalues
& $\pm \frac{1}{4}$
& $\pm \frac{1}{4}$
& $\pm \frac{\lambda_{i+1}}{2}$
& $\pm \frac{\lambda_{i+1}}{2}$
& $\pm \frac{\lambda_1}{2}$
& $\pm \frac{\lambda_0-1}{2}$
& $\pm\frac{\lambda_0+\lambda_1}{2}$
\end{tabular}
\end{table}
We fix generators $\gamma_{\tilde{x}}$
($\tilde{x}=0,1, t_1, \ldots ,t_{2n-2}, \infty$)
of the fundamental group $\pi_1(\mathbb{P}^1 \setminus \{ 0,1,t_1,\ldots,t_{2n-2}, \infty \},*)$.
Here each loop $\gamma_{\tilde{x}}$ on $\mathbb{P}^1$
is oriented counter-clockwise;
the point $\tilde{x}$ lies inside the loop, while the other singular points lie outside.
Let $\rho'_{\boldsymbol{\lambda}}\colon
\pi_1(\mathbb{P}^1 \setminus\{ t_1,\ldots,t_{2n}, \infty \},*) \ra \mathrm{SL}_2(\mathbb{C})$
be the representation of the fundamental group defined by Table \ref{2018.5.14.12.41}.
If we have an isomonodromic deformation of the Fuchsian system (\ref{2018.5.22.12.11})
whose preserved monodromy representation is conjugated to
$\rho'_{\boldsymbol{\lambda}}$,
then we obtain a solution of the Garnier system $\mathcal{G}_{2n-2}$
(see \cite[Section 2]{Mazz}).
\begin{table}[htb]
\caption{The representation of the fundamental group;
here $a_j = \exp(-\pi \sqrt{-1} \lambda_j)$ for $j=0,1,\ldots,n-1$.}
\begin{center}
\begin{tabular}{c|c|c|c}\label{2018.5.14.12.41}
$\gamma_0$ & $\gamma_1$ & $\gamma_{t_1}$ &
$\gamma_{t_2}$ \\\hline
$\begin{pmatrix} a_1 & 0 \\ 0 & a_1^{-1} \end{pmatrix}$
& $\begin{pmatrix} -a_0 & 0 \\ 0 & -a_0^{-1} \end{pmatrix}$
& $\begin{pmatrix} 0 & 1 \\ -1 &0 \end{pmatrix}$
& $\begin{pmatrix} 0 & a_0^{2} \\ -a_0^{-2} &0 \end{pmatrix}$
\end{tabular}\\
\begin{tabular}{c|c|c}
$\gamma_{t_{2i+1}}$ ($i=1,\ldots,n-2$) & $\gamma_{t_{2i+2}}$ ($i=1,\ldots,n-2$)
& $\gamma_{\infty}$\\\hline
$\begin{pmatrix} a_{i+1} & 0 \\ 0 & a_{i+1}^{-1} \end{pmatrix}$
& $\begin{pmatrix} a^{-1}_{i+1} & 0 \\ 0 & a_{i+1} \end{pmatrix}$
& $\begin{pmatrix} a_0 a_1^{-1} & 0 \\ 0 &a_0^{-1} a_1 \end{pmatrix}$
\end{tabular}
\end{center}
\end{table}
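As a consistency check of Table \ref{2018.5.14.12.41}, one can verify numerically that the matrices satisfy the product relation of the fundamental group. The ordering $\gamma_0\gamma_1\gamma_{t_1}\cdots\gamma_{t_{2n-2}}\gamma_{\infty}=1$ and the sample values of $\boldsymbol{\lambda}$ below are our assumptions (numpy sketch, case $n=3$):

```python
import numpy as np
from functools import reduce

lam = [0.31, 0.47, 0.59]                 # arbitrary sample values of lambda_0..lambda_2
a = [np.exp(-1j * np.pi * l) for l in lam]

D = lambda u: np.array([[u, 0], [0, 1/u]])    # diagonal element
AD = lambda u: np.array([[0, u], [-1/u, 0]])  # anti-diagonal element

# Images of gamma_0, gamma_1, gamma_{t_1}, ..., gamma_{t_4}, gamma_infty from Table 2
mats = [D(a[1]), D(-a[0]), AD(1), AD(a[0]**2),
        D(a[2]), D(1/a[2]), D(a[0]/a[1])]
P = reduce(np.matmul, mats)
print(np.allclose(P, np.eye(2)))  # True
```

The consecutive pairs $\gamma_{t_{2i+1}}\gamma_{t_{2i+2}}$ multiply to the identity, so the check reduces to the four remaining matrices; the same computation works for any $n \ge 2$.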
We say that $(\rho_j(t_1,\ldots,t_{2n-2}), \nu_j(t_1,\ldots,t_{2n-2}))_{j=1,\ldots,2n-2}$
is an \textit{algebraic solution of $\mathcal{G}_{2n-2}$}
if it satisfies the Garnier system $\mathcal{G}_{2n-2}$
and the graph of the solution has Zariski closure of dimension $2n-2$.
\begin{theo}\label{2019.7.10.22.13}
Let $T$ be a certain Zariski open subset of $\mathbb{A}^{2n-2}$
parametrizing generic lines in $\mathbb{P}^n$.
From the natural morphism $\mathbb{P}^1 \times T \rightarrow \mathbb{P}^n$,
one obtains a relative connection $(\nabla_{\mathbb{P}^1 \times T/T})_{\boldsymbol{\lambda}}$
with $2n+1$ simple poles by the pull-back of $\nabla_{\boldsymbol{\lambda}}$.
\begin{itemize}
\item[(i)] Up to an \'etale base change $\tilde{T} \rightarrow T$, an isomorphism of the
relative trivial bundle, and up to relative M\"obius transformations in the base,
we can consider
the relative connection $(\nabla_{\mathbb{P}^1 \times T/T})_{\boldsymbol{\lambda}}$
as a family of the Fuchsian system (\ref{2018.5.22.12.11})
parametrized by $T$.
\item[(ii)] The family $(\nabla_{\mathbb{P}^1 \times T/T})_{\boldsymbol{\lambda}}$
is isomonodromic.
The preserved monodromy representation of the fundamental group
$\pi_1(\mathbb{P}^1 \setminus \{ 0,1,t_1,\ldots,t_{2n-2}, \infty \},*)$
of this isomonodromic family
is conjugated to the representation given by Table \ref{2018.5.14.12.41}.
\item[(iii)] Since $\dim \tilde{T}=2n-2$, the connection matrices
of the isomonodromic family $(\nabla_{\mathbb{P}^1 \times T/T})_{\boldsymbol{\lambda}}$
define an algebraic solution of the Garnier system $\mathcal{G}_{2n-2}$.
\end{itemize}
\end{theo}
In the case $n = 2$,
the family of connections $\nabla_{\boldsymbol{\lambda}}$
was constructed by Girand in \cite{Girand}.
Moreover, Girand described in \cite{Girand} an explicit relation
to certain algebraic solutions of the sixth Painlev\'e equation.
Our argument generalizes Girand's explicit construction of
$\nabla_{\boldsymbol{\lambda}}$, and the proofs of the main results,
to the case $n \ge 2$.
The organization of this paper is as follows.
In Section \ref{2018.5.18.17.10}, we introduce
a family, parametrized by $\boldsymbol{\lambda} \in \mathbb{C}^n$,
of meromorphic $\mathfrak{sl}_2$-connections
$\nabla_{\boldsymbol{\lambda}} = d + A_{\boldsymbol{\lambda}}$ on the trivial bundle
$\mathcal{O}_{\mathbb{P}^n}\oplus\mathcal{O}_{\mathbb{P}^n}$
over $\mathbb{P}^n$ with $n \ge 2$,
with an explicit connection matrix $A_{\boldsymbol{\lambda}}$.
In Section \ref{2019.7.10.22.00},
we show Theorem \ref{2019.7.10.21.43} and Theorem \ref{2019.7.10.15.33}.
In Section \ref{2018.5.18.17.14}, we compute the monodromy representation of
$\nabla_{\boldsymbol{\lambda}}$ for generic $\boldsymbol{\lambda}$.
In Section \ref{2019.7.10.22.02},
we show Theorem \ref{2019.7.10.22.03}.
In Section \ref{2018.5.18.17.17},
we consider the natural morphism $\mathbb{P}^1 \times T \rightarrow \mathbb{P}^n$,
where $T$ is a certain Zariski open subset of $\mathbb{A}^{2n-2}$
parametrizing generic lines in $\mathbb{P}^n$.
Let $(\nabla_{\mathbb{P}^1 \times T/T})_{\boldsymbol{\lambda}}$ be
the relative connection with $2n+1$ simple poles
given by the pull-back of $\nabla_{\boldsymbol{\lambda}}$.
In Section \ref{2019.7.10.22.11},
we introduce an \'etale base change $\tilde{T} \rightarrow T$
to prove the assertion (i) of Theorem \ref{2019.7.10.22.13}.
In Section \ref{2019.7.12.11.27},
after the \'etale base change $\tilde{T} \rightarrow T$,
we compute the residue matrix of
$(\nabla_{\mathbb{P}^1 \times \tilde{T}/\tilde{T}})_{\boldsymbol{\lambda}}$
for each simple pole.
In Section \ref{2019.7.10.22.15},
we recall the relation between isomonodromic deformations and the Garnier system
following \cite{Mazz}.
In Section \ref{2019.7.12.11.28},
we show Theorem \ref{2019.7.10.22.13}.
\section{Construction of flat connections on projective spaces}\label{2018.5.18.17.10}
In this section,
we introduce
a family, parametrized by $\boldsymbol{\lambda} \in \mathbb{C}^n$,
of meromorphic $\mathfrak{sl}_2$-connections
$\nabla_{\boldsymbol{\lambda}} = d + A_{\boldsymbol{\lambda}}$ on the trivial bundle
$\mathcal{O}_{\mathbb{P}^n}\oplus\mathcal{O}_{\mathbb{P}^n}$
over $\mathbb{P}^n$ with $n \ge 2$,
with the explicit connection matrix $A_{\boldsymbol{\lambda}}$
described in Section \ref{2019.7.10.22.19}.
To construct these connections, we start from
a family, parametrized by $\boldsymbol{\lambda} \in \mathbb{C}^n$,
of flat meromorphic $\mathfrak{sl}_2$-connections $(\nabla_0)_{\boldsymbol{\lambda}}$
on the trivial bundle
$\mathcal{O}_{\mathbb{C}^n}\oplus\mathcal{O}_{\mathbb{C}^n}$
over $\mathbb{C}^n$
whose connection matrix splits.
Next, we consider a birational transformation of the projective connection
$\mathbb{P}((\nabla_0)_{\boldsymbol{\lambda}})$.
We define a generically finite Galois morphism
$f \colon \mathbb{C}^n \rightarrow \mathbb{C}^n$.
We show that this birational transformation descends to
a projective connection over $\mathbb{C}^n$.
We denote by $\mathbb{P}((\nabla_1)_{\boldsymbol{\lambda}})$ this projective connection.
The connection corresponding to $\mathbb{P}((\nabla_1)_{\boldsymbol{\lambda}})$
does not split.
If we extend the projective connection $\mathbb{P}((\nabla_1)_{\boldsymbol{\lambda}})$
over $\mathbb{C}^n$
to a projective connection over $\mathbb{P}^n$ naively,
then the extended projective connection over $\mathbb{P}^n$ has poles of order $2$ along
the divisor $\mathbb{P}^n \setminus \mathbb{C}^n$.
Then we consider a birational transformation
of $\mathbb{P}((\nabla_1)_{\boldsymbol{\lambda}})$.
By this birational transformation, we obtain
the meromorphic $\mathfrak{sl}_2$-connections
$\nabla_{\boldsymbol{\lambda}} = d + A_{\boldsymbol{\lambda}}$
with the explicit connection matrix $A_{\boldsymbol{\lambda}}$
described in Section \ref{2019.7.10.22.19}.
Finally Theorem \ref{2019.7.10.21.43} and Theorem \ref{2019.7.10.15.33} follow from
this construction of $\nabla_{\boldsymbol{\lambda}}$.
\subsection{Flat connections $(\nabla_0)_{\boldsymbol{\lambda}}$ defined by
rational closed 1-forms}
Let $\lambda_0, \ldots, \lambda_{n-1} $ be complex numbers.
Set $Y:=\Spec \mathbb{C} [u_0,u_1,z_1,\ldots,z_{n-2}]$.
Let $\omega_0$ and $\psi_n$ be the closed rational 1-forms on $Y$ defined by
\begin{equation*}
\begin{aligned}
\omega_0 :=&\ \lambda_0 \left( \frac{du_0}{u_0} - \frac{du_1}{u_1}\right)
+ \lambda_1 \left( \frac{du_0}{u_0-1} - \frac{du_1}{u_1-1}\right)\\
\psi_n :=&\
\begin{cases}
\sum_{i=1}^{n-2} \lambda_{i+1} \left(
\frac{ d(u_0-u_1 + z_i)}{u_0-u_1 + z_i}
-\frac{d (u_0-u_1 - z_i)}{u_0-u_1 - z_i} \right) & n>2\\
0 & n=2.
\end{cases}
\end{aligned}
\end{equation*}
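Each summand of $\omega_0 + \psi_n$ is a logarithmic differential, so the form is closed and the diagonal connection $(\nabla_0)_{\boldsymbol{\lambda}}$ below is flat. A symbolic verification of closedness for $n=3$ (sympy sketch; an illustration, not part of the argument):

```python
import sympy as sp

u0, u1, z1 = sp.symbols('u0 u1 z1')
l0, l1, l2 = sp.symbols('lambda0 lambda1 lambda2')

# Coefficients of omega_0 + psi_3 (case n = 3) in du0, du1, dz1
c0 = l0/u0 + l1/(u0 - 1) + l2*(1/(u0 - u1 + z1) - 1/(u0 - u1 - z1))
c1 = -l0/u1 - l1/(u1 - 1) + l2*(-1/(u0 - u1 + z1) + 1/(u0 - u1 - z1))
cz = l2*(1/(u0 - u1 + z1) + 1/(u0 - u1 - z1))

# Closedness: the mixed partial derivatives of the coefficients agree
checks = [sp.simplify(sp.diff(c0, u1) - sp.diff(c1, u0)),
          sp.simplify(sp.diff(c0, z1) - sp.diff(cz, u0)),
          sp.simplify(sp.diff(c1, z1) - sp.diff(cz, u1))]
print(checks)  # [0, 0, 0]
```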
We have a family of flat connections
\begin{equation*}
(\nabla_0)_{\boldsymbol{\lambda}} :=d
+\frac{1}{2}
\begin{pmatrix}
\omega_0 + \psi_n & 0 \\
0 & -\omega_0 - \psi_n\\
\end{pmatrix}
\end{equation*}
on the trivial rank 2 vector bundle $E_0 \ra Y$.
The family $(\nabla_0)_{\boldsymbol{\lambda}}$
is parametrized by $\boldsymbol{\lambda}=(\lambda_0,\ldots,\lambda_{n-1})$.
On the associated projective bundle $\mathbb{P}(E_0)$,
we have the associated projective connection
$\mathbb{P}((\nabla_0)_{\boldsymbol{\lambda}})=d w_0 + (\omega_0 + \psi_n ) w_0$,
where $w_0$ is a projective coordinate on the fibers.
\subsection{Descent of the connection $(\nabla_0)_{\boldsymbol{\lambda}}$}\label{2019.7.10.23.01}
We consider the birational transformation of the projective connection
$\mathbb{P}((\nabla_0)_{\boldsymbol{\lambda}})$ defined by
$\Phi \colon \mathbb{P}(E_0) \dashrightarrow \mathbb{P}(E_0)$;
\begin{equation*}
\begin{aligned}
(u_0,u_1,z_1,\ldots,z_{n-2}, [w_0^0:w_0^1])
\longmapsto (u_0,u_1,z_1,\ldots,z_{n-2},[\tilde{w}_0^0:\tilde{w}_0^1] ),
\end{aligned}
\end{equation*}
where
\begin{equation}\label{2018.5.4.11.20}
\begin{aligned}
\frac{\tilde{w}_0^1}{\tilde{w}_0^0}=
(u_0 - u_1) \frac{w^1_0+w^0_0}{w^1_0-w_0^0} .
\end{aligned}
\end{equation}
The rational function (\ref{2018.5.4.11.20}) is an invariant
of the involution $\iota\colon \mathbb{P}(E_0) \ra \mathbb{P}(E_0)$;
\begin{equation*}
\begin{aligned}
\iota\colon
(u_0,u_1,z_1,\ldots,z_{n-2}, [w_0^0:w_0^1])
\longmapsto (u_1,u_0,z_1,\ldots,z_{n-2},[w_0^1:w_0^0] ),
\end{aligned}
\end{equation*}
that is $(\tilde{w}_0^1/\tilde{w}_0^0) \circ \iota=\tilde{w}_0^1/\tilde{w}_0^0$ as
functions on $\mathbb{P}(E_0)$.
Put $w_0=w_0^1/w_0^0$ and $\tilde{w}_0=\tilde{w}_0^1/\tilde{w}_0^0$.
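The claimed $\iota$-invariance of the rational function (\ref{2018.5.4.11.20}) is a one-line symbolic check (sympy sketch):

```python
import sympy as sp

u0, u1, w = sp.symbols('u0 u1 w')
wt = (u0 - u1) * (w + 1) / (w - 1)   # (2018.5.4.11.20) with w = w0^1/w0^0

# iota swaps u0 <-> u1 and [w0^0 : w0^1] -> [w0^1 : w0^0], i.e. w -> 1/w
wt_iota = wt.subs({u0: u1, u1: u0, w: 1/w}, simultaneous=True)
print(sp.simplify(wt - wt_iota))  # 0
```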
We can check the following proposition by direct computation.
\begin{prop}
We define a map $f \colon Y \ra \mathbb{P}^n$ by
\begin{equation*}
\begin{aligned}
(u_0,u_1,z_1, \dots,z_{n-2})
&\longmapsto [s_1:s_2 : z_1 : \ldots :z_{n-2}:1 ],
\end{aligned}
\end{equation*}
where $s_1=u_0+u_1$ and $s_2=u_0 u_1$.
The birational transformation $(\Phi^{-1})^* \mathbb{P}((\nabla_0)_{\boldsymbol{\lambda}})$
on $\mathbb{P}(E_0)$ descends to a projective connection on
$f(Y) \times \mathbb{P}^1 \ra f(Y)$:
\begin{equation}\label{2018.4.27.14.06}
\begin{aligned}
(\Phi^{-1})^* \mathbb{P}((\nabla_0)_{\boldsymbol{\lambda}})
&=\frac{d \tilde{w}_0}{d w_0} \left( dw_0 + (\omega_0 + \psi_n) w_0 \right) \\
&=
d \tilde{w}_0 + \left( \alpha_2 (s_1,s_2) + \sum^{n-2}_{i=1}
\alpha^i_2 (s_1,s_2, z_i) \right) \tilde{w}_0^2 \\
&\quad + 2 \alpha_1(s_1,s_2) \tilde{w}_0
+ \left(\alpha_0 ( s_1,s_2) + \sum^{n-2}_{i=1}
\alpha^i_0 (s_1,s_2, z_i)\right),
\end{aligned}
\end{equation}
where
\begin{equation}\label{2018.4.27.14.04}
\begin{aligned}
\alpha_0 ( s_1,s_2) &:=
\frac{2 \lambda_0 (1 - s_1 + s_2 ) + \lambda_1 (-s_1 + 2 s_2)}{2( 1- s_1 + s_2)} d s_1
-
\frac{ \lambda_0 s_1 ( 1 - s_1 + s_2) +\lambda_1 s_2 (s_1-2)}{2 s_2 (1 - s_1 + s_2)} ds_2, \\
\alpha^i_0 ( s_1,s_2,z_i)&:= \lambda_{i+1} \left( d z_i
-
\frac{ z_id (s_1^2-4 s_2 - z_i^2) }{2(s_1^2-4 s_2 - z_i^2)} \right) ,\\
\alpha_1(s_1,s_2)
&:=-\frac{1}{4}\frac{d(s_1^2 - 4 s_2)}{s_1^2 - 4 s_2}, \\
\alpha_2(s_1,s_2)
& := - \frac{\alpha_0(s_1,s_2)}{s_1^2 -4 s_2} , \\
\alpha^i_2(s_1,s_2, z_i) &:=- \frac{\alpha^i_0(s_1,s_2, z_i)}{s_1^2-4 s_2}.
\end{aligned}
\end{equation}
\end{prop}
The corresponding connection $(\nabla_1)_{\boldsymbol{\lambda}}$
on $f(Y) \times \mathbb{C}^2 \ra f(Y)$
is
\begin{equation*}
(\nabla_1)_{\boldsymbol{\lambda}}=
d+ \begin{pmatrix}
\alpha_1 (s_1,s_2) & \alpha_0 (s_1,s_2)\\
-\alpha_2 (s_1,s_2) & -\alpha_1 (s_1,s_2)
\end{pmatrix}+\sum^{n-2}_{i=1}
\begin{pmatrix}
0 & \alpha_0^i (s_1,s_2,z_i)\\
-\alpha_2^i (s_1,s_2,z_i) & 0
\end{pmatrix}.
\end{equation*}
We consider a relation between
this connection and the connection $(\nabla_0)_{\boldsymbol{\lambda}}$.
Let $\nabla_0'$ be the meromorphic connection on $Y \times \mathbb{C}\ra Y$
defined by
$\nabla_0':=d-\frac{1}{2}
\frac{d(u_0-u_1)}{u_0-u_1}$.
We define a matrix $M_1(u_0,u_1)$ on $Y$ by
\begin{equation*}
M_1(u_0,u_1):=
\begin{pmatrix}
-1 & -u_0+ u_1\\
-1 & u_0- u_1
\end{pmatrix}.
\end{equation*}
Let $\nabla_0''$ be the meromorphic connection on $Y \times \mathbb{C}^2\ra Y$
defined by
\begin{equation*}
\begin{aligned}
\nabla_0''&:=d+M_1(u_0,u_1)^{-1} dM_1(u_0,u_1) \\
&\qquad +
M_1(u_0,u_1)^{-1}
\frac{1}{2}
\begin{pmatrix}
\omega_0 + \psi_n & 0 \\
0 & -\omega_0 - \psi_n\\
\end{pmatrix}
M_1(u_0,u_1).
\end{aligned}
\end{equation*}
Then we have
\begin{equation}\label{2018.5.8.23.53}
f^* (\nabla_1)_{\boldsymbol{\lambda}} = \nabla_0'' \otimes \nabla_0'.
\end{equation}
Moreover, we consider the map $\mathbb{P}^n \ra \mathbb{P}^n$;
$[s_1:s_2:z_1:\ldots:z_{n-2}:t] \mapsto [x:y:z_1:\ldots :z_{n-2}:t]$,
where $x:=t - s_1 +s_2$ and $y:=s_2$.
Set
\begin{equation*}
\begin{aligned}
f(x,y) :=&\ x^2 + y^2 +1 -2 (x y + x + y) .
\end{aligned}
\end{equation*}
Then the rational 1-forms (\ref{2018.4.27.14.04}) are transformed into
\begin{equation}\label{2018.4.27.13.15}
\begin{aligned}
\alpha_0(x,y) &= -
\frac{(2 \lambda_0 + \lambda_1)dx - (2 \lambda_1 + \lambda_0)dy}{2} -
\frac{ \lambda_1 ( y-1)}{2}\frac{ dx}{x} +
\frac{ \lambda_0 ( x-1)}{2}\frac{ dy}{y} , \\
\alpha^i_0(x,y, z_i) &=\lambda_{i+1} \left( d z_i
-
\frac{ z_id (f(x,y)-z_i^2) }{2(f(x,y)-z_i^2)} \right), \\
\alpha_1(x,y) &= - \frac{1}{4}\frac{ d f(x,y)}{f(x,y)},\\
\alpha_2(x,y)&= -\frac{\alpha_0(x,y)}{f(x,y)}, \\
\alpha^i_2(x,y, z_i)&= -\frac{\alpha^i_0(x,y, z_i)}{f(x,y)} ,
\end{aligned}
\end{equation}
which are described by the affine coordinates $[x:y:z_1:\ldots:z_{n-2}:1]$.
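The transformation of the first $1$-form of (\ref{2018.4.27.14.04}) into (\ref{2018.4.27.13.15}) can be verified symbolically. The following sympy sketch pulls $\alpha_0(s_1,s_2)$ back along $s_1 = 1+y-x$, $s_2 = y$ (the affine chart $t=1$ of the coordinate change above) and compares it with $\alpha_0(x,y)$:

```python
import sympy as sp

x, y = sp.symbols('x y')
l0, l1 = sp.symbols('lambda0 lambda1')

# Affine coordinate change (t = 1): x = 1 - s1 + s2, y = s2
s1, s2 = 1 + y - x, y
# Pull-backs of ds1, ds2 as pairs (coefficient of dx, coefficient of dy)
ds1, ds2 = (-1, 1), (0, 1)

# alpha_0(s1, s2) from (2018.4.27.14.04)
A = (2*l0*(1 - s1 + s2) + l1*(-s1 + 2*s2)) / (2*(1 - s1 + s2))
B = -(l0*s1*(1 - s1 + s2) + l1*s2*(s1 - 2)) / (2*s2*(1 - s1 + s2))
pulled = (A*ds1[0] + B*ds2[0], A*ds1[1] + B*ds2[1])

# alpha_0(x, y) from (2018.4.27.13.15)
target = (-(2*l0 + l1)/2 - l1*(y - 1)/(2*x),
          (2*l1 + l0)/2 + l0*(x - 1)/(2*y))

print([sp.simplify(p - t) for p, t in zip(pulled, target)])  # [0, 0]
```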
\subsection{Birational transformations of the connection
$(\nabla_1)_{\boldsymbol{\lambda}}$}\label{2019.7.10.22.00}
From the connection $(\nabla_1)_{\boldsymbol{\lambda}}$ on $ f(Y) \times \mathbb{C}^2 \ra f(Y)$,
we construct a connection on the trivial bundle $\mathbb{P}^n \times \mathbb{C}^2 \ra \mathbb{P}^n$
whose pole divisor is $D_n$.
If we extend the rational 1-forms (\ref{2018.4.27.13.15}) to
rational 1-forms on $\mathbb{P}^n$,
then
$\alpha_0(x,y)$,
$\alpha^i_0(x,y, z_i)$ and $\alpha_1(x,y)$ have
poles of order $2$, $2$ and $1$ along the divisor $(t=0)$, respectively.
On the other hand,
the rational 1-forms $\alpha_2(x,y)$ and $\alpha_2^i(x,y, z_i)$ have no pole along
the divisor $(t=0)$.
So we consider a birational transformation of the projective connection (\ref{2018.4.27.14.06})
as follows.
The $dy/y$ part of the projective connection (\ref{2018.4.27.14.06}) is
\begin{equation*}
\begin{aligned}
& d \tilde{w}_0 - \frac{\lambda_0 (\tilde{w}_0 - x+1)(\tilde{w}_0 + x-1)}{x+1} \frac{dy}{y} \\
&\quad + [\text{ terms whose pole divisors do not contain the divisor $(y=0)$ }] .
\end{aligned}
\end{equation*}
Then we consider the following birational map
\begin{equation}\label{2018.4.13.16.49}
\begin{aligned}
\mathbb{P}^n \times \mathbb{P}^1
&\dashrightarrow \mathbb{P}^n \times \mathbb{P}^1 \\
([x:y: z_1: \ldots: z_{n-2}:1],[1: \tilde{w}_0]) \
&\longmapsto ( [x: y: z_1: \ldots :z_{n-2}:1], [1:w]),
\end{aligned}
\end{equation}
where $\tilde{w}_0 -x+1 = w y$.
By this birational transformation (\ref{2018.4.13.16.49}),
the projective connection (\ref{2018.4.27.14.06}) is transformed into
\begin{equation*}
\begin{aligned}
& d w + \left( {\mathcal A}_{21} (x,y)
+ \sum^{n-2}_{i=1} {\mathcal A}_{21}^i (x,y, z_i)\right) w^2 \\
& \ + 2
\left({\mathcal A}_{11}(x,y)
+ \sum_{i=1}^{n-2} {\mathcal A}_{11}^i (x,y, z_i)
\right)
w +{\mathcal A}_{12}(x,y)
+ \sum_{i=1}^{n-2} {\mathcal A}_{12}^i (x,y, z_i),
\end{aligned}
\end{equation*}
where
\begin{equation}\label{2018.5.21.12.20}
\begin{aligned}
{\mathcal A}_{21} (x,y)&:= y \alpha_2 (x,y), \\
{\mathcal A}_{21}^i (x,y,z_i)&:= y \alpha^i_2 (x,y, z_i) , \\
{\mathcal A}_{11} (x,y)&:=
(x-1) \alpha_2 (x,y)
+ \alpha_1(x,y) +\frac{1}{2}\frac{d y}{y} ,\\
{\mathcal A}_{11}^i (x,y,z_i)&:= (x-1) \alpha^i_2 (x,y, z_i) ,\\
{\mathcal A}_{12} (x,y) &:= \frac{dx + (x-1)^2\alpha_2 (x,y)
+ 2 (x -1)\alpha_1(x,y) + \alpha_0(x,y) }{y} , \\
{\mathcal A}_{12}^i (x,y,z_i)&:=
\frac{(x-1)^2\alpha^i_2 (x,y, z_i) + \alpha^{i}_0(x,y, z_i) }{y} .
\end{aligned}
\end{equation}
The corresponding connection $\nabla_{\boldsymbol{\lambda}}$ on
$ \mathbb{P}^n \times \mathbb{C}^2 \ra \mathbb{P}^n$ is
\begin{equation*}
\begin{aligned}
\nabla_{\boldsymbol{\lambda}}
=d+
\begin{pmatrix}
{\mathcal A}_{11} (x,y) & {\mathcal A}_{12} (x,y) \\
-{\mathcal A}_{21} (x,y) &- {\mathcal A}_{11} (x,y)
\end{pmatrix} + \sum_{i=1}^{n-2}
\begin{pmatrix}
{\mathcal A}_{11}^i (x,y,z_i) & {\mathcal A}_{12}^i (x,y,z_i) \\
-{\mathcal A}_{21}^i (x,y,z_i) &- {\mathcal A}_{11}^i (x,y,z_i)
\end{pmatrix},
\end{aligned}
\end{equation*}
whose polar divisor is $D_n$.
This connection $\nabla_{\boldsymbol{\lambda}}$
is the connection described in Section \ref{2019.7.10.22.19}.
We consider a relation between $\nabla_{\boldsymbol{\lambda}}$ and
$(\nabla_1)_{\boldsymbol{\lambda}}$.
Let $\nabla_1'$ be the meromorphic connection
on $\mathbb{P}^n \times \mathbb{C} \ra \mathbb{P}^n$ defined by
$\nabla_1':=d-\frac{1}{2}\frac{dy}{y}$.
We define a matrix $M_2(x,y)$, in the affine coordinates $[x:y:z_1:\ldots:z_{n-2}:1]$, by
\begin{equation*}
M_2(x,y):=
\begin{pmatrix}
y & x-1\\
0 &1
\end{pmatrix} .
\end{equation*}
Let $\nabla_1''$ be the meromorphic connection
on $\mathbb{P}^n \times \mathbb{C}^2 \ra \mathbb{P}^n$ defined by
\begin{equation*}
\begin{aligned}
\nabla_1''=d+& M_2(x,y)^{-1} dM_2(x,y)
+
M_2(x,y)^{-1}
\begin{pmatrix}
\alpha_1(x,y) & \alpha_0(x,y) \\
-\alpha_2(x,y) &- \alpha_1(x,y) \\
\end{pmatrix}
M_2(x,y)\\
&\quad +
\sum_{i=1}^{n-2}
M_2(x,y)^{-1}
\begin{pmatrix}
0 &
\alpha^i_0(x,y, z_i) \\
-\alpha^i_2(x,y, z_i) & 0 \\
\end{pmatrix}
M_2(x,y).
\end{aligned}
\end{equation*}
We can check that
\begin{equation}\label{2018.5.8.23.54}
\nabla_{\boldsymbol{\lambda}} =
\nabla''_1 \otimes \nabla'_1.
\end{equation}
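The equality (\ref{2018.5.8.23.54}) is a matrix identity in the entries of the connection matrices. Treating $\alpha_0,\alpha_1,\alpha_2$ and $dx,dy$ as formal commuting symbols, it can be checked as follows for $n=2$ (sympy sketch; the $z_i$-terms are handled identically):

```python
import sympy as sp

x, y, dx, dy = sp.symbols('x y dx dy')
a0, a1, a2 = sp.symbols('alpha0 alpha1 alpha2')  # the 1-forms, treated formally

M2 = sp.Matrix([[y, x - 1], [0, 1]])
dM2 = sp.Matrix([[dy, dx], [0, 0]])           # entrywise exterior derivative of M2
B = sp.Matrix([[a1, a0], [-a2, -a1]])         # matrix of (nabla_1)_lambda, n = 2

# nabla_1'' tensor nabla_1': gauge transform by M2, twisted by -(1/2) dy/y
lhs = M2.inv() * dM2 + M2.inv() * B * M2 - sp.Rational(1, 2) * (dy/y) * sp.eye(2)

# Connection matrix of nabla_lambda from (2018.5.21.12.20), n = 2
A11 = (x - 1)*a2 + a1 + sp.Rational(1, 2)*dy/y
A12 = (dx + (x - 1)**2*a2 + 2*(x - 1)*a1 + a0) / y
A21 = y*a2
rhs = sp.Matrix([[A11, A12], [-A21, -A11]])

print(sp.simplify(lhs - rhs) == sp.zeros(2, 2))  # True
```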
By a combination of the equalities (\ref{2018.5.8.23.53}) and (\ref{2018.5.8.23.54}),
we have the following proposition:
\begin{prop}\label{2019.7.12.12.28}
The pull-back $ f^* \nabla_{\boldsymbol{\lambda}}$
is birationally equivalent to
$(\nabla_0)_{\boldsymbol{\lambda}}\otimes \nabla_0'\otimes f^* \nabla_1'$.
\end{prop}
\begin{proof}[Proof of Theorem \ref{2019.7.10.21.43} and Theorem \ref{2019.7.10.15.33}]
First, since $(\nabla_0)_{\boldsymbol{\lambda}}$,
$\nabla_0'$ and $\nabla_1'$ are flat and
$f$ is a generically finite Galois morphism, we have
the flatness of $\nabla_{\boldsymbol{\lambda}}$ for each $\boldsymbol{\lambda}$
by Proposition \ref{2019.7.12.12.28}.
Second, we have that $\nabla_{\boldsymbol{\lambda}}$
has at worst regular singularities for each $\boldsymbol{\lambda}$
by the explicit expression of $\nabla_{\boldsymbol{\lambda}}$ and
\cite[Chap. II, Theorem 4.1 (ii)]{Deligne}.
Finally, the assertion of Theorem \ref{2019.7.10.15.33}
is deduced from Proposition \ref{2019.7.12.12.28}.
\end{proof}
\section{Monodromy representation}\label{2018.5.18.17.14}
In this section, we consider the monodromy representation
$\pi_1(\mathbb{P}^n \setminus D_n,*) \ra \mathrm{SL}_{2}(\mathbb{C})$
of $\nabla_{\boldsymbol{\lambda}}$ for generic $\boldsymbol{\lambda}$.
In Section \ref{2019.7.10.23.32} and Section \ref{2019.7.10.23.33},
we discuss the structure of the fundamental group $\pi_1(\mathbb{P}^n \setminus D_n,*)$
by using Zariski's hyperplane section theorem
and the Zariski--Van-Kampen method.
In Section \ref{2019.7.10.22.02}, we show Theorem \ref{2019.7.10.22.03}
by using the results in Section \ref{2019.7.10.23.32} and Section \ref{2019.7.10.23.33}.
\subsection{Zariski's hyperplane section theorem}\label{2019.7.10.23.32}
Let $H_i$ ($i=1,\ldots,n-2$) be the hyperplanes in $\mathbb{P}^n$ defined by
\begin{equation*}
H_i := ( z_i - a_i x - b_i y -c_i t =0 ) \quad i=1,\ldots , n-2.
\end{equation*}
Here $a_i, b_i$, and $c_i $ $(i=1,\dots,n-2)$ are generic complex numbers.
For simplicity, we assume that $0<|a_i|\ll 1$ and $ 0 < |b_i | \ll 1$.
Let $f(x,y,t)$ be the following quadratic polynomial
\begin{equation*}
\begin{aligned}
f(x,y,t) :=&\ x^2 + y^2 +t^2 -2 ( x y +y t + t x) \\
=&\ (y-x-t)^2 - 4 xt .
\end{aligned}
\end{equation*}
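The second expression for $f(x,y,t)$ follows from expanding the square. As a quick sanity check (purely illustrative, not part of the argument), one can compare the two forms numerically at random sample points:

```python
import random

def f_sym(x, y, t):
    # symmetric form: x^2 + y^2 + t^2 - 2(xy + yt + tx)
    return x**2 + y**2 + t**2 - 2 * (x * y + y * t + t * x)

def f_disc(x, y, t):
    # completed-square form: (y - x - t)^2 - 4xt
    return (y - x - t) ** 2 - 4 * x * t

random.seed(0)
for _ in range(1000):
    x, y, t = (random.uniform(-10, 10) for _ in range(3))
    assert abs(f_sym(x, y, t) - f_disc(x, y, t)) < 1e-9
```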
Let $\tilde{\mathcal{Q}}_0$, $\tilde{\mathcal{Q}}_i$ and $\tilde{D}_n$
be the divisors on $\mathbb{P}^2=\mathbb{P}^n \cap (\cap_{i=1}^{n-2}H_i)$ defined by
\begin{equation*}
\begin{aligned}
\tilde{\mathcal{Q}}_0&:= (f(x,y,t)=0 ), \\
\tilde{\mathcal{Q}}_i&:= (f(x,y,t) - ( a_i x + b_i y +c_i t)^2 =0)
\quad (i=1,\ldots,n-2), \text{ and } \\
\tilde{D}_n &:= (x=0 ) + (y=0) + ( t=0) +
\tilde{\mathcal{Q}}_0 + \tilde{\mathcal{Q}}_1 + \cdots + \tilde{\mathcal{Q}}_{n-2},
\end{aligned}
\end{equation*}
respectively.
By Zariski's hyperplane section theorem (for example see \cite{HL}), we have the natural isomorphism
\begin{equation*}
\pi_1(\mathbb{P}^n \setminus D_n,*) \cong \pi_1(\mathbb{P}^2 \setminus \tilde{D}_n,*).
\end{equation*}
\subsection{Zariski--Van-Kampen method}\label{2019.7.10.23.33}
We derive some equalities in $\pi_1(\mathbb{P}^2 \setminus \tilde{D}_n,*)$
by the Zariski--Van-Kampen method (see for example \cite{Degtyarev}).
Let $\pi\colon \mathbb{P}^{2} \setminus \tilde{D}_n \ra \mathbb{P}^1$ be
the projection defined by
\begin{equation*}
\begin{aligned}
\pi\colon \mathbb{P}^{2} \setminus \tilde{D}_n &\lra \mathbb{P}^1 \\
[x:y:t]&\longmapsto [x:t].
\end{aligned}
\end{equation*}
Let $\{[x^+_i:1] ,[ x_i^-:1 ]\} \subset \mathbb{P}^1$
be the roots of the discriminant of $f(x,y,t) - ( a_i x + b_i y +c_i t)^2$ with respect to $y$.
We denote $[x^+_i:1]$ and $[x^-_i:1]$ by $x^+_i$ and $x^-_i$, respectively.
Since $0<|a_i|\ll 1$ and $ 0 < |b_i | \ll 1$,
one element of $\{x^+_i , x_i^-\}$ lies in a neighborhood of $\infty=[0:1]$.
We assume that $x_i^-$ is the point lying in a neighborhood of $\infty$.
Set $a=[ a:1 ]$ where $0<|a|\ll 1$.
For $i=0,1,\dots,n-2$,
let $y^+_{i}$ and $y^-_{i}$ be the intersection points of
$\tilde{\mathcal{Q}}_i$ and $\pi^{-1}(a)$:
$\tilde{\mathcal{Q}}_i\cap \pi^{-1}(a) = \{ y^+_{i} , y^-_{i} \} $.
Here we assume that
\begin{equation*}
0<\mathrm{Arg} \left( \frac{y^+_1-(a+1)}{y^+_0-(a+1)} \right)
<\cdots<\mathrm{Arg} \left( \frac{y^+_{n-2}-(a+1)}{y^+_0-(a+1)} \right)
< \pi.
\end{equation*}
\begin{figure}[h]
\caption{Fibers of $\pi$.}
\begin{center}
\includegraphics[clip,width=9cm]{fig1.pdf}
\end{center}
\end{figure}
We define natural numbers $i_k$ and $j_k$ ($k=1,\dots,n-2$)
so that
$\{i_1 ,\ldots,i_{n-2} \}=\{1 ,\ldots,n-2 \}$,
$\{j_1 ,\ldots,j_{n-2} \}=\{1 ,\ldots,n-2 \}$,
\begin{equation*}
0<\mathrm{Arg}\, (x^+_{i_1})
<\cdots<\mathrm{Arg}\, (x^+_{i_{n-2}})
< 2\pi \text{ and }
0<\mathrm{Arg}\, \frac{1}{x^-_{j_1}}
<\cdots<\mathrm{Arg}\, \frac{1}{x^-_{j_{n-2}}}
< 2\pi.
\end{equation*}
Here the principal value $\mathrm{Arg}$ of the argument is taken
in the half-open interval $[0,2\pi)$.
Let $\Gamma$
be the group defined by
\begin{equation*}
\begin{aligned}
\Gamma &:=
\left\langle
\begin{array}{l}
\alpha_0, \alpha_{y^+_0} ,\ldots , \alpha_{y^+_{n-2}} \\
\alpha_{y^-_0} ,\ldots , \alpha_{y^-_{n-2}},\alpha_{\infty}
\end{array}
\middle| \
\alpha_0\alpha_{y^+_{1}} \cdots \alpha_{y^+_{n-2}} \alpha_{y^-_{0}}
\alpha_{y^-_{1}} \ldots \alpha_{y^-_{n-2}}
\alpha_{y^+_0}
\alpha_{\infty}=1 \right\rangle .
\end{aligned}
\end{equation*}
Then we have
$\pi_1 (\pi^{-1} (a) \setminus (\tilde{D}_n\cap \pi^{-1}(a) ) , * ) \cong \Gamma$
and have an exact sequence
\begin{equation*}
1 \longrightarrow \Gamma
\longrightarrow \pi_1(\mathbb{P}^2 \setminus \tilde{D}_n,*)
\longrightarrow \pi_1(\mathbb{P}^1 \setminus \{ 0,\infty \},a) \longrightarrow 1.
\end{equation*}
Let
\begin{equation}\label{1.21.16.34}
\gamma_0, \gamma_1 , \gamma_{x^+_1}, \ldots , \gamma_{x^+_{n-2}} ,
\gamma_{x^-_1}, \ldots , \gamma_{x^-_{n-2}}, \gamma_{\infty}
\end{equation}
be loops with base point $a$
on $\mathbb{P}^1\setminus \{ 0, 1 , x^{\pm}_1 ,\ldots ,x^{\pm}_{n-2}, \infty \}$
such that for $x\in \{ 0, 1 , x^{\pm}_1 ,\ldots ,x^{\pm}_{n-2}, \infty \}$,
the loop
$\gamma_x$ is oriented counter-clockwise,
$x$ lies inside, while the other points
$\{ 0, 1 , x^{\pm}_1 ,\ldots ,x^{\pm}_{n-2}, \infty \}\setminus \{x\}$
lie outside, as in Figure 2.
\begin{figure}[h]
\caption{Loops on $\pi^{-1}(a)$ and $\mathbb{P}^1=(\mathbb{C}_x)_0 \cup (\mathbb{C}_x)_{\infty}.$}
\begin{center}
\includegraphics[clip,width=15cm]{fig2.pdf}
\end{center}
\end{figure}
Let $s \colon \mathbb{P}^1 \setminus \{ 0,\infty \} \rightarrow \mathbb{P}^2 \setminus \tilde{D}_n $
be a continuous section of $\pi$ such that
$s(a)=* \in \mathbb{P}^2 \setminus \tilde{D}_n$.
We define the monodromy actions of the
loops (\ref{1.21.16.34}) on $\Gamma$ as in \cite[Theorem 2.2.1]{Cousin}.
Namely, the action $(\gamma_x,\alpha) \mapsto \gamma_x(\alpha)$
for loops $\gamma_x$ and $\alpha \in \Gamma$ is characterized by the equality
$\gamma_x(\alpha) = \gamma_x^{-1} \alpha \gamma_x$
in $\pi_1(\mathbb{P}^2 \setminus \tilde{D}_n,*) $.
Here we denote by $\gamma_x$
the loop $s_*(\gamma_x) \in \pi_1(\mathbb{P}^2 \setminus \tilde{D}_n,*) $
for simplicity.
For explicit computation of this action, we consider the motion of the points
$y^{\pm}_i$ ($i=0,1,\ldots,n-2$) when $a$ varies along the loop $\gamma_x$
and a continuous deformation of
$\alpha\in \pi_1 (\pi^{-1} (a) \setminus (\tilde{D}_n\cap \pi^{-1}(a) ) , * )$
according to the motion of these points.
Note that the assumptions
$0<|a_i|\ll 1$, $ 0 < |b_i | \ll 1$ and $0<|a| \ll 1$
make the computation of the motion of the points $y_i^{\pm}$ simple.
By explicit computation of the action on some loops, we can check the following equalities:
\begin{align}
\gamma_0(\alpha_{y^+_0}) &= \alpha_{y^-_{0}}; &
\gamma_{x^+_i}(\alpha_{y^+_i}) &= \alpha_{y^+_i} \alpha_{y^-_{i}} \alpha_{y^+_i}^{-1}
\ (i=1,\ldots , n-2); \label{2018.5.12.13.30}\\
\gamma_0(\alpha_{0}) &= \alpha_{0}; &
\gamma_0(\alpha_{y^+_i})& = \alpha_{y^+_{i}}
\ (i=1,\dots,n-2) ; \label{2019.1.21.16.48} \\
\tilde{\gamma}_{\infty}(\alpha_{y^+_0}) &= \alpha_{0}
\alpha_{y^-_0} \alpha_{0}^{-1}; \label{2018.5.12.13.28}
\end{align}
and
\begin{equation}\label{2019.1.12.16.51}
\begin{aligned}
\gamma_1(\alpha_{y^+_0}) &= ( \alpha_0 \alpha_{y^+_0}) \alpha_0
\alpha_{y^+_0}\alpha_0^{-1} (\alpha_{0} \alpha_{y^+_0})^{-1} .
\end{aligned}
\end{equation}
Here we put
$\tilde{\gamma}_{\infty}:= \gamma_{x^-_{j_1}} \cdots \gamma_{x^-_{j_{n-2}}} \gamma_{\infty}$.
In fact, if $a$ varies along the loop $\gamma_0$,
then $y_0^+$ moves to the location of $y_0^-$,
while $0$ and $y_i^+$ ($i=1,\ldots,n-2$) return to their
original locations.
If $a$ varies along the loop $\gamma_{x^{+}_i}$ for $i=1,\ldots,n-2$,
then $y_i^+$ moves to the location of $y_i^-$.
Here we assume that $-1 \ll \mathrm{Arg}\, (a)<0$
and that $y_0^+$ approaches $0$ as $a$ approaches $1$ along the real axis.
If $a$ varies along the loop $\tilde{\gamma}_{\infty}$,
then $y_0^+$ moves to the location of $y_0^-$, passing around $0$.
If $a$ varies along the loop $\gamma_1$,
then $y_0^+$ returns to its original location after winding around $0$ twice.
If we consider continuous deformations of the corresponding loops
according to the motion of these points,
then we have the equalities (\ref{2018.5.12.13.30}), (\ref{2019.1.21.16.48}),
(\ref{2018.5.12.13.28}) and (\ref{2019.1.12.16.51}).
Here note that the images
of the intersections $\tilde{\mathcal{Q}}_i \cap \tilde{\mathcal{Q}}_j$ under $\pi$
are close to $\infty$ for $i,j =0,1, \ldots, n-2$,
since $0<|a_i|\ll 1$ and $ 0 < |b_i | \ll 1$.
Recall that in $\pi_1(\mathbb{P}^2 \setminus \tilde{D}_n,*) $
we have the equality
$\gamma_x(\alpha) = \gamma_x^{-1} \alpha \gamma_x$ for $\alpha \in \Gamma$.
By the equality (\ref{2018.5.12.13.30}), we can show that
$\alpha_{y_i^-}$ ($i=0,\ldots,n-2$) are generated by
$\alpha_{y^+_0} ,\ldots , \alpha_{y^+_{n-2}}$, and $\gamma_0$
in $\pi_1(\mathbb{P}^2 \setminus \tilde{D}_n,*)$.
Then we obtain the following proposition.
\begin{prop}
The group $\pi_1(\mathbb{P}^2 \setminus \tilde{D}_n,*)$ is generated by
$\alpha_0, \alpha_{y^+_0} ,\ldots , \alpha_{y^+_{n-2}}$, and
$\gamma_0$.
\end{prop}
\begin{prop}
Set $\tilde{\alpha}= \alpha_{y^+_1} \cdots \alpha_{y^+_{n-2}}$.
For the elements $\alpha_{y^+_0}, \alpha_0,\gamma_{0}$, and $\tilde{\alpha}$
of $\pi_1(\mathbb{P}^2 \setminus \tilde{D}_n,*)$,
we have the following equalities:
\begin{align}
\ [ \alpha_0 , \gamma_{0} ]&= [ \tilde{\alpha}, \gamma_0 ]=1 \label{2018.5.10.22.52}, \\
((\gamma_{0}\tilde{\alpha} ) \alpha_{y^+_0})^2&=
(\alpha_{y^+_0}(\gamma_{0}
\tilde{\alpha} ))^2, \label{2018.5.10.22.54} \\
(\alpha_{y^+_0} \alpha_0)^2 &= ( \alpha_0 \alpha_{y^+_0})^2 \label{2018.5.12.13.38}.
\end{align}
\end{prop}
\begin{proof}\label{2018.5.12.16.43}
By the equality (\ref{2019.1.21.16.48}),
we have the equality (\ref{2018.5.10.22.52}).
Second, we show the equality (\ref{2018.5.10.22.54}).
By the equalities
(\ref{2018.5.12.13.30}), (\ref{2018.5.12.13.28}),
$\alpha_{\infty}=\gamma_{\infty}\gamma_0$, and (\ref{2018.5.10.22.52}),
we have
\begin{equation*}
\begin{aligned}
\alpha_{y^+_0}&= \gamma_{\infty} \alpha_{0}
\gamma_0^{-1}\alpha_{y^+_0}\gamma_0
\alpha_{0}^{-1} \gamma_{\infty}^{-1} \\
&= \alpha_{\infty} \alpha_{0}
\gamma^{-1}_{0}\gamma_0^{-1}\alpha_{y^+_0}\gamma_0\gamma_{0}
\alpha_{0}^{-1}\alpha_{\infty}^{-1} \\
&=
(\alpha_{y^+_1} \cdots \alpha_{y^+_{n-2}} \alpha_{y^-_0}
\alpha_{y^-_1} \cdots \alpha_{y^-_{n-2}} \alpha_{y^+_0})^{-1}
\gamma_{0}^{-1} \gamma_0^{-1}\alpha_{y^+_0}\gamma_0\gamma_0
(\alpha_{y^+_1} \cdots \alpha_{y^+_{n-2}} \alpha_{y^-_0}
\alpha_{y^-_1} \cdots \alpha_{y^-_{n-2}} \alpha_{y^+_0})
\\
&=
( \tilde{\alpha} \gamma_{0}^{-1} \alpha_{y^+_0} \gamma_0
\tilde{\alpha} \alpha_{y^+_0})^{-1}
\gamma_{0}^{-1}\gamma_0^{-1}\alpha_{y^+_0}\gamma_0\gamma_{0}
( \tilde{\alpha} \gamma_{0}^{-1} \alpha_{y^+_0} \gamma_0
\tilde{\alpha} \alpha_{y^+_0})\\
&=
( \tilde{\alpha} \alpha_{y^+_0} \gamma_0
\tilde{\alpha} \alpha_{y^+_0})^{-1}
\gamma_{0}^{-1}\alpha_{y^+_0}\gamma_0
( \tilde{\alpha} \alpha_{y^+_0} \gamma_0
\tilde{\alpha} \alpha_{y^+_0}).
\end{aligned}
\end{equation*}
Then we have the equality (\ref{2018.5.10.22.54}).
By the equality (\ref{2019.1.12.16.51}), we have the equality (\ref{2018.5.12.13.38}).
\end{proof}
\subsection{Monodromy representation of $\nabla_{\boldsymbol{\lambda}}$}\label{2019.7.10.22.02}
Let $\boldsymbol{D}_{\infty}$ be the infinite dihedral group:
\begin{equation*}
\boldsymbol{D}_{\infty}:= \left\langle
\begin{pmatrix}
0 & \alpha \\
-\alpha^{-1} & 0
\end{pmatrix},
\begin{pmatrix}
\beta & 0 \\
0 & \beta^{-1}
\end{pmatrix}\
\middle| \
\alpha , \beta \in \mathbb{C}^*
\right\rangle \le \mathrm{SL}_2 (\mathbb{C}).
\end{equation*}
\begin{prop}\label{2018.5.8.15.13}
For generic $\boldsymbol{\lambda}$, the monodromy representation of
$\nabla_{\boldsymbol{\lambda}}$ is conjugate to
the dihedral representation $\rho_{\boldsymbol{\lambda}} \colon
\pi_1(\mathbb{P}^n \setminus D_n,*) \ra \boldsymbol{D}_{\infty}$ of
the fundamental group $\pi_1(\mathbb{P}^n \setminus D_n,*) $
defined by
\begin{equation*}
\begin{aligned}
\rho_{\boldsymbol{\lambda}}(\alpha_0)&=
\begin{pmatrix}
-\exp (- \pi \lambda_{0} ) & 0 \\
0 & -\exp ( \pi \lambda_{0} )
\end{pmatrix}, &
\rho_{\boldsymbol{\lambda}}(\gamma_0)&=\begin{pmatrix}
\exp (- \pi \lambda_1 ) & 0 \\
0 & \exp ( \pi \lambda_1 )
\end{pmatrix}, \\
\rho_{\boldsymbol{\lambda}}(\alpha_{y^+_0})&=
\begin{pmatrix}
0 & 1 \\
-1 & 0
\end{pmatrix}, &
\rho_{\boldsymbol{\lambda}}(\alpha_{y^+_i})&=
\begin{pmatrix}
\exp (- \pi \lambda_{i+1} ) & 0 \\
0 & \exp ( \pi \lambda_{i+1} )
\end{pmatrix},
\end{aligned}
\end{equation*}
where $i=1,\ldots , n-2$.
\end{prop}
\begin{proof}
Let $\rho_{\nabla_{\boldsymbol{\lambda}}}\colon
\pi_1(\mathbb{P}^n \setminus D_n,*) \ra \mathrm{SL}_{2}(\mathbb{C})$ be
a monodromy representation of $\nabla_{\boldsymbol{\lambda}}$.
Put $A_0 := \rho_{\nabla_{\boldsymbol{\lambda}}}(\alpha_0)$,
$A_{y^+_i} := \rho_{\nabla_{\boldsymbol{\lambda}}}(\alpha_{y^+_i})$
($i=0,\ldots,n-2$) and $C_0 := \rho_{\nabla_{\boldsymbol{\lambda}}}(\gamma_0)$.
Let $U$ be an analytic open subset of
$\mathbb{P}^2 \setminus (\tilde{{\mathcal Q}}_0 \cup (y=0)\cup (t=0))$
such that $U$ is simply connected and $U$ contains the loops $\alpha_{y_i^+}$ ($i=1,\ldots,n-2$)
and $\gamma_0$.
On the open subset $U$, the connection $\nabla_{\boldsymbol{\lambda}}$
is isomorphic to $(\nabla_0)_{\boldsymbol{\lambda}}$.
Then, after a suitable conjugation, we may assume
\begin{equation*}
C_0=
\begin{pmatrix}
\exp (- \pi \lambda_1 ) & 0 \\
0 & \exp ( \pi \lambda_1 )
\end{pmatrix}, \quad
A_{y^+_i}=
\begin{pmatrix}
\exp (- \pi \lambda_{i+1} ) & 0 \\
0 & \exp ( \pi \lambda_{i+1} )
\end{pmatrix} \quad (i=1,\ldots , n-2).
\end{equation*}
Assume that $\exp (- \pi \lambda_1 ) \neq \exp ( \pi \lambda_1 )$.
By Proposition \ref{2018.5.12.16.43}, we have the equality $A_0 C_0= C_0A_0$.
Then we have
\begin{equation*}
A_{0}=
\begin{pmatrix}
-\exp (- \pi \lambda_{0} ) & 0 \\
0 & -\exp ( \pi \lambda_{0} )
\end{pmatrix}.
\end{equation*}
Note that the image $\mathrm{Im}(\rho_{\nabla_{\boldsymbol{\lambda}}})$ is non-abelian.
Since $C_0$, $A_0$, and $A_{y_i^+}$ ($i=1,\ldots, n-2$) are diagonal matrices,
we may put
\begin{equation*}
A_{y^+_0}=
\begin{pmatrix}
a_{11} & a_{12} \\
-1 & a_{22}
\end{pmatrix}.
\end{equation*}
Put $\tilde{A}:=A_{y^+_1} \cdots A_{y^+_{n-2}}$.
By Proposition \ref{2018.5.12.16.43}, we have the equalities
$(A_{y^+_0} (C_0 \tilde{A}) )^2=
((C_0 \tilde{A} )A_{y^+_0})^{2}$
and
$(A_{y^+_0} A_0 )^2=(A_0A_{y^+_0})^{2}$.
Assume that $(\exp (-\pi \lambda_1 - \pi \sum_{i=1}^{n-2} \lambda_{i+1} ) )^2 \neq 1$
and $ (- \exp (-\pi \lambda_0 ) )^2 \neq 1 $.
Since $A_{y^+_0} (C_0 \tilde{A})\neq (C_0 \tilde{A})A_{y^+_0}$
and $A_{y^+_0} A_0 \neq A_0A_{y^+_0}$, we have the equalities
$(A_{y^+_0}(C_0 \tilde{A} ))^{2}= - I_2$
and $(A_{y^+_0}A_0)^{2} = - I_2$.
Then we have the following equalities:
\begin{equation*}
\begin{cases}
a_{11}a_{22}+a_{12}=1 \\
a_{11} (\exp (-\pi \lambda_1 - \pi \sum_{i=1}^{n-2} \lambda_{i+1} ) )^2 = a_{22} \\
a_{11} (- \exp (-\pi \lambda_0 ) )^2 = a_{22}.
\end{cases}
\end{equation*}
We assume that
$(\exp (-\pi \lambda_1 - \pi \sum_{i=1}^{n-2} \lambda_{i+1} ) )^2 \neq (- \exp (-\pi \lambda_0 ) )^2$.
Then we have $a_{11}=0$, $a_{12}=1$, and $a_{22}=0$.
\end{proof}
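In the last step of the proof above, subtracting the last two equations of the displayed system gives $a_{11}\big((\exp(\cdots))^2 - (-\exp(\cdots))^2\big)=0$; by the genericity assumption the second factor is nonzero, forcing $a_{11}=0$, hence $a_{22}=0$, and then $a_{12}=1$ from the first (determinant) equation. A numerical illustration with hypothetical sample parameters (taking $n=3$, so that $\tilde{A}=A_{y^+_1}$; the values of the $\lambda_j$ are arbitrary):

```python
import math

# hypothetical sample parameters; any generic real values with A != B work
lam0, lam1, lam2 = 0.3, 0.7, 0.2
A = math.exp(-math.pi * lam1 - math.pi * lam2) ** 2   # coefficient in the 2nd equation
B = (-math.exp(-math.pi * lam0)) ** 2                 # coefficient in the 3rd equation
assert A != B  # genericity assumption

# system: a11*a22 + a12 = 1,  a11*A = a22,  a11*B = a22
# subtracting the last two equations: a11*(A - B) = 0, hence a11 = 0
a11 = 0.0
a22 = a11 * A           # = 0
a12 = 1.0 - a11 * a22   # = 1, from the determinant condition
assert (a11, a22, a12) == (0.0, 0.0, 1.0)
```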
\begin{proof}[Proof of Theorem \ref{2019.7.10.22.03}]
By Theorem \ref{2019.7.10.15.33},
we have that the monodromy representation of
$\nabla_{\boldsymbol{\lambda}}$
is virtually abelian.
By Proposition \ref{2018.5.8.15.13},
we have that the monodromy representation of
$\nabla_{\boldsymbol{\lambda}}$
is conjugate to the explicit
representation
\begin{equation*}
\rho_{\boldsymbol{\lambda}} \colon \pi_1 (\mathbb{P}^n \setminus D_n)
\longrightarrow
\mathrm{SL}_2(\mathbb{C}),
\end{equation*}
which takes values in the infinite dihedral group $\boldsymbol{D}_{\infty}$.
\end{proof}
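As a numerical sanity check (illustrative only, with arbitrarily chosen real parameters), one can verify that the matrices of Proposition \ref{2018.5.8.15.13} satisfy the group relations of Section \ref{2019.7.10.23.33}: the diagonal images commute, and the squared products involving $\rho_{\boldsymbol{\lambda}}(\alpha_{y^+_0})$ equal $-I_2$, since each product is a trace-free element of $\mathrm{SL}_2$:

```python
import math

def mul(A, B):
    # 2x2 matrix product; matrices represented as ((a, b), (c, d))
    return (
        (A[0][0] * B[0][0] + A[0][1] * B[1][0],
         A[0][0] * B[0][1] + A[0][1] * B[1][1]),
        (A[1][0] * B[0][0] + A[1][1] * B[1][0],
         A[1][0] * B[0][1] + A[1][1] * B[1][1]),
    )

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

# arbitrary sample parameters (any generic real values satisfy these relations)
lam0, lam1 = 0.3, 0.7
A0 = ((-math.exp(-math.pi * lam0), 0.0), (0.0, -math.exp(math.pi * lam0)))
C0 = ((math.exp(-math.pi * lam1), 0.0), (0.0, math.exp(math.pi * lam1)))
Ay0 = ((0.0, 1.0), (-1.0, 0.0))
minus_I = ((-1.0, 0.0), (0.0, -1.0))

# diagonal matrices commute: [rho(alpha_0), rho(gamma_0)] = 1
assert close(mul(A0, C0), mul(C0, A0))
# (rho(alpha_{y_0^+}) rho(alpha_0))^2 = (rho(alpha_0) rho(alpha_{y_0^+}))^2 = -I_2
assert close(mul(mul(Ay0, A0), mul(Ay0, A0)), minus_I)
assert close(mul(mul(A0, Ay0), mul(A0, Ay0)), minus_I)
```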
\section{Algebraic Garnier solution}\label{2018.5.18.17.17}
Assume that an $n$-tuple of
complex numbers $\boldsymbol{\lambda}=(\lambda_0, \ldots, \lambda_{n-1} )$
is sufficiently generic.
In this section, we restrict the flat connection $\nabla_{\boldsymbol{\lambda}}$ to
a generic line $\mathbb{P}^n\cap (\cap_{i=0}^{n-2} H'_i)$, where
\begin{equation}\label{2018.5.3.19.42}
\begin{cases}
H'_0 = (y-ax -b t=0) &\\
H'_i =(z_i - c_{i} x - d_i t=0) & (i=1,2,\dots,n-2).
\end{cases}
\end{equation}
Here $a, b, c_i $, and $d_i$ $(i=1,2,\dots,n-2)$ are generic complex numbers.
We consider the transformation $\tilde{x}= -\frac{a}{b}x$.
Let $T$ be a Zariski open subset of $\Spec \mathbb{C} [a,b, c_i ,d_i ]_{i=1,\ldots,n-2}$.
We consider the map $\mathbb{P}^1 \times T \ra \mathbb{P}^n$ defined by (\ref{2018.5.3.19.42}).
Let $(\nabla_{\mathbb{P}^1 \times T})_{\boldsymbol{\lambda}}$ be the flat connection
on the trivial rank $2$ vector bundle $F_0$ over $\mathbb{P}^1 \times T$
induced by the flat connection $\nabla_{\boldsymbol{\lambda}}$ over $\mathbb{P}^n$.
Let
\begin{equation*}
(\nabla_{\mathbb{P}^1 \times T/T})_{\boldsymbol{\lambda}}
\colon F_0 \lra F_0 \otimes \Omega^1_{\mathbb{P}^1 \times T/T}(D_n)
\end{equation*}
be the relative connection
on $F_0$ over $\mathbb{P}^1 \times T$
associated to $(\nabla_{\mathbb{P}^1 \times T})_{\boldsymbol{\lambda}}$.
In Section \ref{2019.7.10.22.11},
we introduce an \'etale base change $\tilde{T} \rightarrow T$
to prove the assertion (i) of Theorem \ref{2019.7.10.22.13}.
In Section \ref{2019.7.12.11.27},
after the \'etale base change $\tilde{T} \rightarrow T$,
we compute the residue matrix of
$(\nabla_{\mathbb{P}^1 \times \tilde{T}/\tilde{T}})_{\boldsymbol{\lambda}}$
for each simple pole.
In Section \ref{2019.7.10.22.15},
we recall the relation between isomonodromic deformations and the Garnier system
following \cite{Mazz}.
In Section \ref{2019.7.12.11.28},
we show Theorem \ref{2019.7.10.22.13}.
\subsection{Regular singular points of
$(\nabla_{\mathbb{P}^1 \times T/T})_{\boldsymbol{\lambda}}$}\label{2019.7.10.22.11}
By the pull-back of $x^2 + y^2 +t^2 -2 (x y + y t +t x ) $
and $x^2 + y^2 +t^2 -2 (x y + y t +t x ) -z_i^2 $
under $\mathbb{P}^1 \times T \ra \mathbb{P}^n$, we have the following polynomials over $T$:
\begin{equation*}
\begin{aligned}
f(a,b,\tilde{x}) &:=
\frac{(a-1)^2b^2}{a^2} \tilde{x}^2 +
\frac{2b(1+a+b-ab)}{a} \tilde{x} + (b-1)^2, \\
f_i(a,b,c_i,d_i,\tilde{x}) &:=f(a,b,\tilde{x})
-\frac{(a d_i -b c_i \tilde{x})^2}{a^2},
\end{aligned}
\end{equation*}
which are written in the affine coordinate $[\tilde{x}:1]$.
Let $I$ be the ideal of $\mathbb{C} [a,b,\tilde{b}, c_i ,d_i , \tilde{d}_i]_{i=1,\ldots,n-2}$ defined by
$I:=
(\tilde{b}^2-4 (a+b-ab),
\tilde{d}_i^2-\Delta^i_{\tilde{x}} )_{i=1,\ldots,n-2}$,
where $\Delta^i_{\tilde{x}} $ is the discriminant of $f_i(a,b,c_i,d_i,\tilde{x})$ with respect to $\tilde{x}$.
We have the natural morphism
\begin{equation*}
\Spec \mathbb{C} [a,b,\tilde{b}, c_i ,d_i , \tilde{d}_i]_{i=1,\ldots,n-2}
/ I \lra \Spec \mathbb{C} [a,b, c_i ,d_i ]_{i=1,\ldots,n-2}.
\end{equation*}
Let $\tilde{T}$ be the inverse image of $T$ under this morphism: $\tilde{T} \ra T$.
Let $t_{1}$ and $t_{2}$ be the rational functions on $\tilde{T}$ defined by
\begin{equation*}
t_{1}:=
\frac{a( \tilde{b}-2 )^2}{4(a-1)(\tilde{b}^2-a)} \text{ and }
t_{2}:=
\frac{a(\tilde{b}+2 )^2}{4(a-1)(\tilde{b}^2-a)}.
\end{equation*}
Then $f(a,b,t_1)=f(a,b,t_2)=0$.
Moreover,
let $t_{2i+1}$ and $t_{2i+2}$ be the rational functions on $\tilde{T}$ defined by
\begin{equation*}
t_{2i+1}:=
\frac{2ab(1+a+b-ab- c_i d_i) -a^2 \tilde{d}_i}{2b^2((a-1)^2-c_i^2)} \text{ and }
t_{2i+2}:=
\frac{2ab(1+a+b-ab- c_i d_i) +a^2 \tilde{d}_i}{2b^2((a-1)^2-c_i^2)}.
\end{equation*}
Then $ f_i (a,b,c_i,d_i,t_{2i+1})= f_i (a,b,c_i,d_i,t_{2i+2})=0$.
By these rational functions, we have a generically finite morphism
\begin{equation}\label{2018.5.12.23.51}
\tilde{T} \lra \Spec \mathbb{C}[t_1,t_2,\ldots ,t_{2n-2}],
\end{equation}
after shrinking the Zariski open subset $T$ if necessary.
We take the pull-back $(\nabla_{\mathbb{P}^1 \times \tilde{T}/\tilde{T}})_{\boldsymbol{\lambda}}$
of $(\nabla_{\mathbb{P}^1 \times T/T})_{\boldsymbol{\lambda}}$
under the morphism $\mathbb{P}^1 \times \tilde{T} \ra \mathbb{P}^1 \times T$.
Then $(\nabla_{\mathbb{P}^1 \times \tilde{T}/\tilde{T}})_{\boldsymbol{\lambda}}$
is a family of
Fuchsian systems with $2n + 1$ regular singularities
at $\tilde{x}=0,1,t_1,\ldots , t_{2n-2}, \infty$, parametrized by $\tilde{T}$.
\subsection{Residue matrices of
$(\nabla_{\mathbb{P}^1 \times \tilde{T}/\tilde{T}})_{\boldsymbol{\lambda}}$ }\label{2019.7.12.11.27}
We describe the residue matrices of
$(\nabla_{\mathbb{P}^1 \times \tilde{T}/\tilde{T}})_{\boldsymbol{\lambda}}$
at the regular singular points.
Put
\begin{equation*}
\begin{aligned}
M_2(\tilde{x})&:=
\begin{pmatrix}
-ab (\tilde{x} -1) & - b \tilde{x} -a\\
0 &a
\end{pmatrix} \text{ and}\\
\alpha_0^i(\tilde{x})&:= \lambda_{i+1} \left(
- \frac{ b c_i}{a} -
\frac{ a d_i -b c_i \tilde{x} }{2a(\tilde{x} -t_{2i+1})}
-\frac{ a d_i -b c_i \tilde{x} }{2a(\tilde{x} -t_{2i+2} ) } \right).
\end{aligned}
\end{equation*}
Let $H^{\tilde{T}}_{2n-1}$ be
the residue matrix at $\tilde{x}=0$.
We have the following equality
\begin{equation*}
\begin{aligned}
H^{\tilde{T}}_{2n-1}&=M_2(0)^{-1}
\begin{pmatrix}
0 & \frac{\lambda_1(\tilde{b}^2 -4)}{8(a-1)} \\
\frac{2\lambda_1(a-1)}{\tilde{b}^2 -4} & 0
\end{pmatrix}
M_2(0).
\end{aligned}
\end{equation*}
Let $H^{\tilde{T}}_{2n}$ be
the residue matrix at $\tilde{x}=1$.
We have the following equality
\begin{equation*}
\begin{aligned}
H_{2n}^{\tilde{T}}
&=
\frac{1-\lambda_0}{2}
\begin{pmatrix}
1 & \frac{2 (\tilde{b}^2+4a^2-8a)}{\tilde{b}^2-4a^2} \\
0 & -1
\end{pmatrix}
+
\sum^{n-2}_{i=1}
\alpha_0^i(1)
\begin{pmatrix}
0 &1 + \frac{ ( a+b)^2}{b^2(a-1)^2(1 -t_1)(1 -t_2)}\\
0 & 0
\end{pmatrix}.
\end{aligned}
\end{equation*}
Let $H^{\tilde{T}}_{1}$ and $H^{\tilde{T}}_{2}$ be
the residue matrices at $\tilde{x}=t_{1}$
and $\tilde{x}=t_{2}$, respectively.
We have the following equalities
\begin{equation*}
\begin{aligned}
H^{\tilde{T}}_{1}
&=M_2(t_{1})^{-1}
\begin{pmatrix}
-\frac{1}{4} & 0 \\
-\frac{\lambda_0(a-1)}{2(\tilde{b} -2 a)}
-\frac{\lambda_1(a-1)}{2(\tilde{b}-2)}
+ \sum^{n-2}_{i=1}
\frac{ a^2 \alpha^i_0(t_1)}{b^2(a-1)^2(t_1 -t_2)}
& \frac{1}{4}
\end{pmatrix}
M_2(t_{1}) \\
H^{\tilde{T}}_{2}
&=M_2(t_{2})^{-1}
\begin{pmatrix}
-\frac{1}{4} & 0 \\
\frac{\lambda_0(a-1)}{2(\tilde{b} +2 a)}
+\frac{\lambda_1(a-1)}{2(\tilde{b}+2)}
+ \sum^{n-2}_{i=1}
\frac{ a^2 \alpha^i_0(t_2)}{b^2(a-1)^2(t_2 -t_1)}
&\frac{1}{4}
\end{pmatrix}
M_2(t_{2}).
\end{aligned}
\end{equation*}
Let $H^{\tilde{T}}_{2i+1}$ and $H^{\tilde{T}}_{2i+2}$ be
the residue matrices at $\tilde{x}=t_{2i+1}$ and $\tilde{x}=t_{2i+2}$, respectively.
We have the following equalities
\begin{equation*}
\begin{aligned}
H^{\tilde{T}}_{2i+1}&=
M_2(t_{2i+1})^{-1}
\begin{pmatrix}
0 & \frac{ \lambda_{i+1} (a d_i -b c_i t_{2i+1}) }{{2a}} \\
\frac{ \lambda_{i+1} a(-a d_i +b c_i t_{2i+1})}{{2b^2(a-1)^2 (t_{2i+1} -t_1)(t_{2i+1} -t_2)}} &0
\end{pmatrix}
M_2(t_{2i+1})\\
H^{\tilde{T}}_{2i+2}&=
M_2(t_{2i+2})^{-1}
\begin{pmatrix}
0 & \frac{ \lambda_{i+1} (a d_i -b c_i t_{2i+2}) }{{2a }} \\
\frac{ \lambda_{i+1} a (-a d_i +b c_i t_{2i+2})}{{2b^2(a-1)^2 (t_{2i+2} -t_1)(t_{2i+2} -t_2)}} &0
\end{pmatrix}
M_2(t_{2i+2}).
\end{aligned}
\end{equation*}
Let $H^{\tilde{T}}_{2n+1}$ be
the residue matrix at $\tilde{x}= \infty$.
Let ${\mathcal A}_{jk}^i (\tilde{x})$ be
the relative rational $1$-forms over $\tilde{T}$ which are the relativization of
the pull-backs of the rational $1$-forms (\ref{2018.5.21.12.20}) under
the composition $\mathbb{P}^1 \times \tilde{T} \ra\mathbb{P}^1 \times T \ra \mathbb{P}^n$.
Since $\lim_{\tilde{x}\ra 0 } \alpha_0^i(\frac{1}{\tilde{x}})=0$, we have
\begin{equation*}
\mathop{\sf res}\nolimits_{\tilde{x}=\infty}
\begin{pmatrix}
{\mathcal A}_{11}^i (\tilde{x}) & {\mathcal A}_{12}^i (\tilde{x}) \\
-{\mathcal A}_{21}^i (\tilde{x}) &- {\mathcal A}_{11}^i (\tilde{x})
\end{pmatrix}=0 \quad (i=1,\ldots,n-2).
\end{equation*}
Then we have
\begin{equation*}
H_{2n+1}^{\tilde{T}}=
\begin{pmatrix}
-1 & a -2 \\
1&a
\end{pmatrix}
\begin{pmatrix}
\frac{\lambda_0 + \lambda_1}{2} &0 \\
0&-\frac{\lambda_0 + \lambda_1}{2}
\end{pmatrix}
\begin{pmatrix}
-1 & a -2 \\
1&a
\end{pmatrix}^{-1}.
\end{equation*}
\subsection{Garnier system}\label{2019.7.10.22.15}
Let ${\mathcal A}(\tilde{x})$ be the Fuchsian system with $2n + 1$ regular singularities
at $t_1 , \ldots , t_{2n}, \infty$:
\begin{equation*}
{\mathcal A}(\tilde{x})= d+
\sum_{i=1}^{2n} \tilde{H}_{i}\frac{d\tilde{x}}{\tilde{x}-t_i} ,
\end{equation*}
where $\tilde{H}_{i}$ ($i=1, \ldots , 2n$) are
$2\times 2$ matrices independent of $\tilde{x}$ and
$t_i \neq t_j$ ($i \neq j$).
We assume that $\tilde{H}_{2n+1}:=-\sum_{i=1}^{2n} \tilde{H}_{i}$ is a diagonal matrix and
the eigenvalues of $\tilde{H}_{i}$ ($i=1, \ldots ,2n+1$) are
as in Table \ref{2018.5.20.22.24}.
\begin{table}[htb]
\caption{The eigenvalues of the residue matrices
($i=1,\ldots , n-2$).}
\begin{tabular}{c|ccccccc}\label{2018.5.20.22.24}
Residue matrices & $\tilde{H}_{1}$ &
$\tilde{H}_{2}$ & $\tilde{H}_{2i+1}$ & $\tilde{H}_{2i+2}$
& $\tilde{H}_{2n-1}$ & $\tilde{H}_{2n}$ & $\tilde{H}_{2n+1}$ \\\hline
Eigenvalues
& $\pm \frac{1}{4}$
& $\pm \frac{1}{4}$
& $\pm \frac{\lambda_{i+1}}{2}$
& $\pm \frac{\lambda_{i+1}}{2}$
& $\pm \frac{\lambda_1}{2}$
& $\pm \frac{\lambda_0-1}{2}$
& $\pm\frac{\lambda_0+\lambda_1}{2}$
\end{tabular}
\end{table}
We fix generators $\gamma_{\tilde{x}}$
($\tilde{x}= t_1, \ldots ,t_{2n}, \infty$)
of the fundamental group $\pi_1(\mathbb{P}^1 \setminus \{ t_1,\ldots,t_{2n}, \infty \},*)$.
Here the loop $\gamma_{\tilde{x}}$ on $\mathbb{P}^1$
is oriented counter-clockwise,
$\tilde{x}$ lies inside, while the other singular points lie outside.
Let $\rho'_{\boldsymbol{\lambda}}\colon
\pi_1(\mathbb{P}^1 \setminus\{ t_1,\ldots,t_{2n}, \infty \},*) \ra \mathrm{SL}_2(\mathbb{C})$
be the representation of the fundamental group defined by Table \ref{2018.5.12.23.13}.
We consider the isomonodromic deformation of the Fuchsian system ${\mathcal A}(\tilde{x})$
whose preserved monodromy representation is conjugate to
$\rho'_{\boldsymbol{\lambda}}$.
Let
$d+
\sum_{i=1}^{2n} \tilde{H}^0_{i}\frac{d\tilde{x}}{\tilde{x}-t^0_i} $
be the Fuchsian system with $2n + 1$ regular singularities
at $t^0_1 , \ldots , t^0_{2n}, \infty$ whose monodromy representation is
conjugate to $\rho'_{\boldsymbol{\lambda}}$.
There exists an open neighborhood $U_{t^0}\subset \mathbb{C}^{2n}$
of the point $t^0=(t^0_1 , \ldots , t^0_{2n})$
such that for any $t \in U_{t^0}$, there exists a unique tuple
$(\tilde{H}_{i}(t))_{i=1,\ldots,2n}$ of analytic matrix valued functions
such that $\tilde{H}_{i}(t^0) = \tilde{H}^0_{i}$, $i=1,\ldots,2n$,
and the monodromy representation of
$d+
\sum_{i=1}^{2n} \tilde{H}_{i}(t)\frac{d\tilde{x}}{\tilde{x}-t_i} $
is
conjugate to $\rho'_{\boldsymbol{\lambda}}$.
The matrices $\tilde{H}_{i}(t)$ ($i=1,\ldots,2n$) are
the solutions of the Cauchy problem
with the initial data $(\tilde{H}_{i}(t^0))_{i=1,\ldots,2n}$
for the Schlesinger equations (see \cite[Theorem 2.7]{Mazz}).
\begin{table}[htb]
\caption{The representation $\rho'_{\boldsymbol{\lambda}}$ of the fundamental group;
here $a_j = \exp(-\pi \sqrt{-1} \lambda_j)$ ($j=0,1,\ldots,n-1$).}
\begin{tabular}{c|c|c|c}\label{2018.5.12.23.13}
$\tilde{x}=t_{2n-1}$ & $\tilde{x}=t_{2n}$ & $\tilde{x}=t_1$ &
$\tilde{x}=t_2$ \\\hline
$ \rho'_{\boldsymbol{\lambda}}(\gamma_{t_{2n-1}})
= \begin{pmatrix} a_1 & 0 \\ 0 & a_1^{-1} \end{pmatrix}$
& $\rho'_{\boldsymbol{\lambda}}(\gamma_{t_{2n}})
=\begin{pmatrix} -a_0 & 0 \\ 0 & -a_0^{-1} \end{pmatrix}$
& $\rho'_{\boldsymbol{\lambda}}(\gamma_{t_1})=\begin{pmatrix} 0 & 1 \\ -1 &0 \end{pmatrix}$
& $\rho'_{\boldsymbol{\lambda}}(\gamma_{t_2})
=\begin{pmatrix} 0 & a_0^{2} \\ - a_0^{-2} & 0 \end{pmatrix}$
\end{tabular}\\
\begin{tabular}{c|c|c}
$\tilde{x}=t_{2i+1}$ ($i=1,\ldots,n-2$)
& $\tilde{x}=t_{2i+2}$ ($i=1,\ldots,n-2$)
& $\tilde{x}=\infty$\\\hline
$\rho'_{\boldsymbol{\lambda}}(\gamma_{t_{2i+1}})=
\begin{pmatrix} a_{i+1} & 0 \\ 0 & a_{i+1}^{-1} \end{pmatrix}$
& $\rho'_{\boldsymbol{\lambda} }(\gamma_{t_{2i+2}})
= \begin{pmatrix} a^{-1}_{i+1} & 0 \\ 0 & a_{i+1} \end{pmatrix}$
& $\rho'_{\boldsymbol{\lambda}}(\gamma_{\infty})=
\begin{pmatrix} a_0 a_1^{-1} & 0 \\ 0 &a_0^{-1} a_1 \end{pmatrix}$
\end{tabular}
\end{table}
Let ${\mathcal A}(\tilde{x})$ be the Fuchsian system with $2n + 1$ regular singularities
at $t_1 , \ldots , t_{2n}, \infty$ as above.
We fix the poles $t_{2n-1}$ and $t_{2n}$ at $0$ and $1$, respectively.
Let $\{\nu_1,\ldots,\nu_{2n-2}\}$ be the roots of the following equation of degree $2n-2$:
\begin{equation}\label{2018.5.12.23.46}
\sum_{k=1}^{2n} \frac{(\tilde{H}_{k})_{12}}{\tilde{x}-t_k} =0.
\end{equation}
For each $\nu_i$, we define $\rho_i$ by
\begin{equation}\label{2018.5.12.23.47}
\begin{aligned}
\rho_i :=
\sum_{k=1}^{2n} \frac{(\tilde{H}_{k})_{11}+\frac{\theta_{k}}{2}}{\nu_i-t_{k}}.
\end{aligned}
\end{equation}
If a tuple $(\tilde{H}_{i}(t))_{i=1,\ldots,2n}$ is
a solution of the Schlesinger equations,
then the corresponding functions $\nu_{j}(t_1,\ldots,t_{2n-2})$ and
$\rho_{j}(t_1,\ldots,t_{2n-2})$ ($j=1,\ldots,2n-2$)
satisfy the Garnier system $\mathcal{G}_{2n-2}$ (see \cite[Theorem 2.1]{Mazz}).
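The degree count in (\ref{2018.5.12.23.46}) can be seen by clearing denominators: the resulting polynomial has degree at most $2n-1$, but since $\tilde{H}_{2n+1}=-\sum_{k}\tilde{H}_{k}$ is diagonal, the entries satisfy $\sum_{k}(\tilde{H}_{k})_{12}=0$ and the leading coefficient vanishes, leaving degree $2n-2$. A small numerical sketch for $n=2$ (four finite poles, a degree-$2$ equation) with hypothetical data, the entries $c_k$ being arbitrary subject only to $\sum_k c_k = 0$:

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

# hypothetical data for n = 2: poles t_1..t_4 and entries c_k = (H_k)_{12},
# chosen so that sum(c) = 0, as forced by H_{2n+1} being diagonal
t = [0.0, 1.0, 2.0, 3.0]
c = [1.0, 2.0, -1.5, -1.5]
assert abs(sum(c)) < 1e-12

# numerator of sum_k c_k / (x - t_k): sum_k c_k * prod_{j != k} (x - t_j)
num = [0.0]
for k in range(4):
    term = [c[k]]
    for j in range(4):
        if j != k:
            term = poly_mul(term, [-t[j], 1.0])
    num = poly_add(num, term)

# leading (degree-3) coefficient vanishes, so the equation has degree 2 = 2n - 2
assert abs(num[3]) < 1e-12
assert abs(num[2]) > 1e-12
```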
\subsection{Algebraic solution}\label{2019.7.12.11.28}
By the morphism (\ref{2018.5.12.23.51}), we have a generically finite morphism
\begin{equation*}
\Spec \mathbb{C}[\rho_i,\nu_i]_{1\le i \le2n-2} \times \tilde{T}\lra
\Spec \mathbb{C}[\rho_i,\nu_i]_{1\le i \le2n-2} \times \Spec \mathbb{C}[t_1,\ldots,t_{2n-2}].
\end{equation*}
We consider the algebraic solution of $\mathcal{G}_{2n-2}$ associated to
the representation $\rho'_{\boldsymbol{\lambda}}$.
\begin{proof}[Proof of Theorem \ref{2019.7.10.22.13}]
For the residue matrices $H^{\tilde{T}}_{i} $ of
$(\nabla_{\mathbb{P}^1 \times \tilde{T}/\tilde{T}})_{\boldsymbol{\lambda}}$,
we put
\begin{equation*}
\tilde{H}^{\tilde{T}}_{i}:=
\begin{pmatrix}
-1 & a -2 \\
1&a
\end{pmatrix}^{-1}
H^{\tilde{T}}_{i}
\begin{pmatrix}
-1 & a -2 \\
1&a
\end{pmatrix}
\end{equation*}
for $i = 1, \ldots, 2n$.
Let ${\mathcal A}^{\tilde{T}}(\tilde{x})$ be the family of Fuchsian systems
with $2n + 1$ regular singularities at $0,1,t_1 , \ldots , t_{2n-2}, \infty$,
parametrized by $\tilde{T}$ and
defined by
\begin{equation}\label{2018.5.12.23.49}
{\mathcal A}^{\tilde{T}}(\tilde{x}):=d+
\tilde{H}^{\tilde{T}}_{2n-1}\frac{d\tilde{x}}{\tilde{x}} +
\tilde{H}^{\tilde{T}}_{2n}\frac{d\tilde{x}}{\tilde{x}-1} +
\sum_{i=1}^{2n-2} \tilde{H}^{\tilde{T}}_{i}\frac{d\tilde{x}}{\tilde{x}-t_i} .
\end{equation}
Since $\tilde{H}^{\tilde{T}}_{2n+1}:=
-\sum_{i=1}^{2n} \tilde{H}^{\tilde{T}}_{i}$
is a diagonal matrix,
we have the assertion (i) of Theorem \ref{2019.7.10.22.13}.
By Proposition \ref{2018.5.8.15.13},
for each $\tilde{t} \in \tilde{T}$,
the Fuchsian system ${\mathcal A}^{\tilde{T}}(\tilde{x})$
has monodromy representation conjugate to $\rho'_{\boldsymbol{\lambda}}$,
which is independent of $\tilde{t} \in\tilde{T}$.
That is, the family ${\mathcal A}^{\tilde{T}}(\tilde{x})$
of Fuchsian systems parametrized by $\tilde{T}$
preserves its monodromy representation.
Then we have the assertion (ii) of Theorem \ref{2019.7.10.22.13}.
By (\ref{2018.5.12.23.46}), (\ref{2018.5.12.23.47}), and (\ref{2018.5.12.23.49}),
we have algebraic functions $\nu_i$, $\rho_i$ ($i=1,\ldots,2n-2$) on $\tilde{T}$.
These algebraic functions give the solution of $\mathcal{G}_{2n-2}$ associated to
the representation $\rho'_{\boldsymbol{\lambda}}$.
Since $\dim \tilde{T} =2n-2$ and
$\tilde{T} \ra \Spec \mathbb{C}[t_1,t_2,\ldots ,t_{2n-2}]$
is a generically finite morphism,
we have the assertion (iii) of Theorem \ref{2019.7.10.22.13}.
\end{proof}
\noindent
{\bf Acknowledgments.}
The author thanks Frank Loray for many valuable discussions.
He also thanks Masa-Hiko Saito for warm encouragement.
He is supported by JSPS KAKENHI Grant Numbers 18J00245
and 19K14506.
He is grateful to the anonymous referee for suggestions that helped improve the paper.
\chapter{Introduction}
\pagenumbering{arabic}
\setcounter{page}{1}
Convolutional neural networks are becoming very successful and popular for image recognition and speech recognition tasks \cite{6}. However, they require significant computational power, and conventional architectures underperform considerably in comparison with specialised hardware. Specialised hardware, such as FPGAs, CGRAs and ASICs, introduces many challenges at different stages of development and maintenance \cite{7,8,9}. While specialised hardware could provide the optimal solution in terms of power efficiency and performance \cite{6}, GPUs and other general purpose platforms are a practical alternative because they offer reprogrammability while eliminating the challenges of specialised hardware.
One such general purpose platform that is promising for neural network applications is the Loki architecture \cite{1}. Loki is a many-core processor that utilises a simple interconnection network design to maintain high connectivity and throughput between 16 tiles of 8 cores each while keeping power utilisation low. Loki also offers the ability to repurpose tiles as additional L2 cache programmatically. This option, alongside other optimisation options such as different algorithmic approaches, resource allocation schemes or memory access patterns, creates a design space to explore when optimising this application for the target architecture.
In this project I explore the potential of these design options, possible optimal compile-time decisions and the benefit of optimisation during runtime. The latter may be useful in our application because the input data can vary substantially and a static configuration might not perform well in all cases.
The Loki many-core processor will be a very capable chip and a very interesting object of study due to the flexibility it offers and its potentially very high throughput for certain applications. The exhaustive analysis of a related optimisation design space in this project shows the dimensionality of the problem and also presents some approaches for performing quicker exploration for Loki and for generalising the findings to different micro-architectural and implementation specifications.
In Chapter 2, I provide background information related to the architecture and the problem. In Chapter 3, I demonstrate a set of trivial and non-trivial optimisations and discuss possible challenges in exploring them or applying them to Loki software implementations. In Chapter 4, I present my loop interchange analysis, where I search for the top performing nested loop permutations. In Chapter 5, there is further analysis of the loop permutation results. In Chapter 6, I obtain more realistic performance results, and finally in Chapter 7 there is a summary along with a list of possible future work.
\chapter{Background Information}
In this chapter I present the basic principles behind the architecture, the convolutional neural networks and the experimental methodology. I also explain the available high-level parameters that can affect the overall performance.
\section {The Loki architecture}
Loki is a general purpose many-core architecture that aims to provide the flexibility found in CGRAs for better performance and power efficiency when specialising. It consists of a number of homogeneous tiles of 8 cores and 8 memory banks each. Each memory bank is an 8-Kbyte SRAM that is connected to every core in the tile through a crossbar. Homogeneity was preferred in order to provide fault tolerance, modularity in design and verification, and scaling. One of the advantages of Loki is that its Instruction Set Architecture gives software more control over the hardware, such as the ability to repurpose tiles as a unified L2 cache, or more direct inter-core communication with packets.
The components are low-end in comparison with modern multi-cores. For example, the total amount of memory used for L1 and L2 purposes is 1 MByte across the whole 128-core chip configuration, when the total number of tiles is 16. In addition, there are no hardware cache-coherence mechanisms, but the related commands are exposed for the user to implement coherency in software. This design decision lowers the complexity and power usage of the components but also enables the programmer or compiler to extract more performance from specialised applications. The many-core topology favours applications with a high degree of explicit parallelism, but it also provides better performance than reconfigurable architectures for control-intensive software. It supports many parallel programming paradigms, such as equivalent functionalities for SIMD, fine-grain dataflow, task-level pipelines, Instruction-Level Parallelism and others \cite{2}.
In Figure 2.1 we can see a graphical representation of the Loki processor. Regarding intra-chip communication, each tile is connected to an on-chip interconnection network consisting of 3 separate networks (a request, a response and a reply network) for deadlock avoidance. Both data and instructions are transferred as packets. Each core communicates with main memory through channels. Their mapping is decided by the source code and the assignments are stored in the channel map table (CMT). The cores and memory banks inside a tile also communicate with each other using crossbars. The memory banks of a tile are also interconnected with a ring network to support cache-related functionality. The cores are also interconnected, which is achieved with multicast buses.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{Loki2}
\caption{Loki architecture diagram for 4x4 tiles.}
\end{figure}
The combination of modularity and flexibility here is both a curse and a blessing. On the one hand, the design process and verification become less complex, as modular designs are easier to validate and scale. It may also be future-proof, as less hardware specialisation leaves more room for wider adoption and future software optimisation. On the other hand, it lacks high performance features, such as hardware floating point support, multiple levels of set-associative caches, SMP cores, a big Last-Level Cache (LLC), specialised instructions for accelerating complex operations, hardware coherency mechanisms and many others. It also lacks the more complex mechanisms that modern multi-core processors apply to increase Instruction-Level Parallelism (ILP), such as wider pipelines, data prefetchers and sophisticated branch predictors, as well as a modern replacement policy for the LLC instead of random replacement.
While the absence of many of those mechanisms could be compensated for by more intelligent software, especially for applications that are parallelisable to a high degree and have certain workload characteristics, the biggest problem is the huge design space this creates for software optimisation. The Instruction Set Architecture gives so much control to the software that development for this platform would probably be effective only for specialised applications, where the programmer has a deep understanding of the architecture.
The current state of the compiler for Loki requires human intervention for the production of fast code. Ideally, the compiler will eventually become mature enough to apply a variety of optimisations such as automatic loop vectorisation with SIMD equivalent routines. Even then, the design space for optimisations would still be very big, as the architecture offers much greater amounts of additional functionality and freedom than regular multi-processors.
Another way Loki compensates for the lack of powerful computing and local storage resources is its support for virtual architectures \cite{10}. The different pipeline stages and resources of each core can be used remotely, and as a result this freedom can be used to emulate fewer, more complex cores and custom memory hierarchies.
The architecture-configuration parameters that I will explore and try to optimise are the \ul{number of tiles working as computation units} and the \ul{number of tiles working as a unified L2 cache}. I will also explore some task specialisation for the cores of each tile by evaluating some current Loki implementations of the convolutional neural network application. One of the goals of this project is to decide whether these options can take optimal values for all inputs or whether there is a need to change these values dynamically.
\section{Convolutional Neural Networks}
Convolutional Neural Networks (CNNs) are a type of Artificial Neural Network (ANN) used for Deep Learning that employs convolution operations in its layers. They are very similar to the more classic Multi-Layer Perceptron (MLP) ANNs with respect to the feed-forward data flow and the multiple inner layers of neurons.
In general, the difference between CNNs and MLPs is that a CNN applies convolutions in the majority of its early layers, interleaved with transformations and sampling techniques such as max-pooling \cite{6}. At the beginning there is a much bigger number of input neurons for inputs such as colour images (RGB arrays), so it would be computationally very expensive to train the network if it were a fully connected MLP. The data then goes through a number of those operations and ends up in a more compact form in the last layers, which are fully connected as with the MLP ANNs.
A convolutional layer applies a series of two-dimensional filters or kernels to multiple same-sized two-dimensional monochrome input images using convolution \cite{6}. This can be computed using direct convolution, which uses a sliding window to calculate the dot product. There are also alternative ways to calculate convolution, such as FFT convolution, which is faster for bigger kernels \cite{16}. The convolution happens in the convolutional part of the network, as noted in Figure 2.2, where a Multi-Layer Perceptron is compared with a simplified form of a Convolutional Neural Network architecture.
\begin{figure}[h]
\centering
\includegraphics[trim=2cm 0 0 0,width=1.08\textwidth]{tikzNNlatex}
\caption{Comparison of an MLP (left) and a convolutional neural network (right, source: \href{http://www.ais.uni-bonn.de/deep_learning}{University of Bonn, Autonomous Intelligent Systems})}
\end{figure}
Regarding the main computation, we note that the convolution of one layer takes a number of input images (the number of input channels) and a number of kernels (equal to the number of output channels), performs convolution for every pair of the two sets, and produces a 3D array that will be the input of the next layer. The number of iterations that the direct convolution operation makes is equal to the input image area multiplied by the kernel area. This results in a total number of iterations equal to the product of the \ul{number of input channels}, the number of \ul{output channels}, the \ul{width of the image}, the \ul{height of the image}, the \ul{width of the kernels} and the \ul{height of the kernels}. In a later chapter we will see an implementation consisting of six nested loops that performs convolution. All of these parameters will be taken into consideration to measure how performance is affected under different conditions.
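The six-loop structure described above can be sketched as follows. This is a minimal illustration, not the thesis's actual Loki code: the function name, the row-major memory layout and the "valid" (no padding, unit stride) convention are my own assumptions.

```c
/* Sketch of direct convolution with six nested loops.
 * in:  inC x inH x inW input volume, flattened row-major.
 * k:   outC x inC x kH x kW kernels, flattened row-major.
 * out: outC x outH x outW output, flattened; caller zero-initialises.
 * Innermost-body executions: outC * inC * outH * outW * kH * kW. */
void conv2d_direct(const float *in, const float *k, float *out,
                   int inC, int inH, int inW,
                   int outC, int kH, int kW)
{
    int outH = inH - kH + 1, outW = inW - kW + 1;
    for (int o = 0; o < outC; o++)                    /* output channels */
        for (int i = 0; i < inC; i++)                 /* input channels  */
            for (int y = 0; y < outH; y++)            /* image height    */
                for (int x = 0; x < outW; x++)        /* image width     */
                    for (int ky = 0; ky < kH; ky++)   /* kernel height   */
                        for (int kx = 0; kx < kW; kx++) /* kernel width  */
                            out[(o*outH + y)*outW + x] +=
                                in[(i*inH + y + ky)*inW + x + kx] *
                                k[((o*inC + i)*kH + ky)*kW + kx];
}
```

Each of the six loop bounds corresponds to one of the underlined parameters above, which is why the total iteration count is their product.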
The architectural decisions for the simpler Multi-Layer Perceptron networks are the number of hidden layers, the number of neurons in each layer, the transfer functions between layers and the input functions. There can be several problems with explicitly constructed neural network architectures \cite{11}, such as poor performance after evaluating with k-fold cross-validation, long training times, or even an inability to converge to a specific amount of error. There are also parameters and optimisations related to the training algorithms. The optimisation of a Multi-Layer Perceptron to solve a specified problem is still an unsolved task \cite{11}, and it is very desirable to build a constructive neural network learning algorithm \cite{12} that would decide the optimal topology algorithmically.
In the CNN case the design space is wider, as many different types of layers with their own customisation options have been introduced that can be used in different arrangements when building the neural network topology. However, there have been studies that suggested certain architectures, such as the famous CNN by Krizhevsky et al. \cite{13}, SqueezeNet \cite{14}, GoogLeNet and AlexNet \cite{15}. All these examples are used in a variety of applications and are proven to perform relatively well, especially in photograph classification \cite{15}.
The important common characteristic of Convolutional Neural Networks is that they are computationally expensive and their basic building block is convolution. Neural networks are generally known to be more expensive than other machine learning algorithms because they make fewer statistical assumptions about the input data. This is the reason that some of the algorithm design choices were already selected to increase performance. One example is the use of Rectified Linear Units (ReLU), a piecewise-linear transfer function that is currently favoured over the classic, much more CPU-time-consuming tanh or sigmoid transfer functions \cite{13}.
This project will not focus on the neural network architecture design aspects. The goal will be to fine-tune the execution of the whole family of CNN algorithms by examining and optimising the most demanding part of their execution, which is the convolution. The target architecture is attractive for this kind of workload as it is highly parallel. In addition, convolution exhibits a variety of data localities, which might be more easily exploited for better performance using Loki's low-cost communication features.
\section{Experimental Methodology}
There are two main simulation frameworks that I use for the analyses. First, I have developed a custom cache simulator, a fast functional simulator based on binary instrumentation that also makes rough performance predictions. Then I use lokisim, the official performance simulator of the Loki team, for validating the optimisations or monitoring bottlenecks.
The two simulators of the micro-architecture offer results at two different levels of abstraction. The more detailed a simulator is, the more accurate the results will be. However, the lower abstraction also increases simulation time significantly, as it models more aspects of the simulated microprocessor. Since the design space of exploration can be very large, a widely used general methodology is to explore exhaustively using simpler simulations and then validate the most promising decisions in a more detailed simulation environment.
\subsection{Cache Simulator}
In order to evaluate a big number of optimisation variations on many input and hardware configurations efficiently, I developed an Intel Pin \cite{17} tool based on pinatrace.cpp. It is a multi-level cache simulator in the form of a Pin tool, configured similarly to Loki's memory hierarchy characteristics, for faster design space exploration. Pinatrace.cpp is a Pin tool that produces a stream of memory accesses and aims to be independent of the underlying architecture \cite{18}. This is also a major advantage because the instrumentation is done on normal Linux binaries, which are less expensive to produce than Loki binaries, which require human intervention in the code for efficient usage of the resources.
Originally, the pinatrace.cpp Pin tool produced a multi-gigabyte address stream file by instrumenting a binary running on the host machine architecture (x86). This practice is itself time consuming, as I/O operations are very expensive, and also impractical when the number of combinations is that large. One solution would be to pipe the address stream from the Pin tool's standard output to a separate cache simulator, but after experimental evaluation I found that this was around 40 times slower than the embedded simulator implementation. My current Pin tool produces the results for a single run in a couple of seconds in a summary form. Similar practices of summarised reports are well known for efficient probing/profiling in many applications, such as the aggregation functions of DTrace \cite{19}, the Intel Pin example tools, as well as hardware applications \cite{20}.
Table 2.1 summarises the initial cache simulator parameters used to represent Loki's design. In later stages the winning optimisation schemes are tested in lokisim, which has a lower level of abstraction and can give more representative insight into overheads, such as the interconnection network bandwidth limit. One difference of the initial cache simulator configuration is that the L1 cache is shared between instruction and data blocks, while Loki has a separate small memory for instruction accesses. This configuration is used for simulations using up to 8 threads inside a single tile and with an L2 cache equivalent to 8 tiles.
\begin{table}[h]
\centering
\begin{tabular}{||c||P{1.6cm}|p{2.1cm}|P{1.8cm}|P{1.3cm}|P{1.5cm}|c||}
\hline
Memory level & Access Latency & Size (KBytes) & Block size (Bytes) & Associativity & Repl. policy & Scope \\ [0.5ex]
\hline\hline
L1 cache & 3 cycles & 64 (1 tile) & 32 & 1 & - & Shared \\
\hline
L2 cache&10 cycles&512 (8 tiles)&32&8&Random&Shared\\
\hline
Main memory &30 cycles&-&-&-&-&- \\ [1ex]
\hline
\end{tabular}
\caption{Cache simulator parameters to model a Loki design.}
\end{table}
The way the cache simulator provides performance estimates is simplistic. The total cycle count is calculated by adding one cycle for each of the non-memory instructions and the number of hits at each memory level (L1, L2 and main memory) multiplied by its respective access latency. This high level of abstraction provides a very fast simulation infrastructure that is comparable to an off-the-shelf micro-architectural system simulator, such as MARSSx86, for the equivalent simplified model parameters modelling Loki's cache hierarchy.
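The cycle estimate just described can be written down directly; the function below is a hypothetical restatement of that formula (the function and parameter names are mine), using the latencies from Table 2.1.

```c
/* Cycle estimate as described in the text: one cycle per non-memory
 * instruction, plus hits at each level weighted by its access latency
 * (L1: 3 cycles, L2: 10 cycles, main memory: 30 cycles). */
long estimate_cycles(long nonmem_insns,
                     long l1_hits, long l2_hits, long mem_hits)
{
    return nonmem_insns + 3*l1_hits + 10*l2_hits + 30*mem_hits;
}
```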
\begin{figure}[h]
\centering
\includegraphics[trim=1.2cm 0 0 0,width=1\textwidth]{L1}
\caption{Comparison of equivalent performance metrics from MARSSx86 and the custom cache simulator.}
\end{figure}
In Figure 2.3 we can see a rough comparison of single-thread application performance for different inputs under MARSSx86 and my custom cache simulator. In order to model a very simple CPU core inside MARSSx86, I set both the issue width and the dispatch queue size to 1. One important observation is that, while the results can be noisy, the region of good configurations at around configuration 150 in Figure 2.3 (right) is correctly predicted by the cache simulator.
I have also created some other versions of the Pin tool that have different features. The additional functionality is the option to stop after a specific number of instructions, a multi-core/multi-tile version that is more related to Loki and the option to change the cache sizes from the command line arguments. I have also implemented Belady’s OPT optimal replacement policy \cite{27} as an option for the cache block replacement policy to replace the default random policy, for bottleneck analysis.
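Belady's OPT policy mentioned above evicts, on a miss, the resident block whose next use lies farthest in the future (or never recurs). The following is a simplified fully-associative sketch of that idea over a known access trace, not the actual Pin tool implementation; the function name and the forward-scan approach are my own illustrative choices.

```c
/* Sketch of Belady's OPT replacement for a fully-associative cache of
 * `ways` blocks over a known access trace; returns the miss count.
 * Assumes ways <= 64. O(n^2) forward scans keep the sketch short. */
int belady_misses(const int *trace, int n, int ways)
{
    int cache[64];                    /* resident block identifiers */
    int filled = 0, misses = 0;
    for (int t = 0; t < n; t++) {
        int hit = 0;
        for (int c = 0; c < filled; c++)
            if (cache[c] == trace[t]) { hit = 1; break; }
        if (hit) continue;
        misses++;
        if (filled < ways) { cache[filled++] = trace[t]; continue; }
        /* Victim: resident block whose next use is farthest away. */
        int victim = 0, best = -1;
        for (int c = 0; c < filled; c++) {
            int next = n + 1;         /* n+1 means "never used again" */
            for (int u = t + 1; u < n; u++)
                if (trace[u] == cache[c]) { next = u; break; }
            if (next > best) { best = next; victim = c; }
        }
        cache[victim] = trace[t];
    }
    return misses;
}
```

Because OPT needs the future of the trace, it is only usable offline, which is exactly why it serves here as a bottleneck-analysis bound rather than a deployable policy.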
\subsection{Lokisim}
Lokisim is a high-level simulator for the Loki architecture built by the Loki team. It is written in SystemC and aims to be a fast alternative to a cycle-accurate Verilog implementation. It supports a variety of options, including modelling a chip outside the design specifications as well as simulation-related options. The micro-architectural options are the total number of tiles, the number of cores and memory banks per tile, the size of the memory banks, the size of the instruction cache and many more. Some simulation options include the functionality to stop the simulation after a specific number of cycles and some unrealistic features, such as zeroing the memory access latency, for identifying bottlenecks. It also provides detailed statistics such as the number and type of memory accesses, a usage breakdown of all operations for each core of each tile, the bandwidth usage, the average Instructions Per Cycle and the average link contention for inter-tile communication.
\chapter{Algorithmic Optimisations}
In this chapter I describe a set of algorithmic decisions and optimisations that can be applied to the main convolution loop to increase performance. Some of them create a design space whose exploration aims to increase data locality and data reuse and to eliminate redundant or unnecessary computations; others are Loki-specific techniques for more efficient utilisation of the chip's resources.
For some of the optimisations it might be difficult to generalise, due to the high number of possible input parameters that change the workload characteristics, and also because the number of combinations of the decisions here is very high even for a single specified input. There are separate chapters for the loop interchange analysis and for the tiles-versus-L2-cache trade-off, which is more specific to Loki.
One of the goals of the analysis is to decide whether some of these optimisations can be set statically and still perform well across a variety of input configurations and run-time conditions, or whether there would be a benefit in dynamic optimisations that adapt to the workload.
\section{Basic optimisations}
In this section I present the basic steps to eliminate unnecessary multiplications in the most time consuming part of the convolution. These optimisations might be trivial, but they are presented for a better understanding of the algorithm, and they also contribute to the methodology, as we will later examine realistic access patterns. First, we transform the data from multi-dimensional array representations to linear memory to better identify data localities and exploit optimisation opportunities.
For example, 2D data of the form a[x][y] would be transformed in a[x*\texttt{<} size of Y dimension\texttt{>} +y]. An example for 3D data would be the transformation from a[x][y][z] to a[x * \texttt{<}size of Y dimension\texttt{>} * \texttt{<}size of Z dimension\texttt{>} + y * \texttt{<}size of Z dimension\texttt{>} + z]. This is actually how basic array data types in common programming languages are represented in memory, such as with C/C++. Therefore, by this transformation we remove a layer of abstraction to allow manual optimisations. In Figure 3.1 we observe the differences after the transformation.
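The index transformations above can be captured in two small helpers; the names idx2/idx3 are my own and are only for illustration.

```c
/* Row-major index linearisation, mirroring how C lays out
 * multi-dimensional arrays in memory:
 *   a[x][y]    on an X-by-Y array      -> a[x*Y + y]
 *   a[x][y][z] on an X-by-Y-by-Z array -> a[(x*Y + y)*Z + z]  */
static inline int idx2(int x, int y, int Y)
{
    return x*Y + y;
}
static inline int idx3(int x, int y, int z, int Y, int Z)
{
    return (x*Y + y)*Z + z;
}
```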
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{loops0}
\caption{The main nested loop of convolution (Top) and the implementation with one-dimensional arrays (Bottom). The gray areas highlight unnecessary multiplications. }
\end{figure}
As we notice, the shaded multiplications, such as the product of the image height and image width, have the same result in all iterations. Therefore, we can replace all the shaded portions with pre-calculated immediate values. This reduces the number of unnecessary repeated calculations.
Another observation is that even after the replacement of those multiplications there still remain some other multiplications, that could be computed earlier, such as y*imageWidth which could be determined after the 3rd nested loop in the figure, which increments y. Instead of performing those multiplications earlier, it would be even better to perform additions on respective sums, as addition is less expensive than multiplication in general. In Figure 3.2, we observe the resulting main loop code, where each for structure appears as a building block and the arrows point to the respective closing section.
\begin{figure}[h]
\centering
\includegraphics[trim=0.85cm 0 0 0,width=1.1\textwidth]{loops}
\caption{Optimised main loop with structure to generate permutations easily.}
\end{figure}
We note that there is only one multiplication left, which is necessary for the convolution computation. There is one more similar optimisation: the values of o, i and y are not used internally, and therefore we could iterate over their dependent values instead. However, this might have only a small impact on performance, and irregular for-loop structures may impair the compiler's ability to apply further optimisations, such as vectorisation on other architectures. The building block notion is useful for simplicity in further sections. In general, the compiler might have been able to apply these or similar optimisations on common architectures, but Loki's compiler is not very efficient for these applications at the time of writing and they have to be applied manually. One more reason for keeping the code snippet simple is that I evaluate parallel performance using OpenMP, and irregular loop structures are not exploitable for parallelisation.
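The hoisting and strength-reduction ideas from this section can be shown on a reduced 2-D example; the function names and the scaling workload are hypothetical stand-ins for the convolution loop, chosen only to keep the sketch short.

```c
/* Naive version: the index product y*W is recomputed in every
 * inner-loop iteration. */
void scale_naive(float *a, int H, int W, float s)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            a[y*W + x] *= s;
}

/* Optimised version: the loop-invariant product is kept as a running
 * row offset, so an addition replaces the multiplication. */
void scale_hoisted(float *a, int H, int W, float s)
{
    int row = 0;                      /* running value of y*W */
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++)
            a[row + x] *= s;
        row += W;                     /* one addition per row */
    }
}
```

Both versions visit the same elements in the same order, so the transformation changes cost, not results.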
\section{Access Pattern Manipulation}
One very interesting optimisation to explore is to change the access pattern so that the working set can be decreased. By minimising the working set, cache performance can be improved by a considerable amount, which can impact the overall performance. This is because accessing the data in different orders changes the spatial and temporal localities.
Other factors that can impact performance when changing the access pattern are block localities, as an access to a memory reference requests the fetch of an entire block, which has a size of 32 bytes in the Loki architecture, and the prefetcher's performance on architectures that support one. Loki supports manual prefetching through fetch commands placed inside the application's text segment. While block access can virtually act like a form of prefetching, it could also cause the fetching of irrelevant data or data with low temporal locality. Some processors support the critical-word-first optimisation, which could make the results more complex as there are additional block buffers \cite{24}.
The way I change the access pattern is by selecting different permutations of the loop blocks. In Figure 3.2 we saw this block notation that kept the building blocks simple. I selected this loop structure also because it makes it very easy to apply permutations. The code will produce correct results for every one of the 720 permutations of the 6 loop blocks, as long as the closing sections are in the correct order, which is the inverse of the loop permutation.
Currently, due to the high number of combinations, there are, to my knowledge, no mechanisms or research work for exhaustive analysis or for deciding the loop order automatically. Most loop reordering research is focused on automatically maintaining the data dependencies of arbitrary nested loops and on how this can be scaled on parallel machines. The work on automatic loop interchange \cite{30} summarises the concepts of loop reordering from older works and provides an algorithm for safe loop transformations in vectorising compilers.
In Figure 3.3, we can see a visualisation of the address and block reuse patterns for a specific network configuration under the best and worst loop permutations. The figures are for the first 100 million instructions of the execution of this configuration. The simulation framework for this, as well as how the worst and best loop orders were found, is explained in Chapter 4. The results here are only for demonstration purposes. On the left side we see the reuse patterns for the best loop permutation, while on the right side we see the equivalent patterns for the worst performing loop order. The top graphs represent the address reuse patterns, which are architecture-independent with regard to the block size. The lower graphs represent the equivalent graphs for block references instead of address references. Some important observations are that the block reuse patterns are more representative for measuring performance and that the best loop order has a smaller working set in general. The worst loop permutation has a very low address reuse in this time frame, but there is considerable block reuse, though not as much as in the best case.
\begin{figure}[h!]
\centering
\includegraphics[width=1.02\textwidth]{img.png}
\caption{Visualisation of the address and block reuse patterns. Initially, two binaries were produced, one with the best loop order and one with the worst for the specific layer parameters. The execution of the first 100M instructions was instrumented to obtain the memory access references. Because the virtual address space is too sparse for visualisation, the addresses were renamed according to their time of first appearance. In this way we produced the two upper graphs, which show the address reuse in a compact format. The two lower graphs are the equivalent after removing the word offset from each memory reference. The word offset size is equal to 5 to match Loki's offset.}
\end{figure}
Visually, we observe that the working set for the best case is around 500 blocks, which is approximately 16 kB. Therefore an L1 size of 16 kB could be enough for this case. For the worst case, the working set seems to be around 5000 blocks, which is around 160 kB. That means performance would probably depend on a higher-level cache, which has a higher access time than L1. It is important to note that there is no formal definition of the working set, as it is used to describe the data reuse in a short period of time.
\newpage
\section{Partial sums}
Due to the nature of the convolution algorithm, as with matrix multiplication, each entry of the output layer is accessed multiple times to add up a finite series of numbers. This requires a lot of memory writes, which are much more expensive than read operations or writes to registers. We could eliminate the output array accesses by performing this addition in local storage, such as a register or, for Loki, a scratchpad memory. Each sum can be computed partially at different stages and the final result computed later by adding partial sums, or with a single summation variable in the single-thread case.
In Figure 3.4 I present the application of the partial sums optimisation for our base loop permutation. The shaded areas show the parts into which the summation statement from the previous code has been partitioned. The dashed arrows show the data dependencies of the index of the out array. This essentially means that the code inside the last dependency loop writes to the same place in memory. For example, as we can observe from the given loop permutation of the figure, the code inside the 4\textsuperscript{th} loop writes to the same variable but produces the same result as the unoptimised code.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{loops2}
\caption{Partial sums optimisation. The dashed arrows show the dependencies of the index of the out array.}
\end{figure}
It is interesting to note that this optimisation does not improve the reuse because the reuse is already high for the references to the blocks of the out array. The benefits are coming from the change of type of memory that the partial sum is on and the decrease of memory write operations.
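A minimal sketch of the idea, reduced to a single inner product (the function name is hypothetical and stands in for the innermost summation of the convolution): the accumulator stays in a register, and the output location receives one write instead of a read-modify-write per iteration.

```c
/* Partial-sum accumulation: sum into a register-resident variable and
 * write the result once, instead of updating out[] every iteration. */
float dot_accumulated(const float *a, const float *b, int n)
{
    float acc = 0.0f;                 /* lives in a register, not memory */
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;                       /* single write at the end */
}
```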
\section{Parallel implementations}
As a first step for my evaluation of different optimisations and configurations I use OpenMP to parallelise the above code using the outermost loop. OpenMP is a straightforward parallelisation framework that enables effortless and efficient parallelisation of common programming models, such as the parallel for. At a later stage I validate the findings using an equivalent code for Loki, which is written manually.
The main overhead of a parallel implementation usually comes from the mechanisms that keep thread safety on the data structures the threads use for writing shared information. In our case the data structure we need to protect is the out array. Because coherency measures are expensive, the partial sums optimisation is even more important in the parallel case: partial sums reduce the number of times each thread writes to the shared array, so fewer updates need to be protected.
OpenMP provides different thread safety measures, such as critical sections, atomic operations and locks. Experimentally, I have found that atomic operations are ideal for our case, while critical sections are unnecessarily expensive for updating a single memory location. There is a lot of research on improving and validating such safety measures in both hardware and programming models \cite{31,32}. At this stage I wanted to keep the code simple and portable in order to make quick observations with the higher-level cache simulator.
In Figure 3.5, we observe the parallel version of a different permutation. The shaded regions are the changes from the serial version. First, we insert the pragma that parallelises the outermost loop with the static scheduling option, minimising any run-time overhead for thread scheduling. Dynamic scheduling is unnecessary here because we already know that each iteration carries an equally sized workload. Next, we insert the atomic operation pragma above the line that updates the out array. For correct results we also need to make all the iteration variables private, as each thread must keep its own state of the iterations; for this reason I declare these variables on the first line inside the outermost loop. Last, the iteration-variable optimisation cannot be applied to the outermost loop because the variables are declared inside the iteration, so I insert the corresponding multiplications into the outermost loop. This could be avoided by declaring those variables outside the loop structure and marking them private in the OpenMP parallel for pragma, but I find the present form more elegant for demonstration.
\begin{figure}[h]
\centering
\includegraphics[width=1.1\textwidth]{loops3}
\caption{Parallel implementation. The shaded areas are the changes from the serial version.}
\end{figure}
Another optimisation, which removes the need for thread safety measures and their overheads entirely, is to partition the iterations so that at any given time no two threads write to the same memory location. As noted, the index of the out array depends on the values of o, x and y. This means we can safely remove the atomic operation pragma whenever we parallelise one of the loops that iterates over one of these three parameters. For example, in the permutation of Figure 3.5 we cannot remove pragma omp atomic because the threads write to all locations of the out array, while in the parallel implementation of the permutation in Figure Y we could safely omit it.
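The two situations can be illustrated with the simplified loop nest below (my own sketch, not the generated code). Variant A parallelises a loop whose variable does not appear in the out index, so the update must be atomic; variant B parallelises the output-channel loop, so each thread owns a disjoint slice of out and the pragma can be dropped. Without OpenMP enabled, the pragmas are simply ignored and the functions remain valid serial C.

```c
#include <assert.h>

#define O 2   /* output channels (illustrative sizes) */
#define I 3   /* input channels */
#define W 4   /* image width */
#define H 4   /* image height */
#define K 3   /* kernel width and height */

/* Variant A: the parallel loop iterates over input channels (i), which
   does not appear in the out index, so different threads may update the
   same out[o][x][y]; the update must therefore be atomic. */
static void conv_atomic(float out[O][W][H],
                        float in[I][W + K - 1][H + K - 1],
                        float wt[O][I][K][K]) {
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < I; i++)
        for (int o = 0; o < O; o++)
            for (int x = 0; x < W; x++)
                for (int y = 0; y < H; y++)
                    for (int fx = 0; fx < K; fx++)
                        for (int fy = 0; fy < K; fy++) {
                            #pragma omp atomic
                            out[o][x][y] += in[i][x + fx][y + fy] * wt[o][i][fx][fy];
                        }
}

/* Variant B: the parallel loop iterates over output channels (o), so each
   thread writes a disjoint slice out[o][..][..] and no atomic is needed. */
static void conv_no_atomic(float out[O][W][H],
                           float in[I][W + K - 1][H + K - 1],
                           float wt[O][I][K][K]) {
    #pragma omp parallel for schedule(static)
    for (int o = 0; o < O; o++)
        for (int i = 0; i < I; i++)
            for (int x = 0; x < W; x++)
                for (int y = 0; y < H; y++)
                    for (int fx = 0; fx < K; fx++)
                        for (int fy = 0; fy < K; fy++)
                            out[o][x][y] += in[i][x + fx][y + fy] * wt[o][i][fx][fy];
}
```

With integer-valued data, both variants produce identical results; the difference lies entirely in the synchronisation cost.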
One of the design principles of scalable architectures such as Loki is to keep components simple. A design decision was therefore not to include any hardware coherency mechanism in the memory hierarchy. Instead, there are cache flush and invalidate commands that must be placed manually by the programmer or compiler, so extra effort is needed to enforce thread safety in the Loki implementations. One easier solution is to prefer the permutations that eliminate the need for thread safety measures by writing to disjoint segments of the output layer. In addition, on Loki one can map all 8 cores of a tile to a unified L1, consisting of all the tile's memory banks, which simplifies the thread safety measures across the cores of the tile.
A design space I will not explore in this project is the effect of parallelising an inner loop instead of the outermost one. The benefit of parallelising inner loops could be a better and fairer utilisation of the assigned threads when the outer loop consists of a small number of iterations. The iterations of the outer loops would also act as barriers, enforcing more synchronisation across the threads; this could in turn shrink the working set, because the threads would iterate over data with more temporal locality. Performance could decrease due to the extra wait time of threads that finish their share of the work earlier, as well as any additional OpenMP-generated overheads from the presence of more barriers.
\section{Convolution implementation generator}
In order to evaluate a large number of hardware configurations, input parameters and loop permutations, I have created a python script that produces a C program according to the given parameters. These are the input parameters, which are the convolutional neural network layer characteristics, and the permutation index, which ranges from 0 to 719 for the 6! possible permutations. The input parameters are the number of input and output channels, the width and height of the image, and the width and height of the filter. The permutation index follows the python itertools library, which yields all permutations in lexicographic order.
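Decoding a permutation index into a concrete loop order is a simple factorial-base (lexicographic unranking) computation. The generator does this in python via itertools; the helper below is my own C sketch of the same mapping, with a hypothetical function name.

```c
#include <assert.h>

/* Decode a lexicographic permutation index (0 .. n!-1) into a loop order,
   mirroring the ordering of python's itertools.permutations. n <= 16. */
static void unrank_permutation(int index, int n, int perm[]) {
    int pool[16];            /* remaining loop ids, kept in ascending order */
    int fact = 1;
    for (int i = 2; i < n; i++)
        fact *= i;           /* fact starts as (n-1)! */
    for (int i = 0; i < n; i++)
        pool[i] = i;
    for (int pos = 0, remaining = n; pos < n; pos++, remaining--) {
        int digit = index / fact;        /* next factorial-base digit */
        index %= fact;
        if (remaining > 1)
            fact /= remaining - 1;       /* (remaining-2)! for the next step */
        perm[pos] = pool[digit];
        for (int j = digit; j < remaining - 1; j++)  /* remove from pool */
            pool[j] = pool[j + 1];
    }
}
```

For example, index 0 yields the identity order and index 719 yields the fully reversed order of the 6 loops.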
The output of the script is a single .c file that can be compiled by gcc and other compilers. The resulting code includes all the above optimisations, including partial sums and multiplication elimination. The closing sections of the loops are placed automatically in the reverse order of the loop permutation so that the algorithm works correctly. There is also a hardcoded variable that expresses the number of threads. When this variable is set higher than 1, the script introduces the OpenMP-related code and makes all the required changes demonstrated in Figure 3.5. It also removes the atomic operation pragma when thread safety is implied by the loop permutation.
The generated programs iterate over arrays filled with zeros to eliminate data loading times. I also use malloc instead of calloc so that memory pages are requested only on demand, because I would like to isolate and explore the memory access patterns of each loop permutation. When compiling with gcc, optimisations should be turned off with the flag -O0, because the gcc optimisation heuristics remove almost all operations of the code when the arrays are never initialised and the output layer is never used. In combination with the manual, non-architecture-specific optimisations, I thus examine realistic access patterns isolated from the array initialisation phase, which would already be present in a CNN convolution layer. For validation purposes, I have also created a version that inserts random data in all implementations and loop permutations and checks that the result is identical in all combinations.
In Figure 3.6, I present a set of preliminary results using 1 thread on real hardware, a Haswell machine. The code is compiled using gcc without any optimisation and each configuration is run 50 times; the plotted values are the respective medians, to suppress outliers since this is not an isolated environment. The best loop order is found by running all 720 permutations and selecting the best performing one among the medians of 20 runs per permutation. The y axis lists the applied optimisations, where constants and iterations refer to the multiplication elimination measures described in 3.1. The only one of these optimisations that needs tuning is the best loop order selection, and here the result is not very promising as it offers around 40x speedup. One reason for this is that the initial loop order was already among the top performers, the worst loop order causing only around a 3 times slowdown, and the selected input parameters may have produced a less demanding workload. In the loop reordering chapter I will also explore the impact of different configurations on the top permutations, on a processor with a smaller LLC and multiple threads.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{simpleexperiment}
\caption{Preliminary experiment to show the execution time after applying a variety of optimisations. Single-thread execution on an x86 server.}
\end{figure}
\section{Sparsity-sensitive algorithms}
One algorithmic optimisation that is interesting to explore relates to sparsity. The general idea is that when the multiplier or the multiplicand equals zero, adding their product to the output layer is unnecessary. In the main summation operation, the two values that could equal zero are the weight and the input image data. We can therefore implement the optimisation by adding checks that skip every iteration in which the multiplicand or the multiplier is zero. This usually saves computation time, but the performance is highly dependent on the actual sparsity of the input layer and the filter layer.
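The check itself is a one-line guard on the innermost multiply-accumulate. The sketch below uses hypothetical helper names over a flat row of activations and weights; the real implementations apply the same guard inside the convolution loop nest.

```c
#include <assert.h>

/* Dense multiply-accumulate over a row of activations and weights. */
static float mac_dense(const float *act, const float *wt, int n) {
    float sum = 0.0f;
    for (int k = 0; k < n; k++)
        sum += act[k] * wt[k];
    return sum;
}

/* Sparsity-sensitive version: skip the multiply-add whenever the
   activation or the weight is zero. The result is unchanged; whether
   it is faster depends on how many zeros the data contains. */
static float mac_sparse(const float *act, const float *wt, int n) {
    float sum = 0.0f;
    for (int k = 0; k < n; k++) {
        if (act[k] == 0.0f || wt[k] == 0.0f)
            continue;           /* the product would be zero anyway */
        sum += act[k] * wt[k];
    }
    return sum;
}
```

On dense data the extra branch is pure overhead, which is exactly the trade-off discussed below.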
The convolution implementation generator gives the same performance across all input layers and filters. This is not the case for an activation-aware or weight-aware algorithm. For example, if an image has only 10\% activations, the sparsity-sensitive algorithm may be more efficient than the dense algorithm. If, on the other hand, the input layer contains few zeros, then the dense algorithm will perform better, as it does not carry checks that would rarely save any computation.
Generally, when evaluating this implementation variant, it would be very important to study the common ranges of sparsity percentages in the image and the weights. It would also be useful to consider the distribution of the dense areas: if some regions are very dense and the loop permutation assigns a 100\% dense region to a single core, that core could become a bottleneck. We must therefore be careful when generalising about the sparsity measures.
The focus of my convolution implementation generator is mainly to explore access patterns in a cache simulator without specific input values, so I do not implement the sparsity measures. The Loki team has implementations that add sparsity measures for the weights and activations, and I will evaluate the results directly on the lokisim framework in a later section. One other reason for exploring them directly on lokisim is that they might further improve performance if there is a bandwidth bottleneck across the Loki tiles, in which case the results of the cache simulator would not be as representative.
It is interesting to note that, in a real-world OpenMP implementation of the parallel case with the sparsity-sensitivity optimisation, it might prove essential to use dynamic scheduling instead of static thread allocation, as different iteration chunks can carry different amounts of workload depending on the distribution of zeros in the image and filters.
\section{Loki-specific and other optimisations}
There are many other optimisation techniques that could be applied, but they can be architecture specific. One example is loop unrolling: automatic compiler unrolling restructures the innermost loops to eliminate jump instructions and the related data dependencies, further increasing performance.
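For instance, an innermost accumulation loop can be unrolled by hand. The sketch below (generic C, not Loki code) unrolls by a factor of four with a remainder loop; on Loki the factor would be chosen so that the loop body fits in the instruction buffer, as described next.

```c
#include <assert.h>

/* Rolled reference: one multiply-add and one branch per iteration. */
static float dot_rolled(const float *a, const float *b, int n) {
    float sum = 0.0f;
    for (int k = 0; k < n; k++)
        sum += a[k] * b[k];
    return sum;
}

/* Unrolled by four: four multiply-adds per branch. The additions happen
   in the same order as the rolled version, so the result is identical. */
static float dot_unrolled(const float *a, const float *b, int n) {
    float sum = 0.0f;
    int k = 0;
    for (; k + 4 <= n; k += 4) {
        sum += a[k]     * b[k];
        sum += a[k + 1] * b[k + 1];
        sum += a[k + 2] * b[k + 2];
        sum += a[k + 3] * b[k + 3];
    }
    for (; k < n; k++)          /* remainder iterations */
        sum += a[k] * b[k];
    return sum;
}
```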
In Loki implementations this optimisation is currently done manually, as the compiler is immature and knowledge of the architecture is essential. Each core in the Loki processor has two kinds of instruction store, instead of the regular private instruction-only L1 cache: a 64-word instruction packet queue (L0) that stores instruction packets (IPKs) containing the basic blocks of the program, and a higher-priority 16-word instruction buffer that aims to save power and increase performance for repetitive tasks \cite{2}. During manual optimisation, the programmer knows the size of this buffer and can unroll loops up to a specific number of instructions, minimising instruction misses.
The Loki architecture provides the possibility of adapting to different memory hierarchies and specialising the resource allocation to the application's needs. The basic form of this functionality is to disable a specified number of tiles and use all their memory banks as a shared L2 cache. This adds another optimisation parameter, which will be explored in Chapter 6. Ideally the application will use a substantial number of cores for high throughput while providing enough cache to minimise the number of LLC misses, with a block request rate that will not flood the interconnection network and cause stalls. Of course, there is also the question of whether an optimal configuration exists at all, given the architecture constraints and the wide range of possible inputs of the convolutional neural network application.
One other feature of the Loki architecture is the ability to partition the L1 cache into custom-sized memory bank groups for private use by selected cores or for different data structures. Since the L1 cache is direct-mapped, partitioning the 8 available memory banks can help reduce conflict misses. Another benefit of the L1 cache partitioning is that it can provide quality of service when some data is predicted to have high reuse, or when it has other properties that would otherwise degrade performance \cite{22}. Examples of such properties are thrashing and scanning cache access behaviours, where the working set is too big to fit in cache or the data is accessed in a streaming fashion and has no reuse \cite{23}. In the latter case, Loki is able to programmatically bypass L1 or perform direct memory accesses.
One other optimisation applicable to the main convolution code is loop vectorisation, if the architecture supports it. On conventional multicores the architecture has dedicated assembly instructions for Single Instruction Multiple Data (SIMD) operations; the compiler checks which instruction set extensions are available on the host processor and can vectorise loop structures automatically for a more efficient utilisation of the available processor resources. Due to its packet-based architecture, Loki allows the remote assignment of specific tasks to specific cores at run-time, which is useful for implementing data-level parallelism. Some cores can be set up as helper cores that distribute similar workload to the rest of the cores; the helper cores can also be used for load balancing. One design question would be to find the best performing ratio of helper to worker cores. Again, all these options contribute to the very wide design space of software optimisations; they can be considered software optimisations on Loki because they are exposed through the instruction set architecture.
\chapter{Loop Order Analysis}
In this chapter I present my analysis of the loop permutations in the most time-consuming nested loop section of convolution in a CNN. The idea is to alter the access pattern in order to minimise the working set used during the calculations and therefore reduce the number of cache misses. Reducing both L1 and L2 misses is shown to improve performance by a significant amount, not only due to the respective L2 and main memory access latencies, but also due to the reduced congestion in Loki's interconnection network, which sometimes proves to be a performance bottleneck.
By permuting the main nested loop section we can find an optimal permutation that increases both the temporal and spatial locality of memory accesses. The question answered at a later stage is whether there is a specific loop permutation, or set of permutations, that performs near-optimally on all input cases and cache configurations. Another important question is how critical this decision is and, if it proves important, how an on-the-fly loop reordering mechanism could be implemented in software.
\section{Experimental setup}
In order to evaluate all loop permutations (in our case 6! = 720) on many configurations efficiently, I have used my custom cache simulator. The x86 binary is produced by my convolution generator script. The arguments for a single binary are the permutation index and the input parameters (the number of input channels, the number of output channels, the width and height of the image, and the width and height of the kernels). The architectural parameters were summarised in Table 2.1 in the methodology section of Chapter 2.
\section{Hamiltonian path index for permutation visualisation}
An innovative use of the Steinhaus-Johnson-Trotter algorithm (1963) \cite{21} is presented to arrange the different permutations in an order based on spatial characteristics. The idea is to find an indexing function for all permutations using one parameter that carries some locality information. In this way we could distinguish regions of loop permutations that perform better than others and compare signatures more effectively, or even use the index for dynamic loop reordering when optimising a single parameter. For 6 elements the number of permutations is 6! = 720, and the number of ways the indexing can be done with a single parameter is (6!)! = 720!, a 1747-digit number.
Examples of common indexing functions are the lexicographic order and the reverse lexicographic order, but these can be poor choices for visualisation. In the first case, for example, the permutations (4, 5, 3, 2, 1, 0) and (5, 0, 1, 2, 3, 4) are consecutive but look very dissimilar. On the other hand, the fact that the further left an element is, the less rapidly it changes between consecutive permutations gives lexicographic order a locality property of its own. This property could be useful if we knew for certain that one end of the permutation has more impact on the dependent variable. In our case this might hold for L1 misses, as the innermost loops may determine the ``immediate'' working set, but for L2 misses this might not be the case.
The design space of permutations can be represented as an undirected graph in which each node represents a permutation and each edge connects two permutations that differ only by a swap of adjacent elements. The simpler graph for n=4 can be seen in Figure 4.1. For n=6, the number of nodes is 720 and the number of edges is 1800. Using the Steinhaus-Johnson-Trotter algorithm we can obtain a hamiltonian path of this graph (a path that visits each node exactly once), which can then be used as an alternative indexing function.
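A compact implementation of the algorithm, sketched here in C (my own code; the thesis pipeline does this in python): each step swaps the largest ``mobile'' element with the neighbour it points to, so consecutive outputs differ by exactly one adjacent swap, tracing a hamiltonian path through the permutation graph.

```c
#include <assert.h>

/* Steinhaus-Johnson-Trotter: writes all n! permutations of 0..n-1 into
   out, such that consecutive rows differ by one adjacent swap.
   Caller must provide n! rows; n <= 8. Returns the number of rows. */
static int sjt(int n, int out[][8]) {
    int perm[8], dir[8];            /* dir: -1 = looking left, +1 = right */
    for (int i = 0; i < n; i++) { perm[i] = i; dir[i] = -1; }
    int count = 0;
    for (;;) {
        for (int i = 0; i < n; i++) out[count][i] = perm[i];
        count++;
        /* find the largest mobile element: one that sees a smaller
           neighbour in its current direction */
        int m = -1;
        for (int i = 0; i < n; i++) {
            int j = i + dir[i];
            if (j >= 0 && j < n && perm[i] > perm[j] && (m < 0 || perm[i] > perm[m]))
                m = i;
        }
        if (m < 0) break;           /* no mobile element: all n! emitted */
        int j = m + dir[m];
        int moved = perm[m];
        /* swap the mobile element with its neighbour (value and direction) */
        int t = perm[m]; perm[m] = perm[j]; perm[j] = t;
        t = dir[m]; dir[m] = dir[j]; dir[j] = t;
        /* reverse the direction of every element larger than the moved one */
        for (int i = 0; i < n; i++)
            if (perm[i] > moved) dir[i] = -dir[i];
    }
    return count;
}
```

The position of each permutation in this sequence is the hamiltonian path index used for the visualisations below.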
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{hgraph}
\caption{A permutohedron where the nodes are all permutations of 4 elements and the edges connect the permutations that differ only by an adjacent elements swap.}
\end{figure}
In Figure 4.2 I compare the results of 720 simulations, one per permutation with the same input parameters, under 3 different orderings. The first two indexing functions are the lexicographic and reverse lexicographic orders, as implemented in the python itertools library. As we can see, the hamiltonian path index is a very good option for visualising permutations for all 3 evaluation metrics. Another interesting observation is that there is some periodicity in the hamiltonian path index graphs, because the linearisation loses some of the locality information present in the original graph.
If we compare the graphs individually, one of the lexicographic orders produced a less noisy cycles graph than the other indexing methods. However, this indexing might not be a good general solution when we are not aware of any greater importance of the leftmost or rightmost elements of the permutations. In addition, the hamiltonian path index tends to produce the most distinguishable regions of hills and valleys, corresponding to badly and well-performing regions respectively, which will later prove more suitable for graph comparison. The permutation graphs will be used as signatures for visual comparison of different input and configuration parameters.
The last observation is that, in the cycles case, the downward shape produced by the reverse lexicographic index shows the importance of the innermost loops (the rightmost elements of a permutation) in determining the working set. Under the reverse lexicographic order, the x-axis is partitioned into 6 equally sized segments of 120 loop permutations which share the same innermost loop.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{compare1_1_index}
\caption{Cycle, L1 misses and L2 misses results from 720 simulations, one for each loop permutation, using different indexing methods. Input parameters: 256, 32, 28, 28, 3, 3 for output channels, input channels, image width, image height, filter width and filter height respectively. Single-thread version. The layer parameters are the specification of the 10th layer of TinyDarknet \cite{25}.}
\end{figure}
The hamiltonian path indexing method could also be used in other applications where different permutations are explored visually or when searching for an optimal index and there are localities among neighbouring permutations.
\section{Comparing signatures for different inputs and configurations}
In this section I present the simulation results for different input parameters across all loop permutations. The input parameters are explained in section 2.2 and they all contribute to the overall problem size. The section is divided in two subsections analysing different ranges of inputs. In the first, I explore the performance of 7 convolution layers of SqueezeNet \cite{14} under changing loop permutations. In the second, I present a wider range of synthetic layer parameters for the purpose of generalisation. One aim of this section is to prove or disprove the need for dynamic loop interchange. The result will only be indicative, because the simulator used here is the cache simulator, which operates at a high level of abstraction and approximation.
\subsection{Layers from SqueezeNet and Tiny Darknet}
SqueezeNet \cite{14} is a convolutional neural network topology designed with portability in mind. The architecture includes 10 main convolution layers in total, and they constitute the majority of all computation steps. I have selected 7 layers of SqueezeNet, as well as one layer from Tiny Darknet \cite{25}, to create a small design space with real-world combinations of input parameters across a variety of ranges. Table 4.1 summarises the selected input parameters.
\begin{table}[h]
\centering
\begin{tabular}{||c||P{1.8cm}|P{1.7cm}|P{1cm}|P{1cm}|P{1cm}|P{1cm}|P{1cm}||}
\hline
Layer&
Number of Output Channels&
Number of Input Channels&
Image Width&
Image Height&
Kernel Width&
Kernel Height&
Source
\\ [0.5ex]
\hline\hline
initial-conf&256&32&28&28&3&3&\cite{25}\\\hline
fire3-conv3x3-2&64&16&55&55&3&3&\cite{14}\\\hline
fire4-conv1x1-1&32&128&55&55&1&1&\cite{14}\\\hline
fire4-conv1x1-2&128&32&55&55&1&1&\cite{14}\\\hline
fire7-conv1x1-1&48&384&27&27&1&1&\cite{14}\\\hline
fire9-conv1x1-1&64&512&13&13&1&1&\cite{14}\\\hline
fire9-conv3x3-2&256&64&13&13&3&3&\cite{14}\\\hline
conv-final&1000&512&13&13&1&1&\cite{14}\\ [1ex]
\hline
\end{tabular}
\caption{The parameter values for the selected convolution layers.}
\end{table}
Observing all the signatures of Figure 4.3, we can conclude that there certainly are regions of good permutations shared across all layers. The two valleys around permutation indexes 200 and 300 seem to perform well in all cases in this 1-thread experiment. Another observation is that there is greater variation among the lower performing permutations when comparing the signatures of different layers. The signatures in the second and third column look less noisy because the filter size is 1x1, so the positions of the kernel width and kernel height loops have no significant impact on performance, since they only perform one iteration.
On average there seems to be around a 2 times speedup from the worst loop permutation to the best, although at this step we are more interested in comparing permutations, because the cycle counts come from the cache simulator, which is mainly a functional simulator.
\begin{figure}[h]
\centering
\includegraphics[trim=2cm 0 0 0, width=1.1\textwidth]{compare1thread}
\caption{Cycles for each layer of the small design space for each permutation. The permutations sorting is based on the Steinhaus-Johnson-Trotter algorithm \cite{21} to identify patterns visually.}
\end{figure}
In Figure 4.4, we can see the multi-threaded results. One observation is that good permutations appear more frequently as the number of threads increases. There seem to be cases of superlinear speedup; the reason might be that the threads help each other by prefetching useful data, since the workload is very similar across threads and they are therefore likely to iterate over similar data and blocks. In the layers with 1x1 kernels, the outliers are the cases where the kernel height or kernel width is the outermost loop: OpenMP parallel for cannot exploit any parallelism from a single-iteration loop. Small kernels could be considered a bad option for the multi-threaded design space because of the limited parallelisation they offer. However, it is in any case desirable for permutations with a kernel loop outermost to rank among the bad ones, due to their limited exploitable parallelism in the general use case.
\begin{figure}[h]
\centering
\includegraphics[trim=2.4cm 0 0 0, width=1.1\textwidth]{Cyclescompare}
\caption{Cycles for each layer in 1,2,4 and 8-thread modes for each permutation. The permutations sorting is based on the Steinhaus-Johnson-Trotter algorithm.}
\end{figure}
As validation I have also measured the L2 misses for each of the layer experiments (Figure 4.5). For the multi-threaded versions, the L2 misses scale sublinearly with the number of threads relative to the 1-thread results, because many blocks are shared. There are some exceptions, in which the multi-threaded cases have many more misses due to conflict and capacity misses (layers fire4-1 and fire7 in the region between around 100 and 150): in a subset of permutations, different threads write or read in different regions of the arrays. This subset includes the cases we would consider advantageous because the thread safety measures can be omitted, but as we see here they also place more demands on the L2 cache. In some cases, such as the biggest trough in the fire7 layer, more threads can even produce fewer misses. However, the L2 graphs alone do not give a clear picture of the L1 behaviour, which is more prone to noise from thread scheduling effects.
\begin{figure}[h]
\centering
\includegraphics[trim=2cm 0 0 0, width=1.2\textwidth]{L2compare}
\caption{L2 misses for each layer in 1,2,4 and 8-thread modes for each permutation.}
\end{figure}
\subsection{Synthetic layers}
In this section I present results based on a synthetic design space, in an attempt to generalise about good permutations overall. In Table 4.2, I describe my design choices; there are 216 combinations in total. Instead of variable grouping, it is more common to use other techniques for fast multivariate design exploration, such as latin hypercube sampling (LHS), but here we already know that square images and kernels are a common case.
\begin{table}[h]
\centering
\begin{tabular}{||P{3.5cm}||c|c|c|c||}
\hline
Variable Name&Lower Bound&Upper Bound&Increment&Total\\ [0.5ex]
\hline\hline
Output Channels and Input Channels (equal)&10&210&40&6\\\hline
Image Width Image Height (equal) &10&210&40&6\\\hline
Kernel Width Kernel Height (equal)&1&11&2&6\\\hline
Total&-&-&-&216\\ [1ex]
\hline
\end{tabular}
\caption{The parameter ranges for the selected convolution layers.}
\end{table}
In order to save computation time I have limited the execution to 500 million instructions. Usually the representative region of execution is selected more carefully \cite{26} when limiting simulations, because programs usually consist of many phases. In our case we have isolated only one phase of computation, convolution, which is a highly repetitive procedure. The methods for selecting representative regions are either automated routines \cite{26} or visualising signatures of phases, such as the number of touched program counter values over every N instructions. From the access pattern visualisation of section 3.2 we can observe that the initialisation phase is a very small fraction of the first 100M instructions (out of the 500M run), at least for memory accesses, and that a relatively repetitive pattern then continues until the end. The graph of the full execution is not presented for practical reasons.
In Figure 4.6 I present a random subset of the hamiltonian path index signatures obtained from the design space exploration. We can notice two main families of signatures. The more common one has 3 big troughs and 5 small valleys, while the less common one has bigger valleys and is skewed to the left. The second type is more similar to SqueezeNet's signatures.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{compare1threadreduced}
\caption{Random subset of the cycle signatures for the bigger layer design space.}
\end{figure}
Finally, we would like to see if there is a common well-performing permutation that could be used as a static choice in the convolution code for layers whose parameter values are close to the respective ranges of the design space. To do this, we find the best permutation per layer and then see how well each of the 720 permutations performs globally. As we can observe from the left graph of Figure 4.7, there is a permutation that performed 97\% optimally on average and 60\% in the worst case. There is also another that performed 94\% optimally and 83\% in the worst case. These are two good candidates for evaluation on the Loki simulator.
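The scoring behind these percentages can be sketched in a few lines: for every permutation, compute its speedup relative to each layer's best permutation, then take the mean and the minimum across layers. The function below is my own illustration with made-up dimensions and cycle counts, not the actual analysis script.

```c
#include <assert.h>

#define LAYERS 3   /* illustrative sizes; the thesis uses 216 layers x 720 perms */
#define PERMS  4

/* Score each permutation by its speedup relative to the per-layer
   optimum: avg[p] is the mean across layers, worst[p] the minimum.
   A value of 1.0 means the permutation matches the best on every layer. */
static void score_permutations(const double cycles[LAYERS][PERMS],
                               double avg[PERMS], double worst[PERMS]) {
    double best[LAYERS];
    for (int l = 0; l < LAYERS; l++) {
        best[l] = cycles[l][0];
        for (int p = 1; p < PERMS; p++)
            if (cycles[l][p] < best[l]) best[l] = cycles[l][p];
    }
    for (int p = 0; p < PERMS; p++) {
        avg[p] = 0.0;
        worst[p] = 1.0;
        for (int l = 0; l < LAYERS; l++) {
            double s = best[l] / cycles[l][p];   /* relative speedup, <= 1 */
            avg[p] += s / LAYERS;
            if (s < worst[p]) worst[p] = s;
        }
    }
}
```

Sorting the permutations by avg (or by worst, for a worst-case criterion) yields graphs of the kind shown in Figure 4.7.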
\begin{figure}[h]
\centering
\includegraphics[trim=1.8cm 0 0 0,width=1.03\textwidth]{averages}
\caption{Speedup each permutation achieves on every layer in comparison with the layer's optimal permutation, on average. On the Left, the average speedup is based on the cycles metric, while on the Right it is based on the L2 misses.}
\end{figure}
These results are based on cycle measurements. If we wanted to give emphasis to L2 misses, because they can introduce link contention, we could also evaluate equivalent candidates based on L2 misses. In the right graph of Figure 4.7 we can see a permutation that has an average performance of 85\%, but a worst case below 16\%. The top permutation based on worst case is very similar to the top average one. If this candidate proves to perform better than the ones based on cycles, it may be worth considering dynamic loop reordering.
The winning permutations for a single thread can be seen in Figure 4.8, and there are some similarities between them. The outermost loop, the image height, is common to all three. The three outermost loops are common to the left and right permutations, which represent the best based on cycles and the best based on L2 misses. This could support the hypothesis that the outer loops are more important for determining L2 cache performance, at least when the innermost loops do not produce a working set larger than the size of L2.
\begin{figure}[h]
\centering
\includegraphics[trim=1.5cm 0 0 0,width=1.1\textwidth]{winners2}
\caption{The 3 candidates for best loop permutations for 1 thread. From left to right: Top average speedup (0.966004), Top worst-case speedup (0.831247, average 0.937533), Top speedup based on L2 misses (0.851068)}
\end{figure}
I have also explored an equivalent design space for the multi-thread case. The number of threads for this experiment is set to 8, which represents one tile. The ranges of the layer parameters have been reduced due to time constraints, as has the number of simulated instructions. The total number of layers is 36 and the instruction limit is set to 100 million instructions. Table 4.3 summarises the layer design space.
\begin{table}[h]
\centering
\begin{tabular}{||P{3.5cm}||c|c|c|c||}
\hline
Variable Name&Lower Bound&Upper Bound&Increment&Total\\ [0.5ex]
\hline\hline
Output Channels and Input Channels (equal)&10&170&80&3\\\hline
Image Width and Image Height (equal) &10&170&80&3\\\hline
\multicolumn{1}{||P{3.5cm}||}{Kernel Width and Kernel Height (equal)}&\multicolumn{3}{c|}{1,3,9,11}&4\\\hline
Total&-&-&-&36\\ [1ex]
\hline
\end{tabular}
\caption{The parameter ranges for the selected convolution layers, multi-thread version.}
\end{table}
We also want to find a single top permutation that performs well in the average case for multi-threaded experiments. Since Loki's potential rests on its number of cores, we want to exploit this opportunity in the multi-threaded convolution implementations. In Figure 4.9 we can see the results for each permutation. The main difference from the single-thread version is that there is no longer a single near-optimal loop permutation. When based on cycles, the top permutation is below 0.80 average speedup and its worst case is below 0.50. When based on L2 misses, the graph does not look very different from the 1-thread case, which suggests there might be a need for dynamic loop reordering to achieve the best performance.
We can also observe a big step in Figure 4.9 (left) between positions 239 and 240: exactly one third of the loop permutations have a kernel loop (kernel height or kernel width) in the outermost position. Because the kernel width and kernel height commonly take small values, such as 1, the outermost loop in these cases cannot be parallelised sufficiently, which hurts performance.
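The one-third claim can be checked directly: two of the six loops are kernel loops, so 2/6 of the 720 orders place a kernel loop outermost. A small sketch with illustrative loop names:

```python
from itertools import permutations

# Six loops of the convolution layer; two of them are kernel loops.
loops = ["out_ch", "in_ch", "img_w", "img_h", "ker_w", "ker_h"]

# Orders whose outermost loop is a kernel loop: 2/6 of all 720 orders.
outermost_kernel = [p for p in permutations(loops)
                    if p[0] in ("ker_w", "ker_h")]
```

This yields 240 of the 720 permutations, matching the step observed in the graph.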
\begin{figure}[h]
\centering
\includegraphics[trim=1.6cm 0 0 0,width=1.05\textwidth]{averages2}
\caption{(8-threads) Speedup each permutation achieves on every layer in comparison with the layer's optimal permutation, on average. On the Left, the average speedup is based on the cycles metric, while on the Right it is based on the L2 misses.}
\end{figure}
The winner permutations for multi-thread can be seen in Figure 4.10. It is very interesting that the top based on cycles is very similar to the top in the single-thread experiment; the only difference is one neighbour swap of the innermost loops. The top worst case now has a different outermost loop. The top based on L2 misses has an outermost loop order that is not very promising: it is one of the low points in Figure 4.9 (left), which tells us that ranking by L2 misses was not indicative of performance in this case.
\begin{figure}[h]
\centering
\includegraphics[trim=1.5cm 0 0 0,width=1.1\textwidth]{winners4}
\caption{The 3 candidates for best loop permutations for multi-thread. From left to right: Top average speedup (0.775002), Top worst-case speedup (0.558273 , average 0.691414), Top speedup based on L2 misses (0.210397)}
\end{figure}
The top permutation based on L2 misses has rank 32 (based on misses) in the single-thread version, so parallelisation is not the reason it produces few misses. It has relatively many L1 misses but few L2 misses, and it would be interesting to investigate why. One assumption is that the working set fits well in the L2 cache but not in L1, and that there is little reuse of the large working set between iterations. As we can see, the outermost loop is the output channels, which iterates over a relatively big array; this could thrash the direct-mapped L1 cache while still fitting in L2. Since we are interested in permutations with low L2 misses, we still need to evaluate the resulting permutation candidates. We could select the next top based on L2 misses for many threads.
\chapter{Offline analysis of simulation results on Loop Reordering}
\section{Impact of cache hierarchy}
It is important to investigate whether our analysis holds for different cache hierarchy configurations. This would give us confidence that the winner permutations remain valid across memory hierarchy configurations on Loki, since its reconfigurability extends to the cache sizes. Moreover, if we conclude that the cache hierarchy has little impact on the performance of the top permutations, we will not need to revisit the loop order when searching for an optimal L2 cache size or evaluating other optimisations. Establishing orthogonality between the loop permutation decision and the cache hierarchy is desirable because it reduces the design space for other optimisations and generalises the solutions.
The design space for this experiment is the reduced design space of Table 4.3 times three, because I also explore three very different cache hierarchy configurations. The simulated cache combinations are 1) 16KB L1 with 128KB L2, 2) 32KB L1 with 512KB L2 and 3) 64KB L1 with 960KB L2. The number of simulated instructions was 200 million for every run.
Figure 5.1 is a parallel coordinates visualisation showing how the average performance of each individual permutation across all layers varies when the cache hierarchy changes. Each of the 720 lines represents a loop permutation and each parallel axis one cache configuration. The line colouring is based on the Hamiltonian index (see Chapter 3), so lines of similar colours are permutations that differ only by a couple of neighbour swaps; this is observable in the graph as a kind of clustering of the colours. The main observation is that the top permutations perform almost equally well across all hardware configurations. This is not the case for the non-optimal permutations, which are displaced by a considerable amount across the parallel axes, especially at the first transition from the smallest caches.
\begin{figure}[h]
\centering
\includegraphics[trim=4cm 0 0 0,width=1.1\textwidth]{par}
\caption{Parallel coordinates visualisation of the performance of each of the 720 loop permutations across 3 different cache hierarchy configurations.}
\end{figure}
\section{Impact of multi-threading}
It is also very useful to know whether a good loop permutation remains good for different numbers of threads. In order to attempt generalisation, we assume that the cache sizes are orthogonal to the loop permutations; the previous section supports this to a certain degree. To evaluate the impact of threads we set up a similar experiment. The results are based on three data sets: one for 1 thread, one for 4 threads and one for 8 threads. The first and last are already described in the previous chapter. The 4-thread version has the same design space as the 8-thread version, which is described in Table 4.3.
It is also important to note that this result is less representative of Loki than the previous section, because the cache simulator does not differentiate between reads and writes, and the safety measures can be more expensive than predicted.
In Figure 5.2 we can see how each loop permutation's average performance changes as the number of threads increases. When we move away from the single-thread case, exactly one third of the permutations forms a group of badly performing ones. These are the permutations that have either the kernel height or the kernel width as the outermost loop, which iterates only once or a few times and offers little or no exploitable parallelism. An important observation is that the remaining two thirds of the permutations perform fairly similarly as the number of threads changes, although not to the same degree as in the cache-size impact graph. The most important group of permutations is the top-performing ones; their rank changes across the different numbers of threads, but not to a high degree within the two-thirds group.
\begin{figure}[h]
\centering
\includegraphics[trim=3cm 0 0 0,width=1.1\textwidth]{par2}
\caption{Parallel coordinates visualisation of the performance of each of the 720 loop permutations across different numbers of threads.}
\end{figure}
\section{Dynamic loop reordering}
In this section I perform some offline analysis of the results to identify properties that could benefit dynamic loop reordering scenarios. The idea is to create an algorithm that tests a small number of permutations and selects a well-performing one for the rest of the execution of the program. This could be used in combination with micro-profiling to eliminate the testing time of the candidate permutations. I evaluate some ideas using the already calculated results for the layer design space and all permutation results, mainly for the 1-thread case.
\subsection{Combinations}
Instead of selecting a single static permutation, we could keep a small set of permutations, each of which performs best in different cases. This selection can look very different from the top single permutations, but collectively it should perform as well as or better than the top single permutation on average. Ideally, a micro-profiling \cite{5} mechanism could test a very small number of kernels — the permutation implementations in our case — and select the best performing one for the rest of the execution.
In Figure 5.3 we see the equivalent of Figure 4.7 for combinations of 2. Each dot represents the average speedup that would be achieved if we always selected the better of the two permutations in the pair for each layer of the design space (216 layers). As explained before, the speedup ranges from 0 to 1, where 1 represents selecting the minimum cycle count (out of the 720) collected for each layer. In this view, Figure 4.7 corresponds to combinations of 1 permutation, Figure 5.3 to combinations of 2, and the optimal value comes from the combination of all 720.
What Figure 5.3 (left) really tells us is that if we could always select the better permutation of the top pair for each layer of the design space, we would achieve a 0.99 average speedup relative to the optimal permutation for each layer, with a worst case of around 0.83. The benefit of the top pair is better observed in Figure 5.3 (right): the top pair achieves an average theoretical speedup of 0.91 and a worst-case speedup of 0.68, which is much better than having to choose a single permutation.
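The pair-selection rule can be sketched as below; `best_pair` and the toy speedup table are hypothetical illustrations of the "best of the two per layer" idea, not the project's analysis scripts.

```python
from itertools import combinations

# speedup[p][l] is permutation p's speedup on layer l (illustrative data).
def best_pair(speedup):
    """Return the pair of permutation indices whose per-layer maximum gives
    the highest average speedup across all layers."""
    def pair_score(a, b):
        # For a pair, only the better permutation counts on each layer.
        per_layer = [max(x, y) for x, y in zip(speedup[a], speedup[b])]
        return sum(per_layer) / len(per_layer)
    return max(combinations(range(len(speedup)), 2),
               key=lambda ab: pair_score(*ab))
```

Note how two individually mediocre permutations can form the top pair if their strengths lie on disjoint subsets of the layers.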
\begin{figure}[h]
\centering
\includegraphics[trim=1.6cm 0 0 0,width=1.05\textwidth]{averages3.png}
\caption{Speedup each pair achieves on every layer in comparison with the layer's optimal permutation, on average. On the Left, the average speedup is based on the cycles metric, while on the Right it is based on the L2 misses. When a pair is used instead of a single permutation, only the maximum performance counts when tested for a layer.}
\end{figure}
I have also calculated the corresponding results for the 8-thread design space and the observations are similar. Again, the benefit is more apparent in the results based on L2 misses. We could also try the same experiments for combinations of more than two permutations.
For the top pairs we can apply a machine learning technique to produce a decision tree. If the decision tree can classify the layer parameter combination with a low true error, then it can be used as a simple heuristic to choose between the two permutations of the top pair. If we apply the resulting heuristic in a Loki implementation, it will have no impact on run time because no profiling is required.
\subsection{Random selection}
Testing permutations based on a small random subset would be very straightforward to implement. The question is how big this sample needs to be to give a high probability of finding a well-performing permutation. We call a permutation ``good'' for a given input if it achieves a speedup of at least 0.90 relative to the optimal permutation. In Figure 5.4 there are 216 lines, one per layer, showing the performance of every permutation on that layer. In the worst-case layer, only 80 of the 720 permutations are good by this definition. Applying simple statistics, we need 10 random permutations to find a good one with probability over 68.3\% (one sigma) and 26 for a probability over 95.4\% (two sigma). This result is only for the one-thread data.
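The sampling arithmetic behind these figures can be reproduced with the standard at-least-one-success formula. This is a sketch assuming picks are uniform with replacement, so the exact counts may differ slightly from the numbers quoted in the text.

```python
import math

# If `good` of the `total` permutations are good, k uniform random picks
# all miss with probability (1 - good/total)**k.
def picks_needed(good, total, confidence):
    """Smallest k with P(at least one good pick) >= confidence."""
    return math.ceil(math.log(1.0 - confidence)
                     / math.log(1.0 - good / total))
```

For the worst-case layer (80 good permutations out of 720), this gives 10 picks at one-sigma confidence, matching the figure above.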
\begin{figure}[h]
\centering
\includegraphics[trim=0 0 0 0,width=0.8\textwidth]{random.png}
\caption{Performance of all permutations on all layers, sorted individually. The performance metric is the speedup of the current permutation over the best permutation of the layer. Based on the 1-thread results. }
\end{figure}
While the combinations result seems more practical for dynamic loop reordering, the result of this subsection also suggests a way of reducing the design space when searching for the best permutations for different numbers of inputs or hardware configurations.
\chapter{Evaluation on Lokisim}
In this chapter I present some results from the lokisim simulator. Lokisim is more accurate than my cache simulator because it models more aspects of the architecture than the memory hierarchy alone. It is therefore also slower, which is why the full design exploration was not done in this lower-abstraction simulator. This step evaluates whether the methodology was correct and will be beneficial for future experiments. I also present the results of some other analyses to show the potential of other optimisations. The algorithm implementations were done by the Loki team.
\section{Performance of top candidates}
In this section I present the results of two of the three candidate permutations for the single-thread case on a small design space of layers. The selected candidates are the top one based on average speedup and the top one based on L2 misses. In Figure 6.1 we can see this set of results along with some other arbitrary permutations. The set of results is limited because of the difficulty currently involved in implementing efficient Loki software. Additionally, there are some missing data points, and the design space is limited by shortcomings of the experimental Loki toolchain.
\begin{figure}[h]
\centering
\includegraphics[trim=5cm 0 0 0,width=1.15\textwidth]{Lokig}
\caption{Evaluation of 2 candidates for single-thread along with arbitrary permutations in lokisim.}
\end{figure}
As we can see, the permutation ranked 1 by the analysis performs best among the selected permutations on the majority of the layers, especially in the larger problem-size area (rightmost). We also correctly predicted that the two permutations shown in blue would perform similarly, according to their ranks. The rank-687 permutation performed similarly to, and not worse than, the others. From Section 5.1 we saw that the permutations whose performance is most reliable across different memory hierarchies are the top ones; the experiments on Loki used a different memory hierarchy than the one the analysis was based on, so changes in the ranks below the top were expected.
The rank 1 permutation based on L2 misses seems promising, at least for larger layers, but there are many missing data points, without which the argument cannot be made stronger. The L2 size in this experiment is only 64 KBytes, and the permutation could perform even better if we matched the L2 size of the previous analysis.
\section{Comparison with sparsity algorithms}
I also present a small analysis of existing Loki convolution implementations to show the impact of the sparsity measures and of the loop order. These results are based on implementations that were manually optimised and use different cache partitioning layouts. It would be interesting to see how my candidates compare to these results, but this is left as future work.
In Figure 6.2 we see the results for three implementations on the same layer parameters with different inputs. The inputs are synthetic images of random data with specified weight and activation density. As we can see from the graph, the two dense implementations are completely insensitive to the sparsity characteristics of the input. The sparse algorithm, shown in blue, uses one core that searches for non-zero data while the remaining cores of the tile do the computation on request. In this way it saves a lot of time when the image or the weights array contains many zeros.
\begin{figure}[h]
\centering
\includegraphics[trim=1.65cm 0 0 0,width=1.02\textwidth]{sparse}
\caption{Performance in cycles for convolution by using 3 different algorithms and different inputs. Parameters: Image size 25x25, kernel size 3x3, 128 input and output channels. Architecture configuration: 1 tile for computation and 1 tile for L2 cache (64 Kbytes)}
\end{figure}
We can also see the impact of the loop order choice in these highly optimised implementations. Loop order B is over 4 times faster than loop order A. This is an important finding because we did not observe this kind of speedup in the small-design-space loop order analysis. It tells us that loop order matters more on real hardware, where other bottlenecks can make low-locality access patterns perform even worse.
The dense version of loop order A beats its sparse version only when the input image and the weights each have 100\% density, which is an unrealistic case. Comparing the sparse version of A with the dense version of B is more complicated: the sparse implementation wins in the low-density cases, but B performs better in the majority of runs. Therefore, to decide which of the two would be best we would need to study the average density of activations and weights. Ideally, we would also find a best-performing loop order that is friendly to sparse implementations.
\section{Swapping Tiles for L2 cache}
This experiment explores the impact of swapping computation tiles for a bigger unified L2 cache. The simulated hardware instance has 16 tiles in total. I also explore the potential of selecting the optimal number of tiles for L2 cache and computation.
As a first experiment I tried all the combinations of the number of tiles for each purpose on a single layer. In Figure 6.3 we see how performance is affected by the tile configuration. For this example, the optimal number of compute tiles is 10, with the remaining 6 tiles used for L2. One observation is that the more computation tiles there are, the more useful a bigger L2 cache is. The diagonal line corresponds to full utilisation of the tiles: the number of L2 tiles equals 16 minus the number of computation tiles, so these configurations can be expressed with a single parameter.
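The full-utilisation constraint can be written down directly; this tiny sketch (names illustrative) simply enumerates the configurations implied by a 16-tile chip, with at least one tile for each purpose.

```python
# Full utilisation on a 16-tile chip: compute tiles + L2 tiles = 16,
# so a single parameter (the compute-tile count) fixes the configuration.
TOTAL_TILES = 16
configs = [(compute, TOTAL_TILES - compute)
           for compute in range(1, TOTAL_TILES)]
```

This yields the 15 configurations swept in the per-layer experiment described below.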
\begin{figure}[h]
\centering
\includegraphics[trim=1.65cm 0 0 0,width=1.03\textwidth]{tiles}
\caption{Performance in cycles for all possible combinations of the number of tiles for computation and the number of tiles for L2}
\end{figure}
I also explore the potential of dynamically selecting the optimal configuration for each layer. In the following experiment I used a small design space of layers consisting of different numbers of input and output channels. I ran all 15 full-utilisation combinations for each of the layers and found the best performing configuration overall, then compared it to the optimal per-layer configuration to see what could be gained by making this decision on the fly. The best overall tile configuration in this case was 8 tiles for computation and 8 tiles for L2 cache. Figure 6.4 shows that there is a common winner among the combinations with larger numbers of input channels; the average speedup from selecting the optimal tile configuration would be only 1.5\%, and at most around 12\%. This speedup may not justify introducing dynamic tile configuration for convolution, but it would be useful to verify this observation with other parameter combinations as well.
\begin{figure}[h]
\centering
\includegraphics[trim=2.45cm 0 0 0,width=1.015\textwidth]{speedups}
\caption{Speedup of the best tile configuration over the best average configuration (8 tiles for computation, 8 for L2) for a set of layers with different numbers of input channels and output channels.}
\end{figure}
\section{Motivation for adaptive algorithms}
In the last section we saw that there would not be much need for dynamically swapping tiles to L2 for the respective layer parameter ranges. However, it would still be useful to see whether we could have an accurate prediction mechanism that makes correct choices under different configurations. In Figure 6.5 we can see how well the recent IPC predicts the total execution time for 15 different tile configurations. The drawn lines are smoothed for clarity. As we can see, after the initialisation phase the recent IPC is a very good indicator of overall performance, because it remains steady throughout the convolution execution thanks to the simple access patterns and algorithm that convolution has. IPC could be used to profile small sections of different versions of the code, or other configurations, to make correct choices quickly and therefore with little overhead.
\begin{figure}[h]
\centering
\includegraphics[trim=1.3cm 0 0 0,width=1.019\textwidth]{localIPC}
\caption{Recent IPC during the full convolution execution for 15 different tile configurations. }
\end{figure}
\chapter{Conclusions}
\section{Summary}
In this project I explored a design space of optimisations for Loki by using two simulation frameworks of different levels of abstraction.
From the loop order analysis I produced some candidate loop permutations for evaluation on lokisim, by performing an exhaustive search for the best loop permutations in a fast cache simulator. This kind of loop reordering analysis could also be applied to similar problems with many independent nested loops.
From the offline analysis of the simulator data, I explored some properties of loop orders that could be used for dynamic loop reordering or faster design space exploration. I showed that instead of a single top loop permutation we can select the top combination of N loop permutations, which can collectively perform near-optimally on average. I also explored the prospects of selecting a limited random sample of permutations to find a well-performing one.
I also expanded the design space to 3 different hypothetical architectures and 3 levels of parallelism. This demonstrated that the top orders still perform near-optimally when the cache hierarchy changes and when the number of threads grows from one to eight. It is usually challenging to establish orthogonality when the design space is limited by the total simulation time, but in this case a variety of results from different hardware and software configurations, including on lokisim, validated the stability of the top loop orders. The validation on lokisim also confirmed the correctness and usefulness of the methodology.
In the chapter on evaluation on lokisim I also explored the potential of other optimisations, such as dynamically swapping tiles to L2 cache, and the sparsity algorithms. We concluded that dynamically changing the tile configuration would offer limited speedup (around 10\% at most) over a fixed optimal tile configuration. In the same chapter, however, I showed that performance adaptiveness is very applicable to a convolution algorithm, because even across parallel versions with different bottlenecks the recent IPC accurately predicts the total execution time. This finding also validates the methodology, because in the simulations we stopped execution early to allow wider design exploration.
\section{Future Work}
There are many ways this analysis can be expanded. The design space of each experiment was limited to save simulation time, and it would be useful to repeat the experiments for different numbers of threads and layer configurations (input parameters). There are also many other optimisations that can be applied, and together they form a wide design space for future exploration.
In Section 5.2 I showed that there is a correlation between the top loop permutations as the number of threads changes. This was limited to the single-core, four-core and eight-core configurations. It is a positive result, suggesting that the top loop permutations will scale to larger numbers of threads, but we would like to see whether this continues for more threads. My cache simulator already supports multi-tile configurations, and it would be very useful to know whether the selection of best permutations remains the same for them.
Regarding access pattern manipulation, our focus was on the loop order. However, since the block access pattern differs from the reference access pattern, the way each multi-dimensional array is laid out in memory could have an important impact on performance. It would therefore be another interesting optimisation technique to explore, possibly combined with the permutation analysis. One hypothesis is that the permutations would give different results, but that there would be equivalent permutations with similar performance.
Another way of improving cache performance is loop tiling \cite{28}, a loop transformation in which a nested loop is split into smaller blocks to shrink the working set and produce fewer capacity misses. This transformation would be a natural extension of this analysis because the two optimisations seem strongly dependent on each other's decisions, and the combined design space could lead to more efficient solutions.
The practical evaluation of a form of dynamic loop reordering would probably benefit from this analysis. A current approach for profiling and selecting the best algorithm is micro-profiling \cite{5}, which has been shown to increase the performance of massively parallel software on GPUs by selecting the optimal kernel for the whole execution of the program and making changes to the algorithm on the fly. For neural networks, approaches similar to micro-profiling could be evaluated for further optimisation at run time. Some algorithms have already been discussed for micro-profiling, or for heuristics to select the best-performing loop permutation at run time, and they could easily be evaluated on specific simulators and hardware as a test platform. For evaluating micro-profiling on conventional architectures, the PAPI framework \cite{29} could be used to read the performance counter values and to compare and switch permutations during run time.
There are also some other ideas I would like to explore for dynamic loop reordering. The first is to apply parameter optimisation using machine learning methods to explore permutations efficiently at run time; there would be a single parameter, the Hamiltonian index, whose spatial locality could save search time. Another idea is to search using breadth-first search on the permutation graph whose edges are neighbour swaps (see Figure 4.2). The latter would probably perform better, because the graph contains much more locality information than a linear function, although graph traversal algorithms could be more difficult and costly to implement in a micro-profiling environment.
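The neighbour-swap BFS idea can be sketched as follows; the function names are hypothetical, and the sketch ignores the cost model, showing only how a breadth-first traversal enumerates loop orders near a seed permutation.

```python
from collections import deque

def neighbours(perm):
    """All permutations reachable from `perm` by one adjacent swap."""
    out = []
    for i in range(len(perm) - 1):
        p = list(perm)
        p[i], p[i + 1] = p[i + 1], p[i]
        out.append(tuple(p))
    return out

def bfs_orders(seed):
    """Yield every permutation in breadth-first order from `seed`, so
    orders differing by fewer neighbour swaps are visited first."""
    seen, queue = {seed}, deque([seed])
    while queue:
        p = queue.popleft()
        yield p
        for q in neighbours(p):
            if q not in seen:
                seen.add(q)
                queue.append(q)
```

A run-time searcher would cut the traversal off after a small budget of tested orders rather than visiting all 720, keeping the profiling overhead bounded.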
As we have shown from experimentation on lokisim, measures such as IPC remain steady during the parallel execution of the convolution. This means that a micro-profiling mechanism could perform well with a small amount of sampling for this application. The results are very promising for adaptive convolution algorithms, because we found a simple metric for quickly comparing implementations.
\addcontentsline{toc}{chapter}{Bibliography}
\section{Introduction}
\citet{Schunker13} have shown that relatively shallow, horizontally
propagating f and p modes are sensitive to both the magnetic and
thermal structure of a sunspot. They found that travel-time measurements
can constrain the height of the Wilson depression to a precision of $\sim$50~km.
\citet{Lindsey10} showed that rays approaching the sunspot
almost vertically from below are rather insensitive to the magnetic field.
Rays approaching vertically from below may thus be sensitive only to the thermal
structure, or the Wilson depression. Developing a helioseismic method
to reliably measure the Wilson depression would be a significant advance,
considering all the controversy surrounding helioseismic sunspot
measurements \citep{Gizon09,Moradi10}.
\citet{Lindsey10} used the signal in the sunspot to
cross correlate with the ingression and egression holography signals to
get travel time perturbations. This method has the disadvantage of using
the signal in the sunspot, which puts an additional level of uncertainty
on the results.
\citet{Chou00} computed the cross covariance between the ingression and
the egression signal
to derive travel time perturbations. By using the Gabor wavelet to fit
the cross covariance, both a phase time and an envelope time were derived.
\citet{Chou00} found that the envelope time yields a considerably larger
signal than the phase time, much as we find in this paper.
The envelope time also has a larger error than the phase time, with the
result that the signal-to-noise ratio for the phase time is larger than
for the envelope time. However, as we see later in Fig.~\ref{Figxc},
the envelope time signal for the
umbra is not a simple multiple of the phase time signal and so there
is hopefully independent information that can be extracted.
What we are proposing here is a technique that does not
use the signal in the sunspot, and is therefore akin to the original
sunspot work with the Hankel transform which did not use signals from the
interior of the spot \citep{Braun87}, and more modern techniques, such as
those described by \citet{Cameron08}, \citet{Schunker13}, and
\citet{Liang13}, which also do not
use the signal in the sunspot.
\begin{figure*}
\centering
\includegraphics[trim=0 75 0 100]{raysplot_paper.eps}
\caption{Rays for the two-skip method in a vertical slice through the
center of the sunspot.
The umbral and penumbral locations are indicated by the (somewhat
exaggerated) pedestals near $x=0$. The horizontal size is
derived from the ten days of data used (Nov. 14-23, 2013). The average
umbral radius is $11.5\rm{\,Mm}$ and penumbral radius is $25.1\rm{\,Mm}$.
The solid curves
are the two-skip ray paths for the range of 1-skip distances
$\Delta=75-146\rm{\,Mm}$ used in the analysis. The dashed curves are the
corresponding 1-skip rays. The analysis consists of calculating temporal
cross covariances between endpoints (e.g. A and B). For the curves drawn,
an output map point would be associated with the location halfway (C) between
the endpoints, or at the center of the spot. By moving the endpoints in
longitude and latitude, a map is constructed. The short vertical lines
at $\pm 36\rm{\,Mm}$ indicate the size of the map shown later
in Fig.~\ref{FigIc} and Fig.~\ref{Figmap}.
}
\label{FigRay}
\end{figure*}
The new time-distance technique presented here
correlates signals from opposite sides
of the spot and uses the signal that putatively bounces halfway in
between to infer properties of the spot (Fig.~\ref{FigRay}).
That such a two-skip
signal is sensitive to the presence of the spot was first shown by
\citet{Duvall95}. Two-skip signals in sunspots were used by
\citet{Chou09} to separate absorption, emissivity reduction and
local suppression of sources.
\begin{figure*}
\centering
\includegraphics{ic_cont_3_horz.eps}
\linebreak
\caption{
Continuum images of the sunspot in NOAA active region 11899.
The left image is a single continuum image near the central meridian passage on
Nov. 18, 2013. The middle image is an average of the continuum images
for the ten days analyzed (Nov. 14-23, 2013). For the left and middle
images, the horizontal size is the same as that of the eventual travel time
maps. To identify the umbral-penumbral boundary, a contour of the
ten-day average intensity at the level of 0.4 is plotted (red). The
penumbral-photospheric boundary is represented by the contour at
0.85 (blue).
To derive the intensities,
a fit to limb darkening is done with the sunspot excluded
and the intensity is normalized to unity
for the background photosphere.
The right image is also the ten-day average of the
continuum intensity, but showing the entire field used to derive the
travel-time maps. The locations A, B from Fig.~\ref{FigRay} are shown.
}
\label{FigIc}
\end{figure*}
\section{Data analysis} \label{data}
We used observations from the HMI instrument \citep{Schou12} on board the SDO
satellite.
As one of the main constraints of the present project was to use
rays that impinge on the sunspot from below in an almost vertical
direction and to not use endpoints that are in sunspots, it seemed best
to find a relatively large spot that was reasonably isolated and
did not change very much during its disk passage. NOAA active region
11899 satisfies these requirements very nicely. Continuum images of the
sunspot are shown in Fig.~\ref{FigIc}. Doppler, continuum and magnetic
data from the HMI instrument \citep{Schou12} from Nov. 14-23, 2013 were
used for the sunspot analysis. The data were broken up into ten one-day
intervals. Each day was tracked using the program described in
\citet{Duvall13} with a sampling in longitude and latitude of $0.03\deg$,
critically sampling the HMI images at disk center.
A region covering $30\deg$ in longitude and in latitude
centered on the spot was tracked.
It was found that the computation of the cross covariances was too
time-consuming to be done with the $0.03\deg$ sampling and so the datacubes
were filtered and resampled at $0.06\deg$. This filtering was done
by Fourier transforming the original datacubes, truncating the transforms
at half the spatial Nyquist frequencies, and inverse transforming.
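The Fourier truncation step can be sketched as follows. This is a minimal illustration of the described resampling, not the authors' pipeline; the function name and array shapes are our own.

```python
import numpy as np

# Sketch of the resampling step described above (our own illustration,
# not the production code): Fourier transform spatially, truncate at
# half the Nyquist wavenumbers, and inverse transform, halving the
# sampling from 0.03 deg to 0.06 deg.
def fourier_downsample(cube):
    """cube: (nt, ny, nx) with ny, nx divisible by 4."""
    nt, ny, nx = cube.shape
    f = np.fft.fftshift(np.fft.fft2(cube, axes=(1, 2)), axes=(1, 2))
    # keep the central half of the wavenumbers in each spatial direction
    f = f[:, ny // 4: ny // 4 + ny // 2, nx // 4: nx // 4 + nx // 2]
    out = np.fft.ifft2(np.fft.ifftshift(f, axes=(1, 2)), axes=(1, 2))
    return out.real / 4.0          # renormalize for the smaller grid

small = fourier_downsample(np.ones((2, 8, 8)))   # -> shape (2, 4, 4)
```

Truncating in the Fourier domain rather than simply taking every second pixel avoids aliasing power from wavenumbers above the new Nyquist limit.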
The central Carrington longitude, latitude,
and the rotation rate were adjusted daily to keep the spot centered.
The center of the sunspot resided in the small latitude range $5-5.1\deg$ over
the ten days. A phase speed filter of the same form as the one applied in
\citet{Duvall13} was applied (FWHM $\Gamma=400$, units are spherical harmonic
degree $\ell$) which transmits both the
first and second skip over the range of distances used ($\Delta=12-24\deg$
of first skip distance).
This filter has a central phase speed of 141 km/s.
A quiet-sun reference for the travel times was derived by doing the
same analysis on a region centered at the same latitude for the
days Nov. 8-16, 2013. The Carrington longitude of this region at
central meridian passage is $121.8\deg$.
Cross covariance maps were computed for each of the ten days using the
program described previously \citep{Duvall03}.
This method of computing the cross covariance at opposite sides of a
circle and associating the resultant travel time with a point at depth
below the midpoint of the two locations is related to the seismic technique
of common depth point (CDP) measurements \citep{McQuillin85}.
A departure from the previous analysis is the use of eight sectors instead
of the four quadrants. This makes it possible to better study
any anisotropy of the mean signal, from which we might infer an anisotropy
of the wave speed due to the presence of a magnetic field. For
the present study, the covariance maps have been averaged over the eight
sectors to obtain a mean signal.
The covariance maps for the different days are combined with a weighting
to remove the heliocentric angle dependence of the amplitude of the
oscillation signal.
\begin{figure*}
\centering
\includegraphics{covar_paper.eps}
\caption{Covariances averaged separately over the umbra, penumbra, and
the quiet-Sun analysis region for the full $\Delta$ range. The top frame is
for $\nu=4.0\rm{\,mHz}$ and the bottom frame is for $\nu=3.1\rm{\,mHz}$.
The first and second skip
areas are averaged separately over $\Delta$ and then stitched together.
In both frames, the first wave packet corresponds to the first skip and
the second wave packet is for the second skip. There is very little
difference between umbra, penumbra, and quiet Sun for the first skip,
which is expected because of the large depth below the spot for the
first-skip rays. There are sizeable differences
between the quiet Sun times and the
spot times for the second skip, which might be expected. Differences
are seen in both envelope and phase travel times and the covariance
amplitudes.
}
\label{Figcv}
\end{figure*}
As a first step, covariances were averaged separately over the umbra,
the penumbra, and the quiet-Sun analysis region. The results for two frequency
bandpasses (centered at $3.1\rm{\,mHz}$ and $4.0\rm{\,mHz}$) are
shown in Fig.~\ref{Figcv}.
The filters are Gaussian with full-width-half-maximum (FWHM) of $1.0\rm{\,mHz}$.
Several features are immediately obvious. For the first skip, whose
wave packet is near 70 minutes, there is little if any difference between
the spot regions and the quiet Sun.
This is expected because of the large depths of the first skip rays
and is a confirmation of the shallow nature of sunspots.
For our range of $\Delta$, the depths of the first skip rays are
in the range $51-104\rm{\,Mm}$.
However, for the second skip the
situation is quite different. The phase times and envelope times are shorter
for the spot regions than for the quiet Sun for both frequency bandpasses.
In addition, the amplitude of the penumbral covariance is considerably
lower than for the umbra or quiet Sun.
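The Gaussian frequency filtering can be sketched as follows; this is our own minimal illustration (function name and cadence chosen for the example), not the production pipeline.

```python
import numpy as np

# Sketch (ours) of the Gaussian frequency bandpass used for the
# covariance averages: FWHM 1.0 mHz centred at nu0, applied
# multiplicatively in the Fourier domain.
def bandpass(signal, dt, nu0, fwhm=1.0e-3):
    nu = np.fft.rfftfreq(signal.size, d=dt)             # Hz
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    filt = np.exp(-0.5 * ((nu - nu0) / sigma) ** 2)
    return np.fft.irfft(np.fft.rfft(signal) * filt, n=signal.size)

# a sinusoid at the filter centre passes unchanged; one a few mHz
# away is suppressed by many orders of magnitude
t = np.arange(1800) * 45.0                  # 45 s HMI cadence
nu0 = 251.0 / (1800 * 45.0)                 # on-grid frequency near 3.1 mHz
sig = np.cos(2.0 * np.pi * nu0 * t)
```

The same filter shape, recentred at $4.0\rm{\,mHz}$, gives the second bandpass used in the figure.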
\begin{figure*}
\centering
\includegraphics{xc_sk2_vs_f.eps}
\caption{Two-skip analysis of $\nu$ dependence of the cross covariance for
the umbra (left panels) and quiet Sun (right panels).
Gaussian filters with FWHM=$0.8\rm{\,mHz}$
are applied to the cross covariances.
The umbral times are averaged over the full ten days and the quiet-Sun times
are averaged over nine days.
The grey scale image in the upper left (right) is the $\nu$ resolved
cross covariance
for the umbra (quiet Sun), scaled separately for each $\nu$.
The blue and green curves are the results of the Gabor wavelet fitting for
$\tau_p$ and $\tau_e$.
In the middle row
left (right) plot is shown the envelope of the cross covariance
computed from the analytic signal \citep{Bracewell65} for the umbra (quiet Sun). Overplotted are the same blue and green curves from the top row.
In the lower left, the umbral and quiet Sun $\tau_p$ and $\tau_e$
are shown. The errors are smaller than the symbols.
In the lower right, the difference travel times (umbral minus
quiet Sun) are shown.
}
\label{Figxc}
\end{figure*}
An important issue is how the waves are reflected below the umbra,
which is related to the depth dependence of the acoustic cutoff
frequency [$\omega_c$].
This can be studied by
measuring travel times versus the temporal frequency [$\nu$]. For the
quiet Sun, this has been done by \citet{Jefferies94} for the envelope
travel times. For this study,
the average quiet-sun cross-covariance and that for the umbra are
frequency filtered with a Gaussian of FWHM $0.8\rm{\,mHz}$ and subsequently
fit with a Gabor wavelet \citep{Kosovichev97}
to obtain phase travel times [$\tau_p$]
and envelope travel times [$\tau_e$].
The covariances were averaged over distance by shifting the correlations
for each $\Delta$ relative to that at the central distance.
Fitting results are displayed in Fig.~\ref{Figxc}.
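A Gabor-wavelet fit of this kind can be sketched as follows. This is our own illustration on synthetic data; the wavelet parametrization and values are assumptions, not the paper's fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of a Gabor-wavelet fit to a cross covariance, yielding an
# envelope time tau_e and a phase time tau_p (our own illustration;
# the exact wavelet form and parameters used in the paper may differ).
def gabor(t, A, omega, gamma, tau_e, tau_p):
    return A * np.exp(-(gamma * (t - tau_e))**2) * np.cos(omega * (t - tau_p))

t = np.arange(60.0, 220.0, 0.75)            # minutes, at a 45 s cadence
rng = np.random.default_rng(1)
data = (gabor(t, 1.0, 2*np.pi*0.2, 0.05, 140.0, 141.0)
        + 0.01 * rng.normal(size=t.size))   # synthetic noisy covariance
p0 = (0.9, 2*np.pi*0.2, 0.04, 138.0, 140.0) # starting guesses
popt, _ = curve_fit(gabor, t, data, p0=p0)
tau_e, tau_p = popt[3], popt[4]
```

Note that $\tau_p$ is only defined within a period of the carrier, so a starting guess near the correct phase peak is essential.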
\begin{figure*}
\centering
\includegraphics{tm_mn_b.eps}
\caption{Travel-time maps are computed from ten-day averages of cross
covariances. The cross covariances are $\nu$-filtered with filters
centered at $3.1\rm{\,mHz}$ and $4.0\rm{\,mHz}$ before the travel times are
fitted with
the Gabor wavelets.
Similar nine-day average quiet-Sun maps are averaged
over the map area and $\nu$-filtered in the same way to construct reference
travel times, which are subsequently subtracted from those of the sunspot
maps.
The phase (envelope) times $[\tau_p]$ ($[\tau_e]$) are plotted in the upper (middle)
row for the $3.1\rm{\,mHz}$ bandpass in the left column and for the
$4.0\rm{\,mHz}$ bandpass in the right column.
Overplotted on the upper four maps are the contours of the umbral-penumbral
boundary (red) and the penumbral-photosphere boundary (blue) as shown in
the earlier figure.
In the lower left and lower right are shown cuts in the north-south
direction averaged over the east-west direction between the pair of
vertical white lines shown overplotted on the maps. Error bars are
computed from the scatter of the east-west averages. Also overplotted
on these cuts are the average umbral-penumbral boundary (heavy black
lines) and the penumbral-photosphere boundary (thin dashed lines).
}
\label{Figmap}
\end{figure*}
It is likely that there is independent information in the phase times
$[\tau_p]$ and in the envelope times $[\tau_e]$ (see Sect.~\ref{raysec}).
In the ray theory, $\tau_p$ is obtained by integrating the inverse
phase velocity along the ray. $\tau_e$ is obtained by integrating the
inverse of the group velocity along the ray. A theoretical $\tau_p$
(which might be termed the `true phase time')
would have a unique value while for our observations the phase time
from the cross covariance is only defined within a period. A way to
(potentially) resolve this nonuniqueness is to go to the high frequencies
above the peak acoustic frequency at $5.2\rm{\,mHz}$. The pseudomodes at
high frequencies correspond to purely acoustic waves that propagate
outward through the atmosphere. The `true' phase peak at high $\nu$ should
become constant with $\nu$. The phase peaks at larger time will slope down
towards this one while the phase peaks at shorter time will slope upwards
towards the true one. In addition, the envelope times should also
become constant with $\nu$ at high frequencies and be equal to the
true phase times in what is a purely acoustic situation.
In order to test that we are following a single phase peak from low
to high $\nu$, the two-skip cross covariance is shown in Fig.~\ref{Figxc}a
and Fig.~\ref{Figxc}b.
The cross covariance for each frequency filter is shown normalized to
its peak value so that the falloff of amplitude with $\nu$ of several
orders of magnitude is hidden.
For the umbra (Fig.~\ref{Figxc}a), there is no ambiguity in following the
phase peak. The phase peak that is near the envelope peak at $3\rm{\,mHz}$ is
normally the one that is followed. For the quiet Sun (Fig.~\ref{Figxc}b),
the phase peaks get a little confused near $5.5\rm{\,mHz}$ with an extra
feature appearing. The phase time differences are not computed
for $5.5\rm{\,mHz}$ and above because of this issue. For the envelope times,
there is a similar problem.
In Fig.~\ref{Figxc}c and Fig.~\ref{Figxc}d,
the envelope of the umbral and quiet Sun
covariances computed by an analytic signal formalism is shown with the
travel times measured from the Gabor wavelet fitting superimposed.
The envelope times should be located at the peak of
the envelope computed in this way. It is immediately apparent that
the dip in $\tau_e$ for the quiet Sun (Fig.~\ref{Figxc}d) near
$5.5\rm{\,mHz}$ is not present for the umbra (Fig.~\ref{Figxc}c). This dip
was observed in three separate ways for the quiet Sun by \citet{Jefferies94}.
The travel times
$\tau_e$ and $\tau_p$ for the umbra and quiet Sun are compared in
Fig.~\ref{Figxc}e. The $\tau_p$ for the umbra are in general shorter
than for the quiet Sun. Except near the confusing region of $5.5\rm{\,mHz}$,
this is also true for $\tau_e$. The current interpretation of these
shorter times is that the waves are reflected at a lower geometrical
level in the umbra implying a shorter path length and hence shorter
times \citep{Lindsey10}.
It may be useful to consider the quiet Sun times as a reference and
to take the difference of umbra minus quiet Sun. These differences
for $\tau_e$ and $\tau_p$ are shown in Fig.~\ref{Figxc}f. The
$\tau_p$ are very well determined. Both $\tau_e$ and $\tau_p$ become
small near $2\rm{\,mHz}$. Presumably this is because the waves are reflected
below the sunspot and so these waves do not ``see'' the sunspot. This
suggests a way to avoid the effects of solar activity when trying to
measure global properties like meridional circulation. That would be
to observe at low frequencies. This seems difficult in time-distance
analysis as the signal becomes noisy.
Spatial maps of the travel times referenced to the quiet Sun are
shown in Fig.~\ref{Figmap}. The two bandpasses discussed previously,
centered at 3.1 and $4.0\rm{\,mHz}$, were used. The phase times are much less
noisy than the envelope times, as we would expect. The phase times
for $4.0\rm{\,mHz}$ are roughly a factor of two larger than those at
$3.1\rm{\,mHz}$ in
the umbra, in agreement with Fig.~\ref{Figxc}.
It is interesting that the phase and envelope times are still significant
at the edges of the field. This is not the case for the theoretical
analysis of the next section.
A major uncertainty about these maps is their horizontal
resolution. It is possible that the edge effects are caused by poor
horizontal resolution, or they may be related to the acoustic moat
reported by \citet{Braun98}.
It would be useful to know how far the travel times are detectable
which could be done by extending the maps.
\section{Ray simulation} \label{raysec}
To better understand the results of our two-skip analysis of solar data, we perform numerical experiments on the model sunspot of \citet{PrzSheCal15aa} using standard magnetohydrodynamic (MHD) ray theory in plane-parallel geometry, as described for example by \citet{MorCal08ab} and \citet{NewCal10aa}, founded on the dispersion function
\begin{multline}
{\mathcal{D}}=\omega^2 \omega_{\rm c}^2 a_p^2 k_{\rm h}^2 +
(\omega^2-a^2k_{\scriptscriptstyle\parallel}^2)\\
\times\left[\omega^4-(a^2+c^2)\omega^2 k^2+a^2c^2k^2k_{\scriptscriptstyle\parallel}^2 + c^2N^2 k_{\rm h}^2
-(\omega^2-a_z^2k^2) \omega_{\rm c}^2\right] . \label{DF}
\end{multline}
Here $c$ and $a$ are the sound and Alfv\'en speeds respectively, $\omega$ is the circular frequency, $a_z$ is the vertical component of the Alfv\'en velocity, and $a_p$ is the component perpendicular to the plane containing wave vector $\mathbf{k}$ and gravitational acceleration $\mathbf{g}$. The {Brunt-V\"ais\"al\"a} frequency $N$ is defined by $N^2=g/H-g^2/c^2$ where $H$ is the density scale height, $ \omega_{\rm c}$ is the acoustic cutoff frequency, and $ k_{\rm h}$ and $k_{\scriptscriptstyle\parallel}$ are the horizontal and field-aligned components of the wave vector respectively.
The associated ray equations are
\begin{equation}
\deriv{\mathbf{x}}{\tau} = \pderiv{{\mathcal{D}}}{\mathbf{k}}, \quad \deriv{\mathbf{k}}{\tau} = -\pderiv{{\mathcal{D}}}{\mathbf{x}}, \quad \deriv{t}{\tau} = -\pderiv{{\mathcal{D}}}{\omega}, \quad \deriv{S}{\tau}=\mathbf{k}\hspace{1.5pt}{\boldsymbol{\cdot}}\deriv{\mathbf{x}}{\tau}, \label{ray}
\end{equation}
where $\mathbf{x}=(x,y,z)$ is position, $S$ is phase, $t$ is time, and $\tau$ parametrizes a ray.
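To make Equations (\ref{ray}) concrete, the following toy sketch (our own, not the paper's code) integrates them for a purely acoustic dispersion function $D=c^2(z)k^2-\omega^2$, with the overall sign chosen so that $t$ increases along the ray, in a layer with $c^2(z)=-gz$. All numbers are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 2.74e-4                      # Mm s^-2
omega = 2.0 * np.pi * 3.3e-3     # rad s^-1 (a 3.3 mHz wave)

def rhs(tau, y):
    """Ray equations for D = c^2(z) k^2 - omega^2 with c^2 = -g z."""
    x, z, kx, kz, t = y
    c2 = -g * z                  # squared sound speed, Mm^2 s^-2
    k2 = kx**2 + kz**2
    return [2.0 * c2 * kx,       # dx/dtau  =  dD/dkx
            2.0 * c2 * kz,       # dz/dtau  =  dD/dkz
            0.0,                 # dkx/dtau = -dD/dx (no x dependence)
            g * k2,              # dkz/dtau = -dD/dz
            2.0 * omega]         # dt/dtau  = -dD/domega

# launch downward from z0 = -1 Mm with D = 0 satisfied initially
z0, kx0 = -1.0, 1.0
kz0 = -np.sqrt(omega**2 / (-g * z0) - kx0**2)
back_at_launch_depth = lambda tau, y: y[1] - z0
back_at_launch_depth.direction = 1.0
back_at_launch_depth.terminal = True
sol = solve_ivp(rhs, [0.0, 2.0e4], [0.0, z0, kx0, kz0, 0.0],
                events=back_at_launch_depth, rtol=1e-9, atol=1e-12)
x_skip, t_skip = sol.y_events[0][0][0], sol.y_events[0][0][4]
```

By symmetry the vertical wavenumber returns with its sign flipped at the end of the skip, which provides a simple consistency check on the integration.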
The sunspot model is magnetohydrostatic and axisymmetric, based on the method of \citet{KhoCol08aa}, and has been tuned to be both spectropolarimetrically and helioseismically quite realistic, within the confines of the static axisymmetric assumption. Based on the continuum formation height of 5000{\,\AA} radiation, the umbral centre representing the Wilson depression is at $z=-600$ km, where the magnetic field strength is 3.09 kG.\footnote{The $z=0$ height represents an estimate of the radius of the solar surface obtained from the quiet Sun background model. However, it is slightly offset from the observed surface owing to minor changes in the synthesized continuum intensities obtained from the sunspot model. Our quiet Sun $\log(\tau_{5000})=1$ surface is actually at about $z=-49$ km. } The model does not contain a ``penumbral shelf'', with the magnetic and thermal features being continuous and smooth. For purposes of interpretation, the umbral radius ($R_\text{umbra}=6.6$ Mm) is characterized by $B_z=1.86$ kG \citep{JurBelSch15aa}, and we have arbitrarily identified the edge of the penumbra with the radius where the continuum formation height drops to $-70$ km ($R_\text{penumbra}=19.7$ Mm). The spot is centred at $x=0$, $y=0$ in a Cartesian coordinate system. Curvature of the Sun is neglected, which will have little effect since we work with travel time differences produced by near-surface perturbations rather than raw travel times.
No attempt has been made to adjust the sunspot model to fit the AR11899 spot analysed in Section \ref{data}. Exact correspondences therefore cannot be expected. Nevertheless, broad correspondences (and contrasts) will prove instructive.
As pointed out forcefully by \citet{SchFle98aa,SchFle03aa}, there is no unique acoustic cutoff frequency $\omega_c$ in general. It depends on which variables are used in expressing the wave equation, and the way in which the eikonal approximation is applied. The two most commonly used formulae are the so-called ``isothermal'' cutoff frequency, $\omega_c=\omega_I=c/2H$, and the form of \citet{DeuGou84aa}, $\omega_c^2 = \omega_{DG}^2=(c^2/4H^2)(1-2H')$. The dimensionless number $H'=\mathrm{d} H/\mathrm{d} z$ is negative, roughly $-0.5$, throughout most of the convection zone, so $\omega_{DG}\sim1.4\,\omega_I$ in the interior. They are more comparable in the low atmosphere, and in fact are identical in an isothermal atmosphere where $H'=0$. The isothermal form arises naturally in the derivation of the dispersion relation in the appendix of \citet{NewCal10aa}, but that is because only leading order terms in variations of the background ``slowly varying'' atmosphere are retained, so $H'$ does not appear. We mostly employ $\omega_{DG}$ throughout, taking care to smooth the tabulated atmosphere where necessary to avoid unphysical wild oscillations, as it is certainly more firmly founded for the non-magnetic case. However, some comparisons derived with $\omega_I$ are also presented. The main effect of using $\omega_{DG}$ rather than $\omega_I$ is that rays reflect about 200 km lower near the surface, and hence are potentially less affected by the magnetic field.
Our experiment consists of launching a grid of 3 mHz and 4 mHz rays from their upper turning points at $x=-60$ Mm, $y=0$. These rays are designed to complete their first skip at the integer points of the $(-25\,{\rm Mm},25\,{\rm Mm})\times(-25\,{\rm Mm},25\,{\rm Mm})$ square grid centred on the origin, if the spot is not present. In reality, the spot shifts these points very slightly, as can be seen in the left column of Figure \ref{fig:scatt}.
On the other hand, the second skip points are displaced, both in direction and skip distance, due to scattering by the spot. Standard practice is to assume the first-skip point is the mid-point between the correlated initial and second-skip points, but the scatter makes this inaccurate. The right column of Figure \ref{fig:scatt} illustrates this by showing where the mid-point-inferred first-skip points would be, though in reality they are as shown in the left column.
The scatter results from the second of Equations (\ref{ray}). The horizontal components of the wave vector $\mathbf{k}$ are essentially constant along the ray path, except within typically 100--200 km of the top turning point if it occurs within the sunspot. This is because only in this shallow layer is there a significant horizontal variation in ${\mathcal{D}}$, contributed by both the magnetic field and the thermal inhomogeneity. The rays are therefore straight in horizontal projection except for a quite sharp change of direction around the first skip point. Equations (\ref{ray}) are integrated with a high-precision adaptive numerical scheme that follows them accurately through this critical region.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{scatt4UG}
\includegraphics[width=0.8\textwidth]{scatt3UG}
\caption{Scatter plots of the actual (left panels) and mid-point-inferred (right panels) first-skip turning points for the grid of rays fired from $(-60,0)$ Mm with frequency 4 mHz (top) and 3 mHz (bottom). Points (actually) in the umbra are identified with green colouring, the penumbra with red, and the quiet Sun with blue. The green and red circles are the umbral and penumbral boundaries respectively. These figures use $\omega_c=\omega_{DG}$; scattering with $\omega_c=\omega_I$ is typically substantially increased. }
\label{fig:scatt}
\end{center}
\end{figure*}
The significant insight from Figure \ref{fig:scatt} is that the sunspot, and in particular the penumbra, substantially scatters the rays. Scatter is much larger if $\omega_I$ is used (not shown) because the rays reach higher into the surface layers and are therefore affected more by the sunspot. Typically, second skip distances and directions are very different from those of the first skip. When the second skip upper turning point is ``observed'' in the quiet Sun, knowing its origin at $(-60,0)$, the standard helioseismic procedure is to infer that the mid-point of these two ends is the central skip point. The figure indicates that this may not be the case, especially for points actually incident in the penumbra, and for the lower frequency. In reality, ``sources'' and ``receivers'' may be oriented arbitrarily with regard to the spot, and these pictures can be azimuthally averaged. Nevertheless, this oriented view is instructive.
Two-skip travel times, both phase and group (envelope) times, are easily recovered from the ray calculations,\footnote{
This is notwithstanding the jump in phase at turning points \citep[][Sec.~5.1]{Bog97aa,TraBriRic14aa}, since only travel time differences are required, and it is assumed that the jump is the same in both cases.} and may be compared with the two-skip times joining the same end-points in quiet Sun (no intervening sunspot). Because of the substantial difference in the physical surrounds of the first skip point in the spot case, and the associated scattering, these times can differ significantly. We define $\delta\tau_\text{ph}$ to be the difference between the two-way two-skip phase travel time through the spot and through the quiet Sun model, and similarly for the group travel time perturbation $\delta\tau_\text{gr}$. Both are typically negative, indicating that the rays pass more quickly through the sunspot than through quiet Sun, despite their often longer ($x$-$y$-projected) path.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=.95\textwidth]{deltatau_vs_r}
\caption{Phase (red) and group (envelope, blue) travel time differences, azimuthally averaged, as functions of radius $r$ of the true (dots) or mid-point-inferred (full curves) middle skip points. Left column: 3 mHz; right column: 4 mHz. Top row: full magnetic sunspot model using $\omega_c=\omega_{DG}$; second row: full magnetic sunspot model using $\omega_c=\omega_I$; third row: ``thermal spot'' with the same thermal and density structure, but with magnetic field artificially suppressed. All points were binned to $1\, {\rm Mm^2}$
squares and averaged both by bin and azimuthally. All data presented here has been pre-filtered to remove any rays with second skip distance outside the range $(\frac{5}{7},\frac{7}{5})$ times the first skip distance, or second skip direction more than $20^\circ$ from the first skip direction.
The fraction of points deleted by this pre-filtering for the six panels is
(0.023, 0.140, 0.122, 0.143, 0.025, 0.129).
The dashed red and blue curves represent respectively the equivalent phase and group speed thermal depths of the Wilson depression; see text for details. The grey vertical lines represent the umbral and penumbral boundaries.}
\label{fig:deltatau}
\end{center}
\end{figure*}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.9\hsize]{DepthsG}
\caption{Equivalent depths (phase: red; group: blue) for 3 mHz (dashed) and 4 mHz (full).}
\label{fig:depths}
\end{center}
\end{figure}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=.9\textwidth]{phxy}
\caption{Phase travel time perturbations (with $\omega_\text{DG}$) for rays from $(-60,0)$ with first skip turning point lying along the $x$-axis (full curves) and along the $y$-axis (dashed curves). The magnetic field inclination at $z=0$ is indicated on the top axis. Left: 3 mHz; right: 4 mHz.}
\label{fig:phxy}
\end{center}
\end{figure*}
Figure \ref{fig:deltatau} summarizes the timing results. The most prominent points to note are:
\begin{enumerate}
\item Umbral phase travel time perturbations are significantly smaller in magnitude than group travel time perturbations (both are negative).
\item Mid-point-inferred and true centre point travel time perturbations differ substantially in the penumbra, particularly at 3 mHz. This is to be expected given the large degree of penumbral scattering, despite the filter applied to our rays to restrict first and second skip distance contrast to $(\frac{5}{7},\frac{7}{5})$ and direction change to $20^\circ$.
\item The filtering leaves some radii in the penumbra bereft of points, illustrated by gaps in the points representing ``true central point'' travel times. Relaxing the filtering criterion of course fills these gaps, but at the expense of ``true'' and ``mid-point-inferred'' first skip points differing by wider margins.
\item The measured phase times match quite well those predicted by
the equivalent phase speed depth,
especially in the umbra, where results are more reliable.
The group travel time perturbations are consistently smaller than predicted by the equivalent group speed depth.
\item The difference between results obtained with $\omega_{DG}$ and $\omega_I$ at 3 and 4 mHz is quite moderate.
\item There is little substantive difference between results with and without the magnetic field, indicating that the sunspot's thermal structure is primarily responsible for travel time shifts at these frequencies.
\end{enumerate}
The concept of ``the equivalent phase and group speed thermal depths of the Wilson depression'' is a simple though inexact device for converting between Wilson depression depth and travel time perturbations. Given that a ray passes through the surface layers of a sunspot very much faster than through the equivalent depths of quiet Sun \citep[see figs.~3 and 4 of][]{Cal07aa}, the two-way time difference between the magnetic and quiet cases is, to a first approximation, dominated by the quiet Sun travel time: $\delta\tau=-2\int_{z_\text{tp}+\Delta z}^{z_\text{tp}} \mathrm{d} z/V$, where $V$ is either the vertical phase or group speed, $z_\text{tp}$ is the upper turning point in quiet Sun, and $\Delta z<0$ is the ``Wilson depression'' by which the atmosphere has been lowered in the spot. This correspondence is plotted in Figure~\ref{fig:depths}.
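A back-of-envelope version of this device can be written down directly. The near-surface profile used below, $V^2(z)=c_0^2-0.4\,g z$ with $c_0=7$ km/s, is a crude stand-in of our own, not the paper's quiet-Sun model, and $V$ is taken as the sound speed rather than the true vertical phase or group speed.

```python
import numpy as np

# Order-of-magnitude sketch of the equivalent-depth device:
# delta_tau = -2 * integral of dz / V over the depth interval removed
# by the Wilson depression.  Profile and speeds are toy assumptions.
g_sun, c0 = 0.274, 7.0                        # km s^-2, km s^-1

def dtau(depression_km, n=4000):
    z = np.linspace(-depression_km, 0.0, n)   # km; z = 0 is the surface
    V = np.sqrt(c0**2 - 0.4 * g_sun * z)      # km/s, growing with depth
    return -2.0 * np.trapz(1.0 / V, z)        # seconds (negative)

deficit = dtau(600.0)   # a 600 km depression: a two-way deficit of
                        # order -100 s for this toy profile
```

Because the true vertical phase speed exceeds the sound speed near the upper turning point, this toy estimate overstates the phase-time deficit; it is meant only to show how depression depth maps to a travel-time perturbation.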
Despite ray travel times being quite insensitive to magnetic field at these frequencies, they are strongly sensitive to direction through inhomogeneities in the background thermal structure, especially at 4 mHz. Figure \ref{fig:phxy} shows phase travel time perturbations along the $x$ and $y$ axes through the spot centre in the magnetic case, with rays launched from $(-60,0)$. The curves hardly differ from the thermal case, indicating that the effect is not directly magnetic. It is instead a consequence of the nature of the scattering on each axis. On the $x$-axis, by symmetry, the only scattering is in second skip distance. Increasing skip distance from the spot centre, the total timing of the now one-short/one-long (or vice versa) two-skip path relative to quiet Sun (symmetric) two-skip times reduces significantly out to about 2 Mm at 3 mHz and 10 Mm at 4 mHz, and then starts to increase as the scattering weakens. On the other hand, along the $y$-axis, the rays largely scatter laterally, thereby reducing the length (and timing) of the required equivalent quiet Sun path, and so the scattered rays' travel time deficits rapidly diminish.
The ray calculations presented here do not use the ``generalized ray theory'' of \citet{SchCal06aa}, and so do not allow for mode transmission (fast-to-slow; i.e., acoustic-to-magnetic) at the Alfv\'en-acoustic equipartition level. As the 4 mHz rays (for $\omega_c=\omega_{DG}$) barely penetrate the $a=c$ equipartition surface where mode conversion and/or transmission occurs, and 3 mHz rays do not reach it at all, this is unlikely to be of importance in the present context. (With $\omega_c=\omega_I$, some rays reach as high as $a^2/c^2=7$ at 4 mHz.) The effect is much enhanced above 5 mHz, where significant processes involving the atmospheric fast wave are believed to be of importance for both atmospheric waves and interior seismology \citep{CalMor13aa,MorCalPrz15aa,RijMorPrz15aa}.
Higher frequencies also introduce more uncertainty related to the ``true'' formula for the acoustic cutoff frequency (if such exists). Figure \ref{fig:5mHz} dramatically illustrates the difficulty. Phase and group travel time perturbations are plotted at 5 mHz for each of $\omega_c=\omega_I$ and $\omega_\text{DG}$. The group travel times in particular differ hugely, presumably because at this frequency the rays reach higher in the atmosphere to where the acoustic cutoff formulae differ substantially. At this stage we do not have a good a priori reason for choosing any of the many alternatives for $\omega_c$, but it is very interesting to note that the $\omega_\text{DG}$ case produces almost identical group and phase travel time differences in the umbra, in accord with observations (Fig.~\ref{Figxc}f).
A further complication is that the period of a 5 mHz wave is 200 seconds, so a travel time discrepancy of around 400 s (in the top panel) could conceivably have been folded over once or twice observationally. Perturbations that decrease continuously towards zero as $r$ increases (as in Figure \ref{fig:5mHz}) presumably do not suffer this ambiguity.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.4\textwidth]{deltatau_vs_r_Wilson_5mHz}
\caption{Phase and group travel time perturbations, as in Fig.~\ref{fig:deltatau}, but for 5 mHz waves with isothermal (upper) and DG (lower) forms of the acoustic cutoff frequency. Due to increased scatter at this higher frequency, the ray filtering has been relaxed to allow rays with ratio of second to first skip distance in the
range $(\frac{1}{3},3)$ and skip direction change up to $20^\circ$.}
\label{fig:5mHz}
\end{center}
\end{figure}
\section{Discussion}
In this paper, measurements of travel times for waves reflecting on the
bottom side of an active region are made and compared with theoretical
calculations of travel times through a sunspot model. Using the second skip
eliminates the need to use Doppler measurements in the magnetically modified
atmosphere of the active region as was done with center-to-annulus
distance methods.
The Fourier-Hankel method \citep{Braun87} and the more recent method of
correlating the individual location signals with the average over a line
\citep{Cameron08} also do not use the Doppler signal in the sunspot.
The frequency dependence of travel times averaged over the umbra was measured
and modeled. The difference of the travel times from the quiet Sun is quite
large. The envelope time difference reaches a minimum of about $-200$ s near 3 mHz and
is relatively constant in the range 2.5--4.5 mHz. The phase time difference is
zero near 2 mHz and increases (in magnitude) to $-100$ s near 5 mHz. The zero
of the phase time near 2 mHz relative to the quiet Sun suggests that the umbra
is fairly shallow and that the frequency-dependent reflection is below where
the umbra has an effect. It would be useful to be able to extend the frequency
range. For frequencies below 2 mHz, it might be possible to get below
the sunspot, but the background increases, as does the horizontal wavelength,
making useful observations difficult. At high frequencies, smaller wavelengths
would be useful; however, the waves do not reflect at high frequencies, so it
is not possible to use the second skip.
One question is how much the travel time signal is reduced at the
center of the umbra by the finite wavelengths of the waves used in the
analysis. The large distances used, $\Delta=6$--$12\deg$, correspond
to sizable wavelengths at our mapping frequencies of 3.1 and 4.0 mHz.
A simple estimate of the resolution yields an approximate Gaussian
horizontal smoothing of $\sigma=5.6$ Mm (3.1 mHz) and
$\sigma=4.3$ Mm (4.0 mHz). If the signal were only due to a constant
Wilson depression over the 11.9 Mm radius umbra, a reduction of the
signal at umbra center of 12\% (3.1 mHz) and 3\% (4.0 mHz) would be expected
from convolving the pillbox-shaped signal with the Gaussian.
Noting the relative flatness of the signal at 4.0 mHz across the
umbra (Fig.~\ref{Figmap}), this seems like a reasonable model for
the umbral Wilson depression and the smoothing.
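The pillbox-plus-Gaussian estimate has a closed form at the umbral centre: after convolving a top-hat of radius $R$ with an isotropic two-dimensional Gaussian of width $\sigma$, the centre retains exactly the Gaussian mass inside radius $R$, so the fractional reduction is $\exp(-R^2/2\sigma^2)$. The sketch below evaluates this for the numbers quoted; it yields roughly 10\% and 2\%, of the same order as the 12\% and 3\% above, which may include further geometric factors beyond this idealization.

```python
import math

def center_reduction(R, sigma):
    """Fractional reduction at the centre of a pillbox signal of radius R
    convolved with an isotropic 2-D Gaussian of width sigma:
    reduction = exp(-R**2 / (2 * sigma**2))."""
    return math.exp(-R**2 / (2.0 * sigma**2))

R_umbra = 11.9                             # umbral radius, Mm
red_31 = center_reduction(R_umbra, 5.6)    # 3.1 mHz, sigma = 5.6 Mm
red_40 = center_reduction(R_umbra, 4.3)    # 4.0 mHz, sigma = 4.3 Mm
```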
Additional work is required to obtain more quantitative answers about
the Wilson depression. Linear simulations of waves traveling through
a sunspot model need to be carried out. For the largest distance used here,
$\Delta=24\deg$, the depth required of such a model is at least 100 Mm,
somewhat more than has been done to date.
In the interim, the much cheaper ray calculations of Section \ref{raysec} offer valuable insights, irrespective of the differences between the sunspot model and the real spot.
Comparison of Figs.~\ref{Figmap} and \ref{fig:deltatau} reveals a qualitatively good correspondence in both phase and envelope (group) time delays at around 3 mHz, using the DG acoustic cutoff formula. At 4 mHz the increase in phase time delay is also well-modeled. However, at this higher frequency, the ray calculations underestimate the envelope time delay. At 5 mHz (Fig.~\ref{fig:5mHz}) the difference between phase and envelope delay almost vanishes, both for the real spot and in the ray calculation. It is unclear whether the underestimate in delay at 4 mHz reflects the difference between the model and true spot, or represents a weakness of the ray modeling with this acoustic cutoff formula. Figure \ref{Figxc}f suggests that the envelope delay is maximal around 3 mHz and vanishes around 5 mHz, and that a fairly minor change in the sunspot structure may produce a delay at 4 mHz consistent with Fig.~\ref{fig:deltatau}.
The ray calculations make two striking predictions. The first is that the thermal rather than magnetic structure of the spot is primarily responsible for the two-skip travel time delays. This is testable within linear wave simulations since the magnetic field may simply be turned off with the thermal structure left unchanged
\citep{Cally09,Lindsey10,Felipe16,Felipe17}.
The second striking insight is the extent to which the rays are scattered both longitudinally (change in $\ell$) and laterally (change in direction). This is harder to test in wave simulations, since the analysis would essentially mimic that used here for the real data. However, with the benefit of better signal-to-noise ratio in simulations, the first-skip point could also be examined, which would allow the hypothesis to be tested. Pending that confirmation, the spatial mapping of the true sunspot assuming the first skip is at the midpoint between the correlated external points must be regarded as suspect. Nevertheless, the true and the midpoint-inferred $\delta\tau$ displayed in Fig.~\ref{fig:deltatau} differ more in detail than in substance.
Our initial success in obtaining observable two-skip phase and envelope travel time differences from AR11899, and relating them to Wilson depression depth via simple ray calculations, suggests that the next important step is to perform large-scale wave simulations with sunspot models of varying depression depth in order to calibrate the correspondence between models and real sunspots. Once this is done, we will have a practically useful new tool for probing spot structure.
\begin{acknowledgements}
The authors would like to thank Aaron Birch and Robert Cameron for useful
discussions. The HMI data are courtesy of NASA/SDO and the HMI science team.
The data were processed at the German Data Center for SDO, funded
by the German Aerospace Center (DLR).
This work was supported by computational resources provided by the
Australian Government through the Pawsey Supercomputing Centre under the
National Computational Merit Allocation Scheme, as well as using the gSTAR
national facility at Swinburne University of Technology. gSTAR is funded by
Swinburne and the Australian Government's Education Investment Fund.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
The molecular simulation community is often faced with the accuracy-efficiency dilemma:
the atomic interaction in the \emph{ab initio} molecular dynamics (AIMD) \cite{marx2000ab} is accurately modeled by the density functional theory (DFT) \cite{hohenberg1964inhomogeneous,kohn1965self,martin2004electronic},
but the extensive computational cost, which typically scales cubically with the number of atoms, limits its applications to systems of a few hundred atoms and simulation times of a few hundred picoseconds.
On the other hand, molecular dynamics (MD) with atomic interaction modeled by classical force fields (FFs) can easily scale to millions of atoms, but the accuracy and transferability of classical FFs are often in question.
For a large class of MD applications, this dilemma has been addressed by multi-scale modeling.
In these applications, only the accuracy of part of the system is crucial to the phenomena of interest.
Taking the problem of protein folding as an example, the accuracy of modeling the interactions among the protein atoms and between the protein and nearby solvent molecules
dominates the conformation of the protein and the folding process.
Therefore, a natural way of saving computational cost while preserving accuracy is to model the protein and nearby water molecules by an accurate but presumably expensive model,
and the remaining water molecules by a cheaper model.
Methods of particular interest are the hybrid quantum mechanics/molecular mechanics (QM/MM) approach \cite{warshel1976theoretical,lin2007qm}, which combines QM models and classical molecular models,
and the adaptive-resolution-simulation (AdResS) technique \cite{praprotnik2005adaptive,delle2017molecular}, which combines atomic models and coarse-grained models.
{
It is noted that interpolating the force from different models is commonly used in the multi-scale modeling methods,
for example, the force-mixing QM/MM method~\cite{bernstein2009hybrid,bernstein2012qm,varnai2013tests,mones2015adaptive},
the force interpolation AdResS for classical~\cite{praprotnik2005adaptive,fritsch2012adaptive} or path-integral MD~\cite{poma2010classical,agarwal2015path},
and the force-blending atomistic-continuum coupling methods~\cite{li2012positive,lu2013convergence,li2014theory,wang2018posteriori}.
}
Recently, machine learning (ML) methods have brought in another solution to this dilemma~\cite{behler2007generalized,morawietz2016van,bartok2010gaussian,rupp2012fast,schutt2017quantum,chmiela2017machine,smith2017ani,han2017deep,zhang2018deep}.
After fitting the AIMD data, these approaches target an accurate and much less expensive potential energy surface,
thereby eliminating the need to calculate electronic structure information on the fly.
A representative example is the Deep Potential Molecular Dynamics method (abbreviated DeePMD)~\cite{han2017deep,zhang2018deep} that the authors recently developed with collaborators.
In this scheme the many-body potential energy is a sum of ``atomic energies'' associated with the individual atoms in the system.
Each of these ``atomic'' energies depends analytically, via the deep neural network representation, on the symmetry-preserving coordinates of the atoms belonging to the local environment of the given atom.
Upon training, DeePMD faithfully reproduces the distribution information of trajectories from AIMD simulations, with nuclei being treated either classically, or quantum mechanically (by path-integral MD).
With the promising features of the ML methods, several problems have motivated us
{to develop an adaptive method that concurrently combines an ML model with a classical FF.
Throughout this paper we use the DeePMD method as an example.
First, accurate training data are expensive, and the amount of data is often limited to a small number of atoms and conformations.
In addition, for large and complex systems we usually do not have the full QM description but only a part of it.
Therefore, a practical expectation on the {DeePMD} model should be that it {is trained to be} accurate in the important regions under study,
whereas the remaining regions could be described by a simpler classical model.
Second, since the DeePMD model is essentially a many-body potential, the evaluation of energy, force, and virial requires many more floating-point operations than classical pairwise potentials.
Taking liquid water as an example, the DeePMD model is about two orders of magnitude more expensive than the classical TIP3P~\cite{jorgensen1983comparison} water model~\cite{zhang2018deep}.
Therefore, given the same computational resources, the maximal system size tractable with the DeePMD model is two orders of magnitude smaller than with the TIP3P model,
or the longest achievable simulation time is two orders of magnitude shorter.
This imposes a limitation on the spatial and temporal scales of the problem if it is described by the DeePMD model alone.
Finally, the energy decomposition scheme adopted by the DeePMD model and many other ML methods provides a natural way of doing adaptive modeling.
This makes the boundary problem, which arises in QM/MM from the boundary conditions adopted in electronic structure computations, much less severe.
It should be noted that electronic structure information, a natural output of the QM/MM method, is missing when we perform MD using only an ML model or a classical FF model.
Therefore, we shall limit ourselves to cases that are well described by the potential energy surface
and are less sensitive to electronic degrees of freedom.
In this work, we introduce an adaptive modeling method (\methodname{}) and numerically demonstrate its feasibility in terms of adaptively changing the model for a molecule,
depending on its spatial position, from the DeePMD model to a classical model, or \emph{vice versa}.
The system is divided into DeePMD regions and classical regions.
Different regions are bridged by transition regions where the model of a molecule changes smoothly.
The equilibrium between the regions is ensured by a thermodynamic force applied in the transition region.
We demonstrate, using liquid water as a representative example, that the density profile, the radial distribution functions (RDFs),
and the angular distribution functions (ADFs) in the DeePMD region are in satisfactory agreement with those of the corresponding subregion of a full DeePMD reference system.
Therefore, the DeePMD region is embedded in the \methodname{} system as if it were embedded in a full DeePMD system.
The statistics of the DeePMD region approximates the grand-canonical ensemble in the thermodynamic limit.
\section{Method}
The \methodname{} simulation region is decomposed into {three types of non-overlapping regions:}
DeePMD regions $\Omega_\mathrm {D}$ where the many-body atomistic interactions are modeled by the DeePMD scheme \cite{zhang2018deep},
classical regions $\Omega_\mathrm {C}$ where the interactions are modeled by a classical FF model,
and transition regions $\Omega_\mathrm {T}$ of uniform thickness $d_\mathrm {T}$ that bridge the DeePMD regions and the classical regions.
{See an illustrative example in Fig.~\ref{fig:sys}.}
Here we only consider one DeePMD region, one classical region, and the transition region between them.
The case of multiple DeePMD or classical regions can be generalized without substantial difficulty.
We define a reference system, whose only difference from the \methodname{} system is that it is fully described by the DeePMD model in the whole simulation region.
Our goal is to embed the DeePMD region in the classical region as if it were embedded in a system fully modeled by the DeePMD.
In other words, the equilibrium statistical properties of the DeePMD region should mimic those of {the corresponding subregion} in the reference system.
{In this sense, since the subregion of the reference system is subject to the grand-canonical ensemble as the size of the system goes to infinity,
the statistics in the DeePMD region approximates the grand-canonical ensemble, and the AMM is a grand-canonical-like molecular dynamics simulation.}
In the \methodname{} scheme, we use a force scheme to achieve this goal.
We define the force $\vect F_i$ on each atom $i$ as a summation of three components:
\begin{align}\label{eqn:f-total}
\vect F_i = \vect F_i^{\text{I}} + \vect F_i^{\text{L}} + \vect F_i^{\mathrm {T}},
\end{align}
where $\vect F_i^{\text{I}}$ is an interpolated force between the DeePMD and the classical model,
$\vect F_i^{\text{L}}$ is a stochastic force from a Langevin thermostat that controls the canonical distribution,
and $\vect F_i^{\mathrm {T}}$ is a thermodynamic force that balances the density profile of the whole \methodname{} simulation region.
In the following we define and discuss in more detail the three terms in the definition of $\vect F_i$.
For the interpolated force $\vect F_i^{\text{I}}$, we define a position dependent characteristic function $w(\bm r)$, which takes a constant value 1 in the DeePMD region and 0 in the classical region,
and changes smoothly from 1 to 0 in the transition region.
The way of defining the characteristic function $w(\bm r)$
is not unique, and here we use:
\begin{align}
w(\bm r) = \left\{
\begin{aligned}
& 1 & \bm r &\in \Omega_\mathrm {D} \\
& \frac12\Big(1 + \cos\Big[\frac{\pi d(\bm r, \Omega_\mathrm {D})}{d_\mathrm {T}}\Big]\Big) & \bm r & \in \Omega_\mathrm {T} \\
& 0 & \bm r &\in \Omega_\mathrm {C},
\end{aligned} \right.
\end{align}
where $d(\bm r, \Omega_\mathrm {D}) = \min_{\bm s\in\Omega_\mathrm {D}}\vert\bm r - \bm s\vert$ is the
{closest distance from the position $\bm r$ to the boundary of the DeePMD region.}
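For the slab geometry used in the water example later in this paper (the model changes only along $x$), the characteristic function can be sketched as follows; the slab boundaries and transition thickness are illustrative parameters.

```python
import math

def w_slab(x, x_lo, x_hi, d_T):
    """Characteristic function w for a slab DeePMD region [x_lo, x_hi]
    with transition layers of thickness d_T on either side."""
    d = max(x_lo - x, x - x_hi, 0.0)  # distance to the DeePMD slab
    if d <= 0.0:
        return 1.0                    # inside Omega_D
    if d >= d_T:
        return 0.0                    # in Omega_C
    return 0.5 * (1.0 + math.cos(math.pi * d / d_T))  # in Omega_T
```

By construction $w$ is continuously differentiable across both region boundaries, since the cosine profile has zero slope at $d=0$ and $d=d_\mathrm{T}$.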
Then $\vect F_i^{\text{I}}$ is defined as a linear interpolation, through $w(\bm r)$, between the DeePMD force and the classical force, i.e.:
\begin{align}\label{eqn:f-intpl}
\vect F_i^{\text{I}} = w (\vect R(\vect r_i)) \vect F^\mathrm {D}_i + [\,1 - w (\vect R(\vect r_i))\,] \vect F^\mathrm {C}_i
\end{align}
where $\vect F^\mathrm {D}$ and $\vect F^\mathrm {C}$ denote the DeePMD force and the classical force, respectively,
and $\vect R(\cdot)$ maps the position of atom $i$ to its characteristic position.
{In general, $\bm R(\cdot)$ can simply be defined as the identity mapping.
However, in our test example of the water system, we observe that defining $\vect R(\cdot)$ as a mapping from the atomic position to the molecular center of mass (COM)
alleviates numerical instabilities caused by the rigidity of the classical water model.
Based on similar considerations, for macromolecules, $\bm R(\cdot)$ can be, e.g., a mapping from atomic positions to the residue COM.}
In addition, we note that instead of a force-interpolation scheme, it is possible to do an energy-interpolation scheme, which conserves the total interpolated energy at the cost of momentum conservation~\cite{delle2007some}.
The equivalence of the two approaches in terms of equilibrium statistical properties is extensively discussed in Ref.~\cite{wang2015adaptive}.
Here we focus on the force interpolation approach.
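A minimal sketch of the force interpolation of Eq.~\eqref{eqn:f-intpl}, with $\vect R(\cdot)$ taken as the molecular COM mapping discussed above; the array layout and the slab-geometry weight function are assumptions of the sketch, not prescriptions of the method.

```python
import numpy as np

def interpolated_forces(F_D, F_C, positions, masses, mol_index, w):
    """Per-atom interpolated force, Eq. (f-intpl): the weight is evaluated
    at the centre of mass of the molecule each atom belongs to.
    F_D, F_C  : (N,3) DeePMD and classical forces
    positions : (N,3) atomic positions
    masses    : (N,)  atomic masses
    mol_index : (N,)  integer molecule id of each atom
    w         : callable mapping an x coordinate (slab geometry) to [0,1]
    """
    F_D, F_C = np.asarray(F_D, float), np.asarray(F_C, float)
    pos, m = np.asarray(positions, float), np.asarray(masses, float)
    mol = np.asarray(mol_index)
    F = np.empty_like(F_D)
    for mid in np.unique(mol):
        sel = mol == mid
        com_x = np.average(pos[sel, 0], weights=m[sel])  # R(.): molecular COM
        wi = w(com_x)
        F[sel] = wi * F_D[sel] + (1.0 - wi) * F_C[sel]
    return F
```

Because all atoms of a molecule share one weight, intramolecular force pairs remain equal and opposite, so the interpolation does not introduce spurious internal stresses within a molecule.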
The Langevin term $\vect F_i^{\text{L}}$ is defined as
\begin{align}
\vect F_i^{\text{L}} = -\gamma \vect p_i + \sqrt{m_i} \sigma \dot W,
\end{align}
where $\vect p_i$ and $m_i$ denote the momentum and mass of atom $i$, respectively.
$W$ denotes the standard Wiener process, the friction $\gamma$ and the noise magnitude $\sigma$ are related by the fluctuation-dissipation theorem $\sigma^2 = 2\gamma k_BT$,
with $k_B$ being the Boltzmann constant and $T$ being the temperature.
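The Langevin term can be sketched as one Euler-Maruyama update of the momenta, with the noise magnitude fixed by the fluctuation-dissipation relation $\sigma^2 = 2\gamma k_B T$; the choice of discretization is an assumption of the sketch (the text does not specify an integrator).

```python
import numpy as np

def langevin_kick(p, m, F, gamma, kBT, dt, rng):
    """One Euler-Maruyama step for dp = (F - gamma*p) dt + sqrt(m)*sigma dW,
    with sigma**2 = 2*gamma*kBT (fluctuation-dissipation theorem)."""
    sigma = np.sqrt(2.0 * gamma * kBT)
    dW = rng.normal(0.0, np.sqrt(dt), size=p.shape)  # Wiener increments
    return p + (F - gamma * p) * dt + np.sqrt(m)[:, None] * sigma * dW
```

Iterating this map with $\vect F = 0$ drives the momentum variance per component towards $m k_B T$, the canonical equipartition value.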
The accuracy of the AMM
is investigated by comparing the statistical properties of the DeePMD region with those of the corresponding region in the full DeePMD reference system.
Due to the difference between the DeePMD model and the classical model, an imbalance of pressure
exists across the transition region,
which in general results in an unphysical gap in the density profile between the regions, as well as in the higher-order marginal distributions of the configurational distribution function.
This can be systematically improved by requiring the marginal probability distributions of different orders
in the \emph{transition} region to be identical to those of the full DeePMD model~\cite{wang2013grand}.
The first-order marginal distribution, the density profile, is corrected by the one-body thermodynamic force $\vect F_i^{\mathrm {T}}$~\cite{fritsch2012adaptive}.
In practice, $\vect F^\mathrm {T}$ is computed by an iterative scheme:
\begin{align}\label{eqn:cal-thf}
\vect F^\mathrm {T}_{k+1} (\vect R) = \vect F^\mathrm {T}_{k} (\vect R) - \frac{\alpha}{\kappa\rho^2} \nabla \rho_{k}(\vect R),
\end{align}
where $\rho$ denotes the equilibrium number density, $\rho_{k}(\vect R)$ denotes the density profile at the $k$-th iteration step,
$\kappa$ denotes the isothermal compressibility, and $\alpha$ denotes a damping prefactor.
By using the iterative scheme of Eq.~\eqref{eqn:cal-thf}, the converged thermodynamic force will lead to a flat density profile in the system,
which indicates the equilibration between the DeePMD region and the classical MD region.
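One iteration of Eq.~\eqref{eqn:cal-thf} can be sketched for the slab geometry, where profiles depend on $x$ only; the finite-difference gradient is an implementation assumption.

```python
import numpy as np

def update_thermo_force(F_T, rho_profile, x_grid, rho0, kappa, alpha):
    """One step of the iterative scheme, Eq. (cal-thf):
    F_{k+1}(x) = F_k(x) - alpha/(kappa*rho0**2) * d rho_k / dx."""
    grad_rho = np.gradient(rho_profile, x_grid)  # finite-difference gradient
    return F_T - alpha / (kappa * rho0**2) * grad_rho
```

At convergence $\nabla\rho = 0$, so the update leaves $\vect F^\mathrm{T}$ unchanged, consistent with the flat density profile being a fixed point of the iteration.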
Higher-order corrections in the transition region, e.g. the correction of the radial distribution function (RDF),
are possible by using the RDF correction to the transition like that proposed in Ref.~\cite{wang2012adaptive}.
In this work, we do not consider the RDF and higher orders corrections,
and demonstrate, by the numerical example, that the properties in the DeePMD region is satisfactorily accurate by only using the thermodynamic force correction.
\section{Simulation protocol}
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{scheme.eps}
\caption{Schematic plot of an adaptive model system.
From left to right are the classical, transition, DeePMD, transition, and classical regions.
The blue curve shows the shape of the characteristic function $w(\bm r)$.
}
\label{fig:sys}
\end{figure}
In this work the \methodname{} scheme is demonstrated and validated on a water system.
In total 864 water molecules are simulated in a cell of size $7.4668~\textrm{nm} \times 1.8667~\textrm{nm} \times 1.8667~\textrm{nm}$, subject to periodic boundary conditions.
As shown in Fig.~\ref{fig:sys}, the model only changes along the $x$ axis.
The DeePMD region of width 2.0~nm is located at the center of the simulation region, $x_c = 3.7334$~nm,
and the thickness of the transition region is $d_\mathrm {T}=0.3$~nm.
It is noted that this thickness is smaller than that usually used in AdResS methods~\cite{praprotnik2005adaptive,delle2017molecular}, i.e.~twice the cutoff radius.
On average there are roughly 115, 70, and 678 water molecules in the DeePMD region, the transition region, and the classical region, respectively.
The damping prefactor and the compressibility in Eq.~\eqref{eqn:cal-thf}
are set to 0.25 and $4.6\times 10^{-5}\ \textrm{bar}^{-1}$, respectively.
The rest of the system belongs to the classical region, in which the water molecules are modeled by a flexible SPC/E force field~\cite{berendsen1987missing}.
The details of the force field are provided in Appendix~\ref{app:spce}.
In this work,
the atoms are modeled by point-mass particles in both of the DeePMD and classical water models.
The whole system is coupled to a Langevin thermostat of lag-time 0.1~ps (as a rule of thumb, the friction is set to 10~$\mathrm{ps}^{-1}$~\cite{gao2016sampling}) to keep the temperature at 330~K.
One practical but important issue is that the equilibrium covalent bond length of the DeePMD model is not identical to that of the classical force field.
When a molecule leaves the DeePMD region and enters the transition region,
the classical force field switches on, and the mismatched bond length leads to a bond force of large magnitude.
This may cause difficulties in equilibrating the bond length in the transition region.
One solution is to use a small enough time step,
so that the prefactor $1 - w (\vect R(\vect r_i))$ remains small enough to suppress the large bond force as the molecule enters the transition region.
Another solution, which we use in this work for the simulation efficiency, is to slightly modify the force interpolation scheme~\eqref{eqn:f-intpl} as
\begin{align}\nonumber
\tilde{\vect F}_i^{\text{I}} &= w (\vect R(\vect r_i)) \vect F^\mathrm {D}_i
+ [\,1 - w (\vect R(\vect r_i))\,] \vect F^\mathrm {C,nb}_i \\ \label{eqn:f-intpl-p}
&+ \max\{ \varepsilon_p, 1 - w (\vect R(\vect r_i))\,\} \vect F^\mathrm {C,b}_i,
\end{align}
where $\vect F^\mathrm {C,nb}$ and $\vect F^\mathrm {C,b}$ are the non-bonded and bonded contributions to the classical force field, respectively.
$\varepsilon_p$ is the shape protection parameter, and we take $\varepsilon_p = 0.01$ in this work, {if not stated otherwise}.
With the shape protection parameter, the force in the DeePMD region is thus modified as $\vect F^\mathrm {D}_i + \varepsilon_p \vect F^\mathrm {C,b}_i$,
so the molecular shape of the classical force field is partially preserved in the DeePMD region,
and the equilibration of the bond length is much easier when the molecule enters the transition region.
Since the protection parameter $\varepsilon_p$ is small,
the molecular configuration in the DeePMD region is not substantially perturbed.
It is noted that in the numerical example,
we do not observe any difficulty in equilibrating the covalent bonds when a molecule leaves the classical region and enters the transition region,
because the intramolecular part of the DeePMD interaction is much softer than the classical force field.
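The modified interpolation of Eq.~\eqref{eqn:f-intpl-p} amounts to clamping the weight of the classical bonded term from below by $\varepsilon_p$; a minimal per-atom sketch:

```python
def shape_protected_force(w_i, F_D, F_C_nb, F_C_b, eps_p=0.01):
    """Modified interpolation, Eq. (f-intpl-p): the classical bonded term
    keeps at least weight eps_p everywhere, so the molecular shape is never
    entirely unconstrained in the DeePMD region.
    w_i is the scalar weight; forces are per-atom 3-vectors (tuples)."""
    wb = max(eps_p, 1.0 - w_i)  # clamped weight of the bonded term
    return tuple(w_i * fd + (1.0 - w_i) * fnb + wb * fb
                 for fd, fnb, fb in zip(F_D, F_C_nb, F_C_b))
```

For $w_i = 0$ the expression reduces exactly to the classical force, and for $w_i = 1$ to $\vect F^\mathrm{D}_i + \varepsilon_p \vect F^\mathrm{C,b}_i$.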
The data for training the DeePMD water model was generated by a 330~K NVT AIMD simulation of a 64-molecule bulk water system
with PBE0+TS exchange-correlation functional under periodic boundary condition.
The total length of the AIMD simulation is 20~ps, and the information of the system was saved at every time step of 0.5~fs,
so that 40,000 snapshots of the system are available.
Among the data, 95\%, i.e.~38,000 snapshots, are used as training data, while the remaining 2,000 snapshots are used as testing data.
The cut-off radius of the DeePMD model is 0.6~nm.
The descriptors (network input) contain both the angular and radial information of 16 closest oxygen atoms and 32 closest hydrogen atoms.
The descriptors contain only the radial information for the rest of neighbors
in the cut-off radius.
The deep neural network that models the many-body atomic interaction has 5 hidden layers,
with 240, 120, 60, 30, and 10 neurons from the input side to the output side, respectively.
The model is trained using the DeePMD-kit package~\cite{WANG2018}.
The detailed description of the training process is available in Ref.~\cite{WANG2018}.
At the end of the training, the root-mean-square errors of the energy and force evaluated by the testing set are
0.44~meV (i.e.~0.042~kJ/mol, normalized by the number of molecules)
and $2.4\times 10^{-2}$~eV/\AA~(23~kJ/mol/nm), respectively.
\section{Results and discussion}
Since our goal is to make the DeePMD region in the \methodname{} system as if it were embedded in a full DeePMD system,
the essential check should be made by comparing the configurational probability density of the DeePMD region of the \methodname{} system with that of the
corresponding subregion of a full DeePMD reference system.
The high-dimensional configurational probability density defined in the phase space cannot be easily compared,
but the marginal probability densities can be directly computed from the MD trajectories and compared with those of the reference system.
The agreement of the first-order marginal probability density is checked by the density profile along the $x$-axis, because the system is homogeneous in the $y$ and $z$ directions.
The agreement in the second-order marginal probability density is checked by comparing the oxygen-oxygen (O-O), oxygen-hydrogen (O-H), and hydrogen-hydrogen (H-H) RDFs.
In addition, we check the agreement of the third-order marginal probability density in terms of the ADFs of oxygen atoms defined
at several cut-off radii.
In this work, both the \methodname{} system and the full DeePMD reference system are simulated for 2000~ps.
The first 200~ps of the trajectories are discarded,
and the rest of the MD trajectories are considered to be fully equilibrated.
The configurations of the systems are recorded every 0.1~ps,
and the density profile, RDFs, and ADFs are computed from these configurations.
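The density profile itself can be obtained by histogramming the molecular COM coordinates along $x$ over the recorded configurations; this sketch assumes a unit $y$-$z$ cross-section, so the output is a linear number density.

```python
import numpy as np

def density_profile(x_coms_frames, Lx, n_bins=100):
    """Number-density profile along x averaged over frames.
    x_coms_frames : list of 1-D arrays of molecular COM x coordinates
    Lx            : box length along x (y-z cross-section taken as 1 here)
    Returns bin centres and mean number density per bin."""
    edges = np.linspace(0.0, Lx, n_bins + 1)
    counts = np.zeros(n_bins)
    for xs in x_coms_frames:
        h, _ = np.histogram(np.mod(xs, Lx), bins=edges)  # wrap into the box
        counts += h
    rho = counts / (len(x_coms_frames) * (Lx / n_bins))
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, rho
```

The statistical uncertainty shown in Fig.~\ref{fig:dens} can be estimated from the frame-to-frame scatter of the per-bin counts, e.g.\ by block averaging over the trajectory.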
\begin{figure}
\centering
\includegraphics[width = 0.45\textwidth]{rho.eps}
\caption{The density profile of the \methodname{} simulation.
The density is averaged along the $y$ and $z$ directions,
and the profile is displayed against the $x$ axis.
The solid line is the density profile,
and the light shadow denotes the statistical uncertainty of the density profile at the 95\% confidence level.
}
\label{fig:dens}
\end{figure}
We report the density profile of the \methodname{} system along the $x$-axis,
and compare it with the equilibrium density of the DeePMD model (denoted by $\rho_0$) in Fig.~\ref{fig:dens}.
The equilibrium profile in the DeePMD region is almost constant
and is in satisfactory agreement with the equilibrium density of the full DeePMD result.
The density profiles in the transition regions and in the SPC/E region are also very close to the equilibrium density.
The worst-case deviation, with the statistical uncertainty considered, is 4\% from the equilibrium density.
This indicates that the DeePMD region is embedded in the \methodname{}
system with a similar environment as a full DeePMD system,
in the sense that the density profile of the environment is close to the
equilibrium density of the DeePMD model.
\begin{figure}
\centering
\includegraphics[width = 0.45\textwidth]{rdf-1.eps}
\caption{The RDFs of the DeePMD region of the \methodname{} simulation (solid lines) compared with the RDFs of the reference simulation (DeePMD, dotted lines).
{The inset of the bottom panel presents the error in the O-H bond length distribution.
The solid green line shows the case of $d_\mathrm {T} = 0.3$~nm and $\varepsilon_p = 0.01$,
while the dashed green line shows the case of $d_\mathrm {T} = 0.9$~nm and $\varepsilon_p = 0.001$.}
}
\label{fig:rdf}
\end{figure}
The O-O, O-H, and H-H RDFs of the DeePMD region in the \methodname{} system are reported in the top panel of Fig.~\ref{fig:rdf}.
All the RDFs are compared with those computed from the corresponding subregion of the reference system, and
the differences are presented in the bottom panel of Fig.~\ref{fig:rdf}.
The \methodname{} scheme reproduces the intermolecular parts of the RDFs with satisfactory accuracy.
It is noticed that the O-H RDF at around 0.1~nm, which corresponds to the intramolecular O-H bond length distribution, deviates from the reference system.
This deviation is due to the introduction of the shape protection term in the force interpolation~\eqref{eqn:f-intpl-p}.
In other words, when a water molecule diffuses from the classical region to the DeePMD region, a relaxation time is needed to equilibrate the O-H bond length.
{
As pointed out by the anonymous referee, it is possible to alleviate the problem by using a larger transition region.
In this study, we test the case of a transition region of width $d_\mathrm {T} = 0.9$~nm,
which allows a protection parameter that is 10 times smaller, viz.~0.001.
With this milder shape protection, the O-H bond distribution in the DeePMD region is restored with better quality
(see the inset of the bottom panel of Fig.~\ref{fig:rdf}).
It is noted that the protection parameter can be further reduced by using a larger transition region;
however, the computational cost of the \methodname{} simulation will also increase correspondingly.
This issue is discussed later in this article.
}
\begin{figure}
\centering
\includegraphics[width = 0.45\textwidth]{adf-1.eps}
\caption{The ADFs of the DeePMD region of the \methodname{} simulation compared with the ADFs of the reference simulation (DeePMD).
In the top panel, the ADFs of the \methodname{} simulation are presented by solid lines, while those of the reference simulation are presented by dotted lines.
In the bottom panel, the differences between the ADFs of the \methodname{} and reference simulations are shown.
}
\label{fig:adf}
\end{figure}
The ADF is defined by
\begin{align}
P_{r_c}(\theta) = \frac 1Z
\Big\langle
\sum_i \sum_{\substack{ j,k \in \mathcal N(i,r_c)\\ j\neq k}}
\delta (\theta - \theta_{jik})
\Big\rangle
\end{align}
where $\mathcal N(i, r_c)$ denotes the set of neighbors of atom $i$ within the cut-off radius $r_c$,
$\theta_{jik}$ denotes the angle formed by the atoms $j$, $i$ and $k$,
$\langle\cdot\rangle$ denotes the ensemble average,
and $Z$ denotes the normalization factor so that $\int P_{r_c}(\theta) d\theta = 1$.
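As an illustration (not part of the authors' code; the function name and call signature are ours), a minimal NumPy sketch of this ADF estimator for a single configuration could read:

```python
import numpy as np

def adf(coords, r_c, box=None, n_bins=90):
    """Normalized histogram of the angles theta_jik over all triplets
    with j, k neighbors of atom i within r_c (the ADF defined above)."""
    coords = np.asarray(coords, dtype=float)
    angles = []
    for i in range(len(coords)):
        d = coords - coords[i]                   # vectors from atom i
        if box is not None:                      # minimum-image convention
            d -= box * np.round(d / box)
        r = np.linalg.norm(d, axis=1)
        nbr = np.where((r > 1e-10) & (r < r_c))[0]
        for a in range(len(nbr)):
            for b in range(a + 1, len(nbr)):
                j, k = nbr[a], nbr[b]
                c = np.dot(d[j], d[k]) / (r[j] * r[k])
                angles.append(np.arccos(np.clip(c, -1.0, 1.0)))
    # density=True plays the role of the normalization factor Z
    hist, edges = np.histogram(angles, bins=n_bins, range=(0.0, np.pi),
                               density=True)
    return 0.5 * (edges[:-1] + edges[1:]), hist
```

In a production setting the ensemble average $\langle\cdot\rangle$ would be taken by accumulating the histogram over many frames; the sketch handles one frame only.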
In Fig.~\ref{fig:adf} we report the oxygen ADFs of the DeePMD region of the \methodname{} system
at various cut-off radii $r_c = 0.270$, 0.370, 0.456, and 0.600~nm.
All the results are compared with the reference system, and the differences between the \methodname{}
and the reference system are presented in the bottom panel of the figure.
The ADFs of the DeePMD region in the \methodname{} system are in satisfactory agreement
with those of the reference system.
\begin{figure}
\centering
\includegraphics[width = 0.45\textwidth]{rdf-region.eps}
\includegraphics[width = 0.45\textwidth]{adf-region.eps}
\caption{
{
The error of the O-O RDF (top panel) and ADF ($r_c$ = 0.37~nm, bottom panel)
investigated in different subregions of the DeePMD region, i.e.~regions I and II.
The whole DeePMD region is of width 2.0~nm, region I is of width 1.0~nm, and region II is of width 0.5~nm.
}
}
\label{fig:regions}
\end{figure}
{
In addition, we investigate the RDF and ADF as functions of position, and
plot the errors of the RDF and the ADF evaluated in different subregions of the DeePMD region.
As shown in Fig.~\ref{fig:regions}, the overall deviations of both the RDF and the ADF from the benchmarks are very small.
However, as expected, the deviation in the subregion closer to the transition region (region II) is larger than that deeper inside the DeePMD region (region I).
The computational cost of an \methodname{} simulation is dominated by the computation of the atomic interactions,
and is estimated by
$ T = T_\mathrm {D} + T_\mathrm {C}$, where $T_\mathrm {D}$ and $T_\mathrm {C}$ denote the computational costs of the DeePMD and classical forces, respectively.
Since both the DeePMD and classical forces are linearly scalable,
the costs are estimated by
$T_\mathrm {D} = C_\mathrm {D}(\rho) N_\mathrm {D}$ and $T_\mathrm {C} = C_\mathrm {C}(\rho) N_\mathrm {C}$,
where $N_\mathrm {D}$ and $N_\mathrm {C}$ denote the numbers of atoms on which the DeePMD and classical forces are computed, respectively.
$C_\mathrm {D}(\rho)$ and $C_\mathrm {C}(\rho)$ are density-dependent parameters that are independent of the number of molecules.
We assume the system is well equilibrated, so that the numbers of atoms are estimated by
$N_\mathrm {D} = \rho (V_\mathrm {D} + V_\mathrm {T} + V_B)$, where $V_\mathrm {D}$ and $V_\mathrm {T}$ denote
the volumes of DeePMD and transition regions, respectively.
The DeePMD force depends on the network derivatives of the neighbors in the cut-off radius $r_c$~\cite{WANG2018},
therefore, the network derivatives are evaluated for the atoms in a buffering region of width $r_c$ outside the transition region,
and the computational cost of this part is estimated by $\rho V_B$.
Similarly, the number of atoms for the classical force evaluation is estimated by
$N_\mathrm {C} = \rho (V_\mathrm {C} + V_\mathrm {T})$.
In the end, we have
\begin{align}\label{eqn:t-esti}
T_{\mathrm{\methodname{}}} = \tilde C_\mathrm {D}(\rho) (V_\mathrm {D} + V_\mathrm {T} + V_B) + \tilde C_\mathrm {C}(\rho) (V_\mathrm {C} + V_\mathrm {T})
\end{align}
where $\tilde C_\mathrm {D} (\rho) = \rho C_\mathrm {D} (\rho)$ and $\tilde C_\mathrm {C} (\rho) = \rho C_\mathrm {C} (\rho)$.
The constants $\tilde C_\mathrm {D}(\rho)$ and $\tilde C_\mathrm {C}(\rho)$
can be estimated by short simulations of small DeePMD and classical systems of the same density.
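The cost model \eqref{eqn:t-esti} can be evaluated directly; a minimal sketch (our own illustration, with variable names chosen by us) is:

```python
def amm_cost_per_step(V_D, V_T, V_B, V_C, C_D, C_C):
    """Per-step cost estimate of Eq. (t-esti): DeePMD forces are
    evaluated in the DeePMD, transition, and buffer volumes;
    classical forces in the classical and transition volumes.
    C_D and C_C are the per-volume cost constants (s/nm^3/step)."""
    return C_D * (V_D + V_T + V_B) + C_C * (V_C + V_T)
```

Here $C_\mathrm{D}$ and $C_\mathrm{C}$ correspond to $\tilde C_\mathrm{D}(\rho)$ and $\tilde C_\mathrm{C}(\rho)$, which are measured once from short benchmark runs at the target density.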
It is noted that using a larger transition region improves the accuracy of \methodname{}, but at the same time,
the extra computational cost grows linearly with respect to the size of the transition region.
Thus, the size of the transition region should be kept as small as possible,
as long as the accuracy of \methodname{} is still satisfactory.
The ratio of the computational cost of the \methodname{} to that of a full DeePMD simulation is
\begin{align}\label{eqn:t-ratio}
\frac{T_{\mathrm{\methodname{}}}}{T_{\textrm{DPMD}}}
\approx
\frac{V_\mathrm {D} + V_\mathrm {T} + V_B}{V_{\textrm{sys}}}
+
\frac{\tilde C_\mathrm {C}(\rho)}{\tilde C_\mathrm {D}(\rho)}
\frac{V_\mathrm {T} + V_\mathrm {C} }{V_{\textrm{sys}}},
\end{align}
where $V_{\textrm{sys}} = V_\mathrm {D} + V_\mathrm {T} + V_\mathrm {C}$ is the volume of the whole system.
In the limit of an infinitely large classical region, the ratio converges to ${\tilde C_\mathrm {C}(\rho)}/{\tilde C_\mathrm {D}(\rho)}$,
while in the limit of an infinitely fast classical force field evaluation,
the ratio converges to ${(V_\mathrm {D} + V_\mathrm {T} + V_B)}/{V_{\textrm{sys}}}$.
Therefore, ${\tilde C_\mathrm {D}(\rho)}/{\tilde C_\mathrm {C}(\rho)}$ and ${V_{\textrm{sys}}}/{(V_\mathrm {D} + V_\mathrm {T} + V_B)}$
are the highest acceleration ratios that one can obtain from the \methodname{} method in the corresponding limits.
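Both limits are easy to verify numerically; the following sketch (our own illustration, with hypothetical volumes and the cost constants measured in the text) evaluates \eqref{eqn:t-ratio}:

```python
def cost_ratio(V_D, V_T, V_B, V_C, C_D, C_C):
    """Eq. (t-ratio): cost of the hybrid scheme relative to a
    full DeePMD simulation of the same total volume."""
    V_sys = V_D + V_T + V_C
    return (V_D + V_T + V_B) / V_sys + (C_C / C_D) * (V_T + V_C) / V_sys

# limit of an infinitely large classical region -> C_C / C_D
print(cost_ratio(1.0, 0.5, 0.2, 1e9, 7.6e-3, 1.2e-3))   # ~0.158
# limit of a free classical force field -> DeePMD volume fraction
print(cost_ratio(1.0, 0.5, 0.2, 2.0, 7.6e-3, 0.0))      # ~0.486
```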
In our example, the constants $\tilde C_\mathrm {D}(\rho)$ and $\tilde C_\mathrm {C}(\rho)$
are $7.6\times 10^{-3}$~s/nm$^3$/step and $1.2\times 10^{-3}$~s/nm$^3$/step on one core of an Intel Xeon Gold 6148 CPU.
It is noted that the performance of our in-house code of the classical force field is not optimized.
As a comparison, the constant of Gromacs 5.1.4~\cite{pronk2013gromacs,abraham2015gromacs} is $7.0\times 10^{-5}$~s/nm$^3$/step,
which is 17 times faster than the in-house code.
The computational time estimated by Eq.~\eqref{eqn:t-esti} is 0.124~s/step,
while the measured wall-time on the same CPU is 0.134~s/step, which validates the estimate~\eqref{eqn:t-esti}.
The computational cost of a full DeePMD simulation is 0.200~s/step,
so the \methodname{} saves 33\% (or 38\% by estimate~\eqref{eqn:t-esti}) computational cost of the full DeePMD simulation.
This may not seem significant at first sight;
however, since the volume $V_\mathrm {D} + V_\mathrm {T} + V_B$ takes up 51\% of the whole system,
the \methodname{} cannot save more than 49\% of the computational cost.
Moreover, the sub-optimized force field code also makes the \methodname{} slower.
The acceleration of the \methodname{} will be further improved if
the DeePMD region becomes smaller, the classical region becomes larger,
or the code of the classical force field is further optimized.
}
\section{Conclusion and perspectives}
In summary, we introduce a promising tool for concurrent coupling of
the DeePMD model and a classical force field.
The same strategy should be applicable to more general cases,
in which an expensive model is concurrently simulated with a cheap model.
The requirement on the expensive model is that the part dominating the computational expense is computed in a short-range manner.
Future work of this adaptive modeling method is to study biomolecules solvated in water,
where one could use DeePMD to only parameterize the potential for the biomolecules and nearby water molecules, and couple it to a less expensive water model.
It is worth investigating the accuracy in systems where long-range electrostatic effects play an important role.
In this situation,
long-range electrostatics should be included in the ML model by, for example, learning the partial charges based on the atomic environment~\cite{artrith2011high}, and then computing the point-charge electrostatics efficiently by fast Ewald algorithms~\cite{hockney1988computer,darden1993pme}.
It is also of particular interest to investigate the accuracy of dynamical properties, like auto-correlation functions,
upon concurrent coupling of different models~\cite{agarwal2015molecular,delle2016formulation}.
\begin{acknowledgments}
The work of LZ and WE is supported in part by ONR grant N00014-13-1-0338, DOE grants DE-SC0008626 and DE-SC0009248,
and NSFC grants U1430237 and 91530322.
The work of HW is supported by the National Science Foundation
of China under Grants 11501039, 11871110 and 91530322,
the National Key Research and Development Program of China
under Grants 2016YFB0201200 and 2016YFB0201203,
and the Science Challenge Project No. JCKY2016212A502.
\end{acknowledgments}
\section{Introduction}\label{s1}
\setcounter{equation}{0}
\noindent
This paper has three parts.
The first part is an introduction to isolated hypersurface
singularities. It makes the paper readable independently
of other sources, and it gives a basis for the other
two parts.
The second part is an extension to the simple elliptic
singularities of work which Looijenga and Deligne did in 1974
for the simple singularities. This is the central part of
the paper.
The third part is an extension and refinement of work which
Jaworski did in 1986--1988 on the Lyashko-Looijenga maps for
the simple elliptic singularities. The arguments are
less conceptual and more computational and more laborious
than the arguments in the second part.
We need it in order to determine the sizes of certain
finite sets which are in bijection by the second part.
An {\it isolated hypersurface singularity} is a holomorphic
function germ $f:({\mathbb C}^{n+1},0)\to ({\mathbb C},0)$ with an
isolated singularity at 0.
In order to see its topology, one chooses a
{\it good representative} $f:Y\to T$ with
$Y\subset{\mathbb C}^{n+1}$ a suitable neighborhood of 0 and
$T\subset {\mathbb C}$ a small disk around 0.
The {\it Milnor lattice} $Ml(f)$ is the
(reduced for $n=0$) middle homology
$H_n^{(red)}(f^{-1}(\tau),{\mathbb Z})\cong{\mathbb Z}^\mu$
of a regular fiber $f^{-1}(\tau)$ of $f:Y\to T$ for some
$\tau\in T\cap{\mathbb R}_{>0}$.
Here $\mu\in{\mathbb Z}_{>0}$ is the {\it Milnor number} of $f$.
The Milnor lattice comes equipped
with a monodromy $M_h$, an intersection form
$I$, a {\it Seifert form} $L$, and a set ${\mathcal B}(f)$ of
certain ${\mathbb Z}$-bases of $Ml(f)$, the {\it distinguished bases}.
$M_h$ is a quasiunipotent automorphism,
$I$ is a $(-1)^n$-symmetric bilinear form,
$L$ is a unimodular bilinear form, and $L$ determines
$M_h$ and $I$. The group
$G_{\mathbb Z}(f):=\Aut(Ml(f),M_h,I,L)=\Aut(Ml(f),L)$ will be important.
The distinguished bases are constructed as follows. First,
one chooses a morsification $F^{(mor)}:Y^{(mor)}\to T$
of $f:Y\to T$, that is a deformation such that the one
singularity of $f:Y\to T$ with Milnor number $\mu$
splits into $\mu$ $A_1$-singularities $x^{(j)}$,
$j=1,...,\mu,$
of $F^{(mor)}:Y^{(mor)}\to T$ with pairwise different
critical values $u_j=F^{(mor)}(x^{(j)})$, $j=1,...,\mu$,
with $|u_j|<\tau$.
Second, one chooses a {\it distinguished system of paths}.
That is a system of $\mu$ paths $\gamma_j$, $j=1,...,\mu$,
from $u_{\sigma(j)}$ to $\tau$ for some permutation
$\sigma\in S_\mu$, which do not intersect except at $\tau$
and which arrive at $\tau$ in clockwise order.
Third, one shifts from the $A_1$-singularity above each
value $u_{\sigma(j)}$ the (up to the sign unique)
vanishing cycle along $\gamma_j$ to
$H_n^{(red)}((F^{(mor)})^{-1}(\tau),{\mathbb Z})$ and then by a
canonical isomorphism to $Ml(f)$ and calls the image
$\delta_j$. The tuple $\underline\delta =(\delta_1,...,\delta_\mu)$
turns out to be a ${\mathbb Z}$-basis of $Ml(f)$ and is called
a {\it distinguished basis}.
One morsification, all possible choices of distinguished
systems of paths, and both possible choices $\pm\delta_j$
of sign for each cycle together give all distinguished bases.
The Stokes matrix of one distinguished basis
$\underline\delta$ is the matrix
$S=(-1)^{(n+1)(n+2)/2}L(\underline\delta^t,\underline\delta)^t$.
It is an upper triangular integer matrix with 1's on
the diagonal. The following table gives some information
on the sizes of the sets ${\mathcal B}(f)$ and
$|\{\textup{Stokes matrices}\}|$.
\begin{eqnarray}\label{1.1}
\begin{array}{lll}
f & |{\mathcal B}(f)| & |\{\textup{Stokes matrices}\}| \\ \hline
\textup{simple singularity} & \textup{finite} & \textup{finite} \\
\textup{simple elliptic singularity} & \textup{infinite} & \textup{finite} \\
\textup{any other singularity} & \textup{infinite} & \textup{infinite}
\end{array}
\end{eqnarray}
The last line of it was proved only recently by Ebeling
\cite{Eb18}. The other two lines are explained for example
in \cite{Eb18} or in remark \ref{t7.2} (i) below.
The simple singularities $A_\mu\ (\mu\geq 1)$,
$D_\mu\ (\mu\geq 4)$, $E_6$, $E_7$ and $E_8$ and the
simple elliptic singularities
$\widetilde E_6$, $\widetilde E_7$ and $\widetilde E_8$ are the first ones
in Arnold's lists \cite[ch. 15.1]{AGV85} of isolated
hypersurface singularities.
The simple singularities have no $\mu$-constant parameter.
The simple elliptic singularities are 1-parameter families.
See subsection \ref{s4.1} for normal forms for all of them.
Deligne \cite{De74} characterized ${\mathcal B}(f)$ and calculated
the number $|{\mathcal B}(f)|$ for the simple singularities.
Yu \cite{Yu90}\cite{Yu96}\cite{Yu99} derived from that
the number $|\{\textup{Stokes matrices}\}|$
for the simple singularities.
Kluitmann characterized ${\mathcal B}(f)$ for the simple elliptic
singularities. He calculated the number
$|\{\textup{Stokes matrices}\}|$
for $\widetilde E_6$ in \cite{Kl83} and for $\widetilde E_7$
in \cite{Kl87}, by huge combinatorial efforts.
The number $|\{\textup{Stokes matrices}\}|$ for $\widetilde E_8$
was not calculated before this paper.
In corollary \ref{t7.3} we recover Kluitmann's numbers
for $\widetilde E_6$ and $\widetilde E_7$, and we give the number
for $\widetilde E_8$, by a completely different method.
Our method combines a natural bijection in the second
part with the calculation of three numbers in the third
part, the degrees of certain Lyashko-Looijenga maps
for $\widetilde E_6$, $\widetilde E_7$ and $\widetilde E_8$.
The simple singularities $f=f(x_0,...,x_n)$ have
{\it universal unfoldings}
\begin{eqnarray}\label{1.2}
F^{alg}(x_0,...,x_n,t_1,...,t_\mu)=F(x,t)=F_t(x)
=f(x)+\sum_{j=1}^\mu t_jm_j,
\end{eqnarray}
with $m_1,...,m_\mu\in{\mathbb C}[x]$ the monomials in
table \eqref{4.4} and with parameters
$t\in M^{alg}:={\mathbb C}^\mu$.
In 1974, Looijenga \cite{Lo74} and Lyashko (whose work
was published only later in \cite{Ly79}\cite{Ly84})
considered the {\it Lyashko-Looijenga map}
\begin{eqnarray}\label{1.3}
LL^{alg} : M^{alg}\to M_{LL}^{(\mu)} :=
\{y^\mu+\sum_{j=0}^{\mu-1}s_jy^j\, |
\, (s_0,...,s_{\mu-1})\in{\mathbb C}^\mu\} \hspace*{1cm} \\
t\mapsto \prod_{j=1}^\mu (y-u_j)\ \textup{with }
(u_1,...,u_\mu)\textup{ the critical values of }F^{alg}_t.
\nonumber
\end{eqnarray}
for the simple singularities.
It is a branched covering of a finite degree $\deg LL^{alg}$,
see theorem \ref{t6.1} for details.
Looijenga posed the following problem:
Consider a generic polynomial $p(y)=\prod_{j=1}^\mu(y-u_j)
\in M_{LL}^{(\mu)}$. Then $F_t$ for any
$t\in (LL^{alg})^{-1}(p(y))$ is a morsification
of $f$ with the same critical values $u_1,...,u_\mu$.
Now fix {\it one distinguished system of paths}
from $u_1,...,u_\mu$ to $\tau$. Each morsification $F^{alg}_t$
with $t\in (LL^{alg})^{-1}(p(y))$ gives one distinguished basis
$\underline\delta=(\delta_1,...,\delta_\mu)$ up to signs.
One obtains a map
\begin{eqnarray}\label{1.4}
LD:(LL^{alg})^{-1}(p(y))\to {\mathcal B}(f)/G_{sign,\mu},
\end{eqnarray}
where the group $G_{sign,\mu}:=\{\pm 1\}^\mu$ acts on
$\underline\delta=(\delta_1,...,\delta_\mu)$ by sign changes.
An easy argument in \cite{Lo74}, which uses that $LL^{alg}$
is a branched covering, shows that the map $LD$ is surjective.
Looijenga asked whether $LD$ is injective. He proved
this for the $A_\mu$-singularities.
Then Deligne \cite{De74} calculated $|{\mathcal B}(f)|$ for all simple
singularities and showed $|{\mathcal B}(f)/G_{sign,\mu}|=\deg LL^{alg}
=|(LL^{alg})^{-1}(p(y))|$. This proved that $LD$ is a bijection
for all simple singularities.
Deligne's letter \cite{De74} to Looijenga is not published.
The result that $LD$ is a bijection
is stated in \cite{Mi89}, \cite{Yu90} and below in
theorem \ref{t7.1}.
A central part of this paper is an extension of this result
to the simple elliptic singularities. Here a universal
unfolding of a single simple elliptic singularity is not
good enough. In subsection \ref{s4.2} we present a global
family of functions
\begin{eqnarray}\label{1.5}
F^{alg}(x_0,...,x_n,t_1,...,t_{\mu-1},\lambda)&=&
F^{alg}(x,t',\lambda)=F^{alg}_{t',\lambda}(x)\\
&=& f_\lambda(x) + \sum_{j=1}^{\mu-1} t_jm_j,
\nonumber
\end{eqnarray}
with $m_1,...,m_{\mu-1}\in {\mathbb C}[x]$ the
monomials in table \eqref{4.7},
$f_\lambda(x)$ the 1-parameter families in Legendre normal form
in \eqref{4.2} of the simple elliptic singularities and
with parameters $(t',\lambda)\in M^{alg}:={\mathbb C}^{\mu-1}\times
({\mathbb C}-\{0,1\})$. For each $\lambda\in{\mathbb C}-\{0,1\}$, the
family $F^{alg}$ is (locally) a universal unfolding
of $f_\lambda$.
The family $F^{alg}$ is not completely canonical.
But the family $F^{mar}$ with parameter space
$M^{mar}:={\mathbb C}^{\mu-1}\times \H$, where
$\H\to{\mathbb C}-\{0,1\}$ is the universal covering, is canonical.
This is made precise in theorem \ref{t4.3} in a way
which uses marked singularities, the fact that the
parameter space of each of the three families
of marked simple elliptic
singularities is $M^{mar}_\mu\cong\H$ \cite{GH17-1},
and a thickening of this space to $M^{mar}$.
We obtain Lyashko-Looijenga maps
$LL^{alg}:M^{alg}\to M_{LL}^{(\mu)}$ and
$LL^{mar}:M^{mar}\to M_{LL}^{(\mu)}$.
Analogously to $LD$ for the simple singularities, we obtain
a {\it Looijenga-Deligne map}
\begin{eqnarray}\label{1.6}
LD: (LL^{mar})^{-1}(p(y))\to {\mathcal B}(f)/G_{sign,\mu}
\end{eqnarray}
for generic $p(y)\in M_{LL}^{(\mu)}$.
A main result of this paper is that this map is a bijection,
see theorem \ref{t7.1}.
But our arguments are more involved than the arguments
for the simple singularities.
The surjectivity follows by the same easy argument in
\cite{Lo74}, as soon as one has that the map
\begin{eqnarray}\label{1.7}
LL^{alg}:M^{alg}-(\textup{caustic}\cup\textup{Maxwell stratum})\\
\to M_{LL}^{(\mu)}-(\textup{discriminant }{\mathcal D}_{LL}^{(\mu)})
\nonumber
\end{eqnarray}
is a finite covering. This is a hard theorem of Jaworski
\cite[Theorem 2]{Ja86} \cite[Proposition 1]{Ja88},
see theorem \ref{t6.2}.
As both sides of \eqref{1.6} are infinite, the injectivity
of $LD$ in \eqref{1.6} does not follow by a comparison
of numbers. We need an action of $G_{\mathbb Z}$ on $M^{mar}$
and on the middle homology bundle above $M^{mar}-{\mathcal D}^{mar}$.
We need that $M^{mar}$ is an F-manifold with Euler field.
And we need that and how a Stokes matrix $S$ of
$LD(t)\in {\mathcal B}(f)/G_{sign,\mu}$ for $t\in (LL^{mar})^{-1}(p(y))$
encodes the covering in \eqref{1.7}.
See the proof of theorem \ref{t7.1} for the details.
The bijection in \eqref{1.6} induces also a bijection
\begin{eqnarray}\label{1.8}
(LL^{mar})^{-1}(p(y))/G_{\mathbb Z}
\to \{\textup{Stokes matrices}\}/G_{sign,\mu}.
\end{eqnarray}
Both sides are finite, and the number
$|(LL^{mar})^{-1}(p(y))/G_{\mathbb Z}|$ is in a simple way related
to $\deg LL^{alg}$. However, Jaworski's proofs in
\cite{Ja86} and \cite{Ja88} that $LL^{alg}$ in \eqref{1.7}
is a covering do not allow one to calculate
its degree $\deg LL^{alg}$.
The main task in the third part of the paper is to extend
Jaworski's work and calculate $\deg LL^{alg}$.
In theorem \ref{t6.3} we obtain an extension of
$M^{alg}$ above ${\mathbb C}-\{0,1\}$ to an orbibundle
$M^{orb}\stackrel{\pi_{orb}}{\to}\P^1$
such that $LL^{alg}$ extends to a holomorphic map
$LL^{orb}:M^{orb}\to M_{LL}^{(\mu)}$
which is outside of the $\mu$-constant stratum
(and its translates by the unit field)
a branched covering and which maps
$(\textup{caustic})\cup(\textup{Maxwell stratum})\cup
\pi_{orb}^{-1}(\{0,1,\infty\})$ to the discriminant
${\mathcal D}_{LL}^{(\mu)}\subset M_{LL}^{(\mu)}$.
Detailed information about $M^{orb}$ and $LL^{orb}$
allows us to calculate the degree
$\deg LL^{orb} \, (=\deg LL^{alg})$.
The first part of the paper consists of section \ref{s2},
the subsections \ref{s3.1} and \ref{s3.2} and the
first three pages of section \ref{s4}.
Section \ref{s2} recalls classical data and facts around
isolated hypersurface singularities, namely
Milnor fibrations, Milnor lattices,
universal unfoldings, the base spaces as F-manifolds
with Euler fields, Lyashko-Looijenga maps, distinguished bases,
Stokes matrices, and Thom-Sebastiani type results.
Subsection \ref{s3.1} reviews results in
\cite[Theorem 13.11 and Theorem 13.13]{He02} on symmetries
of singularities. Subsection \ref{s3.2} reviews
results in \cite[Theorem 4.3]{He11} on the moduli spaces
$M^{mar}_\mu$ of marked singularities.
The first three pages of section \ref{s4} give normal forms
for the simple and the simple elliptic singularities
and for the unfoldings.
The second part of the paper consists of the subsections
\ref{s3.3} and \ref{s3.4}, the latter part of section \ref{s4}
and the sections \ref{s6} and \ref{s7}.
Subsection \ref{s3.3} describes a thickening
$M^{mar}\supset M^{mar}_\mu$ of the moduli spaces of marked
singularities. Theorem \ref{t4.3} in the latter part of
section \ref{s4} proves the claims about this thickening in
the cases of the simple and the simple elliptic singularities.
Subsection \ref{s3.4} defines and discusses
Looijenga-Deligne maps $LD$ in a general setting.
Section \ref{s7} states and proves the main result
theorem \ref{t7.1} that $LD$ is a bijection for each
simple singularity and each family of simple elliptic
singularities. Corollary \ref{t7.3} provides the finite
numbers $|\{\textup{Stokes matrices}\}|$.
Section \ref{s6} states the old and new results on the
Lyashko-Looijenga maps for the simple singularities
(theorem \ref{t6.1}, Lyashko and Looijenga)
and the simple elliptic singularities
(theorem \ref{t6.2}, Jaworski, and theorem \ref{t6.3}, new).
The most beautiful formula in section \ref{s6} is formula
\eqref{6.7} for $\deg LL^{alg}$ for the simple elliptic
singularities.
The third part of the paper consists of the sections
\ref{s5}, \ref{s8}, \ref{s9} and \ref{s10}.
Section \ref{s5} makes the general discussion of the
symmetries of singularities in subsection \ref{s3.1}
explicit in the cases of the simple and the simple
elliptic singularities.
Section \ref{s8} follows Fulton's book \cite{Fu84}
and extends some results there to the case of
{\it smooth cone bundles} (definition \ref{t8.1}).
We need this for the proof of corollary \ref{t8.6}
which is used in the proof of formula \eqref{6.7}
in section \ref{s10}.
The long section \ref{s9} provides for the simple
elliptic singularities the extension of $M^{alg}
={\mathbb C}^{\mu-1}\times ({\mathbb C}-\{0,1\})$ to $\lambda=0$
such that $LL^{alg}$ extends well. Finding the right way
to glue into $M^{alg}$ a fiber above $\lambda=0$
was the most laborious part of this paper.
Section \ref{s10} combines this with the symmetries in
section \ref{s5} and provides the right extensions of
$M^{alg}$ to $\lambda=1$ and $\lambda=\infty$,
and it uses corollary \ref{t8.6} to prove the
formula \eqref{6.7} for $\deg LL^{alg}$ for the simple
elliptic singularities.
The first author thanks the organizers of the conference
"Moduli spaces and applications in geometry, topology,
analysis and mathematical physics" in Beijing February
27 -- March 3, 2017, for the invitation to the conference,
and both authors thank especially Lizhen Ji for a lot
of patience during the preparation of this paper.
\section{Topology and unfoldings of isolated hypersurface singularities}\label{s2}
\setcounter{equation}{0}
\noindent
An {\it isolated hypersurface singularity} (short: {\it singularity})
is a holomorphic function germ $f:({\mathbb C}^{n+1},0)\to ({\mathbb C},0)$ with an isolated
singularity at $0$.
Such objects have been studied intensively since the end of the 1960s.
In this section, we review classical facts on their topology and their
unfoldings and fix some notations.
For the topology compare \cite{AGV88}
and \cite{Eb07}. For the unfoldings compare \cite{AGV85}
and (especially for F-manifolds) \cite{He02}.
The {\it Jacobi ideal} of $f$ is the ideal
$J_f:=(\frac{\partial f}{\partial x_i}) \subset{\mathcal O}_{{\mathbb C}^{n+1},0}$,
its {\it Jacobi algebra} is the quotient
${\mathcal O}_{{\mathbb C}^{n+1},0}/J_f$, its {\it Milnor number} is the finite number
$\mu:=\dim{\mathcal O}_{{\mathbb C}^{n+1},0}/J_f.$
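For instance, for the $A_2$-singularity in one variable (a standard example, not discussed explicitly at this point in the text):
\begin{eqnarray*}
f(x)=x^3:\quad J_f=(3x^2)=(x^2),\quad
{\mathcal O}_{{\mathbb C},0}/J_f={\mathbb C}\{x\}/(x^2)
={\mathbb C}\cdot 1\oplus{\mathbb C}\cdot x,\quad \mu=2.
\end{eqnarray*}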
\subsection{Topology of singularities}\label{s2.1}
A {\it good representative} of $f$ has to be defined with some
care \cite{Mi68}\cite{AGV88}\cite{Eb07}. It is $f:Y\to T$
with $Y\subset{\mathbb C}^{n+1}$ a suitable small neighborhood of 0 and
$T\subset{\mathbb C}$ a small disk around 0.
Then $f:Y'\to T'$ with $Y'=Y-f^{-1}(0)$ and
$T'=T-\{0\}$ is a locally trivial $C^\infty$-fibration,
the {\it Milnor fibration}. Each fiber has the
homotopy type of a bouquet of $\mu$ $n$-spheres \cite{Mi68}.
Therefore the (reduced for $n=0$) middle homology groups are
$H_n^{(red)}(f^{-1}(\tau),{\mathbb Z}) \cong {\mathbb Z}^\mu$ for $\tau\in T'$.
Each comes equipped with an {\it intersection form} $I$,
which is a datum of one fiber,
a {\it monodromy} $M_h$ and a {\it Seifert form} $L$, which come from the
Milnor fibration,
see \cite[I.2.3]{AGV88} for their definitions
(for the Seifert form, there are several
conventions in the literature, we follow \cite{AGV88}).
$M_h$ is a quasiunipotent automorphism, $I$ and $L$ are
bilinear forms with values in ${\mathbb Z}$,
$I$ is $(-1)^n$-symmetric, and $L$ is unimodular.
$L$ determines $M_h$ and $I$ because of the formulas
\cite[I.2.3]{AGV88}
\begin{eqnarray}\label{2.1}
L(M_ha,b)&=&(-1)^{n+1}L(b,a),\\ \label{2.2}
I(a,b)&=&-L(a,b)+(-1)^{n+1}L(b,a).
\end{eqnarray}
The lattices $H_n(f^{-1}(\tau),{\mathbb Z})$ for all Milnor fibrations
$f:Y'\to T'$ and then all $\tau\in{\mathbb R}_{>0}\cap T'$ are canonically isomorphic,
and the isomorphisms respect $M_h$, $I$ and $L$.
This follows from Lemma 2.2 in \cite{LR73}.
These lattices are identified and called {\it Milnor lattice} $Ml(f)$.
The group $G_{\mathbb Z}$ is
\begin{eqnarray}\label{2.3}
G_{\mathbb Z}=G_{\mathbb Z}(f):= \Aut(Ml(f),L)=\Aut(Ml(f),M_h,I,L),
\end{eqnarray}
the second equality is true because $L$ determines $M_h$ and $I$.
We will use the notation $Ml(f)_{\mathbb C}:=Ml(f)\otimes_{\mathbb Z} {\mathbb C}$,
and analogously for other rings $R$ with ${\mathbb Z}\subset R\subset {\mathbb C}$,
and the notations
\begin{eqnarray*}
Ml(f)_\lambda&:=&\ker((M_h-\lambda\id)^\mu:Ml(f)_{\mathbb C}\to Ml(f)_{\mathbb C})
\subset Ml(f)_{\mathbb C},\\
Ml(f)_{1,{\mathbb Z}}&:=& Ml(f)_1\cap Ml(f)\subset Ml(f),\\
Ml(f)_{\neq 1}&:=&\bigoplus_{\lambda\neq 1}Ml(f)_\lambda \subset
Ml(f)_{\mathbb C},\\
Ml(f)_{\neq 1,{\mathbb Z}}&:=& Ml(f)_{\neq 1}\cap Ml(f)\subset Ml(f).
\end{eqnarray*}
The formulas \eqref{2.1} and \eqref{2.2} show
$I(a,b)= L((M_h-\id)a,b)$. Therefore the eigenspace with eigenvalue
1 of $M_h$ is the radical $\Rad(I)\subset Ml(f)$ of $I$.
By \eqref{2.2} $L$ is
$(-1)^{n+1}$-symmetric on the radical of $I$.
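Explicitly, the identity used here follows in one line by substituting \eqref{2.1} into \eqref{2.2}:
\begin{eqnarray*}
I(a,b)=-L(a,b)+(-1)^{n+1}L(b,a)=-L(a,b)+L(M_ha,b)=L((M_h-\id)a,b).
\end{eqnarray*}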
\subsection{Unfoldings of singularities}\label{s2.2}
The notion of an unfolding of an isolated hypersurface singularity $f$ goes
back to Thom and Mather. An {\it unfolding} of $f$ is a holomorphic
function germ $F:({\mathbb C}^{n+1}\times M,0)\to ({\mathbb C},0)$ such that
$F|_{({\mathbb C}^{n+1},0)}=f$ and such that $(M,0)$
is the germ of a complex manifold.
Its Jacobi ideal is
$J_F:=(\frac{\partial F}{\partial x_i})\subset {\mathcal O}_{{\mathbb C}^{n+1}\times M,0}$,
its critical space is the germ $(C,0)\subset ({\mathbb C}^{n+1}\times M,0)$
of the zero set of $J_F$ with the canonical complex structure.
The projection $(C,0)\to (M,0)$ is finite and flat of degree $\mu$.
A kind of Kodaira-Spencer map is the ${\mathcal O}_{M,0}$-linear map
\begin{eqnarray}\label{2.4}
{\bf a}_C:{\mathcal T}_{M,0}\to {\mathcal O}_{C,0},\quad X\mapsto\widetilde X(F)|_{(C,0)}
\end{eqnarray}
where $\widetilde X$ is an arbitrary lift of $X\in{\mathcal T}_{M,0}$ to $({\mathbb C}^{n+1}\times M,0)$.
We will use the following notion of morphism between unfoldings.
Let $F_i:({\mathbb C}^{n+1}\times M_i,0)\to({\mathbb C},0)$ for $i\in\{1,2\}$ be
two unfoldings of $f$ with projections $\pr_i:({\mathbb C}^{n+1}\times M_i,0)\to(M_i,0)$.
A {\it morphism} from $F_1$ to $F_2$ is a pair
$(\Phi,\varphi)$ of map germs such that the following diagram
commutes,
\begin{eqnarray}\label{2.5}
\begin{xy}
\xymatrix{ ({\mathbb C}^{n+1}\times M_1,0) \ar[r]^\Phi \ar[d]^{\pr_1}
& ({\mathbb C}^{n+1}\times M_2,0) \ar[d]^{\pr_2}\\
(M_1,0) \ar[r]^{\varphi} & (M_2,0) }
\end{xy}
\end{eqnarray}
and
\begin{eqnarray}\label{2.6}
\Phi|_{({\mathbb C}^{n+1}\times\{0\},0)}&=&\id,\\
F_1 &=& F_2\circ\Phi \label{2.7}
\end{eqnarray}
hold. Then one says that $F_1$ {\it is induced} by $(\Phi,\varphi)$ from $F_2$.
An unfolding is {\it versal} if any unfolding is induced from it by a
suitable morphism. A versal unfolding $F:({\mathbb C}^{n+1}\times M,0)\to({\mathbb C},0)$ is
{\it universal} if the dimension of the parameter space $(M,0)$ is
minimal. Universal unfoldings exist by work of Thom and Mather.
More precisely, an unfolding is versal if and only if the map
${\bf a}_C$ is surjective, and it is universal if and only if the map ${\bf a}_C$
is an isomorphism (see e.g. \cite[ch. 8]{AGV85} for a proof).
Observe that ${\bf a}_C$ is surjective/an isomorphism
if and only if its restriction to 0, the map
\begin{eqnarray}\label{2.8}
{\bf a}_{C,0}:T_0M\to {\mathcal O}_{{\mathbb C}^{n+1},0}/J_f
\end{eqnarray}
is surjective/an isomorphism. Therefore an unfolding
\begin{eqnarray}\label{2.9}
F(x_0,...,x_n,t_1,...,t_\mu)=F(x,t)=F_t(x)=f(x)+\sum_{j=1}^\mu m_jt_j,
\end{eqnarray}
with $(M,0)=({\mathbb C}^\mu,0)$ with coordinates $t=(t_1,...,t_\mu)$
where $m_1,...,m_\mu\in{\mathcal O}_{{\mathbb C}^{n+1},0}$ represent a basis of the
Jacobi algebra ${\mathcal O}_{{\mathbb C}^{n+1},0}/J_f$, is universal.
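For example (a standard illustration), for $f(x)=x^3$ (the $A_2$-singularity, $\mu=2$), the monomials $m_1=1$ and $m_2=x$ represent a basis of the Jacobi algebra ${\mathbb C}\{x\}/(x^2)$, so
\begin{eqnarray*}
F(x,t_1,t_2)=x^3+t_1+t_2x
\end{eqnarray*}
is a universal unfolding of $f$.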
\subsection{F-manifolds}\label{s2.3}
The base space of a universal unfolding
is an {\it F-manifold} with {\it Euler field}.
\begin{definition}\label{t2.1}\cite{HM99}\cite{He02}
(a) An {\it F-manifold} is a complex manifold $M$ together
with a holomorphic commutative and associative multiplication
$\circ$ on its holomorphic tangent bundle $TM$ and with a unit field
$e\in {\mathcal T}_M$ such that the integrability condition
\begin{eqnarray}\label{2.10}
\Lie_{X\circ Y}(\circ)= X\circ \Lie_Y(\circ)+Y\circ \Lie_X(\circ)
\end{eqnarray}
holds.
(b) Let $(M,\circ,e)$ be an F-manifold. An Euler field (of weight 1)
is a global holomorphic vector field $E\in{\mathcal T}_M$ with $\Lie_E(\circ)=\circ$.
\end{definition}
F-manifolds are studied in \cite[ch. 2--5]{He02}. In the case of a
universal unfolding $F:({\mathbb C}^{n+1}\times M,0)\to({\mathbb C},0)$,
its base space inherits from the isomorphism ${\bf a}_C^{-1}:{\mathcal O}_{C,0}\to{\mathcal T}_M$
a multiplication. It satisfies \eqref{2.10}, and $e:={\bf a}_C^{-1}([1])$
and $E:={\bf a}_C^{-1}([F])$ are the unit field and an Euler field
\cite[Theorem 5.3]{He02}, so it is an F-manifold with Euler field.
We call a universal unfolding {\it universal} and not just {\it semiuniversal}, because
the morphism $\varphi$ in \eqref{2.5} between the base spaces of
any two universal unfoldings is unique \cite[Theorem 5.4]{He02}
(but $\Phi$ on the total spaces is not unique).
Therefore the base space of a universal unfolding is (with its structure
as F-manifold with Euler field) unique up to unique isomorphism.
The following result on decompositions of germs of F-manifolds
is a very instructive application of the integrability condition \eqref{2.10},
and it is especially telling in the case of isolated hypersurface singularities.
\begin{theorem}\label{t2.2}\cite[Theorem 2.11]{He02}
Let $(M,p)$ be the germ in $p\in M$ of an F-manifold.
It is an elementary fact from commutative algebra that the algebra
$T_pM$ decomposes into a direct sum $\bigoplus_{k=1}^l (T_pM)_k$
of irreducible local algebras (it is just the decomposition into
simultaneous generalized eigenspaces with respect to all (commuting!)
multiplication endomorphisms).
This decomposition extends uniquely to a decomposition $(M,p)=\prod_{k=1}^l (M_k,p_k)$
of germs of F-manifolds. These germs are irreducible germs of F-manifolds.
If $(M,p)$ has an Euler field, the Euler field decomposes into a sum of Euler fields.
\end{theorem}
In the case of a good representative $F:{\mathcal Y}\to T$ of a universal unfolding $F$,
for any $t\in M$, the canonical decomposition from theorem \ref{t2.2}
of $(M,t)$ into a product of germs of F-manifolds is a canonical decomposition
into a product of germs of base spaces of universal unfoldings of the germs
of $F_t$ at all its critical points.
At generic $t$, this is a decomposition into 1-dimensional F-manifolds, and
the eigenvalues $u_1,...,u_\mu$ of the Euler field form there local coordinates,
Dubrovin's {\it canonical coordinates}. The Euler field has there the shape
$E=\sum_{j=1}^\mu u_j e_j$ where $e_j=\frac{\partial}{\partial u_j}$,
the multiplication is given by $e_i\circ e_j=\delta_{ij}e_i$, the
unit field is $e=\sum_{j=1}^\mu e_j$, and the values $u_1,...,u_\mu$
are the critical values of $F_t$, i.e. the values of $F_t$ at its critical
points. Up to isomorphism there is only one germ of a 1-dimensional
F-manifold, which is called $A_1$. Then the germ $(M,t)$ at generic $t$
is, as a germ of an F-manifold, of the type $A_1^\mu$.
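One can verify directly that $E=\sum_{j=1}^\mu u_je_j$ is an Euler field
in these canonical coordinates (an elementary check, included for
illustration): from $[E,e_i]=-e_i$ and $e_i\circ e_j=\delta_{ij}e_i$ one obtains
\begin{eqnarray*}
\Lie_E(\circ)(e_i,e_j)&=&[E,e_i\circ e_j]-[E,e_i]\circ e_j-e_i\circ [E,e_j]\\
&=&-\delta_{ij}e_i+\delta_{ij}e_i+\delta_{ij}e_i=e_i\circ e_j,
\end{eqnarray*}
so $\Lie_E(\circ)=\circ$.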
\subsection{Lyashko-Looijenga map}\label{s2.4}
Looijenga \cite{Lo74} came close to the notion of an F-manifold:
he already had the canonical coordinates at generic points.
He and Lyashko \cite{Ly79}\cite{Ly84} studied the Lyashko-Looijenga map
and its behaviour near the
caustic and the Maxwell stratum. For $\mu\in{\mathbb Z}_{\geq 1}$ define
\begin{eqnarray}\label{2.11}
M_{LL}^{(\mu)}&:=&\{y^\mu+\sum_{j=1}^\mu s_jy^{j-1}\, |\,
(s_1,...,s_\mu)\in{\mathbb C}^\mu\} \cong{\mathbb C}^\mu,\\
{\mathcal D}_{LL}^{(\mu)}&:=& \{p(y)\in
M_{LL}^{(\mu)}\, | \, p(y)\textup{ has multiple roots}\}.
\label{2.12}
\end{eqnarray}
${\mathcal D}_{LL}^{(\mu)}$ is a hypersurface in $M_{LL}^{(\mu)}$.
Let $F:({\mathbb C}^{n+1}\times M,0)\to({\mathbb C},0)$ be a universal unfolding of
a singularity $f$.
Let $F:{\mathcal Y}\to T$ be a good representative of it with base space $M$.
Then its {\it Lyashko-Looijenga map} is the holomorphic map
\begin{eqnarray}\label{2.13}
LL:M&\to& M_{LL}^{(\mu)},\quad t\mapsto \prod_{j=1}^\mu (y-u_j),\quad\textup{where }
u_1,...,u_\mu\\
&&\textup{are the critical values of }
F_t\textup{ (with multiplicities).}\nonumber
\end{eqnarray}
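For example, for the $A_2$-singularity with universal unfolding
$F_t(x_0)=x_0^3+t_2x_0+t_1$ (a standard choice of unfolding), the critical
points satisfy $x_0^2=-t_2/3$, the critical values are
$u_{1,2}=t_1\pm \frac{2}{3}t_2\sqrt{-t_2/3}$, and
\begin{eqnarray*}
LL(t)=(y-u_1)(y-u_2)=y^2-2t_1y+t_1^2+\frac{4}{27}t_2^3.
\end{eqnarray*}
The two roots coincide if and only if $t_2=0$.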
Define the {\it caustic} ${\mathcal K}_3\subset M$
and the {\it Maxwell stratum} ${\mathcal K}_2\subset M$ by
\begin{eqnarray}\label{2.14}
{\mathcal K}_3&:=& \{t\in M\, |\, F_t
\textup{ has less than }\mu \textup{ singularities}\},\\
{\mathcal K}_2&:=& \textup{the closure in }M\textup{ of the set }\{t\in M\, |\,
F_t\textup{ has }\mu \label{2.15}\\
&&\textup{ singularities, but less than }\mu \textup{ critical values}\} . \nonumber
\end{eqnarray}
They are hypersurfaces in $M$.
The Lyashko-Looijenga map $LL$ restricts to a locally biholomorphic map
$LL:M-({\mathcal K}_3\cup {\mathcal K}_2)\to M_{LL}^{(\mu)}-{\mathcal D}_{LL}^{(\mu)}$
(because the $u_1,...,u_\mu$ are
local coordinates on $M-{\mathcal K}_3$),
it maps ${\mathcal K}_3\cup {\mathcal K}_2$ to ${\mathcal D}_{LL}^{(\mu)}$,
and it is a branched covering of order 3 respectively 2 at generic points of
${\mathcal K}_3$ respectively ${\mathcal K}_2$. All of this was proved by Lyashko \cite{Ly79}\cite{Ly84}
and Looijenga \cite{Lo74}. Now it is an easy consequence of the F-manifold structure.
At a generic point $t$ of ${\mathcal K}_3$, the germ of the F-manifold is of the type
$A_2A_1^{\mu-2}$. Here $A_2$ is the first in the countable series of irreducible
germs of massive F-manifolds \cite[ch. 4.2]{He02}.
\subsection{Distinguished bases and Stokes matrices of singularities}\label{s2.5}
Good references for distinguished bases are \cite{AGV88} and \cite{Eb07}.
We sketch their construction and properties.
Choose a universal unfolding of $f$,
a {\it good representative} $F:{\mathcal Y}\to T$ of it with base space $M$,
and a generic parameter $t\in M$. Then $F_t:Y_t\to T$,
with $T\subset {\mathbb C}$ the same disk as for the Milnor fibration $f:Y\to T$
above and with $Y_t\subset{\mathbb C}^{n+1}$,
is a {\it morsification} of $f$.
It has $\mu$ $A_1$-singularities,
and their critical values $u_1,...,u_\mu\in T$
are pairwise different. Their numbering is also a choice.
Now choose a value $\tau\in T\cap{\mathbb R}_{>0}-\{u_1,...,u_\mu\}$ and
a {\it distinguished system of paths}. That is
a system of $\mu$ paths $\gamma_j$, $j=1,...,\mu$, from
$u_j$ to $\tau$ which do not intersect except at $\tau$
and which arrive at $\tau$ in clockwise order.
Finally, shift from the $A_1$ singularity above each value $u_j$
the (up to sign unique) vanishing cycle along $\gamma_j$
to the Milnor lattice $Ml(f)=H_n(f^{-1}(\tau),{\mathbb Z})$,
and call the image $\delta_j$.
The tuple $\underline{\delta}=(\delta_1,...,\delta_\mu)$
is a ${\mathbb Z}$-basis of
$Ml(f)$. All such bases are called {\it distinguished bases}.
They form one orbit ${\mathcal B}(f)$ of an action of a semidirect product
$G_{sign,\mu}\rtimes\textup{Br}_\mu$.
Here $\textup{Br}_\mu$ is the braid group with $\mu$ strings,
see \cite{AGV88} or \cite{Eb07} for its action.
The {\it sign change group} $G_{sign,\mu}:=\{\pm 1\}^\mu$ acts simply
by changing the signs of the entries of the tuples
$(\delta_1,...,\delta_\mu)$.
The members of the distinguished bases are called
{\it vanishing cycles}.
The matrix $L(\underline{\delta}^t,\underline{\delta})
=(L(\delta_i,\delta_j))_{i,j=1,...,\mu}$
of the Seifert form with respect to a distinguished basis
$\underline{\delta}=(\delta_1,...,\delta_\mu)$
is a lower triangular matrix with $(-1)^{(n+1)(n+2)/2}$ on the diagonal.
The {\it Stokes matrix} of the distinguished basis $\underline{\delta}$
is by definition the upper triangular matrix in $M(\mu\times\mu,{\mathbb Z})$
\begin{eqnarray}\label{2.16}
S:=(-1)^{(n+1)(n+2)/2}\cdot L(\underline{\delta}^t,\underline{\delta})^t
\end{eqnarray}
with 1's on the diagonal. Then \eqref{2.1} and \eqref{2.2} give
\begin{eqnarray}\label{2.17}
M_h(\underline{\delta})&=&\underline{\delta}\cdot (-1)^{n+1}\cdot S^{-1}S^t,\\
I(\underline{\delta}^t,\underline{\delta})&=&(-1)^{n(n+1)/2}\cdot
(S+(-1)^nS^t).\label{2.18}
\end{eqnarray}
The {\it Coxeter-Dynkin diagram} of the distinguished basis $\underline{\delta}$
encodes $S$ in a geometric way. It has $\mu$ vertices which are numbered
from 1 to $\mu$. Between two vertices $i$ and $j$ with $i<j$
one draws
\begin{center}
\begin{tabular}{ll}
no edge & if $S_{ij}=0$, \\
$|S_{ij}|$ edges & if $S_{ij}<0$, \\
$S_{ij}$ dotted edges & if $S_{ij}>0$.
\end{tabular}
\end{center}
Coxeter-Dynkin diagrams of many singularities were calculated
by A'Campo, Ebeling, Gabrielov and Gusein-Zade. Some of them
can be found in \cite{Ga74}, \cite{Eb83} and \cite{Eb07}.
Each Coxeter-Dynkin diagram of any singularity is connected.
We will use this important result in lemma \ref{t2.3}.
There are three proofs of it, by Gabrielov \cite{Ga74},
Lazzeri \cite{La73} and L\^e \cite{Le73}.
The Picard-Lefschetz transformation on $Ml(f)$
of a vanishing cycle $\delta$ is
\begin{eqnarray}\label{2.19}
s_{\delta}(b)&:=&b-(-1)^{n(n+1)/2}\cdot I(\delta,b)\cdot \delta.
\end{eqnarray}
For $n$ even $I(\delta,\delta)=(-1)^{n(n+1)/2}\cdot 2$ and $s_\delta$
is the identity on the subspace in $Ml(f)$ orthogonal to $\delta$
and $-\id$ on ${\mathbb Z}\cdot\delta$. For $n$ odd $s_\delta$ is unipotent,
and the kernel of $s_\delta-\id$ has rank $\mu-1$.
In both cases $s_\delta$ determines $\delta$ up to the sign.
The monodromy $M_h$ is
\begin{eqnarray}\label{2.20}
M_h &=& s_{\delta_1}\circ ...\circ s_{\delta_\mu}
\end{eqnarray}
for any distinguished basis $\underline{\delta}=(\delta_1,...,\delta_\mu)$.
Let us formulate a correspondence for later use.
\begin{lemma}\label{t2.3}
The orbit under $G_{sign,\mu}\rtimes \textup{Br}_\mu$ of a tuple
\begin{eqnarray}\label{2.21}
((u_1,...,u_\mu),\textup{a distinguished system of paths},
\textup{a Stokes matrix }S)
\end{eqnarray}
where $u_1,...,u_\mu\in{\mathbb C}$ are pairwise different
is equivalent to the isomorphism class of a ${\mathbb Z}$-lattice bundle
of rank $\mu$ above ${\mathbb C}-\{u_1,...,u_\mu\}$.
The only automorphisms (which fix the base ${\mathbb C}-\{u_1,...,u_\mu\}$)
of this ${\mathbb Z}$-lattice bundle are $\pm\id$.
\end{lemma}
{\bf Proof:}
If a morsification $F_t$ with critical values $u_1,...,u_\mu$
and distinguished basis above the distinguished system of paths
with Stokes matrix $S$ exists,
then the ${\mathbb Z}$-lattice bundle is up to isomorphism
the flat extension to ${\mathbb C}-\{u_1,...,u_\mu\}$ of the middle homology bundle
\begin{eqnarray}\label{2.22}
\bigcup_{\tau\in T-\{u_1,...,u_\mu\}}H_n(F_t^{-1}(\tau),{\mathbb Z}).
\end{eqnarray}
If not, the ${\mathbb Z}$-lattice bundle is obtained from a case in \eqref{2.22}
by a suitable deformation.
The vanishing cycle near $u_j$ is up to its
sign uniquely determined by its Picard-Lefschetz transformation.
Any automorphism of the ${\mathbb Z}$-lattice bundle maps each of these
vanishing cycles to $\pm$
itself. As the Coxeter-Dynkin diagram is connected, the only
automorphisms of the ${\mathbb Z}$-lattice bundle are $\pm\id$.
\hfill $\Box$
\subsection{Thom-Sebastiani type results}\label{s2.6}
A result of Thom and Sebastiani
compares the Milnor lattices and monodromies of
the singularities $f=f(x_0,...,x_n),g=g(y_0,...,y_m)$ and
$f+g=f(x_0,...,x_n)+g(x_{n+1},...,x_{n+m+1})$.
There are extensions by Deligne for the Seifert form and
by Gabrielov for distinguished bases.
All results can be found in \cite[I.2.7]{AGV88}.
They are restated here.
There is a canonical isomorphism
\begin{eqnarray}\label{2.23}
\Phi:Ml(f+g)&\stackrel{\cong}{\longrightarrow} &Ml(f)\otimes Ml(g),\\
\textup{with } M_h(f+g)&\cong & M_h(f)\otimes M_h(g) \label{2.24}\\
\textup{and }
L(f+g)&\cong& (-1)^{(n+1)(m+1)}\cdot L(f)\otimes L(g).\label{2.25}
\end{eqnarray}
If $\underline{\delta}=(\delta_1,...,\delta_{\mu(f)})$
and $\underline{\gamma}=(\gamma_1,...,\gamma_{\mu(g)})$ are
distinguished bases of $f$ and $g$ with Stokes matrices
$S(f)$ and $S(g)$, then
$$\Phi^{-1}(\delta_1\otimes \gamma_1,...,
\delta_1\otimes \gamma_{\mu(g)},
\delta_2\otimes \gamma_1,...,
\delta_2\otimes \gamma_{\mu(g)},
...,
\delta_{\mu(f)}\otimes \gamma_1,...,
\delta_{\mu(f)}\otimes \gamma_{\mu(g)})$$
is a distinguished basis of $Ml(f+g)$,
that means, one takes the vanishing cycles
$\Phi^{-1}(\delta_i\otimes \gamma_j)$ in the lexicographic order.
Then by \eqref{2.16} and \eqref{2.25}, the matrix
\begin{eqnarray}\label{2.26}
S(f+g)=S(f)\otimes S(g)
\end{eqnarray}
(where the tensor product is defined
so that it fits to the lexicographic order)
is the Stokes matrix of this distinguished basis.
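Concretely, with the lexicographic order the tensor product is the
Kronecker product of matrices,
\begin{eqnarray*}
\left(S(f)\otimes S(g)\right)_{(i-1)\mu(g)+k,\,(j-1)\mu(g)+l}
=S(f)_{ij}\cdot S(g)_{kl}.
\end{eqnarray*}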
In the special case $g=x_{n+1}^2$,
the function germ $f+g=f(x_0,...,x_n)+x_{n+1}^2\in {\mathcal O}_{{\mathbb C}^{n+2},0}$
is called {\it stabilization} or {\it suspension} of $f$.
As there are only two isomorphisms $Ml(x_{n+1}^2)\to{\mathbb Z}$,
and they differ by a sign, there are two equally canonical
isomorphisms $Ml(f)\to Ml(f+x_{n+1}^2)$,
and they differ just by a sign.
Therefore automorphisms and bilinear forms on $Ml(f)$
can be identified with automorphisms and bilinear forms on
$Ml(f+x_{n+1}^2)$. In this sense
\begin{eqnarray}\label{2.27}
L(f+x_{n+1}^2) = (-1)^n\cdot L(f)\quad\textup{and}\quad
M_h(f+x_{n+1}^2)= - M_h(f)
\end{eqnarray}
\cite[I.2.7]{AGV88}, and $G_{\mathbb Z}(f+x_{n+1}^2)=G_{\mathbb Z}(f)$.
The Stokes matrix $S$ does not change under stabilization.
\section{Marked singularities and their symmetries}\label{s3}
\setcounter{equation}{0}
\subsection{Symmetries of singularities}\label{s3.1}
Here we will review results from \cite[13.1 and 13.2]{He02}
on symmetries of singularities. A review with slightly
simplified proofs was already given in \cite[ch. 6]{He11}.
Let $f:({\mathbb C}^{n+1},0)\to({\mathbb C},0)$ be a singularity, and
let $F:{\mathcal Y}\to T$ be a good representative with base space
$M\subset{\mathbb C}^\mu$ (with coordinates $t=(t_1,...,t_\mu)$)
of a universal unfolding $({\mathbb C}^{n+1}\times M,0)\to({\mathbb C},0)$.
Let
\begin{eqnarray*}
{\mathcal R}:=\{\varphi:({\mathbb C}^{n+1},0)\to({\mathbb C}^{n+1},0)\, |\, \varphi
\textup{ biholomorphic}\}
\end{eqnarray*}
be the group of all germs of coordinate changes, and let
\begin{eqnarray}\label{3.1}
{\mathcal R}^f:=\{\varphi\in{\mathcal R}\, |\, f\circ\varphi=f\}
\end{eqnarray}
be the group of symmetries of $f$. It is possibly
$\infty$-dimensional, but the group $j_k{\mathcal R}^f$
of $k$-jets in ${\mathcal R}^f$ is an algebraic group for any $k\in{\mathbb Z}_{\geq 1}$.
Let
\begin{eqnarray}\label{3.2}
R_f:= j_1{\mathcal R}^f/(j_1{\mathcal R}^f)^0
\end{eqnarray}
be the finite group of components of $j_1{\mathcal R}^f$. It is easy to see
that $R_f=j_k{\mathcal R}^f/(j_k{\mathcal R}^f)^0$ for any $k\in{\mathbb Z}_{\geq 1}$
\cite[Lemma 13.10]{He02}. Recall the definition of $G_{\mathbb Z}(f)$ in \eqref{2.3}.
There is a natural homomorphism
\begin{eqnarray}\label{3.3}
()_{hom}:{\mathcal R}^f\to G_{\mathbb Z}(f),\quad \varphi\mapsto (\varphi)_{hom}.
\end{eqnarray}
Let $\Aut_M:=\Aut((M,0),\circ,e,E)$ be the group of automorphisms
of the germ $(M,0)$ as a germ of an F-manifold with Euler field.
It is a finite group because $M$ is a massive F-manifold
\cite[Theorem 4.14]{He02}. We claim that there is also a natural
homomorphism
\begin{eqnarray}\label{3.4}
()_M:{\mathcal R}^f\to \Aut_M,\quad \varphi\mapsto (\varphi)_M.
\end{eqnarray}
It arises as follows. $F\circ\varphi^{-1}$ is a universal
unfolding of $f$ with the same base space $(M,0)$ as the universal
unfolding $F$. A morphism which induces $F\circ \varphi^{-1}$ by $F$
is given by a pair $(\Phi,(\varphi)_M)$ where $(\varphi)_M\in\Aut_M$ and where
$\Phi:({\mathbb C}^{n+1}\times M,0)\to({\mathbb C}^{n+1}\times M,0)$ is a
biholomorphic map germ with
\begin{eqnarray}\label{3.5}
F\circ \varphi^{-1}=F\circ\Phi,\quad \Phi|_{({\mathbb C}^{n+1}\times\{0\},0)}=\id,
\quad \pr_M\circ \Phi = (\varphi)_M\circ \pr_M.
\end{eqnarray}
Here $\Phi$ is not unique, but $(\varphi)_M$ is unique because
$\Aut_M$ is finite and the differential of $(\varphi)_M$ at $T_0M$
is determined by the action of $\varphi$ on the Jacobi algebra
${\mathcal O}_{{\mathbb C}^{n+1},0}/J_f\cong T_0M$. The map $\Phi\circ\varphi$
satisfies
\begin{eqnarray}\label{3.6}
F\circ (\Phi\circ\varphi)=F\quad\textup{and}\quad
\pr_M\circ (\Phi\circ\varphi)=(\varphi)_M\circ \pr_M
\end{eqnarray}
and is an extension of the symmetry $\varphi$ of $f$ to a symmetry of $F$.
The following theorem is contained in \cite[Theorem 13.11]{He02}
and is rewritten in \cite[Theorem 6.1]{He11}.
\begin{theorem}\label{t3.1}
As above, let $f:({\mathbb C}^{n+1},0)\to({\mathbb C},0)$ be an isolated hypersurface singularity,
and let $F:({\mathbb C}^{n+1}\times M,0)\to({\mathbb C},0)$ be a universal unfolding
with base space $(M,0)$.
\medskip
(a) The homomorphism $()_M:{\mathcal R}^f\to\Aut_M$ factors through $R_f$
to a homomorphism $()_M:R_f\to\Aut_M$.
If $\mult f\geq 3$ then $()_M:R_f\to\Aut_M$ is
an isomorphism and then $j_1{\mathcal R}^f=R_f$.
If $\mult f=2$ then $()_M:R_f\to\Aut_M$ is surjective with kernel of
order 2. If $f=g(x_0,...,x_{n-1})+x_n^2$ then the kernel is generated by
the class of the symmetry $(x\mapsto (x_0,...,x_{n-1},-x_n))$.
\medskip
(b) The homomorphism $()_{hom}:{\mathcal R}^f\to G_{\mathbb Z}(f)$ factors through $R_f$
to an injective homomorphism $()_{hom}:R_f\to G_{\mathbb Z}(f)$.
Let $G^{smar}_{\mathcal R}(f)\subset G_{\mathbb Z}(f)$ denote its image.
It contains $-\id$ if and only if $\mult f=2$.
If $f=g(x_0,...,x_{n-1})+x_n^2$ then
$-\id=(x\mapsto (x_0,...,x_{n-1},-x_n))_{hom}$.
\medskip
(c) The homomorphism
\begin{eqnarray}\label{3.7}
()_M\circ ()_{hom}^{-1}:G^{smar}_{\mathcal R}(f)\to\Aut_M
\end{eqnarray}
is an isomorphism if $\mult f\geq 3$. It is surjective with kernel
$\{\pm\id\}$ if $\mult f=2$.
\end{theorem}
Consider a singularity $f:({\mathbb C}^{n+1},0)\to({\mathbb C},0)$ and a good representative
$F:{\mathcal Y}\to T$ of a universal unfolding with base space $M$.
One can choose it such that any element of $R_f$
lifts to an automorphism of $F$.
Consider the ${\mathbb Z}$-lattice bundle
\begin{eqnarray}\label{3.8}
\bigcup_{(\tau,t)\in T\times M-{\mathcal D}} H_n(F_t^{-1}(\tau),{\mathbb Z}).
\end{eqnarray}
\begin{definition/lemma}\label{t3.2}
(a) (Definition)
We call the flat extension of the ${\mathbb Z}$-lattice bundle in \eqref{3.8}
to ${\mathbb C}\times M-{\mathcal D}$ the canonical ${\mathbb Z}$-lattice bundle of $M$.
\medskip
(b) (Lemma) Any element of $\Aut_M$ lifts to an automorphism
of the canonical ${\mathbb Z}$-lattice bundle of $M$.
The lift is unique up to $\pm 1$.
\end{definition/lemma}
{\bf Proof:}
The surjectivity of the homomorphism $()_M:R_f\to \Aut_M$
implies that any automorphism of $((M,0),\circ,e,E)$ lifts
to an automorphism of the unfolding and thus to an automorphism
of the ${\mathbb Z}$-lattice bundle in \eqref{3.8}.
Because of lemma \ref{t2.3}, the only automorphisms of the ${\mathbb Z}$-lattice
bundle which fix the base are $\pm\id$.
Therefore any element of $\Aut_M$ has only two lifts, and they
differ by $\pm 1$.
\hfill$\Box$
\bigskip
Part (b) justifies part (a): The bundle depends only on the isomorphism
class of the germ $((M,0),\circ,e,E)$.
Instead of lemma \ref{t2.3}, we could have used theorem \ref{t3.1} (c).
But that would give only that any element of $\Aut_M$ has two canonical
lifts, which differ by $\pm 1$, not that they are the only lifts.
In the case of a quasihomogeneous singularity,
the finite group of quasihomogeneous symmetries
is a natural lift of $R_f$. This will be useful for the calculations
in section \ref{s5}.
\begin{theorem}\label{t3.3}\cite[Theorem 13.13]{He02}
\cite[Theorem 6.2]{He11}
Let $f\in {\mathbb C}[x_0,...,x_n]$ be a quasihomogeneous polynomial with an isolated singularity
at $0$ and weights $w_0,...,w_n\in{\mathbb Q}\cap(0,\frac{1}{2}]$ and weighted degree $1$.
Suppose that $w_0\leq ...\leq w_{n-1}<\frac{1}{2}$ (then $f\in {\bf m}^3$
if and only if $w_n<\frac{1}{2}$). Let $G_w$ be the algebraic group of
quasihomogeneous coordinate changes, that means, those which respect ${\mathbb C}[x_0,...,x_n]$
and the grading by the weights $w_0,...,w_n$ on it. Then
$$R_f \cong \Stab_{G_w}(f).$$
\end{theorem}
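For example (a standard case, spelled out for illustration):
for the $E_6$-singularity $f=x_0^3+x_1^4$ with weights
$(w_0,w_1)=(\frac{1}{3},\frac{1}{4})$, no monomial except $x_i$ itself has
weight $w_i$, so any element of $G_w$ is diagonal, and
\begin{eqnarray*}
R_f\cong \Stab_{G_w}(f)=\{(x_0,x_1)\mapsto (\zeta_3x_0,\zeta_4x_1)\, |\,
\zeta_3^3=\zeta_4^4=1\}\cong{\mathbb Z}/12{\mathbb Z}.
\end{eqnarray*}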
\begin{remark}\label{t3.4}
Let $f\in{\mathbb C}[x_0,...,x_n]$ be a quasihomogeneous polynomial
with an isolated singularity at 0 and weights
$w_0,...,w_n\in(0,\frac{1}{2}]$ and weighted degree 1.
Then
\begin{eqnarray*}
\varphi_1:=(x\mapsto (e^{2\pi iw_0}x_0,...,e^{2\pi iw_n}x_n))
\in\textup{Stab}_{G_w}(f)
\end{eqnarray*}
satisfies $(\varphi_1)_{hom}=M_h$.
Now let $F(x,t)=f(x)+\sum_{i=1}^\mu t_im_i$
be a universal unfolding as in \eqref{2.9} with
$m_i$ a weighted homogeneous polynomial of weighted degree
$\deg_{\bf w}m_i$. Then $\deg_{\bf w}t_i:=1-\deg_{\bf w}m_i$,
\begin{eqnarray*}
(\varphi_1)_M=(t\mapsto (e^{2\pi i \deg_{\bf w}t_1}t_1,...,
e^{2\pi i \deg_{\bf w}t_\mu}t_\mu)),
\end{eqnarray*}
and the pair $(\Phi_1,(\varphi_1)_M)$ with
\begin{eqnarray*}
\Phi_1 = ((x,t)\mapsto (x,(\varphi_1)_M(t)))
\end{eqnarray*}
induces $F\circ \varphi_1^{-1}$ by $F$, i.e.
\eqref{3.6} holds:
\begin{eqnarray*}
F\circ (\Phi_1\circ\varphi_1)=F,\quad
\pr_M\circ (\Phi_1\circ\varphi_1) = (\varphi_1)_M\circ\pr_M.
\end{eqnarray*}
In particular, $M_h\in G^{smar}_{\mathcal R}(f)$ and
$()_M\circ ()_{hom}^{-1}(M_h) =(\varphi_1)_M$.
\end{remark}
\subsection{Marked singularities and their moduli spaces}\label{s3.2}
In \cite{He11} the notion of a {\it marked} singularity was
defined and results from \cite{He02} on moduli spaces of right equivalence
classes of singularities were lifted to marked singularities.
Here we recall the central notions and results.
\begin{definition}\label{t3.5}
Fix one reference singularity $f^{(0)}:({\mathbb C}^{n+1},0)\to({\mathbb C},0)$.
\medskip
(a) Its $\mu$-homotopy class is the set of all singularities
$f:({\mathbb C}^{n+1},0)\to({\mathbb C},0)$ for which a $C^\infty$-family $f_s$, $s\in[0,1]$,
of singularities with $\mu(f_s)=\mu(f^{(0)})$ and $f_0=f^{(0)}$ and $f_1=f$ exists.
\medskip
(b) A {\it marked} singularity is a pair $(f,\pm\rho)$ with $f$ in the
$\mu$-homotopy class of $f^{(0)}$ and $\rho:(Ml(f),L)\to (Ml(f^{(0)}),L)$ an isomorphism
of Milnor lattices with Seifert forms. Here $\pm\rho$ means
the set $\{\rho,-\rho\}$, so neither $\rho$ nor $-\rho$ is preferred.
\medskip
(c) Two singularities $f_1$ and $f_2$ are right equivalent if a
coordinate change $\varphi\in{\mathcal R}$ with $f_1=f_2\circ \varphi$ exists.
Notation: $f_1\sim_{\mathcal R} f_2$.
Two marked singularities $(f_1,\pm\rho_1)$ and $(f_2,\pm\rho_2)$
are right equivalent if a coordinate change $\varphi\in{\mathcal R}$ with
$f_1=f_2\circ \varphi$ and $\rho_1=\varepsilon\cdot \rho_2\circ(\varphi)_{hom}$
for some $\varepsilon\in\{\pm 1\}$ exists.
Notation: $(f_1,\pm\rho_1)\sim_{\mathcal R}(f_2,\pm\rho_2)$.
\medskip
(d) The moduli spaces $M_\mu(f^{(0)})$ and $M_\mu^{mar}(f^{(0)})$ are defined
as the sets
\begin{eqnarray}\label{3.9}
M_\mu(f^{(0)})&:=& \{\textup{the }\mu\textup{-homotopy class of }f^{(0)}\}/\sim_{\mathcal R},\\
M_\mu^{mar}(f^{(0)})&:=& \{\textup{the marked singularities }(f,\pm\rho)\}/\sim_{\mathcal R}.
\label{3.10}
\end{eqnarray}
\end{definition}
A central result in \cite{He02} is that the moduli space $M_\mu(f^{(0)})$
has the structure of an analytic geometric quotient.
In \cite{He11} this result is extended to the space $M_{\mu}^{mar}(f^{(0)})$,
and it is shown that $M_{\mu}^{mar}(f^{(0)})$ is locally isomorphic
to a {\it $\mu$-constant stratum}. This is recalled in theorem \ref{t3.6} below.
Let $f:({\mathbb C}^{n+1},0)\to({\mathbb C},0)$ be a singularity and let
$F:({\mathbb C}^{n+1}\times M(f),0)\to({\mathbb C},0)$ be a universal unfolding of $f$
with base space $(M(f),0)$. Then the $\mu$-constant stratum is the
germ $(S_\mu(f),0)\subset (M(f),0)$ of the subset
\begin{eqnarray}\label{3.11}
S_\mu(f):=\{t\in M\, |\, F_t \textup{ has only one singularity }x_0
\textup{ and }F_t(x_0)=0\}.
\end{eqnarray}
Here $F:{\mathcal Y}\to T$ is a good representative of the germ $F$
with base space $M$. Obviously $(S_\mu(f),0)$ carries a natural structure
as a reduced complex space germ.
In \cite[Theorem]{He02} it is equipped furthermore with a
natural complex structure, which is not necessarily reduced.
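For example, for the $A_2$-singularity with universal unfolding
$F(x_0,t_1,t_2)=x_0^3+t_2x_0+t_1$ (a standard choice), $F_t$ has two
$A_1$-singularities for $t_2\neq 0$ and one $A_2$-singularity with critical
value $t_1$ for $t_2=0$; therefore
\begin{eqnarray*}
S_\mu(f)=\{0\}\subset M,
\end{eqnarray*}
in accordance with the modality 0 of the ADE-singularities.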
\begin{theorem}\label{t3.6}
Fix one reference singularity $f^{(0)}$.
\medskip
(a) (\cite[Theorem 13.11]{He02} and \cite[Theorem 4.3]{He11})
$M_\mu(f^{(0)})$ and $M_\mu^{mar}(f^{(0)})$ are in a
natural way complex spaces.
They can be constructed with the underlying reduced complex structures
as analytic geometric quotients.
\medskip
(b) For any $f$ in the $\mu$-homotopy class of $f^{(0)}$, the germ
$(M_\mu(f^{(0)}),[f])$ is isomorphic with its canonical complex structure
to the quotient $(S_\mu(f),0)/\Aut_{M(f)}$ (the action of $\Aut_{M(f)}$ on $(M(f),0)$
restricts to an action on $(S_\mu(f),0)$).
For any marked singularity $(f,\pm \rho)$, the germ
$(M_\mu^{mar}(f^{(0)}),[(f,\pm\rho)])$ is isomorphic with its canonical complex
structure to the $\mu$-constant stratum $(S_\mu(f),0)$.
\medskip
(c) For any $\chi\in G_{\mathbb Z}(f^{(0)})$, the map
\begin{eqnarray}\label{3.12}
\chi_{mar}:M_\mu^{mar}(f^{(0)})&\to& M_\mu^{mar}(f^{(0)}),\\
{}[(f,\pm\rho)]&\mapsto& [(f,\pm\chi\circ\rho)],\nonumber
\end{eqnarray}
is an automorphism of $M_\mu^{mar}(f^{(0)})$. The action
\begin{eqnarray}\label{3.13}
G_{\mathbb Z}(f^{(0)})\times M_\mu^{mar}(f^{(0)})&\to& M_\mu^{mar}(f^{(0)}),\\
(\chi,[(f,\pm\rho)])&\mapsto& \chi_{mar}([(f,\pm\rho)])=[(f,\pm\chi\circ\rho)],\nonumber
\end{eqnarray}
is a group action from the left.
It is properly discontinuous. The quotient $M_\mu^{mar}(f^{(0)})/G_{\mathbb Z}(f^{(0)})$
is the moduli space $M_\mu(f^{(0)})$ of unmarked singularities,
with its canonical complex structure.
\medskip
(d) (Definition) Recall the definition of $G_{\mathcal R}^{smar}(f)$ in theorem \ref{t3.1} (b).
Define
\begin{eqnarray}\label{3.14}
G^{mar}_{\mathcal R}(f):=\{\pm\psi\, |\, \psi\in G^{smar}_{\mathcal R}(f)\}\subset G_{\mathbb Z}(f).
\end{eqnarray}
Remark: By theorem \ref{t3.1} this is equal to $G^{smar}_{\mathcal R}(f)$ if $\mult(f)=2$,
and it contains $G^{smar}_{\mathcal R}(f)$ as an index 2 subgroup if $\mult(f)\geq 3$.
\medskip
(e) For any point $[(f,\pm\rho)]\in M_\mu^{mar}(f^{(0)})$,
the stabilizer of it in $G_{\mathbb Z}(f^{(0)})$ is the finite group
\begin{eqnarray}\label{3.15}
\rho\circ G^{mar}_{\mathcal R}(f)\circ \rho^{-1}\subset G_{\mathbb Z}(f^{(0)}).
\end{eqnarray}
\end{theorem}
\begin{remarks}\label{t3.7}
(i) In \cite{He11} also the notion of a {\it strongly marked} singularity
is defined. It is a pair $(f,\rho)$ with $f$, $\rho$ and
$f^{(0)}$ as in definition
\ref{t3.5} (b). The moduli space $M_\mu^{smar}(f^{(0)})$
of strongly marked singularities behaves
as well as $M_\mu^{mar}(f^{(0)})$ if the following holds:
Either any singularity in the $\mu$-homotopy class of $f^{(0)}$ has multiplicity $\geq 3$,
or any singularity in the $\mu$-homotopy class of $f^{(0)}$ has multiplicity 2.
We expect that to hold, but we do not know it. If it does not hold
then $M_\mu^{smar}(f^{(0)})$ is not Hausdorff, see \cite[Theorem 4.3 (e)]{He11}.
We do not need strongly marked singularities here.
\medskip
(ii) In \cite{He11}\cite{GH17-1} and \cite{GH17-2} the moduli space
$M_\mu^{mar}(f^{(0)})$ for any singularity $f^{(0)}$ with modality $\leq 2$
was studied. It turned out that almost all of them are connected, but not all,
namely not those for $f^{(0)}$ in the subseries
$W_{1,12r}^\sharp,S_{1,10r}^\sharp,U_{1,9r},E_{3,18r},
Z_{1,14r},Q_{2,12r},W_{1,12r},S_{1,10r}$
of the eight bimodal series.
This disproved conjecture 3.2 (a) in \cite{He11},
which stated that $M_\mu^{mar}(f^{(0)})$ is connected for any singularity.
But for all other singularities $f^{(0)}$ with modality $\leq 2$,
the space $M_\mu^{mar}(f^{(0)})$ is connected.
Because of \cite[Theorem 4.4 (a)]{He11}, the connectedness of
$M_\mu^{mar}(f^{(0)})$ is equivalent to the statement that any element
of $G_{\mathbb Z}(f^{(0)})$ arises from geometry, namely that it is
$(\pm 1)\cdot$the transversal monodromy of
a suitable $\mu$-constant family $f_s$, $s\in[0,1]$, with
$f_0=f_1=f^{(0)}$ (a $C^\infty$ family of singularities $f_s$ with
$\mu(f_s)=\mu(f^{(0)})$).
\medskip
(iii) In the case of the ADE-singularities, which have modality 0,
$M_\mu^{mar}(f^{(0)})$ is simply a point \cite{He11}. In the case of the simple
elliptic singularities, which have modality one and which are
parametrized by elliptic curves, $M_\mu^{mar}(f^{(0)})$ is isomorphic
to $\H$ \cite{GH17-1}. In both cases, the connectedness of
$M_\mu^{mar}(f^{(0)})$ will be important in the proof of the main
theorem in section \ref{s7}.
\end{remarks}
\subsection{A thickening of the moduli space of marked singularities}\label{s3.3}
Fix one reference singularity $f^{(0)}$. By theorem \ref{t3.6} (b), locally
at $[(f,\pm\rho)]$, the moduli space $M_\mu^{mar}(f^{(0)})$
is isomorphic to the $\mu$-constant stratum $(S_\mu(f),0)\subset (M(f),0)$
of the singularity $f$. In \cite{GH18} we will show the following.
\begin{theorem}\label{t3.8}
Fix one reference singularity $f^{(0)}$.
\medskip
(a) The moduli space $M_\mu^{mar}(f^{(0)})$ of marked singularities can
be extended globally to a $\mu$-dimensional F-manifold
$M^{mar}(f^{(0)})\supset M_\mu^{mar}(f^{(0)})$ with the following properties.
\begin{list}{}{}
\item[(i)]
For any point $[(f,\pm\rho)]\in M_\mu^{mar}(f^{(0)})$, a certain neighborhood
$U_{[(f,\pm\rho)]}$ of it in $M^{mar}(f^{(0)})$
and an isomorphism $\psi_{[(f,\pm\rho)]}: M(f)\to U_{[(f,\pm\rho)]}$
of F-manifolds exist,
where $M(f)$ is the base space of a good representative
of a universal unfolding of $f$.
\item[(ii)]
$M^{mar}(f^{(0)})$ is covered by these neighborhoods $U_{[(f,\pm\rho)]}$.
\item[(iii)]
The action of $G_{\mathbb Z}(f^{(0)})$ on $M_\mu^{mar}(f^{(0)})$ extends to an action
of $G_{\mathbb Z}(f^{(0)})$ on this F-manifold with Euler field, and the map
\begin{eqnarray}\label{3.16}
G_{\mathbb Z}(f^{(0)})\to \Aut(M^{mar}(f^{(0)}),\circ,e,E)
\end{eqnarray}
is surjective with kernel $\{\pm \id\}$.
\end{list}
\medskip
(b) Let ${\mathcal D}^{mar}\subset {\mathbb C}\times M^{mar}(f^{(0)})$ be the discriminant
\begin{eqnarray*}
{\mathcal D}^{mar}:=\{(\tau,t)\in {\mathbb C}\times M^{mar}(f^{(0)})\, |\,
E\circ:T_tM\to T_tM\textup{ has eigenvalue }\tau\}.
\end{eqnarray*}
It is a hypersurface.
$M^{mar}(f^{(0)})$ comes equipped with a ${\mathbb Z}$-lattice bundle
$H_{\mathbb Z}\to ({\mathbb C}\times M^{mar}(f^{(0)})-{\mathcal D}^{mar})$ of rank $\mu$
with the following properties.
\begin{list}{}{}
\item[(i)]
For any point $[(f,\pm\rho)]\in M_\mu^{mar}(f^{(0)})$
and a good representative $F:{\mathcal Y}\to T$ of a universal unfolding of $f$
with base space $M(f)$ and the isomorphism
$\psi_{[(f,\pm\rho)]}:M(f)\to U_{[(f,\pm\rho)]}$ as in (a)(i),
this isomorphism lifts to an isomorphism from the canonical
${\mathbb Z}$-lattice bundle above $M(f)$ in definition \ref{t3.2} (a)
to the restriction of $H_{\mathbb Z}$ to ${\mathbb C}\times U_{[(f,\pm\rho)]}-{\mathcal D}^{mar}$.
(Because of lemma \ref{t3.2} (b), this lift is unique up to $\pm 1$).
\item[(ii)]
Let $r:M^{mar}(f^{(0)})\to{\mathbb R}_{>0}$ be a $C^\infty$ function with
${\mathcal D}^{mar}\subset \bigcup_{t\in M^{mar}(f^{(0)})}\Delta_{r(t)}\times \{t\}$.
Then the restriction of $H_{\mathbb Z}$ to
$\bigcup_{t\in M^{mar}(f^{(0)})}{\mathbb R}_{>r(t)}\times \{t\}$ has trivial monodromy,
i.e. it is a trivial flat ${\mathbb Z}$-lattice bundle.
\item[(iii)]
Write $t^{(0)}:= [(f^{(0)},\pm\id)]\in M^{mar}_\mu(f^{(0)})$.
For any $t=[(f,\pm\rho)]\in M^{mar}_\mu(f^{(0)})$ and any small $\tau>0$,
the following diagram of isomorphisms commutes,
\begin{eqnarray}\label{3.17}
\begin{xy}
\xymatrix{ H_{{\mathbb Z},(\tau,t)} \ar[r]^{\cong}_{(i)} \ar[d]_{\cong}^{(ii)}
& Ml(f) \ar[d]_{\cong}^{\pm\rho}\\
H_{{\mathbb Z},(\tau,t^{(0)})} \ar[r]^{\cong}_{(i)} & Ml(f^{(0)}) }
\end{xy}
\end{eqnarray}
\item[(iv)]
The action of $G_{\mathbb Z}(f^{(0)})$ on $M^{mar}(f^{(0)})$ extends to an action on the
${\mathbb Z}$-lattice bundle $H_{\mathbb Z}$ (the action of $G_{\mathbb Z}(f^{(0)})$ on the first factor ${\mathbb C}$
of ${\mathbb C}\times M^{mar}(f^{(0)})$ is trivial by definition).
\end{list}
\end{theorem}
As the paper \cite{GH18} is not yet available, we will prove this theorem for
the cases of interest here, the simple and the simple elliptic singularities,
directly in section \ref{s4}. See theorem \ref{t4.3}.
Besides ${\mathcal D}^{mar}\subset {\mathbb C}\times M^{mar}(f^{(0)})$, there are two further
distinguished hypersurfaces, the caustic ${\mathcal K}_3^{mar}\subset M^{mar}(f^{(0)})$
and the Maxwell stratum ${\mathcal K}_2^{mar}\subset M^{mar}(f^{(0)})$,
\begin{eqnarray}
{\mathcal K}_3^{mar}&:=& \{t\in M^{mar}(f^{(0)})\, |\, T_tM^{mar}(f^{(0)})
\textup{ decomposes into }\nonumber \\
&&\textup{less than }\mu\textup{ irreducible local algebras}\},\label{3.18}\\
{\mathcal K}_2^{mar}&:=& \textup{the closure of the set }
\{t\in M^{mar}(f^{(0)})-{\mathcal K}_3^{mar}\, |\, \textup{some} \nonumber \\
&&\textup{eigenvalues of }E\circ:T_tM\to T_tM\textup{ coincide}\}.\label{3.19}
\end{eqnarray}
\subsection{A Looijenga-Deligne map for distinguished bases}\label{s3.4}
Looijenga \cite{Lo74} studied in the case of the simple singularities
a relationship between distinguished bases and the base space of a universal
unfolding. His idea carries over to the F-manifold $M^{mar}(f^{(0)})$
in theorem \ref{t3.8} for an arbitrary $\mu$-homotopy class of singularities.
We describe the idea here in this generality.
In section \ref{s7} we will study it in the cases
of the simple and the simple elliptic singularities.
In \cite{GH18} we will study it in the general case.
The set ${\mathcal B}(f)$ of distinguished bases of the
Milnor lattice $Ml(f)$ of a singularity $f$ was constructed
in subsection \ref{s2.5}
by choosing {\it one} morsification $F_t$ of $f$
and considering {\it all possible} distinguished systems of paths.
Following Looijenga, now we want to fix {\it one} distinguished system of paths
and consider {\it all possible} morsifications.
The following two definitions make this precise.
\begin{definition}\label{t3.9}
(a) Fix a tuple $(u_1,...,u_\mu)\in{\mathbb C}^\mu$ with $u_i\neq u_j$ for $i\neq j$.
The {\it good ordering} of it is the lexicographic ordering
$(u_{\sigma(1)},...,u_{\sigma(\mu)})$ by
(imaginary part,$-$real part). That means, the corresponding {\it good} permutation
$\sigma\in S_\mu$ is uniquely determined by
\begin{eqnarray}\label{3.20}
i<j\iff
\left\{\begin{array}{ll}\Imm(u_{\sigma(i)})<\Imm(u_{\sigma(j)})&\textup{ or }\\
\Imm(u_{\sigma(i)})=\Imm(u_{\sigma(j)})&\textup{ and }
\Ree(u_{\sigma(i)})>\Ree(u_{\sigma(j)}). \end{array}\right.
\end{eqnarray}
(b) Fix a tuple $(u_1,...,u_\mu)$ as in (a) with good permutation $\sigma\in S_\mu$,
and fix additionally a $\tau\in {\mathbb R}_{>0}$ with $\tau>\max_i|u_i|$.
A {\it good} distinguished system of paths is a distinguished system
of paths $\gamma_1,...,\gamma_\mu$ such that $\gamma_j$ goes from $u_{\sigma(j)}$
to $\tau$.
\end{definition}
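The good ordering in (a) can be made concrete in a few lines. The following Python helper is an illustration added here (not part of the surrounding text; the function name is ours); it computes the good permutation $\sigma$ with 0-based indices.

```python
# Illustration (not from the paper): compute the good permutation sigma of
# definition 3.9 (a) by sorting lexicographically by
# (imaginary part, -real part). Indices are 0-based here.
def good_permutation(u):
    """Return sigma as a list of indices such that u[sigma[0]], ...,
    u[sigma[mu-1]] is the good ordering of the tuple u."""
    return sorted(range(len(u)), key=lambda k: (u[k].imag, -u[k].real))

u = [1 + 1j, -2 + 1j, 3 - 1j, 0j]
sigma = good_permutation(u)
# 3-1j has the smallest imaginary part, then 0; among 1+1j and -2+1j
# (equal imaginary parts) the larger real part comes first.
assert sigma == [2, 3, 0, 1]
```

Note that Python's `sorted` is stable, but stability is irrelevant here since the $u_i$ are pairwise distinct.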
For a fixed tuple $(u_1,...,u_\mu,\tau)$ as above,
all good distinguished systems of paths are homotopy equivalent
with respect to a natural notion of homotopy equivalence.
And if $F_t$ is a morsification of a singularity $f$ with
critical values $u_1,...,u_\mu$, all good distinguished systems
of paths give the same distinguished basis up to the action
of the sign group $G_{sign,\mu}$.
\begin{definition}\label{t3.10}
Fix one reference singularity $f^{(0)}$.
\medskip
(a) The set of {\it Stokes walls}
within the space $M^{mar}(f^{(0)})$ in theorem \ref{t3.8} is the set
\begin{eqnarray}\label{3.21}
W_{Stokes}&:=& \{t\in M^{mar}(f^{(0)})\, |\,
\textup{the eigenvalues }u_1,...,u_\mu\textup{ of }\\
&& E\circ:T_tM\to T_tM \textup{ satisfy }\Imm(u_i)=\Imm(u_j)\textup{ for some }
i\neq j\}.\nonumber
\end{eqnarray}
The set $W_{Stokes}$ of Stokes walls is a real codimension 1 subvariety and
contains ${\mathcal K}_3^{mar}\cup {\mathcal K}_2^{mar}$. The components of its complement
$M^{mar}(f^{(0)})-W_{Stokes}$ are called {\it Stokes regions}.
Let $R_{Stokes}$ be the set of Stokes regions,
and let $R_{Stokes}^0$ be the subset of those Stokes regions
which are in the component
$M^{mar}(f^{(0)})^0$ of $M^{mar}(f^{(0)})$
which contains $[(f^{(0)},\pm\id)]$.
\medskip
(b) The set ${\mathcal B}^{ext}(f^{(0)})$ is the orbit of ${\mathcal B}(f^{(0)})$ under
the action of $G_{\mathbb Z}$. It contains (all?) ${\mathbb Z}$-bases $(\delta_1,...,\delta_\mu)$
of $Ml(f^{(0)})$ whose elements $\delta_j$ are vanishing cycles and such that
$s_{\delta_1}\circ ...\circ s_{\delta_\mu}=M_h$.
It consists of $G_{sign,\mu}\rtimes \textup{Br}_\mu$ orbits.
One of these orbits is the set ${\mathcal B}(f^{(0)})$ of distinguished bases.
\medskip
(c) The {\it Looijenga-Deligne map} is the map
\begin{eqnarray}\label{3.22}
LD:R_{Stokes}\to {\mathcal B}^{ext}(f^{(0)})/G_{sign,\mu}
\end{eqnarray}
which is defined as follows.
For a Stokes region in $M^{mar}(f^{(0)})$,
choose a point $t$ in it
and a point $[(f,\pm\rho)]\in M_\mu^{mar}(f^{(0)})$ with
$t\in U_{[(f,\pm\rho)]}$.
Let $(u_1,...,u_\mu)$ be the eigenvalues of $E\circ:T_tM\to T_tM$
in the good ordering (definition \ref{t3.9}).
Consider a good distinguished system of paths from $(u_1,...,u_\mu)$
to a value $\tau>\max_i|u_i|$ (definition \ref{t3.9} (b)).
The usual construction of distinguished bases gives a distinguished
basis in ${\mathcal B}(f)$ up to the action of the sign group $G_{sign,\mu}$.
Shift this basis with the isomorphism $\rho:Ml(f)\to Ml(f^{(0)})$ to an element of
${\mathcal B}^{ext}(f^{(0)})/G_{sign,\mu}$.
\end{definition}
\begin{remarks}\label{t3.11}
(i) We claim that $LD$ restricts to a map
\begin{eqnarray}\label{3.23}
LD:R^0_{Stokes}\to {\mathcal B}(f^{(0)})/G_{sign,\mu}.
\end{eqnarray}
We prove this by a different description of the image $LD(t)$ for
$t\in R^0_{Stokes}$.
Choose $[(f,\pm\rho)]\in M^{mar}_\mu(f^{(0)})^0$ with
$t\in U_{[(f,\pm\rho)]}$,
choose $\tau>\max_i |u_i|$, and choose a good
distinguished system of paths from
$(u_1,...,u_\mu)$ to $\tau$. One can move $t$ within
$M^{mar}(f^{(0)})^0-({\mathcal K}_3^{mar}\cup{\mathcal K}_2^{mar})$
to a point in
$U_{[(f^{(0)},\pm\id)]}\subset M^{mar}_\mu(f^{(0)})^0$.
Then the good distinguished system of
paths moves to some new distinguished system of paths.
Now the construction of distinguished bases for $f^{(0)}$
gives directly
the class of bases $LD(t)\in{\mathcal B}(f^{(0)})/G_{sign,\mu}$.
This follows with \eqref{3.17}.
\medskip
(ii) The action of $G_{\mathbb Z}$ on $M^{mar}(f^{(0)})$ induces an action
on $R_{Stokes}$. And $R^0_{Stokes}=R_{Stokes}$ if and only if
$M^{mar}(f^{(0)})$ is connected.
The map \eqref{3.22} is $G_{\mathbb Z}$-equivariant.
Therefore, if $M^{mar}(f^{(0)})$ is connected, then \eqref{3.23}
and the definition of ${\mathcal B}^{ext}(f^{(0)})$ give also
${\mathcal B}^{ext}(f^{(0)})={\mathcal B}(f^{(0)})$.
\medskip
(iii) But if $M^{mar}(f^{(0)})$ is not connected, we do not know
whether ${\mathcal B}^{ext}(f^{(0)})={\mathcal B}(f^{(0)})$ or
${\mathcal B}^{ext}(f^{(0)})\supsetneqq{\mathcal B}(f^{(0)})$ holds.
The first open cases are the subseries in remark \ref{t3.7} (ii) of the
eight bimodal series.
\medskip
(iv) Looijenga considered the map $LD$ for the simple singularities and
proved that it is an isomorphism for the $A_\mu$ singularities \cite{Lo74}.
Deligne \cite{De74} proved the same for the $D_\mu$ and $E_\mu$ singularities.
We will reprove their results and extend them to the simple elliptic
singularities in section \ref{s7}. We will study $LD$ in the general
case in \cite{GH18}.
\end{remarks}
\section{Unfoldings of the simple and the
simple elliptic singularities}\label{s4}
\setcounter{equation}{0}
\noindent
The first singularities in Arnold's lists of isolated hypersurface singularities
\cite[ch 15.1]{AGV85} are the simple and the simple elliptic singularities.
They are distinguished by many properties. In particular, they possess
universal unfoldings such that all members are defined globally on ${\mathbb C}^{n+1}$.
We start by giving well-known normal forms. Then we choose universal unfoldings.
\subsection{Normal forms for the simple and the simple elliptic singularities}\label{s4.1}
The first table lists normal forms from \cite{AGV85}
for the simple singularities $f:{\mathbb C}^{n+1}\to{\mathbb C}$,
\begin{eqnarray}\label{4.1}
\begin{array}{cccr}
\textup{name} & \mu & n & f(x_0,...,x_n)\\ \hline
A_\mu & \geq 1 & \geq 0 & x_0^{\mu+1}+\sum_{i=1}^nx_i^2 \\
D_\mu & \geq 4 & \geq 1 & x_0^{\mu-1}+x_0x_1^2+\sum_{i=2}^nx_i^2 \\
E_6 & 6 & \geq 1 & x_0^4+x_1^3+\sum_{i=2}^nx_i^2 \\
E_7 & 7 & \geq 1 & x_0^3x_1+x_1^3+\sum_{i=2}^nx_i^2 \\
E_8 & 8 & \geq 1 & x_0^5+x_1^3+\sum_{i=2}^nx_i^2
\end{array}
\end{eqnarray}
The simple elliptic singularities can be represented as 1-parameter families
in different ways \cite[1.9 and 1.11]{Sa74}\cite[ch 15.1]{AGV85}.
We choose the Legendre normal forms $f=f_\lambda:{\mathbb C}^{n+1}\to{\mathbb C}$
from \cite[1.9]{Sa74} in the following table
with $\lambda\in{\mathbb C}-\{0,1\}$,
\begin{eqnarray}\label{4.2}
\begin{array}{cccr}
\textup{name} & \mu & n & f_\lambda(x_0,...,x_n)\\ \hline
\widetilde E_6 & 8 & \geq 2 & x_1(x_1-x_0)(x_1-\lambda x_0)-x_0x_2^2 +\sum_{i=3}^nx_i^2 \\
\widetilde E_7 & 9 & \geq 1 & x_0x_1(x_1-x_0)(x_1-\lambda x_0) +\sum_{i=2}^nx_i^2 \\
\widetilde E_8 & 10 & \geq 1 & x_1(x_1-x_0^2)(x_1-\lambda x_0^2) +\sum_{i=2}^nx_i^2
\end{array}
\end{eqnarray}
\subsection{Universal unfoldings}\label{s4.2}
For the simple singularities, we reproduce the universal unfoldings which are
given in \cite{Lo74}. They are as follows,
\begin{eqnarray}\label{4.3}
&&F^{alg}:{\mathbb C}^{n+1}\times M^{alg}\to{\mathbb C}\quad\textup{with}\quad M^{alg}={\mathbb C}^\mu,\\
&&F^{alg}(x_0,...,x_n,t_1,...,t_\mu)=F^{alg}(x,t)=F^{alg}_t(x)
=f(x)+\sum_{j=1}^\mu t_jm_j \nonumber
\end{eqnarray}
with $f=F^{alg}_0$ and $m_1,...,m_\mu$ the monomials in the tables
\eqref{4.4} and \eqref{4.5},
\begin{eqnarray}\label{4.4}
\begin{array}{ccccccc}
\textup{name} & m_1 & m_2 & m_3 & m_4 & ... & m_\mu \\ \hline
A_\mu & 1 & x_0 & x_0^2 & x_0^3 & ... & x_0^{\mu-1} \\
D_\mu & 1 & x_1 & x_0 & x_0^2 & ... & x_0^{\mu-2}
\end{array}
\end{eqnarray}
\begin{eqnarray}\label{4.5}
\begin{array}{ccccccccc}
\textup{name} & m_1 & m_2 & m_3 & m_4 & m_5
& m_6 & m_7 & m_8 \\ \hline
E_6 &
1 & x_0 & x_1 & x_0^2 & x_0x_1 & x_0^2x_1 & & \\
E_7 &
1 & x_0 & x_1 & x_0^2 & x_0x_1 & x_0^3 & x_0^4 & \\
E_8 &
1 & x_0 & x_1 & x_0^2 & x_0x_1 & x_0^3 & x_0^2x_1 & x_0^3x_1
\end{array}
\end{eqnarray}
One checks easily that the monomials form a basis
of the Jacobi algebra ${\mathcal O}_{{\mathbb C}^{n+1},0}/J_f$.
Therefore the unfolding $F^{alg}$ is indeed universal
(compare \eqref{2.9}).
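This check can also be carried out in a computer algebra system. The following SymPy sketch is our illustration (not part of the original arguments), for $E_6$ with $n=1$: it verifies that the monomials $m_1,\dots,m_6$ from table \eqref{4.5} are standard monomials modulo a Groebner basis of the Jacobi ideal, so they represent a basis of the $6$-dimensional Jacobi algebra.

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
f = x0**4 + x1**3                        # E_6 normal form from (4.1), n = 1
# Groebner basis of the Jacobi ideal (4*x0**3, 3*x1**2):
G = sp.groebner([sp.diff(f, x0), sp.diff(f, x1)], x0, x1, order='grevlex')
# Unfolding monomials m_1,...,m_6 for E_6 from table (4.5):
monomials = [sp.Integer(1), x0, x1, x0**2, x0*x1, x0**2*x1]
# Each monomial reduces to itself modulo the Jacobi ideal, i.e. it is a
# standard monomial; the six classes therefore form a basis of the
# Jacobi algebra, whose dimension is mu = 6.
assert all(G.reduce(m)[1] == m for m in monomials)
```

The analogous check works verbatim for the other normal forms in \eqref{4.1} with the monomials from \eqref{4.4} and \eqref{4.5}.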
For each of the three Legendre families of simple elliptic singularities,
we give a global family of functions as follows,
\begin{eqnarray}
&&F^{alg}:{\mathbb C}^{n+1}\times M^{alg}\to{\mathbb C} \quad\textup{with}\quad
M^{alg}={\mathbb C}^{\mu-1}\times({\mathbb C}-\{0,1\}),\nonumber\\
&&F^{alg}(x_0,...,x_n,t_1,...,t_{\mu-1},\lambda)
=F^{alg}(x,t',\lambda)\\
&&=F^{alg}_{t',\lambda}(x) \nonumber
=f_\lambda(x)+\sum_{j=1}^{\mu-1} t_jm_j \label{4.6}
\end{eqnarray}
with $f_\lambda=F^{alg}_{0,\lambda}$ and $m_1,...,m_{\mu-1}$
the monomials in the table \eqref{4.7},
\begin{eqnarray}\label{4.7}
\begin{array}{cccccccccc}
\textup{name} & m_1 & m_2 & m_3 & m_4 & m_5
& m_6 & m_7 & m_8 & m_9 \\ \hline
\widetilde E_6 &
1 & x_0 & x_1 & x_2 & x_0^2 & x_0x_1 & x_1x_2 & & \\
\widetilde E_7 &
1 & x_0 & x_1 & x_0^2 & x_0x_1 & x_1^2 & x_0^2x_1 & x_0x_1^2 & \\
\widetilde E_8 &
1 & x_0 & x_0^2 & x_1 & x_0^3 & x_0x_1 & x_0^2x_1 & x_1^2 & x_0x_1^2
\end{array}
\end{eqnarray}
Let $\lambda:\H\to{\mathbb C}-\{0,1\},t_\mu\mapsto \lambda(t_\mu)$, be the standard
universal covering. For each of the three Legendre families of
simple elliptic singularities, we will also
consider the global family of functions
\begin{eqnarray}\label{4.8}
&&F^{mar}:{\mathbb C}^{n+1}\times M^{mar}\to{\mathbb C}, (x,t)\mapsto F^{alg}(x,t',\lambda(t_\mu))\\
&&\textup{where}\quad M^{mar}={\mathbb C}^{\mu-1}\times\H.\nonumber
\end{eqnarray}
For the simple singularities, we set
\begin{eqnarray}\label{4.9}
M^{mar}=M^{alg}={\mathbb C}^\mu,\quad \lambda:=t_\mu,\quad F^{mar}=F^{alg}.
\end{eqnarray}
\begin{lemma}\label{t4.1}
Consider any of the three global families of functions in \eqref{4.6}.
At each point $(0,0,\lambda)\in{\mathbb C}^{n+1}\times{\mathbb C}^{\mu-1}\times({\mathbb C}-\{0,1\})$,
the germ of the family $F^{alg}$ is a universal unfolding of $f_\lambda$.
Also, at each point $(0,0,t_\mu)\in{\mathbb C}^{n+1}\times {\mathbb C}^{\mu-1}\times \H$,
the germ of the family $F^{mar}$ in \eqref{4.8}
is a universal unfolding of $f_{\lambda(t_\mu)}$.
\end{lemma}
{\bf Proof:}
It suffices to prove the statement for $F^{alg}$.
Because of \eqref{2.9}, it suffices to show
that for any $\lambda\in{\mathbb C}-\{0,1\}$ the monomials $m_1,...,m_{\mu-1}$
together with the weighted homogeneous polynomial
$\frac{\partial f_\lambda}{\partial\lambda}$ form a basis of the Jacobi algebra
${\mathcal O}_{{\mathbb C}^{n+1},0}/J_{f_\lambda}$.
We carry out the least trivial case, which is the case $\widetilde E_6$,
and leave the cases $\widetilde E_7$ and $\widetilde E_8$ to the reader.
In the case $\widetilde E_6$, we work with the minimal number $n+1=3$ of
variables. The normalized weight system is
${\bf w}=(w_0,w_1,w_2)=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$,
and $\deg_{\bf w}f_\lambda=1$. As $f_\lambda$ is quasihomogeneous, the Jacobi
algebra is isomorphic to ${\mathbb C}[x]/J_{f_\lambda}^{pol}$ where $J_{f_\lambda}^{pol}$
denotes the Jacobi ideal in ${\mathbb C}[x_0,x_1,x_2]={\mathbb C}[x]$. For $q\in{\mathbb Q}_{\geq 0}$
denote by ${\mathbb C}[x]_q$ the sub vector space of ${\mathbb C}[x]$
generated by the monomials of weighted degree $q$.
\begin{eqnarray*}
\begin{array}{lllll}
\frac{\partial f_\lambda}{\partial x_0}&=& -(\lambda+1)x_1^2+2\lambda\cdot x_0x_1-x_2^2
&\in& J_{f_\lambda}^{pol}\cap{\mathbb C}[x]_{2/3},\\
\frac{\partial f_\lambda}{\partial x_1}&=& 3x_1^2-2(\lambda+1)\cdot x_0x_1+\lambda\cdot x_0^2
&\in& J_{f_\lambda}^{pol}\cap{\mathbb C}[x]_{2/3},\\
\frac{\partial f_\lambda}{\partial x_2}&=& -2x_0x_2
&\in& J_{f_\lambda}^{pol}\cap{\mathbb C}[x]_{2/3},\\
\frac{\partial f_\lambda}{\partial \lambda}&=& x_0^2x_1-x_0x_1^2
&\in& {\mathbb C}[x]_{1}.
\end{array}
\end{eqnarray*}
We have to show for $q\in {\mathbb Q}_{\geq 0}$
\begin{eqnarray}\label{4.10}
(J_{f_\lambda}^{pol})\cap {\mathbb C}[x]_q+\sum_{j:\, \deg_{\bf w}m_j=q}{\mathbb C}\cdot m_j
+\left\{\begin{array}{ll}0&\textup{if }q\neq 1\\
{\mathbb C}\cdot\frac{\partial f_\lambda}{\partial \lambda}&\textup{if }q=1\end{array}\right\} ={\mathbb C}[x]_q.
\end{eqnarray}
The only nontrivial cases are $q\in\{\frac{2}{3},1,\frac{4}{3}\}$.
The case $q=\frac{2}{3}$:
${\mathbb C}[x]_{2/3}$ is generated by the six monomials
$x_0^2,x_0x_1,x_1^2,x_0x_2,x_1x_2,x_2^2$.
Here $m_5=x_0^2,m_6=x_0x_1,m_7=x_1x_2,$ and
$\frac{\partial f_\lambda}{\partial x_2}= -2x_0x_2$. Modulo these four monomials,
$\frac{\partial f_\lambda}{\partial x_0}$ and $\frac{\partial f_\lambda}{\partial x_1}$
are congruent to $-(\lambda+1)x_1^2-x_2^2$ and $3x_1^2$. Thus the left hand
side of \eqref{4.10} contains also the monomials $x_1^2$ and $x_2^2$.
The case $q=1$:
${\mathbb C}[x]_1$ is generated by the 10 monomials
$x_0^3,x_0^2x_1,x_0x_1^2,x_1^3,x_0^2x_2,x_0x_1x_2,x_1^2x_2,x_0x_2^2,x_1x_2^2,x_2^3$.
The partial derivatives
$x_0\frac{\partial f_\lambda}{\partial x_2},x_1\frac{\partial f_\lambda}{\partial x_2},
x_2\frac{\partial f_\lambda}{\partial x_2},x_2\frac{\partial f_\lambda}{\partial x_1}$ and
$x_2\frac{\partial f_\lambda}{\partial x_0}$ generate the monomials
$x_0^2x_2,x_0x_1x_2,x_0x_2^2,x_1^2x_2$ and $x_2^3$. For $\lambda\in{\mathbb C}-\{0,1\}$,
the polynomials $x_0^2x_1-x_0x_1^2$ and $x_0\frac{\partial f_\lambda}{\partial x_0}$
(and $x_0x_2^2$) generate the monomials $x_0^2x_1$ and $x_0x_1^2$.
Modulo these seven monomials, the three remaining partial derivatives
$x_1\frac{\partial f_\lambda}{\partial x_0},x_0\frac{\partial f_\lambda}{\partial x_1},
x_1\frac{\partial f_\lambda}{\partial x_1}$ are congruent to
$-(\lambda+1)x_1^3-x_1x_2^2, \lambda x_0^3, 3x_1^3$.
Thus also the three remaining monomials
$x_0^3,x_1^3,x_1x_2^2$ are in the
left hand side of \eqref{4.10}.
The case $q=\frac{4}{3}$:
The ideal $J^{pol}_{f_\lambda}$ contains
$\frac{\partial f_\lambda}{\partial x_2}$, so $x_0x_2$,
and $x_2\frac{\partial f_\lambda}{\partial x_1}$, so $x_1^2x_2$,
and $x_2\frac{\partial f_\lambda}{\partial x_0}$, so $x_2^3$.
Thus $J^{pol}_{f_\lambda}$ contains all monomials in
${\mathbb C}[x]_{4/3}$ which contain $x_2$.
For $g\in\{x_0^2,x_0x_1,x_1^2\}$, the intersection
$J^{pol}_{f_\lambda}\cap {\mathbb C}[x]_{4/3}$ contains
$g\cdot \frac{\partial f_\lambda}{\partial x_0}$, so
$g(-(\lambda+1)x_1^2+2\lambda x_0x_1)$, and
$g\cdot\frac{\partial f_\lambda}{\partial x_1}$, and thus also for
$i\in\{0,1\}$
\begin{eqnarray*}
&&x_ix_1\frac{\partial f_\lambda}{\partial x_1}+
(\frac{-1}{2}x_ix_0+\frac{3}{4}\frac{\lambda+1}{\lambda}x_ix_1)
(-(\lambda+1)x_1^2+2\lambda x_0x_1) \\
&&= \frac{-3}{4}\frac{(\lambda-1)^2}{\lambda} x_ix_1^3.
\end{eqnarray*}
Therefore $J^{pol}_{f_\lambda}$ contains $x_1^4$,
$x_0x_1^3$, and with
$g\cdot \frac{\partial f_\lambda}{\partial x_1}$ also
$x_0^2x_1^2$, $x_0^3x_1$, $x_0^4$. This shows
$J^{pol}_{f_\lambda}\supset{\mathbb C}[x]_{4/3}$.
\hfill$\Box$
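For readers who wish to double-check this computation, the following SymPy sketch (our addition, not part of the original proof) verifies the key identity from the case $q=\frac{4}{3}$ and, for the reference value $\lambda=\frac{1}{2}$, that the classes of $m_1,\dots,m_7$ and $\frac{\partial f_\lambda}{\partial\lambda}$ are linearly independent in the Jacobi algebra, hence a basis, since the quotient has dimension $\mu=8$.

```python
import sympy as sp

x0, x1, x2, lam = sp.symbols('x0 x1 x2 lam')
f = x1*(x1 - x0)*(x1 - lam*x0) - x0*x2**2    # tilde E_6 Legendre form, n = 2

# The identity from the case q = 4/3, with the common factor x_i dropped:
lhs = x1*sp.diff(f, x1) + (-x0/2 + sp.Rational(3, 4)*(lam + 1)/lam*x1) \
      * (-(lam + 1)*x1**2 + 2*lam*x0*x1)
assert sp.expand(lhs + sp.Rational(3, 4)*(lam - 1)**2/lam*x1**3) == 0

# For lambda = 1/2: the classes of m_1,...,m_7 from table (4.7) and of
# d f_lambda / d lambda are linearly independent in C[x]/J^pol, hence a
# basis, since the quotient has dimension mu = 8.
fl = f.subs(lam, sp.Rational(1, 2))
G = sp.groebner([sp.diff(fl, v) for v in (x0, x1, x2)],
                x0, x1, x2, order='grevlex')
basis = [sp.Integer(1), x0, x1, x2, x0**2, x0*x1, x1*x2,
         sp.diff(f, lam).subs(lam, sp.Rational(1, 2))]
remainders = [G.reduce(b)[1] for b in basis]
monoms = sorted({m for r in remainders for m in r.as_poly(x0, x1, x2).monoms()})
coeffs = sp.Matrix([[r.as_poly(x0, x1, x2).coeff_monomial(m) for m in monoms]
                    for r in remainders])
assert coeffs.rank() == 8
```

Since Groebner reduction is a linear isomorphism from the quotient onto the space of normal forms, the rank condition on the remainders is equivalent to linear independence of the eight classes.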
\begin{remarks}\label{t4.2}
(i) A priori, we do not see a reason why the monomials
$m_1,...,m_{\mu-1}$
can be chosen such that they and
$\frac{\partial f_\lambda}{\partial \lambda}$
generate ${\mathbb C}[x]/J^{pol}_{f_\lambda}$
for each $\lambda\in{\mathbb C}-\{0,1\}$ simultaneously.
It is a nice fact, but not crucial.
(ii) Thus the global family $F^{alg}$ is nice,
but it is not a unique global unfolding
of the Legendre family in \eqref{4.2}.
However, the global family $F^{mar}$ is a unique global unfolding of
its restriction to $t'=0$, which is a family over $\H$ of functions
on ${\mathbb C}^{n+1}$ and which is the pull-back by $\lambda:\H\to{\mathbb C}-\{0,1\}$
of the Legendre family in \eqref{4.2}. The family $F^{mar}$ is unique
because $\H$ is contractible.
(iii) For other 1-parameter families of simple elliptic singularities,
sets $m_1,...,m_{\mu-1}$ of monomials
with the analogous property as in lemma \ref{t4.1} were chosen
in \cite[2.1]{Ja86} and in \cite[in 2.1 around formula (2)]{MS16}.
\end{remarks}
The next theorem is the special case of theorem \ref{t3.8}
for the simple and simple elliptic singularities.
\begin{theorem}\label{t4.3}
Consider for a simple or a simple elliptic singularity
the global family of functions $F^{mar}_t$ over the space $M^{mar}$
in \eqref{4.9} respectively \eqref{4.8}.
For any $t\in M^{mar}$, the global Milnor number
$\mu_{global}(F^{mar}_t):=\sum_{x\in \textup{Crit}(F^{mar}_t)}\mu(F^{mar}_t,x)$ is $\mu$.
The manifold $M^{mar}$ is an F-manifold with Euler field $E$.
It is a thickening with the properties in theorem \ref{t3.8}
of the moduli space $M^{mar}_\mu(f^{(0)})$.
Here $f^{(0)}=f$ in the case of the simple singularities,
and we choose $f^{(0)}=f_{1/2}$ in the case of the simple elliptic singularities.
The bundle $H_{\mathbb Z}$ is simply the bundle
$H_{\mathbb Z}=\bigcup_{(\tau,t)\in {\mathbb C}\times M^{mar}-{\mathcal D}^{mar}}H_n((F^{mar}_t)^{-1}(\tau),{\mathbb Z})$.
\end{theorem}
{\bf Proof:}
In all cases, $f$ respectively $f_\lambda$
is a quasihomogeneous polynomial of weighted degree 1
with respect to a weight system ${\bf w}=(w_0,...,w_n)\in({\mathbb Q}\cap (0,\frac{1}{2}])^{n+1}$.
In the case of a simple singularity, all monomials $m_1,...,m_\mu$ have
weighted degree in ${\mathbb Q}\cap(0,1)$. In the case of a simple elliptic
singularity, all monomials $m_1,...,m_{\mu-1}$ have
weighted degree in ${\mathbb Q}\cap (0,1)$.
Therefore in all cases, the highest part (with respect to the weighted degree)
of $\frac{\partial F^{mar}_t}{\partial x_i}$ is equal to
$\frac{\partial f}{\partial x_i}$ respectively
$\frac{\partial f_{\lambda(t_\mu)}}{\partial x_i}$.
Denote ${\mathbb C}[x]_{\leq q}:=
\{f\in{\mathbb C}[x]\, |\, \deg_{\bf w}f\leq q\}$ for $q\in {\mathbb Q}_{\geq 0}$.
In the case of the simple elliptic singularities,
\eqref{4.10} implies for $q\in{\mathbb Q}_{\geq 0}$
\begin{eqnarray}\label{4.11}
(J_{F^{mar}_t}^{pol})\cap {\mathbb C}[x]_{\leq q}
&+&\sum_{j:\, \deg_{\bf w}m_j\leq q}{\mathbb C}\cdot m_j\\
&+&\left\{\begin{array}{ll}0&\textup{if }q< 1\\
{\mathbb C}\cdot\frac{\partial f_\lambda}{\partial \lambda} &
\textup{if }q\geq 1\end{array}\right\} ={\mathbb C}[x]_{\leq q}.\nonumber
\end{eqnarray}
And in the case of the simple singularities,
\begin{eqnarray}\label{4.12}
(J_{F^{mar}_t}^{pol})\cap {\mathbb C}[x]_{\leq q}
+\sum_{j:\, \deg_{\bf w}m_j\leq q}{\mathbb C}\cdot m_j
={\mathbb C}[x]_{\leq q}
\end{eqnarray}
holds. This shows
\begin{eqnarray}\label{4.13}
\dim {\mathbb C}[x]/J_{F^{mar}_t}^{pol}=\mu.
\end{eqnarray}
The left-hand side is the global Milnor number $\mu_{global}(F_t^{mar})$.
Observe
\begin{eqnarray}\label{4.14}
\frac{\partial F^{mar}_t}{\partial t_j}=\left\{\begin{array}{ll}
m_j &\textup{for the ADE singularities},\\
m_j&\textup{for the }\widetilde E_k\textup{ cases with }j\leq \mu-1,\\
\frac{\partial f_\lambda}{\partial\lambda}
\cdot\frac{\partial\lambda}{\partial t_\mu} &
\textup{for the }\widetilde E_k\textup{ cases with }j=\mu. \end{array}\right.
\end{eqnarray}
Together with \eqref{4.11} and \eqref{4.12}, this shows that here the algebraic variant
\begin{eqnarray}\label{4.15}
{\bf a}_C^{alg,t}:T_{M^{mar},t}\to {\mathbb C}[x]/J^{alg}_{F^{mar}_t},\quad
\frac{\partial }{\partial t_j}\mapsto [\frac{\partial F^{mar}_t}{\partial t_j}],
\end{eqnarray}
(which is here for simplicity written pointwise) of the Kodaira-Spencer
map in \eqref{2.4} is an isomorphism and equips $T_{M^{mar},t}$
with a multiplication, a unit field vector and an Euler field vector.
This gives $M^{mar}(f^{(0)})$ the structure of an F-manifold with Euler field.
The proof of \cite[Theorem 5.3]{He02} also works here.
The unit field $e$ and the Euler field $E$ are here
\begin{eqnarray}\label{4.16}
e=\frac{\partial}{\partial t_1},\quad
E=\sum_{j=1}^{\mu} (1-\deg_{\bf w}m_j)t_j\frac{\partial}{\partial t_j}.
\end{eqnarray}
In the cases of the simple singularities, the weights $1-\deg_{\bf w}m_j$ are all
positive. The F-manifold structure on $M^{mar}$ can be obtained from that
of the germ $(M^{mar},0)$ by using the flow of the Euler field
and $\Lie_E(\circ)=\circ$.
In the cases of the simple elliptic singularities, the coefficient of
$t_\mu\frac{\partial}{\partial t_\mu}$ in $E$ vanishes,
but all other weights $1-\deg_{\bf w}m_j$ are positive. The F-manifold structure
on ${\mathbb C}^{\mu-1}\times U$ for $U\subset\H$ a neighborhood of a point
$t_\mu$ can be obtained from that on the germ $(M^{mar},(0,t_\mu))$
by using the flow of the Euler field and $\Lie_E(\circ)=\circ$.
A polynomial $g\in{\mathbb C}[x_0,...,x_n]={\mathbb C}[x]$ is {\it tame}
in the sense of Broughton \cite[Definition 3.1]{Br88} if
a compact neighborhood $U\subset{\mathbb C}^{n+1}$ exists such that
$||(\frac{\partial g}{\partial x_0},...,\frac{\partial g}{\partial x_n})||$
is bounded away from 0 on ${\mathbb C}^{n+1}-U$.
He proved that $g$ is tame if and only if
$\mu_{global}(g)=\mu_{global}(g+\sum_{i=0}^n x_is_i)$ for any
$(s_0,...,s_n)\in{\mathbb C}^{n+1}$ \cite[Proposition 3.1]{Br88}.
His main result is that for a tame polynomial $g$, the fiber
$g^{-1}(\tau)$ for an arbitrary $\tau\in{\mathbb C}$ has the homotopy type of
a bouquet of
$\mu_{global}(g)
-\sum_{x\in \textup{Crit}(g)\cap g^{-1}(\tau)}\mu(g,x)$
$n$-spheres \cite[Theorem 1.2]{Br88}.
This applies to all the polynomials $F^{mar}_t$.
Because $\mu_{global}(F^{mar}_t)=\mu$ holds for all of them,
and because the unfolding $F^{mar}$ comprises the unfolding
by the terms $+\sum_{i=0}^n x_is_i$, they are all tame.
The eigenvalues of $E\circ:T_tM\to T_tM$ are the critical values of $F^{mar}_t$.
Therefore a fiber $(F^{mar}_t)^{-1}(\tau)$ is smooth
if and only if $(\tau,t)\in{\mathbb C}\times M^{mar}-{\mathcal D}^{mar}$.
By Broughton such a fiber has the homotopy type of a bouquet of
$\mu$ $n$-spheres. Therefore the middle homology groups
of these fibers glue to a flat ${\mathbb Z}$-lattice bundle of rank $\mu$,
\begin{eqnarray}\label{4.17}
H_{\mathbb Z}&:=& \bigcup_{(\tau,t)\in {\mathbb C}\times M^{mar}-{\mathcal D}^{mar}}
H_n((F^{mar}_t)^{-1}(\tau),{\mathbb Z}).
\end{eqnarray}
Many of the properties of $M^{mar}(f^{(0)})$ and $H_{\mathbb Z}$ in theorem
\ref{t3.8} are now clear: (a)(i)+(ii) and (b)(i) are obvious.
(b)(ii) holds because $M^{mar}$ is simply connected.
In the case of the simple singularities, (b)(iii) is empty,
as $M^{mar}_\mu$ consists of a single point.
In the case of the simple elliptic singularities,
(b)(iii) holds by the proof in \cite[Theorem 6.1]{GH17-1}
that $M^{mar}_\mu$ is isomorphic to $\H$. There the markings on the
points in $\H$ were defined essentially by the commutativity of
the diagram \eqref{3.17}.
It remains to prove (a)(iii) and (b)(iv).
First we treat the simple singularities, where this is easier.
As $M^{mar}_\mu(f)$ consists of only one point,
the stabilizer of this point in $G_{\mathbb Z}(f)$ is the whole group $G_{\mathbb Z}(f)$,
so \eqref{3.15} becomes
\begin{eqnarray}\label{4.18}
G_{\mathbb Z}(f)=G^{mar}_{\mathcal R}(f).
\end{eqnarray}
The homomorphism in theorem \ref{t3.1} (c)
becomes a natural surjective homomorphism $G_{\mathbb Z}(f)\to\Aut_M$
with kernel $\{\pm\id\}$.
Because of the positive ${\mathbb C}^*$-action
by the flow of the Euler field on $M^{mar}$,
\begin{eqnarray}\label{4.19}
\Aut_M=\Aut(M^{mar},\circ,e,E).
\end{eqnarray}
By lemma \ref{t2.3}, any such automorphism lifts to an automorphism
of the canonical ${\mathbb Z}$-lattice
bundle $H_{\mathbb Z}$, unique up to $\pm 1$. The group of these automorphisms is $G_{\mathbb Z}(f)$.
This proves (a)(iii) and (b)(iv) in theorem \ref{t3.8}
for the simple singularities.
Now we treat the simple elliptic singularities.
Consider a group element $\chi\in G_{\mathbb Z}(f^{(0)})$, a point
$$(0,t_\mu)=[(f_{\lambda(t_\mu)},\pm\rho)]
\in M^{mar}_\mu(f^{(0)})\subset M^{mar}(f^{(0)}),$$
and its image
$$\chi_{mar}(t)=[(f_{\lambda(t_\mu)},\pm\chi\circ\rho)]
=(0,\widetilde t_\mu)=[(f_{\lambda(\widetilde t_\mu)},\pm\widetilde\rho)]
\in M^{mar}(f^{(0)})$$
under the action of $\chi_{mar}$ on $M^{mar}_\mu(f^{(0)})$.
Consider the isomorphism of F-manifolds with Euler fields
\begin{eqnarray}\label{4.20}
\psi_{(0,\widetilde t_\mu)}\circ \psi_{(0,t_\mu)}^{-1}:
U_{(0,t_\mu)}\to U_{(0,\widetilde t_\mu)}
\end{eqnarray}
which (a)(i) in theorem \ref{t3.8} provides.
We claim that these isomorphisms for varying $t_\mu$
glue to an automorphism of $M^{mar}(f^{(0)})$
and that this lifts to an automorphism of $H_{\mathbb Z}$
which restricts on the trivial ${\mathbb Z}$-lattice bundle above
$\bigcup_{t\in M^{mar}(f^{(0)})}{\mathbb R}_{>r(t)}\times\{t\}$
in theorem \ref{t3.8} (b)(ii) to $\chi$.
In several steps one sees that one local isomorphism in
\eqref{4.20} extends to a global automorphism of $M^{mar}(f^{(0)})$.
First step: Its restriction to $M^{mar}_\mu(f^{(0)})$ is well defined
and given by $\chi_{mar}$.
Second step: The local isomorphism in \eqref{4.20}
extends to an automorphism of a suitable neighborhood of
$M^{mar}_\mu(f^{(0)})$ in $M^{mar}(f^{(0)})$ because
$M^{mar}_\mu(f^{(0)})=\H$ is contractible.
Third step:
With the ${\mathbb C}^*$-action by the flow of the Euler field
on $M^{mar}(f^{(0)})$,
this extends to a global automorphism of $M^{mar}(f^{(0)})$.
Above the extension to ${\mathbb C}\times U_{(0,t_\mu)}\to {\mathbb C}\times U_{(0,\widetilde t_\mu)}$
of the isomorphism in \eqref{4.20}, one has an isomorphism of the
corresponding restrictions of $H_{\mathbb Z}$, because they are isomorphic
to the canonical ${\mathbb Z}$-lattice bundles above
$U_{(0,t_\mu)}$ and $U_{(0,\widetilde t_\mu)}$ in definition/lemma \ref{t3.2}.
This isomorphism is unique up to $\pm 1$ by definition/lemma \ref{t3.2}.
The commuting diagram \eqref{3.17} shows that the restricted
isomorphism from $H_{\mathbb Z}$ above
$\bigcup_{s\in U_{(0,t_\mu)}}{\mathbb R}_{>r(s)}\times\{s\}$
to $H_{\mathbb Z}$ above
$\bigcup_{s\in U_{(0,\widetilde t_\mu)}}{\mathbb R}_{>r(s)}\times\{s\}$
is compatible with $\pm\chi$.
Because of the uniqueness up to $\pm 1$,
all the local isomorphisms of restrictions of $H_{\mathbb Z}$
to neighborhoods of points $(0,0,t_\mu)\in{\mathbb C}\times M^{mar}(f^{(0)})$
glue (possibly after changing some by $\pm\id$) to one automorphism of $H_{\mathbb Z}$.
Its restriction to
$\bigcup_{s\in M^{mar}}{\mathbb R}_{>r(s)}\times\{s\}$ is $\chi$.
Only now it becomes clear that the automorphism of $M^{mar}(f^{(0)})$
restricts for {\it any} $s_\mu\in \H$ to the isomorphism in \eqref{4.20}:
Its restriction to $U_{(0,s_\mu)}$ and the automorphism in \eqref{4.20}
are in the same way compatible with $\pm\chi$, therefore they coincide
if $U_{(0,s_\mu)}$ and $U_{(0,t_\mu)}$ overlap.
Now (a)(iii) and (b)(iv) in theorem \ref{t3.8} are proved for the
simple elliptic singularities. \hfill$\Box$
\bigskip
In the next section, we will be more concrete about the action of
$G_{\mathbb Z}(f^{(0)})$ on $M^{mar}(f^{(0)})$.
\section{Symmetries of the simple and the simple elliptic singularities}\label{s5}
\setcounter{equation}{0}
\noindent
In this section, we will write down concrete formulas for
the action on $M^{mar}(f^{(0)})$ of generating elements of $G_{\mathbb Z}(f^{(0)})$.
We need these formulas for an explicit calculation of certain numbers
in section \ref{s10}.
The formulas will also reprove a part of (a)(iii) and (b)(iv) in theorem \ref{t3.8}
for the simple and the simple elliptic singularities.
But we prefer to keep the conceptual arguments in the last
part of the proof of theorem \ref{t4.3} intact, rather than dropping some of them
and mixing the others with the concrete calculations below.
\subsection{Symmetries of the simple singularities}\label{s5.1}
We discussed the symmetries in the proof of theorem \ref{t4.3},
in the paragraph which contains the formulas \eqref{4.18}
and \eqref{4.19}.
In the case of a simple singularity $f$,
$M^{mar}_\mu(f)=\{\textup{pt}\}$, $G_{\mathbb Z}(f)=G^{mar}_{{\mathcal R}}(f)$,
and
\begin{eqnarray}\label{5.1}
\Aut(M^{mar},\circ,e,E)=\Aut_M\cong G_{\mathbb Z}(f)/\{\pm\id\}.
\end{eqnarray}
By the theorems \ref{t8.3} and \ref{t8.4} in \cite{He11}
\begin{eqnarray}\label{5.2}
G_{\mathbb Z}(f)&=& \{\pm M_h^k\, |\, k\in{\mathbb Z}\}\times U\textup{ with}\\
U&\cong& \left\{
\begin{array}{ll}
S_1 & \textup{for }A_\mu,D_{2l+1},E_6,E_7,E_8,\\
S_2 & \textup{for }D_{2l}\textup{ with }l\geq 3,\\
S_3 & \textup{for }D_4.
\end{array}\right. \nonumber
\end{eqnarray}
Remark \ref{t3.4} applies to $f$ and $F^{mar}=F^{alg}$
and gives
\begin{eqnarray*}
()_M\circ ()_{hom}^{-1}(M_h) =
(t\mapsto (e^{2\pi i \deg_{\bf w}t_1}t_1,...,
e^{2\pi i\deg_{\bf w}t_\mu}t_\mu))\in\Aut_M.
\end{eqnarray*}
In all cases with $U=S_1$, this automorphism of
$(M^{mar},\circ,e,E)$ generates $\Aut_M$.
In the cases $D_{2l}$, the coordinate change
\begin{eqnarray}\label{5.3}
\varphi_2=(x\mapsto (x_0,-x_1,x_2,...,x_n))\in
\textup{Stab}_{G_{\bf w}}(f)\subset{\mathcal R}^f
\end{eqnarray}
and $\Phi_2=(\id_X,(\varphi_2)_M)$ satisfy
\begin{eqnarray}\label{5.4}
(\varphi_2)_M&=& (t\mapsto (t_1,-t_2,t_3,...,t_\mu))\\
&\notin& \{()_M\circ ()_{hom}^{-1}(M_h^k)\, |\, k\in{\mathbb Z}\},
\nonumber\\
(\varphi_2)_{hom}&\notin& \{\pm M_h^k\, |\, k\in{\mathbb Z}\},\nonumber\\
\Phi_2\circ\varphi_2 &=& (\varphi_2,(\varphi_2)_M),\nonumber\\
F&=& F\circ (\Phi_2\circ\varphi_2),\quad
\pr_M\circ (\Phi_2\circ\varphi_2)
=(\varphi_2)_M\circ \pr_M.\nonumber
\end{eqnarray}
So, in the cases $D_{2l}$ with $l\geq 3$,
$(\varphi_2)_{hom}$ can be chosen as a generator of $U$.
In the case $D_4$, $U$ is generated by $(\varphi_2)_{hom}$
and $(\varphi_3)_{hom}$ where
\begin{eqnarray}\label{5.5}
\varphi_3&:=&
(x\mapsto (\frac{-1}{2}x_0+\frac{-i}{2}x_1,
\frac{3i}{2}x_0+\frac{1}{2}x_1,x_2,...,x_n))\\
&\in &\textup{Stab}_{G_{\bf w}}(f)\subset {\mathcal R}^f.\nonumber
\end{eqnarray}
This follows from theorem \ref{t3.1}, theorem \ref{t3.3}
and the fact, which can be checked easily, that the group
$\textup{Stab}_{G_{\bf w}}(f)\cong R_f$ is in the case
$D_4$ with $n=1$ generated by $\varphi_1,\varphi_2$
and $\varphi_3$ where $\varphi_1$ is as in remark
\ref{t3.4}.
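The invariance $f\circ\varphi_3=f$ for $D_4$, which underlies $\varphi_3\in\textup{Stab}_{G_{\bf w}}(f)$, can be confirmed by a one-line computation. The following SymPy sketch is our addition, not part of the original text.

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
f = x0**3 + x0*x1**2                     # D_4 normal form from (4.1), n = 1
# The coordinate change varphi_3 from (5.5), restricted to (x0, x1):
y0 = -sp.Rational(1, 2)*x0 - sp.I/2*x1
y1 = 3*sp.I/2*x0 + sp.Rational(1, 2)*x1
# f is invariant under varphi_3:
assert sp.expand(f.subs([(x0, y0), (x1, y1)], simultaneous=True) - f) == 0
```

The `simultaneous=True` flag is essential here, since both substituted expressions involve $x_0$ and $x_1$.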
However, the unfolding morphism $(\Phi_3,(\varphi_3)_M)$
which induces $F\circ \varphi_3^{-1}$ by $F$,
i.e., which satisfies \eqref{3.8},
\begin{eqnarray}
F\circ (\Phi_3\circ\varphi_3)=F,\quad
\pr_M\circ(\Phi_3\circ\varphi_3) =(\varphi_3)_M\circ \pr_M,
\label{5.6}
\end{eqnarray}
is much more complicated than $(\Phi_1,(\varphi_1)_M)$
and $(\Phi_2,(\varphi_2)_M)$. It is given by
\begin{eqnarray}\label{5.7}
(\Phi_3(x_0),\Phi_3(x_1))&=&
(x_0+\frac{-1}{4}t_4,x_1+\frac{i}{4}t_4),\\
(\varphi_3)_M^{-1}(t_1,t_2,t_3,t_4)&=&
(t_1+\frac{i}{4}t_2t_4+\frac{-1}{4}t_3t_4+\frac{1}{16}t_4^3,
\nonumber\\
&&\frac{1}{2}t_2+\frac{-i}{2}t_3+\frac{i}{8}t_4^2,\nonumber\\
&&\frac{3i}{2}t_2+\frac{-1}{2}t_3+\frac{3}{8}t_4^2,\ t_4).
\label{5.8}
\end{eqnarray}
Here one calculates \eqref{5.8} with the ansatz \eqref{5.7} and
\begin{eqnarray}\label{5.9}
F_t((\Phi_3\circ\varphi_3)(x)) &=& F_{(\varphi_3)_M^{-1}(t)}(x).
\end{eqnarray}
For the simple elliptic singularities, we will encounter
something similar: one coordinate change $\varphi_3$
for which $\Phi_3$ looks complicated.
\begin{remark}\label{t5.1}
For the simple singularities, it is rather obvious
(and it will be shown in the proof of theorem \ref{t7.1})
that $\Aut_M$ is the group of covering transformations
of the covering
\begin{eqnarray*}
LL^{mar}:M^{mar}-({\mathcal K}^{mar}_2\cup{\mathcal K}^{mar}_3)\to M_{LL}^{(\mu)}
-{\mathcal D}_{LL}^{(\mu)}
\end{eqnarray*}
in theorem \ref{t6.1}.
Given this, the results above (together with the shape of
$\{\pm M_h^k\, |\, k\in{\mathbb Z}\}$, see e.g. the theorems
8.3 and 8.4 in \cite{He11}) prove the main theorem in
\cite{Li81} which describes this covering group.
This theorem and the isomorphism
$\Aut_M\cong G_{\mathbb Z}/\{\pm\id\}$ have also been
(re)proved in \cite[Theorem 1 and Theorem 2]{Yu99}.
\end{remark}
\subsection{Symmetries of the simple elliptic singularities}
\label{s5.2}
\noindent
The group $G_{\mathbb Z}=G_{\mathbb Z}(f^{(0)})$ of the simple elliptic
reference singularity $f^{(0)}=f_{1/2}$ (see theorem \ref{t4.3})
sits, by theorem 3.1 in \cite{GH17-1}, in an exact sequence
\begin{eqnarray}\label{5.10}
1\to (U_1^0\rtimes U_2)\times \{\pm\id\}\to G_{\mathbb Z}\hspace*{2cm}\\
\to \Aut(Ml(f^{(0)})_{(-1)^n,{\mathbb Z}},L)/\{\pm\id\}\to 1 \nonumber
\end{eqnarray}
where
\begin{eqnarray}\label{5.11}
\Aut(Ml(f^{(0)})_{(-1)^n,{\mathbb Z}},L)&\cong & \textup{SL}(2,{\mathbb Z})
\end{eqnarray}
and
\begin{eqnarray}
&& U_1^0\cong
\{(\alpha,\beta,\gamma)\in{\mathbb Z}/p{\mathbb Z}\times {\mathbb Z}/q{\mathbb Z}\times{\mathbb Z}/r{\mathbb Z}\,|\,
\nonumber \\
&& \hspace*{2cm}
\frac{\alpha}{p}+\frac{\beta}{q}+\frac{\gamma}{r}\equiv
0\mod{\mathbb Z}\}\nonumber \\
&&\begin{array}{c|c|c|c}
& \widetilde E_6 & \widetilde E_7 & \widetilde E_8 \\
(p,q,r) & (3,3,3) & (4,4,2) & (6,3,2) \\
U_2\cong & S_3 & S_2 & S_1
\end{array}\label{5.12}
\end{eqnarray}
By theorem 6.1 in \cite{GH17-1}, the action of $G_{\mathbb Z}$ on
$M^{mar}_\mu$ pulls down to an action of the quotient
$\Aut(Ml(f^{(0)})_{(-1)^n,{\mathbb Z}},L)/\{\pm\id\}$ in the
exact sequence \eqref{5.10} on $M^{mar}_\mu$,
and by the isomorphisms \eqref{5.11} and
$M^{mar}_\mu\cong\H$ this becomes the standard action
of $\textup{PSL}(2,{\mathbb Z})$ on $\H$.
The action of $\textup{PSL}(2,{\mathbb Z})$ on $\H$ descends
to the action of $S_3$ on ${\mathbb C}-\{0,1\}$ where
$S_3$ acts via
\begin{eqnarray}\label{5.13}
S_3\cong \{\lambda\mapsto \lambda,\ 1-\lambda,
\frac{1}{\lambda},\frac{\lambda-1}{\lambda},
\frac{\lambda}{\lambda-1},\frac{1}{1-\lambda}\}.
\end{eqnarray}
This and theorem \ref{t3.6} (c),
$M^{mar}_\mu/G_{\mathbb Z}\cong M_\mu$, reprove the well known
fact that the orbits of this action of $S_3$ on
${\mathbb C}-\{0,1\}$ give the right equivalence classes of one
family of Legendre normal forms in table \eqref{4.2}.
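The closure of the six maps in \eqref{5.13} under composition, and the fact that they form a nonabelian group of order $6$ (hence $\cong S_3$), can be checked mechanically. The following is a minimal verification sketch in Python; the one-letter labels for the maps are ad hoc, and exact rational arithmetic on three sample points suffices because distinct M\"obius transformations agree in at most two points.

```python
from fractions import Fraction

# The six Moebius transformations in (5.13); the labels are ad hoc.
maps = {
    "id": lambda z: z,
    "a":  lambda z: 1 - z,
    "b":  lambda z: 1 / z,
    "c":  lambda z: (z - 1) / z,
    "d":  lambda z: z / (z - 1),
    "e":  lambda z: 1 / (1 - z),
}

# sample points in Q - {0, 1}; exact arithmetic avoids rounding issues
samples = [Fraction(2, 5), Fraction(7, 3), Fraction(-4, 9)]

def identify(f):
    """Match a map against the six above by evaluating on the samples."""
    for name, g in maps.items():
        if all(f(z) == g(z) for z in samples):
            return name
    return None

# closure: the composition of any two of the six maps is again one of them
table = {(m, n): identify(lambda z, f=maps[m], g=maps[n]: f(g(z)))
         for m in maps for n in maps}
assert all(v is not None for v in table.values())
# nonabelian of order 6, hence isomorphic to S_3
assert table[("a", "b")] != table[("b", "a")]
```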
The kernel $(U_1^0\rtimes U_2)\times\{\pm\id\}$
in the exact sequence \eqref{5.10} acts on the fibers
of the projection
\begin{eqnarray*}
\pr_\mu^{mar}:M^{mar}={\mathbb C}^{\mu-1}\times\H\to\H,\
t\mapsto t_\mu.
\end{eqnarray*}
This action pulls down to an action on the fibers of the
projection
\begin{eqnarray*}
\pr_\mu^{alg}:M^{alg}={\mathbb C}^{\mu-1}\times ({\mathbb C}-\{0,1\})\to
{\mathbb C}-\{0,1\},\
(t',\lambda)\mapsto \lambda.
\end{eqnarray*}
But the action of $G_{\mathbb Z}$ on $M^{mar}$ does not pull down
to an action of a quotient of $G_{\mathbb Z}$ on $M^{alg}$,
because the covering group
($\cong \Gamma(2)/\{\pm{\bf 1}_2\}\subset\textup{PSL}(2,{\mathbb Z})$)
of the coverings $M^{mar}\to M^{alg}$
and $\H\to {\mathbb C}-\{0,1\}$ is not a normal subgroup of
$G_{\mathbb Z}$.
The action of $G_{\mathbb Z}$ on $M^{mar}$ only pulls down to an action
of a {\it groupoid} (see e.g. \cite{ALR07} for the definition
of a groupoid) on $M^{alg}$, whose quotient is
$M^{mar}/G_{\mathbb Z}$. We will not delve into this.
The groupoid structure comes from all isomorphisms
$(M^{alg},(t'^{(1)},\lambda^{(1)}))
\to (M^{alg},(t'^{(2)},\lambda^{(2)}))$ of germs of F-manifolds
(they all underlie isomorphisms of universal unfoldings).
As global maps, many of these isomorphisms become multivalued.
For the use in section \ref{s10}, we make some of them
explicit below. Beforehand, we state a lemma on the relation
between $M^{alg}$ and $M^{mar}/G_{\mathbb Z}$, which will be
used in corollary \ref{t7.3}.
\begin{lemma}\label{t5.2}
The map $M^{alg}\to M^{mar}/G_{\mathbb Z}$ is finite and flat
and has the following degrees,
\begin{eqnarray}
\widetilde E_6:&& 6\cdot 2\cdot 3\cdot 3^2=324,\nonumber\\
\widetilde E_7:&& 6\cdot 1\cdot 4\cdot 2^2=96,\label{5.14}\\
\widetilde E_8:&& 6\cdot 1\cdot 6\cdot 1^2=36.\nonumber
\end{eqnarray}
\end{lemma}
{\bf Proof:}
Finiteness and flatness are clear.
The degree is $|S_3|\cdot |U_1^0|\cdot |U_2|$.
By \eqref{5.12}, this is the number in \eqref{5.14}.
\hfill$\Box$
\bigskip
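The count in the proof of lemma \ref{t5.2} can be confirmed by brute force: $|U_1^0|$ is obtained by enumerating the triples in \eqref{5.12}, and the degree is $|S_3|\cdot|U_1^0|\cdot|U_2|$ (note that the product $6\cdot 2\cdot 3\cdot 3^2$ for $\widetilde E_6$ evaluates to $324$). A minimal sketch in Python:

```python
from itertools import product
from fractions import Fraction

def order_U1(p, q, r):
    """|U_1^0|: triples (a, b, c) with a/p + b/q + c/r = 0 in Q/Z."""
    return sum(1 for a, b, c in product(range(p), range(q), range(r))
               if (Fraction(a, p) + Fraction(b, q) + Fraction(c, r)) % 1 == 0)

# (p, q, r) and |U_2| from table (5.12); |S_3| = 6
data = {
    "E6~": ((3, 3, 3), 6),   # U_2 = S_3
    "E7~": ((4, 4, 2), 2),   # U_2 = S_2
    "E8~": ((6, 3, 2), 1),   # U_2 = S_1
}
degrees = {k: 6 * order_U1(*pqr) * u2 for k, (pqr, u2) in data.items()}
assert degrees == {"E6~": 324, "E7~": 96, "E8~": 36}
```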
In section \ref{s10}, we need to compare neighborhoods in
$M^{alg}={\mathbb C}^{\mu-1}\times({\mathbb C}-\{0,1\})$ of
${\mathbb C}^{\mu-1}\times\{0\}$, ${\mathbb C}^{\mu-1}\times\{1\}$ and
${\mathbb C}^{\mu-1}\times\{\infty\}$.
For this, we give now multivalued maps
$\psi_2,\psi_3:M^{alg}\dashrightarrow M^{alg}$ which
locally underlie isomorphisms of unfoldings and which lift
the automorphisms $\lambda\mapsto\frac{1}{\lambda}$
and $\lambda\mapsto 1-\lambda$ of ${\mathbb C}-\{0,1\}$.
In each case $\widetilde E_k$, $k\in\{6,7,8\}$, we will give
two coordinate changes $\varphi_2$ and $\varphi_3$
and multivalued maps
\begin{eqnarray*}
\Psi_2,\Psi_3&:&{\mathbb C}^{n+1}\times M^{alg}\dashrightarrow
{\mathbb C}^{n+1},\\
\psi_2,\psi_3&:& M^{alg}\dashrightarrow M^{alg}
\end{eqnarray*}
with
\begin{eqnarray}\label{5.15}
F^{alg}_{t',\lambda}((\Psi_i\circ\varphi_i)(x))
&=& F^{alg}_{\psi_i(t',\lambda)}(x),\\
\pr_M^{alg}\circ\psi_2 &=&
(\lambda\mapsto\frac{1}{\lambda})\circ \pr_M^{alg},\label{5.16}\\
\pr_M^{alg}\circ\psi_3 &=&
(\lambda\mapsto 1-\lambda)\circ \pr_M^{alg}.
\label{5.17}
\end{eqnarray}
Then $\Phi_i :=(\Psi_i,\psi_i^{-1})$ and $\varphi_i$
satisfy \eqref{3.8}.
The choice of $\varphi_2,\Psi_2$ and $\psi_2$ is rather
obvious, and there $f_\lambda\circ \varphi_2=f_{1/\lambda}$;
\eqref{5.15}, \eqref{5.16} and \eqref{3.8}
are easy to check. Also the property of $\varphi_3$,
$$f_\lambda\circ \varphi_3=f_{1-\lambda}$$
is easy to see. But $\Psi_3$ looks more difficult.
It is determined by the requirement that
$F_{t',\lambda}^{alg}(\Psi_3\circ \varphi_3(x))
[=F^{alg}(\Psi_3(\varphi_3(x),t',\lambda),t',\lambda)]$
is an unfolding of $f_{1-\lambda}$ only in the monomials
in table \eqref{4.7}.
That means that the coefficients of the following monomials
must vanish:
\begin{eqnarray}
\textup{For }\widetilde E_6:&& x_0x_2,\
x_1^2\ (\textup{automatic}), \ x_2^2\ (\textup{automatic}),
\nonumber\\
\textup{For }\widetilde E_7:&& x_0^3,\
x_1^3\ (\textup{automatic}),
\label{5.18}\\
\textup{For }\widetilde E_8:&& x_0^5,\
x_0^3x_1\ (\textup{automatic}), \ x_0^4\ (\textup{automatic}).
\nonumber
\end{eqnarray}
Having $\Psi_3$ and $\varphi_3$, $\psi_3$ is determined
by \eqref{5.15}. In the case of $\widetilde E_6$ it takes
two lines, in the case of $\widetilde E_7$ it takes 11
lines, but in the case of $\widetilde E_8$ it would take 3 pages.
There we do not write down $\psi_3$ completely;
we write down only the part of it which is relevant
in section \ref{s10}.
\medskip
{\bf The case $\widetilde E_6$:}
\begin{eqnarray}
\varphi_2(x_0,x_1,x_2)&=& (\lambda^{-1}x_0,x_1,\lambda^{1/2}x_2),
\label{5.19}\\
\Psi_2(x,t',\lambda)&=& x,\label{5.20}\\
\psi_2(t',\lambda)&=& (t_1,\lambda^{-1}t_2,t_3,\lambda^{1/2}t_4,
\nonumber\\
&&\lambda^{-2}t_5,\lambda^{-1}t_6,\lambda^{1/2}t_7,\lambda^{-1}).
\label{5.21}
\end{eqnarray}
\begin{eqnarray}\label{5.22}
\varphi_3(x)&=& (-x_0,x_1-x_0,ix_2),\\
\Psi_3(x,t',\lambda) &=& (x_0,x_1,x_2-\frac{i}{2}t_7),
\label{5.23}\\
\psi_3(t',\lambda)&=& (t_1+\frac{1}{2}t_4t_7,
-t_2-t_3-\frac{1}{4}t_7^2,t_3+\frac{1}{2}t_7^2,\nonumber\\
&& it_4,t_5+t_6,-t_6,it_7,1-\lambda).\label{5.24}
\end{eqnarray}
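A consistency check on \eqref{5.22}--\eqref{5.24}: since $\varphi_3^2(x)=(x_0,x_1,-x_2)$, applying $\psi_3$ twice should return to the fiber over $\lambda$ and act on the deformation parameters only by the sign flips induced by $x_2\mapsto -x_2$, i.e. $\psi_3^2(t',\lambda)=(t_1,t_2,t_3,-t_4,t_5,t_6,-t_7,\lambda)$. A minimal numerical sketch in Python, with arbitrary sample values:

```python
# psi_3 for E6~ from (5.24), acting on (t_1, ..., t_7, lambda)
def psi3(t1, t2, t3, t4, t5, t6, t7, lam):
    return (t1 + t4 * t7 / 2,
            -t2 - t3 - t7**2 / 4,
            t3 + t7**2 / 2,
            1j * t4, t5 + t6, -t6, 1j * t7, 1 - lam)

t = (0.3, -1.2, 0.7, 2.1, -0.4, 1.5, 0.9, 0.25)
tt = psi3(*psi3(*t))
# psi_3 squared should only flip the signs of t_4 and t_7
expected = (t[0], t[1], t[2], -t[3], t[4], t[5], -t[6], t[7])
assert all(abs(a - b) < 1e-12 for a, b in zip(tt, expected))
```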
\medskip
{\bf The case $\widetilde E_7$:}
\begin{eqnarray}\label{5.25}
\varphi_2(x)&=& (\lambda^{-3/4}x_0,\lambda^{1/4}x_1),\\
\Psi_2(x,t',\lambda) &=& x,\label{5.26}\\
\psi_2(t',\lambda)&=& (t_1,\lambda^{-3/4}t_2,\lambda^{1/4}t_3,
\lambda^{-3/2}t_4,\lambda^{-1/2}t_5,\nonumber\\
&& \lambda^{1/2}t_6,\lambda^{-5/4}t_7,\lambda^{-1/4}t_8,
\lambda^{-1})\label{5.27}.
\end{eqnarray}
\begin{eqnarray}\label{5.28}
\varphi_3(x)&=& (-\xi x_0,\xi(x_1-x_0))\quad\textup{ with }
\xi = e^{2\pi i/8},\\
\Psi_3(x,t',\lambda) &=& (x_0,x_1-\frac{t_7+t_8}{1-\lambda}),
\label{5.29}\\
\psi_3(t',\lambda)&=& (\widetilde t_1,...,\widetilde t_8,1-\lambda)
\quad\textup{with }\label{5.30}\\
\widetilde t_1&=& t_1+(-1)\frac{t_7+t_8}{1-\lambda}t_3+
\left(\frac{t_7+t_8}{1-\lambda}\right)^2 t_6,\nonumber\\
\widetilde t_2&=& (-\xi)t_2+(-\xi)t_3 + \xi \frac{t_7+t_8}{1-\lambda}t_5
+2\xi \frac{t_7+t_8}{1-\lambda}t_6 \nonumber\\
&&\hspace*{1cm}+(-\xi)\left(\frac{t_7+t_8}{1-\lambda}\right)^2t_8
+\xi\left(\frac{t_7+t_8}{1-\lambda}\right)^3,\nonumber\\
\widetilde t_3&=& \xi t_3+(-2\xi)\frac{t_7+t_8}{1-\lambda}t_6,\nonumber
\end{eqnarray}
\begin{eqnarray*}
\widetilde t_4&=& \xi^2(t_4+t_5+t_6)
+(-\xi^2)\frac{t_7+t_8}{1-\lambda}t_7
+(-2\xi^2)\frac{t_7+t_8}{1-\lambda}t_8 \nonumber\\
&& \hspace*{1cm}+\xi^2(2-\lambda)\left(\frac{t_7+t_8}{1-\lambda}
\right)^2, \nonumber\\
\widetilde t_5&=& (-\xi^2)t_5+(-2\xi^2)t_6
+2\xi^2 \frac{t_7+t_8}{1-\lambda}t_8
+(-3\xi^2)\left(\frac{t_7+t_8}{1-\lambda}\right)^2,\nonumber\\
\widetilde t_6&=& \xi^2t_6,\nonumber\\
\widetilde t_7&=& \frac{\xi^3}{1-\lambda}((-3+\lambda)t_7
+(-2)t_8),\nonumber\\
\widetilde t_8&=& \frac{\xi^3}{1-\lambda}(3t_7+(2+\lambda)t_8)
\nonumber.
\end{eqnarray*}
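A similar consistency check for $\widetilde E_7$: since $\varphi_3^2=\xi^2\cdot\id$ is a scalar, the square of the linear action of $\psi_3$ on $(t_7,t_8)$, read off from the formulas for $\widetilde t_7,\widetilde t_8$ (with $\lambda\mapsto 1-\lambda$ in between), must be a scalar as well; one finds $-i=\xi^6$. A minimal numerical sketch:

```python
import cmath

xi = cmath.exp(2j * cmath.pi / 8)

def m(lam):
    """Linear action of psi_3 on (t_7, t_8), read off from (5.30)."""
    pref = xi**3 / (1 - lam)
    return [[pref * (-3 + lam), pref * (-2)],
            [pref * 3,          pref * (2 + lam)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

lam = 0.3 + 0.4j
prod = matmul(m(1 - lam), m(lam))   # psi_3 applied twice
# the square is the scalar -i = xi^6
assert abs(prod[0][0] + 1j) < 1e-12 and abs(prod[1][1] + 1j) < 1e-12
assert abs(prod[0][1]) < 1e-12 and abs(prod[1][0]) < 1e-12
```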
\medskip
{\bf The case $\widetilde E_8$:}
\begin{eqnarray}\label{5.31}
\varphi_2(x)&=& (\lambda^{-1/2}x_0,x_1),\\
\Psi_2(x,t',\lambda) &=& x,\label{5.32}\\
\psi_2(t',\lambda)&=& (t_1,\lambda^{-1/2}t_2,\lambda^{-1}t_3,
t_4,\lambda^{-3/2}t_5,\nonumber\\
&&\lambda^{-1/2}t_6,\lambda^{-1}t_7,t_8,
\lambda^{-1/2}t_9,\lambda^{-1}).\label{5.33}
\end{eqnarray}
\begin{eqnarray}\label{5.34}
\varphi_3(x)&=& (ix_0,x_1-x_0^2),\\
\Psi_3(x,t',\lambda) &=& (x_0
+\frac{1}{2}\frac{t_9}{(1-\lambda)^2},
x_1+\frac{t_7+t_8}{1-\lambda}\nonumber\\
&&\hspace*{1cm}
+i\lambda \frac{t_9}{(1-\lambda)^2}x_0
+\frac{1}{4}\frac{t_9^2(4\lambda^2-2\lambda-1)}{(1-\lambda)^4}),
\label{5.35}\\
\psi_3(t',\lambda)&=& (\widetilde t_1,...,\widetilde t_9,1-\lambda)
\quad\textup{with}\label{5.36}\\
\widetilde t_1&=& t_1 + (\textup{a term in }
{\mathbb C}[\lambda,t_2,...,t_9,\frac{t_7+t_8}{1-\lambda},
\frac{t_9}{(1-\lambda)^2}]),\nonumber\\
\widetilde t_2&=& it_2 + (\textup{a term in }
{\mathbb C}[\lambda,t_3,...,t_9,\frac{t_7+t_8}{1-\lambda},
\frac{t_9}{(1-\lambda)^2}]),\nonumber\\
\widetilde t_3&=& -t_3-t_4 + (\textup{a term in }
{\mathbb C}[\lambda,t_5,...,t_9,\frac{t_7+t_8}{1-\lambda},
\frac{t_9}{(1-\lambda)^2}]),\nonumber\\
\widetilde t_4&=& t_4 + (\textup{a term in }
{\mathbb C}[\lambda,t_6,...,t_9,\frac{t_7+t_8}{1-\lambda},
\frac{t_9}{(1-\lambda)^2}]),\nonumber
\end{eqnarray}
\begin{eqnarray}
\widetilde t_5&=& (-i)(t_5+t_6) + (\textup{a term in }
{\mathbb C}[\lambda,t_7,t_8,t_9,\frac{t_7+t_8}{1-\lambda},
\frac{t_9}{(1-\lambda)^2}]),\nonumber\\
\widetilde t_6&=& it_6 + (\textup{a term in }
{\mathbb C}[\lambda,t_7,t_8,t_9,\frac{t_7+t_8}{1-\lambda},
\frac{t_9}{(1-\lambda)^2}]),\nonumber\\
\widetilde t_7&=& \frac{\lambda-3}{\lambda-1}t_7 +
\frac{-2}{\lambda-1}t_8 +
\frac{6\lambda+1}{2(1-\lambda)^3}t_9^2,\nonumber\\
\widetilde t_8&=& \frac{3}{\lambda-1}t_7
+\frac{\lambda+2}{\lambda-1}t_8 +
\frac{14\lambda^2-11\lambda-2}{4(1-\lambda)^4}t_9^2,\nonumber\\
\widetilde t_9&=& i\frac{\lambda^2}{(1-\lambda)^2}t_9.\nonumber
\end{eqnarray}
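Again a consistency check: for $\widetilde E_8$, $\varphi_3^2(x)=(-x_0,x_1)$, so applying $\psi_3$ twice should act on $(t_7,t_8)$ (with $t_9=0$) as the identity and send $t_9$ to $-t_9$. A minimal numerical sketch:

```python
def n(lam):
    """Linear action of psi_3 on (t_7, t_8) for E8~, with t_9 = 0."""
    return [[(lam - 3) / (lam - 1), -2 / (lam - 1)],
            [3 / (lam - 1), (lam + 2) / (lam - 1)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

lam = 0.3 + 0.4j
prod = matmul(n(1 - lam), n(lam))   # psi_3 applied twice
assert abs(prod[0][0] - 1) < 1e-12 and abs(prod[1][1] - 1) < 1e-12
assert abs(prod[0][1]) < 1e-12 and abs(prod[1][0]) < 1e-12
# and the formula for tilde t_9, applied twice, gives -t_9
assert abs(1j * (1 - lam)**2 / lam**2 * 1j * lam**2 / (1 - lam)**2 + 1) < 1e-12
```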
\begin{remarks}\label{t5.3}
(i) In the case of the minimal number of variables
($n=1$ for $\widetilde E_7$ and $\widetilde E_8$, and $n=2$ for $\widetilde E_6$),
the subgroup $U_1^0\rtimes U_2$ of the kernel
$(U_1^0\rtimes U_2)\times \{\pm\id\}$ in the
exact sequence in \eqref{5.10} comes via
$\textup{Stab}_{G_{\bf w}}(f_\lambda)\cong R_f
\stackrel{()_{hom}}{\longrightarrow} G_{\mathbb Z}$
from $\textup{Stab}_{G_{\bf w}}(f_\lambda)$ for generic
$\lambda$. This follows from the fact that the kernel of the
exact sequence in \eqref{5.10} is the subgroup of $G_{\mathbb Z}$
which acts trivially on $M^{mar}_\mu\cong\H$.
In the cases of $\widetilde E_7$ and $\widetilde E_8$, the elements
of $\textup{Stab}_{G_{\bf w}}(f_\lambda)$ for generic $\lambda$
can easily be determined explicitly.
In the case of $\widetilde E_6$, this is more difficult.
\medskip
(ii) In any case, one can avoid at the beginning of
this subsection \ref{s5.2} the use of theorem 3.1
in \cite{GH17-1}, which gives the facts in
\eqref{5.10}--\eqref{5.12} on $G_{\mathbb Z}$.
One can recover these by the following steps:
(1) Determine $\textup{Stab}_{G_{\bf w}}(f_\lambda)$ for
generic $\lambda$.
(2) Use (i).
(3) Show that
$\textup{Stab}_{G_{\bf w}}(f_\lambda)$ for generic $\lambda$
and $\varphi_2$ and $\varphi_3$ generate all quasihomogeneous
coordinate changes which map each $f_\lambda$
to some $f_{\widetilde\lambda}$.
\end{remarks}
\section{Lyashko-Looijenga maps for the simple and the
simple elliptic singularities}\label{s6}
\setcounter{equation}{0}
\subsection{Lyashko-Looijenga maps and their degrees}\label{s6.1}
Lyashko-Looijenga maps in general were discussed in subsection
\ref{s2.4}.
Here we consider the Lyashko-Looijenga maps for the families of functions
defined in section \ref{s4}, the maps
\begin{eqnarray}
LL^{alg}:M^{alg}&\to& M_{LL}^{(\mu)} \quad \textup{and}\label{6.1} \\
LL^{mar}:M^{mar}&\to& M_{LL}^{(\mu)},\quad \textup{with}
\label{6.2}\\
t\in M^{mar}&\mapsto& \prod_{j=1}^\mu (y-u_j)
\quad \textup{ with }u_1,...,u_\mu
\textup{ the} \nonumber\\
&&\textup{critical values of }F^{mar}_t
\textup{ (with multiplicities).}\nonumber
\end{eqnarray}
The caustic ${\mathcal K}_3^{mar}\subset M^{mar}$
and the Maxwell stratum ${\mathcal K}_2^{mar}\subset M^{mar}$
had been defined in \eqref{3.18} and \eqref{3.19}.
They are analytic hypersurfaces.
The caustic ${\mathcal K}_3^{alg}\subset M^{alg}$
and the Maxwell stratum ${\mathcal K}_2^{alg}\subset M^{alg}$
are defined analogously.
They are algebraic hypersurfaces as $LL^{alg}$ is even an
algebraic map.
By Looijenga \cite{Lo74} and Lyashko \cite{Ly79}\cite{Ly84},
the map $LL^{alg}$ restricts to a locally biholomorphic map
$LL^{alg}:M^{alg}-({\mathcal K}_3^{alg}\cup {\mathcal K}_2^{alg})
\to M_{LL}^{(\mu)}-{\mathcal D}_{LL}^{(\mu)}$,
it maps ${\mathcal K}_3^{alg}\cup {\mathcal K}_2^{alg}$ to ${\mathcal D}_{LL}^{(\mu)}$,
and it is a branched covering of order 3 respectively 2
at generic points of ${\mathcal K}_3^{alg}$ respectively ${\mathcal K}_2^{alg}$.
Analogous statements hold
for $LL^{mar}$ and ${\mathcal K}_3^{mar},{\mathcal K}_2^{mar}\subset M^{mar}$.
In the case of the simple and the simple elliptic singularities,
we have the following more precise results.
Theorem \ref{t6.1} concerns the simple singularities and was
proved by Looijenga \cite{Lo74}
and Lyashko \cite{Ly79}\cite{Ly84}.
The covering result in theorem \ref{t6.2}
for the simple elliptic singularities
is an achievement of Jaworski \cite[Theorem 2]{Ja86}\cite[Proposition 1]{Ja88}.
The refinement in theorem \ref{t6.3} of theorem \ref{t6.2}
is a major result of this paper.
It will be proved in section \ref{s10},
which builds on the sections \ref{s5}, \ref{s8} and \ref{s9}.
It reproves Jaworski's result.
But the main point is the degree $\deg LL^{alg}$
for the simple elliptic singularities,
which was not calculated before.
However, for the bijections in the main result, theorem \ref{t7.1},
we do not need the degree $\deg LL^{alg}$;
theorem \ref{t6.2} and the analogous part of
theorem \ref{t6.1} are sufficient.
\begin{theorem}\label{t6.1} \cite{Lo74}\cite{Ly79}\cite{Ly84}
In the case of the simple singularities,
$LL^{alg}$ is a branched covering of degree
\begin{eqnarray}\label{6.3}
\deg LL^{alg} = \frac{\mu!}{\prod_{j=1}^\mu \deg_{\bf w} t_j}.
\end{eqnarray}
Here $\deg_{\bf w}t_j:=1-\deg_{\bf w}m_j$.
The degree $\deg LL^{alg}$
is given explicitly in table \eqref{6.4}.
\begin{eqnarray}\label{6.4}
\begin{array}{l|l|l|l|l|l}
\textup{name} & A_\mu & D_\mu & E_6 & E_7 & E_8 \\
\deg LL^{alg} & (\mu+1)^{\mu-1} & 2(\mu-1)^\mu & 2^9\cdot 3^4
& 2\cdot 3^{12} & 2\cdot 3^5\cdot 5^7
\end{array}
\end{eqnarray}
And the restriction
$LL^{alg}:M^{alg}-({\mathcal K}_3^{alg}\cup {\mathcal K}_2^{alg})
\to M_{LL}^{(\mu)}-{\mathcal D}_{LL}^{(\mu)}$
is a covering.
\end{theorem}
\begin{theorem}\label{t6.2}\cite{Ja86}\cite{Ja88}
In the case of the simple elliptic singularities,
the restriction $LL^{alg}:M^{alg}-({\mathcal K}_3^{alg}\cup {\mathcal K}_2^{alg})
\to M_{LL}^{(\mu)}-{\mathcal D}_{LL}^{(\mu)}$ is a covering.
\end{theorem}
\begin{theorem}\label{t6.3}
In the case of the simple elliptic singularities, an extension $M^{orb}$
\begin{eqnarray}\label{6.5}
\begin{array}{ccccc}(t',\lambda)& \in & M^{alg} & \subset & M^{orb} \\
\downarrow & & \downarrow & & \downarrow \pi_{orb} \\
\lambda & \in & {\mathbb C}-\{0,1\} & \subset & \P^1
\end{array}
\end{eqnarray}
of $M^{alg}$ to an orbibundle above $\P^1\supset {\mathbb C}-\{0,1\}$
exists such that $LL^{alg}$ extends to a surjective
holomorphic map $LL^{orb}:M^{orb}\to M_{LL}^{(\mu)}$
with the following properties.
The two-dimensional subspace $M^{orb}_0\subset M^{orb}$ with
\begin{eqnarray*}
M^{orb}_0&=&(\textup{closure in }M^{orb}\textup{ of }
\{(t',\lambda)\in M^{alg}\, |\, t_2=...=t_{\mu-1}=0\})\\
&\cong&{\mathbb C}\times\P^1
\end{eqnarray*}
(which is the $\mu$-constant stratum and its translates
under the unit field)
is mapped to the one-dimensional subspace $M_{LL,0}^{(\mu)}
\subset M_{LL}^{(\mu)}$ with
\begin{eqnarray*}
M_{LL,0}^{(\mu)}:= \{p(y)\in M_{LL}^{(\mu)}\, |\,
p(y)=(y-u_1)^\mu ,\, u_1\in{\mathbb C}\} \cong{\mathbb C}.
\end{eqnarray*}
The restriction
\begin{eqnarray}\label{6.6}
LL^{orb}&:& M^{orb}-M^{orb}_0 \to M_{LL}^{(\mu)}-M_{LL,0}^{(\mu)}
\end{eqnarray}
is a branched covering of degree
\begin{eqnarray}\label{6.7}
\deg LL^{orb}=\deg LL^{alg} =
\frac{\mu!\cdot \frac{1}{2}\cdot\sum_{j=2}^{\mu-1}\frac{1}{\deg_{\bf w}t_j}}
{\prod_{j=2}^{\mu-1} \deg_{\bf w} t_j}.
\end{eqnarray}
Here $\deg_{\bf w}t_j:=1-\deg_{\bf w}m_j$.
The degree $\deg LL^{alg}$
is given explicitly in table \eqref{6.8}.
\begin{eqnarray}\label{6.8}
\begin{array}{l|l|l|l}
\textup{name} & \widetilde E_6 & \widetilde E_7 & \widetilde E_8 \\
\deg LL^{alg} & 2^2\cdot 3^{11}\cdot 5\cdot 7
& 2^{18}\cdot 3\cdot 5^3\cdot 7 & 2^9\cdot 3^{10}\cdot 7\cdot 101
\end{array}
\end{eqnarray}
And $LL^{orb}$ maps
${\mathcal K}_3^{alg}\cup{\mathcal K}_2^{alg}\cup\pi_{orb}^{-1}(\{0,1,\infty\})$
to ${\mathcal D}_{LL}^{(\mu)}$.
\end{theorem}
\begin{remark}\label{t6.4}
(i) Let $N_{Coxeter}$ be the Coxeter number of an ADE root lattice,
and $W$ its Weyl group.
By \cite{Bo68} $|W|=N_{Coxeter}^\mu\cdot \prod_{j=1}^\mu\deg_{\bf w}t_j$.
Therefore
\begin{eqnarray}\label{6.9}
\deg LL^{alg} = \frac{\mu!}{\prod_{j=1}^\mu \deg_{\bf w} t_j}
=\frac{\mu!\cdot N_{Coxeter}^\mu}{|W|}.
\end{eqnarray}
This was observed for example in \cite{Yu90}.
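Formula \eqref{6.9} can be checked against table \eqref{6.4} using the standard Coxeter numbers and Weyl group orders; a minimal sketch in Python:

```python
from math import factorial

# Coxeter numbers and Weyl group orders of E6, E7, E8 (standard values)
data = {
    "E6": (6, 12, 51840),
    "E7": (7, 18, 2903040),
    "E8": (8, 30, 696729600),
}
deg = {k: factorial(mu) * N**mu // W for k, (mu, N, W) in data.items()}
assert deg == {"E6": 2**9 * 3**4, "E7": 2 * 3**12, "E8": 2 * 3**5 * 5**7}

# the series, for one sample rank: |W(A_mu)| = (mu+1)!, |W(D_mu)| = 2^(mu-1) mu!
mu = 5
assert factorial(mu) * (mu + 1)**mu // factorial(mu + 1) == (mu + 1)**(mu - 1)
assert (factorial(mu) * (2 * (mu - 1))**mu // (2**(mu - 1) * factorial(mu))
        == 2 * (mu - 1)**mu)
```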
\medskip
(ii) In order to make the tables \eqref{6.4} and \eqref{6.8} transparent,
here we give the weights $\deg_{\bf w}x_i=w_i$, the weights
$\deg_{\bf w}t_j$,
in the ADE cases the Coxeter numbers $N_{Coxeter}$,
and in the simple elliptic cases the number
$\frac{1}{2}\sum_{j=2}^{\mu-1}\frac{1}{\deg_{\bf w}t_j}$,
\begin{eqnarray*}
\begin{array}{l|l|l|l|l|l|l|l|l|l|l|l}
& N_{Coxeter} & x_0 & x_1 & t_1 & t_2 & t_3 & t_4 & ... &
t_\mu \\
A_\mu & \mu+1 & \frac{1}{\mu+1} & & 1 & \frac{\mu}{\mu+1} & \frac{\mu-1}{\mu+1}
& \frac{\mu-2}{\mu+1} & ... & \frac{2}{\mu+1} \\
D_\mu & 2(\mu-1) & \frac{1}{\mu-1} & \frac{\mu-2}{2(\mu-1)} & 1 & \frac{\mu}{2(\mu-1)}
& \frac{\mu-2}{\mu-1} & \frac{\mu-3}{\mu-1} & ... & \frac{1}{\mu-1}
\end{array}
\end{eqnarray*}
\begin{eqnarray*}
\begin{array}{l|l|l|l|l|l|l|l|l|l|l|l}
& N_{Coxeter} & x_0 & x_1 & t_1 & t_2 & t_3 & t_4 & t_5 & t_6 & t_7 & t_8 \\
E_6 & 12 & \frac{1}{4} & \frac{1}{3} & 1 & \frac{3}{4}
& \frac{2}{3} & \frac{1}{2} & \frac{5}{12} & \frac{1}{6} & &
\\[1mm]
E_7 & 18 & \frac{2}{9} & \frac{1}{3} & 1 & \frac{7}{9}
& \frac{2}{3} & \frac{5}{9} & \frac{4}{9} & \frac{1}{3}
& \frac{1}{9} & \\[1mm]
E_8 & 30 & \frac{1}{5} & \frac{1}{3} & 1 & \frac{4}{5}
& \frac{2}{3} & \frac{3}{5} & \frac{7}{15} & \frac{2}{5}
& \frac{4}{15} & \frac{1}{15}
\end{array}
\end{eqnarray*}
\begin{eqnarray*}
\begin{array}{l|l|l|l|l|l|l|l|l|l|l|l|l|l}
& x_0 & x_1 & x_2 & t_1 & t_2 & t_3 & t_4 & t_5 & t_6 & t_7 & t_8 & t_9
& \frac{1}{2}\sum_{j=2}^{\mu-1}\frac{1}{\deg_{\bf w}t_j}\\
\widetilde E_6 & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 1 & \frac{2}{3}
& \frac{2}{3} & \frac{2}{3} & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & &
& \frac{27}{4} \\
\widetilde E_7 & \frac{1}{4} & \frac{1}{4} & & 1 & \frac{3}{4} & \frac{3}{4}
& \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{4} & \frac{1}{4} &
& \frac{25}{3} \\
\widetilde E_8 & \frac{1}{6} & \frac{1}{3} & & 1 & \frac{5}{6} & \frac{2}{3}
& \frac{2}{3} & \frac{1}{2} & \frac{1}{2} & \frac{1}{3} & \frac{1}{3}
& \frac{1}{6} & \frac{101}{10}
\end{array}
\end{eqnarray*}
\end{remark}
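The degrees in table \eqref{6.8} can be recomputed from formula \eqref{6.7} and the weights $\deg_{\bf w}t_j$ listed in the last table; a minimal sketch in Python with exact rational arithmetic:

```python
from fractions import Fraction as F
from math import factorial

# weights deg_w t_j for j = 2, ..., mu-1, from the last table above
weights = {
    "E6~": [F(2, 3)] * 3 + [F(1, 3)] * 3,                      # mu = 8
    "E7~": [F(3, 4)] * 2 + [F(1, 2)] * 3 + [F(1, 4)] * 2,      # mu = 9
    "E8~": [F(5, 6), F(2, 3), F(2, 3), F(1, 2), F(1, 2),
            F(1, 3), F(1, 3), F(1, 6)],                        # mu = 10
}

def deg_LL(ws):
    """Evaluate formula (6.7) from the weights deg_w t_j."""
    mu = len(ws) + 2
    s = F(1, 2) * sum(1 / w for w in ws)   # (1/2) sum of 1/deg_w t_j
    p = F(1)
    for w in ws:
        p *= w                             # product of deg_w t_j
    return factorial(mu) * s / p

assert deg_LL(weights["E6~"]) == 2**2 * 3**11 * 5 * 7
assert deg_LL(weights["E7~"]) == 2**18 * 3 * 5**3 * 7
assert deg_LL(weights["E8~"]) == 2**9 * 3**10 * 7 * 101
```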
\bigskip
\subsection{Limit behaviour of $LL^{alg}$ after Jaworski}\label{s6.2}
Jaworski's proof of theorem \ref{t6.2}
required an understanding of the
limit behaviour of $LL^{alg}$ near $\lambda\in\{0,1,\infty\}$.
Here we will explain a result of his which concerns this limit behaviour.
It will be crucial for the proof of the main theorem \ref{t7.1}
in the case of the simple elliptic singularities.
The intersection form $I$ on the Milnor lattice $Ml(f)$
of a simple elliptic singularity is positive semidefinite if $n\equiv 0(4)$,
see e.g. \cite{AGV88}.
Consider the Stokes matrix $S$ of any distinguished basis as defined in \eqref{2.16}.
Because of \eqref{2.18}, $S+S^t$ is positive semidefinite.
Therefore all entries of $S$ are in $\{0,\pm 1,\pm 2\}$.
Therefore for any two vanishing cycles $\delta_i$ and $\delta_j$
and any $n$ (not necessarily $n\equiv 0(4)$)
$I(\delta_i,\delta_j)\in\{0,\pm 1,\pm 2\}$.
$LL^{alg}:M^{alg}-({\mathcal K}_3^{alg}\cup {\mathcal K}_2^{alg})\to M_{LL}^{(\mu)}-{\mathcal D}_{LL}^{(\mu)}$
is a covering by theorem \ref{t6.2}.
Now consider a ($C^\infty$ or real analytic) path
\begin{eqnarray}\label{6.10}
r:[0,\varepsilon)\to M_{LL}^{(\mu)}
&\textup{with}& r((0,\varepsilon))\subset M_{LL}^{(\mu)}-{\mathcal D}_{LL}^{(\mu)},\\
&\textup{and}& r(0)\in{\mathcal D}_{LL}^{(\mu),reg}.\nonumber
\end{eqnarray}
Consider any lift
$\rho:(0,\varepsilon)\to M^{alg}-({\mathcal K}_3^{alg}\cup {\mathcal K}_2^{alg})$
of the restriction of the path $r$ to $(0,\varepsilon)$.
For $s\in (0,\varepsilon)$, denote by $u_1(s),...,u_\mu(s)$ the
critical values of $F_{\rho(s)}$.
They are pairwise different.
Because of $r(0)\in {\mathcal D}_{LL}^{(\mu),reg}$,
precisely two of them will tend to one another as $s\to 0$.
We can suppose that they are numbered $u_i(s)$ and $u_{i+1}(s)$.
Write
$\rho(s)=(t_1^{(\rho)}(s),...,t_{\mu-1}^{(\rho)}(s),
\lambda^{(\rho)}(s))$
for $s\in(0,\varepsilon)$.
For fixed $s\in (0,\varepsilon)$,
consider the ${\mathbb Z}$-lattice bundle
$\bigcup_{\tau\in{\mathbb C}-\{u_1(s),...,u_\mu(s)\}}
H_n(F_{\rho(s)}^{-1}(\tau),{\mathbb Z})$.
Move the vanishing cycles at $u_i(s)$ and $u_{i+1}(s)$
along straight
lines to the ${\mathbb Z}$-lattice
$H_n(F_{\rho(s)}^{-1}(\frac{u_i(s)+u_{i+1}(s)}{2}),{\mathbb Z})$
and call the images $\delta_i(s),\delta_{i+1}(s)$.
They are unique up to sign. Then the following holds.
\begin{theorem}\cite[Proposition 2]{Ja88}\label{t6.5}
\begin{eqnarray}
I(\delta_i(s),\delta_{i+1}(s))=0 &\iff& \rho\textup{ extends to 0 and }
\rho(0)\in{\mathcal K}_2^{alg,reg},\label{6.11} \\
I(\delta_i(s),\delta_{i+1}(s))=\pm 1&\iff& \rho\textup{ extends to 0 and }
\rho(0)\in{\mathcal K}_3^{alg,reg},\nonumber\\
I(\delta_i(s),\delta_{i+1}(s))=\pm 2&\iff&
\lambda^{(\rho)}\textup{ extends to 0 and }\lambda^{(\rho)}(0)\in\{0,1,\infty\}.
\nonumber
\end{eqnarray}
\end{theorem}
{\bf Proof:}
The statement in \cite[Proposition 2]{Ja88} is slightly weaker.
Therefore here we provide the additional arguments.
Because of theorem \ref{t6.3}, $\lambda^{(\rho)}$ extends in any case to 0,
and if $\lambda^{(\rho)}(0)\notin\{0,1,\infty\}$, then
$\rho$ extends to $0$ and $\rho(0)\in{\mathcal K}_2^{alg,reg}\cup {\mathcal K}_3^{alg,reg}$.
Therefore exactly one of the three cases on the right hand side
of \eqref{6.11} holds. In the third case, Proposition 2 in \cite{Ja88}
applies and gives $I(\delta_i(s),\delta_{i+1}(s))=\pm 2$.
In the second case, $F_{\rho(0)}$ has $\mu-2$ $A_1$ singularities
and one $A_2$ singularity, with pairwise different critical values,
and then $\delta_i(s)$ and $\delta_{i+1}(s)$
are a distinguished basis of the Milnor lattice of the $A_2$ singularity.
Then $I(\delta_i(s),\delta_{i+1}(s))=\pm 1$.
In the first case, $F_{\rho(0)}$ has $\mu$ $A_1$ singularities,
and two of them have the same critical value, the others have pairwise different
critical values. Then $\delta_i(s)$ and $\delta_{i+1}(s)$ are
vanishing cycles of the two $A_1$ singularities with the same critical value.
Then $I(\delta_i(s),\delta_{i+1}(s))=0$.
\hfill $\Box$
\bigskip
The situation for the simple singularities is analogous, but simpler.
The Milnor lattice $Ml(f)$ with intersection form of an ADE singularity is
isomorphic to the ADE root lattice if $n\equiv 0(4)$, see e.g.
\cite{AGV88}. Consider the Stokes matrix $S$ of any distinguished basis
as defined in \eqref{2.16}. Because of \eqref{2.18}, $S+S^t$ is positive definite.
Therefore all entries of $S$ are in $\{0,\pm 1\}$.
Therefore for any two vanishing cycles $\delta_i$ and $\delta_j$
and any $n$ (not necessarily $n\equiv 0(4)$)
$I(\delta_i,\delta_j)\in\{0,\pm 1\}$.
Now consider a ($C^\infty$ or real analytic) path
$r:[0,\varepsilon)\to M_{LL}^{(\mu)}$ as in \eqref{6.10},
and consider any lift
$\rho:(0,\varepsilon)\to M^{alg}-({\mathcal K}_3^{alg}\cup {\mathcal K}_2^{alg})$
of the restriction of the path $r$ to $(0,\varepsilon)$.
Because $LL^{alg}:M^{alg}\to M_{LL}^{(\mu)}$ is a branched covering,
$\rho$ extends to $0$, and $\rho(0)\in {\mathcal K}_2^{alg,reg}\cup {\mathcal K}_3^{alg,reg}$.
For $s\in (0,\varepsilon)$, denote again by $u_1(s),...,u_\mu(s)$ the
critical values of $F_{\rho(s)}$.
They behave for $s\to 0$ as above, and we obtain vanishing
cycles $\delta_i(s)$ and $\delta_{i+1}(s)$ as above.
Then the following holds.
\begin{lemma}\label{t6.6}
\begin{eqnarray}
I(\delta_i(s),\delta_{i+1}(s))=0 &\iff&
\rho(0)\in{\mathcal K}_2^{alg,reg},\label{6.12} \\
I(\delta_i(s),\delta_{i+1}(s))=\pm 1&\iff&
\rho(0)\in{\mathcal K}_3^{alg,reg}.\nonumber
\end{eqnarray}
\end{lemma}
The proof is contained in the proof of theorem \ref{t6.5}.
\section{The main theorem, its proof and consequences}\label{s7}
\setcounter{equation}{0}
\noindent
In subsection \ref{s3.4} we introduced for any reference singularity
$f^{(0)}$ a {\it Looijenga-Deligne map}
\begin{eqnarray}\label{7.1}
LD:R_{Stokes}\to{\mathcal B}^{ext}(f^{(0)})/G_{sign,\mu}.
\end{eqnarray}
Recall that $R_{Stokes}$ is the set of {\it Stokes regions},
which are the components of the complement of the Stokes walls
$W_{Stokes}$ in $M^{mar}(f^{(0)})$,
and ${\mathcal B}^{ext}(f^{(0)})$ is the orbit under $G_{\mathbb Z}$ of the set
${\mathcal B}(f^{(0)})$ of distinguished bases of $f^{(0)}$.
The map $LD$ is $G_{\mathbb Z}$-equivariant.
In the case of a simple singularity $f^{(0)}=f$ or
of a simple elliptic singularity $f^{(0)}=f_{1/2}$
(with $f_\lambda$ the Legendre normal form from \eqref{4.2}),
$M^{mar}(f^{(0)})$ had been constructed in section \ref{s4}.
The main theorem is as follows.
For the simple singularities, the bijection \eqref{7.3}
was proved in a different way in \cite{Lo74} and \cite{De74},
see remark \ref{t7.2} (iv) below. Yu \cite[6.3 Satz]{Yu90}
built on this and proved the bijection \eqref{7.4} for the
simple singularities.
\begin{theorem}\label{t7.1}
Consider a simple singularity $f^{(0)}=f$ or
a simple elliptic singularity $f^{(0)}=f_{1/2}$. Then
\begin{eqnarray}\label{7.2}
R_{Stokes}=R^{0}_{Stokes},\quad {\mathcal B}^{ext}(f^{(0)})={\mathcal B}(f^{(0)}).
\end{eqnarray}
The Looijenga-Deligne map
\begin{eqnarray}\label{7.3}
LD:R^{0}_{Stokes}\to{\mathcal B}(f^{(0)})/G_{sign,\mu}
\end{eqnarray}
and the induced quotient map
\begin{eqnarray}\label{7.4}
LD/G_{\mathbb Z}:R^{0}_{Stokes}/G_{\mathbb Z}\to
\{\textup{Stokes matrices}\}/G_{sign,\mu}
\end{eqnarray}
are bijections.
\end{theorem}
{\bf Proof:} In \cite{GH17-1} it was proved that the moduli space
$M^{mar}(f^{(0)})$ is connected (see remark \ref{t3.7} (ii)).
Therefore $R_{Stokes}=R_{Stokes}^0$.
Recall the argument in remark \ref{t3.11} (ii) for ${\mathcal B}^{ext}(f^{(0)})={\mathcal B}(f^{(0)})$:
The map $LD$ is $G_{\mathbb Z}$-equivariant, and remark \ref{t3.11} (i) shows
that $R_{Stokes}^0$ is mapped to ${\mathcal B}(f^{(0)})$. Therefore
${\mathcal B}^{ext}(f^{(0)})={\mathcal B}(f^{(0)})$.
The Stokes matrix of a distinguished basis $\underline{\delta}$
of the Milnor lattice $Ml(f^{(0)})$ was defined in \eqref{2.16}
as $S:=(-1)^{(n+1)(n+2)/2}\cdot
L(\underline{\delta}^t,\underline{\delta})^t$.
Obviously the set of Stokes matrices can be identified
with the quotient ${\mathcal B}(f^{(0)})/G_{\mathbb Z}$.
It suffices to prove that the map $LD$ in \eqref{7.3}
is a bijection. Then the quotient map $LD/G_{\mathbb Z}$ in
\eqref{7.4} is a bijection, too.
Looijenga's argument \cite{Lo74}
that $LD$ is surjective for the simple singularities,
works because of Jaworski's theorem \ref{t6.2}
also for the simple elliptic singularities.
The argument is as follows.
Let $U\in R_{Stokes}^0$ be any Stokes region,
let $t\in U$, and let $\underline{\delta}$ be the distinguished basis,
unique up to the action of $G_{sign,\mu}$, which is constructed
from the morsification
$F_t$ and the good distinguished system of paths in definition \ref{t3.9} (b).
Then $\underline{\delta}$ is in $LD(U)$.
Let $\underline{\gamma}$ be any distinguished basis.
It is the image of $\underline{\delta}$ under the action
of a certain braid in $\textup{Br}_\mu$
and possibly a sign change in $G_{sign,\mu}$.
The braid gives a (homotopy class of a) closed path in
$M_{LL}^{(\mu)}-{\mathcal D}_{LL}^{(\mu)}$.
The path has a unique lift to $M^{mar}-({\mathcal K}_3^{mar}\cup{\mathcal K}_2^{mar})$
which starts at $t\in U$ because the Lyashko-Looijenga map
$LL^{mar}:M^{mar}-({\mathcal K}_3^{mar}\cup{\mathcal K}_2^{mar})\to M_{LL}^{(\mu)}-{\mathcal D}_{LL}^{(\mu)}$
is a covering by the theorems \ref{t6.1} and \ref{t6.2}.
Let $\widetilde t$ be the endpoint of this lift and let $\widetilde U$ be the Stokes
region which contains $\widetilde t$. Then $\underline{\gamma}\in LD(\widetilde U)$.
Therefore $LD$ is surjective.
It remains to prove that $LD$ is injective.
Let $U^{(1)}$ and $U^{(2)}$ be two Stokes regions with
$LD(U^{(1)})=LD(U^{(2)})$. Because the Lyashko-Looijenga map
$LL^{mar}:M^{mar}-({\mathcal K}_3^{mar}\cup{\mathcal K}_2^{mar})\to M_{LL}^{(\mu)}-{\mathcal D}_{LL}^{(\mu)}$
is a covering, both Stokes regions are mapped by $LL^{mar}$ bijectively to
the open subset
\begin{eqnarray}\label{7.5}
\{p(y)\in M_{LL}^{(\mu)}&|& p(y)=\prod_{j=1}^\mu(y-u_j)\\
&&\textup{ with }\Imm(u_i)\neq\Imm(u_j)\textup{ for }
i\neq j\}\nonumber
\end{eqnarray}
of $M_{LL}^{(\mu)}$. There is a unique isomorphism
$\psi^U:U^{(1)}\to U^{(2)}$ which is compatible with $LL^{mar}$.
Obviously it is an isomorphism of semisimple F-manifolds.
We claim that it extends to an automorphism
$\psi^{mar}:M^{mar}(f^{(0)})\to M^{mar}(f^{(0)})$.
If this is true then the rest of the proof is an elegant application
of theorem \ref{t4.3}: Then $\psi^{mar}$ comes from an element
$\psi\in G_{\mathbb Z}(f^{(0)})$ (which is unique up to $\pm 1$).
The element $\psi$ must map $LD(U^{(1)})$ to $LD(U^{(2)})$.
As they coincide by assumption, $\psi=\pm\id$.
Thus $\psi^{mar}=\id$ on $M^{mar}$, thus $U^{(1)}=U^{(2)}$.
It remains to show that $\psi^U$ extends to an automorphism
$\psi^{mar}$ of $M^{mar}(f^{(0)})$.
Roughly, the reason is that the covering
$LL^{mar}:M^{mar}-({\mathcal K}_3^{mar}\cup{\mathcal K}_2^{mar})
\to M_{LL}^{(\mu)}-{\mathcal D}_{LL}^{(\mu)}$
with base point in $U^{(k)}$ is determined by the class
of Stokes matrices
modulo $G_{sign,\mu}$ which are associated to the distinguished bases
in $LD(U^{(k)})$. As this class coincides for $k=1,2$,
a deck transformation
$\psi^{mar}:M^{mar}-({\mathcal K}_3^{mar}\cup{\mathcal K}_2^{mar})
\to M^{mar}-({\mathcal K}_3^{mar}\cup{\mathcal K}_2^{mar})$ exists, which
extends $\psi^{U}$.
It extends to ${\mathcal K}_3^{mar}\cup{\mathcal K}_2^{mar}$ as there $LL^{mar}$
is generically branched of order 3 and 2, respectively.
More precisely, we can argue as follows.
Let $t^{(k)}\in U^{(k)}$
be points with $LL^{mar}(t^{(1)})=LL^{mar}(t^{(2)})$.
Then $\psi^U(t^{(1)})=t^{(2)}$.
Choose a path within
$M^{mar}(f^{(0)})-({\mathcal K}_3^{mar,sing}\cup{\mathcal K}_2^{mar,sing})$
from $t^{(1)}$ to any point in this space.
We claim that $\psi^U$ extends from $U^{(1)}$ to a well defined map
\begin{eqnarray}\label{7.6}
\psi^{U\cup\textup{path}}&:&U^{(1)}\cup(\textup{a neighborhood of this path})\\
&&\to M^{mar}-({\mathcal K}_3^{mar,sing}\cup {\mathcal K}_2^{mar,sing})\nonumber
\end{eqnarray}
and that this is locally an isomorphism of F-manifolds.
If the path does not meet ${\mathcal K}_3^{mar}\cup {\mathcal K}_2^{mar}$, this is obvious.
Now suppose that it meets ${\mathcal K}_3^{mar,reg}\cup {\mathcal K}_2^{mar,reg}$.
Let $\rho^{(1)}$ be the restriction of the path to a path
from $t^{(1)}$
to a point $\widetilde t^{(1)}$ just before the first meeting point
with ${\mathcal K}_3^{mar,reg}\cup {\mathcal K}_2^{mar,reg}$. Then
\begin{eqnarray*}
&&\psi^{U\cup\rho^{(1)}}: U^{(1)}\cup
\{\textup{a neighborhood of }\rho^{(1)}\} \\
&\to& M^{mar}-({\mathcal K}_3^{mar,sing}\cup {\mathcal K}_2^{mar,sing})
\end{eqnarray*}
is well defined. Let
$\rho^{(2)}:=\psi^{U\cup\rho^{(1)}}\circ \rho^{(1)}$
be the image of $\rho^{(1)}$ under
$\psi^{U\cup\rho^{(1)}}$.
Then $\rho^{(2)}$ starts at $t^{(2)}$ and ends at
$\widetilde t^{(2)}:=\psi^{U\cup\rho^{(1)}}(\widetilde t^{(1)})$.
Then $LL(\widetilde t^{(1)})=LL(\widetilde t^{(2)})$.
Let $\widetilde U^{(1)}$ and $\widetilde U^{(2)}$ be the Stokes regions which contain
$\widetilde t^{(1)}$ and $\widetilde t^{(2)}$.
Then still $LD(\widetilde U^{(1)})=LD(\widetilde U^{(2)})$,
and also the associated Stokes matrices are equal up to the action of
$G_{sign,\mu}$.
Write $LL(\widetilde t^{(1)})=LL(\widetilde t^{(2)})
=\prod_{i=1}^\mu (y-u_i)$ with $(u_1,...,u_\mu)$ in good
ordering (definition \ref{t3.9} (a)).
By lemma \ref{t2.3}, the ${\mathbb Z}$-lattice bundles
$\bigcup_{\tau\in{\mathbb C}-\{u_1,...,u_\mu\}}H_n(F_{\widetilde t^{(k)}}^{-1}(\tau),{\mathbb Z})$
for $k=1$ and $k=2$ are isomorphic.
Near $\widetilde t^{(k)}$ the path $\rho^{(k)}$ is a lift of a path
$r$ as in \eqref{6.10}.
By the construction before theorem \ref{t6.5} and lemma \ref{t6.6},
we obtain vanishing cycles $\delta_i^{(k)}$ and $\delta_{i+1}^{(k)}$
in $H_n(F_{\widetilde t^{(k)}}^{-1}(\frac{u_i+u_{i+1}}{2}),{\mathbb Z})$.
Because the ${\mathbb Z}$-lattice bundles are isomorphic,
\begin{eqnarray*}
I(\delta_i^{(1)},\delta_{i+1}^{(1)})=I(\delta_i^{(2)},\delta_{i+1}^{(2)}).
\end{eqnarray*}
By theorem \ref{t6.5} and lemma \ref{t6.6},
this is either $0$ or $\pm 1$,
and the first meeting point of the extension of $\rho^{(1)}$
is in ${\mathcal K}_2^{mar}$ in the first case and in ${\mathcal K}_3^{mar}$ in the second case,
and $\rho^{(2)}$ extends to ${\mathcal K}_2^{mar}$ in the first case and to
${\mathcal K}_3^{mar}$ in the second case.
Therefore the isomorphism $\psi^{U\cup\rho^{(1)}}$ extends to a local
isomorphism of F-manifolds beyond this first meeting point.
Therefore \eqref{7.6} holds. As ${\mathcal K}_3^{mar,sing}\cup{\mathcal K}_2^{mar,sing}$
has codimension two in $M^{mar}$, the extensions of $\psi^U$ to
local isomorphisms of F-manifolds along all paths in
$M^{mar}-({\mathcal K}_3^{mar,sing}\cup{\mathcal K}_2^{mar,sing})$
glue to one global automorphism $\psi^{mar}$ of $M^{mar}$.
\hfill$\Box$
\begin{remarks}\label{t7.2}
(i) In the case of a simple singularity, the sets $R^0_{Stokes}$,
${\mathcal B}(f)/G_{sign,\mu}$ and $\{\textup{Stokes matrices}\}/G_{sign,\mu}$
are all finite. $|R_{Stokes}^0|=\deg LL^{alg}$ is finite
because the Lyashko-Looijenga map is algebraic.
${\mathcal B}(f)$ and the quotient sets are finite
because there are only finitely many vanishing cycles:
they are the roots of the ADE lattice.
In the case of a simple elliptic singularity, the set $R_{Stokes}^0$
is not finite, because the universal covering $\lambda:\H\to{\mathbb C}-\{0,1\}$
has infinite degree.
The set ${\mathcal B}(f_{1/2})$ of distinguished bases is not finite
because by \eqref{7.2} the group $G_{\mathbb Z}$ acts on it, and
the group $G_{\mathbb Z}$ is not finite \cite{GH17-1}.
The set of Stokes matrices is finite because
the entries of each Stokes matrix are in $\{0,\pm 1,\pm 2\}$,
see the beginning of subsection \ref{s6.2}.
Ebeling \cite{Eb18} showed that for all other singularities the set ${\mathcal B}(f)$ and
the set of Stokes matrices are infinite. Together this gives the
picture in table \eqref{1.1}.
\medskip
(ii) Theorem \ref{t7.1}, together with the degrees of $LL^{alg}$
in the theorems \ref{t6.1} and \ref{t6.3} and, in the case of the
simple singularities, the number $|G_{\mathbb Z}|$, now allows us to calculate
all finite numbers in \eqref{1.1}.
Corollary \ref{t7.3} gives the result.
\medskip
(iii) All numbers in corollary \ref{t7.3}
except the number of Stokes matrices for $\widetilde E_8$
had already been determined. Deligne \cite{De74}
determined the number $|{\mathcal B}(f)|$ for the simple singularities,
Yu \cite{Yu90}\cite{Yu96}\cite{Yu99} determined
the number $|\{\textup{Stokes matrices}\}|$ for the simple
singularities. Kluitmann determined the number
$|\{\textup{Stokes matrices}\}|$ for the simple elliptic singularities
$\widetilde E_6$ \cite{Kl83} and $\widetilde E_7$ \cite{Kl87}.
Deligne and Kluitmann worked directly with the braid group orbits
${\mathcal B}(f)$. Their calculations are hard, especially those of Kluitmann.
It is satisfying that theorem \ref{t7.1} together with
$\deg LL^{alg}$ gives the same numbers
$|\{\textup{Stokes matrices}\}|$ for $\widetilde E_6$ and $\widetilde E_7$,
and that they allow us to find the missing number,
the number $|\{\textup{Stokes matrices}\}|$ for $\widetilde E_8$.
\medskip
(iv) For the simple singularities, Deligne \cite{De74}
and Looijenga \cite{Lo74} proved the bijection \eqref{7.3}
by comparison of numbers. Looijenga proved that \eqref{7.3} is surjective
(see the proof of theorem \ref{t7.1} for his argument)
and calculated $|R_{Stokes}^0|=\deg LL^{alg}$.
Deligne calculated $|{\mathcal B}(f)/G_{sign,\mu}|$ and observed
that it coincides with $|R_{Stokes}^0|$.
Therefore \eqref{7.3} is a bijection.
But for the simple elliptic singularities, both sides of \eqref{7.3}
are infinite, and this argument does not work.
\medskip
(v) For the simple singularities observe
\begin{eqnarray}\label{7.8}
|{\mathcal B}(f)|&=&2^\mu\cdot |{\mathcal B}(f)/G_{sign,\mu}|.
\end{eqnarray}
For the simple and simple elliptic singularities observe
\begin{eqnarray}
|\{\textup{Stokes matrices}\}|&=&2^{\mu-1}\cdot
|\{\textup{Stokes matrices}\}/G_{sign,\mu}| .\label{7.9}
\end{eqnarray}
The last equality holds because any Coxeter-Dynkin diagram is connected.
\end{remarks}
\begin{corollary}\label{t7.3}
For any simple singularity $|{\mathcal B}(f)/G_{sign,\mu}|=\deg LL^{alg}$, and
this number is given in table \eqref{6.4}. The other numbers are as follows.
\begin{eqnarray}\label{7.10}
\begin{array}{lll}
& |G_{\mathbb Z}| & |\{\textup{Stokes matrices}\}/G_{sign,\mu}| \\
\hline
A_\mu & 2(\mu+1) & (\mu+1)^{\mu-2}\\
D_4 & 36 & 9\\
D_\mu,\mu\geq 5 & 4(\mu-1) & (\mu-1)^{\mu-1}\\
E_6 & 24 & 2^7\cdot 3^3 = 3456\\
E_7 & 18 & 2\cdot 3^{10} = 118098\\
E_8 & 30 & 2\cdot 3^4\cdot 5^6 = 2531250
\end{array}
\end{eqnarray}
In the case of the simple elliptic singularities
\begin{eqnarray}\label{7.11}
\begin{array}{lll}
& \deg(M^{alg}\to M^{mar}/G_{\mathbb Z}) & |\{\textup{Stokes matrices}\}/G_{sign,\mu}| \\
\hline
\widetilde E_6 & 6\cdot 2\cdot 3\cdot 3^2 =324 & 3^7\cdot 5\cdot 7=76545\\
\widetilde E_7 & 6\cdot 1\cdot 4\cdot 2^2 = 96 & 2^{13}\cdot 5^3\cdot 7 =7168000\\
\widetilde E_8 & 6\cdot 1\cdot 6\cdot 1^2 = 36 & 2^7\cdot 3^8\cdot 7\cdot 101
=593744256
\end{array}
\end{eqnarray}
Here $\deg(M^{alg}\to M^{mar}/G_{\mathbb Z})$ means the generic degree.
\end{corollary}
{\bf Proof:}
First we consider the simple singularities.
The bijection \eqref{7.3} gives
$$\deg LL^{alg}=|R_{Stokes}^0|=|{\mathcal B}(f)/G_{sign,\mu}|.$$
The group $G_{\mathbb Z}$ acts on $M^{alg}=M^{mar}$
with kernel $\{\pm\id\}$.
This and the bijection \eqref{7.4} give
\begin{eqnarray*}
|\{\textup{Stokes matrices}\}/G_{sign,\mu}|&=&
|R_{Stokes}^0/G_{\mathbb Z}|=2\cdot |R_{Stokes}^0|/|G_{\mathbb Z}|\\
&=&2\cdot\deg LL^{alg}/|G_{\mathbb Z}|.
\end{eqnarray*}
The values $|G_{\mathbb Z}|$ can be found in \cite[Theorem 8.3 and Theorem 8.4]{He11}.
Together with \eqref{6.4}, this gives \eqref{7.10}.
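For example, for $A_\mu$ this relation, read backwards with the
values from \eqref{7.10}, recovers the degree of the
Lyashko-Looijenga map (this is only the displayed relation solved
for $\deg LL^{alg}$):
\begin{eqnarray*}
\deg LL^{alg} &=& \frac{|G_{\mathbb Z}|}{2}\cdot
|\{\textup{Stokes matrices}\}/G_{sign,\mu}|
= \frac{2(\mu+1)}{2}\cdot (\mu+1)^{\mu-2}\\
&=& (\mu+1)^{\mu-1}.
\end{eqnarray*}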
Now we consider the simple elliptic singularities. Obviously
\begin{eqnarray*}
|R^0_{Stokes}/G_{\mathbb Z}|&=&\deg(LL:M^{mar}/G_{\mathbb Z}\to M_{LL}^{(\mu)})\\
&=& \frac{\deg LL^{alg}}{\deg(M^{alg}\to M^{mar}/G_{\mathbb Z})}.
\end{eqnarray*}
Therefore the degree of the map $M^{alg}\to M^{mar}/G_{\mathbb Z}$
in the second column of \eqref{7.11},
the bijection \eqref{7.4} and the table \eqref{6.8}
give the third column of \eqref{7.11}.
The degree $\deg(M^{alg}\to M^{mar}/G_{\mathbb Z})$ is calculated
in lemma \ref{t5.2}.
\hfill$\Box$
\section{Segre classes of smooth cone bundles}\label{s8}
\setcounter{equation}{0}
\noindent
The calculation of the degrees $\deg LL^{alg}$ for the simple
elliptic singularities in section \ref{s10} will use corollary \ref{t8.6} below.
For this corollary, we have to extend some notions and results
in \cite{Fu84}. We do not need new ideas, just some new details.
We follow this book closely.
In \cite[B.5]{Fu84} {\it cones} are defined in the category of algebraic
schemes as follows. Let $X$ be an algebraic scheme.
$S^\bullet=\sum_{d\geq 0}S^d$ is a graded sheaf of ${\mathcal O}_X$-algebras
such that the canonical map ${\mathcal O}_X\to S^0$ is an isomorphism,
$S^1$ is a coherent ${\mathcal O}_X$-module, and $S^\bullet$ is
(locally) generated by $S^1$ as an ${\mathcal O}_X$-algebra.
Then $C:=\textup{Spec}(S^\bullet)$ with the projection $C\to X$ is a {\it cone}.
Its fibers are affine and come equipped with a ${\mathbb C}^*$-action.
The bundle of ${\mathbb C}^*$-orbits is $P(C):=\textup{Proj}(S^\bullet)$.
The projection $p_C:P(C)\to X$ is proper.
The rational functions on $C$ which are homogeneous of degree $d$ induce
the line bundle ${\mathcal O}_{P(C)}(d)$.
If $f:Y\to X$ is a morphism, then the pull-back $f^*C=C\times_XY$ is the
cone on $Y$ defined by the sheaf $f^*S^\bullet$ of ${\mathcal O}_Y$-algebras.
If $C_1$ and $C_2$ are two cones on $X$ defined by $S_1^\bullet$ and
$S_2^\bullet$, their direct sum $C_1\oplus C_2$ is the cone on $X$
defined by the graded sheaf $S_1^\bullet\otimes S_2^\bullet$.
Now suppose that the cone $C$ is pure dimensional. Then its {\it Segre class}
is by \cite[Example 4.1.2]{Fu84}
\begin{eqnarray}\label{8.1}
s(C)=(p_C)_*\left(\sum_{i\geq 0}
c_1({\mathcal O}(1))^i\cap[P(C)]\right)\in A_*X.
\end{eqnarray}
Here $A_*P(C)$ and $A_*X$ are the spaces of cycles modulo rational
equivalence \cite[1.3]{Fu84},
$[P(C)]\in A_*P(C)$, $(p_C)_*:A_*P(C) \to A_*X$ is the
push-forward
\cite[1.4]{Fu84}, ${\mathcal O}(1)$ is the canonical line bundle on $P(C)$,
and the Chern class $c_1({\mathcal O}(1))$ is understood in the operational sense,
as a map
\begin{eqnarray*}
c_1({\mathcal O}(1))\cap :A_kP(C)\to A_{k-1}P(C)
\end{eqnarray*}
\cite[3.2]{Fu84}.
We are interested in the situation, at once more special and more general,
where the base $X$ is pure dimensional, the fibers are smooth, and the
fibration $C\to X$ is locally trivial, but where the condition that $S^1$
generates $S^\bullet$ is not necessarily satisfied. The following
definition captures this situation.
\begin{definition}\label{t8.1}
For some $n\in{\mathbb Z}_{\geq 1}$, let ${\bf a}=(a_1,...,a_n)\in{\mathbb Z}_{\geq 1}^n$
with $a_1\leq a_2\leq ...\leq a_n$.
Let $R={\mathbb C}[x_1,...,x_n]$ be the ${\mathbb C}$-algebra with the grading
$R=R^\bullet =\sum_{d\geq 0}R^d$ such that $x_i\in R^{a_i}$.
Let $X$ be an algebraic pure dimensional scheme.
Let $S^\bullet=\sum_{d\geq 0}S^d$ be a graded sheaf of ${\mathcal O}_X$-algebras
such that there is a covering of $X$ by open affine charts
$U_i,i\in I,$ and there are isomorphisms
$S|_{U_i}\cong \underline{R}_{U_i}:={\mathcal O}_{U_i}\otimes R$ of
graded ${\mathcal O}_{U_i}$-algebras.
Then $C:=\textup{Spec}(S^\bullet)$ is a {\it smooth cone bundle}
({\it smooth} because the fibers of $C\to X$ are smooth,
{\it bundle} because $C\to X$ is locally trivial).
\end{definition}
By the next lemma, a smooth cone bundle comes equipped with a chain of
smooth cone subbundles and quotients, which are vector bundles.
\begin{lemma}\label{t8.2}
The situation in definition \ref{t8.1} is kept.
For $k\in{\mathbb Z}$ with $a_1\leq k\leq a_n+1$,
define $I_k\subset S^\bullet$ as the sheaf of
homogeneous ideals generated by $S^1+...+S^{k-1}$ (so $I_{a_1}=\{0\}$),
define $S^\bullet_k:= S^\bullet/I_k$ as the quotient sheaf of
${\mathcal O}_X$-algebras with the induced grading, and define
$S_{k,sub}^\bullet\subset S^\bullet_k$ as the subring sheaf
of $S^\bullet_k$ generated by $S^k_k$
(obviously $S_{k,sub}^d=0$ if $k\nmid d$).
Define $S^\bullet_{(k)}$ essentially as $S^\bullet_{k,sub}$, but
with the new grading $S^d_{(k)}:=S^{d\cdot k}_{k,sub}$.
Then the $C_k:=\textup{Spec}(S^\bullet_k)$ are smooth cone bundles
on $X$ and form a chain
\begin{eqnarray}\label{8.2}
(\textup{zero section})=C_{a_n+1}\subset C_{a_n}\subset
C_{a_n-1}\subset ...\subset C_{a_1+1}\subset C_{a_1}=C.
\end{eqnarray}
The cones $C_{(k)}:=\textup{Spec}(S^\bullet_{(k)})$
are vector bundles
on $X$ with $\rank C_{(k)}=|\{i\, |\, k=a_i\}|$
(so many of them may be 0,
and $\sum_{k=1}^{a_n}\rank C_{(k)}=\rank C$).
The smooth cone bundle $C_{k+1}$ is the kernel of
the projection $\pr_k: C_k\to C_{(k)}$
(from the inclusion $S^\bullet_{k,sub}\hookrightarrow S^\bullet_k$).
\end{lemma}
The proof is clear.
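As an illustration of definition \ref{t8.1} and lemma \ref{t8.2}
(a minimal example, with weights chosen only for illustration):
for ${\bf a}=(1,1,2)$ a smooth cone bundle is locally
$U\times{\mathbb C}^3$ with the ${\mathbb C}^*$-action
$z\cdot(v_1,v_2,v_3)=(zv_1,zv_2,z^2v_3)$ on the fibers.
The chain \eqref{8.2} is locally
\begin{eqnarray*}
(\textup{zero section})=C_3\subset C_2=\{v_1=v_2=0\}\subset C_1=C,
\end{eqnarray*}
and the associated vector bundles satisfy $\rank C_{(1)}=2$
and $\rank C_{(2)}=1$.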
Given a smooth cone bundle $C\to X$, we want to define a {\it Segre class}.
The rational functions of degree $d$ on $C$ induce a sheaf
${\mathcal O}_{P(C)}(d)$. But $S^\bullet$ is in general not generated by $S^1$,
therefore ${\mathcal O}_{P(C)}(1)$ is not necessarily invertible.
For example if $\gcd(a_1,...,a_n)>1$ then $S^d=0$ and
${\mathcal O}_{P(C)}(d)=0$ if $\gcd(a_1,...,a_n)\nmid d$.
But for certain larger $d$, the sheaf ${\mathcal O}_{P(C)}(d)$ is
good enough.
\begin{definition}\label{t8.3}
The situation in definition \ref{t8.1} is kept.
Choose $d\in \lcm(a_1,...,a_n)\cdot{\mathbb Z}_{\geq 1}$, and define
a Segre class
\begin{eqnarray}\label{8.3}
s^{(d)}(C):= (p_C)_*\left( \sum_{i\geq 0}
\left(\frac{c_1({\mathcal O}(d))}{d}\right)^i
\cap \frac{[P(C)]}{\gcd(a_1,...,a_n)}\right) \in A_*^{\mathbb Q} X,
\end{eqnarray}
where $A_*^{\mathbb Q} X=A_*X\otimes_{\mathbb Z}{\mathbb Q}$, $A_*^{\mathbb Q} P(C)
=A_*P(C)\otimes_{\mathbb Z}{\mathbb Q}$,
and $c_1({\mathcal O}(d))\cap:A_k^{\mathbb Q} P(C)\to A_{k-1}^{\mathbb Q} P(C)$,
$(p_C)_*:A_k^{\mathbb Q} P(C)\to A_k^{\mathbb Q} X$.
\end{definition}
Part (b) in the following proposition generalizes
\cite[Proposition 4.1 (a)]{Fu84}.
\begin{proposition}\label{t8.4}
The situation in definition \ref{t8.1} is kept.
(a) $s^{(d)}(C)$ is independent of the choice of $d$
and is called $s^{(scb)}(C)$.
If $a_1=...=a_n=1$, then $C$ is a vector bundle
and $s^{(scb)}(C)$ is the classical Segre class in
\cite[Example 4.1.2]{Fu84}.
(b)
\begin{eqnarray}\label{8.4}
s^{(scb)}(C)&=& \frac{1}{a_1...a_n}\cdot \prod_{k=a_1}^{a_n}
c^{pol}_{1/k}(C_{(k)})^{-1}\cap [X],
\end{eqnarray}
where $C_{(k)}$ is the vector bundle associated to $C$ in lemma \ref{t8.2},
and $c^{pol}_t(C_{(k)})=c_0+tc_1+t^2c_2+...$ is its Chern polynomial (in \eqref{8.4}
the variable $t$ is replaced by the number $\frac{1}{k}$).
\end{proposition}
{\bf Proof:}
(a) The independence will follow from the formula \eqref{8.4}
in (b). If $a_1=...=a_n=1$, then $C$ is a vector bundle,
${\mathcal O}(d)={\mathcal O}(1)^{\otimes d}$, $c_1({\mathcal O}(d))=d\cdot c_1({\mathcal O}(1))$,
and the definition \eqref{8.3} agrees with
\cite[Example 4.1.2]{Fu84}.
(b) This will be proved by induction on the dimension $n$ of the
fibers of the smooth cone bundle. We carry out the first step of
the induction. Part of it is close to \cite[Example 4.1.5]{Fu84}.
By the splitting construction \cite[Proof of Theorem 3.2]{Fu84},
there is a flat morphism $f:Y\to X$ such that
$f^*:A_*X\to A_*Y$ is injective and
$f^*C_{(a_1)}$ has a filtration by subbundles
\begin{eqnarray}\label{8.5}
f^*C_{(a_1)} = E_r\supset E_{r-1}\supset ...\supset
E_1\supset E_0=0
\end{eqnarray}
with line bundle quotients $L_i=E_i/E_{i-1}$
(and $r=|\{i\, |\, a_i=a_1\}|=\rk C_{(a_1)}$).
The smooth cone bundle $F:=f^*(C)$ in $Y$
contains the smooth cone bundle
$G:=\ker(F\to L_r)$ in codimension one,
where $F\to L_r$ is the composition of the projections
$f^*\pr_{a_1}:F\to E_r$ and $E_r\to L_r$.
Denote by ${\mathcal L}^\bullet$ the graded sheaf of ${\mathcal O}_X$-algebras with
$L_r=\textup{Spec}({\mathcal L}^\bullet)$.
The projection $F\to L_r$ with kernel $G$ corresponds to an embedding
${\mathcal L}^\bullet\hookrightarrow f^*S_{a_1,sub}^\bullet\subset f^*S^\bullet$.
Denote by $p_F:P(F)\to Y$ the projection.
The line bundle $(p_F)^* L_r^{\otimes d/a_1}\otimes
{\mathcal O}_{P(F)}(d)$ has a global
section $\sigma$: If $U\subset Y$ is an open affine chart and
$f^*S^\bullet|_U ={\mathcal O}_U\otimes R\cdot e_1$ is a trivialization and
${\mathcal L}^\bullet|_U\cong {\mathcal O}_U\otimes {\mathbb C}[x_1]$, then
$\sigma|_{(p_F)^{-1}(U)} \sim (p_F)^*e_1^{\otimes d/a_1}\otimes x_1^{d/a_1}$.
Its zero-scheme in $P(F)$ may be identified with $P(G)$
with multiplicity $\frac{d}{a_1}\cdot \frac{\gcd(a_1,...,a_n)}{\gcd(a_2,...,a_n)}$,
equivalently,
\begin{eqnarray}\label{8.6}
\frac{d}{a_1}\cdot \frac{\gcd(a_1,...,a_n)}{\gcd(a_2,...,a_n)}
\cdot [P(G)]
= c_1((p_F)^* L_r^{\otimes d/a_1}\otimes {\mathcal O}_{P(F)}(d))
\cap [P(F)].
\end{eqnarray}
Here $\gcd(a_1,...,a_n)$ and $\gcd(a_2,...,a_n)$ are the
sizes of the kernels of the ${\mathbb C}^*$-actions on $F$ and
on $G$.
Now we want to calculate $s^{(d)}(G)$ in terms of $s^{(d)}(F)$ and
the value at $t=\frac{1}{a_1}$ of the Chern polynomial $c^{pol}_t(L_r)$.
For this observe that the closed embedding
$i:P(G)\hookrightarrow P(F)$
is proper. The formula $i^*{\mathcal O}_{P(F)}(d)={\mathcal O}_{P(G)}(d)$ and the
projection formula for Chern classes \cite[Theorem 3.2 (c)]{Fu84} will be used.
\begin{eqnarray*}
&&\frac{1}{a_1}\cdot s^{(d)}(G)\\
&=& \frac{1}{a_1}\cdot (p_G)_* \left(\sum_{i\geq 0}
\left(\frac{c_1({\mathcal O}_{P(G)}(d))}{d}\right)^i
\cap \frac{[P(G)]}{\gcd(a_2,...,a_n)}\right) \\
&=& \frac{1}{d} \cdot (p_F)_* \left(\sum_{i\geq 0}
\left(\frac{c_1({\mathcal O}_{P(F)}(d))}{d}\right)^i
\cap c_1\Bigl((p_F)^* L_r^{\otimes d/a_1}
\otimes {\mathcal O}_{P(F)}(d)\Bigr) \right.\\
&& \left. \hspace*{7cm}\cap
\frac{[P(F)]}{\gcd(a_1,...,a_n)}\right) \\
&=& (p_F)_* \left(\sum_{i\geq 0} \left( \frac{c_1({\mathcal O}_{P(F)}(d))}{d}\right)^i
\cap \left(\frac{1}{a_1}c_1((p_F)^* L_r)+\frac{1}{d}c_1({\mathcal O}_{P(F)}(d))\right)\right. \\
&& \left. \hspace*{7cm}\cap
\frac{[P(F)]}{\gcd(a_1,...,a_n)}\right) \\
&=& (p_F)_*\left(\sum_{i\geq 0}\left(\frac{c_1({\mathcal O}_{P(F)}(d))}{d}\right)^i
\cap c^{pol}_{1/a_1}((p_F)^* L_r)\cap
\frac{[P(F)]}{\gcd(a_1,...,a_n)}\right) \\
&=& c_{1/a_1}^{pol}(L_r)\cap s^{(d)}(F).
\end{eqnarray*}
In the second to last equality, the term
$(p_F)_*\Bigl(c_0((p_F)^*L_r)\cap
\frac{[P(F)]}{\gcd(a_1,...,a_n)}\Bigr)$ was added.
This term vanishes,
as $[P(F)]\in A_{\dim P(F)}P(F)$ is mapped by $(p_F)_*$ to
$A_{\dim P(F)}Y$, which is zero because
$\dim P(F)=\dim P(G)+1\geq \dim Y+1$. Therefore
\begin{eqnarray}\label{8.7}
s^{(d)}(F) &=& \frac{1}{a_1}\cdot c^{pol}_{1/a_1}(L_r)^{-1}\cap s^{(d)}(G).
\end{eqnarray}
By induction and the product formula
$c^{pol}_t(L_r)\cdot c^{pol}_t(E_{r-1}) = c^{pol}_t(f^* C_{(a_1)})$ we obtain
\begin{eqnarray}
s^{(d)}(F) &=& \frac{1}{a_1^r}\cdot \prod_{j=1}^r
c_{1/a_1}^{pol}(L_j)^{-1}\cap s^{(d)}(f^*C_{a_1+1})\nonumber\\
&=& \frac{1}{a_1^r} \cdot c_{1/a_1}^{pol}(f^*C_{(a_1)})^{-1}
\cap s^{(d)}(f^*C_{a_1+1})\nonumber\\
&=& \frac{1}{a_1...a_n}\cdot\prod_{k=a_1}^{a_n}
c_{1/k}^{pol}(f^*C_{(k)})^{-1}\cap [Y] \nonumber \\
&=& \frac{1}{a_1...a_n}\cdot f^*\left( \prod_{k=a_1}^{a_n}
c_{1/k}^{pol}(C_{(k)})^{-1}\cap [X]\right) . \label{8.8}
\end{eqnarray}
Now the injectivity of $f^*:A_*X\to A_*Y$ gives
\begin{eqnarray}\label{8.9}
s^{(d)}(C) &=& \frac{1}{a_1...a_n}\cdot\prod_{k=a_1}^{a_n}
c_{1/k}^{pol}(C_{(k)})^{-1}\cap [X].
\end{eqnarray}
\medskip
It remains to settle the base case of the induction.
Consider the case $n=1$ and choose $d\in a_1\cdot {\mathbb Z}_{\geq 1}$.
Then $P(C)=X$, $p_C=\id$, $C=C_{(a_1)}\cong {\mathcal O}_{P(C)}(-a_1)$, and
\begin{eqnarray*}
\sum_{i\geq 0}\left(\frac{c_1({\mathcal O}_X(d))}{d}\right)^i &=&
\left(1-\frac{1}{d}c_1({\mathcal O}_X(d))\right)^{-1}
= \left(1+\frac{1}{d}c_1({\mathcal O}_X(-d))\right)^{-1} \\
&=& \left(1+\frac{1}{a_1}c_1({\mathcal O}_X(-a_1))\right)^{-1}
=c^{pol}_{1/a_1}(C_{(a_1)})^{-1}.
\end{eqnarray*}
Therefore
\begin{eqnarray*}
s^{(d)}(C)&=& (p_C)_*\left( \sum_{i\geq 0}\left(\frac{c_1({\mathcal O}_{P(C)}(d))}{d}\right)^i
\cap \frac{[P(C)]}{a_1}\right) \\
&=& \frac{1}{a_1} (c_{1/a_1}^{pol}(C_{(a_1)}))^{-1}\cap [X]. \hspace*{2cm}\Box
\end{eqnarray*}
\bigskip
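For example, in the simplest mixed case ${\bf a}=(1,2)$, formula
\eqref{8.4} specializes by direct substitution (here $C_{(1)}$ and
$C_{(2)}$ are line bundles) to
\begin{eqnarray*}
s^{(scb)}(C) &=& \frac{1}{2}\cdot
c^{pol}_{1}(C_{(1)})^{-1}\cdot c^{pol}_{1/2}(C_{(2)})^{-1}\cap [X]\\
&=& \frac{1}{2}\left(1-c_1(C_{(1)})-\frac{1}{2}c_1(C_{(2)})
+(\textup{terms of higher codimension})\right)\cap [X].
\end{eqnarray*}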
As in \cite{Fu84}, a variety means a reduced and irreducible scheme.
A smooth cone bundle $C\to X$ on a variety is also a variety.
In the following $X\subset C$ means the embedding as zero section.
Suppose that $C_1\to X_1$ and $C_2\to X_2$ are smooth cone bundles
on complete varieties $X_1$ and $X_2$ with $\dim C_1=\dim C_2$
and $\dim X_1\geq \dim X_2$,
and suppose that $f:C_1\to C_2$ is a ${\mathbb C}^*$-equivariant proper morphism
such that $f^{-1}(X_2)=X_1$ and the restriction
$f^{C-X}:C_1-X_1\to C_2-X_2$ is finite. Then $f$ and the restrictions
$f^{C-X}$ and $f^X:X_1\to X_2$ are surjective, and $f$ has a finite
degree $\deg f=[K(C_1):K(C_2)]$, which is the number of preimages
of a generic point in $C_2-X_2$.
In \cite[Definition 1.4]{Fu84} a degree $\int_{X_i}:A_0X_i\to {\mathbb Z}$
is defined and extended by $\int_{X_i}:A_kX_i\to 0$ for $k>0$ to
$\int_{X_i}:A_*X_i\to{\mathbb Z}$. It also extends to $\int_{X_i}:A_*^{\mathbb Q} X_i\to{\mathbb Q}$.
\begin{proposition}\label{t8.5}
In the situation just described
\begin{eqnarray}\label{8.10}
f^X_* s^{(scb)}(C_1) &=& \deg f\cdot s^{(scb)}(C_2),\\
\int_{X_1} s^{(scb)}(C_1) &=& \deg f\cdot \int_{X_2}s^{(scb)}(C_2).\label{8.11}
\end{eqnarray}
\end{proposition}
{\bf Proof:}
Denote by ${\bf a}=(a_1,...,a_{n_1})$ respectively by
${\bf v}=(v_1,...,v_{n_2})$ the weights of $C_1$ respectively $C_2$.
Denote $\widetilde w:=\gcd(a_i),\widetilde v:=\gcd(v_i), d_1:=\lcm(a_i), d_2:=\lcm(v_i)$.
Because $f$ is ${\mathbb C}^*$-equivariant and does not map $C_1$ to $X_2$,
$\widetilde w$ divides $\widetilde v$. The map $f^{C-X}$ induces a finite
(and surjective) morphism $f^{PC}:P(C_1)\to P(C_2)$ with
\begin{eqnarray*}
\deg f^{PC}=\left(\frac{\widetilde v}{\widetilde w}\right)^{-1}\cdot \deg f.
\end{eqnarray*}
Furthermore
\begin{eqnarray*}
(f^{PC})^* {\mathcal O}_{P(C_2)}(d) &=& {\mathcal O}_{P(C_1)}(d),\\
(f^{PC})_* [P(C_1)] &=& \deg f^{PC}\cdot [P(C_2)].
\end{eqnarray*}
Choose $d\in\lcm(d_1,d_2)\cdot{\mathbb Z}_{\geq 1}$. Then
\begin{eqnarray*}
{\mathcal O}_{P(C_i)}(d)={\mathcal O}_{P(C_i)}(d_i)^{\otimes d/d_i},\quad
\frac{1}{d_i} c_1({\mathcal O}_{P(C_i)}(d_i)) = \frac{1}{d} c_1({\mathcal O}_{P(C_i)}(d)).
\end{eqnarray*}
Therefore
\begin{eqnarray*}
&&f^X_* s^{(d)}(C_1)\\
&=& f^X_*\circ (p_{C_1})_*\left( \sum_{i\geq 0}
\left( \frac{c_1({\mathcal O}_{P(C_1)}(d))}{d}\right)^i
\cap \frac{[P(C_1)]}{\widetilde w}\right) \\
&=& (p_{C_2})_*\circ (f^{PC})_* \left( \sum_{i\geq 0}
\left( \frac{c_1((f^{PC})^*{\mathcal O}_{P(C_2)}(d))}{d}\right)^i
\cap \frac{[P(C_1)]}{\widetilde w} \right) \\
&=& (p_{C_2})_*\left( \sum_{i\geq 0}
\left( \frac{c_1({\mathcal O}_{P(C_2)}(d))}{d}\right)^i
\cap \frac{(f^{PC})_* [P(C_1)]}{\widetilde w} \right) \\
&=& \deg f\cdot (p_{C_2})_*\left( \sum_{i\geq 0}
\left( \frac{c_1({\mathcal O}_{P(C_2)}(d))}{d}\right)^i
\cap \frac{[P(C_2)]}{\widetilde v}\right) \\
&=& \deg f\cdot s^{(d)}(C_2).
\end{eqnarray*}
With the functoriality $\int_{X_1}\alpha = \int_{X_2} f^X_*\alpha$
\cite[Definition 1.4]{Fu84}, we obtain
\begin{eqnarray*}
\int_{X_1} s^{(scb)} (C_1) = \deg f\cdot \int_{X_2} s^{(scb)}(C_2).
\hspace*{1cm}\Box
\end{eqnarray*}
If $\int_{X_2}s^{(scb)}(C_2)\neq 0$, then \eqref{8.11} can be used
to calculate $\deg f$. We will use it in the following case.
\begin{corollary}\label{t8.6}
Keep the situation in and before proposition \ref{t8.5}.
Suppose additionally that $X_1$ is a smooth complete curve
and $X_2$ is a point.
The weights of $C_1$ are denoted
$(a_1,...,a_{n_1})$, the weights of $C_2$ are denoted
$(b_1,...,b_{n_2})$. The vector bundles on $X_1$
associated to $C_1$ in lemma \ref{t8.2} are
denoted by $C_{1,(k)}$, $a_1\leq k\leq a_{n_1}$.
Then $n_2=n_1+1$,
\begin{eqnarray}\label{8.12}
\int_{X_2} s^{(scb)}(C_2) &=& \frac{1}{b_1...b_{n_2}}>0,\\
\int_{X_1} s^{(scb)}(C_1) &=& \frac{1}{a_1...a_{n_1}}
\cdot \left(-\sum_{k=a_1}^{a_{n_1}}
\frac{1}{k}\deg C_{1,(k)}\right),\label{8.13}\\
\deg f &=& \frac{b_1...b_{n_2}}{a_1...a_{n_1}}\cdot \left(-\sum_{k=a_1}^{a_{n_1}}
\frac{1}{k}\deg C_{1,(k)}\right).\label{8.14}
\end{eqnarray}
\end{corollary}
{\bf Proof:}
This follows with proposition \ref{t8.4} (b) and proposition \ref{t8.5}.
\hfill$\Box$
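In more detail: because $X_1$ is a curve, in \eqref{8.4} only the
parts of the Chern polynomials of degree $\leq 1$ contribute, so
\begin{eqnarray*}
s^{(scb)}(C_1) &=& \frac{1}{a_1...a_{n_1}}\cdot
\left(1-\sum_{k=a_1}^{a_{n_1}}\frac{1}{k}c_1(C_{1,(k)})\right)\cap [X_1].
\end{eqnarray*}
Applying $\int_{X_1}$ kills the summand in $A_1X_1$ and gives
\eqref{8.13}. Formula \eqref{8.12} is \eqref{8.4} for a point,
and \eqref{8.14} is the quotient of \eqref{8.13} by \eqref{8.12},
using \eqref{8.11} in the form
\begin{eqnarray*}
\deg f &=& \frac{\int_{X_1}s^{(scb)}(C_1)}{\int_{X_2}s^{(scb)}(C_2)}.
\end{eqnarray*}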
\section{Extension to $\lambda=0$
of the Lyashko-Looijenga map for the simple
elliptic singularities}\label{s9}
\setcounter{equation}{0}
\noindent
Here we will do the first and biggest step in the proof
of theorem \ref{t6.3}. The $\lambda$-parameter space
${\mathbb C}-\{0,1\}$ contains the punctured disk
$\Delta^*=\Delta-\{0\}$, where $\Delta=\{z\in{\mathbb C}\, |\,
|z|<1\}$. Define $c:=3$ for $\widetilde E_6$ and $\widetilde E_8$
and $c:=2$ for $\widetilde E_7$, and define the $c$-fold
coverings
\begin{eqnarray*}
\rho_{naive}:{\mathbb C}^{\mu-1}\times\Delta^* &\to&
{\mathbb C}^{\mu-1}\times\Delta^*,\\
(t',\kappa)&\mapsto& (t',\kappa^c)=(t',\lambda),\\
R_{naive}:{\mathbb C}^{n+1}\times{\mathbb C}^{\mu-1}\times\Delta^*&\to&
{\mathbb C}^{n+1}\times{\mathbb C}^{\mu-1}\times\Delta^*,\\
(x,t',\kappa)&\mapsto& (x,t',\kappa^c)=(x,t',\lambda).
\end{eqnarray*}
We will glue fibers above $\kappa=0$ into
${\mathbb C}^{n+1}\times{\mathbb C}^{\mu-1}\times\Delta^*$
and ${\mathbb C}^{\mu-1}\times\Delta^*$ such that
$F^{alg}\circ R_{naive}$, its critical space
and its Lyashko-Looijenga map extend well to $\kappa=0$.
This is the content of the following theorem \ref{t9.1}
and its long proof.
\begin{theorem}\label{t9.1}
Consider for each of the three families of simple
elliptic singularities and their unfoldings the following
spaces and maps.
For $\widetilde E_6$:
\begin{eqnarray}
(x,y,s,\kappa) &=& (x_0,...,x_n,y_0,y_1,y_2,s_1,...,s_7,\kappa)
\nonumber\\
&\in& {\mathbb C}^{n+1}\times {\mathbb C}^3\times {\mathbb C}^7\times\Delta
={\mathbb C}^{n+11}\times\Delta,\nonumber\\
Y&:=&\{(x,y,s,\kappa)\in{\mathbb C}^{n+11}\times\Delta\, |\,
x_0(x_1+s_5)=\kappa y_0,\nonumber\\
&&(x_1+s_5)y_0=\kappa y_1,x_0x_2^2=\kappa^2y_2\},\label{9.1}\\
\pr_\mu:Y&\to&{\mathbb C}^7\times\Delta,\, (x,y,s,\kappa)\mapsto(s,\kappa),
\nonumber
\end{eqnarray}
\begin{eqnarray}
\rho:{\mathbb C}^7\times\Delta^*&\to&{\mathbb C}^7\times\Delta^*,\nonumber\\
(s,\kappa)&\mapsto& (s_1,\kappa^2s_2+\kappa s_5s_6-s_5^2,s_3,
s_4,\kappa^3s_5,\nonumber\\
&& \kappa s_6-2s_5,s_7,\kappa^3),\label{9.2}\\
R:Y\cap {\mathbb C}^{n+11}\times\Delta^*&\to&{\mathbb C}^{n+8}\times\Delta^*,
\nonumber\\
(x,y,s,\kappa)&\mapsto& (\kappa^{-2}x_0,x_1,...,x_n,
\rho(s,\kappa)).\label{9.3}
\end{eqnarray}
For $\widetilde E_7$:
\begin{eqnarray}
(x,y,s,\kappa) &=& (x_0,...,x_n,y,s_1,...,s_8,\kappa)
\nonumber\\
&\in& {\mathbb C}^{n+1}\times {\mathbb C}\times {\mathbb C}^8\times\Delta
={\mathbb C}^{n+10}\times\Delta,\nonumber\\
Y&:=&\{(x,y,s,\kappa)\in{\mathbb C}^{n+10}\times\Delta\, |\,
x_0x_1=\kappa y\},\label{9.4}\\
\pr_\mu:Y&\to&{\mathbb C}^8\times\Delta,\, (x,y,s,\kappa)\mapsto(s,\kappa),
\nonumber
\end{eqnarray}
\begin{eqnarray}
\rho:{\mathbb C}^8\times\Delta^*&\to&{\mathbb C}^8\times\Delta^*,\nonumber\\
(s,\kappa)&\mapsto& (s_1,\kappa s_2,s_3,
\kappa^2s_4,s_5,s_6,\kappa s_7,s_8,\kappa^2),\label{9.5}\\
R:Y\cap {\mathbb C}^{n+10}\times\Delta^*&\to&{\mathbb C}^{n+9}\times\Delta^*,
\nonumber\\
(x,y,s,\kappa)&\mapsto& (\kappa^{-1}x_0,x_1,...,x_n,
\rho(s,\kappa)).\label{9.6}
\end{eqnarray}
For $\widetilde E_8$:
\begin{eqnarray}
(x,y,s,\kappa) &=& (x_0,...,x_n,y,s_1,...,s_9,\kappa)
\nonumber\\
&\in& {\mathbb C}^{n+1}\times {\mathbb C}\times {\mathbb C}^9\times\Delta
={\mathbb C}^{n+11}\times\Delta,\nonumber\\
Y&:=&\{(x,y,s,\kappa)\in{\mathbb C}^{n+11}\times\Delta\, |\,
(x_0-\frac{1}{2}s_9)x_1=\kappa y\},\label{9.7}\\
\pr_\mu:Y&\to&{\mathbb C}^9\times\Delta,\, (x,y,s,\kappa)\mapsto(s,\kappa),
\nonumber
\end{eqnarray}
\begin{eqnarray}
\rho:{\mathbb C}^9\times\Delta^*&\to&{\mathbb C}^9\times\Delta^*,\nonumber\\
(s,\kappa)&\mapsto& (s_1,\kappa s_2,\kappa^2 s_3,\nonumber\\
&&s_4-\frac{1}{2}\kappa^{-1}s_6s_9-\frac{1}{4}\kappa^{-1}s_7s_9^2
-\frac{1}{16}\kappa^{-1}s_9^4,\kappa^3s_5,\nonumber\\
&&s_6,\kappa s_7,s_8-\frac{1}{4}\kappa^{-2}s_9^2,\kappa^{-1}s_9,
\kappa^3),\label{9.8}\\
R:Y\cap {\mathbb C}^{n+11}\times\Delta^*&\to&{\mathbb C}^{n+10}\times\Delta^*,
\nonumber\\
(x,y,s,\kappa)&\mapsto& (\kappa^{-1}x_0,x_1,...,x_n,
\rho(s,\kappa)).\label{9.9}
\end{eqnarray}
(a) The ${\mathbb C}^*$-action on ${\mathbb C}^{n+11}\times\Delta$ for
$\widetilde E_6$ and $\widetilde E_8$ and on ${\mathbb C}^{n+10}\times\Delta$
for $\widetilde E_7$ with the following weights restricts to
a ${\mathbb C}^*$-action on $Y$,
\begin{eqnarray}
&&\deg_w x_i=w_i,\ \deg_w s_i=\deg_w t_i,\
\deg_w\kappa=\deg_w \lambda=0,\nonumber\\
&&\textup{for }\widetilde E_7\textup{ and }\widetilde E_8:\quad
\deg_w y=w_0+w_1, \label{9.10}\\
&&\textup{ for }\widetilde E_6:\quad\left\{\begin{array}{l}
\deg_w y_0=w_0+w_1,\ \deg_w y_1=w_0+2w_1,\\
\deg_w y_2=w_0+2w_2.\end{array}\right. \nonumber
\end{eqnarray}
The map $R$ is ${\mathbb C}^*$-equivariant with respect to this
${\mathbb C}^*$-action and the natural ${\mathbb C}^*$-action on the image
space with coordinates $(x,t',\lambda)$.
\medskip
(b) The maps $\rho$ and $R$ are coverings of degree $c$.
In particular, for each fixed
$(s,\kappa)\in{\mathbb C}^{\mu-1}\times\Delta^*$,
\begin{eqnarray}
R:\pr_\mu^{-1}((s,\kappa))&\stackrel{\cong}{\to}&
{\mathbb C}^{n+1}\times\{\rho(s,\kappa)\}.\label{9.11}
\end{eqnarray}
\medskip
(c) The pull back $F^{alg}\circ R$ extends from
$Y\cap {\mathbb C}^{n+(10\textup{ or }11)}\times\Delta^*$
holomorphically to $\kappa=0$, that is, to a function
$(F^{alg}\circ R)^{ext}:Y\to {\mathbb C}$.
\medskip
(d) Let $C^{alg}:=\{(x,t',\lambda)\in{\mathbb C}^{n+1}\times M^{alg}\, |\,
\frac{\partial F^{alg}}{\partial t_i}=\frac{\partial F^{alg}}{\partial \lambda}
=0\}$
be the critical space of the unfolding $F^{alg}$.
Consider the closure
$\overline{R^{-1}(C^{alg})}$ in $Y$ of the pull back by $R$ of
$C^{alg}\cap {\mathbb C}^{n+1}\times {\mathbb C}^{\mu-1}\times\Delta^*$.
The restriction
\begin{eqnarray}\label{9.12}
\pr_\mu:\overline{R^{-1}(C^{alg})}\to{\mathbb C}^{\mu-1}\times\Delta,\
(x,y,s,\kappa)\mapsto (s,\kappa)
\end{eqnarray}
is finite and flat of degree $\mu$.
\medskip
(e) The composition $LL^{alg}\circ\rho:{\mathbb C}^{\mu-1}\times\Delta^*
\to M_{LL}^{(\mu)}$ of the Lyashko-Looijenga map
$LL^{alg}$ with $\rho$ extends holomorphically to
${\mathbb C}^{\mu-1}\times\Delta$. The restriction
\begin{eqnarray}\label{9.13}
(LL^{alg}\circ\rho)^{ext}:{\mathbb C}^{\mu-1}\times\Delta -
\{(s,\kappa)\, |\, s_2=...=s_{\mu-1}=0\}\\
\to M_{LL}^{(\mu)}-M_{LL,0}^{(\mu)}\hspace*{4cm}\nonumber
\end{eqnarray}
is finite and flat onto its image. And
${\mathbb C}^{\mu-1}\times\{0\}$ is mapped by
$(LL^{alg}\circ\rho)^{ext}$ to $D_{LL}^{(\mu)}$.
\end{theorem}
The rest of this section is devoted to the proof
of this theorem.
\bigskip
{\bf Proof:}
(a) This follows from comparison of the formulas
\eqref{9.1}--\eqref{9.9} with the weights in remark
\ref{t6.4} (ii) and in \eqref{9.10}.
\medskip
(b) The definition of $Y$ shows that for $\kappa\in\Delta^*$,
$(x_0,...,x_n)$ serve as coordinates on
$\pr_\mu^{-1}((s,\kappa))$ and that this is isomorphic
to ${\mathbb C}^{n+1}$.
The following three statements show that $\rho$ and $R$
are coverings of degree $c$.
The last component of $\rho$ is $\rho_\mu(s,\kappa)=\kappa^c$.
Each other component $\rho_i$ has a nonvanishing linear term
in $s_i$ (and in the case of $\widetilde E_6$ the linear terms
of $\rho_5$ and $\rho_6$ are $\kappa^3s_5$
and $\kappa s_6-2s_5$).
The map $R$ restricts to a linear isomorphism
$R:\pr_\mu^{-1}((s,\kappa))\to
{\mathbb C}^{n+1}\times\{\rho((s,\kappa))\}$
for $(s,\kappa)\in{\mathbb C}^{\mu-1}\times \Delta^*$.
\medskip
(c) The pull back $F^{alg}\circ R
=\bigl(f_\lambda(x)+\sum_{i=1}^{\mu-1}m_it_i\bigr)\circ R$
can be written as follows.
For $\widetilde E_6$:
\begin{eqnarray}
F^{alg}\circ R&=&
\kappa^3 \kappa^{-4}x_0^2x_1
-(\kappa^3+1)\kappa^{-2}x_0x_1^2 +x_1^3
-\kappa^{-2}x_0x_2^2 +\sum_{i=3}^n x_i^2 \nonumber\\
&&+s_1+\kappa^{-2}x_0(\kappa^2s_2+\kappa s_5s_6-s_5^2)
+x_1s_3+x_2s_4 \nonumber\\
&&+ \kappa^{-4}x_0^2 \kappa^3s_5
+\kappa^{-2}x_0x_1 (\kappa s_6-2s_5)+x_1x_2s_7 \nonumber\\
&=& \kappa^{-1}x_0(x_0+s_6)(x_1+s_5)
-\kappa^{-2}x_0(x_1+s_5)^2
-\kappa x_0x_1^2 +x_1^3 \nonumber\\
&& -\kappa^{-2}x_0x_2^2 +\sum_{i=3}^n x_i^2 +s_1
+x_0s_2 + x_1s_3 +x_2s_4 +x_1x_2s_7 \nonumber\\
&=& (x_0+s_6)y_0 -y_1 -\kappa x_0x_1^2 +x_1^3 -y_2
+\sum_{i=3}^n x_i^2\\
&&+ s_1+x_0s_2+x_1s_3+x_2s_4+x_1x_2s_7.\nonumber
\end{eqnarray}
For $\widetilde E_7$:
\begin{eqnarray}
F^{alg}\circ R&=&
\kappa^2 \kappa^{-3}x_0^3x_1
-(\kappa^2+1)\kappa^{-2}x_0^2x_1^2
+\kappa^{-1}x_0x_1^3 +\sum_{i=2}^n x_i^2 \nonumber\\
&&+s_1+\kappa^{-1}x_0\kappa s_2
+x_1s_3+ \kappa^{-2}x_0^2\kappa^2 s_4 \nonumber\\
&&+ \kappa^{-1}x_0x_1 s_5
+ x_1^2s_6 +\kappa^{-2}x_0^2x_1\kappa s_7
+\kappa^{-1}x_0x_1^2 s_8 \nonumber\\
&=& x_0^2y -(\kappa^2+1)y^2+ x_1^2y
+\sum_{i=2}^n x_i^2 \\
&&+ s_1+x_0s_2+x_1s_3+x_0^2s_4+ ys_5+x_1^2s_6
+x_0y s_7+x_1ys_8.\nonumber
\end{eqnarray}
For $\widetilde E_8$:
\begin{eqnarray}
F^{alg}\circ R&=&
\kappa^3 \kappa^{-4}x_0^4x_1
-(\kappa^3+1)\kappa^{-2}x_0^2x_1^2
+x_1^3 +\sum_{i=2}^n x_i^2 \nonumber\\
&&+s_1+\kappa^{-1}x_0\kappa s_2
+\kappa^{-2}x_0^2\kappa^2 s_3\nonumber\\
&&+x_1(s_4-\frac{1}{2}\kappa^{-1}s_6s_9
-\frac{1}{4}\kappa^{-1} s_7s_9^2
-\frac{1}{16} \kappa^{-1}s_9^4) \nonumber\\
&&+ \kappa^{-3}x_0^3\kappa^3 s_5
+ \kappa^{-1}x_0x_1s_6 +\kappa^{-2}x_0^2x_1\kappa s_7\nonumber\\
&& +x_1^2(s_8-\frac{1}{4}\kappa^{-2}s_9^2)
+ \kappa^{-1}x_0x_1^2\kappa^{-1}s_9 \nonumber\\
&=& (x_0^3+x_0^2\frac{1}{2}s_9 +x_0\frac{1}{4}s_9^2
+\frac{1}{8}s_9^3 + (x_0+\frac{1}{2}s_9)s_7 +s_6)y \nonumber\\
&&-\kappa x_0^2x_1^2-y^2+ x_1^3
+\sum_{i=2}^n x_i^2\\
&&+ s_1+x_0s_2+x_0^2s_3+x_1s_4+ x_0^3s_5+x_1^2s_8.\nonumber
\end{eqnarray}
In all three cases, the terms after the last equality
sign are in ${\mathbb C}[x,y,s,\kappa]$.
\medskip
(d) We call {\it $y$-relations} the elements
\begin{eqnarray}
\left. \begin{array}{l}
x_0(x_1+s_5)-\kappa y_0,\ (x_1+s_5)y_0-\kappa y_1,\\
x_0(x_1+s_5)^2-\kappa^2 y_1,\ x_0x_2^2-\kappa^2 y_2
\end{array}\right\} && \textup{for }\widetilde E_6,\nonumber \\
x_0x_1-\kappa y && \textup{for }\widetilde E_7,\label{9.17} \\
(x_0-\frac{1}{2}s_9)x_1-\kappa y && \textup{for }\widetilde E_8
\nonumber
\end{eqnarray}
in ${\mathbb C}[x,y,s,\kappa,\kappa^{-1}]$ and in
${\mathbb C}[x,y,s,\kappa]$.
The compositions $\frac{\partial F^{alg}}{\partial x_i}\circ R$
of the partial derivatives of $F^{alg}$ with $R$ are
in ${\mathbb C}[x,y,s,\kappa,\kappa^{-1}]$.
We consider the following ideals,
\begin{eqnarray}
I_0&:=& \left( \frac{\partial F^{alg}}{\partial x_i}\circ R,
y\textup{-relations}\right)\subset {\mathbb C}[x,y,s,\kappa,\kappa^{-1}],
\nonumber\\
I_1&:=& I_0\cap {\mathbb C}[x,y,s,\kappa],\nonumber\\
I_2&:=& \{g(x,y,s,0)\, |\, g(x,y,s,\kappa)\in I_1\}
\subset {\mathbb C}[x,y,s], \label{9.18}\\
I_3&:=& \{g(x,y,0)\, |\, g(x,y,s)\in I_2\}\subset {\mathbb C}[x,y].
\nonumber
\end{eqnarray}
We will calculate generating elements of these ideals.
Then we will show $\dim {\mathbb C}[x,y]/I_3=\mu$.
This is sufficient for (d) because of the following.
$C^{alg}\subset {\mathbb C}^{n+1}\times M^{alg}$ and
$\overline{R^{-1}(C^{alg})}\subset Y$ are invariant under the
${\mathbb C}^*$-actions. Therefore it is sufficient to show that
the restriction of \eqref{9.12} to $s=0$
is finite and flat of degree $\mu$. This holds above
$\Delta^*$. For $\kappa=0$ it is equivalent to
$\dim{\mathbb C}[x,y]/I_3=\mu$.
$I_1$ determines $\overline{R^{-1}(C^{alg})}\subset Y$,
and $I_2$ determines $\overline{R^{-1}(C^{alg})}\subset Y
\cap {\mathbb C}^{n+(10\textup{ or }11)}\times\{0\}$.
The information below on $I_2$ will also be useful in the
proof of part (e).
\medskip
{\bf The case $\widetilde E_6$:}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_0}\circ R &=&
2\kappa x_0x_1 -(\kappa^3+1)x_1^2-x_2^2
+(\kappa^2s_2+\kappa s_5s_6-s_5^2)\\
&&+ 2\kappa x_0s_5 +x_1(\kappa s_6-2s_5)\\
&\stackrel{y\textup{-relations}}{\equiv} &
2\kappa^2 y_0-\kappa^3 x_1^2 -
(x_1+s_5-\kappa s_6)(x_1+s_5)\\
&& -x_2^2 +\kappa^2s_2.
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_0}\circ R \ \&\ y\textup{-relations}
&\Rightarrow& (x_1+s_5)^2 +x_2^2\in I_2,\\
x_0\cdot \frac{\partial F^{alg}}{\partial x_0}\circ R \ \&\
y\textup{-relations}
&\Rightarrow& 2x_0y_0-y_1+y_0s_6-y_2+x_0s_2\in I_2.
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_1}\circ R &=&
\kappa^{-1} x_0^2 -2(\kappa^3+1)\kappa^{-2}x_0x_1+3x_1^2 \\
&& + s_3 +\kappa^{-2}x_0(\kappa s_6-2s_5)+x_2s_7\\
&\stackrel{y\textup{-relations}}{\equiv} &
\kappa^{-1}x_0^2 -2\kappa x_0x_1 -2\kappa^{-1}y_0 +3x_1^2
+s_3\\
&&+\kappa^{-1} x_0s_6 +x_2s_7.
\end{eqnarray*}
\begin{eqnarray}
\frac{\partial F^{alg}}{\partial x_1}\circ R \ \&\ y\textup{-relations}
&\Rightarrow& x_0^2-2y_0+x_0s_6\in I_2,\nonumber\\
(x_1+s_5)\cdot \frac{\partial F^{alg}}{\partial x_1}\circ R \ \&\
y\textup{-relations}
&\Rightarrow& x_0y_0-2y_1+3(x_1+s_5)x_1^2\nonumber\\
+(x_1+s_5)s_3+y_0s_6
&&+(x_1+s_5)x_2s_7\in I_2.\label{9.19}
\end{eqnarray}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_2}\circ R &=&
-2\kappa^{-2}x_0x_2+s_4+x_1s_7.
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_2}\circ R
&\Rightarrow& x_0x_2\in I_2,\\
x_2\cdot \frac{\partial F^{alg}}{\partial x_2}\circ R \ \&\
y\textup{-relations}
&\Rightarrow& -2y_2+x_2s_4+x_1x_2s_7\in I_2.
\end{eqnarray*}
\begin{eqnarray*}
&&x_2\cdot \frac{\partial F^{alg}}{\partial x_1}\circ R
\ \&\ \frac{\partial F^{alg}}{\partial x_2}\circ R
\ \&\ x_0\cdot \frac{\partial F^{alg}}{\partial x_2}\circ R\\
&\Rightarrow&
-(x_1+s_5)(s_4+x_1s_7) + 3x_1^2x_2 + x_2s_3 +x_2^2s_7\in I_2.
\end{eqnarray*}
\begin{eqnarray*}
y\textup{-relation } x_0(x_1+s_5)-\kappa y_0
&\Rightarrow& x_0(x_1+s_5)\in I_2.
\end{eqnarray*}
This gives the following $n+6$ elements of $I_2$.
The first three elements express $y_0,y_1$ and $y_2$
in terms of $(x,s)$, the last element is calculated from
these three elements and from \eqref{9.19}.
\begin{eqnarray}
&&-2y_0+x_0(x_0+s_6),
\ -2y_2+x_2(s_4+x_1s_7),\nonumber\\
&&-y_1-y_2+(2x_0+s_6)y_0+x_0s_2,\ x_3,...,\ x_n,
\ x_0(x_1+s_5), \ x_0x_2,\nonumber\\
&&(x_1+s_5)^2+x_2^2, \ 3x_1^2x_2-(x_1+s_5)(s_4+x_1s_7) +x_2s_3+x_2^2s_7,\nonumber\\
&&-\frac{1}{2}x_0(x_0+s_6)(3x_0+s_6) -2x_0s_2
+3x_1^2(x_1+s_5)\nonumber\\
&&\hspace*{1cm} +x_2(s_4+x_1s_7)
+(x_1+s_5)s_3 +(x_1+s_5)x_2s_7.\label{9.20}
\end{eqnarray}
Restriction to $s=0$ gives the following $n+6$ elements of $I_3$.
\begin{eqnarray}
&& -2y_0+x_0^2,\ y_2,\ -y_1+2x_0y_0, \ x_3,...,\ x_n,
\ x_0x_1,\ x_0x_2,
\nonumber \\
&& x_1^2+x_2^2,\ x_1^2x_2,\ -x_0^3+2x_1^3.\label{9.21}
\end{eqnarray}
Therefore the monomials
\begin{eqnarray*}
1,\ x_0,\ x_1,\ x_2,\ x_0^2,\ x_1^2,\ x_1x_2,\ x_0^3
\end{eqnarray*}
generate the quotient ${\mathbb C}[x,y]/I_3$.
As this quotient cannot have dimension less than 8,
it has dimension 8, the elements in \eqref{9.21}
generate $I_3$, and the elements in \eqref{9.20}
generate $I_2$.
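The dimension statement can be double-checked by machine. The following sympy sketch (an independent verification, not part of the argument) computes a Gr\"obner basis of the ideal generated by the elements in \eqref{9.21}, here for $n=2$ so that the variables $x_3,...,x_n$ are absent, and counts the standard monomials of the quotient:

```python
# Independent check of dim_C C[x,y]/I_3 = 8 for E6-tilde (n = 2).
# Generators: the elements of I_3 listed in (9.21).
from itertools import product
from sympy import symbols, groebner

vs = symbols('x0 x1 x2 y0 y1 y2')
x0, x1, x2, y0, y1, y2 = vs
gens = [-2*y0 + x0**2, y2, -y1 + 2*x0*y0, x0*x1, x0*x2,
        x1**2 + x2**2, x1**2*x2, -x0**3 + 2*x1**3]
G = groebner(gens, *vs, order='lex')

# Leading exponent tuples (Poly.monoms() lists monomials in lex order,
# largest first).
lead = [p.monoms()[0] for p in G.polys]

# Standard monomials (not divisible by any leading monomial) form a
# C-basis of the quotient ring.
maxdeg = 6
std = [m for m in product(range(maxdeg + 1), repeat=len(vs))
       if sum(m) <= maxdeg and
       not any(all(l[i] <= m[i] for i in range(len(vs))) for l in lead)]
# The staircase is closed under division: no standard monomial of
# degree maxdeg means none was missed above the bound.
assert all(sum(m) < maxdeg for m in std)
dim_quotient = len(std)
```

The number of standard monomials equals the vector-space dimension of the quotient for any monomial order, so it must equal $\mu=8$ here.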
\medskip
{\bf The case $\widetilde E_7$:}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_0}\circ R &=&
3 x_0^2x_1 -2(\kappa^2+1)\kappa^{-1}x_0x_1^2+x_1^3 \\
&& +\kappa s_2+ 2\kappa x_0s_4 +x_1s_5 +2x_0x_1s_7 +x_1^2s_8\\
&\stackrel{y\textup{-relation}}{\equiv} &
3\kappa x_0y -2(\kappa^2+1)x_1y +x_1^3\\
&& + \kappa s_2 +2\kappa x_0s_4
+x_1s_5 +2\kappa ys_7 +x_1^2s_8.
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_0}\circ R \ \&\ y\textup{-relation}
&\Rightarrow& -2x_1y+x_1^3+x_1s_5+x_1^2s_8\in I_2,\\
x_0\cdot \frac{\partial F^{alg}}{\partial x_0}\circ R \ \&\
y\textup{-relation}
&\Rightarrow& 3x_0^2y -2y^2 +x_1^2y +x_0s_2
+2x_0^2s_4
\\&& +ys_5 +2x_0ys_7 +x_1ys_8\in I_2.
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_1}\circ R &=&
\kappa^{-1} x_0^3 -2(\kappa^2+1)\kappa^{-2}x_0^2x_1
+3\kappa^{-1}x_0x_1^2 \\
&& +s_3 +\kappa^{-1}x_0s_5 +2x_1s_6 +\kappa^{-1}x_0^2s_7
+2\kappa^{-1}x_0x_1s_8\\
&\stackrel{y\textup{-relation}}{\equiv} &
\kappa^{-1}x_0^3 -2(\kappa^2+1)\kappa^{-1}x_0y +3x_1y \\
&& +s_3 +\kappa^{-1}x_0s_5 +2x_1s_6 +\kappa^{-1}x_0^2s_7 +2ys_8.
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_1}\circ R \ \&\ y\textup{-relation}
&\Rightarrow& x_0^3-2x_0y+x_0s_5+x_0^2s_7\in I_2,\\
x_1\cdot \frac{\partial F^{alg}}{\partial x_1}\circ R \ \&\
y\textup{-relation}
&\Rightarrow& x_0^2y-2y^2+3x_1^2y+x_1s_3 +ys_5\\
&& +2x_1^2s_6 +x_0ys_7 +2x_1ys_8\in I_2.
\end{eqnarray*}
\begin{eqnarray*}
y\textup{-relation } x_0x_1-\kappa y
&\Rightarrow& x_0x_1\in I_2.
\end{eqnarray*}
This gives the following $n+4$ elements of $I_2$.
\begin{eqnarray}
&& x_2,...,\ x_n,\ x_0x_1,\ -2x_0y+x_0^3+x_0s_5+x_0^2s_7,
\nonumber\\
&& -2x_1y+x_1^3+x_1s_5+x_1^2s_8, \nonumber\\
&& -4y^2+(4x_0^2+4x_1^2+3x_0s_7+3x_1s_8+2s_5)y\nonumber\\
&& \hspace*{2cm}+x_0s_2+x_1s_3+2x_0^2s_4+2x_1^2s_6,\label{9.22}\\
&& (2x_0^2-2x_1^2+x_0s_7-x_1s_8)y +x_0s_2-x_1s_3+2x_0^2s_4
-2x_1^2s_6.\nonumber
\end{eqnarray}
Restriction to $s=0$ gives the following $n+4$ elements of $I_3$.
\begin{eqnarray}
x_2,...,\ x_n,\ x_0x_1,\ -2x_0y+x_0^3,\ -2x_1y+x_1^3,
\nonumber\\
-y^2+(x_0^2+x_1^2)y,\ (x_0^2-x_1^2)y.\label{9.23}
\end{eqnarray}
Therefore the monomials
\begin{eqnarray*}
1,\ x_0,\ x_1,\ x_0^2,\ x_1^2,\ y, \ x_0^3,\ x_1^3,\ x_0^4
\end{eqnarray*}
generate the quotient ${\mathbb C}[x,y]/I_3$.
As this quotient cannot have dimension less than 9,
it has dimension 9, the elements in \eqref{9.23}
generate $I_3$, and the elements in \eqref{9.22}
generate $I_2$.
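As in the previous case, the claim $\dim {\mathbb C}[x,y]/I_3=9$ can be verified independently with a short sympy computation on the generators in \eqref{9.23} (for $n=1$, so $x_2,...,x_n$ are absent); this is a check, not part of the argument:

```python
# Independent check of dim_C C[x,y]/I_3 = 9 for E7-tilde (n = 1).
# Generators: the elements of I_3 listed in (9.23).
from itertools import product
from sympy import symbols, groebner

vs = symbols('x0 x1 y')
x0, x1, y = vs
gens = [x0*x1, -2*x0*y + x0**3, -2*x1*y + x1**3,
        -y**2 + (x0**2 + x1**2)*y, (x0**2 - x1**2)*y]
G = groebner(gens, *vs, order='lex')

# Leading exponent tuples of the Groebner basis elements.
lead = [p.monoms()[0] for p in G.polys]

# Count the standard monomials; they form a C-basis of the quotient.
maxdeg = 8
std = [m for m in product(range(maxdeg + 1), repeat=len(vs))
       if sum(m) <= maxdeg and
       not any(all(l[i] <= m[i] for i in range(len(vs))) for l in lead)]
assert all(sum(m) < maxdeg for m in std)  # staircase closed below bound
dim_quotient = len(std)
```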
\medskip
{\bf The case $\widetilde E_8$:}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_0}\circ R &=&
4 x_0^3x_1 -2(\kappa^3+1)\kappa^{-1}x_0x_1^2
+\kappa s_2+ 2\kappa x_0s_3\\
&& +3\kappa x_0^2s_5
+ x_1s_6 +2x_0x_1s_7 +\kappa^{-1}x_1^2s_9\\
&\stackrel{y\textup{-relation}}{\equiv} &
4\kappa x_0^2y +2\kappa x_0ys_9 + \kappa ys_9^2
+\frac{1}{2}x_1s_9^3 -2\kappa^2 x_0x_1^2\\
&& -2x_1y +\kappa s_2 +2\kappa x_0s_3
+3\kappa x_0^2 s_5 +x_1s_6\\
&& +2\kappa ys_7 +x_1s_7s_9.
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_0}\circ R &\& &
y\textup{-relation}\\
&\Rightarrow& \frac{1}{2}x_1s_9^3-2x_1y +x_1s_6+x_1s_7s_9
\in I_2,\\
(x_0-\frac{1}{2}s_9)
\cdot \frac{\partial F^{alg}}{\partial x_0}\circ R &\& &
y\textup{-relation}\\
&\Rightarrow& 4x_0^3y -2y^2 +(x_0-\frac{1}{2}s_9)
(s_2+2x_0s_3+3x_0^2s_5)\\
&& +ys_6 +2x_0ys_7\in I_2.
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_1}\circ R &=&
\kappa^{-1} x_0^4 -2(\kappa^3+1)\kappa^{-2}x_0^2x_1
+3x_1^2 \\
&& +(s_4-\frac{1}{2}\kappa^{-1}s_6s_9
-\frac{1}{4}\kappa^{-1}s_7s_9^2-\frac{1}{16}\kappa^{-1}s_9^4) \\
&& +\kappa^{-1}x_0s_6 +\kappa^{-1}x_0^2s_7
+2x_1(s_8-\frac{1}{4}\kappa^{-2}s_9^2) \\
&& +2\kappa^{-2}x_0x_1s_9\\
&\stackrel{y\textup{-relation}}{\equiv} &
\kappa^{-1}x_0^4 -2\kappa x_0^2x_1 -2\kappa^{-1}x_0y +3x_1^2 \\
&& +(s_4-\frac{1}{2}\kappa^{-1}s_6s_9
-\frac{1}{4}\kappa^{-1}s_7s_9^2-\frac{1}{16}\kappa^{-1}s_9^4)\\
&& +\kappa^{-1}x_0s_6 +\kappa^{-1}x_0^2s_7
+2x_1s_8 +\kappa^{-1}ys_9.
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial F^{alg}}{\partial x_1}\circ R \ \&\ y\textup{-relation}
&\Rightarrow& x_0^4-2x_0y+ys_9-\frac{1}{2}s_6s_9
-\frac{1}{4}s_7s_9^2\\
&& -\frac{1}{16}s_9^4 +x_0s_6+x_0^2s_7\in I_2,\\
x_1\cdot \frac{\partial F^{alg}}{\partial x_1}\circ R \ \&\
y\textup{-relation}
&\Rightarrow& (x_0^3 +x_0^2\frac{1}{2}s_9 +x_0\frac{1}{4}s_9^2 +\frac{1}{8}s_9^3)y \\
-2y^2+3x_1^3+x_1s_4 +ys_6
&& +(x_0+\frac{1}{2}s_9) ys_7+2x_1^2s_8\in I_2.
\end{eqnarray*}
\begin{eqnarray*}
y\textup{-relation } (x_0-\frac{1}{2}s_9)x_1-\kappa y
&\Rightarrow& (x_0-\frac{1}{2}s_9)x_1\in I_2.
\end{eqnarray*}
This gives the following $n+4$ elements of $I_2$.
\begin{eqnarray}
&& x_2,...,\ x_n,\ (x_0-\frac{1}{2}s_9)x_1,\
x_1(-2y+s_6+s_7s_9+\frac{1}{2}s_9^3),\nonumber\\
&& y(-2y+s_6+2x_0s_7+4x_0^3)
+(x_0-\frac{1}{2}s_9)(s_2+2x_0s_3+3x_0^2s_5),\nonumber\\
&& (x_0-\frac{1}{2}s_9)(-2y+s_6+(x_0+\frac{1}{2}s_9)s_7
+(x_0^3 +x_0^2\frac{1}{2}s_9 +x_0\frac{1}{4}s_9^2
+\frac{1}{8}s_9^3)),\nonumber\\
&& y(-2y+s_6+(x_0+\frac{1}{2}s_9)s_7
+(x_0^3 +x_0^2\frac{1}{2}s_9 +x_0\frac{1}{4}s_9^2
+\frac{1}{8}s_9^3)) \nonumber \\
&&\hspace*{2cm} + x_1(3x_1^2+s_4+2x_1s_8).\label{9.24}
\end{eqnarray}
Restriction to $s=0$ gives the following $n+4$ elements of $I_3$.
\begin{eqnarray}
x_2,...,\ x_n,\ x_0x_1,\ x_1y, \ -y^2+2x_0^3y, \nonumber\\
-2x_0y+x_0^4,\
-2y^2+ x_0^3y+3x_1^3.\label{9.25}
\end{eqnarray}
Therefore the monomials
\begin{eqnarray*}
1,\ x_0,\ x_0^2,\ x_1,\ x_0^3,\ y, \ x_0^4,\ x_1^2,\ x_0^5,
\ x_0^6
\end{eqnarray*}
generate the quotient ${\mathbb C}[x,y]/I_3$.
As this quotient cannot have dimension less than 10,
it has dimension 10, the elements in \eqref{9.25}
generate $I_3$, and the elements in \eqref{9.24}
generate $I_2$.
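Again the claim $\dim {\mathbb C}[x,y]/I_3=10$ can be double-checked by computer algebra on the generators in \eqref{9.25} (for $n=1$); the following sympy sketch is an independent verification, not part of the argument:

```python
# Independent check of dim_C C[x,y]/I_3 = 10 for E8-tilde (n = 1).
# Generators: the elements of I_3 listed in (9.25).
from itertools import product
from sympy import symbols, groebner

vs = symbols('x0 x1 y')
x0, x1, y = vs
gens = [x0*x1, x1*y, -y**2 + 2*x0**3*y,
        -2*x0*y + x0**4, -2*y**2 + x0**3*y + 3*x1**3]
G = groebner(gens, *vs, order='lex')

# Leading exponent tuples of the Groebner basis elements.
lead = [p.monoms()[0] for p in G.polys]

# Count the standard monomials; they form a C-basis of the quotient.
maxdeg = 8
std = [m for m in product(range(maxdeg + 1), repeat=len(vs))
       if sum(m) <= maxdeg and
       not any(all(l[i] <= m[i] for i in range(len(vs))) for l in lead)]
assert all(sum(m) < maxdeg for m in std)  # staircase closed below bound
dim_quotient = len(std)
```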
\medskip
(e) The critical space $C^{alg}$ of $F^{alg}$ is
everywhere smooth as $F^{alg}$ is everywhere locally a
universal unfolding.
But the closure $\overline{R^{-1}(C^{alg})}\subset Y$
is not smooth above ${\mathbb C}^{\mu-1}\times\{0\}$.
Denote by
$$C^0:= \overline{R^{-1}(C^{alg})}\cap
\pr_\mu^{-1}({\mathbb C}^{\mu-1}\times\{0\})$$
the restriction of it which lies above
${\mathbb C}^{\mu-1}\times\{0\}$.
It will turn out below that
$$C^{0,red}:=
(C^0\textup{ with the reduced complex structure})$$
is in each of the three cases a union of four smooth
components,
$$C^{0,red}= C^{0,red}_1\cup C^{0,red}_2\cup C^{0,red}_3
\cup C^{0,red}_4.$$
The first three components appear in $C^0$ with their reduced
structure, $C^{0,red}_4$ appears in $C^0$ with a nonreduced
structure with multiplicity two.
The following table \eqref{9.27}
collects facts, which will be proved
below in a case-by-case discussion.
{\it A-to-\eqref{9.26}}
denotes the answer to the following question
\eqref{9.26}.
\begin{eqnarray}\label{9.26}
&& \textup{Is }C^{0,red}_i
\textup{ isomorphic to the critical space of a suitable}\\
&& \textup{unfolding which is obtained by a
restriction of }F^{alg}\circ R\textup{?}\nonumber \\
&& \textup{And if yes, what is the type of the function
which is unfolded?}\nonumber
\end{eqnarray}
\begin{eqnarray}\label{9.27}
\begin{array}{l|l|l|l|l|l}
& & C^{0,red}_1 & C^{0,red}_2 & C^{0,red}_3 & C^{0,red}_4 \\
\hline
\widetilde E_6 & \deg\pr_\mu|_{C^{0,red}_i} & 2 & 2 & 2 & 1 \\
& \textup{A-to-\eqref{9.26}} & \textup{no} &
\textup{yes},A_2 & \textup{yes},A_2 & \textup{no} \\ \hline
\widetilde E_7 & \deg\pr_\mu|_{C^{0,red}_i} & 3 & 3 & 1 & 1 \\
& \textup{A-to-\eqref{9.26}} & \textup{yes},A_3
& \textup{yes},A_3 & \textup{no} & \textup{no} \\ \hline
\widetilde E_8 & \deg\pr_\mu|_{C^{0,red}_i} & 5 & 2 & 1 & 1 \\
& \textup{A-to-\eqref{9.26}} & \textup{yes},A_5
& \textup{yes},A_2 & \textup{no} & \textup{no}
\end{array}
\end{eqnarray}
The Lyashko-Looijenga map $LL^{alg}\circ\rho$ maps
$(s,\kappa)\in{\mathbb C}^{\mu-1}\times\Delta^*$ to the tuple
of the symmetric polynomials (with suitable signs)
in the values of $F^{alg}\circ R$ on
$R^{-1}(C^{alg})\cap{\mathbb C}^{n+1}\times\{(s,\kappa)\}$.
Because of (c), $F^{alg}\circ R$ extends to $\kappa=0$.
Because of (d), $R^{-1}(C^{alg})$ extends to
$\kappa=0$. Therefore the map $LL^{alg}\circ\rho$
extends to a holomorphic map
$(LL^{alg}\circ\rho)^{ext}:{\mathbb C}^{\mu-1}\times\Delta
\to M_{LL}^{(\mu)}$.
The table \eqref{9.27} shows that in each of the three cases
\begin{eqnarray}\label{9.28}
\sum_{i=1}^4 \deg\pr_\mu|_{C^{0,red}_i}=\mu-1.
\end{eqnarray}
Therefore $(LL^{alg}\circ\rho)^{ext}$ maps
${\mathbb C}^{\mu-1}\times\{0\}$ to $D_{LL}^{(\mu)}$.
It remains to show that the map in \eqref{9.13} is finite
and flat onto its image. Above ${\mathbb C}^{\mu-1}\times\Delta^*$
this holds. Therefore it remains to show
\begin{eqnarray}\label{9.29}
(LL^{alg}\circ\rho)^{ext}(s,0)=0&\Rightarrow & s=0.
\end{eqnarray}
This will be shown below in the case-by-case discussion.
\medskip
{\bf The case $\widetilde E_6$:}
\eqref{9.20} shows that $C^{0,red}$ has the following four
components $C^{0,red}_i$, $i\in\{1,2,3,4\}$.
The components are given in terms of functions which
vanish on them.
In each case, they contain the first three functions
in \eqref{9.20} which express $y_0,y_1$ and $y_2$
in terms of $(x,s)$.
Of course, $x_3,...,x_n$ vanish on all four components.
\begin{eqnarray}
C^{0,red}_1:&&
\textup{ the first three functions in \eqref{9.20}},
\ x_1+s_5,\ x_2,\nonumber\\
&& (x_0+s_6)(3x_0+s_6)+4s_2\
(\textup{and generically }x_0\neq 0).\label{9.30} \\
C^{0,red}_2&&\textup{and }C^{0,red}_3:
\textup{ the first three functions in \eqref{9.20}},\nonumber\\
&&x_0,\ x_2-\varepsilon\cdot i\cdot (x_1+s_5)\textup{ with}
\nonumber\\
&& \varepsilon=1\textup{ for }C^{0,red}_2\textup{ and }
\varepsilon=-1\textup{ for }C^{0,red}_3,\nonumber\\
&& 3x_1^2+\varepsilon i (2x_1s_7+s_4+s_5s_7)+s_3\nonumber \\
&& (\textup{and generically }x_1+s_5\neq 0,x_2\neq 0).
\label{9.31} \\
C^{0,red}_4:&&
\textup{ the first three functions in \eqref{9.20}},\nonumber\\
&& x_0,\ x_1+s_5,\ x_2.\label{9.32}
\end{eqnarray}
Obviously, each $C^{0,red}_i$ is smooth, and
$\deg\pr_\mu|_{C^{0,red}_i}$ is as claimed in table \eqref{9.27}.
It remains to prove \eqref{9.29}.
The restriction of $(F^{alg}\circ R)^{ext}$
(which was calculated in the proof of (c))
to $C^{0,red}_i$ is as follows:
\begin{eqnarray}\label{9.33}
(F^{alg}\circ R)^{ext}|_{C^{0,red}_1}&=&
-\frac{1}{2}(x_0+s_6)x_0^2 + s_1 -s_3s_5-s_5^3,\\
(F^{alg}\circ R)^{ext}|_{C^{0,red}_j}&=&
x_1^3 +s_1+x_1s_3+\varepsilon i (x_1+s_5)s_4\nonumber \\
&& +\varepsilon i x_1(x_1+s_5)s_7\textup{ for }j\in\{2,3\},
\label{9.34} \\
(F^{alg}\circ R)^{ext}|_{C^{0,red}_4}&=& s_1-s_3s_5-s_5^3.
\label{9.35}
\end{eqnarray}
Consider a parameter $s\in {\mathbb C}^7$ with
$(LL^{alg}\circ\rho)^{ext}(s,0)=0$.
We want to show $s=0$.
\eqref{9.35} gives $s_1-s_3s_5-s_5^3=0$.
\eqref{9.33} and \eqref{9.30} give the existence of
a number $x_0\in{\mathbb C}$ with
$(x_0+s_6)(3x_0+s_6)+4s_2=0$ and $(x_0+s_6)x_0=0$.
The first quadratic polynomial has a double zero
if and only if $12s_2-s_6^2=0$,
and then the double zero is $x_0=-\frac{2}{3}s_6$.
It is a zero of $(x_0+s_6)x_0$ only if $s_6=0$,
and then $s_2=0$.
In the case $12s_2-s_6^2\neq 0$, we must have
$$(x_0+s_6)(3x_0+s_6)+4s_2=3(x_0+s_6)x_0,
\quad\textup{thus again } s_6=0,s_2=0.$$
So, \eqref{9.33} gives in any case $s_6=0$ and $s_2=0$.
Now consider $j\in\{2,3\}$ and \eqref{9.34}.
It motivates the definition of the unfolding
\begin{eqnarray*}
G_j(x_1,s_1,s_3,s_4,s_5,s_7):=
x_1^3+s_1+x_1s_3+\varepsilon i (x_1+s_5)s_4 +
\varepsilon i x_1(x_1+s_5)s_7
\end{eqnarray*}
in the variable $x_1$ and with parameters
$s_1,s_3,s_4,s_5,s_7$ of the $A_2$-singularity $x_1^3$.
The derivative $\frac{\partial G_j}{\partial x_1}$
is in the ideal which defines $C^{0,red}_j$,
so
$$\textup{Crit}(G_j)\cong C^{0,red}_j\textup{ and }
G_j|_{\textup{Crit}(G_j)} \cong
(F^{alg}\circ R)^{ext}|_{C^{0,red}_j}.$$
Denote the Lyashko-Looijenga map of $G_j$ by $LL_{G_j}$.
Then $LL_{G_j}(s_1,s_3,s_4,s_5,s_7)=0$.
The unfolding $G_j$ is induced by the universal unfolding
$$G_{A_2}(z,t_1,t_2)=z^3+t_1+zt_2$$
via the morphism $(\Phi^{(j)},\varphi^{(j)})$ with
$G_{A_2}\circ\Phi^{(j)}=G_j$ and
\begin{eqnarray*}
z=\Phi^{(j)}_1(x_1,s)&=&
x_1+\frac{1}{3}\varepsilon i s_7,\\
t_1=\varphi^{(j)}_1(s)&=&
s_1+\varepsilon i s_4s_5+\frac{1}{27}\varepsilon i s_7^3 \\
&&-\frac{1}{3}\varepsilon i (s_3+\varepsilon i s_4+
\varepsilon i s_5s_7 +\frac{1}{3}s_7^2)s_7,\\
t_2=\varphi^{(j)}_2(s)&=&
s_3+\varepsilon i s_4 +\varepsilon i s_5s_7 +\frac{1}{3}s_7^2.
\end{eqnarray*}
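The asserted identity $G_{A_2}\circ\Phi^{(j)}=G_j$ can be confirmed by a direct expansion. The following sympy sketch (an independent check, not part of the argument) verifies it for both signs $\varepsilon=\pm 1$:

```python
# Check G_{A_2}(Phi^(j)) = G_j for eps = +1 and eps = -1,
# with G_{A_2}(z, t1, t2) = z**3 + t1 + z*t2 as in the text.
from sympy import symbols, I, expand, Rational

x1, s1, s3, s4, s5, s7 = symbols('x1 s1 s3 s4 s5 s7')
ok = True
for eps in (1, -1):
    e = eps * I
    # The unfolding G_j and the morphism (z, t1, t2) from the text.
    G_j = x1**3 + s1 + x1*s3 + e*(x1 + s5)*s4 + e*x1*(x1 + s5)*s7
    z = x1 + Rational(1, 3)*e*s7
    t2 = s3 + e*s4 + e*s5*s7 + Rational(1, 3)*s7**2
    t1 = (s1 + e*s4*s5 + Rational(1, 27)*e*s7**3
          - Rational(1, 3)*e*t2*s7)
    ok = ok and expand(z**3 + t1 + z*t2 - G_j) == 0
```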
Then $LL_{G_j} = LL_{A_2}\circ \varphi^{(j)}$
where $LL_{A_2}$ is the Lyashko-Looijenga map of the
universal unfolding $G_{A_2}$.
The map $LL_{A_2}$ is a finite branched covering and has
value 0 only at 0. Therefore
$\varphi^{(j)}_2(s)=\varphi^{(j)}_1(s)=0$.
Now $\varphi^{(2)}_2\pm \varphi^{(3)}_2=0$ and
$\varphi^{(2)}_1\pm \varphi^{(3)}_1=0$ give
$$ s_3+\frac{1}{3}s_7^2=0,\ s_4+s_5s_7=0,\ s_1=0,\
s_4s_5+\frac{1}{27}s_7^3=0.$$
Together with
$$s_1-s_3s_5-s_5^3=0 \textup{ and }s_6=0,\ s_2=0,$$
this gives $s=0$. Now \eqref{9.29} is proved in the
case $\widetilde E_6$.
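That the collected equations admit only the solution $s=0$ can also be verified by computer algebra: the following sympy sketch (an independent check, not part of the argument) confirms that each remaining parameter is nilpotent modulo the ideal generated by the five equations, so their common zero locus is the single point $0$.

```python
# The five equations from the end of the E6-tilde case
# (s_2 = s_6 = 0 were obtained before).
from sympy import symbols, groebner, Rational

s1, s3, s4, s5, s7 = symbols('s1 s3 s4 s5 s7')
eqs = [s3 + Rational(1, 3)*s7**2,
       s4 + s5*s7,
       s1,
       s4*s5 + Rational(1, 27)*s7**3,
       s1 - s3*s5 - s5**3]
G = groebner(eqs, s1, s3, s4, s5, s7, order='grevlex')
# v**8 reducing to 0 for every variable v means each parameter is
# nilpotent modulo the ideal, hence the only common zero is s = 0.
nilpotent = all(G.reduce(v**8)[1] == 0 for v in (s1, s3, s4, s5, s7))
```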
\medskip
{\bf The case $\widetilde E_7$:}
\eqref{9.22} shows that $C^{0,red}$ has the following four
components $C^{0,red}_i$, $i\in\{1,2,3,4\}$.
The components are given in terms of functions which
vanish on them.
Of course, $x_2,...,x_n$ vanish on all four components.
\begin{eqnarray}
C^{0,red}_1:&&
-2y+x_0^2+s_5+x_0s_7,\ x_1,\nonumber\\
&&(2x_0+s_7)y+s_2+2x_0s_4\nonumber \\
&& (\textup{and generically }x_0\neq 0).\label{9.36} \\
C^{0,red}_2:&&
-2y+x_1^2+s_5+x_1s_8,\ x_0,\nonumber\\
&&(2x_1+s_8)y+s_3+2x_1s_6\nonumber \\
&& (\textup{and generically }x_1\neq 0).
\label{9.37} \\
C^{0,red}_3:&& y,\ x_0,\ x_1.\label{9.38}\\
C^{0,red}_4:&& y-\frac{1}{2}s_5,\ x_0,\ x_1.\label{9.39}
\end{eqnarray}
Obviously, each $C^{0,red}_i$ is smooth, and
$\deg\pr_\mu|_{C^{0,red}_i}$ is as claimed in table \eqref{9.27}.
It remains to prove \eqref{9.29}.
The restriction of $(F^{alg}\circ R)^{ext}$
to $C^{0,red}_i$ is as follows:
\begin{eqnarray}
(F^{alg}\circ R)^{ext}|_{C^{0,red}_1}&=&
x_0^2y-y^2+s_1+x_0s_2+x_0^2s_4\nonumber\\
&&+ys_5+x_0ys_7,\label{9.40}\\
(F^{alg}\circ R)^{ext}|_{C^{0,red}_2}&=&
x_1^2y-y^2+s_1+x_1s_3+x_1^2s_6\nonumber\\
&&+ys_5+x_1ys_8,\label{9.41} \\
(F^{alg}\circ R)^{ext}|_{C^{0,red}_3}&=& s_1,
\label{9.42}\\
(F^{alg}\circ R)^{ext}|_{C^{0,red}_4}&=& s_1+\frac{1}{4}s_5^2.
\label{9.43}
\end{eqnarray}
Consider a parameter $s\in {\mathbb C}^8$ with
$(LL^{alg}\circ\rho)^{ext}(s,0)=0$.
We want to show $s=0$.
\eqref{9.42} and \eqref{9.43} give $s_1=s_5=0$.
This and \eqref{9.40} motivate the definition of the unfolding
\begin{eqnarray*}
G(x_0,y,s_2,s_4,s_7):=
x_0^2y-y^2+x_0s_2+ x_0^2s_4+x_0ys_7
\end{eqnarray*}
of the $A_3$-singularity $x_0^2y-y^2$ in the parameters
$s_2,s_4,s_7$.
The derivatives $\frac{\partial G}{\partial x_0}$
and $\frac{\partial G}{\partial y}$
are in the ideal which defines $C^{0,red}_1|_{s_1=s_5=0}$,
so
$$\textup{Crit}(G)\cong C^{0,red}_1|_{s_1=s_5=0}\textup{ and }
G|_{\textup{Crit}(G)} \cong
(F^{alg}\circ R)^{ext}|_{C^{0,red}_1|_{s_1=s_5=0}}.$$
Denote the Lyashko-Looijenga map of $G$ by $LL_{G}$.
Then $LL_{G}(s_2,s_4,s_7)=0$.
The unfolding $G$ is induced by the universal unfolding
$$G_{A_3}(z,y_1,t_1,t_2,t_3)=z^2y_1-y_1^2+t_1+zt_2+z^2t_3$$
via the morphism $(\Phi,\varphi)$ with
$G_{A_3}\circ\Phi=G$ and
\begin{eqnarray*}
z=\Phi_1(x_0,y,s)&=&
x_0+\frac{1}{2}s_7,\\
y_1=\Phi_2(x_0,y,s)&=&
y+\frac{1}{8}s_7^2 ,\\
t_1=\varphi_1(s)&=&
-\frac{1}{2}s_2s_7 +\frac{1}{4}s_4s_7^2+\frac{1}{64}s_7^4, \\
t_2=\varphi_2(s)&=&
s_2-s_4s_7,\\
t_3=\varphi_3(s)&=&
s_4-\frac{1}{8}s_7^2.
\end{eqnarray*}
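The identity $G_{A_3}\circ\Phi=G$ can again be confirmed by a direct expansion; the following sympy sketch (an independent check, with the universal unfolding written as $z^2y_1-y_1^2+t_1+zt_2+z^2t_3$) does this:

```python
# Check G_{A_3}(Phi) = G for the E7-tilde case.
from sympy import symbols, expand, Rational

x0, y, s2, s4, s7 = symbols('x0 y s2 s4 s7')
# The unfolding G and the morphism (z, y1, t1, t2, t3) from the text.
G = x0**2*y - y**2 + x0*s2 + x0**2*s4 + x0*y*s7
z = x0 + Rational(1, 2)*s7
y1 = y + Rational(1, 8)*s7**2
t1 = (-Rational(1, 2)*s2*s7 + Rational(1, 4)*s4*s7**2
      + Rational(1, 64)*s7**4)
t2 = s2 - s4*s7
t3 = s4 - Rational(1, 8)*s7**2
diff = expand(z**2*y1 - y1**2 + t1 + z*t2 + z**2*t3 - G)
```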
Then $LL_{G} = LL_{A_3}\circ \varphi$
where $LL_{A_3}$ is the Lyashko-Looijenga map of the
universal unfolding $G_{A_3}$.
The map $LL_{A_3}$ is a finite branched covering and has
value 0 only at 0. Therefore
$0=\varphi_1(s)=\varphi_2(s)=\varphi_3(s)$.
This gives $s_2=s_4=s_7=0$.
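As an independent check, the following sympy sketch confirms that $0=\varphi_1(s)=\varphi_2(s)=\varphi_3(s)$ forces $s_2=s_4=s_7=0$, by verifying that the parameters are nilpotent modulo the ideal generated by $\varphi_1,\varphi_2,\varphi_3$:

```python
# The three equations phi_1 = phi_2 = phi_3 = 0 from the E7-tilde case.
from sympy import symbols, groebner, Rational

s2, s4, s7 = symbols('s2 s4 s7')
eqs = [-Rational(1, 2)*s2*s7 + Rational(1, 4)*s4*s7**2
       + Rational(1, 64)*s7**4,
       s2 - s4*s7,
       s4 - Rational(1, 8)*s7**2]
G = groebner(eqs, s2, s4, s7, order='grevlex')
# Nilpotency of every parameter modulo the ideal shows that the only
# common zero is s_2 = s_4 = s_7 = 0.
nilpotent = all(G.reduce(v**8)[1] == 0 for v in (s2, s4, s7))
```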
\eqref{9.36}\ \& \eqref{9.37} and \eqref{9.40}\ \&\ \eqref{9.41}
are symmetric with respect to
$$(x_0,y,s_1,s_2,s_4,s_5,s_7)\longleftrightarrow
(x_1,y,s_1,s_3,s_6,s_5,s_8).$$
Therefore also $s_3=s_6=s_8=0$.
This gives $s=0$. Now \eqref{9.29} is proved in the
case $\widetilde E_7$.
\medskip
{\bf The case $\widetilde E_8$:}
\eqref{9.24} shows that $C^{0,red}$ has the following four
components $C^{0,red}_i$, $i\in\{1,2,3,4\}$.
The components are given in terms of functions which
vanish on them.
Of course, $x_2,...,x_n$ vanish on all four components.
\begin{eqnarray}
C^{0,red}_1:&&
-2y+(x_0^3+x_0^2\frac{1}{2}s_9+x_0\frac{1}{4}s_9^2
+\frac{1}{8}s_9^3)+s_6+(x_0+\frac{1}{2}s_9)s_7,\nonumber\\
&& x_1,\ y(s_7+3x_0^2+x_0s_9+\frac{1}{4}s_9^2)+s_2+2x_0s_3
+3x_0^2s_5\nonumber \\
&& (\textup{and generically }x_0-\frac{1}{2}s_9\neq 0).
\label{9.44} \\
C^{0,red}_2:&&
x_0-\frac{1}{2}s_9,\ -2y+s_6+s_7s_9+\frac{1}{2}s_9^3,\nonumber\\
&& 3x_1^2+s_4+2x_1s_8\ (\textup{and generically }x_1\neq 0).
\label{9.45} \\
C^{0,red}_3:&& x_0-\frac{1}{2}s_9,\ x_1,\ y.\label{9.46}\\
C^{0,red}_4:&& x_0-\frac{1}{2}s_9,\ x_1,\
-2y+s_6+s_7s_9+\frac{1}{2}s_9^3.\label{9.47}
\end{eqnarray}
Obviously, each $C^{0,red}_i$ is smooth, and
$\deg\pr_\mu|_{C^{0,red}_i}$ is as claimed in table \eqref{9.27}.
It remains to prove \eqref{9.29}.
The restriction of $(F^{alg}\circ R)^{ext}$
to $C^{0,red}_i$ is as follows:
\begin{eqnarray}\label{9.48}
(F^{alg}\circ R)^{ext}|_{C^{0,red}_1}&=&
y\bigl[(x_0^3+x_0^2\frac{1}{2}s_9+x_0\frac{1}{4}s_9^2
+\frac{1}{8}s_9^3)+s_6\\
&&+(x_0+\frac{1}{2}s_9)s_7\bigr]
-y^2+s_1+x_0s_2+x_0^2s_3+x_0^3s_5,\nonumber\\
(F^{alg}\circ R)^{ext}|_{C^{0,red}_2}&=&
y(s_6+s_7s_9+\frac{1}{2}s_9^3)-y^2+x_1^3\label{9.49}\\
&&+s_1+\frac{1}{2}s_2s_9+\frac{1}{4}s_3s_9^2+x_1s_4
+\frac{1}{8}s_5s_9^3+x_1^2s_8,\nonumber \\
(F^{alg}\circ R)^{ext}|_{C^{0,red}_3}&=&
s_1+\frac{1}{2}s_2s_9+\frac{1}{4}s_3s_9^2+\frac{1}{8}s_5s_9^3,
\label{9.50}\\
(F^{alg}\circ R)^{ext}|_{C^{0,red}_4}&=&
\frac{1}{4}(s_6+s_7s_9+\frac{1}{2}s_9^3)^2 \label{9.51}\\
&&+(s_1+\frac{1}{2}s_2s_9+\frac{1}{4}s_3s_9^2
+\frac{1}{8}s_5s_9^3).\nonumber
\end{eqnarray}
Consider a parameter $s\in {\mathbb C}^9$ with
$(LL^{alg}\circ\rho)^{ext}(s,0)=0$.
We want to show $s=0$.
\eqref{9.50} and \eqref{9.51} give
\begin{eqnarray}\label{9.52}
0=s_1+\frac{1}{2}s_2s_9+\frac{1}{4}s_3s_9^2+\frac{1}{8}s_5s_9^3,
\quad 0= s_6+s_7s_9+\frac{1}{2}s_9^3.
\end{eqnarray}
This and \eqref{9.48} motivate the definition of the unfolding
\begin{eqnarray*}
&&G_1(x_0,y,s_2,s_3,s_5,s_7,s_9)\\
&:=& y\bigl[(x_0^3+x_0^2\frac{1}{2}s_9+x_0\frac{1}{4}s_9^2
-\frac{3}{8}s_9^3)+(x_0-\frac{1}{2}s_9)s_7\bigr]\\
&&-y^2 -(\frac{1}{2}s_2s_9+\frac{1}{4}s_3s_9^2
+\frac{1}{8}s_5s_9^3)+x_0s_2+x_0^2s_3+x_0^3s_5
\end{eqnarray*}
of the $A_5$-singularity $yx_0^3-y^2$ in the parameters
$s_2,s_3,s_5,s_7,s_9$.
The derivatives $\frac{\partial G_1}{\partial x_0}$
and $\frac{\partial G_1}{\partial y}$
are in the ideal which defines
$C^{0,red}_1|_{s\textup{ with }\eqref{9.52}}$,
so
$$\textup{Crit}(G_1)\cong C^{0,red}_1|_{s\textup{ with }
\eqref{9.52}}\textup{ and }
G_1|_{\textup{Crit}(G_1)} \cong
(F^{alg}\circ R)^{ext}|_{C^{0,red}_1|_{s\textup{ with }
\eqref{9.52}}}.$$
Denote the Lyashko-Looijenga map of $G_1$ by $LL_{G_1}$.
Then $LL_{G_1}(s_2,s_3,s_5,s_7,s_9)=0$.
The unfolding $G_1$ is induced by the universal unfolding
$$G_{A_5}=y_1z^3-y_1^2+t_1+zt_2+z^2t_3+z^3t_4+zy_1t_5$$
via the morphism $(\Phi^{(1)},\varphi^{(1)})$ with
$G_{A_5}\circ\Phi^{(1)}=G_1$ and
\begin{eqnarray*}
z=\Phi_1^{(1)}(x_0,y,s)&=&
x_0+\frac{1}{6}s_9,\\
y_1=\Phi_2^{(1)}(x_0,y,s)&=&
y+\frac{1}{2}(\frac{2}{3}s_7s_9+\frac{11}{27}s_9^3),\\
t_1=\varphi_1^{(1)}(s)&=&
-\frac{1}{4}(\frac{2}{3}s_7s_9+\frac{11}{27}s_9^3)^2\\
&&+(-\frac{2}{3}s_2s_9-\frac{2}{9}s_3s_9^2
-\frac{7}{54}s_5s_9^3),\\
t_2=\varphi_2^{(1)}(s)&=& s_2-\frac{1}{3}s_3s_9+
\frac{1}{12}s_5s_9^2,\\
t_3=\varphi_3^{(1)}(s)&=& s_3-\frac{1}{2}s_5s_9, \\
t_4=\varphi_4^{(1)}(s)&=& s_5 ,\\
t_5=\varphi_5^{(1)}(s)&=& s_7+\frac{1}{6}s_9^2.
\end{eqnarray*}
Then $LL_{G_1} = LL_{A_5}\circ \varphi^{(1)}$
where $LL_{A_5}$ is the Lyashko-Looijenga map of the
universal unfolding $G_{A_5}$.
The map $LL_{A_5}$ is a finite branched covering and has
value 0 only at 0. Therefore
$$0=\varphi_1^{(1)}(s)=\varphi_2^{(1)}(s)=\varphi_3^{(1)}(s)
=\varphi_4^{(1)}(s)=\varphi_5^{(1)}(s).$$
This gives
\begin{eqnarray}\label{9.53}
s_2=s_3=s_5=s_7=s_9=0,
\textup{ and with \eqref{9.52}}\ s_1=s_6=0.
\end{eqnarray}
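The step from $0=\varphi_1^{(1)}(s)=\dots=\varphi_5^{(1)}(s)$ to $s_2=s_3=s_5=s_7=s_9=0$ can likewise be double-checked: the following sympy sketch (an independent verification, not part of the argument) confirms that each parameter is nilpotent modulo the ideal generated by $\varphi_1^{(1)},\dots,\varphi_5^{(1)}$.

```python
# The five equations phi_1^(1) = ... = phi_5^(1) = 0 from the
# E8-tilde case.
from sympy import symbols, groebner, Rational

s2, s3, s5, s7, s9 = symbols('s2 s3 s5 s7 s9')
eqs = [-Rational(1, 4)*(Rational(2, 3)*s7*s9
                        + Rational(11, 27)*s9**3)**2
       - Rational(2, 3)*s2*s9 - Rational(2, 9)*s3*s9**2
       - Rational(7, 54)*s5*s9**3,
       s2 - Rational(1, 3)*s3*s9 + Rational(1, 12)*s5*s9**2,
       s3 - Rational(1, 2)*s5*s9,
       s5,
       s7 + Rational(1, 6)*s9**2]
G = groebner(eqs, s2, s3, s5, s7, s9, order='grevlex')
# Nilpotency of every parameter modulo the ideal shows that the only
# common zero is s_2 = s_3 = s_5 = s_7 = s_9 = 0.
nilpotent = all(G.reduce(v**8)[1] == 0 for v in (s2, s3, s5, s7, s9))
```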
This and \eqref{9.49} motivate the definition of the unfolding
\begin{eqnarray*}
G_2(x_1,y,s_4,s_8)&:=&
-y^2+x_1^3+x_1s_4+x_1^2s_8
\end{eqnarray*}
of the $A_2$-singularity $-y^2+x_1^3$ in the parameters
$s_4$ and $s_8$.
The derivatives $\frac{\partial G_2}{\partial x_1}$
and $\frac{\partial G_2}{\partial y}$
are in the ideal which defines
$C^{0,red}_2|_{s\textup{ with }\eqref{9.53}}$,
so
$$\textup{Crit}(G_2)\cong C^{0,red}_2|_{s\textup{ with }
\eqref{9.53}}\textup{ and }
G_2|_{\textup{Crit}(G_2)} \cong
(F^{alg}\circ R)^{ext}|_{C^{0,red}_2|_{s\textup{ with }
\eqref{9.53}}}.$$
Denote the Lyashko-Looijenga map of $G_2$ by $LL_{G_2}$.
Then $LL_{G_2}(s_4,s_8)=0$.
The unfolding $G_2$ is induced by the universal unfolding
$$G_{A_2}(z,y_1,t_1,t_2)=-y_1^2+z^3+t_1+zt_2$$
via the morphism $(\Phi^{(2)},\varphi^{(2)})$ with
$G_{A_2}\circ\Phi^{(2)}=G_2$ and
\begin{eqnarray*}
z=\Phi_1^{(2)}(x_1,y,s)&=&
x_1+\frac{1}{3}s_8,\\
y_1=\Phi_2^{(2)}(x_1,y,s)&=& y,\\
t_1=\varphi_1^{(2)}(s)&=&
-\frac{1}{3}s_4s_8+\frac{2}{27}s_8^3,\\
t_2=\varphi_2^{(2)}(s)&=& s_4-\frac{1}{3}s_8^2.
\end{eqnarray*}
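Also here the identity $G_{A_2}\circ\Phi^{(2)}=G_2$ can be confirmed by a direct expansion, as in the following sympy sketch (an independent check, not part of the argument):

```python
# Check G_{A_2}(Phi^(2)) = G_2 for the E8-tilde case,
# with G_{A_2}(z, y1, t1, t2) = -y1**2 + z**3 + t1 + z*t2.
from sympy import symbols, expand, Rational

x1, y, s4, s8 = symbols('x1 y s4 s8')
G2 = -y**2 + x1**3 + x1*s4 + x1**2*s8
z = x1 + Rational(1, 3)*s8
t1 = -Rational(1, 3)*s4*s8 + Rational(2, 27)*s8**3
t2 = s4 - Rational(1, 3)*s8**2
diff = expand(-y**2 + z**3 + t1 + z*t2 - G2)
```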
Then $LL_{G_2} = LL_{A_2}\circ \varphi^{(2)}$
where $LL_{A_2}$ is the Lyashko-Looijenga map of the
universal unfolding $G_{A_2}$.
The map $LL_{A_2}$ is a finite branched covering and has
value 0 only at 0. Therefore
$$0=\varphi_1^{(2)}(s)=\varphi_2^{(2)}(s),\quad\textup{so}\quad
0=s_4=s_8.$$
This gives $s=0$. Now \eqref{9.29} is proved in the
case $\widetilde E_8$.
This finishes the proof of theorem \ref{t9.1}.
\hfill$\Box$
\section{Degree of the Lyashko-Looijenga map $LL^{alg}$
for the simple elliptic singularities}\label{s10}
\setcounter{equation}{0}
\noindent
This section is devoted to the proof of theorem \ref{t6.3}.
The main work has already been done in the sections
\ref{s9}, \ref{s5} and \ref{s8}.
The maps $\rho$ in theorem \ref{t9.1} tell how to glue
into $M^{alg}={\mathbb C}^{\mu-1}\times ({\mathbb C}-\{0,1\})$ a fiber
above $\lambda=0$. This and the maps $\psi_2$ and $\psi_3$
in subsection \ref{s5.2} tell how to glue into $M^{alg}$
fibers above $\lambda=1$ and $\lambda=\infty$.
Corollary \ref{t8.6} together with the maps
$\psi_2,\psi_3$ and $\rho$ allows one to calculate the
degree of $LL^{alg}$.
The maps $\psi_2$ in \eqref{5.21},\eqref{5.27} and \eqref{5.33}
and the maps $\rho$ in \eqref{9.2}, \eqref{9.5} and \eqref{9.8}
contain the following fractional powers of $\lambda$,
\begin{eqnarray}\label{10.1}
\begin{array}{llll}
& \widetilde E_6 & \widetilde E_7 & \widetilde E_8 \\
\psi_2 & \lambda^{1/2} & \lambda^{1/4} & \lambda^{1/2} \\
\rho & \kappa=\lambda^{1/c}=\lambda^{1/3} &
\kappa=\lambda^{1/c}=\lambda^{1/2} &
\kappa=\lambda^{1/c}=\lambda^{1/3}
\end{array}
\end{eqnarray}
Therefore we consider coverings of ${\mathbb C}-\{0,1\}$
and of $M^{alg}$ which are branched of order $2c$ at,
respectively above, the points $\lambda\in\{0,1,\infty\}$.
Denote by $\P^1(2c,2c,2c)$ the orbifold $\P^1$ with
orbifold points $0,1$ and $\infty$ which all have multiplicity
$2c$. Because of $2-3(1-\frac{1}{2c})=-1+\frac{3}{2c}<0$
it is a hyperbolic orbifold, so a good orbifold.
By a classical theorem of Fox \cite[Theorem 2.5]{Sc83},
a finite orbifold covering $p_X:X\to\P^1(2c,2c,2c)$
with $X$ a manifold exists.
It is a branched covering of order $2c$ at each preimage
of $0$, $1$ and $\infty$ and a covering everywhere else.
Denote $N^{alg}:= {\mathbb C}^{\mu-1}\times (X-p_X^{-1}(\{0,1,\infty\}))$,
and denote by $p_{alg}:=\id|_{{\mathbb C}^{\mu-1}}\times
p_X:N^{alg}\to M^{alg}$
the lift to $N^{alg}$ of the restricted map
$p_X:X-p_X^{-1}(\{0,1,\infty\})\to {\mathbb C}-\{0,1,\infty\}$.
The bundles $M^{alg}\to {\mathbb C}-\{0,1,\infty\}$ and
$N^{alg}\to X-p_X^{-1}(\{0,1,\infty\})$ are smooth cone bundles
with the weights
$(a_1,a_2,...,a_{\mu-1})=
(\deg_{\bf w}t_{\mu-1},\deg_{\bf w}t_{\mu-2},...,
\deg_{\bf w}t_1)\cdot d$. Here
\begin{eqnarray}\label{10.2}
d:=3\textup{ for }\widetilde E_6, \quad
d:=4\textup{ for }\widetilde E_7, \quad
d:=6\textup{ for }\widetilde E_8,
\end{eqnarray}
is chosen so that all weights $d\cdot \deg_{\bf w}t_i$ are
integers.
Now $N^{alg}$ will be extended to a smooth cone bundle
$N^{orb}\to X$, i.e. fibers above the points in
$p_X^{-1}(\{0,1,\infty\})$ will be glued into $N^{alg}$.
Let $\delta_0:\Delta\to X$ be an isomorphism from
the unit disk $\Delta$ to a neighborhood of any point
in $p_X^{-1}(0)$ with $p_X\circ \delta_0(z)=z^{2c}$.
Glue ${\mathbb C}^{\mu-1}\times \Delta$ into $N^{alg}$ with the map
\begin{eqnarray}\label{10.3}
{\mathbb C}^{\mu-1}\times\Delta^*&\hookrightarrow& N^{alg},\\
(t',z)&\mapsto& ((\rho_1,...,\rho_{\mu-1})(t',z^2),
\delta_0(z)).\nonumber
\end{eqnarray}
Let $\delta_1:\Delta\to X$ be an isomorphism from
the unit disk $\Delta$ to a neighborhood of any point
in $p_X^{-1}(1)$ with $p_X\circ \delta_1(z)=1-z^{2c}$.
Glue ${\mathbb C}^{\mu-1}\times \Delta$ into $N^{alg}$ with the map
\begin{eqnarray}\label{10.4}
{\mathbb C}^{\mu-1}\times\Delta^*&\hookrightarrow& N^{alg},\\
(t',z)&\mapsto& (((\psi_3)_1,...,(\psi_3)_{\mu-1})
((\rho_1,...,\rho_{\mu-1})(t',z^2),z^{2c}),\delta_1(z)).
\nonumber
\end{eqnarray}
Let $\delta_\infty:\Delta\to X$ be an isomorphism from
the unit disk $\Delta$ to a neighborhood of any point
in $p_X^{-1}(\infty)$ with $p_X\circ \delta_\infty(z)=z^{-2c}$.
Glue ${\mathbb C}^{\mu-1}\times \Delta$ into $N^{alg}$ with the map
\begin{eqnarray}\label{10.5}
{\mathbb C}^{\mu-1}\times\Delta^*&\hookrightarrow& N^{alg},\\
(t',z)&\mapsto& (((\psi_2)_1,...,(\psi_2)_{\mu-1})
((\rho_1,...,\rho_{\mu-1})(t',z^2),z^{2c}),\delta_\infty(z)).
\nonumber
\end{eqnarray}
This is a single-valued map although $\psi_2$ contains
$\lambda^{1/2}$ (in the cases $\widetilde E_6$ and $\widetilde E_8$)
and $\lambda^{1/4}$ (in the case $\widetilde E_7$),
by setting $\lambda^{1/2}\circ z^{2c}=z^c$ and
$\lambda^{1/4}\circ z^4=z$.
The resulting manifold $N^{orb}$ is a smooth cone bundle above
$X$ with weights
$(a_1,a_2,...,a_{\mu-1})=
(\deg_{\bf w}t_{\mu-1},\deg_{\bf w}t_{\mu-2},...,
\deg_{\bf w}t_1)\cdot d$
because $M^{alg}$ and $N^{alg}$ are smooth cone bundles with
these weights and all involved maps are ${\mathbb C}^*$-equivariant
with respect to the natural ${\mathbb C}^*$-actions.
The covering group of the covering
$p_{alg}:N^{alg}\to M^{alg}$ extends to an automorphism group
of $N^{orb}$. The quotient of $N^{orb}$ by this group
is an orbibundle $M^{orb}$ above $\P^1$ which extends
$M^{alg}\to{\mathbb C}-\{0,1\}$.
Let $p_{orb}:N^{orb}\to M^{orb}$ be the quotient map.
Recall the definition of $M^{orb}_0\subset M^{orb}$
in theorem \ref{t6.3}, and define
$N^{orb}_0:=p_{orb}^{-1}(M^{orb}_0)$.
We claim that $LL^{alg}\circ p_{alg}:N^{alg}\to M_{LL}^{(\mu)}$
extends to a holomorphic map
$LL^{orb}_N:N^{orb}\to M_{LL}^{(\mu)}$,
that the restriction
\begin{eqnarray}\label{10.6}
LL^{orb}_N: N^{orb}-N^{orb}_0\to M_{LL}^{(\mu)}-M_{LL,0}^{(\mu)}
\end{eqnarray}
is a branched covering of a finite degree,
and that $LL^{orb}_N$ maps the fibers of $N^{orb}$
above the points in $p_X^{-1}(\{0,1,\infty\})$ to
$D_{LL}^{(\mu)}$.
Near the fibers of $N^{orb}$ above the points in
$p_X^{-1}(0)$, this follows from theorem \ref{t9.1} (e).
Near the fibers of $N^{orb}$ above the points in
$p_X^{-1}(\{1,\infty\})$, this follows again from
theorem \ref{t9.1} (e) and from the fact that
$\psi_3$ and $\psi_2$ are locally isomorphisms of
F-manifolds with Euler fields and thus
\begin{eqnarray}\label{10.7}
LL^{alg}(t',\lambda)
&=& LL^{alg}(\psi_2(t',\lambda))
=LL^{alg}(\psi_3(t',\lambda)) \\
\textup{for}&&
(t',\lambda)\in {\mathbb C}^{\mu-1}\times\Delta^*\subset M^{alg}
={\mathbb C}^{\mu-1}\times({\mathbb C}-\{0,1\}).\nonumber
\end{eqnarray}
$LL^{alg}$ inherits the good properties from
$LL^{alg}\circ p_{alg}$.
It extends to a holomorphic map
$LL^{orb}:M^{orb}\to M_{LL}^{(\mu)}$,
the restriction in \eqref{6.6} is a branched covering,
and $\pi_{orb}^{-1}(\{0,1,\infty\})$ is mapped to
$D_{LL}^{(\mu)}$.
It remains to determine the degree
of $LL^{alg}$. Of course,
$\deg LL^{orb}_N=\deg LL^{alg}\cdot \deg p_{alg}$.
The tuple $(LL^{orb}_N,N^{orb},M_{LL}^{(\mu)})$ satisfies
almost all the properties of the tuple
$(f,C_1,C_2)$ in corollary \ref{t8.6}, but not quite.
The affine group $G_{{\mathbb A}_1}=({\mathbb C},+)$ acts freely on $N^{orb}$
and $M_{LL}^{(\mu)}$ as follows,
and $LL^{orb}_N$ is equivariant with respect to these actions.
We have to divide out these actions.
The action of $G_{{\mathbb A}_1}$ on $N^{orb}$ comes from the lift
to $N^{alg}$ and extension to $N^{orb}$ of the action
on $M^{alg}$,
\begin{eqnarray}\label{10.8}
G_{{\mathbb A}_1}\times M^{alg}\to M^{alg},\quad
(s,t',\lambda)\mapsto (t_1+s,t_2,...,t_{\mu-1},\lambda).
\end{eqnarray}
The action of $G_{{\mathbb A}_1}$ on $M_{LL}^{(\mu)}$ is given by
\begin{eqnarray}\label{10.9}
G_{{\mathbb A}_1}\times M_{LL}^{(\mu)}\to M_{LL}^{(\mu)},\quad
(s,p(y))\mapsto p(y-s).
\end{eqnarray}
The quotient triple $(LL^{orb}_N,N^{orb},M_{LL}^{(\mu)})/G_{{\mathbb A}_1}$
satisfies the properties of the triple
$(f,C_1,C_2)$ in corollary \ref{t8.6}.
$C_1:=N^{orb}/G_{{\mathbb A}_1}$ is a smooth cone bundle with
weights $(a_1,...,a_{\mu-2})=(\deg_{\bf w}t_{\mu-1},...,
\deg_{\bf w}t_2)\cdot d$
and basis $X_1:=X$ of dimension 1.
$C_2:=M_{LL}^{(\mu)}/G_{{\mathbb A}_1}$ is a smooth cone bundle with
weights $(b_1,b_2,...,b_{\mu-1})=(2,3,...,\mu-1)\cdot d$
with basis $X_2=(\textup{a point})$.
And $f:=LL^{orb}_N/G_{{\mathbb A}_1}$ satisfies the properties in the
situation before proposition \ref{t8.5}. Therefore
by corollary \ref{t8.6}
\begin{eqnarray}
\deg LL^{orb}_N &=& \deg f =
\frac{b_1...b_{\mu-1}}{a_1...a_{\mu-2}}\cdot\left(
-\sum_{k=a_1}^{a_{\mu-2}}\frac{1}{k}\cdot\deg C_{1,(k)}\right)
\nonumber \\
&=& \frac{2\cdot 3\cdot ...\cdot \mu}
{\prod_{i=2}^{\mu-1}\deg_{\bf w}t_i}\cdot \left(
-\sum_{k=a_1}^{a_{\mu-2}} \frac{d}{k}\cdot \deg C_{1,(k)}\right),
\label{10.10} \\
\deg LL^{alg}&=&
\frac{\mu!}{\prod_{i=2}^{\mu-1}\deg_{\bf w}t_i}\cdot\left(
\sum_{k=a_1}^{a_{\mu-2}} \frac{d}{k}\cdot\left(
-\frac{\deg C_{1,(k)}}{\deg p_{alg}}\right)\right).
\label{10.11}
\end{eqnarray}
For the proof of formula \eqref{6.7} it remains to show
\begin{eqnarray}\label{10.12}
-\frac{\deg C_{1,(k)}}{\deg p_{alg}} =
\frac{1}{2}\cdot |\{j\, |\, a_j=k\}|.
\end{eqnarray}
A basis of trivial global sections of the trivial smooth cone bundle
${\mathbb C}^{\mu-1}\times X\supset N^{alg}$ and the gluing maps
\eqref{10.3}, \eqref{10.4} and \eqref{10.5} give a global
meromorphic section of the determinant bundle
$\det C_{1,(k)}$ of each vector bundle $C_{1,(k)}$.
The sum of the orders of zeros and poles of this section
is $\deg C_{1,(k)}$. In fact, we can read off
$-\deg C_{1,(k)}/\deg p_{alg}$ directly from the sum of the
orders of $\lambda$ in those parts of
$\rho,\psi_3$ and $\psi_2$ which correspond to $C_{1,(k)}$.
Here $\rho$ is used three times, $\psi_3$ and $\psi_2$ are
each used once. The following tables collect
the relevant data from the formulas
\eqref{9.2}, \eqref{9.5}, \eqref{9.8}, \eqref{5.24},
\eqref{5.30}, \eqref{5.36}, \eqref{5.21}, \eqref{5.27}
and \eqref{5.33}.
\bigskip
The case $\widetilde E_6$:
\begin{eqnarray*}
\begin{array}{l|l|l|l}
k & \textup{involved }t_i& \rho:\textup{order of}
& \psi_3:\textup{order of} \\
& & \lambda\textup{ in \eqref{9.2}} &
\lambda\textup{ in \eqref{5.24}} \\ \hline
1=a_1=a_2=a_3 & t_7,t_6,t_5 & 0+\frac{1}{3}+1 & 0+0+0
\\[1mm]
2=a_4=a_5=a_6 & t_4,t_3,t_2 & 0+0+\frac{2}{3} & 0+0+0
\end{array}
\end{eqnarray*}
\begin{eqnarray*}
\begin{array}{l|l|l}
k & \psi_2:\textup{order of }\lambda\textup{ in }\eqref{5.21} &
-\deg C_{1,(k)}/\deg p_{alg} \\[1mm] \hline
1 & \frac{1}{2}-1-2 &
3\cdot \frac{4}{3}+0-\frac{5}{2}=\frac{3}{2} \\[1mm]
2 & \frac{1}{2}+0-1 &
3\cdot \frac{2}{3}+0-\frac{1}{2}=\frac{3}{2}
\end{array}
\end{eqnarray*}
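As a consistency check (my own arithmetic, reading the entries off the two tables above), \eqref{10.12} indeed holds for $\widetilde E_6$, where $a_1=a_2=a_3=1$ and $a_4=a_5=a_6=2$:
\begin{eqnarray*}
k=1:&& -\frac{\deg C_{1,(1)}}{\deg p_{alg}}
= 3\cdot\frac{4}{3}+0+\Big(\frac{1}{2}-1-2\Big)=\frac{3}{2}
= \frac{1}{2}\cdot|\{j\,|\,a_j=1\}|,\\
k=2:&& -\frac{\deg C_{1,(2)}}{\deg p_{alg}}
= 3\cdot\frac{2}{3}+0+\Big(\frac{1}{2}+0-1\Big)=\frac{3}{2}
= \frac{1}{2}\cdot|\{j\,|\,a_j=2\}|.
\end{eqnarray*}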
\bigskip
The case $\widetilde E_7$:
\begin{eqnarray*}
\begin{array}{l|l|l|l}
k & \textup{involved }t_i& \rho:\textup{order of}
& \psi_3:\textup{order of} \\
& & \lambda\textup{ in \eqref{9.5}} &
\lambda\textup{ in \eqref{5.30}} \\ \hline
1=a_1=a_2 & t_8,t_7 & 0+\frac{1}{2} &
1\textup{ (see \eqref{10.13}})
\\[1mm]
2=a_3=a_4=a_5 & t_6,t_5,t_4 & 0+0+1 & 0+0+0 \\
3=a_6=a_7 & t_3,t_2 & 0+\frac{1}{2} & 0+0
\end{array}
\end{eqnarray*}
\begin{eqnarray*}
\begin{array}{l|l|l}
k & \psi_2:\textup{order of }\lambda\textup{ in }\eqref{5.27} &
-\deg C_{1,(k)}/\deg p_{alg} \\[1mm] \hline
1 & -\frac{1}{4}-\frac{5}{4} &
3\cdot \frac{1}{2}+1-\frac{3}{2}=1 \\[1mm]
2 & \frac{1}{2}-\frac{1}{2}-\frac{3}{2} &
3\cdot 1+0-\frac{3}{2}=\frac{3}{2} \\[1mm]
3 & \frac{1}{4}-\frac{3}{4} &
3\cdot \frac{1}{2}+0-\frac{1}{2} = 1
\end{array}
\end{eqnarray*}
\begin{eqnarray}\label{10.13}
\det\begin{pmatrix}-3+\lambda & -2 \\ 3 & 2+\lambda\end{pmatrix}
=-\lambda(1-\lambda).
\end{eqnarray}
\bigskip
The case $\widetilde E_8$:
\begin{eqnarray*}
\begin{array}{l|l|l|l}
k & \textup{involved }t_i& \rho:\textup{order of}
& \psi_3:\textup{order of} \\
& & \lambda\textup{ in \eqref{9.8}} &
\lambda\textup{ in \eqref{5.36}} \\ \hline
1=a_1 & t_9 & -\frac{1}{3} & 2
\\[1mm]
2=a_2=a_3 & t_8,t_7 & 0+\frac{1}{3} &
1\textup{ (see \eqref{10.13})}\\
3=a_4=a_5 & t_6,t_5 & 0+1 & 0+0 \\
4=a_6=a_7 & t_4,t_3 & 0+\frac{2}{3} & 0+0 \\
5=a_8 & t_2 & \frac{1}{3} & 0
\end{array}
\end{eqnarray*}
\begin{eqnarray*}
\begin{array}{l|l|l}
k & \psi_2:\textup{order of }\lambda\textup{ in }\eqref{5.33} &
-\deg C_{1,(k)}/\deg p_{alg} \\[1mm] \hline
1 & -\frac{1}{2} &
3\cdot \frac{-1}{3}+2-\frac{1}{2}=\frac{1}{2} \\[1mm]
2 & 0-1 &
3\cdot \frac{1}{3}+1-1=1 \\[1mm]
3 & -\frac{1}{2}-\frac{3}{2} & 3\cdot 1+0-2 = 1 \\
4 & 0-1 & 3\cdot \frac{2}{3}+0-1=1 \\
5 & -\frac{1}{2} & 3\cdot\frac{1}{3}+0-\frac{1}{2}=\frac{1}{2}
\end{array}
\end{eqnarray*}
In all cases \eqref{10.12} holds. This and \eqref{10.11}
show \eqref{6.7}. This completes the proof of theorem
\ref{t6.3}.
\section{Introduction}
We all know that field theory describes a system containing infinitely many degrees of freedom, and most of the time when we describe a field-theoretical system we consider an infinite-size system, i.e.\ a system in the thermodynamic limit. This limit is crucial when performing a canonical transformation that preserves the commutation brackets. One such class is the Bogoliubov transformations in scalar field theory, which are often used in condensed matter physics \cite{casalbuoni2003lecture}, \cite{timm2012theory}, \cite{chalker2013quantum} and in quantum field theory in curved spacetime \citep{birrell1980massive}, \citep{biswas1995particle}, \citep{jacobson2005introduction}. I will show that the thermodynamic limit, or infinite-volume limit, makes this transformation impossible to implement unitarily and creates two inequivalent representations on two disjoint Fock spaces, a fact often used in quantum many-body systems. Because of these inequivalent, disjoint vector spaces, the original operators and the transformed ones each have their own separate domain on which to act on states. Therefore, the traditional way of showing particle production \citep{parker2015creation} is not well-defined in the thermodynamic limit. However, I show in this article that, because the Hamiltonian under such a transformation is no longer invariant under the $U(1)$ action defined in this article, the particle number of the system is not conserved, which implies particle production in curved spacetime under gravity.
\vfill\null
\section{Massive scalar free-field theory in Hamiltonian description}
\subsection{Massive scalar free-field theory in Minkowski spacetime}
Let us consider the following action
\begin{equation}
\begin{split}
S & =\int\sqrt{-\eta}d^{4}x\Big[\frac{1}{2}(\partial\phi(x))^{2}-\frac{1}{2}m^{2}\phi^{2}(x)\Big]\\
\eta & =\text{diag}(1,-1,-1,-1)
\end{split}
\end{equation}
We can write the field operator in the following way
\begin{equation}
\hat{\phi}(x) =\int\frac{d^{3}k}{\sqrt{(2\pi)^{3}2\omega_{\vec{k}}}}[\hat{a}_{\vec{k}}e^{-ik.x}+\hat{a}_{\vec{k}}^{\dagger}e^{ik.x}]
\end{equation}
Here the canonically conjugate momentum field operator is $\hat{\pi}=\dot{\hat{\phi}}$. And one can check, from the canonical commutation relation
\begin{equation}
[\hat{\phi}(x),\hat{\pi}(y)]_{x^{0}=y^{0}} =i\delta^{(3)}(\vec{x}-\vec{y})
\end{equation}
the following algebra between the creation and annihilation operators
\begin{equation}
\begin{split}
[\hat{a}_{\vec{k}},\hat{a}_{\vec{k}'}] & =0=[\hat{a}_{\vec{k}}^{\dagger},\hat{a}_{\vec{k}'}^{\dagger}]\\
[\hat{a}_{\vec{k}},\hat{a}_{\vec{k}'}^{\dagger}] & =\delta^{(3)}(\vec{k}-\vec{k}')
\end{split}
\end{equation}
And the Hamiltonian operator for this system is the following \cite{book:14817}
\begin{equation}
\begin{split}
\hat{H} & =\int d^{3}x\Big[\frac{1}{2}\hat{\pi}^{2}(x)+\frac{1}{2}(\vec{\nabla}\hat{\phi}(x))^{2}+\frac{1}{2}m^{2}\hat{\phi}^{2}(x)\Big]\\
& =\int d^{3}k\frac{1}{2}\omega_{\vec{k}}[\hat{a}_{\vec{k}}\hat{a}_{\vec{k}}^{\dagger}+\hat{a}_{\vec{k}}^{\dagger}\hat{a}_{\vec{k}}]\\
& \equiv\int d^{3}k\omega_{\vec{k}}\hat{a}_{\vec{k}}^{\dagger}\hat{a}_{\vec{k}}, \ \omega_{\vec{k}}=\sqrt{\vec{k}^{2}+m^{2}}
\end{split}
\end{equation}
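The step from the symmetric form to $\omega_{\vec{k}}\hat{a}_{\vec{k}}^{\dagger}\hat{a}_{\vec{k}}$ uses $\hat{a}\hat{a}^{\dagger}+\hat{a}^{\dagger}\hat{a}=2\hat{a}^{\dagger}\hat{a}+1$ and then drops the (infinite) vacuum-energy constant. A single-mode symbolic sketch of this step, using sympy's bosonic operators:

```python
import sympy as sp
from sympy.physics.quantum import Dagger
from sympy.physics.quantum.boson import BosonOp
from sympy.physics.quantum.operatorordering import normal_ordered_form

# Single-mode check: a a† + a† a = 2 a† a + 1, i.e. the symmetric form inside
# H equals twice the number operator plus the vacuum-energy constant.
a = BosonOp('a')
sym = a * Dagger(a) + Dagger(a) * a
normal = normal_ordered_form(sp.expand(sym))
diff = sp.expand(normal - (2 * Dagger(a) * a + 1))
assert diff == 0
```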
The above expression can also be written using the definition of the number operator
\begin{equation}
\begin{split}
\hat{N}_{\vec{k}} & =\hat{a}_{\vec{k}}^{\dagger}\hat{a}_{\vec{k}}\\
\implies\hat{H} & =\int d^{3}k \ \omega_{\vec{k}}\hat{N}_{\vec{k}}
\end{split}
\end{equation}
Note that the Hamiltonian of this theory is invariant under both the global $U(1)$-group action, which is the following
\begin{equation}
\begin{split}
\hat{a}_{\vec{k}} & \rightarrow e^{i\Theta}\hat{a}_{\vec{k}}\\
\hat{a}_{\vec{k}}^{\dagger} & \rightarrow e^{-i\Theta}\hat{a}_{\vec{k}}^{\dagger}
\end{split}
\end{equation}
and the local $U(1)$-group action, which is the following
\begin{equation}
\begin{split}
\hat{a}_{\vec{k}} & \rightarrow e^{i\Theta_{\vec{k}}}\hat{a}_{\vec{k}}\\
\hat{a}_{\vec{k}}^{\dagger} & \rightarrow e^{-i\Theta_{\vec{k}}}\hat{a}_{\vec{k}}^{\dagger}
\end{split}
\end{equation}
And the vacuum of this theory is the state $\ket{0}$ such that
\begin{equation}
\hat{a}_{\vec{k}}\ket{0} =0, \ \forall\vec{k}
\end{equation}
And all the single- and multi-particle states can be constructed by acting with the creation operators on the vacuum state.
\subsection{Massive scalar free-field theory in a different frame}
Now we want to move to a different frame which is non-inertial; with respect to an observer in this frame, the action can be written down using the minimal prescription
\begin{equation}
\begin{split}
S & =\int\sqrt{-g}d^{4}x\Big[\frac{1}{2}(\partial\phi(x))^{2}-\frac{1}{2}m^{2}\phi^{2}(x)\Big]\\
(\partial\phi(x))^{2} & =g^{\mu\nu}(x)\partial_{\mu}\phi(x)\partial_{\nu}\phi(x)
\end{split}
\end{equation}
where the metric $g_{\mu\nu}(x)$ is non-trivial.\\[5pt]
Here we choose the mode functions to be solutions of the Euler-Lagrange equation, which is of the following form
\begin{equation}
\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\phi)+m^{2}\phi=0
\end{equation}
Let $\{f_{n}(x)\}$ be a complete set of such mode functions solving the above equation.\\[5pt]
We define an inner product between two such mode functions in the following way
\begin{equation}
(f,g)\equiv i\int\sqrt{-g}d^{3}x \ f^{*}(x)[g^{0\nu}\overrightarrow{\partial}_{\nu}-g^{0\nu}\overleftarrow{\partial}_{\nu}]g(x)
\end{equation}
And one can show that this inner product is time-translation invariant \cite{book:274311}
\begin{equation}
\begin{split}
\partial_{0}(f,g) & = i\int d^{3}x \ \partial_{0}\Big[\sqrt{-g}f^{*}(x)[g^{0\nu}\overrightarrow{\partial}_{\nu}-g^{0\nu}\overleftarrow{\partial}_{\nu}]g(x)\Big]\\
& =i\int d^{3}x \ \partial_{\mu}\Big[\sqrt{-g}f^{*}(x)[g^{\mu\nu}\overrightarrow{\partial}_{\nu}-g^{\mu\nu}\overleftarrow{\partial}_{\nu}]g(x)\Big]\\
& -i\int d^{3}x \ \partial_{i}\Big[\sqrt{-g}f^{*}(x)[g^{i\nu}\overrightarrow{\partial}_{\nu}-g^{i\nu}\overleftarrow{\partial}_{\nu}]g(x)\Big]\\
=i\int d^{3}x \ & \Big[f^{*}(x)\partial_{\mu}[\sqrt{-g}g^{\mu\nu}\overrightarrow{\partial}_{\nu}]-\partial_{\mu}[\sqrt{-g}g^{\mu\nu}f^{*}(x)\overleftarrow{\partial}_{\nu}]g(x)\Big]\\
& -i\int d^{3}x \ \partial_{i}\Big[\sqrt{-g}f^{*}(x)[g^{i\nu}\overrightarrow{\partial}_{\nu}-g^{i\nu}\overleftarrow{\partial}_{\nu}]g(x)\Big]\\
=0
\end{split}
\end{equation}
where we throw away the surface term, because we have assumed that the fields vanish near spatial infinity, and use the Euler-Lagrange equation satisfied by the modes.\\[5pt]
A similar definition also holds in the first frame, the only difference being that the metric is Minkowski. So we label the inner products in the first and second frames by 1 and 2 respectively.\\[5pt]
In this frame we can also write the field operator in the following way
\begin{equation}
\hat{\phi}'(y) = \sum_{n}[\hat{b}_{n}f_{n}(y)+\hat{b}_{n}^{\dagger}f_{n}^{*}(y)]
\end{equation}
Now consider a point in spacetime which has different coordinates with respect to the two frames; let us call them $x$ and $x'$ in the initial and final frames respectively. Since we are dealing with a scalar field theory, we have the following nice transformation property under a general coordinate transformation
\begin{equation}
\hat{\phi}'(x') =\hat{\phi}(x)
\end{equation}
Using the definition of the inner product one can define a Bogoliubov-type transformation, which is given in the next subsection. For convenience we choose a momentum-mode decomposition of the fields from the next section onwards, by considering a spacetime with spatial translational invariance, for example a spatially flat FRW spacetime \citep{book:274311}.
\subsection{Bogoliubov transformation}
To show the Bogoliubov transformation \cite{jacobson2005introduction}, \cite{sato1994bogoliubov}, we consider a real scalar field with modes $\{\hat{a}_{\vec{k}}\}$. They satisfy the following algebra
\begin{equation}
[\hat{a}_{\vec{k}},\hat{a}_{\vec{q}}^{\dagger}]=\delta^{(3)}(\vec{k}-\vec{q})
\end{equation}
and the rest of the commutator brackets are zero.\\[5pt]
With these one can construct the Fock space $\mathcal{H}[a]$ by repeated application of the $\hat{a}_{\vec{k}}^{\dagger}$'s on the vacuum state, denoted by $\ket{0}$ and defined by
\begin{equation}
\hat{a}_{\vec{k}}\ket{0}=0, \ \forall\vec{k}
\end{equation}
Let us now consider the following Bogoliubov transformation (this is not the most general one, because the most general transformation can also mix different momentum modes)
\begin{equation}
\begin{split}
\hat{c}_{\vec{k}}(\theta) & =\cosh\theta_{\vec{k}}\hat{a}_{\vec{k}}-\sinh\theta_{\vec{k}}\hat{a}_{-\vec{k}}^{\dagger}
\end{split}
\end{equation}
With these transformations in hand, one can check that the new operators also satisfy
\begin{equation}
[\hat{c}_{\vec{k}}(\theta),\hat{c}_{\vec{q}}^{\dagger}(\theta)]=\delta^{(3)}(\vec{k}-\vec{q})
\end{equation}
with all other commutators vanishing.\\[5pt]
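The preservation of the canonical commutator can be sketched with a minimal symbolic check (my own illustration, for a single momentum pair): with $\hat{c}_{\vec{k}}=\cosh\theta\,\hat{a}_{\vec{k}}-\sinh\theta\,\hat{a}_{-\vec{k}}^{\dagger}$, the only nonvanishing contributions to $[\hat{c}_{\vec{k}},\hat{c}_{\vec{k}}^{\dagger}]$ are $\cosh^{2}\theta\,[\hat{a}_{\vec{k}},\hat{a}_{\vec{k}}^{\dagger}]$ and $\sinh^{2}\theta\,[\hat{a}_{-\vec{k}}^{\dagger},\hat{a}_{-\vec{k}}]$, so everything reduces to a hyperbolic identity:

```python
import sympy as sp

# [c, c†] = cosh²θ·(+1) + sinh²θ·(−1); the transformation is canonical
# precisely because cosh²θ − sinh²θ = 1 for every real θ.
th = sp.symbols('theta', real=True)
commutator = sp.cosh(th)**2 - sp.sinh(th)**2
assert sp.simplify(commutator) == 1
```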
Now we consider the vacuum relative to the operators $\{\hat{c}_{\vec{k}}(\theta)\}$, denoted by $\ket{0(\theta)}$ and defined by
\begin{equation}
\hat{c}_{\vec{k}}(\theta)\ket{0(\theta)}=0, \ \forall\vec{k}
\end{equation}
and we construct the new Fock space representation $\mathcal{H}[c]$ by repeated application of the $\hat{c}_{\vec{k}}^{\dagger}(\theta)$'s on the vacuum state $\ket{0(\theta)}$.\\[5pt]
If we now assume the existence of a unitary operator $U(\theta)$ which generates the transformation
\begin{equation}
U(\theta)\hat{a}_{\vec{k}}U^{-1}(\theta)=\hat{c}_{\vec{k}}(\theta)
\end{equation}
where $U(\theta)=e^{iG(\theta)}$ with Hermitian generator $G(\theta)$, then one can explicitly check that
\begin{equation}
G(\theta)=i\int d^{3}k \ \theta_{\vec{k}}(\hat{a}_{\vec{k}}\hat{a}_{-\vec{k}}-\hat{a}_{-\vec{k}}^{\dagger}\hat{a}_{\vec{k}}^{\dagger})
\end{equation}
And therefore, one can write the transformation operator in the following way
\begin{equation}
\begin{split}
U(\theta) & =e^{-\delta^{(3)}(0)\int d^{3}k\ln\cosh\theta_{\vec{k}}}e^{\int d^{3}k\tanh\theta_{\vec{k}}\hat{a}_{\vec{k}}^{\dagger}\hat{a}_{-\vec{k}}^{\dagger}}\\
& \times e^{-\int d^{3}k\tanh\theta_{\vec{k}}\hat{a}_{-\vec{k}}\hat{a}_{\vec{k}}}
\end{split}
\end{equation}
then we have
\begin{equation}
\ket{0(\theta)}=e^{-\delta^{(3)}(0)\int d^{3}k\ln\cosh\theta_{\vec{k}}}e^{\int d^{3}k\tanh\theta_{\vec{k}}\hat{a}_{\vec{k}}^{\dagger}\hat{a}_{-\vec{k}}^{\dagger}}\ket{0}
\end{equation}
where $\delta^{(3)}(0)$ in the discrete limit is the volume $V$. Therefore, unless $V<\infty$ and $\int d^{3}k\ln\cosh\theta_{\vec{k}}<\infty$, the state $\ket{0(\theta)}$ does not belong to the Fock space $\mathcal{H}[a]$, and we can say that these two representations are inequivalent \cite{stepanian2013unitary}.\\[5pt]
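A toy numerical sketch of this inequivalence (all numbers are my choice, with $\int d^{3}k\ln\cosh\theta_{\vec{k}}$ replaced by a fixed positive stand-in $I$): the vacuum overlap $\braket{0|0(\theta)}=e^{-VI}$ decays to zero as the volume grows, so the two vacua become orthogonal in the thermodynamic limit.

```python
import math

# I stands in for ∫ d³k ln cosh θ_k, which is strictly positive for θ ≠ 0.
I = math.log(math.cosh(0.5))
volumes = [1.0, 10.0, 100.0, 1000.0]
overlaps = [math.exp(-V * I) for V in volumes]          # <0|0(θ)> = e^{-V·I}
assert all(x > y for x, y in zip(overlaps, overlaps[1:]))  # decays with V
assert overlaps[-1] < 1e-50                             # effectively orthogonal
```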
Note that the transformation between operators is well-defined for any volume of the system, so once we have defined the transformation we take the limit $V\rightarrow\infty$, for which the transformation between operators is still well-defined but the new and old Fock spaces become disjoint. Therefore, the new and old creation and annihilation operators have separate Hilbert spaces to act on, or in other words their domains become disjoint. This construction will be assumed from the next section onwards.
\subsection{Hamiltonian under Bogoliubov transformation}
Now let us look at the Hamiltonian under the Bogoliubov transformation, but before that we need the inverse Bogoliubov transformation
\begin{equation}
\begin{split}
\hat{c}_{\vec{k}}(\theta) & =\cosh\theta_{\vec{k}}\hat{a}_{\vec{k}}-\sinh\theta_{\vec{k}}\hat{a}_{-\vec{k}}^{\dagger}\\
\hat{c}_{\vec{k}}^{\dagger}(\theta) & =\cosh\theta_{\vec{k}}\hat{a}_{\vec{k}}^{\dagger}-\sinh\theta_{\vec{k}}\hat{a}_{-\vec{k}}\\
\implies\hat{a}_{\vec{k}} & =\cosh\theta_{\vec{k}}\hat{c}_{\vec{k}}+\sinh\theta_{\vec{k}}\hat{c}_{-\vec{k}}^{\dagger}\\
\hat{a}_{\vec{k}}^{\dagger} & =\cosh\theta_{\vec{k}}\hat{c}_{\vec{k}}^{\dagger}+\sinh\theta_{\vec{k}}\hat{c}_{-\vec{k}}
\end{split}
\end{equation}
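The pair of transformations can be packaged as a $2\times2$ matrix acting on $(\hat{a}_{\vec{k}},\hat{a}_{-\vec{k}}^{\dagger})$; a quick symbolic check (my packaging, assuming $\theta_{-\vec{k}}=\theta_{\vec{k}}$ as the formulas above implicitly do) that the displayed inverse really inverts the transformation:

```python
import sympy as sp

th = sp.symbols('theta', real=True)
# (ĉ_k, ĉ_{−k}†) = B (â_k, â_{−k}†)
B = sp.Matrix([[sp.cosh(th), -sp.sinh(th)],
               [-sp.sinh(th), sp.cosh(th)]])
# Claimed inverse: flip the sign of the sinh entries.
Binv = sp.Matrix([[sp.cosh(th), sp.sinh(th)],
                  [sp.sinh(th), sp.cosh(th)]])
assert (B * Binv).applyfunc(sp.simplify) == sp.eye(2)
assert sp.simplify(B.det()) == 1     # cosh²θ − sinh²θ = 1
```

The unit determinant is the same identity that preserved the commutator above.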
Using this we can write
\begin{equation}
\begin{split}
\hat{H} & =\int d^{3}k \ \varepsilon_{\vec{k}}\hat{a}_{\vec{k}}^{\dagger}\hat{a}_{\vec{k}}\\
& =\int d^{3}k \ \varepsilon_{\vec{k}}[\cosh\theta_{\vec{k}}\hat{c}_{\vec{k}}^{\dagger}+\sinh\theta_{\vec{k}}\hat{c}_{-\vec{k}}]\\
& \times[\cosh\theta_{\vec{k}}\hat{c}_{\vec{k}}+\sinh\theta_{\vec{k}}\hat{c}_{-\vec{k}}^{\dagger}]\\
& =\int d^{3}k \ \varepsilon_{\vec{k}}[\cosh^{2}\theta_{\vec{k}}\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}+\sinh^{2}\theta_{\vec{k}}\hat{c}_{-\vec{k}}\hat{c}_{-\vec{k}}^{\dagger}\\
& +\sinh\theta_{\vec{k}}\cosh\theta_{\vec{k}}(\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}+\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger})]\\
& \simeq\int d^{3}k \ \varepsilon_{\vec{k}}[(\cosh^{2}\theta_{\vec{k}}+\sinh^{2}\theta_{-\vec{k}})\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}\\
& +\sinh\theta_{\vec{k}}\cosh\theta_{\vec{k}}(\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}+\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger})]
\end{split}
\end{equation}
Note that under the Bogoliubov transformation the Hamiltonian breaks the $U(1)$ invariance in the new Fock space representation. Also note that this Hamiltonian looks similar to the BCS superconductor Hamiltonian in the mean-field approximation \cite{timm2012theory}, \cite{casalbuoni2003lecture}, but here we have non-trivial coefficients attached to the operators.\\[5pt]
Now we want to write the Hamiltonian in matrix form, and to do that we choose a representation with respect to a basis of the form $\begin{pmatrix}
\hat{c}_{\vec{k}}\\
\hat{c}_{-\vec{k}}^{\dagger}
\end{pmatrix}, \ \forall\vec{k}$. Using this representation we can write the Hamiltonian in the following form
\begin{equation}
\begin{split}
\hat{H}=\int d^{3}k\varepsilon_{\vec{k}} & \begin{pmatrix}
\hat{c}_{\vec{k}}^{\dagger} & \hat{c}_{-\vec{k}}
\end{pmatrix}\begin{bmatrix}
\cosh^{2}\theta_{\vec{k}} & \sinh\theta_{\vec{k}}\cosh\theta_{\vec{k}}\\
\sinh\theta_{\vec{k}}\cosh\theta_{\vec{k}} & \sinh^{2}\theta_{\vec{k}}
\end{bmatrix}\\
& \times\begin{pmatrix}
\hat{c}_{\vec{k}}\\
\hat{c}_{-\vec{k}}^{\dagger}
\end{pmatrix}
\end{split}
\end{equation}
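A short symbolic check that expanding the row-matrix-column product reproduces the quadratic form obtained before normal ordering; the noncommutative symbols `C`, `c`, `D`, `d` are my placeholders for $\hat{c}_{\vec{k}}^{\dagger}$, $\hat{c}_{\vec{k}}$, $\hat{c}_{-\vec{k}}^{\dagger}$, $\hat{c}_{-\vec{k}}$:

```python
import sympy as sp

th = sp.symbols('theta', real=True)
c, C, d, D = sp.symbols('c C d D', commutative=False)
M = sp.Matrix([[sp.cosh(th)**2, sp.sinh(th)*sp.cosh(th)],
               [sp.sinh(th)*sp.cosh(th), sp.sinh(th)**2]])
row = sp.Matrix([[C, d]])      # (ĉ_k†, ĉ_{−k})
col = sp.Matrix([[c], [D]])    # (ĉ_k, ĉ_{−k}†)
quadratic = sp.expand((row * M * col)[0, 0])
# The four terms of the pre-normal-ordered integrand:
target = sp.expand(sp.cosh(th)**2 * C*c + sp.sinh(th)**2 * d*D
                   + sp.sinh(th)*sp.cosh(th) * (d*c + C*D))
assert sp.expand(quadratic - target) == 0
```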
This representation will help us later in finding the off-diagonal components of the 2-point correlation function, or Green's function.\\[5pt]
Before we proceed to the next subsection, where we start the discussion of coherent states and the partition function, we want to make some comments:
\begin{itemize}
\item First of all, note that the new vacuum state in the new Fock space representation is made out of a combination of states of pairs of $a$-particles with opposite momenta, which can be seen directly from the mathematical definition of the new vacuum state in terms of the old vacuum state.
\item Secondly, since the Hamiltonian in the new Fock space representation breaks the $U(1)$ invariance, we expect that the particle number is not conserved, which will also be reflected in the correlation functions, as we show later.
\end{itemize}
\section{Correlation functions}
\subsection{Description of Coherent states}
Let us start with the simple harmonic oscillator description first, where as usual we have the following algebra
\begin{equation}
[\hat{a},\hat{a}^{\dagger}]=1, \ [\hat{a},\hat{a}]=0=[\hat{a}^{\dagger},\hat{a}^{\dagger}]
\end{equation}
We define coherent states to be the states in the Hilbert space which are eigenstates of the annihilation operator $\hat{a}$, defined in the following way
\begin{equation}
\ket{z}=e^{z\hat{a}^{\dagger}}\ket{0}
\end{equation}
where $z$ is a complex number and $\ket{0}$ is the vacuum state, or ground state in this case. One can easily check that
\begin{equation}
\hat{a}\ket{z}=\hat{a}e^{z\hat{a}^{\dagger}}\ket{0}=z\ket{z}
\end{equation}
and similarly the corresponding dual state can be written as
\begin{equation}
\bra{z}=\bra{0}e^{\bar{z}\hat{a}}
\end{equation}
Similarly, with a little algebra one can show that
\begin{equation}
\braket{z|z'}=e^{\bar{z}z'}
\end{equation}
One can similarly show the resolution of the identity
\begin{equation}
\hat{I}=\int\frac{dz d\bar{z}}{2\pi i}e^{-z\bar{z}}\ket{z}\bra{z}
\end{equation}
And for any normal-ordered operator $A(\hat{a}^{\dagger},\hat{a})$ one can easily show that
\begin{equation}
\bra{z}A(\hat{a}^{\dagger},\hat{a})\ket{z'}=A(\bar{z},z')e^{\bar{z}z'}
\end{equation}
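These identities are easy to verify numerically in a truncated Fock space (the truncation size $N$ and the sample values of $z,z'$ below are my choices, not from the text):

```python
import numpy as np
from math import factorial

N = 40                                   # Fock-space truncation
n = np.arange(N)
sqrt_fact = np.sqrt(np.array([float(factorial(k)) for k in n]))

def coherent(z):
    # Components <n|z> = z^n/√(n!) of the unnormalized |z> = e^{z a†}|0>.
    return z**n / sqrt_fact

a = np.diag(np.sqrt(np.arange(1, N, dtype=float)), k=1)   # annihilation op
z, zp = 0.7 + 0.2j, -0.3 + 0.5j
bra, ket = coherent(z).conj(), coherent(zp)
# <z|z'> = e^{z̄ z'}
assert abs(bra @ ket - np.exp(np.conj(z) * zp)) < 1e-10
# Normal-ordered example A = a†a:  <z|a†a|z'> = z̄ z' e^{z̄ z'}
num = a.conj().T @ a
assert abs(bra @ num @ ket - np.conj(z)*zp*np.exp(np.conj(z)*zp)) < 1e-10
```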
\subsubsection{Number operator as generator of phase}
In the above construction the number operator is defined as
\begin{equation}
\hat{N}=\hat{a}^{\dagger}\hat{a}
\end{equation}
Now consider a coherent state $\ket{z}$ such that $\hat{a}\ket{z}=z\ket{z}$ where $z$ is some complex number. Then we look at the state $e^{-i\hat{N}\theta}\ket{z}$
\begin{equation}
\begin{split}
\hat{a}e^{-i\hat{N}\theta}\ket{z} & =e^{-i\hat{N}\theta}e^{i\hat{N}\theta}\hat{a}e^{-i\hat{N}\theta}\ket{z}\\
=e^{-i\hat{N}\theta} & \Big[\hat{a}+i\theta[\hat{N},\hat{a}]+\frac{(i\theta)^{2}}{2}[\hat{N},[\hat{N},\hat{a}]]+\ldots\Big]\ket{z}\\
=e^{-i\hat{N}\theta} & e^{-i\theta}\hat{a}\ket{z}=z e^{-i\theta}e^{-i\hat{N}\theta}\ket{z}\\
\implies e^{-i\hat{N}\theta}\ket{z} & \propto\ket{z e^{-i\theta}}
\end{split}
\end{equation}
Therefore, we see that the number operator $\hat{N}$ is the generator of a complex phase for coherent states. We will come back to this point later in this article.
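Since $\hat{N}$ is diagonal in the Fock basis, the phase-rotation property can be checked exactly, component by component, in a truncated Fock space (truncation and sample values are my choices):

```python
import numpy as np
from math import factorial

N = 30
n = np.arange(N)
sqrt_fact = np.sqrt(np.array([float(factorial(k)) for k in n]))

def coherent(z):
    return z**n / sqrt_fact              # <n|z> = z^n/√(n!)

theta, z = 0.9, 0.4 + 0.3j
# e^{-i N̂ θ} multiplies the n-th component by e^{-inθ},
# turning z^n into (z e^{-iθ})^n exactly.
rotated = np.exp(-1j * n * theta) * coherent(z)
assert np.allclose(rotated, coherent(z * np.exp(-1j * theta)))
```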
\subsection{Path integral using coherent states}
We now want to compute the matrix elements of the evolution operator $\hat{U}$ defined by
\begin{equation}
\hat{U}(t_{f},t_{i})=e^{-\frac{i}{\hbar}T\hat{H}(\hat{a}^{\dagger},\hat{a})}
\end{equation}
where $T=(t_{f}-t_{i})$ and $\hat{H}(\hat{a}^{\dagger},\hat{a})$ is the Hamiltonian operator of the system in normal-ordered form. Thus, if $\ket{i}$ and $\ket{f}$ denote two arbitrary initial and final states, we can write the matrix element of $\hat{U}(t_{f},t_{i})$ as $\bra{f}\hat{U}(t_{f},t_{i})\ket{i}$. Now we split the whole time interval into $N$ equal segments with $N\rightarrow\infty$; each segment then has length $\epsilon=T/N\rightarrow0^{+}$, and using the first-order approximation we can write
\begin{equation}
\bra{f}\hat{U}(t_{f},t_{i})\ket{i}=\lim_{\epsilon\rightarrow0^{+}}\lim_{N\rightarrow\infty}\bra{f}\left(1-\frac{i}{\hbar}\epsilon\hat{H}(\hat{a}^{\dagger},\hat{a})\right)^{N}\ket{i}
\end{equation}
Then, using an overcomplete set $\{\ket{z_{j}}\}$ at each time $t_{j}$, where $j=1,\ldots,N$, and the resolution of the identity for coherent states, one can show that
\begin{equation}
\begin{split}
\lim_{\epsilon\rightarrow0^{+}}\lim_{N\rightarrow\infty}\bra{f} & \left(1-\frac{i}{\hbar}\epsilon\hat{H}(\hat{a}^{\dagger},\hat{a})\right)^{N}\ket{i}\\
=\int & \left(\prod_{j=1}^{N}\frac{dz_{j}d\bar{z}_{j}}{2\pi i}\right)e^{-\sum_{j}|z_{j}|^{2}}\\
\times\Big[\prod_{k=1}^{N-1}\bra{z_{k+1}} & \left(1-\frac{i}{\hbar}\epsilon\hat{H}(\hat{a}^{\dagger},\hat{a})\right)\ket{z_{k}}\Big]\\
\times\bra{f}\left(1-\frac{i}{\hbar}\epsilon\hat{H}(\hat{a}^{\dagger},\hat{a})\right)\ket{z_{N}} & \bra{z_{1}}\left(1-\frac{i}{\hbar}\epsilon\hat{H}(\hat{a}^{\dagger},\hat{a})\right)\ket{i}
\end{split}
\end{equation}
In the limit $\epsilon\rightarrow0^{+}$ these matrix elements become
\begin{equation}
\begin{split}
\bra{z_{k+1}}\left(1-\frac{i}{\hbar}\epsilon\hat{H}(\hat{a}^{\dagger},\hat{a})\right)\ket{z_{k}} & =\braket{z_{k+1}|z_{k}}\\
& \times\Big[1-\frac{i}{\hbar}\epsilon\hat{H}(\bar{z}_{k+1},z_{k})\Big]
\end{split}
\end{equation}
And using the above information we can write
\begin{equation}
\begin{split}
\bra{f}\hat{U}(t_{f},t_{i})\ket{i} & =\int\left(\prod_{j=1}^{N}\frac{dz_{j}d\bar{z}_{j}}{2\pi i}\right)e^{-\sum_{j}|z_{j}|^{2}}e^{\sum_{j=1}^{N-1}\bar{z}_{j+1}z_{j}}\\
\times & \prod_{k=1}^{N-1}\Big[1-\frac{i}{\hbar}\epsilon\hat{H}(\bar{z}_{k+1},z_{k})\Big]\braket{f|z_{N}}\braket{z_{1}|i}\\
& \times\Big[1-\frac{i\epsilon}{\hbar}\frac{\bra{f}\hat{H}\ket{z_{N}}}{\braket{f|z_{N}}}\Big]\Big[1-\frac{i\epsilon}{\hbar}\frac{\bra{z_{1}}\hat{H}\ket{i}}{\braket{z_{1}|i}}\Big]\\
=\int\mathcal{D}z\mathcal{D}\bar{z} & e^{\frac{i}{\hbar}\int_{t_{i}}^{t_{f}}dt\Big[\frac{\hbar}{2i}(z\partial_{t}\bar{z}-\bar{z}\partial_{t}z)-H(\bar{z},z)\Big]}\\
\times & e^{\frac{1}{2}(|z_{i}|^{2}+|z_{f}|^{2})}\bar{\psi}_{f}(z_{f})\psi_{i}(\bar{z}_{i})
\end{split}
\end{equation}
where we have used the fact that
\begin{equation}
\begin{split}
\bra{f} & =\int\frac{dz_{f}d\bar{z}_{f}}{2\pi i}e^{-|z_{f}|^{2}}\bar{\psi}_{f}(z_{f})\bra{z_{f}}\\
\ket{i} & =\int\frac{dz_{i}d\bar{z}_{i}}{2\pi i}e^{-|z_{i}|^{2}}\psi_{i}(\bar{z}_{i})\ket{z_{i}}
\end{split}
\end{equation}
\subsection{Extend the path integral to field theory}
As we have seen in free-field theory, we have harmonic oscillators with different momentum modes, for which the Hamiltonian in the old Fock space is
\begin{equation}
\hat{H}=\int d^{3}k \ \varepsilon_{\vec{k}}\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}
\end{equation}
Now we define states \cite{book:18638}
\begin{equation}
\ket{\phi}=e^{\int d^{3}k\phi(\vec{k})\hat{c}_{\vec{k}}^{\dagger}}\ket{0}
\end{equation}
and one can now check that
\begin{equation}
\hat{c}_{\vec{l}}\ket{\phi}=\phi({\vec{l}})\ket{\phi}
\end{equation}
as well as obeying the resolution of the identity in this space of states
\begin{equation}
\hat{I}=\int\mathcal{D}\phi\mathcal{D}\bar{\phi}e^{-\int d^{3}k|\phi(\vec{k})|^{2}}\ket{\phi}\bra{\phi}
\end{equation}
Therefore, using above extended quantities one can easily show that
\begin{equation}
\begin{split}
\bra{f} & e^{-\frac{i}{\hbar}T\hat{H}}\ket{i}\\
=\int\mathcal{D}\phi\mathcal{D}\bar{\phi} & \bar{\Psi}_{f}(\phi(t_{f},\vec{k}))\Psi_{i}(\bar{\phi}(t_{i},\vec{k}))e^{\frac{1}{2}\int d^{3}k(|\phi(t_{i},\vec{k})|^{2}+|\phi(t_{f},\vec{k})|^{2})}\\
\times & e^{\frac{i}{\hbar}\int_{t_{i}}^{t_{f}}\int d^{3}k\Big[\frac{\hbar}{2i}(\phi(t,\vec{k})\partial_{t}\bar{\phi}(t,\vec{k})-\bar{\phi}(t,\vec{k})\partial_{t}\phi(t,\vec{k}))-\varepsilon_{\vec{k}}\bar{\phi}(t,\vec{k})\phi(t,\vec{k})\Big]}\\
=\int\mathcal{D}\phi\mathcal{D}\bar{\phi} & \ e^{\frac{i}{\hbar}\int_{t_{i}}^{t_{f}}\int d^{3}k\Big[i\hbar\bar{\phi}(t,\vec{k})\partial_{t}\phi(t,\vec{k})-\varepsilon_{\vec{k}}\bar{\phi}(t,\vec{k})\phi(t,\vec{k})\Big]}\\
\times & \bar{\Psi}_{f}(\phi(t_{f},\vec{k}))\Psi_{i}(\bar{\phi}(t_{i},\vec{k}))e^{\frac{1}{2}\int d^{3}k(|\phi(t_{i},\vec{k})|^{2}+|\phi(t_{f},\vec{k})|^{2})}
\end{split}
\end{equation}
Now we consider the partition function in the grand canonical ensemble, defined by
\begin{equation}
\mathcal{Z}=\text{Tr}e^{-\beta(\hat{H}-\mu\hat{N})}
\end{equation}
which we want to evaluate using the path integral formalism, extending it in the following way:
\begin{itemize}
\item $\ket{i}=\ket{f}$, chosen arbitrarily;
\item summing over the boundary states;
\item Wick rotation to imaginary time $t\rightarrow-i\tau$ with time span $T\rightarrow-i\beta\hbar$ \cite{book:428978}.
\end{itemize}
The result is the following \cite{laine2016basics}, \cite{yang2011introduction}
\begin{equation}
\mathcal{Z}=\int\mathcal{D}\phi\mathcal{D}\bar{\phi} \ e^{-S_{E}(\phi,\bar{\phi})}
\end{equation}
where
\begin{equation}
S_{E}(\phi,\bar{\phi})=\frac{1}{\hbar}\int_{0}^{\beta\hbar}d\tau\int d^{3}k\bar{\phi}(\tau,\vec{k})\Big[\hbar\partial_{\tau}-\xi_{\vec{k}}\Big]\phi(\tau,\vec{k})
\end{equation}
with $\xi_{\vec{k}}=\varepsilon_{\vec{k}}-\mu$ and with the periodic boundary condition $\phi(\tau,\vec{k})=\phi(\tau+\beta\hbar,\vec{k})$. This requirement suggests that we can decompose $\phi(\tau,\vec{k})$ in the following way
\begin{equation}
\phi(\tau,\vec{k})=\sum_{n}e^{i\omega_{n}\tau}\phi_{n}(\vec{k})
\end{equation}
where $\omega_{n}=\frac{2\pi n}{\beta\hbar}$ are known as the Matsubara frequencies.
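As a quick illustration (the value of $\beta\hbar$ below is arbitrary), the following sketch checks that each Matsubara mode $e^{i\omega_{n}\tau}$ is indeed $\beta\hbar$-periodic:

```python
import numpy as np

beta_hbar = 2.0  # illustrative value of beta*hbar

def matsubara(n, beta_hbar):
    """Bosonic Matsubara frequency omega_n = 2*pi*n/(beta*hbar)."""
    return 2.0 * np.pi * n / beta_hbar

tau = 0.37  # arbitrary imaginary time
for n in range(-2, 3):
    w = matsubara(n, beta_hbar)
    # each mode e^{i w_n tau} has period beta*hbar in tau
    assert np.isclose(np.exp(1j * w * tau), np.exp(1j * w * (tau + beta_hbar)))

print([matsubara(n, beta_hbar) for n in range(3)])
```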
\subsection{Correlation function in curved spacetime under Bogoliubov transformation}
Recall that after the Bogoliubov transformation the new Hamiltonian for the real scalar free-field theory took the form
\begin{equation}
\begin{split}
\hat{H} & =\int d^{3}k \ \varepsilon_{\vec{k}}[\cosh^{2}\theta_{\vec{k}}\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}+\sinh^{2}\theta_{\vec{k}}\hat{c}_{-\vec{k}}\hat{c}_{-\vec{k}}^{\dagger}\\
& +\sinh\theta_{\vec{k}}\cosh\theta_{\vec{k}}(\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}+\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger})]\\
& \simeq\int d^{3}k \ \varepsilon_{\vec{k}}[(\cosh^{2}\theta_{\vec{k}}+\sinh^{2}\theta_{-\vec{k}})\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}\\
& +\sinh\theta_{\vec{k}}\cosh\theta_{\vec{k}}(\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}+\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger})]
\end{split}
\end{equation}
Therefore, for this Hamiltonian, the Euclidean action in the exponent of the path-integral representation of the partition function is
\begin{equation}
\begin{split}
S_{E}(\bar{\phi},\phi) & =\frac{1}{\hbar}\int_{0}^{\beta\hbar}d\tau\int d^{3}k\\
& \Big[\bar{\phi}(\tau,\vec{k})(\hbar\partial_{\tau}-(\cosh^{2}\theta_{\vec{k}}+\sinh^{2}\theta_{\vec{k}})\xi_{\vec{k}})\phi(\tau,-\vec{k})\\
-\xi_{\vec{k}}\sinh\theta_{\vec{k}} & \cosh\theta_{\vec{k}}(\phi(\tau,\vec{k})\phi(\tau,-\vec{k})+\bar{\phi}(\tau,\vec{k})\bar{\phi}(\tau,-\vec{k}))\Big]
\end{split}
\end{equation}
which can be written in matrix form as follows (denoting $\chi_{\vec{k}}=\cosh^{2}\theta_{\vec{k}}+\sinh^{2}\theta_{\vec{k}}$, $\eta_{\vec{k}}=\cosh\theta_{\vec{k}}\sinh\theta_{\vec{k}}$, and assuming $\chi_{\vec{k}}=\chi_{-\vec{k}}$, $\eta_{\vec{k}}=\eta_{-\vec{k}}$)
\begin{equation}
\begin{split}
S_{E}(\bar{\phi},\phi)=\frac{1}{\hbar}\int_{0}^{\beta\hbar}d\tau\int d^{3}k & \begin{pmatrix}
\bar{\phi}(\tau,\vec{k}) & \phi(\tau,-\vec{k})
\end{pmatrix}\\
\times\begin{bmatrix}
\frac{1}{2}(\hbar\partial_{\tau}-\xi_{\vec{k}}\chi_{\vec{k}}) & \xi_{\vec{k}}\eta_{\vec{k}}\\
\xi_{\vec{k}}\eta_{\vec{k}} & \frac{1}{2}(\hbar\overleftarrow{\partial}_{\tau}-\xi_{\vec{k}}\chi_{\vec{k}})
\end{bmatrix}
& \times\begin{pmatrix}
\phi(\tau,\vec{k})\\
\bar{\phi}(\tau,-\vec{k})
\end{pmatrix}\\
=\beta\sum_{n}\int d^{3}k \begin{pmatrix}
\bar{\phi}(\omega_{n},\vec{k}) & \phi(-\omega_{n},-\vec{k})
\end{pmatrix} & \times\\
\begin{bmatrix}
\frac{1}{2}(i\hbar\omega_{n}-\xi_{\vec{k}}\chi_{\vec{k}}) & \xi_{\vec{k}}\eta_{\vec{k}}\\
\xi_{\vec{k}}\eta_{\vec{k}} & \frac{1}{2}(-i\hbar\omega_{n}-\xi_{\vec{k}}\chi_{\vec{k}})
\end{bmatrix} & \times\begin{pmatrix}
\phi(\omega_{n},\vec{k})\\
\bar{\phi}(-\omega_{n},-\vec{k})
\end{pmatrix}
\end{split}
\end{equation}
Now we define the generating functional, which yields correlation functions of any order. It is defined as (from now on we set $\hbar=1$)
\begin{equation}
\begin{split}
\mathcal{Z}[J,\bar{J}] & =\frac{1}{\mathcal{Z}}\int\mathcal{D}\phi \ \mathcal{D}\bar{\phi} \ e^{-S_{E}(\phi,\bar{\phi})}\\
& \times e^{\sum_{n}\int d^{3}k(\bar{J}(\omega_{n},\vec{k})\phi(\omega_{n},\vec{k})+J(\omega_{n},\vec{k})\bar{\phi}(\omega_{n},\vec{k}))}\\
& =e^{\bar{J}\mathcal{G}J}
\end{split}
\end{equation}
where $\bar{J}\mathcal{G}J$ denotes a matrix multiplication with a sum over modes, and $\mathcal{G}$ is the propagator matrix, or 2-point function matrix. Let us write down $\bar{J}\mathcal{G}J$ explicitly
\begin{equation}
\begin{split}
\bar{J}\mathcal{G}J & =\sum_{n}\int d^{3}k\begin{pmatrix}
\bar{J}(\omega_{n},\vec{k}) & J(-\omega_{n},-\vec{k})
\end{pmatrix}\\
& \times\frac{4}{\omega_{n}^{2}+\xi_{\vec{k}}^{2}(\chi_{\vec{k}}^{2}-4\eta_{\vec{k}}^{2})}\\
\times & \begin{bmatrix}
\frac{1}{2}(-i\omega_{n}-\xi_{\vec{k}}\chi_{\vec{k}}) & -\xi_{\vec{k}}\eta_{\vec{k}}\\
-\xi_{\vec{k}}\eta_{\vec{k}} & \frac{1}{2}(i\omega_{n}-\xi_{\vec{k}}\chi_{\vec{k}})
\end{bmatrix}\begin{pmatrix}
J(\omega_{n},\vec{k})\\
\bar{J}(-\omega_{n},-\vec{k})
\end{pmatrix}
\end{split}
\end{equation}
Note that for $\theta_{\vec{k}}=0$ for all $\vec{k}$, which is equivalent to not having performed the Bogoliubov transformation, we recover the known result that the 2-point correlation function is given by the single quantity $\langle\bar{\phi}(\omega_{n},\vec{k})\phi(\omega_{n},\vec{k})\rangle\propto\frac{1}{i\omega_{n}-\xi_{\vec{k}}}$.\\[5pt]
Now let us carry out the matrix multiplication and write down $\bar{J}\mathcal{G}J$ explicitly
\begin{equation}
\begin{split}
\bar{J}\mathcal{G}J & =\sum_{n}\int d^{3}k\frac{4}{\omega_{n}^{2}+\xi_{\vec{k}}^{2}}\Big[\bar{J}(\omega_{n},\vec{k})(-i\omega_{n}-\xi_{\vec{k}}\chi_{\vec{k}})J(\omega_{n},\vec{k})\\
& -\xi_{\vec{k}}\eta_{\vec{k}}(\bar{J}(\omega_{n},\vec{k})\bar{J}(-\omega_{n},-\vec{k})+J(\omega_{n},\vec{k})J(-\omega_{n},-\vec{k}))\Big]
\end{split}
\end{equation}
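The denominator here agrees with the one in the previous expression because $\chi_{\vec{k}}^{2}-4\eta_{\vec{k}}^{2}=(\cosh^{2}\theta_{\vec{k}}+\sinh^{2}\theta_{\vec{k}})^{2}-4\cosh^{2}\theta_{\vec{k}}\sinh^{2}\theta_{\vec{k}}=(\cosh^{2}\theta_{\vec{k}}-\sinh^{2}\theta_{\vec{k}})^{2}=1$. A one-line numerical confirmation of this hyperbolic identity:

```python
import numpy as np

theta = np.linspace(-3.0, 3.0, 101)      # sample Bogoliubov angles
chi = np.cosh(theta)**2 + np.sinh(theta)**2
eta = np.cosh(theta) * np.sinh(theta)

# (cosh^2 + sinh^2)^2 - 4 cosh^2 sinh^2 = (cosh^2 - sinh^2)^2 = 1
print(np.allclose(chi**2 - 4 * eta**2, 1.0))  # True
```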
Note that in this case the non-vanishing 2-point functions are
\begin{equation}
\begin{split}
\langle\phi(\omega_{n},\vec{k})\phi(-\omega_{n},-\vec{k})\rangle & =-\frac{8\xi_{\vec{k}}\eta_{\vec{k}}}{\omega_{n}^{2}+\xi_{\vec{k}}^{2}}\\
\langle\bar{\phi}(\omega_{n},\vec{k})\bar{\phi}(-\omega_{n},-\vec{k})\rangle & =-\frac{8\xi_{\vec{k}}\eta_{\vec{k}}}{\omega_{n}^{2}+\xi_{\vec{k}}^{2}}\\
\langle\bar{\phi}(\omega_{n},\vec{k})\phi(\omega_{n},\vec{k})\rangle & =\frac{4(-i\omega_{n}-\xi_{\vec{k}}\chi_{\vec{k}})}{\omega_{n}^{2}+\xi_{\vec{k}}^{2}}
\end{split}
\end{equation}
Note that our result not only matches the known result in the appropriate limit but is also consistent with the fact that the first two correlators should be complex conjugates of each other; here this is trivially satisfied because $\eta_{\vec{k}},\chi_{\vec{k}}$ are real functions of $\vec{k}$, as we have assumed from the beginning of the calculation for convenience. Note also that the non-vanishing of the first two correlation functions is a consequence of the violation of particle number conservation.\\[5pt]
\subsection{Evolution of number of particle in a state}
To see that the number of particles contained in states of the new Fock space is not conserved in free-field theory, one can also compute the following quantity from the Hamiltonian defined in the new Fock space. Note that, according to eq.(25),
\begin{equation}
\begin{split}
\frac{d}{dt}(\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}) & =i[\hat{H},\hat{c}_{\vec{k}}^{\dagger}]\hat{c}_{\vec{k}}+i\hat{c}_{\vec{k}}^{\dagger}[\hat{H},\hat{c}_{\vec{k}}]\\
& =i\int d^{3}l \ \varepsilon_{\vec{l}}\Big[[(\chi_{\vec{l}}\hat{c}_{\vec{l}}^{\dagger}\hat{c}_{\vec{l}}+\eta_{\vec{l}}(\hat{c}_{-\vec{l}}\hat{c}_{\vec{l}}+\hat{c}_{\vec{l}}^{\dagger}\hat{c}_{-\vec{l}}^{\dagger})),\hat{c}_{\vec{k}}^{\dagger}]\hat{c}_{\vec{k}}\\
& +\hat{c}_{\vec{k}}^{\dagger}[(\chi_{\vec{l}}\hat{c}_{\vec{l}}^{\dagger}\hat{c}_{\vec{l}}+\eta_{\vec{l}}(\hat{c}_{-\vec{l}}\hat{c}_{\vec{l}}+\hat{c}_{\vec{l}}^{\dagger}\hat{c}_{-\vec{l}}^{\dagger})),\hat{c}_{\vec{k}}]\Big]\\
& =i\varepsilon_{\vec{k}}\Big[\chi_{\vec{k}}\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}+2\eta_{\vec{k}}\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}-\chi_{\vec{k}}\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}-2\eta_{\vec{k}}\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger}\Big]\\
& =2i\varepsilon_{\vec{k}}\eta_{\vec{k}}(\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}-\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger})\neq0
\end{split}
\end{equation}
Now, if we consider a state, say $\ket{\psi}$, in the new Fock space, then
\begin{equation}
\begin{split}
\frac{d}{dt}\bra{\psi} & \hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}\ket{\psi}\\
& =i\bra{\psi}[\hat{H},\hat{c}_{\vec{k}}^{\dagger}]\hat{c}_{\vec{k}}+i\hat{c}_{\vec{k}}^{\dagger}[\hat{H},\hat{c}_{\vec{k}}]\ket{\psi}\\
& =2i\varepsilon_{\vec{k}}\eta_{\vec{k}}\bra{\psi}(\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}-\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger})\ket{\psi}\\
& =2i\varepsilon_{\vec{k}}\eta_{\vec{k}}[\bra{\psi}\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}\ket{\psi}-\bra{\psi}\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}\ket{\psi}^{*}]\\
& =-4\varepsilon_{\vec{k}}\eta_{\vec{k}}\text{Im}[\bra{\psi}\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}\ket{\psi}]\neq 0, \ \text{in general}\\
\implies\frac{d}{dt}\bra{\psi} & \sum_{\vec{k}}\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}\ket{\psi}=\frac{d}{dt}\bra{\psi}\hat{N}\ket{\psi}\neq0
\end{split}
\end{equation}
which becomes clear if one uses the completeness relation of the occupation-number basis of the new Fock space. This clearly shows the violation of particle number conservation.\\[5pt]
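The non-conservation of $\hat{N}$ can also be seen in a finite-dimensional toy version of the problem: truncating the two modes $\vec{k}$ and $-\vec{k}$ to a small Fock space and building the analogue of the rotated Hamiltonian, the commutator $[\hat{H},\hat{N}]$ vanishes only when the pair terms are switched off (all numerical values below are illustrative):

```python
import numpy as np

n_max = 6  # per-mode Fock truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # single-mode annihilation
I = np.eye(n_max)

c1 = np.kron(a, I)   # mode  k
c2 = np.kron(I, a)   # mode -k
N = c1.conj().T @ c1 + c2.conj().T @ c2          # number operator

def H(chi, eta, eps=1.0):
    """Two-mode analogue of the Bogoliubov-rotated Hamiltonian."""
    return eps * (chi * (c1.conj().T @ c1 + c2.conj().T @ c2)
                  + eta * (c2 @ c1 + c1.conj().T @ c2.conj().T))

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(H(1.0, 0.0), N), 0))  # True: no pair terms, N conserved
print(np.allclose(comm(H(1.5, 0.5), N), 0))  # False: pair terms break it
```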
Recall, as mentioned earlier, that the number operator generates phases of coherent states. In the above calculation we can clearly see that the number operator does not commute with the Hamiltonian under the Bogoliubov transformation, $[\hat{H},\hat{N}]\neq0$; therefore the phase generator, i.e.\ the charge corresponding to the U(1) symmetry, is broken. Because of this non-commutativity, the Hamiltonian and the number operator do not have simultaneous eigenstates. For example, the vacuum state, defined as the state annihilated by the annihilation operator, is no longer an eigenstate of the Hamiltonian, although it contains zero particles since the vacuum expectation value of the Hamiltonian is zero. We can also check that no particle production happens in the vacuum state, since
\begin{equation}
\begin{split}
\frac{d}{dt}\bra{0(\theta)} & \hat{c}_{\vec{k}}^{\dagger}\hat{c}_{\vec{k}}\ket{0(\theta)}\\
=2i\varepsilon_{\vec{k}}\eta_{\vec{k}}\bra{0(\theta)} & (\hat{c}_{-\vec{k}}\hat{c}_{\vec{k}}-\hat{c}_{\vec{k}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger})\ket{0(\theta)}=0\\
\implies\frac{d}{dt}\bra{0(\theta)} & \hat{N}\ket{0(\theta)}=0
\end{split}
\end{equation}
Note that demonstrating particle creation out of the vacuum by showing that the old vacuum state of the old Fock space representation is not the true vacuum of the new Fock space (as is mostly done in \cite{degner2010cosmological}, \cite{Ford:1997hb}, \cite{biswas1995particle}, \cite{book:274311}, \cite{hossenfelder2003particle}, \cite{0305-4470-8-4-022}, \cite{ford1987gravitational}, \cite{birrell1980massive}, \cite{winitzki2005cosmological}, \cite{frieman1989particle}) is not equivalent to showing the violation of particle number conservation, because the old vacuum state and the multi-particle states of the old Fock space do not belong to the new Fock space in field theory, due to the infinite-volume limit. (In some cases one even considers the amplitude for particle propagation under time evolution in the path-integral formalism, taking the initial and final states to be vacuum states at different times \cite{chitre1977path}, \cite{duru1986particle}, which is also not a correct description.) Therefore, one should carefully choose the action of operators according to their domain.\\[5pt]
Now let us check whether the vacuum state remains a vacuum state under infinitesimal time evolution:
\begin{equation}
\begin{split}
\hat{c}_{\vec{k}}e^{-i\hat{H}\epsilon}\ket{0(\theta)} & =e^{-i\hat{H}\epsilon}e^{i\hat{H}\epsilon}\hat{c}_{\vec{k}}e^{-i\hat{H}\epsilon}\ket{0(\theta)}\\
=e^{-i\hat{H}\epsilon} & \Big[\hat{c}_{\vec{k}}+i\epsilon[\hat{H},\hat{c}_{\vec{k}}]+\mathcal{O}(\epsilon^{2})\Big]\ket{0(\theta)}\\
=e^{-i\hat{H}\epsilon} & \Big[i\epsilon[\hat{H},\hat{c}_{\vec{k}}]+\mathcal{O}(\epsilon^{2})\Big]\ket{0(\theta)}\\
=e^{-i\hat{H}\epsilon} & \Big[-i\epsilon\varepsilon_{\vec{k}}(\chi_{\vec{k}}\hat{c}_{\vec{k}}+2\eta_{\vec{k}}\hat{c}_{-\vec{k}}^{\dagger})+\mathcal{O}(\epsilon^{2})\Big]\ket{0(\theta)}\neq 0
\end{split}
\end{equation}
So we see that even under an infinitesimal time evolution the evolved state is no longer annihilated by $\hat{c}_{\vec{k}}$, i.e.\ it is no longer the vacuum state of the new, transformed Fock space.\\[5pt]
Note also that in this whole setup we have not considered the action functional of the theory. From the action one can easily see that, since this is a real scalar field theory, there is no breaking of the $U(1)$ symmetry at the level of the action. Once we write down the Hamiltonian, however, we must define the proper action of $U(1)$ in order to check not charge conservation but particle number conservation: charge conservation need not be violated, since from the vacuum state one can only produce particle-antiparticle pairs, and since for a real scalar field the charge is zero, we do not have to worry about charge conservation.
\subsection{Non-invariance under U(1) action in an example of 2-particle scattering interaction}
Now consider the system interacting through the following interaction term in the old Fock space representation
\begin{equation}
\hat{H}_{\text{int}}=\sum_{\vec{q},\vec{k},\vec{l}}v(\vec{q})\hat{a}_{\vec{k}+\vec{q}}^{\dagger}\hat{a}_{\vec{l}-\vec{q}}^{\dagger}\hat{a}_{\vec{l}}\hat{a}_{\vec{k}}
\end{equation}
where $v(\vec{q})$ is the interaction strength, which depends on the momentum exchanged in a 2-particle scattering process at tree level.\\[5pt]
If we perform the Bogoliubov transformation (and then take the thermodynamic limit, since we are considering field theory), which is equivalent to switching on gravity, then in the new Fock space representation we obtain the following interaction term
\begin{equation}
\begin{split}
\hat{H}_{\text{int}}=\sum_{\vec{q},\vec{k},\vec{l}}v(\vec{q}) & \Big[(\cosh\theta_{\vec{k}+\vec{q}}\hat{c}_{\vec{k}+\vec{q}}^{\dagger}+\sinh\theta_{\vec{k}+\vec{q}}\hat{c}_{-\vec{k}-\vec{q}})\\
& \times(\cosh\theta_{\vec{l}-\vec{q}}\hat{c}_{\vec{l}-\vec{q}}^{\dagger}+\sinh\theta_{\vec{l}-\vec{q}}\hat{c}_{-\vec{l}+\vec{q}})\\
\times(\cosh\theta_{\vec{l}}\hat{c}_{\vec{l}}+\sinh\theta_{\vec{l}} & \hat{c}_{-\vec{l}}^{\dagger})\times(\cosh\theta_{\vec{k}}\hat{c}_{\vec{k}}+\sinh\theta_{\vec{k}}\hat{c}_{-\vec{k}}^{\dagger})\Big]\\
=\cosh\theta_{\vec{k}+\vec{q}}\cosh\theta_{\vec{l}-\vec{q}} & \cosh\theta_{\vec{l}}\cosh\theta_{\vec{k}}\hat{c}_{\vec{k}+\vec{q}}^{\dagger}\hat{c}_{\vec{l}-\vec{q}}^{\dagger}\hat{c}_{\vec{l}}\hat{c}_{\vec{k}}\\
+\cosh\theta_{\vec{k}+\vec{q}}\cosh\theta_{\vec{l}-\vec{q}} & \cosh\theta_{\vec{l}}\sinh\theta_{\vec{k}}\hat{c}_{\vec{k}+\vec{q}}^{\dagger}\hat{c}_{\vec{l}-\vec{q}}^{\dagger}\hat{c}_{\vec{l}}\hat{c}_{-\vec{k}}^{\dagger}\\
+\cosh\theta_{\vec{k}+\vec{q}}\cosh\theta_{\vec{l}-\vec{q}} & \sinh\theta_{\vec{l}}\cosh\theta_{\vec{k}}\hat{c}_{\vec{k}+\vec{q}}^{\dagger}\hat{c}_{\vec{l}-\vec{q}}^{\dagger}\hat{c}_{-\vec{l}}^{\dagger}\hat{c}_{\vec{k}}\\
+\cosh\theta_{\vec{k}+\vec{q}}\cosh\theta_{\vec{l}-\vec{q}} & \sinh\theta_{\vec{l}}\sinh\theta_{\vec{k}}\hat{c}_{\vec{k}+\vec{q}}^{\dagger}\hat{c}_{\vec{l}-\vec{q}}^{\dagger}\hat{c}_{-\vec{l}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger}\\
+\cosh\theta_{\vec{k}+\vec{q}}\sinh\theta_{\vec{l}-\vec{q}} & \cosh\theta_{\vec{l}}\cosh\theta_{\vec{k}}\hat{c}_{\vec{k}+\vec{q}}^{\dagger}\hat{c}_{-\vec{l}+\vec{q}}\hat{c}_{\vec{l}}\hat{c}_{\vec{k}}\\
+\cosh\theta_{\vec{k}+\vec{q}}\sinh\theta_{\vec{l}-\vec{q}} & \cosh\theta_{\vec{l}}\sinh\theta_{\vec{k}}\hat{c}_{\vec{k}+\vec{q}}^{\dagger}\hat{c}_{-\vec{l}+\vec{q}}\hat{c}_{\vec{l}}\hat{c}_{-\vec{k}}^{\dagger}\\
+\cosh\theta_{\vec{k}+\vec{q}}\sinh\theta_{\vec{l}-\vec{q}} & \sinh\theta_{\vec{l}}\cosh\theta_{\vec{k}}\hat{c}_{\vec{k}+\vec{q}}^{\dagger}\hat{c}_{-\vec{l}+\vec{q}}\hat{c}_{-\vec{l}}^{\dagger}\hat{c}_{\vec{k}}\\
+\cosh\theta_{\vec{k}+\vec{q}}\sinh\theta_{\vec{l}-\vec{q}} & \sinh\theta_{\vec{l}}\sinh\theta_{\vec{k}}\hat{c}_{\vec{k}+\vec{q}}^{\dagger}\hat{c}_{-\vec{l}+\vec{q}}\hat{c}_{-\vec{l}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger}\\
+\sinh\theta_{\vec{k}+\vec{q}}\cosh\theta_{\vec{l}-\vec{q}} & \cosh\theta_{\vec{l}}\cosh\theta_{\vec{k}}\hat{c}_{-\vec{k}-\vec{q}}\hat{c}_{\vec{l}-\vec{q}}^{\dagger}\hat{c}_{\vec{l}}\hat{c}_{\vec{k}}\\
+\sinh\theta_{\vec{k}+\vec{q}}\cosh\theta_{\vec{l}-\vec{q}} & \cosh\theta_{\vec{l}}\sinh\theta_{\vec{k}}\hat{c}_{-\vec{k}-\vec{q}}\hat{c}_{\vec{l}-\vec{q}}^{\dagger}\hat{c}_{\vec{l}}\hat{c}_{-\vec{k}}^{\dagger}\\
+\sinh\theta_{\vec{k}+\vec{q}}\cosh\theta_{\vec{l}-\vec{q}} & \sinh\theta_{\vec{l}}\cosh\theta_{\vec{k}}\hat{c}_{-\vec{k}-\vec{q}}\hat{c}_{\vec{l}-\vec{q}}^{\dagger}\hat{c}_{-\vec{l}}^{\dagger}\hat{c}_{\vec{k}}\\
+\sinh\theta_{\vec{k}+\vec{q}}\cosh\theta_{\vec{l}-\vec{q}} & \sinh\theta_{\vec{l}}\sinh\theta_{\vec{k}}\hat{c}_{-\vec{k}-\vec{q}}\hat{c}_{\vec{l}-\vec{q}}^{\dagger}\hat{c}_{-\vec{l}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger}\\
+\sinh\theta_{\vec{k}+\vec{q}}\sinh\theta_{\vec{l}-\vec{q}} & \cosh\theta_{\vec{l}}\cosh\theta_{\vec{k}}\hat{c}_{-\vec{k}-\vec{q}}\hat{c}_{-\vec{l}+\vec{q}}\hat{c}_{\vec{l}}\hat{c}_{\vec{k}}\\
+\sinh\theta_{\vec{k}+\vec{q}}\sinh\theta_{\vec{l}-\vec{q}} & \cosh\theta_{\vec{l}}\sinh\theta_{\vec{k}}\hat{c}_{-\vec{k}-\vec{q}}\hat{c}_{-\vec{l}+\vec{q}}\hat{c}_{\vec{l}}\hat{c}_{-\vec{k}}^{\dagger}\\
\end{split}
\end{equation}
\begin{align*}
\begin{split}
+\sinh\theta_{\vec{k}+\vec{q}}\sinh\theta_{\vec{l}-\vec{q}} & \sinh\theta_{\vec{l}}\cosh\theta_{\vec{k}}\hat{c}_{-\vec{k}-\vec{q}}\hat{c}_{-\vec{l}+\vec{q}}\hat{c}_{-\vec{l}}^{\dagger}\hat{c}_{\vec{k}}\\
+\sinh\theta_{\vec{k}+\vec{q}}\sinh\theta_{\vec{l}-\vec{q}} & \sinh\theta_{\vec{l}}\sinh\theta_{\vec{k}}\hat{c}_{-\vec{k}-\vec{q}}\hat{c}_{-\vec{l}+\vec{q}}\hat{c}_{-\vec{l}}^{\dagger}\hat{c}_{-\vec{k}}^{\dagger}
\end{split}
\end{align*}
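The sixteen terms above arise mechanically from choosing either the $\cosh$ or the $\sinh$ piece of each of the four transformed operators. A small bookkeeping sketch (purely symbolic; the string labels are illustrative) enumerates the coefficient patterns:

```python
from itertools import product

# Each transformed operator is cosh(theta_p) c_p + sinh(theta_p) c'_p,
# with c' the partner operator at -p.  Expanding the quartic interaction
# gives one term per cosh/sinh choice from each of the four factors.
momenta = ["k+q", "l-q", "l", "k"]
terms = []
for choice in product(("cosh", "sinh"), repeat=4):
    coeff = " ".join(f"{fn}(theta_{p})" for fn, p in zip(choice, momenta))
    terms.append(coeff)

print(len(terms))   # 16 distinct coefficient patterns
print(terms[0])
```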
Note that even the interaction term gets modified in curved spacetime, in such a way that it violates particle number conservation, since the interaction is not invariant under the action of the global $U(1)$ group; momentum conservation, however, still holds. Remember that we have chosen a curved spacetime where the notion of momentum modes is well-defined, meaning that the spacetime line element, or metric, is invariant under spatial translations.\\[5pt]
According to the non-vanishing 2-point functions we have found, all the interaction terms of the interaction Hamiltonian in the new Fock space representation contribute to the 4-point and higher-order correlation functions of the interacting theory, which is easy to see in a perturbative approach.\\[5pt]
\subsection{Conclusion}
At the beginning of this article I emphasized that, in the thermodynamic limit, although we have two disjoint vector spaces, we can still perform the canonical transformation. We also restrict ourselves to the new Fock space, because after taking the infinite-volume limit we cannot get back to the old Fock space. In this article I have shown how to look at the particle production phenomenon under a change of coordinates, or a change of frame, which is equivalent to performing a Bogoliubov transformation in field theory in the thermodynamic limit. We have shown how a change of frame breaks both the global and local U(1) invariance, suitably defined. We have also seen that no particle production out of the vacuum state of the new transformed Fock space happens under time evolution, although it can happen out of other many-particle states; moreover, the vacuum state is not an eigenvector of the Hamiltonian operator in the transformed Fock space and does not remain the vacuum state under time evolution.
\section{Acknowledgement}
The author wants to thank Dr. Golam Mortuza Hossain and Gopal Sardar for helpful discussions regarding the subject matter and for their comments on the idea of this paper. The author would also like to thank CSIR for supporting this work through a JRF fellowship.
\bibliographystyle{plain}
\section{Introduction} \label{sec:1}
In this paper, we study transport equations with nonlocal velocity. One of the most well-known equations is the two-dimensional Euler equation in vorticity form,
\[
\omega_{t}+u\cdot \nabla \omega=0,
\]
where the velocity $u$ is recovered from the vorticity $\omega$ through
\[
u=\nabla^\perp(-\Delta)^{-1}\omega \quad \text{or equivalently} \quad \widehat{u}(\xi)=\frac{i\xi^{\perp}}{|\xi|^{2}}\widehat{\omega}(\xi).
\]
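Mode by mode, the Biot--Savart multiplier $\widehat{u}(\xi)=\frac{i\xi^{\perp}}{|\xi|^{2}}\widehat{\omega}(\xi)$ automatically produces a divergence-free velocity, since $\xi\cdot\xi^{\perp}=0$. A minimal numerical sketch (the frequency and amplitude below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(size=2)          # a nonzero frequency vector
omega_hat = 1.3 + 0.7j           # sample vorticity Fourier mode

xi_perp = np.array([-xi[1], xi[0]])
u_hat = 1j * xi_perp / np.dot(xi, xi) * omega_hat   # Biot-Savart multiplier

# incompressibility holds mode by mode: xi . u_hat = 0
print(np.isclose(np.vdot(xi, u_hat), 0))  # True
```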
Other nonlocal and quadratically nonlinear equations, such as the surface quasi-geostrophic equation, the incompressible porous medium equation, the Stokes equations, and the magneto-geostrophic equation in multiple dimensions, have been studied intensively, as one can see in \cite{Bae 2, Bae, Baker, Carrillo, CC, CC2, Chae, De Gregorio, Kiselev, Lazar2, LiRodrigo, LiRodrigo2, Morlet} and references therein.
We here consider the 1D transport equations with nonlocal velocity field of the form
\begin{subequations}\label{model equation}
\begin{align}
&\theta_t+u\theta_x+\nu \Lambda^{\gamma}\theta=0,\\
& u=\mathcal{N}(\theta),
\end{align}
\end{subequations}
where $\mathcal{N}$ is typically expressed by a Fourier multiplier. The study of (\ref{model equation}) is mainly motivated by \cite{CCF} where C\'ordoba, C\'ordoba, and Fontelos proposed the following 1D model
\begin{subequations}\label{CCF}
\begin{align}
&\theta_{t}+u\theta_{x}=0, \\
& u=-\mathcal{H}\theta, \quad (\text{$\mathcal{H}$ being the Hilbert transform})
\end{align}
\end{subequations}
for the 2D surface quasi-geostrophic equation and proved the finite time blow-up of smooth solutions. In this paper, we deal with (\ref{CCF}) and its variations with the following objectives.
\begin{enumerate}[]
\item (1) The existence of weak solutions with \emph{rough initial data}. Global-in-time solutions may exist even if strong solutions blow up in finite time, as in the case of the Burgers equation.
\item (2) The existence of strong solutions when the velocity $u$ is more singular than $\theta$. We intend to examine the competition between the nonlinear and viscous terms.
\end{enumerate}
More specifically, the topics covered in this paper can be summarized as follows.
\vspace{1ex}
\noindent
\textbullet \ {\bf The model 1: $\mathcal{N}=-\mathcal{H}$ and $\nu=0$.} We first show the existence of local-in-time solution in a critical space under the scaling $\theta_{0}(x)\mapsto \theta_{0}(\lambda x)$. We then introduce the notion of a weak super-solution and obtain a global-in-time weak super-solution with $\theta_{0}\in L^{1}\cap L^{\infty}$ and $\theta_{0}\ge 0$.
\vspace{1ex}
\noindent
\textbullet \ {\bf The model 2: $\mathcal{N}=-\mathcal{H}(\partial_{xx} )^{-\alpha}$, $\alpha>0$, $\nu=1$, and $\gamma>0$.} This is a regularized version of (\ref{CCF}) which is also closely related to many equations as mentioned in \cite{Bae 3}. In this case, we show the existence of weak solutions globally in time under weaker conditions on $\alpha$ and $\gamma$ compared to \cite{Bae 3}.
\vspace{1ex}
\noindent
\textbullet \ {\bf The model 3: $\mathcal{N}=-\mathcal{H}(\partial_{xx} )^{\beta}$, $\beta>0$, $\nu=1$, and $\gamma>0$.} Since $\beta>0$, the velocity field is more singular than the previous two models. In this case, we show the existence of strong solutions locally in time in two cases: (1) $0<\beta\leq \frac{\gamma}{4}$ when $0<\gamma<2$ and (2) $0<\beta<1$ when $\gamma=2$. We also show the existence of strong solutions for $0<\beta<\frac{1}{2}$ and $\gamma=2$ with rough initial data. We finally show the existence of strong solutions globally in time with $0<\beta<\frac{1}{4}$ and $\gamma=2$.
\vspace{1ex}
We will give detailed statements and proofs of our results in Sections 3--5.
\section{Preliminaries}
All generic constants will be denoted by $C$; in a series of inequalities, the value of $C$ can vary from one inequality to the next. We use the following notation: for a Banach space $X$,
\[
C_{T}X=C([0,T]:X), \quad L^{p}_{T}X=L^{p}(0,T:X).
\]
The Hilbert transform is defined as
\[
\mathcal{H}f(x)=\text{p.v.} \int_{\mathbb{R}} \frac{f(y)}{x-y}dy.
\]
We will use the BMO space (see e.g. \cite{Bahouri} for the definition) and its predual, the Hardy space $\mathcal{H}^1$, which consists of those $f$ such that both $f$ and $\mathcal{H}f$ are integrable. We will use the following formula
\[
2 \mathcal{H}(f\mathcal{H}f)=(\mathcal{H}f)^2 - f^2
\]
which implies that $g=f\mathcal{H}f \in \mathcal{H}^1$ for any $f\in L^{2}$, with
\begin{equation} \label{hardy}
\Vert g \Vert_{\mathcal{H}^1} \leq \Vert f \Vert^{2}_{L^{2}}.
\end{equation}
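The identity $2\mathcal{H}(f\mathcal{H}f)=(\mathcal{H}f)^{2}-f^{2}$ can be checked numerically in the Fourier-normalized convention $\widehat{\mathcal{H}f}(\xi)=-i\,\mathrm{sgn}(\xi)\widehat{f}(\xi)$ (which differs from the principal-value kernel above by a factor of $\pi$); the periodic test function below is an arbitrary zero-mean choice:

```python
import numpy as np

n = 256
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers

def hilbert(f):
    """Periodic Hilbert transform with multiplier -i*sgn(k)."""
    return np.real(np.fft.ifft(-1j * np.sign(k) * np.fft.fft(f)))

f = np.cos(x) + 0.5 * np.sin(3 * x)   # arbitrary smooth zero-mean function
Hf = hilbert(f)

# Identity: 2 H(f Hf) = (Hf)^2 - f^2 for zero-mean f
print(np.allclose(2 * hilbert(f * Hf), Hf**2 - f**2, atol=1e-10))  # True
```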
The differential operator $\Lambda^{\gamma}=(\sqrt{-\Delta})^{\gamma}$ is defined by the action of the following kernel \cite{Cordoba}:
\begin{eqnarray} \label{lambda gamma}
\Lambda^{\gamma} f(x)=c_{\gamma}\text{p.v.} \int_{\mathbb{R}} \frac{f(x)-f(y)}{|x-y|^{1+\gamma}}dy,
\end{eqnarray}
where $c_{\gamma}>0$ is a normalization constant. Alternatively, we can define $\Lambda^{\gamma}=(\sqrt{-\Delta})^{\gamma}$ as a Fourier multiplier: $\widehat{\Lambda^{\gamma} f}(\xi)=|\xi|^{\gamma}\widehat{f}(\xi)$. When $\gamma=1$, $\Lambda f(x)=\mathcal{H}f_{x}(x)$.
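The Fourier-multiplier characterization of $\Lambda^{\gamma}$, as well as the relation $\Lambda f=\mathcal{H}f_{x}$, is easy to confirm spectrally on a periodic grid (the grid size and test function below are illustrative):

```python
import numpy as np

n = 256
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)

def Lam(f, gamma):
    """Lambda^gamma via the Fourier multiplier |k|^gamma."""
    return np.real(np.fft.ifft(np.abs(k)**gamma * np.fft.fft(f)))

f = np.sin(3 * x)
# Lambda^gamma sin(3x) = 3^gamma sin(3x)
print(np.allclose(Lam(f, 0.5), 3**0.5 * f))   # True

# gamma = 1: Lambda f = H(f_x), i.e. |k| = (-i sgn k)(i k)
fx = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
Hfx = np.real(np.fft.ifft(-1j * np.sign(k) * np.fft.fft(fx)))
print(np.allclose(Lam(f, 1.0), Hfx))          # True
```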
\vspace{1ex}
We finally recall Simon's compactness lemma.
\begin{lemma} \cite{Simon} \label{lem:2.2}
Let $X_{0}$, $X_{1}$, and $X_{2}$ be Banach spaces such that $X_{0}$ is compactly embedded in $X_{1}$ and $X_{1}$ is a subset of $X_{2}$. Then, for $1\leq p<\infty$, the set $\left\{v\in L^{p}_{T}X_{0}: \ \frac{\partial v}{\partial t}\in L^{1}_{T}X_{2}\right\}$ is compactly embedded in $L^{p}_{T}X_{1}$.
\end{lemma}
\section{The model 1}
We now study (\ref{model equation}) with $\mathcal{N}=-\mathcal{H}$ and $\nu=0$ which is nothing but (\ref{CCF}):
\begin{subequations} \label{eq:1.1}
\begin{align}
& \theta_{t} -\left(\mathcal{H}\theta\right)\theta_{x} =0, \label{eq:1.1 a}\\
& \theta(0,x)=\theta_{0}(x). \label{eq:1.1 b}
\end{align}
\end{subequations}
\subsection{Local well-posedness}
The local well-posedness of (\ref{eq:1.1}) is established in $H^{2}$ (\cite{Bae}) and in $H^{\frac{3}{2}-\gamma}$ with the viscous term $\Lambda^{\gamma}\theta$ (\cite{HDong}). To improve these results, we first notice that (\ref{eq:1.1}) has the following scaling invariance: if $\theta(t,x)$ is a solution of (\ref{eq:1.1}), then so is $\theta_{\lambda}(t,x)=\theta(\lambda t, \lambda x)$. So, we take initial data in a space whose norm is essentially invariant under the scaling $\theta_{0}(x)\mapsto \theta_{\lambda 0}(x)=\theta_{0}(\lambda x)$. In this paper, we take the space $\dot{B}^{\frac{3}{2}}_{2,1}$ because there is a constant $C$ such that
\[
C^{-1} \left\|\theta_{\lambda 0}\right\|_{\dot{B}^{\frac{3}{2}}_{2,1}} \leq \|\theta_{0}\|_{\dot{B}^{\frac{3}{2}}_{2,1}} \leq C \left\|\theta_{\lambda 0}\right\|_{\dot{B}^{\frac{3}{2}}_{2,1}}.
\]
The mathematical tools needed to prove the local well-posedness of (\ref{eq:1.1}), such as the Littlewood-Paley decomposition and Besov spaces, are provided in the appendix. We also need the following commutator estimate \cite[Lemma 2.100, Remark 2.101]{Bahouri}.
\begin{lemma}[Commutator estimate]\label{commutator lemma 1}
For $f, g \in \mathcal{S}$
\[
\left\|[f,\Delta_{j}]g_{x}\right\|_{L^{2}}\leq Cc_{j}2^{-\frac{3}{2}j} \left\|f_{x}\right\|_{\dot{B}^{\frac{1}{2}}_{2,1}} \left\|g\right\|_{\dot{B}^{\frac{3}{2}}_{2,1}}, \quad \sum^{\infty}_{j=-\infty}c_{j}\leq 1.
\]
\end{lemma}
The first result in this paper is the following theorem.
\begin{theorem}\label{Besov theorem}
For any $\theta_{0}\in \dot{B}^{\frac{3}{2}}_{2,1}$, there exists $T=T(\|\theta_0\|)$ such that a unique solution of (\ref{eq:1.1}) exists in $C_{T}\dot{B}^{\frac{3}{2}}_{2,1}$.
\end{theorem}
\begin{proof}
We only provide a priori estimates of $\theta$ in the space stated in Theorem \ref{Besov theorem}. The other parts, including the approximation procedure, are rather standard.
We apply $\Delta_{j}$ to (\ref{eq:1.1}), multiply by $\Delta_{j}\theta$, and integrate the resulting equation over $\mathbb{R}$ to get
\begin{equation} \label{eq:3.2}
\begin{split}
\frac{1}{2}\frac{d}{dt} \left\|\Delta_{j} \theta \right\|^2_{L ^{2}} &=\int_{\mathbb{R}} \Delta_{j}\left((\mathcal{H}\theta)\theta_{x} \right) \Delta_{j}\theta dx\\
&=\int_{\mathbb{R}} \left((\mathcal{H}\theta)\Delta_{j}\theta_{x} \right) \Delta_{j}\theta dx+ \int_{\mathbb{R}} \left[\Delta_{j}, \mathcal{H}\theta\right]\Delta_{j}\theta_{x} \Delta_{j}\theta dx\\
&=-\frac{1}{2}\int_{\mathbb{R}} (\mathcal{H}\theta)_{x}\left|\Delta_{j}\theta\right|^{2} dx+ \int_{\mathbb{R}} \left[\Delta_{j}, \mathcal{H}\theta\right]\Delta_{j}\theta_{x} \Delta_{j}\theta dx.
\end{split}
\end{equation}
By the Bernstein inequality, we have
\begin{eqnarray} \label{low frequency}
\left\|\mathcal{H}\theta_{x}\right\|_{L^{\infty}}\leq C\|\theta\|_{\dot{B}^{\frac{3}{2}}_{2,1}}.
\end{eqnarray}
We then apply Lemma \ref{commutator lemma 1} to the second term in the right-hand side of (\ref{eq:3.2}) to obtain
\begin{equation}\label{eq:3.3}
\begin{split}
\int_{\mathbb{R}} \left[\Delta_{j}, \mathcal{H}\theta\right]\Delta_{j}\theta_{x} \Delta_{j}\theta dx\leq Cc_{j}2^{-\frac{3}{2}j}\|\theta\|^{2}_{\dot{B}^{\frac{3}{2}}_{2,1}} \left\|\Delta_{j} \theta \right\|_{L ^{2}}.
\end{split}
\end{equation}
By (\ref{eq:3.2}), (\ref{low frequency}), and (\ref{eq:3.3}), we have
\[
\frac{d}{dt}\|\theta\|^{2}_{\dot{B}^{\frac{3}{2}}_{2,1}}\leq C\|\theta\|^{3}_{\dot{B}^{\frac{3}{2}}_{2,1}},
\]
from which we deduce
\[
\|\theta(t)\|_{\dot{B}^{\frac{3}{2}}_{2,1}}\leq \frac{\|\theta_{0}\|_{\dot{B}^{\frac{3}{2}}_{2,1}}}{1-Ct\|\theta_{0}\|_{\dot{B}^{\frac{3}{2}}_{2,1}}} \leq 2 \|\theta_{0}\|_{\dot{B}^{\frac{3}{2}}_{2,1}} \quad \text{for all $t\leq T=\frac{1}{2C \|\theta_{0}\|_{\dot{B}^{\frac{3}{2}}_{2,1}}}$}.
\]
This completes the proof.
\end{proof}
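The final bound is simply the explicit solution of the Riccati inequality $y'\leq Cy^{2}$ for $y(t)=\|\theta(t)\|_{\dot{B}^{\frac{3}{2}}_{2,1}}$. With illustrative constants, one can sanity-check numerically that the solution stays below $2y_{0}$ up to $T=\frac{1}{2Cy_{0}}$:

```python
import numpy as np

C, y0 = 1.0, 1.0                 # illustrative constant and initial norm
T = 1.0 / (2 * C * y0)           # local existence time from the proof
t = np.linspace(0.0, T, 1000)

y = y0 / (1.0 - C * t * y0)      # explicit solution of y' = C y^2
print(np.all(y <= 2 * y0 + 1e-12))  # True: y(t) <= 2 y0 on [0, T]
```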
\subsection{Global weak super-solution}
We next consider (\ref{eq:1.1}) with rough initial data. More precisely, we assume that $\theta_{0}$ satisfies the following conditions
\begin{eqnarray} \label{initial condition}
\theta_{0}\ge 0, \quad \theta_{0}\in L^{1}\cap L^{\infty}.
\end{eqnarray}
Since $\theta$ satisfies the transport equation, we have
\begin{eqnarray}
\theta(t,x)\ge 0, \quad \theta \in L^{\infty}(\mathbb{R}) \quad \text{for all time}.
\end{eqnarray}
Following the usual weak formulation of (\ref{eq:1.1}), we have, for all $\psi\in C^{\infty}_{c}([0,\infty)\times \mathbb{R})$,
\begin{eqnarray} \label{weak solution to 1D sqg}
\int^{T}_{0}\int_{\mathbb{R}} \left[\theta \psi_{t} - \left(\mathcal{H}\theta\right)\theta \psi_{x}+\left(\Lambda \theta\right) \theta \psi \right] dxdt =\int_{\mathbb{R}}\theta_{0}(x)\psi(x,0)dx.
\end{eqnarray}
For $\theta_{0}\ge 0$, there is a gain of half a derivative coming from the structure of the nonlinearity, namely
\begin{eqnarray} \label{half energy dd}
\left\|\theta(t)\right\|_{L^{1}} +\int^{t}_{0}\left\|\Lambda^{\frac{1}{2}}\theta(s)\right\|^{2}_{L^{2}}ds=\left\|\theta_{0}\right\|_{L^{1}}.
\end{eqnarray}
So, we can rewrite the left-hand side of (\ref{weak solution to 1D sqg}) as
\[
\int^{T}_{0}\int_{\mathbb{R}} \left[\theta \psi_{t} - \left(\mathcal{H}\theta\right)\theta \psi_{x}+\Lambda^{\frac{1}{2}}\theta \left[\Lambda^{\frac{1}{2}},\psi\right]\theta +\left|\Lambda^{\frac{1}{2}}\theta\right|^{2}\psi\right] dxdt =\int_{\mathbb{R}}\theta_{0}(x)\psi(x,0)dx.
\]
However, the $\dot{H}^{\frac{1}{2}}$ regularity derived from (\ref{half energy dd}) is not enough to pass to the limit in
\[
\int^{T}_{0}\int_{\mathbb{R}} \left|\Lambda^{\frac{1}{2}}\theta^{\epsilon}\right|^{2}\psi dxdt
\]
arising from the $\epsilon$-regularized equations described below. We therefore introduce a new notion of solution. Let
\[
\mathcal{A}_{T}=L^{\infty}_{T}\left(L^{1}\cap L^{\infty}\right)\cap L^{2}_{T}H^{\frac{1}{2}}.
\]
\begin{definition}
We say $\theta$ is a weak super-solution of (\ref{eq:1.1}) on the time interval $[0,T]$ if $\theta(t,x)\ge 0$ for all $t\in [0,T]$, $\theta\in \mathcal{A}_{T}$, and for each nonnegative $\psi\in C^{\infty}_{c}([0,T]\times\mathbb{R})$,
\begin{eqnarray} \label{supersolution to 1d sqg}
\int^{T}_{0}\int_{\mathbb{R}} \left[\theta \psi_{t} - \left(\mathcal{H}\theta\right)\theta \psi_{x}+\Lambda^{\frac{1}{2}}\theta \left[\Lambda^{\frac{1}{2}},\psi\right]\theta +\left|\Lambda^{\frac{1}{2}}\theta\right|^{2}\psi\right] dxdt \ge \int_{\mathbb{R}}\theta_{0}(x)\psi(x,0)dx.
\end{eqnarray}
\end{definition}
To prove Theorem \ref{main theorem}, we need the following commutator estimate involving $\Lambda^{\frac{1}{2}}$, which is proved in \cite{Bae 3}.
\begin{lemma} \label{commutator lemma 0}
For $f\in L^{\frac{3}{2}}$, $g\in L^{\frac{3}{2}}$ and $\psi \in W^{1,\infty}$, we have
\[
\left\| \left[\Lambda^{\frac{1}{2}},\psi \right]f- \left[\Lambda^{\frac{1}{2}},\psi \right]g\right\|_{L^{6}}\leq C\|\psi\|_{W^{1,\infty}} \left\|f-g\right\|_{L^{\frac{3}{2}}}.
\]
\end{lemma}
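The operators appearing here are Fourier multipliers: $\Lambda^{s}$ corresponds to $|\xi|^{s}$ and $\mathcal{H}$ to $-i\,\mathrm{sgn}(\xi)$. As a sanity check of this calculus (on a periodic grid, used as a convenient stand-in for the line; the grid size and test function below are arbitrary choices, not from the paper), one can verify $\Lambda^{\frac{1}{2}}\Lambda^{\frac{1}{2}}=\Lambda=\mathcal{H}\partial_{x}$ to near machine precision:

```python
import numpy as np

N = 128
x = 2 * np.pi * np.arange(N) / N           # periodic grid on [0, 2*pi)
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers

def mult(f, m):                            # apply a Fourier multiplier m(k)
    return np.real(np.fft.ifft(m * np.fft.fft(f)))

def Lam(f, s):                             # Lambda^s  <->  |k|^s
    return mult(f, np.abs(k) ** s)

def hilbert(f):                            # H  <->  -i * sgn(k)
    return mult(f, -1j * np.sign(k))

def ddx(f):                                # d/dx  <->  i * k
    return mult(f, 1j * k)

f = np.exp(np.cos(x)) * np.sin(2 * x)      # smooth test function

err1 = np.max(np.abs(Lam(Lam(f, 0.5), 0.5) - Lam(f, 1.0)))
err2 = np.max(np.abs(Lam(f, 1.0) - hilbert(ddx(f))))
```

Both errors are at the level of FFT round-off, reflecting that the identities hold exactly at the level of symbols.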
The second result in our paper is the following theorem.
\begin{theorem}\label{main theorem}
For any $\theta_{0}$ satisfying (\ref{initial condition}), there exists a weak super-solution of (\ref{eq:1.1}) in $\mathcal{A}_{T}$.
\end{theorem}
\begin{proof}
We first regularize the initial data as $\theta^{\epsilon}_{0}=\rho_{\epsilon}\ast \theta_{0}$, where $\rho_{\epsilon}$ is a standard nonnegative mollifier, so that the regularized initial data remain nonnegative. We then regularize the equation by adding a viscosity term with coefficient $\epsilon>0$, namely
\begin{eqnarray} \label{regularized equation}
\theta^{\epsilon}_t -\mathcal{H}\theta^{\epsilon}\theta^{\epsilon}_x =\epsilon \theta^{\epsilon}_{xx}.
\end{eqnarray}
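As a rough numerical illustration of the two properties used below (nonnegativity and the maximum principle for (\ref{regularized equation})), we can discretize on a periodic grid with spectral derivatives and a small forward-Euler step. The domain, parameters and initial datum are ad hoc choices for illustration, not taken from the paper:

```python
import numpy as np

N = 128
x = 2 * np.pi * np.arange(N) / N            # periodic stand-in for the line
k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers

def mult(f, m):                             # apply a Fourier multiplier
    return np.real(np.fft.ifft(m * np.fft.fft(f)))

def hilbert(f): return mult(f, -1j * np.sign(k))
def ddx(f):     return mult(f, 1j * k)
def d2dx(f):    return mult(f, -(k ** 2))

eps, dt = 0.5, 2e-4                          # viscosity and (stable) time step
theta = np.exp(np.cos(x))                    # smooth, strictly positive datum
m0, M0 = theta.min(), theta.max()

# theta_t = H(theta) * theta_x + eps * theta_xx, forward Euler in time
for _ in range(200):
    theta = theta + dt * (hilbert(theta) * ddx(theta) + eps * d2dx(theta))
```

Up to discretization error, the solution stays positive and its maximum does not increase, consistent with evaluating the equation at extremal points where $\theta_{x}=0$.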
For the proof of the existence of a global-in-time smooth solution we refer to \cite{Lazar}. Moreover, $\theta^{\epsilon}$ satisfies that $\theta^{\epsilon}\ge 0 $ and
\[
\left\| \theta^{\epsilon}(t) \right\|_{L^{1}}+\left\| \theta^{\epsilon}(t) \right\|_{L^{\infty}}+ \int^{t}_{0}\left\|\Lambda^{\frac{1}{2}}\theta^{\epsilon}(s)\right\|^{2}_{L^{2}}ds \leq \left\|\theta_{0}\right\|_{L^{1}} + \left\|\theta_{0}\right\|_{L^{\infty}}.
\]
Therefore, $(\theta^{\epsilon})$ is bounded in $\mathcal{A}_{T}$ uniformly in $\epsilon>0$.
From this, we obtain the uniform bounds
\[
\mathcal{H}\theta^{\epsilon}\in L^{4}_{T}L^{2}, \quad \theta^{\epsilon}\in L^{2}_{T}L^{2}, \quad \left(\left(\mathcal{H}\theta^{\epsilon}\right)\theta^{\epsilon}\right)_x\in L^{\frac{4}{3}}_{T}H^{-2}, \quad \epsilon \theta^{\epsilon}_{xx}\in L^{2}_{T}H^{-2}.
\]
Moreover, for any $\phi \in H^{2}$,
\[
\int_{\mathbb{R}}\left|\theta^{\epsilon} \Lambda\theta^{\epsilon} \phi \right| dx \leq \left\|\Lambda^{\frac{1}{2}}\theta^{\epsilon}\right\|^{2}_{L^{2}} \left\|\phi\right\|_{L^{\infty}} + \left\|\Lambda^{\frac{1}{2}}\theta^{\epsilon}\right\|_{L^{2}} \left\|\theta^{\epsilon}\right\|_{L^{\infty}} \left\|\Lambda^{\frac{1}{2}}\phi\right\|_{L^{\infty}}
\]
which implies that
\[
\theta^{\epsilon} \Lambda\theta^{\epsilon} \in L^{1}_{T}H^{-2}.
\]
Combining all together, we obtain
\[
\theta^{\epsilon}_{t}=\mathcal{H}\theta^{\epsilon}\theta^{\epsilon}_x +\epsilon \theta^{\epsilon}_{xx}= \left(\mathcal{H}\theta^{\epsilon}\theta^{\epsilon}\right)_x -\theta^{\epsilon} \Lambda\theta^{\epsilon}+\epsilon \theta^{\epsilon}_{xx}\in L^{1}_{T}H^{-2}.
\]
To pass to the limit in the weak super-solution formulation, we extract a subsequence of $\left(\theta^{\epsilon}\right)$ (still indexed by $\epsilon$ for simplicity) and a function $\theta \in \mathcal{A}_T$ such that
\begin{equation}\label{convergence of approximate solutions}
\begin{split}
& \theta^{\epsilon} \stackrel{\star}{\rightharpoonup} \theta \quad \text{in} \quad L^{\infty}_{T}\left(L^{p}\cap H^{\frac{1}{2}}\right) \quad \text{for all $p\in (1,\infty)$},\\
& \theta^{\epsilon} \rightharpoonup \theta \quad \text{in} \quad L^{2}_{T}H^{\frac{1}{2}},\\
& \theta^{\epsilon} \rightarrow \theta \quad \text{in $L^{2}_{T}L^{p}$ for all $1<p<\infty$} ,
\end{split}
\end{equation}
where we use Lemma \ref{lem:2.2} for the strong convergence with
\[
X_0=L^2_{T}H^{\frac{1}{2}},\quad X_{1}=L^2_{T}L^p,\quad X_2=L^1_{T}H^{-2}.
\]
We now multiply (\ref{regularized equation}) by a test function $\psi\in \mathcal{C}^{\infty}_{c}\left([0,T)\times\mathbb{R}\right)$ and integrate over $\mathbb{R}$. Then,
\begin{equation} \label{weak form with commutator 111}
\begin{split}
&\int^{T}_{0}\int \Big[\theta^{\epsilon} \psi_{t} - \underbrace{\left(\mathcal{H}\theta^{\epsilon}\right)\theta^{\epsilon}\psi_{x}}_{\text{I}}+ \epsilon \theta^{\epsilon}\psi_{xx}\Big] dxdt - \int \theta^{\epsilon}_{0}(x)\psi(0,x)dx\\
& = -\int^{T}_{0}\int \underbrace{\Lambda^{\frac{1}{2}}\theta^{\epsilon} \left[\Lambda^{\frac{1}{2}},\psi \right]\theta^{\epsilon}}_{\text{II}} dxdt -\int^{T}_{0}\int \underbrace{\left|\Lambda^{\frac{1}{2}}\theta^{\epsilon}\right|^{2}\psi}_{\text{III}} dxdt.
\end{split}
\end{equation}
We note that we can rearrange the terms of the usual weak formulation into (\ref{weak form with commutator 111}) since $\theta^{\epsilon}$ is smooth. By the strong convergence in (\ref{convergence of approximate solutions}), we can pass to the limit in $\text{I}$. Moreover, since
\[
\left[\Lambda^{\frac{1}{2}},\psi \right]\theta^{\epsilon} \rightarrow \left[\Lambda^{\frac{1}{2}},\psi \right]\theta
\]
strongly in $L^{2}_{T}L^{6}$ by Lemma \ref{commutator lemma 0} and the strong convergence in (\ref{convergence of approximate solutions}), we can pass to the limit in $\text{II}$. Lastly, by Fatou's lemma,
\[
\liminf_{\epsilon\rightarrow 0}\int^{T}_{0}\int \left|\Lambda^{\frac{1}{2}}\theta^{\epsilon}\right|^{2}\psi dxdt \ge \int^{T}_{0}\int \left|\Lambda^{\frac{1}{2}}\theta\right|^{2}\psi dxdt.
\]
Combining all the limits together, we obtain that
\begin{equation} \label{weak form with commutator 2}
\begin{split}
\int^{T}_{0}\int_{\mathbb{R}} \left[\theta \psi_{t} - \left(\mathcal{H}\theta\right)\theta \psi_{x}+\Lambda^{\frac{1}{2}}\theta \left[\Lambda^{\frac{1}{2}},\psi\right]\theta +\left|\Lambda^{\frac{1}{2}}\theta\right|^{2}\psi\right] dxdt \ge \int_{\mathbb{R}}\theta_{0}(x)\psi(x,0)dx.
\end{split}
\end{equation}
This completes the proof.
\end{proof}
\section{The model 2}
We now consider the following equation:
\begin{subequations} \label{new model}
\begin{align}
& \theta_{t} -\left(\mathcal{H}(\partial_{xx})^{-\alpha}\theta\right)\theta_{x} +\Lambda^{\gamma}\theta=0, \\
& \theta(0,x)=\theta_{0}(x),
\end{align}
\end{subequations}
where $\alpha,\gamma>0$. In this case, we focus on the existence of weak solutions under suitable conditions on $(\alpha,\gamma)$. As before, we assume that $\theta_{0}$ satisfies the following conditions
\begin{eqnarray} \label{initial condition 2}
\theta_{0}\ge 0, \quad \theta_{0}\in L^{1}\cap L^{\infty}.
\end{eqnarray}
Let
\[
\mathcal{B}_{T}=L^{\infty}_{T}\left(L^{1}\cap L^{\infty}\right)\cap L^{2}_{T}H^{\frac{\gamma}{2}}.
\]
\begin{definition}
We say $\theta$ is a weak solution of (\ref{new model}) on the time interval $[0,T]$ if $\theta(t,x)\ge 0$ for all $t\in [0,T]$, $\theta\in \mathcal{B}_{T}$, and for each $\psi\in C^{\infty}_{c}([0,T]\times\mathbb{R})$,
\[
\int^{T}_{0}\int_{\mathbb{R}} \left[\theta \psi_{t} - \left(\mathcal{H}(\partial_{xx})^{-\alpha}\theta\right)\theta \psi_{x}-\Lambda^{1-\frac{\gamma}{2}} (\partial_{xx})^{-\alpha}\theta \Lambda^{\frac{\gamma}{2}} (\theta\psi) -\theta \Lambda^{\gamma}\psi\right] dxdt = \int_{\mathbb{R}}\theta_{0}(x)\psi(x,0)dx.
\]
\end{definition}
The third result in the paper is the following.
\begin{theorem}\label{new model weak}
Suppose that two positive numbers $\alpha$ and $\gamma$ satisfy
\begin{eqnarray} \label{range of parameters}
0<\gamma<1, \quad \alpha\ge \frac{1}{2}-\frac{\gamma}{2}.
\end{eqnarray}
Then, for any $\theta_{0}$ satisfying (\ref{initial condition 2}), there exists a weak solution of (\ref{new model}) in $\mathcal{B}_{T}$ for all $T>0$.
\end{theorem}
\begin{proof}
As in the proof of Theorem \ref{main theorem}, we regularize $\theta_{0}$ and the equation as
\begin{eqnarray} \label{regularized equation new model}
\theta^{\epsilon}_{0}=\rho_{\epsilon}\ast \theta_{0}, \quad \theta^{\epsilon}_t -\left(\mathcal{H}(\partial_{xx})^{-\alpha}\theta^{\epsilon}\right)\theta^{\epsilon}_{x} +\Lambda^{\gamma}\theta^{\epsilon} =\epsilon \theta^{\epsilon}_{xx}.
\end{eqnarray}
Then, the corresponding $\theta^{\epsilon}$ satisfies
\begin{eqnarray} \label{sign and max}
\theta^{\epsilon}(t,x)\ge 0, \quad \left\|\theta^{\epsilon}(t)\right\|_{L^{\infty}}\leq \|\theta_{0}\|_{L^{\infty}} \quad \text{for all time }
\end{eqnarray}
and
\begin{eqnarray} \label{L1 bound new}
\left\|\theta^{\epsilon}(t)\right\|_{L^{1}} +\int^{t}_{0}\left\|\Lambda^{\frac{1}{2}}(\partial_{xx})^{-\frac{\alpha}{2}}\theta^{\epsilon}(s)\right\|^{2}_{L^{2}}ds\leq \left\|\theta_{0}\right\|_{L^{1}}.
\end{eqnarray}
We next multiply (\ref{regularized equation new model}) by $\theta^{\epsilon}$ and integrate over $\mathbb{R}$. Then,
\begin{equation*}
\begin{split}
&\frac{1}{2}\frac{d}{dt}\left\|\theta^{\epsilon}(t)\right\|^{2}_{L^{2}}+\left\|\Lambda^{\frac{\gamma}{2}}\theta^{\epsilon}(t)\right\|^{2}_{L^{2}} +\epsilon \left\|\theta^{\epsilon}_{x}\right\|^{2}_{L^{2}}=-\frac{1}{2}\int_{\mathbb{R}}\left\{\Lambda (\partial_{xx})^{-\alpha}\theta^{\epsilon}(t)\right\} (\theta^{\epsilon}(t))^{2}dx\\
&=-\frac{1}{2}\int_{\mathbb{R}}\left\{\Lambda^{1-\frac{\gamma}{2}} (\partial_{xx})^{-\alpha}\theta^{\epsilon}(t)\right\} \Lambda^{\frac{\gamma}{2}}(\theta^{\epsilon}(t))^{2}dx\\
& \leq C\left\|\Lambda^{1-\frac{\gamma}{2}} (\partial_{xx})^{-\alpha}\theta^{\epsilon}(t)\right\|_{L^{2}} \left\|\Lambda^{\frac{\gamma}{2}}\theta^{\epsilon}(t)\right\|_{L^{2}}\|\theta^{\epsilon}(t)\|_{L^{\infty}} \\
& \leq \frac{1}{2} \left\|\Lambda^{\frac{\gamma}{2}}\theta^{\epsilon}(t)\right\|^{2}_{L^{2}} + C\left\|\Lambda^{1-\frac{\gamma}{2}} (\partial_{xx})^{-\alpha}\theta^{\epsilon}(t)\right\|^{2}_{L^{2}}\|\theta^{\epsilon}(t)\|^{2}_{L^{\infty}}.
\end{split}
\end{equation*}
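The first equality in this computation rests on the identity $\int(\mathcal{H}(\partial_{xx})^{-\alpha}\theta)\,\theta\,\theta_{x}\,dx=-\frac{1}{2}\int(\Lambda(\partial_{xx})^{-\alpha}\theta)\,\theta^{2}\,dx$, obtained by integration by parts together with $\partial_{x}\mathcal{H}=\Lambda$. Interpreting $(\partial_{xx})^{-\alpha}$ as the Fourier multiplier $|\xi|^{-2\alpha}$, set to zero on the mean mode (an assumption about the intended convention), the identity can be checked to machine precision on a periodic grid:

```python
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
dx = 2 * np.pi / N
k = np.fft.fftfreq(N, d=1.0 / N)
absk = np.abs(k)
safe = np.where(absk > 0, absk, 1.0)        # avoids 0**negative below

def mult(f, m):                             # apply a Fourier multiplier
    return np.real(np.fft.ifft(m * np.fft.fft(f)))

alpha = 0.3                                 # arbitrary illustrative value
A = np.where(absk > 0, safe ** (-2 * alpha), 0.0)      # (d_xx)^{-alpha} ~ |k|^{-2 alpha}
L = np.where(absk > 0, safe ** (1 - 2 * alpha), 0.0)   # Lambda^{1 - 2 alpha}

theta = np.cos(x) + 0.5 * np.cos(2 * x) + 0.3 * np.sin(3 * x)
u = mult(theta, -1j * np.sign(k) * A)       # H (d_xx)^{-alpha} theta
theta_x = mult(theta, 1j * k)

lhs = np.sum(u * theta * theta_x) * dx
rhs = -0.5 * np.sum(mult(theta, L) * theta ** 2) * dx
```

Since the test function is a trigonometric polynomial of low degree, the trapezoidal quadrature above is exact and the two sides agree to round-off.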
By (\ref{range of parameters}), (\ref{sign and max}) and (\ref{L1 bound new}), we obtain
\begin{eqnarray} \label{new L2 bound}
\|\theta^{\epsilon}(t)\|^{2}_{L^{2}}+\int^{t}_{0}\left\|\Lambda^{\frac{\gamma}{2}}\theta^{\epsilon}(s)\right\|^{2}_{L^{2}}ds +\epsilon \int^{t}_{0}\left\|\theta^{\epsilon}_{x}(s)\right\|^{2}_{L^{2}}ds \leq C\|\theta_{0}\|^{2}_{L^{1}} \|\theta_{0}\|^{2}_{L^{\infty}}.
\end{eqnarray}
Therefore, $(\theta^{\epsilon})$ is bounded in $\mathcal{B}_{T}$ uniformly in $\epsilon>0$.
From this, we obtain the uniform bounds
\[
\left\{\left(\mathcal{H}(\partial_{xx})^{-\alpha}\theta^{\epsilon}\right)\theta^{\epsilon}\right\}_{x} \in L^{2}_{T}L^{2}, \quad \Lambda^{\gamma}\theta^{\epsilon}+\epsilon \theta^{\epsilon}_{xx}\in L^{2}_{T}H^{-2}.
\]
Moreover, the condition (\ref{range of parameters}) implies that
\[
\left(\Lambda(\partial_{xx})^{-\alpha}\theta^{\epsilon}\right)\theta^{\epsilon} \in L^{1}_{T}H^{-1}.
\]
Combining all together, we also derive that
\[
\theta^{\epsilon}_{t}\in L^{1}_{T}H^{-2}.
\]
We now multiply (\ref{regularized equation new model}) by a test function $\psi\in \mathcal{C}^{\infty}_{c}\left([0,T)\times\mathbb{R}\right)$ and integrate over $\mathbb{R}$. Then,
\begin{equation} \label{weak form with commutator}
\begin{split}
&\int^{T}_{0}\int \Big[\theta^{\epsilon} \psi_{t} - \underbrace{\left(\mathcal{H}(\partial_{xx})^{-\alpha}\theta^{\epsilon}\right)\theta^{\epsilon}\psi_{x}}_{\text{I}} -\theta^{\epsilon}\Lambda^{\gamma}\psi + \epsilon \theta^{\epsilon}\psi_{xx}\Big] dxdt - \int \theta^{\epsilon}_{0}(x)\psi(0,x)dx\\
& = \int^{T}_{0}\int \underbrace{\Lambda^{1-\frac{\gamma}{2}}(\partial_{xx})^{-\alpha}\theta^{\epsilon} \Lambda^{\frac{\gamma}{2}}(\theta^{\epsilon}\psi)}_{\text{II}} dxdt.
\end{split}
\end{equation}
To pass to the limit in this formulation, we extract a subsequence of $\left(\theta^{\epsilon}\right)$ (still indexed by $\epsilon$ for simplicity) and a function $\theta \in \mathcal{B}_T$ such that
\begin{equation}\label{convergence of approximate solutions 2}
\begin{split}
& \theta^{\epsilon} \stackrel{\star}{\rightharpoonup} \theta \quad \text{in} \quad L^{\infty}_{T}\left(L^{p}\cap H^{\frac{1}{2}}\right) \quad \text{for all $p\in (1,\infty)$},\\
& \theta^{\epsilon} \rightharpoonup \theta \quad \text{in} \quad L^{2}_{T}H^{\frac{\gamma}{2}},\\
& \theta^{\epsilon} \rightarrow \theta \quad \text{in $L^{2}_{T}H^{1-\frac{\gamma}{2}-2\alpha}\cap L^{2}_{T}L^{p}$ for all $1<p<\infty$} ,
\end{split}
\end{equation}
where we use Lemma \ref{lem:2.2} for the strong convergence with the condition (\ref{range of parameters}) and
\[
X_0=L^2_{T}H^{\frac{\gamma}{2}},\quad X_{1}=L^{2}_{T}H^{1-\frac{\gamma}{2}-2\alpha} \cap L^2_{T}L^p,\quad X_2=L^1_{T}H^{-2}.
\]
By the strong convergence in (\ref{convergence of approximate solutions 2}), we can pass to the limit in $\text{I}$ and $\text{II}$ in (\ref{weak form with commutator}). Therefore, we obtain
\[\int^{T}_{0}\int_{\mathbb{R}} \left[\theta \psi_{t} - \left(\mathcal{H}(\partial_{xx})^{-\alpha}\theta\right)\theta \psi_{x}-\Lambda^{1-\frac{\gamma}{2}} (\partial_{xx})^{-\alpha}\theta \Lambda^{\frac{\gamma}{2}} (\theta\psi) -\theta \Lambda^{\gamma}\psi\right] dxdt = \int_{\mathbb{R}}\theta_{0}(x)\psi(x,0)dx.
\]
This completes the proof of Theorem \ref{new model weak}.
\end{proof}
\noindent
{\bf Remark.} Theorem \ref{new model weak} improves Theorem 1.4 in \cite{Bae 3}, where $(\alpha, \gamma)$ is assumed to satisfy $\alpha\ge \frac{1}{2}-\frac{\gamma}{4}$. The main point in allowing the weaker regularization in (\ref{new model}) is that the Hilbert transform in front of $(\partial_{xx})^{-\alpha}$ gives (\ref{L1 bound new}), which in turn makes it possible to obtain (\ref{new L2 bound}). We choose $\alpha> \frac{1}{2}-\frac{\gamma}{2}$ instead of $ \alpha\ge \frac{1}{2}-\frac{\gamma}{2}$ in order to apply the compactness argument when we pass to the limit in the $\epsilon$-regularized equations.
\section{The model 3}
In this section, we consider the following equation
\begin{subequations} \label{singular model d}
\begin{align}
& \theta_{t} -\left(\mathcal{H}(\partial_{xx})^{\beta}\theta\right)\theta_{x} +\Lambda^{\gamma}\theta=0, \\
& \theta(0,x)=\theta_{0}(x)
\end{align}
\end{subequations}
where $\beta,\gamma>0$. Depending on the range of $\beta$ and $\gamma$, we will have four different results.
\subsection{Local well-posedness}
We begin with the local well-posedness result.
\begin{theorem} \label{LW and blow-up criterion theorem}
Let $0<\gamma<2$ and $0<\beta\leq \frac{\gamma}{4}$. For $\theta_0 \in H^2 (\Bbb R)$ there exists $T=T(\|\theta_0\|_{H^2})$ such that a unique solution of (\ref{singular model d}) exists in $C\left([0, T); H^2 (\Bbb R)\right)$. Moreover, we have the following blow-up criterion:
\begin{eqnarray} \label{blow-up criterion 1}
\limsup_{t\nearrow T^*} \|\theta(t)\|_{H^2}=\infty \quad \text{if and only if} \quad \int_0 ^{T^*} \Big(\left\|u_x(s)\right\|_{L^{\infty}}+\left\|\theta_x(s)\right\|_{L^{\infty}}\Big)ds=\infty.
\end{eqnarray}
\end{theorem}
\begin{proof}
Let $u=-\mathcal{H}(\partial_{xx})^{\beta}\theta$. Applying $\partial_x^l$ to (\ref{singular model d}), taking the $L^2$ inner product with $\partial_x^l \theta$, and summing over $l=0, 1,2$, we obtain
\begin{equation} \label{pr1}
\begin{split}
&\frac12 \frac{d}{dt} \|\theta(t)\|_{H^2}^2 + \left\|\Lambda^{\frac{\gamma}{2}}\theta\right\|_{H^2}^2= -\sum_{l=0} ^2 \int \partial_x^l (u \theta_x) \partial_x^l \theta dx \\
&=-\sum_{l=0} ^2 \int \left( \partial_x^l (u \theta_x)- u \partial_x^l \theta_x \right) \partial_x^l \theta dx-\sum_{l=0} ^2 \int u \partial_x^l \theta_x \partial_x^l \theta dx = \text{I}_1+ \text{I}_2.
\end{split}
\end{equation}
Using the commutator estimate in \cite{Kato}
\[
\sum_{|l|\leq 2} \left\|D^l (fg)-fD^l g\right\|_{L^2} \leq C\left(\|\nabla f\|_{L^\infty} \left\|D g\right\|_{L^2} +\left\|D^2 f\right\|_{L^2} \|g\|_{L^\infty} \right),
\]
we have
\begin{equation}\label{bound of I1}
\begin{split}
\text{I}_1 \leq \sum_{l=0} ^2 \left\|\partial_x^l (u \theta_x)- u \partial_x^l \theta_x\right\|_{L^2} \|\theta\|_{H^2} &\leq C\left(\|u_x\|_{L^\infty} \|\theta\|_{H^2} + \|u\|_{H^2}\|\theta_x\|_{L^\infty} \right)\|\theta\|_{H^2} \\
&\leq C_{\kappa} \left(\|u_x\|_{L^\infty}+\|\theta_x\|^{2}_{L^\infty}\right)\|\theta\|_{H^2}^2 + \kappa\|u\|^{2}_{H^2}.
\end{split}
\end{equation}
By integration by parts,
\begin{eqnarray} \label{bound of I2}
\text{I}_2 = -\frac{1}{2}\sum_{l=0} ^2\int u \partial_x \left|\partial_x^l \theta\right|^2 dx= \frac{1}{2}\sum_{l=0} ^2\int u_x \left|\partial_x^l \theta \right|^2 dx \leq C \|u_x\|_{L^\infty} \|\theta\|_{H^2}^2.
\end{eqnarray}
Since $\beta\leq \frac{\gamma}{4}$, for a sufficiently small $\kappa>0$
\[
\kappa\|u\|^{2}_{H^2}\leq \frac{1}{2}\left\|\Lambda^{\frac{\gamma}{2}}\theta\right\|_{H^2}^2.
\]
By (\ref{bound of I1}) and (\ref{bound of I2}), we obtain
\begin{eqnarray} \label{Hm bound of w}
\frac{d}{dt} \|\theta\|_{H^2}^2 +\left\|\Lambda^{\frac{\gamma}{2}}\theta\right\|_{H^2}^2 \leq C \left(\|u_x\|_{L^\infty}+\|\theta_x\|^{2}_{L^\infty}\right)\|\theta\|_{H^2}^2 \leq C\|\theta\|_{H^2}^3+ C\|\theta\|_{H^2}^4, \quad \beta\leq \frac{\gamma}{4}
\end{eqnarray}
from which we deduce that there is $T=T(\|\theta_{0}\|_{H^{2}})$ such that
\[
\|\theta(t)\|_{H^2}\leq 2 \|\theta_0\|_{H^2} \quad \text{for all} \ t<T.
\]
(\ref{Hm bound of w}) also implies (\ref{blow-up criterion 1}).
To show the uniqueness, let $\theta_{1}$ and $\theta_{2}$ be two solutions of (\ref{singular model d}), and let $\theta=\theta_{1}-\theta_{2}$ and $u=u_{1}-u_{2}$. Then, $(\theta,u)$ satisfies the following equations
\[
\theta_t+u_{1}\theta_x-u\theta_{2x}=-\Lambda^{\gamma}\theta, \quad u=-\mathcal{H}(\partial_{xx} )^{\beta} \theta, \quad \theta(0,x)=0.
\]
By taking the $L^{2}$ product of the equation with $\theta$,
\[
\frac{d}{dt}\|\theta\|^{2}_{L^{2}}+2\left\|\Lambda^{\frac{\gamma}{2}}\theta\right\|^{2}_{L^{2}} \leq C \left(\left\|u_{1x}\right\|_{L^{\infty}} +\left\|\theta_{2x}\right\|_{L^{\infty}}\right)\|\theta\|^{2}_{L^{2}}\leq C \left(\left\|\Lambda^{\frac{\gamma}{2}}\theta_{1}\right\|_{H^{2}} +\left\|\theta_{2}\right\|_{H^{2}}\right)\|\theta\|^{2}_{L^{2}}.
\]
So, $\theta=0$ in $L^{2}$ and thus a solution is unique. This completes the proof of Theorem \ref{LW and blow-up criterion theorem}.
\end{proof}
Theorem \ref{LW and blow-up criterion theorem} allows $\beta \nearrow \frac{1}{2}$ only as $\gamma\nearrow 2$. However, we can enlarge the range of $\beta$ by dealing with (\ref{singular model d}) directly for $\gamma=2$, since we can then integrate by parts.
\begin{theorem} \label{LW and blow-up criterion theorem 2}
Let $\gamma=2$ and $0<\beta<1$. For $\theta_0 \in H^2 (\Bbb R)$ there exists $T=T(\|\theta_0\|_{H^2})$ such that a unique solution of (\ref{singular model d}) exists in $C\left([0, T); H^2 (\Bbb R)\right)$.
\end{theorem}
\begin{proof}
We begin with the $L^{2}$ bound:
\[
\frac{1}{2}\frac{d}{dt}\left\|\theta\right\|^{2}_{L^{2}}+ \left\|\theta_{x}\right\|^{2}_{L^{2}}\leq \|\theta\|_{L^{\infty}} \left\|\mathcal{H}(\partial_{xx})^{\beta}\theta\right\|_{L^{2}} \left\|\theta_{x}\right\|_{L^{2}}\leq C\|\theta\|^{3}_{H^{2}}.
\]
We next estimate $\theta_{xx}$. Indeed, after several integrations by parts, we have
\[
\frac{1}{2}\frac{d}{dt}\left\|\theta\right\|^{2}_{\dot H^{2}} + \Vert \theta \Vert^{2}_{\dot H^{3}}= -\int \left\{\mathcal{H}(\partial_{xx})^{\beta}\theta_{x}\right\}\theta_{x}\theta_{xxx}dx+\frac{1}{2}\int \left\{\mathcal{H}(\partial_{xx})^{\beta}\theta_{x}\right\}\theta_{xx}\theta_{xx}dx =\text{I}_1+\text{I}_2.
\]
When $0<\beta<1$,
\begin{equation*}
\begin{split}
\left|\text{I}_{1}\right|& \leq \left\|\theta_{x}\right\|_{L^{\infty}}\left\|\mathcal{H}(\partial_{xx} )^{\beta} \theta_{x}\right\|_{L^{2}}\left\|\theta_{xxx}\right\|_{L^{2}} =\left\|\theta_{x}\right\|_{L^{\infty}}\left\|\Lambda^{2\beta+1}\theta \right\|_{L^{2}}\left\|\theta_{xxx}\right\|_{L^{2}}\\
& \leq C\left\|\theta\right\|_{H^{2}} \left\|\theta_{x}\right\|^{1-\beta}_{L^{2}}\left\|\theta_{xxx}\right\|^{1+\beta}_{L^{2}} \leq C\left\|\theta\right\|^{4}_{H^{2}} + C\left\|\theta\right\|^{\frac{4-2\beta}{1-\beta}}_{H^{2}}+\frac{1}{4}\left\|\theta_{xxx}\right\|^{2}_{L^{2}}.
\end{split}
\end{equation*}
And
\[
\left|\text{I}_{2}\right| \leq \left\|\mathcal{H}(\partial_{xx})^{\beta}\theta_{x}\right\|_{L^{2}} \left\|\theta_{xx}\right\|^{2}_{L^{4}} \leq C\left\|\mathcal{H}(\partial_{xx})^{\beta}\theta_{x}\right\|_{L^{2}} \left\|\theta_{xx}\right\|^{\frac{3}{2}}_{L^{2}}\left\|\theta_{xxx}\right\|^{\frac{1}{2}}_{L^{2}} \leq C\left\|\theta\right\|^{4}_{H^{2}} +\frac{1}{4}\left\|\theta_{xxx}\right\|^{2}_{L^{2}}.
\]
Therefore, we obtain
\begin{eqnarray}
\frac{d}{dt}\left\|\theta\right\|^{2}_{H^{2}}+ \left\|\theta_{x}\right\|^{2}_{H^{2}}\leq C\left\|\theta\right\|^{4}_{H^{2}} + C\left\|\theta\right\|^{\frac{4-2\beta}{1-\beta}}_{H^{2}}.
\end{eqnarray}
This implies that there exists $T=T(\|\theta_0\|_{H^2})$ such that there exists a unique solution of (\ref{singular model d}) in $C\left([0, T); H^2 (\Bbb R)\right)$.
\end{proof}
We may lower the regularity of the initial data and prove a local existence result for a weak solution by considering initial data in $\dot H^{\frac{1}{2}}$. The main tools are the Hardy space-BMO duality together with interpolation arguments. To simplify the computation, we consider the equivalent equation obtained by changing the sign of the nonlinearity:
\begin{subequations} \label{singular model d 2}
\begin{align}
& \theta_{t} + \left(\mathcal{H}(-\partial_{xx})^{\beta}\theta\right)\theta_{x} +\Lambda^{\gamma}\theta=0, \\
& \theta(0,x)=\theta_{0}(x)
\end{align}
\end{subequations}
This can be obtained from (\ref{singular model d}) via $\theta \mapsto -\theta$. For this equation, we perform $\dot H^{\frac{1}{2}}$ estimates and prove local existence and uniqueness of the solution in this setting.
\begin{theorem} \label{LW Hhalf}
Let $\gamma=2$ and $0<\beta<\frac{1}{2}$. For any $\theta_0 \in \dot H^{\frac{1}{2}} (\Bbb R)$, there exists $T=T(\Vert \theta_{0} \Vert_{\dot H^{\frac{1}{2}}})$ such that a unique local-in-time solution exists in $C([0, T); \dot H^{\frac{1}{2}} (\Bbb R)) \cap L^2\left([0, T); H^{\frac{3}{2}} (\Bbb R)\right)$.
\end{theorem}
\begin{proof}
By recalling that $\Lambda^{2\beta}=(-\partial_{xx})^{\beta}$ we get
\begin{equation*}
\begin{split}
\frac{1}{2} \frac{d}{dt} \Vert \theta \Vert^{2}_{\dot H^{\frac{1}{2}}} +\left\Vert \Lambda^{\frac{1+\gamma}{2}} \theta \right\Vert^{2}_{L^{2}} &=- \int \Lambda^{\frac{1}{2}} \theta \Lambda^{\frac{1}{2}}\left\{\left(\mathcal{H}(-\partial_{xx})^{\beta}\theta\right)\theta_{x} \right\} dx \\
&=- \int \theta_{x}\Lambda \theta \ \mathcal{H}(-\partial_{xx})^{\beta}\theta dx=- \int \theta_{x} \mathcal{H} \theta_{x} \mathcal{H}(-\partial_{xx})^{\beta}\theta dx.
\end{split}
\end{equation*}
We now use the $\mathcal{H}^1$-BMO duality to estimate the right hand side of the last equality. By using the estimate \eqref{hardy} and $\dot H^{\frac{1}{2}} \hookrightarrow BMO$, we obtain
\[
\Vert \theta_{x} \mathcal{H} \theta_{x} \Vert_{\mathcal{H}^1} \leq C\Vert \theta \Vert^{2}_{\dot H^{1}}, \quad \left\|\mathcal{H}(-\partial_{xx})^{\beta}\theta\right\|_{BMO}\leq C\Vert \theta \Vert_{\dot H^{2\beta +\frac{1}{2}}}
\]
and thus we have
\[
\frac{1}{2} \frac{d}{dt} \Vert \theta \Vert^{2}_{\dot H^{\frac{1}{2}}} +\left\Vert \Lambda^{\frac{1+\gamma}{2}} \theta \right\Vert^{2}_{L^{2}} \leq C\Vert \theta \Vert^{2}_{\dot H^{1}} \Vert \theta \Vert_{\dot H^{2\beta +\frac{1}{2}}}.
\]
We now fix $\gamma=2$ and use the interpolation inequalities
\[
\Vert \theta \Vert^{2}_{\dot H^1} \leq \Vert \theta \Vert_{\dot H^{\frac{3}{2}}}\Vert \theta \Vert_{\dot H^{\frac{1}{2}}}, \quad \Vert \theta \Vert_{\dot H^{2\beta +\frac{1}{2}}} \leq \Vert \theta \Vert^{2\beta}_{\dot H^{\frac{3}{2}}} \Vert \theta \Vert^{1-2\beta}_{\dot H^{\frac{1}{2}}},
\]
where the condition $\frac{1}{2}\leq 2\beta+\frac{1}{2} \leq \frac{3}{2}$, valid for $\beta \in \left(0,\frac{1}{2}\right)$, gives the second inequality. Hence, we obtain
\begin{equation*}
\begin{split}
\frac{1}{2} \frac{d}{dt} \Vert \theta \Vert^{2}_{\dot H^{\frac{1}{2}}} +\left\Vert \Lambda^{\frac{3}{2}} \theta \right\Vert^{2}_{L^{2}} &\leq \Vert \theta \Vert^{2}_{\dot H^{1}} \Vert \theta \Vert_{\dot H^{2\beta + \frac{1}{2}}} \\
& \leq \Vert \theta \Vert^{1+2\beta}_{\dot H^{\frac{3}{2}}} \Vert \theta \Vert^{2-2\beta}_{\dot H^{\frac{1}{2}}} \leq \frac{1}{2}\Vert \theta \Vert^{2}_{\dot H^{\frac{3}{2}}} + 2\Vert \theta \Vert^{\frac{4(1-\beta)}{1-2\beta}}_{\dot H^{\frac{1}{2}}},
\end{split}
\end{equation*}
where we use the condition $\beta \in \left(0,\frac{1}{2}\right)$ again to derive the inequality. This implies local existence of a unique solution up to some time $T=T(\Vert \theta_{0} \Vert_{\dot H^{\frac{1}{2}}})$.
\end{proof}
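The two interpolation inequalities used in the proof above are exact consequences of the Cauchy-Schwarz and H\"older inequalities applied on the Fourier side to $\|\theta\|^{2}_{\dot H^{s}}\simeq\sum_{\xi}|\xi|^{2s}|\hat\theta(\xi)|^{2}$. A quick discrete check (grid size, test function and the value of $\beta$ below are arbitrary illustrative choices):

```python
import numpy as np

N = 128
x = 2 * np.pi * np.arange(N) / N
k = np.abs(np.fft.fftfreq(N, d=1.0 / N))

theta = np.exp(np.cos(x)) * np.sin(x)             # arbitrary smooth test function
hat2 = np.abs(np.fft.fft(theta)) ** 2             # |theta-hat|^2 (fixed normalization)

def hs2(s):
    # squared homogeneous H^s seminorm, computed on the Plancherel side
    return np.sum(k ** (2 * s) * hat2)

# ||th||_{H^1}^2 <= ||th||_{H^{3/2}} ||th||_{H^{1/2}}   (Cauchy-Schwarz in xi)
ok1 = hs2(1.0) <= np.sqrt(hs2(1.5) * hs2(0.5)) * (1 + 1e-12)

# ||th||_{H^{2b+1/2}} <= ||th||_{H^{3/2}}^{2b} ||th||_{H^{1/2}}^{1-2b}   (Holder)
b = 0.3
lhs = hs2(2 * b + 0.5) ** 0.5
rhs = hs2(1.5) ** b * hs2(0.5) ** ((1 - 2 * b) / 2)
```

Both inequalities hold exactly at the discrete level, since each Fourier mode contributes $|\xi|^{2s}|\hat\theta|^{2}=(|\xi|^{3}|\hat\theta|^{2})^{\lambda}(|\xi||\hat\theta|^{2})^{1-\lambda}$ with $\lambda=2\beta$.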
\subsection{Global well-posedness}
We finally deal with (\ref{singular model d}) with $\gamma=2$.
\begin{theorem} \label{GW}
Let $\gamma=2$ and $\beta<\frac{1}{4}$. For any $\theta_0 \in H^2 (\Bbb R)$, there exists a unique global-in-time solution in $C\left([0, \infty); H^2 (\Bbb R)\right)$.
\end{theorem}
\begin{proof}
By Theorem \ref{LW and blow-up criterion theorem}, we only need to control the quantities in (\ref{blow-up criterion 1}). Let $u=-\mathcal{H}(\partial_{xx})^{\beta}\theta$. We first note that (\ref{singular model d}) satisfies the maximum principle and so
\[
\left\|\theta(t)\right\|_{L^{\infty}}\leq \left\|\theta_{0}\right\|_{L^{\infty}}\leq C \|\theta_{0}\|_{H^{2}}.
\]
We take the $L^2$ inner product of (\ref{singular model d}) with $\theta$. Then,
\begin{eqnarray}
\frac{1}{2}\frac{d}{dt}\|\theta\|^{2}_{L^{2}} +\|\theta_{x}\|^{2}_{L^{2}}=-\int u\theta_{x}\theta dx\leq \|\theta_{0}\|_{L^{\infty}} \|u\|_{L^{2}}\|\theta_{x}\|_{L^{2}}.
\end{eqnarray}
Since
\[
\|u\|_{L^{2}} \leq C\|\theta\|^{1-2\beta}_{L^{2}} \|\theta_{x}\|^{2\beta}_{L^{2}} \quad \text{for $\beta< \frac{1}{2}$},
\]
we have
\begin{eqnarray} \label{theta L2 bound}
\|\theta(t)\|^{2}_{L^{2}} +\int^{t}_{0}\|\theta_{x}(s)\|^{2}_{L^{2}}ds\leq C\left(t, \|\theta_{0}\|_{H^{2}}\right).
\end{eqnarray}
\noindent We next apply $\partial_x $ to (\ref{singular model d}), take the $L^2$ inner product with $\theta_x$, and integrate by parts to obtain
\begin{equation*}
\begin{split}
\frac12 \frac{d}{dt}\|\theta_x\|_{L^2} ^2 + \left\|\theta_{xx}\right\|_{L^2}^2 =\int u \theta_x \theta_{xx}dx \leq 2\|u\|^{2}_{L^{\infty}}\|\theta_{x}\|^{2}_{L^{2}}+\frac{1}{2}\left\|\theta_{xx}\right\|_{L^2}^2.
\end{split}
\end{equation*}
Since
\[
\|u\|^{2}_{L^{\infty}} \leq C\|\theta\|^{2}_{L^{2}}+C\|\theta_{x}\|^{2}_{L^{2}} \quad \text{when $\beta<\frac{1}{4}$},
\]
we obtain
\begin{eqnarray} \label{theta H1 bound}
\|\theta_{x}(t)\|^{2}_{L^{2}} +\int^{t}_{0}\|\theta_{xx}(s)\|^{2}_{L^{2}}ds\leq C\left(t, \|\theta_{0}\|_{L^{1}}, \|\theta_{0}\|_{H^{2}}\right) \quad \text{when $\beta<\frac{1}{4}$.}
\end{eqnarray}
By (\ref{theta L2 bound}) and (\ref{theta H1 bound}), we finally obtain
\[
\int^{t}_{0}\left(\|\theta_{x}(s)\|_{L^{\infty}}+ \|u_{x}(s)\|_{L^{\infty}}\right)ds \leq C\int^{t}_{0}\left(\|\theta_{x}(s)\|_{L^{2}}+ \|\theta_{xx}(s)\|_{L^{2}}\right)ds \leq C\left(t, \|\theta_{0}\|_{L^{1}}, \|\theta_{0}\|_{H^{2}}\right)
\]
and so we complete the proof of Theorem \ref{GW}.
\end{proof}
\section{Appendix}
This appendix is based on \cite{Bahouri}. We first provide notation and definitions from Littlewood-Paley theory. Let $\mathcal{C}$ be the annulus centered at $0$ with inner radius $\frac{3}{4}$ and outer radius $\frac{8}{3}$. We take smooth radial functions $(\chi, \phi)$ with values in $[0,1]$, supported on the ball $B_{\frac{4}{3}}(0)$ and on $\mathcal{C}$ respectively, satisfying
\begin{equation}
\begin{split}
& \chi(\xi)+ \sum^{\infty}_{j=0}\phi\left(2^{-j}\xi\right)=1 \ \ \forall \ \xi \in \mathbb{R}^{d},\\
& \sum^{\infty}_{j=-\infty}\phi\left(2^{-j}\xi\right)=1 \ \ \forall \ \xi \in \mathbb{R}^{d}\setminus\{0\},\\
& \left|j-j^{'}\right|\ge 2 \ \Longrightarrow \ \text{supp}\ \phi\left(2^{-j}\cdot\right)\bigcap \text{supp}\ \phi\left(2^{-j^{'}}\cdot\right)=\emptyset,\\
& j\ge 1 \ \Longrightarrow \ \text{supp}\ \chi \bigcap \text{supp}\ \phi\left(2^{-j}\cdot\right)=\emptyset.
\end{split}
\end{equation}
From now on, we use the notation
\[
\phi_{j}(\xi)=\phi\left(2^{-j}\xi\right).
\]
We define dyadic blocks and lower frequency cut-off functions.
\begin{equation}
\begin{split}
& h=\mathcal{F}^{-1}\phi, \quad \widetilde{h}=\mathcal{F}^{-1}\chi,\\
& \Delta_{j}f=\phi_{j}\left(D\right)f=2^{jd} \int_{\mathbb{R}^{d}} h\left(2^{j}y\right)f(x-y)dy,\\
& S_{j}f=\chi\left(2^{-j}D\right)f=2^{jd} \int_{\mathbb{R}^{d}} \widetilde{h}\left(2^{j}y\right)f(x-y)dy,\\
& \Delta_{-1}f=\chi\left(D\right)f=\int_{\mathbb{R}^{d}} \widetilde{h}\left(y\right)f(x-y)dy.
\end{split}
\end{equation}
Then, the homogeneous Littlewood-Paley decomposition is given by
\[
f=\sum_{j\in \mathbb{Z}} \Delta_{j}f \ \ \text{in} \ \ \mathcal{S}^{'}_{h},
\]
where $\mathcal{S}^{'}_{h}$ is the space of tempered distributions $u\in \mathcal{S}^{'}$ such that
\[
\lim_{j\rightarrow -\infty}S_{j}u=0\quad \text{in $\mathcal{S}'$}.
\]
We now define the homogeneous Besov spaces:
\[
\dot{B}^{s}_{p,q} =\left\{f\in \mathcal{S}^{'}_{h}: \ \left\|f\right\|_{\dot{B}^{s}_{p,q}}=\left\|2^{js}\left\|\Delta_{j}f\right\|_{L^{p}}\right\|_{l^{q}(\mathbb{Z})}<\infty \right\}.
\]
We also recall Bernstein's inequalities in 1D: for $1\leq p\leq q \leq \infty$ and $k\in \mathbb{N}$,
\begin{eqnarray}\label{bernstein}
\sup_{|\alpha|=k} \left\|\partial^{\alpha}\Delta_{j}f \right\|_{L^{p}} \leq C 2^{jk} \left\|\Delta_{j}f \right\|_{L^{p}}, \quad \left\|\Delta_{j}f \right\|_{L^{q}} \leq C 2^{j\left(\frac{1}{p}-\frac{1}{q}\right)} \left\|\Delta_{j}f \right\|_{L^{p}}.
\end{eqnarray}
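For the first inequality in (\ref{bernstein}) with $p=2$, Plancherel even gives the explicit constant $\frac{8}{3}2^{j}$, since $\widehat{\Delta_{j}f}$ is supported where $|\xi|\leq\frac{8}{3}2^{j}$. A discrete illustration, building a function whose Fourier support lies in the dyadic annulus for $j=4$ (the random coefficients are arbitrary):

```python
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
dx = 2 * np.pi / N
j = 4
lo = int(np.ceil(0.75 * 2 ** j))            # inner radius 3/4 * 2^j = 12
hi = int(np.floor(8.0 / 3.0 * 2 ** j))      # outer radius 8/3 * 2^j ~ 42.7

rng = np.random.default_rng(0)
f = np.zeros(N)
for m in range(lo, hi + 1):                  # Fourier support in the annulus
    f += rng.normal() * np.cos(m * x) + rng.normal() * np.sin(m * x)

k = np.fft.fftfreq(N, d=1.0 / N)
fx = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

l2 = np.sqrt(np.sum(f ** 2) * dx)
l2x = np.sqrt(np.sum(fx ** 2) * dx)
ratio = l2x / l2                             # lies in [(3/4)2^j, (8/3)2^j]
```

By Plancherel the ratio $\|f'\|_{L^{2}}/\|f\|_{L^{2}}$ is squeezed between the inner and outer radii of the annulus, which is the content of both directions of the first Bernstein inequality.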
\section*{Acknowledgments}
H.B. was supported by the National Research Foundation of Korea (NRF-2015R1D1A1A01058892).
R.G.B. was supported by the LABEX MILYON (ANR-10-LABX-0070) of Universit\'e de Lyon, within the program ``Investissements d'Avenir'' (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR), and by the Universidad de Cantabria.
O.L. was partially supported by the Marie-Curie Grant, acronym: TRANSIC, from the FP7-IEF program and by the ERC through the Starting Grant project H2020-EU.1.1.-63922.
Both O. L. and R.G.B. were partially supported by the Grant MTM2014-59488-P from the former Ministerio de Econom\'ia y Competitividad (MINECO, Spain).
\section{Introduction}
Survival analysis has been the subject of many statistical studies in the past decades (see e.g.\ \cite{klein+m:2005,grambsch+t:2000}) and is commonly used in clinical trials (see e.g.\ \cite{collett:2015}), where the traditional main goal is to model the time to death of patients having a certain disease. When analysing the effect of some covariates $X\in \mathbb{R}^{d}$ on a survival time $T\in \mathbb{R}_{\geq 0}$, a common approach in the literature is based on semiparametric estimation. The seminal paper by \cite{cox1972} introduces the so-called semiparametric \textit{proportional hazards} model, often referred to as the Cox model, which is given by the following set of conditional survival functions defined on $\mathbb{R}_{\geq 0}\times \mathbb{R}^{d}$:
\begin{align*}
\mathcal P_{1}=\left\{ (t,x) \mapsto S(t | x) =\exp(-\exp(\gamma^Tx) \Lambda( t )) \ :\ \gamma\in\mathbb{R}^{d},\ \Lambda\in \mathcal G \right\},
\end{align*}
where $\mathcal G$ is the space of absolutely continuous cumulative hazard functions defined on $\mathbb{R}_{\geq 0}$. In this standard semiparametric model the elements are characterized by the Euclidean parameter $\gamma$, called the regression vector, and the infinite dimensional parameter $\Lambda$, called the cumulative hazard. These parameters are estimated by maximizing the \textit{profile likelihood} for $\gamma$ \citep{cox1972} and by computing the Breslow estimator for $\Lambda$ \citep{breslow1972}. Both estimators are \textit{nonparametric maximum likelihood estimators} (NPMLE), as defined in \cite{murphy1994}. Known asymptotic results include the asymptotic normality \citep{gill1982}, the semiparametric efficiency of the regression parameters \citep{wellner1988} as well as of the cumulative hazard \citep{wellner1993,kosorok2008}, and the validity of general bootstrap schemes \citep{wellner1996}.
In many data sets, especially the ones arising from clinical trials, a certain proportion of the individuals will never experience the event of interest. These individuals are referred to as the \textit{cured subjects}. As the survival function $t\mapsto S(t)$ does not tend to $0$ as $t\rightarrow +\infty$ in that case (but rather tends to the proportion of cured subjects), specific models need to be considered to account for the \textit{improperness} of the distribution of $T$. The \textit{promotion time cure model} is an extension of the Cox model specially designed to handle the presence of cured subjects in the data. It is defined as the set of conditional survival functions defined on $\mathbb{R}_{\geq 0} \times \mathbb{R}^{d}$, given by
\begin{align*}
\mathcal P_2=\left\{ (t,x) \mapsto S(t|x) =\exp\left(-\eta(\beta_1+\beta_2^Tx) F(t)\right) \ :\ \beta=(\beta_1,\beta_2) \in \mathbb{R}^{d +1},\ F \in \mathcal F \right\},
\end{align*}
where $\mathcal F$ denotes the space of absolutely continuous cumulative distribution functions on $\mathbb{R}_{\geq 0} $ and $\eta:\mathbb R\rightarrow \mathbb R_{> 0} $ is a given function. This model was introduced by \cite{yakovlev1996} and seems appropriate to treat cure data as, for every $x\in \mathbb R^{d}$, $\lim_{t\r +\infty}S(t|x) >0$, so that each subject has a positive chance of being cured. In model $\mathcal P_2$, the parameter vector $\beta$ has an intercept whereas $\gamma$ in model $\mathcal P_1$ does not. This is because $\lim_{t \rightarrow +\infty} \Lambda(t) = +\infty$ and hence an intercept in model $\mathcal P_1$ would not be identified, whereas in model $\mathcal P_2$ the function $\Lambda(t)$ is replaced by $F(t)$, which tends to 1 as $t \rightarrow +\infty$. Estimation of $\mathcal P_2$ has been studied by \cite{tsodikov1998a,tsodikov1998b,tsodikov2001,chen1999,ibrahim2001,tsodikov2003,zeng2006,portier+e+v:2017}, among many others. Certain parallels might be drawn between the statistical properties related to the estimators of the classical Cox model and the ones related to the promotion time cure model. The classical estimators of $\beta$ and $F$ are the NPMLE's \citep{zeng2006}. In \cite{zeng2006}, the authors show that the resulting NPMLE is asymptotically normal and moreover that the estimated vector of regression parameters is semiparametrically efficient. In \cite{portier+e+v:2017} it is shown that the whole model is estimated efficiently and the validity of a general weighted bootstrap is proved.
There is still an important difference between the NPMLE's associated with models $\mathcal P_1$ and $\mathcal P_2$. The NPMLE of the Cox model has a much simpler expression than the NPMLE of the promotion time cure model. Within model $\mathcal P_1$, the estimated regression parameter maximizes a known (explicit) objective function and the estimated cumulative hazard is expressed through a closed formula \citep{gill1982}. Within model $\mathcal P_2$, the estimated regression parameter is also the maximizer of a certain objective function, but this time the objective function is only implicitly defined \citep{portier+e+v:2017}. The same is true for the estimated cumulative hazard in $\mathcal P_2$, which is only known up to a quantity that is itself implicitly defined. These features cause important complications at two different stages. First, estimators from $\mathcal P_2$ are theoretically more difficult to describe than estimators from $\mathcal P_1$, which eventually deteriorates the accuracy of confidence intervals and testing procedures. Second, computing the estimators in $\mathcal P_2$ raises numerical difficulties, e.g., long computation times and convergence to local optima. Given this, the question is whether it is legitimate to rely on such a complicated estimation procedure for $\mathcal P_2$. In other words, does the presence of cured subjects in the data prevent us from having an estimation procedure as simple as in the Cox model?
The aim of this paper is to provide a new model dedicated to cure data analysis and for which the NPMLE overcomes the aforementioned difficulties associated with $\mathcal P_2$.
The undesirable complications when estimating $\mathcal P_2$ come from the particular nature of the parameter space $\mathcal F$. This space is formed by cumulative distribution functions $F$ that satisfy the constraint $\lim_{t\rightarrow \infty} F (t) = 1$. This constraint is handled through a Lagrange procedure involving an additional, implicitly defined parameter: the \textit{Lagrange multiplier}. It turns out that the constraint can be alleviated by including an additional parameter in the model, replacing $F$ by $\theta F$, with $\theta>0$. We define the set of conditional survival functions $\mathbb{R}_{\geq 0} \times \mathbb{R}^{d}\rightarrow \mathbb{R}_{\geq 0}$, given by
\begin{align*}
\mathcal P_3=\big\{ (t,x) \mapsto S(t|x) = \exp(-g(\gamma,x)\theta F(t)) \ :\ (\gamma,\theta)\in \mathbb{R}^q\times \mathbb{R}_{>0} ,\ F \in \mathcal F \big\},
\end{align*}
where $g:\mathbb R^q \times \mathbb R^{d} \rightarrow \mathbb{R}_{> 0} $ is a given function and $q\in \mathbb N$. Note that in the present form, $\mathcal P_3$ handles biological models as developed in \cite{chen1999} to analyse time to relapse of cancer through the distribution of the carcinogenic cells. It also includes a cure version of the Cox model when $g(\gamma,x)=\exp(\gamma^Tx)$; in this case, it coincides with $\mathcal P_2$ with $\eta = \exp$. Otherwise $\mathcal P_2$ and $\mathcal P_3$ are different. In $\mathcal P_3$, $\theta$ is interpreted as a simple multiplicative effect on the cumulative distribution function, whereas the effect of $\beta_1$ in $\mathcal P_2$ must be analysed depending on the shape of the function $\eta$.
The main contributions of the paper are listed below.
\begin{enumerate}[(i)]
\item As the NPMLE of $\mathcal P_3$ is much simpler to evaluate than the one associated with $\mathcal P_2$, the proposed methodology provides a significant improvement in terms of computational ease. In particular, we show that the NPMLE's associated with $\mathcal P_2$ and $\mathcal P_3$ coincide when $\eta=\exp$ and $g(\gamma, x) = \exp(\gamma^T x)$. Hence our approach provides a new way to compute the NPMLE of $\mathcal P_2$ when $\eta=\exp$ (the most commonly used choice), which is simpler than the existing procedures \citep{zeng2006,portier+e+v:2017}.
\item We derive the asymptotics of the NPMLE associated with $\mathcal P_3$. As in the case of the Cox model, we have closed-formulas for the variance of the limiting Gaussian distributions. This allows us to develop some tests and to build confidence intervals on some quantities of interest as for instance the proportion of cure given the value of a covariate $x$. The finite sample size accuracy of the confidence intervals is investigated with the help of simulations.
\item Moreover, as the function $g$ needs to be chosen by the analyst, we consider a likelihood-based methodology to select an appropriate function $g$ among a family of proposals. Such an approach is also followed by \cite{huang:2006}, who investigate spline estimation of the function $g$ in the case of the classical Cox model.
\end{enumerate}
In Section \ref{s2} we present the framework of the paper and derive the NPMLE of model $\mathcal P_3$. We also consider the links with the NPMLE of $\mathcal P_2$.
In Section \ref{s3}, the asymptotic behaviour of the NPMLE of $\mathcal P_3$ is studied. In Sections \ref{s6} and 5, we provide simulations and a real data analysis to give some insight into the finite sample performance of our approach. The proofs are collected in the Appendix.
\section{The data, the model, the estimator}\label{s2}
\subsection{Framework}\label{s21}
We focus on the standard right censoring context: the lifetime $T$ of interest is right censored by some random variable $C$ so that we only observe $Y=\min(T,C)$, $\delta = \mathds 1_{\{T\leq C\}}$ and the vector of covariates $X$. This means that we know whether the variable of interest $T$ has been observed or censored. The covariates $X$ are in contrast always observed, and we further denote by $ \mathcal S\subseteq \mathbb R^d$ their support.
We suppose conditional independence between $T$ and $ C$, given $X$. In practice, as is the case for instance in clinical trials, $C$ might be bounded. This prevents us from observing any cured subjects, defined by $T=+\infty$. A way around this problem is to assume the existence of a threshold $\tau\in \mathbb{R}$ such that
\begin{align*}
\{T>\tau\} \Rightarrow \{T=+\infty\} .
\end{align*}
Therefore, whenever $Y$ is observed to be greater than $\tau$, the individual is known to be cured. We use model $\mathcal P_3$ for modelling the distribution of $T$ given $X$. Hence we further assume that the conditional survival function of $T$ given $X=x$ is given by $S_0(t|x) = \exp(-g (\gamma_0, x) \theta_0 F_0(t) )$, for some $\gamma_0\in \mathbb R^q$, $\theta_0\in \mathbb R_{> 0} $, and $F_0$ an absolutely continuous cumulative distribution function. Let $P$ denote the probability measure associated with $(Y,C,X)$. Supposing in addition that $P(C>\tau|X)>0$ a.s., we obtain that $P(Y>\tau |X)>0$ a.s., meaning that every individual can be cured. These assumptions are stated in Section \ref{s3} in (H\ref{cond:identification1}).
A central object in our study is the counting process $N(y)=\delta \mathds 1_{\{Y\leq y \}}$, $y\in \mathbb R _{\geq 0}$, as it possesses some useful martingale properties as developed in \cite{fleming1991} and \cite{andersen+b+g+k:1993}. Define the random process $R(y)=\Delta\mathds 1_{\{Y\geq y\}}+(1-\Delta)$, $y\in \mathbb R _{\geq 0}$, with $\Delta = \mathds 1_{\{Y\leq \tau \}}$. It equals $1$ as long as the individual is still at risk. The presence of cure implies that the event $\{R\equiv 1\}$ has positive probability. The compensator of $N$ with respect to the $\sigma$-field $\mathcal F_y$ generated by $\{N(u),\, \mathds 1_{\{ Y\leq u,\delta = 0\}},\, X \, :\, 0\leq u\leq y \}$ is the process $y\mapsto \int_0^y g(\gamma_0,X)R(u) \theta_0 dF_0(u)$. That is, $M$ defined by
\begin{align*}
\left\{ \begin{array}{l}
M(0)=0\\
dM(y) = dN(y) - g(\gamma_0,X)R(y) \theta_0 dF_0(y),\qquad y\in \mathbb R_{\geq 0}
\end{array} \right.
\end{align*}
is a martingale with respect to $\mathcal F_y$ \cite[Theorem 1.3.1]{fleming1991}. In particular, we have the formula \citep[Theorem 1.5.1]{fleming1991}
\begin{align}\label{formula:martingale}
E\left[ \delta h(Y,X) \right] =\int E\left[ h(u,X) g(\gamma_0, X) R(u)\right] \theta_0 dF_0(u)
\end{align}
for any bounded measurable function $h$. Finally, the following identity shall be useful: writing $\Lambda_0=\theta_0 F_0$ for the true cumulative hazard, for any bounded measurable functions $h$ and $\tilde h$, we have \citep[Theorem 2.4.2]{fleming1991}
\begin{align}\label{formula:quadratic_variation}
E\left[ \int h(u) dM(u) \int \tilde h(u) dM(u) \right] = \int h(u)\tilde h(u) E [g(\gamma_0, X) R(u)] d\Lambda_0(u) .
\end{align}
\subsection{Nonparametric maximum likelihood}\label{s22}
Let $(T_i,C_i,X_i)_{i\in \mathbb N}$ denote a sequence of independent and identically distributed random variables with law $P$, as described in the previous subsection. The underlying probability measure is denoted by $\mathbb P$. The estimator we consider shall be based on the observed variables: $Y_i=\min(T_i,C_i)$, $\delta_i = \mathds 1_{\{T_i\leq C_i\}}$, $X_i$, $i=1,\ldots, n$. Let $N_i(y)=\delta_i \mathds 1_{\{Y_i\leq y \}}$, $R_i(y)=\Delta_i\mathds 1_{\{Y_i\geq y\}}+(1-\Delta_i)$, $\Delta_i = \mathds 1_{\{Y_i \le \tau\}}$, and let $M_i$ denote the martingale defined by $M_i(0)=0$ and $dM_i(y) = dN_i(y)- g(\gamma_0,X_i)R_i(y)\theta_0 dF_0(y)$, for $i=1,\ldots, n$.
Under the current data generating process, assuming that $F$ is absolutely continuous, and assuming non-informative censoring \citep{sasieni:1992}, the likelihood of an observation $(y,\delta,x)$ in model $\mathcal P_3$ is given by
\begin{align}\label{formulalikelihood}
\t{Lik}(y,\delta,x) = \{g(\gamma,x)\theta f(y)\}^\delta\ \exp\Big[ - g(\gamma,x)\theta \{\Delta F(y)+ (1-\Delta)\}\Big],
\end{align}
where $f$ stands for the derivative of $F$. Model $\mathcal P_3$ can be rewritten as the set of all survival functions of the form $ \exp(-g(\gamma,x)\Lambda(t))$ where $\gamma\in \mathbb{R}^{q}$ and $\Lambda $ belongs to the space of absolutely continuous cumulative hazard functions such that $\Lambda(\tau)=\lim_{y\rightarrow +\infty} \Lambda(y)<+\infty $. Note that there is a one-to-one relationship between the two sets of parameters $(\theta,F)$ and $\Lambda$, i.e., $ \Lambda =\theta F$ and $\theta=\lim_{t\rightarrow +\infty} \Lambda(t)$. As a consequence the likelihood in (\ref{formulalikelihood}) can be expressed in terms of $(\gamma,\theta,F)$ or, equivalently, in terms of $(\gamma, \Lambda)$. Switching from one parametrization to the other is straightforward. For the sake of simplicity, we derive the NPMLE with respect to $(\gamma, \Lambda) $ in the next few lines. Following \cite{murphy1994}, the NPMLE is defined as
\begin{align}\label{maximumlikelihood}
(\w \gamma, \w \Lambda)
&= \underset{\gamma\in \mathbb R^q , \, \Lambda}{\text{argmax}}
\ \sum_{i=1}^n \Big[{\delta_i} \log(g(\gamma,X_i) \Lambda\{Y_i\}) -g(\gamma,X_i) \{\Delta_i \Lambda(Y_i)+ (1-\Delta_i)\Lambda(+\infty) \}\Big],
\end{align}
where the maximum is taken over $\Lambda$ lying in the space of (possibly discrete) cumulative hazard functions, and $\Lambda\{y\}=\Lambda(y)- \lim_{t\r y^-} \Lambda(t)$ is the size of the jump of $\Lambda$ at $y$. As is common practice for computing the NPMLE in semiparametric models, the above NPMLE might be profiled over the nuisance parameter $\Lambda$ \citep{murphy2000,kosorok2008}. Maximizing along submodels $d\Lambda_s = (1+sh)d\Lambda$, $s\in \mathbb R$, with $h$ a bounded real function, the value of $\Lambda$ which maximizes (\ref{maximumlikelihood}), for each $\gamma\in \mathbb R^q$, is a solution of
\begin{align*}
n^{-1}\sum_{i=1}^n \delta_i h(Y_i) - \int \w Q_\gamma (u) h(u) d\Lambda(u)=0,
\end{align*}
with $\w Q_\gamma(u)=n^{-1}\sum_{i=1}^n g(\gamma,X_i) R_i(u)$. The solution of the previous equation is given by
\begin{align*}
\w \Lambda_\gamma (y) = n^{-1} \sum_{i=1}^n \frac{\delta_i\mathds 1 _{\{Y_i\leq y \}}}{\w Q_\gamma (Y_i) },\qquad \qquad y\in\mathbb R_{\geq 0}.
\end{align*}
This is then plugged into (\ref{maximumlikelihood}) to get that
\begin{align}
\left\{ \begin{array}{l}
\displaystyle \w \gamma \in \underset{\gamma\in \mathbb R^q}{\text{argmax}}
\ \prod_{i=1}^n \left \{ {g(\gamma,X_i)} / {\w Q_\gamma (Y_i) }\right\} ^{\delta_i} \\
\displaystyle \w \Lambda (y) =n^{-1} \sum_{i=1}^n \w Q_{\w \gamma} (Y_i) ^{-1} {\delta_i\mathds 1 _{\{Y_i\leq y \}}},\qquad \qquad y\in\mathbb R_{\geq 0}.
\end{array} \right. \label{NPMLEcoxmodel_gamma_Lambda}
\end{align}
Returning to the parameters $(\theta,F)$ of model $\mathcal P_3$, the NPMLE is given by
\begin{align}
\left\{ \begin{array}{l} \displaystyle \w\theta = n^{-1} \sum_{i=1}^n \w Q_{\w \gamma} (Y_i)^{-1} {\delta_i}\\
\displaystyle \w F(y) = (\w \theta n ) ^{-1} \sum_{i=1}^n {\w Q_{\w \gamma} (Y_i)^{-1} } {\delta_i\mathds 1 _{\{Y_i\leq y \}}}, \qquad \qquad y\in\mathbb R_{\geq 0}. \end{array} \right. \label{NPMLE_coxmodel_theta_F}
\end{align}
At fixed sample size $n$, the quantities involved in the previous equations are well defined as soon as, for instance, there exists $i$ such that $\delta_i=1$ and the maximum in (\ref{NPMLEcoxmodel_gamma_Lambda}) is taken over a known compact set $B\subset \mathbb R^q$ on which the function $\gamma \mapsto g (\gamma, x) $ is continuous, for every $x\in \mathcal S$.
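For concreteness, the estimators in (\ref{NPMLEcoxmodel_gamma_Lambda}) and (\ref{NPMLE_coxmodel_theta_F}), together with the plug-in cure proportion discussed below, can be sketched numerically as follows. This is an illustrative sketch only: the Cox-type link $g(\gamma,x)=\exp(\gamma^Tx)$ and the Nelder--Mead search are our own choices, and the function names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def npmle_p3(Y, delta, X, tau, g=lambda gam, X: np.exp(X @ gam)):
    """NPMLE of (gamma, theta, F) in model P3; the default link
    g(gamma, x) = exp(gamma^T x) is an illustrative choice."""
    n = len(Y)
    Delta = (Y <= tau).astype(float)

    def Q_hat(gam):
        # Q_hat_gamma(Y_i) = n^{-1} sum_j g(gamma, X_j) R_j(Y_i), with
        # R_j(u) = Delta_j 1{Y_j >= u} + (1 - Delta_j)
        R = Delta[None, :] * (Y[None, :] >= Y[:, None]) + (1.0 - Delta)[None, :]
        return (R * g(gam, X)[None, :]).mean(axis=1)

    def neg_profile_loglik(gam):
        # minus the logarithm of the profiled likelihood for gamma
        return -np.sum(delta * (np.log(g(gam, X)) - np.log(Q_hat(gam))))

    gam_hat = minimize(neg_profile_loglik, np.zeros(X.shape[1]),
                       method="Nelder-Mead").x
    w = delta / Q_hat(gam_hat)               # n times the jumps of Lambda-hat
    theta_hat = w.mean()                     # Lambda-hat(tau)
    F_hat = lambda y: np.sum(w * (Y <= y)) / (n * theta_hat)
    p_hat = lambda x: np.exp(-g(gam_hat, np.atleast_2d(x))[0] * theta_hat)
    return gam_hat, theta_hat, F_hat, p_hat
```

By construction, the step function returned as \texttt{F\_hat} has total mass $1$, mirroring (\ref{NPMLE_coxmodel_theta_F}).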
Note also that the estimation of the parameters depends only on the observed variables $(Y_i,\delta_i,X_i)$ such that $Y_i\leq \tau$, and $(\Delta_i,X_i)$ such that $Y_i>\tau$, $i=1,\ldots, n$. It follows that moving the threshold over $[Y_{(n,\delta)},+\infty)$, with $Y_{(n,\delta)}=\max_{i=1,\ldots,n} Y_i \delta_i$, has no effect on the NPMLE. In practice the threshold can then be fixed at $Y_{(n,\delta)}$.
An important point in many situations is to evaluate the proportion of cured subjects in the population under study for a given covariate vector $x\in \mathcal S$, i.e., $p_0(x) = \exp(-g(\gamma_0, x) \theta_0)$. The estimator of $p_0(x)$, within our framework, naturally follows from the plug-in rule:
\begin{align}\label{def:cure_proportion}
\w p(x) = \exp(-g(\w\gamma,x)\w \theta).
\end{align}
\subsection{Link with other estimators}
\subsubsection{Cox and Breslow estimator}
Model $\mathcal P_3$ is designed to handle the presence of cured subjects in the data whereas the traditional Cox model, $\mathcal P_1$, is not. However, when $g(\gamma,x) = \exp(\gamma^Tx)$, (\ref{NPMLEcoxmodel_gamma_Lambda}) becomes very close to the well-known formulas of the classical Cox and Breslow estimators of $\gamma$ and $\Lambda$, respectively. As a consequence, the derivation of the asymptotics for $(\w \gamma, \w F,\w \theta)$ is similar to that for the Cox and Breslow estimators, provided for instance in \cite{gill1982}.
An interesting difference with the Cox and Breslow estimators comes from the fact that
\begin{align*}
\min_{i = 1,\ldots, n} \w Q_{\w \gamma} (Y_i) \geq n^{-1}\sum_{i=1}^n g(\w\gamma,X_i) (1-\Delta_i).
\end{align*}
From the framework described in the previous section, we deduce that $E[(1-\Delta)|X] > 0$ and $E[g(\gamma,X)(1-\Delta)]>0$, for every $\gamma\in \mathbb R^q$. Consequently, the decreasing function $u\mapsto E[g(\gamma,X)R(u)] $ is bounded from below by the positive constant $E[g(\gamma,X)(1-\Delta)]$. In Lemma \ref{Lemma:probacv}, see the Appendix, this property is shown to hold for $\w Q_\gamma$, uniformly in $\gamma$, with probability going to $1$. This marks a significant difference with respect to classical Cox estimators, in which the quantity corresponding to $\w Q_\gamma$ goes to $0$ at infinity. It in turn implies that the weak convergence of the rescaled $\w \Lambda$ still holds over the whole of $\mathbb R_{\geq 0}$, in contrast with the case of the Cox model, for which such a convergence holds only on bounded intervals. We refer to \cite{gill1982} for a discussion of the study of the Breslow estimator over $[0,+\infty )$.
\subsubsection{Promotion time cure estimator}
The NPMLE for $\mathcal P_2$ is given by \citep{portier+e+v:2017},
\begin{align}
\left\{ \begin{array}{l} \displaystyle
\w \beta \in \underset{(\beta_1,\beta_2)\in \mathbb R^{d+1}} {\text{argmax}}\ \prod_{i=1}^n\left\{ \left(\frac{\eta(\beta_1+ \beta_2^TX_i)}{\w Q_{2,\beta}(Y_i)-\w \lambda_\beta}\right)^{\delta_i} \exp( -\w\lambda_\beta ) \right\} \\
\displaystyle \w G (y) = n^{-1} \sum_{i=1}^n \frac{\delta_i\mathds 1 _{\{Y_i\leq y \}}}{\w Q_{2,\w \beta} (Y_i) -\w \lambda_{\w \beta} }, \end{array} \right. \label{newopti1}
\end{align}
where $\w Q_{2,\beta}(u)=n^{-1}\sum_{i=1}^n\eta(\beta_1+ \beta_2^TX_i) R_i(u)$ and for every $\beta\in \mathbb R^{d+1}$, $\w \lambda_{ \beta}$ is the smallest number verifying $ \sum_{i=1}^n {\delta_i} / (\w Q_{2,\beta} (Y_i) -\w \lambda_{ \beta} )=n$.
Because the function $\beta \mapsto \w \lambda_\beta$ is implicitly defined, it is more difficult to compute the NPMLE of $\mathcal P_2$ through (\ref{newopti1}) than the one of $\mathcal P_3$ through (\ref{NPMLEcoxmodel_gamma_Lambda}) and (\ref{NPMLE_coxmodel_theta_F}). In particular, solving (\ref{newopti1}) requires running an optimization procedure over $\beta$ in which, at each iteration, $\w \lambda_\beta$ must be evaluated by an additional procedure. When $\eta= \exp$, it is actually unnecessary to solve (\ref{newopti1}), since it gives the same results as (\ref{NPMLEcoxmodel_gamma_Lambda}) and (\ref{NPMLE_coxmodel_theta_F}). This is the statement of the following proposition.
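To make the extra cost concrete, consider the inner step of (\ref{newopti1}): at every candidate $\beta$, one must solve $\sum_{i=1}^n \delta_i/(\w Q_{2,\beta}(Y_i)-\lambda)=n$ for $\lambda=\w\lambda_\beta$. A minimal sketch of this root-finding step is given below; the bracketing strategy is ours, justified by the fact that the left-hand side increases from $0$ to $+\infty$ as $\lambda$ ranges over $(-\infty, \min\{\w Q_{2,\beta}(Y_i) : \delta_i=1\})$.

```python
import numpy as np
from scipy.optimize import brentq

def lambda_hat(Q, delta):
    """Solve sum_i delta_i / (Q_i - lam) = n for lam, where Q_i stands for
    Q_hat_{2,beta}(Y_i).  On (-inf, min{Q_i : delta_i = 1}) the left-hand
    side increases from 0 to +inf, so the root is unique and bracketed."""
    n = len(Q)
    Qu = Q[delta == 1]                     # uncensored observations only
    f = lambda lam: np.sum(1.0 / (Qu - lam)) - n
    upper = Qu.min() - 1e-10               # f(upper) > 0
    lower = upper - 1.0
    while f(lower) > 0:                    # expand the bracket downwards
        lower -= 2 * (upper - lower)
    return brentq(f, lower, upper)
```

In the optimization over $\beta$, this solver has to be called at every evaluation of the objective, which is precisely the overhead avoided by (\ref{NPMLEcoxmodel_gamma_Lambda}).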
\begin{proposition}\label{proposition:p2_p3}
Suppose that $\eta=\exp$ and $g(\gamma,x)=\exp(\gamma^Tx)$ for every $x\in \mathbb R^{d}$. If there exists $i$ such that $\delta_i=1$, then $\w \beta^T = (\log(\w \theta), \w \gamma^T ) $ and $\w G = \w F $.
\end{proposition}
\section{Asymptotics}\label{s3}
The asymptotic analysis of the NPMLE associated to model $\mathcal P_3$ is inspired by the approach developed for the Cox model in \cite{gill1982}. We first derive the asymptotic behaviour of the $Z$-estimator $\w\gamma$, and then rely on functional Delta-method type arguments to describe $\w\Lambda$. The monographs of \cite{vandervaart1996} and \cite{kosorok2008} provide the suitable empirical process techniques at each of these steps. The preliminary study of $\w \gamma$ and $\w \Lambda$ (given in Sections \ref{sec:3_1}, \ref{sec:3_2} and \ref{sec:3_3}) provides the basis for describing the behaviour of $\w p(x)$ defined in (\ref{def:cure_proportion}).
As is common for $M$-estimators, the asymptotic study of $\w \gamma$ starts by establishing its consistency. In contrast, for $\w\Lambda$, we rely on the explicit formula (\ref{NPMLEcoxmodel_gamma_Lambda}) to directly show the weak convergence of $n ^{1/2} (\w \Lambda - \Lambda_0)$.
\subsection{Consistency of $\w\gamma$}\label{sec:3_1}
The estimator $\w \gamma$ is defined as a maximizer in (\ref{NPMLEcoxmodel_gamma_Lambda}). To obtain the consistency of $\w \gamma$, we classically show that (i) the maximum of the limiting function is well identified and that (ii) the convergence to this limiting function is uniform; see \citet[Theorem 2.1]{newey1994} or \citet[Theorem 5.7]{vandervaart1998}. To obtain identifiability, we need the following assumptions:
\newcounter{saveenum}
\begin{enumerate}[(\text{H}1)]
\item \label{cond:identification1} The variables $T$ and $C$ are independent given $X$. Moreover, $P(C>\tau|X)>0$ a.s., $P(T= +\infty | X)>0$ a.s., and $P(T\in (\tau,+\infty) ) = 0 $.
\item \label{cond:identification2} For any $\gamma \in \mathbb R^q$, $ \var( g(\gamma_0,X) / g(\gamma,X)) = 0$ implies that $\gamma= \gamma_0 $.
\setcounter{saveenum}{\value{enumi}}
\end{enumerate}
The following hypotheses (H\ref{cond:consistency_gamma_0}) and (H\ref{cond:consistency_gamma_1}) help to control the complexity of the underlying class of functions as well as to guarantee the continuity of the function to maximize. Let $|\cdot|_k$ denote the $\ell_k$-norm.
\begin{enumerate}[(H1)]\setcounter{enumi}{\value{saveenum}}
\item \label{cond:consistency_gamma_0} The true value $\gamma_0$ belongs to the interior of a compact set $B\subset \mathbb R^q$.
\item \label{cond:consistency_gamma_1} There exist functions $m_1:\mathcal S\rightarrow \mathbb R_{\geq 0}$ and $M_1:\mathcal S\rightarrow \mathbb R_{\geq 0}$ such that for every $x\in \mathcal S$ and every $\gamma\in B$, we have $0<m_1(x)\leq g(\gamma, x)\leq M_1(x)$ and $E[|\log(m_1(X))|]$, $E[|\log(M_1(X))|]$ and $E[M_1^2(X)]$ are finite. There exists a function $c_1: \mathcal S \rightarrow \mathbb R_{\geq 0}$ such that for every $x\in\mathcal S$ and every $(\gamma,\tilde \gamma)\in B^2$,
\begin{align}\label{eq:mean_value_th1}
| g(\gamma,x) - g(\tilde \gamma,x) | \leq |\gamma - \tilde \gamma |_1 c_1(x),
\end{align}
with $0< E [c_1^2(X)]<+\infty $.
\setcounter{saveenum}{\value{enumi}}
\end{enumerate}
\begin{proposition}\label{propositionconsistency}
Under (H\ref{cond:identification1})--(H\ref{cond:consistency_gamma_1}), we have that $\w\gamma \overset{\mathbb P}{ \longrightarrow } \gamma_0$.
\end{proposition}
We now discuss assumption (H\ref{cond:identification2}) by considering some examples.
\begin{example}[Cox with cure]
When $g (\gamma,x)=\exp(\gamma^Tx)$, (H\ref{cond:identification2}) is equivalent to the statement that $ \var(X)$ has full rank.
\end{example}
Without the specific form $g (\gamma,x)=\exp(\gamma^Tx)$, identifiability might not hold. Indeed, in the case where $g (\gamma,x)=|\gamma^Tx|$, different pairs $(\theta,\gamma)$ can clearly lead to the same function $x\mapsto \theta | \gamma^Tx|$. A possibility when facing such difficulties is to restrict $\gamma$ to the unit sphere in $\mathbb{R}^{d}$, under which identifiability might be recovered. We refer to this model as a directional model.
\begin{example}[directional model]\label{ex:directionalmodel}
Suppose that $g (\gamma,x)=\eta(\gamma^Tx)$ with $|\gamma |_2=1$, $\gamma_1>0$. One can typically think of functions of the form $g (\gamma,x)= |\gamma^Tx|^k$, for some $k\geq 1$. Such models allow for a geometric interpretation in the same vein as the single-index models \citep{hardle2011}. The information available from the covariates $X$ to predict $Y$ is contained in the linear transformation $P_{\gamma}X$, where $P_\gamma$ stands for the orthogonal projector on $\text{span}(\gamma)$.
For more details about identifiability of single-index models, we refer to Theorem 1 in \cite{lin+k:2007} as well as Theorem 1 in \cite{portier+d:2013} where $X$ is required to possess a density.
\end{example}
If $|\gamma|_2=1$ does not hold, then identifiability could fail unless more specific forms are considered for $\eta$. An example where identifiability is still satisfied is given below.
\begin{example}[Modified Cox]
An interesting choice is $g (\gamma,x)=\exp(\rho_k(\gamma^T x))$, where $\rho_k(t) = \sign (t) |t|^k$, for $k > 1$. In the following lines, we obtain (H\ref{cond:identification2}) under the assumption that $X$ has a continuous density and that $B(0,r)$ is included in the support of $X$. Suppose that
$\varphi(x) = \rho_k (\gamma_0^Tx) - \rho_k (\gamma^Tx)$ is constant for almost every $x\in B(0,r)$, and suppose first that $\gamma $ and $\gamma_0$ are linearly independent. Then, take $\alpha \in B(0,r)$ such that $\alpha^T\gamma = 0 \neq \alpha^T\gamma_0$. Let $K$ be a probability density function and set $K_h(\cdot) = K(\cdot/h)/h^d$. For any $s \in (0,1)$ we have, by standard approximation arguments, that $(\varphi \star K_h ) (s \alpha ) \to \varphi(s\alpha) = s^k \rho_k (\gamma_0^T\alpha) $ as $h\to 0 $. Hence for any $s\in (0,1)$, $s^k \rho_k ( \gamma_0^T \alpha)$ is constant, which is impossible since $\rho_k(\gamma_0^T\alpha)\neq 0$. If instead $\gamma $ and $\gamma_0$ are linearly dependent, we directly obtain that $\gamma =\gamma_0$.
\end{example}
\subsection{Asymptotic normality of $\w \gamma$}\label{sec:3_2}
We now introduce some notation that will be useful to express the asymptotic normality results. For every $y\in \mathbb R_{\geq 0}$ and $\gamma\in \mathbb R ^q$, let $Q_{\gamma}(y) = E[g(\gamma,X)R(y)]$, $d_\gamma (x)= { \nabla_{\gamma} g(\gamma,x)}/{ g(\gamma,x)}$, and $ h_\gamma(y) = {\nabla_\gamma Q_\gamma(y)}/{ Q_\gamma(y)} $.
We define
\begin{align}
\label{eq:variance_I0} &I_0= \int E \left[ \{d_0 (X) - h_0(u)\} \{ d_0 (X)-h_0(u) \}^T g(\gamma_0,X) R(u)\right] d\Lambda_0(u),
\end{align}
where $d_0 = d_{\gamma_0}$ and $h_0 = h_{\gamma_0}$.
We require the following assumptions to obtain an asymptotic decomposition for $\w \gamma$.
\begin{enumerate}[(H1)]\setcounter{enumi}{\value{saveenum}}
\item \label{cond:asymptotics_gamma_1} The matrix $I_0$ has full rank.
\item \label{cond:asymptotics_gamma_2} For every $x\in \mathcal S$, $\gamma\mapsto g(\gamma, x)$ is differentiable and there exists a function $c_2: \mathcal S \rightarrow \mathbb R_{\geq 0}$ such that for every $x\in\mathcal S$ and every $(\gamma,\tilde \gamma)\in B^2$,
\begin{align}\label{eq:mean_value_th2}
| \nabla_\gamma g(\gamma,x) - \nabla_\gamma g(\tilde \gamma,x) |_1 \leq |\gamma - \tilde \gamma |_1 c_2(x),
\end{align}
with $ 0< E [c_2^2(X)]<+\infty $. Moreover there exists a function $M_2:\mathcal S\rightarrow \mathbb R_{\geq 0}$ such that, for every $x\in\mathcal S$, $|\nabla_\gamma g(\gamma,x)|_1 < M_2(x)$ where $E[M_2(X)]$, $E[ M_2^2(X) / m_1(X) ]$, $ E[ (c_2(X)+M_2(X) )^2 M_1(X)/m_1^2(X) ]$, and
$ E[ M_2^2(X) (c_1(X)+ M_1(X))^2M_1(X)/m_1^4(X) ]$ are finite.
\setcounter{saveenum}{\value{enumi}}
\end{enumerate}
\begin{proposition}\label{proposition:weak_cv_gamma}
Under (H\ref{cond:identification1})--(H\ref{cond:asymptotics_gamma_2}), we have that
\begin{align}\label{result1}
n^{1/2} (\w\gamma -\gamma_0) =n^{-1/2} I_0^{-1}\sum_{i=1} ^n \int (d_0(X_i) - h_0(u))dM_i(u) +o_{\mathbb P}(1),
\end{align}
and in particular, using Lemma \ref{lemma:weakcv}, see the Appendix, combined with (\ref{formula:quadratic_variation}), it holds that $n^{1/2} (\w\gamma -\gamma_0) \overset{\text{d}}{ \longrightarrow } \mathcal N (0, I_0^{-1})$.
\end{proposition}
\subsection{Weak convergence of $\w\Lambda$}\label{sec:3_3}
Based on the decomposition obtained for $\w \gamma$, we can now obtain a uniform representation of the process $\{n^{1/2} (\w \Lambda(y) - \Lambda_0(y)) : y\in\mathbb R_{\geq 0}\}$. This is the statement of the next proposition.
\begin{proposition}\label{proposition:strong_decomp_Lambda}
Under (H\ref{cond:identification1})--(H\ref{cond:asymptotics_gamma_2}), we have that
\begin{align}\label{result2}
\sup_{y\in \mathbb R_{\geq 0}} \left| n^{1/2}(\w \Lambda(y)-\Lambda_0(y)) - \left\{ n^{-1/2} \sum_{i=1}^n \int_0^y \frac{dM_i(u)}{ Q_{0}(u)} - \int_0^y h_0(u)^T d\Lambda_0(u) ( n^{1/2} (\w \gamma-\gamma_0)) \right\} \right| = o_{\mathbb P}(1).
\end{align}
In particular, using Lemma \ref{lemma:weakcv}, the two terms involved in the decomposition are asymptotically independent and $ n^{1/2} (\w \Lambda-\Lambda_0)$ converges weakly to a tight centered Gaussian process in $\ell^\infty(\mathbb{R}_{\geq 0})$ with covariance process given by
\begin{align*}
(y,y')\mapsto \int_0^{\min(y,y')}\frac{d\Lambda_0 (u)}{ Q_{0}(u)} + \left(\int_0^y h_0(u)^Td\Lambda_0(u)\right) I_0^{-1}\left( \int_0^{y'} h_0(u)d\Lambda_0(u)\right).
\end{align*}
\end{proposition}
The two previous propositions, Propositions \ref{proposition:weak_cv_gamma} and \ref{proposition:strong_decomp_Lambda}, form the basis of the next analysis, which ultimately describes the estimator $\w p(x)$ of the cure proportion $p_0(x)$. The following results are obtained as (almost direct) consequences of Propositions \ref{proposition:weak_cv_gamma} and \ref{proposition:strong_decomp_Lambda} and so are referred to as corollaries.
\subsection{Asymptotic normality of $\w \theta$}
Since $\w\theta=\lim_{y\r +\infty} \w\Lambda(y) = \w\Lambda(\tau)$ and $\theta_0 = \Lambda_0(\tau)$, the weak convergence of $ n^{1/2} (\w \theta-\theta_0)$ is deduced from the weak convergence of $ n^{1/2} (\w \Lambda-\Lambda_0)$, since weak convergence in $\ell^\infty(\mathbb R_{\geq 0})$ implies convergence in distribution of the finite dimensional laws. The expression for the asymptotic variance is deduced from the one given in Proposition \ref{proposition:strong_decomp_Lambda}.
\begin{corollary}
Under (H\ref{cond:identification1})--(H\ref{cond:asymptotics_gamma_2}), $n^{1/2} (\w \theta-\theta_0)$ converges in distribution to a centered Gaussian distribution with variance
\begin{align}\label{express:asym_variance_theta}
v_\theta = \int \frac{d\Lambda_0(u)}{Q_0(u)} +\left( \int h_0(u)^T d\Lambda_0(u) \right)I_0^{-1}\left(\int h_0(u) d\Lambda_0(u)\right).
\end{align}
\end{corollary}
As $\w F = \w \Lambda / \w \theta$, invoking some Delta-method arguments, the weak convergence of the process $ n^{1/2} (\w F-F_0)$ can be established. This however is not needed in the following.
\subsection{Cure rate estimation}
Recall that the cure proportion associated to $x\in \mathcal S$ is given by $p_0(x)= \exp(-g(\gamma_0,x) \theta_0)$ and that the estimator is $\w p(x)= \exp(-g(\w \gamma,x) \w \theta)$.
A Taylor expansion gives that
\begin{align*}
n^{1/2} (\w p(x) - p_0(x) ) = -p_0(x) g(\gamma_0,x)\left \{n^{1/2} (\w \theta - \theta_0) + \theta_0d_0(x)^T n^{1/2} (\w\gamma-\gamma_0)\right\} +o_{\mathbb P}(1).
\end{align*}
Injecting (\ref{result1}) and (\ref{result2}) into the previous display leads to the following statement.
\begin{corollary}\label{prop:cure_proportion}
Under (H\ref{cond:identification1})--(H\ref{cond:asymptotics_gamma_2}), for a given $x\in \mathcal S$, we have that
\begin{align*}
&n^{1/2} (\w p(x) - p_0(x) ) \\
&= -p_0(x) g(\gamma_0,x) n^{-1/2} \sum_{i=1}^n \left\{ \int \frac{dM_i(u)}{ Q_{0}(u)} + u_0(x)^T I_0^{-1} \int (d_0(X_i) - h_0(u))dM_i(u)\right\}+o_{\mathbb P}(1),
\end{align*}
where $u_0(x)=\theta_0d_0(x) - \int h_0(u) d\Lambda_0(u) $. Consequently, $n^{1/2} (\w p(x) - p_0(x) )$ converges in distribution to a centered Gaussian distribution with variance
\begin{align*}
& v_p(x) = p_0(x)^2 g(\gamma_0,x)^2 \left( \int \frac{d\Lambda_0(u)}{ Q_{0}(u)} + u_0(x)^T I_0^{-1} u_0(x) \right).
\end{align*}
\end{corollary}
Note that a similar result can be obtained for the estimator $\exp(-g(\w \gamma,x) \w \Lambda(y))$ of the survival function $S_0(y|x)$, but we omit it for the sake of brevity.
\section{Simulation study}\label{s6}
We performed extensive Monte Carlo simulations in order to assess the performance of our suggested estimators. The simulations were performed under a variety of conditions on the censoring rate, the sample size and the cure rate. The data were generated according to the following model:
\begin{align}\label{def:model_simu}
S(t|x_1,x_2) =\exp\big[-\exp\big\{\Gamma(\gamma_{01}x_1+\gamma_{02} x_2)\big\} \theta_0 F_0(t)\big].
\end{align}
In the above model, we chose the link function $\Gamma(\cdot)$ to be either the identity, the cubic or the sine function. For clarity, in the first part of this simulation study we will focus on the case of the identity function. With a few exceptions, all our comments and findings also apply to the cases $\Gamma(\cdot)=(\cdot)^3$ and $\Gamma(\cdot)=\sin(\cdot)$. In all our simulations, $\log(\theta_0)=0.1$, $\gamma_{01}=-2$, $\gamma_{02}=1$, $F_0$ is the cumulative distribution function of a uniform variable on $[0, 1]$, $X_1$ is uniformly distributed on $[\alpha, \alpha + 1]$, $X_2$ is normally distributed with mean $\alpha$ and standard deviation $1/12$, and $X_1$ and $X_2$ are independent. The censoring variable is exponential with parameter $\lambda$ and is independent of $(X_1,X_2)$. By varying $\lambda$ we mainly control the censoring rate, while by varying $\alpha$ we control the cure rate.
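As a reading aid, the sampling scheme just described can be sketched in a few lines of Python; the function below is our own illustration (its name and defaults are ours, not part of the paper) and uses inverse-transform sampling: writing $a=\exp\{\Gamma(\gamma_{01}x_1+\gamma_{02}x_2)\}\theta_0$, model (\ref{def:model_simu}) gives $S(t|x)=\exp\{-a F_0(t)\}$, so a subject is cured with probability $\exp(-a)$ and otherwise $F_0(T)=-\log(U)/a$ for $U$ uniform on $(\exp(-a),1)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, gamma=(-2.0, 1.0), log_theta0=0.1, alpha=0.0, lam=1.0,
             link=lambda u: u):
    # Covariates: X1 ~ U[alpha, alpha + 1], X2 ~ N(alpha, 1/12), independent.
    x1 = rng.uniform(alpha, alpha + 1.0, n)
    x2 = rng.normal(alpha, 1.0 / 12.0, n)
    a = np.exp(link(gamma[0] * x1 + gamma[1] * x2) + log_theta0)
    # Inverse-transform sampling from S(t|x) = exp(-a F0(t)), F0 = U[0,1] cdf:
    # the subject is cured (T = +inf) with probability exp(-a); otherwise
    # F0(T) = -log(U)/a, i.e. T = -log(U)/a since F0 is the identity on [0, 1].
    u = rng.uniform(0.0, 1.0, n)
    t = np.where(u < np.exp(-a), np.inf, -np.log(u) / a)
    c = rng.exponential(1.0 / lam, n)       # censoring, rate lam, independent
    y = np.minimum(t, c)
    delta = (t <= c).astype(int)            # cured subjects are always censored
    return y, delta, np.column_stack([x1, x2])
```

Varying `lam` then mainly controls the censoring rate, and varying `alpha` the cure rate, as in the scenarios above.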
Suppose we have a sample $(Y_i,\delta_i,X_i)$, $i=1,\ldots,n$ from the distribution described above, with $X_i=(X_{i1},X_{i2})^T$.
We obtain $\w{\gamma}=(\w{\gamma}_1,\w{\gamma}_2)^T$, the estimator of $\gamma_0=(\gamma_{01},\gamma_{02})^T$, by maximizing the partial likelihood function given by (\ref{NPMLEcoxmodel_gamma_Lambda}), using the Newton-Raphson algorithm. We get $\w{\theta}$, the estimator of $\theta_0$, by applying (\ref{NPMLE_coxmodel_theta_F}). The cure probability estimator is then obtained by
$$
\w{p}(x_1,x_2)=\exp\big[-\exp\big\{\Gamma(\w{\gamma}_1x_1+\w{\gamma}_2x_2)\big\}\w{\theta}\big].
$$
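To make the estimation pipeline concrete, here is a minimal Newton-Raphson sketch for the identity-link case, taking $g(\gamma,x)=\exp(\gamma^Tx)$ so that $d_\gamma(x)=x$; the implementation choices (risk-set matrix, starting value, helper names) are ours and not prescribed by the paper. The profile score is $\sum_i \delta_i\{X_i-\w h_\gamma(Y_i)\}$ and, once $\w\gamma$ is found, $\w\theta=n^{-1}\sum_i \delta_i/\w Q_{\w\gamma}(Y_i)$.

```python
import numpy as np

def fit_gamma(y, delta, x, gamma0, tol=1e-8, max_iter=50):
    """Newton-Raphson for the profile likelihood with g(gamma, x) = exp(gamma^T x)."""
    gamma = np.asarray(gamma0, dtype=float)
    for _ in range(max_iter):
        w = np.exp(x @ gamma)                       # g(gamma, X_i)
        at_risk = y[:, None] >= y[None, :]          # 1{Y_i >= Y_j}
        wr = w[:, None] * at_risk
        denom = wr.sum(axis=0)                      # n * Q_hat_gamma(Y_j)
        h = wr.T @ x / denom[:, None]               # h_hat_gamma(Y_j)
        # Score: sum_i delta_i { d(X_i) - h_hat(Y_i) }, with d(x) = x here.
        score = (delta[:, None] * (x - h)).sum(axis=0)
        # Hessian: minus the weighted covariance of X over each risk set.
        exx = np.einsum('ij,ik,il->jkl', wr, x, x) / denom[:, None, None]
        hess = -(delta[:, None, None]
                 * (exx - h[:, :, None] * h[:, None, :])).sum(axis=0)
        step = np.linalg.solve(hess, score)
        gamma = gamma - step
        if np.max(np.abs(step)) < tol:
            break
    return gamma

def theta_hat(y, delta, x, gamma):
    # theta_hat = n^{-1} sum_i delta_i / Q_hat_gamma(Y_i)
    w = np.exp(x @ gamma)
    q = (w[:, None] * (y[:, None] >= y[None, :])).mean(axis=0)
    return np.mean(delta / q)
```

For the exponential $g$, the objective coincides with a Cox-type partial likelihood, so the concave Newton iteration typically converges in a handful of steps.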
Using the plug-in principle together with (\ref{eq:variance_I0}), we obtain an estimator for the asymptotic variance-covariance matrix of $\w{\gamma}$ which is given by $\w{I}^{-1}/n$, where
\begin{align*}
\w{I}= n^{-1} \sum_{i=1}^n \delta_i \left\{ (d_{\widehat \gamma} (X_i ) -\w{h}_{\widehat \gamma} (Y_i) )( d_{\widehat \gamma} (X_i )- \w{h}_{\widehat \gamma} (Y_i)) ^T\right\},
\end{align*}
with, for every $\gamma\in B$, $\w h_\gamma(y)= {\nabla_{\gamma} \w Q_\gamma(y)}/{\w Q_\gamma(y)}$.
Similarly, using (\ref{express:asym_variance_theta}), we obtain an estimator of the asymptotic variance of $\w{\theta}$ given by $\w v_\theta/n$, where
\begin{align*}
\w v_\theta = n^{-1}\sum_{i=1}^n \frac{\delta_i}{\w Q_{\w \gamma}(Y_i)^2} + \left( n^{-1}\sum_{i=1}^n \frac{\delta_i \w h_{\w \gamma} (Y_i)}{\w Q_{\w \gamma}(Y_i)} \right)^T \w I^{-1} \left( n^{-1}\sum_{i=1}^n \frac{\delta_i \w h_{\w \gamma} (Y_i)}{\w Q_{\w \gamma}(Y_i)} \right).
\end{align*}
Finally, using the expression for the variance of $\w p$ given in Corollary \ref{prop:cure_proportion}, we obtain an estimator of the asymptotic variance of $\w p(x)$ given by $\w v_p /n$, where
\begin{align*}
\w v_p = \w p(x)^2 g(\w \gamma,x)^2 \left( n^{-1}\sum_{i=1}^n \frac{\delta_i}{\w Q_{\w \gamma}(Y_i)^2} +\w u(x)^T \w I^{-1} \w u(x) \right),
\end{align*}
with $\w u(x) = \w \theta d_{\w \gamma}(x) - n^{-1}\sum_{i=1}^n {\delta_i\w h_{\w \gamma} (Y_i)}/{\w Q_{\w \gamma}(Y_i)}$ and $x=(x_1,x_2)^T$.
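For the same identity-link case $g(\gamma,x)=\exp(\gamma^Tx)$ (so that $d_\gamma(x)=x$), the plug-in formulas above can be sketched as follows; the function name and the vectorized layout are our own choices.

```python
import numpy as np

def plugin_variances(y, delta, x, gamma_hat):
    """Plug-in estimators I_hat and v_theta_hat with g(gamma, x) = exp(gamma^T x)."""
    n = len(y)
    w = np.exp(x @ gamma_hat)                       # g(gamma_hat, X_i)
    at_risk = y[:, None] >= y[None, :]              # 1{Y_i >= Y_j}
    wr = w[:, None] * at_risk
    q = wr.mean(axis=0)                             # Q_hat_{gamma_hat}(Y_j)
    h = wr.T @ x / (n * q[:, None])                 # h_hat_{gamma_hat}(Y_j)
    # I_hat: outer products of d(X_i) - h_hat(Y_i) over uncensored observations.
    r = (x - h)[delta == 1]
    i_hat = r.T @ r / n
    # v_theta_hat: first term plus the correction accounting for gamma_hat.
    b = (delta[:, None] * h / q[:, None]).sum(axis=0) / n
    v_theta = np.sum(delta / q**2) / n + b @ np.linalg.solve(i_hat, b)
    return i_hat, v_theta
```

The variance estimates reported in the tables below are then $\w I^{-1}/n$ and $\w v_\theta/n$.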
We perform $N = 2000$ replications for four sample sizes ($n = 100$, $n = 200$, $n=400$ and $n=600$), three levels
of censoring ($20\%$, $40\%$ and $60\%$) and three levels of cure ($20\%$, $40\%$ and $60\%$). For every scenario and every replication, we calculate the estimators $\w{\gamma}_1$, $\w{\gamma}_2$, $\w{\theta}$ and $\w{p}(x_1,x_2)$ together with their estimated asymptotic variance ($\widehat{AVar}$) and the corresponding asymptotic $95\%$ confidence intervals based on the asymptotic normality. Based on the $2000$ replications, we also calculate the empirical bias, the empirical variance ($VAR$), the empirical mean squared error ($MSE$) of every estimator together with the empirical coverage probability ($COV$) for the confidence intervals. In the case of the cure probability $p(x_1,x_2)$ we did the calculations for $x_2=0$ and every quantile of $X_1$ corresponding to the probability levels $0.01,0.02,\ldots,0.99.$ We summarize the results by taking the average of the resulting $99$ empirical $VAR$'s, empirical $MSE$'s and empirical $COV$'s. Due to space limitations, we provide below only some selected but representative scenarios.
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\includegraphics[width=.95\textwidth]{feg1a3.pdf}
\end{subfigure}
\par\smallskip
\begin{subfigure}{1\textwidth}
\includegraphics[width=.95\textwidth]{feg1b3.pdf}
\end{subfigure}
\caption{Boxplots of $\w{\gamma}_1$, $\w{\gamma}_2$ and $\w{\gamma}_0$ for $n=100$ and $n=600$ and for $\Gamma(\cdot)=\cdot$. The empirical mean of the estimates is indicated by a $+$. The true values are indicated by a horizontal line.}
\label{fig:fig1e}
\end{figure}
Figure \ref{fig:fig1e} provides the boxplots for $\w{\gamma}_1$, $\w{\gamma}_2$ and $\w{\gamma}_0=\log(\w{\theta})$. By comparing
the upper and lower parts ($n=100$ vs $n=600$) of this figure, we clearly see that the performance of the estimators improves with increasing sample size, both in terms of bias and of variance. This confirms the consistency of these estimators. This figure also shows the effect of the cure rate and the censoring rate. As expected, increasing either rate results in a larger bias and, especially, a larger variance of the estimators. This effect can also be seen in Figure \ref{fig:fig2}, which provides the boxplots for the estimated asymptotic variances. Compared to the censoring rate, the cure rate seems to have no, or a very limited, effect on $\w{\gamma}_1$ and $\w{\gamma}_2$, but it does affect the bias and the variance of $\w{\gamma}_0$. In fact, when the percentage of cure increases, the bias and the variance of $\w{\gamma}_0$ decrease (and so does the MSE). Globally, it seems that the estimation of $\gamma_0$ is more difficult than that of $\gamma_1$ and $\gamma_2$. This is especially the case when the censoring percentage is large and the cure probability is small. If, moreover, the sample size is small, then the bias can be quite large.
\begin{figure}[H]
\centering
\begin{subfigure}{1\textwidth}
\includegraphics[width=1\textwidth]{feg2a3}
\end{subfigure}
\par\smallskip
\begin{subfigure}{1\textwidth}
\includegraphics[width=\textwidth]{feg2b3}
\end{subfigure}
\caption{Boxplots of $\widehat{AVar}(\w{\gamma}_1)$, $\widehat{AVar}(\w{\gamma}_2)$ and $\widehat{AVar}(\w{\gamma}_0)$ for $n=100$ and $n=600$ and for $\Gamma(\cdot)=\cdot$. The empirical mean of $\widehat{AVar}$ is indicated by a $+$, the empirical variance of the estimates ($\w{\gamma}_1,\w{\gamma}_2,\w{\gamma}_0$) is indicated by a $\times$.}
\label{fig:fig2}
\end{figure}
As mentioned above, Figure \ref{fig:fig2} provides the boxplots for the estimated asymptotic variances. The plots suggest that the proposed variance estimators are consistent (note that the $y$-axes in the upper and lower plots do not have the same scale). The remarks made above on the effect of the proportions of cure and censoring essentially remain valid for these estimators. Again, it can be seen that estimating the variance of $\w{\gamma}_0$ is more difficult and can lead to relatively large variances, especially when the censoring and cure rates are large and the sample size is small.
Figure \ref{fig:fig3e}, which provides Q-Q plots for the estimated parameters, confirms the validity of the normal approximation of the sampling distributions of $\w{\gamma}_1$ and $\w{\gamma}_2$. However, this approximation seems to be less accurate for $\w{\theta}$, even when $n=600$ (figure not shown here). In fact, the sampling distribution of the latter tends to be positively skewed, especially when the censoring rate is large. Applying the logarithmic transformation seems to solve the problem, as it makes the distribution more symmetric (see the Q-Q plot for $\w{\gamma}_0$ in Figure \ref{fig:fig3e}).
\begin{figure}[H]
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=1\textwidth]{feg3a3}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=1\textwidth]{feg3c3}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=1\textwidth]{feg3e3}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=1\textwidth]{feg3g3}
\end{subfigure}
\caption{Normal Q-Q plot for the estimates ($\w{\gamma}_1, \w{\gamma}_2, \w{\theta}, \w{\gamma}_0$) for $n=100$ and for $\Gamma(\cdot)=\cdot$. The proportion of censoring and the cure rate both equal $0.40$.}
\label{fig:fig3e}
\end{figure}
In Table \ref{tab:tab1} we give the MSE and the variance for some of the studied scenarios and for $\Gamma(\cdot)=\cdot$, $\Gamma(\cdot)=(\cdot)^3$ and $\Gamma(\cdot)=\sin(\cdot)$. It is clear from these results that the variance is the dominant component of the mean squared error. It can also be observed that the results obtained with the links $\Gamma(\cdot)=\cdot$ and $\Gamma(\cdot)=(\cdot)^3$ are globally better than the corresponding results obtained with $\Gamma(\cdot)=\sin(\cdot)$. Table \ref{tab:tab1} also shows the coverage probabilities ($COV$) of the $95\%$ asymptotic confidence intervals for the parameters $\gamma_1$, $\gamma_2$ and $\gamma_0=\log(\theta)$. The confidence intervals for the latter are based on the asymptotic normality of $\w{\theta}$ and the Delta method. Globally, the obtained COV's are close to the nominal level. With $n=100$, the confidence intervals tend to be liberal when $\Gamma(\cdot)=\sin(\cdot)$, especially for $\gamma_0$. This also happens for $\gamma_1$ when $\Gamma(\cdot)=(\cdot)^3$.
\begin{table}[H]%
\centering
{\footnotesize
\begin{tabular}{|c|c|c|ccc|ccc|ccc|}
\toprule
& & & \multicolumn{3}{c|}{MSE} & \multicolumn{3}{c|}{VAR} & \multicolumn{3}{c|}{COV} \\
\toprule
$n$ & $\%cure$ & $\%cens$ & $\gamma_{1}$ & $\gamma_{2}$ & $\gamma_0$ & $\gamma_{1}$ & $\gamma_{2}$ & $\gamma_0$ & $\gamma_{1}$ & $\gamma_{2}$ & $\gamma_0$ \\
\toprule
\multicolumn{12}{c}{$\Gamma(\cdot)=\cdot$} \\
\toprule
100 & 10 & 20 & 0.171 & 0.905 & 3.516 & 0.169 & 0.905 & 3.502 & 0.962 & 0.902 & 0.879 \\
100 & 20 & 20 & 0.163 & 0.853 & 1.841 & 0.162 & 0.844 & 1.826 & 0.970 & 0.918 & 0.905 \\
100 & 20 & 40 & 0.233 & 1.304 & 2.898 & 0.231 & 1.301 & 2.881 & 0.962 & 0.913 & 0.896 \\
100 & 40 & 40 & 0.210 & 1.545 & 1.034 & 0.210 & 1.527 & 1.021 & 0.973 & 0.932 & 0.923 \\
100 & 40 & 60 & 0.310 & 2.320 & 1.550 & 0.310 & 2.318 & 1.549 & 0.980 & 0.927 & 0.924 \\
\hline
600 & 10 & 20 & 0.027 & 0.197 & 0.806 & 0.027 & 0.197 & 0.805 & 0.968 & 0.948 & 0.941 \\
600 & 20 & 20 & 0.025 & 0.200 & 0.471 & 0.025 & 0.200 & 0.471 & 0.968 & 0.956 & 0.952 \\
600 & 20 & 40 & 0.034 & 0.292 & 0.666 & 0.034 & 0.292 & 0.665 & 0.972 & 0.954 & 0.942 \\
600 & 40 & 40 & 0.034 & 0.304 & 0.208 & 0.034 & 0.304 & 0.208 & 0.968 & 0.962 & 0.956 \\
600 & 40 & 60 & 0.051 & 0.454 & 0.306 & 0.051 & 0.454 & 0.306 & 0.969 & 0.964 & 0.956 \\
\toprule
\multicolumn{12}{c}{$\Gamma(\cdot)=(\cdot)^3$} \\
\toprule
100 & 10 & 20 & 0.050 & 0.044 & 0.112 & 0.050 & 0.044 & 0.107 & 0.955 & 0.951 & 0.967 \\
100 & 20 & 20 & 0.110 & 0.102 & 0.047 & 0.108 & 0.101 & 0.047 & 0.875 & 0.873 & 0.946 \\
100 & 20 & 40 & 0.128 & 0.123 & 0.084 & 0.127 & 0.122 & 0.083 & 0.882 & 0.879 & 0.966 \\
100 & 40 & 40 & 0.227 & 0.067 & 0.059 & 0.205 & 0.067 & 0.059 & 0.899 & 0.921 & 0.953 \\
100 & 40 & 60 & 0.299 & 0.131 & 0.111 & 0.266 & 0.130 & 0.111 & 0.895 & 0.913 & 0.955 \\
\hline
600 & 10 & 20 & 0.006 & 0.005 & 0.016 & 0.006 & 0.005 & 0.016 & 0.958 & 0.962 & 0.954 \\
600 & 20 & 20 & 0.019 & 0.018 & 0.007 & 0.018 & 0.017 & 0.007 & 0.929 & 0.923 & 0.951 \\
600 & 20 & 40 & 0.023 & 0.022 & 0.012 & 0.022 & 0.021 & 0.012 & 0.936 & 0.937 & 0.956 \\
600 & 40 & 40 & 0.036 & 0.006 & 0.008 & 0.035 & 0.006 & 0.008 & 0.944 & 0.933 & 0.941 \\
600 & 40 & 60 & 0.062 & 0.010 & 0.014 & 0.059 & 0.010 & 0.014 & 0.935 & 0.924 & 0.933 \\
\toprule
\multicolumn{12}{c}{$\Gamma(\cdot)=\sin(\cdot)$} \\
\toprule
100 & 10 & 20 & 0.656 & 0.532 & 0.270 & 0.656 & 0.532 & 0.204 & 0.915 & 0.908 & 0.914 \\
100 & 20 & 20 & 0.625 & 0.353 & 0.173 & 0.538 & 0.248 & 0.132 & 0.949 & 0.959 & 0.897 \\
100 & 20 & 40 & 0.942 & 0.582 & 0.222 & 0.790 & 0.411 & 0.173 & 0.945 & 0.946 & 0.867 \\
100 & 40 & 40 & 0.988 & 0.547 & 0.138 & 0.737 & 0.522 & 0.136 & 0.954 & 0.837 & 0.833 \\
100 & 40 & 60 & 1.614 & 0.708 & 0.161 & 1.197 & 0.666 & 0.156 & 0.932 & 0.849 & 0.871 \\
\hline
600 & 10 & 20 & 0.104 & 0.085 & 0.007 & 0.104 & 0.085 & 0.007 & 0.979 & 0.977 & 0.965 \\
600 & 20 & 20 & 0.088 & 0.029 & 0.027 & 0.087 & 0.026 & 0.024 & 0.946 & 0.979 & 0.970 \\
600 & 20 & 40 & 0.124 & 0.044 & 0.042 & 0.121 & 0.038 & 0.037 & 0.937 & 0.971 & 0.967 \\
600 & 40 & 40 & 0.088 & 0.126 & 0.044 & 0.073 & 0.126 & 0.044 & 0.987 & 0.910 & 0.875 \\
600 & 40 & 60 & 0.144 & 0.182 & 0.064 & 0.116 & 0.182 & 0.062 & 0.982 & 0.897 & 0.857 \\
\bottomrule
\end{tabular}%
\caption{Empirical mean squared error (MSE), empirical variance (VAR) and empirical coverage probability (COV) for nominal $95\%$ confidence intervals for $\gamma_{1}$, $\gamma_{2}$ and $\gamma_0$.}
\label{tab:tab1}
}
\end{table}%
Figure \ref{fig:fig4} shows the empirical coverage probabilities (COV) of the confidence intervals for $p(x_1,x_2)$. We can see that these COV's can be quite unsatisfactory especially in the left tail of the support of $X_1$ even when the sample size is relatively large. To correct for this, we apply the logit transformation and the Delta method to construct confidence intervals for $\log(p/(1-p))$ and transform back (taking the logistic transformation) to get confidence intervals for the cure probabilities. This leads to very satisfactory results with coverage probabilities close to the nominal level both in the middle and in the tails especially when the sample size is large.
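The logit-based interval just described can be sketched as follows (an illustration of ours; the helper name is not from the paper): by the Delta method, $\operatorname{Var}(\operatorname{logit}(\w p))\approx \operatorname{Var}(\w p)/\{p(1-p)\}^2$, and the symmetric normal interval on the logit scale is mapped back into $(0,1)$ by the logistic function.

```python
import math
from statistics import NormalDist

def logit_ci(p_hat, var_hat, level=0.95):
    # var_hat is the estimated variance of p_hat (v_p / n in the paper's notation).
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    eta = math.log(p_hat / (1.0 - p_hat))           # logit(p_hat)
    # Delta method: Var(logit p) is approximately Var(p) / {p(1 - p)}^2.
    se = math.sqrt(var_hat) / (p_hat * (1.0 - p_hat))
    lo, hi = eta - z * se, eta + z * se
    # Back-transform with the logistic function; the interval stays inside (0, 1).
    expit = lambda t: 1.0 / (1.0 + math.exp(-t))
    return expit(lo), expit(hi)
```

Unlike the untransformed interval, this one can never cross the boundaries $0$ or $1$, which is one reason for its better coverage in the tails.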
\begin{figure}[H]
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=1\textwidth]{feg4a3}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=1\textwidth]{feg4aexp33}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=1\textwidth]{feg4b3}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=1\textwidth]{feg4bexp33}
\end{subfigure}
\caption{Empirical coverage probabilities of nominal $95\%$ confidence intervals for the cure probability as a function of $x_1$ for $x_2=0$. The coverage probabilities obtained without transformation are indicated by a $+$, those obtained after a logit transformation are indicated by a $\times$. The proportion of censoring and the cure rate both equal $0.40$.}
\label{fig:fig4}
\end{figure}
\section{Real data application}
To illustrate the application of our model, the proposed methodology is applied to a real data set from a breast cancer study. The data set consists of 286 patients who were diagnosed with lymph-node-negative breast cancer between 1980 and 1995 \citep{Wang:2005}. The event time of interest is the time to distant metastasis (DM). Among the 286 patients, 107 experienced a relapse from breast cancer. As can be seen from Figure \ref{fig:fig5}, the Kaplan-Meier estimator of the survival function shows a large plateau at about $0.60$. Furthermore,
$88\%$ of the censored observations lie in the plateau. A cure model therefore seems appropriate for these data.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{KM.png}
\caption{Kaplan-Meier estimator of the survival curve for time to distant metastasis for breast cancer survival data (censored observations are indicated by $+$).}
\label{fig:fig5}
\end{figure}
We consider two covariates: the age of the patient (ranging from 26 to 83, with a median of 52 years) and the estrogen receptor (ER) status, a binary variable equaling 0 ($ER-$) when the level is less than $10$ fmol per mg protein (77 patients in total) and 1 ($ER+$) when it is $10$ fmol per mg protein or more (209 patients in total). We analyse the data using the semiparametric model given in (\ref{def:model_simu}) and we choose the link function $\Gamma(x)$ to be either $x^k$ or $\sin(x^k)$ with $k=1,\ldots,8$. In Table \ref{tab:tab2} we report the values of the obtained profile log-likelihood (PLL), as given by (\ref{NPMLEcoxmodel_gamma_Lambda}), and of the full log-likelihood (FLL), as given by (\ref{maximumlikelihood}). Based on these results we can conclude that, in terms of the likelihood, the model that best fits these data is the one with the sine link function and $k=4$.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c}
\multicolumn{6}{c|}{$\Gamma(x)=x^k$} & \multicolumn{6}{c}{$\Gamma(x)=\sin(x^k)$} \\ \hline
\multicolumn{3}{c|}{$k$ odd} & \multicolumn{3}{c|}{$k$ even} & \multicolumn{3}{c|}{$k$ odd} & \multicolumn{3}{c}{$k$ even} \\ \hline
$k$ & PLL & FLL & $k$ & PLL & FLL & $k$ & PLL & FLL & $k$ & PLL & FLL \\ \hline
1 & 25.0 & -687.1 & 2 & 25.9 & -686.2 & 1 & 25.3 & -686.8 & 2 & 26.6 & -685.5 \\
3 & 25.2 & -686.9 & 4 & 25.9 & -686.2 & 3 & 25.2 & -686.9 & 4 & 28.1 & -684.0 \\
5 & 25.4 & -686.7 & 6 & 26.1 & -686.0 & 5 & 25.4 & -686.7 & 6 & 24.1 & -688.0 \\
7 & 25.6 & -686.5 & 8 & 25.3 & -686.8 & 7 & 25.5 & -686.6 & 8 & 25.8 & -686.3 \\
\end{tabular}
\caption{The profile log-likelihood (PLL) and the full log-likelihood (FLL) for different link functions.}
\label{tab:tab2}
\end{table}
\begin{appendices}
This Appendix is dedicated to the proofs of the mathematical results of the paper. In Section \ref{app_A_propositions} we give the proofs of the four propositions of the paper, stated in Section \ref{s3}. In these proofs we rely on some important statements, enumerated with the letter B, whose proofs are given in Section \ref{app_B_statements}. The technical results on empirical processes are all postponed to Section \ref{app_C_emp_proc}.
\counterwithin{theorem}{section}
\section{Proofs of the propositions}\label{app_A_propositions}
\subsection{Proof of Proposition \ref{proposition:p2_p3}}
In (\ref{newopti1}), write $\beta^T= (\log \theta,\gamma^T)$ with $\gamma\in \mathbb{R}^{d}$ and $\theta\in \mathbb{R}_{>0}$. Hence $\w Q_{2,\beta}(Y_i) = \w Q_\gamma(Y_i)\theta$ and (\ref{newopti1}) becomes
\begin{align}\label{optimisationgammatheta}
\underset{\gamma\in \mathbb{R}^{d},\, \theta\in\mathbb{R}} {\text{argmax}}\ \prod_{i=1}^n\left\{ \left(\frac{\exp(\gamma^TX_i) \theta}{\w Q_\gamma(Y_i)\theta-\w \lambda_{(\log\theta,\gamma)}}\right)^{\delta_i} \exp( -\w\lambda_{(\log\theta,\gamma)} ) \right\}.
\end{align}
For any $\gamma\in \mathbb{R}^{d}$, denote by $\w \theta_\gamma$ the maximizer of (\ref{optimisationgammatheta}) over $\theta$. Hence $\w \theta_\gamma $ maximizes
\begin{align}\label{eq:maxthetagamma}
\sum_{i=1}^n \left\{- \delta_i \log \left(\frac{\w Q_\gamma(Y_i)\theta-\w \lambda_{(\log\theta,\gamma)}}{\theta}\right) -\w\lambda_{(\log\theta,\gamma)} \right\}.
\end{align}
Furthermore, as $\w\lambda_{(\log\theta,\gamma)}$ satisfies $\sum_{i=1}^n \delta_i/(\w Q_{\gamma} (Y_i)\theta -\w\lambda_{(\log\theta,\gamma)} )=n$, a concavity argument implies that
\begin{align*}
\w\lambda_{(\log\theta,\gamma)} = \underset{{\lambda\in \mathbb R}} {\argmin}\, \sum_{i=1}^n \left\{ -\delta_i \log\left( \frac{ \w Q_{ \gamma} (Y_i) \theta - \lambda}{\theta}\right) -\lambda\right\},
\end{align*}
and, in particular, considering $\lambda=0$ leads to
\begin{align*}
\sum_{i=1}^n \left\{ -\delta_i \log\left( \frac{\w Q_{ \gamma} (Y_i) \theta -\w \lambda_{(\log\theta,\gamma)}}{\theta}\right) -\w \lambda_{(\log\theta,\gamma)}\right\}
\leq \sum_{i=1}^n -\delta_i \log\left( \w Q_{ \gamma} (Y_i) \right),
\end{align*}
for any $(\theta,\gamma)\in \mathbb R_{\geq 0}\times \mathbb R^{d} $. This inequality holds for $\theta = \w \theta_\gamma$ and it provides an upper bound for (\ref{eq:maxthetagamma}). This upper bound is achieved when $\theta$ is such that
$\w \lambda_{(\log\theta,\gamma)}=0$, equivalently when $\w \theta_\gamma = n^{-1} \sum_{i=1}^n \delta_i/\w Q_\gamma(Y_i)$. Injecting this value in (\ref{optimisationgammatheta}) we obtain the assertion of the proposition.
\qed
\subsection{Proof of Proposition \ref{propositionconsistency}}
For every $y\in \mathbb R_{\geq 0}$, $\gamma\in \mathbb R ^d$, write $Q_{\gamma}(y) = E[g(\gamma,X)R(y)]$.
Since the function $\gamma\mapsto E[\delta \log(g(\gamma, X)/Q_\gamma(Y) )] $ is continuous on $B$ and has a unique maximum (see the Lemma below), it suffices to show that \cite[Theorem 2.1]{newey1994}
\begin{align}\label{statement1}
\sup_{\gamma \in B} \left|n^{-1} \sum_{i=1} ^ n \delta_i\{\log(g(\gamma,X_i))-\log(\w Q_{\gamma}(Y_i))\}- E\Big[\delta\{\log(g(\gamma,X))-\log(Q_{\gamma}(Y))\}\Big]\right| \overset{\mathbb P}{ \longrightarrow } 0.\tag{B.1}
\end{align}
This is shown in Section \ref{app_B_statements}.
\qed
\begin{lemma}\label{prop:identifiability_profile}
\begin{enumerate}[(i)]
\item\label{lemma:consistency:_i} Under (H\ref{cond:identification1}) and (H\ref{cond:identification2}), the function $\gamma \mapsto E[\delta\log(g(\gamma,X) / Q_\gamma (Y)) ]$ has a unique maximum $\gamma_0$.
\item\label{lemma:consistency:_ii} Under (H\ref{cond:consistency_gamma_0}) and (H\ref{cond:consistency_gamma_1}), the function $\gamma\mapsto E[\delta \log(g(\gamma, X)/Q_\gamma(Y) )] $ is continuous on $B$.
\end{enumerate}
\end{lemma}
\begin{proof}
We start with (\ref{lemma:consistency:_i}). Using (\ref{formula:martingale}) and that $Q_{\gamma} = E [ g(\gamma,X) R(y)] $, we get
\begin{align*}
E\left[\delta \left(\frac{g(\gamma,X) }{ g(\gamma_0,X) } \frac{ Q_{\gamma_0} (Y)} {Q_{\gamma} (Y)} - 1 \right) \right] = \int \left( E\left[ {g(\gamma,X) } \frac {Q_{\gamma_0} (u)}{ Q_\gamma (u)} R(u) \right] - Q_{\gamma_0} (u) \right)d\Lambda_0(u) = 0.
\end{align*}
Since there exist $\eta,\eta'>0$ such that \citep{murphy1994}
\begin{align*}
&\log(x) - (x-1) \leq -\ell(x),\\
&\ell(x) = \eta |x-1|\mathds 1_{\{|x-1|\geq 1/2\}} + \eta' (x-1)^2\mathds 1_{\{|x-1|< 1/2\}},
\end{align*}
it follows that
\begin{align*}
E\left[\delta \log\left(\frac{g(\gamma,X) }{Q_{\gamma} (Y)}\right) \right] - E\left[\delta \log\left( \frac { g(\gamma_0,X) } { Q_{\gamma_0} (Y)} \right) \right] \leq - E\left[\delta \ell \left( \frac{g(\gamma,X) }{ g(\gamma_0,X) } \frac{ Q_{\gamma_0} (Y)} {Q_{\gamma} (Y)} \right) \right].
\end{align*}
Consequently, using (\ref{formula:martingale}), whenever $ E\left[\delta \log\left(\frac{g(\gamma,X) }{Q_{\gamma} (Y)}\right) \right] = E\left[\delta \log\left( \frac { g(\gamma_0,X) } { Q_{\gamma_0} (Y)} \right) \right] $, it holds that
\begin{align*}
\int E \left[ \ell \left( \frac{g(\gamma,X) }{ g(\gamma_0,X) } \frac{ Q_{\gamma_0} (u)} {Q_{\gamma} (u)} \right) g(\gamma_0,X)R(u) \right] d\Lambda_0 (u) = 0.
\end{align*}
For $(d\Lambda_0)$-almost every $u$, we have
\begin{align*}
E \left[ \ell \left( \frac{g(\gamma,X) }{ g(\gamma_0,X) } \frac{ Q_{\gamma_0} (u)} {Q_{\gamma} (u)} \right) g(\gamma_0,X)R(u) \right] = 0.
\end{align*}
But by (H\ref{cond:identification1}), it holds, almost surely, $ \inf_{u\in [0,\tau]} g(\gamma_0,X) E [ R(u)|X] = g(\gamma_0,X) E [ R(\tau )|X] >0 $, which implies that
almost surely $({g(\gamma,X) }/{ g(\gamma_0,X) })({ Q_{\gamma_0} (u)} /{Q_{\gamma} (u)}) = 1 $.\\ Hence ${g(\gamma,X) }/{ g(\gamma_0,X) } $ is constant and we conclude using (H\ref{cond:identification2}).
We continue with (\ref{lemma:consistency:_ii}). We proceed in two parts. We first consider the function $\gamma\mapsto E[\delta \log(g(\gamma, X))] $ and second we deal with $\gamma\mapsto E[\delta \log(Q_\gamma(Y) )] $. Because $\delta$ is bounded, it suffices to show that
$\int \left| \log(g(\gamma, x)/g(\tilde \gamma , x))\right| dP(x)\overset{\gamma \rightarrow \tilde \gamma}{ \longrightarrow } 0$.
We apply the Lebesgue dominated convergence theorem. For every $x\in \mathcal S$, the continuity of the function $g(\gamma, x)$ at $\tilde \gamma$ and the fact that $g(\gamma, x)$ is bounded from below imply that $| \log(g(\gamma, x)/g(\tilde \gamma , x))| \rightarrow 0 $ whenever $\gamma\rightarrow \tilde \gamma$. By (H\ref{cond:consistency_gamma_1}), we also have that
\begin{align*}
| \log(g(\gamma, x)/g(\tilde \gamma , x))| \leq |\log(m_1(x))| + |\log( M_1(x) ) |,
\end{align*}
which is $(dP)$-integrable. To obtain that
$\int \left| \log(Q_\gamma(y)/Q_{\tilde \gamma}(y))\right| dP(y)\overset{\gamma \rightarrow \tilde \gamma}{ \longrightarrow } 0$,
we can follow the same path as before by applying the Lebesgue dominated convergence theorem. We have that, for every $\gamma\in B$,
\begin{align}\label{eq:bound_upper_lower}
E[m_1(X) (1-\Delta)] \leq Q_\gamma (y ) \leq E[M_1(X)].
\end{align}
The continuity of the function $g(\gamma, x)$ at $\tilde \gamma$ implies the continuity of $\gamma \mapsto Q_\gamma(y)$ for every $y\in\mathbb R_{\geq 0}$ (by another application of the Lebesgue dominated convergence theorem), which gives, with the help of (\ref{eq:bound_upper_lower}), that $| \log(Q_\gamma( y)/Q_{\tilde \gamma}(y))| \rightarrow 0 $ whenever $\gamma\rightarrow \tilde \gamma$. It remains to note that
$| \log(Q_\gamma( y)/Q_{\tilde \gamma}(y))|\leq |\log(E[m_1(X) (1-\Delta)])| + |\log( E[M_1(X)] ) |<+\infty$.
\end{proof}
\subsection{Proof of Proposition \ref{proposition:weak_cv_gamma}}
For every $\gamma\in B$, define $\w h_\gamma(y)= {\nabla_{\gamma} \w Q_\gamma(y)}/{\w Q_\gamma(y)}$.
It is worth mentioning that, for every $\gamma\in B$,
\begin{align}\label{eq:importantidentity}
n^{-1}\sum_{i=1} ^n \int d_\gamma (X_i) g(\gamma,X_i) R_i(u) \, d\Lambda_0(u) &= \int \nabla_\gamma \w Q_\gamma(u)d\Lambda_0(u) \nonumber \\
&= \int \w h_\gamma(u) \w Q_\gamma(u)d\Lambda_0(u) \nonumber\\
&=n^{-1}\sum_{i=1} ^n \int \w h_\gamma(u) g(\gamma,X_i)R_i(u) d\Lambda_0(u).
\end{align}
As by (H\ref{cond:asymptotics_gamma_2}), $\gamma \mapsto g(\gamma, x) $ is differentiable, $\w\gamma$ satisfies the equation $S_n(\w \gamma)= 0$, where
\begin{align*}
S_n (\gamma) = n^{-1}\sum_{i=1}^n \int \{d_\gamma (X_i) - \w h_\gamma(u) \} dN_i(u) .
\end{align*}
We rely on the following decomposition. Based on (\ref{eq:importantidentity}), for every $\gamma\in B$,
\begin{align*}
S_n (\gamma) &= n^{-1}\sum_{i=1}^n \int \{d_\gamma (X_i) - \w h_\gamma(u)\} (dN_i(u) -g(\gamma, X_i ) R_i(u) d\Lambda_0(u)) \\
&= n^{-1}\sum_{i=1}^n \int \{d_\gamma (X_i) - \w h_\gamma(u)\} dM_i(u) \\
&\qquad\qquad + n^{-1}\sum_{i=1}^n \int \{d_\gamma (X_i) - \w h_\gamma(u)\}(g(\gamma_0, X_i ) -g(\gamma, X_i )) R_i(u) d\Lambda_0(u).
\end{align*}
Applying this for $\gamma\in B$ and for $\gamma_0$ implies that
\begin{align*}
S_n(\gamma) - S_n(\gamma_0)
&= n^{-1}\sum_{i=1}^n \int \{d_\gamma (X_i) - \w h_\gamma(u)\}(g(\gamma_0, X_i ) -g(\gamma, X_i )) R_i(u) d\Lambda_0(u) + r_{1,n}(\gamma) ,
\end{align*}
with
$r_{1,n}(\gamma) = n^{-1}\sum_{i=1}^n \int \{d_\gamma (X_i)-d_0(X_i) +\w h_0(u) - \w h_\gamma(u)\} dM_i(u)$.
Taking $\gamma = \w \gamma$, for which $S_n(\w \gamma)=0$, and using the mean-value theorem around the value $\gamma_0$ with the map
\begin{align*}
\gamma \mapsto n^{-1}\sum_{i=1}^n \int \{d_{\widehat\gamma} (X_i) - \w h_{\widehat \gamma}(u)\} g( \gamma, X_i ) R_i(u) d\Lambda_0(u),
\end{align*}
which is continuously differentiable by (H\ref{cond:asymptotics_gamma_2}), gives
$-S_n(\gamma_0) = - H_n (\tilde \gamma) (\w \gamma-\gamma_0)+r_{1,n}(\widehat \gamma)$,
with $\tilde \gamma$ on the line segment between $ \w\gamma$ and $\gamma_0$, and
\begin{align*}
H_n (\tilde \gamma) = n^{-1}\sum_{i=1}^n \int \{d_{\widehat\gamma} (X_i) - \w h_{\widehat \gamma}(u)\}\nabla_\gamma g(\tilde \gamma, X_i )^T R_i(u) d\Lambda_0(u) .
\end{align*}
We show in Section \ref{app_B_statements} that
\begin{align}\label{statement2}
&H_n (\tilde \gamma) \overset{\mathbb P}{ \longrightarrow } I_0,\tag{B.2}\\
&n^{1/2} r_{1,n}(\widehat \gamma) \overset{\mathbb P}{ \longrightarrow }0 .\label{statement2.0}\tag{B.3}
\end{align}
Because the matrix $I_0$ has full rank by (H\ref{cond:asymptotics_gamma_1}), we know from (\ref{statement2}) that with probability tending to $1$, $H_n (\tilde \gamma)$ is invertible. Then using (\ref{statement2.0}) gives that
$n^{1/2} (\w \gamma-\gamma_0) = H_n ( \tilde \gamma)^{-1} \{ n^{1/2} S_n(\gamma_0)\} + o_{\mathbb P}(1)$,
hence it remains to show that (see Section \ref{app_B_statements})
\begin{align}\label{statement2prime}
n^{1/2} S_n(\gamma_0) = n^{-1/2}\sum_{i=1} ^n \int (d_0(X_i) - h_0(u))dM_i(u) + o_{\mathbb P}(1),\tag{B.4}
\end{align}
to deduce the statement.
\qed
\subsection{Proof of Proposition \ref{proposition:strong_decomp_Lambda}}
For every $y\in \mathbb R_{\geq 0}$, write
\begin{eqnarray*}
n^{1/2}(\w \Lambda(y)-\Lambda_0(y))
&=&
n^{-1/2} \sum_{i=1}^n \int_0^y \frac{1}{\w Q_{\w \gamma}(u)} (dN_i(u)-\w Q_{\w \gamma}(u)d\Lambda_0(u)) \\
&=& n^{-1/2} \sum_{i=1}^n \int_0^y \frac{dM_i(u)}{\w Q_{\w \gamma}(u)} + n^{1/2} \int_0^y \left(\frac{ \w Q_{ \gamma_0}(u) - \w Q_{\w \gamma}(u) }{\w Q_{\w \gamma}(u)} \right) d\Lambda_0(u) \\
&=& n^{-1/2} \sum_{i=1}^n \int_0^y \frac{dM_i(u)}{\w Q_{\w \gamma}(u)} - n^{1/2} (\w \gamma-\gamma_0) \int_0^y \left(\frac{ \nabla_\gamma \w Q_{ \tilde \gamma}(u) }{\w Q_{\w \gamma}(u)} \right) d\Lambda_0(u),
\end{eqnarray*}
for some $\tilde \gamma$ belonging to the line segment between $ \w\gamma$ and $\gamma_0$.
As shown in Section \ref{app_B_statements},
\begin{align}\label{statement3}
&\sup_{y\in \mathbb R_{\geq 0} } \left| n^{-1/2} \sum_{i=1}^n \int_0^y \left( \frac{1}{\w Q_{\w \gamma}(u)} -\frac{1}{ Q_{0}(u)} \right) dM_i(u)\right| =o_{\mathbb P}(1),\tag{B.5}
\end{align}
and since, from Lemma \ref{Lemma:probacv},
\begin{align*}
& \sup_{u\in \mathbb{R}_{\geq 0}} \left|\frac{ \nabla_\gamma \w Q_{ \tilde \gamma}(u) }{\w Q_{\w \gamma}(u)}- h_0(u) \right| = o_{\mathbb P}(1),
\end{align*}
the result follows.
\qed
\section{Proof of the auxiliary statements (\ref{statement1}) to (\ref{statement3})} \label{app_B_statements}
\paragraph{Proof of (\ref{statement1}):}
First, we deal with the terms of the form $ \delta \log(g(\gamma,x)) $. From Lemma \ref{Lemma:GCclass}, assertion (\ref{GC:consistency1}), the underlying class, indexed by $\gamma \in B$, is Glivenko-Cantelli. It follows that
\begin{align*}
\sup_{\gamma \in B}\left |n^{-1} \sum_{i=1} ^ n \delta_i \log(g(\gamma,X_i))- E[\delta\log(g(\gamma,X))] \right| \overset{\mathbb P}{ \longrightarrow } 0.
\end{align*}
Second,
with probability going to $1$, we have that (with $ b = E[m_1(X)(1-\Delta) ]/2$)
\begin{align*}
&\sup_{\gamma \in B} \left |n^{-1} \sum_{i=1} ^ n \delta_i \log(\w Q_{\gamma}(Y_i))- E[\delta\log(Q_{\gamma}(Y))] \right| \\
&\leq \sup_{\gamma \in B} \left |n^{-1} \sum_{i=1} ^ n\delta_i\{ \log(\w Q_{\gamma}(Y_i))- \log(Q_{\gamma}(Y_i))\}\right | +\sup_{\gamma \in B}\left |n^{-1} \sum_{i=1} ^ n \delta_i\log( Q_{\gamma}(Y_i))- E[\delta\log(Q_{\gamma}(Y))]\right | \\
&\leq 2 b^{-1} \sup_{\gamma \in B,\, y\in \mathbb{R}_{\geq 0} }\left | \w Q_{\gamma}(y)- Q_{\gamma}(y)\right | +\sup_{\gamma \in B}\left |n^{-1} \sum_{i=1} ^ n\delta_i \log( Q_{\gamma}(Y_i))- E[\delta\log(Q_{\gamma}(Y))]\right |,
\end{align*}
which follows from the mean-value theorem applied to $x\mapsto \log(x)$; the lower bound $b$ on $\w Q_\gamma$ holds with probability going to $1$ by (\ref{convergence:12}) of Lemma \ref{Lemma:probacv}. The first term above then converges to $0$ by Lemma \ref{Lemma:probacv}, equation (\ref{convergence:11}), and the second by Lemma \ref{Lemma:GCclass}, assertion (\ref{GC:consistency2}). \qed
\paragraph{Proof of (\ref{statement2}):}
We show that for any random sequences $\gamma_n$ and $\tilde \gamma_n$ going to $ \gamma_0$, in $\mathbb P$-probability, we have
\begin{align*}
n^{-1}\sum_{i=1}^n \int \{d_{\gamma_n} (X_i) - \w h_{\gamma_n}(u)\}\nabla_\gamma g(\tilde \gamma_n, X_i )^T R_i(u) d\Lambda_0(u) \overset{\mathbb P}{ \longrightarrow } I_0.
\end{align*}
Some basic algebra implies that, for any bounded function $h$,
\begin{align*}
\int E \left[ \{d_0 (X) - h_0(u)\} h(u ) g(\gamma_0,X) R(u)\right] d\Lambda_0(u)=0.
\end{align*}
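This identity can be checked directly, assuming (as the rest of the appendix indicates) that $Q_{\gamma}(u)=E[g(\gamma,X)R_Y(u)]$, $d_0=\nabla_\gamma g(\gamma_0,\cdot)/g(\gamma_0,\cdot)$ and $h_0=\nabla_\gamma Q_{\gamma_0}/Q_{\gamma_0}$: for every fixed $u$,

```latex
\begin{align*}
E \left[ \{d_0 (X) - h_0(u)\} g(\gamma_0,X) R(u)\right]
= \nabla_\gamma Q_{\gamma_0}(u) - h_0(u)\, Q_{\gamma_0}(u) = 0 ,
\end{align*}
```

and the factor $h(u)$, which does not depend on $X$, can be pulled out of the inner expectation before integrating against $d\Lambda_0$.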
From the previous with $h=h_0$, we deduce that
\begin{align*}
I_0 = \int E\left[ \{d_{0} (X) - h_{0}(u)\} d_{0} (X)^T g(\gamma_0, X) R(u) \right] d\Lambda_0(u) ,
\end{align*}
hence, we have to prove that
\begin{align*}
\int n^{-1}\sum_{i=1}^n \left [ \{d_{\gamma_n} (X_i) - \w h_{\gamma_n}(u)\} \nabla_\gamma g(\tilde \gamma_n, X_i )^T R_i(u)\right] d\Lambda_0(u)
\\
\overset{\mathbb P}{ \longrightarrow } \int E\left[ \{d_{0} (X) - h_{0}(u)\} \nabla_\gamma g(\gamma_0, X)^T R(u) \right] d\Lambda_0(u) .
\end{align*}
From the triangle inequality, defining $ a(Y_i)= \int R_i(u ) d\Lambda_0(u) $, it is enough to prove that
\begin{align*}
& \left| n^{-1}\sum_{i=1}^n d_{\gamma_n} (X_i) \nabla_\gamma g(\tilde \gamma_n, X_i )^T a(Y_i) - E\left[ d_{0} (X) \nabla_\gamma g(\gamma_0, X )^T a(Y) \right] \right| \overset{\mathbb P}{ \longrightarrow } 0,\\
&\sup_{y\in\mathbb{R}_{\geq 0}} \left| \w h_{\gamma_n}(y) \nabla_\gamma \w Q_{\tilde \gamma_n} (y)^T - h_{0}(y) \nabla_\gamma Q_{0} (y)^T \right| \overset{\mathbb P}{ \longrightarrow } 0.
\end{align*}
From Lemma \ref{Lemma:GCclass}, the functions $(x,y) \mapsto d_{\gamma} (x) \nabla_\gamma g(\tilde \gamma, x )^T a(y)$, with $\gamma$ and $\tilde \gamma$ in $B$, are included in a Glivenko-Cantelli class. Hence,
\begin{align*}
\sup_{\gamma\in B,\, \tilde \gamma\in B} \left| n^{-1}\sum_{i=1}^n d_{\gamma} (X_i) \nabla_\gamma g(\tilde \gamma, X_i )^T a(Y_i) - E\left[ d_{\gamma} (X) \nabla_\gamma g(\tilde \gamma, X)^T a(Y)\right] \right| \overset{\mathbb P}{ \longrightarrow } 0.
\end{align*}
Therefore, the first convergence is derived from the continuity of the map
$(\gamma, \tilde \gamma) \mapsto E[ d_{\gamma} (X) \linebreak \nabla_\gamma g(\tilde \gamma, X )^T a(Y)]$.
This continuity is implied by (H\ref{cond:consistency_gamma_1}) and (H\ref{cond:asymptotics_gamma_2}), invoking the continuity of $\gamma \mapsto g( \gamma, x)$ and $\gamma \mapsto \nabla_\gamma g( \gamma, x)$ for every $x\in \mathcal S$, together with the Lebesgue dominated convergence theorem. The second convergence is a direct consequence of Lemma \ref{Lemma:probacv}, (\ref{convergence:12}), (\ref{convergence:13}) and (\ref{convergence:22}).
\qed
\paragraph{Proof of (\ref{statement2.0}):}
We proceed in two steps. First, we show that, for any sequence $\gamma_n$ going to $\gamma_0$ in $\mathbb P$-probability,
\begin{align*}
n^{-1}\sum_{i=1}^n \int \{d_{\gamma_n} (X_i)-d_0(X_i)\} dM_i(u) =o_{\mathbb P}(n^{-1/2}).
\end{align*}
We apply Lemma \ref{Lemma:equicontinuity1} to obtain the previous convergence coordinate by coordinate. For $j\in\{1,\ldots, q\}$, with probability $1$, the function $d_{\gamma_n,j}-d_{0,j}$ belongs to the class $\{x\mapsto d_{\gamma,j} (x) -d_{0,j}(x) \,:\, \gamma\in B\}$ which, by Lemma \ref{lemma:dgamma_donsker&hboundedvariation}, satisfies (\ref{cond:uniform_entropy}). By (H\ref{cond:consistency_gamma_0}) and (H\ref{cond:asymptotics_gamma_2}), there exists some constant $C>0$ such that the envelope $L$, given in Lemma \ref{lemma:dgamma_donsker&hboundedvariation}, satisfies
\begin{align*}
&E[L^2(X) g(\gamma_0,X) ] \\
&< C \left( E\left[ \frac{(c_2(X)+M_2(X) )^2 M_1(X)}{m_1^2(X)} \right] +E\left[ \frac{M_2^2(X) (c_1(X)+ M_1(X))^2M_1(X)}{m_1^4(X)} \right]
\right)<+\infty .
\end{align*}
Moreover, from (\ref{ineq:dgamma_lipschitz}) and (\ref{eq:mean_value_th2}), we find that
$E[\{d_{ \gamma_n,j} (X)-d_{0,j}(X)\}^2 g(\gamma_0,X)]\leq C | \gamma_n-\gamma_0|_1^2$,
which goes to $0$ in $\mathbb P$-probability.
Second, we prove that, for any sequence $\gamma_n$ going to $\gamma_0$ in $\mathbb P$-probability,
\begin{align}\label{statement2.0.1}
n^{-1}\sum_{i=1}^n \int \left\{\w h_{0}(u) - \w h_{ \gamma_n}(u)\right\} d M_i(u)=o_{\mathbb P}(n^{-1/2}).
\end{align}
Let $j\in\{1,\ldots, q\}$. By Lemma \ref{lemma:dgamma_donsker&hboundedvariation}, $\widehat h_{\gamma_n,j} \in \text{BV} (m,v) $ with probability going to $1$, and by Lemma \ref{Lemma:probacv}, $\sup_{u\in \mathbb R_{\geq 0}} |h_{\gamma_0,j}(u) - \w h_{\gamma_n,j}(u)|\overset{\mathbb P}{\rightarrow }0$; both properties also hold with $\gamma_n$ replaced by $\gamma_0$. Hence, the result follows from Lemma \ref{Lemma:equicontinuity2}, applied to $\w h_{\gamma_n,j}$ and to $\w h_{\gamma_0,j}$.
\qed
\paragraph{Proof of (\ref{statement2prime}):}
From identity (\ref{eq:importantidentity}) with $\gamma = \gamma_0$, we have
\begin{align*}
S_n(\gamma_0) &= n^{-1}\sum_{i=1} ^n \int (d_0(X_i) - \w h_0(u))dN_i(u)\\
&= n^{-1}\sum_{i=1} ^n \int ( d_0(X_i) - \w h_0(u))dM_i(u).
\end{align*}
Applying Lemma \ref{Lemma:equicontinuity2} with $\widehat h = \w h_0$, as in the proof of (\ref{statement2.0.1}), gives
$n^{-1}\sum_{i=1}^n \int \left\{h_0(u) - \w h_{0}(u)\right\} d M_i(u)=o_{\mathbb P}(n^{-1/2})$,
and hence (\ref{statement2prime}) follows.
\qed
\paragraph{Proof of (\ref{statement3}):} We will apply Lemma \ref{Lemma:equicontinuity2} with $\widehat h$ equal to the function $u\mapsto \w Q_{\w \gamma}(u)^{-1} $ and $h_0$ equal to the function $u\mapsto Q_{0}(u)^{-1} $. By (\ref{convergence:12}), the functions $\w Q_{\gamma}^{-1}$, $\gamma\in B$, are, with probability going to $1$, valued in a bounded interval. Since they are non-decreasing, their total variation is at most $2/E[m_1(X)(1-\Delta)] - 1/(2E[M_1(X)])$, with probability going to $1$. It follows that there exist $m$ and $v$ such that, with probability going to $1$, $\{ \w Q_{\gamma}^{-1}\, :\, \gamma\in B\}\subset \text{BV}(m,v) $. Furthermore, on the event $\{ \inf_{\gamma \in B,\, y\in \mathbb{R}_{\geq 0}} \w Q _\gamma (y)\geq E[m_1(X)(1-\Delta)]/2\} $, which has probability going to $1$ in light of Lemma \ref{Lemma:probacv}, equation (\ref{convergence:12}), we have
\begin{align*}
\sup_{u\in \mathbb R_{\geq 0}} |\w Q_{\w \gamma}(u)^{-1} - Q_{0}(u)^{-1}|\leq 4 E[m_1(X) (1-\Delta)] ^{-2} \sup_{u\in \mathbb R_{\geq 0}} |\w Q_{\w \gamma}(u) - Q_{0}(u)| \overset{\mathbb P}{\longrightarrow }0.
\end{align*}
\qed
\section{Technical lemmas on empirical processes}\label{app_C_emp_proc}
Empirical process theory is useful to describe the asymptotics of semiparametric estimators because such estimators typically involve empirical sums indexed by functional quantities. Helpful concepts are the Glivenko-Cantelli and Donsker classes studied in \cite{vandervaart1996}. We start by establishing the Glivenko-Cantelli property for certain classes of interest. Let $\xi,\xi_1,\xi_2,\ldots$ be independent and identically distributed random variables with distribution $P$. Denote the underlying probability by $\mathbb P$. A class $\mathcal F$ of real-valued functions is said to be $P$-Glivenko-Cantelli if
\begin{align*}
\sup_{f\in \mathcal F} \left| n^{-1}\sum_{i=1}^n \{ f(\xi_i)- Ef(\xi)\}\right| \overset{\mathbb P}{\longrightarrow} 0.
\end{align*}
When $\mathcal F$ is a vector-valued class, we say it is $P$-Glivenko-Cantelli when each coordinate is $P$-Glivenko-Cantelli. In what follows, the $j$-th coordinates of $d_{\gamma} $ and $\nabla _{\gamma} \widehat Q_\gamma $ are denoted by $d_{\gamma,j} $ and $\nabla _{\gamma,j} \widehat Q_\gamma $, respectively ($j= 1,\ldots, q$).
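As a purely illustrative sanity check in Python, outside the formal setting of this appendix, the classical Glivenko-Cantelli theorem for the indicator class $\{x\mapsto \mathds 1_{\{x\leq t\}}\,:\, t\in[0,1]\}$ under the uniform distribution can be observed numerically; the supremum over this class is the Kolmogorov-Smirnov statistic $\sup_t|F_n(t)-t|$, attained at the jump points of the empirical distribution function.

```python
import numpy as np

# Purely illustrative sanity check (not the classes of this appendix):
# the classical Glivenko-Cantelli theorem for the indicator class
# {x -> 1{x <= t} : t in [0, 1]} under the uniform distribution.
# The supremum over the class is the Kolmogorov-Smirnov statistic
# sup_t |F_n(t) - t|, attained at the jump points of F_n.
rng = np.random.default_rng(0)

def sup_deviation(n):
    xs = np.sort(rng.uniform(size=n))
    upper = np.arange(1, n + 1) / n  # F_n at each jump point
    lower = np.arange(0, n) / n      # F_n just before each jump
    return max(np.max(np.abs(upper - xs)), np.max(np.abs(lower - xs)))

# the supremum deviation shrinks as n grows, at rate roughly n^{-1/2}
devs = [sup_deviation(n) for n in (100, 10_000)]
```
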
\begin{lemma}\label{Lemma:GCclass}
Let $R_{y}(u) = \mathds 1_{\{ y\leq \tau \}} \mathds 1_{\{ y\geq u\}} + \mathds 1_{\{y>\tau \}} $. Under (H\ref{cond:consistency_gamma_0}) and (H\ref{cond:consistency_gamma_1}), the following holds:
\begin{enumerate}[(i)]
\item\label{GC:consistency1} the class $\left\{ (\delta,x) \mapsto \delta \log(g(\gamma,x) ) \,:\, \gamma\in B \right\}$ is $P$-Glivenko-Cantelli,
\item \label{GC:consistency2} the class $\left\{ (\delta, y) \mapsto \delta\log( Q_\gamma(y)) \,:\, \gamma\in B \right\}$ is $P$-Glivenko-Cantelli,
\item \label{GC3} the class $\left\{(x,y) \mapsto g(\gamma,x)R_y(u) \,:\, \gamma\in B,\, u\in \mathbb R_{\geq 0} \right\}$ is $P$-Glivenko-Cantelli.
\end{enumerate}
Let $ a(y)= \int R_y(u ) d\Lambda_0(u) $. Under (H\ref{cond:consistency_gamma_0}), (H\ref{cond:consistency_gamma_1}) and (H\ref{cond:asymptotics_gamma_2}) the following holds for all $j,k\in \{1,\ldots, q\}$:
\begin{enumerate}[(i)] \setcounter{enumi}{3}
\item\label{GC:prime1} the class $\left\{(x,y) \mapsto \nabla_{\gamma,j} g(\gamma,x) R_y(u) \,:\, \gamma\in B,\, u\in \mathbb R_{\geq 0} \right\}$ is $P$-Glivenko-Cantelli,
\item\label{GC:prime1.1} the class $\left\{(x,y) \mapsto | \nabla_{\gamma,j}g(\gamma,x) | R_y(u) \,:\, \gamma\in B,\, u\in \mathbb R_{\geq 0} \right\}$ is $P$-Glivenko-Cantelli,
\item \label{GC:prime2} the class $\left\{ (x,y) \mapsto d_{\gamma,j} (x) \nabla_{\gamma,k} g(\tilde \gamma, x )^T a(y) \, : \, \gamma\in B,\, \tilde \gamma\in B \right\}$ is $P$-Glivenko-Cantelli.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\mathcal N_{[\,]}(\epsilon,\mathcal F, \|\cdot\|)$ (resp. $\mathcal N(\epsilon,\mathcal F, \|\cdot\|)$) denote the $\epsilon$-bracketing number (resp. $\epsilon$-covering number) of the metric space $(\mathcal F,\|\cdot\|)$ \citep[Definition 2.1.5]{vandervaart1996}.
As a preliminary step, we show that the class $\mathcal G=\left\{ x \mapsto g(\gamma,x ) \,:\, \gamma\in B \right\}$ is Glivenko-Cantelli whenever $0< E[c_1(X)]<+\infty$ which is true by (H\ref{cond:consistency_gamma_1}).
Because of (\ref{eq:mean_value_th1}), we are in a position to apply Theorem 2.7.11 in \cite{vandervaart1996} with the $L_p(Q)$-norm, $p\geq 1$, $Q$ some probability measure, and the class $\mathcal G$. Let $B_0$ be some ball of finite radius in $\mathbb R^{q}$ such that $B\subset B_0$. Because the $\epsilon$-covering number of $(B_0,|\cdot|_1)$ is $O(\epsilon^ {-q})$, we find
\begin{align}\label{ineq:bracketing_G}
\mathcal N_{[\,]}\left( 2 \epsilon \| c_1\|_{L_p(Q)},\mathcal G, L_p(Q)\right) \leq \mathcal N_{}(\epsilon,B_0, |\cdot|_1) \leq K\epsilon^{-q},
\end{align}
for some $K>0$. When $p=1$ and $Q=P$, because $0< \| c_1\|_{L_1(P)}< +\infty $, we have that $\mathcal N_{[\,]}\left( \epsilon ,\mathcal G, L_1(P)\right)<+\infty $ for every $\epsilon >0$, so that the class $\mathcal G$ is Glivenko-Cantelli \citep[Theorem 2.4.1]{vandervaart1996}.
We now prove (i). The class of interest $\{ (\delta, x ) \mapsto \delta \log(g(\gamma,x) ) \, :\, \gamma\in B\}$ can be written as $\mathcal F_1\times \log(\mathcal G)$ where $\mathcal F_1 = \{\delta\mapsto \delta\}$. This is a continuous transformation of Glivenko-Cantelli classes and we can apply Theorem 3 in \cite{vandervaart+w:2000}. The envelope property is ensured as, by (H\ref{cond:consistency_gamma_1}), $E [\sup_{\gamma\in B} | \log(g(\gamma,X))| ]\leq E [ |\log(M_1(X))| +|\log(m_1(X))|] <+\infty $.
We now consider (ii). Multiplying (\ref{eq:mean_value_th1}) by $R_Y(y)$ and taking the expectation, we obtain, for every $y\in\mathbb R_{\geq 0}$, $(\gamma,\tilde \gamma)\in B^2$,
\begin{align}\label{eq:bound_lipschitz_Q}
| Q_\gamma(y) - Q_{\tilde \gamma}(y) | \leq |\gamma - \tilde \gamma |_1 E[ c_1(X)].
\end{align}
Following the preliminary step of the proof with (\ref{eq:bound_lipschitz_Q}) in place of (\ref{eq:mean_value_th1}), we again invoke Theorem 2.7.11 and Theorem 2.4.1 in \cite{vandervaart1996} to obtain that $\left\{ y \mapsto Q_\gamma(y) \,:\, \gamma\in B \right\}$ is Glivenko-Cantelli.
Then, applying Theorem 3 in \cite{vandervaart+w:2000}, we obtain (\ref{GC:consistency2}). The (constant) envelope property is provided by (\ref{eq:bound_upper_lower}).
To show (\ref{GC3}), we apply Theorem 3 in \cite{vandervaart+w:2000} as the class of interest is the product of two classes, $\mathcal G$ and $\{y\mapsto R_y(u)\, :\, u\in \mathbb R_{\geq 0} \}$, both being $P$-Glivenko-Cantelli.
For (\ref{GC:prime1}), let $j\in \{1,\ldots, q\}$. Similarly to the preliminary step, we show that $\{x\mapsto \nabla _{\gamma,j} g(\gamma, x) \,:\, \gamma\in B\} $ is $P$-Glivenko-Cantelli provided that $0<E[c_2(X)] <+\infty $. Then, as in the proof of (\ref{GC3}), because $E[ M_2(X)]<+\infty $, we apply Theorem 3 in \cite{vandervaart+w:2000} to obtain (\ref{GC:prime1}).
For (\ref{GC:prime1.1}), Theorem 3 in \cite{vandervaart+w:2000}, applied with the map $x\mapsto |x|$ to the class $\big\{(x,y) \mapsto \nabla_{\gamma,j} g(\gamma,x) R_y(u) : $ $ \gamma\in B,\, u\in \mathbb R_{\geq 0} \big\}$, gives that
$\left\{(x,y) \mapsto |\nabla_{\gamma,j} g(\gamma,x)| R_y(u) \,:\, \gamma\in B,\, u\in \mathbb R_{\geq 0} \right\}$ is $P$-Glivenko-Cantelli.
Concerning (\ref{GC:prime2}), let $j,k\in \{1,\ldots, q\}$. The class of interest is a continuous transformation of the $P$-Glivenko-Cantelli classes,
\begin{align*}
&\left\{x \mapsto \nabla_{\gamma,j} g(\gamma,x) \,:\, \gamma\in B \right\},\quad \left\{x \mapsto \nabla_{\gamma,k} g(\gamma,x) \,:\, \gamma\in B \right\},\\
&\left\{x \mapsto g(\gamma,x) \,:\, \gamma\in B \right\},\quad \{y\mapsto a(y) \}.
\end{align*}
Consequently, one just has to verify the envelope property, which is obtained from (H\ref{cond:consistency_gamma_1}) and (H\ref{cond:asymptotics_gamma_2}),
\begin{align*}
E \left[\sup_{\gamma\in B, \, \tilde \gamma\in B} \left| d_{\gamma,j} (X) \nabla_{\gamma,k} g(\tilde \gamma, X )^T a(Y) \right| \right]< \theta_0 E\left[\frac{ M_2^2(X) }{ m_1(X) } \right] <+\infty.
\end{align*}
\end{proof}
\begin{lemma}\label{Lemma:probacv}
Let $ \gamma_n$ be a random sequence that converges to $\gamma_0$ in $\mathbb P$-probability. Under (H\ref{cond:consistency_gamma_1}), we have that
\begin{align}
& \sup_{\gamma \in B,\, y\in \mathbb{R}_{\geq 0} } | \w Q_{\gamma}(y)- Q_{\gamma}(y)|\overset{\mathbb P}{ \longrightarrow } 0,\label{convergence:11}\\
&\mathbb P \left( \forall\gamma \in B,\, \forall y\in \mathbb{R}_{\geq 0}\,:\, E[ m_1(X) (1-\Delta)]/2 \leq \widehat Q_\gamma (y ) \leq 2E[M_1(X)] \right) \longrightarrow 1, \label{convergence:12}\\
&\sup_{y\in\mathbb{R}_{\geq 0}} \left| \w Q_{ \gamma_n} (y) - Q_{0} (y) \right| \overset{\mathbb P}{ \longrightarrow } 0.\label{convergence:13}
\end{align}
Under (H\ref{cond:consistency_gamma_1}) and (H\ref{cond:asymptotics_gamma_2}), we have that
\begin{align}
& \sup_{\gamma \in B,\, y\in \mathbb{R}_{\geq 0} } \left| \nabla_\gamma \w Q_{\gamma}(y)- \nabla_\gamma Q_{\gamma}(y)\right|_1\overset{\mathbb P }{ \longrightarrow } 0, \label{convergence:21} \\
&\mathbb P \left( \sup_{\gamma \in B,\, y\in \mathbb{R}_{\geq 0}} \left| \nabla_\gamma \w Q_{\gamma}(y) \right|_1 \leq 2E[M_2(X)] \right) \longrightarrow 1, \label{convergence:21.1}\\
&\sup_{y\in\mathbb{R}_{\geq 0}} \left| \nabla_\gamma \w Q_{ \gamma_n} (y) - \nabla_\gamma Q_{0} (y) \right| _1 \overset{\mathbb P}{ \longrightarrow } 0\label{convergence:22} .
\end{align}
\end{lemma}
\begin{proof}
Convergences (\ref{convergence:11}) and (\ref{convergence:21}) are consequences of, respectively, (\ref{GC3}) and (\ref{GC:prime1}) of Lemma \ref{Lemma:Donskerclass}. Statement (\ref{convergence:12}) is an easy consequence of (\ref{eq:bound_upper_lower}) and (\ref{convergence:11}). Similarly, we obtain (\ref{convergence:21.1}) invoking (\ref{convergence:21}) and the fact that, from (H\ref{cond:asymptotics_gamma_2}),
$\left| \nabla_\gamma Q_{\gamma}(y) \right|_1\leq E[ M_2(X)]$.
Convergences (\ref{convergence:13}) and (\ref{convergence:22}) are treated similarly. Indeed, for (\ref{convergence:13}), write
\begin{align*}
\sup_{y\in\mathbb{R}_{\geq 0}} \left| \w Q_{ \gamma_n} (y) - Q_{0} (y) \right| \leq \sup_{y\in\mathbb{R}_{\geq 0},\, \gamma\in B} \left| \w Q_{ \gamma } (y) - Q_{\gamma} (y) \right| + \sup_{y\in\mathbb{R}_{\geq 0}} \left| Q_{ \gamma_n} (y) - Q_{0} (y) \right|.
\end{align*}
The first term on the right hand side goes to $0$ in $\mathbb P$-probability as shown before. For the second term on the right hand side, (\ref{eq:bound_lipschitz_Q}) yields
\begin{align*}
\sup_{y\in\mathbb{R}_{\geq 0}} \left| Q_{ \gamma_n} (y) - Q_{0} (y) \right|\leq |\gamma_n-\gamma_0 |_1 E[ c_1(X)] .
\end{align*}
The conclusion follows. For (\ref{convergence:22}) we do the same and obtain from (\ref{eq:mean_value_th2}) that
\begin{align*}
\sup_{y\in\mathbb{R}_{\geq 0}} \left| \nabla_\gamma Q_{ \gamma_n} (y) - \nabla_\gamma Q_{0} (y) \right|_1\leq |\gamma_n-\gamma_0 |_1 E[ c_2(X)] .
\end{align*}
The result now follows.
\end{proof}
We now turn our attention to some results related to the concept of Donsker classes. A class $\mathcal F$ is said to be $P$-Donsker if the empirical process
\begin{align*}
\left\{ n^{-1/2}\sum_{i=1}^n \{ f(\xi_i)- Ef(\xi)\} \, : \, f\in\mathcal F \right\}
\end{align*}
converges weakly to a Gaussian process in the space $\ell ^{\infty}(\mathcal F)$.
The space $\ell ^{\infty}(\mathcal F)$ denotes the metric space of bounded functions defined on $\mathcal F$ endowed with the supremum distance. Let $\text{BV}(m,v)$ denote the space of c\`ad-l\`ag functions bounded by $m$ and with bounded variation $v$. Define, for every $(y,u,x)\in \mathbb R_{\geq 0}\times \mathbb R_{\geq 0}\times \mathcal S$,
\begin{align*}
M_{y,\delta, x}(u ) = \delta \mathds 1_{\{y\leq u\}} - \int_0^u g(\gamma_0,x) (\mathds 1_{\{ y\leq \tau \}} \mathds 1_{\{ y\geq v \}} + \mathds 1_{\{y>\tau \}} ) d\Lambda_0(v).
\end{align*}
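A small Monte Carlo sketch in Python (illustrative only, with the uniform distribution standing in for $P$) of what the Donsker property asserts at a fixed index: for the half-line indicators, $n^{1/2}(F_n(t)-t)$ is asymptotically Gaussian with the Brownian-bridge variance $t(1-t)$.

```python
import numpy as np

# Monte Carlo sketch (illustrative only, with U(0, 1) standing in
# for P): for the Donsker class of half-line indicators, the
# empirical process n^{1/2}(F_n(t) - t) is asymptotically Gaussian
# at each fixed t, with the Brownian-bridge variance t(1 - t).
rng = np.random.default_rng(1)
n, reps, t = 500, 4000, 0.3
samples = rng.uniform(size=(reps, n))
z = np.sqrt(n) * ((samples <= t).mean(axis=1) - t)
# z.mean() should be near 0 and z.var() near t * (1 - t) = 0.21
```
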
\begin{lemma}\label{Lemma:Donskerclass}
Suppose that $E[g(\gamma_0,X)^2]<+\infty$. The class $ \{ (y,\delta, x) \mapsto \int h(u) dM_{y,\delta, x}(u)\, : \, h \in \text{BV} (m,v) \}$ is $P$-Donsker.
\end{lemma}
\begin{proof}
As a first step, we show that $\{n^{-1/2} \sum_{i=1}^n M_i(u)\, :\, u\in\mathbb R_{\geq 0}\} $ converges weakly in $\ell^\infty (\mathbb R_{\geq 0})$.
Example 2.5.4 in \cite{vandervaart1996} provides a bound on the uniform entropy numbers of the class of indicator functions. Example 2.10.23 in \cite{vandervaart1996} ensures that the product of two such classes is Donsker. It implies that $\{(y,\delta)\mapsto \delta \mathds 1 _{\{ y \leq u\}}\, : \, u\in\mathbb R_{\geq 0}\}$ is Donsker. Moreover, the set of functions defined for any $ (y,x)\in \mathbb R_{\geq 0}\times \mathcal S $ by
\begin{align*}
\int_0^u g(\gamma_0,x) (\mathds 1_{\{ y\leq \tau \}} \mathds 1_{\{ y\geq v \}} + \mathds 1_{\{y>\tau \}} ) d\Lambda_0(v) = g(\gamma_0,x)\left (\mathds 1_{\{ y\leq \tau \}} \Lambda_0 (y\wedge u ) + \mathds 1_{\{ y> \tau \}} \Lambda_0 ( u) \right),
\end{align*}
when $u$ varies in $\mathbb R_{\geq 0}$, is VC. Indeed, the class $\{y\mapsto \mathds 1_{\{ y\leq \tau \}} \Lambda_0 (y\wedge u ) + \mathds 1_{\{ y> \tau \}} \Lambda_0 ( u)\, : \, u\in \mathbb R_{\geq 0} \}$ is uniformly bounded and its subgraphs are ordered by inclusion as $u$ increases. Therefore, no $2$ points can be shattered by the collection of subgraphs, which means that the VC index is $2$. The class $\{x\mapsto g(\gamma_0,x)\}$ has only one element, and hence the product will be Donsker as soon as $E[ g(\gamma_0,X)^2] <+\infty$ (again from Example 2.10.23 in \cite{vandervaart1996}).
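The ordered-by-inclusion argument can be made concrete with a toy Python computation (hypothetical points, nothing from the paper): for half-lines $\{y\,:\, y\leq u\}$, the labeling in which the larger of two points is selected but not the smaller one never occurs.

```python
# Toy computation (hypothetical points, nothing from the paper):
# half-lines {y : y <= u} are ordered by inclusion as u increases,
# so on two points a < b the labeling (0, 1), i.e. b selected but
# not a, can never be realized, whence a VC index of 2.
a, b = 0.2, 0.7
patterns = {(a <= u, b <= u) for u in [k / 10 for k in range(11)]}
# only 3 of the 4 possible labelings of {a, b} occur
```
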
As a second step, we show that the process $\{n^{-1/2} \sum_{i=1}^n\int h(u) dM_i(u)\,:\, h\in \text{BV} (m,v)\}$ converges weakly in $\ell ^{\infty}( \text{BV} (m,v)) $, relying on the preservation of weak convergence through continuous mappings. The previous process is the image of $\{ n ^{-1/2} \sum_{i=1}^n M_i(u)\,:\, u\in \mathbb R_{\geq 0} \}$ by the linear transformation $H\mapsto \{\int h(u)dH(u)\, :\, h\in \text{BV} (m,v)\}$, defined on the space of c\`ad-l\`ag functions and valued in $\ell ^{\infty}(\text{BV} (m,v))$. Weak convergence is preserved whenever the map is continuous \citep{vandervaart1996}, that is, whenever
\begin{align*}
\sup_{h\in \text{BV} (m,v)}\left|\int h(u) d H(u)\right| \rightarrow 0,\qquad \text{as } \sup_{u\in \mathbb R_{\geq 0}} |H(u)|\rightarrow 0.
\end{align*}
The latter holds since both norms are in fact equivalent \citep{dudley1992}.
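A brief sketch of the underlying bound, assuming for this sketch that $H$ is c\`ad-l\`ag of bounded variation with well-defined limits at $0^-$ and $+\infty$, so that integration by parts is licit: for every $h\in\text{BV}(m,v)$,

```latex
\begin{align*}
\left|\int h(u)\, dH(u)\right|
\leq \left| h(\infty)H(\infty) - h(0^-)H(0^-)\right| + \left|\int H(u^-)\, dh(u)\right|
\leq (2m + v)\, \sup_{u\in \mathbb R_{\geq 0}} |H(u)| ,
\end{align*}
```

so the linear map is Lipschitz, hence continuous, for the supremum distance, uniformly over $\text{BV}(m,v)$.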
\end{proof}
The following lemma is useful to characterize the limiting distribution of the estimators.
\begin{lemma}\label{lemma:weakcv}
Under (H\ref{cond:consistency_gamma_1}) and (H\ref{cond:asymptotics_gamma_2}), the empirical process
\begin{align*}
& n^{-1/2} \sum_{i=1}^n \left( \begin{array}{c}
\int_0^y \frac{dM_i(u)}{Q_0(u) } \\
\int (d_0(X_i)- h_0(u)) dM_i(u)
\end{array} \right) ,
\end{align*}
converges weakly in $\ell^{\infty} (\mathbb{R}_{\geq 0})\times \mathbb{R}^q$ to a Gaussian process with covariance function
\begin{align*}
(y,y') \mapsto \begin{pmatrix}
\int_0^{y\wedge y'} \frac{d\Lambda_0(u)}{Q_0(u) } &0\\
0& \int E\left[ (d_0(X)- h_0(u))(d_0(X)- h_0(u))^T g(\gamma_0,X) R(u)\right] d\Lambda_0 (u)
\end{pmatrix} .
\end{align*}
\end{lemma}
\begin{proof}
The statement is a consequence of Lemma \ref{Lemma:Donskerclass}. By (\ref{eq:bound_upper_lower}), $Q_0^{-1} \in \text{BV}(m,v)$ for some $m>0$ and $v>0$. Using that for any $j\in \{1,\ldots, q\}$, $u<u'$,
\begin{align*}
|\nabla_{\gamma,j} Q_{\gamma}(u) - \nabla_{\gamma,j} Q_{\gamma}(u')| \leq E\left[ |\nabla_{\gamma,j} g(\gamma, X) | \, (R(u)-R(u')) \right] ,
\end{align*}
we have that $\nabla_{\gamma,j} Q_{\gamma} \in \text{BV}(m',v')$ for some $m'>0$ and $v'>0$. Consequently, $h_0\in \text{BV}(m'',v'')$ for some $m''>0$ and $v''>0$. The Donsker property given by Lemma \ref{Lemma:Donskerclass} implies the tightness of each coordinate of the underlying empirical process. Then, by using the multivariate central limit theorem, we obtain the convergence in distribution of the finite-dimensional laws. This shows the result.
\end{proof}
Recall that $\mathcal N_{}(\epsilon,\mathcal F, \|\cdot\|)$ denotes the $\epsilon$-covering number of the metric space $(\mathcal F,\|\cdot\|)$. A class $\mathcal F$ with envelope $F$ is said to satisfy the uniform entropy condition whenever
\begin{align}\label{cond:uniform_entropy}
\int_0^{+\infty} \sup_{Q} \sqrt {\log\mathcal N_{}\left(\epsilon\|F\|_{L_2(Q)},\mathcal F,L_2(Q)\right) } d\epsilon <+\infty,
\end{align}
where the supremum is taken over all finitely discrete probability measures. It is tempting to generalise the next lemma with the Donsker property in place of the more technical and stronger uniform entropy condition. Unfortunately, such a generalisation generally fails when the random variables $g(\gamma,X)$, $\gamma\in B$, are unbounded. As detailed in \cite{vandervaart1996}, Section 2.10.2, stronger preservation properties are available when dealing with uniform entropy numbers \citep[Theorem 2.10.20]{vandervaart1996} rather than with the Donsker property \citep[Theorem 2.10.6]{vandervaart1996}.
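For instance, a polynomial covering bound such as (\ref{ineq:covering_G}) is amply sufficient for (\ref{cond:uniform_entropy}): if $\sup_Q \mathcal N(\epsilon\|F\|_{L_2(Q)},\mathcal F,L_2(Q))\leq K\epsilon^{-q}$ for $\epsilon\in(0,1]$, while a single ball of radius $\|F\|_{L_2(Q)}$ covers $\mathcal F$ when $\epsilon\geq 1$, then

```latex
\begin{align*}
\int_0^{+\infty} \sup_{Q} \sqrt {\log\mathcal N_{}\left(\epsilon\|F\|_{L_2(Q)},\mathcal F,L_2(Q)\right) }\, d\epsilon
\leq \int_0^{1} \sqrt{\log K + q \log(1/\epsilon)}\, d\epsilon < +\infty ,
\end{align*}
```

because $\epsilon\mapsto\sqrt{\log(1/\epsilon)}$ is integrable near $0$.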
\begin{lemma}\label{Lemma:equicontinuity1}
Let $\mathcal D$ denote a class of functions $\mathcal S\rightarrow \mathbb R$ satisfying (\ref{cond:uniform_entropy}) with envelope $D$ such that $E[D(X)^2g(\gamma_0,X) ]<+\infty$. If $\widehat d :\mathcal S \rightarrow \mathbb R$ is such that $\mathbb P(\widehat d\in \mathcal D) \rightarrow 1$ and $\int [\widehat d(X) ^2 g(\gamma_0,X)]dP_X =o_{\mathbb P}(1)$ (where $P_X$ is the probability measure of $X$), we have that
\begin{align*}
n^{-1} \sum_{i=1} ^n \int \widehat d (X_i) dM_i(u) = o_{\mathbb P}(n^{-1/2}) .
\end{align*}
\end{lemma}
\begin{proof}
From the martingale property of $M$, each term of the previous empirical sum has mean $0$. To apply Theorem 2.1 in \cite{wellner2007}, two statements need to be verified. First, from It\^o's isometry and because $ \int R(u) d\Lambda_0(u)\leq \theta_0$, we have
\begin{align*}
\int \left(\int \widehat d (X) dM(u) \right)^2 dP_X&=\int \left[ \int \widehat d (X) ^2 g(\gamma_0, X) R(u) d\Lambda_0(u) \right] dP_X \\
&\leq \theta_0 \int \left[ \widehat d (X) ^2 g(\gamma_0, X) \right] dP_X ,
\end{align*}
which goes to $0$ in $\mathbb P$-probability, by assumption. Second, the class of functions
\begin{align*}
\{ (y,\delta, x) \mapsto d (x) \int dM_{y,\delta,x}(u) \, : \, d\in \mathcal D \},
\end{align*}
is shown to be Donsker by invoking Example 2.10.23 in \cite{vandervaart1996}. Indeed, the class can be written as the product of two classes: one satisfies (\ref{cond:uniform_entropy}) and the other has only one element. The condition on the envelope is $E[D(X)^2( \int dM)^2 ]\leq \theta_0 E[ D(X)^2 g(\gamma_0, X ) ]<+\infty $ by assumption.
\end{proof}
\begin{lemma}\label{Lemma:equicontinuity2}
Assume that there exist $v>0$ and $m>0$ such that $\mathbb P(\widehat h \in \text{BV} (m,v))\rightarrow 1$ and $h_0\in \text{BV} (m,v)$. If moreover, $\sup_{y\in \mathbb R_{\geq 0}} | \widehat h(y ) - h_0(y) | =o_{\mathbb P}(1)$, we have that
\begin{align*}
\sup_{y\in\mathbb R_{\geq 0}} \left| n^{-1} \sum_{i=1}^n \int_0^y (\widehat h (u) - h_0(u) ) d M_i (u)\right| = o_{\mathbb P}(n^{-1/2}) .
\end{align*}
\end{lemma}
\begin{proof}
We rely on the asymptotic equicontinuity of empirical processes over Donsker classes. Denote by $Z_n$ the process $\{ Z_n (h) = n^{-1/2} \sum_{i=1}^n \int h (u) d M_i (u)\,:\, h\in \text{BV}(m,v)\}$.
From Lemma \ref{Lemma:Donskerclass}, $Z_n$ converges weakly in the space $\ell^{\infty}( \text{BV} (m,v))$. Since weak convergence implies asymptotic tightness \cite[Theorem 1.5.7, see also page 89]{vandervaart1996}, for every $\epsilon>0$ and every $\eta>0$, there exists $\delta >0$ such that
\begin{align*}
\limsup_{n\rightarrow +\infty}\, \mathbb P\left(\sup_{(h,\tilde h)\in \mathcal H _\delta } |Z_n(h)- Z_n(\tilde h) | >\epsilon \right)< \eta ,
\end{align*}
where
\begin{align*}
\mathcal H _\delta = \left \{(h,\tilde h)\in \text{BV}(m,v)^2\, :\, \| h- \tilde h\|_{L_2(P)}\leq \delta\right \}.
\end{align*}
We need to show that
\begin{align*}
\sup_{y\in\mathbb R_{\geq 0}} \left| Z_n(\widehat h\mathds 1_{[0, y]} )- Z_n(h_0\mathds 1_{[0, y]})\right| = o_{\mathbb P}(1).
\end{align*}
First, note that $\widehat h\mathds 1_{[0, y]}$ and $h_0\mathds 1_{[0, y]}$ belong to $\text{BV} (m,v)$, with probability going to $1$. Second, we have that
\begin{align*}
\sup_{y\in \mathbb R_{\geq 0}}\|\widehat h \mathds 1_{[0, y]} -h_0\mathds 1_{[0, y]}\|_{L_2(P)}\leq \sup_{y\in \mathbb R_{\geq 0}} | \widehat h(y ) - h_0(y) |,
\end{align*}
which goes to $0$ in $\mathbb P$-probability. Consequently, we have, with probability going to $1$, that
\begin{align*}
\left\{ (\widehat h\mathds 1_{[0, y]}, h_0\mathds 1_{[0, y]}) \, :\, y\in\mathbb R_{\geq 0}\right\}\subset \mathcal H_\delta ,
\end{align*}
which implies that
\begin{align*}
\sup_{y\in\mathbb R_{\geq 0}} \left| Z_n(\widehat h\mathds 1_{[0, y]} )- Z_n(h_0\mathds 1_{[0, y]})\right| \leq \sup_{(h,\tilde h)\in \mathcal H _\delta } |Z_n(h)- Z_n(\tilde h) | .
\end{align*}
The fact that $\epsilon $ and $\eta$ can be chosen arbitrarily small implies the statement.
\end{proof}
The application of Lemma \ref{Lemma:equicontinuity1} and Lemma \ref{Lemma:equicontinuity2} requires the following result, which establishes that $\w d_{\w \gamma}-d_0$ (resp. $\w h_{\w \gamma}$) verifies the conditions on $\w d$ (resp. $\w h$) in Lemma \ref{Lemma:equicontinuity1} (resp. Lemma \ref{Lemma:equicontinuity2}). In what follows, the $j$-th coordinate of $d_{\gamma} $, $\widehat h_{\gamma}$ and $\nabla _{\gamma} \widehat Q_\gamma $ is denoted by $d_{\gamma,j} $, $\widehat h_{\gamma,j}$, $\nabla _{\gamma,j} \widehat Q_\gamma $, respectively ($j=1,\ldots, q)$.
\begin{lemma}\label{lemma:dgamma_donsker&hboundedvariation}
Under (H\ref{cond:consistency_gamma_0}), (H\ref{cond:consistency_gamma_1}) and (H\ref{cond:asymptotics_gamma_2}), for every $j\in \{1,\ldots, q\}$, the class of functions $\{x\mapsto d_{\gamma,j} (x) -d_{0,j} (x) \,:\, \gamma\in B\} $ satisfies (\ref{cond:uniform_entropy}) with the envelope $L = \sqrt { 8}( \left({1}/{m_1 } \right) (\diam( B)c_2 +M_2)+ \left( {M_2}/{m_1^2} \right) (\diam( B)c_1 +M_1))$.
Moreover, there exist $m>0$ and $v>0$ such that $\mathbb P( \{ \w h_{\gamma,j} \, :\, \gamma\in B\} \subset \text{BV} (m,v) ) \rightarrow 1$.
\end{lemma}
\begin{proof}
The fact that \citep[Section 2.1.1]{vandervaart1996}
\begin{align*}
\mathcal N_{}\left(2 \epsilon \|c_1\|_{L_2(Q)},\mathcal G, L_2(Q)\right)\leq \mathcal N_{[\,]}\left( 2 \epsilon \|c_1\|_{L_2(Q)},\mathcal G, L_2(Q)\right),
\end{align*}
together with (\ref{ineq:bracketing_G}) when $p=2$, implies that
\begin{align}\label{ineq:covering_G}
\mathcal N_{}\left(2 \epsilon \|c_1\|_{L_2(Q)},\mathcal G, L_2(Q)\right)\leq K \epsilon^{-q}.
\end{align}
Let $j\in \{1,\ldots, q\}$, and define $\dot{\mathcal G}_j=\left\{ x \mapsto \nabla _{\gamma,j} g(\gamma,x ) \,:\, \gamma\in B \right\}$. Then similarly as for the class $\mathcal G$, using (\ref{eq:mean_value_th2}) and invoking Theorem 2.7.11 in \cite{vandervaart1996} with the $L_2(Q)$-norm, we find that
\begin{align}\label{ineq:covering_G_point}
\mathcal N_{}\left( 2\epsilon \|c_2\|_{L_2(Q)},\dot{\mathcal G}_j, L_2(Q)\right) \leq K \epsilon^{-q}.
\end{align}
The two previous inequalities continue to hold when the functions $2c_1$ and $2c_2$ are replaced by $ \diam( B) c_1 +M_1 $ and $\diam( B) c_2 +M_2 $, respectively. Because these two functions are envelopes for $\mathcal G$ and $\dot{\mathcal G}_j$, (\ref{cond:uniform_entropy}) is satisfied for $\mathcal G$ and $\dot{\mathcal G}_j$ with the envelopes $ L_1 = \diam( B) c_1 +M_1 $ and $L_2 = \diam( B) c_2 +M_2 $. We are now interested in the quotient class formed by the elements $\dot g/g$, when $\dot g \in \dot{\mathcal G}_j$ and $g\in \mathcal G$. Note that, for every $\dot g_1, \dot g_2$ in $\dot{\mathcal G}_j$ and $g_1$, $g_2$ in $\mathcal G$,
\begin{align}
\left|\frac{ \dot g_1}{g_1} - \frac{\dot g_2}{g_2}\right|^2&\leq 2 \left|\frac{1}{g_1 } \right| ^2\, |\dot g_1-\dot g_2|^2+2 \left| \frac{\dot g_2}{g _1g_2 } \right|^2\, |g_1-g_2|^2 \notag \\
&\leq 2\left|\frac{1}{m_1 } \right|^2 \, | \dot g_1-\dot g_2|^2 + 2 \left| \frac{M_2}{m_1^2} \right|^2 \, |g_1-g_2|^2 \label{ineq:dgamma_lipschitz}.
\end{align}
From the previous display, and because $\sqrt{ a+b} \leq \sqrt a+\sqrt b $ for $a\geq 0$ and $b\geq 0$, an envelope for $ \dot{\mathcal G}_j / {\mathcal G} - d_{0,j}$ is given by $ \sqrt 8 ( (1/m_1) L_2 + (M_2/m_1^2) L_1 ) $, which is equal to $L$ given in the statement.
As (\ref{ineq:covering_G}), (\ref{ineq:covering_G_point}) and (\ref{ineq:dgamma_lipschitz}) hold, we can apply Theorem 2.10.20 in \cite{vandervaart1996} to the classes $\mathcal G$, $\dot{ \mathcal G}_j$ and $\{d_{ 0,j}\}$, to obtain that
\begin{align*}
\int_0^{+\infty} \sup_Q \sqrt { \log \mathcal N \left( \epsilon \| L \|_{L_2(Q)} , \dot{\mathcal G}_j / {\mathcal G}-d_{0,j}, L_2(Q)\right) } d\epsilon <+\infty ,
\end{align*}
where the supremum is taken over the finitely discrete probability measures. We have shown the first statement of the lemma.
Let $\|f\| _{\text{tv}} $ denote the total variation of $f$ over $\overline {\mathbb R_{\geq 0}}$. To show the second statement, we need to prove that
\begin{align*}
\mathbb P\left(\sup_{\gamma\in B}\| \widehat h_{\gamma, j} \| _{\text{tv}}\leq v ,\, \sup_{\gamma\in B,\, y\in \mathbb R_{\geq 0}}| \widehat h_{\gamma, j}(y) |\leq m \right ) \longrightarrow 1.
\end{align*}
Define, for every $y\in \mathbb R_{\geq 0}$,
\begin{align*}
\widehat T_{\gamma,j}(y)= n^{-1} \sum_{i=1} ^n | \nabla _{\gamma,j} g(\gamma, X_i ) | R_i(y).
\end{align*}
Introduce the event
\begin{eqnarray*}
A &=& \left \{ \omega : E[ m_1(X) (1-\Delta)]/2 \leq \widehat Q_\gamma (y ) \leq 2E[M_1(X)] \quad \right. \\
&& \hspace*{1cm} \left. \text{and } \widehat T_{\gamma,j}(y)\leq 2E[ M_2(X)] \text{ for all } \gamma\in B ,\, y\in \mathbb R_{\geq 0} \right\} .
\end{eqnarray*}
On the set $A$, we have
\begin{align}\label{bound:1}
\sup_{\gamma\in B,\, y\in \mathbb R_{\geq 0}}\left| \widehat h_{\gamma, j}(y) \right|\leq \frac{4E[M_2(X)] }{E[m_1(X)(1-\Delta)] }.
\end{align}
On $A$, we also have that, for all $\gamma\in B$ and $u < v$ in $\overline{\mathbb R_{\geq 0}}$,
\begin{align*}
|\widehat h_{\gamma, j} (u) -\widehat h_{\gamma, j} (v) | &= \left| \frac{\nabla _{\gamma,j} \widehat Q_\gamma (u) }{\widehat Q_\gamma (u)} - \frac{\nabla _{\gamma,j} \widehat Q_\gamma (v) }{\widehat Q_\gamma (v)} \right| \\
&\leq \left( 2 \frac{ | \nabla _{\gamma,j} \widehat Q_\gamma (u)-\nabla _{\gamma,j} \widehat Q_\gamma (v)|}{E[m_1(X)(1-\Delta)] } + 8\frac{E[M_2(X)] \, | \widehat Q_\gamma (u) -\widehat Q_\gamma (v) | }{E[m_1(X)(1-\Delta)] ^{2}}\right) .
\end{align*}
It follows that there exists $C>0$ such that, for all $\gamma\in B$ and $u < v$ in $\overline{\mathbb R_{\geq 0}}$,
\begin{align*}
| \widehat h_{\gamma, j} (u) - \widehat h_{\gamma, j} (v) | \leq C \left( | \widehat T_{\gamma,j} (u)- \widehat T_{\gamma ,j} (v)| + | \widehat Q_\gamma (u) -\widehat Q_\gamma (v) |\right).
\end{align*}
Apply the previous inequality and use the fact that $\widehat T_{\gamma,j}$ and $\widehat Q_\gamma$ are non-increasing functions to obtain that, on $A$, for all $\gamma\in B$ and any set of points $u_0<u_1<\ldots <u_N $,
\begin{align*}
\sum_{k=1}^N | \widehat h_{\gamma, j} (u_k) - \widehat h_{\gamma, j} (u_{k-1}) | &\leq C \left( \sum_{k=1}^N | \widehat T_{\gamma,j} (u_k)- \widehat T_{\gamma,j} (u_{k-1})| +\sum_{k=1}^N | \widehat Q_\gamma (u_k) -\widehat Q_\gamma (u_{k-1}) |\right)\\
&= C \left ( \widehat T_{\gamma,j} (u_0)- \widehat T_{\gamma,j} (u_{N}) +\widehat Q_\gamma (u_0) -\widehat Q_\gamma (u_{N}) \right)\\
&\leq C \left( \widehat T_{\gamma,j} (0) +\widehat Q_\gamma (0) \right)\\
&\leq C \left( \sup_{\gamma\in B} \{ \widehat T_{\gamma,j} (0) \} +\sup_{\gamma\in B} \{ \widehat Q_\gamma (0)\} \right).
\end{align*}
Consequently, on $A$,
\begin{align}\label{bound:2}
\sup_{\gamma\in B} \| \widehat h_{\gamma, j} \| _{\text{tv}} \leq 2 C(E[M_1(X)]+E[M_2(X)]).
\end{align}
Hence, with (\ref{bound:1}) and (\ref{bound:2}) we have found $m$ and $v$ such that
\begin{align*}
\mathbb P(A) \leq \mathbb P\left(\sup_{\gamma\in B}\| \widehat h_{\gamma, j} \| _{\text{tv}}\leq v ,\, \sup_{\gamma\in B,\, y\in \mathbb R_{\geq 0}}| \widehat h_{\gamma, j}(y) |\leq m \right ).
\end{align*}
From Lemma \ref{Lemma:probacv}, statements (\ref{convergence:12}) and (\ref{convergence:21.1}), $\mathbb P(A) $ goes to $1$ and hence the result follows.
\end{proof}
\end{appendices}
\vspace*{.5cm}
\noindent
{\bf \large Acknowledgments} \\
This work was supported by Interuniversity Attraction Pole Research Network P7/06 of the Belgian State (Belgian Science Policy). F. Portier was in addition supported by Fonds de la Recherche Scientifique (FNRS) A4/5 FC 2779/2014-2017 No. 22342320, I. Van Keilegom was also supported by the European Research Council (2016-2021, Horizon 2020/ ERC grant agreement No.\ 694409), and A. El Ghouch was also supported by the PDR (convention PDR.T.0080.16), a funding instrument of the FNRS.
\bibliographystyle{chicago}
\section{Introduction}
We consider a collection of core problems related to \emph{minimums of means}. For a given finite collection of probability distributions parameterized by their means $\mu_1, \ldots, \mu_K$, we are interested in learning about $\mu^* = \min_a \mu_a$ from adaptive samples $X_t \sim \mu_{A_t}$, where $A_t$ indicates the distribution sampled at time $t$. We shall refer to these distributions as arms in reference to a multi-armed bandit model \cite{Robbins52Freq,LaiRobbins85bandits}. Knowing about minima/maxima is crucial in reinforcement learning or game-playing, where the value of a state for an agent is the \emph{maximum} over actions of the (expected) successor state value or the \emph{minimum} over adversary moves of the next state value.
The problem of estimating $\mu^* = \min_a \mu_a$ was studied in \cite{Hasselt13} and subsequently \cite{Eramo16,Imagaw17,d2017estimating}. It is known that no unbiased estimator exists for $\mu^*$, and that estimators face an intricate bias-variance trade-off. Beyond estimation, the problem of constructing \emph{confidence intervals} on minima/maxima naturally arises in (Monte Carlo) planning in Markov Decision Processes \cite{trailblazer} and games \cite{KocsisBBMCP06}. Such confidence intervals are used hierarchically for Monte Carlo Tree Search (MCTS) in \cite{Teraoka14MCTS,maximinarm,HuangASM17,mcts.by.bai}. The open problem of designing asymptotically optimal algorithms for MCTS led us to isolate one core difficulty that we study here, namely the construction of confidence intervals and associated sampling/stopping rules for learning minima (and, by symmetry, maxima).
Confidence intervals (that are uniform over time) can be naturally obtained from a (sequential) test of $\set*{\mu^* < \gamma}$ versus $\set*{\mu^* > \gamma}$, given a threshold $\gamma$. The main focus of the paper goes even further and investigates the minimum number of samples required for \emph{adaptively} testing whether $\set*{\mu^* < \gamma}$ or $\set*{\mu^* > \gamma}$, that is, sequentially sampling the arms in order to decide for one hypothesis as quickly as possible. Such a problem is interesting in its own right as it naturally arises in several statistical certification applications. As an example, we may consider quality control testing in manufacturing, where we want to certify that in a batch of machines each has a guaranteed probability of successfully producing a widget. In e-learning, we may want to certify that a given student has sufficient understanding of a range of subjects, asking as few questions as possible about the different subjects. In anomaly detection, we may want to flag the presence of an anomaly faster the more anomalies are present. Finally, in a crowdsourcing system, we may need to establish as quickly as possible whether a cohort of workers contains at least one unacceptably careless worker.
We thus study a particular instance of the sequential adaptive hypothesis testing problem introduced by Chernoff \cite{Chernoff59}, in which multiple experiments (sampling from one arm) are available to the experimenter, each of which allows the experimenter to gain different information about the hypotheses. The experimenter sequentially selects which experiment to perform, when to stop and then which hypothesis to recommend. Several recent works from the bandit literature fit into this framework, with the twist that they consider continuous, composite hypotheses and aim for $\delta$-correct testing: the probability of guessing a wrong hypothesis has to be smaller than $\delta$, while performing as few experiments as possible. The fixed-confidence \emph{Best Arm Identification} problem (concerned with finding the arm with largest mean) is one such example \cite{EvenDaral06,JMLR15}, of which several variants have been studied \cite{Shivaramal12,HuangASM17,Garivier17DF}. For example, the Thresholding Bandit Problem \cite{Locatelli16Threshold} aims at finding the set of arms above a threshold, which is strictly harder than our testing problem.
A full characterization of the asymptotic complexity of the BAI problem was recently given in \cite{GKK16}, highlighting the existence of an \emph{optimal allocation of samples} across arms. The lower bound technique introduced therein can be generalized to virtually any testing problem in a bandit model (see, e.g.\ \cite{NIPS17,Garivier17DF}). Such an optimal allocation is also presented by \cite{ChenGLQW17} in the GENERAL-SAMP framework, which is quite generic and in particular encompasses testing on which side of $\gamma$ the minimum falls. The proposed $\mathrm{LPSample}$ algorithm is thus a candidate to be applied to our testing problem. However, this algorithm is only proved to be \emph{order-optimal}, that is to attain the minimal sample complexity up to a (large) multiplicative constant. Moreover, like other algorithms for special cases (e.g.\ Track-and-Stop for BAI \cite{GKK16}), it relies on \emph{forced exploration}, which may be harmful in practice and leads to an unavoidably asymptotic analysis.
Our first contribution is a tight lower bound on the sample complexity that provides an oracle sample allocation, but also aims at reflecting the moderate-risk behavior of a $\delta$-correct algorithm.
Our second contribution is a new sampling rule for the minimum testing problem, under which the empirical fraction of selections converges to the optimal allocation without forced exploration. The algorithm is a variant of Thompson Sampling \cite{Thompson33,AGCOLT12} that conditions on the ``worst'' outcome $\mu^* < \gamma$, hence the name Murphy Sampling. This conditioning is inspired by the Top Two Thompson Sampling recently proposed by \cite{Russo16} for Best Arm Identification. As we shall see, the optimal allocation is very different depending on whether $\mu^* < \gamma$ or $\mu^* > \gamma$, and yet Murphy Sampling automatically adopts the right behavior in each case. Our third contribution is a new stopping rule that, by aggregating samples from several arms that look small, may lead to early stopping whenever $\mu^* < \gamma$. This stopping rule is based on a new self-normalized deviation inequality for exponential families (Theorem~\ref{thm:mainDev}) of independent interest. It generalizes results obtained by \cite{Jamiesonal14LILUCB,JMLR15} in the Gaussian case and by \cite{KLUCBJournal} without the uniformity in time, and also handles subsets of arms.
The rest of the paper is structured as follows. In Section~\ref{sec:Setup} we introduce our notation and formally define our objective. In Section~\ref{sec:LB}, we present lower bounds on the sample complexity of sequential tests for minima. In particular, we compute the optimal allocations for this problem and discuss the limitation of naive benchmarks to attain them. In Section~\ref{sec:alg} we introduce Murphy sampling, and prove its optimality in conjunction with a simple stopping rule. Improved stopping rules and associated confidence intervals are presented in Section~\ref{sec:Stopping}. Finally, numerical experiments reported in Section~\ref{sec:Expes} demonstrate the efficiency of Murphy Sampling paired with our new Aggregate stopping rule.
\newpage
\section{Setup}\label{sec:Setup}
We consider a family of $K$ probability distributions that belong to a one-parameter canonical exponential family, which we shall call \emph{arms} in reference to a multi-armed bandit model. Such exponential families include Gaussian distributions with known variance, Bernoulli and Poisson distributions; see \cite{KLUCBJournal} for details. For natural parameter $\nu$, the density of the distribution w.r.t.\ carrier measure $\rho$ on $\mathbb R$ is given by $e^{x \nu - b(\nu)} \rho(\!\dif x)$, where the cumulant generating function $b(\nu) = \ln \ex_\rho \sbr{e^{X \nu}}$ induces a bijection $\nu \mapsto \dot b(\nu)$ to the mean parameterization. We write $\KL\del*{\nu, \lambda}$ and $d(\mu, \theta)$ for the Kullback-Leibler divergence from natural parameters $\nu$ to $\lambda$ and from mean parameters $\mu$ to $\theta$. Specifically, with convex conjugate $b_*$,
\[
\KL \del*{\nu, \lambda}
~=~
b(\lambda) - b(\nu) + \del*{\nu - \lambda} \dot b(\nu)
\quad
\text{and}
\quad
d(\mu, \theta)
~=~
b_*(\mu) - b_*(\theta) - (\mu-\theta) \dot b_*(\theta)
.
\]
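For concreteness (this specialization is standard, not specific to this paper), the mean-parameterized divergence $d$ has simple closed forms for two common one-parameter families; a minimal Python sketch:

```python
import math

def d_bernoulli(mu, theta):
    """d(mu, theta) for Bernoulli arms: the binary relative entropy."""
    return mu * math.log(mu / theta) + (1 - mu) * math.log((1 - mu) / (1 - theta))

def d_gaussian(mu, theta, sigma2=1.0):
    """d(mu, theta) for Gaussian arms with known variance sigma2."""
    return (mu - theta) ** 2 / (2 * sigma2)
```

Both are nonnegative and vanish exactly when $\mu = \theta$, as required of a Bregman divergence of $b_*$.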
We denote by $\vmu = (\mu_1,\dots,\mu_K) \in \cI^K$ the vector of arm means, which fully characterizes the model. In this paper, we are interested in the smallest mean (and the arm where it is attained)
\[
\mu^* ~=~ \min_a \mu_a
\qquad
\text{and}
\qquad
a^* ~=~ a^*(\bm\mu) ~=~ \arg\min_a \mu_a
.
\]
Given a threshold $\gamma \in \cI$, our goal is to decide whether $\mu^* < \gamma$ or $\mu^* > \gamma$. We introduce the hypotheses
\[
\mathcal H_< ~=~ \setc{\vmu \in \cI^K}{\mu^* < \gamma}
\quad
\text{and}
\quad
\mathcal H_> ~=~ \setc{\vmu \in \cI^K}{\mu^* > \gamma},
\quad
\text{and their union}
\quad
\H ~=~ \H_< \cup \H_>
.
\]
We want to propose a sequential and adaptive testing procedure, consisting of a \emph{sampling rule} $A_t$, a \emph{stopping rule} $\tau$ and a \emph{decision rule} $\hat m \in \set{<,>}$. The algorithm samples $X_t \sim \mu_{A_t}$ while $t \le \tau$, and then outputs a decision $\hat m$. We denote the information available after $t$ rounds by $\mathcal F_t = \sigma\del*{A_1, X_1, \ldots, A_t, X_t}$. $A_t$ is measurable with respect to $\cF_{t-1}$ and possibly some exogenous random variable, $\tau$ is a stopping time with respect to this filtration and $\hat{m}$ is $\cF_{\tau}$-measurable.
We aim for a \emph{$\delta$-correct} algorithm, that satisfies $\pr_{\vmu} \del*{\vmu \in \mathcal H_{\hat m}} \ge 1-\delta$ for all $\vmu \in \H$. Our goal is to build $\delta$-correct algorithms that use a small number of samples $\tau_\delta$ in order to reach a decision. In particular, we want the \emph{sample complexity} $\ex_{\vmu}[\tau]$ to be small.
\paragraph{Notation} We let $N_a(t) = \sum_{s=1}^t \ind_{(A_s = a)}$ be the number of selections of arm $a$ up to round $t$, $S_a(t) = \sum_{s=1}^t X_s \ind_{(A_s = a)}$ be the sum of the gathered observations from that arm and $\hat{\mu}_a(t) = S_a(t)/N_a(t)$ their empirical mean.
\section{Lower Bounds} \label{sec:LB}
In this section we study information-theoretic sample complexity lower bounds, in particular to find out what the problem tells us about the behavior of oracle algorithms. \cite{GK16} prove that for any $\delta$-correct algorithm
\begin{equation}\label{eq:gen.lbd}
\ex_{\vmu}[\tau] ~\ge~ T^*(\vmu) \kl(\delta,1-\delta)
\qquad
\text{where}
\qquad
\frac{1}{T^*(\vmu)} ~=~ \max_{\w \in \triangle} \min_{\vlambda \in \Alt(\vmu)}~
\sum_a w_a d(\mu_a, \lambda_a)
\end{equation}
Here $\kl(x,y) = x \ln \frac{x}{y} + (1-x)\ln\frac{1-x}{1-y}$ denotes the binary relative entropy, and $\Alt(\vmu)$ is the set of bandit models where the correct recommendation differs from that on $\vmu$. The following result specialises the above to the case of testing $\mathcal H_<$ vs $\mathcal H_>$, and gives explicit expressions for the \emph{characteristic time} $T^*(\vmu)$ and \emph{oracle weights} $\w^*(\vmu)$.
\begin{lemma}\label{lem:lbd}
Any $\delta$-correct strategy satisfies \eqref{eq:gen.lbd} with
\[
T^*(\vmu) ~=~
\begin{cases}
\frac{1}{d(\mu^*, \gamma)}
& \mu^* < \gamma
,
\\
\sum_a \frac{1}{d(\mu_a, \gamma)}
& \mu^* > \gamma
,
\end{cases}
\qquad
\text{and}
\qquad
w^*_a(\vmu)
~=~
\begin{cases}
\mathbf 1_{a=a^*} & \mu^* < \gamma
,
\\
\frac{\frac{1}{d(\mu_a, \gamma)}}{\sum_j \frac{1}{d(\mu_j, \gamma)}}
& \mu^* > \gamma
.
\end{cases}
\]
\end{lemma}
Lemma~\ref{lem:lbd} is proved in Appendix~\ref{sec:pf.lb}. As explained by \cite{GK16} the oracle weights correspond to the fraction of samples that should be allocated to each arm under a strategy matching the lower bound. The interesting feature here is that the lower bound indicates that an oracle algorithm should have very different behavior on $\mathcal H_<$ and $\mathcal H_>$. On $\mathcal H_<$ it should sample $a^*$ (or all lowest means, if there are several) exclusively, while on $\mathcal H_>$ it should sample \emph{all} arms with certain specific proportions.
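The explicit expressions in Lemma~\ref{lem:lbd} are straightforward to evaluate numerically. A minimal sketch (the function `oracle` and its interface are ours, for illustration; `d` is the family's mean-parameterized divergence):

```python
def oracle(mu, gamma, d):
    """Characteristic time T*(mu) and oracle weights w*(mu) from Lemma 1."""
    if min(mu) < gamma:
        # H_<: sample only the arm with smallest mean
        a_star = min(range(len(mu)), key=lambda a: mu[a])
        T = 1.0 / d(mu[a_star], gamma)
        w = [1.0 if a == a_star else 0.0 for a in range(len(mu))]
    else:
        # H_>: sample all arms, in proportion to the inverse divergences
        inv = [1.0 / d(m, gamma) for m in mu]
        T = sum(inv)
        w = [x / T for x in inv]
    return T, w
```

For example, with Gaussian arms of unit variance, $\vmu = (1, 2)$ and $\gamma = 0$ fall in $\H_>$ and give $T^*(\vmu) = 2.5$ with $\w^* = (0.8, 0.2)$.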
\subsection{Boosting the Lower Bounds}
Following~\cite{GMS18} (see also~\cite{simulator} and references therein), Lemma~\ref{lem:lbd} can be improved under very mild assumptions on the strategies.
We call a test \emph{symmetric} if its sampling and stopping rules are invariant by conjugation under the action of the group of permutations on the arms. In that case, if all the arms are equal, then their expected numbers of draws are equal. For simplicity we assume $\mu_1 \le \ldots \le \mu_K$.
\begin{proposition}\label{prop:impbinf}
Let $k = \max_a d(\mu_a,\gamma)= \max \big\{ d(\mu_1,\gamma), d(\mu_K,\gamma)\big\}$.
For any symmetric, $\delta$-correct test, for all arms $a\in\{1,\dots,K\}$,
\[\ex_\vmu[N_a(\tau)]\geq \frac{2\left(1-2\delta K^3\right)}{27K^2k}\;.\]
\end{proposition}
Proposition~\ref{prop:impbinf} is proved in Appendix~\ref{sec:pf.lb}.
It is an open question to improve the dependency in $K$ in this bound; moreover, one may expect a bound decreasing with $\delta$, maybe in $\ln(\ln(1/\delta))$ (but certainly not in $\ln(1/\delta)$).
This result already has two important consequences: first, it shows that even an optimal algorithm needs to draw all the arms a certain number of times, even on $\H_<$ where Lemma~\ref{lem:lbd} may suggest otherwise.
Second, this lower bound on the number of draws of each arm can be used to ``boost'' the lower bound on $\ex_{\vmu}[\tau]$: the following result is also proved in Appendix~\ref{sec:pf.lb}.
\begin{theorem}\label{th:impbinf}
When $\mu^* < \gamma$, for any symmetric, $\delta$-correct strategy,
\begin{align*}
\ex_{\vmu}[\tau] \geq \frac{\kl(\delta, 1-\delta)}{d(\mu_1,\gamma)} + \frac{2\left(1-2\delta K^3\right)}{27K^2k}\sum_a \left(1-\frac{d_+(\mu_a, \gamma)}{d(\mu_1,\gamma)}\right)\;.
\end{align*}
\end{theorem}
When $d(\mu_1,\gamma)\geq d(\mu_K,\gamma)$, this bound can be rewritten as:
\begin{equation}\ex_{\vmu}[\tau] \geq \frac{1}{d(\mu_1,\gamma)} \left( \kl(\delta, 1-\delta) + \frac{2\left(1-2\delta K^3\right)}{27K^2}\sum_a \left(1-\frac{d_+(\mu_a, \gamma)}{d(\mu_1,\gamma)}\right)\right)\;.\end{equation}
The lower bound for the case $\mu^*>\gamma$ can also be boosted similarly, with a less explicit result.
\subsection{Lower Bound Inspired Matching Algorithms} \label{sec:Matching} In light of the lower bound in Lemma~\ref{lem:lbd}, we now investigate the design of optimal learning algorithms (sampling rule $A_t$ and stopping rule $\tau$). We start with the stopping rule.
The first stopping rule that comes to mind consists in comparing \emph{separately} each arm to the threshold and stopping when either one arm looks significantly below the threshold or all arms look significantly above. Introducing $d^+(u,v)=d(u,v)\ind_{(u \leq v)}$ and $d^-(u,v) = d(u,v) \ind_{(u \geq v)}$, we let
\begin{equation}\label{eq:tbox}
\tau_{\mathrm{Box}} ~=~
\tau_{<}\, \glb\, \tau_{>}
\qquad
\text{where}
\qquad
\begin{aligned}
\tau_{<} &~=~
\inf\left\{ t\in \N^* : \exists a\, N_a(t) d^+(\hat{\mu}_a(t), \gamma) \geq C_<(\delta,N_a(t))\right\},
\\
\tau_{>} &~=~
\inf\left\{ t\in \N^* : \forall a\, N_a(t) d^-(\hat{\mu}_a(t), \gamma) \geq C_>(\delta,N_a(t))\right\},
\end{aligned}
\end{equation}
and $C_<(\delta, r)$ and $C_>(\delta, r)$ are two \emph{threshold functions} to be specified. $\mathrm{Box}$ refers to the fact that the decision to stop relies on individual ``box'' confidence intervals for each arm, whose endpoints are
\begin{eqnarray*}
\textrm{U}_a(t) & = & \max \{ q : N_a(t) d^+(\hat{\mu}_a(t), q) \geq C_<(\delta,N_a(t))\}, \\
\textrm{L}_a(t) & = & \min \{ q: N_a(t) d^{-}(\hat{\mu}_a(t), q) \geq C_>(\delta,N_a(t))\}.
\end{eqnarray*}
Indeed, $\tau_{\mathrm{Box}} = \inf\left\{ t\in \N^* : \text{$\min_a \textrm{U}_a(t) \leq \gamma$ or $\min_a \textrm{L}_a(t) \geq \gamma$}\right\}$. In particular, if $\forall a, \forall t\in \N^*, \mu_a \in [\textrm{L}_a(t),\textrm{U}_a(t)]$, any algorithm that stops using $\tau_{\mathrm{Box}}$ is guaranteed to output a correct decision. In the Gaussian case, existing work \cite{Jamiesonal14LILUCB,JMLR15} makes it possible to exhibit thresholds of the form $C_{\lessgtr}(\delta,r) = \ln(1/\delta)+a \ln\ln(1/\delta) + b \ln(1+\ln(r))$ for which this sufficient correctness condition is satisfied with probability larger than $1-\delta$. Theorem~\ref{thm:mainDev} below generalizes this to exponential families.
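Since $d(\hat{\mu}_a(t), \cdot)$ is increasing to the right of $\hat{\mu}_a(t)$, the endpoint $\textrm{U}_a(t)$ can be computed by bisection (and $\textrm{L}_a(t)$ symmetrically on the left). A minimal sketch, with names of our choosing; the Gaussian divergence appears only in the usage example:

```python
def box_upper(mu_hat, n, c, d, hi, tol=1e-9):
    """Upper endpoint U: the largest q >= mu_hat with n * d(mu_hat, q) <= c.

    Generic bisection on [mu_hat, hi], valid because d(mu_hat, .) is
    increasing to the right of mu_hat.
    """
    if n * d(mu_hat, hi) <= c:
        return hi  # the interval is not constrained within [mu_hat, hi]
    lo = mu_hat
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if n * d(mu_hat, mid) <= c:
            lo = mid  # mid is still inside the confidence interval
        else:
            hi = mid
    return lo
```

For Gaussian arms with unit variance, $d(\hat\mu, q) = (\hat\mu - q)^2/2$, so with $\hat\mu = 0$, $n = 4$ and $c = 2$ the endpoint is $q = 1$.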
Given that $\tau_{\mathrm{Box}}$ can be proved to be $\delta$-correct \emph{whatever the sampling rule}, the next step is to propose sampling rules that, coupled with $\tau_{\mathrm{Box}}$, would attain the lower bound presented in Section~\ref{sec:LB}. We now show that a simple algorithm, called LCB, can do that for all $\bm \mu \in \H_>$. LCB selects at each round the arm with smallest Lower Confidence Bound:
\begin{equation}\label{eq:LCB.alg}
\framebox{%
\text{\texttt{LCB}: Play
$A_t = \text{argmin}_{a} \ \textrm{L}_a(t)$
,
}%
}
\end{equation}
which is intuitively designed to attain the stopping condition $\min_a \textrm{L}_a(t) \geq \gamma$ faster. In Appendix~\ref{sec:LCB} we prove (Proposition~\ref{prop:LCB.good}) that LCB is optimal for $\bm \mu \in \H_>$; however, we show (Proposition~\ref{prop:LCB.bad}) that on instances of $\H_<$ it draws all arms $a\neq a^*$ too much and cannot match our lower bound.
For $\bm\mu \in \H_<$, the lower bound of Lemma~\ref{lem:lbd} can actually be a good guideline to design a matching algorithm: under such an algorithm, the empirical proportion of draws of the arm $a^*$ with smallest mean should converge to 1. The literature on regret minimization in bandit models (see \cite{Bubeck:Survey12} for a survey) provides candidate algorithms that have this type of behavior, and we propose to use the Thompson Sampling (TS) algorithm \cite{AGCOLT12,ALT12}. Given independent prior distributions on the means of the arms, this Bayesian algorithm selects an arm at random according to its posterior probability of being optimal (in our case, the arm with smallest mean). Letting $\pi_a^t$ refer to the posterior distribution of $\mu_a$ after $t$ samples, this can be implemented as
\[
\framebox{%
\text{\texttt{TS}:
Sample
$
\forall a \in \{1,\dots,K\}, \theta_a(t) \sim \pi_a^{t-1}$, then play $A_t = \arg\min_{a \in
\{1,\dots,K\}} \ \theta_a(t)$.
}%
}
\]
It follows from Theorem~\ref{thm:TS.converge.inf} in Appendix~\ref{thm:TS.convergence} that if Thompson Sampling is run without stopping, ${N_{a^*}(t)}/{t}$ converges almost surely to 1, for every $\bm \mu$. As TS is an anytime sampling strategy (i.e.\ one that does not depend on $\delta$), Lemma~\ref{lem:asym.smp.cplx} below justifies that on every instance of $\H_<$ with a unique optimal arm, under this algorithm $\tau_{\mathrm{Box}} \simeq (1/d(\mu_1,\gamma))\ln(1/\delta)$. However, TS cannot be optimal for $\bm \mu \in \H_>$, as the empirical proportions of draws cannot converge to $\w^*(\bm \mu) \neq \mathbf 1_{a^*}$.
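For Gaussian arms with an (improper) flat prior, the per-arm posteriors are Gaussian with mean $\hat{\mu}_a(t)$ and variance $\sigma^2/N_a(t)$, and one TS round takes a few lines. This sketch assumes that setting (it is not the paper's code):

```python
import random

def ts_round(sums, counts, sigma2=1.0):
    """One round of Thompson Sampling for the smallest mean.

    Gaussian arms with known variance sigma2 and a flat prior are assumed,
    so the posterior of arm a is N(S_a / N_a, sigma2 / N_a).
    """
    thetas = [random.gauss(s / n, (sigma2 / n) ** 0.5)
              for s, n in zip(sums, counts)]
    return min(range(len(thetas)), key=lambda a: thetas[a])
```

With a large gap between the empirical means, the arm with the smallest mean is selected with overwhelming probability.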
To summarize, we presented a simple stopping rule, $\tau_{\mathrm{Box}}$, that can be asymptotically optimal for every $\bm \mu \in \H_<$ if it is used in combination with Thompson Sampling, and for $\bm \mu \in \H_>$ if it is used in combination with LCB. But neither of these two sampling rules is good on the other type of instance, which severely limits their practical use. In the next section, we propose a new Thompson-Sampling-like algorithm that ensures the right exploration under both $\H_<$ and $\H_>$. In Section~\ref{sec:Stopping}, we further present an improved stopping rule that may stop significantly earlier than $\tau_{\mathrm{Box}}$ on instances of $\H_<$, by aggregating samples from multiple arms that look small.
We now argue that ensuring the sampling proportions converge to $\w^*$ is sufficient for reaching the optimal sample complexity, at least in an asymptotic sense. The proof can be found in Appendix~\ref{proof:asymp.smp.cplx}.
\begin{lemma}\label{lem:asym.smp.cplx}
Fix $\vmu \in \H$. Fix an anytime sampling strategy ($A_t$) ensuring $\frac{\bm{N}_t}{t} \to \w^*(\vmu)$. Let $\tau_\delta$ be a stopping rule such that $\tau_\delta \leq \tau_\delta^\mathrm{Box}$, for a Box stopping rule \eqref{eq:tbox} whose threshold functions $C_{\lessgtr}$ satisfy the following: they are non-decreasing in $r$ and there exists a function $f$ such that,
\[\forall r \geq r_0, \ C_{\lessgtr}(\delta, r) \leq f(\delta) + \ln r, \ \ \text{where} \ \ f(\delta) = \ln(1/\delta) + o(\ln(1/\delta)).\]
Then $\limsup_{\delta \to 0} \frac{\tau_\delta}{\ln \frac{1}{\delta}} \le T^*(\vmu)$ almost surely.
\end{lemma}
\section{Murphy Sampling}\label{sec:alg}
In this section we denote by $\Pi_n = \pr\delc*{\cdot}{\mathcal F_n}$ the posterior distribution of the mean parameters after $n$ rounds. We introduce a new (randomised) sampling rule called \emph{Murphy Sampling} after Murphy's Law, as it performs some conditioning to the ``worst event'' $(\bm \mu \in \H_<)$:
\begin{equation}\label{eq:ms}
\framebox{%
\text{\texttt{MS}: Sample $\vtheta_t \sim \Pi_{t-1}\delc*{\cdot}{\mathcal H_<}$, then play
$A_t ~=~ a^*(\vtheta_t)$
.
}%
}
\end{equation}
As we will argue below, the subtle difference of sampling from $\Pi_{t-1}\delc*{\cdot}{\mathcal H_<}$ instead of $\Pi_{t-1}$ (regular Thompson Sampling) ensures the required split personality behavior (see Lemma~\ref{lem:lbd}). Note that MS always conditions on $\mathcal H_<$ (and never on $\mathcal H_>$) regardless of the position of $\vmu$ w.r.t.\ $\gamma$. This is different from the symmetric Top Two Thompson Sampling \cite{Russo16}, which essentially conditions on $a^*(\vtheta) \neq a^*(\vmu)$ a fixed fraction $1-\beta$ of the time, where $\beta$ is a parameter that needs to be tuned with knowledge of $\vmu$. MS on the other hand needs no parameters.
Also note that MS is an anytime sampling algorithm, being independent of the confidence level $1-\delta$. The confidence will manifest only in the stopping rule.
MS is technically an instance of Thompson Sampling with a joint prior $\Pi$ supported only on $\H_<$. This viewpoint is conceptually funky, as we will apply MS identically to $\H_<$ \emph{and} $\H_>$. To implement MS, we use that independent conjugate per-arm priors induce likewise posteriors, admitting efficient (unconditioned) posterior sampling. Rejection sampling then achieves the required conditioning. In our experiments on $\H_>$ (with moderate $\delta$), stopping rules kick in before the rejection probability becomes impractically high.
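As a sketch of the rejection-sampling implementation described above (Gaussian arms with a flat prior are assumed here for simplicity; all names are ours):

```python
import random

def murphy_sample(sums, counts, gamma, sigma2=1.0, max_tries=100000):
    """One Murphy Sampling step: draw theta from the posterior conditioned
    on H_< = {min_a theta_a < gamma} by rejection, then play argmin.

    Gaussian arms with known variance sigma2 and a flat prior are assumed,
    so the (unconditioned) posterior of arm a is N(S_a / N_a, sigma2 / N_a).
    """
    for _ in range(max_tries):
        thetas = [random.gauss(s / n, (sigma2 / n) ** 0.5)
                  for s, n in zip(sums, counts)]
        if min(thetas) < gamma:  # accept only draws falling in H_<
            return min(range(len(thetas)), key=lambda a: thetas[a])
    raise RuntimeError("rejection sampling: acceptance probability too low")
```

On instances of $\H_>$ the acceptance probability decays with $t$, which is why, in the experiments, stopping occurs before rejection becomes impractical.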
The rest of this section is dedicated to the analysis of MS. First, we argue that the MS sampling proportions converge to the oracle weights of Lemma~\ref{lem:lbd}.
\paragraph{Assumption}
For purpose of analysis, we need to assume that the parameter space $\Theta \ni \vmu$ (or the support of the prior) is the interior of a bounded subset of $\mathbb R^K$. This ensures that $\sup_{\mu,\theta \in \Theta} d(\mu,\theta) < \infty$ and $\sup_{\mu,\theta \in \Theta} \norm{\mu-\theta} < \infty$. This assumption is common \cite[Section~7.1]{grunwald2007mdl}, \cite[Assumption~1]{Russo16}. We also assume that the prior $\Pi$ has a density $\pi$ with bounded ratio $\sup_{\mu,\theta \in \Theta} \frac{\pi(\theta)}{\pi(\mu)} < \infty$.
\begin{theorem}\label{thm:TS.convergence}
Under the above assumption, MS ensures $\frac{\bm{N}_t}{t} \to \w^*(\vmu)$ a.s.\ for any $\vmu \in \H$.
\end{theorem}
We give a sketch of the proof below, the detailed argument can be found in Appendix~\ref{sec:pf.TS.convergence}, Theorems~\ref{thm:TS.converge.inf} and~\ref{thm:TS.converge.sup}. Given the convergence of the weights, the asymptotic optimality in terms of sample complexity follows by Lemma~\ref{lem:asym.smp.cplx}, if MS is used with an appropriate stopping rule (Box \eqref{eq:tbox} or the improved Aggregate stopping rule discussed in Section~\ref{sec:Stopping}).
\paragraph{Proof Sketch}
First, consider $\vmu \in \mathcal H_<$. In this case the conditioning in MS is asymptotically immaterial as $\Pi_n(\mathcal H_<) \to 1$, and the algorithm behaves like regular Thompson Sampling. As Thompson sampling has sublinear pseudo-regret \cite{AGCOLT12}, we must have $\ex[N_1(t)]/t \to 1$. The crux of the proof in the appendix is to show the convergence occurs almost surely.
Next, consider $\vmu \in \mathcal H_>$. Following \cite{Russo16}, we denote the sampling probabilities in round $n$ by $\psi_a(n)
= \Pi_{n-1} \delc*{a = \arg\min_j \theta_j}{\mathcal H_<}
$, and abbreviate $\Psi_a(n) = \sum_{t=1}^n \psi_a(t)$ and $\bar\psi_a(n) = \Psi_a(n)/n$.
The main intuition is provided by
\begin{proposition}[{\cite[Proposition~4]{Russo16}}]\label{prop:post.conc.rate}
For any open subset $\tilde\Theta \subseteq \Theta$, the posterior concentrates at rate
$\Pi_n(\tilde\Theta)
\doteq
\exp \del*{- n \min_{\vlambda \in \tilde\Theta} \sum_a \bar\psi_a(n) d(\mu_a, \lambda_a)}
$
a.s.\
where
$a_n \doteq b_n$ means $\frac{1}{n} \ln \frac{a_n}{b_n} \to 0$.
\end{proposition}
Let us use this to analyze $\psi_a(n)$. As we are on $\mathcal H_>$, the posterior $\Pi_n(\mathcal H_<) \to 0$ vanishes. Moreover, $\Pi_n \del*{a = \arg\min_j \theta_j, \mathcal H_<} \sim \Pi_n(\theta_a < \gamma)$ as the probability that multiple arms fall below $\gamma$ is negligible. Hence
\[
\psi_a(n)
~\sim~
\frac{\Pi_n(\mu_a < \gamma)}{\sum_j \Pi_n(\mu_j < \gamma)}
~\doteq~
\frac{\exp \del*{ - n \bar\psi_a(n) d(\mu_a,\gamma)}}{\sum_j \exp \del*{ - n \bar\psi_j(n) d(\mu_j,\gamma)}}
.
\]
This is an asymptotic recurrence relation in $\psi_a(n)$. To get a good sense for what is happening we may solve the exact analogue. Abbreviating $d_a = d(\mu_a,\gamma)$, we find
\(
\Psi_a(n)
=
\del*{
n
- \sum_j \frac{\ln d_j}{d_j}
}
\frac{ \frac{1}{d_a}}{
\sum_j \frac{1}{d_j}
} + \frac{\ln d_a}{d_a}
\)
and hence
$\psi_a(t) = \Psi_a(t) - \Psi_a(t-1) = \frac{\wfrac{1}{d_a}}{\sum_j \wfrac{1}{d_j}} = w_a^*(\vmu)$.
Proposition~\ref{prop:limrat.N.Psi} then establishes that $N_a(t)/t \sim \Psi_a(t)/t \to w^*_a(\vmu)$ as well.
In our proof in Appendix~\ref{sec:pf.TS.convergence} we technically bypass solving the above approximate recurrence, and proceed to pin down the answer by composing the appropriate one-sided bounds. Yet as we were guided by the above picture of $\w^*(\vmu)$ as the eventually stable direction of the dynamical system governing the sampling proportions, we believe it is more revealing.
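The fixed-point picture above can be checked numerically by iterating the recurrence (here with $n \bar\psi_a(n)$ replaced by the cumulative $\Psi_a(n-1)$, a simplification of ours):

```python
import math

def ms_proportions(d, n_steps=5000):
    """Iterate psi_a(n) proportional to exp(-Psi_a(n-1) * d_a) and return the
    resulting sampling proportions Psi_a(n_steps) / n_steps."""
    K = len(d)
    Psi = [1.0 / K] * K  # uniform initialization at n = 1
    for n in range(2, n_steps + 1):
        logits = [-Psi[a] * d[a] for a in range(K)]
        m = max(logits)  # stabilized softmax
        w = [math.exp(l - m) for l in logits]
        s = sum(w)
        for a in range(K):
            Psi[a] += w[a] / s
    return [p / n_steps for p in Psi]
```

With $d = (2, 0.5)$ the proportions converge to $w^* = (0.2, 0.8)$, matching the inverse-divergence weights of Lemma~\ref{lem:lbd}.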
\section{Improved Stopping Rule and Confidence Intervals} \label{sec:Stopping}
Theorem~\ref{thm:mainDev} below provides a new self-normalized deviation inequality that given a \emph{subset} of arms controls uniformly over time how the \emph{aggregated mean} of the samples obtained from those arms can deviate from the smallest (resp. largest) mean in the subset. More formally for $\cS \subseteq [K]$, we introduce
\[N_\cS(t) ~=~ \sum_{a \in \cS} N_a(t) \qquad \text{and} \qquad
\hat \mu_\cS(t) ~=~ \frac{\sum_{a \in \cS} N_a(t) \hat \mu_a(t)}{N_\cS(t)}
\]
and recall $d^+(u,v) = d(u,v) \ind_{(u \leq v)}$ and $d^-(u,v) = d(u,v) \ind_{(u \geq v)}$. We prove the following for one-parameter exponential families.
\begin{theorem} \label{thm:mainDev} Let $T : \R^+ \rightarrow \R^+$ be the function defined by
\begin{equation}\label{eq:T}
T(x) ~=~ 2h^{-1}\left(1 + \frac{h^{-1}(1+x) + \ln\zeta(2)}{2}\right)
\end{equation}
where $h(u) = u - \ln(u)$ for $u\geq 1$ and $\zeta(s) = \sum_{n=1}^\infty n^{-s}$. For every subset $\cS$ of arms and $x\geq 0.04$,
\begin{eqnarray}
\bP\left(\exists t \in \N : N_{\cS}(t) d^+\left(\hat{\mu}_\cS(t) , \min_{a \in \cS} \mu_a\right) \geq 3 \ln(1 + \ln (N_\cS(t))) + T(x)\right) & \leq & e^{-x}, \label{ineq:DevPlus}\\
\bP\left(\exists t \in \N : N_{\cS}(t) d^-\left(\hat{\mu}_\cS(t) , \max_{a \in \cS} \mu_a\right) \geq 3 \ln(1 + \ln (N_\cS(t))) + T(x)\right) & \leq & e^{-x}.\label{ineq:DevMinus}
\end{eqnarray}
\end{theorem}
The proof of this theorem can be found in Section~\ref{sec:ProofDev} and is sketched below. It generalizes in several directions the type of results obtained by \cite{Jamiesonal14LILUCB,JMLR15} for Gaussian distributions and $|\cS|=1$. Going beyond subsets of size 1 will be crucial here to obtain better confidence intervals on minimums, or stop earlier in tests. Note that the threshold function $T$ introduced in \eqref{eq:T} does not depend on the cardinality of the subset $\cS$ to which the deviation inequality is applied. Tight upper bounds on $T$ can be given using Lemma~\ref{lem:UpperBoundT} in Appendix~\ref{sec:TechnicalResults}, which support the approximation $T(x) \simeq x + 3\ln(x)$.

\subsection{An Improved Stopping Rule}
Fix a subset prior $\pi : \powerset(\{1,\dots,K\}) \rightarrow \R^+$ such that $\sum_{\cS \subseteq \{1,\dots,K\}} \pi(\cS) = 1$ and let $T$ be the threshold function defined in Theorem~\ref{thm:mainDev}. We define the stopping rule $\tau^\pi := \tau_{>} \wedge \tau_{<}^\pi$, where
\begin{eqnarray*}
\tau_{>}& =& \inf \left\{t \in \N^* : \forall a \in \{1,\dots,K\},\ N_a(t)d^-\left(\hat{\mu}_a(t),\gamma\right) \geq 3 \ln(1 + \ln (N_a(t))) + T\left(\ln(1/\delta)\right) \right\},\label{TauInf}\\
\tau_{<}^\pi& =& \inf \left\{t \in \N^* : \exists \cS : N_\cS(t)d^+\left(\hat{\mu}_\cS(t),\gamma\right) \geq 3 \ln(1 + \ln (N_\cS(t))) + T\left(\ln(1/(\delta\pi(\cS)))\right) \right\}. \label{TauSup}
\end{eqnarray*}
The associated recommendation rule selects $\H_>$ if $\tau^\pi = \tau_{>}$ and $\H_<$ if $\tau^\pi = \tau_{<}^\pi$. For the practical computation of $\tau_{<}^\pi$, the search over subsets can be reduced to nested subsets including arms sorted by increasing empirical mean and smaller than $\gamma$.
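For the Gaussian case (variance 1, so $d(u,v)=(u-v)^2/2$), the reduced search over nested subsets can be sketched as follows. The `threshold` argument is a hypothetical stand-in for the full quantity $3 \ln(1 + \ln N_\cS(t)) + T\left(\ln(1/(\delta\pi(\cS)))\right)$, passed in by the caller as a function of the subset size.

```python
def should_stop_lower(counts, means, gamma, threshold):
    """Check the tau_<^pi condition by scanning only nested subsets of arms,
    sorted by increasing empirical mean and restricted to arms with mean <= gamma.
    `threshold(size)` is a caller-supplied stand-in for the full threshold.
    Gaussian case: d^+(u, v) = (u - v)^2 / 2 for u <= v."""
    order = sorted(range(len(counts)), key=lambda a: means[a])
    n_S, sum_S = 0, 0.0
    for size, a in enumerate(order, start=1):
        if means[a] > gamma:
            break                       # adding such arms only dilutes the statistic
        n_S += counts[a]
        sum_S += counts[a] * means[a]
        mu_S = sum_S / n_S              # aggregated mean of the current prefix
        stat = n_S * (mu_S - gamma) ** 2 / 2.0 if mu_S <= gamma else 0.0
        if stat >= threshold(size):
            return True
    return False
```

On a toy instance with two arms slightly below $\gamma=0$, neither arm alone may clear the threshold while their aggregation does, which is exactly the benefit of going beyond singletons.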
\begin{lemma} \label{lem:PAC} Any algorithm that uses the stopping rule $\tau^\pi$ and selects $\hat m =\ >$ if and only if $\tau^\pi = \tau_{>}$ is $\delta$-correct.
\end{lemma}
From Lemma~\ref{lem:PAC}, proved in Appendix~\ref{proof:PAC}, the prior $\pi$
does not impact the correctness of the algorithm. However, it may significantly impact its sample complexity. First, observe that picking $\pi$ uniform over subsets of size 1, i.e.\ $\pi(\cS) = K^{-1} \ind{(|\cS|=1)}$, yields a $\delta$-correct stopping rule $\tau_{\mathrm{Box}}$ with threshold functions satisfying the assumptions of Lemma~\ref{lem:asym.smp.cplx}. However, in practice (especially for more moderate $\delta$), it may be more interesting to include in the support of $\pi$ subsets of larger sizes, for which $N_\cS(t)d^+\left(\hat{\mu}_\cS(t),\gamma\right)$ may be larger.
We advocate the use of $\pi(\cS) = {K}^{-1} {{{{K}\choose{|\cS|}}}}^{-1}$, which puts the same total weight on the subsets of each possible size.
\paragraph{Links with Generalized Likelihood Ratio Tests (GLRT).} Assume we want to test $\cH_0$ against $\cH_1$ for composite hypotheses. A GLRT test based on $t$ observations whose distribution depends on some parameter $x$ rejects $\cH_0$ if the test statistic $\max_{x \in \cH_1} \ell(X_1,\dots,X_t;x) / \max_{x \in \cH_0\cup \cH_1} \ell(X_1,\dots,X_t;x)$ has large values (where $\ell(\cdot;x)$ denotes the likelihood of the observations under the model parameterized by $x$). In our testing problem, the GLRT statistic for rejecting $\H_<$ is $\min_{a} N_a(t) d^-(\hat{\mu}_a(t),\gamma)$ hence $\tau_{>}$ is very close to a sequential GLRT test. However, the GLRT statistic for rejecting $\H_>$ is $\sum_{a = 1}^K N_a(t)d^+(\hat{\mu}_a(t), \gamma)$, which is quite different from the stopping statistic used by $\tau_{<}^\pi$. Rather than \emph{aggregating samples} from arms, the GLRT statistic is \emph{summing evidence} for exceeding the threshold. Using similar martingale techniques as for proving Theorem~\ref{thm:mainDev}, one can show that replacing $\tau_{<}^\pi$ by
\[
\tau_{<}^{\text{GLRT}} = \inf \left\{t \in \N^* : \!\!\! \sum_{a : \hat{\mu}_a(t) \leq \gamma} \left[N_a(t)d^+\left(\hat{\mu}_a(t),\gamma\right) - 3 \ln(1 + \ln (N_a(t)))\right]^+ \geq KT\left(\frac{\ln(1/\delta)}{K}\right) \right\}
\]
also yields a $\delta$-correct algorithm (see \cite{OurFuturePaper}). At first sight, $\tau_{<}^\pi$ and $\tau_{<}^{\text{GLRT}}$ are hard to compare: the stopping statistic used by the latter can be larger than that used by the former, but it is compared to a smaller threshold. In Section~\ref{sec:Expes} we will provide empirical evidence in favor of aggregating samples.
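A quick way to see the two statistics side by side: in the Gaussian case, since $d(\cdot,\gamma)$ is convex, Jensen's inequality implies that (ignoring the $\ln\ln$ correction terms) the summed GLRT evidence always dominates the aggregated statistic for arms whose empirical means lie at or below $\gamma$. The toy computation below, with made-up counts and means, illustrates this.

```python
def d_plus(u, v):
    # Gaussian (variance 1) one-sided divergence
    return (u - v) ** 2 / 2.0 if u <= v else 0.0

counts = [100, 100, 100]       # toy N_a(t)
means = [-0.2, -0.1, 0.0]      # toy empirical means, all <= gamma
gamma = 0.0

# GLRT-style statistic: sum of per-arm evidences.
glrt = sum(n * d_plus(m, gamma) for n, m in zip(counts, means))

# Aggregated statistic over the full subset.
n_S = sum(counts)
mu_S = sum(n * m for n, m in zip(counts, means)) / n_S
agg = n_S * d_plus(mu_S, gamma)
```

Here `glrt` exceeds `agg`, consistent with the remark that the GLRT statistic can be larger; the comparison between the two stopping rules is then driven by their respective thresholds.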
\subsection{A Confidence Intervals Interpretation}\label{sec:ConfidenceInterpretation}
Inequality \eqref{ineq:DevPlus} (combined with a union bound over subsets) also makes it possible to build a tight upper confidence bound on the minimum $\mu^*$. Indeed, defining
\[\textrm{U}_{\min}^\pi(t) : = \max \left\{q : \forall \cS \subseteq \{1,\dots,K\},\ N_{\cS}(t)d^+\left(\hat\mu_{\cS}(t) , q\right) - 3\ln(1+\ln(N_\cS(t))) \leq T\left(\ln \frac{1}{\delta\pi(\cS)}\right)\right\},\]
it is easy to show that $\bP\left(\forall t\in \N, \mu^* \leq \textrm{U}_{\min}^\pi(t)\right)\geq 1 - \delta$. For general choices of $\pi$, this upper confidence bound may be much smaller than the naive bound $\min_{a} \textrm{U}_a(t)$, which corresponds to choosing $\pi$ uniform over subsets of size 1. We provide an illustration supporting this claim in Figure~\ref{fig:UCBmin} in Appendix~\ref{sec:MoreExpes}. Observe that using inequality \eqref{ineq:DevMinus} in Theorem~\ref{thm:mainDev} similarly allows one to derive tighter lower confidence bounds on the maximum of several means.
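Since the left-hand side of the defining condition is nondecreasing in $q$ (for $q$ above each aggregated mean), such a bound can be computed by bisection over $q$. The sketch below does this for the Gaussian case; the `threshold` argument is again a hypothetical stand-in for $3\ln(1+\ln N_\cS(t)) + T(\ln(1/(\delta\pi(\cS))))$ as a function of subset size, and the scan is restricted to nested subsets of arms sorted by increasing empirical mean, in the spirit of the reduction used for the stopping rule.

```python
def ucb_min(counts, means, threshold, q_hi=10.0, tol=1e-6):
    """Upper confidence bound on min_a mu_a, by bisection over q.
    Gaussian case: d^+(u, v) = (u - v)^2 / 2 for u <= v."""
    order = sorted(range(len(counts)), key=lambda a: means[a])

    def violated(q):
        # True if some nested subset exceeds its threshold at level q.
        n_S, sum_S = 0, 0.0
        for size, a in enumerate(order, start=1):
            n_S += counts[a]
            sum_S += counts[a] * means[a]
            mu_S = sum_S / n_S
            if mu_S <= q and n_S * (mu_S - q) ** 2 / 2.0 > threshold(size):
                return True
        return False

    lo, hi = min(means), q_hi   # never violated at q = min of empirical means
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if violated(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For a single arm with empirical mean $0$ and $N=100$ samples, the bound solves $100\,q^2/2 = \text{threshold}$, which provides a simple sanity check.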
\subsection{Sketch of the Proof of Theorem~\ref{thm:mainDev}}
Fix $\eta \in [0,1+e[$. Introducing $X_{\eta}(t) = \left[N_\cS(t)d^+\left(\hat{\mu}_{\cS}(t),\min_{a\in \cS} \mu_a\right) - 2(1+\eta)\ln\left(1 + \ln N_\cS(t)\right)\right]$,
the cornerstone of the proof (Lemma~\ref{lem:Heart}) consists in proving that for all $\lambda \in [0 , (1+\eta)^{-1}[$, there exists a martingale $M_t^\lambda$ that ``almost'' upper bounds $e^{\lambda X_{\eta}(t)}$: there exists a function $g_\eta$ such that
\begin{equation}\bE[M_0^\lambda] = 1 \ \ \ \text{and} \ \ \ \forall t \in \N^*, M_{t}^\lambda \geq e^{\lambda X_{\eta}(t)- g_{\eta}(\lambda)}.\label{StarMain}\end{equation}
From there, the proof follows easily from a combination of the Chernoff method and Doob's maximal inequality:
\begin{eqnarray*}
\bP\left(\exists t \in \N^* : X_{\eta}(t) > u \right) & \leq & \bP\left(\exists t \in \N^* : M_t^\lambda > e^{\lambda u - g_{\eta}(\lambda)} \right) \leq \exp\left(- \left[\lambda u - g_{\eta}(\lambda)\right]\right).
\end{eqnarray*}
Inequality \eqref{ineq:DevPlus} is then obtained by optimizing over $\lambda$, carefully picking $\eta$ and inverting the bound.
The interesting part of the proof is the actual construction of a martingale satisfying \eqref{StarMain}. First, using the so-called method of mixtures \cite{DeLaPenaal09Book} and specific facts about exponential families already exploited by \cite{KLUCBJournal}, we prove that there exists a martingale $\tilde{W}_t^x$ such that for some function $f$ (see Equation~\eqref{eq:Central})
\[\left\{ X_{\eta}(t) - f(\eta) \geq x \right\} \subseteq \left\{ \tilde{W}_t^x \geq e^{\frac{x}{1+\eta}}\right\}.\]
From there it follows that, for every $\lambda$ and $z>1$, $\left\{ e^{\lambda(X_{\eta}(t) - f(\eta))} \geq z \right\} \subseteq \{ e^{-\frac{\ln(z)}{\lambda(1+\eta)}}\tilde{W}_t^{\frac{1}{\lambda}\ln(z)} \geq 1\}$ and the trick is to introduce another mixture martingale,
\[\overline{M}_t^\lambda = 1 + \int_{1}^\infty e^{-\frac{\ln(z)}{\lambda(1+\eta)}}\tilde{W}_t^{\frac{1}{\lambda}\ln(z)} dz,\]
that is proved to satisfy $\overline{M}_t^\lambda \geq e^{\lambda\left[X_{\eta}(t) - f(\eta)\right]}$. We let $M_t^\lambda = \overline{M}_t^\lambda/ \bE[\overline{M}_t^\lambda]$.
\section{Experiments} \label{sec:Expes}
We discuss the results of numerical experiments performed on Gaussian bandits with variance 1, using the threshold $\gamma=0$. Thompson and Murphy sampling are run using a flat (improper) prior on $\mathbb R$, which leads to a conjugate Gaussian posterior. The experiments demonstrate the flexibility of our MS sampling rule, which attains optimal performance on instances from both $\H_<$ and $\H_>$. Moreover, they show the advantage of using a stopping rule that aggregates samples from subsets of arms when $\bm\mu \in \H_<$. This aggregating stopping rule, which we refer to as $\tau^{\text{Agg}}$, is an instance of the $\tau^\pi$ stopping rule presented in Section~\ref{sec:Stopping} with $\pi(\cS) = {K}^{-1} {{{{K}\choose{|\cS|}}}}^{-1}$. We investigate the combined use of three sampling rules (MS, LCB and Thompson Sampling) with three stopping rules ($\tau^{\text{Agg}}$, $\tau^{\text{Box}}$ and $\tau^{\text{GLRT}}$).
We first study an instance $\bm \mu \in \H_<$ with $K=10$ arms linearly spaced between $-1$ and $1$. We run the different algorithms (excluding the TS sampling rule, which essentially coincides with MS on $\H_<$) for different values of $\delta$ and report the estimated sample complexity in Figure~\ref{fig:SampleComplexity} (left). For each sampling rule, it appears that $\bE[\tau^{\text{Agg}}] \leq \bE[\tau^{\text{Box}}] \leq \bE[\tau^{\text{GLRT}}]$. Moreover, for each stopping rule, MS outperforms LCB, with a sample complexity of order $T^*(\bm\mu)\ln(1/\delta) + C$. Then we study an instance $\bm \mu \in \H_>$ with $K=5$ arms linearly spaced between $0.5$ and $1$, with $\tau^{\text{Agg}}$ as the stopping rule (which matters little, as the algorithm mostly stops because of $\tau_{>}$ on $\H_>$). Results are reported in Figure~\ref{fig:SampleComplexity} (right), in which we see that MS performs very similarly to LCB (which is also proved optimal on $\H_>$), while vanilla TS fails dramatically. In these experiments, the empirical error was always zero, which shows that our theoretical thresholds are still quite conservative. More experimental results can be found in Appendix~\ref{sec:MoreExpes}: an illustration of the convergence properties of the MS sampling rule, as well as a larger-scale comparison of stopping rules under $\H_<$.
\begin{figure}[h]
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[height=5cm]{10Stairs_SC}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\includegraphics[height=5cm]{5StairsUp_SC}
\end{minipage}
\caption{\label{fig:SampleComplexity} $\bE[\tau_{\delta}]$ as a function of $\ln(1/\delta)$ for several algorithms on an instance $\mu \in \H_<$ (left) and $\mu \in \H_>$ (right), estimated using $N=5000$ (resp. 500) repetitions.}
\end{figure}
\vspace{-0.4cm}
\section{Discussion}
We propose new sampling and stopping rules for sequentially testing the minimum of means. As our guiding principle, we first prove sample complexity lower bounds, characterize the emerging oracle sample allocation $\w^*$, and develop the Murphy Sampling strategy to match it asymptotically. We observe in the experiments that the asymptotic regime does not necessarily kick in at moderate confidence $\delta$ (Figure~\ref{fig:Weights}, left) and that there is an important lower-order term in the practical sample complexity (Figure~\ref{fig:SampleComplexity}). It is an intriguing open problem of theoretical and practical importance to characterize and match optimal behavior at moderate confidence. We make first contributions in both directions: we prove tighter sample complexity lower bounds for symmetric algorithms (Proposition~\ref{prop:impbinf}, Theorem~\ref{th:impbinf}) and we design aggregating confidence intervals that are tighter in practice (Figure~\ref{fig:UCBmin}). The importance of this perspective arises, as highlighted in the introduction, from the \emph{hierarchical} application of maxima/minima in learning applications. A better understanding of the moderate confidence regime for learning minima will very likely translate into new insights and methods for learning about hierarchical structures, where the benefits accumulate with depth.
\section{Introduction}
Catheters and tubes, including endotracheal tubes (ETTs), umbilical arterial catheters (UACs), umbilical venous catheters (UVCs), and nasogastric tubes (NGTs), are commonly used in the management of critically ill or very low birth weight neonates~\cite{finn2017optimal}. For example, ETTs assist in ventilation of the lungs and may prevent aspiration, umbilical catheters may be used for administration of fluids or medications and for blood sampling, and NGTs may be used for nutritional support, aspiration of gastric contents, or decompression of the gastrointestinal tract in critically ill neonates~\cite{concepcion2017current}. Because catheters and tubes (all referred to as catheters in the following for simplicity) are typically placed without real-time image guidance, they are frequently malpositioned~\cite{kieran2015estimating, hoellering2014determination}, and serious complications can arise as a result~\cite{concepcion2017current}. Thus, the position of a catheter is usually assessed using X-ray imaging immediately following placement~\cite{concepcion2017current}.
Paediatric radiologists are trained to accurately accomplish the task of detecting catheters on X-ray images and assessing placement with a low diagnostic error rate~\cite{fuentealba2012diagnostic}. However, availability of expertise may be limited or delayed due to high image volumes. An automatic approach is desired to flag X-rays which may have a malpositioned catheter so that they can be immediately reviewed by a clinician or radiologist, thus promoting safer use of catheters. Since the location of a catheter impacts clinical decision making, we believe detection of catheters is a critical first step towards a fully automatic catheter placement evaluation system.
Automated catheter detection is a challenging task. Although most catheters have a radiopaque strip to facilitate detection, the strip may become less apparent depending on the projection angle. Catheters may be confused with other similar linear structures such as ECG leads, or with anatomy such as ribs. Additionally, portions of catheters can be occluded by anatomical structures, given that radiographs are a 2D projection of a 3D structure. For example, when an NGT is placed within the oesophagus, the catheter itself becomes less apparent due to the high density of the adjacent vertebrae. Finally, the number and type of catheters that could possibly appear in pediatric X-rays are unknown a priori. The catheters may be intertwined with each other, making simple line tracing methods fail. Figure~\ref{allcatheters} shows three sample pediatric X-ray images with some common catheters highlighted in different colors.
Previous methods have relied heavily on primitive low-level cues and made simplistic assumptions about catheter appearance and position. These works were typically applied to only one or two catheter types and patient positions, with limited generality. Machine learning, especially deep learning, has recently received significant attention in the medical imaging community due to its demonstrated potential to complement image interpretation and augment image representation and classification. For example, super-human performance has been achieved in organ segmentation on adult chest X-rays~\cite{dai2017scan}, and an algorithm is able to denoise low-dose computed tomography with improved overall sharpness~\cite{Yi2018}. All of these advances have relied on accumulated annotated datasets. However, in segmentation tasks, the desired pixel-level accurate annotation maps are not always available. This is partly because the annotation task requires a certain amount of medical expertise, and manual marking is inherently tedious, particularly for objects with elongated structures.
To alleviate this annotation problem in catheter detection, we propose to use X-ray images with simulated catheters, exploiting the fact that catheters are essentially tubular objects with various cross-sectional profiles. To be more specific, a synthetic 2D projection of a catheter is generated by first simulating a horizontal catheter profile and then using it as a brush tip to draw along a B-spline path. The generated catheter is then composited with an X-ray image to serve as training data. Another contribution of this work is a segmentation network that inherently takes multi-scale information into account. This network adopts a UNet-style form and contains a recurrent module that processes inputs at increasing scales\footnote{Our code is available at \href{https://github.com/xinario/catheter\_detection.git}{https://github.com/xinario/catheter\_detection.git}.}. We have empirically shown that by iterating through the scale space of the input image, higher recall is achieved compared to using a single scale. Details of the methods are discussed in Section~\ref{methodology}. Three sample detection results are shown in Figure~\ref{allcatheters}.
\section{Related Works}
There have been limited prior publications regarding automated catheter detection on X-ray images. In this section, we not only review catheter detection methods but also provide a brief overview of elongated structure detection as a broader concept.
\textbf{Catheter detection} Kao et al.~\cite{kao2015automated} proposed a system to detect ETTs on pediatric chest X-rays. It was based on the presumption that an ETT usually has the highest intensity and is continuous in the cervical region. This system is sensitive to the positioning of the neonate, and it is possible to confuse an ETT with an NGT when the assumption no longer holds. Sheng et al.~\cite{sheng2009automatic} proposed a method for joint identification of ETTs, NGTs and feeding tubes in adult chest X-rays. The detection was based on the Hough transform, assuming that tubes in small rectangular areas are straight. Their algorithm would fail if the catheter forms a loop. Keller et al.~\cite{keller2007semi} proposed a semi-automated system to detect catheters in chest X-rays, with users supplying initial points for catheter tracking. Line tracing was accomplished by template matching of catheter profiles. Mercan et al.~\cite{mercan2013approach} proposed a patch-based neural network to detect chest tubes and a curve-fitting approach to connect discontinued detected line segments. A very recent work used a fully convolutional neural network to detect the tip position of peripherally inserted central catheters (PICCs) on adult chest X-ray images~\cite{lee2017deep}. A similar approach was taken by Ambrosini et al.~\cite{ambrosini2017fully} to detect catheters in X-ray fluoroscopy, but using a UNet-style~\cite{long2015fully} network. Both methods require humans to manually annotate catheter locations for supervised training.
\begin{figure}[t]
\centering
\begin{tikzpicture} [
auto,
line/.style = { draw, thick, ->, shorten >=2pt,shorten <=2pt },
every node/.append style={font=\tiny}
]
\matrix [column sep=0.5mm, row sep=2mm,ampersand replacement=\&] {
\node (p11)[inner sep=0] at (0,0){\includegraphics[width=0.16\textwidth, height=0.2\textwidth]{overlay_12_DX_14_0017743.png}}; \&
\node (p12)[inner sep=0] at (0,0){\includegraphics[width=0.16\textwidth, height=0.2\textwidth]{post_12-DX-14-0017743.png}}; \&
\node (p13)[inner sep=0] at (0,0){\includegraphics[width=0.16\textwidth, height=0.2\textwidth]{overlay_24-DX-16-0449187.png}}; \&
\node (p14)[inner sep=0] at (0,0){\includegraphics[width=0.16\textwidth, height=0.2\textwidth]{post_24-DX-16-0449187.png}}; \&
\node (p15)[inner sep=0] at (0,0){\includegraphics[width=0.16\textwidth, height=0.2\textwidth]{overlay_27-DX-14-0229663.png}}; \&
\node (p16)[inner sep=0] at (0,0){\includegraphics[width=0.16\textwidth, height=0.2\textwidth]{post_27-DX-14-0229663.png}}; \&\\
};
\begin{scope} [every path/.style=line]
\node[anchor=north] at (p11.south) {(a1)};
\node[anchor=north] at (p12.south) {(b1)};
\node[anchor=north] at (p13.south) {(a2)};
\node[anchor=north] at (p14.south) {(b2)};
\node[anchor=north] at (p15.south) {(a3)};
\node[anchor=north] at (p16.south) {(b3)};
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}
\begin{customlegend}[legend columns=4,legend style={draw=none, column sep=2ex,mark size=4pt,font=\tiny},legend entries={Umbilical venous catheter (UVC), Umbilical arterial catheter (UAC), Nasogastric tube (NGT), Endotracheal tube (ETT), ECG leads (ECG), Other tubes (OTT), Background (BK)},legend cell align={left}]
\csname pgfplots@addlegendimage\endcsname{only marks,mark=square*, mark options={UVC}}
\csname pgfplots@addlegendimage\endcsname{only marks,mark=square*, mark options={UAC}}
\csname pgfplots@addlegendimage\endcsname{only marks,mark=square*, mark options={NGT}}
\csname pgfplots@addlegendimage\endcsname{only marks,mark=square*, mark options={ETT}}
\csname pgfplots@addlegendimage\endcsname{only marks,mark=square*, mark options={ECG}}
\csname pgfplots@addlegendimage\endcsname{only marks,mark=square*, mark options={OTT}}
\csname pgfplots@addlegendimage\endcsname{only marks,mark=square*, mark options={BK}}
\end{customlegend}
\end{tikzpicture}
\caption{Detection of catheters is challenging on pediatric X-ray images. The number of catheters is not known prior to interpretation and they can be partially occluded by the body. ECG electrode leads and other unidentified catheters also serve as sources of confusion. (a1), (a2) and (a3) show the original pediatric X-rays, with the potential catheters, wires and lines (including ECG wires and other unidentified catheters) highlighted in different colors. (b1), (b2) and (b3) show the detected catheters by our proposed method.}
\label{allcatheters}
\end{figure}
\textbf{Elongated structure detection}
One of the most common elongated structures in medical imaging is the blood vessel. Its detection has been researched in many imaging modalities, such as retinal fundus imaging~\cite{liskowski2016segmenting} and angiography~\cite{frangi1998multiscale}. The methods in the literature have evolved from hard-coded rule-based approaches into machine learning based ones. In the early days, researchers tried to devise metrics to measure ``vesselness'' directly from feature sources such as the Hessian matrix~\cite{frangi1998multiscale} and the co-occurrence matrix~\cite{villalobos2010fast}. Later on, rather than relying on a single feature, researchers started to aggregate features from multiple sources, such as ridge-based features~\cite{staal2004ridge} and wavelets~\cite{soares2006retinal}, employing a supervised learning method on top to delineate the decision boundary between vessel and non-vessel. The most recent progress has been achieved by supervised deep learning, where features are learnt directly from images without the intervention of domain expertise~\cite{liskowski2016segmenting}. Since blood vessels are of various diameters by nature, multi-scale approaches have also been explored in the literature~\cite{yin2015vessel}.
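To make the Hessian-based ``vesselness'' idea concrete, here is a minimal single-scale NumPy sketch in the spirit of Frangi et al.~\cite{frangi1998multiscale}. It is an illustration only: the original method smooths with Gaussians over multiple scales, and the parameter values `beta` and `c` here are arbitrary choices, not those of any cited work.

```python
import numpy as np

def vesselness(img, beta=0.5, c=0.5):
    """Frangi-style vesselness (single scale, no Gaussian smoothing).
    Assumes bright tubular structures on a dark background (lambda2 < 0)."""
    gy, gx = np.gradient(img.astype(float))   # first derivatives (rows, cols)
    hyy, hyx = np.gradient(gy)                # second derivatives
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the 2x2 symmetric Hessian at every pixel.
    mean = (hxx + hyy) / 2.0
    tmp = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    l1, l2 = mean + tmp, mean - tmp
    # Order so that |l1| <= |l2|.
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.where(l2 != 0, l1 / np.where(l2 == 0, 1.0, l2), 0.0)  # blobness ratio
    s2 = l1 ** 2 + l2 ** 2                                        # structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)   # keep only bright-on-dark responses
```

On a synthetic image containing a single bright line, the response is high along the line and zero in flat background regions.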
\section{Methodology}\label{methodology}
\subsection{Datasets}\label{datasetdescription}
The training dataset comes from the Open-i dataset~\cite{demner2015preparing} from National Institutes of Health (NIH) which contains 7,471 adult chest X-rays. We randomly selected 2515 frontal view images and generated synthetic catheters on them.
The test dataset was collected locally and contains only frontal chest-abdominal X-rays from patients less than 4 weeks old. This is the most common radiograph obtained to confirm placement of catheters such as UACs and UVCs in neonates. Currently, the test set has 35 fully labeled images with different catheter types; sample images were shown in Figure~\ref{allcatheters}. All the annotated catheters (lines excluding ECG leads) are treated as a single class in the detection.
\subsection{Preprocessing}
The X-ray images are of varying contrast due to different acquisition protocols. Rather than making the network learn a contrast-invariant feature, we normalized the contrast of the input X-rays before training using contrast limited adaptive histogram equalization~\cite{zuiderveld1994contrast}, as was done in other works.
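The core idea of contrast limited equalization can be conveyed with a simplified, global NumPy sketch: the histogram is clipped and the excess mass redistributed before building the remapping CDF. This is only the clipping idea; real CLAHE~\cite{zuiderveld1994contrast} applies it per tile with bilinear interpolation between tiles, and the bin count and clip limit below are illustrative values.

```python
import numpy as np

def clip_limited_equalize(img, n_bins=256, clip_limit=0.01):
    """Global, simplified version of contrast-limited histogram equalization.
    Clips the normalized histogram at `clip_limit`, redistributes the excess
    uniformly, then remaps intensities through the resulting CDF to [0, 1]."""
    flat = img.ravel().astype(float)
    hist, edges = np.histogram(flat, bins=n_bins, range=(flat.min(), flat.max()))
    hist = hist.astype(float) / flat.size
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / n_bins  # redistribute excess
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]
    idx = np.clip(np.digitize(flat, edges[1:-1]), 0, n_bins - 1)
    return cdf[idx].reshape(img.shape)
```

The mapping is monotone in the input intensities, so image structure is preserved while the dynamic range is normalized.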
\subsection{Synthetic catheter generation}
Catheters are essentially tubular objects, a portion of which is made of a radiopaque material with higher attenuation, designed for ease of detection. Figure~\ref{realtube} (a) shows a simplified cross-section profile. This profile works for both NGTs and ETTs, as the only difference lies in the catheter width. Using a parallel-beam geometry, the projected sinogram is obtained and shown in Figure~\ref{realtube} (b); (c) and (d) show the projection profiles sampled at $\ang{0},\ang{30}, \ang{60}, \ang{90}$ and the synthetic catheters drawn with the corresponding profiles. Note that the profile used for drawing has to be resampled to accommodate the input image size. Five parameters parameterize the simulated catheter: the inner and outer catheter widths $d_1$ and $d_2$, the attenuation coefficients $c_1$ and $c_2$ of the catheter and the radiopaque material, and the thickness $t$ of the strip. A similar approach is used for UVCs and UACs, but with a dual-edge profile. The tracing of the catheter was simulated using a B-spline with control points randomly generated on the image. De Boor's algorithm~\cite{de1978practical} was employed for the generation, and the generated line was then rasterized with Xiaolin Wu's antialiased line drawing algorithm~\cite{wu1991efficient}. Implementation details can be found in Section~\ref{id}.
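The generation pipeline can be sketched as follows. For brevity, this sketch evaluates a uniform cubic B-spline in closed form instead of via De Boor's recursion, and stamps the profile with nearest-pixel rounding instead of Wu's antialiased rasterization; the control points, profile and compositing weight are illustrative values (the weight range 0.15 to 0.35 matches the compositing step described below for the synthetic dataset).

```python
import numpy as np

def bspline_path(ctrl, samples_per_seg=50):
    """Sample a uniform cubic B-spline through the given control points."""
    ctrl = np.asarray(ctrl, float)
    pts = []
    for i in range(len(ctrl) - 3):
        seg = np.stack(ctrl[i:i + 4])
        for t in np.linspace(0.0, 1.0, samples_per_seg, endpoint=False):
            b = np.array([(1 - t) ** 3,
                          3 * t ** 3 - 6 * t ** 2 + 4,
                          -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                          t ** 3]) / 6.0
            pts.append(b @ seg)
    return np.array(pts)

def draw_catheter(img, ctrl, profile, alpha=0.25):
    """Stamp a 1-D cross-section `profile` along the spline path, then
    composite the resulting layer onto `img` with weight `alpha`.
    Returns the composite and the layer (usable as a segmentation label)."""
    layer = np.zeros_like(img, dtype=float)
    half = len(profile) // 2
    for y, x in bspline_path(ctrl):
        r, c = int(round(y)), int(round(x))
        for k, p in enumerate(profile):
            cc = c + k - half
            if 0 <= r < layer.shape[0] and 0 <= cc < layer.shape[1]:
                layer[r, cc] = max(layer[r, cc], p)  # keep brightest stamp
    return img + alpha * layer, layer
```

With a background X-ray in place of the zero image, the returned layer provides the pixel-level label for free, which is precisely what makes the synthetic approach attractive.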
\subsection{Text mask generation}
Our initial experiments showed that training with only synthetic lines would cause confusion with radiopaque markers (letters), which may occasionally appear on radiographs and also exhibit line-like structures. Therefore, we explicitly created another class for text so that its misclassification can be penalized independently. For simplicity, we cropped common text from the pediatric X-rays and randomly scaled and merged it with the adult chest X-rays.
The generated catheter and text are then added to the adult chest X-ray with a weight sampled in the range 0.15 to 0.35. Figure~\ref{sampledataset} shows two samples from our synthetic dataset.
\tikzset{%
Cote arrow/.style={%
<->,
>=stealth,
very thin
}
}
\begin{figure}[t]
\centering
\resizebox{0.8\textwidth}{!}{
\begin{tikzpicture}
\matrix [column sep=2mm, row sep=2mm,ampersand replacement=\&] {
\node (p22)[inner sep=0] at (0,0){
\resizebox{0.24\textwidth}{0.24\textwidth}{
\begin{tikzpicture}
\begin{axis}[enlargelimits=false, axis on top, axis equal image,y dir=reverse,
xtick=\empty,
ytick=\empty,
scale only axis
]
\addplot graphics [xmin=0,xmax=15.9,ymin=0,ymax=15.9] {./ng_tube_sino_sim1.png};
\node[inner sep=0, outer sep =0] at (axis cs:0,7.9) (nodeA1) {};
\node[inner sep=0, outer sep =0] at (axis cs:15.9,7.9) (nodeA2) {};
\node[inner sep=0, outer sep =0] at (axis cs:4.2,8) (nodeB1) {};
\node[inner sep=0, outer sep =0] at (axis cs:11.9,8) (nodeB2) {};
\node[inner sep=0, outer sep =0] at (axis cs:4.2,6.2) (nodeC1) {};
\node[inner sep=0, outer sep =0] at (axis cs:4.2,9.8) (nodeC2) {};
\draw[Set2-A,Cote arrow]($(nodeA1)+(0,-40)$) -- ($(nodeA2)+(0,-40)$) node[midway,anchor=south,inner sep=2pt] (TextNode1) {$d_1$};
\draw[Set2-A](nodeA1) -- ($(nodeA1)+(0,-42)$); \draw[Set2-A](nodeA2) -- ($(nodeA2)+(0,-42)$);
\draw[Set2-A,Cote arrow]($(nodeB1)+(0,-30)$) -- ($(nodeB2)+(0,-30)$) node[midway,anchor=south,inner sep=2pt] (TextNode1) {$d_2$};
\draw[Set2-A](nodeB1) -- ($(nodeB1)+(0,-32)$); \draw[Set2-A](nodeB2) -- ($(nodeB2)+(0,-32)$);
\draw[Set2-A,Cote arrow]($(nodeC1)+(20,0)$) -- ($(nodeC2)+(20,0)$) node[midway,anchor=south,inner sep=2pt,rotate=270] (TextNode1) {$t$};
\draw[Set2-A](nodeC1) -- ($(nodeC1)+(22,0)$); \draw[Set2-A](nodeC2) -- ($(nodeC2)+(22,0)$);
\draw[Set2-A](axis cs:8,8) circle (1.8cm);
\draw[Set2-A](axis cs:8,8) circle (3.65cm);
\end{axis}
\end{tikzpicture}
}
}; \&
\node (p23)[inner sep=0] at (0,0){
\resizebox{0.24\textwidth}{0.24\textwidth}{
\begin{tikzpicture}
\begin{axis}[enlargelimits=false, axis on top, axis equal image,y dir=reverse,
xtick=\empty,
ytick=\empty,
scale only axis
]
\addplot graphics [xmin=0,xmax=18,ymin=0,ymax=28.7] {./ng_tube_sino_sim.jpg};
\node[inner sep=0, outer sep =0] at (axis cs:0.1,4) (nodeA1) {};
\node[inner sep=0, outer sep =0] at (axis cs:0.1,26) (nodeA2) {};
\node[inner sep=0, outer sep =0] at (axis cs:3,4) (nodeB1) {};
\node[inner sep=0, outer sep =0] at (axis cs:3,26) (nodeB2) {};
\node[inner sep=0, outer sep =0] at (axis cs:6,4) (nodeC1) {};
\node[inner sep=0, outer sep =0] at (axis cs:6,26) (nodeC2) {};
\node[inner sep=0, outer sep =0] at (axis cs:9,4) (nodeD1) {};
\node[inner sep=0, outer sep =0] at (axis cs:9,26) (nodeD2) {};
\draw[Set2-A](nodeA1) -- (nodeA2) node[anchor=north west] (TextNode1) {$\ang{0}$};
\draw[Set2-B](nodeB1) -- (nodeB2) node[anchor=north west] (TextNode1) {$\ang{30}$};
\draw[Set2-C](nodeC1) -- (nodeC2) node[anchor=north west] (TextNode1) {$\ang{60}$};
\draw[Set2-D](nodeD1) -- (nodeD2) node[anchor=north west] (TextNode1) {$\ang{90}$};
\end{axis}
\end{tikzpicture}
}
}; \&
\node (p24)[inner sep=0] at (0,0){
\resizebox{0.24\textwidth}{0.24\textwidth}{
\begin{tikzpicture}
\begin{axis}[xmin=0,xmax=229,ymin=0, ymax=1,ticks=none,
tick pos=left,tickwidth=1mm, legend pos= north east, legend entries={},axisStyle]
\addplot[ Set2-A,thick] plot coordinates {
(1.00, 0.00)(3.00, 0.00)(5.00, 0.00)(7.00, 0.00)(9.00, 0.00)(11.00, 0.00)(13.00, 0.00)(15.00, 0.00)(17.00, 0.00)(19.00, 0.00)(21.00, 0.00)(23.00, 0.00)(25.00, 0.00)(27.00, 0.00)(29.00, 0.00)(31.00, 0.00)(33.00, 0.00)(35.00, 0.05)(37.00, 0.56)(39.00, 0.62)(41.00, 0.64)(43.00, 0.66)(45.00, 0.67)(47.00, 0.68)(49.00, 0.69)(51.00, 0.70)(53.00, 0.71)(55.00, 0.72)(57.00, 0.73)(59.00, 0.73)(61.00, 0.74)(63.00, 0.74)(65.00, 0.75)(67.00, 0.75)(69.00, 0.76)(71.00, 0.76)(73.00, 0.77)(75.00, 0.69)(77.00, 0.19)(79.00, 0.18)(81.00, 0.17)(83.00, 0.16)(85.00, 0.16)(87.00, 0.15)(89.00, 0.15)(91.00, 0.14)(93.00, 0.14)(95.00, 0.14)(97.00, 0.14)(99.00, 0.14)(101.00, 0.13)(103.00, 0.13)(105.00, 0.13)(107.00, 0.13)(109.00, 0.13)(111.00, 0.13)(113.00, 0.13)(115.00, 0.13)(117.00, 0.13)(119.00, 0.13)(121.00, 0.13)(123.00, 0.13)(125.00, 0.13)(127.00, 0.13)(129.00, 0.13)(131.00, 0.14)(133.00, 0.14)(135.00, 0.14)(137.00, 0.14)(139.00, 0.14)(141.00, 0.15)(143.00, 0.15)(145.00, 0.16)(147.00, 0.16)(149.00, 0.17)(151.00, 0.18)(153.00, 0.19)(155.00, 0.22)(157.00, 0.22)(159.00, 0.22)(161.00, 0.21)(163.00, 0.21)(165.00, 0.20)(167.00, 0.20)(169.00, 0.19)(171.00, 0.19)(173.00, 0.18)(175.00, 0.17)(177.00, 0.17)(179.00, 0.16)(181.00, 0.15)(183.00, 0.14)(185.00, 0.13)(187.00, 0.11)(189.00, 0.10)(191.00, 0.08)(193.00, 0.06)(195.00, 0.01)(197.00, 0.00)(199.00, 0.00)(201.00, 0.00)(203.00, 0.00)(205.00, 0.00)(207.00, 0.00)(209.00, 0.00)(211.00, 0.00)(213.00, 0.00)(215.00, 0.00)(217.00, 0.00)(219.00, 0.00)(221.00, 0.00)(223.00, 0.00)(225.00, 0.00)(227.00, 0.00)(229.00, 0.00)
};
\addplot[ Set2-B,thick] plot coordinates {
(1.00, 0.00)(3.00, 0.00)(5.00, 0.00)(7.00, 0.00)(9.00, 0.00)(11.00, 0.00)(13.00, 0.00)(15.00, 0.00)(17.00, 0.00)(19.00, 0.00)(21.00, 0.00)(23.00, 0.00)(25.00, 0.00)(27.00, 0.00)(29.00, 0.00)(31.00, 0.00)(33.00, 0.00)(35.00, 0.01)(37.00, 0.06)(39.00, 0.14)(41.00, 0.26)(43.00, 0.34)(45.00, 0.42)(47.00, 0.50)(49.00, 0.58)(51.00, 0.66)(53.00, 0.71)(55.00, 0.78)(57.00, 0.80)(59.00, 0.81)(61.00, 0.81)(63.00, 0.82)(65.00, 0.83)(67.00, 0.83)(69.00, 0.84)(71.00, 0.84)(73.00, 0.79)(75.00, 0.72)(77.00, 0.62)(79.00, 0.54)(81.00, 0.46)(83.00, 0.38)(85.00, 0.31)(87.00, 0.24)(89.00, 0.16)(91.00, 0.15)(93.00, 0.14)(95.00, 0.14)(97.00, 0.14)(99.00, 0.14)(101.00, 0.13)(103.00, 0.13)(105.00, 0.13)(107.00, 0.13)(109.00, 0.13)(111.00, 0.13)(113.00, 0.13)(115.00, 0.13)(117.00, 0.13)(119.00, 0.13)(121.00, 0.13)(123.00, 0.13)(125.00, 0.13)(127.00, 0.13)(129.00, 0.13)(131.00, 0.14)(133.00, 0.14)(135.00, 0.14)(137.00, 0.14)(139.00, 0.15)(141.00, 0.15)(143.00, 0.15)(145.00, 0.16)(147.00, 0.16)(149.00, 0.17)(151.00, 0.18)(153.00, 0.19)(155.00, 0.22)(157.00, 0.22)(159.00, 0.22)(161.00, 0.21)(163.00, 0.21)(165.00, 0.20)(167.00, 0.20)(169.00, 0.19)(171.00, 0.19)(173.00, 0.18)(175.00, 0.17)(177.00, 0.16)(179.00, 0.16)(181.00, 0.15)(183.00, 0.14)(185.00, 0.13)(187.00, 0.11)(189.00, 0.10)(191.00, 0.08)(193.00, 0.06)(195.00, 0.01)(197.00, 0.00)(199.00, 0.00)(201.00, 0.00)(203.00, 0.00)(205.00, 0.00)(207.00, 0.00)(209.00, 0.00)(211.00, 0.00)(213.00, 0.00)(215.00, 0.00)(217.00, 0.00)(219.00, 0.00)(221.00, 0.00)(223.00, 0.00)(225.00, 0.00)(227.00, 0.00)(229.00, 0.00)
};
\addplot[ Set2-C,thick] plot coordinates {
(1.00, 0.00)(3.00, 0.00)(5.00, 0.00)(7.00, 0.00)(9.00, 0.00)(11.00, 0.00)(13.00, 0.00)(15.00, 0.00)(17.00, 0.00)(19.00, 0.00)(21.00, 0.00)(23.00, 0.00)(25.00, 0.00)(27.00, 0.00)(29.00, 0.00)(31.00, 0.00)(33.00, 0.00)(35.00, 0.01)(37.00, 0.06)(39.00, 0.08)(41.00, 0.10)(43.00, 0.11)(45.00, 0.13)(47.00, 0.14)(49.00, 0.15)(51.00, 0.16)(53.00, 0.17)(55.00, 0.17)(57.00, 0.18)(59.00, 0.19)(61.00, 0.27)(63.00, 0.34)(65.00, 0.43)(67.00, 0.50)(69.00, 0.57)(71.00, 0.64)(73.00, 0.71)(75.00, 0.78)(77.00, 0.82)(79.00, 0.86)(81.00, 0.86)(83.00, 0.85)(85.00, 0.83)(87.00, 0.82)(89.00, 0.82)(91.00, 0.79)(93.00, 0.73)(95.00, 0.65)(97.00, 0.59)(99.00, 0.52)(101.00, 0.45)(103.00, 0.38)(105.00, 0.32)(107.00, 0.25)(109.00, 0.18)(111.00, 0.13)(113.00, 0.13)(115.00, 0.13)(117.00, 0.13)(119.00, 0.13)(121.00, 0.13)(123.00, 0.13)(125.00, 0.13)(127.00, 0.13)(129.00, 0.13)(131.00, 0.14)(133.00, 0.14)(135.00, 0.14)(137.00, 0.14)(139.00, 0.14)(141.00, 0.15)(143.00, 0.15)(145.00, 0.16)(147.00, 0.16)(149.00, 0.17)(151.00, 0.18)(153.00, 0.19)(155.00, 0.22)(157.00, 0.22)(159.00, 0.22)(161.00, 0.21)(163.00, 0.21)(165.00, 0.20)(167.00, 0.20)(169.00, 0.19)(171.00, 0.19)(173.00, 0.18)(175.00, 0.17)(177.00, 0.17)(179.00, 0.16)(181.00, 0.15)(183.00, 0.14)(185.00, 0.13)(187.00, 0.11)(189.00, 0.10)(191.00, 0.08)(193.00, 0.06)(195.00, 0.01)(197.00, 0.00)(199.00, 0.00)(201.00, 0.00)(203.00, 0.00)(205.00, 0.00)(207.00, 0.00)(209.00, 0.00)(211.00, 0.00)(213.00, 0.00)(215.00, 0.00)(217.00, 0.00)(219.00, 0.00)(221.00, 0.00)(223.00, 0.00)(225.00, 0.00)(227.00, 0.00)(229.00, 0.00)
};
\addplot[ Set2-D,thick] plot coordinates {
(1.00, 0.00)(3.00, 0.00)(5.00, 0.00)(7.00, 0.00)(9.00, 0.00)(11.00, 0.00)(13.00, 0.00)(15.00, 0.00)(17.00, 0.00)(19.00, 0.00)(21.00, 0.00)(23.00, 0.00)(25.00, 0.00)(27.00, 0.00)(29.00, 0.00)(31.00, 0.00)(33.00, 0.00)(35.00, 0.00)(37.00, 0.06)(39.00, 0.08)(41.00, 0.10)(43.00, 0.11)(45.00, 0.13)(47.00, 0.14)(49.00, 0.15)(51.00, 0.16)(53.00, 0.17)(55.00, 0.17)(57.00, 0.18)(59.00, 0.19)(61.00, 0.19)(63.00, 0.20)(65.00, 0.20)(67.00, 0.21)(69.00, 0.21)(71.00, 0.22)(73.00, 0.22)(75.00, 0.22)(77.00, 0.19)(79.00, 0.18)(81.00, 0.17)(83.00, 0.16)(85.00, 0.16)(87.00, 0.15)(89.00, 0.15)(91.00, 0.14)(93.00, 0.14)(95.00, 0.22)(97.00, 0.71)(99.00, 0.71)(101.00, 0.71)(103.00, 0.72)(105.00, 0.72)(107.00, 0.72)(109.00, 0.72)(111.00, 0.72)(113.00, 0.72)(115.00, 0.71)(117.00, 0.72)(119.00, 0.72)(121.00, 0.72)(123.00, 0.72)(125.00, 0.72)(127.00, 0.71)(129.00, 0.71)(131.00, 0.70)(133.00, 0.21)(135.00, 0.14)(137.00, 0.14)(139.00, 0.14)(141.00, 0.15)(143.00, 0.15)(145.00, 0.16)(147.00, 0.16)(149.00, 0.17)(151.00, 0.18)(153.00, 0.19)(155.00, 0.22)(157.00, 0.22)(159.00, 0.22)(161.00, 0.21)(163.00, 0.21)(165.00, 0.20)(167.00, 0.20)(169.00, 0.19)(171.00, 0.19)(173.00, 0.18)(175.00, 0.17)(177.00, 0.17)(179.00, 0.16)(181.00, 0.15)(183.00, 0.14)(185.00, 0.13)(187.00, 0.11)(189.00, 0.10)(191.00, 0.08)(193.00, 0.06)(195.00, 0.00)(197.00, 0.00)(199.00, 0.00)(201.00, 0.00)(203.00, 0.00)(205.00, 0.00)(207.00, 0.00)(209.00, 0.00)(211.00, 0.00)(213.00, 0.00)(215.00, 0.00)(217.00, 0.00)(219.00, 0.00)(221.00, 0.00)(223.00, 0.00)(225.00, 0.00)(227.00, 0.00)(229.00, 0.00)
};
\end{axis}
\end{tikzpicture}}
};\&
\node (p25)[inner sep=0] at (0,0){\includegraphics[width=0.24\textwidth, height=0.24\textwidth]{ng_sim.jpg}}; \& \\
};
\begin{scope} [every node/.append style={font=\large}]
\node[anchor=north] at (p22.south) {(a)};
\node[anchor=north] at (p23.south) {(b)};
\node[anchor=north] at (p24.south) {(c)};
\node[anchor=north] at (p25.south) {(d)};
\end{scope}
\end{tikzpicture}
}
\caption{Simulation of catheters in 2D. (a) Simulated cross section profile. (b) Projection profile from $\ang{0}$ to $\ang{180}$. (c) Projection profile sampled at $\ang{0}, \ang{30}, \ang{60}, \ang{90}$. (d) Simulated catheter trace in 2D with the corresponding profile in (c).}
\label{realtube}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{0.8\textwidth}{!}{
\begin{tikzpicture}
\matrix [column sep=2mm, row sep=2mm,ampersand replacement=\&] {
\node (p11)[inner sep=0] at (0,0){\includegraphics[width=0.24\textwidth, height=0.24\textwidth]{15_fake_B.jpg}}; \&
\node (p12)[inner sep=0] at (0,0){\includegraphics[width=0.24\textwidth, height=0.24\textwidth]{15_fake_B.png}}; \&
\node (p13)[inner sep=0] at (0,0){\includegraphics[width=0.24\textwidth, height=0.24\textwidth]{17_fake_B.jpg}}; \&
\node (p14)[inner sep=0] at (0,0){\includegraphics[width=0.24\textwidth, height=0.24\textwidth]{17_fake_B.png}}; \&\\
};
\begin{scope}[every node/.append style={font=\small}]
\node[anchor=north] at (p11.south) {(a1)};
\node[anchor=north] at (p12.south) {(b1)};
\node[anchor=north] at (p13.south) {(a2)};
\node[anchor=north] at (p14.south) {(b2)};
\end{scope}
\end{tikzpicture}
}
\caption{Exemplar training image pairs for the proposed catheter detection network. (a1) and (a2) Adult chest X-rays with synthetic catheters overlaid on the image. (b1) and (b2) The annotation masks used for supervised training.}
\label{sampledataset}
\end{figure}
\tikzstyle{loosely dashdotted}=[dash pattern=on 11pt off 16pt on \the\pgflinewidth off 18pt]
\def\featuresep{5cm}
\def\featuresepspecial{7cm}
\def\featureseptwo{-10pt}
\def\featureseptwospecial{3cm}
\def\featuresepthree{1.75cm}
\def\featuresepthreespecial{2cm}
\def6cm{6cm}
\begin{figure}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{scope}[yshift=-4cm, every label/.append style={text=black, font=\Huge}]
\node(in)[inner sep=0,yshift=0.4cm,xshift=0.4cm,]{\tikz[scale=0.3]\parapp{6}{0}{6}{blue!30}; };
\node (input)[inner sep=0,label=left:scale 1] {\includegraphics[width=2cm]{11.jpg}};
\node[right= \featuresep of input.center,label=above:](f1){\tikz[scale=0.3]\parapp{6}{2}{6}{blue!10}; };
\node[right=\featuresep of f1.center,label=above:](f2){\tikz[scale=0.3]\parapp{3}{4}{3}{blue!10}; };
\node[right=\featuresep of f2.center,label=above:](f3){\tikz[scale=0.3]\parapp{1.5}{8}{1.5}{blue!10}; };
\node[above right=\featureseptwo of f3,xshift=-5pt,yshift=-5pt](f3a){\tikz[scale=0.3]\parappdash{1.5}{8}{1.5}{blue!30}; };
\node[right=\featuresepspecial of f3.center,label=above:](f4){\tikz[scale=0.3]\parapp{1.5}{8}{1.5}{blue!10}; };
\node[right=\featuresep of f4.center,yshift=0.6cm,xshift=0.6cm,label=above:](f55){\tikz[scale=0.3]\parappdash{3}{4}{3}{blue!10}; };
\node[right=\featuresep of f4.center](f5){\tikz[scale=0.3]\parapp{3}{4}{3}{blue!10}; };
\node[right=\featuresep of f5.center,yshift=0.4cm,xshift=0.4cm,label=above:](f66){\tikz[scale=0.3]\parappdash{6}{2}{6}{blue!10}; };
\node[right=\featuresep of f5.center](f6){\tikz[scale=0.3]\parapp{6}{2}{6}{blue!10}; };
\node[right=\featuresep of f6.center,label=above:$\hat{I}_1$](f7){\tikz[scale=0.3]\parapp{6}{0}{6}{blue!10}; };
\node(in2)[below= 6cm of in.center, inner sep=0]{\tikz[scale=0.45]\parapp{6}{0}{6}{blue!30}; };
\node (input2)[below= 6cm of input.center, inner sep=0,label=left:scale 2] {\includegraphics[width=3cm]{11.jpg}};
\node[below= 6cm of f1.center](f12){\tikz[scale=0.45]\parapp{6}{2}{6}{blue!10}; };
\node[below= 6cm of f2.center](f22){\tikz[scale=0.45]\parapp{3}{4}{3}{blue!10}; };
\node[below= 6cm of f3a.center, yshift=.5cm,xshift=.5cm](f32a){\tikz[scale=0.45]\parappdash{1.5}{8}{1.5}{blue!30}; };
\node[below= 6cm of f3.center](f32){\tikz[scale=0.45]\parapp{1.5}{8}{1.5}{blue!10}; };
\node[below= 6cm of f4.center](f42){\tikz[scale=0.45]\parapp{1.5}{8}{1.5}{blue!10}; };
\node[below= 6cm of f55.center,yshift=0.4cm,xshift=0.4cm](f552){\tikz[scale=0.45]\parappdash{3}{4}{3}{blue!10}; };
\node[below= 6cm of f5.center](f52){\tikz[scale=0.45]\parapp{3}{4}{3}{blue!10}; };
\node[below= 6cm of f66.center,yshift=0.2cm,xshift=0.2cm](f662){\tikz[scale=0.45]\parappdash{6}{2}{6}{blue!10}; };
\node[below= 6cm of f6.center](f62){\tikz[scale=0.45]\parapp{6}{2}{6}{blue!10}; };
\node[below= 6cm of f7.center,label=above:$\hat{I}_2$](f72){\tikz[scale=0.45]\parapp{6}{0}{6}{blue!10}; };
\node(in3)[below= 1.3*6cm of in2.center, inner sep=0]{\tikz[scale=0.6]\parapp{6}{0}{6}{blue!30}; };
\node (input3)[below= 1.3*6cm of input2.center, inner sep=0,label=left:scale 3] {\includegraphics[width=4cm]{11.jpg}};
\node[below= 1.3*6cm of f12.center](f13){\tikz[scale=0.6]\parapp{6}{2}{6}{blue!10}; };
\node[below= 1.3*6cm of f22.center](f23){\tikz[scale=0.6]\parapp{3}{4}{3}{blue!10}; };
\node[below= 1.3*6cm of f32a.center, yshift=.25cm,xshift=.25cm](f33a){\tikz[scale=0.6]\parapp{1.5}{8}{1.5}{blue!30}; };
\node[below= 1.3*6cm of f32.center](f33){\tikz[scale=0.6]\parapp{1.5}{8}{1.5}{blue!10}; };
\node[below= 1.3*6cm of f42.center](f43){\tikz[scale=0.6]\parapp{1.5}{8}{1.5}{blue!10}; };
\node[below= 1.3*6cm of f552.center,yshift=0.2cm,xshift=0.2cm](f553){\tikz[scale=0.6]\parappdash{3}{4}{3}{blue!10}; };
\node[below= 1.3*6cm of f52.center](f53){\tikz[scale=0.6]\parapp{3}{4}{3}{blue!10}; };
\node[below= 1.3*6cm of f662.center,yshift=0.1cm,xshift=0.1cm](f663){\tikz[scale=0.6]\parappdash{6}{2}{6}{blue!10}; };
\node[below= 1.3*6cm of f62.center](f63){\tikz[scale=0.6]\parapp{6}{2}{6}{blue!10}; };
\node[below= 1.3*6cm of f72.center,label=above:$\hat{I}_3$](f73){\tikz[scale=0.6]\parapp{6}{0}{6}{blue!10}; };
\path[](input) -- coordinate[midway,yshift=5cm](p0) (f1) ;
\path[](f1) -- coordinate[midway,yshift=5cm](p1) (f2) ;
\path[](f2) -- coordinate[midway,yshift=5cm](p2) (f3) ;
\path[](f3) -- coordinate[midway,yshift=5cm](p3) (f4) ;
\path[](f4) -- coordinate[midway,yshift=5cm](p4) (f5) ;
\path[](f5) -- coordinate[midway,yshift=5cm](p5) (f6) ;
\path[](f6) -- coordinate[midway,yshift=5cm, xshift=-0.5cm](p6) (f7) ;
\draw[color=blue!30,->,>=stealth, line width=2pt] (f1.south) -- ++(0,-15pt) -| (f6.south);
\draw[color=blue!30,->,>=stealth, line width=2pt] (f2.south) -- ++(0,-10pt) -| (f5.south);
\draw[color=blue!30,->,>=stealth, line width=2pt] (f12.south) -- ++(0,-15pt) -| (f62.south);
\draw[color=blue!30,->,>=stealth, line width=2pt] (f22.south) -- ++(0,-10pt) -| (f52.south);
\draw[color=blue!30,->,>=stealth, line width=2pt] (f13.south) -- ++(0,-15pt) -| (f63.south);
\draw[color=blue!30,->,>=stealth, line width=2pt] (f23.south) -- ++(0,-10pt) -| (f53.south);
\draw[color=blue!30,->,>=stealth, line width=2pt] (f4.south) -- ++(0,-45pt) node [midway, yshift=-1cm, circle, inner sep=4pt,draw=black, solid, fill=white] {\tikz{\pic at(0,0){drawarrow}}} |- (f32a.east);
\draw[color=blue!30,->,>=stealth, line width=2pt] (f42.south) -- ++(0,-70pt)node [midway, yshift=-1cm, circle, inner sep=4pt, draw=black, solid, fill=white] {\tikz{\pic at(0,0){drawarrow}}} |- (f33a.east);
\draw[color=blue!30,->,>=stealth, line width=2pt] (f7.south) -- ++(0,-75pt) node [midway, yshift=0, circle, draw=black, inner sep=4pt, solid, fill=white] {\tikz{\pic at(0,0){drawarrow}}} -| (in2.north);
\draw[color=blue!30,->,>=stealth, line width=2pt] (f72.south) -- ++(0,-75pt) node [midway, yshift=0, circle, draw=black, inner sep=4pt, solid, fill=white] {\tikz{\pic at(0,0){drawarrow}}} -| (in3.north);
\end{scope}
\begin{scope}[yshift=-4cm, every label/.append style={text=black, font=\Large}]
\node[inner sep=0pt,rotate=90, anchor=center,label=right:](set0) at (p0){
\tikz{
\pic(e1) at (0,0) {encoder1};
}
};
\node[inner sep=0pt,rotate=90, anchor=center,label=right: ](set1) at (p1){
\tikz{
\pic(e2) at (0,0) {encoder1};
}
};
\node[inner sep=0pt,rotate=90, anchor=center,label=right:](set2) at (p2){
\tikz{
\pic(e3) at (0,0) {encoder1};
}
};
\node[inner sep=0pt,rotate=90, anchor=center,label=right: ](set3) at(p3){
\tikz{
\pic(l1) at (0,0) {lstm};
}
};
\draw[myarrow,loosely dashdotted]($(set3.south)+(20pt,0)$) to[out=80,in=-10] ($(set3.east)+(0pt,20pt)$) to[out=180,in=100] ($(set3.north)+(-20pt,0)$);
\node[inner sep=0pt,rotate=90, anchor=center,yshift=-20pt,label=right: ](set4) at(p4){
\tikz{
\pic(d3) at (0,0) {decoder1};
}
};
\node[inner sep=0pt,rotate=90, anchor=center,yshift=-25pt,label=right: ](set5) at(p5){
\tikz{
\pic(d2) at (0,0) {decoder1};
}
};
\node[inner sep=0pt,rotate=90, anchor=center,yshift=-25pt,label=right: ](set6) at(p6){
\tikz{
\pic(d1) at (0,0) {decoder1};
}
};
\end{scope}
\begin{scope}[on background layer]
\draw[line width=5pt,shorten <=-60pt,shorten >=-100pt,->,>=stealth](set0.north) -- (set6.south);
\end{scope}
\begin{scope}[yshift=-34cm,xshift=9cm,node distance=8pt, font=\Huge]
\node[line width=2pt, draw,fill=BuPu-B,inner sep=16pt,label={[label distance=1.5cm] below: convLSTM block}](r1){
\tikz{
\node[layer2](conv) {conv};
\node[below=4*\layersepsmall of conv, inner sep=0](temp) {};
\node[layer2, below=8*\layersepsmall of conv](sigmoid1) {sigmoid};
\node[layer2,right=3*\layersepsmall of sigmoid1](sigmoid2) {sigmoid};
\node[layer2,left=3*\layersepsmall of sigmoid1](tanh1) {tanh};
\node[layer2,left=3*\layersepsmall of tanh1](sigmoid3) {sigmoid};
\node[draw,circle, minimum width=1cm,inner sep=2pt, below=5*\layersepsmall of sigmoid1](mul1){$\times$};
\node[draw,circle, minimum width=1cm,inner sep=2pt, below=5*\layersepsmall of mul1](plus1){+};
\node[draw,circle, minimum width=1cm,inner sep=2pt, below=17.5*\layersepsmall of sigmoid2](mul2){$\times$};
\node[layer2, right=8*\layersepsmall of mul2](state){State};
\node[layer2, below=10*\layersepsmall of plus1](tanh2){tanh};
\node[inner sep=0, below=6*\layersepsmall of plus1](temp2){};
\node[draw,circle, minimum width=1cm,inner sep=2pt, below=5*\layersepsmall of tanh2](mul3){$\times$};
\draw[line width=16pt](conv) -- (temp);
\draw[line width=4pt](temp) -- (sigmoid1); \draw[line width=4pt](temp) -| (tanh1); \draw[line width=4pt](temp)-| (sigmoid3); \draw[line width=4pt](temp)-|(sigmoid2);
\draw[myarrow](sigmoid1) --(mul1); \draw[myarrow](tanh1)|-(mul1.west);
\draw[myarrow](sigmoid2) -- (mul2); \draw[myarrow](mul1) -- (plus1);
\draw[myarrow](state)-- (mul2); \draw[myarrow](mul2)--(plus1);
\draw[myarrow](plus1) -- (tanh2)--(mul3);
\draw[myarrow](sigmoid3)|-(mul3.west);
\draw[myarrow,shorten >=-40pt](mul3) -- ($(mul3.south)+(0,-20pt)$) ;
\draw[line width=4pt,shorten <=-40pt] ($(conv.north)+(0,20pt)$)--(conv);
\draw[myarrow, loosely dashdotted](temp2) to[out=-20,in=-120](state.south);
}
};
\draw[myarrow, loosely dashdotted]($(r1.south)+(4pt,-10pt)$) to[out=10,in=-90] ($(r1.east)+(20pt,0pt)$) to[out=90, in=-10] ($(r1.north)+(4pt, 10pt)$);
\node[line width=2pt,draw,fill=BuPu-E,inner sep=16pt,label={[label distance=1.5cm] below: residual block}, above right=0 and 30*\layersepsmall of r1.south east](r2) {
\tikz{
\node[layer2](conv1) {conv};
\node[layer2,below=3*\layersepsmall of conv1](bn1) {bn};
\node[layer2,below=3*\layersepsmall of bn1](relu) {relu};
\node[layer2,below=3*\layersepsmall of relu](conv2) {conv};
\node[layer2,below=3*\layersepsmall of conv2](bn2) {bn};
\node[circle,below=3*\layersepsmall of bn2, draw,inner sep=2pt, minimum width=1.2cm](c1){ +};
\draw[line width=4pt,shorten <=-40pt]($(conv1.north)+(0, 30pt)$) -- (conv1);
\draw[line width=4pt] (conv1)-- (bn1) -- (relu) -- (conv2) -- (bn2) -- (c1) -- ($(c1.south)+(0, -20pt)$) ;
\draw[myarrow,shorten >=-40pt](c1) --($(c1.south)+(0, -30pt)$);
\draw[myarrow]($(conv1.center)+(0, 30pt)$) --++ (50pt,0) |- (c1.east);
}
};
\node[line width=2pt,draw, fill=BuPu-G,inner sep=16pt,label={[label distance=1.5cm] below: basic conv block}, above right=0 and 30*\layersepsmall of r2.south east](r3) {
\tikz{
\node[layer2](conv1) {conv};
\node[layer2, below=3*\layersepsmall of conv1](bn1) {bn};
\node[layer2,below=3*\layersepsmall of bn1](relu) {relu};
\draw[line width=4pt,shorten <=-40pt]($(conv1.north)+(0, 15pt)$) -- (conv1);
\draw[line width=4pt] (conv1)-- (bn1) -- (relu);
\draw[line width=4pt,->,>=stealth,shorten >=-40pt](relu) --($(relu.south)+(0, -15pt)$); }
};
\node[line width=2pt, draw,fill=BuPu-J,inner sep=16pt,label={[label distance=1.5cm] below: basic deconv block}, above right=0 and 30*\layersepsmall of r3.south east](r4){
\tikz{
\node[layer2](conv1) {deconv};
\node[layer2, below=3*\layersepsmall of conv1](bn1) {bn};
\node[layer2,below=3*\layersepsmall of bn1](relu) {relu};
\draw[line width=4pt,shorten <=-40pt]($(conv1.north)+(0, 15pt)$) -- (conv1);
\draw[line width=4pt] (conv1)-- (bn1) -- (relu);
\draw[line width=4pt,->,>=stealth,shorten >=-40pt](relu) --($(relu.south)+(0, -15pt)$); }
};
\node[xshift=80pt,line width=2pt, inner sep=16pt, above=10*\layersepsmall of r3.north east](r4){
\tikz{
\node(x1) [circle, inner sep=4pt,draw, solid, fill=white,label={[label distance=2.8cm] right: upsampling}] {\tikz{\pic at(0,0){drawarrow}}};
\node(x2)[below=0.3cm of x1,draw,circle, minimum width=1.3cm,inner sep=2pt]{$\times$};
\node(x3)[draw,circle, minimum width=1.3cm,inner sep=2pt, right=0.3cm of x2,label={[label distance=1.2cm] right: element wise operation}](plus1){+};
\node(x4)[xshift=25pt,below=0.3cm of x2,label={[label distance=.5cm] right: shuttle connection}]{\tikz{\draw[color=blue!30,->,>=stealth,line width=4pt](0,0)--(3cm,0);}};
\node(x5)[below=0.3cm of x4,label={[label distance=.5cm] right: recurrent connection}]{\tikz{\draw[->,>=stealth,line width=4pt, loosely dashdotted](0,0)--(3cm,0);}};
}
};
\end{scope}
\end{tikzpicture}
}
\caption{Overview of the network architecture. Note that for the last deconv block, bn and relu were replaced with a softmax layer to obtain a multi-channel likelihood map.}
\label{arch}
\end{figure}
\subsection{Network architecture}
Given an input image, the network must assign each pixel to one of three classes $\{c_{bg},c_{catheter},c_{text}\}$. A scale recurrent neural network~\cite{tao2018scale} was employed for this task. It comprises an encoder-decoder architecture with shuttle connections and recurrent modules. The encoder progressively increases the number of feature channels and decreases the spatial size (height, width) of the feature map, which provides a degree of translation invariance and saves memory. The decoder in turn performs the inverse operation, gradually recovering the size of the input. Through the encoding and decoding process, every pixel in the final output feature map aggregates information computed from a large portion of the image and hence encodes global context. The shuttle connections pass lower-level features directly to the higher levels, so that the network can make its final predictions from a fusion of both local and global cues.
The network is fully convolutional and can therefore accept images of different scales. Inputs of increasing scale were fed to the network at successive time steps. The recurrent module takes the form of a convolutional long short-term memory (convLSTM)~\cite{xingjian2015convolutional}; it takes concatenated inputs from the current and previous scales. To maintain size compatibility, we upscaled the feature maps from the previous scale with strided convolution.
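As a rough illustration of the gating inside the convLSTM module, the sketch below replaces the spatial convolutions with $1\times1$ channel-mixing matrices for brevity; the weight shapes and gate ordering are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def convlstm_step(x, h, c, Wx, Wh, b):
    """One convLSTM update. For brevity the convolutions are 1x1
    (channel-mixing matrices); a real convLSTM uses spatial kernels.
    x, h, c have shape (C, H, W); Wx, Wh have shape (4C, C)."""
    z = (np.tensordot(Wx, x, axes=1)
         + np.tensordot(Wh, h, axes=1)
         + b[:, None, None])
    i, f, o, g = np.split(z, 4, axis=0)       # input/forget/output gates, candidate
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sig(f) * c + sig(i) * np.tanh(g)  # update the cell state
    h_new = sig(o) * np.tanh(c_new)           # gated hidden output
    return h_new, c_new
```

The state feeding back across scales (the dash-dotted recurrent connection in Figure~\ref{arch}) corresponds to carrying `h` and `c` from one time step to the next.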
Figure~\ref{arch} provides a general overview of this architecture. Residual blocks were used to facilitate the training process. Both the skip connections and the residual blocks~\cite{he2016deep} benefit training by letting gradients propagate more easily through the network.
\subsection{Training Objective}
The output of the network is a multi-channel feature map whose number of channels equals the number of predicted classes. We normalized the feature map with a softmax function so that each channel can be interpreted as the likelihood of a pixel belonging to the corresponding class. A cross-entropy (CE) loss was used to measure the difference between the output and the groundtruth. The losses at all scales were aggregated into the final optimization objective, which can be expressed mathematically as:
\begin{equation}
\mathcal{L} = \sum_{i=1}^{m}CE(\hat{I}_i, I_i; w),
\end{equation}
where $\hat{I}_i$ is the output of the network at scale $i$ and $I_i$ is the corresponding groundtruth label map. $m$ is the number of scales and was chosen as 3 in this work. $w$ contains the class weights balancing the unequal distribution of $\{c_{bg},c_{catheter},c_{text}\}$ and was chosen as 1, 40, and 80, respectively.
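A minimal sketch of this objective, assuming per-pixel integer class labels and softmax-normalized likelihood maps (function and variable names are illustrative):

```python
import numpy as np

def weighted_ce(pred, label, w):
    """Per-pixel weighted cross entropy.
    pred: (C, H, W) softmax likelihood map; label: (H, W) integer class map;
    w: per-class weights, e.g. (1, 40, 80) for {bg, catheter, text}."""
    h, width = label.shape
    eps = 1e-12  # guard against log(0)
    loss = 0.0
    for y in range(h):
        for x in range(width):
            c = label[y, x]
            loss -= w[c] * np.log(pred[c, y, x] + eps)
    return loss / (h * width)

def multiscale_loss(preds, labels, w=(1, 40, 80)):
    """Aggregate the weighted CE loss over all scales, as in the objective."""
    return sum(weighted_ce(p, l, w) for p, l in zip(preds, labels))
```

With correct, confident predictions the loss approaches zero; misclassified catheter or text pixels are penalized 40 and 80 times more heavily than background.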
\section{Experiment setup}
\subsection{Implementation details}\label{id}
The images from the Open-i dataset all have a width of 512 pixels. This size was found to be sufficiently large to discriminate between different catheters. For NGT and ETT, $d_1$ and $d_2$ were selected as 160 and 80, $c_1$ and $c_2$ were set to 0.1 and 1, and $t$ was set to 30 pixels. Note that in the current implementation, $d_2$ was not varied to cope with the width difference between NGT and ETT. For UAC and UVC, only one projection profile at $\ang{0}$ was selected. The generated catheter width is 9, 9, and 6 pixels for NGT, ETT, and UAC/UVC, respectively, to accommodate the image size. During training, the image pairs were augmented with rotation (in the range [-\ang{60}, \ang{60}]), horizontal flipping, and scale changes (in the range [0.5, 1.1]) to generate random training images on the fly. Due to the scale changes, the augmented images were cropped or padded to a size of $512\times512$. The locally collected testing images were all resized to a width of 480 and denoised using BM3D~\cite{dabov2007image} with $\sigma=0.1$.
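The on-the-fly augmentation above can be sketched as a parameter sampler. The ranges are taken from the text; the 50\% flip probability is an assumption, since the paper does not state it:

```python
import random

def sample_augmentation(rng=random):
    """Sample one set of augmentation parameters (ranges from the text;
    the 0.5 flip probability is an assumed value)."""
    return {
        "rotation_deg": rng.uniform(-60, 60),   # rotation in [-60, 60] degrees
        "hflip": rng.random() < 0.5,            # horizontal flip
        "scale": rng.uniform(0.5, 1.1),         # scale change in [0.5, 1.1]
        "output_size": (512, 512),              # crop/pad target after scaling
    }
```

Each training image pair would be transformed with one such parameter set before being cropped or padded to $512\times512$.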
The segmentation network was trained on the Cedar cluster of Compute Canada with a P100 GPU. The Adam optimizer [35] with $\beta_1=0.9$ and $\beta_2=0.999$ was used, with an initial learning rate of 0.0001. The learning rate was decayed by a factor of 0.1 every 10 epochs, and the network was trained to convergence after 50 epochs. All trainable parameters were initialized with values drawn from a Gaussian distribution $\mathcal{N}(0, 0.02)$. The batch size was set to 2 due to the constraint of GPU memory size.
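The step decay schedule amounts to the following one-liner (a sketch; the framework's built-in scheduler would normally be used):

```python
def learning_rate(epoch, base_lr=1e-4, decay=0.1, step=10):
    """Step decay: multiply the initial rate by `decay` every `step` epochs."""
    return base_lr * decay ** (epoch // step)
```

So epochs 0-9 train at $10^{-4}$, epochs 10-19 at $10^{-5}$, and so on until convergence around epoch 50.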
\subsection{Evaluation Metrics}
In the evaluation, background and text pixels were treated as the negative class and catheter pixels as the positive class. Since the classes are highly imbalanced, precision and recall were computed, each expressed mathematically as:
\begin{equation}
\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}}, \text{ }\text{Recall} = \frac{\text{TP}}{\text{TP}+\text{FN}}
\end{equation}
where $\text{TP}, \text{TN}, \text{FP}, \text{FN}$ represent the numbers of true positives, true negatives, false positives, and false negatives, respectively. The threshold for computing the precision-recall curve was sampled within the range of 0 to 255 at an interval of 30.
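A sketch of the curve computation, assuming an 8-bit likelihood map and a boolean groundtruth catheter mask (names are illustrative):

```python
import numpy as np

def precision_recall_curve(likelihood, gt, thresholds=range(0, 256, 30)):
    """Sweep the binarization threshold over [0, 255] at a step of 30 and
    return (precision, recall) pairs. likelihood: 8-bit map; gt: bool mask."""
    points = []
    for t in thresholds:
        pred = likelihood >= t
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 1.0
        points.append((precision, recall))
    return points
```

The degenerate cases (no predicted or no groundtruth positives) are given the conventional value 1.0 here; other conventions exist.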
Another measure we used for the evaluation is the weighted harmonic mean of precision and recall (the $F_{\beta}$-measure), which is defined as:
\begin{equation}
F_{\beta} = \frac{(1+\beta^2)\times \text{Precision} \times \text{Recall}}{\beta^2 \times \text{Precision} +\text{Recall}}
\end{equation}
where $\beta^2$ is a weighting term and was set to 0.3 to weight precision more than recall, as in~\cite{achanta2009frequency}. The threshold for calculating the corresponding precision and recall is an image-dependent value defined as:
\begin{equation}
T_\mathit{seg} = \frac{2}{W\times H}\sum_{x=1}^W\sum_{y=1}^H \hat{I}^k(x,y)
\end{equation}
where $W$ and $H$ are the width and height of the obtained catheter likelihood map $\hat{I}^k$ (assumed to be the $k$-th channel of the network output).
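Combining the adaptive threshold $T_\mathit{seg}$ with the $F_{\beta}$-measure gives, in sketch form (assuming a likelihood map in $[0,1]$ and a boolean groundtruth mask):

```python
import numpy as np

def f_beta_adaptive(likelihood, gt, beta2=0.3):
    """F-beta with the image-dependent threshold T_seg = 2 * mean(likelihood).
    likelihood: catheter likelihood map in [0, 1]; gt: bool mask."""
    t = 2.0 * likelihood.mean()          # T_seg from the equation above
    pred = likelihood >= t
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```

Note this sketch omits the zero-division guards a production implementation would need when the thresholded map contains no positives.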
\subsection{Experiments}
No prior method is applicable to detecting all the catheters of interest; we therefore compared our method only with another deep learning approach, which used fcn8s~\cite{long2015fully} for PICC line tip detection~\cite{lee2017deep}. Further, to demonstrate the effectiveness of the recurrent module, we trained another network, termed w/oR, with the recurrent module removed, under exactly the same settings. This network resembles the typical UNet-style network used in~\cite{ambrosini2017fully}.
\begin{figure}[t]
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\matrix [column sep=2mm, row sep=2mm,ampersand replacement=\&] {
\node (p11)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{10-DX-17-0329256_real_A2.png}}; \&
\node (p12)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{gt_10-DX-17-0329256.png}}; \&
\node (p13)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{10-DX-17-0329256_b0.png}}; \&
\node (p14)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{10-DX-17-0329256_b1.png}}; \&
\node (p15)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{10-DX-17-0329256_b2.png}}; \&
\node (p16)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{10-DX-17-0329256_rcnn_norecurrent_best.png}}; \&
\node (p17)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{10-DX-17-0329256_fcn8s_best.png}}; \&\\
\node (p21)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{14-DX-17-0224825_real_A2.png}}; \&
\node (p22)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{gt_14-DX-17-0224825.png}}; \&
\node (p23)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{14-DX-17-0224825_b0.png}}; \&
\node (p24)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{14-DX-17-0224825_b1.png}}; \&
\node (p25)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{14-DX-17-0224825_b2.png}}; \&
\node (p26)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{14-DX-17-0224825_rcnn_norecurrent_best.png}}; \&
\node (p27)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{14-DX-17-0224825_fcn8s_best.png}}; \&\\
\node (p31)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{2-DX-13-0291088_real_A2.png}}; \&
\node (p32)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{gt_2-DX-13-0291088.png}}; \&
\node (p33)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{2-DX-13-0291088_b0.png}}; \&
\node (p34)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{2-DX-13-0291088_b1.png}}; \&
\node (p35)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{2-DX-13-0291088_b2.png}}; \&
\node (p36)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{2-DX-13-0291088_rcnn_norecurrent_best.png}}; \&
\node (p37)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth]{2-DX-13-0291088_fcn8s_best.png}}; \&\\
};
\begin{scope}[every node/.append style={font=\scriptsize}]
\node[anchor=north] at (p31.south) {test image};
\node[anchor=north] at (p32.south) {groundtruth};
\node[anchor=north] at (p33.south) {scale 1 (proposed)};
\node[anchor=north] at (p34.south) {scale 2 (proposed)};
\node[anchor=north] at (p35.south) {scale 3 (proposed)};
\node[anchor=north] at (p36.south) {w/oR};
\node[anchor=north] at (p37.south) {fcn8s~\cite{lee2017deep}};
\end{scope}
\end{tikzpicture}
}
\caption{Raw catheter likelihood maps for different networks on the test images: proposed, w/oR, and fcn8s (best viewed in the digital version).}
\label{datasetsample}
\end{figure}
\pgfplotsset{every tick label/.append style={font=\scriptsize}}
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=1,]
\begin{axis}[
xlabel=Recall,
ylabel=Precision,
domain=0:1,
width=0.48\textwidth,
ymin=0,ymax=1,
xmin=0,xmax=1,
grid=major,
grid style={dashed, gray!50},
legend pos= south west,legend style={nodes={scale=0.5, transform shape}}, axisStyle,title=(a)]
\addplot[smooth,Set2-A,thick] plot coordinates {
(0.373,0.630)(0.528,0.523)(0.615,0.466)(0.690,0.422)(0.737,0.380)(0.777,0.338)(0.811,0.297)(0.847,0.250)(0.915,0.157)
};
\addlegendentry{scale 1}
\addplot[smooth,Set2-B,thick] plot coordinates {
(0.378,0.945)(0.476,0.926)(0.547,0.908)(0.589,0.892)(0.624,0.871)(0.659,0.846)(0.698,0.815)(0.748,0.762)(0.844,0.543)
};
\addlegendentry{scale 2}
\addplot[smooth,Set2-C,thick] plot coordinates {
(0.438,0.944)(0.521,0.923)(0.558,0.913)(0.584,0.905)(0.604,0.897)(0.623,0.884)(0.647,0.870)(0.677,0.850)(0.788,0.712)
};
\addlegendentry{scale 3}
\addplot[smooth,Set2-D,thick] plot coordinates {
(0.376,0.948)(0.452,0.929)(0.488,0.916)(0.513,0.903)(0.536,0.889)(0.562,0.874)(0.586,0.860)(0.616,0.834)(0.721,0.679)
};
\addlegendentry{\textbf{w/oR}}
\addplot[smooth,Set2-E,thick] plot coordinates {
(0.125,0.850)(0.157,0.885)(0.174,0.875)(0.187,0.866)(0.200,0.859)(0.212,0.850)(0.226,0.840)(0.247,0.826)(0.327,0.796)
};
\addlegendentry{fcn8s}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}[scale=1,]
\begin{axis}[
xlabel=$F_{\beta}$-measure,
ymin=0,
symbolic x coords={scale 3, w/oR,scale 2,fcn8s, scale 1},
width=0.48\textwidth,
ybar=1pt,
bar width=.2cm,
enlarge x limits=0.1,
legend columns=3,
grid=major,
grid style={dashed, gray!50},
ymax = 1.2,
ytick={0,0.2,0.4,0.6,0.8,1,1.2},
legend pos= north east,legend style={nodes={scale=0.5, transform shape}},
xtick=data,
axisStyle,
title=(b)
]
\addplot[fill=Set2-A] coordinates {
(scale 3, 0.8411)(w/oR, 0.8189)(scale 2, 0.7455)(fcn8s, 0.8260)(scale 1, 0.3126)
};
\addlegendentry{precision}
\addplot[fill=Set2-B] coordinates {
(scale 3, 0.6909)(w/oR, 0.6305)(scale 2, 0.7603)(fcn8s, 0.2884)(scale 1, 0.7968)
};
\addlegendentry{recall}
\addplot[fill=Set2-C] coordinates {
(scale 3, 0.8009)(w/oR, 0.7661)(scale 2, 0.7489)(fcn8s, 0.5775)(scale 1, 0.3636)
};
\addlegendentry{$F_{\beta}$-measure}
\end{axis} \end{tikzpicture}
\caption{Quantitative results for different methods on the pediatric X-ray test set. (a) Precision and recall curves. (b) $F_{\beta}$-measures (methods ordered according to the value of the $F_{\beta}$-measure).}
\label{pr}
\end{figure}
\section{Results and Discussion}
Qualitative examples of the raw catheter likelihood maps obtained directly from the network, without any postprocessing, are shown in Figure~\ref{datasetsample}. The proposed network at the highest scale (scale 3) achieves the best visual appearance compared with the other methods. The maps from the proposed network at scales 2 and 3 look much cleaner than those from w/oR and fcn8s. We attribute this to the iterative refinement of the detection results by the recurrent module. Comparing the results of the proposed network across scales, the likelihood map at the smallest scale contains almost all line-like structures, including not only catheters but also ribs and ECG leads, because these structures look similar at a small scale. The irrelevant line-like structures are gradually filtered out at higher scales, where catheters, especially UVCs and UACs, begin to appear as two parallel edges whereas ribs and ECG leads continue to appear as single solid lines.
Precision-recall curves are shown in Figure~\ref{pr}(a), and $F_{\beta}$-measures computed with the adaptive threshold are shown in Figure~\ref{pr}(b). Note that before computing these quantitative measures, the obtained binary map underwent morphological operations to filter out small irregular regions. The proposed method clearly achieves higher precision and recall than the competitors. The results at the lowest scale have the highest recall but the lowest precision, while the results at the two higher scales achieve approximately the same performance. We believe the reason is that, even though the raw likelihood map improves somewhat, the middle scale has already detected the majority of the catheter, and the local refinement is too small to be manifested in the quantitative measures. Nonetheless, the $F_{\beta}$-measure of the proposed method at the highest scale ranks first among the comparators.
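The small-region filtering mentioned above can be approximated by connected-component analysis; the minimum component size below is an assumed value, not taken from the paper:

```python
import numpy as np
from collections import deque

def remove_small_regions(binary, min_size=50):
    """Keep only 4-connected components of at least `min_size` pixels
    (min_size is an assumed value for illustration)."""
    binary = binary.astype(bool)
    seen = np.zeros_like(binary)
    out = np.zeros_like(binary)
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                # Flood-fill one connected component.
                comp, q = [(sy, sx)], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            comp.append((ny, nx))
                            q.append((ny, nx))
                if len(comp) >= min_size:
                    for y, x in comp:
                        out[y, x] = True
    return out
```

In practice a morphological-image library (e.g. `scipy.ndimage.label`) would replace this hand-rolled flood fill.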
\begin{figure}
\centering
\resizebox{0.8\textwidth}{!}{
\begin{tikzpicture}
\matrix [column sep=2mm, row sep=2mm,ampersand replacement=\&] {
\node (p11)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth, height=0.2\textwidth]{overlay_19-DX-16-0073180.png}}; \&
\node (p12)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth, height=0.2\textwidth]{overlay_15-DX-17-0360268.png}}; \&
\node (p13)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth, height=0.2\textwidth]{overlay_7-DX-17-0329784.png}}; \&
\node (p14)[inner sep=0] at (0,0){\includegraphics[width=0.14\textwidth, height=0.2\textwidth]{overlay_11-DX-17-0321619.png}}; \&\\
};
\begin{scope}[every node/.append style={font=\scriptsize}]
\node[anchor=north] at (p11.south) {(a)};
\node[anchor=north] at (p12.south) {(b)};
\node[anchor=north] at (p13.south) {(c)};
\node[anchor=north] at (p14.south) {(d)};
\end{scope}
\end{tikzpicture}
}
\caption{Typical failure cases. (a) and (b): partially detected NGT, possibly resulting from its similarity to ECG leads; (a) also shows occlusion of a UVC. (c): confusion with another line on the X-ray image. (d): confusion with the vertical lateral aspect of the rib cage (best viewed in the digital version).}
\label{failure}
\end{figure}
Catheters are represented as thin lines just a few pixels wide on X-ray images. Therefore, a slight pixel shift in the ground-truth annotation could adversely impact the quantitative results. This is hard to avoid given the nature of the annotation task, and we believe our method could assist annotators in the future by detecting line candidates in the first place.
There are certain situations where our proposed method fails. Figure~\ref{failure} (a) and (b) show a partially detected NGT. This most likely results from the decreased visibility of the radiopaque strip. Figure~\ref{failure} (a) also shows another failure case, where the inferior portion of the UVC is occluded by the abdomen. (c) shows a falsely detected unidentified line and (d) shows part of the lateral aspect of the rib cage falsely identified as a catheter.
\section{Conclusion}
In this work, we have proposed a simple synthetic catheter approach and a scale recurrent network for catheter and tube detection. Catheters were simulated using a horizontal projection profile drawn over a randomly generated B-spline. The proposed network refines the segmentation results by iterating through the scale space of the radiograph input. We have shown that, just by training on adult chest X-rays with synthetic catheters, the detection network achieves promising results on real pediatric chest/abdomen X-rays. Although we have experimented only with pediatric X-rays, we believe the methodology should also be applicable to adult X-rays, provided the profile is carefully designed with consideration given to the large variation of catheter and wire types. The approach described in this work may contribute to the development of a system that detects and assesses the placement of catheters and tubes on X-ray images, providing a way to triage and prioritize X-ray images with potentially malpositioned catheters for a radiologist's urgent review, and ensuring patient safety by alerting the clinician in a timely manner.
\bibliographystyle{plain}
\small
\section{Introduction}
Bandwidth shortage is a major challenge for current wireless networks. Most of the available spectrum at microwave frequencies is occupied while there is a pressing need for higher throughputs and larger bandwidths.
During the past few years, mmWave frequencies have attracted the interest of academia and industry due to the capability of multi-Gbps data rates and the huge amount of bandwidth available at frequencies between 30 - 300 GHz. The mmWave band is a promising candidate for the next generation of cellular networks (5G).
User association plays an important role in the resource allocation for cellular networks. The conventional max-SINR user association is sub-optimal in dense networks as incoming user equipments (UEs) may receive the strongest signal from a congested base station (BS) and overload it.
In this case, we need to design a load-aware user association scheme that moves traffic from congested BSs to lightly loaded smaller BSs.
Load balancing user association schemes are studied for single antenna heterogeneous networks (HetNets) in \cite{Andrew} and for massive MIMO networks in \cite{Caire}, where it is assumed that the user instantaneous rates converge to deterministic values independent of user association and active user sets. This assumption leads to a simple but inaccurate full interference structure that degrades the network sum rate, since the effect of association coefficients on network interference structure is ignored.
Further, the interference that was ignored in 60-GHz indoor mmWave networks \cite{Atha} is no longer negligible in a dense outdoor mmWave network at the frequencies considered for cellular application (28, 38, and 73 GHz) \cite{Niu}.
A few existing works have also considered the joint problem of user association and beamforming design (see \cite{Hong}, \cite{Sanjabi}, and the references therein). This problem is shown to be NP-hard, and researchers have proposed various iterative algorithms to achieve near-optimal solutions.
In this paper, we formulate and solve an optimization problem for optimal user association in mmWave MIMO networks. The first step is the generation of a mmWave channel, which is drastically different from an i.i.d. channel.
The considered channel model is based on the clustered channel model introduced in \cite{SS} and the 3GPP-style 3D mmWave channel proposed in \cite{Nokia}. The next step is to formulate an optimization problem and solve it in order to find the optimal user association.
We introduce the \textit{Activation Matrix} which defines UE-BS connections in each time slot, from which association coefficients are derived as its time average.
Unlike existing works, here we assume that the user instantaneous rate is a function of user association, as is the case in mmWave. Consequently, the total interference coming from other BSs (while serving other UEs) also depends on association coefficients and has a considerable effect on network sum rate.
\section{Channel and System Model}
\subsection{mmWave Channel Model}
The mmWave channel has completely different characteristics compared to an i.i.d. channel.
The channel model considered in this paper is based on the clustered channel model introduced in \cite{SS} and the 3GPP-style 3D channel model proposed for the urban micro (UMi) environments in \cite{Nokia}, which is developed using a ray-tracing study.
This channel model has $C$ clusters with $L$ rays per cluster, and it can be expressed as
\begin{align}
H=\frac{1}{\sqrt{CL}}\sum_{c=1}^{C}\sum_{l=1}^{L} \sqrt{\gamma_c}~\mathbf{a}(\phi_{c,l}^{\textrm{UE}},\theta_{c,l}^{\textrm{UE}}) ~\mathbf{a}^*(\phi_{c,l}^{\textrm{BS}},\theta_{c,l}^{\textrm{BS}})
\end{align}
where $\gamma_c$ is the gain of the $c$th cluster. The parameters $\phi^{\textrm{UE}}$, $\theta^\textrm{UE}$, $\phi^\textrm{BS}$, $\theta^\textrm{BS}$ represent the azimuth angle of arrival (AoA), elevation angle of arrival (EoA), azimuth angle of departure (AoD), and elevation angle of departure (EoD), respectively. These parameters are generated randomly based on the distributions and cross-correlations given in \cite[Tables 1-3]{Nokia}. The vector $\mathbf{a}(\phi,\theta)$ is the antenna array response vector, which can correspond to either a uniform linear array (ULA) or a uniform planar array (UPA). In order to enable beamforming in the elevation direction (3D beamforming), we use the uniform $U\times V$ planar array given by \cite{SS}
\begin{align}
\mathbf{a}(\phi,\theta)=\big[ 1, ..., e^{jkd_{\textrm{a}}(u\sin(\phi)\sin(\theta)+v\cos(\theta))}, ..., \nonumber \\ e^{jkd_{\textrm{a}}((U-1)\sin(\phi)\sin(\theta)+(V-1)\cos(\theta))} \big]^T
\end{align}
where $d_a$ is the distance between antenna elements, and $u\in\{1, ..., U\}$ and $v\in\{1, ..., V\}$ are the indices of antenna elements.
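A minimal numerical sketch of this channel generator follows. The cluster powers and angles are placeholders drawn i.i.d. for brevity (the paper uses the distributions and cross-correlations of \cite[Tables 1-3]{Nokia}), and half-wavelength element spacing ($kd_{\textrm{a}}=\pi$) with 0-based element indices is our assumption:

```python
import numpy as np

def upa_response(phi, theta, U, V, kd=np.pi):
    """Uniform U x V planar array response vector; kd = k * d_a.

    Indices run over u = 0..U-1, v = 0..V-1 (0-based, equivalent to the
    formula in the text up to a common phase factor).
    """
    u = np.arange(U)[:, None]                 # U x 1 column of row indices
    v = np.arange(V)[None, :]                 # 1 x V row of column indices
    phase = kd * (u * np.sin(phi) * np.sin(theta) + v * np.cos(theta))
    return np.exp(1j * phase).reshape(U * V)  # flatten to a length-UV vector

def clustered_channel(C, L, U_ue, V_ue, U_bs, V_bs, rng):
    """Narrowband clustered mmWave channel H with C clusters of L rays each.

    Cluster gains and angles are drawn from simple placeholder distributions;
    replace them with the UMi statistics of [Nokia] for a faithful model.
    """
    N, M = U_ue * V_ue, U_bs * V_bs
    H = np.zeros((N, M), dtype=complex)
    for _ in range(C):
        gamma_c = rng.exponential()           # placeholder cluster power
        for _ in range(L):
            a_rx = upa_response(rng.uniform(-np.pi, np.pi), rng.uniform(0, np.pi), U_ue, V_ue)
            a_tx = upa_response(rng.uniform(-np.pi, np.pi), rng.uniform(0, np.pi), U_bs, V_bs)
            H += np.sqrt(gamma_c) * np.outer(a_rx, a_tx.conj())
    return H / np.sqrt(C * L)                 # 1/sqrt(CL) normalization
```

With the simulation parameters used later in the paper ($C=5$, $L=10$, $8\times 8$ BS arrays, $2\times 2$ UE arrays), this produces a $4\times 64$ channel matrix per link.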
We consider two link states for each channel, (line of sight) LoS and (non-line of sight) NLoS, and use the following probability functions obtained based on the New York City measurements in \cite{RapLetter}
\begin{align}
p_{\text{LoS}}(d)&=\Big[\min\Big(\frac{d_\textrm{BP}}{d},1\Big).\Big(1-e^{-\frac{d}{\eta}}\Big)+e^{-\frac{d}{\eta}} \Big]^2\\
p_{\text{NLoS}}(d)&=1-p_{\text{LoS}}(d)
\end{align}
where $d$ is the 3D distance between UE and BS in meters, $d_{\textrm{BP}}$ is the breakpoint distance beyond which the LoS probability is no longer equal to 1, and $\eta$ is a decay parameter. The fitted values for these parameters are $d_{\textrm{BP}}=27$ m and $\eta=71$ m.
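As a quick numerical check of this model (a sketch; the function names are ours), note that for $d\leq d_{\textrm{BP}}$ the $\min(\cdot)$ factor equals 1, so $p_{\text{LoS}}(d)=1$, and the probability decays monotonically beyond the breakpoint:

```python
import numpy as np

def p_los(d, d_bp=27.0, eta=71.0):
    """LoS probability vs. 3D UE-BS distance d in meters (NYC fit of [RapLetter])."""
    d = np.asarray(d, dtype=float)
    return (np.minimum(d_bp / d, 1.0) * (1.0 - np.exp(-d / eta)) + np.exp(-d / eta)) ** 2

def p_nlos(d):
    """NLoS probability is the complement of the LoS probability."""
    return 1.0 - p_los(d)
```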
Moreover, we use the following omni-directional path loss model for LoS and NLoS links \cite{Nokia}
\begin{align}
PL[\textrm{dB}]=20\log_{10}\Big(\frac{4\pi d_0}{ \lambda}\Big) + 10n \log_{10}\Big(\frac{d}{d_0}\Big) + X_{\sigma_{\textrm{SF}}}
\end{align}
where $\lambda$ is the wavelength, $d_0$ is the reference distance, $n$ is the path loss exponent, and
$X_{\sigma_{\textrm{SF}}}$ is a zero-mean Gaussian random variable (in dB) with standard deviation $\sigma_{\textrm{SF}}$ (dB) that describes the lognormal shadow fading.
At 73 GHz, the path loss exponents and the shadowing factors are $n_{\textrm{LoS}}=2$, $n_{\textrm{NLoS}}=3.4$, $\sigma_{\textrm{SF, LoS}}=4.8$ dB, and $\sigma_{\textrm{SF, NLoS}}=7.9$ dB.
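In code, with the 73 GHz parameters above (the 1 m reference distance $d_0$ is our assumption, as the paper leaves it generic):

```python
import numpy as np

def path_loss_db(d, n, sigma_sf, fc_ghz=73.0, d0=1.0, rng=None):
    """Omnidirectional path loss in dB with lognormal shadowing.

    d: 3D distance (m); n: path loss exponent; sigma_sf: shadowing std (dB).
    At 73 GHz: n=2.0, sigma=4.8 dB (LoS) or n=3.4, sigma=7.9 dB (NLoS).
    Pass rng=None to get the deterministic median path loss (no shadowing).
    """
    lam = 3e8 / (fc_ghz * 1e9)                  # wavelength (m)
    fspl = 20 * np.log10(4 * np.pi * d0 / lam)  # free-space loss at d0
    shadow = rng.normal(0.0, sigma_sf) if rng is not None else 0.0
    return fspl + 10 * n * np.log10(d / d0) + shadow
```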
\subsection{System Model}
We consider a downlink scenario in a cellular mmWave MIMO network with $J$ BSs and $K$ UEs, where $M_j$ and $N_k$ are the numbers of antennas at BS $j$ and UE $k$, respectively. Also, we assume $M_j \geq N_k$, which is reasonable in practice. Let $\mathcal{J}=\{1, ..., J\}$ denote the set of BSs and $\mathcal{K}=\{1, ..., K\}$ the set of UEs. Here, we consider TDD operation and assume that the channel state information (CSI) is available at both the transmitter and the receiver.
Each UE $k$ aims to receive $n_k$ data streams from its serving BS such that $1\leq n_k\leq N_k$, where the second inequality comes from the fact that the number of data streams for each UE cannot exceed the number of its antennas. Thus, we can define the total number of downlink data streams sent by BS $j$ as
\begin{equation}
D_j=\sum_{k \in \mathcal{Q}_j(t)}n_k
\end{equation}
where $\mathcal{Q}_j(t)$ is called the \textit{Activation Set} and represents the set of active UEs in BS $j$ within time slot $t$, such that $\mathcal{Q}_j(t) \subseteq \mathcal{K}$ and $|\mathcal{Q}_j(t)|=Q_j(t)\leq K$. Note that the total number of downlink data streams sent by each BS should be less than or equal to its number of antennas, i.e., $D_j \leq M_j$. For notational simplicity, we drop the time index $t$ in the definition of $D_j$, and only keep the time index for $Q_j(t)$ due to its importance.
The $M_j\times 1$ transmitted signal from BS $j$ is given by
\begin{equation}\label{x_j}
\mathbf{x}_j = \mathbf{F}_j \mathbf{d}_j = \sum_{k\in \mathcal{Q}_j(t)}\mathbf{F}_{k,j}\mathbf{s}_k
\end{equation}
where $\mathbf{s}_k\in \mathbb{C}^{n_k}$ is the data stream vector for UE $k$, consisting of mutually uncorrelated zero-mean symbols with $\mathbb{E}\lbrack \mathbf{s}_k\mathbf{s}_k^*\rbrack = \mathbf{I}_{n_k}$. The column vector $\mathbf{d}_j\in \mathbb{C}^{D_j}$ represents the vector of data symbols of BS $j$, which is the concatenation of the data stream vectors $\mathbf{s}_k,~k\in\mathcal{Q}_j(t)$, such that $\mathbb{E}\lbrack \mathbf{d}_j\mathbf{d}_j^*\rbrack = \mathbf{I}_{D_j}$.
$\mathbf{F}_{k,j}\in\mathbb{C}^{M_j\times n_k}$ is the linear precoder matrix that should be designed for each UE $k$ associated with BS $j$, and $\mathbf{F}_j\in\mathbb{C}^{M_j \times D_j}$ is the total linear precoder matrix of BS $j$ which is the concatenation of all $\mathbf{F}_{k,j}, k\in\mathcal{Q}_j(t)$.
The power constraint at BS $j$ can be described as
\begin{equation}\label{power}
\mathbb{E}[\mathbf{x}_j^* \mathbf{x}_j]=\sum_{k\in \mathcal{Q}_j(t)}\textrm{Tr}(\mathbf{F}_{k,j}\mathbf{F}_{k,j}^*)\leq P_j
\end{equation}
where $P_j$ is the transmit power of BS $j$.
Now, we can express the $N_k\times 1$ received signal at UE $k$ antennas as
\begin{equation}\label{y_k}
\mathbf{y}_k = \sum_{j\in \mathcal{J}}\mathbf{H}_{k,j}\mathbf{x}_j + \mathbf{z}_k
\end{equation}
and the final processed signal received by each UE is
\begin{equation}\label{y_tilde_k}
\tilde{\mathbf{y}}_k = \sum_{j\in \mathcal{J}}\mathbf{W}_k^* \mathbf{H}_{k,j}\mathbf{x}_j + \mathbf{W}_k^*\mathbf{z}_k
\end{equation}
where $\mathbf{W}_k\in\mathbb{C}^{N_k \times n_k}$ is the linear combiner matrix of UE $k$, $\mathbf{H}_{k,j}\in\mathbb{C}^{N_k\times M_j}$ represents the channel matrix between BS $j$ and UE $k$, and $\mathbf{z}_k\in\mathbb{C}^{N_k}$ is the white Gaussian noise vector at UE $k$, with $\mathbf{z}_k\sim\mathbb{CN}(\mathbf{0},N_0 \mathbf{I}_{N_k})$.
It is worth mentioning that in mmWave MIMO systems, hybrid (analog and digital) beamforming should be implemented to reduce the cost and power consumption of large antenna arrays \cite{SS}.
\begin{comment}
When UE $k$ is connected to BS $j$, the variance of desired received signal at user $k$ can be expressed as
\begin{align}\label{B}
\mathbf{G}_{k,j}=\mathbf{W}_{k}^*\mathbf{H}_{k,j}\mathbf{F}_{k,j} \mathbf{F}_{k,j}^* \mathbf{H}_{k,j}^*\mathbf{W}_{k}
\end{align}
Similarly, we can define the interference coming from BS $i$ (while serving user $l$) to user $k$ as
\begin{align}\label{X}
\mathbf{X}_{l,i,k} &= \mathbf{W}_{k}^*\mathbf{H}_{k,i}\mathbf{F}_{l,i} \mathbf{F}_{l,i}^* \mathbf{H}_{k,i}^*\mathbf{W}_{k}
\end{align}
In the next section, we will use this interference in formulating the network sum rate.
\end{comment}
\section{Time-Fractional User Association}
In the literature, when computing the instantaneous rate for a specific UE (connected to a BS), the interference coming from other UE-BS connections is assumed to be both independent of user association and present all the time (full interference). This assumption is not realistic and results in lower instantaneous user rates. In fact, the network interference structure highly depends on user association, and we need to take this into account when computing the user rates. Moreover, user association depends on channel realizations, which vary rapidly at mmWave frequencies. Thus, we cannot use the full interference structure in mmWave systems.
In this section, we introduce a new user association model named \textit{Time-Fractional Association} (TFA). First, we need to introduce our definition of a \textit{time slot}. Each time slot $t$ is a fraction of time comparable to the channel coherence time, such that the small-scale fading characteristics of the channel remain constant within it and only change from one time slot to another.
During time slot $t$, each UE is connected to one BS. Thus, the interference structure in each time slot depends on the user association in that specific slot. This interference structure is appropriate for mmWave channels, where the channel variation can be very fast.
Moreover, we assume it is possible to split the data streams of each UE and transmit them in different time slots.
Considering the above definitions, we study two association approaches in this paper: (i) instantaneous user association, which is performed within each time slot and results in unique association (each UE can be associated with only one BS during each time slot), and (ii) fractional (joint) user association, which is obtained by averaging over $T$ time slots. For each time slot, the mmWave channels are generated independently based on the channel model presented in Section II-A. We consider both approaches to evaluate the performance of our proposed TFA method and compare it with existing user association schemes.
We start by defining the \textit{Activation Matrix} as
\begin{equation}
\mathbf{B}\triangleq
\left[ \begin{array}{ccc}
\mathbold{\beta}(1)&\cdots &\mathbold{\beta}(T)
\end{array}
\right]=
\left[ \begin{array}{ccc}
\beta_1(1) & \cdots & \beta_1(T) \\
\vdots & \ddots &\vdots \\
\beta_K(1) & \cdots & \beta_K(T) \\
\end{array} \right]
\end{equation}
where $\mathbold{\beta}(t)$ is called the \textit{Activation Vector} at time slot $t$, and each element of $\mathbf{B}$ is the index of the BS with which UE $k$ is associated during time slot $t$, i.e., $\beta_k(t)\in\mathcal{J}$ with $k\in\mathcal{K}$ and $t\in\mathcal{T}=\{1, ..., T\}$.
Considering the above definition, the relationship between the activation set of BS $j$ and the elements of the activation matrix can be described as
\begin{equation}\label{Q_j}
\mathcal{Q}_j(t) = \{ k: \beta_k(t)=j\}.
\end{equation}
As stated earlier, we assume each UE can be associated with only one BS at any time slot $t$, i.e.,
\begin{equation}
\mathcal{Q}_j(t) \cap \mathcal{Q}_i(t) = \varnothing,~~j\neq i
\end{equation}
\begin{equation}\label{union}
\bigcup\limits_{j=1}^{J}\mathcal{Q}_j(t) = \mathcal{K}
\end{equation}
where (\ref{union}) follows from the fact that during each time slot, all UEs are served by the network's BSs.
The elements of activation matrix should satisfy the following conditions
\begin{align}
\sum_{j\in\mathcal{J}} 1_{\beta_k(t)}(j) &\leq 1, ~~\forall k\in \mathcal{K}\label{TFA_cons_1}\\
\sum_{k\in\mathcal{K}}1_{\beta_k(t)}(j). n_k &\leq D_j, ~~\forall j\in \mathcal{J}\label{TFA_cons_2}
\end{align}
where the indicator function is defined as
\begin{align}
1_{\beta_k(t)}(j)=
\begin{cases}
1& ~~\textrm{if}~~ \beta_k(t)=j\\
0& ~~\textrm{if}~~ \beta_k(t)\neq j
\end{cases}
\end{align}
The activation constraints in (\ref{TFA_cons_1}) reflect the fact that each UE cannot be associated with more than one BS in each time slot, and the resource allocation constraints in (\ref{TFA_cons_2}) state that the sum of the data streams of the UEs served by each BS cannot exceed the total number of data streams available at that BS. Note that $1_{\beta_k(t)}(j)$ is equal to one only if $\beta_k(t)=j$ or, equivalently, $k\in\mathcal{Q}_j(t)$. Thus, the summation in (\ref{TFA_cons_2}) is actually over the set of active UEs in BS $j$.
Now, we define the \textit{Association Matrix} $\mathbf{A}$ as follows
\begin{equation}
\mathbf{A}\triangleq
\left[ \begin{array}{ccc}
\alpha_{1,1} & \cdots & \alpha_{1,J} \\
\vdots & \ddots &\vdots \\
\alpha_{K,1} & \cdots & \alpha_{K,J} \\
\end{array} \right]
\end{equation}
where $\alpha_{k,j}\in [0,1]$ is the association coefficient (fraction), and it represents the average connectivity of UE $k$ to BS $j$. If $\alpha_{k,j}=0$, we say UE $k$ is not associated with BS $j$.
In this model, association coefficients are considered as a fraction of time. More specifically, we assume that $\alpha_{k,j}$ is the average fraction of time during which UE $k$ is connected to BS $j$.
The relationship between the association coefficients and the elements of activation matrix is given by
\begin{equation}\label{alpha_beta}
\alpha_{k,j} = \lim_{T\rightarrow \infty} \frac{1}{T}\sum_{t=1}^T 1_{\beta_k(t)}(j)
\end{equation}
According to (\ref{alpha_beta}), given the activation matrix $\mathbf{B}$, one can easily obtain the association matrix $\mathbf{A}$.
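The time averaging in (\ref{alpha_beta}), truncated to a finite horizon $T$, can be sketched as follows (0-based BS indices are our convention):

```python
import numpy as np

def association_matrix(B, J):
    """Association coefficients alpha_{k,j} from the activation matrix B.

    B: K x T integer array, B[k, t] = index of the BS serving UE k in slot t.
    Returns the K x J matrix A with A[k, j] = fraction of slots UE k spent
    connected to BS j, i.e. the time average of the indicator 1_{beta_k(t)}(j).
    """
    K, T = B.shape
    A = np.zeros((K, J))
    for j in range(J):
        A[:, j] = (B == j).mean(axis=1)   # empirical time fraction on BS j
    return A
```

Since every UE is served in every slot, each row of $\mathbf{A}$ sums to one.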
\section{User Association Optimization Problem}
In this section, we evaluate the instantaneous and average per-user throughputs by processing the received signal at each user. Then, we formulate an optimization problem and use a heuristic search method to find the optimal user association.
\subsection{Formulation of instantaneous user rate}
Considering (\ref{x_j}) and (\ref{y_tilde_k}), the signal received by UE $k$ at slot $t$ can be decomposed as
\begin{align}\label{y_tilde_k_t}
\tilde{\mathbf{y}}_k (t)&= \sum_{j\in \mathcal{J}}\mathbf{W}_k^* \mathbf{H}_{k,j} \mathbf{x}_j + \mathbf{W}_k^* \mathbf{z}_k\nonumber \\
&= \underbrace{\mathbf{W}_k^*\mathbf{H}_{k,j}\mathbf{F}_{k,j} \mathbf{s}_k}_\text{Desired signal} + \underbrace{\mathbf{W}_k^*\mathbf{H}_{k,j}\sum_{\substack{l\in \mathcal{Q}_j(t) \\ l\neq k}}\mathbf{F}_{l,j} \mathbf{s}_l}_\text{Intra-cell interference} \nonumber\\
&+ \underbrace{\mathbf{W}_k^*\sum_{\substack{i\in \mathcal{J} \\ i\neq j}}\sum_{\substack{l\in \mathcal{Q}_i(t)}} \mathbf{H}_{k,i} \mathbf{F}_{l,i} \mathbf{s}_l}_\text{Inter-cell interference} + \underbrace{\mathbf{W}_k^*\mathbf{z}_k}_\text{Noise}
\end{align}
where the first term is the signal received from the desired BS ($j$), the second term represents the interference coming from the same BS ($j$) through signals intended for its other active UEs, the third term is the interference coming from the other BSs ($i\neq j$) through signals sent to their active UEs, and the last term is the received noise at UE $k$.
The activation sets $\mathcal{Q}_j(t)$ and $\mathcal{Q}_i(t)$ appearing in the interference terms indicate that the interference highly depends on the user association, which highlights the novelty of this work.
Again, we note that all vectors and matrices in (\ref{y_tilde_k_t}) are time-dependent, and the time index $t$ is dropped for the sake of notational simplicity.
When UE $k$ is connected to BS $j$ in time slot $t$, its instantaneous rate can be obtained as \cite{Telatar}
\begin{equation}\label{R_kj}
R_{k,j}(t) = \log_2\Big |\mathbf{I}_{n_k} + (\mathbf{Y}_{k,j}(t))^{-1}\mathbf{W}_{k}^*\mathbf{H}_{k,j}\mathbf{F}_{k,j} \mathbf{F}_{k,j}^* \mathbf{H}_{k,j}^*\mathbf{W}_{k}\Big |
\end{equation}
\begin{align}
\mathbf{Y}_{k,j}&(t)= \mathbf{W}_{k}^*\mathbf{H}_{k,j}\Big( \sum_{\substack{l\in \mathcal{Q}_{j}(t) \\ l\neq k}} \mathbf{F}_{l,j} \mathbf{F}_{l,j}^* \Big ) \mathbf{H}_{k,j}^*\mathbf{W}_{k} \nonumber \\
&+ \mathbf{W}_{k}^* \Big( \sum_{\substack{i\in \mathcal{J} \\ i\neq j}} \sum_{\substack{l\in \mathcal{Q}_i(t)}} \mathbf{H}_{k,i}\mathbf{F}_{l,i} \mathbf{F}_{l,i}^* \mathbf{H}_{k,i}^* \Big ) \mathbf{W}_{k} + N_0 \mathbf{W}_k^*\mathbf{W}_k
\end{align}
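A direct numerical transcription of $R_{k,j}(t)$ is sketched below; the dictionary-based data layout is our convention. Note that the sandwiching by $\mathbf{W}_k^*(\cdot)\mathbf{W}_k$ makes every matrix in the determinant $n_k\times n_k$:

```python
import numpy as np

def instantaneous_rate(k, j, Q, W, H, F, N0):
    """Instantaneous rate of UE k served by BS j in the current slot.

    Q: dict {BS index: set of active UEs} -- the activation sets Q_i(t).
    W[k]: N_k x n_k combiner; H[(k, i)]: N_k x M_i channel;
    F[(l, i)]: M_i x n_l precoder of BS i for UE l; N0: noise power.
    """
    Wk = W[k]
    Y = N0 * (Wk.conj().T @ Wk)               # noise term N0 W_k^* W_k
    for i, users in Q.items():                # all active UE-BS pairs
        for l in users:
            if i == j and l == k:
                continue                      # skip the desired link itself
            T = Wk.conj().T @ H[(k, i)] @ F[(l, i)]
            Y = Y + T @ T.conj().T            # intra-/inter-cell interference
    S = Wk.conj().T @ H[(k, j)] @ F[(k, j)]   # effective desired channel
    n_k = Wk.shape[1]
    return float(np.log2(np.linalg.det(np.eye(n_k) + np.linalg.solve(Y, S @ S.conj().T)).real))
```

Because the interference covariance $\mathbf{Y}_{k,j}(t)$ is rebuilt from the activation sets, reassigning any UE changes the rates of all other UEs, which is exactly the coupling the TFA model captures.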
\begin{comment}
Considering (\ref{B}) and (\ref{X}), we can rewrite the instantaneous rate in compact form as
\begin{equation}\label{inst_rate_t}
R_{k,j}(t) = \log_2\det (\mathbf{I}_{N_k} + (\mathbf{Y}_{k,j}(t))^{-1}\mathbf{G}_{k,j})
\end{equation}
\begin{align}
\mathbf{Y}_{k,j}(t) &= \sum_{\substack{l\in \mathcal{Q}_{j}(t) \\ l\neq k}} \mathbf{X}_{l,j,k} + \sum_{\substack{i\in \mathcal{J} \\ i\neq j}} \sum_{\substack{l\in \mathcal{Q}_i(t)}} \mathbf{X}_{l,i,k} + N_0 \mathbf{W}_k^*\mathbf{W}_k
\end{align}
\end{comment}
The instantaneous rate given in (\ref{R_kj}) is a function of activation sets $\mathcal{Q}_j(t)$. Thus, the instantaneous per-user throughput at time slot $t$ can be expressed as
\begin{align}\label{r_k_t}
r_{k}(t)=\sum_{j\in\mathcal{J}} 1_{\mathbold{\beta}_k(t)}(j)\times R_{k,j}(t)
\end{align}
and the average per-user throughput is given by
\begin{align}
r_{k}=\lim_{T\rightarrow \infty}\frac{1}{T}\sum_{t=1}^{T} r_k(t)
\end{align}
\subsection{Optimization Problem}
As stated before, the channel variation can be very fast at mmWave frequencies, and the small-scale characteristics of the channel can change significantly even between two consecutive time slots. Thus, we need to perform the user association in each time slot.
Defining the instantaneous user throughput vector $\mathbf{r}(t)\triangleq (r_1(t), ..., r_K(t))$, we wish to find the optimal activation vector $\mathbold{\beta}(t)$ which maximizes an overall network utility function. This utility function should be concave and monotonically increasing.
In this paper, we consider the well-known and widely used sum-rate utility function defined by
\begin{align}\label{u(r(t))}
U(\mathbf{r}(t))&=\sum_{k\in\mathcal{K}} r_k(t)=\sum_{k\in\mathcal{K}}\sum_{j\in\mathcal{J}} 1_{\mathbold{\beta}_k(t)}(j)\times R_{k,j}(t)
\end{align}
Thus, the optimization problem for each time slot $t$ can be written as
\begin{subequations}\label{opt_prob2}
\begin{align}
\maxi_{\beta_k(t)\in \mathcal{J}}~~&U(\mathbf{r}(t))\\
\mathrm{subject~to}~~&\sum_{j\in\mathcal{J}} 1_{\mathbold{\beta}_k(t)}(j) \leq 1, ~~\forall k\in \mathcal{K}\\
&\sum_{k\in\mathcal{K}}1_{\mathbold{\beta}_k(t)}(j). n_k \leq D_j, ~~\forall j\in \mathcal{J}
\end{align}
\end{subequations}
This is an optimization problem with the integer variables $\beta_k(t)\in\mathcal{J}$ for $k\in\mathcal{K}, t\in\mathcal{T}$. Here, we use equal power allocation to split the BS power among its active UEs. Thus, the power constraint in (\ref{power}) is automatically satisfied and need not be enforced explicitly.
In this paper, we use singular value decomposition (SVD) to obtain the precoder and combiner matrices (SVD beamforming). To this end, we first need to decompose the channel matrix $\mathbf{H}\in\mathbb{C}^{N_k\times M_j}$ as
\begin{align}
\mathbf{H} &= \mathbf{\Phi}\mathbf{\Sigma}\mathbf{\Gamma}^*
\end{align}
where $\mathbf{\Phi}\in\mathbb{C}^{N_k\times \textrm{rank}(\mathbf{H})}$ is the unitary matrix of left singular vectors, $\mathbf{\Sigma}\in\mathbb{C}^{\textrm{rank}(\mathbf{H})\times \textrm{rank}(\mathbf{H})}$ is the diagonal matrix of singular values (in decreasing order), and $\mathbf{\Gamma}\in\mathbb{C}^{M_j\times \textrm{rank}(\mathbf{H})}$ is the unitary matrix of right singular vectors. Then, we partition the channel matrix as
\begin{align}\label{Ch_partitioning}
\mathbf{H} &=
\left[
\begin{matrix}
\mathbf{\Phi}_1 & \mathbf{\Phi}_2
\end{matrix}
\right]
\left[
\begin{matrix}
\mathbf{\Sigma}_1 & \mathbf{0}\\
\boldsymbol{0} & \mathbf{\Sigma}_2
\end{matrix}
\right]
\left[
\begin{matrix}
\mathbf{\Gamma}_1^* \\ \mathbf{\Gamma}_2^*
\end{matrix}
\right]\nonumber \\
&= \mathbf{\Phi}_1\mathbf{\Sigma}_1\mathbf{\Gamma}_1^*+ \mathbf{\Phi}_2\mathbf{\Sigma}_2\mathbf{\Gamma}_2^*
\end{align}
where $\mathbf{\Phi}_1\in\mathbb{C}^{N_k \times n_k}$, $\mathbf{\Sigma}_1\in\mathbb{C}^{n_k \times n_k}$, $\mathbf{\Gamma}_1\in\mathbb{C}^{M_j \times n_k}$, and $n_k$ is the number of data streams intended for user $k$.
The above partitioning is done to extract the precoder and combiner of appropriate sizes. More specifically, each precoder $\mathbf{F}_{k,j}$ and combiner $\mathbf{W}_k$ need to be of size $M_j\times n_k$ and $N_k\times n_k$, respectively.
Then, the SVD precoder and combiner can be obtained as
\begin{align}
\mathbf{F}&=\mathbf{\Gamma}_1\\
\mathbf{W}&=\mathbf{\Phi}_1
\end{align}
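Numerically, the truncated SVD directly yields these matrices; with orthonormal columns, the effective channel becomes $\mathbf{W}^*\mathbf{H}\mathbf{F}=\mathbf{\Sigma}_1$, i.e., parallel streams. A sketch using NumPy's SVD (which returns singular values in decreasing order):

```python
import numpy as np

def svd_beamformers(H, n_streams):
    """SVD precoder F = Gamma_1 and combiner W = Phi_1 for n_streams streams.

    Keeps the n_streams dominant singular directions of the N x M channel H,
    so F is M x n_streams and W is N x n_streams with orthonormal columns.
    """
    Phi, Sigma, Gamma_h = np.linalg.svd(H, full_matrices=False)  # H = Phi diag(Sigma) Gamma^*
    F = Gamma_h.conj().T[:, :n_streams]   # right singular vectors (precoder)
    W = Phi[:, :n_streams]                # left singular vectors (combiner)
    return F, W
```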
\begin{figure}
\centering
\hspace*{-1.7em}
\includegraphics[scale=.25]{Fig1.eps}
\vspace*{-1.1em}
\caption{Max-SINR user association with full interference}
\label{max-SINR}
\end{figure}
The optimization problem in (\ref{opt_prob2}) is a mixed integer nonlinear program (MINLP), which is known to be NP-hard due to its nonlinear structure and the presence of integer variables. The nonlinearity comes from the indicator function appearing in (\ref{r_k_t}) and constraints (\ref{opt_prob2}b-c).
Such problems are typically difficult to solve due to their combinatorial structure and the potential presence of multiple local optima in the search space.
Genetic algorithms (GA) are powerful and effective tools for solving combinatorial optimization problems.
A GA is a method based on natural selection that simulates biological evolution. The algorithm iteratively generates and modifies a population of candidate solutions.
After successive generations, the population evolves toward an optimal solution \cite{GA}.
In the next section, we use the GA solver provided in Global Optimization Toolbox of MATLAB to solve the optimization problem in (\ref{opt_prob2}).
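For readers without MATLAB, the GA loop can be sketched in a few lines. This is a deliberately minimal stand-in for the toolbox solver, not its actual implementation: the hyperparameters, the elitist selection, and the penalty handling of the stream-budget constraint are all our simplifications.

```python
import numpy as np

def ga_user_association(utility, K, J, n_k, D, pop=30, gens=100, seed=0):
    """Minimal GA sketch for problem (18): maximize utility(beta) over
    integer association vectors beta of length K with entries in 0..J-1.

    n_k: per-UE stream demands (length K); D: per-BS stream budgets (length J).
    Infeasible vectors (a BS budget exceeded) get fitness -inf.
    """
    rng = np.random.default_rng(seed)

    def fitness(beta):
        loads = np.bincount(beta, weights=n_k, minlength=J)
        if np.any(loads > D):                 # stream-budget constraint
            return -np.inf
        return utility(beta)

    P = rng.integers(0, J, size=(pop, K))     # random initial population
    for _ in range(gens):
        fit = np.array([fitness(b) for b in P])
        P = P[np.argsort(fit)[::-1]]          # sort by fitness, best first
        children = P[: pop // 2].copy()       # clone the fitter half
        for c in range(0, len(children) - 1, 2):   # one-point crossover
            cut = rng.integers(1, K)
            children[c, cut:], children[c + 1, cut:] = (
                children[c + 1, cut:].copy(), children[c, cut:].copy())
        for child in children:                # mutate one gene per child
            child[rng.integers(K)] = rng.integers(J)
        P = np.vstack([P[: pop - len(children)], children])  # elitist survivors
    fit = np.array([fitness(b) for b in P])
    return P[np.argmax(fit)]
```

In the paper's setting, `utility` would evaluate the sum of instantaneous rates $\sum_k r_k(t)$ for the candidate activation vector in the current slot.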
\section{Numerical Results}
In this section, we investigate the performance of the proposed user association scheme in a simple mmWave MIMO network operating at 73 GHz with $J=4$ BSs and $K=8$ UEs. The mmWave links are generated as described in Section II-A, and each link is composed of 5 clusters with 10 rays per cluster. Each BS is equipped with an $8\times 8$ UPA of antennas, and each UE is equipped with a $2\times 2$ UPA of antennas. The noise power spectral density is $-174$ dBm/Hz, and all BSs transmit at the same power level $P_j$.
Moreover, we assume that the network nodes are deployed in a region of size $300~\textrm{m} \times 300~\textrm{m}$. The BSs are placed at specific locations and the UEs are distributed randomly within the given area, as shown in Fig. \ref{max-SINR}. There are $n_k=2$ data streams for each UE and the total number of data streams sent by each BS is $D_j=4$. Thus, the maximum number of allowed active users at each BS is $Q_j(t)=2$, and a BS is considered to be overloaded (congested) if more than 2 UEs are associated with it.
\begin{figure}
\centering
\hspace*{-1.7em}
\includegraphics[scale=.25]{Fig2.eps}
\vspace*{-1.1em}
\caption{Proposed load balancing TFA scheme}
\label{TFA_fig}
\end{figure}
First, we compare the TFA model with the conventional max-SINR scheme.
Fig. \ref{max-SINR} shows the result of max-SINR association with full interference, where the BS at the center of the network (BS 1) is overloaded by 3 extra UEs.
Load balancing user association using the proposed TFA scheme is shown in Fig. \ref{TFA_fig}. It can be seen from the figure that the proposed method balances the BSs' loads by moving the excess UEs from the congested BS to other BSs.
Next, we compare the performance of the TFA method with three other user association schemes: (i) Max-SINR - Drop, (ii) Max-SINR - Sharing \& Drop, and (iii) the load balancing user association in \cite{Caire}. In the first method, the UEs that overload the congested BS are dropped; in the second, the data streams of the congested BS are shared among the maximum number of UEs it can serve. For instance, in the scenario depicted in Fig. \ref{max-SINR}, the 4 available data streams of BS 1 are shared among the first 4 UEs (which receive the highest SINR from BS 1) and the 5th UE is dropped. For the last scheme, we perform the load balancing user association based on the approach presented in \cite{Caire}.
Fig. \ref{Ass_Coeff} compares the association coefficients (averaged over 1000 time slots) for the above schemes. It is clear from the figure that both load balancing user association schemes successfully balance the BSs' loads.
\begin{figure*}
\centering
\hspace*{-3em}
\includegraphics[scale=.35]{Fig3.eps}
\vspace*{-1.5em}
\caption{Comparison of association coefficients for different user association schemes}
\label{Ass_Coeff}
\end{figure*}
Finally, we examine the performance of the TFA scheme in terms of network sum rate.
Fig. \ref{SumRate} depicts the network sum rate (given in (\ref{u(r(t))})) versus the BSs' transmit power for different association schemes.
Note that the other three association schemes all assume a full interference structure. It can be inferred from the figure that the network interference highly depends on user association, since our TFA method outperforms the other user association schemes, which all ignore the effect of user association on the network interference structure. Also, we can see that the load balancing scheme presented in \cite{Caire} slightly underperforms the max-SINR schemes.
This result is expected since, unlike the max-SINR schemes, the load balancing approach sacrifices some sum rate in order to account for the BS loads.
\section{Conclusion}
In this paper we investigated the problem of optimal user association in a mmWave MIMO network.
We first introduced the activation matrix and showed that the user instantaneous rate is a function of the elements of this matrix.
Then, we formulated a new association model, called TFA, in which the network interference depends on user association. The performance of the proposed TFA scheme was evaluated against three other association schemes: (i) max-SINR with user drop, (ii) max-SINR with resource sharing and user drop, and (iii) the load balancing user association proposed in \cite{Caire}. Simulation results confirmed that the network interference structure highly depends on user association and showed that the proposed scheme outperforms the other three methods, which ignore the effect of user association on the network interference.
\section{Introduction}
\label{sec:intro}
Motivation for particle dark matter (DM) comes from different astrophysical and cosmological evidence, such as the rotation curves of galaxies~\cite{zwicky,rubin}, anisotropies in the CMBR~\cite{cmbr}, and observations of the Bullet cluster~\cite{bullet}, which motivate physics beyond the Standard Model (SM). DM as a fundamental particle necessarily lacks electromagnetic interactions, but can have different properties depending on its mass (cold, warm or hot) and interaction strength. The major classification goes as $(i)$ weakly interacting massive particles (WIMP)~\cite{Kolb:1990vq,Jungman:1995df}, $(ii)$ feebly interacting massive particles (FIMP)~\cite{Hall:2009bx} and $(iii)$ strongly interacting massive particles (SIMP)~\cite{Hochberg:2014dra}. Stabilization of DM (or a decay lifetime at least as large as the age of the universe) is also required to fit the observed DM relic density ($\Omega h^2\sim 0.1$)~\cite{WMAP,Ade:2015xua}, and is achieved by an additional unbroken symmetry under which the dark sector particles transform non-trivially while the SM particles do not. DM can have any intrinsic spin and can therefore be a scalar, a fermion or a vector boson. Vector boson dark matter (VBDM) models are less abundant in the literature, as they necessarily involve extending the SM gauge group $SU(3)_c\times SU(2)_L \times U(1)_Y$. This paper discusses one such possibility of a non-abelian vector boson as DM and its consequences for relic density, direct detection and collider search prospects.
The additional gauge bosons must be electromagnetically neutral to qualify as DM. The simplest possibility is an abelian $U(1)_X$ extension with vanishing hypercharge~\cite{Farzan:2012hh,Baek:2013nr,Duch:2015jta,Davoudiasl:2013jma}. The simplest non-abelian extension is then $SU(2)$ (see for example~\cite{Hambye:2008bq,Arcadi:2017kky}). How to keep the new gauge bosons free of SM hypercharge is a matter of group-theoretic construction and is not unique; one way of achieving this is described in this paper, following the analysis in~\cite{Fraser:2014yga}. However, the requirement of a VBDM also demands that the additional gauge group be broken completely through spontaneous symmetry breaking (SSB). Therefore the symmetry required to keep the DM stable is often an additional one, and we assume it to be an unbroken $U(1)$ in this analysis. The particle content and its transformation properties dictate how the DM interacts and hence freezes out or freezes in. The guiding principles for choosing the additional fields here are (i) neutrino mass generation, (ii) successful SSB generating massive gauge bosons and (iii) a possible high-scale realization of the model in $SU(7)$~\cite{Ma:2013nga}. Together, they point to a DM phenomenology completely different from the VBDM framework addressed in~\cite{Bhattacharya:2011tr,Barman:2017yzr}.
The key feature of this model is that the SM particles do not transform under the additional $SU(2)_N$ symmetry, unlike the case in~\cite{DiazCruz:2010dc}. Therefore, the VBDM lacks a direct search cross-section except through the Higgs portal, which is constrained by Higgs data to avoid large mixing. This helps the model remain allowed in a large parameter space by the non-observation of DM in direct searches, for example in PandaX-II data~\cite{Cui:2017nnn}. Another interesting aspect of this analysis is the presence of the scalar triplet as an additional DM component apart from the VBDM, as pointed out in~\cite{Fraser:2014yga}. The scalar DM again interacts with the SM via the Higgs portal (not necessarily weakly) and has direct search prospects. The analysis explores such a two-component DM parameter space of the model, endowed with non-zero DM-DM interactions.
The model also assumes the presence of not-so-heavy neutrinos to generate light neutrino masses through the {\it inverse seesaw} mechanism. This allows, on the one hand, the heavy neutrinos to be stable and contribute as DM, while on the other hand, they can be produced at the Large Hadron Collider (LHC) in hadronically quiet single-lepton and hadronically quiet opposite-sign dilepton (OSD) channels, with missing energy. This serves as one of the important directions of this analysis, which was not addressed in the earlier proposal of the model~\cite{Fraser:2014yga}. The SM background can be tamed to some extent by large missing energy ($\slashed{E}_T$) and $H_T$ cuts. The discovery potential can thus be reached at high luminosity. Generating light neutrino masses with not-so-heavy neutrinos ($\sim \mathcal{O}(500)~\rm{GeV}$) also requires the VBDM ($X,\bar{X}$) to be degenerate with the third gauge boson component $X_3$. Therefore, co-annihilations play a crucial part on top of annihilation for the VBDM (also not taken into account in the earlier analysis~\cite{Fraser:2014yga}) and build a bridge between the neutrino and dark sectors. For more illuminating discussions, see for example~\cite{Boehm:2006mi,Ma:2006km}.
The paper is organised as follows: we discuss the model in Sec.~\ref{sec:model} and the neutrino mass generation mechanism in Sec.~\ref{sec:neutrino mass}, followed by the vector boson DM analysis in Sec.~\ref{sec:DM pheno}. The multipartite DM features are elaborated in subsections~\ref{sec:degenerate DM} and~\ref{sec:x-delta DM}. Collider signatures are analysed in Sec.~\ref{sec:collider pheno}. Finally, we conclude in Sec.~\ref{sec:conclusion}.
\section{The Model}
\label{sec:model}
The model under consideration has an extended gauge group $SU(2)_N$, where $N$ stands for neutral\footnote{The electromagnetic charge neutrality of the vector bosons under this
gauge group is ensured through spontaneous symmetry breaking, as discussed in~\cite{Fraser:2014yga}.}. The main idea is to have the lightest of the gauge bosons as a DM candidate.
The particle content is chosen here minimally to have a spontaneous symmetry breaking (SSB) of $SU(2)_N$ to yield massive gauge bosons and also to have a successful neutrino
mass generation as proposed in~\cite{Fraser:2014yga}. An important difference from the $SU(2)_N$ model proposed in~\cite{DiazCruz:2010dc,Bhattacharya:2011tr}, is that all of the
SM fermions here are singlet under $SU(2)_N$. The stability of DM is ensured by an added global $U(1)$ symmetry ($S^{'}$), imposed on the new particles (as in~\cite{Fraser:2014yga}),
so that $S=S^{'}+T_{3N}$ remains unbroken. The stability of DM under an unbroken global continuous symmetry may, however, be spoiled by a possible quantum theory
of gravity~\cite{Mambrini:2015sia}, which would then have observable effects in gamma-ray, X-ray, neutrino and CMB data through DM decay, thus constraining such a case.
The analysis in~\cite{Mambrini:2015sia} shows that the limits on the DM mass scale can be as stringent as a few MeV, assuming SM gauge non-invariant dimension-five effective
operators suppressed by the Planck scale\footnote{Gauge invariance requires higher-dimensional effective operators, for which the limit on the DM mass becomes much more relaxed.}, which explicitly break the DM symmetry. However, given our lack of knowledge of a possible quantum theory of gravity, and the fact that $S$ is generated by a combination of the global symmetry $S^{'}$
and $T_{3N}$ (the isospin of a broken gauge symmetry), we assume $S$ to be unbroken up to the Planck scale and avoid such constraints.
The new particles and their charges under $SU(3)_C \,\otimes$ $SU(2)_L\, \otimes$ $U(1)_Y \otimes$ $SU(2)_N \otimes\, S^{'}$ are given as:
\begin{align*}
\text{Three SU}(2)_N \text{ gauge bosons: }&\qquad\qquad X_{1,2,3}\equiv (1, 1, 0, 3, 0),\\ \\
\text{Three Dirac fermion doublets: }&\qquad\qquad n=(n_1, n_2)_{L,R}\equiv (1, 1, 0, 2, \frac{1}{2}),\\ \\
\text{One scalar doublet: }&\qquad\qquad \chi=(\chi_1, \chi_2) \equiv (1, 1, 0, 2, \frac{1}{2}),\\ \\
\text{One scalar bi-doublet: }&\qquad\qquad \zeta=\begin{pmatrix}
\zeta_1^0 & \zeta_2^0\\
\zeta_1^- & \zeta_2^-
\end{pmatrix} \equiv (1, 2, -\frac{1}{2}, 2, -\frac{1}{2}),
\end{align*}
where $\zeta$ transforms (vertically) under $SU(2)_L$ and (horizontally) under $SU(2)_N$. Furthermore, an $SU(2)_N$ scalar triplet ($\Delta$) is introduced:
\begin{align*}
\Delta = \begin{pmatrix}
\Delta_2/\sqrt{2} & \Delta_3\\
\Delta_1 & -\Delta_2/\sqrt{2}
\end{pmatrix} \equiv (1, 1, 0, 3, -1),
\end{align*}
for generating neutrino masses, which will be discussed in the next section. The crucial construct of the model lies in the choice of $S^{'}$ charges, which will become clear in a moment. Note that the only additional fermions introduced here are three families of a vector-like $SU(2)_N$ doublet $n$, which mediate the interactions of the dark sector (the particles with non-zero $S$ charge, as noted below) with the SM sector. This is why the authors of~\cite{Fraser:2014yga} proposed the model as vector boson dark matter with a {\it leptonic} connection. The field content of this model is essentially motivated by a unified $SU(7)$ prescription to generate neutrino mass and to have a stable DM, as described in~\cite{Ma:2013nga}. Also note that the presence of the left-chiral heavy neutrinos $(n_1,n_2)_L$ plays an important role in achieving light neutrino masses through the {\it inverse seesaw} mechanism, resulting in $m_n \sim \mathcal{O}$(TeV) and therefore allowing them to be explored at colliders.
Spontaneous symmetry breaking of $SU(2)_N\otimes S^{'}$ to $S=S'+T_{3N}$ happens via the non-zero vacuum expectation value (VEV) of $SU(2)_N$ scalar doublet: $\langle\chi_2\rangle = u_2$. $S$ charge assignment for the new particles is given as:
\begin{align*}
n_1, \chi_1 \sim +1, \quad &\quad n_2, \chi_2, \zeta_2, \Delta_3 \sim 0,\quad\quad
\zeta_1, \Delta_2 \sim -1, \quad\quad \Delta_1 \sim -2,\\
&X(\overline{X}) = \frac{X_1 \mp iX_2}{
\sqrt{2}} \sim \pm 1, \quad\quad Z' = X_3 \sim 0.
\end{align*}
All the SM particles have zero $S$ charge. Therefore, particles with non-zero $S$ charge are protected from decaying into the SM. We assume $X$ to be the lightest of the particles with non-zero $S$ charge, and therefore a possible DM candidate. Furthermore, the $\Delta_{1,2,3}$ scalars can become kinematically stable in certain regions of parameter space~\cite{Fraser:2014yga} and be part of a multi-component DM framework. We will investigate this possibility in detail.
The three other scalars which acquire VEV are: $\langle \zeta_2^0\rangle =v_2$, $\langle\Delta_3\rangle =u_3$, and $\langle \phi^0\rangle=v_1$. Note that this assignment is different from that in~\cite{Bhattacharya:2011tr} where $\langle \Delta_1^0 \rangle$ is also non-zero. Therefore, the $X_{1,2}$ bosons will have equal masses in this model, and more importantly $S=S'+T_{3N}$ global symmetry remains unbroken unlike in~\cite{Bhattacharya:2011tr}. The masses of the gauge bosons are given by:
\begin{equation}
\begin{split}
m_W^2 = \frac{1}{2} g_2^2\left(v_1^2+v_2^2\right), \quad\quad
m_{X}^2 = \frac{1}{2}g_N^2\left(u_2^2+v_2^2+2u_3^2\right), \quad\quad
m_{Z'}^2 \simeq \frac{1}{2}g_N^2\left(u_2^2+v_2^2+4u_3^2\right),
\end{split}
\end{equation}
where $Z-Z'$ mixing matrix is given by:
\begin{equation}
m_{Z,Z'}^2 = \frac{1}{2}\begin{pmatrix}
\left(g_1^2+g_2^2\right)\left(v_1^2+v_2^2\right) & -g_N\sqrt{g_1^2+g_2^2}\, v_2^2\\
-g_N\sqrt{g_1^2+g_2^2}\, v_2^2 & g_N^2\left(u_2^2 + v_2^2 + 4u_3^2\right)
\end{pmatrix}.
\end{equation}
To ensure small $Z$-$Z^{'}$ mixing~\cite{Andreev:2014fwa}, we assume $v_2\ll u_2$. Furthermore, $u_3$ is assumed to be small, which breaks the global lepton number symmetry ($L$) to lepton parity ($(-1)^L$), as explained in Sec.~\ref{sec:neutrino mass}. Therefore, the $X$ boson masses are nearly degenerate, i.e. $m_{Z'}(m_{X_3})\simeq m_X$. The model nevertheless remains phenomenologically viable in a large parameter space, as $Z'$ has no tree-level coupling to the SM. This hides the $Z'$ of this model from observation at the LHC and adds to the freedom of choosing $m_{Z'}$ as a free parameter. This should be contrasted with the case in~\cite{Barman:2017yzr}, where there is a lower limit $M_{X_{1,2,3}} \geqslant 1$ TeV for the degenerate vector boson DM case, in order to respect the bound from $Z^{'}$ search data.
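The claimed smallness of the $Z$-$Z'$ mixing for $v_2\ll u_2$, and the near-degeneracy $m_{Z'}\simeq m_X$ for MeV-scale $u_3$, can be checked by diagonalizing the $2\times 2$ mass matrix above. The sketch below uses illustrative values for the couplings and VEVs; these numbers are assumptions, not fitted parameters of the model:

```python
import numpy as np

# Illustrative inputs (assumed values, GeV units): v2 << u2 and u3 ~ MeV,
# as in the text; g1, g2 chosen to roughly reproduce m_Z.
g1, g2, gN = 0.357, 0.652, 0.5
v1, v2 = 174.0, 1.0
u2, u3 = 2000.0, 1.0e-3

off = -gN * np.sqrt(g1**2 + g2**2) * v2**2
M2 = 0.5 * np.array([
    [(g1**2 + g2**2) * (v1**2 + v2**2), off],
    [off, gN**2 * (u2**2 + v2**2 + 4.0 * u3**2)],
])

evals, evecs = np.linalg.eigh(M2)          # ascending eigenvalues
mZ, mZp = np.sqrt(evals)
sin_theta = abs(evecs[1, 0])               # Z' admixture in the light eigenstate

# X mass from the formula above: split from m_Z' only by g_N^2 u3^2
mX = np.sqrt(0.5 * gN**2 * (u2**2 + v2**2 + 2.0 * u3**2))
```

With these inputs the mixing angle comes out at the $10^{-7}$ level and $m_{Z'}-m_X$ is far below the MeV scale, illustrating why co-annihilation becomes relevant in the small-$u_3$ regime.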
The scalar potential of this model remains the same as in the original proposal~\cite{Fraser:2014yga} and is noted in Appendix-A of the paper. We also do not address the details of the SSB and the physical scalars appearing in this framework. We do, however, provide the approximate SM-like Higgs eigenstate:
\begin{equation}
h = -\phi_{2R}^0 + \left(\frac{f_5\, v_1}{\lambda_4\, u_2}\right)\,\chi_{2R} - \left( \frac{2 f_5^2\, v_1}{f_4\lambda_4\, v_2}\right)\, \zeta_{2R}^0,
\label{eq:higgsmass}
\end{equation}
with
\begin{equation}
m_h^2 \simeq \frac{2v_1^2 \left(\lambda_2\lambda_4 - f_5^2\right)}{\lambda_4}\label{eq:M-Hig}.
\end{equation}
All the dimensionless couplings are borrowed from the scalar potential. An important point to note is that the measured Higgs mass (125 GeV) yields a relation between $\frac{f_5^2}{\lambda_4}$ and $\lambda_2$ (using Eq.~\ref{eq:M-Hig}), as shown in Fig.~\ref{fig:f5}. Note that Fig.~\ref{fig:f5} does not strictly constrain $f_5^2/\lambda_4$ (it can be large for larger $\lambda_2$). $f_5^2/\lambda_4$ essentially determines the $SU(2)_N$ Higgs components $\left(\chi_{2R},\zeta_{2R}\right)$ present in the SM-like Higgs, and this will be limited by the production and decay of the Higgs observed at the LHC. In the limit of heavier $SU(2)_N$ fields, we choose a moderate range of $f_5/\lambda_4$: \{0.1-0.6\} for the further analysis.
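The linear relation behind Fig.~\ref{fig:f5} follows directly from inverting Eq.~\ref{eq:M-Hig}, which can be rewritten as $m_h^2 = 2v_1^2(\lambda_2 - f_5^2/\lambda_4)$. A minimal numerical sketch, taking $v_1\simeq 174$ GeV as an assumption (reasonable since $v_2\ll v_1$, so $v_1$ carries essentially the SM VEV):

```python
# Invert the Higgs-mass relation m_h^2 = 2 v1^2 (lambda2 - f5^2/lambda4).
m_h = 125.0    # GeV, measured Higgs mass
v1 = 174.0     # GeV (assumed: v2 << v1)

def f5sq_over_lambda4(lambda2):
    """f5^2/lambda4 required by m_h, linear in lambda2."""
    return lambda2 - m_h**2 / (2.0 * v1**2)

lambda2 = 0.5
ratio = f5sq_over_lambda4(lambda2)                       # ~0.24
m_h_check = (2.0 * v1**2 * (lambda2 - ratio)) ** 0.5     # recovers 125 GeV
```

A real $f_5$ additionally requires $\lambda_2 > m_h^2/(2v_1^2)\simeq 0.26$ under this assumption, in line with the positivity requirement $\lambda_{2,4}>0$ quoted in the figure caption.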
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.4]{f5Higgs}
\caption{$\frac{f_5^2}{\lambda_4}$ plotted against $\lambda_2$ using the Higgs mass constraint in Eq.~\ref{eq:M-Hig}. Note that $\lambda_{2,4} > 0$ in order to ensure the stability of the scalar potential.
\label{fig:f5}}
\end{figure}
\section{Neutrino Mass}
\label{sec:neutrino mass}
One of the important features of the model is that it successfully generates neutrino mass, thus addressing dark matter and neutrinos under one umbrella. The scalar bi-doublet ($\zeta$), which acts as a mediator between the dark and visible sectors, also generates masses for the neutrinos. The Yukawa terms responsible for neutrino mass generation are given by:
\begin{eqnarray}
f_{\zeta} &\left[ \left(\overline{\nu}_L \zeta_1^0 + \overline{e}_L \zeta_1^- \right) n_{1R} + \left( \overline{\nu}_L \zeta_2^0 + \overline{e}_L \zeta_2^- \right) n_{2R} \right] \label{f-zeta}\\
f_{\Delta} &\left[ n_1 n_1 \Delta_1 + \left(n_1 n_2 + n_2 n_1\right) \Delta_2/\sqrt{2} - n_2 n_2 \Delta_3 \right],\label{f-delta}
\label{eq:yukawaint}
\end{eqnarray}
where in the second line $nn$ includes both $n_Ln_L$ and $n_Rn_R$. Lepton number is conserved in~\eqref{f-zeta} with $n$ carrying $L=1$, and is broken to lepton parity, i.e. $(-1)^L$, by the $nn$ terms in~\eqref{f-delta}. After SSB, we have the following mass terms for the neutrinos:
\begin{align}
f_{\zeta}\, v_2\, \overline{\nu}_L n_{2R} - f_{\Delta}^L\, u_3\, n_{2L} n_{2L} - f_{\Delta}^R\, u_3\, n_{2R} n_{2R}+ \text{h.c.}
\end{align}
where $f_{\zeta}$ and $f_{\Delta}$ are $3\times 3$ matrices, and the neutrino mass matrix in the $\left(\overline{\nu}_L, n_{2R}, \overline{n}_{2L}\right)$ basis is given by:
\begin{align}
M_{\nu} = \begin{pmatrix}
0 & m_D & 0\\
m_D & m_2' & M \\
0 & M & m_2
\end{pmatrix},
\end{align}
where each entry is a $3\times 3$ matrix with $m_D = f_{\zeta}\, v_2$, $m_2' = f_{\Delta}^R\, u_3$, $m_2 = {f_{\Delta}^{L}}^*\, u_3$, and $M$ is a free Dirac mass term in $M \left(\overline{n}_{2L} n_{2R} + \overline{n}_{2R} n_{2L}\right)$. The inverse seesaw neutrino mass is thus generated and given by:
\begin{eqnarray}
m_{\nu} \simeq \frac{m_D^2\, m_2}{M^2} = f_{\zeta}^2 f_{\Delta}\, \left(\frac{v_2}{M}\right)^2 u_3.
\label{eq:correctnumass}
\end{eqnarray}
\begin{figure}[htb!]
$$
\includegraphics[height=6.cm]{nuplt1.png}
\includegraphics[height=6.cm]{fzeta.png}
$$
\centering
$$
\includegraphics[height=6.cm]{u3mplt.png}
$$
\caption{Top Left: $f_\Delta$ versus heavy neutrino mass $M$ ($\sim\mathcal{O}$(hundreds of GeV)) for different choices of $u_3$ ($\sim$ MeV), keeping $m_{\nu}\sim 0.1$ eV with $f_\zeta\sim\mathcal{O}(1)$; Top Right: $f_{\zeta}$ versus heavy neutrino mass $M$ ($\sim\mathcal{O}$(hundreds of GeV)) for different values of the VEV $u_3$, yielding the right neutrino mass for $f_{\Delta}\sim\mathcal{O}(1)$. The black dashed line shows the heavy neutrino mass chosen for the benchmark points (Table~\ref{tab:bp}). Bottom: $u_3$ (in GeV) versus $M$ ($\sim\mathcal{O}(10^7)$ GeV) for different values of the coupling $f_{\Delta}$, where each contour satisfies $m_{\nu}\sim 0.1$ eV.}
\label{fig:rhnmass}
\end{figure}
Assuming $m_2, m_2', m_D \ll M$, $n$ remains pseudo-Dirac with $m_n \simeq M$. Since $\zeta$ is the portal between the SM and the hidden sector, the collider signatures of this model involve processes with $n$ in the final state. Therefore, a phenomenologically interesting choice of parameters is $M\sim \mathcal{O}(\text{TeV})$ with $f_{\zeta}\sim 1$. Furthermore, we assume $v_2 \simeq 1$ GeV in order to have small $Z$-$Z'$ mixing. Using $\sum m_{\nu} < 0.17$ eV~\cite{Couchot:2017pvz}, we take $m_{\nu}\simeq \mathcal{O}(0.1 \text{ eV})$ such that:
\begin{eqnarray}
u_3 \sim \frac{0.1}{f_{\Delta}} \text{ MeV}.
\end{eqnarray}
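As a quick numerical cross-check of Eq.~\ref{eq:correctnumass} and the estimate above, the following sketch assumes $\mathcal{O}(1)$ Yukawas and expresses all masses in GeV:

```python
# m_nu ~ f_zeta^2 f_Delta (v2/M)^2 u3, converted from GeV to eV
def m_nu_eV(f_zeta, f_Delta, v2, M, u3):
    return f_zeta**2 * f_Delta * (v2 / M) ** 2 * u3 * 1.0e9

# u3 needed to reach a target light-neutrino mass (inverting the same relation)
def u3_for(m_nu_target_eV, f_zeta, f_Delta, v2, M):
    return m_nu_target_eV * 1.0e-9 * M**2 / (f_zeta**2 * f_Delta * v2**2)

# f_zeta = f_Delta = 1, v2 = 1 GeV, M = 1 TeV:
m_nu = m_nu_eV(1.0, 1.0, 1.0, 1000.0, 1.0e-4)   # u3 = 0.1 MeV -> m_nu = 0.1 eV
u3 = u3_for(0.1, 1.0, 1.0, 1.0, 1000.0)          # -> 1e-4 GeV = 0.1 MeV
```

This reproduces the quoted scaling $u_3 \sim (0.1/f_{\Delta})$ MeV for TeV-scale heavy neutrinos.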
Contour plots for the correct neutrino mass $m_{\nu}\simeq\mathcal{O}(0.1 \text{ eV})$, following Eq.~\ref{eq:correctnumass}, are depicted in Fig.~\ref{fig:rhnmass}. The contours in the $M$-$f_\Delta$ plane are shown for $f_\zeta\sim\mathcal{O}(1)$ and different choices of $u_3$ in the top-left panel of Fig.~\ref{fig:rhnmass}. The same exercise is done in the $M$-$f_\zeta$ plane for $f_\Delta\sim\mathcal{O}(1)$\footnote{While a large Yukawa may endanger vacuum stability, the extended scalar sector is expected to rescue it.} in the top-right panel for different $u_3$. We choose a few benchmark points at a heavy neutrino mass of $450$ GeV, shown by the vertical dashed line in this plot. In both of these cases, $X$ is nearly degenerate with $X_3$ due to the very small value of $u_3$ ($\sim$ MeV). Therefore, co-annihilations play an important role in determining the relic abundance of the $X$ DM. We will explore this in detail in the DM section.
The other possible regime is $M\sim\mathcal{O}(10^7)$ GeV, which allows larger $u_3$ ($\sim$ hundreds of GeV). This is shown in the bottom panel of Fig.~\ref{fig:rhnmass} for $f_\zeta\sim\mathcal{O}(1)$ and $f_\Delta:\{0.01, 0.9\}$. The mass degeneracy between $X$ and $X_3$ is broken in such a scenario, and co-annihilations thus become subdominant to the annihilation processes for $X$ DM. We will also show that for $M\sim 500$ GeV the heavy neutrinos are stable and can be DM candidates, while heavy neutrinos with $M\sim 10^7$ GeV decay and do not contribute to the DM. Such heavy neutrinos are therefore also viable from neutrino mass and DM constraints, but would complicate collider detection. We therefore choose the lighter $n_{1,2}$ scenario (as in the top panel of Fig.~\ref{fig:rhnmass}) and show that it plays a crucial role in yielding possible leptonic signatures at the LHC.
\section{Dark Matter Phenomenology}
\label{sec:DM pheno}
In this analysis, we highlight a couple of interesting features of the DM phenomenology of the model: (i) the alteration of the single-component vector boson DM freeze-out and its relic density due to the co-annihilation contribution, which was not taken into account in the earlier analysis~\cite{Fraser:2014yga}, and (ii) the presence of a second DM candidate ($\Delta$) in a large region of the model's parameter space, which is significantly influenced by DM-DM interactions. The heavy neutrinos $(n_1,n_2)$ assumed in this framework can also be kinematically stable and serve as DM. However, for generating the correct neutrino masses, the relic density of these particles is very small. We discuss this separately in subsection~\ref{sec:rhn}.
\subsection{Possible DM candidates of the model}
\label{sec:dmcandidate}
At the very outset, we sketch the parameter space of the model where different DM components can coexist.
\begin{figure}[htb]
$$
\includegraphics[scale=0.72]{deltadecay.pdf}
$$
\caption{Decay of the triplet scalars to vector boson $X$ for $m_{\Delta_{1,2,3}} > m_X$. Left: Decay of $\Delta_2$ to SM via $\Delta_3$; Right: Decay of $\Delta_1$ to SM and $X$ via off-shell $\Delta_3$ and $\Delta_2$.}
\label{fig:del2decay}
\end{figure}
The $\Delta_1$ and $\Delta_2$ components of the $SU(2)_N$ scalar triplet carry non-zero $S$ charges (as mentioned in Sec.~\ref{sec:model}). As they are charge neutral, they qualify as DM if their stability is ensured. $\Delta_3$, having zero $S$ charge, mixes with the SM Higgs due to its non-zero VEV (instigated by the $f_8\Phi^{\dagger}\Phi Tr(\Delta^{\dagger}\Delta)$ term in the scalar potential) and decays to the SM. Therefore, $\Delta_3$ does not qualify as DM. On the other hand, $\Delta_1$ and $\Delta_2$ have the following interaction vertices with the vector boson $X$: $\Delta_1\Delta_2^{*}X$, $\Delta_2\Delta_3^{*}X$, $\Delta_1XX$, $\Delta_2 X X_3$. As a result, $\Delta_2$ can decay to the SM via an off-shell $\Delta_3$, as shown on the left-hand side (LHS) of Fig.~\ref{fig:del2decay}. Similarly, $\Delta_1$ can decay to the SM via off-shell $\Delta_2$ and $\Delta_3$, as shown on the right-hand side (RHS) of Fig.~\ref{fig:del2decay}. So $\Delta_1$ and/or $\Delta_2$ can be potential DM candidates if the decays shown in Fig.~\ref{fig:del2decay} are forbidden. The viability of $\Delta_1$ and $\Delta_2$ as DM is discussed in two possible scenarios: (i) degenerate triplet scalar ($m_{\Delta_1}=m_{\Delta_2}=m_{\Delta_3}=m_{\Delta}$), (ii) non-degenerate triplet scalar ($m_{\Delta_1} \ne m_{\Delta_2} \ne m_{\Delta_3}$). \\
\begin{figure}
\centering
\includegraphics[scale=0.45]{regiondivide.png}
\caption{Regions of $m_X-m_{\Delta}$ (in GeV) parameter space, where single component and multi-component DM frameworks can be realised for degenerate scalar triplet masses $m_{\Delta_1}=m_{\Delta_2}=m_{\Delta_3}=m_{\Delta}$. In the white region ($2m_X<m_\Delta$), only $X$ can be a single component DM. In the pink region ($m_\Delta/2 <m_X<m_\Delta$), two component DM with $\{X,\Delta_1\}$ is operative. In the green region ($m_X>m_\Delta$), $\{\Delta_1,\Delta_2\}$ forms degenerate two-component DM.}
\label{fig:regions}
\end{figure}
(a) {\bf Degenerate triplet scalar}: The triplet scalar components can be degenerate in the limit of $f_7=0$~\cite{Fraser:2014yga}. In this limit,
\begin{itemize}
\item when $m_{\Delta}>m_X$:
\begin{itemize}
\item[i)] $X$ is stable and a DM.
\item[ii)] $\Delta_2 \rightarrow X b\bar{b}$ is always open since $m_X<m_{\Delta}$; hence $\Delta_2$ can never be a DM.
\item [iii)] If $m_{\Delta}<2 m_X$ then $\Delta_1$ is stable and becomes second DM component.
\item[iv)] If $m_{\Delta}>2 m_X$, then $\Delta_1$ decays and is not a DM candidate.
\end{itemize}
\item when $m_{\Delta}<m_X$:
\begin{itemize}
\item[i)] By default this implies $m_{\Delta}<2 m_X$ and hence $\Delta_1$ is stable and a DM.
\item[ii)] $\Delta_2$ is also stable and acts as second degenerate DM component with $\Delta_1$.
\item[iii)] $X$ can decay into $\Delta_2$ (and subsequently to $\bar{b}b$) so it can not be a DM candidate.
\end{itemize}
\end{itemize}
Therefore, when $m_{\Delta}>m_X$, we can have both two-component (for $m_{\Delta}<2 m_X: \{X, \Delta_1\}$) and one-component DM scenario (for $m_{\Delta}>2 m_X: \{X\}$). On the other hand, when $m_{\Delta}<m_X$, we will have a degenerate 2-component DM scenario comprising of $\Delta_1$ and $\Delta_2$. The above situation for degenerate scalar triplet case is summarised in Fig.~\ref{fig:regions}.
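The kinematic case analysis above can be condensed into a small classifier; the sketch below encodes exactly the thresholds of the bullets above:

```python
def dm_components(m_X, m_Delta):
    """DM components in the degenerate-triplet limit (f7 = 0)."""
    if m_Delta > m_X:
        # Delta_2 -> X b bbar is open, so Delta_2 never survives here
        if m_Delta < 2.0 * m_X:
            return {"X", "Delta_1"}       # pink region: two-component
        return {"X"}                       # white region: X alone
    # m_Delta < m_X: X -> Delta_2 + SM is open, so X decays
    return {"Delta_1", "Delta_2"}          # green region: degenerate pair
```

For instance, $(m_X,m_\Delta)=(300,500)$ GeV lands in the two-component $\{X,\Delta_1\}$ region of Fig.~\ref{fig:regions}.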
\begin{figure}
\centering
\includegraphics[scale=0.3]{Non-deg.png}
\caption{Main kinematic regions for single and two component DMs for non-degenerate scalar triplet scenario. They are: $\{\Delta_1,\Delta_2\},\{\Delta_1,X\},\{X\},\{\Delta_2\}$.}
\label{fig:regions2}
\end{figure}
(b) {\bf Non-degenerate triplet scalar}: The non-degenerate scalar triplet scenario ($f_7 \ne 0$) admits four possible DM frameworks depending on the hierarchy of $m_{\Delta_1}$, $m_{\Delta_2}$ and $m_X$. As understood from Fig.~\ref{fig:del2decay}, of $\Delta_2$ and $X$ only one can be a DM, while the possibility of $\Delta_1$ as DM is guided by the hierarchy between $m_{\Delta_1}$ and $2 m_X$. Therefore, the situations of interest are:
\begin{enumerate}
\item $\Delta_1, \Delta_2$ forming non-degenerate DM components : when $m_X>m_{\Delta_2}$ and $m_{\Delta_1}<2 m_X$,
\item $\Delta_1, X$ forming non-degenerate DM components : when $m_X<m_{\Delta_2}$ and $m_{\Delta_1}<2 m_X$
\item $X$ as single component DM: when $m_{\Delta_2}>m_X,~ m_{\Delta_1}>2m_X$
\item $\Delta_2$ as single component DM: when $m_{\Delta_2}<m_X,~ m_{\Delta_1}>2m_X$
\end{enumerate}
This is also summarised in Fig.~\ref{fig:regions2}. Here we would like to mention that the decay lifetime of $\Delta_3$ to SM $b\bar{b}$ can be comparable to the age of the Universe (in other words, $\Delta_3$ can be made stable on cosmological time scales) if the coupling of $\Delta_3$ to the SM (given by $f_8$) is vanishingly small, $\sim\mathcal{O}(10^{-22})$, as estimated in Appendix-C. In that case $\Delta_3$ can also be a DM along with $\Delta_{1,2}$ and/or $X$. However, given that the annihilation of the scalars to the SM is also controlled by $f_8$, such a tiny coupling would yield an overabundance of scalar DM through the freeze-out mechanism. We therefore refrain from elaborating on such prospects.
We analyze the degenerate scalar triplet model here for simplicity and economy of parameters. It alone offers a variety of single-component ($X$) and multi-component interacting DM set-ups (in the form of \{$\Delta_1,X$\} or \{$\Delta_1,\Delta_2$\}).
\subsection{$X$ as single component vector boson DM}
\label{sec:X DM}
$X$ can appear as a single-component DM in the degenerate scalar triplet case when $m_\Delta>2m_X$. It can also be a single-component DM in the non-degenerate scalar triplet case when $m_{\Delta_2}>m_X$ and $m_{\Delta_1}>2m_X$. The dominant annihilation channels for $X$, shown in Fig.~\ref{fig:annihilx1}, fall into two categories: (i) annihilation to heavy scalars ($\zeta$), shown in the upper panel, and (ii) annihilation to the SM through the Higgs portal, shown in the lower panel. The latter was not considered in the previous work~\cite{Fraser:2014yga}. Throughout the analysis, all annihilation cross sections are calculated at threshold, $s_0=4 m_X^2$, keeping only the dominant $s$-wave contribution. The total annihilation cross section of $X$ is then given by:
\begin{align}
\langle\sigma v_{rel}\rangle &= \frac{g_N^4}{576\pi m_X^2}\sqrt{1-\frac{m_{\zeta_2}^2}{m_X^2}}\left(2+\left[1+\frac{4\left(m_X^2-m_{\zeta_2}^2\right)}{m_{\zeta_1}^2+m_X^2-m_{\zeta_2}^2}\right]^2\right)\nonumber\\
&+ \frac{m_{W,Z}^4}{48\pi m_X^2}\sqrt{1-\frac{m_{W,Z}^2}{m_X^2}}\left[\frac{g_N^4 \left(f_5/\lambda_4\right)^2}{\left(4 m_X^2-m_h^2\right)^2+\Gamma_h^2 m_h^2}\right]\left[3+4\Big\{\left(\frac{m_X}{m_{W,Z}}\right)^4-\left(\frac{m_X}{m_{W,Z}}\right)^2\Big\}\right]\nonumber\\
&+ \frac{m_f^2}{24\pi}\left(1-\frac{m_f^2}{m_X^2}\right)^{3/2}\left(\frac{g_N^4 \left(f_5/\lambda_4\right)^2}{\left(4 m_X^2-m_h^2\right)^2+\Gamma_h^2 m_h^2}\right)+\frac{3 m_h^4}{128\pi m_X^2}\sqrt{1-\frac{m_h^2}{m_X^2}}\nonumber\\
& \left[\frac{g_N^4 \left(f_5/\lambda_4\right)^2}{\left(4 m_X^2-m_h^2\right)^2+\Gamma_h^2 m_h^2}\right].
\label{eq:sigx}
\end{align}
The first term corresponds to the annihilation of $X$ to the lighter exotic scalar $\zeta_2$ via $t$-channel exchange of its heavier companion $\zeta_1$ and a four-point interaction, as shown in the upper panel of Fig.~\ref{fig:annihilx1}. These interaction vertices depend solely on the $SU(2)_N$ gauge coupling $g_N$. The next three terms are annihilations to the SM gauge bosons ($W^{\pm},Z$), the SM fermions and the SM Higgs, respectively, through the Higgs portal. These cross sections additionally depend on $f_5/\lambda_4$. The cross sections are obtained for $m_{\zeta_1}>m_X > m_{\zeta_2}$, ensuring the stability of $X$. Otherwise ($m_{\zeta_{1,2}}>m_X$), the annihilation proceeds to SM final states only.
\begin{figure}[htb!]
$$
\includegraphics[height=4cm]{annihilx1.png}
\includegraphics[height=3.8cm]{annihilx2.png}
$$
$$
\includegraphics[height=3.0cm]{annihil1.png}
\includegraphics[height=3.0cm]{annihil2.png}
\includegraphics[height=3.0cm]{annihil3.png}
$$
\caption{Top: Annihilation of $X$ DM into heavy scalars $\zeta_2, {\zeta_2}^{\dagger}$ via $t$-channel mediation of $\zeta_1$ and a four-point interaction, assuming $m_{\zeta_2} < m_X < m_{\zeta_1}$. Bottom: Annihilation of $X$ into SM via Higgs mediation in the $s$-channel.}
\label{fig:annihilx1}
\end{figure}
\begin{figure}[htb!]
$$
\includegraphics[height=3.5cm]{coannihilx.png}
$$
\caption{Co-annihilation of $X$ with $X_3$ to $\zeta_1 \zeta_2^\dagger$ for $\frac{1}{2}\left(m_{\zeta_1}+m_{\zeta_2}\right) < m_X < m_{\zeta_1}+m_{\zeta_2} $ (see text for details).}
\label{fig:coannihilx}
\end{figure}
Importantly, $X$ can undergo co-annihilation with $X_3$ via the diagram shown in Fig.~\ref{fig:coannihilx}. The effective cross section in this case can be written as:
\begin{equation}
\langle\sigma \; {\rm v}\rangle_{\text{eff}} = (\sigma\; {\rm v} )_{X\bar{X}\rightarrow SM,~\zeta_2\zeta_2^\dagger}~+~(\sigma\; {\rm v} )_{\bar{X}X_3\rightarrow \zeta_1 \zeta_2^\dagger +\text{h.c.}}\left(1+\frac{\Delta m}{m_X}\right)^{\frac{3}{2}}\exp\left(-\frac{\Delta m}{m_X}\,x\right),
\label{eq:coann}
\end{equation}
where $\Delta m=m_{X_3}-m_{X}$ and $x=\frac{m_X}{T}$. The contribution from co-annihilation was not considered in the earlier analysis of this model.
For co-annihilation to occur:
\begin{equation*}
m_{\zeta_1}+m_{\zeta_2} < m_X + m_{X_3}\implies m_X > \frac{1}{2}\left(m_{\zeta_1}+m_{\zeta_2}\right),
\end{equation*}
in the limit $m_X\sim m_{X_3}$. Again, for stability of $X$:
\begin{equation*}
m_X < m_{\zeta_1}+m_{\zeta_2}.
\end{equation*}
Together, we have the following condition for co-annihilation:
\begin{eqnarray}
\frac{1}{2}\left(m_{\zeta_1}+m_{\zeta_2}\right) < m_X < m_{\zeta_1}+m_{\zeta_2}.
\end{eqnarray}
We stress once again that co-annihilation contributions become very important in this model as $\Delta m=m_{X_3}-m_{X} \to 0$. This happens for small $u_3$ ($\sim$ MeV), which is required for neutrino mass generation with heavy neutrinos of the order of hundreds of GeV, as discussed earlier.
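The Boltzmann-suppression factor multiplying the co-annihilation term in Eq.~\ref{eq:coann} makes this statement quantitative. A sketch, where the freeze-out value $x\simeq 25$ is an assumed typical number:

```python
import math

def coann_weight(delta_m, m_X, x):
    """Relative weight of the X-X3 co-annihilation term in Eq. (coann)."""
    return (1.0 + delta_m / m_X) ** 1.5 * math.exp(-delta_m * x / m_X)

m_X, x_f = 860.0, 25.0
w_degenerate = coann_weight(1.0e-3, m_X, x_f)   # u3 ~ MeV: essentially 1
w_split = coann_weight(200.0, m_X, x_f)         # large splitting: suppressed
```

The factor is essentially unity for MeV-scale splittings but drops below the percent level for a splitting of a few hundred GeV, which is why co-annihilation matters only in the degenerate regime.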
\begin{figure}[htb!]
$$
\includegraphics[height=7cm]{beqcompare.png}
$$
\caption{Freeze-out of the vector boson DM ($X$), shown in the $y=\lambda Y$ versus $x=\frac{m_X}{T}$ plane for two combinations of DM mass and $SU(2)_N$ coupling: $\{m_X,g_N^2\}=\{320 ~\rm{GeV},0.3\}$ (in red) and $\{860 ~\rm{GeV},0.6\}$ (in green). In each case, the equilibrium distribution is shown as a dotted line. The cases including co-annihilation contributions are shown as darker thick lines. The correct relic density is indicated by the blue dotted line.}
\label{fig:beq1}
\end{figure}
The Boltzmann equation (BEQ) for the single-component $X$ DM can be written as:
\begin{equation}
\frac{dy}{dx}= -\frac{m_{X}}{x^2} \left[\sigma _0(y^2-{y^{EQ}}^2)\right],
\label{eq:beq4}
\end{equation}
where $\sigma_0=(\sigma\rm{v})_{\rm{eff}}$ is given in Eq.~\ref{eq:coann}. The equilibrium co-moving number density is $Y^{EQ}=0.145 \frac{g}{g_{*s}} (\frac{m_{X}}{T})^{\frac{3}{2}}e^{-\frac{m_{X}}{T}}$, where $g=3$ is the number of degrees of freedom (DoF) of the vector boson DM $X$ and $g_{*s}=106.7$ is the total DoF. The BEQ is more easily solved in terms of the modified yield $y=\lambda Y$, where $\lambda=0.264 ~ m_{Pl} \frac{g_{*s}}{\sqrt{g_*}}$. A typical freeze-out of $X$ is shown in Fig.~\ref{fig:beq1}.
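A freeze-out of the kind shown in Fig.~\ref{fig:beq1} can be reproduced with a few lines of numerics. The sketch below integrates Eq.~\ref{eq:beq4} with an unconditionally stable backward-Euler step (the implicit update is a quadratic in $y$, solvable in closed form); the effective cross section $\sigma_0$ and the relic-density conversion factor are illustrative assumptions, not values extracted from the model:

```python
import math

m_X  = 860.0          # GeV (one of the benchmark masses)
sig0 = 2.0e-9         # GeV^-2, assumed thermal-size effective <sigma v>
g, gs = 3.0, 106.7    # DM and total relativistic d.o.f. (taking g_* = g_*s)
m_Pl = 1.22e19        # GeV
lam  = 0.264 * m_Pl * gs / math.sqrt(gs)

def y_eq(x):
    """Equilibrium modified yield y_EQ = lambda * Y_EQ."""
    return lam * 0.145 * (g / gs) * x**1.5 * math.exp(-x)

# dy/dx = -(m_X/x^2) sig0 (y^2 - y_eq^2); backward Euler solves
# a*y^2 + y - (y_n + a*y_eq^2) = 0 with a = h*m_X*sig0/x^2 at each step,
# which stays stable through the stiff equilibrium-tracking phase.
x, h, y = 10.0, 0.01, y_eq(10.0)
while x < 500.0:
    x += h
    a = h * m_X * sig0 / x**2
    y = (-1.0 + math.sqrt(1.0 + 4.0 * a * (y + a * y_eq(x) ** 2))) / (2.0 * a)

# standard conversion Omega h^2 ~ 2.755e8 (m/GeV) Y_inf (assumed constant)
omega_h2 = 2.755e8 * m_X * y / lam
```

With these inputs the yield tracks equilibrium until $x\sim 25$ and then freezes out at $\Omega h^2$ in the ballpark of $0.1$, consistent with the behaviour of the green curve in Fig.~\ref{fig:beq1}.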
For illustration, we choose two combinations of DM mass and $SU(2)_N$ coupling, $\{m_X,g_N^2\}=\{320 ~\rm{GeV},0.3\}$ and $\{860 ~\rm{GeV},0.6\}$, shown as red and green thick lines respectively. The corresponding equilibrium distributions are shown by dashed lines, so the freeze-out can easily be identified as the departure of the thick lines from the dashed ones. The inclusion of maximal co-annihilation, with $\Delta m \to 0$ in Eq.~\ref{eq:coann}, is shown by the darker thick lines for both chosen points and indicates a non-negligible effect. Here we have kept the other masses at \{$m_{\zeta_2}$,$m_{\zeta_1}$\} = \{210 GeV, 360 GeV\} and \{250 GeV, 880 GeV\} for $\{m_X,g_N^2\}=\{320 ~\rm{GeV},0.3\}$ and $\{860 ~\rm{GeV},0.6\}$ respectively. We also indicate the observed relic density by a blue dashed line, which shows that the combination $\{m_X,g_N^2\}=\{860 ~\rm{GeV},0.6\}$ yields the correct relic density once the co-annihilation contribution is included.
\begin{figure}[htb!]
$$
\includegraphics[height=5cm]{relicmxf5l4plt.png}\hspace{0.5cm}
\includegraphics[height=5cm]{relicmxg2plt.png}
$$
\caption{Relic density allowed parameter space for $X$ as a single component DM. On the left panel, allowed $m_X-g_N^2$ parameter space is shown for different $f_5/\lambda_4$; and on the right panel, $m_X-f_5/\lambda_4$ allowed parameter space is shown for different choices of $g_N^2$.}
\label{fig:mxg2relic}
\end{figure}
We will now scan the relic density allowed parameter space of single component $X$ in two different regions: (i) $m_{\zeta_1}>m_X>m_{\zeta_2}$ and (ii) $m_X<m_{\zeta_{2,1}}$. In the first case, annihilation occurs to the heavy scalar $\zeta_2$ and to the SM, while in the second it occurs only to the SM. The free parameters for the DM analysis can be chosen as:
\begin{eqnarray}
\big\{g_N^2,\frac{f_5}{\lambda_4},m_X,m_{\zeta_1},m_{\zeta_2}\big\}.
\end{eqnarray}
Both couplings are varied in the ranges \{$g_N^2$:~0.01-0.6\} and \{$\frac{f_5}{\lambda_4}$:~0.01-0.6\} to scan the parameter space. The relic density (PLANCK data: $0.1165\le \Omega h^2 \le 0.1227$) allowed parameter space for $X$ is shown in Fig.~\ref{fig:mxg2relic}. On the left panel, we show the allowed parameter space in terms of $m_X$ (in GeV) versus $g_N^2$ for different choices of $\frac{f_5}{\lambda_4}$; on the right panel, in terms of $m_X$ (in GeV) versus $\frac{f_5}{\lambda_4}$ for different choices of $g_N^2$. First, we see that larger $g_N^2$ opens up a larger range of DM mass that can satisfy the relic density constraint. On the other hand, the effect of $\frac{f_5}{\lambda_4}$ is milder than that of $g_N^2$. This is essentially because the $t$-channel annihilation of the DM to the heavy scalars ($\zeta_2$) is larger than the $s$-channel annihilation to SM particles through Higgs mediation. Low DM mass is favoured by smaller \{$g_N^2$, $\frac{f_5}{\lambda_4}$\}; for the same reason, $g_N^2$ or $f_5/\lambda_4$ needs to be as large as $\sim 0.6$ for DM masses of order 1 TeV.
The dependence of the correct relic density on $m_{\zeta_2}$ is shown in the $m_X$-$m_{\zeta_2}$ plane in Fig.~\ref{fig:mxmz2pln}, assuming $m_X>m_{\zeta_2}$. Different colour shades indicate different choices of $g_N^2$ in the left plot. The constant-$g_N^2$ regions exploit the freedom in $f_5/\lambda_4$, as shown on the right hand side of Fig.~\ref{fig:mxmz2pln}; $f_5/\lambda_4$ is insensitive to the choice of $m_{\zeta_2}$, but larger DM masses must be compensated by larger $f_5/\lambda_4$ to keep the total relic density intact.
\begin{figure}[htb!]
$$
\includegraphics[scale=0.34]{mxmz2pln.png}
\includegraphics[scale=0.3]{g204f5l4rel.png}
$$
\caption{LHS: Relic density allowed parameter space in $m_X$-$m_{\zeta_2}$ plane for all possible allowed values of $m_{\zeta_1}$ with different values of $g_N^2$ showed in different colours. RHS: Relic density allowed parameter space for $g_N^2=0.4$ in $m_X$-$m_{\zeta_2}$ plane, where different allowed values of $f_5/\lambda_4$ are shown.}
\label{fig:mxmz2pln}
\end{figure}
The direct detection interaction for $X$ occurs via $t$-channel Higgs mediation as shown in Fig.~\ref{fig:ddX}. The spin-independent direct detection cross section for scattering off a nucleus with $Z$ protons and $A-Z$ neutrons, normalized to one nucleon, is given by:
\begin{eqnarray}
\sigma^{\text{SI}} = \frac{1}{\pi}\left(\frac{m_N}{m_X+A m_N}\right)^2\left(\frac{Z f_p+\left(A-Z\right)f_n}{A}\right)^2,
\end{eqnarray}
where $f_p$ and $f_n$ are the form factors given by~\cite{Durr:2015dna}
\begin{align}
\frac{f_p}{m_p}=& -0.152\left[\frac{g_N^2\left(f_5/\lambda_4\right)}{4 m_h^2}\right]-0.848\left[\frac{g_N^2\left(f_5/\lambda_4\right)}{54 m_h^2}\right]\\
\frac{f_n}{m_n}=& -0.155\left[\frac{g_N^2\left(f_5/\lambda_4\right)}{4 m_h^2}\right]-0.845\left[\frac{g_N^2\left(f_5/\lambda_4\right)}{54 m_h^2}\right]
\end{align}
where we used:
\begin{equation}
\frac{f_N}{m_N} = \left[ \sum_{u,d,s} f_q^N + \frac{2}{27} \left(1- \sum_{u,d,s} f_q^N \right)\right] \left[\frac{g_N^2\left(f_5/\lambda_4\right)}{4 m_h^2}\right]
\end{equation}
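As a numerical cross-check, the sketch below evaluates the cross-section formulas above for a xenon target; the coupling and mass values are illustrative assumptions, not fitted benchmark values.

```python
import math

# Illustrative inputs (assumed values)
g2, r = 0.4, 0.3                  # g_N^2 and f_5/lambda_4
m_x, m_h = 500.0, 125.0           # DM and Higgs masses [GeV]
m_p, m_neut = 0.938, 0.940        # proton and neutron masses [GeV]
Z, A = 54, 131                    # xenon target
GEV2_TO_CM2 = 3.894e-28           # 1 GeV^-2 in cm^2

c4 = g2 * r / (4 * m_h**2)
c54 = g2 * r / (54 * m_h**2)
f_p = m_p * (-0.152 * c4 - 0.848 * c54)     # nucleon form factors as given above
f_n = m_neut * (-0.155 * c4 - 0.845 * c54)

m_N = 0.5 * (m_p + m_neut)
sigma_si = (1 / math.pi) * (m_N / (m_x + A * m_N)) ** 2 \
           * ((Z * f_p + (A - Z) * f_n) / A) ** 2       # [GeV^-2]
print(f"{sigma_si * GEV2_TO_CM2:.2e} cm^2")
```

For these inputs the result lands in the $10^{-47}$-$10^{-46}~\rm{cm}^2$ ballpark probed by current experiments.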
\begin{figure}[htb!]
$$
\includegraphics[scale=0.4]{xdd.pdf}
$$
\caption{Direct search diagram for vector boson DM X.}
\label{fig:ddX}
\end{figure}
The above equations yield a bound on $f_5/\lambda_4$ from the non-observation of $X$ in direct search experiments for a given $g_N^2$. This can be seen from Fig.~\ref{fig:f5l4bound}, where we show contours of $f_5/\lambda_4$ as a function of DM mass, satisfying direct search constraints from PandaX~\cite{Cui:2017nnn} for different choices of $g_N^2$. Any point in the shaded region below a curve is allowed by direct search data. We can see that the larger $g_N^2$ is, the tighter the limit on $\frac{f_5}{\lambda_4}$ becomes. Conversely, larger DM masses allow larger $\frac{f_5}{\lambda_4}$, since the direct search cross-section is proportional to the coupling and inversely proportional to the DM mass.
\begin{figure}[htb!]
$$
\includegraphics[scale=0.33]{f5l4panda_contours.png}
$$
\caption{Contours in $\frac{f_5}{\lambda_4}-m_X$ plane, satisfying the direct search bound from PandaX for different values of the gauge coupling $g_N^2$, shown in blue ($g_N^2=0.2$), orange ($g_N^2=0.4$) and green ($g_N^2=0.8$).}
\label{fig:f5l4bound}
\end{figure}
The spin-independent direct search cross-section for the relic density allowed parameter space of the single component $X$ DM is shown in Fig.~\ref{fig:ddx} as a function of DM mass, for $m_{\zeta_1}>m_X>m_{\zeta_2}$. On the upper panel we show the $g_N^2$ dependence through different colour shades, while on the lower panel we show the dependence on $\frac{f_5}{\lambda_4}$. The exclusion limit from PandaX is shown by the black dashed line, and the future limit from XENONnT~\cite{Aprile:2017iyp} by the black dot-dashed line. We see that the single component $X$ DM fits nicely between these two curves, giving this model a chance to be discovered by future direct search experiments. Note that the direct search cross-section is only mildly sensitive to $g_N^2$, as the constant-$g_N^2$ regions are stacked horizontally towards larger DM mass; constant-$\frac{f_5}{\lambda_4}$ regions, on the other hand, are stacked vertically, yielding larger direct search cross-sections for larger $\frac{f_5}{\lambda_4}$.
\begin{figure}[htb!]
$$
\includegraphics[scale=0.58]{mxg2sigplt.pdf}
$$
$$
\includegraphics[scale=0.58]{mxf5l4sigplt.pdf}
$$
\caption{Spin independent direct search cross-section for relic density allowed parameter space for a single component $X$. Top: different $g_N^2$ regions are shown with different colours. Bottom: different $f_5/\lambda_4$ regions are shown. The exclusion limit from PandaX and future limit from XENONnT are shown through black dashed and black dot-dashed lines respectively. }
\label{fig:ddx}
\end{figure}
Finally, we tabulate some benchmark points (BP) in Table~\ref{tab:bp} which satisfy both relic density and direct search constraints. Note that since these points are chosen with $2m_\Delta>m_X>m_\Delta$, $X$ is the only DM component among $X$ and $\Delta$ in the degenerate scalar triplet scenario. However, as we also choose the heavy neutrino mass in the ballpark of a few hundred GeV with $m_N<m_{\zeta_1}$, the heavy neutrinos are stable and contribute to the DM relic density (although the contribution is negligible). The fate of the heavy neutrinos as DM is elaborated in subsection~\ref{sec:rhn}. These BPs will be used for the collider analysis in section \ref{sec:collider pheno}.
\begin{table}[htb!]
\small
\setlength\tabcolsep{1.5pt}
\setlength\thickmuskip{1.5mu}
\setlength\medmuskip{1.5mu}
\small
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Benchmark & $g_N$ & $\frac{f_5}{\lambda_4}$ & $m_X$ & $m_{\zeta_2}$ & $m_{\zeta_1}$ & $m_{n_{1R}}$ & $\Omega_X h^2$& $\sigma^{X}_{DD}$ & $\Omega_{n_{1,2}}h^2$ \\ [0.5ex]
Point & & & (GeV) & (GeV) & (GeV) & (GeV) & & $(cm^2)$ & \\ [0.5ex]
\hline\hline
BP1 & 0.64 & 0.56 & 480 & 330 & 620 & 450 & 0.12 & $10^{-45.7802}$ & 0.004 \\
\hline
BP2 & 0.70 & 0.60 & 660 & 350 & 700 & 450 & 0.12 & $10^{-45.7918}$ & 0.004 \\
\hline
BP3 & 0.77 & 0.59 & 800 & 410 & 820 & 450 & 0.12 & $10^{-45.7839}$ & 0.004 \\
\hline
\end{tabular}
\end{center}
\caption {Choices of the benchmark points used for collider analysis. Masses, couplings, relic density and direct search cross-sections for the DM candidates are tabulated where $2 m_{\Delta}>m_X>m_{\Delta}$. $X$ has dominant contribution to relic density, while a subdominant contribution comes from $n_{1,2}$.}
\label{tab:bp}
\end{table}
\begin{figure}[htb!]
$$
\includegraphics[scale=0.35]{XtoSMdirect.png}
$$
\caption{Direct search parameter space for single component VBDM $X$ when $m_X<m_{\zeta_2}$. The region allowed by relic density is extremely slim as the only annihilation channel available for $X$ is to SM. The colourbar shows different values of $f_5/\lambda_4$. The exclusion limit from PandaX and future limit from XENONnT are shown through black dashed and black dot-dashed lines respectively.}
\label{fig:mxltmzeta2}
\end{figure}
Finally, in Fig.~\ref{fig:mxltmzeta2} we show the parameter space for case (ii), where $m_X<m_{\zeta_2}$. Under this condition $X$ can only annihilate into the SM via the Higgs portal. This reduces the annihilation cross-section and thus increases the relic abundance, which in turn increases the $f_5/\lambda_4$ required to obtain the correct relic density. As the same coupling now controls both relic density and direct search, all of the relic density allowed region is ruled out by the direct detection constraint, as can be seen in Fig.~\ref{fig:mxltmzeta2}.
\subsection{$\Delta_1$ and $\Delta_2$ as degenerate two component scalar DM}
\label{sec:degenerate DM}
As already described, $\Delta_1$ cannot be a single component DM in any region of the parameter space. When $m_{\Delta}< m_X$, however, $\Delta_1$ and $\Delta_2$ in the degenerate triplet scenario can yield a two component DM (see Fig.~\ref{fig:regions}). In this case, each of $\Delta_1$ and $\Delta_2$ can annihilate to the SM via the Higgs portal as shown in Fig.~\ref{fig:delannihil1} (where `SM' indicates all the SM gauge bosons, the scalar and the fermions). The annihilation cross section at threshold is given by:
\begin{eqnarray}
\begin{split}
\langle\sigma v_{rel}\rangle_{m_{\Delta}<m_X} &=\frac{f_8^2}{32\pi m_{\Delta}^2} \sqrt{1-\frac{m_h^2}{m_{\Delta}^2}}\left(\frac{\left(4 m_h^2-m_{\Delta}^2\right)^2}{\left(4 m_{\Delta}^2-m_h^2\right)^2+\Gamma^2 m_h^2}\right)+\\& \frac{3 f_8^2}{8\pi}\sqrt{1-\frac{m_f^2}{m_{\Delta}^2}}\frac{m_f^2}{\left(4 m_{\Delta}^2-m_h^2\right)^2+\Gamma^2 m_h^2}\\ & +\frac{f_8^2}{8\pi m_{\Delta}^2}\sqrt{1-\frac{m_W^2}{m_{\Delta}^2}}\frac{m_W^4}{\left(4m_{\Delta}^2-m_h^2\right)^2}\left(2+\frac{\left(2 m_{\Delta}^2-m_W^2\right)^2}{m_W^4}\right)+\\ & \frac{f_8^2}{8\pi m_{\Delta}^2}\sqrt{1-\frac{m_Z^2}{m_{\Delta}^2}}\frac{m_Z^4}{\left(4m_{\Delta}^2-m_h^2\right)^2}\left(2+\frac{\left(2 m_{\Delta}^2-m_Z^2\right)^2}{m_Z^4}\right).
\end{split}
\end{eqnarray}
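The threshold cross-section above can be evaluated numerically term by term. The following is a sketch with assumed inputs ($f_8$, $m_\Delta$, and the $b$ quark as the dominant open fermion channel); it implements the expression as printed, not a full channel sum.

```python
import math

# Illustrative inputs (assumed, not benchmark values)
f8, m_d = 0.05, 300.0             # coupling f_8 and m_Delta [GeV]
m_h, gam_h = 125.0, 4.1e-3        # Higgs mass and total width [GeV]
m_f = 4.18                        # b quark mass [GeV]
m_w, m_z = 80.4, 91.2             # W and Z masses [GeV]

den = (4 * m_d**2 - m_h**2) ** 2 + gam_h**2 * m_h**2

sv_h = f8**2 / (32 * math.pi * m_d**2) * math.sqrt(1 - m_h**2 / m_d**2) \
       * (4 * m_h**2 - m_d**2) ** 2 / den
sv_f = 3 * f8**2 / (8 * math.pi) * math.sqrt(1 - m_f**2 / m_d**2) * m_f**2 / den

def sv_v(m_v):
    # common form of the WW and ZZ terms in the expression above
    return f8**2 / (8 * math.pi * m_d**2) * math.sqrt(1 - m_v**2 / m_d**2) \
           * m_v**4 / (4 * m_d**2 - m_h**2) ** 2 \
           * (2 + (2 * m_d**2 - m_v**2) ** 2 / m_v**4)

sv_total = sv_h + sv_f + sv_v(m_w) + sv_v(m_z)
print(f"{sv_total:.2e} GeV^-2")
```

For such inputs the gauge boson final states dominate, as expected for $m_\Delta$ well above the $W/Z$ thresholds.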
The free parameters for DM analysis in this region are:
\begin{eqnarray}
\{f_8,m_{\Delta}\}.
\end{eqnarray}
The relic density in such a scenario is described by
\begin{eqnarray}
\Omega_{\text{total}}=2~\Omega_{\Delta},
\end{eqnarray}
where the factor of `2' is because $\Delta_1$ and $\Delta_2$ are degenerate.
\begin{figure}[htb!]
$$
\includegraphics[scale=0.45]{delhiggs.pdf}
$$
\caption{Annihilation of $\Delta$ to SM when the components of the scalar triplet are degenerate with $m_{\Delta}< m_X$.}
\label{fig:delannihil1}
\end{figure}
Direct search for both $\Delta_1$ and $\Delta_2$ again follows through the t-channel Higgs portal graph as shown in Fig.~\ref{fig:deldd}.
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.4]{deltaDD.pdf}
\caption{Direct search diagram for scalar DM.}
\label{fig:deldd}
\end{figure}
The spin-independent DM-nucleon scattering cross section is given by~\cite{Belanger:2008sj}:
\begin{eqnarray}
\sigma_{N_{i}}^{\text{SI}}=\frac{\alpha_N^2 \mu_{N}^2}{4\pi m_{\Delta_{i}}^2},
\label{eq:dd1comp}
\end{eqnarray}
where $\alpha_N$ is the effective DM-nucleon vertex (folded with form factors etc.), which is given by
\begin{equation}
\alpha_N=\frac{m_N f_8}{m_h^2}\left[f_{T_{u}}^{(N)}+f_{T_{d}}^{(N)}+f_{T_{s}}^{(N)}+\frac{2}{27}\left[1-\left(f_{T_{u}}^{(N)}+f_{T_{d}}^{(N)}+f_{T_{s}}^{(N)}\right)\right]\right],
\end{equation}
$N$ stands for either proton or neutron and $\mu_N$ is the DM-nucleon reduced mass. The total cross section per nucleon is given by
\begin{eqnarray}
\sigma^{\text{SI}}_i=\frac{ \mu_{n}^2}{4 \pi \, A^2\,m_{\Delta_{i}}^2}\left[ \alpha_p Z + \alpha_n (A-Z) \right]^2,
\label{eq:ddnucleon}
\end{eqnarray}
with $\mu_n$ being the DM-nucleus reduced mass.
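Eq.~\ref{eq:dd1comp} can be evaluated directly. The sketch below uses assumed inputs; the quark form factor sum $f_{T_u}^{(N)}+f_{T_d}^{(N)}+f_{T_s}^{(N)}\approx 0.30$ is an assumption for illustration.

```python
import math

# Illustrative inputs (assumed values)
f8, m_d = 0.05, 300.0        # Higgs portal coupling and DM mass [GeV]
m_h, m_N = 125.0, 0.939      # Higgs and nucleon masses [GeV]
fT_sum = 0.30                # f_Tu + f_Td + f_Ts (assumed)
GEV2_TO_CM2 = 3.894e-28      # 1 GeV^-2 in cm^2

# effective DM-nucleon vertex alpha_N, per the expression above
alpha_N = (m_N * f8 / m_h**2) * (fT_sum + (2 / 27) * (1 - fT_sum))
mu_N = m_d * m_N / (m_d + m_N)           # DM-nucleon reduced mass
sigma = alpha_N**2 * mu_N**2 / (4 * math.pi * m_d**2)   # [GeV^-2]
print(f"{sigma * GEV2_TO_CM2:.1e} cm^2")
```

Even a modest $f_8$ thus yields per-nucleon cross-sections within reach of current direct detection experiments.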
\begin{figure}
$$
\includegraphics[scale=0.3]{f8pandaplt.png}
\includegraphics[scale=0.3]{degenmdl.png}
$$
\caption{Left: Allowed values of $f_8$ which satisfy bounds from PandaX for the scalar DM scenario. Right: Relic density allowed parameter space for degenerate two component scalar DM. Limits from PandaX, the future sensitivity of XENONnT and the neutrino floor are shown as black dashed, black dot-dashed and thick orange lines respectively.}
\label{fig:degenare2comp}
\end{figure}
In the multi-component DM case, the effective spin-independent direct search cross-section of an individual component is obtained by weighting its cross-section with the fraction it contributes to the total relic density~\cite{Cao:2007fy}. For the two-component degenerate DM case, however, the individual cross-sections can simply be added, as the two DMs are indistinguishable, with the same mass and coupling. Therefore, the effective spin-independent direct search cross-section can be expressed as:
\begin{eqnarray}
\sigma_{\text{SI}}^{\text{eff}}\left(n_i\right)= 2\times \frac{\Omega_i}{\Omega_T}\sigma_{n_{i}}^{\text{SI}}= \frac{\alpha_n^2 \mu_{n}^2}{4\pi m_{\Delta_{i}}^2}.
\label{eq:dd2comp}
\end{eqnarray}
The fate of this degenerate two-component DM scenario under relic density and direct search constraints is summarised in Fig.~\ref{fig:degenare2comp}. On the LHS of Fig.~\ref{fig:degenare2comp} we show the values of $f_8$ allowed, as a function of $m_{\Delta}$, by the direct search bound from PandaX, while on the RHS we show the direct search parameter space allowed by relic density in the degenerate two-component set-up (\{$\Delta_1$,$\Delta_2$\}). Direct search constraints severely disfavour this region of the model. This is easy to appreciate: the channel that lets the DM freeze out also controls its direct search cross-section, and the degeneracy of the two components requires twice as large a total annihilation rate, so the couplings must be increased accordingly, enhancing the direct search cross-sections.
\subsection{$\Delta_1$ and $X$ as two component DM}
\label{sec:x-delta DM}
$X$ and $\Delta_1$ can form a two component DM when $m_X<m_{\Delta}<2 m_X$ (see Fig.~\ref{fig:regions}) in the degenerate triplet scenario. Here $\Delta_1$ can additionally annihilate to $X \bar {X}$, as shown in the upper panel of Fig.~\ref{fig:delannihil}, a channel not accessible earlier when $m_{\Delta}<m_X$. This DM-DM conversion plays a crucial role in this region of the parameter space, as we will elaborate.
The annihilation cross-section for $\Delta_1$ then includes the contributions from these additional graphs and reads as follows:
\begin{eqnarray}
\begin{split}
\langle\sigma v_{rel}\rangle_{m_{\Delta}>m_X} &= \frac{g_N^4}{32\pi m_{\Delta}^2} \sqrt{1-\frac{m_X^2}{m_{\Delta}^2}}\left[2+\left(\frac{2 m_{\Delta}^2}{m_X^2}-1\right)^2\right]\\ & \left[1-\sqrt{2}f_8 \left(\frac{f_5}{\lambda_4}\right) \frac{v^2 \left(4 m_{\Delta}^2-m_h^2\right)}{\left(4 m_{\Delta}^2-m_h^2\right)^2+\Gamma_h^2 m_h^2}+\frac{1}{2} f_8^2 \left(\frac{f_5}{\lambda_4}\right) ^2 \frac{v^4}{\left(4 m_{\Delta}^2-m_h^2\right)^2+\Gamma_h^2 m_h^2}\right]\\ &+ \frac{f_8^2}{32\pi m_{\Delta}^2} \sqrt{1-\frac{m_h^2}{m_{\Delta}^2}}\left(\frac{\left(4 m_h^2-m_{\Delta}^2\right)^2}{\left(4 m_{\Delta}^2-m_h^2\right)^2+\Gamma^2 m_h^2}\right)+\\& \frac{3 f_8^2}{8\pi}\sqrt{1-\frac{m_f^2}{m_{\Delta}^2}}\frac{m_f^2}{\left(4 m_{\Delta}^2-m_h^2\right)^2+\Gamma^2 m_h^2}\\ & +\frac{f_8^2}{8\pi m_{\Delta}^2}\sqrt{1-\frac{m_W^2}{m_{\Delta}^2}}\frac{m_W^4}{\left(4m_{\Delta}^2-m_h^2\right)^2}\left(2+\frac{\left(2 m_{\Delta}^2-m_W^2\right)^2}{m_W^4}\right)+\\ & \frac{f_8^2}{8\pi m_{\Delta}^2}\sqrt{1-\frac{m_Z^2}{m_{\Delta}^2}}\frac{m_Z^4}{\left(4m_{\Delta}^2-m_h^2\right)^2}\left(2+\frac{\left(2 m_{\Delta}^2-m_Z^2\right)^2}{m_Z^4}\right).
\end{split}
\label{eq:deltannihil}
\end{eqnarray}
\begin{figure}[htb!]
$$
\includegraphics[scale=0.6]{delannihil.pdf}
$$
\caption{Annihilation of $\Delta_1$ to $X$ and SM when $m_X<m_{\Delta}<2 m_X$.}
\label{fig:delannihil}
\end{figure}
The parameters for DM analysis in this case are given by:
\begin{eqnarray}
\{g_N^2,f_8,m_{\Delta},m_X\}.
\end{eqnarray}
\begin{figure}[htb!]
$$
\includegraphics[scale=0.34]{mxg2relicxcase1.png}
\includegraphics[scale=0.34]{mdlf8reldelcase1.png}
$$
$$
\includegraphics[scale=0.34]{mdlf8reldel.png}
$$
\caption{Top left: Relic density allowed parameter space for two component DM set-up in $m_X-g_N^2$ plane, where the colour shades indicate $\Omega_X/\Omega_T$. Top right: Same in $m_\Delta-g_N^2$ plane, where the colour shades indicate $\Omega_\Delta/\Omega_T$. Bottom panel: Same in $m_\Delta-f_8$ plane where colour shades indicate $\Omega_\Delta/\Omega_T$. Here $m_{\Delta}=m_{\zeta_1}=m_X+50$ and $m_{\zeta_2}=m_X-50$ has been chosen for illustration.}
\label{fig:relicdelta}
\end{figure}
As we have already elucidated in Eq.~\ref{eq:beq4} of Sec.~\ref{sec:X DM}, the BEQ can be expressed in terms of the dimensionless quantity $x=m/T$, where $m$ is the DM mass. In the two-component case, however, the Boltzmann equations are coupled due to DM-DM interactions, and a common $x$ is problematic since the two DM candidates have different masses, $\{m_{\Delta},m_X\}$. This issue did not arise in the previous case of $\Delta_1, \Delta_2$, as they were degenerate and had no effective DM-DM interaction. The way out for this non-degenerate scenario is to introduce the reduced mass $\mu=\frac{m_{\Delta}m_X}{m_{\Delta}+m_X}$, in terms of which the BEQs read~\cite{Bhattacharya:2016ysw}:
\begin{eqnarray}
\begin{split}
\frac{dy_1}{dx} =& A\left[\langle\sigma v_{\Delta\Delta^{*}\to SM SM}\rangle\left(y_1^2-y_1^{EQ^2}\right)+\langle\sigma v_{\Delta\Delta^{*}\to X\bar{X}}\rangle\left(y_1^2-\frac{y_1^{EQ^2}}{y_2^{EQ^2}}y_2^2\right)\right],
\end{split}
\label{eq:coupedbeq1}
\end{eqnarray}
\begin{eqnarray}
\begin{split}
\frac{dy_2}{dx} =& A\left[\langle\sigma v_{X\bar{X}\to SM SM}\rangle\left(y_2^2-y_2^{EQ^2}\right)-\langle\sigma v_{\Delta\Delta^{*}\to X\bar{X}}\rangle\left(y_1^2-\frac{y_1^{EQ^2}}{y_2^{EQ^2}}y_2^2\right)\right],
\end{split}
\label{eq:coupedbeq2}
\end{eqnarray}
where $A=-0.264~M_{PL} \sqrt{g_{*}}\frac{\mu}{x^2}$ and the equilibrium distribution, recast in terms of $\mu$, has the form:
\begin{eqnarray}
y_{i}^{EQ}\left(x\right)= 0.145 \frac{g}{g_{*}} x^{3/2} \left(\frac{m_{i}}{\mu}\right)^{3/2} e^{-x m_{i}/\mu},
\end{eqnarray}
with $i\in(X,\Delta)$. It has already been established for multicomponent DM scenarios that the relic density of the heavier component is affected by its additional annihilation into the other DM component, while that of the lighter component remains the same~\cite{Bhattacharya:2016ysw}. Therefore, we can safely use the following approximate analytical solutions for the individual relic densities:
\begin{eqnarray}
\nonumber \Omega_{X}h^2=\frac{854.45\times10^{-13}}{\sqrt{g_{*}}} y_{X}\left(x_{\infty}\right) &\simeq& \frac{0.1 ~\rm{pb}}{\langle\sigma v\rangle_{\text{eff}}},\\
\Omega_{\Delta}h^2=\frac{854.45\times10^{-13}}{\sqrt{g_{*}}} y_{\Delta}\left(x_{\infty}\right) &\simeq& \frac{0.1 ~\rm{pb}}{\langle\sigma v\rangle_{\Delta\Delta^{*}\to X\bar{X}}+\langle\sigma v\rangle_{\Delta\Delta^{*}\to SM~SM}},
\end{eqnarray}
where for annihilation of $X$, $\langle\sigma v\rangle_{\text{eff}}$ is given by Eq.~\ref{eq:coann}.
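The approximate analytic relations above amount to $\Omega h^2 \simeq 0.1~{\rm pb}/\langle\sigma v\rangle$ per component, with the $\Delta$ cross-section summed over both channels. A minimal sketch, with the cross-section values as assumptions:

```python
# Analytic relic estimate: Omega h^2 ~ 0.1 pb / <sigma v>
PB_PER_GEV2 = 3.894e8        # 1 GeV^-2 = 3.894e8 pb

def omega_h2(sv_gev2):
    # relic density from a thermally averaged cross-section in GeV^-2
    return 0.1 / (sv_gev2 * PB_PER_GEV2)

sv_x = 1.0 / PB_PER_GEV2                 # <sigma v>_eff = 1 pb for X (assumed)
sv_d_xx, sv_d_sm = 1.5e-9, 0.5e-9        # Delta channels [GeV^-2] (assumed)

print(omega_h2(sv_x))                    # 0.1
print(omega_h2(sv_d_xx + sv_d_sm))       # both channels deplete Delta
```

The DM-DM conversion channel thus directly lowers $\Omega_\Delta h^2$ while leaving $\Omega_X h^2$ governed by $\langle\sigma v\rangle_{\text{eff}}$ alone.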
\begin{figure}[htb!]
$$
\includegraphics[scale=0.52]{dirvsdelf8case1.pdf}\hspace{0.3cm}
\includegraphics[scale=0.52]{mxsigcase1dirplt.pdf}
$$
\caption{LHS: Spin independent effective direct search cross-section for $\Delta$, in terms of $m_{\Delta}$ vs. $Log{(\frac{\Omega_{\Delta}}{\Omega_T}\times\sigma_{\Delta N\to \Delta N})}$, when it is a part of two component dark matter scenario with $\rm{DM}:\{\Delta, X\}$. Allowed region of relic density parameter space have been divided into different $f_8$ values. RHS: Same for $X$ in terms of $m_X$ vs. $Log~(\frac{\Omega_X}{\Omega_T}\times\sigma_{X N\to X N})$, where different coloured regions correspond to different values of $f_5/\lambda_4$. In both the plots the bound from PANDA, future sensitivity of XENONnT and neutrino floor are depicted.}
\label{fig:deltadir}
\end{figure}
The relic density allowed parameter space of this two component model is shown in Fig.~\ref{fig:relicdelta}. In the top left panel, we show the allowed region in the $m_X-g_N^2$ plane, with the colour shades indicating $\Omega_X/\Omega_T$. The top right panel shows a similar plot in the $m_\Delta-g_N^2$ plane, with the colour shades indicating the fraction $\Omega_\Delta/\Omega_T$ of $\Delta$ DM in the total abundance. The main message of these two plots is that the relic density is dominated by the $X$ component. This is easily explained: $X$, being the lighter component, annihilates less to the SM. Conversely, the larger annihilation of $\Delta$ compared to $X$ depletes its abundance down to $\sim 20\%$ of the total. In the bottom panel of Fig.~\ref{fig:relicdelta}, we show that $f_8$, when varied within the range $\{0.001-0.1\}$, does not constrain $m_\Delta$ at all in achieving the right relic density in this multipartite framework. For ease of the scan, we choose $m_{\Delta}=m_{\zeta_1}=m_X+50$ GeV and $m_{\zeta_2}=m_X-50$ GeV; these choices do not play a crucial role unless the hierarchy is changed.
The next question is whether the relic density allowed parameter space of the two-component set-up ($\Delta,X$) survives the direct search constraint. This is depicted in Fig.~\ref{fig:deltadir}. On the LHS we show the fate of $\Delta$ in the direct search plane, with $m_{\Delta}~\rm{(GeV)}$ on the x-axis and the effective spin-independent direct search cross section $\left(\frac{\Omega_{\Delta}}{\Omega_T}\right)\times\sigma_{\Delta N\to\Delta N}$ on the y-axis in log-scale. Different colour shades represent the different values $f_8:\{0.01,0.03,0.05,0.09\}$ chosen for the scan. We can see that, except for the low mass region of $\Delta$ ($m_\Delta \le 250$ GeV) with $f_8=0.09$, the whole relic density allowed parameter space is allowed by the direct search constraints. It is easy to understand that the higher the value of $f_8$, the higher the effective direct search cross-section; direct search therefore crucially tames the coupling to $f_8 \lesssim 0.1$.
On the RHS of Fig.~\ref{fig:deltadir} we show the parameter space allowed by relic density and direct search for $X$ in the two-component DM scenario. All of the parameter space allowed by relic density lies below the PandaX limit, and a part of it even goes below the neutrino floor~\cite{Liu:2017drf}. Different coloured regions in this plot correspond to different values of $f_5/\lambda_4$, which typically controls the direct detection cross-section of $X$, as discussed earlier. As expected, smaller values of $f_5/\lambda_4$ (shown for example by the black region) produce smaller cross-sections, while larger $f_5/\lambda_4$ is ruled out by XENONnT.
\begin{table}[htb!]
\small
\setlength\tabcolsep{1.5pt}
\setlength\thickmuskip{1.5mu}
\setlength\medmuskip{1.5mu}
\small
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Benchmark & $g_N$ & $\frac{f_5}{\lambda_4}$ & $m_X$ & $m_{\zeta_2}$ & $m_{\zeta_1}$ & $m_{\Delta}$ & $\Omega_X h^2$& $\Omega_{\Delta}h^2$ & $\sigma^{\Delta}_{DD}$ & $\sigma^{X}_{DD}$ \\ [0.5ex]
Point & & & (GeV) & (GeV) & (GeV) & (GeV) & & & $(cm^2)$ & $(cm^2)$ \\ [0.5ex]
\hline\hline
BP4 & 0.63 & 0.30 & 481 & 320 & 621 & 500 & 0.077 & 0.043 & $10^{-50}$ & $10^{-46.35}$ \\
\hline
BP5 & 0.70 & 0.10 & 541 & 380 & 701 & 560 & 0.079 & 0.037 & $10^{-50}$ & $10^{-47.20}$ \\
\hline
BP6 & 0.83 & 0.20 & 681 & 540 & 821 & 700 & 0.087 & 0.033 & $10^{-50}$ & $10^{-46.47}$ \\
\hline
\end{tabular}
\end{center}
\caption {Choices of the benchmark points for two-component \{$X$,$\Delta$\} DM scenario. Masses, couplings, relic density and direct search cross-sections for both the DM candidates are tabulated. }
\label{tab:bp2}
\end{table}
In Table~\ref{tab:bp2} we tabulate possible values of the dark gauge boson and triplet scalar masses, for different couplings, satisfying relic density and direct search constraints in the two-component DM scenario \{$\Delta_1$,$X$\}. As the collider signature of the model is independent of the choice of $m_{\Delta}$ and depends only on the masses of the charged scalars \{$\zeta_1^{\pm}$,$\zeta_2^{\pm}$\}, the model gives rise to the same final states as the single component vector boson DM framework, the mass hierarchy between $X$ and $\zeta$ remaining unaltered.
\subsection{Fate of the heavy neutrino as DM}
\label{sec:rhn}
\begin{figure}[htb!]
$$
\includegraphics[scale=0.5]{rhndecay.pdf}
$$
\caption{Decay of the right handed neutrinos.}
\label{fig:rhndecay1}
\end{figure}
The right handed neutrino (RHN) can decay into different final states through the Yukawa interaction in Eq.~\eqref{f-zeta}, as shown in Fig.~\ref{fig:rhndecay1}. If we assume $m_{n_1}<m_{\zeta_1}$, then $n_{1R}$ is stable and contributes to the DM relic density. $n_{2R}$, on the other hand, can decay into leptons and $\zeta_2$; as $\zeta_2$ mixes with the SM Higgs, it readily decays to the SM, so $n_{2R}$ cannot qualify as DM.
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.6]{RHNann.pdf}
\caption{Left: Annihilation of RHN into SM leptons via $t$-channel mediation of the heavy scalars $\zeta_1^{\pm,0}$. Right: Annihilation of RHN into exotic scalars $\Delta$ via the RHNs.}
\label{fig:rhnann}
\end{figure}
Therefore, $n_{1R}$, in this model, can contribute to the relic density if the following condition is satisfied:
\begin{itemize}
\item If $m_{n_1}<m_{\zeta_1}$, then $n_{1R}$ is stable.
\end{itemize}
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.38]{rhnRelicdensityplt.png}
\caption{The figure shows underabundant regions of $\Omega_n$ ($0.02<\Omega_n h^2<0.1$ in green and $\Omega_n h^2<0.02$ in pink) for different values of heavy neutrino mass $m_N$ (GeV) and Yukawa coupling $f_{\zeta}$. The typical choice of $m_N$ and coupling for the chosen BPs (Tables~\ref{tab:bp} and \ref{tab:bp2}) lies in the under abundant region, shown by the blue cross.}
\label{fig:rhnrelic}
\end{figure}
As the interactions in Eq.~\ref{eq:yukawaint} suggest, the RHN can undergo annihilation via the channels shown in Fig.~\ref{fig:rhnann}. The thermally averaged cross section of these channels computed at $s=4~m_{n_{1R}}^2$ is given by:
\begin{eqnarray}
\begin{split}
\langle\sigma v\rangle_{n_{1R}} =& \frac{f_{\zeta}^4}{32\pi}\frac{m_{n_{1R}}^2}{\left(m_{n_{1R}}^2+m_{\zeta_1}^2\right)^2}+\\ & \frac{f_{\Delta}^4}{64\pi}\left(1-\frac{m_{\Delta}^2}{m_{n_1}^2}\right)^{3/2}\left(\frac{m_{n_{1R}}^2}{\left(2 m_{n_{1R}}^2-m_{\Delta}^2\right)^2}+\frac{1}{2}\frac{m_{n_{1R}}^2}{\left(2 m_{n_2}^2-m_{\Delta}^2\right)^2}\right),
\end{split}
\end{eqnarray}
where we have assumed $m_{\Delta_1}=m_{\Delta_2}$ and $m_{n_{1R}}=m_{n_{2R}}$. Fig.~\ref{fig:rhnrelic} shows the under abundant region for $\Omega_n$ with $f_{\Delta}\sim\mathcal{O}(1)$ for different values of $f_{\zeta}$: the region $0.02<\Omega h^2<0.1$ is shown in green, while $\Omega h^2<0.02$ is shown in pink. We can see that our choice of benchmark points (BP1-BP6) lies well within the under abundant region, so we can safely ignore this contribution for our study. For smaller $f_\Delta~(<1)$, however, the annihilation is weaker and the relic density contribution can become sizable. We would like to stress that adding the RHN contribution to the DM only marginally changes the relic density parameter space and leaves the phenomenology of the other DM candidates unchanged. Also note that the left chiral components $(n_1,n_2)_L$ are always stable, as they only have Yukawa interactions with the triplet scalar following Eq.~\eqref{f-delta}, so they can also serve as DM. Their annihilation follows the Feynman graph on the RHS of Fig.~\ref{fig:rhnann} with
\begin{eqnarray}
\langle\sigma v\rangle_{n_{1,2L}}= \frac{f_{\Delta}^4}{64\pi}\left(1-\frac{m_{\Delta}^2}{m_{n_{1,2L}}^2}\right)^{3/2}\left(\frac{m_{n_{1,2L}}^2}{\left(2 m_{n_{1,2L}}^2-m_{\Delta}^2\right)^2}+\frac{1}{2}\frac{m_{n_{1,2L}}^2}{\left(2 m_{n_{2,1L}}^2-m_{\Delta}^2\right)^2}\right),
\end{eqnarray}
which is of the same order as that of $\langle\sigma v\rangle_{n_{1R}}$ in the limit of $f_{\Delta}\sim\mathcal{O}(1)$. So the relic density contribution is again small and can be neglected. The advantage of the heavy neutrinos as DM is that they lack a tree-level direct search interaction and do not alter the conclusions for the chosen BPs.
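The RHN cross-sections above can be checked numerically. The sketch below uses assumed inputs close to the benchmark masses; with $f_\Delta\sim\mathcal{O}(1)$, the second ($\Delta\Delta$) term dominates, which is what drives the under-abundance.

```python
import math

# Illustrative inputs (assumed, near the BP1 mass spectrum)
f_zeta, f_delta = 0.5, 1.0
m_n, m_z1, m_d = 450.0, 620.0, 330.0   # m_{n_1R}=m_{n_2R}, m_{zeta_1}, m_Delta [GeV]

# lepton channel via t-channel zeta_1 exchange
sv_lep = f_zeta**4 / (32 * math.pi) * m_n**2 / (m_n**2 + m_z1**2) ** 2
# Delta Delta channel (degenerate n_1R, n_2R collapses the two propagator terms)
sv_dd = f_delta**4 / (64 * math.pi) * (1 - m_d**2 / m_n**2) ** 1.5 \
        * (m_n**2 / (2 * m_n**2 - m_d**2) ** 2
           + 0.5 * m_n**2 / (2 * m_n**2 - m_d**2) ** 2)
sv = sv_lep + sv_dd
print(f"{sv:.1e} GeV^-2")
```

The total comes out at the few-pb level, well above the canonical $\sim 1$ pb needed for the full relic density, consistent with the under-abundant region of Fig.~\ref{fig:rhnrelic}.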
\section{Collider Phenomenology}
\label{sec:collider pheno}
Of all the BSM particles introduced in the model, only the scalar bi-doublet $\zeta$ transforms under the SM $SU(2)_L$, and one can produce both its charged ($\zeta_{1,2}^{\pm}$) and neutral components ($\zeta_{1,2}^{0}$) at a collider. The Feynman graphs for the production of these particles at the Large Hadron Collider (LHC) are shown in Fig.~\ref{fig:scalarproduction}. These processes involve derivative couplings arising from the gauge kinetic term, as detailed in Appendix-B. We highlight two processes which yield leptonic final states: the charged current production of $\zeta_1^{\pm}\zeta_1^{0}$, shown in the left panel of Fig.~\ref{fig:scalarproduction}, and the neutral current production of $\zeta_1^{+}\zeta_1^{-}$. Subsequent decays of these scalars to SM fermions and to the RHN $n_{1R}$, via the Yukawa interactions listed in Eq.~\ref{eq:yukawaint}, are also shown in the figure. We assume the same mass hierarchy as chosen for the DM phenomenology: $m_{\zeta_2}<m_X<m_{\zeta_1}$ and $m_{n_{1R}}<m_{\zeta_1}$. In this hierarchy, the scalar bi-doublet components decay with 100\% branching ratio to the final states $\zeta_1^{\pm} \to \ell^{\pm}n_{1R}$ and $\zeta_1^0 \to \nu n_{1R}$, as shown in the figure.
\subsection{Signals at LHC}
\label{sec:modelsignals}
Following the mass hierarchy, the production of the scalar bi-doublets at LHC will end up with two different leptonic final states:
\begin{itemize}
\item Single lepton plus missing energy $\left(1 \ell^{\pm}+ \slashed{E}_T\right)$ due to charged current interaction, as shown in the left panel of Fig.~\ref{fig:scalarproduction}.
\item Opposite sign di-lepton plus missing energy (OSD+$\slashed{E}_T$) due to neutral current interaction, as shown in the right panel of Fig.~\ref{fig:scalarproduction}.
\end{itemize}
\begin{figure}[htb!]
$$
\includegraphics[scale=0.7]{scalarprod.pdf}
$$
\caption{Figure showing production of heavy charged scalars and their subsequent decays into a hadronically quiet single lepton $\ell^{\pm}+ \slashed{E}_T~~$ channel (on left) and hadronically quiet opposite sign dilepton channel $\ell^+\ell^- + \slashed{E}_T ~~$ on right.}
\label{fig:scalarproduction}
\end{figure}
These channels are essentially hadronically quiet, containing no jets at the parton level except those that may arise from initial state radiation (ISR). We will therefore focus on leptonic final states with a zero jet veto, which are cleaner and suffer less SM background contamination. We will analyze these two hadronically quiet lepton channels in detail for the benchmark points chosen in Table \ref{tab:bp}. Note that the right handed neutrinos are stable for the chosen hierarchy and hence contribute to missing energy. As has already been stated, given the interactions proposed in this model, the other two DM candidates, $\Delta$ and $X$, are hard if not impossible to produce. We also note that, since both $\zeta_1$ and $\zeta_2$ belong to the same bi-doublet, $\zeta_2^\pm, \zeta_2^0$ can also be produced at the collider via similar diagrams. But since $\zeta_2^{0}$ mixes with the SM Higgs, it decays to $b\bar{b}$, while the charged companion decays as $\zeta_2^{\pm} \to \zeta_2^0 \ell^{\pm}\nu$ through an off-shell $W$. The missing energy distribution in this case is identical to that of the SM and provides no way to distinguish signal from background. We therefore refrain from discussing $\zeta_2$ production in detail here, although the outcomes are mentioned in subsection~\ref{sec:zetaatlhc}.
The final state signal event rates are primarily dictated by the production of the scalar bi-doublet components at the LHC. In Fig.~\ref{fig:ccnc} we show the variation of the production cross section of $\zeta_1$ at the LHC with its mass $m_{\zeta_1}$ for $E_{cm}=14$ TeV. We have not addressed the mass difference between the charged and neutral components, which might arise from loop corrections, and assume them to be nearly degenerate. As is clearly visible from the plot, the cross section falls with larger $m_{\zeta_1}$ due to phase space suppression. A noteworthy feature is that the charged current interaction dominates over the neutral current one, since the coupling strength is larger in the former case, contrary to SM fermions: the ratio of the $Z$-mediated to $W$-mediated vertices goes as $\sim\frac{\cos 2\theta_{w}}{\sqrt{2}\cos\theta_{w}}<1$. The difference between the charged current and neutral current production processes is also evident from Fig.~\ref{fig:ccnc}. Hence this is expected to give rise to more $1 \ell^{\pm}+ \slashed{E}_T$ events than OSD+$\slashed{E}_T$ events. The benchmark points chosen in Table~\ref{tab:bp} are also indicated in Fig.~\ref{fig:ccnc}, from which we clearly see that the production cross-section is already reduced to $\sim 1$ fb or less at the $E_{cm}=14$ TeV LHC. We use {\tt CTEQ6l}~\cite{Placakyte:2011az} as a representative parton distribution function for generating this graph. The event simulation methodology is detailed in the next subsection.
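As a quick numerical illustration of the coupling hierarchy quoted above, the vertex ratio can be evaluated directly; the value $\sin^2\theta_w\simeq 0.231$ is an assumed input here, not a number taken from the text.

```python
import math

# sin^2(theta_w) ~ 0.231 is an assumed input, not a value quoted in the text
sin2_thw = 0.231
thw = math.asin(math.sqrt(sin2_thw))

# ratio of Z-mediated to W-mediated vertex strengths:
# cos(2 theta_w) / (sqrt(2) cos(theta_w))
ratio = math.cos(2 * thw) / (math.sqrt(2) * math.cos(thw))
```

With this input the ratio comes out well below unity, consistent with the charged current channel dominating the production.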
\begin{figure}[htb!]
$$
\includegraphics[scale=0.7]{ccnc.pdf}
$$
\caption{The variation of production cross section of $pp\to \zeta_1^{\pm}\zeta_1^{0},\zeta_1^{+}\zeta_1^{-}$ via charged current (red bold curve) and neutral current (red dashed curve) interaction at the LHC with $E_{cm}=$14 TeV. {\tt CTEQ6l} has been chosen as parton distribution function for generating the curves. The vertical lines in black, green and blue show the masses of $\zeta_1$ chosen for BP1, BP2 and BP3 respectively.}
\label{fig:ccnc}
\end{figure}
\subsection{Simulation technique and event selection criteria}
\label{sec:simultech}
We implemented this model in {\tt CalcHEP}~\cite{Belyaev:2012qa} to generate the parton level events, which are then fed into {\tt Pythia-6.4}~\cite{Sjostrand:2006za} for showering and hadronization. We have simulated all the events at $\sqrt{s}=14$ TeV using {\tt CTEQ6l} as the parton distribution function. To mimic the experimental environment of the LHC, we reconstruct all the leptons and jets using the following criteria:
\begin{itemize}
\item {\it Lepton ($l=e,\mu$):} Leptons are required to have a minimum transverse momentum $p_T>20$ GeV and pseudorapidity $|\eta|<2.5$. Two leptons are isolated objects if their mutual distance in the $\eta-\phi$ plane is $\Delta R=\sqrt{\left(\Delta\eta\right)^2+\left(\Delta\phi\right)^2}\ge 0.2$, while the separation between a lepton and a jet has to satisfy $\Delta R\ge 0.4$.
\item {\it Jets ($j$):} All the partons within $\Delta R=0.4$ from the jet initiator cell are included to form the jets using the cone jet algorithm {\tt PYCELL} built in {\tt PYTHIA}. We require $p_T>20$ GeV for a clustered object to be considered as jet. Jets are isolated from unclustered objects if $\Delta R>0.4$.
\item {\it Unclustered Objects:} All the final state objects which are neither clustered to form jets, nor identified as leptons, belong to this category. All particles with $0.5<p_T<20$ GeV and $|\eta|<5$, are considered as unclustered.
\item {\it Missing Energy ($\slashed{E}_T$):} The transverse momentum of the missing particles (those not registered in the detector) can be estimated from the momentum imbalance in the transverse direction associated with the visible particles. The missing energy (MET) is thus defined as:
\begin{eqnarray}
\slashed{E}_T = \sqrt{(\sum_{\ell,j} p_x)^2+(\sum_{\ell,j} p_y)^2},
\end{eqnarray}
where the sum runs over all visible objects that include the leptons, jets and the unclustered components.
\item $H_T$: $H_T$ is defined as the scalar sum of all isolated jet and lepton $p_T$'s:
\begin{eqnarray}
H_T = \sum_{\ell,j} p_T
\end{eqnarray}
\end{itemize}
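The MET and $H_T$ definitions above can be sketched in a few lines of code; the toy momenta below are purely hypothetical, and for simplicity the same list of visible objects is used for both quantities.

```python
import math

def met_and_ht(visible):
    """Compute the missing transverse energy and H_T from a list of
    visible objects, each given as (px, py) in GeV.  MET is the magnitude
    of the negative vector sum of the visible transverse momenta; H_T is
    the scalar sum of their individual p_T's."""
    sum_px = sum(px for px, _ in visible)
    sum_py = sum(py for _, py in visible)
    met = math.hypot(sum_px, sum_py)  # magnitude of -(vector sum of p_T)
    ht = sum(math.hypot(px, py) for px, py in visible)
    return met, ht

# Toy event: one lepton and two jets (hypothetical momenta)
event = [(40.0, 10.0), (-25.0, 30.0), (-5.0, -15.0)]
met, ht = met_and_ht(event)
```

In the full analysis the MET sum runs over all visible objects including the unclustered components, while $H_T$ uses only isolated leptons and jets; the sketch above glosses over that distinction.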
\begin{figure}[htb!]
$$
\includegraphics[scale=0.55]{1lep_MET_BP13.pdf}
\includegraphics[scale=0.55]{1lep_MET_BP2.pdf}
$$
$$
\includegraphics[scale=0.55]{1lep_HT_BP13.pdf}
\includegraphics[scale=0.55]{1lep_HT_BP2.pdf}$$
\caption{Top: Missing energy distributions for the $1 \ell^{\pm}+\slashed{E}_T$ final state for the benchmark points (BP1, BP2, BP3), shown in red, together with the dominant SM backgrounds. Bottom: $H_T$ distributions for the same. The simulation assumes the LHC with $\sqrt s=14$ TeV.}
\label{fig:met1}
\end{figure}
The dominant SM backgrounds have been generated in {\tt MADGRAPH}~\cite{Alwall:2014hca} and then showered through {\tt PYTHIA}. Appropriate $K$-factors were used to incorporate the next-to-leading order (NLO) cross sections for the backgrounds. The dominant SM backgrounds for the chosen signal are $t\bar{t}$, $W^{+}W^{-}$, $W^{\pm}Z$, $ZZ$ and Drell--Yan. Since the backgrounds dominate over the signal, the MET and $H_T$ cuts have to be chosen sensibly, so as to eliminate most of the background while retaining the signal. For the backgrounds the contribution to MET comes from neutrinos, while for the signal it comes dominantly from the stable RHN (as shown in Fig.~\ref{fig:scalarproduction}). The MET distributions for the chosen BPs are plotted in the upper panels of Fig.~\ref{fig:met1} for the single lepton and of Fig.~\ref{fig:met2} for the OSD final states; the corresponding $H_T$ distributions are shown in the lower panels of the same figures, together with the dominant SM backgrounds. As is clear from both Fig.~\ref{fig:met1} and Fig.~\ref{fig:met2}, a high MET cut can reduce the SM background while retaining some of the signal strength in both the single and two lepton channels; the same holds for the $H_T$ cut. Therefore, the final states are required to satisfy the following selection criteria on top of the trigger level cuts:
\begin{itemize}
\item Missing energy cuts of $\slashed{E}_T>$ 100, 200 and 300 GeV have been employed in both the single and two-lepton cases to reduce the SM backgrounds.
\item $H_T$ cuts of 200 and 300 GeV are also applied on top of the MET cut to reduce the backgrounds further.
\item For OSD events, an invariant mass cut over the $Z$ window, removing events with $|m_{\ell\ell}-m_Z|<15$ GeV, has been applied to get rid of the $ZZ$ background to a significant extent.
\end{itemize}
\begin{figure}[htb!]
$$
\includegraphics[scale=0.55]{osd_MET_BP13.pdf}
\includegraphics[scale=0.55]{osd_MET_BP2.pdf}
$$
$$
\includegraphics[scale=0.55]{osd_HT_BP13.pdf}
\includegraphics[scale=0.55]{osd_HT_BP2.pdf}$$
\caption{Top: Missing energy distributions for the $\ell^{\pm}\ell^{\mp}+\slashed{E}_T$ final state for the benchmark points (BP1, BP2, BP3), shown in red, together with the dominant SM backgrounds. Bottom: $H_T$ distributions for the same. The simulation assumes the LHC with $\sqrt s=14$ TeV.}
\label{fig:met2}
\end{figure}
\subsection{Event rates for the signal and the SM background}
The cross sections of the $1 \ell^{\pm}+\slashed{E}_T$ and $\ell^{\pm}\ell^{\mp}+\slashed{E}_T$ channels and the corresponding numbers of events at a luminosity $\mathcal{L}=100~\rm fb^{-1}$ for $E_{cm}=14$ TeV at the LHC are listed in Table~\ref{tab:signal} for the benchmark points (BP1, BP2, BP3). The production cross-sections are also quoted, so that the sensitivity of the missing energy cuts, $\slashed{E}_T>100,~200,~300$ GeV, can be seen.
\begin{table}[htb!]
\begin{center}
\scalebox{0.85}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Benchmark Point & $\sigma_{\zeta_1^\pm\zeta_1^0}$ (fb) & $\sigma_{\zeta_1^{0}\bar{\zeta_1^0}}/\sigma_{\zeta_1^{+}\zeta_1^{-}}$ (fb) & $\slashed{E}_T$ (GeV) & $H_T$ (GeV) & $\sigma^{\ell\pm}$ (fb) & $N^{\ell\pm}_{\text{eff}}$ & $\sigma^{\text{OSD}}$ (fb) & $N^{\text{OSD}}_{\text{eff}}$ \\
\hline\hline
& & & $>$100 & $>$ 100 & 0.25 & 25 & 0.04 & 4 \\
&&&& $>$ 200 & 0.14 & 14 & 0.04 & 4 \\
&&&& $>$ 300 & 0.06 & 6 & 0.03 & 3 \\
\cline{4-9}
BP1 & 1.89 & 1.29 & $>$200 & $>$ 100 & 0.15 & 15 & 0.02 & 2 \\
&&&& $>$ 200 & 0.14 & 14 & 0.02 & 2 \\
&&&& $>$ 300 & 0.06 & 6 & 0.01 & 1 \\
\cline{4-9}
& & & $>$300 & $>$ 100 & 0.06 & 6 & 0 & 0\\
&&&& $>$ 200 & 0.06 & 6 & 0 & 0\\
&&&& $>$ 300 & 0.06 & 6 &0 & 0\\
\cline{4-9}
\hline
& & & $>$100 & $>$ 100 & 0.17 & 17 & 0.03 & 3 \\
&&&& $>$ 200 & 0.12 & 12 & 0.03 & 3 \\
&&&& $>$ 300 & 0.07 & 7 & 0.02 & 2 \\
\cline{4-9}
BP2 & 1.16 & 0.81 & $>$200 & $>$ 100 & 0.12 & 12 & 0.02 & 2 \\
&&&& $>$ 200 & 0.11 & 11 & 0.02 & 2 \\
&&&& $>$ 300 & 0.07 & 7 & 0.02 & 2 \\
\cline{4-9}
& & & $>$300 & $>$ 100 & 0.07 & 7 & 0.01 & 1 \\
&&&& $>$ 200 & 0.07 & 7 & 0.01 & 1 \\
&&&& $>$ 300 & 0.07 & 7 & 0.01 & 1 \\
\cline{4-9}
\hline
& & & $>$100 & $>$ 100 & 0.09 & 9 & 0.02 & 2 \\
&&&& $>$ 200 & 0.07 & 7 & 0.02 & 2 \\
&&&& $>$ 300 & 0.05 & 5 & 0.02 & 2 \\
\cline{4-9}
BP3 & 0.59 & 0.43 & $>$200 & $>$ 100 & 0.07 & 7 & 0.01 & 1\\
&&&& $>$ 200 & 0.07 & 7 & 0.01 & 1 \\
&&&& $>$ 300 & 0.05 & 5 & 0.01 & 1 \\
\cline{4-9}
& & & $>$300 & $>$ 100 & 0.05 & 5 & 0.01 & 1 \\
&&&& $>$ 200 & 0.05 & 5 & 0.01 & 1 \\
&&&& $>$ 300 & 0.05 & 5 & 0.01 & 1 \\
\cline{4-9}
\hline
\end{tabular}
}
\end{center}
\caption{Signal events at the $\sqrt{s}$ = 14 TeV LHC for a luminosity $\mathcal{L} = 100~\rm fb^{-1}$ for the benchmark points (BP1, BP2, BP3). The variation of the number of final state signal events with the cut flow is also tabulated.}
\label{tab:signal}
\end{table}
\begin{table}[htb!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Process & $\sigma_{\text{production}}$ (pb) & $\slashed{E}_T$ (GeV) & $H_T$ (GeV) & $\sigma^{\ell\pm}$ (fb) & $N^{\ell\pm}_{\text{eff}}$ & $\sigma^{\text{OSD}}$ (fb) & $N^{\text{OSD}}_{\text{eff}}$ \\
\hline\hline
& & $>$100 & $>$100 & 22.80 & 2280 & 17.10 & 1710 \\
&&& $>$200 & 1.62 & 162 & 2.44 & 244\\
&&& $>$300 & $<$ 0.81 & $<$ 1 & $<0.81$ & $<$1 \\
\cline{3-8}
$t\bar {t}$ & 814.64 & $>$200 & $>$ 100 & 1.62 & 162 & $<0.81$ & $<$ 1 \\
&&& $>$200 & 0.81 & 81 & $<0.81$ & $<$ 1 \\
&&& $>$300 & $<0.81$ & $<$ 1 & $<0.81$ & $<1$ \\
\cline{3-8}
& & $>$300 & $>$ 100 & $<$ 0.81 & $<$ 1 & $<$ 0.81 & $<$ 1 \\
&&& $>$200 & $<0.81$ & $<$ 1 & $<0.81$ & $<$ 1 \\
&&& $>$300 & $<0.81$ & $<$ 1 & $<0.81$ & $<$ 1 \\
\cline{3-8}
\hline
& & $>$100 & $>$ 100 & 54.48 & 5448 & 20.49 & 2049 \\
&&& $>$200 & 3.99 & 399 & 9.99 & 999 \\
&&& $>$300 & 0.49 & 49 & 1.99 & 199 \\
\cline{3-8}
$W^+ W^-$ & 99.98 & $>$200 & $>$ 100 & 1.99 & 199 & 1.99 & 199 \\
&&& $>$200 & 0.49 & 49 & 1.99 & 199 \\
&&& $>$300 & 0.49 & 49 & 0.49 & 49 \\
\cline{3-8}
& & $>$300 & $>$ 100 & 0.49 & 49 & $<$ 0.49 & $<$ 1 \\
&&& $>$200 & 0.49 & 49 & $<0.49$ & $<$ 1 \\
&&& $>$300 & 0.49 & 49 & $<0.49$ & $<$ 1 \\
\hline
& & $>$100 &$>$ 100 & 0.14 & 14 & 0 & 0 \\
&&& $>$200 & 0.01 & 1 & 0 & 0 \\
&&& $>$300 & 0 & 0 & 0 & 0\\
\cline{3-8}
$W^\pm Z$ & 0.15 & $>$200 & $>$100 & 0.012 & 1 & 0 & 0 \\
&&& $>$200 & 0 & 0 & 0 & 0 \\
&&& $>$300 & 0 & 0 & 0 & 0 \\
\cline{3-8}
& & $>$300 &$>$ 100 & 0 & 0 & 0 & 0 \\
&&& $>$200 & 0 & 0 & 0 & 0 \\
&&& $>$300 & 0 & 0 & 0 & 0 \\
\hline
& & $>$100 &$>$100 & 7.07 & 707 & 0.21 & 21\\
&&& $>$200 & 0.35 & 35 & 0.14 & 14 \\
&&& $>$300 & $<$ 0.07 & $<$ 1 & 0.07 & 7 \\
\cline{3-8}
$ZZ$ & 14.01 & $>$200 &$>$ 100 & 0.35 & 35 & $<$ 0.07 & $<$ 1 \\
&&& $>$200 & 0.28 & 28 & $<$ 0.07 & $<$ 1 \\
&&& $>$300 & $<$ 0.07 & $<$ 1 & $<$ 0.07 & $<$ 1 \\
\cline{3-8}
& & $>$300 &$>$100 & $<$ 0.07 & $<$ 1 & $<$ 0.07 & $<$ 1 \\
&&& $>$200 & $<$ 0.07 & $<$ 1 & $<$ 0.07 & $<$ 1 \\
&&& $>$300 & $<$ 0.07 & $<$ 1 & $<$ 0.07 & $<$ 1 \\
\hline
\end{tabular}
\end{center}
\caption{SM background events at $\sqrt{s}$ = 14 TeV for a luminosity $\mathcal{L} = 100~\rm fb^{-1}$ at the LHC. The cross sections have been multiplied by the appropriate $K$-factors to match their NLO cross-sections (see text for details). The variation of the number of final state background events with the cut flow is also tabulated.}
\label{tab:background}
\end{table}
The first obvious thing to notice is that, with heavier charged bi-doublet scalars for the benchmark points, the cross-section in both final states diminishes due to larger phase space suppression. Secondly, the number of OSD events is smaller than that of single lepton ones, owing to (i) the hierarchy of the charged current and neutral current production cross-sections themselves and (ii) those cases where the neutral current production also yields an effective single lepton event, when one of the leptons is soft and fails to pass the required $p_T$ cut. The production cross-section is small for all the benchmark points, and so is the number of expected events even at a luminosity as high as $\sim 100 ~\rm{fb}^{-1}$. Here, the effective number of final state events is given as:
\begin{eqnarray}
N_{\text{eff}} = \frac{\sigma_{\text{p}}\times n}{N}\times\mathcal{L},
\end{eqnarray}
where $n$ is the number of events passing the cuts out of $N$ simulated events, $\sigma_p$ is the production cross-section and $\mathcal{L}$ is the integrated luminosity.
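The $N_{\text{eff}}$ formula above amounts to a one-line rescaling of the simulated cut efficiency; in the sketch below the efficiency $n/N$ is a hypothetical illustration, not a number taken from the tables.

```python
def n_eff(sigma_p_fb, n_pass, n_sim, lumi_fb):
    """N_eff = sigma_p * (n / N) * L: production cross-section times the
    simulated cut efficiency, scaled to the integrated luminosity."""
    return sigma_p_fb * (n_pass / n_sim) * lumi_fb

# Hypothetical efficiency: 740 of 10000 simulated events pass all cuts,
# with sigma_p = 1.89 fb and L = 100 fb^-1
events = n_eff(1.89, 740, 10000, 100.0)
```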
Using the same selection criteria, the numbers of final state events for the dominant SM backgrounds are tabulated in Table~\ref{tab:background}. Here we have multiplied the leading order (LO) cross sections by the appropriate $K$-factors to obtain the cross sections in the NLO approximation. The $K$-factors for the different SM processes are chosen as~\cite{Alwall:2014hca}: $K=1.47$ for $t\bar{t}$, $K=1.38$ for $WW$, $K=1.61$ for $WZ$, $K=1.33$ for $ZZj$, and $K=1.2$ for Drell--Yan. Table~\ref{tab:background} shows that we can reduce the SM backgrounds to a significant extent by employing the MET cut. However, for $\slashed{E}_T> 100, 200$ GeV, a significant number of $W^+W^-$ events still remain. These can only be reduced with $\slashed{E}_T> 300$ GeV in the two lepton case, while the single lepton case remains submerged under a large $W^+W^-$ background.
The main message of this analysis is that, although it is possible to eliminate or at least reduce the SM background with a judicious choice of cuts, the production cross section for the signal itself is very low, so such a signal can only be seen at the LHC at a very high integrated luminosity, for both the single lepton and opposite-sign dilepton cases. As is evident from Table~\ref{tab:signal}, the OSD case is even harder to probe at the LHC. In Fig.~\ref{fig:signalsig}, we show the significance for the single lepton channel only. With $\slashed{E}_T>$ 200 GeV and $H_T>$ 200 GeV (which removes most of the backgrounds), the significance reaches the discovery limit ($5\sigma$) at a luminosity of $\gtrsim$ 1000 $\rm fb^{-1}$. As the OSD case fails to reach even $3\sigma$ confidence at $\gtrsim 1000~\rm fb^{-1}$ luminosity, we do not show its discovery potential.
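The luminosity scaling behind this statement can be cross-checked with a naive $S/\sqrt{S+B}$ significance (no systematics); the yields below are illustrative values read off Tables~\ref{tab:signal} and~\ref{tab:background} for BP1 with $\slashed{E}_T>200$ GeV and $H_T>200$ GeV, and the result is broadly consistent with the $\gtrsim 1000~\rm fb^{-1}$ estimate.

```python
import math

def significance(s, b):
    """Naive signal significance S / sqrt(S + B), ignoring systematics."""
    return s / math.sqrt(s + b)

def lumi_for_nsigma(s0, b0, lumi0, n_sigma):
    """Luminosity needed to reach n_sigma, assuming S and B both scale
    linearly with luminosity (significance then grows as sqrt(L))."""
    return lumi0 * (n_sigma / significance(s0, b0)) ** 2

# Illustrative yields at 100 fb^-1 after MET > 200 GeV, H_T > 200 GeV
# (BP1 single lepton: S ~ 14; B ~ 81 (ttbar) + 49 (WW) + 28 (ZZ) = 158)
s, b = 14.0, 158.0
sig_100 = significance(s, b)                      # about 1 sigma
lumi_5sigma = lumi_for_nsigma(s, b, 100.0, 5.0)   # O(1000) fb^-1
```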
\begin{figure}[htb!]
$$
\includegraphics[scale=0.62]{singleLep_HT.pdf}
$$
\caption{Significance plot for the signal $1 \ell^{\pm}+ \slashed{E}_T$ events for the chosen benchmark points in terms of luminosity. The red solid (dashed) line shows the $5\sigma~(3\sigma)$ discovery limit.}
\label{fig:signalsig}
\end{figure}
\subsection{Fate of $\zeta_2$ at the LHC}
\label{sec:zetaatlhc}
\begin{figure}[htb!]
$$
\includegraphics[scale=0.65]{zeta2prod.pdf}
$$
\caption{Top Left: Production of $\zeta_2^{0}\bar{\zeta_2^{0}}$ at the LHC via neutral current interaction and their subsequent decays to four $b$-jet final state; Top Right: $\zeta_2^{+}\zeta_2^{-}$ production at the LHC via neutral current interaction leading to opposite sign dilepton plus four $b$-jet final state; Bottom: $\zeta_2^{+}(\zeta_2^{-})\zeta_2^{0}$ production via charged current interaction through $W^{+}(W^{-})$ leading to four $b$-jet final state with single lepton. }
\label{fig:zetaproduction}
\end{figure}
As $\zeta_2$ is also part of the scalar bi-doublet introduced in the model, it has the same interaction vertices with the SM as $\zeta_1$, so it can be produced at the LHC through the same channels. However, its decay channels are different. We have already mentioned in Sec.~\ref{sec:model} that $\zeta_2^{0}$ mixes with the SM Higgs through electroweak (EW) symmetry breaking. As a result, the neutral component $\zeta_2^0$ readily decays to SM particles, for example $b$-jets, at the LHC. The charged components $\zeta_2^{\pm}$ can only decay to $\zeta_2^0$ through the charged current interaction $\zeta_2^{\pm} \to \zeta_2^0 W^{*\pm} \to \zeta_2^0 \ell^{\pm} \nu$. This is due to the specific hierarchy chosen for the DM analysis of the model, $m_{\zeta_1}>m_{n_R}>m_{\zeta_2}$, where the right handed neutrinos are assumed to be heavier than $\zeta_2$. Note also that, due to the small mass difference between $\zeta_2^{\pm}$ and $\zeta_2^0$ (which can arise through loop corrections), the decay of the charged components always occurs through off-shell $W$ bosons. Therefore, the different combinations of charged and neutral current pair production of $\zeta_2$ at the LHC yield the following final states:
\begin{itemize}
\item 4$b$-jets plus no missing energy,
\item 1$\ell^{\pm}$+4$b$-jets+missing energy,
\item $\ell^{\pm}\ell^{\mp}$+4$b$-jets+missing energy.
\end{itemize}
The production processes leading to these final states are shown in the Feynman graphs of Fig.~\ref{fig:zetaproduction}. One important point to note is that the missing energy in the above channels essentially comes from SM neutrinos and not from the DM candidates of the framework, so it is very difficult to segregate these channels from the SM background with a missing energy cut. This is all the more true because the off-shell decays of $\zeta_2^{\pm}$ leave no way to separate the signal from the SM missing energy distribution. Therefore, we do not elaborate on the event level analysis of these channels for the chosen benchmark points at the LHC.
However, the small mass difference between the charged and neutral scalar components may lead to a long lifetime for $\zeta_2^{\pm}$. This can result in the observation of one or two displaced vertex signatures or stable charged tracks within the detector.
\section{Conclusion}
\label{sec:conclusion}
In the absence of direct detection and collider search evidence for WIMP-like DM, an important emerging issue is to address the existence of such DM candidates. While the LHC puts a milder limit on heavy WIMP-like DMs due to the huge SM background, spin-independent direct searches put a stronger limit on the DM-nucleon interaction. The challenge is to produce the right relic density even in the absence of a direct detection signal. Segregating the annihilation processes from the direct search interaction plays a key role in this context. In particular, annihilation of DM to non-SM particles, DM-DM interactions and co-annihilation serve as crucial features to save WIMP-like DM candidates. This paper exemplifies one such case with a detailed parameter space scan.
The model of interest is an $SU(2)_N$ gauge extension of the SM as proposed in~\cite{Fraser:2014yga}. The lightest vector boson $X$ is stabilized by an unbroken $S$ charge arising from $SU(2)_N \times S^{'} \to S$ through spontaneous symmetry breaking. We assume that the symmetry $S$ remains intact up to the Planck scale, to avoid the constraints on DM decay coming from the CMB, gamma rays, neutrino fluxes etc. The model offers a multipartite DM framework, involving the scalar triplet $\Delta$ and heavy neutrinos, depending on the kinematics. We highlighted the case where DM-DM interactions govern the thermal freeze-out of the heavier component. For example, when $\{\Delta,X\}$ can both be DM, we show that $X$ always carries the larger part of the relic density while obeying the DD bound (thanks to its annihilation to non-SM particles). $\Delta$, being heavier in such a case, can annihilate to $X$, yielding a larger annihilation cross-section and a smaller relic density. However, as its freeze-out is mainly governed by the DM-DM interaction, it is safe from direct search.
We have also explored the possibility of collider searches for such models, and we see that this model might manifest itself through hadronically quiet leptonic final states along with missing energy at the LHC. These can come from the production and subsequent decays of the scalar bi-doublets assumed in the theory. Although the missing energy (MET) and $H_T$ cuts allow us to separate the signal from the SM background, the small EW production cross-section does not make the model accessible in the next run of the LHC, postponing it to a high luminosity regime. We also note that, as no SM particles are charged under the additional $SU(2)_N$, the phenomenology is in sharp contrast to what we obtained in~\cite{Barman:2017yzr}. This is even more true for collider signatures, as the model studied here cannot even produce the vector DM at the LHC.
The model also addresses the generation of neutrino masses through an inverse see-saw mechanism by assuming the presence of heavy chiral neutrinos $(n_1,n_2)$. The proportionality of the neutrino mass to the vev $\langle \Delta_3\rangle$ requires the latter to be small, thus making the vector boson $X$ degenerate with $X_3$. This predicts an additional contribution to the thermal freeze-out of the $X$ DM through $X-X_3$ co-annihilation. Hence the model offers a very interesting connection between the neutrino sector and DM phenomenology. Secondly, the presence of the inverse seesaw mechanism for generating light neutrino masses also allows one to assume the heavy neutrinos to be within $\sim \mathcal{O}$(TeV). This yields the only possibility of exploring the model at the LHC, through multi lepton channels. As the scalar bi-doublet can only decay to the right handed neutrinos along with SM leptons (through Yukawa interactions), a very heavy neutrino can stop such a decay chain and leave only displaced vertex signatures or stable charged tracks.
\section{Acknowledgements}
\label{sec:ack}
We would like to acknowledge discussions with Joydeep Chakrabortty at different stages of this work. SB would like to acknowledge the DST-INSPIRE research grant IFA-13 PH-57. BB would acknowledge the hospitality at IIT Kanpur, where a major part of the work was carried out. BB would also like to thank Sunando Patra for technical help with Mathematica and Amit Duttabanik for useful discussions.
\section{Appendix A: Scalar Potential}
The scalar potential of the model is given by:
\begin{equation}
\begin{split}
V &= \mu_\zeta^2 Tr(\zeta^\dagger \zeta) + \mu_\Phi^2 \Phi^\dagger \Phi + \mu_\chi^2
\chi^\dagger \chi + \mu_\Delta^2 Tr(\Delta^\dagger \Delta) + (\mu_1
\tilde{\Phi}^\dagger \zeta \chi + \mu_2 \tilde{\chi}^\dagger \Delta \chi + H.c.)\\
&+ {1 \over 2} \lambda_1 [Tr(\zeta^\dagger \zeta)]^2 + {1 \over 2} \lambda_2 (\Phi^\dagger \Phi)^2 + {1 \over 2} \lambda_3 Tr(\zeta^\dagger \zeta \zeta^\dagger \zeta) + {1 \over 2} \lambda_4 (\chi^\dagger \chi)^2 +
{1 \over 2} \lambda_5 [Tr(\Delta^\dagger \Delta)]^2 \\
&+ {1 \over 4} \lambda_6 Tr(\Delta^\dagger \Delta - \Delta \Delta^\dagger)^2 + f_1 \chi^\dagger \tilde{\zeta}^\dagger \tilde{\zeta} \chi + f_2 \chi^\dagger \zeta^\dagger \zeta \chi + f_3 \Phi^\dagger \zeta \zeta^\dagger \Phi + f_4 \Phi^\dagger \tilde{\zeta} \tilde{\zeta}^\dagger \Phi \\
&+ f_5 (\Phi^\dagger \Phi)(\chi^\dagger \chi) + f_6 (\chi^\dagger \chi) Tr(\Delta^\dagger \Delta) + f_7 \chi^\dagger
(\Delta \Delta^\dagger - \Delta^\dagger \Delta) \chi + f_8 (\Phi^\dagger \Phi) Tr(\Delta^\dagger \Delta)\\
&+ f_9 Tr(\zeta^\dagger \zeta) Tr(\Delta^\dagger \Delta) + f_{10} Tr[\zeta(\Delta^\dagger \Delta - \Delta \Delta^\dagger) \zeta^\dagger],
\end{split}
\end{equation}
where
\begin{equation}
\tilde{\Phi}^\dagger = (\phi^0, -\phi^+), ~~~ \tilde{\chi}^\dagger =
(\chi_2, -\chi_1), ~~~ \tilde{\zeta} = \begin{pmatrix}
\zeta_2^+ & - \zeta_1^+ \cr
-\bar{\zeta}_2^0 & \bar{\zeta}_1^0\end{pmatrix}.
\end{equation}
\section{Appendix B: Gauge Interactions and masses of the scalar triplets}
Covariant derivative for the scalar bi-doublet:
\begin{eqnarray}
\mathcal{D}_{\mu}\zeta=\partial_{\mu}\zeta-ig_{L}\vec{W_{L_\mu}}\frac{\vec{\tau}}{2}\zeta+ig_{N}\zeta\frac{\vec{\tau}}{2}\vec{X_\mu}.
\end{eqnarray}
Covariant derivative for the $SU(2)_N$ triplet:
\begin{eqnarray}
\mathcal{D}_{\mu}\Delta= \partial_{\mu}\Delta-\frac{i g_N}{2}\left[\vec{\tau}\cdot\vec{X}_{\mu},\Delta\right].
\end{eqnarray}
Relevant diagrams come from the gauge kinetic terms:
\begin{eqnarray}
\mathcal{L}_{gauge}\supset Tr\left[\left(\mathcal{D}_{\mu}\zeta\right)^{\dagger}\left(\mathcal{D}^{\mu}\zeta\right)\right]+Tr\left[\left(\mathcal{D}_{\mu}\Delta\right)^{\dagger}\left(\mathcal{D}^{\mu}\Delta\right)\right].
\end{eqnarray}
The masses of the scalar triplet $\Delta$ are given by:
\begin{eqnarray}
\begin{split}
m^2(\Delta_3)\simeq \mu^2_{\Delta}+(f_6-f_7)u_2^2+f_8 v_1^2\\
m^2(\Delta_2)\simeq \mu^2_{\Delta}+f_6 u_2^2+f_8 v_1^2\\
m^2(\Delta_1)\simeq \mu^2_{\Delta}+(f_6+f_7)u_2^2+f_8 v_1^2.
\end{split}
\end{eqnarray}
\section{Appendix C: Decay of $\Delta_3$ to $b\bar{b}$}
In the rest frame of $\Delta_3$:
\begin{eqnarray}
\Gamma_{\Delta_3\to b\bar{b}} = \frac{m_{\Delta_3}f_8^2}{8\pi}\left(1-\frac{m_b^2}{m_{\Delta_3}^2}\right)^{\frac{3}{2}}.
\end{eqnarray}
Now, for $m_b=4.18$ GeV, requiring the lifetime of $\Delta_3$ to be of the order of the age of the universe, $\sim 4.1\times 10^{17}$ sec, we obtain $f_8\simeq 2.008\times 10^{-22}$.
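The quoted value of $f_8$ can be reproduced by inverting the width formula above; $m_{\Delta_3}=1$ TeV is an assumed benchmark mass here (it reproduces the number quoted in the text).

```python
import math

HBAR_GEV_S = 6.582e-25    # hbar in GeV * s
AGE_UNIVERSE_S = 4.1e17   # age of the universe in seconds

def f8_for_lifetime(m_delta3, m_b=4.18, tau=AGE_UNIVERSE_S):
    """Coupling f_8 for which the Delta_3 lifetime equals tau, inverting
    Gamma = (m f_8^2 / 8 pi) * (1 - m_b^2 / m^2)^(3/2)."""
    gamma = HBAR_GEV_S / tau                       # width in GeV
    phase_space = (1.0 - (m_b / m_delta3) ** 2) ** 1.5
    return math.sqrt(8.0 * math.pi * gamma / (m_delta3 * phase_space))

# m_Delta3 = 1 TeV is an assumed benchmark mass
f8 = f8_for_lifetime(1000.0)
```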
\section{Introduction}
The terminology and notation used but undefined in this paper can
be found in~\cite{bondy}. Let $G=(V,E)$ be a graph. We use $V(G)$,
$E(G)$, $\Delta(G)$ and $\delta(G)$ to denote the vertex set, edge
set, maximum degree and minimum degree of $G$, respectively.
Particularly, we use $F(G)$ to denote the face set of $G$ when $G$
is a plane graph. Let $d_G(x)$ or simply $d(x)$, denote the degree
of a vertex (resp. face) $x$ in $G$. A vertex (resp. face) $x$ is
called a $k$-$vertex$ (resp. $k$-$face$), $k^+$-$vertex$ (resp.
$k^+$-$face$), $k^-$-$vertex$, or $k^{--}$-$vertex$ if $d(x)=k$,
$d(x)\geq k$, $2\leq d(x)\leq k$, or $1\leq d(x)\leq k$, respectively. We use
$(d_1, d_2, \cdots, d_n)$ to denote a face $f$ if $d_1, d_2,
\cdots, d_n$ are the degrees of vertices incident with the face
$f$ where $3\leq n\leq 5$. Let $\delta(f)$ denote the minimal
degree of vertices incident with $f$. In the following, let
$f_i(v)$ denote the number of $i$-faces incident with $v$ for each
$v\in V(G)$. Let $n_i(f)$ denote the number of $i$-vertices which
are incident with $f$. A graph $G$ is $k$-degenerate if every
subgraph of $G$ has a vertex of degree at most $k$. A cycle $C$ of
length $k$ is called a $k$-$cycle$. Moreover, if there exists an
edge $xy\in E(G)-E(C)$ with $x, y\in V(C)$, then the cycle $C$ is
called a chordal $k$-$cycle$.
A proper $k$-coloring of a graph $G$ is a mapping $\pi$ from the
vertex set $V(G)$ to the set of colors $\{1,2,\cdots,k\}$ such
that $\pi(x)\neq\pi(y)$ for every edge $xy\in E(G)$. A graph $G$
is equitably $k$-colorable if $G$ has a proper $k$-coloring such
that the sizes of the color classes differ by at most 1. The
equitable chromatic number of $G$, denoted by $\chi_e(G)$, is the
smallest integer $k$ such that $G$ is equitably $k$-colorable. The
equitable chromatic threshold of $G$, denoted by $\chi^*_e(G)$, is
the smallest integer $k$ such that $G$ is equitably $l$-colorable
for every $l\geq k$. It is obvious that $\chi_e(G)\leq
\chi^*_e(G)$ for any graph $G$. However, these two parameters need not
coincide: for the complete bipartite graph $K_{2n+1, 2n+1}$ ($n$ a
positive integer), $\chi_e(K_{2n+1, 2n+1})=2$ while
$\chi^*_e(K_{2n+1, 2n+1})=2n+2$.
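For $n=1$ the example above is $K_{3,3}$, which is small enough to check by brute force; the script below is an illustrative check, not part of the paper's argument. It confirms that $K_{3,3}$ is equitably $2$-colorable but not equitably $3$-colorable, while being equitably $l$-colorable for every $l\geq 4$, so $\chi_e(K_{3,3})=2$ and $\chi^*_e(K_{3,3})=4$.

```python
from itertools import product

def equitably_colorable(edges, n_vertices, k):
    """Brute force: does the graph admit a proper k-coloring whose
    color-class sizes differ by at most 1?"""
    for coloring in product(range(k), repeat=n_vertices):
        if any(coloring[u] == coloring[v] for u, v in edges):
            continue  # not a proper coloring
        sizes = [coloring.count(c) for c in range(k)]
        if max(sizes) - min(sizes) <= 1:
            return True
    return False

# K_{3,3}: parts {0, 1, 2} and {3, 4, 5}
k33 = [(u, v) for u in range(3) for v in range(3, 6)]
results = {k: equitably_colorable(k33, 6, k) for k in range(2, 7)}
# equitably 2-colorable but not 3-colorable, hence chi_e = 2, chi*_e = 4
```

The failure at $k=3$ reflects the fact that every color class of a complete bipartite graph must lie entirely inside one part, and a part of size $3$ cannot be partitioned into classes of size exactly $2$.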
In many applications of graph coloring, it is desirable that the
color classes are not too large. For example, when using a
coloring model to find an optimal final exam schedule, one would
like to have approximately equal number of final exams in each
time slot because the whole exam period should be as short as
possible and the number of classrooms available is limited.
Recently, ~\cite{pemma}, ~\cite{jans} used equitable colorings to
derive deviation bounds for sums of dependent random variables
that exhibit limited dependence. In all of these applications, the
fewer colors we use, the better the deviation bound is. Equitable
coloring has a well-known property that restricts the size of each
color class by its definition.
In 1970, ~\cite{hajs} proved that $\chi^*_e(G)\leq \Delta(G)+1$
for any graph $G$. This bound is sharp as the example of $K_{2n+1,
2n+1}$ shows. In 1973, ~\cite{meye} introduced the notion of
equitable coloring and made the following conjecture.
\begin{conjecture}\label{meyerconj} If $G$ is a connected graph which is neither a
complete graph nor odd cycle, then $\chi_e(G)\leq \Delta(G)$.
\end{conjecture}
In 1994, ~\cite{chenlw} put forth the following conjecture.
\begin{conjecture}\label{chenconj} For any connected graph $G$, if it is different
from a complete graph, a complete bipartite graph and an odd
cycle, then $\chi^*_e(G)\leq \Delta(G)$.
\end{conjecture}
~\cite{chenlw, chenl} proved Conjecture~\ref{chenconj} for graphs
with $\Delta(G)\leq 3$ or $\Delta(G)\geq \frac{|V(G)|}{2}$.
Recently, ~\cite{chenY} improved the former result and confirmed
the Conjecture~\ref{chenconj} for graphs with $\Delta(G)\geq
\frac{|V(G)|}{3}+1$. ~\cite{yapz1, yapz2} showed that
Conjecture~\ref{chenconj} holds for planar graphs with
$\Delta(G)\geq 13$. Recently, ~\cite{Nakprasit} confirmed the
Conjecture~\ref{chenconj} for planar graphs with $\Delta(G)\geq
9$. ~\cite{lihw} verified $\chi^*_e(G)\leq \Delta(G)$ for
bipartite graphs other than complete bipartite graphs.
~\cite{wanz} proved Conjecture~\ref{chenconj} for line graphs, and
~\cite{kosn1, kosn2} proved it for graphs with low average degree,
and $d$-degenerate graphs with $\Delta(G)\geq
14d+1$.~\cite{yanwang} showed that Conjecture~\ref{chenconj} holds
for Kronecker products of complete multipartite graphs and
complete graphs. ~\cite{jianl}, ~\cite{luo} confirmed
Conjecture~\ref{chenconj} for some planar graphs with large girth,
respectively. Recently, ~\cite{qiong}, ~\cite{jun1}, ~\cite{dong1,
dong2, dong3}, ~\cite{Nakprasit2} confirmed
Conjecture~\ref{chenconj} for some planar graphs with some
forbidden cycles, respectively. ~\cite{zhang}, ~\cite{jun2}
verified the Conjecture~\ref{chenconj} for some series-parallel
graphs and outerplanar graphs, respectively.
For a graph $G$ and a list assignment $L$ assigned to each vertex
$v\in V(G)$ a set $L(v)$ of acceptable colors, an $L$-coloring of
$G$ is a proper vertex coloring such that for every $v\in V(G)$
the color on $v$ belongs to $L(v)$. A list assignment $L$ for $G$
is $k$-$uniform$ if $|L(v)|=k$ for all $v\in V(G)$. A graph $G$ is
equitably $k$-choosable if, for any $k$-uniform list assignment
$L$, $G$ is $L$-colorable and each color appears on at most
$\lceil\frac{|V(G)|}{k}\rceil$ vertices.
In 2003, Kostochka, Pelsmajer and West investigated the equitable
list coloring of graphs. They proposed the following conjectures
in~\cite{kostochka}.
\begin{conjecture}\label{kostochconj1} Every graph $G$ is equitably $k$-choosable
whenever $k>\Delta(G)$.
\end{conjecture}
\begin{conjecture}\label{kostochconj2} If $G$ is a connected graph with maximum
degree at least $3$, then $G$ is equitably $\Delta(G)$-choosable,
unless $G$ is a complete graph or is $K_{k,k}$ for some odd $k$.
\end{conjecture}
It has been proved that Conjecture~\ref{kostochconj1} holds for
graphs with $\Delta(G)\leq3$ in~\cite{pelsmajer, wang} and for
graphs with $\Delta(G)\leq7$ in~\cite{kosn3}. In~\cite{kostochka},
Kostochka, Pelsmajer and West proved that a graph $G$ is equitably
$k$-choosable if either $G\neq K_{k+1}, K_{k,k}$ (with $k$ odd in
the case of $K_{k,k}$) and $k\geq\max\{\Delta(G),
\frac{|V(G)|}{2}\}$, or $G$ is a connected interval graph and
$k\geq\Delta(G)$, or $G$ is a $2$-degenerate graph and
$k\geq\max\{\Delta(G),5\}$.
In~\cite{pelsmajer}, Pelsmajer proved that every graph $G$ is
equitably $k$-choosable for any
$k\geq\frac{\Delta(G)(\Delta(G)-1)}{2}+2$.
Bu and his collaborators have established a series of results on
Conjecture~\ref{kostochconj2} for planar graphs
in~\cite{qiong,jun1,jun2,jun3}. Zhang and Wu proved
Conjecture~\ref{kostochconj2} for series-parallel graphs
in~\cite{zhang}. Some improved results on planar graphs were
obtained in~\cite{dong1}, \cite{dong2} and~\cite{dong3}.
In this paper, we improve the result in~\cite{qiong} and confirm
Conjectures~\ref{chenconj} and~\ref{kostochconj2} for some planar
graphs in which $4$- and $6$-cycles are allowed: we show that if
$G$ is a planar graph without chordal $4$- and $6$-cycles, then
$G$ is equitably $k$-colorable and equitably $k$-choosable
whenever $k\geq\max\{\Delta(G), 7\}$.
\section{Planar graphs without chordal 4- and 6-cycles}
First let us introduce some lemmas.
\vspace{-0.3cm}
\begin{lemma}\label{lemma1} Let $G$ be a planar
graph without chordal $4$- or $6$-cycles. Then in $G$, there is no
$3$-cycle adjacent to a $3$-cycle, nor a $4$-cycle adjacent to two
$3$-cycles. Furthermore, if $\delta(G)\geq3$, then there is no
$3$-cycle adjacent to a $5$-cycle, nor a $4$-cycle adjacent to a
$4$-cycle.
\end{lemma}
By Lemma~\ref{lemma1}, we have the following lemma.
\begin{lemma}\label{lemma01} Let $G$ be a planar graph with $\delta(G)\geq3$ and $f$ be a $3$-face
which is incident with a $3$-vertex in $G$. Then $f$ is adjacent
to at least one $6^+$-face.
\end{lemma}
\begin{lemma}\label{lemma2} Let $G$ be a planar graph without chordal $4$- and $6$-cycles.
If $\delta(G)\geq 4$, then $G$ contains the configuration $H$
depicted in Figure 1.
\end{lemma}
\begin{proof} Suppose to the contrary that $G$ does not contain
the configuration $H$ depicted in Figure 1, i.e. none of the
$(4,4,4)$-faces is adjacent to a $(4,4,4,4)$-face.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=2.5cm]{3-d}
\caption{}
\end{center}
\end{figure}
By Euler's formula, we have
\begin{eqnarray}\label{formula0}
\sum_{v\in V(G)}(2d(v)-6)+\sum_{f\in
F(G)}(d(f)-6)=-6(|V|-|E|+|F|)=-12.
\end{eqnarray}
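Indeed, this identity follows from Euler's formula
$|V|-|E|+|F|=2$ together with the handshake identities
$\sum_{v\in V(G)}d(v)=2|E|$ and $\sum_{f\in F(G)}d(f)=2|E|$, since
\begin{displaymath}
\sum_{v\in V(G)}(2d(v)-6)+\sum_{f\in F(G)}(d(f)-6)
=4|E|-6|V|+2|E|-6|F|=-6(|V|-|E|+|F|)=-12.
\end{displaymath}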
\vspace{2mm} Define an initial charge function $w$ on $V(G)\cup
F(G)$ by setting $w(v)=2d(v)-6$ if $v\in V(G)$ and $w(f)=d(f)-6$
if $f\in F(G)$. Then $\sum_{x\in V(G)\cup F(G)}w(x)=-12$ by
Equation~(\ref{formula0}). Now redistribute the charges according
to the following discharging rules.
\noindent $D1.$ If $f$ is a $3$-face incident with a vertex $v$,
then $v$ gives $1$ to $f$ if $d(v)=4$ and $f$ is a $(4,4,4)$-face,
$v$ gives $\frac{3}{4}$ if $d(v)=4$ and $f$ is a $3$-face of
another type, and $v$ gives $\frac{3}{2}$ if $d(v)\geq5$.
\noindent $D2.$ If $f$ is a $4$-face incident with a vertex $v$,
then $v$ gives $\frac{1}{2}$ to $f$ if $d(v)=4$ and $f$ is a
$(4,4,4,4)$-face, $v$ gives $\frac{2}{5}$ if $d(v)=4$ and $f$ is a
$4$-face of another type, and $v$ gives $\frac{4}{5}$ if
$d(v)\geq5$.
\noindent $D3.$ Transfer $\frac{1}{5}$ from each vertex $v$ to the
$5$-face which is incident with $v$.
\vspace{2mm} Let the new charge of each element $x\in V(G)\cup
F(G)$ be $w'(x)$. In the following, we will show that $\sum_{x\in
V(G)\cup F(G)}w'(x)\geq0$, a contradiction to
Equation~(\ref{formula0}). This
will complete the proof.\\
Consider any vertex $v\in V(G)$. \textbf{Suppose $d(v)=4$.} Then
$w(v)=2$ and $f_3(v)\leq 2$ by Lemma~\ref{lemma1}.
First, we assume that $f_3(v)=2$. Then $f_4(v)=0$ and $f_5(v)=0$
by Lemma~\ref{lemma1}. Thus $w'(v)\geq 2-1\times2=0$ by $D1$.
Now we assume that $f_3(v)=1$. Then $f_4(v)\leq 2$ by
Lemma~\ref{lemma1}. If $f_4(v)=2$, then $f_5(v)\leq 1$. Since $G$
does not contain the configuration $H$ depicted in Figure 1, we
have $w'(v)\geq 2-1-\frac{2}{5}\times2-\frac{1}{5}=0$ or
$w'(v)\geq2-\frac{3}{4}-\frac{1}{2}\times2-\frac{1}{5}=\frac{1}{20}>0$
by $D1$, $D2$ and $D3$. If $f_4(v)\leq1$, then $f_5(v)\leq 1$ by
Lemma~\ref{lemma1}. Thus $w'(v)\geq
2-1-\frac{1}{2}-\frac{1}{5}=\frac{3}{10}>0$ by $D1$, $D2$ and
$D3$.
Now we assume that $f_3(v)=0$. Then $f_4(v)\leq 2$, $f_5(v)\leq4$
by Lemma~\ref{lemma1}. Thus $w'(v)\geq
2-\frac{1}{2}\times2-\frac{1}{5}\times4=\frac{1}{5}>0$ by $D2$ and
$D3$.\\
\textbf{Suppose $d(v)=5$.} Then $w(v)=4$, $f_3(v)\leq2$ by
Lemma~\ref{lemma1}. If $f_3(v)=2$, then $f_4(v)\leq1$ and
$f_5(v)=0$ by Lemma~\ref{lemma1}. Thus $w'(v)\geq
4-\frac{3}{2}\times2-\frac{4}{5}=\frac{1}{5}>0$ by $D1$ and $D2$.
If $f_3(v)=1$, then $f_4(v)\leq2$ and $f_5(v)\leq2$ by
Lemma~\ref{lemma1}. Thus $w'(v)\geq
4-\frac{3}{2}-\frac{4}{5}\times2-\frac{1}{5}\times2=\frac{1}{2}>0$
by $D1$, $D2$ and $D3$. If $f_3(v)=0$, then $f_4(v)\leq2$ and
$f_5(v)\leq5$ by Lemma~\ref{lemma1}. Thus $w'(v)\geq
4-\frac{4}{5}\times2-\frac{1}{5}\times5=\frac{7}{5}>0$ by $D2$ and
$D3$.\\
\textbf{Suppose $d(v)\geq6$.} Then $w(v)=2d(v)-6$, $f_4(v)\leq
d(v)-2f_3(v)$, $f_5(v)\leq d(v)-2f_3(v)$ by Lemma~\ref{lemma1}. So
$w'(v)\geq
2d(v)-6-\frac{3}{2}f_3(v)-\frac{4}{5}f_4(v)-\frac{1}{5}f_5(v)\geq
d(v)-6+\frac{1}{2}f_3(v)\geq d(v)-6\geq0$ by $D1$, $D2$ and
$D3$.\\
Consider any face $f\in F(G)$. \textbf{Suppose $d(f)=3$.} Then
$w(f)=-3$. If $f$ is a $(4,4,4)$-face, then $w'(f)=-3+1\times3=0$
by $D1$. Otherwise,
$w'(f)\geq -3+\frac{3}{4}+\frac{3}{4}+\frac{3}{2}=0$ by $D1$.\\
\textbf{Suppose $d(f)=4$.} Then $w(f)=-2$. If $f$ is a
$(4,4,4,4)$-face, we have that $w'(f)\geq -2+\frac{1}{2}\times4=0$
by $D2$. Otherwise, $w'(f)\geq
-2+\frac{2}{5}\times3+\frac{4}{5}=0$ by $D2$.\\
\textbf{Suppose $d(f)=5$.} Then $w(f)=-1$. We have $w'(f)\geq
-1+\frac{1}{5}\times5=0$ by $D3$.\\
\textbf{Suppose $d(f)\geq6$.} Then $w'(f)=w(f)\geq0$.
\end{proof}
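The fractional bookkeeping in the proof above is easy to get wrong; the worst-case totals for each case can be double-checked mechanically with exact rational arithmetic. The following is our own sanity check (the case labels are ours), not part of the proof:

```python
from fractions import Fraction as F

# Worst-case new charges w'(x) from the proof of Lemma 2:
# initial charge minus the charges sent or received by rules D1-D3.
cases = [
    ("d=4, f3=2",            2 - 1 * 2),
    ("d=4, f3=1, f4=2 (i)",  2 - 1 - F(2, 5) * 2 - F(1, 5)),
    ("d=4, f3=1, f4=2 (ii)", 2 - F(3, 4) - F(1, 2) * 2 - F(1, 5)),
    ("d=4, f3=1, f4<=1",     2 - 1 - F(1, 2) - F(1, 5)),
    ("d=4, f3=0",            2 - F(1, 2) * 2 - F(1, 5) * 4),
    ("d=5, f3=2",            4 - F(3, 2) * 2 - F(4, 5)),
    ("d=5, f3=1",            4 - F(3, 2) - F(4, 5) * 2 - F(1, 5) * 2),
    ("d=5, f3=0",            4 - F(4, 5) * 2 - F(1, 5) * 5),
    # Face cases: (4,4,4)-face, other 3-face, (4,4,4,4)-face,
    # other 4-face, and 5-face.
    ("f=(4,4,4)",            -3 + 1 * 3),
    ("f=3-face, other",      -3 + F(3, 4) * 2 + F(3, 2)),
    ("f=(4,4,4,4)",          -2 + F(1, 2) * 4),
    ("f=4-face, other",      -2 + F(2, 5) * 3 + F(4, 5)),
    ("f=5-face",             -1 + F(1, 5) * 5),
]
for label, total in cases:
    assert total >= 0, label  # every element ends with nonnegative charge
```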
\begin{lemma}\label{jun1} (\cite{jun1}) Let $S=\{x_1, x_2, \cdots, x_k\}$ be a set of $k$
different vertices in $G$ such that $G-S$ has an equitable
$k$-coloring. If $|N_G(x_i)-S|\leq k-i$ for $1\leq i\leq k$, then
$G$ has an equitable $k$-coloring.
\end{lemma}
\begin{lemma}\label{kostochka1} (\cite{kostochka}) Let $G$ be a graph with
a $k$-uniform list assignment $L$. Let $S=\{x_1,
x_2,\cdots,x_k\}$, where $x_1, x_2,\cdots,x_k$ are distinct
vertices in $G$. If $G-S$ has an equitable $L$-coloring and
$|N_G(x_i)-S|\leq k-i$ for $1\leq i\leq k$, then $G$ has an
equitable $L$-coloring.
\end{lemma}
\begin{lemma}(\cite{borodin})\label{lemborodin} Every planar graph without adjacent triangles is
$4$-degenerate.
\end{lemma}
By Lemma~\ref{lemborodin}, we have the following corollary.
\begin{corollary}\label{cor4degenerate}
Let $G$ be a planar graph without chordal $4$-cycles. Then $G$ is
$4$-degenerate.
\end{corollary}
\begin{lemma}\label{lemma3} Let $G$ be a connected planar graph with order at least
$5$ and without chordal $4$- and $6$-cycles. If $\delta(G)\leq3$,
then $G$ has at least one of the configurations depicted in Figure
2.
\end{lemma}
\vspace{-0.3cm}
\begin{proof}
Suppose to the contrary that $G$ does not contain the
configurations $H_1,\ldots,H_{41}$ depicted in Figure 2.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=13.5cm]{1}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\textwidth]{2}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=13.5cm]{3}
\caption{}
\end{center}
\end{figure}
\emph{Each configuration depicted in Figure 2 is such that:
(1) the vertices labelled $x_k, x_{k-1}, x_{k-2}$ are distinct,
and the other vertices may coincide if they have the same degree
and no multiple edges result; (2) solid vertices have no incident
edges other than the ones shown; (3) unless otherwise specified,
the degree of a hollow vertex may be any integer in
$[d, \Delta(G)]$, where $d$ is the number of edges incident with
the hollow vertex shown in the configuration; and (4) the vertices
on the boundary of a $4$-face can be reordered, except for a
vertex which is also adjacent to a labelled vertex that is not on
the boundary of the $4$-face.}\\
A face is said to be a $special$ $face$ if it is a
$(3,3,5^+)$-, $(3,4,4)$-, $(3,4,5)$- or a $(3,4,6)$-face. In the
following, we call a $3$-vertex a $special$ $3$-$vertex$ if it is
incident with a special face; otherwise, it is called a $simple$
$3$-$vertex$.
Since $G$ contains neither $H_1$ nor $H_2$, we obtain the
following property.
\begin{claim}\label{claim1} There is at most one special face in
$G$.
\end{claim}
By Claim~\ref{claim1}, $G$ has at most two special $3$-vertices.
For convenience, let $n_3(v)$ denote the number of simple
$3$-vertices adjacent to $v$ for each $v\in V(G)$. Since $G$
contains neither $H_{3}$ nor $H_{4}$, we can conclude the
following properties.
\begin{claim}\label{claim2} For each $v\in V(G)$ with $d(v)\geq4$, if
$v$ is adjacent to a simple $3$-vertex which is adjacent to two
$3^{--}$-vertices, then $v$ is not adjacent to any other
$4^{--}$-vertex.
\end{claim}
\begin{claim}\label{claim3} For any $v\in V(G)$ with $d(v)\geq4$, $v$ is adjacent to at
most one simple $3$-vertex which is adjacent to another
$3^{--}$-vertex.
\end{claim}
By Euler's formula $|V|-|E|+|F|=2$ and the identities $\sum_{v\in
V(G)}d(v)=\sum_{f\in F(G)}d(f)=2|E|$, we have
\begin{eqnarray}\label{formula1}
\sum_{v\in V(G)}(3d(v)-10)+\sum_{f\in
F(G)}(2d(f)-10)=-10(|V|-|E|+|F|)=-20.
\end{eqnarray}
Define an initial charge function $w$ on $V(G)\cup F(G)$ by
setting $w(v)=3d(v)-10$ if $v\in V(G)$ and $w(f)=2d(f)-10$ if
$f\in F(G)$.
In the following, we divide the proof into four cases.
\vspace{0.3cm} \noindent {\bf Case 1.} $\delta(G)=3$.
Since $G$ does not contain the configuration $H_{5}$, $G$ has the
following property.
\begin{fact}\label{fact11} Any $3$-face in $G$ is a $(3, 3, 5^+)$-,
$(3,4^+,4^+)$- or $(4^+,4^+,4^+)$-face, i.e. there is no
$(3,3,4^-)$-face.
\end{fact}
Since $G$ does not contain the configuration
$H_{6}$, $G$ has the following property.
\begin{fact}\label{fact12} Any $4$-face in $G$ is a $(3, 3,
5^+,5^+)$-, $(3,4^+,4^+,4^+)$- or $(4^+,4^+,4^+,4^+)$-face, i.e.
there is no $(3,3,3,3^+)$- or $(3,3,4,4^+)$-face.
\end{fact}
For convenience, if a face is a $(3,3,5,5^+)$- or
$(3,4,5^-,6^-)$-face, then we call it a $bad$ $face$. The
$3$-vertex which is incident with a bad face is said to be a $bad$
$3$-$vertex$. If a vertex $v$ is adjacent to a bad $3$-vertex $w$
and $v$ is not incident with the bad face $f$ which is incident
with the vertex $w$, then we say that $v$ is $weakly$ $incident$
with the bad face $f$.
Now redistribute the charge according to the following discharging
rules.
\begin{itemize}
\item \textbf{$R1.$ Transfer $1$ from each $5^+$-vertex to every adjacent simple
$3$-vertex which is adjacent to exactly two $3^{--}$-vertices}.
\item \textbf{$R2.$ Transfer $\frac{1}{2}$ from each $4^+$-vertex
to every adjacent simple $3$-vertex which is adjacent to exactly
one $3^{--}$-vertex}.
\item \textbf{$R3.$ Transfer $\frac{1}{3}$ from each $4^+$-vertex
to every adjacent simple $3$-vertex which is not adjacent to any
$3^{--}$-vertex}.
\item \textbf{$R4.$ Transfer $\frac{1}{3}$ from each $6^+$-face
$f$ to every adjacent $3$-face and $4$-face via each common edge}.
\item \textbf{$R5.$ If $f$ is a $4$-face incident with a vertex
$v$, then $v$ gives $\frac{1}{2}$ to $f$ if $d(v)=4$ and $f$ is a
$(3,4,5^-,5^-)$- or $(4,4,4,4^+)$-face, $\frac{1}{3}$ if $d(v)=4$
and $f$ is either a $(3,4,4^+,6^+)$- or a $(4,4^+,5^+,5^+)$-face;}
\textbf{$\frac{1}{2}$ if $d(v)=5$ and $f$ is a
$(5,5^+,5^+,5^+)$-face; $\frac{2}{3}$ if $d(v)=5$ and $f$ is a
$4$-face of another type};
\textbf{$1$ if $d(v)=6$ and $f$ is a $(3,3,6,6^+)$- or
$(3,4,6,4^+)$-face, $\frac{2}{3}$ if $d(v)=6$ and $f$ is a
$(3,6,5^+,5^+)$- or $(4^+,6,4^+,4^+)$-face};
\textbf{$\frac{4}{3}$ if $d(v)\geq 7$}.
\item \textbf{$R6.$ If $f$ is a $3$-face incident with a vertex
$v$ with $d(v)=4$, then $v$ gives $\frac{2}{3}$ to $f$} if $f$ is
a $(3,4,4^+)$-face, $\frac{4}{3}$ if $f$ is a $(4,4,4)$-face, $1$
if $f$ is a $(4,4,5^+)$- or $(4,5,5^+)$-face, $0$ if $f$ is a
$(4,6^+,6^+)$-face;
\textbf{If $f$ is a $3$-face incident with a vertex $v$ with
$d(v)=5$, then $v$ gives $\frac{11}{6}$ to $f$ if $f$ is a
$(3,5,3^+)$-face, $2$ if $f$ is a $(4,4,5)$- or $(4,5,5)$-face,
$\frac{4}{3}$ if $f$ is a $(5,5,5)$-face, $1$ if $f$ is a
$(4,5,6^+)$-, $(5,5,6^+)$- or $(5,6^+,6^+)$-face};
\textbf{If $f$ is a $3$-face incident with a vertex $v$ with
$d(v)=6$, then $v$ gives $2$ to $f$};
\textbf{If $f$ is a $3$-face incident with a vertex $v$ with
$d(v)\geq7$, then $v$ gives $3$ to $f$ if $f$ is a $(3,3,7^+)$- or
$(3,4,7^+)$-face, $2$ if $f$ is a $(3,5^+,7^+)$- or
$(4^+,4^+,7^+)$-face}.
\item \textbf{$R7.$ If $f$ is a bad face and $v$ is weakly
incident with $f$, then $v$ gives charge $\frac{1}{2}$ to $f$}.
\end{itemize}
\vspace{0.2cm}
In the following, let us check the new charge of each element $x$
for $x\in V(G)\cup F(G)$.
For convenience, we use $f_{k}^{\alpha}(v)$ (respectively,
$n_3^{\alpha}(v)$) to denote the number of $k$-faces incident with
$v$ (respectively, $3$-vertices adjacent to $v$) which receive
charge at least $\alpha$ from $v$ according to the discharging
rules.
By Claim~\ref{claim2}, Claim~\ref{claim3}, $R1$, $R2$ and $R3$, we
have the following fact.
\begin{fact}\label{fact13}
For each $v\in V(G)$, obviously, $n_3^{\frac{1}{2}}(v)\leq1$, and
if $n_3^{1}(v)\neq0$, then $n_3(v)=1$ and the degrees of other
neighbors of $v$ are at least 5.
\end{fact}
Since $G$ contains no configurations $H_{7}$ and $H_{8}$, the
following fact holds.
\begin{fact}\label{fact14} For
each $v\in V(G)$, $v$ is weakly incident with at most one bad
face. Furthermore, if $v$ is weakly incident with a bad face, then
$n_3(v)=1$.
\end{fact}
Let $v\in V(G)$. \textbf{Suppose $d(v)=3$}. Then $w(v)=-1$. Since
$G$ contains no configuration $H_9$, $v$ is not weakly incident
with any bad face. Since $G$ contains no configuration $H_{10}$,
$v$ is adjacent to at least one $5^+$-vertex or is adjacent to at
least two $4^+$-vertices. If $v$ is a simple $3$-vertex, then
$w'(v)=-1+1=0$ by $R1$, $w'(v)=-1+\frac{1}{2}\times2=0$ by $R2$ or
$w'(v)=-1+\frac{1}{3}\times3=0$ by $R3$. Otherwise, i.e. if $v$ is
a special $3$-vertex, then $w'(v)=w(v)=-1$.\\
\textbf{Suppose $d(v)=4$}. Then $w(v)=2$.
First, we assume that $v$ is weakly incident with a bad face.
Since $G$ contains no configuration $H_{11}$, we have
$f_3(v)\leq1$. Additionally, if $f_3^{\alpha}(v)=1$, then
$\alpha=0$, because $G$ contains no configuration $H_{12}$ and by
$R6$. By Lemma~\ref{lemma1}, we have $f_4(v)\leq1$. Clearly,
$w'(v)\geq 2-\frac{1}{2}-\frac{1}{2}-\frac{1}{2}=\frac{1}{2}>0$ by
Fact~\ref{fact14}, $R2$, $R5$ and $R7$.
Now we assume that $v$ is not weakly incident with a bad face.
Clearly, we have $f_3(v)\leq2$. For convenience, we divide the
proof into the following cases.\\
\textbf{Case $1.1$} $f_3(v)=2$. Then $n_3(v)\leq 1$ and
$f_4(v)=0$, for the reason that $G$ contains no configuration
$H_{13}$, by Fact~\ref{fact11} and Lemma~\ref{lemma1}. If
$f_3(v)=2$, $n_3(v)=1$, then we
have that $f_3^{\frac{4}{3}}(v)=0$ and $n_3^{\frac{1}{2}}(v)=0$
for the reason that $G$ contains no configurations $H_{15}$,
$H_{14}$ and by $R6$, $R2$, $R1$. Clearly, $w'(v)\geq
2-\frac{2}{3}-1-\frac{1}{3}=0$ by $R6$ and $R3$. If $f_3(v)=2$,
$n_3(v)=0$ and $f_3^{\frac{4}{3}}(v)\neq0$, then we have that
$w'(v)\geq 2-\frac{4}{3}=\frac{2}{3}>0$ for the reason that $G$
contains no configuration $H_{15}$ and by $R6$. If $f_3(v)=2$,
$n_3(v)=0$ and $f_3^{\frac{4}{3}}(v)=0$, then we
have that $w'(v)\geq 2-1\times2=0$ by $R6$.\\
\textbf{Case $1.2$} $f_3(v)=1$. Then $f_4(v)\leq2$ by
Lemma~\ref{lemma1}.
\textbf{Case $1.2.1$} $f_4(v)=2$.
If $f_4(v)=2$ and the $3$-face incident with $v$ is a
$(3,4,4^+)$-face, then $n_3(v)=1$ and $n_3^{\frac{1}{2}}(v)=0$ for
the reason that $G$ contains no configurations $H_{13}$, $H_{14}$
and by $R2$, $R1$. Thus $w'(v)\geq
2-\frac{2}{3}-\frac{1}{2}\times2-\frac{1}{3}=0$ by $R6$, $R5$,
$R3$.
If $f_4(v)=2$ and the $3$-face incident with $v$ is a
$(4,4,4)$-face, then $n_3(v)=0$, $f_4^{\frac{1}{2}}(v)=0$ for the
reason that $G$ contains no configuration $H_{16}$ and $R5$. Thus
$w'(v)\geq 2-\frac{4}{3}-\frac{1}{3}\times2=0$ by $R6$, $R5$.
If $f_4(v)=2$ and the $3$-face is a $(4,4,5^+)$- or
$(4,5,5^+)$-face, then $n_3(v)\leq1$ for the reason that $G$
contains no configuration $H_{17}$. First, we assume $n_3(v)=1$.
Since $G$ contains no configurations $H_{18}$, $H_{19}$ and by
$R5$, we have that $f_4^{\frac{1}{2}}(v)=0$. So $w'(v)\geq
2-1-\frac{1}{3}\times2-\frac{1}{3}=0$ by $R6$, $R5$ and $R3$. Now,
we assume that $n_3(v)=0$. Thus $w'(v)\geq 2-1-\frac{1}{2}\times2=0$
by $R6$ and $R5$.
If $f_4(v)=2$ and the $3$-face is a $(4,6^+,6^+)$-face, then
$f_3^{\frac{2}{3}}(v)=0$ by $R6$. Furthermore, as $n_3(v)\leq2$,
we have that $w'(v)\geq
2-\frac{1}{2}\times2-\frac{1}{3}-\frac{1}{2}=\frac{1}{6}>0$ by
$R5$, $R3$ and $R2$. By Fact~\ref{fact11}, this concludes the case
where $f_4(v)=2$.\\
\textbf{Case $1.2.2$} $f_4(v)=1$.
If $f_4(v)=1$ and the $3$-face is a $(4,4,4)$-face, then
$n_3(v)=0$ for the reason that $G$ contains no configurations
$H_{16}$, $H_{20}$ and $H_{21}$. Thus $w'(v)\geq
2-\frac{4}{3}-\frac{1}{2}=\frac{1}{6}>0$ by $R6$ and $R5$.
If $f_4(v)=1$ and the $3$-face is a $(4,4,5^+)$- or
$(4,5,5^+)$-face, then $n_3(v)=1$ and $n_3^{\frac{1}{2}}(v)=0$ for
the reason that $G$ contains no configurations $H_{17}$, $H_{22}$
and by $R2$, $R1$. Thus
$w'(v)\geq2-1-\frac{1}{2}-\frac{1}{3}=\frac{1}{6}>0$ by $R6$,
$R5$, $R3$.
If $f_4(v)=1$ and the $3$-face is a $(3,4,4^+)$-face, then
$n_3(v)=1$ and $n_3^{\frac{1}{2}}(v)=0$ for the reason that $G$
contains no configurations $H_{13}$ and $H_{14}$ and by $R2$,
$R1$. Thus
$w'(v)\geq2-\frac{2}{3}-\frac{1}{2}-\frac{1}{3}=\frac{1}{2}>0$ by
$R6$, $R5$ and $R3$.
If $f_4(v)=1$ and the $3$-face is a $(4,6^+,6^+)$-face, then
$f_3^{\frac{2}{3}}(v)=0$, $n_3(v)\leq2$ by Fact~\ref{fact12} and
$R6$. Thus $w'(v)\geq
2-\frac{1}{2}-\frac{1}{2}\times2=\frac{1}{2}>0$ by $R5$ and $R2$.
By Fact~\ref{fact11}, this completes the subcase.\\
\textbf{Case $1.2.3$} $f_4(v)=0$.
If the $3$-face is a $(4,4,4)$-face, then $n_3(v)\leq1$ and
$n_3^{\frac{1}{2}}(v)=0$ for the reason that $G$ contains no
configurations $H_{17}$, $H_{22}$ and by $R2$, $R1$. Thus
$w'(v)\geq 2-\frac{4}{3}-\frac{1}{3}=\frac{1}{3}>0$ by $R6$ and
$R3$.
If the $3$-face is a $(3,4,4^+)$- or $(4,4^+,5^+)$-face, then
$n_3(v)\leq2$ for the reason that $G$ contains no configuration
$H_{13}$. Thus $w'(v)\geq 2-1-1=0$ by Fact~\ref{fact13}, $R5$ and
$R1$. By
Fact~\ref{fact11}, this concludes the subcase $f_4(v)=0$.\\
\textbf{Case $1.3$} $f_3(v)=0$. Then $f_4(v)\leq2$ by
Lemma~\ref{lemma1}.
If $f_4(v)=2$, then $n_3(v)\leq2$ by Fact~\ref{fact12}. Thus
$w'(v)\geq2-\frac{1}{2}\times2-1=0$ by Fact~\ref{fact13}, $R5$,
$R2$ and $R3$. If $f_4(v)=1$, then $n_3(v)\leq3$ by
Fact~\ref{fact12}. Thus
$w'(v)\geq2-\frac{1}{2}-\frac{1}{2}-\frac{1}{3}\times2=\frac{1}{3}>0$
by $R5$, $R2$ and $R3$. Otherwise, $f_4(v)=0$, and then
$n_3(v)\leq4$. Thus $w'(v)\geq
2-\frac{1}{2}-\frac{1}{3}\times3=\frac{1}{2}>0$ by
Fact~\ref{fact13}, $R2$ and $R3$.\\
\textbf{Suppose $d(v)=5$}. Then $w(v)=5$.
\textbf{Case $1.4$} $v$ is weakly incident with a bad face.
Clearly, $f_3(v)\leq2$. Furthermore, if $f_3(v)=2$, then
$f_4(v)\leq1$ by Lemma~\ref{lemma1}.
If $f_3(v)=2$ and $f_4(v)=1$, then one of the two $3$-faces must
be adjacent to a bad face which is weakly incident with $v$ by
Lemma~\ref{lemma1}. Obviously, it is a $(3,5,3^+)$-face. In
detail, it is a special face (i.e. a $(3,5,3)$-face) or a
$(3,5,4^+)$-face. Since $G$ contains no configuration $H_{23}$,
the other $3$-face is neither a $(4,4,5)$- nor a $(4,5,5)$-face.
Thus
$w'(v)\geq5-\frac{11}{6}-\frac{4}{3}-\frac{2}{3}-\frac{1}{2}=\frac{2}{3}>0$
or
$w'(v)\geq5-\frac{11}{6}-\frac{4}{3}-\frac{2}{3}-\frac{1}{2}-\frac{1}{2}=\frac{1}{6}>0$
by Fact~\ref{fact14}, $R6$, $R5$, $R7$ and $R2$.
If $f_3(v)=2$ and $f_4(v)=0$, we have that
$w'(v)\geq5-2\times2-\frac{1}{2}-\frac{1}{2}=0$ by $R6$, $R2$ and
$R7$.
If $f_3(v)\leq1$, then $f_4(v)\leq2$. We have that
$w'(v)\geq5-2-\frac{2}{3}\times2-\frac{1}{2}-\frac{1}{2}=\frac{2}{3}>0$
by $R6$, $R5$, $R7$ and $R2$.\\
\textbf{Case $1.5$} $v$ is not weakly incident with a bad face.
Clearly, $f_3(v)\leq2$.
\textbf{Case $1.5.1$} $f_3(v)=2$. Then $f_4(v)\leq1$.
If both of the $3$-faces are $(4,4,5)$- or $(4,5,5)$-faces, then
$n_3(v)=0$ for the reason that $G$ contains no configuration
$H_{24}$. Thus $w'(v)\geq5-2\times2-\frac{2}{3}=\frac{1}{3}>0$ by
$R6$ and $R5$.
If only one of the $3$-faces is a $(4,4,5)$- or $(4,5,5)$-face,
then the other $3$-face is not a $(3,5,3^+)$-face for the reason
that $G$ contains no configuration $H_{23}$. Thus $n_3(v)\leq1$
and $n_3^{1}(v)=0$ by Fact~\ref{fact13}. We have that
$w'(v)\geq5-2-\frac{4}{3}-\frac{2}{3}-\frac{1}{2}=\frac{1}{2}>0$
by $R6$, $R5$ and $R2$.
If both of the $3$-faces are $(3,5,3^+)$-faces, then $n_3(v)=2$,
$n_3^{\frac{1}{2}}(v)=0$ for the reason that $G$ contains no
configurations $H_{25}$, $H_{26}$ and by $R2$, $R1$. Thus
$w'(v)\geq5-\frac{11}{6}\times2-\frac{2}{3}-\frac{1}{3}\times2=0$
by $R6$, $R5$ and $R3$.
If only one of the $3$-faces is a $(3,5,3^+)$-face, then
$n_3(v)\leq2$, $n_3^{\frac{1}{2}}(v)\leq1$ for the reason that $G$
contains no configurations $H_{25}$ and $H_{26}$ and by $R2$,
$R1$. Thus
$w'(v)\geq5-\frac{11}{6}-\frac{4}{3}-\frac{2}{3}-\frac{1}{3}-\frac{1}{2}=\frac{1}{3}>0$
by $R6$, $R5$, $R3$ and $R2$.
If neither of the $3$-faces is a $(3,5,3^+)$-, $(4,4,5)$- or
$(4,5,5)$-face, then $n_3(v)\leq1$. Thus
$w'(v)\geq5-\frac{4}{3}\times2-\frac{2}{3}-1=\frac{2}{3}>0$ by
$R6$, $R5$ and $R1$.\\
\textbf{Case $1.5.2$} $f_3(v)=1$. Then $f_4(v)\leq2$,
$n_3(v)\leq4$ ($v$ could be adjacent to five $3$-vertices, but at
most four of them are simple) by Lemma~\ref{lemma1}. Clearly,
$w'(v)\geq5-2-\frac{2}{3}\times2-\frac{1}{2}-\frac{1}{3}\times3=\frac{1}{6}>0$
by Fact~\ref{fact13}, $R6$, $R5$, $R2$ and $R3$.\\
\textbf{Case $1.5.3$} $f_3(v)=0$. Then $f_4(v)\leq2$,
$n_3(v)\leq5$ by Lemma~\ref{lemma1}. Clearly,
$w'(v)\geq5-\frac{2}{3}\times2-\frac{1}{2}-\frac{1}{3}\times4=\frac{11}{6}>0$
by Fact~\ref{fact13}, $R5$, $R2$ and $R3$.\\
\textbf{Suppose $d(v)=6$}. Then $w(v)=8$.
First, we assume that $v$ is weakly incident with a bad face.
Clearly, $f_3(v)\leq3$. If $f_3(v)=3$, then $f_4(v)=0$ by
Lemma~\ref{lemma1}. Clearly,
$w'(v)\geq8-2\times3-\frac{1}{2}-\frac{1}{2}=1>0$ by
Fact~\ref{fact14}, $R6$, $R7$ and $R2$. If $f_3(v)\leq2$, then
$f_4(v)\leq2$. Clearly,
$w'(v)\geq8-2\times2-1\times2-\frac{1}{2}-\frac{1}{2}=1>0$ by
Fact~\ref{fact14}, $R6$, $R5$, $R7$ and $R2$.\\
Now we assume that $v$ is not weakly incident with a bad face.
Clearly, $f_3(v)\leq3$. If $f_3(v)=3$, then $f_4(v)=0$,
$n_3(v)\leq3$ (a $3$-face is incident with at most one simple
$3$-vertex) by Lemma~\ref{lemma1}. Thus $w'(v)\geq
8-2\times3-\frac{1}{2}-\frac{1}{3}\times2=\frac{5}{6}>0$ by
Fact~\ref{fact13}, $R6$, $R2$ and $R3$. If $f_3(v)=2$, then
$f_4(v)\leq2$, $n_3(v)\leq4$ by Lemma~\ref{lemma1}. Thus
$w'(v)\geq
8-2\times2-1\times2-\frac{1}{3}\times3-\frac{1}{2}=\frac{1}{2}>0$
by $R6$, $R5$, $R3$ and $R2$. If $f_3(v)\leq1$, then
$f_4(v)\leq3$, $n_3(v)\leq6$ by Lemma~\ref{lemma1}. Clearly,
$w'(v)\geq 8-2-1\times3-\frac{1}{3}\times5-\frac{1}{2}=\frac{5}{6}>0$
by $R6$, $R5$, $R3$ and $R2$.\\
\textbf{Suppose $d(v)=7$}. Then $w(v)=11$.
First, we assume that $v$ is weakly incident with a bad face.
Clearly, $f_3(v)\leq3$ by Lemma~\ref{lemma1}. Furthermore,
$f_3^{3}(v)\leq1$ for the reason that $G$ contains no
configuration $H_{27}$ and by $R6$. If $f_3(v)=3$, then
$f_4(v)\leq1$ by Lemma~\ref{lemma1}. Clearly,
$w'(v)\geq11-3-2\times2-\frac{4}{3}-\frac{1}{2}-\frac{1}{2}=\frac{5}{3}>0$
by $R6$, $R5$, $R7$ and $R2$. If $f_3(v)\leq2$, then
$f_4(v)\leq3$. Clearly,
$w'(v)\geq11-3-2-\frac{4}{3}\times3-\frac{1}{2}-\frac{1}{2}=1>0$
by $R6$, $R5$, $R7$ and $R2$.\\
Now we assume that $v$ is not weakly incident with a bad face.
Clearly, we have $f_3(v)\leq3$. Since $G$ contains no
configuration $H_{27}$, there exists at most one $(3,4,7)$-face
which is incident with $v$. If $f_3(v)=3$, then $f_4(v)\leq1$,
$n_3(v)\leq4$ by Lemma~\ref{lemma1}. Thus
$w'(v)\geq11-3-2\times2-\frac{4}{3}-\frac{1}{3}\times
3-\frac{1}{2}=\frac{7}{6}>0$ by Fact~\ref{fact13}, $R6$, $R5$,
$R3$ and $R2$. If $f_3(v)=2$, then $f_4(v)\leq3$, $n_3(v)\leq5$ by
Lemma~\ref{lemma1}. Thus
$w'(v)\geq11-3-2-\frac{4}{3}\times3-\frac{1}{3}\times4-\frac{1}{2}=\frac{1}{6}>0$
by Fact~\ref{fact13}, $R6$, $R5$, $R3$ and $R2$. If $f_3(v)\leq1$,
then $f_4(v)\leq3$, $n_3(v)\leq7$ by Lemma~\ref{lemma1}. Thus
$w'(v)\geq11-3-\frac{4}{3}\times3-\frac{1}{3}\times6-\frac{1}{2}=\frac{3}{2}>0$
by Fact~\ref{fact13}, $R6$, $R5$, $R3$ and $R2$.\\
\textbf{Suppose $d(v)\geq8$}. Then $w(v)=3d(v)-10$.
In any case, whether $v$ is weakly incident with a bad face or
not, we have
\begin{eqnarray}\label{formula20}
f_3(v)+f_4(v)\leq \frac{3}{4}d(v)
\end{eqnarray} by Lemma~\ref{lemma1}. Moreover,
\begin{eqnarray}\label{formula2}
f_3^{3}(v)\leq1
\end{eqnarray}
for the reason that $G$ contains no configuration $H_{27}$ and by
$R6$. Since a $3$-face has at most one simple $3$-vertex,
\begin{eqnarray}\label{formula3}
n_3(v)\leq f_3(v)+d(v)-2f_3(v)=d(v)-f_3(v).
\end{eqnarray}
It follows from (\ref{formula20}) and (\ref{formula3}) that
$f_4(v)\leq \frac{3}{4}d(v)-f_3(v)$ and $n_3(v)\leq d(v)-f_3(v)$,
respectively. Thus $w'(v)\geq
3d(v)-10-3-2(f_3(v)-1)-\frac{4}{3}f_4(v)-\frac{1}{2}-\frac{1}{3}(n_3(v)-1)-\frac{1}{2}\geq
3d(v)-10-3-2f_3(v)+2-d(v)+\frac{4}{3}f_3(v)-\frac{1}{2}-\frac{1}{3}d(v)+\frac{1}{3}f_3(v)+\frac{1}{3}-\frac{1}{2}=
\frac{5}{3}d(v)-\frac{1}{3}f_3(v)-\frac{70}{6}$ by $R6$, $R5$,
$R2$, $R3$ and $R7$. Since
\begin{displaymath}
f_3(v)\leq\frac{1}{2}d(v),
\end{displaymath}
we obtain $w'(v)\geq \frac{3}{2}d(v)-\frac{70}{6}\geq
\frac{1}{3}>0$.\\
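The final chain of inequalities can also be checked numerically: for every $d(v)\geq 8$ and every admissible $f_3(v)$, the lower bound $\frac{5}{3}d(v)-\frac{1}{3}f_3(v)-\frac{70}{6}$ stays at least $\frac{1}{3}$. The following exact-rational check is our own addition (the range of degrees sampled is arbitrary), not part of the proof:

```python
from fractions import Fraction as F

# Lower bound on w'(v) for d(v) >= 8 from the proof:
# w'(v) >= (5/3) d - (1/3) f3 - 70/6, with f3 <= d/2.
def lower_bound(d, f3):
    return F(5, 3) * d - F(1, 3) * f3 - F(70, 6)

for d in range(8, 41):            # sample of degrees d(v) >= 8
    for f3 in range(d // 2 + 1):  # f3(v) <= d(v)/2
        assert lower_bound(d, f3) >= F(1, 3)
# The minimum is attained at d = 8, f3 = 4, giving (3/2)*8 - 70/6 = 1/3.
assert lower_bound(8, 4) == F(1, 3)
```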
Now we consider $f\in F(G)$. \textbf{Suppose $d(f)=3$}. Then
$w(f)=-4$. By Fact~\ref{fact11}, we only discuss the following
situations. If $f$ is a special $(3,3,5^+)$-face, then we
have that $w'(f)\geq -4+\frac{11}{6}+\frac{1}{3}=-\frac{11}{6}>-2$
by Lemma~\ref{lemma01}, $R6$ and $R4$. If $f$ is a $(3,4,4)$-,
$(3,4,5)$- or $(3,4,6)$-face, we have that $w'(f)\geq -4+
\frac{2}{3}\times2+\frac{1}{3}=-\frac{7}{3}$ by
Lemma~\ref{lemma01}, $R6$ and $R4$. If $f$ is a $(3,4,7^+)$-face,
then $w'(f)\geq -4+\frac{2}{3}+3+\frac{1}{3}=0$ by
Lemma~\ref{lemma01}, $R6$ and $R4$. If $f$ is a
$(3,5^+,5^+)$-face, then $w'(f)\geq
-4+\frac{11}{6}\times2+\frac{1}{3}=0$ by Lemma~\ref{lemma01},
$R6$ and $R4$. If $f$ is a $(4,4,4)$-face, then $w'(f)\geq
-4+\frac{4}{3}\times3=0$ by $R6$. If $f$ is a $(4,4,5^+)$-face,
then $w'(f)\geq -4+1\times2+2=0$ by $R6$. If $f$ is a
$(4,5,5^+)$-face, we have $w'(f)\geq -4+1+2\times2=1>0$ by $R6$.
If $f$ is a $(4,6^+,6^+)$-face, then $w'(f)\geq -4+2\times2=0$ by
$R6$. If $f$ is a $(5,5,5)$-face, we have that $w'(f)\geq
-4+\frac{4}{3}\times3=0$ by $R6$. If $f$ is a
$(5^+,5^+,6^+)$-face, we have that $w'(f)\geq -4+1\times2+2=0$ by
$R6$.\\
\textbf{Suppose $d(f)=4$}. Then $w(f)=-2$. If $f$ is a
$(3,3,5,5^+)$-face, then it is a bad face. Thus $w'(f)\geq
-2+\frac{1}{2}\times2+\frac{2}{3}\times2=\frac{1}{3}>0$ by $R5$
and $R7$. If $f$ is a $(3,3,6^+,6^+)$-face, then
$w'(f)\geq-2+1\times2=0$ by $R5$. If $f$ is a $(3,4,4,4)$- or
$(3,4,4,5)$-face, then it is a bad face. Thus $w'(f)\geq
-2+\frac{1}{2}+\frac{1}{2}\times2+\frac{1}{2}=0$ by $R5$ and $R7$.
If $f$ is a $(3,4,4,6)$-face, then it is a bad face. Thus
$w'(f)\geq -2+\frac{1}{2}+\frac{1}{3}\times2+1=\frac{1}{6}>0$ by
$R5$ and $R7$. If $f$ is a $(3,4,4,7^+)$-face, then we have that
$w'(f)\geq-2+\frac{1}{3}\times2+\frac{4}{3}=0$ by $R5$. If $f$ is
a $(3,4,5,5)$-face, then it is a bad face. Thus $w'(f)\geq
-2+\frac{1}{2}+\frac{1}{2}+\frac{2}{3}\times2=\frac{1}{3}>0$ by
$R5$ and $R7$. If $f$ is a $(3,4,5,6)$-face, then it is a bad
face. Thus $w'(f)\geq
-2+\frac{1}{2}+\frac{1}{3}+\frac{2}{3}+1=\frac{1}{2}>0$ by $R5$
and $R7$. If $f$ is a $(3,4,5,7^+)$-face, then $w'(f)\geq
-2+\frac{1}{3}+\frac{2}{3}+\frac{4}{3}=\frac{1}{3}>0$ by $R5$. If
$f$ is a $(3,4,6^+,6^+)$-face, then $w'(f)\geq
-2+\frac{1}{3}+1\times2=\frac{1}{3}>0$ by $R5$. If $f$ is a
$(3,5^+,5^+,5^+)$-face, then $w'(f)\geq -2+\frac{2}{3}\times3=0$
by $R5$. If $f$ is a $(4,4,4,4^+)$-face, then $w'(f)\geq
-2+\frac{1}{2}\times4=0$ by $R5$. If $f$ is a
$(4^+,4^+,5^+,5^+)$-face, then $w'(f)\geq
-2+\frac{1}{3}\times2+\frac{2}{3}\times2=0$ by $R5$.\\
\textbf{Suppose $d(f)=5$}. Then $w'(f)=w(f)=0$.\\
\textbf{Suppose $d(f)\geq 6$}. Then $w'(f)\geq
w(f)-\frac{1}{3}\times
d(f)=2d(f)-10-\frac{1}{3}\times d(f)\geq0$ by $R4$.\\
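The $4$-face totals above involve many case-by-case fractions; they can be verified with exact rational arithmetic. This is our own sanity check of the worst-case accounting under $R5$ and $R7$ (the labels abbreviate the face types), not part of the proof:

```python
from fractions import Fraction as F

# Worst-case totals w'(f) for the 4-face cases (w(f) = -2),
# following the proof's accounting under rules R5 and R7.
four_face_cases = [
    ("(3,3,5,5+) bad",   -2 + F(1, 2) * 2 + F(2, 3) * 2),
    ("(3,3,6+,6+)",      -2 + 1 * 2),
    ("(3,4,4,4/5) bad",  -2 + F(1, 2) + F(1, 2) * 2 + F(1, 2)),
    ("(3,4,4,6) bad",    -2 + F(1, 2) + F(1, 3) * 2 + 1),
    ("(3,4,4,7+)",       -2 + F(1, 3) * 2 + F(4, 3)),
    ("(3,4,5,5) bad",    -2 + F(1, 2) + F(1, 2) + F(2, 3) * 2),
    ("(3,4,5,6) bad",    -2 + F(1, 2) + F(1, 3) + F(2, 3) + 1),
    ("(3,4,5,7+)",       -2 + F(1, 3) + F(2, 3) + F(4, 3)),
    ("(3,4,6+,6+)",      -2 + F(1, 3) + 1 * 2),
    ("(3,5+,5+,5+)",     -2 + F(2, 3) * 3),
    ("(4,4,4,4+)",       -2 + F(1, 2) * 4),
    ("(4+,4+,5+,5+)",    -2 + F(1, 3) * 2 + F(2, 3) * 2),
]
for label, total in four_face_cases:
    assert total >= 0, label  # every 4-face ends with nonnegative charge
```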
From the above discussion, if $x$ is neither a special vertex nor
a special face, then $w'(x)\geq 0$ for each $x\in V(G)\cup F(G)$.
Let $w'_s$ denote the total new charge of the special $3$-vertices
and the special $3$-faces. Since the new charge of each special
$3$-vertex is $-1$ (see the case ``$d(v)=3$'') and since the new
charge of the special face is at least $-2$ if it is a
$(3,3,5^+)$-face and at least $-\frac{7}{3}$ if it is a
$(3,4,4)$-, a $(3,4,5)$- or a $(3,4,6)$-face (see the case
``$d(f)=3$''), Claim~\ref{claim1} implies that $w'_s\geq
\min\{-2-1-1, -\frac{7}{3}-1\}=-4$. So we obtain that
\begin{eqnarray}\label{formula4} \sum_{x\in V(G)\cup
F(G)}w'(x)\geq -4,
\end{eqnarray}a contradiction to Equation~(\ref{formula1}).
\vspace{0.3cm} \noindent{\bf Case 2.} $\delta(G)=2$ and there are
at most two $2$-vertices in $G$.
Since $G$ contains no structure isomorphic to the configuration
$H_{5}$, the $3$-faces which are incident with $2$-vertices may be
$(2,3,5)$- or $(2,4^+,4^+)$-faces. Since $G$ contains no structure
isomorphic to the configuration $H_{6}$, the $4$-faces which are
incident with $2$-vertices may be $(2,3^-,5^+,5^+)$- or
$(2,4^+,4^+,4^+)$-faces.
The discharging rules are the same as the rules in Case 1 except
for the charge which is given to a $3$- or $4$-face which is
incident with $2$-vertices. For each $v\in V(G)$ with $d(v)\geq4$,
the vertex $v$ gives charge $\frac{2}{3}$ to each incident
$(2,x,y)$-face $f$, and $v$ gives charge $\frac{1}{3}$ to an
incident $(2,x,y,z)$-face $f$ only if $f$ is not adjacent to
another $4$-face which is incident with $v$; otherwise, $v$ gives
charge $\frac{1}{3}$ to only one of the adjacent $4$-faces.
Clearly, the
charge which is given to a $(2,x,y)$- (resp. $(2,x,y,z)$)-face is
not greater than that which is given to $(3,x,y)$- (resp.
$(3,x,y,z)$)-faces. For each $v\in V(G)$, the number of $(2,x,y)$-
(resp. $(2,x,y,z)$)-faces which are incident with $v$ and accept
charge from $v$ is not greater than the number of $(3,x,y)$-
(resp. $(3,x,y,z)$)-faces which are incident with $v$. So we can
guarantee that
the new charge of each element $x\in V(G)\cup F(G)$ is larger than
or equal to zero except for the special $3$-vertices, the special
$3$-faces, the $2$-vertices and the $3$- or $4$-faces which are
incident with the $2$-vertices. For convenience, let $w'_{t1}$
(resp. $w'_{t2}$) denote the total new charge of one $2$-vertex
(resp. two $2$-vertices) and the faces which are incident with the
$2$-vertex (resp. the two $2$-vertices).\\
\textbf{Suppose that there exists only one $2$-vertex in $G$}. If
the $2$-vertex is incident with one $3$-face, then it is not
incident with any $4$-face by Lemma~\ref{lemma1}. Since $G$
contains no configuration $H_5$, the $3$-face is a $(2,3^+,5^+)$-
or $(2,4^+,4^+)$-face, thus $w'_{t1}\geq
-4-4+\frac{2}{3}=-\frac{22}{3}$ or $w'_{t1}\geq
-4-4+\frac{2}{3}\times2=-\frac{20}{3}$. If the $2$-vertex is
incident with a $4$-face, then it may be incident with two
$4$-faces. Furthermore, the $4$-face is a $(2,3^+,5^+,5^+)$- or a
$(2,4^+,4^+,4^+)$-face for the reason that $G$ contains no
configuration $H_6$. Clearly, $w'_{t1}\geq
-2-2-4+\frac{1}{3}\times2=-\frac{22}{3}$ or $w'_{t1}\geq
-2-2-4+\frac{1}{3}\times3=-7$. From the above discussion, we
obtain that
\begin{eqnarray}\label{formula5}
w'_{t1}\geq \min\{-7, -\frac{20}{3},
-\frac{22}{3}\}=-\frac{22}{3}.
\end{eqnarray} By~(\ref{formula4}), we have that $\sum_{x\in V(G)\cup F(G)}w'(x)\geq
-4+w'_{t1}\geq-4-\frac{22}{3}=-\frac{34}{3}$, a contradiction to Equation~\ref{formula1}.\\
\textbf{Suppose that there exist two $2$-vertices in $G$}. If the
two $2$-vertices are incident with the same $3$-face, then this $3$-face is a
$(2,2,5^+)$-face for the reason that $G$ contains no configuration
$H_{5}$. Thus $w'_{t2}\geq-4\times2-4+\frac{2}{3}=-\frac{34}{3}$.
If the two $2$-vertices are incident with the same $4$-face, then
the $4$-face is a $(2,2,5^+,5^+)$-face for the reason that $G$
contains no configuration $H_{6}$. Since each of the two
$2$-vertices may be incident with another $4$-face, we have that
$w'_{t2}\geq -2-2-2-4-4+\frac{1}{3}\times2=-\frac{40}{3}$. If the
two $2$-vertices are not incident with the same face, then the
discussion is similar to the situation when there exists only one
$2$-vertex in $G$. By~(\ref{formula5}), we have
$w'_{t2}\geq-\frac{22}{3}\times2=-\frac{44}{3}$. From the above
discussion, we have $w'_{t2}\geq\min\{-\frac{44}{3},
-\frac{34}{3}, -\frac{40}{3}\}=-\frac{44}{3}$.
By~(\ref{formula4}), we have that $\sum_{x\in V(G)\cup
F(G)}w'(x)\geq -4-\frac{44}{3}=-\frac{56}{3}$, a contradiction to
Equation~\ref{formula1}.
\vspace{0.3cm} \noindent {\bf Case 3.} $\delta(G)=2$ and there are
at least three $2$-vertices in $G$.
Since $G$ contains no configurations $H_{28},\ldots,H_{35}$,
$G$ has the following properties.
\vspace{-0.2cm}
\begin{fact}\label{fact31} Any vertex $v$ is adjacent to at most one
$2$-vertex.
\end{fact}
\vspace{-0.4cm}
\begin{fact}\label{fact32} No two $2$-vertices are adjacent to each
other.
\end{fact}
\vspace{-0.4cm}
\begin{fact}\label{fact33} For each $v\in V(G)$ with $d(v)\geq4$, if
$v$ is adjacent to a $2$-vertex, then it is not incident with any
$3$-face that is incident with a $3$-vertex.
\end{fact}
\vspace{-0.4cm}
\begin{fact}\label{fact34} If $v$ is adjacent to a $3$-vertex, then
it is not incident with any $3$-face that is incident with a
$2$-vertex.
\end{fact}
\vspace{-0.4cm}
\begin{fact}\label{fact35} Every $3$-face in $G$ that is incident with
a $2$-vertex is a $(2,6^+,6^+)$-face.
\end{fact}
\vspace{-0.4cm}
\begin{fact}\label{fact36} If a vertex is adjacent to a $2$-vertex,
then it is not adjacent to any $3$-vertex that is adjacent to
another $3^{-}$-vertex.
\end{fact}
\vspace{-0.4cm}
\begin{fact}\label{fact37} There is at most one $2$-vertex which is
adjacent to a $k$-vertex ($3\leq k\leq 4$) in $G$.
\end{fact}
\vspace{-0.4cm}
\begin{fact}\label{fact38} Any $4$-face that is incident with a
$2$-vertex in $G$ is a $(2,3^+,7^+,7^+)$- or
$(2,6^+,6^+,6^+)$-face.
\end{fact}
For convenience, we call a $2$-vertex a \emph{special $2$-vertex} if
it is adjacent to a $k$-vertex ($3\leq k\leq 4$), and a \emph{simple
$2$-vertex} otherwise. By Fact~\ref{fact37}, there is at most one
special $2$-vertex. Let $n_2(v)$ denote the number of simple
$2$-vertices which are adjacent to $v$. Obviously,
$n_2(v)\in\{0,1\}$ by
Fact~\ref{fact31}.\\
Now redistribute the charge according to the following discharging
rules.
For each $x\in V(G)\bigcup F(G)$, if $x$ is neither a $2$-vertex
nor a face which is incident with a $2$-vertex, then the
discharging rules are the same as those in Case 1. Otherwise, the
following discharging rules apply.
\begin{itemize}
\item \textbf{$R8$. Transfer $2$ from each $5^+$-vertex to every
adjacent $2$-vertex.}
\item \textbf{$R9$. Transfer $2$ from each $6^+$-vertex to every
incident $3$-face.}
\item \textbf{$R10$. If $f$ is a $4$-face which is incident with a
$2$-vertex and $v$, then $v$ gives $0$ to $f$ if $d(v)=3$, $4$ or
$5$; $\frac{2}{3}$ if $d(v)=6$; $\frac{4}{3}$ if $d(v)\geq 7$.}
\end{itemize}
By Fact~\ref{fact36}, $R1$ and $R2$, we have the following fact.
\vspace{-0.1cm}
\begin{fact}\label{fact39} For each $v\in V(G)$, if $n_2(v)=1$,
then $n_3^{\frac{1}{2}}(v)=0$.
\end{fact}
In the following, let us check the new charge of each element
$x\in V(G)\bigcup F(G)$.
Consider any vertex $v\in V(G)$. \textbf{Suppose $d(v)=2$}. Then $w(v)=-4$ and
$n_2(v)=0$ by Fact~\ref{fact32}. Since $G$ contains no configuration
$H_9$, $v$ is not weakly incident with any bad face. If $v$ is a
simple $2$-vertex, then $w'(v)=-4+2\times2=0$ by $R8$. Otherwise,
$v$ is a special $2$-vertex. We have $w'(v)=w(v)=-4$.
\textbf{Suppose $d(v)\geq3$}. If $n_2(v)=0$, then the discussion
is similar to the one of the corresponding situation in Case 1. In
the following, we only focus on the situation $n_2(v)=1$.
Since $G$ contains no configurations $H_7$ and $H_8$, we have the
following fact.
\begin{fact}\label{fact310}
For each $v\in V(G)$, if $n_2(v)=1$, then $v$ is not weakly
incident with any bad face.
\end{fact}
\textbf{Suppose $d(v)=3$}. By Fact~\ref{fact33}, $v$ is a simple
$3$-vertex. Since $G$ contains no configuration $H_{10}$, $v$ is
adjacent to at least one $5^+$-vertex or is adjacent to at least
two $4^+$-vertices. We have $w'(v)=-1+1=0$ by $R1$,
or $w'(v)=-1+\frac{1}{2}\times2=0$ by $R2$.\\
\textbf{Suppose $d(v)=4$}. Then $w(v)=2$, $f_3(v)\leq1$ by
Fact~\ref{fact35}.
First we assume $f_3(v)=1$. Then $f_4(v)\leq2$. If the $3$-face is
a $(4,4,4)$-face, then $f_4(v)\leq1$ and $n_3(v)=0$ for the reason
that $G$ contains no configuration $H_{16}$, $H_{17}$ and by
Fact~\ref{fact38}. Thus $w'(v)\geq
2-\frac{4}{3}-\frac{1}{2}-0=\frac{1}{6}>0$ by $R6$, $R9$ and
$R10$. Otherwise, if the $3$-face is not a $(4,4,4)$-face, we have
$f_4(v)\leq2$, $f_4^{\frac{1}{2}}(v)\leq1$ and $n_3(v)\leq1$ for
the reason that $G$ contains no $H_{13}$ and by Fact~\ref{fact38},
$R5$, $R10$. Thus $w'(v)\geq
2-1-\frac{1}{2}-\frac{1}{3}=\frac{1}{6}>0$ by Fact~\ref{fact39},
$R6$, $R9$, $R5$ and $R3$.\\
Now we assume that $f_3(v)=0$. Then $f_4(v)\leq2$,
$f_4^{\frac{1}{3}}(v)\leq1$ and $n_3(v)\leq3$ for the reason that $G$
contains no chordal $6$-cycles and by $R10$. Thus $w'(v)\geq
2-\frac{1}{2}-\frac{1}{3}\times3=\frac{1}{2}>0$ by
Fact~\ref{fact39}, $R10$ and $R3$.\\
\textbf{Suppose $d(v)=5$}. Then $w(v)=5$, $f_3(v)\leq2$.
\textbf{Case $3.1$} $f_3(v)=2$. Then $f_4(v)\leq1$ for the reason
that $G$ contains no chordal $4$- and $6$-cycles. By Fact 9 and
Fact 12, the $4$-face which is incident with $v$ is a
$(2,5,7^+,7^+)$-face. Thus $f_4^{\frac{2}{3}}(v)=0$ by $R10$.
Additionally, since $G$ contains no configuration $H_{36}$, we
have that $f_3^{\frac{11}{6}}(v)=0$ by Fact~\ref{fact33} and $R6$.
Thus $w'(v)\geq 5-\frac{4}{3}\times2-2-0=\frac{1}{3}>0$ by $R6$,
$R9$, $R5$ and $R8$.
\textbf{Case $3.2$} $f_3(v)=1$. Since $G$ contains neither chordal
$4$- and $6$-cycles nor configuration $H_{37}$, we have
$f_4(v)\leq3$.
\textbf{Case $3.2.1$} $f_4(v)=3$. Then $n_3(v)\leq1$ by
Fact~\ref{fact33} and Fact~\ref{fact38}. Furthermore, since at
most one $4$-face which is incident with $v$ is not a
$(2,5,7^+,7^+)$-face, we have that $f_4^{\frac{2}{3}}(v)\leq1$ by
$R5$ and $R10$. Thus $w'(v)\geq5-2-\frac{2}{3}-2-\frac{1}{3}=0$ by
Fact~\ref{fact39}, $R6$, $R9$, $R5$, $R8$ and $R3$.
\textbf{Case $3.2.2$} $f_4(v)=2$.
\textbf{Case $3.2.2.1$} The $2$-vertex which is adjacent to $v$ is
not around any of the two $4$-faces. If the $3$-face which is
incident with $v$ is a $(5,6^+,6^+)$-face, then
$f_4^{\frac{2}{3}}(v)\leq2$, $n_3(v)\leq2$ as $G$ contains no
configuration $H_{37}$ and by $R5$, Fact~\ref{fact33}. Thus we
have $w'(v)\geq 5-1-\frac{2}{3}\times2-2-\frac{1}{3}\times2=0$ by
$R6$, $R9$, $R5$, $R8$ and $R3$. Otherwise, the $4$-faces which
are incident with $v$ are both $(5,5^+,5^+,5^+)$-faces as $G$
contains no configuration $H_{37}$. Clearly, $n_3(v)=0$. Thus we
have $w'(v)\geq 5-2-\frac{1}{2}\times2-2=0$ by $R6$, $R9$, $R5$
and $R8$.
\textbf{Case $3.2.2.2$} The $2$-vertex which is adjacent to $v$ is
around one of the two $4$-faces. Then $f_4^{\frac{2}{3}}(v)\leq1$,
$n_3(v)\leq1$ as $G$ contains no configuration $H_{38}$ and by
$R5$, Fact~\ref{fact33}. Thus we have $w'(v)\geq
5-2-\frac{2}{3}-2-\frac{1}{3}=0$ by $R6$, $R9$, $R5$, $R8$ and
$R3$.
\textbf{Case $3.2.2.3$} The $2$-vertex which is adjacent to $v$ is
around the two $4$-faces. Then $f_4^{\frac{1}{2}}(v)=0$,
$n_3(v)\leq1$ by Fact~\ref{fact38}. Thus we have $w'(v)\geq
5-2-2-\frac{1}{3}=\frac{2}{3}>0$ by $R6$, $R9$, $R8$ and $R3$.
\textbf{Case $3.2.3$} $f_4(v)=1$. Then $n_3(v)\leq2$ by
Fact~\ref{fact35} and Fact~\ref{fact36}. If $n_3(v)=2$, then the
$4$-face is adjacent to the $3$-face and the $3$-face is a
$(5,6^+,6^+)$-face as $G$ contains no configuration $H_{37}$ and
$H_{38}$. We have $w'(v)\geq
5-1-\frac{2}{3}-2-\frac{1}{3}\times2=\frac{2}{3}>0$ by $R6$, $R5$,
$R8$ and $R3$. Otherwise, $n_3(v)\leq1$. We have
$w'(v)\geq5-2-\frac{2}{3}-2-\frac{1}{3}=0$ by $R6$, $R5$, $R8$ and
$R3$.
\textbf{Case $3.2.4$} $f_4(v)=0$. Then $n_3(v)\leq2$ by
Fact~\ref{fact35} and Fact~\ref{fact36}. We have
$w'(v)\geq5-2-2-\frac{1}{3}\times2=\frac{1}{3}>0$ by $R6$, $R8$
and $R3$.
\textbf{Case $3.3$} $f_3(v)=0$. Then $f_4(v)\leq3$ for the reason
that $G$ contains no chordal $6$-cycles. Since at most two
$4$-faces which are incident with $v$ are not
$(2,5,7^+,7^+)$-faces, we have $f_4^{\frac{2}{3}}(v)\leq2$ by
$R5$. Furthermore, $n_3(v)\leq4$. Thus $w'(v)\geq
5-\frac{2}{3}\times2-\frac{1}{3}\times4-2=\frac{1}{3}>0$ by $R5$,
$R3$ and $R8$.\\
\textbf{Suppose $d(v)=6$}. Then $w(v)=8$, $f_3(v)\leq3$. If
$f_3(v)=3$, then $f_4(v)=0$, $n_3(v)=0$ for the reason that $G$
contains no chordal $4$- and $6$-cycles and by Fact~\ref{fact33}.
Thus $w'(v)\geq 8-2\times3-2=0$ by $R6$, $R9$ and $R8$. If
$f_3(v)=2$, then $f_4(v)\leq2$, $n_3(v)\leq1$ for the reason that
$G$ contains no chordal $4$- and $6$-cycles and by
Fact~\ref{fact33}, Fact~\ref{fact34}. Since $G$ contains no
configuration $H_{38}$ and by $R10$, we have that $f_4^{1}(v)=0$.
Thus
$w'(v)\geq8-2\times2-\frac{2}{3}\times2-2-\frac{1}{3}=\frac{1}{3}>0$
by Fact~\ref{fact37}, $R6$, $R9$, $R10$, $R8$ and $R3$. If
$f_3(v)\leq1$, then $f_4(v)\leq3$, $n_3(v)\leq5$. Since $G$
contains no configuration $H_{38}$, we have that $f_4^{1}(v)=0$.
Thus
$w'(v)\geq8-2-\frac{2}{3}\times3-2-\frac{1}{3}\times5=\frac{1}{3}>0$
by Fact~\ref{fact39}, $R6$, $R9$, $R10$, $R8$ and $R3$.\\
\textbf{Suppose $d(v)=7$}. Then $w(v)=11$, $f_3(v)\leq3$. By
Fact~\ref{fact33}, there is no $(3,4,7)$-face which is incident
with $v$. If $f_3(v)=3$, then $f_4(v)\leq1$, $n_3(v)=0$ for the
reason that $G$ contains no chordal $4$- and $6$-cycles and by
Fact~\ref{fact33}. Thus $w'(v)\geq
11-2\times3-\frac{4}{3}-2=\frac{5}{3}>0$ by $R6$, $R9$, $R10$ and
$R8$. If $f_3(v)=2$, then $f_4(v)\leq3$, $n_3(v)\leq2$ for the
reason that $G$ contains no chordal $4$- and $6$-cycles and by
Fact~\ref{fact33}, Fact~\ref{fact34}. Thus
$w'(v)\geq11-2\times2-\frac{4}{3}\times3-\frac{1}{3}\times2-2=\frac{1}{3}>0$
by Fact~\ref{fact39}, $R6$, $R9$, $R10$, $R3$ and $R8$. If
$f_3(v)=1$, then $f_4(v)\leq4$, $n_3(v)\leq4$ for the reason that
$G$ contains no chordal $6$-cycles and by Fact~\ref{fact33},
Fact~\ref{fact34}. Thus $w'(v)\geq
11-2-\frac{4}{3}\times4-\frac{1}{3}\times4-2=\frac{1}{3}>0$ by
Fact~\ref{fact39}, $R6$, $R9$, $R10$, $R3$ and $R8$. If
$f_3(v)=0$, then $f_4(v)\leq4$, $n_3(v)\leq6$ for the reason that
$G$ contains no chordal $6$-cycles. Thus $w'(v)\geq
11-\frac{4}{3}\times4-\frac{1}{3}\times6-2=\frac{5}{3}>0$ by
Fact~\ref{fact39}, $R10$, $R3$ and $R8$.\\
\textbf{Suppose $d(v)\geq8$}. Then $w(v)=3d(v)-10$. By
Fact~\ref{fact33}, there is no $(3,4,8^+)$-face which is incident
with $v$. Since $n_3(v)+2f_3(v)+1\leq d(v)$, we have that
\begin{displaymath}
n_3(v)\leq d(v)-2f_3(v)-1.
\end{displaymath}
Since $G$ contains no chordal $4$- and $6$-cycles, we have that
$f_3(v)+f_4(v)\leq \frac{3}{4}d(v)+1$. Thus
\begin{displaymath}
f_4(v)\leq \frac{3}{4}d(v)-f_3(v)+1.
\end{displaymath}
Thus $w'(v)\geq
3d(v)-10-2f_3(v)-\frac{4}{3}f_4(v)-\frac{1}{3}n_3(v)-2\geq
3d(v)-10-2f_3(v)-d(v)+\frac{4}{3}f_3(v)-\frac{4}{3}-\frac{1}{3}d(v)+\frac{2}{3}f_3(v)+\frac{1}{3}-2=
\frac{5}{3}d(v)-13\geq\frac{1}{3}>0$ by
Fact~\ref{fact39}, $R6$, $R9$, $R10$, $R3$ and $R8$.\\
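For the reader's convenience, the substitution above can be carried out term by term; note that the coefficient of $f_3(v)$ cancels, so the final bound does not depend on $f_3(v)$:

```latex
\begin{align*}
w'(v) &\geq 3d(v)-10-2f_3(v)-\frac{4}{3}f_4(v)-\frac{1}{3}n_3(v)-2\\
      &\geq 3d(v)-12-2f_3(v)
        -\frac{4}{3}\Big(\frac{3}{4}d(v)-f_3(v)+1\Big)
        -\frac{1}{3}\big(d(v)-2f_3(v)-1\big)\\
      &= \Big(3-1-\frac{1}{3}\Big)d(v)
        +\Big(-2+\frac{4}{3}+\frac{2}{3}\Big)f_3(v)
        -12-\frac{4}{3}+\frac{1}{3}\\
      &= \frac{5}{3}\,d(v)-13
       \;\geq\; \frac{5}{3}\cdot 8-13 \;=\; \frac{1}{3} \;>\; 0 .
\end{align*}
```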
Consider $f\in F(G)$. \textbf{Suppose $d(f)=3$}. Then $w(f)=-4$
and $n_2(f)\leq 1$. If $n_2(f)=1$, then $f$ is a
$(2,6^+,6^+)$-face by Fact~\ref{fact35}. Thus $w'(f)\geq
-4+2\times2=0$ by $R9$. Otherwise, the discussion is similar to
the corresponding
situation when $d(f)=3$ in Case 1, so it is omitted here.\\
\textbf{Suppose $d(f)=4$}. Then $w(f)=-2$, $n_2(f)\leq1$ by
Fact~\ref{fact32}.
If $n_2(f)=1$, then $f$ is a $(2,3^+,7^+,7^+)$- or a
$(2,6^+,6^+,6^+)$-face by Fact~\ref{fact38}. Thus $w'(f)\geq
-2+\frac{4}{3}\times2=\frac{2}{3}>0$ or
$w'(f)\geq-2+\frac{2}{3}\times3=0$ by $R10$. If $n_2(f)=0$, then
the discussion is similar to the corresponding situation when
$d(f)=4$ in Case 1, so it is omitted here.\\
\textbf{Suppose $d(f)\geq5$}. Then the discussion is similar to
the corresponding situation in Case 1 and is omitted here.
From the above discussion, we can obtain that $w'(x)\geq 0$ for
each $x\in V(G)\cup F(G)$ that is not a special $3$-vertex, a
special $2$-vertex, nor a special face. From (\ref{formula4}), we
have $w'_s\geq-4-4=-8$ by Claim~\ref{claim1} and
Fact~\ref{fact37}. So we obtain $\sum_{x\in V(G)\cup
F(G)}w'(x)\geq -8$, a contradiction to Equation~\ref{formula1}.
\vspace{0.3cm}
\noindent {\bf Case 4.} $\delta(G)=1$.
Now, the $3$-faces in $G$ are $(3^-,5^+,5^+)$-faces or
$(4^+,4^+,4^+)$-faces and any $4$-face that is incident with a
$2$-vertex is a $(2,5^+,5^+,5^+)$-face for the reason that $G$
contains no configurations $H_{39}$ and $H_{40}$. Then there is
neither any special $3$-vertex nor any special face in $G$.
\textbf{Case $4.1$} There is only one $1$-vertex in $G$.
\textbf{Case $4.1.1$} There are at most two $2$-vertices in $G$.
The discharging rules are the same as the rules in Case 1 except
for the charge which is given to a $3$- or $4$-face which is
incident with $2$-vertices. For each $v\in V(G)$, if $d(v)\geq5$,
then $v$ gives charge $1$ to its incident $(2,x,y)$-face $f$; and
$v$ gives charge $\frac{1}{2}$ to its incident $(2,x,y,z)$-face
$f$ only if the face $f$ is not adjacent to other $4$-faces which
are incident with $v$, otherwise, $v$ gives charge $\frac{1}{2}$
to only one of the adjacent $4$-faces. Clearly, the charge which
is given to a $(2,x,y)$- (resp. $(2,x,y,z)$)-face is not greater
than that which is given to $(3,x,y)$- (resp. $(3,x,y,z)$)-faces.
For each $v\in V(G)$, the number of $(2,x,y)$- (resp.
$(2,x,y,z)$)-faces which are incident with and accept charge from
$v$ is not greater than that of $(3,x,y)$- (resp.
$(3,x,y,z)$)-faces which are incident with $v$. So we can guarantee
that the new charge of each element $x\in V(G)\cup F(G)$ is larger than
or equal to zero except for the $2$-vertices and the $3$- or
$4$-faces which are incident with the $2$-vertices. For
convenience, let $w'_{t1}$ (resp. $w'_{t2}$) denote the total new
charge of one $2$-vertex (resp. two $2$-vertices) and the faces
which are incident to the $2$-vertex (resp. the two $2$-vertices).
\textbf{Suppose that there is one $2$-vertex in $G$}. If the
$2$-vertex is incident with one $3$-face, then it is not
incident with any $4$-face as $G$ contains no chordal $4$-cycles.
Since the $3$-face is a $(2,5^+,5^+)$-face, we have that
$w'_{t1}\geq -4-4+1\times2=-6$. If the $2$-vertex is incident with
some $4$-faces, since each such $4$-face is a
$(2,5^+,5^+,5^+)$-face, we have that $w'_{t1}\geq
-2-2-4+\frac{1}{2}\times4=-6$. From the above discussion, we
obtain that
\begin{eqnarray}\label{formula10}
w'_{t1}\geq -6.
\end{eqnarray} So $\sum_{x\in V(G)\cup F(G)}w'(x)\geq -7+w'_{t1}\geq-7-6=-13$ (a $1$-vertex has charge $-7$),
a contradiction to Equation~\ref{formula1}.\\
\textbf{Suppose that there are two $2$-vertices in $G$}. Since the
two $2$-vertices are not incident with a same $3$- or $4$-face, by
(\ref{formula10}), we have that $w'_{t2}\geq-6\times2=-12$. So
$\sum_{x\in V(G)\cup F(G)}w'(x)\geq -7-12=-19$, a contradiction to
Equation~\ref{formula1}.\\
\textbf{Case $4.1.2$} There are at least three $2$-vertices in
$G$. The discharging rules are the same as in Case 3. It follows from
the discussion which is the same as the situation in Case 3 that
$\sum_{x\in V(G)\cup F(G)}w'(x)\geq
-7-4=-11$, a contradiction to Equation~\ref{formula1}.\\
\textbf{Case $4.2$} There are at least two $1$-vertices in $G$.
If there are two $1$-vertices in $G$, then there is neither a
$2$-vertex nor a third $1$-vertex in $G$ for the reason that $G$
contains no configuration $H_{41}$. The discharging rules are the
same as in Case 1. It follows from the discussion which is the same
as the situation in Case 1 that $\sum_{x\in V(G)\cup
F(G)}w'(x)\geq -7\times2=-14$, a contradiction to
Equation~\ref{formula1}.
\end{proof}
\begin{lemma}\label{hajs} (\cite{hajs}) Every graph $G$ has an equitable
$k$-coloring whenever $k\geq \Delta(G)+1$.
\end{lemma}
\begin{lemma}\label{wang} (\cite{pelsmajer, wang}) Every graph $G$ with maximum degree $\Delta(G)\leq 3$
is equitably $k$-choosable whenever $k\geq\Delta(G)+1$.
\end{lemma}
In the following, let us give the proof of the main theorem.
\begin{theorem}\label{theorem1} If $G$ is a planar graph without chordal $4$- and $6$-cycles,
then $G$ is equitably $k$-colorable where
$k\geq\max\{7,\Delta(G)\}$.
\end{theorem}
\begin{proof} Let $G$ be a counterexample with fewest vertices. If each component of $G$ has
at most four vertices, then $\Delta(G)\leq 3$. Clearly, $G$ is
equitably $k$-colorable by Lemma~\ref{hajs}. Otherwise, there is
at least one component with at least five vertices.
For convenience, we divide all the configurations in Figure 1 and
Figure 2 into two classes according to whether or not they contain
a vertex labelled $x_{k-3}$. A configuration belongs
to $C_1$ if it contains a vertex labelled $x_{k-3}$; otherwise,
it belongs to $C_2$.
Suppose that $G$ has one of the configurations of $C_1$. In the
following, we show how to find a set $S$ in order to apply
Lemma~\ref{jun1}. For convenience, let $S'$ be the set of the
labelled vertices of this configuration. For example, if $G$ has
the configuration $H$ depicted in Figure 1, then let $S'=\{x_k,
x_{k-1}, \cdots, x_{k-4}, x_1\}$. By
Corollary~\ref{cor4degenerate}, $G$ is $4$-degenerate. Thus
starting from $S'$, we can find the remaining unspecified vertices
to obtain the set $S$ of Lemma~\ref{jun1} from highest to lowest
indices by choosing a vertex with the minimum degree in the graph
obtained from $G$ by deleting the vertices already chosen
for $S$ at each step. By the minimality of $G$, we have $G-S$ is
equitably $k$-colorable. By Lemma~\ref{jun1}, we can obtain that
$G$ is equitably $k$-colorable, a contradiction.
Thus $G$ has a configuration of $C_2$ and $\delta(G)\leq3$ by
Lemma~\ref{lemma2}. Similarly, let $S''$ be the set of the
labelled vertices of this configuration, in which the vertices are
labelled as they are in Figure 2. Let $G'=G-S''$. If there exists
a vertex $v\in V(G')$ such that $d_{G'}(v)\leq3$ or there exists a
vertex $u\in \{x_1, x_2, x_3\}\cap S''$ such that $d_G(u)\leq4$,
then we label $v$ or $u$ with $x_{k-3}$ and let
$S'''=S''\cup\{x_{k-3}\}$. By Corollary~\ref{cor4degenerate}, $G$
is $4$-degenerate. Now starting from $S'''$, we can find the
remaining unspecified vertices to obtain the set $S$ of
Lemma~\ref{jun1} from highest to lowest indices by choosing a
vertex with the minimum degree in the graph obtained from $G$ by
deleting the vertices already chosen for $S$ at each step.
By the minimality of $G$, we have $G-S$ is equitably
$k$-colorable. By Lemma~\ref{jun1}, we can obtain that $G$ is
equitably $k$-colorable, a contradiction.
Thus $\delta(G')\geq4$ and $d_G(v)\geq5$ for each vertex
$v\in\{x_1, x_2, x_3\}\cap S''$. Clearly, the following fact holds.
\vspace{-0.3cm}
\begin{fact}\label{fact01} For each $x\in V(H')-\{x_k, x_{k-1},
x_{k-2}\}$, we have that $d_G(x)\geq5$ where $H'\in C_2$.
\end{fact}
Now we can easily see that $G$ has only one configuration that
belongs to $C_2$. Otherwise, $\delta(G')\leq3$. Additionally, by
Lemma~\ref{lemma2}, $G'$ contains the configuration $H$ of Figure
1. If $G$ does not contain the configuration $H_{41}$, then by
Fact~\ref{fact01}, at most one $1$-vertex, at most two
$3^-$-vertices and at most one special face can exist in $G$
simultaneously, i.e. $G$ contains the configuration $H_{39}$. Let
us now show a self-contradictory conclusion by a discharging
procedure. The discharging rules are the same as Case 1 in
Lemma~\ref{lemma3}. Clearly, we can guarantee that the new charge
of each face other than the special face, and each vertex $v\in
V(G)$ with $d(v)\geq 4$ is larger than or equal to zero. Hence
$\sum_{x\in V(G)\cup F(G)}w'(x)\geq -7-4\times2-4=-19$, a
contradiction to $\sum_{x\in V(G)\cup F(G)}w(x)=-20$.
Thus $G$ contains the configuration $H_{41}$. Additionally, from
the above discussion, we know $G$ has no configuration $H$, and
$G'$ has the configuration $H$ in Figure 1. It is clear that one
of the vertices $\{x_k, x_{k-1}, x_{k-2}, x_1\}$ of configuration
$H_{41}$ in Figure 2 must be adjacent to one of the vertices
$\{x_k, x_{k-1}, x_{k-2}\}$ of configuration $H$ in Figure 1. It
is not difficult to find a set $\bar{S}$, starting from which, we
can find the remaining unspecified vertices in $S$ of
Lemma~\ref{jun1} from highest to lowest indices by choosing a
vertex with the minimum degree in the graph obtained from $G$ by
deleting the vertices already chosen for $S$ at each step.
By the minimality of $G$, we have that $G-S$ is equitably
$k$-colorable. By Lemma~\ref{jun1}, we have that $G$ is equitably
$k$-colorable, a contradiction. In the following, we give the
detailed steps on how to find the set $\bar{S}$.
For convenience, we use $w_1$, $w_2$, $w_3$ and $w_4$ to denote
the vertices $x_k$, $x_{k-1}$, $x_{k-2}$ and $x_1$ of
configuration $H_{41}$ in Figure 2, respectively, and use $u_1$,
$u_2$ and $u_3$ to denote the vertices $x_k$, $x_{k-1}$ and
$x_{k-2}$ of configuration $H$ in Figure 1, respectively.
If there exists one $1$-vertex which is adjacent to one of the
vertices in $\{u_1, u_2, u_3\}$, then the $1$-vertex may only be
$w_2$ or $w_3$ from the above discussion. Without loss of
generality, we assume $w_2$ and $u'$ are adjacent to $u$ for which
$\{u, u'\}\subset\{u_1, u_2, u_3\}$. Now we label the vertices
$w_2, w_1, w_3, u, u'$ with $x_k, x_{k-1}, x_{k-2}, x_{k-3},
x_{k-4}$, respectively. We choose $\bar{S}=\{x_k, x_{k-1},
x_{k-2}, x_{k-3}, x_{k-4}\}$.
Otherwise, if $w_3$ is adjacent to one of the vertices in $\{u_1,
u_2, u_3\}$ such that $d_G(w_3)=2$, then, for convenience, we
assume $w_3$ and $u'$ are adjacent to $u$ for which $\{u,
u'\}\subset\{u_1, u_2, u_3\}$. Now we label the vertices $w_1,
w_2, w_3, u, u', w_4$ with $x_k, x_{k-1}, x_{k-2}, x_{k-3},
x_{k-4}, x_1$, respectively. We choose $\bar{S}=\{x_k, x_{k-1},
x_{k-2}, x_{k-3}, x_{k-4}, x_1\}$.
If $w_4$ is adjacent to one of the vertices in $\{u_1, u_2,
u_3\}$, then, for convenience, we assume $w_4$ and $u'$ are adjacent to
$u$ for which $\{u, u'\}\subset\{u_1, u_2, u_3\}$. Now we label
the vertices $w_1, w_2, w_3, u, u', w_4$ with $x_k, x_{k-1},
x_{k-2}, x_{k-3}, x_{k-4}, x_1$, respectively. We choose
$\bar{S}=\{x_k, x_{k-1}, x_{k-2}, x_{k-3}, x_{k-4}, x_1\}$. This
completes the proof of Theorem~\ref{theorem1}.
\end{proof}
\begin{corollary}
Let $G$ be a planar graph without chordal $4$- and $6$-cycles. If
$\Delta(G)\geq 7$, then $\chi_e(G)\leq \Delta(G)$.
\end{corollary}
\begin{corollary}
Let $G$ be a planar graph without chordal $4$- and $6$-cycles. If
$\Delta(G)\geq 7$, then $\chi^*_e(G)\leq \Delta(G)$.
\end{corollary}
\begin{theorem} If $G$ is a planar graph without chordal $4$- and $6$-cycles and
$k\geq \max\{7,\Delta(G)\}$, then $G$ is equitably $k$-choosable.
\end{theorem}
\begin{proof} Let $G$ be a counterexample with the fewest vertices, i.e. $G$ is a
critical graph. If each component of $G$ has at most four
vertices, then $\Delta(G)\leq 3$. So $G$ is equitably
$k$-choosable by Lemma~\ref{wang}. Otherwise, the proof is similar
to the proof of Theorem~\ref{theorem1} by Lemma~\ref{lemma3} and
Lemma~\ref{kostochka1}.
\end{proof}
\begin{corollary}
Let $G$ be a planar graph without chordal $4$- and $6$-cycles. If
$\Delta(G)\geq 7$, then $G$ is equitably $\Delta(G)$-choosable.
\end{corollary}
\section{Remarks and perspective}
Most of the results on equitable and list equitable colorings on
planar graphs are restricted to $3$-degenerate graphs. In this
paper, we confirm Conjecture~\ref{chenconj} and
Conjecture~\ref{kostochconj2} for planar graphs without
chordal $4$- and $6$-cycles, which are not necessarily
$3$-degenerate. Can a similar conclusion be drawn for
$4$-degenerate graphs and ordinary planar graphs?
\acknowledgements We would like to thank the referees for
providing some very helpful suggestions for revising this paper.
\nocite{*}
\bibliographystyle{abbrvnat}
\section{Introduction and main results}
We are interested in several constants appearing in the study of eigenfunction concentration and control theory, and in the links between them. In the whole paper, we are given a connected compact Riemannian manifold $(\ensuremath{\mathcal M},g)$ with or without boundary $\d \ensuremath{\mathcal M}$, and we denote by $\Delta_g$ the (negative) Laplace-Beltrami operator on $\ensuremath{\mathcal M}$. In case $\d \ensuremath{\mathcal M} \neq \emptyset$, we denote by $\Int(\ensuremath{\mathcal M})$ the interior of $\ensuremath{\mathcal M}$, so that $\ensuremath{\mathcal M} = \d \ensuremath{\mathcal M} \sqcup \Int(\ensuremath{\mathcal M})$ (see e.g.~\cite[Chapter~1]{Lee:book}).
For readability, we first focus in the next section on the results concerning the observability constant for the heat equation.
\subsection{The control cost for the heat equation}
Here, we study the so-called \emph{cost of controllability} of the heat equation.
It has been known since the seminal papers of Lebeau-Robbiano \cite{LR:95} and Fursikov-Imanuvilov \cite{FI:96} that for any time $T>0$, the heat equation is controllable to zero.
More precisely, by duality, the controllability problem is equivalent to the observability problem for solutions of the free heat equation (see e.g. \cite[Section 2.5.2]{Cor:book}):
for any non-empty open set $\omega$ and any $T>0$, there exists a constant $C_{T,\omega}>0$ such that we have
\bnan
\label{e:co-heat}
\nor{e^{T\Delta_g}u}{L^2(\ensuremath{\mathcal M})}^2 \leq C_{T,\omega}^2 \intop_0^T \nor{e^{t\Delta_g}u}{L^2(\omega)}^2 dt , \quad \text{ for all $u \in L^2(\ensuremath{\mathcal M})$.}
\enan
Here, $(e^{t\Delta_g})_{t>0}$ denotes the semigroup generated by the {\em Dirichlet} Laplace operator on $\ensuremath{\mathcal M}$ (unless explicitly stated otherwise).
The observability constant $C_{T,\omega}$ is then directly related to the cost of the control to zero and has been the object of several studies.
It has been proved by Seidman \cite{Seidman:84} in dimension one (in the closely related case of a boundary observation) and by Fursikov-Imanuvilov~\cite{FI:96} in general (see also \cite{Miller:10} for obtaining this result via the Lebeau-Robbiano method), that the cost in small time blows up at most exponentially:
\bnan
\label{defcoheat}
\omega \neq \emptyset \quad \implies \quad \text{ there is } C,\ensuremath{\mathfrak K}>0 \text{ such that } C_{T,\omega}\leq Ce^{\frac{\ensuremath{\mathfrak K}}{T}} \quad \text{for all } T>0.
\enan
Gu\"ichal \cite{Guichal:85} in one dimension and Miller~\cite{Miller:04} in the general case proved that exponential blow-up indeed occurs:
$$
\overline{\omega} \neq \ensuremath{\mathcal M} \quad \implies \quad \text{ there is } c>0 \text{ such that } C_{T,\omega}\geq c e^{\frac{c}{T}}\quad \text{for all } T>0.
$$
This suggests defining
\bnan
\label{e:def-Kheat}
\ensuremath{\mathfrak K}_{heat} (\omega) = \inf \left\{ \ensuremath{\mathfrak K} >0 , \exists C>0 \text{ s.t. \eqref{e:co-heat} holds with } C_{T,\omega}= Ce^{\frac{\ensuremath{\mathfrak K}}{T}}\right\},
\enan
which, according to the above-mentioned results, satisfies $\ensuremath{\mathfrak K}_{heat}(\omega) <\infty$ as soon as $\omega \neq \emptyset$, and $\ensuremath{\mathfrak K}_{heat}(\omega)>0$ as soon as $\overline{\omega} \neq \ensuremath{\mathcal M}$.
This constant depends only on the geometry of the manifold $(\ensuremath{\mathcal M},g)$ and the subset $\omega$. It is expected to contain geometric features of short time heat propagation, and has thus received a lot of attention in the past fifteen years~\cite{Miller:04,Miller:04ARMA,Miller:06c,TT:07,Miller:10,EZ:11s,TT:11,BP:17,Darde:17,EV:17,NTTV:18,PhunglogCarl}.
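Equivalently, writing $C^*_{T,\omega}$ for the best (i.e. smallest) constant in~\eqref{e:co-heat} (a notation used only in this remark), the definition~\eqref{e:def-Kheat} can be read as a short-time exponential rate:

```latex
\begin{equation*}
\ensuremath{\mathfrak K}_{heat}(\omega)
  = \limsup_{T\to 0^+} \, T \log C^*_{T,\omega} .
\end{equation*}
```

Indeed, $T\mapsto C^*_{T,\omega}$ is nonincreasing (a longer observation window can only improve observability), so only the behavior of $C^*_{T,\omega}$ as $T\to 0^+$ is relevant in~\eqref{e:def-Kheat}.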
In this direction, the result of Miller~\cite{Miller:04} is actually more precise and provides a geometric lower bound: for all $(\ensuremath{\mathcal M}, g)$ and $\omega$, we have
$$\ensuremath{\mathfrak K}_{heat} (\omega) \geq \frac{\L(\ensuremath{\mathcal M},\omega)^2}{4},
$$
where, for $E \subset \ensuremath{\mathcal M}$, we write
\begin{equation}
\label{e:def-L-omega-M}
\L(\ensuremath{\mathcal M},E) = \sup_{x \in \ensuremath{\mathcal M}}\dist_g(x,E) .
\end{equation} The proof relies on heat kernel estimates. In~\cite{Miller:04,Miller:06b}, Luc Miller also proved that in case $\omega$ satisfies the Geometric Control Condition in $(\ensuremath{\mathcal M}, g)$ (see~\cite{BLR:92}) we have
$$\ensuremath{\mathfrak K}_{heat} (\omega)\leq \alpha_{*} L_{\omega}^2,$$
where $L_\omega$ is the maximal length of a ``ray of geometric optics'' (i.e. geodesic curve in case $\d \ensuremath{\mathcal M} =\emptyset$) not intersecting $\omega$, and $\alpha_*\leq 2$ is an absolute constant (independent of the geometry).
Based on these results and the idea that the heat kernel provides the most concentrated solutions of the heat equation, he formulated the following conjecture~\cite[Section~2.1]{Miller:04}-\cite[Section~3.1]{Miller:06c}.
\begin{conjecture}[Luc Miller]
\label{c:Miller-conj}
For all $(\ensuremath{\mathcal M},g)$ and $\omega \subset \ensuremath{\mathcal M}$ such that $\overline{\omega} \neq \ensuremath{\mathcal M}$, we have $\ensuremath{\mathfrak K}_{heat} (\omega) = \frac{\L(\ensuremath{\mathcal M},\omega)^2}{4}$.
\end{conjecture}
Note that it has been proved in \cite{Lissy:15} that, in the related context of the 1D heat equation with a boundary observation, the factor $\frac{1}{4}$ might not be correct (and should be replaced by $\frac12$, see Section \ref{sectpreviousres} below). Our first result disproves Conjecture~\ref{c:Miller-conj} in a stronger sense.
\begin{theorem}[Counterexamples]
\label{thm:counterexamples}
Assume $(\ensuremath{\mathcal M}, g)$ is one of the following
\begin{enumerate}
\item $\ensuremath{\mathcal M} =\S^n \subset {\mb{R}}^{n+1}$ and $g$ is the canonical metric (see Section~\ref{s:sphere});
\item $\ensuremath{\mathcal M}=\mathcal{S}\subset {\mb{R}}^3$ is a surface of revolution diffeomorphic to the sphere $\S^2$, and $g$ is the metric induced by the Euclidean metric on ${\mb{R}}^3$ (with additional non degeneracy conditions, see Section~\ref{s:revol});
\item $\ensuremath{\mathcal M} = \ensuremath{\mathbb D}=\left\{(x_1,x_2)\in {\mb{R}}^2\left| \ x_1^2+x_2^2\leq 1\right.\right\}\subset {\mb{R}}^2$ is the unit disk, $g$ the Euclidean metric and Dirichlet conditions are taken on $\d \ensuremath{\mathcal M}$ (see Section \ref{sectDisk}).
\end{enumerate}
Then, for any $C>0$, there exists $\omega\subset \ensuremath{\mathcal M}$ so that $ \ensuremath{\mathfrak K}_{heat} (\omega) \geq C \L(\ensuremath{\mathcal M},\omega)$ and $\ensuremath{\mathfrak K}_{heat} (\omega) \geq C$.
More precisely, assume that $x_0$ is either
\begin{enumerate}
\item any point in $\S^n$,
\item one of the two points of $\mathcal{S}$ lying on its axis of revolution,
\item the center of $\ensuremath{\mathbb D}$.
\end{enumerate}
Then, there exist $C>0$ and $r_0>0$ so that we have
\bnan
\label{lowerlogheat}
\ensuremath{\mathfrak K}_{heat} (B_g(x_0,r)) \geq C|\log(r)|^2
\enan
for any $0<r\leq r_0$.
\end{theorem}
Here, $B_g(x_0,r)$ denotes the geodesic ball of $\ensuremath{\mathcal M}$ centered at $x_0$ of radius $r$.
The results we obtain are slightly more precise. In particular, the constant $C$ is an explicit geometric constant.
The lower bounds are related to an appropriate Agmon distance associated to the problem. We refer to Corollary~\ref{t:agmon-introenkappa} below for more precise estimates.
Note also that this blow up of $\ensuremath{\mathfrak K}_{heat} (B(x_0,r))$ for small $r$ does not always happen and is due here to a particular (de)concentration phenomenon. For instance on $\ensuremath{\mathcal M}=\mathbb{T}^1$, the set $\omega=B(x_0,r)$ always satisfies the Geometric Control Condition for any time $T>1-2r$. Abstract results (see \eqref{upperboundMiller} below for more details) then give $\ensuremath{\mathfrak K}_{heat} (B(x_0,r))\leq \alpha_*$ for any $r>0$, so that blowup does not occur.
\medskip
Our next result shows that the blow up given by~\eqref{lowerlogheat} is actually optimal as far as the asymptotics of $\ensuremath{\mathfrak K}_{heat}$ for small balls is concerned.
We prove the following observability result from small balls (closely related to previous results of Jerison-Lebeau \cite{JL:99}, see Section~\ref{subsub:LR-Jerison-Lebeau} below).
\begin{theorem}
\label{t:heatlog}
For all $x_0\in \ensuremath{\mathcal M}$, there exist $C>0$ such that for all $r>0$ we have
$$
\ensuremath{\mathfrak K}_{heat} (B(x_0,r)) \leq C|\log(r)|^2+C .
$$
\end{theorem}
Note that Bardos and Phung~\cite{BP:17,PhunglogCarl} recently proved independently that $\ensuremath{\mathfrak K}_{heat} (B(x_0,r)) \leq \frac{C_\epsilon}{r^\epsilon} +C_\epsilon$ for all $\epsilon>0$ in case $\ensuremath{\mathcal M} \subset {\mb{R}}^n$ is star-shaped w.r.t. $x_0$.
These results seem to suggest that $\L(\ensuremath{\mathcal M},\omega)$ is not the only appropriate parameter needed for estimating $\ensuremath{\mathfrak K}_{heat} (\omega)$.
There are indeed some solutions of the heat equation concentrating more than the heat kernel for small times.
Our last result concerning the heat equation actually goes in the opposite direction. It provides a large class of solutions of the heat equation, namely {\em positive} solutions, that do not concentrate more than the heat kernel, thus proving Conjecture~\ref{c:Miller-conj} when restricted to this class of solutions.
Recall that $\L(\ensuremath{\mathcal M},E)$ is defined in~\eqref{e:def-L-omega-M}.
\begin{theorem}
\label{thmpositive}
Assume that $(\ensuremath{\mathcal M}, g)$ has geodesically convex boundary $\d \ensuremath{\mathcal M}$.
Then, for any nonempty open set $\omega\subset \ensuremath{\mathcal M}$ and $z_0\in\ensuremath{\mathcal M}$, for any $\e>0$, there exist $C,D>0$ so that for any $0<T\leq D$, we have
\bnan
\label{estimpos1}\nor{u(T)}{L^2(\ensuremath{\mathcal M})}^2\leq \frac{C}{T} e^{\frac{(1+\e)\mathcal{L}(M,\omega)^2}{2T}}\intop_0^{T}\nor{u(t, \cdot)}{L^2(\omega)}^2~dt ,\\
\label{estimpos2}\nor{u(T)}{L^2(\ensuremath{\mathcal M})}^2\leq \frac{C}{T} e^{\frac{(1+\e)\mathcal{L}(M,z_0)^2}{2T}}\intop_0^{T}u(t,z_0)^2~dt ,
\enan
for all $u_0 \in L^2(M)$ such that $u_0 \geq 0$ a.e. on $\ensuremath{\mathcal M}$ and associated solution $u$ to
$$
(\d_t - \Delta_g) u =0 \text{ on }{\mb{R}}^+_* \times \Int(\ensuremath{\mathcal M}), \quad u|_{t=0} = u_0 \text{ in } \Int(\ensuremath{\mathcal M}), \quad \d_\nu u =0 \text{ on }{\mb{R}}^+ \times \d \ensuremath{\mathcal M} .
$$
\end{theorem}
Theorem~\ref{thmpositive} follows from classical Li-Yau estimates~\cite{LY:86}.
Notice that here, Neumann boundary conditions are taken ($\nu$ denotes a unit vector field normal to $\d \ensuremath{\mathcal M}$), and an additional geometric assumption is made (convexity of $\d \ensuremath{\mathcal M}$). The result still holds without the convexity assumption up to replacing $(1+\e)$ in the exponent by a geometric constant, see Remark~\ref{r:non-conv-Li-Yau}.
We also recall that for nonnegative initial data $u_0 \geq0$, the solution of the heat equation remains nonnegative for all times.
Of course, the counterexamples of Theorem~\ref{thm:counterexamples} prevent these estimates from holding in general.
Estimate~\eqref{estimpos2} is particularly surprising (even without considering the value of the constants) and of course only true for positive solutions (otherwise just taking $z_0$ in a nodal set of an eigenfunction of $\Delta_g$ invalidates~\eqref{estimpos2}).
Finally, let us mention that the constants $C$ and $D$ are explicitly estimated by geometric quantities (see Remark~\ref{rkexplicitpos}).
\bigskip
Let us now put these results in a broader context, and introduce several related geometric constants appearing in tunneling estimates and control theory.
\subsection{Tunneling constants in control theory, and their links}
\label{sectlinkintro}
The lower bounds of Theorem~\ref{thm:counterexamples} are proved using very particular solutions to the heat equation arising from eigenfunctions (exhibiting a very strong concentration far from $x_0$ as well as a strong deconcentration near $x_0$). It is therefore natural to study related constants measuring such (de)concentration properties.
In this section, we introduce all geometric constants studied in the paper and collect known links between them.
We first introduce spectral subspaces of the Laplace operator $\Delta_g$ (with Dirichlet boundary conditions if $\d \ensuremath{\mathcal M} \neq \emptyset$), which are at the core of most results presented here.
Namely, for $\lambda \in \Sp(-\Delta_g)$, the space
$$E_{\lambda} := \vect\{\psi \in L^2(\ensuremath{\mathcal M}), -\Delta_g \psi = \lambda \psi \}$$
denotes the eigenspace associated to the eigenvalue $\lambda$ and, for all $\lambda >0$,
$$E_{\leq \lambda} := \vect\{ E_{\lambda_j}, \lambda_j \in \Sp(-\Delta_g), \lambda_j \leq \lambda\} ,
$$ the space of linear combinations of eigenfunctions associated to eigenvalues $\leq \lambda$.
Let us now introduce the constants studied in the article, other than those involved in~\eqref{e:co-heat}-\eqref{defcoheat}.
For any nonempty open subset $\omega \subset \ensuremath{\mathcal M}$, we recall the following results:
\begin{itemize}
\item Vanishing of eigenfunctions~\cite{DF:88,LR:95}: there exist $C,\ensuremath{\mathfrak K}$ such that we have
\bnan
\label{e:co-eig}
\nor{\psi}{L^2(\ensuremath{\mathcal M})} \leq Ce^{\ensuremath{\mathfrak K}\sqrt{\lambda}}\nor{\psi}{L^2(\omega)} , \quad \text{for all $\lambda \in \Sp(-\Delta_g)$ and $\psi \in E_\lambda$}.
\enan
\item Vanishing of sums of eigenfunctions (so-called Lebeau-Robbiano spectral inequality)~\cite{LR:95,JL:99,LZ:98}: there exist $C,\ensuremath{\mathfrak K}$ such that we have
\bnan
\label{e:co-sum}
\nor{u}{L^2(\ensuremath{\mathcal M})} \leq Ce^{\ensuremath{\mathfrak K}\sqrt{\lambda}}\nor{u}{L^2(\omega)} , \quad \text{ for all $\lambda >0$ and all $u \in E_{\leq \lambda}$.}
\enan
\item Infinite time observability of the heat equation~\cite{FI:96}: there exist $C,\ensuremath{\mathfrak K}$ such that we have
\bnan
\label{e:co-infty}
\intop_{{\mb{R}}^+} e^{-\frac{2\ensuremath{\mathfrak K}}{t}} \|e^{t\Delta_g}u\|_{L^2(\ensuremath{\mathcal M})}^2 dt \leq C\intop_{{\mb{R}}^+} \|e^{t\Delta_g}u\|_{L^2(\omega)}^2 dt , \quad \text{ for all $u \in L^2(\ensuremath{\mathcal M})$.}
\enan
\item Approximate observability for the wave equation~\cite{LL:15},
\begin{align}
\label{e:wave}
(\d_t^2 - \Delta_g)u = 0, \quad u|_{(0,T)\times \d \ensuremath{\mathcal M}}=0, \quad (u,\d_tu)|_{t=0} =(u_0, u_1):
\end{align}
For all $T> 2\L(\ensuremath{\mathcal M},\omega)$, there exist $C,\ensuremath{\mathfrak K}, \mu_0>0$ such that we have
\begin{align}
\label{e:co-wave}
\|(u_0,u_1)\|_{L^2(\ensuremath{\mathcal M}) \times H^{-1}(\ensuremath{\mathcal M})} \leq Ce^{\ensuremath{\mathfrak K} \mu} \|u\|_{L^2((0,T)\times\omega)} + \frac{1}{\mu} \|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}, \nonumber \\ \text{ for all $\mu \geq \mu_0$ and all $(u_0, u_1) \in H^1_0(\ensuremath{\mathcal M}) \times L^2(\ensuremath{\mathcal M})$, and $u$ solution to~\eqref{e:wave}}.
\end{align}
\end{itemize}
Recall the definition of $\L(\ensuremath{\mathcal M},\omega)$ in~\eqref{e:def-L-omega-M}.
Remark that this last estimate is equivalent to (see~\cite{LL:15} or Corollary~\ref{c:lambda=mu} below)
\bnan
\label{e:co-wave-bis}
\|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}\leq C' e^{\ensuremath{\mathfrak K}' \Lambda} \|u\|_{L^2((0,T)\times\omega)}, \quad \Lambda = \frac{\|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}}{\|(u_0,u_1)\|_{L^2(\ensuremath{\mathcal M}) \times H^{-1}(\ensuremath{\mathcal M})} }, \nonumber \\ \text{ for all $(u_0, u_1) \in H^1_0(\ensuremath{\mathcal M}) \times L^2(\ensuremath{\mathcal M})$, and $u$ solution to~\eqref{e:wave}.}
\enan
Note that in the reference~\cite{LL:15}, the observation term in the right hand-side of these inequalities is $\|u\|_{L^2(0,T;H^1(\omega))}$ instead of $\|u\|_{L^2((0,T)\times\omega)}$. That the stronger inequalities above hold is proved in~\cite[Section~5.3]{LL:17Hypo} (see also~\cite{LL:17approx}).
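As an illustration of the spectral inequality~\eqref{e:co-sum} (a numerical sketch added here, not taken from the paper), consider the simplest case $\ensuremath{\mathcal M}=(0,\pi)$ with Dirichlet eigenfunctions $\phi_j(x)=\sqrt{2/\pi}\sin(jx)$ and $\omega=(0,a)$. Since the $\phi_j$ are orthonormal on $\ensuremath{\mathcal M}$, the best constant over $E_{\leq n^2}$ is governed by the smallest eigenvalue of the Gram matrix of $\phi_1,\dots,\phi_n$ restricted to $\omega$: $\min_{u\in E_{\leq n^2},\,\|u\|_{L^2(\ensuremath{\mathcal M})}=1}\|u\|_{L^2(\omega)}^2 = \lambda_{\min}(G)$. The choices $n=8$ and $a=0.4\pi$ below are arbitrary.

```python
import math

def gram(n, a):
    """Gram matrix G_ij = <phi_i, phi_j>_{L^2(0,a)} of the Dirichlet
    eigenfunctions phi_j(x) = sqrt(2/pi) sin(j x) on (0, pi)."""
    def entry(i, j):
        if i == j:
            return (a - math.sin(2*j*a)/(2*j)) / math.pi
        return (math.sin((i - j)*a)/(i - j) - math.sin((i + j)*a)/(i + j)) / math.pi
    return [[entry(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]

def jacobi_eigenvalues(A, sweeps=60):
    """Eigenvalues of a symmetric matrix by cyclic Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-30:
                    continue
                t = 0.5*math.atan2(2*A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(t), math.sin(t)
                for k in range(n):   # rotate rows p and q
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k], A[q][k] = c*apk - s*aqk, s*apk + c*aqk
                for k in range(n):   # rotate columns p and q
                    akp, akq = A[k][p], A[k][q]
                    A[k][p], A[k][q] = c*akp - s*akq, s*akp + c*akq
    return sorted(A[i][i] for i in range(n))

n, a = 8, 0.4*math.pi               # modes up to lambda = n^2; omega = (0, a)
smin = jacobi_eigenvalues(gram(n, a))[0]
K_obs = -0.5*math.log(smin)/n       # observed exponent in e^{K sqrt(lambda)}
print(smin, K_obs)
```

The exponential smallness of $\lambda_{\min}(G)$ in $n$ is precisely what forces the factor $e^{\ensuremath{\mathfrak K}\sqrt{\lambda}}$ in~\eqref{e:co-sum}.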
\bigskip
In all these inequalities, we are interested in the ``best constant $\ensuremath{\mathfrak K}$'' such that the estimate holds for some $C$. More precisely, we are interested in the way it depends on the geometry of $(\ensuremath{\mathcal M},g)$ and $\omega$ (and, in the case of~\eqref{e:co-wave}, the time $T$).
Let us first formulate the precise definitions of these constants. They are the analogues of the definition of $\ensuremath{\mathfrak K}_{heat} (\omega)$ given in~\eqref{e:def-Kheat}.
\begin{definition}
\label{def-coco}
Given $\omega\subset \ensuremath{\mathcal M}$ an open set, we define $\ensuremath{\mathfrak K}_{eig} (\omega), \ensuremath{\mathfrak K}_\Sigma (\omega) , \ensuremath{\mathfrak K}_{\infty}(\omega), \ensuremath{\mathfrak K}_{wave}(\omega,T)$ to be the best exponents in the above estimates~\eqref{e:co-eig}-\eqref{e:co-wave}, namely:
\bna
\ensuremath{\mathfrak K}_{eig} (\omega) = \inf \left\{ \ensuremath{\mathfrak K} >0 , \exists C>0 \text{ s.t. \eqref{e:co-eig} holds} \right\} ,
\ena
\bna
\ensuremath{\mathfrak K}_{\Sigma} (\omega) = \inf \left\{ \ensuremath{\mathfrak K} >0 , \exists C>0 \text{ s.t. \eqref{e:co-sum} holds} \right\} ,
\ena
\bna
\ensuremath{\mathfrak K}_{\infty} (\omega) = \inf \left\{ \ensuremath{\mathfrak K} >0 , \exists C>0 \text{ s.t. \eqref{e:co-infty} holds} \right\} ,
\ena
\begin{align}
\label{e:co=co'}
\ensuremath{\mathfrak K}_{wave} (\omega,T) & = \inf \left\{ \ensuremath{\mathfrak K} >0 , \exists C>0, \mu_0>0 \text{ s.t. \eqref{e:co-wave} holds} \right\} \nonumber \\
&= \inf \left\{ \ensuremath{\mathfrak K}' >0 , \exists C'>0, \text{ s.t. \eqref{e:co-wave-bis} holds} \right\}.
\end{align}
\end{definition}
A proof of the equality in~\eqref{e:co=co'} is given in Corollary~\ref{c:lambda=mu} below. Note that we may write $\ensuremath{\mathfrak K}_{wave} (\omega,T) = + \infty$ if $T< 2\L(\ensuremath{\mathcal M},\omega)$ since~\eqref{e:co-wave}-\eqref{e:co-wave-bis} are known not to hold (see the discussion in~\cite{LL:15}). However, $\ensuremath{\mathfrak K}_{wave} (\omega,T) < + \infty$ as soon as $T>2\L(\ensuremath{\mathcal M},\omega)$, by virtue of~\eqref{e:co-wave}-\eqref{e:co-wave-bis}.
\bigskip
Let us now collect some known facts concerning these constants, in addition to the already discussed bound $\ensuremath{\mathfrak K}_{heat}(\omega)\geq \frac{\L(\ensuremath{\mathcal M},\omega)^2}{4}$~\cite{Miller:04}. A first trivial (but useful) fact is that $\ensuremath{\mathfrak K}_{eig} (\omega) \leq \ensuremath{\mathfrak K}_{\Sigma} (\omega)$.
The following properties can also be found in the literature:
\begin{enumerate}
\item For all $(\ensuremath{\mathcal M}, g), \omega$ such that $\overline{\omega} \neq \ensuremath{\mathcal M}$, we have $\ensuremath{\mathfrak K}_{\Sigma}(\omega)\geq \frac{\L(\ensuremath{\mathcal M},\omega)}{2}$, see~\cite[Theorem~5.3]{Miller:10} (that $\ensuremath{\mathfrak K}_{\Sigma}(\omega)>0$ had already been proved in~\cite{JL:99}).
\item $\ensuremath{\mathfrak K}_{\infty} (\omega) \leq \ensuremath{\mathfrak K}_{heat} (\omega)$, \cite[Theorem~1]{Miller:06c}.
\item For all $(\ensuremath{\mathcal M}, g), \omega$, we have $\ensuremath{\mathfrak K}_{\infty} (\omega) \geq \frac{d_1(\omega)^2}{4}$, with $d_1(\omega) = \sup \left\{ r>0 , \exists x\in \ensuremath{\mathcal M}, B(x,r) \subset \ensuremath{\mathcal M}\setminus \overline{\omega} \right\}$, see~\cite{FCZ:00} and~\cite[Section~4.1]{Zua:01}.
\item Assume $\omega$ satisfies the Geometric Control Condition in $(\ensuremath{\mathcal M}, g)$ and denote by $L_\omega$ the maximal length of a ray of geometric optics not intersecting $\omega$. Then, we have
\bnan
\label{upperboundMiller}
\ensuremath{\mathfrak K}_{heat} (\omega) \leq \alpha_* L_\omega^2
\enan with $\alpha_* = 2 \left(\frac{36}{37}\right)^2$, see~\cite{Miller:04,Miller:06b} (improved to $\alpha_* =3/4$ in~\cite{TT:07} and to $0.6966$ in~\cite{Darde:17}).
\item Assume $\omega$ satisfies the Geometric Control Condition in $(\ensuremath{\mathcal M}, g)$ and denote by $L_\omega$ the maximal length of a ray of geometric optics not intersecting $\omega$. Then, we have $\ensuremath{\mathfrak K}_{\infty} (\omega) \leq \frac1{16} L_\omega^2$, see~\cite[Theorem~1.1]{EZ:11s}.
\item $\ensuremath{\mathfrak K}_{heat} (\omega) \leq 4 \ensuremath{\mathfrak K}_{\Sigma} (\omega)^2$, see~\cite[Corollary~1, see also the discussion in Section~2.4]{Miller:10} (see also~\cite{Sei:08} for a proof $\ensuremath{\mathfrak K}_{heat} (\omega) \leq 8 \ensuremath{\mathfrak K}_{\Sigma} (\omega)^2$).
\item If $(\omega,T)$ satisfies the geometric control condition~\cite{BLR:92}, then $\ensuremath{\mathfrak K}_{wave}(\omega,T) =0$ (more precisely,~\eqref{e:co-wave}-\eqref{e:co-wave-bis} hold with $\ensuremath{\mathfrak K}=0$). Conversely, if $(\ensuremath{\mathcal M},g)$ is real-analytic and $(\overline{\omega},T)$ does not satisfy the geometric control condition (for a ray that only intersects $\d \ensuremath{\mathcal M}$ transversally), then $\ensuremath{\mathfrak K}_{wave}(\omega,T) >0$, see~\cite{Leb:Analytic}.
\end{enumerate}
Notice that in all these statements, the constants $\ensuremath{\mathfrak K}_{heat}$ and $\ensuremath{\mathfrak K}_\infty$ (heat equation) are homogeneous to the square of a distance (as for the heat kernel), whereas the other ones are homogeneous to a distance (as for the wave kernel).
Remark also that every comparison statement above follows, in the associated reference, from a proper inequality (the above statements being only a weak form of those).
Also notice that the converse inequality $ \ensuremath{\mathfrak K}_{\Sigma} (\omega)^2\leq C \ensuremath{\mathfrak K}_{heat} (\omega)$ for a universal constant $C$ is certainly not true in general. For instance, in the case of boundary control on an interval $(0,1)$ (see Section \ref{sectpreviousres}), $\ensuremath{\mathfrak K}_{heat} (\{0\})$ is finite while it is easy to see that $\ensuremath{\mathfrak K}_{\Sigma} (\{0\})$ is infinite since no spectral inequality can be true just by dimensional analysis.
\bigskip
We first complete the above list of comparison results by the following proposition.
\begin{proposition}[Other links between the constants]
\label{p:link-eigenfct-heat-etc}
We have
\bna
\frac{\ensuremath{\mathfrak K}_{eig}(\omega)^2}{4} \leq \ensuremath{\mathfrak K}_{heat}(\omega) , \quad
\frac{\ensuremath{\mathfrak K}_{eig}(\omega)^2}{4} \leq \ensuremath{\mathfrak K}_{\infty}(\omega) .
\ena
Also for all $T>0$, we have $\ensuremath{\mathfrak K}_{eig}(\omega) \leq \ensuremath{\mathfrak K}_{wave}(\omega,T)$.
\end{proposition}
Note that the last statement is empty if $T< 2\L(\ensuremath{\mathcal M},\omega)$ since~\eqref{e:co-wave}-\eqref{e:co-wave-bis} are known not to hold (see the discussion in~\cite{LL:15}), but is nonempty if we have $\ensuremath{\mathfrak K}_{wave}(\omega,T) <\infty$, that is if $T>2\L(\ensuremath{\mathcal M},\omega)$, by virtue of~\eqref{e:co-wave}-\eqref{e:co-wave-bis}.
Hence, in order to produce lower bounds for $ \ensuremath{\mathfrak K}_\Sigma(\omega), \ensuremath{\mathfrak K}_{heat}(\omega), \ensuremath{\mathfrak K}_{\infty}(\omega) , \ensuremath{\mathfrak K}_{wave}(\omega,T)$, we shall produce lower bounds for $\ensuremath{\mathfrak K}_{eig}(\omega)$, i.e. construct sequences of eigenfunctions having a maximal vanishing rate on $\omega$. Note also that, summarizing the inequalities so far, we have:
\bnan
\label{e:comparison-constants}
\frac{\ensuremath{\mathfrak K}_{eig}(\omega)^2}{4} \leq \ensuremath{\mathfrak K}_{\infty}(\omega) \leq \ensuremath{\mathfrak K}_{heat}(\omega) \leq 4 \ensuremath{\mathfrak K}_\Sigma(\omega)^2,
\enan
so that the understanding of concentration properties for eigenfunctions and sums of eigenfunctions essentially contains that of the heat equation. Therefore, our main focus in the following is to produce:
\begin{itemize}
\item maximally vanishing eigenfunctions in particular geometries, to yield a lower bound for $\ensuremath{\mathfrak K}_{eig}$;
\item a uniform Lebeau-Robbiano spectral inequality on small balls, to yield an upper bound for $\ensuremath{\mathfrak K}_\Sigma$.
\end{itemize}
Note that reducing our attention to $\ensuremath{\mathfrak K}_{eig}$ in the search for lower bounds is already very restrictive!
Indeed, as soon as the Schr\"odinger equation on $(\ensuremath{\mathcal M},g)$ is observable from $\omega$ in finite time (in particular if $\omega$ satisfies the geometric control condition, see~\cite{BLR:92,Leb:92}), then $\ensuremath{\mathfrak K}_{eig} (\omega)=0$ (more precisely, \eqref{e:co-eig} holds with $\ensuremath{\mathfrak K}=0$).
\bigskip
Before starting to state these lower/upper bounds, let us give a link between $\ensuremath{\mathfrak K}_{heat} (\omega)$ and $\ensuremath{\mathfrak K}_{wave}(\omega,T)$, a consequence of a result of Ervedoza-Zuazua~\cite{EZ:11} (weak observability with exponential cost for the wave equation implies observability of the heat equation).
\begin{proposition}
\label{propheatwave}
There exist universal constants $\alpha_{1},\alpha_{2}>0$ so that for any $S>0$, we have
\bna
\ensuremath{\mathfrak K}_{heat} (\omega)\leq \alpha_{1}S^2 + \alpha_{2}\ensuremath{\mathfrak K}_{wave}(\omega,S)^{2}.
\ena \end{proposition}
The proof of this result in Section~\ref{s:proof-wave-heat} is a little more precise about this estimate. In particular, several values of $(\alpha_1, \alpha_2)$ can be deduced from it.
The value of $\alpha_{1}$ is thought to be related to the cost of the boundary control of the 1D heat equation.
Note that, as in~\eqref{e:comparison-constants}, this yields
\bna
\frac{\ensuremath{\mathfrak K}_{eig}(\omega)^2}{4} \leq \ensuremath{\mathfrak K}_{\infty}(\omega) \leq \ensuremath{\mathfrak K}_{heat}(\omega) \leq \alpha_{1}S^2 + \alpha_{2}\ensuremath{\mathfrak K}_{wave}(\omega,S)^{2}, \quad \text{ for all } S>0 .
\ena
However, this upper bound seems for the moment less useful than that of~\eqref{e:comparison-constants}, since the proof of~\eqref{e:co-wave}-\eqref{e:co-wave-bis} in~\cite{LL:15} is more technically involved than that of~\eqref{e:co-sum} in~\cite{LR:95,JL:99,LZ:98}. The computation of $\ensuremath{\mathfrak K}_{wave}(\omega,S)$ seems thus more intricate than that of $\ensuremath{\mathfrak K}_\Sigma(\omega)$.
\subsection{Main results}
\subsubsection{Constructing maximally vanishing eigenfunctions: lower bound for $\ensuremath{\mathfrak K}_{eig}$}
\label{s:intro-const}
In this Section, we provide lower bounds for $\ensuremath{\mathfrak K}_{eig}$ in three different geometries.
This then proves Theorem~\ref{thm:counterexamples} as a direct corollary of Proposition~\ref{p:link-eigenfct-heat-etc}.
\paragraph{The sphere}
We first state the results we obtain on the two-dimensional sphere $\S^2$, since they are particularly simple. The higher dimensional case $\S^n$ is completely similar.
The sphere $\S^2$ is parametrized by $(s, \theta) \in (0,\pi)\times \S^1$. We denote by $N$ (resp. $S$) the north pole described by $s=0$ (resp. the south pole described by $s=\pi$), and remark that $s$ is the geodesic distance to the point $N$.
\begin{theorem}
\label{t:th-sphere}
For $k\in {\mb{N}}$, the function
$$
\psi_k (s,\theta)= c_k \sin (s)^k e^{ik\theta}, \quad \quad c_k = \frac{k^{1/4}}{2^{1/2}\pi^{3/4}}\left( 1 + O(\frac{1}{k})\right) \text{ as } k\to +\infty
$$
satisfies
\begin{align*}
& -\Delta_g \psi_k = k(k+1) \psi_k \quad \text{on} \quad \S^2 , \quad \psi_k \in C^\infty(\S^2) , \quad \|\psi_k\|_{L^2(\S^2)} = 1,\\
&|\psi_k (s,\theta)| = c_k \sin (s)^k \leq c_k s^k\quad \text{ for } s\in[0,\pi], k\in {\mb{N}} ,\\
&\|\psi_k\|_{L^2(B(N,r))}^2 = \frac{c_k^2\pi}{k+1}\frac{\sin(r)^{2k+2}}{\cos(r)} (1+R) , \quad |R|\leq \frac{\tan(r)^2}{2k+2} \quad \text{ for } r\in[0,\frac{\pi}{2}), k\in {\mb{N}} .
\end{align*}
\end{theorem}
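As a quick numerical sanity check (an illustration, not part of the proof), the eigenvalue equation $-\Delta_g \psi_k = k(k+1)\psi_k$ can be verified in the $(s,\theta)$ coordinates, in which the Laplace-Beltrami operator on $\S^2$ takes the standard form $\d_s^2 + \cot(s)\d_s + \sin(s)^{-2}\d_\theta^2$; the $e^{ik\theta}$ dependence reduces it to an ODE in $s$, checked below by finite differences.

```python
import math

def f(s, k):
    """Radial profile of psi_k (normalization dropped): sin(s)^k."""
    return math.sin(s)**k

def minus_lap(s, k, h=1e-5):
    """-Delta_g acting on sin(s)^k e^{ik theta}, factored through e^{ik theta}:
    -(f'' + cot(s) f' - (k^2/sin(s)^2) f), by central finite differences."""
    fpp = (f(s + h, k) - 2*f(s, k) + f(s - h, k)) / h**2
    fp = (f(s + h, k) - f(s - h, k)) / (2*h)
    return -(fpp + fp*math.cos(s)/math.sin(s) - k**2*f(s, k)/math.sin(s)**2)

for k, s in ((5, 1.0), (3, 2.0)):
    print(minus_lap(s, k), k*(k + 1)*f(s, k))  # each pair agrees: eigenvalue k(k+1)
```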
This result is a much more explicit, more precise (and simpler to prove) version of the general results we obtain on surfaces of revolution. We turn to the general case and shall explain at the end of the section the links with Theorem~\ref{t:th-sphere}.
\paragraph{Surfaces of revolution}
The precise description of the geometry of the surfaces we consider is given in Section~\ref{s:revol} and we only give here the features required to state the result.
We consider $\ensuremath{\mathcal M} =\ensuremath{\mathcal S} \subset {\mb{R}}^3$ a smooth compact surface diffeomorphic to the sphere $\S^2$. We assume moreover that it is invariant under rotation around an axis, which intersects $\ensuremath{\mathcal S}$ at two points, the north and the south poles, respectively $N, S \in \ensuremath{\mathcal S}$. These points are the only invariant points of the revolution symmetry.
The surface is then endowed with the metric $g$ inherited from the Euclidean metric on ${\mb{R}}^3$, which itself enjoys the rotation invariance.
Then, we describe (almost all of) the surface by two coordinates, namely $s = \dist_g(\cdot , N)$, the geodesic distance to the north pole, and $\theta$, the angle of rotation. The variable $s$ ranges in $(0,L)$, where $L=\dist_g(N,S)$.
The surface is characterized by the function $R(s)$ associating to $s$ the Euclidean distance in ${\mb{R}}^3$ to the symmetry axis, which, by definition, is rotationally invariant, and satisfies $R(0) = 0= R(L)$. This function $R$ is the ``profile'' of the revolution surface $\ensuremath{\mathcal S}$.
\medskip
We shall now assume that $R$ attains a global maximum at $s_0$, and introduce the relevant Agmon distance to the ``equator'' $s=s_0$, defined by the eikonal equation
\bnan
\label{e:defdA}
\big(d_A'(s) \big)^2-\left( \frac{1}{R(s)^2}- \frac{1}{R(s_0)^2} \right) = 0 , \quad d_A(s_0)=0 , \quad \sgn(d_A'(s_0)) = \sgn(s-s_0) ,
\enan
or, more explicitly, for $s\in (0,L)$, by
\bnan
\label{e:defbisdA}
d_A(s) = \left| \intop_{s_0}^s \sqrt{ \frac{1}{R(y)^2}- \frac{1}{R(s_0)^2}} dy \right| .
\enan
A more intrinsic definition of $d_A$ is given in Remark~\ref{r:def-intrinsic-dA} below (and requires additional notation).
\begin{theorem}
\label{t:agmon-intro}
Assume that $s \mapsto R(s)$ admits a {\em non-degenerate strict global} maximum at $s_0 \in (0,L)$.
Then, for all $k\in{\mb{N}}$, there exists $\psi_k \in C^{\infty}(\ensuremath{\mathcal S})$, and $\lambda_k \geq 0$ such that
$$
\lambda_k= \frac{k^2}{R(s_0)^2}+k\sqrt{\frac{|R''(s_0)|}{R^3(s_0)}} + O(k^{1/2}), \qquad \|\psi_k\|_{L^2(\ensuremath{\mathcal S})}=1, \qquad - \Delta_g \psi_k = \lambda_k \psi_k .
$$
Moreover, there exist $C,C_0,k_0>0$ such that, for all $k \in {\mb{N}}$, $k \geq k_0$ and all $0\leq r \leq s_0$, we have the estimate
$$
\nor{\psi_{k}}{L^2(B(N,r))}
\leq C\lambda_k^{C_0} e^{- d_A(r)R(s_0) \left( \sqrt{\lambda_k} - C\right)}.
$$
\end{theorem}
This statement has to be completed by the asymptotic behavior of $d_A$ (proved in Lemma~\ref{lemma-prop-dA}) when $s\to 0$, namely
\bnan
\label{e:asympt-dA}
d_A(s) = -\log(s) + O(1) , \quad \text{as } s \to 0^+ .
\enan
That is to say, the equator and the poles are infinitely distant from each other for the Agmon distance $d_A$ (as opposed to the geodesic distance $\dist_g$).
Note that at first order, $d_A$ does not depend on the geometry of the surface $\ensuremath{\mathcal S}$ close to the north pole $N$ ($s=0$). A similar statement holds close to the south pole $S$ ($s=L$).
This, together with Definition~\ref{def-coco} and Proposition~\ref{p:link-eigenfct-heat-etc}, yields the following direct corollary.
\begin{corollary}
\label{t:agmon-introenkappa}
Under the assumptions of Theorem \ref{t:agmon-intro}, for all $0\leq r \leq s_0$, we have the estimate
$$
\ensuremath{\mathfrak K}_{eig} (B_g(N,r))\geq d_A(r)R(s_0).
$$
This yields also
\bna
\ensuremath{\mathfrak K}_{\Sigma} (B_g(N,r))\geq d_A(r)R(s_0),&\quad &\ensuremath{\mathfrak K}_{wave} (B_g(N,r),T)\geq d_A(r)R(s_0) ,\quad \textnormal{ for any }T>0,\\
\ensuremath{\mathfrak K}_{\infty} (B_g(N,r))\geq \frac{\big(d_A(r)R(s_0)\big)^2}{4}, &\quad& \ensuremath{\mathfrak K}_{heat} (B_g(N,r))\geq \frac{\big(d_A(r)R(s_0)\big)^2}{4}.
\ena
\end{corollary}
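The formula~\eqref{e:defbisdA} and the asymptotics~\eqref{e:asympt-dA} lend themselves to a concrete check (a numerical illustration added here, not part of the proofs): taking the round-sphere profile $R(s)=\sin(s)$, $s_0=\pi/2$, as a test case, for which Remark~\ref{rksphereAgmon} below gives the closed form $d_A(s)=|\log \sin(s)|$, a midpoint-rule evaluation of the integral reproduces the closed form and exhibits the $-\log(s)$ behaviour near the pole.

```python
import math

def agmon(s, prof, s0, n=20000):
    """Midpoint-rule evaluation of eq. (e:defbisdA):
    d_A(s) = | int_{s0}^{s} sqrt(1/prof(y)^2 - 1/prof(s0)^2) dy |."""
    lo, hi = min(s, s0), max(s, s0)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        y = lo + (i + 0.5)*h
        total += math.sqrt(max(0.0, 1.0/prof(y)**2 - 1.0/prof(s0)**2))
    return total*h

prof, s0 = math.sin, math.pi/2     # round-sphere profile, as a test case
for s in (0.5, 0.1, 0.01):
    # columns: s, numerical d_A(s), closed form -log(sin(s)), asymptote -log(s)
    print(s, agmon(s, prof, s0), -math.log(math.sin(s)), -math.log(s))
```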
Note also that Theorem~\ref{t:agmon-intro}, combined with the explicit asymptotic expansion~\eqref{e:asympt-dA} of the Agmon distance $d_A$ implies the following result.
\begin{corollary}[Rate of vanishing]
\label{c:rate-vanish}
With $(\lambda_k, \psi_k)$ as in Theorem~\ref{t:agmon-intro}, there exist $C,C_0,k_0>0$ such that, for all $k \in {\mb{N}}$, $k \geq k_0$ and all $r \geq 0$, we have
\bna
\nor{\psi_{k}}{L^2(B(N,r))}\leq Ce^{C\sqrt{\lambda_k}}r^{R(s_0)\sqrt{\lambda_k}-C} ,
\ena
and, in any local chart centered at $N$, we have $\d^{\alpha}\psi_{k}(N)=0$ for all $|\alpha|\leq R(s_0)\sqrt{\lambda_k}-C$.
\end{corollary}
As on the sphere, these eigenfunctions saturate the maximal vanishing rate predicted by the Donnelly-Fefferman Theorem~\cite{DF:88}.
Note that in these estimates, $R(s_0) \sqrt{\lambda_k} \sim k$ does not depend on the geometry.
\bigskip
The proofs rely on classical semiclassical decay estimates for eigenfunctions~\cite{Simon:83,HS:84}. We refer to the monographs \cite{Helffer:booksemiclassic,DS:book} for the historical background and more references. An additional difficulty here is linked to the degeneracy of the function $R$ close to the north and south poles.
Note also that, to our knowledge, the idea of constructing such examples on surfaces of revolution is due to Lebeau~\cite{Leb:96} and Allibert~\cite{Allibert:98}.
\paragraph{The disk}
Recall that $\ensuremath{\mathbb D} = \{(x,y)\in {\mb{R}}^2 , x^2+y^2 \leq 1\}$.
Our results on the disk are quite similar to the previous results on revolution surfaces. They are proved in Section \ref{sectDisk}.
Note that the construction is more explicit there, since it involves Bessel functions.
As in the above example, the concentration is related to an Agmon distance to the maximum of the radius $r$, which corresponds to the boundary $\d\ensuremath{\mathbb D}$ here.
\begin{theorem}[Whispering galleries on the disk]
\label{t:agmon-diskintro}
Denote, for $r \in (0,1]$,
\begin{equation}
\label{e:def-dA-disk}
d_A(r)= - \left(\tanh(\alpha(r))-\alpha(r)\right) ,\quad \text{ with } \quad \alpha(r)=\cosh^{-1}(1/r).
\end{equation}
Then, for all $k\in{\mb{N}}$, there exists $\psi_k \in C^{\infty}(\ensuremath{\mathbb D})\cap H^1_0(\ensuremath{\mathbb D})$, and $\lambda_k \geq 0$ such that
$$
\lambda_k= k^2+ O(k^{4/3}), \qquad \|\psi_k\|_{L^2(\ensuremath{\mathbb D})}=1, \qquad - \Delta_g \psi_k = \lambda_k \psi_k .
$$
Moreover, there exist $C, \beta , k_0 >0$ such that for all $k \geq k_0$ and $0< r \leq 1-\beta \lambda_k^{-1/3}$, we have
\bna
\|\psi_{k}\|_{L^\infty(B(0,r))} \leq \exp \left(-\left( \sqrt{\lambda_k} - C\lambda_k^{1/6} \right)d_A(r) + C \lambda_k^{1/6} \right) .
\ena
\end{theorem}
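The rotationally symmetric Dirichlet eigenfunctions of the disk are the Bessel modes $J_k(\sqrt{\lambda}\, r)e^{ik\theta}$ with $\sqrt{\lambda}=j_{k,1}\sim k$. The following sketch (an illustration, not part of the proof, using $J_k(kr)$ as a convenient proxy for the first mode) evaluates $J_k$ by its power series and compares the observed decay rate $-\frac{1}{k}\log|J_k(kr)|$ at $r=1/2$ with $d_A(r)$; the agreement improves as $k$ grows, consistently with the $e^{-\sqrt{\lambda_k}\,d_A(r)}$ bound of Theorem~\ref{t:agmon-diskintro}.

```python
import math

def bessel_J(k, x, tol=1e-30):
    """J_k(x) via its power series; a term recurrence avoids huge factorials."""
    term = (x/2.0)**k / math.factorial(k)
    total, m = term, 0
    while abs(term) > tol:
        term *= -(x/2.0)**2 / ((m + 1)*(k + m + 1))
        total += term
        m += 1
    return total

def d_A(r):
    """Agmon distance to the boundary r=1, eq. (e:def-dA-disk)."""
    a = math.acosh(1.0/r)
    return a - math.tanh(a)

r = 0.5
for k in (30, 60):
    rate = -math.log(abs(bessel_J(k, k*r))) / k
    print(k, rate, d_A(r))   # rate approaches d_A(0.5) ~ 0.451 as k grows
```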
That $d_A$ indeed represents an Agmon distance in the present context is justified in the next paragraph. Note that $d_A$ still satisfies $d_A(r) \sim_{r\to 0^+} \log(\frac{1}{r})$ here, so that the analogues of Corollaries~\ref{t:agmon-introenkappa} and~\ref{c:rate-vanish} still hold in this setting.
\paragraph{Remarks on the Agmon distance}
In this paragraph, we compare the three geometries discussed above. In particular, we stress the fact that the results obtained on the sphere are refinements of those on general surfaces of revolution, and explain the similarities in the case of the disk.
\begin{remark}[Agmon distance on the sphere]
\label{rksphereAgmon}
Note that the coordinates $(s, \theta)$ introduced on the unit sphere are the same as those defining general surfaces of revolution, with $L=\pi$, $s\in (0,\pi)$, $R(s) = \sin(s)$
and the maximum of $R$ is reached at $s_0= \frac{\pi}{2}$.
In particular, recalling the definition of the Agmon distance in~\eqref{e:defbisdA}, we obtain, for $s\in (0,\pi)$,
\bna
d_A(s) = \left| \intop_{s_0}^s \sqrt{ \frac{1}{R(y)^2}- \frac{1}{R(s_0)^2}} dy \right| = \left| \intop_{\pi/2}^s \sqrt{ \frac{1}{\sin(y)^2}- 1} dy \right| = \left| \intop_{\pi/2}^s \frac{\cos(y)}{\sin(y)}dy \right| = |\log (\sin(s))|.
\ena
This can be rewritten intrinsically as
$$
d_A(m) = - \log \big(\sin (\dist_g(m,N) )\big) , \quad m \in \S^2 ,\quad \text{(recall $\dist_g(m,N) + \dist_g(m,S) = \pi$)}.
$$
In view of this identity for the sphere, the estimates on the eigenfunctions $\psi_k$ of Theorem~\ref{t:th-sphere} can be reformulated as ($\lambda_k = k (k+1)$)
\begin{align*}
&|\psi_k (s,\theta)| = c_k e^{-k d_A(s)} \quad \text{ for } s\in[0,\pi], k\in {\mb{N}} ,\\
&\|\psi_k\|_{L^2(B(N,r))}^2 = \frac{c_k^2\pi}{k+1}\frac{e^{-(2k+2)d_A(r)}}{\cos(r)} (1+R) , \quad |R|\leq \frac{\tan(r)^2}{2k+2} \quad \text{ for } r\in[0,\frac{\pi}{2}), k\in {\mb{N}} .
\end{align*}
These two statements (pointwise estimate and fine asymptotics of the $L^2$ norm) are much stronger than those of the general result, Theorem~\ref{t:agmon-intro}, on surfaces of revolution.
\end{remark}
\begin{remark}[Agmon distance in the disk]
Recalling the definition of $d_A$ in~\eqref{e:def-dA-disk}, we have
$\alpha'(r)=-\frac{1}{r^2}\frac{1}{\sqrt{\frac{1}{r^2}-1}}$, so that
$$
(d_A'(r))^2=\alpha'(r)^2 \left(\frac{1}{\cosh^2(\alpha(r))}-1 \right)^2= \frac{1}{r^2}\frac{1}{1-r^2}(r^2-1)^2=\frac{1}{r^2}-1 , \quad \text{ and }\quad d_A(1)=0.
$$
As a consequence, $d_A$ is exactly the Agmon distance to the boundary $r=1$, and we have
\bna
d_A'(r) = - \sqrt{\frac{1}{r^2}-1}, \quad r \in (0,1] .
\ena
Note again that $d_A(r) \sim_{r\to 0^+} \log(\frac{1}{r})$ and, in particular, the center of the disk is at infinite Agmon distance to the boundary: $d_A(0) = +\infty$.
\end{remark}
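Both facts of this remark, the eikonal identity $(d_A'(r))^2 = \frac{1}{r^2}-1$ and the logarithmic behaviour of $d_A$ near the centre, can be confirmed numerically from the explicit formula~\eqref{e:def-dA-disk} (a sanity check added here, not part of the paper; the value $\log 2 - 1$ of the limit of $d_A(r)+\log(r)$ follows from $\alpha(r)=\log(2/r)+o(1)$ and $\tanh(\alpha(r))\to 1$).

```python
import math

def alpha(r):
    """alpha(r) = cosh^{-1}(1/r), as in eq. (e:def-dA-disk)."""
    return math.acosh(1.0/r)

def d_A(r):
    """Agmon distance to the boundary of the disk, eq. (e:def-dA-disk)."""
    return -(math.tanh(alpha(r)) - alpha(r))

# eikonal identity (d_A'(r))^2 = 1/r^2 - 1, checked by central differences
r, h = 0.5, 1e-6
dprime = (d_A(r + h) - d_A(r - h)) / (2*h)
print(dprime**2, 1.0/r**2 - 1.0)         # both close to 3.0

# logarithmic behaviour near the centre: d_A(r) + log(r) -> log(2) - 1
r_small = 1e-5
print(d_A(r_small) + math.log(r_small))  # close to log(2) - 1 ~ -0.3069
```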
\subsubsection{Uniform Lebeau-Robbiano spectral inequalities: upper bound for $\ensuremath{\mathfrak K}_\Sigma$}
\label{subsub:LR-Jerison-Lebeau}
The counterpart to Corollary~\ref{c:rate-vanish} is due to Donnelly-Fefferman~\cite{DF:88}, and roughly states that eigenfunctions vanish at most like $r^{C \sqrt{\lambda}+C}$ on balls of radius
$r$ ($\lambda$ is the eigenvalue). It has been generalized in some sense to sums of eigenfunctions by Jerison and Lebeau~\cite{JL:99}.
We prove here a variant of this result under the form of a uniform Lebeau-Robbiano spectral inequality with observation on small balls.
\begin{theorem}[Uniform Lebeau-Robbiano spectral inequality with observation on small balls]
\label{t:unif-LR-ineq}
Let $(\ensuremath{\mathcal M},g)$ be a compact Riemannian manifold with (or without) boundary $\d \ensuremath{\mathcal M}$. For all $x_0\in \ensuremath{\mathcal M}$, there exist constants $C_1,C_2>0$ such that for all $r>0$, $\lambda \geq 0$ and $\psi \in E_{\leq\lambda}$, we have
$$
\| \psi\|_{L^2(\ensuremath{\mathcal M})} \leq e^{\left(C_1\sqrt{\lambda} + C_2\right) \left(1+\log(\frac{1}{r})\right)} \|\psi \|_{L^2(B(x_0,r))} .
$$
\end{theorem}
Note that a careful inspection of the proofs (all Carleman estimates used are stable under small perturbations) shows that the constants $C_1,C_2$ can actually be taken independent of the point $x_0$.
Note also that we prove the result in the context of a Lipschitz metric $g$, as well as in the case of Neumann boundary conditions.
This uniform Lebeau-Robbiano spectral inequality directly implies Theorem~\ref{t:heatlog} using~\cite[Corollary~1]{Miller:10} (recalled in Lemma~\ref{lmspectraldonneheat} below).
\bigskip
One of the tools we develop for the proof of Theorem~\ref{t:heatlog} also yields a uniform Lebeau-Robbiano spectral inequality in a class of Lipschitz metrics. Even though it is not directly related to the main results of the paper, we choose to state it here since we believe it is of independent interest.
On the manifold $\ensuremath{\mathcal M}$, we denote here by $\mathfrak{g}$ a metric, by $(\lambda_j^\mathfrak{g})_{j \in {\mb{N}}}$ the spectrum of the associated Laplace-Beltrami operator $-\Delta_\mathfrak{g}$ (with Dirichlet boundary conditions if $\d \ensuremath{\mathcal M} \neq \emptyset$), and by $(\psi_{\lambda_j}^\mathfrak{g})_{j \in {\mb{N}}}$ an associated Hilbert basis of eigenfunctions, in order to stress the dependence with respect to the metric. We also write
$$
E_{\leq\lambda}^\mathfrak{g} = \vect \{\psi_{\lambda_j}^\mathfrak{g}, \lambda_j^\mathfrak{g} \leq \lambda\} ,
$$
which, of course, depends on the metric $\mathfrak{g}$.
Now, given a reference Lipschitz metric $\mathfrak{g}_0$, we define
$$
\Gamma_{\varepsilon, D}(\ensuremath{\mathcal M}, \mathfrak{g}_0) = \left\{ \mathfrak{g} \text{ Lipschitz continuous metric on } \ensuremath{\mathcal M}, \quad \|\mathfrak{g}\|_{W^{1,\infty} (\ensuremath{\mathcal M})} \leq D, \quad \varepsilon \mathfrak{g}_0 \leq \mathfrak{g} \leq D \mathfrak{g}_0\right\} .
$$
\begin{theorem}[Uniform Lebeau-Robbiano spectral inequality in a class of metrics]
\label{t:uniform-LR-ineq-metric}
Let $\ensuremath{\mathcal M}$ be a compact Riemannian manifold with (or without) boundary $\d \ensuremath{\mathcal M}$, $\mathfrak{g}_0$ be a Lipschitz continuous Riemannian metric on $\ensuremath{\mathcal M}$, and $\omega \subset \ensuremath{\mathcal M}$ a nonempty open set. Then, for all $D\geq\varepsilon>0$, there exist constants $C, c>0$ such that for all $\mathfrak{g} \in \Gamma_{\varepsilon, D}(\ensuremath{\mathcal M}, \mathfrak{g}_0)$, $\lambda \geq 0$ and $w \in E_{\leq\lambda}^{\mathfrak{g}}$, we have
\begin{equation}
\label{e:unif-LR-ineq-metric}
\| w \|_{L^2(\ensuremath{\mathcal M})} \leq Ce^{c \sqrt{\lambda}} \| w \|_{L^2(\omega)} .
\end{equation}
\end{theorem}
Note that the above estimate is valid whatever the choice of $L^2$-norm (i.e. w.r.t. $\mathfrak{g}$ or $\mathfrak{g}_0$) since all these norms are uniformly equivalent for metrics $\mathfrak{g}$ in the class $\Gamma_{\varepsilon, D}(\ensuremath{\mathcal M}, \mathfrak{g}_0)$.
This result could be reformulated by saying that~\eqref{e:unif-LR-ineq-metric} holds for all $w \in \bigcup_{\mathfrak{g} \in \Gamma_{\varepsilon, D}(\ensuremath{\mathcal M}, \mathfrak{g}_0)} E_{\leq\lambda}^{\mathfrak{g}}$.
This uniform Lebeau-Robbiano spectral inequality directly implies the following uniform estimate on the cost of the heat equation, using~\cite[Corollary~1]{Miller:10}, recalled in Lemma~\ref{lmspectraldonneheat} below (in which the constants are explicitly computed in terms of the constants in the spectral inequality).
\begin{corollary}
Let $\ensuremath{\mathcal M}$ be a compact Riemannian manifold with (or without) boundary $\d \ensuremath{\mathcal M}$, $\mathfrak{g}_0 \in \mathcal{T}^2_{W^{1,\infty}(\ensuremath{\mathcal M})}$ be a Riemannian metric on $\ensuremath{\mathcal M}$, and $\omega \subset \ensuremath{\mathcal M}$ a nonempty open set. Then, for all $D\geq\varepsilon>0$, there exist constants $C, \mathfrak{K}>0$ such that for all $\mathfrak{g} \in \Gamma_{\varepsilon, D}(\ensuremath{\mathcal M}, \mathfrak{g}_0)$, we have
\bna
\nor{e^{T\Delta_{\mathfrak{g}}}u}{L^2(\ensuremath{\mathcal M})}^2 \leq Ce^{\frac{2\mathfrak{K}}{T}} \intop_0^T \nor{e^{t\Delta_{\mathfrak{g}}}u}{L^2(\omega)}^2 dt , \quad \text{ for all $T>0$ and all $u \in L^2(\ensuremath{\mathcal M})$.}
\ena
\end{corollary}
\subsubsection{The case of a barrel: upper bound for $\ensuremath{\mathfrak K}_{wave}$ and $\ensuremath{\mathfrak K}_{heat}$}
\label{subsubbarrel}
To conclude with the upper bounds on the constants, we present in this section some applications of results obtained by Allibert in \cite{Allibert:98}.
In the case of a ``barrel-type surface'' with boundary (a geometric setting close to that of the surfaces of revolution described above), Allibert estimates the attainable space for the controlled wave equation. As corollaries, we deduce from this result estimates of $\ensuremath{\mathfrak K}_{wave}$ and, in view of Proposition~\ref{propheatwave}, of $\ensuremath{\mathfrak K}_{heat}$.
We first present the geometric context.
In this section, $\ensuremath{\mathcal M}=\ensuremath{\mathcal S}$ is a surface of revolution of ${\mb{R}}^{3}$ with boundary, parametrized by the equation $$\ensuremath{\mathcal S} = \{ (x,y,z)\in {\mb{R}}^3 , z\in [a,b] , x^{2}+y^{2}=\mathsf{R}(z)^{2}\} , $$ where $\mathsf{R}$ is a strictly positive smooth function on $[a,b]$ that admits a unique local (and therefore global) non degenerate maximum (i.e. $\mathsf{R}''(c)<0$) at a point $c \in (a,b)$. The control is a boundary control from the bottom side, that is, $\Gamma=\{(x,y,a)\in {\mb{R}}^{3}; x^{2}+y^{2}=\mathsf{R}(a)^{2} \}$. We also describe $\ensuremath{\mathcal S}$ by the coordinates $(z,\theta)$, with $(x,y) =( \mathsf{R}(z)\cos\theta ,\mathsf{R}(z)\sin\theta )$.
We refer to Remark~\ref{rkautreparam} to explain the link between the two parametrizations of revolution surfaces by $s$ and $z$ (and in particular, that we may write $z=z(s)$ and $R(s)= \mathsf{R}(z(s))$).
As before, we define the Agmon distance to the point $c$. In the parametrization of the embedding into ${\mb{R}}^{3}$, this gives the following definition (note that it is almost the same as \eqref{e:defbisdA}, but in different coordinates):
\bna
d_{A}(z)=\left| \intop_{c}^z \sqrt{1+\mathsf{R}'^{2}(y)}\sqrt{ \frac{1}{\mathsf{R}(y)^2}- \frac{1}{\mathsf{R}(c)^2}} dy \right| .
\ena
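For the reader's convenience, let us record where the two square roots come from (this is a direct computation from the above parametrization): in the coordinates $(z,\theta)$, the metric induced on $\ensuremath{\mathcal S}$ by the Euclidean metric of ${\mb{R}}^{3}$ reads
\bna
g = \left(1+\mathsf{R}'(z)^{2}\right) dz^{2} + \mathsf{R}(z)^{2}\, d\theta^{2} ,
\ena
so that $\sqrt{1+\mathsf{R}'(y)^{2}}\, dy$ is the arclength element along a meridian, while the second square root is the Agmon weight already appearing in~\eqref{e:defbisdA}.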
We also need the following definition of a critical time $T_{1}$ (see Allibert \cite{Allibert:98} for more details), which, roughly speaking, represents the largest period of the geodesic flow, modulo rotation. More precisely, the principal symbol of the wave operator on ${\mb{R}}\times \ensuremath{\mathcal S}$ is given by
\bna
p(t,z,\theta,\tau,\zeta,\eta)=\frac{\zeta^{2}}{1+\mathsf{R}'^{2}(z)}+\frac{\eta^{2}}{\mathsf{R}^{2}(z)}-\tau^{2} ,
\ena
where $(\tau,\zeta,\eta)$ denote the dual variable to $(t,z,\theta)$.
For any bicharacteristic $\gamma$ of $p$, bouncing on the boundary according to the reflection law $\zeta \rightarrow -\zeta$, we denote by $T(\gamma)$ the smallest period of the function $\Pi_{z}(\gamma)$, where $\Pi_{z}$ is the projection on the $z$ component.
Then, $T_{1}$ is defined by
\bna
T_{1}=\sup_{\gamma \textnormal{ bicar}}T(\gamma),
\ena
and we have $T_{1}\geq 2\L(\ensuremath{\mathcal M},\Gamma)$ (this critical time is larger than the time of unique continuation from $\Gamma$).
In this context, we define similarly $\ensuremath{\mathfrak K}_{heat} (\Gamma)$ and $\ensuremath{\mathfrak K}_{wave} (\Gamma,T)$ with exactly the same definition as in \eqref{defcoheat} and Definition \ref{def-coco} with $\nor{u}{L^2([0,T]\times \omega)}$ replaced by $\nor{\partial_{\nu}u}{L^2([0,T]\times \Gamma)}$ in \eqref{e:co-heat} and \eqref{e:co-wave}. Note that $\partial_{\nu}u$ is in $L^2([0,T]\times \Gamma)$ for initial data in $L^2$ (resp. $H^1_0\times L^2$) for the heat (resp. wave) equation thanks to hidden regularity. We deduce from~\cite{Allibert:98} the following result.
\begin{theorem}
\label{t:positrev}
Under the above geometric assumptions, we have the estimates
\begin{align}
&\ensuremath{\mathfrak K}_{wave} (\Gamma,T)\leq d_A(\Gamma), \quad \text{ for all } T>T_1, \label{Allibertwave}\\
&\ensuremath{\mathfrak K}_{heat}(\Gamma)\leq \alpha (T_1(\Gamma)^2+ d_A(\Gamma)^{2}) , \label{Allibertheat}
\end{align}
for some universal constant $\alpha>0$.
\end{theorem}
The first estimate~\eqref{Allibertwave} follows simply from \cite[Th\'eor\`eme~2]{Allibert:98} (see Proposition \ref{lienanalyticekwave} below), which is stated in terms of analytic spaces with respect to the rotation variable $\theta$. Then, \eqref{Allibertwave} implies \eqref{Allibertheat} thanks to Proposition \ref{propheatwave}.
Note that~\eqref{Allibertwave} also proves an analogue of Theorem~\ref{t:agmon-intro} in this geometry, so that in fact:
\bnan
\label{e:estim-=Allibert}
\ensuremath{\mathfrak K}_{eig} (\Gamma) = d_A(\Gamma), \quad \text{ and }\quad \ensuremath{\mathfrak K}_{wave} (\Gamma,T) = d_A(\Gamma), \quad \text{ for all } T>T_1 .
\enan
Allibert also proves upper and lower estimates for $T \in (2\L(\ensuremath{\mathcal M},\Gamma), T_1)$ (which do not coincide). The proof of Theorem~\ref{t:positrev} in Proposition \ref{lienanalyticekwave} yields the corresponding estimates of $\ensuremath{\mathfrak K}_{wave} (\Gamma,T)$.
\subsection{Previous results}
\label{sectpreviousres}
Except for the bounds \eqref{e:estim-=Allibert} following from Allibert's result and the computation of $\ensuremath{\mathfrak K}_\infty(\{0\})$ on $(0,L)$ in \cite{FR:71}, we are not aware of other situations in which the constants described in the previous paragraph are known exactly.
We collect in this section previous results on the constants $\ensuremath{\mathfrak K}_{heat}$ and $\ensuremath{\mathfrak K}_{wave}$, which have received a lot of attention in the past fifteen years.
\paragraph{Parabolic equations in dimension one}
The most studied case concerns the constant $\ensuremath{\mathfrak K}_{heat}$, with observation/control at the boundary in the one dimensional case.
Yet, it seems that the constant $\ensuremath{\mathfrak K}_{heat} (\{-1,1 \})$ is still unknown.
Note that the latter has a particular importance since it has applications to higher dimensions (with geometric conditions) via the transmutation method of Luc Miller~\cite{Miller:06b}.
Here, we list previous results on $(-1,1)$ with Dirichlet control on both sides of the interval. Note also that each improvement of the constant came with new techniques.
\begin{itemize}
\item $\ensuremath{\mathfrak K}_{heat} (\{-1,1 \}) \leq 2 \left(\frac{36}{37} \right)^2 $ Miller \cite{Miller:06b}, using the transmutation method;
\item $\ensuremath{\mathfrak K}_{heat} (\{-1,1 \}) \leq \frac{3}{4}$ Tenenbaum-Tucsnak \cite{TT:07}, using some results of analytic number theory;
\item $\ensuremath{\mathfrak K}_{heat} (\{-1,1 \})\geq \frac12$, Lissy \cite{Lissy:15}, using complex analysis arguments;
\item $\ensuremath{\mathfrak K}_{heat} (\{-1,1 \}) \leq 0.7$, Dard\'e-Ervedoza \cite{Darde:17}, combining some Carleman estimates and complex analysis.
\end{itemize}
Note that in this context, the analogue of Conjecture \ref{c:Miller-conj} would be $\ensuremath{\mathfrak K}_{heat} (\{-1,1 \})=\frac{1}{4}$, which \cite{Lissy:15} disproved in this context (by a factor $2$). However, this result does not prevent the existence of a universal constant $C>0$ so that $\ensuremath{\mathfrak K}_{heat} (\omega) = C \L(\ensuremath{\mathcal M},\omega)^2$.
As noticed in~\cite{EZ:11s}, the result in~\cite{FR:71} implies that on the interval $(0,L)$, we have $\ensuremath{\mathfrak K}_\infty(\{0\}) = \frac{L^2}{4}$ (and~\cite{EZ:11s} even prove~\eqref{e:co-infty} for the critical $\ensuremath{\mathfrak K}=\frac{L^2}{4}$).
\paragraph{Parabolic equations in higher dimensions}
There are many papers concerning the control of the heat equation. We give here a short presentation of those giving some estimates on the constants studied in this paper.
The first computable estimates were obtained using the transmutation method to give estimates similar to \eqref{upperboundMiller}. Several references improve the universal constant involved: \cite{Miller:04,Miller:06b}, \cite{TT:07}, \cite{Darde:17}.
In \cite{TT:07}, the authors prove $\ensuremath{\mathfrak K}_\Sigma(\omega^*) \leq 3\log (\frac{(4 \pi e)^N}{|\omega^*|})$ where $\ensuremath{\mathcal M} = (0,\pi)^N$ is a cubic domain and $|\omega^*|$ is the volume of the biggest rectangle included in $\omega$. The proof of this result uses a number theoretic argument of Tur\'an concerning families of complex exponentials $(e^{ikx})_{k \in \mathbb Z}$ (which can be interpreted as an estimate of $\ensuremath{\mathfrak K}_{\Sigma}(I)$ for $I$ a subinterval of $\ensuremath{\mathbb T}$). Remark that in this particular flat-torus geometry, we have no idea of what the right constant should be.
In \cite{BP:17}, the authors prove $\ensuremath{\mathfrak K}_\Sigma(B(0,r)) \leq \frac{C_\varepsilon}{r^\varepsilon}$ for all $\varepsilon>0$ in convex geometries. This has just been extended by Phung \cite{PhunglogCarl}. Our Theorem \ref{t:heatlog} improves this result. Note also that \cite{NTTV:18} gave results related to this in a periodic setting, tracking uniformity with respect to several parameters.
In the Euclidean space ${\mb{R}}^n$, where $\Delta$ is the usual flat Laplacian, spectral estimates such as \eqref{e:co-sum} can be interpreted as a manifestation of the uncertainty principle. Several results relying on this fact have recently been obtained. We refer for instance to the recent articles~\cite{EV:17} and \cite{WWZZ:17} and the references therein.
\paragraph{The wave equation}
Lebeau \cite{Leb:Analytic} proved in the analytic setting that $\ensuremath{\mathfrak K}_{wave}(\omega,T)$ is finite for any open set $\omega$ and in the optimal time $T>2\L(\ensuremath{\mathcal M},\omega)$ (the result is actually stated in a quite different way). It was only very recently shown to be finite by the authors \cite{LL:15} in a general $C^\infty$ context. We refer the reader to the introduction of~\cite{LL:15} for a detailed discussion of the literature on unique continuation for waves, and estimates like~\eqref{e:co-wave}-\eqref{e:co-wave-bis}.
Estimates on analytic spaces of controllable data were computed by Allibert in the above described examples. We refer to Section \ref{subsectionanalytlink} for more details about why they have implications for the constant $\ensuremath{\mathfrak K}_{wave}$ (and therefore for $\ensuremath{\mathfrak K}_{heat}$ by Proposition \ref{propheatwave}). In \cite{Allibert:98}, he studies the example of the barrel, as described in Section \ref{subsubbarrel}. In \cite{Allibert:99}, he studies the example of a cylinder $(0,\pi)\times \S^1 $. The results he obtains in that paper should imply $\ensuremath{\mathfrak K}_{wave}(\Gamma,T)\leq \frac{C_{\delta}}{T^{1-\delta}}$ where $\Gamma=\{0\}\times \S^1$ and $T> 2\pi$.
\subsection{Plan of the paper}
The paper is divided in four main parts. In Section~\ref{s:prelim}, we give the links between the different constants, proving in particular Propositions~\ref{p:link-eigenfct-heat-etc} and~\ref{propheatwave}. We also interpret the description of the reachable set as an upper bound on the constant $\ensuremath{\mathfrak K}_{wave}(\omega,T)$.
In Section~\ref{s:contruction}, we construct the various counterexamples on rotationally invariant geometries, presented in Section~\ref{s:intro-const}. This proves in particular Theorem~\ref{thm:counterexamples}.
Section~\ref{s:unif-LR-ineq} is devoted to the proof of the uniform Lebeau-Robbiano inequality on small balls, stated in Theorem~\ref{t:unif-LR-ineq}.
Finally, we prove in Section~\ref{s:positive} the observability inequality of Theorem~\ref{thmpositive} concerning positive solutions of the heat equation.
The paper ends with two appendices, in the first of which, Appendix~\ref{s:carleman}, we prove a uniform Carleman estimate for bounded families of Lipschitz metrics. Such an estimate is used as an intermediate in the proof of Theorem~\ref{t:unif-LR-ineq}. The result also yields Theorem~\ref{t:uniform-LR-ineq-metric}.
Note finally that in a companion paper \cite{LL:18vanish}, we will use similar techniques to disprove natural conjectures for the control cost of transport equations in the vanishing viscosity limit.
\bigskip
\noindent
{\em Acknowledgements.}
The first author is partially supported by the Agence Nationale de la Recherche under grant SRGI ANR-15-CE40-0018 and IPROBLEMS ANR-13-JS01-0006.
The second author is partially supported by the Agence Nationale de la Recherche under grant GERASIC ANR-13-BS01-0007-01.
Both authors are partially supported by the Agence Nationale de la Recherche under grants ISDEEC ANR-16-CE40-0013.
Part of this research was done when the second author was in CRM, CNRS UMI 3457, Universit\'e de Montr\'eal, and Universit\'e Paris Diderot, IMJ-PRG, UMR 7586.
\section{Preliminaries: links between the different constants}
\label{s:prelim}
\subsection{Different definitions of \texorpdfstring{$\ensuremath{\mathfrak K}_{wave} (\omega,T)$}{kwave(omega,T)}}
Let us start by proving equality~\eqref{e:co=co'}. This is a consequence of the following lemma.
\begin{lemma}
\label{l:lambda-mu-co-co}
Let $\mu_0\geq 0$, $\ensuremath{\mathfrak K} \geq 0$ and assume that $\Lambda > 0$ and $X\geq 0$ satisfy
\bnan
\label{e:asspt-Lambda-mu}
\frac{1}{\Lambda} \leq e^{\ensuremath{\mathfrak K} \mu} X + \frac{1}{\mu} , \quad \text{for all }\mu > \mu_0 .
\enan
Then, for all $\alpha >0$, we have
\bnan
\label{e:conclu-Lambda-mu}
1 \leq \left( \mathds{1}_{\Lambda + \alpha \leq \mu_0} \frac{\mu_0(\mu_0 - \alpha)}{\alpha} e^{ \ensuremath{\mathfrak K} \mu_0} + \mathds{1}_{\Lambda + \alpha> \mu_0}\frac{e^{\ensuremath{\mathfrak K} \alpha}}{\alpha}\Lambda(\Lambda+\alpha)e^{\ensuremath{\mathfrak K} \Lambda} \right)X .
\enan
Let $F : {\mb{R}}^+ \to {\mb{R}}^+$ be a nondecreasing function and assume that $\Lambda > 0$ and $X\geq 0$ satisfy
\bnan
\label{e:asspt-Lambda-mu-bis}
\Lambda \geq 1 \quad \text{and} \quad 1 \leq F(\Lambda) X .
\enan
Then, we have
\bnan
\label{e:conclu-Lambda-mu-bis}
\frac{1}{\Lambda} \leq F(\mu) X + \frac{1}{\mu} , \quad \text{for all }\mu > 0.
\enan
\end{lemma}
As a direct consequence of this lemma, we obtain the following corollary, clarifying the definition of $\ensuremath{\mathfrak K}_{wave} (\omega,T)$.
\begin{corollary}
\label{c:lambda=mu}
Assume~\eqref{e:co-wave} with constants $\ensuremath{\mathfrak K}, C, \mu_0>0$. Then, there is $C''>0$ such that
\bna
\|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}\leq C''\Lambda^2 e^{\ensuremath{\mathfrak K}\Lambda}\|u\|_{L^2((0,T)\times\omega)},\quad\Lambda =\frac{\|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}}{\|(u_0,u_1)\|_{L^2(\ensuremath{\mathcal M}) \times H^{-1}(\ensuremath{\mathcal M})}},\nonumber \\ \text{ for all } (u_0, u_1) \in H^1_0(\ensuremath{\mathcal M}) \times L^2(\ensuremath{\mathcal M}), \text{ and }u \text{ solution to~\eqref{e:wave},}
\ena
\bna
\|(u_0,u_1)\|_{L^2(\ensuremath{\mathcal M}) \times H^{-1}(\ensuremath{\mathcal M})} \leq C'' \mu^2 e^{\ensuremath{\mathfrak K} \mu} \|u\|_{L^2((0,T)\times\omega)} + \frac{1}{\mu} \|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}, \nonumber \\ \text{ for all $\mu > 0$ and all $(u_0, u_1) \in H^1_0(\ensuremath{\mathcal M}) \times L^2(\ensuremath{\mathcal M})$, and $u$ solution to~\eqref{e:wave}.}
\ena
Conversely, if~\eqref{e:co-wave-bis} holds with constants $\ensuremath{\mathfrak K}', C'>0$, then~\eqref{e:co-wave} holds with $\ensuremath{\mathfrak K}=\ensuremath{\mathfrak K}'$, $C=C'$, and $\mu_0=0$ (and for all $\mu>0$).
In particular, we have
\begin{align*}
\ensuremath{\mathfrak K}_{wave} (\omega,T) &= \inf \left\{ \ensuremath{\mathfrak K} >0 , \exists C>0, \mu_0>0 \text{ s.t. \eqref{e:co-wave} holds} \right\} \\
&= \inf \left\{ \ensuremath{\mathfrak K}' >0 , \exists C'>0, \text{ s.t. \eqref{e:co-wave-bis} holds} \right\} \\
& = \inf \left\{ \ensuremath{\mathfrak K} >0 , \exists C>0, \text{ s.t. \eqref{e:co-wave} holds with } \mu_0 = 0 \text{ (and all $\mu>0$)}\right\} .
\end{align*}
\end{corollary}
\bnp[Proof of Lemma~\ref{l:lambda-mu-co-co}]
Let $\alpha>0$.
In case $\Lambda+\alpha > \mu_0$, the assumption~\eqref{e:asspt-Lambda-mu} with $\mu = \Lambda+\alpha>\mu_0$ yields
$$
\frac{1}{\Lambda}\left(1 - \frac{\Lambda}{\Lambda+\alpha} \right) \leq e^{\ensuremath{\mathfrak K} (\Lambda + \alpha)} X ,
$$
and hence
\bnan
\label{e:Lambda-part1}
1 \leq \frac{1}{\alpha}e^{\ensuremath{\mathfrak K} \alpha}\Lambda(\Lambda+\alpha)e^{\ensuremath{\mathfrak K} \Lambda} X .
\enan
If now $\Lambda+\alpha \leq \mu_0$ (and, in particular, $\alpha < \mu_0$), that is $\frac{1}{\Lambda} \geq \frac{1}{\mu_0-\alpha}>0$, the assumption~\eqref{e:asspt-Lambda-mu} implies
$$
\frac{1}{\mu_0-\alpha} \leq \frac{1}{\Lambda} \leq e^{\ensuremath{\mathfrak K} \mu} X + \frac{1}{\mu}, \quad \text{for all }\mu \geq \mu_0 .
$$
This yields in particular
$$
X \geq \left( \frac{1}{\mu_0-\alpha}- \frac{1}{\mu} \right) e^{- \ensuremath{\mathfrak K} \mu} , \quad \text{for all }\mu \geq \mu_0 ,
$$
and hence, taking $\mu = \mu_0$, $X \geq \left( \frac{1}{\mu_0-\alpha}- \frac{1}{\mu_0} \right) e^{- \ensuremath{\mathfrak K} \mu_0} = \frac{\alpha}{\mu_0(\mu_0 - \alpha)} e^{- \ensuremath{\mathfrak K} \mu_0} >0$. With~\eqref{e:Lambda-part1}, this proves~\eqref{e:conclu-Lambda-mu}.
Let us now prove~\eqref{e:conclu-Lambda-mu-bis}. If $\Lambda \geq \mu$, then $\frac{1}{\Lambda} \leq \frac{1}{\mu}$ and~\eqref{e:conclu-Lambda-mu-bis} holds. If $\Lambda \leq \mu$, then~\eqref{e:asspt-Lambda-mu-bis} gives $\frac{1}{\Lambda} \leq 1 \leq F(\Lambda)X \leq F(\mu)X$ and \eqref{e:conclu-Lambda-mu-bis} also holds in this case, concluding the proof.
\enp
\subsection{{The constant \texorpdfstring{$\ensuremath{\mathfrak K}_{eig}(\omega)$}{Keig(omega,T)} as a lower bound for \texorpdfstring{$\ensuremath{\mathfrak K}_{heat}(\omega),\ensuremath{\mathfrak K}_{\infty}(\omega),\ensuremath{\mathfrak K}_{wave}(\omega,T)$}{Kheat(omega),Kinfty(omega),Kwave(omega,T)}: Proof of Proposition~\ref{p:link-eigenfct-heat-etc}}}
We prove a slightly more precise version of Proposition~\ref{p:link-eigenfct-heat-etc}.
\begin{lemma}
Assume that~\eqref{e:co-heat} holds with constants $\ensuremath{\mathfrak K}, C >0$. Then, we have
\bnan
\label{e:comp-co-co-heat}
\nor{\psi}{L^2(\ensuremath{\mathcal M})} \leq \sqrt{\frac{C}{2\lambda}} e^{2\sqrt{\ensuremath{\mathfrak K}\lambda}}\nor{\psi}{L^2(\omega)} , \quad \text{for all $\lambda \in \Sp(-\Delta_g) \setminus\{0\}$ and $\psi \in E_\lambda$}.
\enan
In particular,
\bnan
\label{e:comp-co-co-heat-infimum}
\frac{\ensuremath{\mathfrak K}_{eig}(\omega)^2}{4} \leq \ensuremath{\mathfrak K}_{heat}(\omega) .
\enan
Assume that~\eqref{e:co-infty} holds with constants $\ensuremath{\mathfrak K}, C >0$. Then, there exists $C''>0$ such that
\bnan
\label{e:comp-co-co-infty}
\nor{\psi}{L^2(\ensuremath{\mathcal M})} \leq \frac{C''}{\lambda^{1/8}} e^{2\sqrt{\ensuremath{\mathfrak K}\lambda}}\nor{\psi}{L^2(\omega)} , \quad \text{for all $\lambda \in \Sp(-\Delta_g) \setminus\{0\}$ and $\psi \in E_\lambda$}.
\enan
In particular
\bnan
\label{e:comp-co-co-infty-infimum}
\frac{\ensuremath{\mathfrak K}_{eig}(\omega)^2}{4} \leq \ensuremath{\mathfrak K}_{\infty}(\omega) .
\enan
Assume that \eqref{e:co-wave-bis} holds in time $T$ with constants $C',\ensuremath{\mathfrak K}'$. Then, we have
\bnan
\label{e:wave-eig-idem}
\|\psi\|_{L^2(\ensuremath{\mathcal M})} \leq \sqrt{\frac{T}{\lambda}} C' e^{\ensuremath{\mathfrak K}' \sqrt{\lambda}}\|\psi\|_{L^2(\omega)}, \quad \text{for all $\lambda \in \Sp(-\Delta_g) \setminus\{0\}$ and $\psi \in E_\lambda$}.
\enan
In particular, for all $T>0$, we have $\ensuremath{\mathfrak K}_{eig}(\omega)\leq \ensuremath{\mathfrak K}_{wave}(\omega,T)$.
\end{lemma}
\bnp[Proof of Proposition~\ref{p:link-eigenfct-heat-etc}]
From~\eqref{e:co-heat}, applied to $u(t,x) = e^{-t\lambda}\psi$ with $\lambda \in \Sp(-\Delta_g) \setminus\{0\}$ and $\psi \in E_{\lambda}$, we have
$$
e^{-2T\lambda}\nor{\psi}{L^2(\ensuremath{\mathcal M})}^2 \leq Ce^{\frac{2\ensuremath{\mathfrak K}}{T}} \intop_0^T e^{-2t\lambda}\nor{\psi}{L^2(\omega)}^2 dt = Ce^{\frac{2\ensuremath{\mathfrak K}}{T}} \frac{1-e^{-2T\lambda}}{2\lambda} \nor{\psi}{L^2(\omega)}^2 , \quad \text{ for all } T>0 .
$$
Taking $T = \frac{D}{\sqrt{\lambda}}$, with $D>0$ to be chosen, this implies
$$
\nor{\psi}{L^2(\ensuremath{\mathcal M})}^2 \leq Ce^{2T\lambda}e^{\frac{2\ensuremath{\mathfrak K}}{T}} \frac{1}{2\lambda} \nor{\psi}{L^2(\omega)}^2 = \frac{C}{2\lambda} e^{2\sqrt{\lambda}(D +\frac{\ensuremath{\mathfrak K}}{D})} \nor{\psi}{L^2(\omega)}^2 .
$$
Minimizing the exponent with respect to $D$ leads to choosing $D = \sqrt{\ensuremath{\mathfrak K}}$, which implies~\eqref{e:comp-co-co-heat} when taking the square root. From~\eqref{e:comp-co-co-heat}, \eqref{e:comp-co-co-heat-infimum} follows directly when taking the infimum over all $\ensuremath{\mathfrak K}$.
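Let us make the minimization explicit (an elementary computation): for $D>0$,
\bna
D + \frac{\ensuremath{\mathfrak K}}{D} - 2\sqrt{\ensuremath{\mathfrak K}} = \frac{\left(D-\sqrt{\ensuremath{\mathfrak K}}\right)^2}{D} \geq 0 ,
\ena
with equality if and only if $D = \sqrt{\ensuremath{\mathfrak K}}$, so that the minimal exponent is $2\sqrt{\lambda}\left(D+\frac{\ensuremath{\mathfrak K}}{D}\right)\big|_{D=\sqrt{\ensuremath{\mathfrak K}}} = 4\sqrt{\ensuremath{\mathfrak K}\lambda}$, consistently with the factor $e^{2\sqrt{\ensuremath{\mathfrak K}\lambda}}$ in~\eqref{e:comp-co-co-heat} after taking square roots.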
Let us now prove the second statement of the proposition. From~\eqref{e:co-infty}, again applied to $u(t,x) = e^{-t\lambda}\psi$ with $\lambda \in \Sp(-\Delta_g) \setminus\{0\}$ and $\psi \in E_{\lambda}$, we have
\bnan
\label{E-Z-bis}
\intop_{{\mb{R}}^+} e^{-\frac{2\ensuremath{\mathfrak K}}{t}} e^{-2t\lambda}\nor{\psi}{L^2(\ensuremath{\mathcal M})}^2 dt \leq C\intop_{{\mb{R}}^+}e^{-2t\lambda}\nor{\psi}{L^2(\omega)}^2 dt = \frac{C}{2\lambda} \nor{\psi}{L^2(\omega)}^2 .
\enan
The left hand-side may also be computed asymptotically for $\lambda \to + \infty$ using the Laplace method, setting $\mu = \sqrt{\lambda}$, as
\bna
\intop_{{\mb{R}}^+} e^{-\frac{2\ensuremath{\mathfrak K}}{t}}e^{-2\mu^2 t} dt
& = & \intop_{{\mb{R}}^+} e^{-2\sqrt{\ensuremath{\mathfrak K}}\mu( \frac{1}{s} + s)} \frac{\sqrt{\ensuremath{\mathfrak K}}}{\mu} ds \\
& = & (1+o(1)) \frac{\sqrt{\ensuremath{\mathfrak K}}}{\mu} \intop_{\mb{R}} e^{-2\sqrt{\ensuremath{\mathfrak K}}\mu(2 + (s-1)^2 )} ds \\
& = & (1+o(1)) \frac{\sqrt{\ensuremath{\mathfrak K}}}{\mu} e^{-4\sqrt{\ensuremath{\mathfrak K}} \mu} \sqrt{\frac{\pi}{2\sqrt{\ensuremath{\mathfrak K}}\mu}}
= (1+o(1)) \left( \frac{\pi \sqrt{\ensuremath{\mathfrak K}}}{2 \mu^3} \right)^{\frac12}e^{-4\sqrt{\ensuremath{\mathfrak K}} \mu} .
\ena
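In the first equality above, we used (as a mere change of variables) the substitution $t = \frac{\sqrt{\ensuremath{\mathfrak K}}}{\mu} s$, which turns $\frac{2\ensuremath{\mathfrak K}}{t} + 2\mu^2 t$ into $2\sqrt{\ensuremath{\mathfrak K}}\mu\left(\frac{1}{s}+s\right)$; the second step is the Laplace approximation $\frac{1}{s}+s = 2+(s-1)^2+O\left((s-1)^3\right)$ near the non degenerate minimum at $s=1$.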
From~\eqref{E-Z-bis}, we then obtain that, for every eigenfunction $\psi$ associated with the eigenvalue $\mu^2$, as $\mu \to \infty$, we have
\bna
(1+o(1)) \left( \frac{\pi \sqrt{\ensuremath{\mathfrak K}}}{2 \mu^3} \right)^{\frac12}e^{-4\sqrt{\ensuremath{\mathfrak K}} \mu} \nor{\psi}{L^2(\ensuremath{\mathcal M})}^2 \leq \frac{C}{2\mu^2}\|\psi \|_{L^2(\omega)}^2 .
\ena
Coming back to $\lambda=\mu^2$, this implies the existence of $\tilde{C}, \lambda_0>0$ such that for all $\lambda \geq \lambda_0$
\bna
\nor{\psi}{L^2(\ensuremath{\mathcal M})}^2 \leq \frac{\tilde{C}}{\lambda^{1/4}} e^{4\sqrt{\ensuremath{\mathfrak K} \lambda}} \|\psi \|_{L^2(\omega)}^2 ,
\ena
and hence the sought estimate~\eqref{e:comp-co-co-infty}. The bound~\eqref{e:comp-co-co-infty-infimum} then follows as above.
Let us now prove the last statement of the proposition. We want to apply~\eqref{e:co-wave-bis} to the function $u(t,x) = \cos(t \sqrt{\lambda}) \psi$ with $\lambda \in \Sp(-\Delta_g) \setminus\{0\}$ and $\psi \in E_{\lambda}$, which is a particular solution to~\eqref{e:wave}. We have
$\Lambda = \frac{\|(u|_{t=0},\d_t u|_{t=0})\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}}{\|(u|_{t=0},\d_t u|_{t=0})\|_{L^2(\ensuremath{\mathcal M}) \times H^{-1}(\ensuremath{\mathcal M})}} = \frac{\|\psi\|_{H^1_0(\ensuremath{\mathcal M})}}{\|\psi\|_{L^2(\ensuremath{\mathcal M})}}= \sqrt{\lambda}$ together with
$$
\sqrt{\lambda} \|\psi\|_{L^2(\ensuremath{\mathcal M})} = \|\psi\|_{H^1_0(\ensuremath{\mathcal M})} =\|(u|_{t=0},\d_t u|_{t=0})\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}\leq C' e^{\ensuremath{\mathfrak K}' \Lambda} \|u\|_{L^2((0,T)\times\omega)},
$$
where
$$
\|u\|_{L^2((0,T)\times\omega)}^2 = \intop_0^T \cos^2(t\sqrt\lambda) \|\psi\|_{L^2(\omega)}^2 dt \leq T\|\psi\|_{L^2(\omega)}^2 .
$$
This finally implies~\eqref{e:wave-eig-idem}. The last result follows from Corollary~\ref{c:lambda=mu}. This concludes the proof of the proposition.
\enp
\subsection{Link between \texorpdfstring{$\ensuremath{\mathfrak K}_{heat} (\omega)$}{Kheat(omega)} and \texorpdfstring{$\ensuremath{\mathfrak K}_{wave}(\omega,T)$}{Kwave(omega,T)}: Proof of Proposition \ref{propheatwave}}
\label{s:proof-wave-heat}
The proof follows very closely the method of Ervedoza-Zuazua \cite{EZ:11}, but with a different assumption. For readability, we summarize in the next proposition the results of~\cite{EZ:11,EZ:11s} that we need.
\begin{proposition}[\cite{EZ:11,EZ:11s}]
\label{propoEZ}
Let $T,S>0$ and $\alpha > 2 S^2$. Let $\L$ be a negative self-adjoint operator. Then, there exists a kernel function $k_T(t,s)$ such that
\begin{itemize}
\item
if $\ensuremath{y}$ is a solution of the heat equation~$\partial_t\ensuremath{y} -\L \ensuremath{y}=0$, then $\ensuremath{w}(s)=\intop_0^T k_T(t,s)\ensuremath{y}(t)dt$ is a solution of
\bneqn
\label{wave-w}
\partial_s^2\ensuremath{w} -\L \ensuremath{w}&=&0, \quad \text{ for } s \in ]-S,S[ , \\
(\ensuremath{w},\partial_s\ensuremath{w})|_{s=0}&=&\left(0,\intop_0^T \d_s k_T (t,0) \ensuremath{y}(t)dt\right) = \left(0,\intop_0^T e^{-\alpha \left(\frac{1}{t}+\frac{1}{T-t}\right)}\ensuremath{y}(t)dt\right) ;
\eneqn
\item for all $\delta \in ]0,1[$ and all $(t, s) \in ]0,T[ \times ]-S,S[$, we have
\bnan
\label{e:estim-kT}
|k_T(t,s)|\leq |s|\exp\left( \frac{1}{\min \left\{ t,T-t\right\}}\left(\frac{s^2}{\delta}-\frac{\alpha}{(1+\delta)}\right)\right) .
\enan
\end{itemize}
\end{proposition}
Note that this last estimate is most useful for $\delta$ sufficiently close to one, so that $\alpha \geq S^2(1+\frac{1}{\delta})$.
We first prove the spectral observability property.
\bnp[Proof of Proposition \ref{propheatwave}]
To simplify notation, we prove the existence of universal constants so that $\ensuremath{\mathfrak K}_{heat} (\omega)\leq \alpha_{3}S^2 + \alpha_{4}\ensuremath{\mathfrak K}_{wave}(\omega,2S)^{2}$ for all $S>0$.
Let $C_0> \ensuremath{\mathfrak K}_{wave}(\omega,2S)$ be such that there exists $C>0$ for which we have the estimate (see Corollary \ref{c:lambda=mu} for the equivalence)
\bnan
\label{e:co-wave-bisspectral}
\|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}\leq C e^{C_0 \Lambda} \|u\|_{L^2((-S,S)\times\omega)}, \quad \Lambda = \frac{\|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}}{\|(u_0,u_1)\|_{L^2(\ensuremath{\mathcal M}) \times H^{-1}(\ensuremath{\mathcal M})} }, \nonumber \\ \text{ for all $(u_0, u_1) \in H^1_0(\ensuremath{\mathcal M}) \times L^2(\ensuremath{\mathcal M})$, and $u$ solution to~\eqref{e:wave}.}
\enan
Note that when compared to \eqref{e:co-wave}, we have changed the interval $(0,2S)$ to $(-S,S)$ which gives the same result by conservation of energy.
The proof is a direct consequence of Lemma \ref{lmwavedonnespectral} and Lemma \ref{lmspectraldonneheat} below that we state separately since they have their own interest.
\enp
\begin{lemma}
\label{lmwavedonnespectral}
Assume \eqref{e:co-wave-bisspectral}, then, we have
\bna
\nor{e^{T\Delta_g}\ensuremath{y}_0}{L^2(\ensuremath{\mathcal M})}^2 \leq \frac{C (1+\lambda)S^{2}e^{2C_0(1+\lambda)^{\frac{1}{2}}}}{T}e^{\frac{18S^2}{T}}\intop_0^T \nor{e^{t\Delta_g}\ensuremath{y}_0}{L^2(\omega)}^2 dt , \quad \text{ for all $0<T\leq \alpha$ and all $\ensuremath{y}_0 \in E_{\leq \lambda}$.}
\ena
\end{lemma}
\bnp
We pick $\alpha > 2S^2$ and use the kernel $k_{T}$ described in Proposition \ref{propoEZ}.
Assume now that $\ensuremath{w}(s)$ is associated to $\ensuremath{y}$ as $\ensuremath{w}(s)=\intop_0^T k_T(t,s)\ensuremath{y}(t)dt$, where $\ensuremath{y}=e^{t\Delta_g}\ensuremath{y}_0$ with $\ensuremath{y}_0 \in E_{\leq \lambda}$. Then, in~\eqref{wave-w}, the initial datum $\ensuremath{\mathcal W}_0 = (\ensuremath{w},\partial_s\ensuremath{w})|_{s=0}$ is of the particular form $\ensuremath{\mathcal W}_0 = \left(0,\intop_0^T e^{-\alpha \left(\frac{1}{t}+\frac{1}{T-t}\right)}\ensuremath{y} (t)dt\right)$, so that a calculation (see~\cite[Equation~(3.3)]{EZ:11}) yields
\bnan
\label{e:brutale-est}
\nor{\ensuremath{\mathcal W}_0}{L^2\times \H^{-1}_\L}^2 & \geq & (1+ \lambda)^{-1}\nor{\ensuremath{\mathcal W}_0}{\H^1_\L \times L^2}^2
= (1+ \lambda)^{-1} \nor{\intop_0^T e^{-\alpha \left(\frac{1}{t}+\frac{1}{T-t}\right)}\ensuremath{y} (t)dt}{L^2}^2 \\
&\geq& (1+ \lambda)^{-1} \sum_i|y_i|^2e^{-2\lambda_i T}\left|\intop_0^T e^{-\alpha \left(\frac{1}{t}+\frac{1}{T-t}\right)}dt\right|^2 . \nonumber
\enan
The integral can be estimated from below by the Laplace method:
\bna
\intop_0^T e^{-\alpha \left(\frac{1}{t}+\frac{1}{T-t}\right)}dt=T\intop_0^1 e^{-\frac{\alpha}{T}\left(\frac{1}{s}+\frac{1}{1-s}\right)}ds\geq CT\left(\frac{T}{\alpha}\right)^{1/2}e^{-4\frac{\alpha}{T}}, \textnormal{ for } \frac{\alpha}{T}\geq 1,
\ena
since the non-degenerate minimum of $\frac{1}{s}+\frac{1}{1-s}$ on $(0,1)$ is $4$, reached at $s=1/2$, and the function is positive.
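This lower bound can be checked numerically. The sketch below (not part of the argument; the quadrature scheme and the sample values of $\lambda=\alpha/T$ are our own choices) compares $e^{4\lambda}\intop_0^1 e^{-\lambda(\frac{1}{s}+\frac{1}{1-s})}ds$ with the Laplace-method prediction $\sqrt{\pi/(16\lambda)}$ coming from $f''(1/2)=32$:

```python
import math

def scaled_integral(lam, n=20000):
    """Midpoint rule for e^{4*lam} * integral_0^1 exp(-lam*(1/s + 1/(1-s))) ds.

    Since 1/s + 1/(1-s) - 4 >= 0 on (0,1), the rescaled integrand is <= 1.
    """
    total = 0.0
    for i in range(n):
        s = (i + 0.5) / n
        total += math.exp(-lam * (1.0 / s + 1.0 / (1.0 - s) - 4.0))
    return total / n

# Laplace method: f(s) = 1/s + 1/(1-s) has minimum 4 at s = 1/2 with f''(1/2) = 32,
# so e^{4*lam} * I(lam) ~ sqrt(2*pi/(32*lam)) = sqrt(pi/(16*lam)) as lam -> infinity.
ratios = [scaled_integral(lam) / math.sqrt(math.pi / (16.0 * lam))
          for lam in (1.0, 4.0, 16.0)]
```

The ratios stay of order one and increase towards $1$, consistent with a bound of the form $CT(T/\alpha)^{1/2}e^{-4\alpha/T}$ for $\alpha/T\geq 1$.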
We deduce
\bnan
\nor{\ensuremath{\mathcal W}_0}{L^2\times \H^{-1}_\L}^2& \geq & C (1+ \lambda)^{-1}T^3\alpha^{-1}e^{-\frac{8\alpha}{T}}\nor{\ensuremath{y}(T)}{L^2}^2 .
\enan
Moreover, we have $\ensuremath{\mathcal W}_0 \in E_{\leq \lambda}\times E_{\leq \lambda}$ so that
\bna
\frac{\nor{\ensuremath{\mathcal W}_0}{H^{1}_0 \times L^2}}{\nor{\ensuremath{\mathcal W}_0}{L^2\times H^{-1}}} \leq (1+\lambda)^{\frac{1}{2}}.
\ena
As a consequence,~\eqref{e:co-wave-bisspectral} implies
\bnan
\label{observfrequence}
\nor{\ensuremath{\mathcal W}_0}{L^2\times \H^{-1}_\L} \leq C e^{C_0(1+\lambda)^{\frac{1}{2}}}\nor{ \ensuremath{w}}{L^2(]-S,S[\times \omega)}.
\enan
Using Cauchy-Schwarz inequality, we have
\begin{align}
\label{estimobserv}
\nor{\ensuremath{w}}{L^2(]-S,S[\times \omega)}^2 &\leq \left( \intop_{]0,T[\times ]-S,S[} k_T(t,s)^2 dt~ds \right) \intop_{0}^T \intop_{\omega}\left|\ensuremath{y} (t,x)\right|^2 dx~dt .
\end{align}
Now, we use~\eqref{e:estim-kT} with $\delta\in (0,1)$ fixed sufficiently close to one so that $\alpha \geq S^2\frac{1+\delta}{\delta}$ (which is possible since we have assumed $\alpha> 2S^2$), and obtain
\bnan
\label{estimkT}
\intop_{]0,T[\times ]-S,S[} k_T(t,s)^2 dt~ds \leq CS^{2}
\intop_{]0,T[\times ]-S,S[} \exp\left( \frac{1}{\min \left\{ t,T-t\right\}}\left(\frac{S^2}{\delta}-\frac{\alpha}{(1+\delta)}\right)\right) dt~ds\leq CS^{3}T .
\enan
Combining \eqref{e:brutale-est}, \eqref{observfrequence}, \eqref{estimobserv} and \eqref{estimkT} gives the result since the estimate is true for any $\alpha > 2S^2$.
\enp
\begin{lemma}[Miller \cite{Miller:10}]
\label{lmspectraldonneheat}
Assume
\bna
\nor{e^{T\Delta_g}\ensuremath{y}_0}{L^2(\ensuremath{\mathcal M})}^2 \leq Ce^{2a\lambda^{\frac{1}{2}}+\frac{2b}{T}}\intop_0^T \nor{e^{t\Delta_g}\ensuremath{y}_0}{L^2(\omega)}^2 dt , \quad \text{ for all $0<T<T_{0}$ and all $\ensuremath{y}_0 \in E_{\leq \lambda}$.}
\ena
Then, we have
\bna
\nor{e^{T\Delta_g}\ensuremath{y}_0}{L^2(\ensuremath{\mathcal M})}^2 \leq C'e^{2\frac{c_*}{T}}\intop_0^T \nor{e^{t\Delta_g}\ensuremath{y}_0}{L^2(\omega)}^2 dt , \quad \text{ for all $0<T<T_{0}$ and all $\ensuremath{y}_0 \in L^2(\ensuremath{\mathcal M})$,}
\ena
with $c_* =\left(a+\sqrt{b}+\sqrt{a^2+2a\sqrt{b}}\right)^2$ and $C'$ a constant depending on $C$, $T_{0}$, $a$ and $b$.
\end{lemma}
\bnp
The result is not stated exactly in this form in~\cite{Miller:10}, but the author proves it as an intermediate step in the proof of \cite[Theorem~2.2]{Miller:10}. More precisely, the assumption of our lemma is exactly estimate (10) in \cite{Miller:10}, with $\alpha=1/2$ and $\beta=1$.
It gives the result with $c_* = 4b^2 \left( \sqrt{a+2\sqrt{b}}-\sqrt{a}\right)^{-4} = \frac14 \left( \sqrt{a+2\sqrt{b}}+\sqrt{a}\right)^{4} = \left(a+\sqrt{b}+\sqrt{a^2+2a\sqrt{b}}\right)^2$.
\enp
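The chain of algebraic identities for $c_*$ used in this proof can be sanity-checked numerically; the sample values of $(a,b)$ below are arbitrary:

```python
import math

def c_star_forms(a, b):
    """The three expressions for c_* appearing in the proof; they should coincide."""
    sb = math.sqrt(b)
    e1 = 4 * b**2 * (math.sqrt(a + 2 * sb) - math.sqrt(a))**(-4)
    e2 = 0.25 * (math.sqrt(a + 2 * sb) + math.sqrt(a))**4
    e3 = (a + sb + math.sqrt(a**2 + 2 * a * sb))**2
    return e1, e2, e3

checks = [c_star_forms(a, b) for a, b in ((1.0, 1.0), (0.3, 2.0), (5.0, 0.7))]
```

The identity rests on $\big(\sqrt{a+2\sqrt b}-\sqrt a\big)\big(\sqrt{a+2\sqrt b}+\sqrt a\big)=2\sqrt b$.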
\subsection{Link between \texorpdfstring{$\ensuremath{\mathfrak K}_{wave}(\omega,T)$}{Kwave(omega,T)} and some space of analytic functions}
\label{subsectionanalytlink}
As already mentioned, Theorem \ref{t:positrev} is a corollary of observability estimates in analytic spaces (characterizing the attainable set for the control problem) obtained by Allibert~\cite{Allibert:98}. The following proposition explains the link between such estimates and~\eqref{e:co-wave}-\eqref{e:co-wave-bis} (see also~\cite{Leb:Analytic}).
\begin{proposition}
\label{lienanalyticekwave}
Assume there is $C_0, C>0$ such that for all $(u_0,u_1)\in H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})$ and associated $u$ solution of~\eqref{e:wave}, we have
\bnan
\label{e:obs-analytic-space}
\nor{e^{-C_0\sqrt{-\Delta_g}}(u_0,u_1)}{L^2(\ensuremath{\mathcal M})\times H^{-1}(\ensuremath{\mathcal M})}\leq C \|u\|_{L^2((0,T)\times\omega)} \quad \text{(resp. }\leq C \|\partial_{\nu}u\|_{L^2((0,T)\times\Gamma)} \text{)}.
\enan
Then~\eqref{e:co-wave} is satisfied with constant $\ensuremath{\mathfrak K} = C_0$ and all $\mu>0$. In particular, we have
\bna
\ensuremath{\mathfrak K}_{wave}(\omega,T)\leq C_0 , \quad \text{(resp. }\ensuremath{\mathfrak K}_{wave}(\Gamma,T)\leq C_0 \text{)}.
\ena
\end{proposition}
Again, in this statement, $\Delta_g$ denotes the Laplace operator with Dirichlet boundary conditions.
\bnp
Given $\mu>0$, we decompose the data $(u_0,u_1)$ as $u_0=\mathds{1}_{\sqrt{-\Delta_g}\leq \mu}u_0+\mathds{1}_{\sqrt{-\Delta_g}> \mu}u_0$ (and similarly for $u_1$). Here $\mathds{1}_{\sqrt{-\Delta_g}\leq \mu}$ denotes the orthogonal projector onto the spectral subspace of $-\Delta_g$ associated to eigenvalues $\lambda_j$ with $\sqrt{\lambda_j}\leq \mu$.
Remarking that
\begin{align*}
\|\mathds{1}_{\sqrt{-\Delta_g}>\mu}(u_0,u_1)\|_{L^2(\ensuremath{\mathcal M}) \times H^{-1}(\ensuremath{\mathcal M})} & \leq \frac{1}{\mu}\|\mathds{1}_{\sqrt{-\Delta_g}> \mu} (u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}\\
& \leq \frac{1}{\mu}\|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})},
\end{align*}
we obtain
\bna
\|(u_0,u_1)\|_{L^2(\ensuremath{\mathcal M}) \times H^{-1}(\ensuremath{\mathcal M})}&\leq& \|\mathds{1}_{\sqrt{-\Delta_g}\leq \mu}(u_0,u_1)\|_{L^2(\ensuremath{\mathcal M}) \times H^{-1}(\ensuremath{\mathcal M})}+\frac{1}{\mu}\|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}\\
&\leq& e^{C_0\mu}\|e^{-C_0\sqrt{-\Delta_g}}(u_0,u_1)\|_{L^2(\ensuremath{\mathcal M}) \times H^{-1}(\ensuremath{\mathcal M})}+\frac{1}{\mu}\|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})}\\
&\leq& Ce^{C_0\mu} \|u\|_{L^2((0,T)\times\omega)} +\frac{1}{\mu}\|(u_0,u_1)\|_{H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})},
\ena
where we used the assumption~\eqref{e:obs-analytic-space} in the last inequality. This concludes the proof of~\eqref{e:co-wave}, and that of the proposition.
\enp
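The frequency-splitting step in this proof rests only on the elementary bound $\|\mathds{1}_{\sqrt{-\Delta_g}>\mu}v\|_{L^2}\leq \mu^{-1}\|v\|_{H^1_0}$. A toy illustration (our own choice of a hypothetical one-dimensional spectrum $\lambda_j=j^2$, as on the circle, and of the coefficients):

```python
import math

# toy spectrum lambda_j = j^2 and a square-summable datum given by its coefficients
lam = [j * j for j in range(1, 2001)]
c = [1.0 / j**1.5 for j in range(1, 2001)]

# discrete analogue of the H^1_0 norm: sqrt(sum lambda_j |c_j|^2)
h1_norm = math.sqrt(sum(l * cj * cj for l, cj in zip(lam, c)))

def high_freq_l2(mu):
    """L^2 norm of the spectral projection onto sqrt(lambda_j) > mu."""
    return math.sqrt(sum(cj * cj for l, cj in zip(lam, c) if math.sqrt(l) > mu))
```

For every cutoff $\mu$, the high-frequency tail in $L^2$ is dominated by $\mu^{-1}$ times the $H^1_0$ norm, which is the inequality used above.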
We now extract an estimate like~\eqref{e:obs-analytic-space} on some surfaces of revolution from~\cite{Allibert:98}.
Indeed, a combination of several estimates in~\cite{Allibert:98} gives the following result.
\begin{theorem}[Allibert \cite{Allibert:98}]
\label{thmAllibert}
For any $T>T_1$ and $C_0>d_A(\Gamma)$, there exists $C>0$ so that
\bnan
\label{observanalyticAllibert}
\nor{e^{-C_0\sqrt{-\Delta_g}}(u_0,u_1)}{H^1_0\times L^2}\leq C\|\partial_{\nu}u\|_{L^2((0,T)\times\Gamma)}
\enan
for any $(u_0,u_1)\in H^1_0(\ensuremath{\mathcal M})\times L^2(\ensuremath{\mathcal M})$ and associated solution $u$ of~\eqref{e:wave}.
\end{theorem}
The result is not stated exactly this way in~\cite{Allibert:98}. It is in fact more precise, since it involves analytic spaces only in the $\theta$ variable. More precisely, denoting by $E_0^k$ the space of functions in $H^1_0\times L^2$ of the form $f(s)e^{i k\theta}$, the following estimate is proved in~\cite[Th\'eor\`eme~2, D\'efinition~3 and Proposition~1]{Allibert:98}:
\bna
\nor{(u_0,u_1)}{H^1_0\times L^2}\leq C(k)\|\partial_{\nu}u\|_{L^2((0,T)\times\Gamma)}
\ena
for any $(u_0,u_1)\in E_0^k$, where $C(k)$ satisfies
\bna
\limsup_{k\to +\infty}\frac{\ln C(k)}{k}=d_A(\Gamma).
\ena
This gives \eqref{observanalyticAllibert} for any $C_0>d_A(\Gamma)$, taking into account the orthogonality of the subspaces $E_0^k$ both for the $H^1_0\times L^2$ norm and for the norm of the observation.
\bigskip
With Theorem~\ref{thmAllibert} in hand, Theorem~\ref{t:positrev} is now a straightforward consequence of Propositions~\ref{lienanalyticekwave} and~\ref{propheatwave}.
\subsection{Reformulation of the definition of the constants in terms of localization functions}
This section is aimed at giving an alternative definition for the geometric constants $ \ensuremath{\mathfrak K}_{eig}(\omega)$, $\ensuremath{\mathfrak K}_{Sigma}(\omega)$, $\ensuremath{\mathfrak K}_{heat}(\omega)$ in terms of localization functions.
\begin{definition}
Let $\omega\subset \ensuremath{\mathcal M}$ be an open set. We set:
\bna
\Loc_{eig}(\omega,\lambda)=\inf \left\{\frac{\nor{\psi}{L^2(\omega)}}{\nor{\psi}{L^2(\ensuremath{\mathcal M})}}, \psi \in E_{\lambda} \setminus \{0\} \right\} \in [0,1], \quad \lambda \in \Sp(-\Delta_g) ,
\ena
\bna
\Loc_{\Sigma}(\omega,\lambda)=\inf \left\{\frac{\nor{u}{L^2(\omega)}}{\nor{u}{L^2(\ensuremath{\mathcal M})}}, u \in E_{\leq \lambda} \setminus \{0\} \right\} \in [0,1],
\ena
\bna
\Loc_{heat}(\omega,T)=\inf \left\{\frac{\nor{e^{t\Delta}u_0}{L^2((0,T) \times \omega)}}{\nor{e^{T\Delta}u_0}{L^2(\ensuremath{\mathcal M})}}, u_0 \in L^2(\ensuremath{\mathcal M}) \setminus \{0\} \right\} .
\ena
\end{definition}
Note that if the Schr\"odinger equation is observable from $\omega$ in finite time (in particular if $\omega$ satisfies the geometric control condition, see~\cite{BLR:92,Leb:92}), then there exists $C>0$ so that $\Loc_{eig}(\omega,\lambda_j)\geq C$ for all $j\in {\mb{N}}$. Under the sole assumption that $\omega\neq \emptyset$, there exists $C> 0$ so that $\Loc_{eig}(\omega,\lambda_j)\geq Ce^{-C\sqrt{\lambda_j}}$ for all $j\in {\mb{N}}$.
\begin{lemma}
We have
\bna
\ensuremath{\mathfrak K}_{eig}(\omega) = \limsup_{\lambda \to +\infty, \lambda \in \Sp(-\Delta_g)} \frac{- \log \Loc_{eig}(\omega,\lambda)}{\sqrt{\lambda}},
\qquad \ensuremath{\mathfrak K}_{\Sigma}(\omega) = \limsup_{\lambda \to +\infty} \frac{- \log \Loc_{\Sigma}(\omega,\lambda)}{\sqrt{\lambda}},
\ena
\bna
\ensuremath{\mathfrak K}_{heat}(\omega) = \limsup_{T\to 0^+} \left( -T\log \Loc_{heat}(\omega,T) \right) .
\ena
\end{lemma}
Note that we do not have a similar formulation for the constants $\ensuremath{\mathfrak K}_\infty(\omega)$ and $\ensuremath{\mathfrak K}_{wave}(\omega,T)$, since they do not correspond to an asymptotic regime (like $T\to 0^+$ or $\lambda \to +\infty$).
\bnp
We only prove the second statement, the other proofs being similar. Setting $$\mathfrak{C}_{\Sigma}(\omega) = \limsup_{\lambda \to +\infty} \frac{- \log \Loc_{\Sigma}(\omega,\lambda)}{\sqrt{\lambda}},$$ we want to prove that $\mathfrak{C}_{\Sigma}(\omega)=\ensuremath{\mathfrak K}_{\Sigma}(\omega)$. Assume $\ensuremath{\mathfrak K}, C$ satisfy~\eqref{e:co-sum}, then we have
$$
\Loc_{\Sigma}(\omega,\lambda) \geq \frac{1}{C}e^{-\ensuremath{\mathfrak K} \sqrt{\lambda}},
$$
and hence
$$
\frac{- \log \Loc_{\Sigma}(\omega,\lambda)}{\sqrt{\lambda}} \leq \frac{\ensuremath{\mathfrak K} \sqrt{\lambda} +\log(C)}{ \sqrt{\lambda}} .
$$
Taking the $\limsup_{\lambda \to +\infty}$, this implies $\mathfrak{C}_{\Sigma}(\omega) \leq \ensuremath{\mathfrak K}$. Taking the infimum over all such $\ensuremath{\mathfrak K}$ and recalling Definition~\ref{def-coco}, we obtain $\mathfrak{C}_{\Sigma}(\omega) \leq \ensuremath{\mathfrak K}_{\Sigma}(\omega)$.
We now prove the converse inequality. The definition of $\mathfrak{C}_{\Sigma}(\omega)$ implies that for all $\varepsilon>0$, there exists $\lambda_0(\varepsilon)$ such that for all $\lambda \geq \lambda_0(\varepsilon)$,
$$
\frac{- \log \Loc_{\Sigma}(\omega,\lambda)}{\sqrt{\lambda}} \leq \mathfrak{C}_{\Sigma}(\omega) + \varepsilon ,
$$
that is, $\Loc_{\Sigma}(\omega,\lambda) \geq e^{-(\mathfrak{C}_{\Sigma}(\omega) + \varepsilon)\sqrt{\lambda}}$. This, together with the fact that $\Loc_{\Sigma}(\omega,\cdot)$ is positive on $[0, \lambda_0(\varepsilon)]$, implies the existence of a constant $C(\varepsilon)>1$ such that $\Loc_{\Sigma}(\omega,\lambda) \geq \frac{1}{C(\varepsilon)}e^{-(\mathfrak{C}_{\Sigma}(\omega) + \varepsilon)\sqrt{\lambda}}$ for all $\lambda\geq 0$. This is precisely estimate~\eqref{e:co-sum} with $\ensuremath{\mathfrak K} = \mathfrak{C}_{\Sigma}(\omega) + \varepsilon$ and $C= C(\varepsilon)$. Recalling Definition~\ref{def-coco}, we obtain $\ensuremath{\mathfrak K}_{\Sigma}(\omega) \leq \mathfrak{C}_{\Sigma}(\omega) + \varepsilon$ for all $\varepsilon >0$, and hence $\ensuremath{\mathfrak K}_{\Sigma}(\omega) \leq \mathfrak{C}_{\Sigma}(\omega)$, which concludes the proof.
\enp
\section{Construction of maximally vanishing eigenfunctions}
\label{s:contruction}
\subsection{The sphere}
\label{s:sphere}
In this section, we consider the simplest case of our results that is, the unit sphere in ${\mb{R}}^3$:
$$
\S^2 = \{(x_1,x_2,x_3) \in {\mb{R}}^3 , x_1^2 + x_2^2 + x_3^2 = 1\} = \{ x \in {\mb{R}}^3 ,|x|= 1\} .
$$
Eigenfunctions and eigenvalues of the Laplace-Beltrami operator on $\S^2$ are well understood: eigenfunctions are restrictions to $\S^2$ of {\em harmonic homogeneous polynomials} on ${\mb{R}}^3$, associated with the eigenvalue $k(k+1)$, where $k$ is the degree of the polynomial. We are particularly interested in the so-called equatorial spherical harmonics, given by
$$
u_k = P_k |_{\S^2} \in C^\infty(\S^2) , \quad P_k(x_1,x_2,x_3) = (x_1 + i x_2)^k ,
$$
known to concentrate exponentially on the equator given by $x_3 = 0$.
Since it can be written $P_k=z^k$ where $z=x_1 + i x_2\in{\mb{C}}$, it is easy to check that $P_k$ is holomorphic as a function of $z$, and hence harmonic as a function of $(x_1,x_2,x_3)\in {\mb{R}}^3$. Moreover, $P_k$ is homogeneous of degree $k$. Therefore, see e.g.~\cite[Proposition 22.2 p.~169]{Shubin:01}, we obtain that $u_k$ is an eigenfunction of the Laplace-Beltrami operator on $\S^2$:
\bna
-\Delta_{\S^2} u_k=\lambda_k u_k \textnormal{ with } \lambda_k=k(k+1).
\ena
Note that we have $$|u_k(\omega)|^2 = (x_1^2 +x_2^2)^k = (1-x_3^2)^k , \quad \omega = \frac{x}{|x|} .$$
We denote by $N = (0,0, 1)$ and $S = (0,0,-1)$ the north and south poles, and use the coordinates:
$$
\begin{array}{ccc}
(0 , \pi) \times \S^1 & \to & \S^2 \setminus\{N,S\} \\
(s ,\theta)& \mapsto & (\sin s \cos \theta , \sin s \sin \theta , \cos s)
\end{array}
$$
Remark that $s(x) = \dist_g(x, N)$, for $x\in \S^2$.
In these coordinates, the metric is given by $d s^2 +(\sin s)^2 d\theta^2$, the Riemannian volume element is $d\omega=\sin s \, ds \, d\theta$, and the eigenfunction $u_k$ reads
$$
u_k(s, \theta) = \sin(s)^k e^{ik\theta} .
$$
\begin{remark}
The construction works equally well on the unit sphere $\S^n\subset {\mb{R}}^{n+1}$, $n\geq 2$. The coordinates have to be changed to
$$
\begin{array}{ccc}
(0 , \pi) \times \S^1\times \S^{n-2} & \to & \S^{n} \setminus\{N,S\} \\
(s ,\theta , t)& \mapsto & (\sin s \cos \theta , \sin s \sin \theta , t\cos s)
\end{array}
$$
and we can still consider the eigenfunction $u_k=(x_1 + i x_2)^k|_{\S^{n}}$ with $-\Delta_{\S^n} u_k=\lambda_k u_k$ and $\lambda_k=k(k+n-1)$.
\end{remark}
With the above choice of the eigenfunction $u_k$, we have
$$|u_k(x)|^2 = (1-x_3^2)^k = (\sin s)^{2k} = |\sin \dist_g(x , N)|^{2 k} =e^{- 2k d_A(x)} , \quad d_A(x) = - \log \sin \dist_g(x , N) .$$
Note that $d_A$ is actually the Agmon distance to the equator ($s=\frac{\pi}{2}$) where $\S^2$ is seen as a surface of revolution, see Remark \ref{rksphereAgmon} below.
Also, given $f \in L^1(\S^2)$, we have
\bna
\intop_{\S^2} f(\omega) |u_k(\omega)|^2 d\omega
& =& \intop_{(0,\pi) \times \S^1}f(s , \theta) (\sin s)^{2k+1} d s d\theta \\
& =& 2\pi \intop_{(0,\pi) } F(s) (\sin s)^{2k+1} d s , \qquad F(s) = \frac{1}{2\pi} \intop_{\S^1} f(s , \theta) d\theta .
\ena
In case $f=1$, this yields the asymptotics of the norm of $u_k$, given by the Laplace method (see e.g.~\cite{Erd:56,CopsonBook}):
\bna
\frac{1}{2\pi}\|u_k\|^2_{L^2(\S^2)}& = &\frac{1}{2\pi}\intop_{\S^2} |u_k(\omega)|^2 d\omega
= \intop_{-1}^1 (1-x_3^2)^{k} d x_3 = \intop_{-1}^1 e^{k \log(1-x_3^2)} d x_3 \\
& = & (1+O(\frac{1}{k}))\intop_{\mb{R}} e^{- k x_3^2} d x_3 = \sqrt{\frac{\pi}{k}}(1+O(\frac{1}{k})) ,
\ena
and hence $\|u_k\|_{L^2(\S^2)}\sim 2^{1/2}\pi^{3/4}k^{-1/4}$ as $k \to + \infty$.
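This asymptotic can be cross-checked against the exact value $\intop_{-1}^1(1-x_3^2)^k dx_3 = \sqrt{\pi}\,\Gamma(k+1)/\Gamma(k+\frac32)$ (a Beta-function identity). The sketch below (the sample values of $k$ are our own choice) compares it with the Laplace-method prediction $\sqrt{\pi/k}$:

```python
import math

def exact_scaled_norm_sq(k):
    """Exact value of (1/(2*pi)) * ||u_k||^2_{L^2(S^2)} = Beta(1/2, k+1)."""
    return math.exp(0.5 * math.log(math.pi)
                    + math.lgamma(k + 1) - math.lgamma(k + 1.5))

# ratio to sqrt(pi/k); it behaves like 1 - 3/(8k), so it increases towards 1
ratios = [exact_scaled_norm_sq(k) / math.sqrt(math.pi / k) for k in (10, 100, 1000)]
```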
We have the elementary estimate
$$
\|u_k\|^2_{L^2(B(N,r))} = 2\pi \intop_0^r (\sin s)^{2k+1} d s \leq \frac{\pi}{k+1} r^{2k+2} .
$$
This can be slightly refined, e.g. by writing
\begin{align*}
\left| \|u_k\|^2_{L^2(B(N,r))} - \frac{\pi}{k+1} (\sin r)^{2k+2} \right| & = \left| \|u_k\|^2_{L^2(B(N,r))} - 2\pi \intop_0^r \cos s (\sin s)^{2k+1} d s \right| \\
& = 2\pi \intop_0^r (1-\cos s) (\sin s)^{2k+1} d s\\
& \leq \frac{r^2}{2}\, 2\pi \intop_0^r (\sin s)^{2k+1} ds = \frac{r^2}{2} \|u_k\|^2_{L^2(B(N,r))} .
\end{align*}
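The elementary upper bound above (which only uses $\sin s\leq s$) can be verified numerically; the quadrature and the sample pairs $(k,r)$ below are our own choices:

```python
import math

def mass_near_pole(k, r, n=20000):
    """Midpoint rule for ||u_k||^2_{L^2(B(N,r))} = 2*pi * integral_0^r (sin s)^{2k+1} ds."""
    h = r / n
    return 2 * math.pi * sum(math.sin((i + 0.5) * h)**(2 * k + 1)
                             for i in range(n)) * h
```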
To be a little more precise, let us now prove an asymptotic equivalent of $\|u_k\|^2_{L^2(B(N,r))}$ as $k \to \infty$, which is uniform in $r$.
\begin{lemma}
For all $k \in {\mb{N}}^*$ and all $r \in (0,\frac{\pi}{2})$, we have
$$
\|u_k\|_{L^2(B(N,r))}^2 = \frac{\pi}{k+1}\frac{\sin(r)^{2k+2}}{\cos(r)}\left(1+R \right) , \quad \text{ with }\quad |R| \leq \frac{\tan(r)^2}{2k+2} .
$$
\end{lemma}
This furnishes an optimal lower/upper bound for this quantity, uniform with respect to $r$.
\bnp
We write $a= -\log \sin r >0$, change variables $y =-\log \sin s$, and seek an asymptotic expansion of
\bna
\frac{1}{2\pi} \|u_k\|^2_{L^2(B(N,r))} = \intop_0^r (\sin s)^{2k+1} d s
= \intop_{a}^{+\infty} e^{-(2k+2)y} \frac{1}{\sqrt{1-e^{-2y}}} dy .
\ena
This integral is of the form
$$
\mathcal{I}(a,k) := \intop_{a}^{+\infty} e^{-(2k+2) y} f(y)dy ,
$$ where $f(y)=\frac{1}{\sqrt{1-e^{-2y}}}$ is smooth on $[a, +\infty)$. Writing
$$
|f(y) - f(a)|\leq (y-a)\sup_{[a,\infty)}|f'| \leq (y-a) \frac{e^{-2a}}{(1-e^{-2a})^{3/2}},
$$
since $f'(y) =- e^{-2y} (1-e^{-2y})^{-3/2}$, so that $\sup_{[a,\infty)}|f'| = |f'(a)|$,
and integrating on $(a,+ \infty)$, we obtain
$$
\left| \mathcal{I}(a,k) - f(a)\frac{e^{-(2k+2)a}}{2k+2} \right| \leq \frac{e^{-(2k+2)a}}{(2k+2)^2} \frac{e^{-2a}}{(1-e^{-2a})^{3/2}}.
$$
Coming back to the original notation, this is precisely
$$
\left|\frac{1}{2\pi} \|u_k\|^2_{L^2(B(N,r))} -\frac{\sin(r)^{2k+2}}{(2k+2) \cos(r)} \right| \leq \frac{\sin(r)^{2k+4}}{(2k+2)^2\cos(r)^3} = \frac{\sin(r)^{2k+2}}{(2k+2)^2\cos(r)} \tan(r)^2,
$$
which concludes the proof of the lemma.
\enp
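The lemma can be tested numerically: the sketch below (our own quadrature and arbitrary sample parameters) computes $\|u_k\|^2_{L^2(B(N,r))}$ and checks the relative remainder against $\tan(r)^2/(2k+2)$:

```python
import math

def mass_near_pole(k, r, n=40000):
    """Midpoint rule for ||u_k||^2_{L^2(B(N,r))} = 2*pi * integral_0^r (sin s)^{2k+1} ds."""
    h = r / n
    return 2 * math.pi * sum(math.sin((i + 0.5) * h)**(2 * k + 1)
                             for i in range(n)) * h

def remainder(k, r):
    """Relative remainder R in ||u_k||^2 = (pi/(k+1)) * sin(r)^{2k+2}/cos(r) * (1+R)."""
    main = math.pi / (k + 1) * math.sin(r)**(2 * k + 2) / math.cos(r)
    return mass_near_pole(k, r) / main - 1.0
```

Since $f$ is decreasing in the proof above, the remainder $R$ is nonpositive, and its size is controlled by $\tan(r)^2/(2k+2)$.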
Note that the eigenfunctions we have constructed are complex valued. Yet, since $u_k=(\sin s)^k e^{ik\theta}$, we have for instance $\Re(u_k)=(\sin s)^k \cos(k\theta)$, and all the estimates above apply verbatim, except that $\intop_{\S^1}|e^{ik\theta}|^2 d\theta=2\pi$ should be replaced by $\intop_{\S^1}\cos(k\theta)^2 d\theta=\pi$.
\subsection{General surfaces of revolution}
\label{s:revol}
In this section we consider a surface of revolution $\mathcal{S}\subset {\mb{R}}^3$ diffeomorphic to the sphere $\S^2$, generalizing the results of Section~\ref{s:sphere}.
We follow~\cite[Chapter~4.B, p.~95]{Besse} for the precise geometric description of such manifolds.
Assume that $(\ensuremath{\mathcal S} , g)$ is an embedded submanifold of ${\mb{R}}^3$ (endowed with the induced Euclidean structure), having $\S^1 = ({\mb{R}}/2\pi\mathbb Z) \sim SO(2)$ as an effective isometry group.
The action of $\S^1$ on $\ensuremath{\mathcal S}$, denoted by $\theta \mapsto \ensuremath{\mathcal R}_\theta$ (so that $\ensuremath{\mathcal R}_\theta \ensuremath{\mathcal S} = \ensuremath{\mathcal S}$), has exactly two fixed points, denoted by $N, S \in \ensuremath{\mathcal S}$ (the so-called north and south poles).
We now describe a nice parametrization of $(\ensuremath{\mathcal S} , g)$. Let $L= \dist_g(N, S)$ and $\gamma_0$ be a geodesic from $N$ to $S$ (thus with length $L$). For any $\theta \in \S^1$, the isometry $\ensuremath{\mathcal R}_\theta$ transforms the geodesic $\gamma_0$ into $\ensuremath{\mathcal R}_\theta (\gamma_0)$, which is another geodesic joining $N$ to $S$. Set
$U =\ensuremath{\mathcal S} \setminus \{ N, S \}$. For every $m \in U$, there exists a unique $\theta \in \S^1$ such that $m$ belongs to $\ensuremath{\mathcal R}_\theta (\gamma_0)$. The geodesic $\ensuremath{\mathcal R}_\theta (\gamma_0)$ can be parametrized by arclength
$$
\rho : [0,L]\to \ensuremath{\mathcal R}_\theta (\gamma_0) , \quad \rho (0) = N , \quad \rho (L) = S , \quad s = \dist_g( \rho(s) , N) = L- \dist_g( \rho(s) , S),
$$
and there exists a unique $s \in (0,L)$ such that $\rho(s) = m$. We use $(s,\theta)$ as a parametrization of $U \subset \ensuremath{\mathcal S}$:
$$
\begin{array}{cccc}
\zeta : & U = \mathcal{S} \setminus\{N,S\} & \to & ( 0,L) \times \S^1\\
& m& \mapsto &\zeta(m) = (s, \theta).
\end{array}
$$
We define two other exponential charts $(U_N, \zeta_N)$ and $(U_S, \zeta_S)$ centered at the fixed points $N$ and $S$ by
$$
U_N = \{N\} \cup \zeta \left( \big( 0, \frac{L}{2} \big) \times \S^1 \right) = B_g\left(N, \frac{L}{2}\right) \subset \ensuremath{\mathcal S},
\quad U_S = \{S\} \cup \zeta \left( \big( \frac{L}{2}, L \big) \times \S^1 \right) = B_g\left(S, \frac{L}{2}\right) \subset \ensuremath{\mathcal S} ,
$$
$$
\zeta_N : U_N \to B_{{\mb{R}}^2}\left(0 , \frac{L}{2}\right) , \quad \zeta_N (N) = 0 , \quad \zeta_S : U_S \to B_{{\mb{R}}^2}\left(0 , \frac{L}{2}\right) , \quad \zeta_S (S) = 0 ,
$$
with the transition maps
$$
\begin{array}{cccc}
\zeta_N \circ \zeta^{-1} : & \zeta \big(U\cap U_N \big) = \left( 0,\frac{L}{2} \right) \times \S^1 & \to & \zeta_N \big(U\cap U_N \big) = B_{{\mb{R}}^2}\left(0 , \frac{L}{2}\right) \setminus \{ 0 \}\\
& (s ,\theta)& \mapsto & \big(s \cos(\theta), s \sin (\theta) \big).
\end{array}
$$
and
$$
\begin{array}{cccc}
\zeta_S \circ \zeta^{-1} : & \zeta \big(U\cap U_S \big) = \left(\frac{L}{2} , L \right) \times \S^1 & \to & \zeta_S \big(U\cap U_S \big) = B_{{\mb{R}}^2}\left(0 , \frac{L}{2}\right) \setminus \{ 0 \}\\
& (s ,\theta)& \mapsto & \big((L-s) \cos(\theta), (L-s) \sin (\theta) \big).
\end{array}
$$
On the cylinder $(0,L) \times \S^1$, the metric $g$ is given by
$$
(\zeta^{-1})^*g = ds^2 + R(s)^2 d\theta^2
$$
for some smooth function $R : (0,L) \to {\mb{R}}^+_*$ (see Remark \ref{rkautreparam} below for the geometric interpretation of $R$). Since $g$ is a smooth metric on $\ensuremath{\mathcal S}$, \cite[Proposition~4.6]{Besse} gives that $R$ extends to a $C^\infty$ function $[0,L] \to {\mb{R}}^+$ satisfying
\bnan
\label{e:condR}
R(0) = R(L) = 0 , \quad R'(0) = 1 , \quad R'(L) = -1 , \quad R^{(2p)}(0) = R^{(2p)}(L)=0 \quad \text{for any } p \in {\mb{N}} .
\enan
In these coordinates, the Riemannian volume form is hence $R(s) ds d\theta$, the Riemannian gradient of a function is
\bnan
\label{e:reim-gradient}
\nabla_g f = \d_s f \frac{\d}{\d s} + \frac{1}{R(s)^2} \d_\theta f \frac{\d}{\d \theta} , \quad \text{ with } \quad g(\nabla_g f , \nabla_g f ) = |\d_s f|^2 + \frac{1}{R(s)^2} |\d_\theta f|^2
\enan
and the Laplace-Beltrami operator is given by
\bnan
\label{deltacoord}
\Delta_{s,\theta} = \frac{1}{R(s)} \d_s R(s) \d_s + \frac{1}{R(s)^2} \d_\theta^2 .
\enan
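As a consistency check, on the round sphere ($R(s)=\sin s$, which also satisfies the boundary conditions \eqref{e:condR} with $L=\pi$), the formula above applied to the radial part of $u_k=(\sin s)^k e^{ik\theta}$ should return the eigenvalue $-k(k+1)$. A finite-difference sketch (step sizes and sample points are our own choices, not part of the text):

```python
import math

def radial(s, k):
    """Radial part f(s) = sin(s)^k of the equatorial harmonic u_k = f(s) e^{ik theta}."""
    return math.sin(s)**k

def laplace_beltrami(f, s, k, d=1e-5):
    """(1/R) d/ds( R f' ) - k^2/R^2 * f with R(s) = sin(s), by nested central differences.

    This is Delta_{s,theta} applied to f(s) e^{ik theta}, divided by e^{ik theta}.
    """
    R = math.sin
    def flux(x):  # R(x) * f'(x)
        return R(x) * (f(x + d) - f(x - d)) / (2 * d)
    return (flux(s + d) - flux(s - d)) / (2 * d) / R(s) - k**2 / R(s)**2 * f(s)

k = 5
# residuals of (Delta + k(k+1)) u_k at a few interior points; they should vanish
residuals = [laplace_beltrami(lambda x: radial(x, k), s, k) + k * (k + 1) * radial(s, k)
             for s in (0.4, 1.0, 2.2)]
```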
Another important operator is the infinitesimal generator $X_\theta$ of the group $(\ensuremath{\mathcal R}_\theta)_{\theta \in \S^1}$, defined, for $f \in C^\infty(\ensuremath{\mathcal S})$, by
\bnan
\label{e:Xtheta}
X_\theta f = \lim_{\theta \to 0} \theta^{-1} (f\circ \ensuremath{\mathcal R}_\theta -f).
\enan In the chart $(U, \zeta)$, the action of $\ensuremath{\mathcal R}_\theta$ is given by $(\zeta^{-1})^* \ensuremath{\mathcal R}_\theta (s, \theta') = (s, \theta' +\theta)$, so that $(\zeta^{-1})^* X_\theta = \d_\theta$. Let us now check that $X_\theta$ is a smooth vector field. Indeed, we have
$$
(\zeta_N^{-1})^*X_\theta = (\zeta_N^{-1})^*\zeta^* \d_\theta = d \left( \zeta_N \circ \zeta^{-1}\right) \cdot \d_\theta,
$$
and hence
$$
(\zeta_N^{-1})^*X_\theta \big(s \cos(\theta), s \sin (\theta) \big) = \left(- s \sin (\theta) \d_{x_1} + s \cos(\theta) \d_{x_2} \right) \big(s \cos(\theta), s \sin (\theta) \big) ,
$$
that is
$$
\big( (\zeta_N^{-1})^*X_\theta \big)(x_1,x_2) = - x_2 \d_{x_1} + x_1 \d_{x_2}.
$$
Since $\big( (\zeta_N^{-1})^*X_\theta \big)(0) = 0$ (and since the computation is similar in $U_S$), we have obtained that $X_\theta$ is smooth. Note also that $X_\theta(N) = X_\theta(S)=0$ and that its norm is given by $\sqrt{g(X_\theta, X_\theta) (s, \theta)}= R(s)$ (in the coordinates of $U$).
We denote by $L^2(\ensuremath{\mathcal S}) := L^2(\ensuremath{\mathcal S}, d\Vol_g)$ the space of square integrable functions; it is invariant under the action of $(\ensuremath{\mathcal R}_\theta)_{\theta \in \S^1}$.
Now, remark that $(\ensuremath{\mathcal R}_\theta)_{\theta \in \S^1}$ acts as a (periodic) one-parameter unitary group on $L^2(\ensuremath{\mathcal S})$ by $f \mapsto f\circ \ensuremath{\mathcal R}_\theta$. The Stone Theorem (see e.g.~\cite[Theorem~VIII-8~p266]{RS:I}) hence implies that its infinitesimal generator is $i A$, where $A$ is a selfadjoint operator on $L^2(\ensuremath{\mathcal S})$ with domain $D(A) \subset L^2(\ensuremath{\mathcal S})$. Since $i Af = X_\theta f$ for $f \in C^\infty(\ensuremath{\mathcal S})$ (which is dense in $D(A)$) according to~\eqref{e:Xtheta}, we have that $A$ is the selfadjoint extension of $\frac{X_\theta}{i}$. From now on, we slightly abuse the notation and still denote $\frac{X_\theta}{i}$ for its selfadjoint extension $A$.
Since $g$ is invariant by the action of $\ensuremath{\mathcal R}_\theta$, we have
$$
[X_{\theta} , \Delta_g ] =0 .
$$
Moreover, $\Delta_g$ has compact resolvent, so that the operators $\Delta_g$ and $X_\theta$ share a common basis of eigenfunctions: indeed, $X_\theta$ preserves each (finite dimensional) eigenspace of $\Delta_g$, and can be diagonalized on these spaces. In $U$, such a common eigenfunction can be written as $e^{ik \theta}f(s)$ with $k\in \mathbb Z$ and $f \in C^{\infty}(0,L)\cap L^2\left((0,L), R(s)ds\right)$ a solution of
\bnan
\label{equationk1D}
- \frac{1}{R(s)} \d_s\left( R(s) \d_s f\right) + \frac{k^2}{R(s)^2} f=\lambda f .
\enan
for some $\lambda\geq 0$, eigenvalue of $-\Delta$. Let us detail this assertion. Take $u$ a (necessarily smooth) common eigenfunction of $\Delta_g$ and $X_{\theta}$. In $U$ (with the coordinates $(s,\theta)$), set $f(s)=u(s,0)$. The function $u$ is smooth and satisfies $X_{\theta}u=i\lambda_{\theta}u$ in the classical sense. Then, for any fixed $s_0\in (0,L)$, the function $g_{s_0}\in C^{\infty}(\S^1)$ defined by $\theta \mapsto g_{s_0}(\theta)=u(s_0,\theta)$ solves $\d_{\theta} g_{s_0}(\theta)=i\lambda_{\theta}g_{s_0}(\theta)$, and can thus be written $g_{s_0}(\theta)=e^{i\lambda_{\theta}\theta}f(s_0)$. By $2\pi$-periodicity in $\theta$, $\lambda_{\theta}=k\in \mathbb Z$. So $u(s_0,\theta)=e^{ik\theta}f(s_0)$, and it is clear from \eqref{deltacoord} that $f$ must satisfy \eqref{equationk1D}.
We denote these normalized eigenfunctions by $\varphi_{k,n}=e^{ik \theta}f_{k,n}(s)$, with eigenvalues $\lambda_{k,n}$ of $-\Delta_g$, where $n\in {\mb{N}}$. In particular, we can write $L^2(\ensuremath{\mathcal S})= \oplus^{\perp }_{(k,n)\in \mathbb Z\times {\mb{N}}}\vect (\varphi_{k,n}) $.
We will denote $L^2_k=\ker (X_{\theta}-ik)=\left\{\varphi\in L^2(\ensuremath{\mathcal S});\varphi_{|U} =e^{ik \theta}f(s),f \in L^2\left((0,L),R(s)ds\right)\right\}$ and $H^2_k=H^2(\ensuremath{\mathcal S})\cap L^2_k$. The commutation property implies that $\Delta H^2_k\subset L^2_k$, so we can define the operator $\Delta_k=\Delta_{\left|L^2_k\right.}$, which is selfadjoint with domain $H^2_k$. This can be seen for instance directly on the simultaneous diagonalization, which yields an isometry $L^2(\ensuremath{\mathcal S}) \approx \ell^2(\mathbb Z\times {\mb{N}})$ under which $L^2_k$ is identified with the closed subspace $\left\{(k,n)\left|n\in {\mb{N}}\right.\right\}$ of $\ell^2(\mathbb Z\times {\mb{N}})$. The fact that $\Delta_g$ has compact resolvent implies that $\Delta_k$ has compact resolvent as well.
\begin{remark}
\label{r:def-intrinsic-dA}
Note that the introduction of $X_\theta$ allows to give a more intrinsic definition of $d_A$ introduced in \eqref{e:defdA}: given a point $m_0$ on the ``strict global non-degenerate equator'' of $\ensuremath{\mathcal S}$, the Agmon distance $d_A$ is the unique continuous function such that
$$
X_\theta d_A = 0 , \quad d_A(m_0) = 0 , \quad |\nabla_g d_A|_g^2(m) - \left( \frac{1}{g(X_\theta ,X_\theta)(m)}- \frac{1}{g(X_\theta ,X_\theta)(m_0)} \right) = 0.
$$
All properties of Lemma~\ref{lemma-prop-dA} can be formulated intrinsically since $s$ measures the geodesic distance to the north pole, and hence $s(m) = \dist_g(m, N)$, $L- s(m) = \dist_g(m, S)$, and $|s(m)-s_0| =\dist_g(m, \text{equator})$.
\end{remark}
\begin{remark}[Another possible parametrization]
\label{rkautreparam}
Some such surfaces of revolution admit the following ``cylindrical'' parametrization on the set $U$:
with $ \pm z_\pm>0$ and the two poles $P_\pm = (0,0, z_\pm)$, we have
$$
\begin{array}{ccc}
( z_- , z_+) \times \S^1 & \to & U = \mathcal{S} \setminus\{N,S\} \subset {\mb{R}}^3\\
(z ,\theta)& \mapsto & (\mathsf{R}(z) \cos \theta , \mathsf{R}(z)\sin \theta , z)
\end{array}
$$
where $\mathsf{R}: [z_- , z_+] \to [0,\infty)$ is the profile of the surface, that is, a smooth function, positive on $(z_-, z_+)$, satisfying $\mathsf{R}(z_\pm)=0$ and $\lim_{z \to z_\pm} \mathsf{R}'(z) = \mp\infty$.
We have
$$
\left\{
\begin{array}{l}
dx_1 = \mathsf{R}'(z) \cos \theta dz - \mathsf{R}(z) \sin \theta d\theta \\
dx_2 = \mathsf{R}'(z) \sin \theta dz + \mathsf{R}(z) \cos \theta d\theta \\
dx_3 = dz
\end{array}
\right.
$$
so that the metric on $\mathcal{S}$ induced by the Euclidean structure is given by
$$g = dx_1^2 +dx_2^2+dx_3^2 = (1+\mathsf{R}'(z)^2)dz^2 + \mathsf{R}(z)^2 d\theta^2 . $$
As a consequence, the Riemannian volume element is $V(z)\, dz\, d\theta$ with $V(z)= \mathsf{R}(z)\sqrt{1+\mathsf{R}'(z)^2}$, and the Laplace-Beltrami operator is given in these coordinates by
$$
\Delta_{z,\theta} = \frac{1}{V(z)} \d_z \left(\frac{V(z)}{1+\mathsf{R}'(z)^2} \d_z \right)+ \frac{1}{\mathsf{R}(z)^2} \d_\theta^2 ,
$$
with a suitable selfadjoint extension on $L^2\big((z_- , z_+) \times \S^1, V(z) dz d\theta \big)$.
The link between $s$ and $z$ is the following diffeomorphism
$$
s(z) = \intop_{z_-}^z \sqrt{1+\mathsf{R}'(t)^2} dt ,
$$
and we have $L= \intop_{z_-}^{z_+} \sqrt{1+\mathsf{R}'(t)^2} dt$, together with $R(s(z)) =\mathsf{R}(z)(=\sqrt{g(X_\theta, X_\theta)})$, so that $R(s)$ indeed measures the distance to the axis of revolution.
\end{remark}
\begin{remark}[The sphere]
Note that, in the $z$-parametrization, the sphere is given by $z_\pm = \pm1$ and $\mathsf{R}(z)= \sqrt{1-z^2}$, hence $\mathsf{R}'(z)= \frac{-z}{\sqrt{1-z^2}}$ and $V(z) = 1$ is smooth (which is not the case in general, e.g. if the surface is flat near the poles).
\end{remark}
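For the sphere, the change of variables of Remark~\ref{rkautreparam} can be made explicit: $s(z)=\arcsin(z)+\frac{\pi}{2}$ and $R(s(z))=\sin s=\sqrt{1-z^2}$. A numerical sketch of both identities (the quadrature parameters and sample points are our own choices):

```python
import math

def s_of_z(z, n=200000):
    """Arclength s(z) = integral_{-1}^z sqrt(1 + R'(t)^2) dt with R(t) = sqrt(1-t^2).

    The integrand simplifies to 1/sqrt(1-t^2); the midpoint rule handles the
    integrable endpoint singularity at t = -1 (with O(sqrt(step)) error).
    """
    h = (z + 1.0) / n
    return sum(1.0 / math.sqrt(1.0 - (-1.0 + (i + 0.5) * h)**2)
               for i in range(n)) * h

samples = [(z, s_of_z(z)) for z in (-0.5, 0.0, 0.7)]
```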
In the proofs below, we shall often consider $h=k^{-1}$ as a semiclassical parameter.
\begin{lemma}
\label{l:exist-mu-psi}
Assume that $s \mapsto R(s)$ admits a non-degenerate local maximum at $s_0 \in (0,L)$.
Then, for all $k\in{\mb{N}}$, there exists $\psi_k \in C^{\infty}(\ensuremath{\mathcal S})\cap L^2_k $, and $\mu_k \in {\mb{R}}$ such that $\mu_k= \frac{1}{R(s_0)^2}+\frac{1}{k}\sqrt{\frac{|R''(s_0)|}{R^3(s_0)}} + O(\frac{1}{k^{\frac32}})$, $\|\psi_k\|_{L^2(\ensuremath{\mathcal S})}=1$, and we have $- \Delta_g \psi_k = k^2 \mu_k \psi_k$.
\end{lemma}
Note that the assumption of the lemma is $R'(s_0) = 0$ and $R''(s_0)<0$.
\bnp
We first construct a family of exponentially accurate quasimodes (i.e. approximate eigenfunctions) compactly supported in $U$ and of the form (in the coordinates $(s, \theta)$ of $U$) $e^{ik \theta} u_k(s)$.
The function $u_k(s)$ should thus solve~\eqref{equationk1D} approximately. Setting $h=k^{-1}$ and $\mu = \lambda h^2$, we are left with the following semiclassical eigenvalue (or approximate eigenvalue) problem in the limit $h\to 0^+$
$$
(P_h - \mu)f = - \frac{h^2}{R(s)} \d_s \left(R(s) \d_s f \right)+\left( \frac{1}{R(s)^2} - \mu \right) f = 0 .
$$
According to the assumption, the potential $\frac{1}{R(s)^2}$ is positive, tends to $+\infty$ near $0$ and $L$, and admits a nondegenerate local minimum $\frac{1}{R(s_0)^2}$ at $s_0$ (since $R'(s_0)=0$ and $R'' (s_0) <0$).
The construction is classical (harmonic approximation) and follows e.g. that of~\cite[Theorem~4.23]{DS:book} in a simpler setting. The idea is to approximate the operator $P_h$ by its harmonic approximation at $s_0$, namely
\bnan
\label{e:model-harmonic-oscillator}
\tilde{P}_h := - \frac{h^2}{R(s_0)} \d_s R(s_0) \d_s + \frac{1}{R(s_0)^2} + \left(\frac{1}{R^2} \right)''(s_0) \frac{(s-s_0)^2}{2} = -h^2 \d_s^2 + \frac{1}{R(s_0)^2} - \frac{2R''(s_0)}{R^3(s_0)}\frac{(s-s_0)^2}{2}
\enan
Recall that the spectrum of the operator $-h^2 \d_y^2+ c_0 y^2$ on ${\mb{R}}$ ($c_0>0$) is given by $E_n(h) = hE_n(1) = h (2n+1)\sqrt{c_0}$, associated with the eigenfunctions $u_n^h(y) = h^{-\frac14}u_n^1(y/\sqrt{h})$ where $u_n^1(y) = p_n(y)e^{-\sqrt{c_0}\frac{y^2}{2}}$ ($p_n$ being a Hermite polynomial).
Here, this applies with $c_0 = \frac{|R''(s_0)|}{R^3(s_0)}$.
We consider a cutoff function $\chi \in C^\infty_c(0,L)$ such that $\chi=1$ in a neighborhood of $s_0$. We set
$u^h(s) = \chi(s) u_0^h(s)$, with $u_0^h(s) = C h^{-\frac14}e^{-\sqrt{c_0}\frac{(s-s_0)^2}{2h}}$ where
$C$ is a normalizing constant,
and prove this is an approximate eigenfunction (quasimode).
First notice that, with $\tilde{P}_h$ defined in \eqref{e:model-harmonic-oscillator}, we have
$$
\tilde{P}_h u^h = \chi \tilde{P}_h u_0^h + [ \tilde{P}_h , \chi] u_0^h
= \Big(\frac{1}{R(s_0)^2}+h\sqrt{c_0}\Big)\chi u_0^h + [ -h^2 \d_s^2 , \chi] u_0^h .
$$
In this expression, $[ -h^2 \d_s^2 , \chi]$ is a first order differential operator supported in the region where $\chi' \neq 0$, hence away from $s_0$, where $u_0^h$ and its derivatives are exponentially small. This yields
\bna
\| \tilde{P}_h u^h - \Big(\frac{1}{R(s_0)^2}+h\sqrt{c_0}\Big) u^h \|_{L^2\big((0 ,L) , R(s) ds\big)} = O(e^{-c/h}).
\ena
Now we estimate, all norms below being taken in $L^2\big((0 ,L) , R(s) ds\big)$,
\bna
\nor{\left(P_h - \Big(\frac{1}{R(s_0)^2}+ h\sqrt{c_0} \Big) \right)u^h}{L^2} & \leq &
\nor{\left(P_h - \tilde{P}_h \right)u^h}{L^2}
+\nor{\tilde{P}_h u^h - \Big(\frac{1}{R(s_0)^2}+h\sqrt{c_0}\Big) u^h}{L^2} \\
& \leq &
\nor{\left(\frac{h^2}{R(s)} \d_s R(s) \d_s - h^2 \d_s^2\right)u^h}{L^2} \\
&& + \nor{\left(\frac{1}{R(s)^2} -\frac{1}{R(s_0)^2} - c_0 (s-s_0)^2\right)u^h}{L^2}
+ Ce^{-c/h} .
\ena
According to the Taylor formula and the definition of $c_0$, we have $\frac{1}{R(s)^2} -\frac{1}{R(s_0)^2} - c_0 (s-s_0)^2 = O((s-s_0)^3)$ on the support of $\chi$, so that
$$
\nor{\left(\frac{1}{R(s)^2} -\frac{1}{R(s_0)^2} - c_0 (s-s_0)^2\right)u^h}{L^2}^2 \leq C \intop_{{\mb{R}}} |(s-s_0)^3 h^{-\frac14}e^{-\sqrt{c_0}\frac{(s-s_0)^2}{2h}}|^2 ds \leq C h^3 .
$$
We now estimate the term
\bna
\nor{\left(\frac{h^2}{R(s)} \d_s R(s) \d_s - h^2 \d_s^2\right)u^h}{L^2}
& = & \nor{\frac{h R'(s)}{R(s)} h\d_s u^h}{L^2} .
\ena
Notice that $h \d_s u^h = h \chi' u_0^h + h \chi \d_s u_0^h = O_{L^2}(e^{-c/h}) - \sqrt{c_0} (s-s_0) \chi u_0^h$.
Moreover, since $R'(s_0)=0$, the Taylor formula yields
$$
\nor{\frac{h R'(s)}{R(s)} h\d_s u^h}{L^2} \leq C e^{-c/h}+ C\nor{h (s-s_0)^2 \chi u_0^h}{L^2} \leq Ch^2 .
$$
Now, combining the above estimates finally yields the existence of constants $D, h_0>0$ such that for all $h<h_0$, we have, with $\nu_h = \frac{1}{R(s_0)^2}+ h\sqrt{c_0}$,
$$
\nor{(P_h - \nu_h) u^h}{L^2\big((0 ,L) , R(s) ds\big)} \leq Dh^{3/2} \approx Dh^{3/2} \nor{u^h}{L^2\big((0 ,L) , R(s) ds\big)} .
$$
Now, we define in coordinates in $U\subset \ensuremath{\mathcal S}$, $f_k(s,\theta)= e^{ik\theta}u^h(s)$, $h=k^{-1}$. This function is smooth and compactly supported in $U$ thanks to the cutoff $\chi$, and can therefore be extended as a function in $C^{\infty}(\ensuremath{\mathcal S})\cap L^2_k$, still denoted $f_k$, which satisfies
\bna
\nor{(-h^2\Delta_k - \nu_h) f_k}{L^2_k}\leq Dh^{3/2} \approx Dh^{3/2} \nor{f_k}{L^2_k} .
\ena
Hence, if $\nu_h \notin \Sp(-h^2\Delta_k )$, this implies $\nor{(-h^2\Delta_k- \nu_h)^{-1}}{L^2_k\to L^2_k}\geq \frac{1}{Dh^{3/2}}$.
Finally, the operator $h^2\Delta_k$ being selfadjoint on $L^2_k$, we have, for $z \in {\mb{C}} \setminus \Sp(-h^2\Delta_k)$, $\|(-h^2\Delta_k-z)^{-1}\| = \frac{1}{d(z, \Sp(-h^2\Delta_k))}$, so that, if $\nu_h \notin \Sp(-h^2\Delta_k)$,
$$
\frac{1}{d(\nu_h, \Sp(-h^2\Delta_k))} \geq \frac{1}{Dh^{3/2}} .
$$
In any case, this implies $d(\nu_h, \Sp(-h^2\Delta_k))\leq Dh^{3/2}$, and since the spectrum of $-h^2\Delta_k$ is discrete, this proves the sought result.
\enp
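The harmonic approximation at the heart of the above proof can be tested numerically: the Gaussian $u_0^h$ (centered at $s_0$, taken to be $0$ below) is an exact eigenfunction of the model operator $-h^2\d_y^2 + c_0 y^2$ with eigenvalue $h\sqrt{c_0}$. A minimal sketch (the values of $h$ and $c_0$ and the function names are ours):

```python
import math

h, c0 = 0.01, 2.0          # semiclassical parameter and potential curvature (illustrative)
E0 = h * math.sqrt(c0)     # predicted ground-state energy of -h^2 d^2/dy^2 + c0 y^2

def u(y):
    # Gaussian ground state u_0^h (centered at s_0 = 0), up to normalization
    return math.exp(-math.sqrt(c0) * y * y / (2.0 * h))

def residual(y, dy=1e-4):
    # (-h^2 u'' + c0 y^2 u - E0 u)(y), with u'' computed by central differences
    upp = (u(y + dy) - 2.0 * u(y) + u(y - dy)) / dy**2
    return -h * h * upp + (c0 * y * y - E0) * u(y)

# the residual vanishes (up to discretization error) at every point
for y in [0.0, 0.05, 0.1, 0.2]:
    assert abs(residual(y)) < 1e-5
```

For the operator $P_h$ of the proof, $c_0 = \frac{|R''(s_0)|}{R^3(s_0)}$ and the residual is only $O(h^{3/2})$, coming from the cubic Taylor remainder of the potential.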
The next step is to study the behavior of the eigenfunction $\psi_k$ constructed in the previous lemma (and under a stronger assumption on the point $s_0$). This is the goal of the so-called Agmon estimates.
We first need the following integration-by-parts lemma.
\begin{lemma}
\label{l:IPPphi}
For all $\Psi \in W^{1,\infty}(\mathcal{S})$ real valued and all $w \in H^2(\mathcal{S})$, we have
\bna
\intop_\mathcal{S} |\nabla_g(\Psi w)|_g^2 d \Vol_g - \intop_\mathcal{S} |\nabla_g\Psi |_g^2 |w|^2 d \Vol_g
= \Re \Big( \intop_\mathcal{S} |\Psi|^2(-\Delta_g w)\overline{w} \ d\Vol_g \Big) .
\ena
\end{lemma}
\bnp
For $\Psi \in C^2(\mathcal{S})$, this is a direct consequence of the integration by parts formula (also valid when $\mathcal{S}$ has a boundary $\d \mathcal{S}$ and $w|_{\d \mathcal{S}}=0$)
\bna
\intop_\mathcal{S} |\nabla_g(\Psi w)|_g^2 d \Vol_g
& = & - \intop_\mathcal{S} \Delta_g (\Psi w)\Psi \overline{w} d \Vol_g \\
& = & \Re\left( \intop_\mathcal{S} \big(- \Psi (\Delta_g w) - (\Delta_g \Psi) w - 2 \nabla_g \Psi \cdot \nabla_g w\big) \Psi \overline{w}d \Vol_g \right) \\
& = & \Re \Big( \intop_\mathcal{S} |\Psi|^2(-\Delta_g w)\overline{w} \ d\Vol_g \Big) + A
\ena
with
\bna
A & = & \Re\left( \intop_\mathcal{S} \big( - (\Delta_g \Psi) \Psi |w|^2 - 2 \nabla_g \Psi \cdot \nabla_g w \Psi \overline{w}\big) d \Vol_g \right) \\
& = & \Re\left( \intop_\mathcal{S} \big( |\nabla_g \Psi|^2 |w|^2 + \nabla_g \Psi \cdot \nabla_g ( |w|^2) \Psi - 2 \nabla_g \Psi \cdot \nabla_g w \Psi \overline{w}\big) d \Vol_g \right) \\
& = & \intop_\mathcal{S} |\nabla_g \Psi|^2 |w|^2 \, d \Vol_g ,
\ena
where we integrated by parts in the second line. This is the sought estimate in case $\Psi \in C^2(\mathcal{S})$.
The result of the lemma follows by a classical approximation argument, see e.g.~\cite[Proof of Proposition~6.1]{DS:book}.
\enp
\medskip
We shall now assume that $R$ reaches at $s_0$ a {\em strict global non-degenerate} maximum, and introduce the relevant Agmon distance to the ``equator'' $s=s_0$. The latter is defined in the coordinates of $U$ by the eikonal equation~\eqref{e:defdA}, or, more explicitly, for $s\in (0,L)$, by~\eqref{e:defbisdA}.
\begin{lemma}[Properties of $d_A$]
\label{lemma-prop-dA}
Assume that $R$ reaches at $s_0$ a {\em strict global non-degenerate} maximum. Then, $d_A \in C^2(0,L)$, and we have
\bnan
\label{equivdAlog}
d_A(s) = -\log(s) + O(1) , \quad \text{as } s \to 0^+ , \qquad d_A(s)= - \log (L-s) + O(1), \quad \text{as } s \to L^- ,
\enan
\bnan
\label{equivdAs0}
d_A(s) = \frac{1}{2}\sqrt{\frac{-R''(s_0)}{R^3(s_0)}} (s-s_0)^2 + O((s-s_0)^3), \quad \text{as }s \to s_0 .
\enan
\end{lemma}
\bnp
Remark that according to \eqref{e:condR}, we have $\frac{1}{R(y)}\to + \infty$ as $y\to 0^+$ or $y \to L^-$, with
$$
R(s) = s + O(s^3) , \text{ when } s \to 0^+ , \quad \text{ and } \quad R(s) = L- s + O((L-s)^3) , \text{ when }s \to L^- .
$$
As a consequence, with~\eqref{e:defbisdA}, we obtain $d_A(s) = \left| \intop^s_{s_0} \frac{1}{y}(1+O(y^2)) dy \right| = -\log(s) + O(1),$ as $s \to 0^+$ (and similarly when $s \to L^-$), that is~\eqref{equivdAlog}.
Let us also study the behavior of $d_A$ near $s_0$. Denoting $V(s)=\frac{1}{R(s)^2} - \frac{1}{R(s_0)^2} $, we have $V(s_0)=V'(s_0)=0$ and $V''(s_0)=\frac{-2R''(s_0)}{R^3(s_0)} >0 $. This implies ~\eqref{equivdAs0} and that $d_A$ is of class $C^2$ near $s_0$, by Taylor expansion of $d_A$ and its derivatives.
\enp
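On the round sphere ($R(s)=\sin s$, $L=\pi$, $s_0=\pi/2$), the Agmon distance is explicit, $d_A(s) = -\log(\sin s)$, which matches both~\eqref{equivdAlog} and~\eqref{equivdAs0}. The following numerical sanity check (pure Python; function names are ours) assumes, as in the proof above, that $d_A(s) = \big|\int_{s_0}^{s} \sqrt{1/R(y)^2 - 1/R(s_0)^2}\, dy\big|$:

```python
import math

def dA(s, s0=math.pi / 2, m=200000):
    # Agmon distance on the unit sphere: R(y) = sin(y), s0 = pi/2,
    # d_A(s) = | ∫_{s}^{s0} sqrt(1/R(y)^2 - 1/R(s0)^2) dy |, midpoint rule
    a, b = min(s, s0), max(s, s0)
    h = (b - a) / m
    total = 0.0
    for i in range(m):
        y = a + (i + 0.5) * h
        total += math.sqrt(max(1.0 / math.sin(y) ** 2 - 1.0, 0.0))
    return h * total

# Closed form on the sphere: d_A(s) = -log(sin s).
for s in [0.2, 0.7, 1.2]:
    assert abs(dA(s) + math.log(math.sin(s))) < 1e-4
```

Near $s=0$ this reproduces the logarithmic blow-up of~\eqref{equivdAlog}, and near $s_0=\pi/2$ one has $-\log(\sin s) \sim \frac12 (s-\pi/2)^2$, in agreement with~\eqref{equivdAs0} since $-R''(s_0)=R(s_0)=1$.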
We can now state the following very precise result. All results concerning surfaces of revolution are corollaries of this one.
\begin{theorem}[Agmon estimate]
\label{lmagmon}
Assume that $R$ reaches at $s_0$ a {\em strict global non-degenerate} maximum, and consider the associated numbers $\mu_k$ and functions $\psi_k$ given by Lemma~\ref{l:exist-mu-psi}.
There exist $C,C_0,k_0>0$ such that, for all $k \in {\mb{N}}$, $k \geq k_0$, the following integral is well defined with the estimate
$$
\intop_\ensuremath{\mathcal S} e^{2 k d_A(m)} |\psi_k|^2(m) d\Vol_g(m)
\leq Ck^{2C_0} .
$$
\end{theorem}
Using first that $d_A$ is decreasing on $(0,s_0]$, we obtain the following direct Corollary.
\begin{corollary}
\label{coragmonfaible}
Under the assumptions of Theorem~\ref{lmagmon}, there exist $C,C_0,k_0>0$ such that, for all $k \in {\mb{N}}$, $k \geq k_0$ and all $s_1 \leq s_0$, we have
$$
\intop_{B(N,s_1)} |\psi_k|^2d \Vol_g
\leq Ck^{2C_0} e^{- 2d_A(s_1)k}.
$$
\end{corollary}
From this result, we may now derive a proof of Theorem~\ref{t:agmon-intro} and Corollary~\ref{c:rate-vanish}.
\bnp[Proof of Theorem~\ref{t:agmon-intro} and Corollary~\ref{c:rate-vanish}]
The eigenfunctions constructed in Lemma \ref{l:exist-mu-psi} satisfy $\lambda_k=k^2\left(\frac{1}{R(s_0)^2}+O(k^{-1})\right)$. In particular, $k\geq \sqrt{\lambda_k}R(s_0)-C$ for an appropriate constant $C$ and $k$ large enough. This gives $e^{- 2kd_A(s_1)}\leq e^{C d_A(s_1)} e^{- 2d_A(s_1)R(s_0)\sqrt{\lambda_k}}$. Then, Theorem~\ref{t:agmon-intro} follows directly from Corollary~\ref{coragmonfaible} up to changing the constants involved. The second part of Theorem~\ref{t:agmon-intro} follows directly from Proposition \ref{p:link-eigenfct-heat-etc}.
Corollary~\ref{c:rate-vanish} follows from the asymptotics \eqref{equivdAlog} of $d_A$ and the fact that Theorem~\ref{t:agmon-intro} is uniform for $r$ small. Indeed, for an appropriate constant $C$, we have $d_A(s_1)\leq -\log(s_1)+C$ for all $0<s_1\leq s_0$.
For fixed $\lambda_k$ and using the uniformity for $r$ small, we get the order of vanishing using the general Lemma \ref{lmordrevanishinL2} of the Appendix.
\enp
\bigskip
We will also need the following very simple lemma.
\begin{lemma}
\label{lminfgrad}
Let $\varphi \in W^{1,\infty}(\ensuremath{\mathcal S})\cap L^2_k$. Then we have the pointwise estimate on $U$
\bna
|\nabla_g(\varphi)|_g^2\geq \frac{k^2}{g(X_\theta, X_\theta)} \left|\varphi\right|^2.
\ena
\end{lemma}
\bnp
In the coordinates of $U$, $\varphi$ may be written $\varphi(s, \theta) = e^{ik\theta}f(s)$, with, according to~\eqref{e:reim-gradient},
\bna
|\nabla_g(\varphi)|_g^2 & = & |\d_sf|^2 + \frac{1}{R(s)^2}|\d_\theta(e^{ik\theta}f(s))|^2 = |\d_sf|^2 + \frac{k^2}{R(s)^2}|e^{ik\theta}f(s)|^2 \\
& \geq & \frac{k^2}{R(s)^2}|e^{ik\theta}f(s)|^2 = \frac{k^2}{g(X_\theta, X_\theta)} \left|\varphi\right|^2,
\ena
which is the sought result.
\enp
The proof follows that of \cite[Proposition 3.3.5]{Helffer:booksemiclassic}.
\bnp[Proof of Theorem~\ref{lmagmon}]
As in the above proof, we use the notation $h=k^{-1}$, considered as a semiclassical parameter.
We define, for some constant $C_0>1$, $h_0>0$ and $h \in (0,h_0)$ the sets
\bna
\Omega_- = \{s \in (0,L), d_A(s) \leq C_0 h\}, \quad \Omega_+ = \{s \in (0,L), d_A(s)> C_0 h\}.
\ena
We set
\bna
\phi(s)
& = & d_A(s) - C_0 h \log(C_0) , \quad \text{for } s\in \Omega_- ,\\
& = & d_A(s) - C_0 h \log(d_A(s)/h) , \quad \text{for } s\in \Omega_+ .
\ena
For $M>1$, set $\phi_M = \min(\phi , M)$ and $\Omega_M = \phi_M^{-1}(\{M\})$.
Moreover, on $\Omega_-$, we have $\phi=d_A-C_0h\log(C_0)\leq d_A\leq C_0 h<C_0 h_0$, so for $M\geq C_0 h_0$, we have $\Omega_-\cap \Omega_M=\emptyset$. Hence, we have the partition $(0,L)=\Omega_- \sqcup (\Omega_+ \setminus \Omega_M) \sqcup (\Omega_+ \cap \Omega_M)$.
Note that it will be very important in what follows that all the estimates are independent of $M$, while $C_0$ will be fixed later on.
The function $\phi_M$ is Lipschitz on $(0,L)$, and can be pulled back to a $(R_\theta)$ invariant Lipschitz function defined on $U$, and extended to $\ensuremath{\mathcal S}$ by $\phi_M(N)=\phi_M(S)=M$. We will call $\ensuremath{\mathcal S}_+$, $\ensuremath{\mathcal S}_-$ and $\ensuremath{\mathcal S}_M$, the naturally defined zones so that
$$
\ensuremath{\mathcal S}=\ensuremath{\mathcal S}_-\sqcup ( \ensuremath{\mathcal S}_+ \setminus \ensuremath{\mathcal S}_M )\sqcup\ensuremath{\mathcal S}_M .
$$
We now apply the formula of Lemma~\ref{l:IPPphi} with $\Psi=e^{\frac{\phi_M}{h}}$, where $\phi_M$ is as above with $M$ large, and $w=\psi_h$ (note that $\psi_h\in C^{\infty}(\ensuremath{\mathcal S})$ since it is an eigenfunction of $\Delta_g$, so the lemma applies).
\bna
\intop_\mathcal{S} |\nabla_g(\Psi \psi_h )|_g^2 d \Vol_g -\intop_\mathcal{S} |\nabla_g\Psi |_g^2 |\psi_h|^2 d \Vol_g
= k^2\mu_h \intop_\mathcal{S} |\Psi|^2 |\psi_h|^2 d \Vol_g .
\ena
Applying now Lemma \ref{lminfgrad} since $\Psi \psi_h \in W^{1,\infty}(\ensuremath{\mathcal S})\cap L^2_k$ and using $|\nabla_g\Psi |_g^2=k^2 |\phi_M'(s)|^2 e^{2\phi_M/h} $ in $U$ and so almost everywhere in $\ensuremath{\mathcal S}$, we get
\bna
\intop_\mathcal{S}\left( \frac{1}{R(s)^2}- |\phi_M'(s)|^2 -\mu_h\right) e^{2\phi_M/h} |\psi_h|^2 d \Vol_g\leq 0.
\ena
Using the expression of $\phi_M$ on $\Omega_-$ and of $\mu_h = \frac{1}{R(s_0)^2} + O(h)$, this yields, for some $C>0$ (independent of $h$ and $M$),
\bna
\intop_{\mathcal{S}_+}\left( \frac{1}{R(s)^2}- |\phi_M'(s)|^2 -\mu_h\right) e^{2\phi_M/h} |\psi_h|^2 d \Vol_g
& \leq & C h \intop_{\mathcal{S}_-} e^{2d_A(s)/h}|\psi_h|^2 d \Vol_g \\
& \leq & C h e^{2C_0}\intop _{\mathcal{S}_-}|\psi_h|^2 d \Vol_g \leq Ch e^{2C_0},
\ena
since $\psi_h$ is normalized.
Note also that on $\Omega_M \cap \Omega_+$, we have $d_A\geq C_0h$ and so $d_A\geq d_A - C_0 h \log(C_0) \geq \phi \geq M\geq 1$.
Hence, since $d_A$ is continuous, there is a constant $\e>0$ so that $s\in \Omega_M \cap \Omega_+$ implies $|s-s_0|\geq \e$. In particular, since $s_0$ is a nondegenerate maximum for $R$, there is $\eta>0$ so that it also implies $ \frac{1}{R(s)^2} -\frac{1}{R(s_0)^2}\geq \eta$. In particular, on $\ensuremath{\mathcal S}_M \cap \ensuremath{\mathcal S}_+$, we have
$$
\frac{1}{R(s)^2}- |\phi_M'(s)|^2 -\mu_h = \frac{1}{R(s)^2} -\frac{1}{R(s_0)^2} + O(h) \geq 0
$$ for $h<h_0$, with $h_0$ depending only on the geometry and not on $M$. Therefore, we have obtained
\bnan
\label{recap1}
\intop_{\mathcal{S}_+ \setminus \ensuremath{\mathcal S}_M}\left( \frac{1}{R(s)^2}- |\phi'(s)|^2 -\mu_h\right) e^{2\phi/h} |\psi_h|^2 d \Vol_g
& \leq & Ch e^{2C_0}.
\enan
Next, on $\Omega_+ \setminus \Omega_M$, we have $\phi' = d_A' - C_0 h \frac{d_A'}{d_A}$ and hence
\bna
\frac{1}{R(s)^2}- |\phi'|^2- \mu_h
& = & - h\sqrt{\frac{|R''(s_0)|}{R^3(s_0)}} + O(h^{\frac32}) + 2C_0 h \frac{(d_A')^2}{d_A} - C_0^2 h^2\frac{(d_A')^2}{d_A^2} \\
& \geq & - h\sqrt{\frac{|R''(s_0)|}{R^3(s_0)}} + O(h^{\frac32}) + C_0 h \frac{(d_A')^2}{d_A}
\ena
where we used that $d_A\geq C_0 h$.
According to~\eqref{equivdAs0}, $ \frac{(d_A')^2}{d_A} \to 2 \sqrt{\frac{-R''(s_0)}{R(s_0)^3}}>0$ and $ \frac{(d_A')^2}{d_A}$ can thus be extended by continuity at $s_0$. Since $d_A'(s) = 0$ iff $s=s_0$ ($R$ reaches at $s_0$ its {\em unique} global maximum), the extended function is uniformly bounded from below on any compact subset of $(0, L)$. Moreover, according to~\eqref{equivdAlog}, we have
$$
\frac{(d_A')^2}{d_A}(s) \sim_{s \to 0^+} \frac{1}{s^2\log(s^{-1})} , \quad \text{and} \quad \frac{(d_A')^2}{d_A}(s) \sim_{s \to L^-} \frac{1}{(L-s)^2\log((L-s)^{-1})} .
$$
Hence, there is a constant $C_1>0$ such that $\frac{(d_A')^2}{d_A}\geq C_1$ on $(0, L)$, and we have
\bna
\frac{1}{R(s)^2}- |\phi'|^2- \mu_h
& \geq & h \left(C_0 \frac{(d_A')^2}{d_A} - \sqrt{\frac{|R''(s_0)|}{R^3(s_0)}} + O(h^{\frac12}) \right) \geq C_0 h \frac{(d_A')^2}{2d_A},
\ena
when taking $C_0$ large w.r.t. $C_1^{-1}$ and $h\leq h_0$ with $h_0$ depending on $C_0,C_1$. We can now fix $C_0, h_0$. Plugging this lower bound into \eqref{recap1}, we have thus obtained
\bna
C h \intop _{\mathcal{S}_+ \setminus \ensuremath{\mathcal S}_M} \frac{(d_A')^2}{d_A} e^{2\phi/h}|\psi_h|^2 d \Vol_g
\leq Ch e^{2C_0} .
\ena
Our next task is to replace $\phi$ by $d_A$ in this expression. Note that $e^{2\phi(s)/h} = e^{2d_A(s)/h}\left(\frac{h}{d_A(s)} \right)^{2C_0}$.
In particular, this yields
$$
C h \intop _{\mathcal{S}_+ \setminus \ensuremath{\mathcal S}_M} \frac{(d_A')^2}{d_A}e^{2d_A(s)/h}\left(\frac{h}{d_A(s)} \right)^{2C_0} |\psi_h|^2 d \Vol_g
\leq Ch .
$$
Now, the function $\frac{(d_A')^2}{d_A^{1+2C_0}}$ is positive on $(0,s_0) \cup (s_0,L)$, tends to $+\infty$ at $s_0$, and satisfies, as above
$$
\frac{(d_A')^2}{d_A^{1+2C_0}} \sim \frac{1}{s^2(\log(s^{-1}))^{1+2C_0}} \to + \infty , \text{ as } s\to 0^+,
$$
and similarly $\frac{(d_A')^2}{d_A^{1+2C_0}} \sim \frac{1}{(L-s)^2(\log((L-s)^{-1}))^{1+2C_0}} \to + \infty,\text{ as } s\to L^-$.
Hence, it is bounded from below on $(0, L)$ by a constant, and we obtain
$$
\intop_{\mathcal{S}_+ \setminus \ensuremath{\mathcal S}_M} e^{2d_A(s)/h} |\psi_h|^2d \Vol_g
\leq Ch^{-2C_0} ,
$$
which, combined with the already remarked fact that $\intop_{\mathcal{S}_-} e^{2d_A(s)/h} |\psi_h|^2 d\Vol_g \leq C$, gives
$$
\intop_{\ensuremath{\mathcal S} \setminus \mathcal{S}_M} e^{2d_A(s)/h} |\psi_h|^2d \Vol_g
\leq Ch^{-2C_0}.
$$
Since all the constants are independent of $M$, this gives the sought result by dominated convergence, letting $M$ tend to infinity.
\enp
\subsection{The disk}
\label{sectDisk}
Denote by $\ensuremath{\mathbb D}=\left\{(x,y)\in {\mb{R}}^2\left|x^2+y^2\leq 1\right.\right\}\subset {\mb{R}}^2$ the unit disk, and by $\Delta$ the (negative) flat Laplace operator in ${\mb{R}}^2$.
In polar coordinates, $x=r \cos\theta$, $y=r\sin\theta$, we have
$$
\Delta = \partial_x^2+\partial_y^2=\frac{\d^2}{\d r^2}+\frac{1}{r}\frac{\d}{\d r}+\frac{1}{r^2}\frac{\d^2}{\d \theta^2} .
$$
Then, it can be seen that the family of functions
\bnan
\label{e:disk-eig}
\psi_{n,k}(r,\theta)=J_n(z_{n,k}r)e^{in\theta}
\enan
forms an orthogonal basis of $L^2(\ensuremath{\mathbb D})$ ($n \in \mathbb Z$, $k \geq 1$), where
\begin{itemize}
\item $J_n$ is the Bessel function of order $n$, namely:
\bnan
\label{e:bessel}
J_n(z) = \frac{1}{2\pi} \intop_{-\pi}^{\pi} e^{i z \sin \theta} e^{- i n \theta} d \theta , \quad n \in \mathbb Z , z \in {\mb{C}} \setminus {\mb{R}}_- ,
\enan
\item $0<z_{n,1}<z_{n,2}<z_{n,3}<\cdots$ is the sequence of the positive zeros of $J_n$.
\end{itemize}
We refer for instance to~\cite[Chapters~14.4 and~15]{VasyBook} for an elementary introduction.
In particular, the functions defined in~\eqref{e:disk-eig} satisfy
$$
-\Delta \psi_{n,k}=\lambda_{n,k}\psi_{n,k} \ \text{ in } \Int(\ensuremath{\mathbb D}), \quad \text{ with } \lambda_{n,k}=z_{n,k}^2 \quad \text{ and }\psi_{n,k} |_{\d\ensuremath{\mathbb D}} = 0 .
$$
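Separating variables, this eigenvalue relation reduces, for $f(r)=J_n(z r)$, to Bessel's equation $f'' + \frac{1}{r}f' + \big(z^2 - \frac{n^2}{r^2}\big)f = 0$. This can be checked numerically from the integral representation~\eqref{e:bessel} (a sketch in pure Python; the quadrature size, step sizes, and names are ours):

```python
import math

def J(n, z, m=1000):
    # Bessel integral J_n(z) = (1/2pi) ∫_{-pi}^{pi} cos(z sin t - n t) dt,
    # midpoint rule over one full period (spectrally accurate for integer n)
    h = 2 * math.pi / m
    total = 0.0
    for i in range(m):
        t = -math.pi + (i + 0.5) * h
        total += math.cos(z * math.sin(t) - n * t)
    return total / m

n, z = 3, 5.0  # any order and frequency; z need not be one of the zeros z_{n,k}

def f(r):
    return J(n, z * r)

# Radial part of -Delta psi = z^2 psi for psi = f(r) e^{i n theta}:
# f'' + f'/r + (z^2 - n^2/r^2) f = 0, checked by central differences.
dr = 1e-3
for r in [0.3, 0.6, 0.9]:
    fp = (f(r + dr) - f(r - dr)) / (2 * dr)
    fpp = (f(r + dr) - 2 * f(r) + f(r - dr)) / dr**2
    assert abs(fpp + fp / r + (z * z - n * n / (r * r)) * f(r)) < 1e-3
```

When $z = z_{n,k}$, the boundary condition $f(1) = J_n(z_{n,k}) = 0$ is satisfied in addition, which is what singles out the eigenvalues $\lambda_{n,k}=z_{n,k}^2$.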
Roughly speaking, the index $n$ encodes the oscillation in the $\theta$ variable while the index $k$ will contain an oscillation in the radial variable. We refer to~\cite{ALM:cras} for a description of concentration/delocalization properties of general eigenfunctions (or, more generally, quasimodes) on the disk. Here, we want to analyse some eigenfunctions corresponding to the so-called whispering gallery modes that are concentrated close to the boundary of $\ensuremath{\mathbb D}$. They ``rotate'' very fast and concentrate towards one of the two trajectories of the billiard contained in $S^*\d \ensuremath{\mathbb D}$. This phenomenon corresponds to $n \to + \infty$ and $k$ small, typically $k=1$.
In the following, we thus focus on:
\bna
\psi_{n,1}(r,\theta)=J_n(z_{n,1}r)e^{in\theta} ,
\ena
and hence on the function $J_n(z_{n,1}r)$. This requires information on $z_{n,1}$.
A huge amount of information is known on Bessel functions and their zeros, but we will need only a few facts. First, we need to normalize the $\psi_{n,k}$. This is done for instance in Lemma 5.1 of Burq-G\'erard-Tzvetkov \cite{BGT:03} in the case $k=1$, which is the one of interest for us:
\bna
\nor{\psi_{n,1}}{L^2(\ensuremath{\mathbb D})}\approx n^{-\frac{2}{3}}.
\ena
We also need a rough estimate on the asymptotics of $z_{n,1}$, see for instance \cite[Lemma 4.3]{BGT:03}, namely
\bna
z_{n,1}=n+\O(n^{1/3}) , \qquad z_{n,1} > n .
\ena
To estimate the norm of $\psi_{n,1}$ on $B(0,\varepsilon)$, $\varepsilon<1$, we first prove the following lemma.
\begin{lemma}
\label{l:bessel-asympt}
For all $\alpha \geq 0$ and $n\in {\mb{N}}$, we have
\bna
\left|J_n\left(\frac{n}{\cosh(\alpha)}\right) \right| \leq e^{n(\tanh(\alpha)-\alpha)} .
\ena
\end{lemma}
Note that in \cite[Section 32, p.~79]{CopsonBook}, for fixed $\alpha$, a full asymptotic expansion in terms of $n$ is proved, with principal term:
\bnan
\label{e:copson}
J_n\left(\frac{n}{\cosh(\alpha)}\right) \approx \frac{e^{n(\tanh(\alpha)-\alpha)}}{\sqrt{2\pi n \tanh(\alpha)}} .
\enan
Here, we do not need the full expansion, but we do need a bound uniform in $\alpha$. Note that the short proof below is not very informative, and the reader is referred to~\cite[Section 32]{CopsonBook} for a complete steepest descent approach to this asymptotic expansion.
\bnp[Proof of Lemma~\ref{l:bessel-asympt}]
We start from formula~\eqref{e:bessel}, in which we write $\nu = \frac{n}{\cosh(\alpha)}$, and use the holomorphy of the integrand, together with the fact that $e^{i \nu\left( \sin z - z \cosh\alpha \right)}$ is a periodic function of $\Re(z)$ to change the contour. This yields:
\begin{align*}
J_n\left(\nu\right) &= \frac{1}{2\pi} \intop_{-\pi}^{\pi} e^{i \left(\frac{n}{\cosh(\alpha)}\right) \sin \theta} e^{- i n \theta} d \theta
= \frac{1}{2\pi} \intop_{-\pi}^{\pi} e^{i \nu\left( \sin \theta - \theta \cosh\alpha \right)} d \theta \\
&= \frac{1}{2\pi} \intop_{-\pi - i \alpha}^{\pi - i \alpha} e^{i \nu\left( \sin z - z \cosh\alpha \right)} d z
= \frac{1}{2\pi} \intop_{-\pi}^{\pi} e^{i \nu\left( \sin x \cosh\alpha - i \cos x \sinh\alpha - x\cosh\alpha + i \alpha \cosh\alpha \right)} d x .
\end{align*}
This implies
\begin{align*}
| J_n\left(\nu\right)| &\leq \frac{1}{2\pi} \intop_{-\pi}^{\pi} e^{\nu \left( \cos x \sinh\alpha - \alpha \cosh\alpha \right)} d x \leq e^{\nu \left( \sinh\alpha - \alpha \cosh\alpha \right)} = e^{n \left( \tanh\alpha - \alpha \right)} ,
\end{align*}
and concludes the proof of the lemma.
\enp
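As a sanity check, the bound of Lemma~\ref{l:bessel-asympt} can be verified numerically from the integral representation~\eqref{e:bessel} (a sketch in pure Python; quadrature size and names are ours):

```python
import math

def J(n, z, m=4000):
    # Bessel integral J_n(z) = (1/2pi) ∫_{-pi}^{pi} cos(z sin t - n t) dt,
    # midpoint rule over one full period (spectrally accurate for integer n)
    h = 2 * math.pi / m
    total = 0.0
    for i in range(m):
        t = -math.pi + (i + 0.5) * h
        total += math.cos(z * math.sin(t) - n * t)
    return total / m

# Lemma: |J_n(n / cosh(alpha))| <= exp(n (tanh(alpha) - alpha)) for alpha >= 0
n = 50
for alpha in [0.3, 0.5, 1.0]:
    bound = math.exp(n * (math.tanh(alpha) - alpha))
    assert abs(J(n, n / math.cosh(alpha))) <= bound
```

The inequality holds with room to spare: by~\eqref{e:copson}, the true value lies a factor of roughly $\sqrt{2\pi n \tanh(\alpha)}$ below the bound.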
\begin{lemma}
There exist $C, \beta , n_0 >0$ such that for all $n \geq n_0$ and $0< r \leq 1-\beta n^{-2/3}$, we have
\bna
\|\psi_{n,1}\|_{L^\infty(B(0,r))} \leq \exp \left(-n d_A(r) + C n^{1/3} \right) .
\ena
\end{lemma}
Note that for $r \in (0,1)$ fixed, the asymptotic formula~\eqref{e:copson} implies that such eigenfunctions have indeed the decay prescribed by this formula.
\bnp
We have $\frac{z_{n,1}}{n} = 1 + O(n^{-2/3})$ and $\frac{z_{n,1}}{n} >1$.
Hence recalling that $|d_A'|$ is decreasing on $(0,1]$, we have, as long as $\frac{r z_{n,1}}{n} \leq 1$,
$$
\left|d_A(\frac{r z_{n,1}}{n}) - d_A(r) \right| \leq C n^{-2/3} r |d_A'(r)|= C n^{-2/3}r \sqrt{\frac{1}{r^2}-1} =C n^{-2/3} \sqrt{1-r^2} .
$$
Thus we obtain from Lemma~\ref{l:bessel-asympt}
$$
|J_n(z_{n,1}r)|= |J_n(n\frac{z_{n,1}}{n}r)| \leq \exp \left(-n d_A(\frac{z_{n,1}}{n}r)\right) \leq \exp \left(-n d_A(r) + C n^{1/3} \right)
$$
for all $n \in {\mb{N}}$ and $0< r \leq \frac{n}{z_{n,1}}$.
\enp
The combination of the previous estimates gives Theorem \ref{t:agmon-diskintro}.
\section{Maximal vanishing rate of sums of eigenfunctions, and observability on small balls}
\label{s:unif-LR-ineq}
In this section, we prove Theorem~\ref{t:unif-LR-ineq}, i.e. the Lebeau-Robbiano spectral inequality with observation in balls of (small) radius $r$ and constants uniform in $r$.
We follow the proof proposed by Jerison and Lebeau in~\cite[middle of p231]{JL:99}. There are three main steps, that we summarize in three lemmata. We then prove Theorem~\ref{t:unif-LR-ineq} from these lemmata, and prove the lemmata afterwards.
In the following, for $\beta>0$, we set $X_\beta= (-\beta , \beta) \times \ensuremath{\mathcal M}$, and denote $P = -\d_{s}^2 - \Delta_g$.
In the set $X_{2S} =(-2S , 2S) \times \ensuremath{\mathcal M}$, we denote by $(s, x)$ the running point and by $B_{r}$ a geodesic ball (for the metric $\id \otimes g$) of radius $r$ (its center being implicit in the notation).
We also use the rescaled $H^1$ norm on an open set $U$, denoted $H_r^1(U)$ and defined by
\bnan
\label{e:def-norm-r}
\|F\|_{H_r^1(U)}^2 = \|F\|_{L^2(U)}^2 + r^2 \|\nabla_g F\|_{L^2(U)}^2 .
\enan
This will only be used on small geodesic balls or annuli, namely $U = B_{\alpha r}$ or $U = B_{\alpha r}\setminus B_{\beta r}$.
\subsection{The three key lemmata}
In this section, we state the three key lemmata needed for the proof of Theorem~\ref{t:unif-LR-ineq}.
The first lemma is a classical global Lebeau-Robbiano interpolation inequality, \cite[Section~3, Estimate~(1)]{LR:95}.
\begin{lemma}[Global interpolation inequality from unit balls to the whole space]
\label{t:global-interp-LR}
Let $S>0$ and let $U \subset X_{2S}$ be any nonempty open set. Then there exist $C>0$ and $\alpha_0 \in (0,1)$ such that we have
\begin{equation*}
\|F\|_{H^1(X_{S})} \leq C \left(\| P F\|_{L^2(X_{2S})} + \| F \|_{H^1(U)} \right)^{\alpha_0} \|F\|_{H^1(X_{2S})}^{1-\alpha_0}
\end{equation*}
for all $F \in H^2(X_{2S})$ such that $F|_{(-2S,2S)\times \d M}=0$.
\end{lemma}
The next lemma states a local interpolation inequality. Its specificity is that the observation term is on a small ball $B_r$ and the constants are uniform in $r$ small. For this, the exponent has to depend on $r$ as $|\log(r)|^{-1}$.
\begin{lemma}[Local interpolation inequality from small balls to unit balls]
\label{l:aronsajn-arleman-interp}
Let $P= -\d_s^2 - \Delta_g$ and let $B_r$ denote balls centered at $(s_0, x_0) \in X_S$, away from the boundary.
Then, there exists a constant $r_1 >0$ such that for all $0< r_0 \leq r_1$, there is $C>0$ such that for all $r \in (0, \frac{r_0}{10})$ and $F\in H^2(B_{r_0})$, we have
\begin{align*}
\|F\|_{H^1(B_{\frac{r_0}{4}})} \leq C\left( \| P F \|_{L^2(B_{r_0})}+ \|F\|_{H^1_r(B_r)} \right)^{\alpha_r} \|F\|_{H^1(B_{r_0})}^{1-\alpha_r } ,
\quad \alpha_r = \frac{\log 2}{\log \left(\frac{2r_0}{r}\right) + \log 2}.
\end{align*}
\end{lemma}
A proof of this Lemma is given in Section~\ref{s:proof-from-aronszajn}, starting from a Carleman estimate (with singular weight) due to Aronszajn~\cite{Aronsjajn:57} (see also \cite{AKS:62,DF:88,DF:90}).
The last lemma is an interpolation inequality with boundary observation term. All terms are taken on sets of size $r$, and the important feature of this estimate is that the constants are uniform in $r$.
\begin{lemma}[Uniform local interpolation at the boundary on small balls]
\label{t:unif-LR}
Let $(0 , x_0) \in \{0\} \times \ensuremath{\mathcal M}$ with $\dist_g(x_0, \d \ensuremath{\mathcal M})>0$ (all balls below are centered at $(0,x_0)$). Then, there exist $C>0$, $r_0>0$ and $\alpha_0 \in (0,1)$ such that we have for all $0<r<r_0$
\begin{equation*}
\|F\|_{H^1_r(B_r)} \leq C \left(r^2 \|P F\|_{L^2(B_{2r})} + r^{3/2} \| \d_s F|_{s=0} \|_{L^2(B_{2r}\cap \{0\}\times \ensuremath{\mathcal M})} \right)^{\alpha_0} \|F\|_{H^1_r(B_{2r})}^{1-\alpha_0}
\end{equation*}
for all $F \in H^2(X_{2S})$ such that $F|_{(-2S,2S)\times \d M}=0$.
\end{lemma}
This lemma is proved in Section~\ref{subsectLmscaling}, as a consequence of a uniform Carleman estimate proved in Appendix~\ref{s:carleman}.
\subsection{Concluding the proof of Theorem~\ref{t:unif-LR-ineq} from the three lemmata}
From these three lemmata, we may now give a proof of Theorem~\ref{t:unif-LR-ineq}. We first formulate a straightforward corollary of the three lemmata to prepare the proof.
\begin{corollary}
Let $P= -\d_s^2 - \Delta_g$
and $(0 , x_0) \in \{0\} \times \Int( \ensuremath{\mathcal M})$ and consider balls centered at $(0,x_0)$. Then, there exists $r_0 >0$, $C>0$ and $\alpha_0 \in (0,1)$ such that, for all $r \in (0, \frac{r_0}{10})$ and $F\in H^2(X_{2S})$ with $PF=0$ and $F|_{(-2S,2S)\times \d M}=0$, we have
\begin{align*}
\|F\|_{H^1(X_{S})} & \leq C \| F \|_{H^1(B_{\frac{r_0}{4}})}^{\alpha_0} \|F\|_{H^1(X_{2S})}^{1-\alpha_0} , \\
\|F\|_{H^1(B_{\frac{r_0}{4}})} &\leq C \|F\|_{H^1(B_r)}^{\alpha_r} \|F\|_{H^1(X_{2S})}^{1-\alpha_r } ,
\quad \alpha_r = \frac{\log 2}{\log \left(\frac{2r_0}{r}\right) + \log 2} ,\\
\|F\|_{H^1(B_r)} &\leq C \|\d_s F|_{s=0} \|_{L^2(B_{2r}\cap \{0\}\times \ensuremath{\mathcal M})}^{\alpha_0} \|F\|_{H^1(X_{2S})}^{1-\alpha_0} .
\end{align*}
\end{corollary}
\begin{proof}[Proof of Theorem~\ref{t:unif-LR-ineq}]
Let us first treat the case where $\d \ensuremath{\mathcal M} = \emptyset$, or $\d \ensuremath{\mathcal M} \neq \emptyset$ but the center $x_0$ of the balls lies in $\Int(\ensuremath{\mathcal M})$. The case of $x_0$ near $\d \ensuremath{\mathcal M}$ will be treated afterwards.
We reformulate (again) these three results as (in a form close to that of~\cite{DF:88})
\begin{align*}
\frac{ \|F\|_{H^1(X_{2S})}}{\|F\|_{H^1(B_{\frac{r_0}{4}})}}
& \leq \left( C \frac{ \|F\|_{H^1(X_{2S})}}{\|F\|_{H^1(X_S)}} \right)^{\frac{1}{\alpha_0}} , \\
\frac{ \|F\|_{H^1(X_{2S})}}{\|F\|_{H^1(B_r)}}
& \leq \left( C \frac{ \|F\|_{H^1(X_{2S})}}{\|F\|_{H^1(B_{\frac{r_0}{4}})}} \right)^{\frac{1}{\alpha_r}} , \\
\frac{ \|F\|_{H^1(X_{2S})}}{\|\d_s F|_{s=0} \|_{L^2(B_{2r}\cap \{0\}\times \ensuremath{\mathcal M})}}
& \leq \left( C \frac{ \|F\|_{H^1(X_{2S})}}{\|F\|_{H^1(B_r)}} \right)^{\frac{1}{\alpha_0}} ,
\end{align*}
and combine them to obtain
\begin{align}
\label{e:interp-interp}
\frac{ \|F\|_{H^1(X_{2S})}}{\|\d_s F|_{s=0} \|_{L^2(B_{2r}\cap \{0\}\times \ensuremath{\mathcal M})}}
& \leq C^{\frac{1}{\alpha_0}} C^{\frac{1}{\alpha_0\alpha_r}} C^{\frac{1}{\alpha_0^2\alpha_r}} \left( \frac{ \|F\|_{H^1(X_{2S})}}{\|F\|_{H^1(X_S)}} \right)^{\frac{1}{\alpha_0^2\alpha_r}} .
\end{align}
We then follow~\cite{LR:95,JL:99,LZ:98,LeLe:09}, and, given $\psi \in E_{\leq \lambda}$ take the function
$$
F(s) = \frac{\sinh(s \sqrt{-\Delta_g})}{\sqrt{-\Delta_g}} \Pi_+ \psi + s \Pi_0 \psi ,
$$
where $\Delta_g$ is the Dirichlet Laplacian, $\Pi_0$ the orthogonal projector on $\ker(\Delta_g)$ and $\Pi_+ = \id-\Pi_0$, that is $F$ is the unique solution to
$$
(-\d_s^2-\Delta_g)F = 0 , \quad F|_{(-2S,2S)\times \d M}=0, \quad (F, \d_s F)|_{s=0} = (0 , \psi) .
$$
Classical computations (see e.g.~\cite[Proof of Theorem~5.4]{LeLe:09}) show that there is $C>1$ such that for all $\lambda\geq 0$ and $\psi \in E_{\leq\lambda}$, we have
$$
\frac{1}{C}\|\psi\|_{L^2(\ensuremath{\mathcal M})} \leq \|F\|_{H^1(X_{S})} \leq \|F\|_{H^1(X_{2S})} \leq C e^{3S \sqrt{\lambda}} \|\psi\|_{L^2(\ensuremath{\mathcal M})} .
$$
As a consequence, \eqref{e:interp-interp} yields for some $C, \kappa>0$, for all $\lambda\geq 0$, $\psi \in E_{\leq\lambda}$, and $r \in (0,\frac{r_0}{4})$
\begin{equation}
\label{e:JL-almost-finished}
\frac{\|\psi\|_{L^2(\ensuremath{\mathcal M})}}{\|\psi\|_{L^2(B_\ensuremath{\mathcal M}(x_0, 2r))}} \leq C^{\kappa + \frac{1}{\alpha_r}} e^{(\kappa + \frac{1}{\alpha_r})\sqrt{\lambda}} .
\end{equation}
Recalling the definition of $\alpha_r$, this is the sought result of Theorem~\ref{t:unif-LR-ineq} (up to changing $2r$ into $r$, and the names of the constants accordingly) with the restriction $r \in (0,\frac{r_0}{4})$. To conclude for all $r>0$, it suffices to notice that \eqref{e:JL-almost-finished} remains true with $\alpha_{\frac{r_0}{16}}$ on the r.h.s. uniformly for observation terms $\|\psi\|_{L^2(B_\ensuremath{\mathcal M}(x_0, 2r))}$ with $r \geq \frac{r_0}{8}$ (the constants are non-increasing functions of the observation set).
\bigskip
To conclude the proof in the general case, we need to consider the situation $\d \ensuremath{\mathcal M} \neq \emptyset$ in full generality. We again follow~\cite{DF:88,JL:99}.
In this case, we define the double manifold $\widetilde{\ensuremath{\mathcal M}} = \ensuremath{\mathcal M} \sqcup \ensuremath{\mathcal M}$, obtained by gluing two copies of $\ensuremath{\mathcal M}$ along their boundary, endowed with a smooth structure of compact manifold, as in~\cite[Theorem~9.29-Example~9.32]{Lee:book}. The procedure is very well explained in~\cite[Section~3]{Anton:08}, and we only sketch the proof.
We extend the metric $g$ on $\ensuremath{\mathcal M}$ by symmetry/parity with respect to the boundary $\d\ensuremath{\mathcal M}$ as a metric $\tilde{g}$ on $\widetilde{\ensuremath{\mathcal M}}$. Note that even if $g$ is smooth, the extended metric $\tilde{g}$ is only Lipschitz on $\widetilde{\ensuremath{\mathcal M}}$. This is not an issue since the three lemmata~\ref{t:global-interp-LR},~\ref{l:aronsajn-arleman-interp} and~\ref{t:unif-LR} remain valid for Lipschitz metrics (as a consequence of Appendix~\ref{s:carleman}, \cite{AKS:62,DF:90}, and Appendix~\ref{s:carleman}, respectively).
In the case of Dirichlet boundary condition on $\d \ensuremath{\mathcal M}$, and given $\psi \in E_{\leq \lambda}$ we take its anti-symmetric/odd extension on $\widetilde{\ensuremath{\mathcal M}}$, yielding a function $\tilde{\psi} \in \tilde{E}_{\leq \lambda}$. Here, $ \tilde{E}_{\leq \lambda}$ is the counterpart of $E_{\leq \lambda}$ defined for the Laplace-Beltrami operator $\Delta_{\tilde{g}}$ on $\widetilde{\ensuremath{\mathcal M}}$.
The above computations are then made for $\Delta_{\tilde{g}}$ on $\widetilde{\ensuremath{\mathcal M}}$ and the estimate~\eqref{e:JL-almost-finished} is proved for $\tilde{\psi}$. The same estimate for $\psi$ follows.
Similarly, in the case of Neumann boundary condition, we take the symmetric/even extension of functions, yielding the sought result.
\end{proof}
\subsection{A proof of Lemma~\ref{l:aronsajn-arleman-interp} from Aronszajn estimates}
\label{s:proof-from-aronszajn}
In this section, we give a proof of Lemma~\ref{l:aronsajn-arleman-interp} starting from the Carleman-Aronszajn estimates stated in~\cite[Proposition~2.10]{DF:88} and~\cite[Proposition~2.10]{DF:90} (slightly modified according to the remarks in~\cite[Beginning of Section~14.3]{JL:99}), which we now state. An alternative proof of a closely related estimate is given by H\"ormander in~\cite[Inequality~(17.2.11), Chapter~XVII.2]{Hoermander:V3}.
\begin{proposition}
\label{p:carleman-aronsajn}
Let $P= -\d_s^2 - \Delta_g$ and let $(\rho, t) \in (0,r_1) \times \S^{n}$ be geodesic polar coordinates around a point $(s_0, x_0) \in X_S$ away from the boundary.
Then, there exists a function $\bar{\rho}(\rho)$ with
\bnan
\label{e:rho=rhobar}
\bar{\rho}=\rho + O(\rho^2) , \quad \text{ as } \rho \to 0^+ ,
\enan
and constants $\tau_0, C, r_0 >0$, such that we have
\bna
C \intop |\bar{\rho}^{-\tau}Pu|^2 \rho^{-1} d\rho dt \geq \intop \left(|\bar{\rho}^{-\tau}\nabla u|^2 + |\bar{\rho}^{-\tau}u|^2 \right)\rho^{-1} d\rho dt , \quad \text{for all } \tau \geq \tau_0 , \quad u \in C^\infty_0(B_{r_0} \setminus \{0\} ).
\ena
\end{proposition}
With this Carleman-Aronszajn estimate in hand, we now give a proof of Lemma~\ref{l:aronsajn-arleman-interp}.
\bnp[Proof of Lemma~\ref{l:aronsajn-arleman-interp}]
We use the estimate of Proposition~\ref{p:carleman-aronsajn} as in~\cite{LR:95} (see also~\cite[Section~5]{LeLe:09}) to deduce an interpolation inequality. We introduce for this (as in~\cite[Beginning of Section~3]{DF:88}) a cutoff function $\chi_\rr = \chi_\rr(\rho)$ such that, with $0<\rr < \frac{r_0}{2}$ a small parameter (appearing in the statement of the lemma)
\bna
\supp(\chi_\rr ) \subset \left\{ \frac{\rr}{2} < \bar{\rho} < r_0 \right\}, \quad \chi_\rr = 1 \text{ on } \left\{ \rr < \bar{\rho} < \frac{r_0}{2} \right\} , \\
|\d^{\alpha} \chi_\rr| \leq C_\alpha \rr^{-|\alpha|} \text{ on } \left\{ \frac{\rr}{2} < \bar{\rho} < \rr \right\},
\quad |\d^{\alpha} \chi_\rr| \leq C_\alpha \text{ on } \left\{ \frac{r_0}{2} < \bar{\rho} < r_0 \right\}.
\ena
We apply Proposition~\ref{p:carleman-aronsajn} to $u = \chi_\rr F$. The operator $[P, \chi_\rr]$ is a first order differential operator with $\supp [P, \chi_\rr] \subset \left\{ \frac{\rr}{2} < \bar{\rho} < \rr \right\} \cup \left\{ \frac{r_0}{2} < \bar{\rho} <r_0 \right\}$, being moreover of the form $O(\rr^{-1})D + O(\rr^{-2})$ on the set $\left\{ \frac{\rr}{2} < \bar{\rho} < \rr \right\}$. Therefore, using~\eqref{e:rho=rhobar}, we obtain for all $\tau \geq \tau_0$
\begin{align*}
\intop \left(|\bar{\rho}^{-\tau}\nabla( \chi_\rr F)|^2 + |\bar{\rho}^{-\tau}\chi_\rr F|^2 \right)\rho^{-1} d\rho dt & \leq C \intop |\bar{\rho}^{-\tau}\chi_\rr P F|^2 \rho^{-1} d\rho dt + C \intop |\bar{\rho}^{-\tau}[P ,\chi_\rr] F|^2 \rho^{-1} d\rho dt \\
& \leq C\left(\frac{\rr}{2}\right)^{-2\tau-1} \| P F \|_{L^2(\bar{B}_{r_0})}^2 + C\left(\frac{\rr}{2}\right)^{-2\tau-2} \|F\|_{H^1_r(\frac{\rr}{2}\leq \bar{\rho} \leq \rr)}^2 \\
& \quad + C \left(\frac{r_0}{2}\right)^{-2 \tau}\|F\|_{H^1(\frac{r_0}{2}\leq \bar{\rho} \leq r_0)}^2 ,
\end{align*}
where $\bar{B}_{r_0}$ denotes the set $\{\bar{\rho}\leq r_0\}$. Recall that the norm $H^1_r$ is defined in~\eqref{e:def-norm-r}.
Concerning the left-hand side, we bound it from below by
\begin{align*}
\intop \left(|\bar{\rho}^{-\tau}\nabla( \chi_\rr F)|^2 + |\bar{\rho}^{-\tau}\chi_\rr F|^2 \right)\rho^{-1} d\rho dt & \geq \intop_{2\rr \leq \bar{\rho} \leq \frac{r_0}{4}} \left(|\bar{\rho}^{-\tau}\nabla( \chi_\rr F)|^2 + |\bar{\rho}^{-\tau}\chi_\rr F|^2 \right)\rho^{-1} d\rho dt \\
& \geq \left(\frac{r_0}{4}\right)^{-2 \tau} \|F\|_{H^1(\rr \leq \bar{\rho} \leq \frac{r_0}{4})}^2 .
\end{align*}
Combining the last two estimates together with the fact that $\left(\frac{r_0}{4}\right)^{-\tau} \|F\|_{H^1(\bar{B}_\rr)} \leq \left(\frac{\rr}{2}\right)^{- \tau} \|F\|_{H^1(\bar{B}_\rr)}$ yields, for some $\tau_0>0$ and all $\tau\geq \tau_0$ and $\rr \in (0, \frac{r_0}{10})$,
\begin{align*}
\left(\frac{r_0}{4}\right)^{-\tau} \|F\|_{H^1(\bar{B}_{\frac{r_0}{4}})} \leq C\left(\frac{\rr}{2}\right)^{-\tau} \left( \| P F \|_{L^2(\bar{B}_{r_0})}+ \|F\|_{H^1_r(\bar{B}_\rr)} \right)
+ C \left(\frac{r_0}{2}\right)^{- \tau}\|F\|_{H^1(\bar{B}_{r_0})} .
\end{align*}
Multiplying by $r_0^\tau$ and recalling~\eqref{e:rho=rhobar} to replace balls in $\bar{\rho}$ by real balls, we obtain, up to changing the names of the parameters $\rr, r_0$, that
\begin{align*}
\|F\|_{H^1(B_{\frac{r_0}{4}})} \leq C\left(\frac{2r_0}{\rr}\right)^{\tau} \left( \| P F \|_{L^2(B_{r_0})}+ \|F\|_{H^1_r(B_\rr)} \right)
+ \frac{C}{2^\tau}\|F\|_{H^1(B_{r_0})} .
\end{align*}
An optimization in $\tau \geq\tau_0$~\cite{Robbiano:95} (see also~\cite[Lemma~5.2]{LeLe:09}) then implies the following interpolation inequality
\begin{align*}
\|F\|_{H^1(B_{\frac{r_0}{4}})} \leq C\left( \| P F \|_{L^2(B_{r_0})}+ \|F\|_{H^1_r(B_\rr)} \right)^{\alpha_\rr} \|F\|_{H^1(B_{r_0})}^{1-\alpha_\rr } ,
\quad \alpha_\rr = \frac{\log 2}{\log \left(\frac{2r_0}{\rr}\right) + \log 2},
\end{align*}
and concludes the proof of the lemma.
\enp
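For the reader's convenience, we sketch the optimization in $\tau$ used above (a standard computation; we write $a = \| P F \|_{L^2(B_{r_0})}+ \|F\|_{H^1_r(B_\rr)}$ and $b = \|F\|_{H^1(B_{r_0})}$, and assume that $b/a$ is large enough for the optimal $\tau$ below to satisfy $\tau \geq \tau_0$; otherwise $a$ controls a fixed fraction of $b$ and the interpolation inequality is immediate):

```latex
% Setting M = 2r_0/\rr, the last displayed estimate reads
%   \|F\|_{H^1(B_{r_0/4})} \le C M^\tau a + C 2^{-\tau} b ,
% and the choice \tau^* = \log(b/a)/\log(2M) equalizes the two terms:
\begin{align*}
C M^{\tau^*} a \;=\; C\, 2^{-\tau^*} b
\;=\; C\, a^{\frac{\log 2}{\log (2M)}}\, b^{\frac{\log M}{\log (2M)}}
\;=\; C\, a^{\alpha_\rr}\, b^{1-\alpha_\rr},
\qquad \alpha_\rr = \frac{\log 2}{\log M + \log 2} .
\end{align*}
```

This is the claimed interpolation inequality, up to a factor $2$ absorbed in the constant $C$.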
\subsection{A proof of Lemma~\ref{t:unif-LR} from Proposition~\ref{p:unif-interp-boundary-metric}}
\label{subsectLmscaling}
In this section, we give a proof of Lemma~\ref{t:unif-LR}. The proof consists in performing a scaling argument to reduce the problem to fixed-size balls. However, the scaling argument yields in these fixed balls a family of metrics (converging to a fixed metric as $r\to 0$), and we need uniform interpolation/Carleman estimates for such families of metrics. These uniform estimates are proved in Appendix~\ref{s:carleman} (Proposition~\ref{p:unif-interp-boundary-metric}).
\bnp[Proof of Lemma~\ref{t:unif-LR}]
We first choose $r_0$ small enough so that $\overline{B_{2r_0}} \subset X_S$ and there exists a local coordinate patch on $\ensuremath{\mathcal M}$: $\Phi: \{x \in \ensuremath{\mathcal M},\ \dist(x,x_0)<2r_0\}\to U$, where $U$ is a neighborhood of $0$ in ${\mb{R}}^{n}$ and $\Phi(x_0) = 0$. Up to multiplication by an invertible constant matrix, we may assume that $\left((\Phi^{-1})^*g\right) (0) = \id$.
As a consequence, $ds^2 \otimes \left((\Phi^{-1})^*g\right) (ry)$, defined on the ball of radius $2$, converges uniformly in this ball towards the flat metric on the flat ball of ${\mb{R}}^{n+1}$ in the limit $r \to 0^+$.
We will thus only use the flat metric in the present proof which behaves well with respect to scaling. The distance (hence the balls, still denoted $B_r$ or $B_1$ below, all centered at $0$) will be defined with respect to the flat metric, as well as the Sobolev norms (still denoted $H^1_r(B_r)$, $H^1(B_1)$ below).
The final result we obtain will be formulated in terms of the flat metric, and associated balls and Sobolev spaces. Coming back to a formulation on the manifold ${\mb{R}} \times \ensuremath{\mathcal M}$ with the metric $ds^2 \otimes g$ only uses the uniform equivalence of norms in $T^*({\mb{R}} \times \ensuremath{\mathcal M})$ and in $L^2({\mb{R}} \times \ensuremath{\mathcal M})$ for $r$ sufficiently small.
With this in mind, let us now proceed with the scaling argument in the coordinate chart. Set $F_r(x)=F(rx)$ and denote by $P_r$ the Laplace-Beltrami operator with respect to the metric $ds^2 \otimes \left((\Phi^{-1})^*g\right)(ry)$ defined on the ball of radius $2$. We then have
\begin{align*}
\|F\|_{H^1_r(B_r)}&=r^{(n+1)/2}\|F_r\|_{H^1(B_1)}, \\
r^2 \|P F\|_{L^2(B_{2r})}&= r^{(n+1)/2}\|P_r F_r\|_{L^2(B_{2})}, \\
r^{3/2} \| \d_s F|_{s=0} \|_{L^2(B_{2r}\cap \{0\}\times \ensuremath{\mathcal M})} &=r^{1/2}r^{n/2} \| \d_s F_r|_{s=0} \|_{L^2(B_{2}\cap \{0\}\times \ensuremath{\mathcal M})}.
\end{align*}
Note that the metric $ds^2 \otimes g(r\cdot)$ defined on $B_2$ converges uniformly, as $r$ tends to zero, to the flat metric $ds^2 \otimes g(0) = ds^2\otimes dy_1^2\otimes \cdots \otimes dy_n^2$ for the Lipschitz topology on metrics.
So, the result follows if we are able to prove the following estimate: there exist $\epsilon, \alpha_0 , C > 0$ such that for all Lipschitz metrics $\mathfrak{g}$ with $\|\mathfrak{g}- \id\|_{W^{1,\infty}}<\epsilon$ and all $u\in H^2(B_2)$ such that $u|_{s=0}=0$, we have
\begin{equation*}
\|u\|_{H^1(B_1)} \leq C \left(\|(-\d_s^2 - \Delta_{\mathfrak{g}}) u\|_{L^2(B_2)} + \| \d_s u|_{s=0} \|_{L^2(B_2\cap \{0\}\times {\mb{R}}^{n})} \right)^{\alpha_0} \|u\|_{H^1(B_2)}^{1-\alpha_0}.
\end{equation*}
This is the object of Proposition~\ref{p:unif-interp-boundary-metric} proved in the Appendix. Note that the result of Proposition~\ref{p:unif-interp-boundary-metric} is stated with half-balls $B_k^+$ but is also true with real balls $B_k$ instead by a symmetry argument.
\enp
\section{The observability constant for positive solutions}
\label{s:positive}
The aim of this section is to prove the positive result of Theorem~\ref{thmpositive} about the observability of positive solutions. The main tool is the following Li-Yau estimate.
\begin{theorem}[Theorem 2.3 of Li-Yau \cite{LY:86}]
\label{t:Li-Yau}
Let $\ensuremath{\mathcal M}$ be a compact manifold. Let $$- K = \min(0, \min_{x \in \ensuremath{\mathcal M}} Ricc(x) ) \leq 0 ,$$ where $Ricc(x)$ is the Ricci curvature at $x$.
We assume that the boundary of $\ensuremath{\mathcal M}$ is convex, i.e. $II>0$. Let $u(t,x)$ be a positive solution on $(0, +\infty)$ of the heat equation
with Neumann boundary condition. Then for any $\alpha>1$, $x, y\in \ensuremath{\mathcal M}$, and $0<t_1<t_2$, we have
\bna
u(t_1,x)\leq \left(\frac{t_2}{t_1}\right)^{n\alpha/2}e^{\frac{n\alpha K(t_2-t_1)}{\sqrt{2}(\alpha-1)}}e^{\alpha\frac{ d(x,y)^2}{4(t_2-t_1)}}u(t_2,y) .
\ena
\end{theorem}
\begin{remark}
\label{r:non-conv-Li-Yau}
The convexity assumption is not necessary to obtain a Li-Yau type estimate (if the boundary is smooth), up to a loss in the exponent. Indeed, setting $- H= \min(0, \min_{x \in \d \ensuremath{\mathcal M}} II(x) ) \leq 0$, where $II(x)$ is the second fundamental form of $\d \ensuremath{\mathcal M}$ with respect to the outward pointing normal, Wang proves in \cite[Theorem 3.1]{W:97} the estimate
\bna
u(t_1,x)\leq \left(\frac{t_2}{t_1}\right)^{C_\alpha}e^{C_\alpha' (t_2-t_1)} e^{\alpha\frac{ d(x,y)^2}{4(t_2-t_1)}}u(t_2,y) , \quad \text{for all } \alpha> (1+H)^2 .
\ena
The proof of Theorem~\ref{thmpositive} shows that the result still holds without the convexity assumption, but yields
\bna
\nor{u(T)}{L^2(\ensuremath{\mathcal M})}^2\leq \frac{C_\e}{T} e^{(1+H+\e)^2\frac{\mathcal{L}(M,\omega)^2}{2T}}\intop_0^{T}\nor{u(t, \cdot)}{L^2(\omega)}^2~dt ,\\
\nor{u(T)}{L^2(\ensuremath{\mathcal M})}^2\leq \frac{C_\e}{T} e^{(1+H +\e)^2\frac{\mathcal{L}(M,z_0)^2}{2T}}\intop_0^{T}u(t,z_0)^2~dt ,
\ena
instead of~\eqref{estimpos1}-\eqref{estimpos2} (hence with a loss $(1+H)^2$ in the exponent). We do not know whether this is optimal.
Finally, we did not find any analogous estimate in the case of Dirichlet boundary conditions.
\end{remark}
\bnp[Proof of Theorem \ref{thmpositive}]
As will appear along the proof, we need the following asymptotic constants, all depending on the chosen $\e>0$.
Namely, we shall use $\eta_0>0$ arbitrarily small, $r>1$ arbitrarily large, $\lambda \in (0,1)$ arbitrarily close to $1$, and $\alpha>1$ arbitrarily close to $1$. Given $\varepsilon>0$, they will all be fixed at the end so that $$ \frac{r\alpha }{(r-1)\lambda}(d_\omega + 3\eta_0 )^2 \leq (1+\e) d_\omega^2 .$$
\medskip
For any $x_0\in\ensuremath{\mathcal M}$ and for any $\eta_0>0$, there exist $\eta = \eta (x_0, \eta_0) \in (0, \eta_0)$ and $y_0\in\omega$ such that
\bna
d(x_0,y_0)\leq d_\omega +\eta, \quad \text{ and } \quad B(y_0,\eta)\subset \omega .
\ena
In particular, we have $\ensuremath{\mathcal M} \subset \bigcup_{x_0 \in \ensuremath{\mathcal M}}B(x_0, \eta)$ so that, the compactness of $\ensuremath{\mathcal M}$ yields the following statement: given $\eta_0>0$, there exist a finite set $J$ and families $(x_j)_{j\in J} \in \ensuremath{\mathcal M}^J$, $(y_j)_{j\in J} \in \omega^J$ and $(\eta_j)_{j \in J} \in (0,\eta_0)^J$ such that
\bna
\ensuremath{\mathcal M} \subset \bigcup_{j \in J}B(x_j, \eta_j), \quad
d(x_j,y_j)\leq d_\omega +\eta_j , \quad \text{ and } \quad B(y_j,\eta_j )\subset \omega , \quad \text{for all }j \in J.
\ena
Now, fix $j \in J$ and take $x\in B(x_j,\eta_j)$ and $y\in B(y_j,\eta_j)\subset \omega$; we then have
$$
d(x,y)\leq\eta_j + d_\omega + \eta_j + \eta_j \leq d_\omega + 3 \eta_0 =: d_m .
$$
For $t\in [0,T/r]$, Theorem~\ref{t:Li-Yau} with $t_1=t$ and $t_2=rt_1=rt$ then yields
\bna
u(t,x)^2\leq r^{n\alpha}e^{\frac{2n\alpha Kt(r-1)}{\sqrt{2}(\alpha-1)}}e^{\frac{\alpha d_m^2}{2(r-1)t}}u(rt,y)^2.
\ena
Denoting
$$
\gamma := \frac{2n\alpha K(r-1)}{\sqrt{2}(\alpha-1)} ,
$$
this may be rewritten as
\bnan
\label{e:before-int-y}
u(t,x)^2 e^{-\frac{\alpha d_m^2}{2(r-1)t}}\leq r^{n\alpha}e^{\gamma t}u(rt,y)^2.
\enan
We may now integrate this estimate for $x\in B(x_j,\eta_j)$ and $y\in B(y_j,\eta_j)\subset \omega$,
\bna
e^{-\frac{\alpha d_m^2}{2(r-1)t}}\nor{u(t)}{L^2(B(x_j,\eta_j))}^2
\leq \frac{|B(x_j, \eta_j)|}{|B(y_j, \eta_j)|}r^{n\alpha} e^{\gamma t} \nor{u(rt)}{L^2(B(x_j,\eta_j))}^2
\leq \frac{|B(x_j, \eta_j)|}{|B(y_j, \eta_j)|}r^{n\alpha} e^{\gamma t} \nor{u(rt)}{L^2(\omega)}^2 .
\ena
Summing all these estimates for $j \in J$ yields, for a constant $C(\eta_0)$ depending only on the geometry of $(\ensuremath{\mathcal M},g)$, of $\omega$, and the constant $\eta_0$, the inequality
\bna
e^{-\frac{\alpha d_m^2}{2(r-1)t}}\nor{u(t)}{L^2(\ensuremath{\mathcal M})}^2
\leq C(\eta_0) r^{n\alpha} e^{\gamma t} \nor{u(rt)}{L^2(\omega)}^2 .
\ena
Given $\lambda \in (0,1)$, integrating this on the interval $t\in [\lambda T/r,T/r]$ yields
\bna
\intop_{\lambda T/r}^{T/r}e^{-\frac{\alpha d_m^2}{2(r-1)t}}\nor{u(t)}{L^2}^2 dt
& \leq &C(\eta_0) r^{n\alpha} \intop_{\lambda T/r}^{T/r} e^{\gamma t} \nor{u(rt)}{L^2(\omega)}^2 dt \\
& \leq & C(\eta_0) r^{n\alpha} e^{\gamma \frac{T}{r}} \intop_{\lambda T/r}^{T/r} \nor{u(rt)}{L^2(\omega)}^2 dt
= C(\eta_0) r^{n\alpha} e^{\gamma \frac{T}{r}} \intop_{\lambda T}^{T} \nor{u(s)}{L^2(\omega)}^2 ds,
\ena
after the change of variables $s=rt$. Concerning the left-hand side, we use the decay of the $L^2$ norm of solutions to the heat equation to write
\bnan
\label{e:remplace-r-t}
\nor{u(t)}{L^2(\ensuremath{\mathcal M})}\geq \nor{u(T/r)}{L^2(\ensuremath{\mathcal M})} \geq \nor{u(T)}{L^2(\ensuremath{\mathcal M})} ,
\enan for all $t\in [\lambda T/r,T/r]$ since $r>1$. Noting also that $t \mapsto e^{-\frac{\alpha d_m^2}{2(r-1)t}}$ is increasing in $t>0$, we have
\bna
\intop_{\lambda T/r}^{T/r}e^{-\frac{\alpha d_m^2}{2(r-1)t}} dt \geq \frac{T(1-\lambda)}{r}e^{-\frac{r\alpha d_m^2}{2(r-1)\lambda T}}.
\ena
Combining the above three estimates yields
\bna
\frac{T(1-\lambda)}{r}e^{-\frac{r\alpha d_m^2}{2(r-1)\lambda T}} \nor{u(T)}{L^2(\ensuremath{\mathcal M})}^2
\leq C(\eta_0) r^{n\alpha} e^{\gamma \frac{T}{r}} \intop_{\lambda T}^{T} \nor{u(s)}{L^2(\omega)}^2 ds ,
\ena
that is, for all $\eta>0$, $r>1$, $\lambda \in (0,1)$, and $\alpha>1$,
\bna
\nor{u(T)}{L^2(\ensuremath{\mathcal M})}^2
\leq \frac{C(\eta) r^{n\alpha+1}}{T(1-\lambda)} e^{\frac{2n\alpha K(r-1)}{\sqrt{2}(\alpha-1)} \frac{T}{r}} e^{\frac{r\alpha (d_\omega +\eta)^2}{2(r-1)\lambda T}} \intop_{\lambda T}^{T} \nor{u(s)}{L^2(\omega)}^2 ds .
\ena
But $\frac{r}{r-1}=1+\frac{1}{r-1}$ can be made arbitrarily close to $1^+$ for $r$ large, $\lambda$ close to $1^-$, $\alpha$ close to $1^+$, and $\eta$ close to $0^+$, so that $\frac{r\alpha (d_\omega +\eta)^2}{2(r-1)\lambda T} \leq \frac{d_\omega^2+\varepsilon }{2T}$. We have thus proved the first statement.
To be a little more precise, we can choose $\alpha, r$ such that $\frac{1}{r}+\frac{1}{\alpha}=1$. This yields
\bna
\nor{u(T)}{L^2(\ensuremath{\mathcal M})}^2
\leq \frac{C(\eta) \left( \frac{\alpha}{\alpha-1} \right)^{n\alpha+1}}{T(1-\lambda)} e^{\frac{2nK}{\sqrt{2}(\alpha-1)}T} e^{\frac{\alpha^2 (d_\omega +\eta)^2}{2 \lambda T}} \intop_{\lambda T}^{T} \nor{u(s)}{L^2(\omega)}^2 ds ,
\ena
or, with $\alpha=1+\epsilon$ and $\lambda=1-\epsilon$, we obtain for all $\epsilon \in (0,1)$
\bna
\nor{u(T)}{L^2(\ensuremath{\mathcal M})}^2
& \leq & \frac{C(\eta) \left(\frac{1+\epsilon}{\epsilon}\right)^{(1+\epsilon)n+1}}{T\epsilon} e^{\frac{2nK}{\sqrt{2}\epsilon}T} e^{\frac{(1+\epsilon)^2}{1-\epsilon} \frac{(d_\omega +\eta)^2}{2 T}} \intop_{(1-\epsilon) T}^{T} \nor{u(s)}{L^2(\omega)}^2 ds \\
& \leq & \frac{C(\eta)}{T\epsilon^{2n+2}} e^{\frac{2nK}{\sqrt{2}\epsilon}T} e^{\frac{(1+\epsilon)^2}{1-\epsilon} \frac{(d_\omega +\eta)^2}{2 T}} \intop_{(1-\epsilon) T}^{T} \nor{u(s)}{L^2(\omega)}^2 ds .
\ena
So we have proved the first estimate of the theorem. The second can be obtained similarly by integrating~\eqref{e:before-int-y} in the $x$ variable only, and not in the $y$ variable.
\enp
\begin{remark}
In fact, remark that from~\eqref{e:remplace-r-t} on, we could also keep $\nor{u(T/r)}{L^2(\ensuremath{\mathcal M})}^2$ on the left-hand side of all estimates of the proof, which amounts to $\nor{u(T\frac{\epsilon}{1+\epsilon})}{L^2(\ensuremath{\mathcal M})}^2$; in particular, we have the much stronger statement
\bna
\nor{u((1-\epsilon)T)}{L^2(\ensuremath{\mathcal M})}^2
\leq \frac{C(\eta) r^{n\alpha+1}}{T\epsilon} e^{\frac{2nK}{\sqrt{2}\epsilon}T} e^{\frac{(1+\epsilon)^2}{1-\epsilon} \frac{(d_\omega +\eta)^2}{2 T}} \intop_{(1-\epsilon) T}^{T} \nor{u(s)}{L^2(\omega)}^2 ds .
\ena
\end{remark}
\begin{remark}
\label{rkexplicitpos}
All constants can be made explicit. Recall that $K = \max \left\{0 , - \min_{x\in \ensuremath{\mathcal M}} Ricci(x) \right\}$. For instance, we have, for all $\eta>0$, $\alpha>1$, $\lambda \in (0,1)$, and $r$ such that $\frac{1}{r}+\frac{1}{\alpha}=1$,
\bna
\nor{u(T)}{L^2(\ensuremath{\mathcal M})}^2
\leq \frac{C(\eta) r^{n\alpha+1}}{T(1-\lambda)} e^{\frac{2nK}{\sqrt{2}(\alpha-1)}T} e^{\frac{\alpha^2 (d_\omega +\eta)^2}{2 \lambda T}} \intop_{\lambda T}^{T} \nor{u(s)}{L^2(\omega)}^2 ds .
\ena
Choosing the constants, we have, for all $\epsilon \in (0,1)$, for all $\eta>0$,
\bna
\nor{u(T\frac{\epsilon}{1+\epsilon})}{L^2(\ensuremath{\mathcal M})}^2
& \leq & \frac{C(\eta)}{T\epsilon^{2n+2}} e^{\frac{2nK}{\sqrt{2}\epsilon}T} e^{(1+\epsilon)^3 \frac{(d_\omega +\eta)^2}{2 T}} \intop_{(1-\epsilon) T}^{T} \nor{u(s)}{L^2(\omega)}^2 ds .
\ena
Remark that for non-negatively (Ricci) curved manifolds (which is the case for a convex domain in ${\mb{R}}^n$), we have $K=0$ and the constant is
$ \frac{C(\eta)}{T\epsilon^{2n+2}} e^{(1+\epsilon)^3 \frac{(d_\omega +\eta)^2}{2 T}}$, which decays like $1/T$ for $T$ large.
\end{remark}
\section{INTRODUCTION}
\label{sec:intro}
Making robot programming feasible for beginners and easy for experts is the key to using robots beyond their conventional applications in industrial manufacturing and assembly.
Industrial and communication robots of the future will understand natural interfaces like speech and human demonstrations to learn complex skills.
Instead of a single brain-like component, these robots will be powered by many smaller cognitive services that will work in harmony to exhibit a higher level of cognition.
Each service in the system will demonstrate basic intelligence to efficiently perform a single well-defined task.
A framework for connection and orchestration of these services will be the key to a truly intelligent system.
Although the terms are often used interchangeably, we use ``cognitive'' instead of ``machine learning'' or ``artificial intelligence'' to give a notion of human-like intelligence across multiple domains.
The term artificial intelligence is mostly used very broadly, while machine learning usually refers to systems that solve a distinct problem in a single domain.
To achieve this higher level of cognitive capabilities, in this paper we present a robotics framework -- MaestROB.
Different components of MaestROB communicate through a novel robotics middleware named Project Intu.
Intu is provided as an open source project at \mbox{\url{https://github.com/watson-intu/self}}.
Intu uses IBM Watson APIs~\cite{url:watson} to provide seamless access to many services, including conversation and image recognition.
With Intu at its core, MaestROB introduces a hierarchical structure to planning by defining several levels of instructions.
By using the knowledge and ontology of physical constraints and relationships, these abstractions allow the grounding of human instructions to actionable commands.
The framework performs symbolic reasoning at a higher level, which is important for long-term autonomy and makes the whole system accountable for its actions.
Individual skills are allowed to use machine learning or rule based systems.
We provide a mechanism to extend the framework by developing new services or connecting it with other robot middlewares and scripting languages.
This allows the higher-level reasoning to be done using a PDDL (Planning Domain Definition Language) planner with the proposed extension for handling semantics, while the lower-level skills can be executed as ROS (Robot Operating System) nodes.
In MaestROB the primitive intelligence of each component is orchestrated to demonstrate complex behaviors.
Important features of MaestROB include but are not limited to a cloud based skill acquisition and sharing service, task planning with physical reasoning, perception service, ability to learn from demonstration, multi-robot collaboration, and teaching by natural language.
We show the capabilities of the framework in a scenario where a human teaches a task to a communication robot (Pepper~\cite{url:pepper}) by demonstration.
The robot understands the task and collaborates with an industrial manipulator (UR5~\cite{url:ur5}) to execute it, using the action primitives that the UR5 has previously acquired by learning or programming.
The industrial robot has the ability to perform physical manipulation, but it lacks the key sensors that can help in a particular situation (e.g., error recovery).
Using the planning and collaboration services provided by MaestROB, the communication robot, which has these sensors, can analyze and convert the plan to a command sequence for the robotic arm.
The learning capabilities of the framework and the hierarchical control capabilities of the middleware help to enable these tasks easily without the need of any explicit programming.
\section{RELATED WORK}
\label{sec:relatedwork}
Middleware serves the purpose of gluing together various components of the robot and communication between them.
The Robot Operating System (ROS)~\cite{conf:icra:quigley2009} is arguably the most used robotics middleware, especially in the research community.
Although ROS provides a backbone structure for many nodes to collaborate, it does not intrinsically provide any components that can help in planning or training the robots to perform a task.
Some other papers discuss smaller frameworks to solve more complex problems.
ROSPlan~\cite{conf:icaps:cashmore2015} is a task planning system that can generate a sequence of actions to achieve the goal when the initial model of the environment is known.
ROSPlan, being a planning system, depends on other components for learning and communication.
It is not a full robot framework, and it does not provide any mechanism to train robots.
The problem of providing the robot with reasoning capabilities in order to allow them to generate new motion types autonomously has been tackled throughout the recent years.
The RoboHow project aimed at enabling robots to perform everyday manipulation activities from web navigation and human observation.
The project introduced KnowRob, a knowledge base on actions, objects, properties and relations~\cite{article:ijrr:tenorth2013}.
This knowledge base was combined with object recognition algorithms to recognize and reason on visual observations through the RoboSherlock software framework~\cite{conf:icra:beetz2015}.
In the RoboBrain knowledge engine, the nodes of the graph structure can represent any type of robotic concept (e.g., grasping features, trajectory parameters and visual data)~\cite{article:corr:saxena2014}.
The RoboBrain knowledge engine was used to execute manipulation tasks from instructions given in natural language~\cite{article:ijrr:misra2016}.
As RoboBrain uses the knowledge acquired from semi-reliable sources, the outcome actions of an instruction are not predictable.
We, on the other hand, want to communicate using natural language, but at the same time we would like the end actions to match our expectations.
An increasing number of research papers use machine learning and cognitive technologies for collaboration, handling uncertainties, and taking optimal actions.
We discuss some of the interesting research in the area of cognitive robots with capabilities beyond that of the current generation of robots.
S. Levine et al.~\cite{article:ijrr:levine2017} learned hand-eye coordination for robotic grasping using monocular images.
Their work shows how deep convolutional neural networks can be used with big data to learn a complex task in an uncertain environment without the explicit need of camera calibration or robot pose estimation.
The Delft team won the Amazon Picking Challenge 2016~\cite{arxiv:carlos2016} by using machine learning for pose estimation, grasp planning, and motion planning.
T. Inoue et al.~\cite{conf:iros:inoue2017} showed how the data from conventional sensors can be combined with deep reinforcement learning to solve a precision insertion task.
A. Munawar et al.~\cite{conf:wacv:asim2017} presented an anomaly detection system using vision.
Such systems can enable robots to keep a check on themselves with little intervention from humans.
J. Connell et al.~\cite{conf:agi:connel2012} presented a system for physical manipulation with a robot arm.
The system can be used to teach the robot new actions using a fixed grammar.
The grammar is basic and can be difficult to extend to a more general setting.
R. Paul~\cite{conf:ijcai2017:paul} presents a probabilistic model that enables incremental grounding of natural language utterances using learned knowledge from accrued visual observations and language utterances.
The model infers over the constraints for future actions in the context of a rich space of perceptual concepts.
While interesting, most of the existing research uses customized learning and control solutions that cannot be generalized.
The proposed framework is an effort to present a common framework that can handle all these scenarios and beyond without any redundant effort.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{figures/cmd-act-skill.pdf}
\caption{Different levels of instructions in MaestROB.}
\label{fig:cmd-act-skill}
\end{figure}
\begin{table}[b]
\centering
\caption{Example of instructions at different levels.}
\label{table:cmdtable}
\includegraphics[width=1.0\columnwidth]{figures/cmdtable.pdf}
\end{table}
\section{MOTIVATION AND CONCEPTS}
\label{sec:maestrobmot}
The motivation behind MaestROB is to create a framework that performs accurate physical manipulations in the real world by watching demonstrations or taking natural language instructions.
Humans communicate at a higher level of abstraction and assume the underlying knowledge.
Machines on the other hand cannot accurately comprehend such vague instructions.
In order to make the robots understand natural language commands in a predictable manner, we propose a hierarchical structure of instructions.
Fig.~\ref{fig:cmd-act-skill} shows different levels of instructions in MaestROB, namely gestures, skills, and plans.
Gestures are platform-dependent, but abstraction increases as we move to higher-level instructions.
Examples of the different levels of instructions are given in Table~\ref{table:cmdtable}.
\subsection{Gestures}
Gestures are used to directly control the robot.
They are equivalent to the motor skills in a human body.
Gestures are executed by the underlying platform or the robot controller.
Depending on the platform, some of the gestures might not be available.
\subsection{Skills}
The next level of abstraction is the skills.
We define a skill as a piece of logic that can consume sensor input and generate a list of gestures.
A skill is an atomic operation that performs a part of the overall task.
Skills, however, cannot be executed on their own.
MaestROB provides three methods of teaching new skills to the robot:
\begin{enumerate}
\item \textit{List of gestures or skills}:
In its simplest form, a skill consists of a sequential or parallel list of gestures or other skills.
For example, a pick skill can take the pick position and perform a sequence of gestures: moving above the position, then moving down, and finally closing the gripper.
\item \textit{Rule based}:
In this method, a set of rules can be defined to consume the sensor's input and to issue appropriate gestures.
Rule based skills are simply defined as ``if\{A\}then\{B\}'' rules.
One example can be to stop the robot if the end-effector position is outside a predefined safety zone.
\item \textit{Machine learning}:
Some skills are too difficult to be defined by a program.
MaestROB provides a cloud based learning API that can be used to learn complex skills like different shape insertion task or visual servoing.
The learning service supports supervised and reinforcement learning paradigms.
The inference is done locally to satisfy real-time constraints of the robots, while the models are stored and learned in the cloud.
In this manner, robots with minimal processing capabilities can also learn new skills by making REST calls to the MaestROB learning API.
The API also supports transfer learning of models from a simulator to the real environment.
\end{enumerate}
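To make the first two methods above concrete, the following Python sketch mimics rule-based and sequential skills. This is an illustration only: the helper names \texttt{make\_rule\_skill} and \texttt{make\_sequence\_skill}, the gesture strings, and the sensor dictionary are hypothetical, not part of the actual MaestROB API.

```python
# Hypothetical sketch of MaestROB-style skills. A skill consumes sensor
# input and returns a (status, gestures) pair, where status is
# "success" or "failure" and gestures is the list of gestures to emit.

def make_rule_skill(condition, then_gestures):
    """Rule-based skill: if {condition(sensors)} then {emit gestures}."""
    def skill(sensors):
        if condition(sensors):
            return "success", list(then_gestures)
        return "failure", []
    return skill

def make_sequence_skill(subskills):
    """Skill composed of a sequential list of sub-skills."""
    def skill(sensors):
        gestures = []
        for sub in subskills:
            status, g = sub(sensors)
            if status != "success":
                return "failure", gestures  # abort on first failure
            gestures.extend(g)
        return "success", gestures
    return skill

# A safety rule that fails outside the zone x <= 1, and a "pick" skill
# built from three always-available gestures.
in_zone = make_rule_skill(lambda s: s["x"] <= 1.0, [])
pick = make_sequence_skill([
    make_rule_skill(lambda s: True, ["move_above"]),
    make_rule_skill(lambda s: True, ["move_down"]),
    make_rule_skill(lambda s: True, ["close_gripper"]),
])
```

Here the safety rule returns \texttt{failure} when the sensed position leaves the zone, while the pick skill expands into its three gestures in order.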
MaestROB provides an interface to extend the framework by implementing additional methods to define or learn new skills.
A skill can take one or more arguments as its input, e.g. \texttt{place(pose)}, \texttt{moveTo(pose, pose)}.
All skills return \texttt{success} or \texttt{failure}, based on the result of the actual execution of the action within the prescribed amount of time.
\subsection{Plans}
A plan is conceptually a high-level abstraction for going from a given initial state to a goal state.
It consists of a sequence of skills that are defined, learned, or computed by a symbolic planner.
A plan corresponds to a single instruction in a user manual or a single command issued by human.
The role of the planner in the framework is to act as a bridge between the cognitive semantics and the skill.
The framework currently provides the following methods for defining new plans.
\begin{enumerate}
\item \textit{List of skills or plans}:
A plan can be defined as a parallel or sequential list of skills or other plans, either programmatically or through natural language communication using a fixed grammar.
This method is inspired by previous work such as that of J. Connell et al.~\cite{conf:agi:connel2012}.
For example, we can tell the robot that it can wave hands by raising the right hand and moving it right and left a few times.
\item \textit{Learning from demonstration or conversation}:
Plans can also be described by defining the final or goal state verbally or programmatically.
MaestROB provides a mechanism for grounding the natural language commands to the intended goal states by using natural language classification and mapping.
Another method supported by the framework is to show the key states of the system, instead of giving the initial and the goal states explicitly.
Humans can understand the sequence of actions they need to perform just by looking at the initial and the final state of a task.
By using the perception and relationship extraction, the planner can help the robot do the same by using the available skills.
Before computing and executing the plan, the framework checks the current condition of the world to know the initial state.
\end{enumerate}
Similar to skills, we define an interface to implement other methods of planning.
A plan is logic that generates a sequence of one or more skills.
All plans return \texttt{success} or \texttt{failure}.
Unlike skills, a plan can be executed independently.
It can consist of a single skill, a list of skills or dynamically computed skills based on a planner or a similar system.
Plans can be launched by human instructions, in response to some condition, or in a continuous loop.
In case of a skill failure, a new plan can be generated to accomplish the goal, using the current state as the new initial state.
However, human intervention may be required if the planner fails to find a suitable plan.
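To make the plan contract concrete, here is a minimal sketch (our own illustration, with toy skill functions whose names are hypothetical) of a sequential plan that fails as soon as any of its skills fails:

```python
def run_plan(steps):
    """A plan as a sequential list of (skill, args) pairs.
    The plan returns "failure" as soon as any skill fails."""
    for skill, args in steps:
        if skill(*args) != "success":
            return "failure"
    return "success"

# Toy skills for the hand-waving example from the text.
def raise_right_hand():
    return "success"

def wave(times):
    return "success" if times > 0 else "failure"

wave_hands = [(raise_right_hand, ()), (wave, (3,))]
print(run_plan(wave_hands))   # success
```

A parallel plan or a symbolic planner would produce the `steps` list differently, but the success/failure contract stays the same.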
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\columnwidth]{figures/first.pdf}
\caption{MaestROB framework.}
\label{fig:first}
\end{figure}
\section{MaestROB FRAMEWORK}
\label{sec:maestrobframe}
MaestROB is a scalable and extendable framework that can be used to control robots or smart environments.
It is built using a hybrid architecture, encompassing explicit symbolic computation at the center and neural networks at the edge.
At the core of the framework is the robot middleware Intu.
The library around the middleware consists of numerous algorithms and components, each of which performs a well-defined task.
In this section, we discuss the middleware and the key services of MaestROB.
We provide numerous services to enable a large number of commonly occurring use case scenarios.
A service is defined as a set of components that performs a well-defined, complex task, for example, conversation.
Components can be shared among multiple services.
We use the term components loosely to represent data storage, programs, learned models, or any part of the framework.
Fig.~\ref{fig:first} shows an abstract level configuration of the middleware, cloud APIs, and the main services of the framework.
\subsection{Project Intu}
Project Intu is provided as an open source platform for embodied cognition\footnote{\url{https://github.com/watson-intu/self}}.
It is based on a cognitive architecture called Self.
Self is an agent-based architecture that combines connectionist and symbolic models of computation, using blackboards for opportunistic collaboration.
Project Intu provides a mechanism for connecting and orchestrating cognitive services in a manner that brings higher level cognition to an embodied system.
Self is inspired by Minsky's Society of Mind~\cite{book:minsky:1986}, therefore, behavior takes place in the context of multiple concurrent agents who communicate opportunistically via blackboards.
Inspired by Brook’s subsumption architecture~\cite{article:ijra:brook1986}, behavior takes place in a hierarchy of cognition, from involuntary reflexes to voluntary skills to goals and planning.
Self imposes a clear separation of concerns among perception, actuation, models, and behavior, and as much as possible, behavior is either taught or is learned, not programmed.
For perception, extractors are used to process the raw sensor streams into meaningful data (e.g. speech to text).
The data then goes to the respective classifiers or agents for further processing.
Actions are performed by learning or programming different skills.
Based on a goal that needs to be achieved, the plan manager finds the most suitable execution plan before invoking that plan (sequence of skills).
Most of the components communicate opportunistically by using the publish and subscribe (pub/sub) model of Intu.
The communication with the outside world or other instances of Self happens through topics.
Plugins can be developed to connect Intu with other middlewares and scripting languages.
The open-source version of Intu comes with many components that allow seamless access to IBM Watson services~\cite{url:watson}.
Intu is devised to be applicable to a multitude of use cases, from avatars and concierges to retail, elder care, and industrial robots.
It is available for a number of platforms, including Linux, Windows, macOS, Nao, Pepper, and Raspberry Pi.
\setcounter{figure}{3}
\begin{figure*}[b]
\centering
\includegraphics[width=1.0\textwidth]{figures/pddls.pdf}
\caption{An example to show how proposed PDDL with semantics can leverage the ontology and reasoning to solve a problem.}
\label{fig:pddls}
\end{figure*}
\subsection{Perception Service}
Sensing and perception are required for both physical manipulation and learning.
MaestROB supports different sensors including microphone, camera, and touch.
The raw stream from the sensors is converted into a meaningful format with the help of extractors and classifiers.
While different sensors are used by MaestROB, in this section we focus on the vision sensor (camera).
The camera is used for recognizing a human in front of the robot, estimating age and gender, recognizing objects and their poses, etc.
For accurate visual perception in industrial settings, we assume that an object database provides the properties (shape, size etc.) of all the objects of interest.
Visual perception computes the pose for each instance of the object in the visible world.
For the demonstration, we have used a barcode-based pose estimation technique proposed by H. Kato~\cite{kato2002artoolkit}.
In addition to perception, spatial relationships are also extracted geometrically to understand the state of the physical world.
Relationships currently supported by the framework are in, on, right, left, front, and back.
With the supported interface of object detection and relationship extraction, the framework can be extended by including better pose estimation algorithms.
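The geometric extraction of such relations can be illustrated as follows. This is our own sketch, not the paper's implementation; the distance threshold and axis conventions (x right, y toward the camera, z up, object centre positions) are assumptions.

```python
def relation(a, b, touch_eps=0.02):
    """Return the spatial relation of object a w.r.t. object b,
    given their (x, y, z) centre positions in a shared frame."""
    dx, dy, dz = (a[i] - b[i] for i in range(3))
    # Vertically stacked and laterally aligned -> "on".
    if abs(dx) < touch_eps and abs(dy) < touch_eps and dz > 0:
        return "on"
    # Otherwise pick the dominant horizontal axis.
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "front" if dy < 0 else "back"

print(relation((0.0, 0.0, 0.05), (0.0, 0.0, 0.0)))  # on
print(relation((0.3, 0.0, 0.0), (0.0, 0.0, 0.0)))   # right
```

The "in" relation would additionally need the object shapes from the object database, which is why the database is assumed above.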
\subsection{Conversation Service}
The MaestROB conversation service uses the cloud-based IBM Watson Conversation API~\cite{url:watson}.
The Conversation API combines machine learning, natural language understanding, and integrated dialog tools to graphically create conversation flows.
MaestROB conversation service helps the machine understand human intentions and act accordingly.
Depending on the conversation model, the robot can clarify missing information or even start a dialog proactively.
This is especially important for robots working as salespeople or concierges.
The raw stream from the microphone is converted to text by the \textit{text extractor}, which calls the IBM Watson Speech-To-Text (STT) API.
The text then goes to a \textit{natural language classifier} which internally calls IBM Conversation API and classifies the text according to its intention.
For example, in case of a \textit{question intent}, the \textit{question agent} finds the appropriate response and generates an \textit{answer goal}.
The \textit{goal agent} then executes the appropriate plan with the help of \textit{plan manager}.
The \textit{plan manager} may decide to invoke \textit{say} action, which is ultimately invoked by \textit{skills manager} by calling the appropriate \textit{skill}.
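The routing above can be caricatured in a few lines. This is a stand-in sketch of the control flow only; the real system calls the Watson classification API rather than these toy functions:

```python
def classify_intent(text):
    """Toy stand-in for the natural language classifier."""
    return "question" if text.strip().endswith("?") else "command"

def handle(text):
    """Route an utterance to a goal, as the agents do on the blackboard."""
    if classify_intent(text) == "question":
        return ("answer_goal", text)     # question agent raises an answer goal
    return ("execution_goal", text)      # commands become execution goals

print(handle("What can you do?")[0])     # answer_goal
print(handle("Pick up the peg")[0])      # execution_goal
```

In the framework the resulting goal is then consumed by the goal agent and plan manager, as described in the text.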
\setcounter{figure}{2}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{figures/planner.pdf}
\caption{MaestROB Planner.}
\label{fig:planner}
\end{figure}
\setcounter{figure}{4}
\subsection{Planning Service}
The planner is responsible for observing the current state of the system, understanding the goal, and generating a sequence of skills to achieve it.
The configuration of the proposed planner is shown in Fig.~\ref{fig:planner}.
The initial state of the world is extracted by the ``initial state extractor''.
The initial state is the truth about the world and is always detected before computing a plan.
This is usually done by using vision sensors.
Other input devices like mic or depth sensors can also be used to define the initial state.
The first step in defining the initial state is to get the poses (positions and orientations) of all the available objects.
We assume that all the objects (classes) are defined in an object database.
Computing the poses of all the class instances is not enough; we also need to determine the states of the objects, e.g. whether a hole is filled or not.
This is done by the relationship extractor of the perception service.
Understanding the goal state is also a crucial part of the planner.
The goal can be defined either by a key frame based demonstration or by using natural language.
In MaestROB, we use symbolic mapping for grounding natural language commands to the respective goal states.
Instead of simple matching, the Watson Natural Language Classification API~\cite{url:watson} is used to match a command to the intended goal state.
Although this may limit the variation of natural language that can be used, it produces consistent and predictable results.
For the planning part, we extended PDDL~\cite{tech:ghallab1998} (Planning Domain Definition Language) for describing and indexing skills by allowing semantic annotations.
We call this extended language PDDLS (Planning Domain Definition Language with Semantics).
The primary role of the semantics is to establish compatibility between skills and the recognized states.
Symbols are bound to globally identifiable references in the form of URIs (Uniform Resource Identifiers), for which equals-to, is-a, and other equality and compatibility relationships are defined in standard ontologies.
Another important role of the semantic resolution provided by the framework is to use the ontology to supply constraints, or common sense in the domain, which are difficult to capture through cognition.
For example, while an image analysis can recognize the shape and size of objects, it would not directly give the constraints for combining those objects - e.g. if a peg can be inserted into a hole or not.
The skills in the skill database can be defined as actions in PDDLS for a particular domain, in which the preconditions and effects of the skills are described.
A planning query is given as a PDDLS problem, in which the goal and initial states are described as PDDL with semantic labels.
Extended semantic annotations enable global linking between actions and problems with semantic resolution.
The mechanism for the semantic resolution is beyond the scope of this paper.
The semantics, actions, and required relationships are defined in PDDLS domain files, therefore, a suitable domain file must be defined explicitly by the user to generate an optimal plan for the problem.
The output of the planner is a sequence of skills that is executed by the runtime, which might request the planner to generate an alternative plan in case of a failure.
Fig.~\ref{fig:pddls} shows a simple example to illustrate how the planner works.
We have two pegs and a hole and only the cylindrical peg can be inserted in the hole.
The common sense ontology is provided to the system, including the knowledge for obtaining constraints among objects, such as whether an object can be inserted (insertable) to another object or not.
We search for the appropriate domain descriptions in the PDDLS domain database.
The domain file defines all the available actions, which in this case include only one action ({\it pick-n-insert}).
Note that in the context section, symbols are semantically annotated with URIs (shown in red), which are defined in the ontology or in runtime object properties.
The initial and goal states of the PDDLS problem file, as well as the runtime object properties ontology (e.g. object shapes and sizes), are defined by perception and by grounding of natural language commands as described above.
The PDDLS resolver then generates a runnable PDDL problem and domain file, using the semantic annotations to resolve necessary constraints.
These constraints are shown in blue in the figure; they are required by the PDDL solver (PDDL planner) to find a valid solution.
The problem and the domain files are used by the PDDL solver to output the correct sequence of actions, which in this case is to perform the ``pick-n-insert'' to put the cylindrical peg in the cylindrical hole.
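The kind of semantic resolution the PDDLS resolver performs on this example can be sketched as follows. The object properties, predicate names, and the size rule are our own illustrative assumptions, not the paper's ontology:

```python
# Hypothetical runtime object properties (would come from perception
# and the object database in the real system).
objects = {
    "peg1":  {"shape": "cylinder", "diameter": 10},
    "peg2":  {"shape": "cuboid",   "width": 12},
    "hole1": {"shape": "cylindrical-hole", "diameter": 11},
}

def insertable(peg, hole):
    """Toy common-sense rule: a cylinder fits a cylindrical hole
    if its diameter is strictly smaller."""
    p, h = objects[peg], objects[hole]
    return (p["shape"] == "cylinder"
            and h["shape"] == "cylindrical-hole"
            and p["diameter"] < h["diameter"])

# Resolved facts injected into the PDDL problem before solving.
facts = [f"(insertable {p} hole1)" for p in ("peg1", "peg2")
         if insertable(p, "hole1")]
print(facts)   # ['(insertable peg1 hole1)']
```

With this fact resolved, a standard PDDL solver can select {\it pick-n-insert} for the cylindrical peg only, as in the figure.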
\subsection{Skill Acquisition}
The process for acquiring skills is different from that of plans.
Plans can be acquired using natural language communication or demonstration, provided that all the skills are available in the skill database.
The skills are to be acquired individually by programming, defining rules, or machine learning.
MaestROB provides a cloud-based machine learning API to allow the robot to acquire or fine-tune machine-learning-based skills.
\subsection{Bridges}
MaestROB can be connected to other scripting languages, robot middlewares, or rule engines by creating bridges.
Bridges are currently available for ROS and Python.
A bridge converts the blackboard based communication that is native to Intu into a protocol that is understandable by other middlewares or languages.
The ROS bridge, therefore, is an Intu agent and a ROS node at the same time.
\subsection{Runtime}
The runtime is the part of MaestROB that is responsible for executing a plan.
It takes the output of a planner, which is a list of skills, and executes them in the given sequential or parallel order.
In case of a failure in a skill, the runtime returns feedback to the planner.
Depending on the planner, a new plan can then be generated as an alternative sequence of actions.
Although the goal does not change, the current state of the world has to be recomputed.
If the system finds a situation where no suitable plan can be found, it may request help from the user.
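The execute/replan loop described above could be sketched as follows (control flow only; the function names and the attempt limit are our own assumptions):

```python
def run(plan, replan, recompute_state, max_attempts=3):
    """Execute a plan; on a skill failure, recompute the world state
    and ask the planner for an alternative plan. The goal is fixed."""
    for _ in range(max_attempts):
        if all(skill() == "success" for skill in plan):
            return "success"
        state = recompute_state()      # world must be re-observed after failure
        plan = replan(state)
        if plan is None:               # planner found no suitable plan
            break
    return "request_human_assistance"

# Toy demo: a skill that fails once, then succeeds after replanning.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    return "success" if attempts["n"] > 1 else "failure"

print(run([flaky], replan=lambda s: [flaky], recompute_state=dict))
# success
```

If `replan` keeps failing, the loop falls through to a human-assistance request, mirroring the behaviour described in the text.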
\begin{figure}[b]
\centering
\includegraphics[width=0.8\columnwidth]{figures/setuppic.pdf}
\caption{Demo setup: Pepper uses its camera to analyze the locations of the objects, creates a plan, and sends it to UR5.}
\label{fig:setuppic}
\end{figure}
\section{DEMONSTRATION}
\label{sec:demo}
In the demo scenario, two robots (UR5 and Pepper) collaborate with a human to perform a task.
MaestROB is general enough to handle both the communication robot and the industrial manipulator arm; we show how very different robots can achieve a common goal by leveraging their respective strengths.
Pepper is a humanoid robot from Softbank Robotics that possesses several sensors including vision.
It is mobile, but it lacks a powerful gripper and cannot perform accurate physical manipulations.
On the other hand, UR5 (collaborative robot arm from Universal Robots) is a fixed industrial grade manipulator robot.
UR5 can do physical manipulations with high precision and repeatability ($\pm$\SI{0.1}{\milli\metre}) but it moves blindly due to the lack of any vision sensor.
In the demo scenario, the human is responsible for teaching the task by performing demonstration, controlling the robot, and taking final decisions.
The setup of the two robots for the demonstration is shown in Fig.~\ref{fig:setuppic}.
The demo can also run on other robot platforms, provided that the required gestures are available.
\begin{table}[]
\centering
\caption{Step-by-step demo scenario. Actors are human, Pepper, and UR5.}
\label{table:demosteps}
\includegraphics[width=1.0\columnwidth]{figures/demo_scenario.pdf}
\end{table}
Table~\ref{table:demosteps} shows the detailed scenario and the actions performed by different actors.
One of the strengths of MaestROB is the connection with IBM Watson APIs.
This makes it easy to implement complex conversation systems.
Grounding instructions, given in a smooth natural language conversation, to complex but predictable sequences of actions demonstrates the strength of MaestROB.
\begin{figure}[b!]
\centering
\includegraphics[width=0.96\columnwidth]{figures/arch.pdf}
\caption{Important services to run the demonstration. The services communicate via pub/sub model of Intu. Robot controller of UR5 is a ROS node and is connected with Intu via ROS bridge.}
\label{fig:arch}
\end{figure}
The demo starts with a human performing the task while having a conversation with the Pepper robot at the same time.
By understanding the intent of a command through the conversation service, the robot knows when to capture key frames and when the demonstration is over.
In the example, the robot records the initial state and the final state of the demonstration.
From the conversation, it remembers the name of the task it is learning to perform (peg assembly task).
It also understands that the final state is the last frame of demonstration.
The initial and final frames are sent to the perception service, which uses barcode pose detection to locate all the barcodes.
The barcode number of each part and the transformation between the barcodes and the objects are defined separately in the object database.
The state of the final frame is determined by the relationship extractor.
As the domain for the task is predefined to be insertion, the appropriate ontologies and the relationship database are loaded.
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\textwidth]{figures/movie.pdf}
\caption{Screenshots of the demo video (video is available at \url{https://www.youtube.com/watch?v=19JsdZi0TWU}). Barcode pose estimation results are shown as overlays.}
\label{fig:movie}
\end{figure*}
In this scenario, the goal is not used by Pepper to compute a plan for itself; rather, Pepper implements the scenario on UR5.
Pepper uses predefined locations of the demo table and UR5 to move from one place to the other.
When Pepper arrives at UR5, it captures an image of the initial state.
This image is used to calibrate Pepper's camera with respect to the UR5 robot; the barcode pose locations of all the objects are also computed.
Pepper uses the initial state observed from the image and the goal state learned from the human demonstration to generate an executable plan for the manipulator robot.
However, before this can be done, the skill database of UR5 is shared with Pepper.
The common sense ontology is used by the planner to check if an operation is permitted or not.
For example, it is not permitted to insert a big peg into a smaller hole.
After the plan is transmitted to UR5, it starts doing the task based on position control.
Once the task succeeds Pepper can return while UR5 continues to perform the task.
In this demo, a human helper puts the pegs back in the initial state before every UR5 iteration.
In a factory environment, this is usually done by conveyor belts or other machines.
UR5 keeps executing the task until something goes wrong, in which case it warns about the failed plan.
Pepper then goes back to the manipulator to observe the state of the world.
Pepper compares the current state with the initial state it had seen before.
It finds that a peg is missing and raises a request for human assistance.
The human then comes to put the missing peg.
UR5 can then resume the repetitive task of putting pegs in the respective holes.
In contrast to the several simple skills that UR5 can perform, the insertion skill uses machine learning to find the direction of the hole using the force-torque sensor.
It was trained by reinforcement learning using the MaestROB cloud based learning API.
The training method for the insertion skill is similar to T. Inoue et al.~\cite{conf:iros:inoue2017}.
Fig.~\ref{fig:arch} shows the MaestROB services that are running on each robot to accomplish the tasks defined in the demo scenario.
Most of the services use cloud based APIs to solve the problem.
It is important to note that in order to make a plan for UR5, Pepper robot must be aware of the skills available for UR5 robot.
This is done by sharing UR5 skill database with Pepper.
In the demo scenario, Pepper is running an instance of the Project Intu middleware.
Intu makes the implementation of Speech-To-Text (STT), Text-To-Speech (TTS), conversation, perception, planning etc. easy and streamlined.
UR5 is running the plan generated by Pepper using MaestROB services on ROS.
Snapshots of some of the key moments in the demo video can be seen in Fig.~\ref{fig:movie}.
The demo video is available at \url{https://www.youtube.com/watch?v=19JsdZi0TWU}.
\section{CONCLUSIONS}
\label{sec:conclusions}
We presented a framework to support the next generation of robots in solving problems that cannot be solved by conventional programming methods.
The robot middleware (Project Intu) presented in this paper is now available as an open source project.
We also presented several key services that enable us to demonstrate sophisticated scenarios involving collaboration between multiple robots and humans.
MaestROB is especially useful for small and medium-sized enterprises (SMEs), which need a relatively quick time to market, face frequent changes in manufacturing lines, and have low production volumes.
The workers can communicate with the robot in natural language and teach it new skills or execute existing skills.
As a future direction, we would like to make the machine learning based skills sharable among multiple robots.
As mentioned in the paper and the demo, a failed plan usually requires human assistance.
One of the future directions can be to make the robot resolve common problems on its own.
We also plan to demonstrate a system that understands written and spoken instructions to create a complex object like IKEA furniture.
{\small
\bibliographystyle{ieee}
\section{Introduction}
The current trend is towards training ever deeper networks, as deeper
networks have a larger capacity to learn. Since backpropagation
requires the complete state of the forward propagation in reverse
order, training a neural network with backpropagation requires memory
that is proportional to the size of the network. Many state-of-the-art
models already run out of memory on current hardware, and this trend is
only expected to get worse~\cite{rhu2016vdnn}.
One of the most common ways of managing memory consumption of neural
network training is by controlling the batch size
\cite{rhu2016vdnn}. However, since the batch size is also used to
sample from the training data, the choice of batch size can affect the
convergence rate and cannot be used to tune the model's memory
consumption without side-effects.
Another common mitigation strategy is to split the training over
multiple computational nodes \cite{krizhevsky2014one}. However, this
incurs significant message passing overheads and costs for hardware
with low-latency interconnects. This strategy can also be wasteful if
the peak memory consumption is only slightly larger than that of a
single compute node.
A third strategy that is recently getting increased attention is checkpointing,
and is briefly reviewed in the following section.
\subsection{Checkpointing for neural networks}
\label{sec:intro}
\begin{figure}
\begin{center}
\def0.6\textwidth{0.5\textwidth}
\input{figures/async.pdf_tex}
\end{center}
\caption{Memory requirement of a neural network during training. In conventional
backpropagation, all states need to be stored, leading to a peak memory
footprint at the end of the forward computation. During the backward pass, the
stored states are used and their memory subsequently freed in the reverse order.
Training can not be performed on hardware with too small memory. In contrast,
checkpointing strategies store some intermediate states and resume recomputation
from there when required. With asynchronous multistage checkpointing, the data
is further offloaded to a larger, slower storage system (e.g. solid state drive)
in the background while the computation is running, and prefetched before it is
needed.}
\label{fig:title}
\end{figure}
The idea behind checkpointing is not to store the entire state of the
network through the forward propagation. Instead, the state of forward
propagation is stored only at certain layers, and the number of layers
that are kept at any given time can be limited to fit into the
available memory. During the backpropagation phase, states that have
not been stored can be recomputed as needed from the nearest available
state. This allows a tradeoff between memory and computation. With
this, problems can be made to fit on systems with limited memory in
exchange for an increased computation time.
The pressure on the memory system during a backpropagation
execution can be quantified using a \emph{memory ratio}, i.e. the
ratio between the memory available on a computer system and the
expected peak memory consumption of a particular instance of
backpropagation. We are only interested here in scenarios where the
memory ratio is less than 1.
The amount of recomputation required in a checkpointing strategy is
quantified using a \emph{recompute factor} where a factor of 1 implies
no recomputation. The factor grows as the memory ratio is reduced. The
choice of layers at which to store checkpoints during the forward
propagation directly affects the recompute factor and is called the
\emph{checkpointing schedule}.
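As a numeric illustration of these two quantities (the network and hardware sizes below are made up):

```python
# Hypothetical network: 1000 layers, 50 MiB of stored state per layer.
layers = 1000
bytes_per_state = 50 * 2**20
peak = layers * bytes_per_state          # peak footprint of plain backprop

available = 16 * 2**30                   # 16 GiB of device memory
memory_ratio = available / peak          # < 1: checkpointing is needed
print(round(memory_ratio, 3))            # 0.328

# A recompute factor of, say, 1.5 would then mean the checkpointing
# schedule performs 50% additional forward computation.
```

The schedules discussed next aim to minimise the recompute factor for a given memory ratio.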
Checkpointing is widely used for similar purposes in adjoint based
optimisation problems, and a number of schedules have been developed
that are optimal under certain assumptions. If the number of layers is
known a priori, states have a uniform size, each layer takes the same
time to compute, and the memory is fast and the time to store a
checkpoint is thus negligible, then the Revolve algorithm
\cite{griewank2000algorithm} gives the optimal schedule that minimises
recomputation given a fixed amount of memory. Another schedule has
been found to be optimal if the number of layers is not known
initially \cite{wang2009minimal}. The development of these algorithms
was motivated by adjoint solvers, where these assumptions are usually
valid.
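For orientation, the bound underlying Revolve can be stated compactly (restating \cite{griewank2000algorithm} under the uniform-cost assumptions above): with $s$ checkpoint slots and at most $r$ repeated forward evaluations per layer, chains of length up to
\begin{equation*}
\ell(s, r) = \binom{s+r}{r}
\end{equation*}
can be reversed, and this bound is sharp. For a fixed number of checkpoints, the recompute factor therefore grows only very slowly with the chain length.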
In contrast, the state size and computation cost of layers in neural
networks is often non-uniform (e.g. different before and after a
pooling layer). New checkpointing schedules have been developed
specifically for machine learning
applications~\cite{chen2016training}, including a dynamic program that
can be used to compute optimal schedules for networks of uniform and
non-uniform checkpoint sizes~\cite{gruslys2016memory}.
\subsection{Multistage checkpointing}
When additional levels of memory are available, it is possible to
leverage these additional levels to reduce the recompute factor
\cite{stumm2009multistage}. In the context of modern computer
systems, the two levels of memory could be the accelerator memory and
the host memory. Even on systems where only one level of memory is
usually used, the second level memory could be a high-bandwidth disk,
e.g. an SSD. In the foreseeable future, other types of memory are
expected to become available, such as storage-class
memory~\cite{JSFI164}.
For systems with two levels of memory, \cite{aupy2016optimal}
describes the optimal schedule that reduces the total time to solution
for adjoint solvers or backpropagation, assuming that the first level
memory is fast but has limited-capacity, while the second level is
slow but has infinite capacity. The key idea is to increase the number
of stored checkpoints, by storing the least frequently used
checkpoints on the slow, large storage. The schedule assumes blocking
data transfer, that is, the computation waits while data is
transferred from the fast to the slow storage level.
Since transfers between first-level memory and second-level memory
take a non-trivial amount of time, they can be carried out in
parallel. This motivated a recent paper~\cite{schanen2016asynchronous}
describing the use of asynchronous multistage checkpointing for a PDE
solver. In that work, the solver itself uses all available RAM on a
system, and the checkpoints are thus stored directly to a hard
drive. Since the overall stored data is much larger than available
hard drives, another system is transferring the data over the network
to a tape archive while the computation is running.
A similar concept was also previously applied to neural networks
\cite{rhu2016vdnn}. However, in this case every layer was transferred
to the second-level memory, which slows down the forward
propagation. A variation of this strategy, where a subset of states is
transferred to the host memory and transferred back when required, was
also implemented for TensorFlow, but without any recomputation of
forward propagated states~\cite{mengtraining}.
\subsection{Contributions}
While the work in this paper is conceptually similar to that presented
in~\cite{schanen2016asynchronous}, to the best of our knowledge,
multistage checkpointing with recomputation of forward states has not
been applied in the context of neural networks before. It has also not
previously been investigated for systems other than the aforementioned
hard drive/tape system. This is despite the fact that non-blocking
asynchronous data transfer is possible on a variety of commonly used
systems, such as GPU DRAM / CPU RAM, or from host RAM to another
device using direct memory access (DMA). We therefore investigate
asynchronous multistage checkpointing for neural networks on a system
that consists of an Intel Xeon Phi Knights Landing (KNL), where the main
computation and Level 1 memory is in fast MCDRAM, and the Level 2
storage is in the system's DRAM. Figure~\ref{fig:title} gives a
high-level illustration of this idea.
After presenting the scheme in Section~\ref{sec:method}, we present a
performance model for asynchronous checkpointing that works across a
variety of hardware configurations in Section~\ref{sec:model}. We also
developed a prototype implementation for asynchronous multistage
checkpointing in Python, shown in Section~\ref{sec:implementation}. In
Section~\ref{sec:experiments}, we demonstrate the use of this scheme
on two different modern hardware platforms using an LSTM based network
as a test case. We conclude in Section~\ref{sec:conclusions}.
\section{Asynchronous multistage checkpointing}
\label{sec:method}
In this section we outline the asynchronous multistage checkpointing strategy.
We assume that there are two storage stages: Level 1, a fast but small memory,
and Level 2, a large but slow storage. Examples for Level 1 memory include GPU
DRAM or Xeon Phi MCDRAM, while an example of Level 2 storage is a solid
state drive (SSD). Note that these roles depend on the overall
configuration of the system. For example, RAM could either be a Level 2 storage
in a system that is using DRAM as Level 1, or it could be Level 1 memory in a
system that is using SSD or a hard drive for Level 2. What matters is not the
absolute speed or size of the storage, but rather the relative speed and size
compared to other storage in the same system.
In the asynchronous multistage checkpointing strategy, the computation itself
completely resides in Level 1 memory. During the
forward pass, copies of the state are transferred to the Level 2 storage at
regular intervals, i.e. after every $I$ layers, where $I$ is the \emph{checkpointing
interval}. The transfer to storage happens asynchronously
so as not to slow down the computation of the forward propagation. All
forward activations are then cleared from Level 1 memory.
The backward pass will require the stored data in reverse order, at well-known
points in time during the computation. For this reason, checkpoints that are
required from Level 2 storage can be asynchronously transferred to Level 1
before they are needed. Since every $I$-th state was stored, the intermediate
states need to be recomputed from the restored state.
Assuming there is enough Level 1 memory available to store the entire
forward propagation state for $I$ layers, backpropagation can then
proceed normally for these $I$ layers. If there is not enough
memory available, Revolve can be applied to find an optimal schedule for
backpropagation through $I$ layers within the limits of the Level 1 memory.
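The mechanics of this strategy can be sketched in a few lines of Python. This is a minimal stand-in on a toy chain of doubling "layers", not the actual implementation described in Section~\ref{sec:implementation}; all names are illustrative, and for clarity the sketch recomputes from the interval checkpoint at every backward step, whereas the real implementation restores a checkpoint once per interval and then proceeds (optionally via Revolve) through the whole interval.

```python
def forward_step(x):
    # toy layer: doubles its input
    return 2.0 * x

def backward_step(x, grad):
    # d(2x)/dx = 2, independent of x here; x is passed only to mimic
    # the data dependency of a real layer on its forward input
    return 2.0 * grad

def train_step(x0, n, I):
    level2 = {}                      # stands in for slow Level 2 storage
    x = x0
    for i in range(n):               # forward pass: store every I-th state
        if i % I == 0:
            level2[i] = x            # asynchronous transfer in practice
        x = forward_step(x)
    grad = 1.0
    for i in reversed(range(n)):     # backward pass
        base = (i // I) * I
        x = level2[base]             # restore the nearest checkpoint...
        for _ in range(base, i):     # ...and recompute up to layer i
            x = forward_step(x)
        grad = backward_step(x, grad)
    return grad
```

With $I=1$ every state is stored, recovering conventional backpropagation; the gradient is identical for any interval length, only the store/recompute trade-off changes.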
Compared to conventional backpropagation where every state is stored, this
obviously has the advantage that it can fit into limited amounts of memory.
Perhaps less obviously, this strategy is guaranteed to be faster than the
``optimal'' Revolve checkpointing strategy. This is because Revolve (or any of
the other published single-stage checkpointing strategies) trades memory for
additional computations, resulting in a time overhead that increases with the
number of layers. Through the use of Level 2 storage, Revolve is only used for
the $I$ states between two subsequent stores, resulting in a time overhead that
is constant in the number of layers. This is illustrated in
Figure~\ref{fig:timeline} and explained in more detail in
Section~\ref{sec:model}.
\begin{figure}
\begin{center}
\def\svgwidth{0.8\textwidth}
\input{figures/asynctimeline.pdf_tex}
\end{center}
\caption{
Timeline of events for conventional backpropagation, Revolve checkpointing, and
asynchronous multistage checkpointing. The conventional backpropagation would
have the shortest runtime, but exceeds the available memory. Both other
strategies respect the memory limits, but result in different time overheads.
Revolve alternates between forward and reverse computations in a rather complex
fashion to minimise the overhead if only one level of memory is available. The
asynchronous strategy stores data to Level 2 storage in regular intervals, and
restores the data before it is needed in backpropagation.
}
\label{fig:timeline}
\end{figure}
\section{Performance Model}
\label{sec:model}
We analyse in this section the expected performance of asynchronous
multistage checkpointing and compare it with Revolve checkpointing. Following
that, we demonstrate the performance in an experiment in
Section~\ref{sec:experiments}.
On a given target computer, let the time taken to compute one layer's
activations be given as $T_A$ and the time taken to propagate sensitivities
backwards through that layer as $T_B$. For a network with $n$ layers, the total
time $T_\infty$ for a combined forward/backward pass as used in training, assuming that
there is no memory limit, is then obviously
$$T_\infty = n\cdot T_A + n\cdot T_B.$$
If Revolve checkpointing is used, some states need to be recomputed,
leading to additional computations of activations. This is expressed
in the \emph{recompute factor}, which depends on the total number of
layers $n$, as well as the number of checkpoints that simultaneously
fit into memory, $s$. We refer to this recompute factor as
$R(n,s)$; it is defined in~\cite{revolve}, and can
be computed by the reference implementation of Revolve or by using the
pyrevolve Python package that can be downloaded from
\texttt{https://github.com/opesci/pyrevolve/}. We note that the
recompute factor increases if the number of layers $n$ is increased,
and also increases if the storage space $s$ is decreased. This is true
for all known single-stage checkpointing schemes, and the precise
nature of the increase (sub-linear for most schemes, logarithmic for
Revolve) determines the optimality of a schedule. The total time
$T_\textit{revolve}$ for the combined forward/backward pass is then
$$T_\textit{revolve} = n\cdot R(n,s) \cdot T_A + n\cdot T_B.$$
For asynchronous multistage checkpointing, we are also interested in the time
that it takes to transfer a state from Level 1 memory to Level 2 storage. We
refer to this time as $T_T$. If $T_T \leq T_A$, then we could asynchronously
stream all data to storage while the computation is running without ever waiting
for disk access. If $T_T > T_A$, then we can only store a subset of all states. We choose
to store states in regular intervals of length $I$, given by
$$I=\left\lceil{\frac{T_T}{T_A}}\right \rceil.$$
In general, there are then $n/I$ such intervals. Storing and prefetching happens
asynchronously, meaning that these operations do not affect performance in
this model (although they have a slight effect on performance in practice, see
Section~\ref{sec:experiments}). Within each interval, we can
use Revolve with a recompute factor of $R(I,s)$. Overall, we thus have a runtime
\begin{align*}
T_\textit{async} &= \frac{n}{I}\cdot\left(I\cdot R(I,s)\cdot T_A + I\cdot T_B\right) \\
&= n\cdot R(I,s) \cdot T_A + n\cdot T_B.
\end{align*}
Since $R(I,s) \leq R(n,s)$ whenever $I \leq n$, the
asynchronous strategy is at least as fast as the classic Revolve strategy. In
particular, the recompute factor in $T_\textit{async}$ depends only on $I$, not
on the total sequence length $n$. Figure~\ref{fig:perf} shows this for a
small number of interval lengths and assuming that $100$ states fit into memory.
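The model above translates directly into a few functions that can be evaluated for concrete hardware parameters. In this sketch the recompute factor $R$ is passed in as a function (in practice it would be obtained from pyrevolve); the stand-in used in the usage example below is purely illustrative.

```python
import math

def checkpoint_interval(T_T, T_A):
    # I = ceil(T_T / T_A): by the time the next store is due, the
    # previous asynchronous transfer has finished
    return max(1, math.ceil(T_T / T_A))

def t_revolve(n, s, T_A, T_B, R):
    # T_revolve = n * R(n, s) * T_A + n * T_B
    return n * R(n, s) * T_A + n * T_B

def t_async(n, s, T_A, T_B, T_T, R):
    # T_async = n * R(I, s) * T_A + n * T_B, where I is independent of n
    I = checkpoint_interval(T_T, T_A)
    return n * R(I, s) * T_A + n * T_B
```

For any recompute factor that is non-decreasing in the number of layers, $T_\textit{async} \leq T_\textit{revolve}$ follows immediately, since $R(I,s)\leq R(n,s)$ for $I\leq n$.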
\begin{figure}
\begin{center}
\def\svgwidth{0.6\textwidth}
\input{figures/perfmodel.pdf_tex}
\end{center}
\caption{
Recompute factors, assuming that $s=100$ (that is, $100$ states fit into memory), for classic
Revolve, and asynchronous multistage checkpointing with interval sizes
$I=8,64,1024$.
}
\label{fig:perf}
\end{figure}
Note that for networks with very few layers, there might not be time
to save even a single checkpoint to Level 2 storage before the entire
forward pass is over. In this case the strategy falls back to
classic Revolve.
\section{Implementation}
\label{sec:implementation}
The Revolve algorithm was accompanied by a similarly named utility
that could be used to compute optimal schedules for a particular
checkpointing scenario. pyrevolve \cite{kukreja2018high} is a Python
package that uses schedules automatically derived from this utility to
provide checkpointing as a feature in Python applications with minimal
changes to the code. pyrevolve expects function references to a
\emph{Forward Operator}, and a \emph{Backward Operator}, along with a
\emph{Checkpoint} object that describes the state variables from the
\emph{Forward Operator} that the \emph{Backward Operator}
requires. Provided these three objects, pyrevolve can drive the entire
forward and backward passes, automatically calling the forward or
backward operator as required by the optimal schedule. The
implementation of the asynchronous multistage checkpointing strategy
is offered as an additional mode in
pyrevolve\footnote{https://github.com/opesci/pyrevolve}. Due to the way it has
been formulated, pyrevolve, and consequently the implementation for
this strategy, can be used in applications ranging from
PDE-constrained optimisation problems in seismic imaging and CFD to
neural networks.
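The shape of this interface can be sketched as follows. The class and method names here are ours and do not reproduce pyrevolve's actual API; the toy driver stores only the initial state and recomputes each layer input, standing in for whatever schedule pyrevolve would choose.

```python
class Checkpoint:
    """Describes and copies the forward state the backward pass needs."""
    def __init__(self):
        self.data = None

    def save(self, state):
        self.data = list(state)   # deep enough for a flat list of floats

    def load(self):
        return list(self.data)

class ForwardOperator:
    """Advances the state from layer t_start to layer t_end."""
    def apply(self, state, t_start, t_end):
        for _ in range(t_start, t_end):
            state = [2.0 * v for v in state]   # toy layer: doubling
        return state

class BackwardOperator:
    """Propagates sensitivities backwards through layer t."""
    def apply(self, x, grad, t):
        # derivative of the doubling layer is 2; x mimics the
        # dependency on the recomputed forward input
        return [2.0 * g for g in grad]

def drive(n, state):
    # a (trivial) driver calling the operators as a schedule would
    ckpt, fwd, bwd = Checkpoint(), ForwardOperator(), BackwardOperator()
    ckpt.save(state)
    state = fwd.apply(state, 0, n)             # forward pass
    grad = [1.0] * len(state)
    for t in reversed(range(n)):               # backward pass
        x = fwd.apply(ckpt.load(), 0, t)       # recompute layer input
        grad = bwd.apply(x, grad, t)
    return state, grad
```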
The implementation uses the Python threading API to create background
threads in which the asynchronous reads and writes happen. Python threads
are known to suffer from issues relating to the Global Interpreter
Lock (GIL). However, Python releases the GIL when performing IO-bound
operations \cite{pythonGIL}. Hence, this implementation is expected to
be asynchronous despite, if not even because of, the Python GIL.
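A minimal sketch of this pattern (illustrative, not the code used in pyrevolve) writes each checkpoint from its own background thread; since file IO releases the GIL, the main thread can keep computing while the dump is in flight.

```python
import os
import pickle
import threading

class AsyncWriter:
    """Write checkpoints to files from background threads."""
    def __init__(self, directory):
        self.directory = directory
        self.threads = []

    def _path(self, step):
        return os.path.join(self.directory, "ckpt_%d.pkl" % step)

    def write(self, step, state):
        # the caller should pass a copy of the state, since the main
        # thread keeps mutating the live state during the forward pass
        t = threading.Thread(target=self._dump, args=(self._path(step), state))
        t.start()
        self.threads.append(t)

    def _dump(self, path, state):
        with open(path, "wb") as f:
            pickle.dump(state, f)   # pickling + file IO runs off the main thread

    def wait(self):
        for t in self.threads:
            t.join()

    def read(self, step):
        with open(self._path(step), "rb") as f:
            return pickle.load(f)
```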
So far, we have implemented this strategy with two hardware
architectures in mind. The first, which we call the CPU platform,
computes on a CPU, with DRAM as the first-level memory and an SSD as
the second-level memory. The second computes on an accelerator such
as the Intel\textsuperscript{\textregistered} Xeon
Phi\textsuperscript{\texttrademark} 7210 (KNL), with the accelerator
memory, the MCDRAM in the case of the KNL, acting as the first-level
memory and the host memory, the DRAM in the case of the KNL, acting as
the second-level memory. In principle, what we describe here for the
KNL platform applies equally to a GPU architecture, where the GDDR
memory acts as the first level and the host memory acts as the second
level.
On the CPU platform, the background threads use the SSD by writing and
reading the data to and from files using the Python filesystem API. On the KNL
platform, a ramdisk mounted in host memory is used as the second-level
memory, though this could be improved in future implementations.
\section{Experimental Results}
\label{sec:experiments}
The test case on which we measure the performance of this strategy and
implementation was adapted from an open-source implementation of a
simple vanilla
LSTM\footnote{https://github.com/kevin-bruhwiler/Simple-Vanilla-LSTM}. An
LSTM was chosen because a simple LSTM has uniformly sized checkpoints
as we go through the network. Using one of the popular frameworks like
TensorFlow or PyTorch, we could have implemented an LSTM in very few
lines, but the multiple layers of abstraction involved would hide some
very important details that are relevant for this study. For example,
a framework might call precompiled primitives for performant
calculations and choose which implementation of a function to call
based on runtime parameters; in our tests this caused spikes at certain
network depths that are not relevant to the study at hand. Another issue was
about the transparency of memory management, since we would like to
choose exactly which objects to keep in memory. However, because the
purpose of this experiment is to demonstrate the principle of
asynchronous multistage checkpointing, we believe that this
implementation written with \emph{numpy} as the only external library
is sufficiently representative of a full-fledged LSTM training inside
any of the popular NN frameworks.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/memory-cpu.pdf}
\end{center}
\caption{Comparison of memory consumption on CPU}
\label{fig:memory}
\end{figure}
The test case\footnote{Code provided as supplementary material} sets
up a basic LSTM for text generation, including a manual implementation
of \emph{RMSProp}. Additional tweaks like learning rate decay would
probably help the convergence of this code in a real-life
scenario. However here we are not concerned about a complete training
cycle, our interest is limited to a single forward-backward iteration
and its performance characteristics as the number of LSTM recurrences
is changed.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/recompute_knl.pdf}
\end{center}
\caption{Comparison of recompute factors on KNL}
\label{fig:recompute}
\end{figure}
Figure \ref{fig:memory} shows the peak memory footprint for a
single forward-backward pass for a network of given depth, and Figure
\ref{fig:recompute} shows how the recompute factor varies with
network depth. The times
were measured for 5 individual runs and the minimum
reported. The memory reported was measured using the \emph{maximum
resident set size} reported by the \emph{time} utility on the bash
command line. The python interpreter was exited after each iteration
to ensure that the memory is released back to the OS.
Although the peak memory footprint is theoretically expected to be constant,
regardless of the number of recurrent layers, we observe in the plots
that the memory does grow slightly, although at a rate significantly
lower than that of standard backpropagation. This is because the
implementation still requires some variables whose size depends
on the depth of the network. In the case of this LSTM implementation,
the list of expected outputs is the main such variable that cannot
easily be made independent of the depth of the network.
\section{Conclusions and future work}
\label{sec:conclusions}
We introduced asynchronous multistage checkpointing for backpropagation in large
RNNs in environments with limited memory. The method allows backpropagation
through sequences with arbitrary length for any given memory size, by
recomputing some data and asynchronously transferring other data to a larger,
slower storage such as host memory, RAM, or even SSDs. The runtime overhead
compared to a pure inference is constant in the sequence length, as was shown in
our experiment. The overhead is also at most as large as that of the optimal
single-stage checkpointing strategy Revolve, as shown in a theoretical
performance model.
The implementation currently only supports networks
that have layers of the same size throughout, i.e. uniform checkpoint
size. Instead of storing every $I$-th state for some fixed interval $I$, one
could store the next state whenever the previous data transfer
has completed, thereby supporting non-uniform checkpoint sizes.
Within each
interval, the known algorithm for non-uniform single-stage checkpointing could
be used instead of Revolve.
The implementation currently supports Intel Xeon Phi processors.
In future work, we plan to extend our implementation to support more platforms,
such as GPUs. Finally, the current implementation assumes that the states within
each interval fit into memory, and this was true for the experiments conducted
in this work. If required, our package can be modified to use Revolve within
each interval, for example using the pyrevolve package.
\subsubsection*{Acknowledgments}
This work has been funded by the Intel Parallel Computing Centre at
Imperial College London. This paper benefitted greatly from
conversations with Paul Kelly, Nicolas Melot, Lukas
Mosser, Paul Hovland and Michel Schanen. This work was performed using
the Darwin Supercomputer of the University of Cambridge High
Performance Computing Service (http://www.hpc.cam.ac.uk/), provided by
Dell Inc. using Strategic Research Infrastructure Funding from the
Higher Education Funding Council for England and funding from the
Science and Technology Facilities Council.
\section*{ACKNOWLEDGMENTS}\label{ack}
This work was supported by NSF PFC grant number PHY 1734006, DARPA QuASAR and Extreme Sensing, and NIST. J.R.K.C. acknowledges financial support from NSF GRFP.
\bibliographystyle{apsrev4-1}
\section{Introduction} \label{intro}
In this contribution, we consider a two-phase flow for incompressible fluids of different densities and different viscosities. The two fluids are assumed to be macroscopically immiscible and to be miscible in a thin interface region, i.e., we consider a diffuse interface model
(also called phase field model) for the two-phase flow.
In contrast to sharp interface models, where the interface between the two fluids is a sufficiently smooth hypersurface, diffuse interface models can describe topological changes due to pinch-off and droplet collisions.
There are several diffuse interface models for such two-phase flows. Firstly, in the case of matched densities, i.e., when the densities of both fluids are assumed to be identical, there is the well-known model H, cf.\ Hohenberg and Halperin or Gurtin et al. \cite{HH77, GPV96}.
In the case that the fluid densities do not coincide there are different models. On one hand Lowengrub and Truskinovsky \cite{LT98} derived
a quasi-incompressible model, where the mean velocity field of the mixture is in general not
divergence free. On the other hand, Ding et al. \cite{DSS07} proposed a model
with a divergence free mean fluid velocity, but this model is not known to be thermodynamically
consistent. In \changes{Abels}, Garcke and Gr\"un \cite{AGG11}
a thermodynamically consistent diffuse interface model for two-phase flow
with different densities and a divergence free mean velocity field was derived, which we call AGG model for short.
The existence of weak solutions of the AGG model was shown in \cite{ADG13}. For analytic results in the case of matched densities, i.e., the model H, we refer to \cite{Abe09b} \changes{and \cite{GMT19}} and the references given there. Existence of weak and strong solutions for \changes{a slight modification} of the model by Lowengrub and Truskinovsky was proven in \cite{Abe09a,LTModelShortTime}.
Concerning the Cahn-Hilliard equation, Giacomin and Lebowitz \cite{GL97, GL98} observed that a physically more rigorous derivation leads to a nonlocal equation, which we call a nonlocal Cahn-Hilliard equation.
There are two types of nonlocal Cahn-Hilliard equations. One is the equation where the second order differential operator in the equation for the chemical potential is replaced by a convolution operator with a sufficiently smooth even function. We call it a nonlocal Cahn-Hilliard equation with a regular kernel in the following.
The other is one where the second order differential operator is replaced by a regional fractional Laplacian. We call it a nonlocal Cahn-Hilliard equation with a singular kernel, since the regional fractional Laplacian is defined using a singular kernel. The nonlocal Cahn-Hilliard equation with a regular kernel was analyzed in \cite{GZ03, GG14, GL98, LP11a, LP11b}. On the other hand, the nonlocal Cahn-Hilliard equation with a singular kernel was first analyzed by Abels, Bosia and Grasselli \cite{ABG15}, who proved the existence and uniqueness of a weak solution of the nonlocal Cahn-Hilliard equation, its regularity properties and the existence of a (connected) global attractor.
Concerning the nonlocal model H with a regular kernel, where the convective Cahn-Hilliard equation is replaced by the convective nonlocal Cahn-Hilliard equation with a regular kernel, first studies were done by \cite{CFG12, FG12a, FG12b}\changes{, see also \cite{FGGS19} and the references there for more recent results}. More recently, the nonlocal AGG model with a regular kernel,
where the convective Cahn-Hilliard equation is replaced by the convective nonlocal Cahn-Hilliard equation with a regular kernel, was studied by Frigeri~\cite{F15}, who showed the existence of a weak solution for that model.
The method of the proof in \cite{F15} is based on the Faedo-Galerkin method of a suitably mollified system and the method of passing to the limit with two parameters tending to zero. The method is different from \cite{ADG13} which is based on implicit time discretization and a Leray-Schauder fixed point argument.
In this contribution, we consider a nonlocal AGG model with a singular kernel, where the convective Cahn-Hilliard equation in the AGG model is replaced by a convective nonlocal Cahn-Hilliard equation with a singular kernel. Our aim is to prove the existence of weak solutions of this system, which couples an inhomogeneous Navier-Stokes system with a nonlocal Cahn-Hilliard equation:
\begin{align}
\partial_t (\rho \mathbf{v}) + \operatorname{div} ( \mathbf{v} \otimes(\rho \mathbf{v} + \widetilde{\mathbf{J}})) - \operatorname{div} (2 \eta(\varphi) D \mathbf{v})
+ \nabla p
& = \mu \nabla \varphi & \mbox{in } \, Q , \label{eq:1}
\\
\operatorname{div} \, \mathbf{v} &= 0& \mbox{in } \, Q, \label{eq:2}
\\
\partial_t \varphi + \mathbf{v} \cdot \nabla \varphi &= \mbox{div}\left(m(\varphi) \nabla \mu \right)& \mbox{in } \, Q, \label{eq:3}
\\
\mu = \Psi'(\varphi) &+\mathcal{L}\varphi & \mbox{in } \, Q , \label{eq:4}
\end{align}
where $\rho=\rho(\varphi):= \frac{\tilde{\rho}_1+\tilde{\rho}_2}2+ \frac{\tilde{\rho}_2-\tilde{\rho}_1}2\varphi $, $\widetilde{\mathbf{J}} = -\frac{\tilde{\rho}_2 - \tilde{\rho}_1}{2} m(\varphi) \nabla \mu$,
$Q=\Omega\times(0,\infty)$. We assume that $\Omega \subset \mathbb{R}^d$, $d=2,3$, is a bounded domain with $C^2$-boundary. Here and in the following $\mathbf{v}$, $p$, and $\rho$ are the (mean) velocity, the pressure and the
density of the mixture of the two fluids, respectively. Furthermore $\tilde{\rho}_j$, $j=1,2$, are the specific densities of the unmixed fluids,
$\varphi$ is the difference of the volume fractions of the two fluids, and $\mu$ is the chemical
potential related to $\varphi$.
Moreover, ${D}\mathbf{v}= \frac12(\nabla \mathbf{v} + \nabla \mathbf{v}^T)$,
$\eta(\varphi)>0$ is the viscosity of the fluid mixture, and $m(\varphi)>0$ is a mobility coefficient. \changes{The term $\widetilde{\mathbf{J}}$ describes the mass flux relative to the mean velocity, i.e., we have the mass balance
\begin{equation*}
\partial_t \rho + \operatorname{div} (\rho \mathbf{v}) = - \operatorname{div} \widetilde{\mathbf{J}}.
\end{equation*}
It is important to have the term with $\widetilde{\mathbf{J}}$ in \eqref{eq:1} in order to obtain a thermodynamically consistent model, cf.\ \cite{AGG11} for the case with a local free energy.}
Finally,
$\mathcal{L}$ is defined as
\begin{align}\label{eq:defnL}
\mathcal{L} u(x) &= \operatorname{p.v.} \int_{\Omega} (u(x)-u(y))k(x,y,x-y)dy\\\nonumber
&=\lim_{\ensuremath{\varepsilon}\to 0} \int_{\Omega\setminus B_\ensuremath{\varepsilon}(x)} (u(x)-u(y))k(x,y,x-y)dy\qquad \text{for }x\in\Omega
\end{align}
for suitable $u\colon \Omega\to \mathbb{R}$. Here the kernel $k\colon \mathbb{R}^d\times \mathbb{R}^d\times (\mathbb{R}^d\setminus\{0\})\to \mathbb{R}$ is assumed to be $(d+2)$-times continuously differentiable and to satisfy the conditions
\begin{alignat}{2}
k(x,y,z)&=k(y,x,-z)\,, \label{k-ass-one} \\
|\partial_x^\beta\partial_y^\gamma\partial_z^\delta k(x,y,z)| &\leqslant
C_{\beta,\gamma,\delta}|z|^{-d-\alpha-|\delta|} \, , \label{k-ass-two} \\
c_0 |z|^{-d-\alpha} &\leqslant k(x,y,z)\leqslant C_0 |z|^{-d-\alpha} \,. \label{k-ass-three}
\end{alignat}
for all $x,y,z\in\mathbb{R}^d$, $z\neq 0$ and $\beta, \gamma, \delta\in\ensuremath{\mathbb{N}}_0^d$ with $|\beta|+|\gamma|+|\delta|\leqslant d+2$ and some constants $C_{\beta,\gamma,\delta}, c_0,C_0>0$. Here $\alpha$ is the order of the operator, cf.~\cite{AK07}. We restrict ourselves to the case $\alpha \in (1,2)$. If $\omega\in C^{d+2}_{b}(\mathbb{R}^d)$, then $k(x,y,z) = \omega(x,y) |z|^{-d-\alpha}$ is an example of a kernel satisfying the previous assumptions.
We add to our system the boundary and initial conditions
\begin{alignat}{2}\label{eq:5}
\mathbf{v}|_{\partial \Omega} &= 0 &\qquad& \text{on}\ \partial\Omega\times (0,\infty), \\\label{eq:6}
\partial_\mathbf{n} \mu|_{\partial \Omega} &= 0&& \text{on}\ \partial\Omega\times (0,\infty), \\\label{eq:7}
\left(\mathbf{v} , \varphi \right)|_{t=0} &= \left( \mathbf{v}_0 , \varphi_0 \right) &&\text{in}\ \Omega.
\end{alignat}
Here $\partial_\mathbf{n} = \mathbf{n}\cdot \nabla$ and $\mathbf{n}$ denotes the exterior normal at $\partial\Omega$. We note that \eqref{eq:5} is the usual no-slip boundary condition for the velocity field and $\partial_\mathbf{n} \mu |_{\partial \Omega} = 0$ describes that there is no mass flux of the fluid components
through the boundary.
Furthermore we complete the system above by an additional boundary condition for $\varphi$, which will be part of the weak formulation, cf.\ Definition~\ref{defweaksolution} below.
If $\varphi$ is smooth enough (e.g.\, $\varphi(t)\in C^{1, \beta}(\overline{\Omega})$ for every $t\geq 0$) and $k$ fulfills suitable assumptions, then
\begin{equation}\label{b.c.}
\mathbf{n}_{x_{0}}\cdot \nabla \varphi(x_{0}) = 0 \qquad \text{for all }x_0\in\partial\Omega
\end{equation}
where the vector $\mathbf{n}_{x_{0}}$ depends on the interaction kernel $k$, cf.\ \cite[Theorem~6.1]{ABG15}.
The total energy of the system at time $t\geq 0$ is given by
\begin{align} \label{totalenergy}
E_{\mbox{\footnotesize tot}}(\varphi,\mathbf{v})
= E_{\mbox{\footnotesize kin}}(\varphi,\mathbf{v}) + E_{\mbox{\footnotesize free}}(\varphi)
\end{align}
where
\begin{equation*}
E_{\mbox{\footnotesize kin}}(\varphi,\mathbf{v}) = \int_\Omega \rho \frac{|\mathbf{v}|^2}{2} \, dx,\qquad E_{\mbox{\footnotesize free}}(\varphi) = \int_\Omega \Psi(\varphi) \, dx + \frac12\ensuremath{\mathcal{E}}(\varphi,\varphi)
\end{equation*}
are the kinetic energy and the free energy of the mixture, respectively, and
\begin{equation}\label{eq:DefnE}
\ensuremath{\mathcal{E}}(u,v)= \int_\Omega\int_\Omega (u(x)-u(y))(v(x)-v(y))k(x,y,x-y)\, d x\, d y
\end{equation}
for all $u,v\in H^{\frac{\alpha}{2}}(\Omega)$ is the natural bilinear form associated to $\mathcal{L}$, which will also be used to formulate the natural boundary condition for $\varphi$ weakly.
Every sufficiently smooth solution of the system above satisfies the energy identity
\begin{equation*}
\frac{d}{dt} E_{\mbox{\footnotesize tot}}(\varphi,\mathbf{v})= -\int_\Omega 2\eta(\varphi)|D\mathbf{v}|^2\, dx - \int_\Omega m(\varphi)|\nabla \mu |^2\, dx
\end{equation*}
for all $t\geq 0$. This can be shown by testing \eqref{eq:1} with $\mathbf{v}$, \eqref{eq:3} with $\mu$ and \eqref{eq:4} with $\partial_t \varphi$, where the product of $\mathcal{L} \varphi$ and $\partial_t \varphi$ coincides with
\begin{equation*}
\ensuremath{\mathcal{E}}(\varphi(t),\partial_t \varphi(t))
\end{equation*}
under \changes{the same natural boundary condition for $\varphi(t)$ as before, cf. \eqref{b.c.}}.
We consider
a class of singular free energies,
which will be specified below and which includes the homogeneous free energy of the so-called regular solution models used by Cahn and Hilliard~\cite{CahnHilliard}:
\begin{equation}
\label{logpot}\changes{\Psi(\varphi) = \frac{\vartheta}2 \left((1+\varphi)\ln
(1+\varphi)+ (1-\varphi)\ln (1-\varphi)\right) - \frac{\vartheta_c}2 \varphi^2,\quad \varphi \in [-1,1]}
\end{equation}
where
$\changes{0<\vartheta<\vartheta_c}$.
\changes{This choice of the free energies ensures that $\varphi(x,t)\in [-1,1]$ almost everywhere.}
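Indeed, a direct computation gives
\begin{equation*}
\Psi'(\varphi) = \frac{\vartheta}{2}\ln\Big(\frac{1+\varphi}{1-\varphi}\Big) - \vartheta_c \varphi, \qquad
\Psi''(\varphi) = \frac{\vartheta}{1-\varphi^2} - \vartheta_c,
\end{equation*}
so that $\Psi'(\varphi)\to \pm\infty$ as $\varphi\to \pm 1$, which penalizes values of $\varphi$ near $\pm 1$, while $\Psi''(0)=\vartheta-\vartheta_c<0$ shows that $\Psi$ is nonconvex with a double-well structure.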
In order to deal with these terms we apply
techniques, which were developed in Abels and Wilke~\cite{AW07} and extended to the present nonlocal Cahn-Hilliard equation in \cite{ABG15}.
Our proof of existence of a weak solution of \eqref{eq:1}-\eqref{eq:4}
together with a suitable initial and boundary condition follows closely the proof of the main result of \cite{ADG13}. The following are the main differences and difficulties of our paper compared with \cite{ADG13}.
Since we do not expect $H^1$-regularity in space for the volume fraction $\varphi$ of a weak solution of our system, we have to eliminate $\nabla \varphi$ from our weak formulation, using the incompressibility of $\mathbf{v}$.
The implicit time discretization has to be constructed carefully, using a suitable
mollification of $\varphi$ and the addition of a small Laplacian term to the equation for the chemical potential, to account for the lack of $H^1$-regularity in space of $\varphi$. While the arguments for the weak convergence of temporal interpolants of weak solutions of the time-discrete problem are similar to \cite{ADG13}, the function space used for the order parameter has less regularity in space
since the nonlocal operator of order less than 2 is involved in the equation for the chemical potential.
For the convergence of the singular term $\Psi'(\varphi)$, we employ the argument in \cite{ABG15}. The only difference is that we work in space-time domains directly. For the validity of the energy inequality, additional arguments using the equation of chemical potential and the fact that weak convergence together with norm convergence in uniformly convex Banach spaces imply strong convergence are needed.
The structure of the contribution is as follows:
In Section~\ref{prelimi} we present some preliminaries, we fix notations and collect the
needed results on the nonlocal operator. In Section~\ref{secexistence}, we define weak solutions
of our system and state our main result concerning the existence of weak solutions. In Section~\ref{sec:Implicit}, we define an implicit time discretization of our system and
show the existence of weak solutions of an associated time-discrete problem using the Leray-Schauder theorem. In Section~\ref{sec:proof}, we obtain compactness in time of temporal interpolants of the weak solutions of the time-discrete problem and obtain weak solutions of our system as weak limits of a suitable subsequence.
\section{Preliminaries} \label{prelimi}
As usual
$a\otimes b = (a_i b_j)_{i,j=1}^d$ for $a,b\in \mathbb{R}^d$ and $A_{\operatorname{sym}}= \frac12 (A+A^T)$ for $A\in \mathbb{R}^{d\times d}$.
Moreover,
\begin{equation*}
\weight{f,g} \equiv \weight{f,g}_{X',X} = f(g), \qquad f\in X', g\in X
\end{equation*}
denotes the duality product, where $X$ is a Banach space and $X'$ is its dual.
We write $X\hookrightarrow \hookrightarrow Y$ if $X$ is compactly embedded into $Y$.
For a Hilbert space $H$ its inner product is denoted by $(\cdot\,,\cdot )_H$.
\medskip
Let $M\subseteq \mathbb{R}^d$ be measurable. As usual
$L^q(M)$, $1\leq q \leq \infty$, denotes the Lebesgue space, $\|.\|_q$ its norm and $(.\,,.)_{M}=(.\,,.)_{L^2(M)}$ its inner product if $q=2$.
Furthermore $L^q(M;X)$ denotes the set of all strongly measurable $f\colon M\to X$ that are
$q$-integrable if $q<\infty$ and essentially bounded if $q=\infty$. Here $X$ is a Banach
space. If $M=(a,b)$, we denote these spaces for simplicity by $L^q(a,b;X)$ and $L^q(a,b)$.
Recall that $f\colon [0,\infty)\to X$ belongs to $L^q_{\operatorname{loc}}([0,\infty);X)$ if and only if $f\in L^q(0,T;X)$ for every $T>0$.
Furthermore, $L^q_{\operatorname{uloc}}([0,\infty); X)$ is the \emph{uniformly
local} variant of $L^q(0,\infty;X)$ consisting of all strongly measurable $f\colon
[0,\infty)\to X$ such that
\begin{equation*}
\|f\|_{L^q_{\operatorname{uloc}}([0,\infty); X)}= \sup_{t\geq 0}\|f\|_{L^q(t,t+1;X)} <\infty.
\end{equation*}
If $T<\infty$, we define $L^q_{\operatorname{uloc}}([0,T); X) := L^q(0,T;X)$.
For a domain $\Omega \subset \mathbb{R}^d$, $m\in \ensuremath{\mathbb{N}}_0$, $1\leq q\leq \infty$, the standard Sobolev space is denoted by
$W^m_q(\Omega)$.
$W^m_{q,0}(\Omega)$ is the closure of $C^\infty_0(\Omega)$ in $W^m_q(\Omega)$,
$W^{-m}_q(\Omega)= (W^m_{q',0}(\Omega))'$, and $W^{-m}_{q,0}(\Omega)= (W^m_{q'}(\Omega))'$.
$H^s(\Omega)$ denotes the $L^2$-Bessel potential space of order $s\geq 0$.
Let
$
f_\Omega = \frac1{|\Omega|}\int_\Omega f(x) \,dx
$
denote the mean value of $f\in L^1(\Omega)$. For $m\in\mathbb{R}$ we define
\begin{equation*}
L^q_{(m)}(\Omega):=\{f\in L^q(\Omega):f_\Omega=m\}, \qquad 1\leq q\leq \infty.
\end{equation*}
Then the orthogonal projection onto $L^2_{(0)}(\Omega)$ is given by
\begin{align*}
P_0 f:= f-f_\Omega= f-\frac1{|\Omega|}\int_\Omega f(x) \,dx\qquad \text{for all }f\in L^2(\Omega).
\end{align*}
For the following we denote
\begin{equation*}
H^1_{(0)}\equiv H^1_{(0)}(\Omega)= H^1(\Omega)\cap L^2_{(0)}(\Omega), \qquad (c,d)_{H^1_{(0)}(\Omega)} := (\nabla c,\nabla d)_{L^2(\Omega)}.
\end{equation*}
Because of Poincar\'e's inequality, $H^1_{(0)}(\Omega)$ is a Hilbert space.
More generally, we define for $s \geq 0$
\begin{alignat*}{2}
\Hs[s]_{(0)} \equiv \Hs[s]_{(0)}(\Omega) &= \Hs[s](\Omega) \cap L^2_{(0)}(\Omega), &\quad \Hs[-s]_{(0)}(\Omega)&= (\Hs[s]_{(0)}(\Omega))', \\
\Hs[-s]_{0}(\Omega) &= (\Hs[\changes{s}](\Omega))',&\quad \Hs[-s](\Omega) &= (\Hs[s]_{0}(\Omega))'.
\end{alignat*}
Finally, $f\in \Hs[s]_{\operatorname{loc}}(\Omega)$ if and only if $f|_{\Omega'}\in H^s(\Omega')$ for every open and bounded subset $\Omega'$ with $\ol{\Omega'}\subset \Omega$.
\medskip
We denote by $L^2_\sigma(\Omega)$ the closure of $C^\infty_{0,\sigma}(\Omega)$ in $L^2(\Omega)^d$, where
$C^\infty_{0,\sigma}(\Omega)$ is the set of all
divergence free vector fields in $C^\infty_0(\Omega)^d$. The corresponding
Helmholtz projection, i.e., the $L^2$-orthogonal projection onto $L^2_\sigma(\Omega)$, is denoted by $P_\sigma$,
cf.\ e.g.\ Sohr \cite{Soh01}.
\medskip
Let $I=[0,T]$ with $0<T< \infty$ or $I=[0,\infty)$ if $T=\infty$ and let $X$ be a Banach
space. The Banach space of all bounded and continuous
$f\colon I\to X$ is denoted by $BC(I;X)$. It is equipped with the supremum norm. Moreover, $BUC(I;X)$ is defined as the
subspace of all bounded and uniformly continuous functions. Furthermore, $BC_w(I;X)$ is the set of all bounded and weakly
continuous $f\colon I\to X$. $C^\infty_0(0,T;X)$ denotes
the vector space of all smooth functions $f\colon (0,T)\to X$ with $\operatorname{supp}
f\subset\subset (0,T)$.
By definition $f\in W^1_p(0,T;X)$, $1\leq p <\infty$, if and only if $f,
\frac{df}{dt}\in L^p(0,T;X)$.
Furthermore, $W^1_{p,\operatorname{uloc}}([0,\infty);X)$ is defined by replacing $L^p(0,T;X)$ by $L^p_{\operatorname{uloc}}([0,\infty);X)$
and we set $H^1(0,T;X)= W^1_2(0,T;X)$ and $H^1_{\operatorname{uloc}}([0,\infty);X) := W^1_{2,\operatorname{uloc}}([0,\infty);X)$.
Finally, we note:
\begin{lemma}\label{lem:CwEmbedding}
Let $X,Y$ be two Banach spaces such that $Y\hookrightarrow X$ and $X'\hookrightarrow Y'$ densely.
Then $L^\infty(I;Y)\cap BUC(I;X) \hookrightarrow BC_w(I;Y)$.
\end{lemma}
\noindent
For a proof see e.g. Abels \cite{Abe09a}.
\subsection{Properties of the Nonlocal Elliptic Operator $\mathcal{L}$}\label{S:nonlocal_operator}
In the following let $\ensuremath{\mathcal{E}}$ be defined as in \eqref{eq:DefnE}.
Assumptions~\eqref{k-ass-one}--\eqref{k-ass-three} yield that there are positive constants $c$ and $C$ such that
\begin{equation*}
c\norm{H^{\frac{\alpha}{2}}(\Omega)}{u}^{2} \leqslant |\changes{u_{\Omega}}|^2 + \ensuremath{\mathcal{E}}(u,u) \leqslant C \norm{H^{\frac{\alpha}{2}}(\Omega)}{u}^{2} \qquad \text{for all}\,u \in \Hs[\frac{\alpha}{2}](\Omega).
\end{equation*}
This implies that the following norm equivalences hold:
\begin{alignat}{2}
\mathcal{E}(u,u)&\sim \norm{\Hs[\frac{\alpha}{2}](\Omega)}{u}^2 &\qquad& \text{for all}\, u\in\Hs[\frac{\alpha}{2}]_{(0)}(\Omega),\\\label{eq:EquivNorm2}
\mathcal{E}(u,u) + | \changes{u_{\Omega}} |^{2} &\sim \norm{\Hs[\frac{\alpha}{2}](\Omega)}{u}^2 &\qquad& \text{for all}\, u\in\Hs[\frac{\alpha}{2}](\Omega),
\end{alignat}
cf.\ \cite[Lemma 2.4 and Corollary 2.5]{ABG15}.
In the following we will use a variational extension of the nonlocal linear operator $\mathcal{L}$ (see~\eqref{eq:defnL}) by defining $\mathcal{L} \colon \Hs[\frac{\alpha}{2}](\Omega) \to \Hs[-\frac{\alpha}{2}]_0(\Omega)$ as
\begin{equation*}
\longduality{\Hs[-\frac{\alpha}{2}]_0}{\mathcal{L}u}{\varphi}{\Hs[\frac{\alpha}{2}]} = \mathcal{E}(u, \varphi) \quad \text{for all $\varphi \in \Hs[\frac{\alpha}{2}](\Omega)$}.
\end{equation*}
This implies
\begin{equation*}
\duality{\mathcal{L}u}{1} = \mathcal{E}(u, 1) = 0.
\end{equation*}
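This is immediate from the structure of $\ensuremath{\mathcal{E}}$: up to the normalization chosen in \eqref{eq:DefnE}, the form acts on differences, so that testing with the constant function $\varphi\equiv 1$ gives
\begin{equation*}
\ensuremath{\mathcal{E}}(u,1) = c_0 \int_\Omega\int_\Omega (u(x)-u(y))\,(1-1)\,k(x,y,x-y)\, d x\, d y = 0,
\end{equation*}
where $c_0$ denotes the normalizing constant in \eqref{eq:DefnE}; cf.\ also the kernel appearing in \eqref{eq:DomEstim} below.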
We note that $\mathcal{L}$ agrees with~\eqref{eq:defnL} as soon as $u \in \Hs[\alpha]_{\operatorname{loc}}(\Omega)\cap \Hs[\frac{\alpha}2](\Omega)$ and $\varphi\in C_0^\infty(\Omega)$, cf. \cite[Lemma 4.2]{AK07}. But this weak formulation also includes a natural boundary condition for $u$, cf. \cite[Theorem~6.1]{ABG15} for a discussion.
We will also need the following regularity result, which essentially states that the operator $\mathcal{L}$ is of lower order with respect to the usual Laplace operator. This result is from \cite[Lemma 2.6]{ABG15}.
\begin{lemma}\label{L:regularity_regularization}
Let $g \in \Lp_{(0)}(\Omega)$ and $\theta>0$. Then the unique solution
$u \in \Hs_{(0)}(\Omega) $ for the problem
\begin{equation}\label{eq:regularized_nonlocal}
\theta \int_{\Omega} \nabla u \cdot \nabla \varphi \, d x + \mathcal{E}(u, \varphi) = \Ltwoprod{g}{\varphi} \qquad \text{for all }\varphi \in \Hs_{(0)}(\Omega)
\end{equation}
belongs to $\Hs[2]_{\operatorname{loc}}(\Omega) $ and satisfies the estimate
\begin{equation*}
\theta \|\nabla u\|^2_{L^2(\Omega)} + \|u\|_{H^{\alpha/2}(\Omega)}^2 \leqslant C\|g\|_{L^2(\Omega)}^2,
\end{equation*}
where $C$ is independent of $\theta>0$ and $g$.
\end{lemma}
For the following let $\phi\colon [a,b]\to \mathbb{R}$ be continuous and define $\phi(x)=+\infty$ for $x\not\in [a,b]$.
As in \cite[Section~3]{ABG15} we fix $\theta\geqslant 0$
and consider the functional
\begin{equation}\label{eq:DefnF}
F_{\theta} (c) = \frac{\theta}{2} \int_{\Omega} |\nabla c|^2 \, d x + \frac12\ensuremath{\mathcal{E}}(c,c) + \int_{\Omega} \phi( c(x)) \, d x
\end{equation}
where
\begin{eqnarray*}
\operatorname{dom} F_0&=& \left\{c\in H^{\alpha/2}(\Omega)\cap L^2_{(m)}(\Omega): \phi(c)\in L^1(\Omega)\right\},\\
\operatorname{dom} F_\theta&=& H^1(\Omega)\cap \operatorname{dom} F_0\qquad \text{if}\ \theta >0
\end{eqnarray*}
for a given $m\in(a,b)$.
Moreover, we define
\begin{equation*}
\ensuremath{\mathcal{E}}_\theta (u,v)= \theta \int_\Omega \nabla u\cdot \nabla v \, d x +\ensuremath{\mathcal{E}}(u,v)
\end{equation*}
for all $u,v\in H^1(\Omega)$ if $\theta>0$ and $u,v\in H^{\alpha/2}(\Omega)$ if $\theta = 0$.
In the following $\partial F_\theta\colon \operatorname{dom} F_\theta \subseteq L^2_{(m)}(\Omega)\to \mathcal{P}(L^2_{(0)}(\Omega))$ denotes the subgradient of $F_\theta$, i.e., for $c\in \operatorname{dom} F_\theta$ we have $w\in \partial F_\theta(c)$ if and only if
\begin{equation*}
(w,c'-c)_{L^2} \leqslant F_\theta(c')-F_\theta(c)\qquad \text{for all }c'\in L^2_{(m)}(\Omega).
\end{equation*}
The following characterization of $\partial F_\theta(c)$ is an important tool for the existence proof.
\begin{theorem}\label{thm:Regularity}
Let $\phi\colon [a,b]\to \mathbb{R}$ be a convex function that is twice continuously differentiable in $(a,b)$ and satisfies $\lim_{x\to a} \phi'(x)=-\infty$, $\lim_{x\to b} \phi'(x) =+\infty $. Moreover, we set $\phi(x)=+\infty$ for $x\not\in (a,b)$ and let $F_\theta$ be defined as in~\eqref{eq:DefnF}. Then $\partial F_\theta\colon \mathcal{D}(\partial F_\theta)\subseteq L^2_{(m)}(\Omega)\to L^2_{(0)}(\Omega)$ is a single valued, maximal monotone operator with
\begin{eqnarray*}
\mathcal{D}(\partial F_0) &=&\Big\{ c\in H^\alpha_{\operatorname{loc}}(\Omega)\cap H^{\alpha/2}(\Omega)\cap L^2_{(m)}(\Omega): \phi'(c)\in L^2(\Omega),\exists f\in L^2(\Omega):\\
& & \quad \ensuremath{\mathcal{E}}(c,\varphi) + \int_\Omega \phi'(c)\varphi \, dx= \int_{\Omega} f\varphi \, d x \quad \forall \, \varphi \in H^{\alpha/2}(\Omega)
\Big\}
\end{eqnarray*}
if $\theta=0$ and
\begin{eqnarray*}
\mathcal{D}(\partial F_\theta) &=&\Big\{ c\in H^2_{\operatorname{loc}}(\Omega) \cap H^1(\Omega) \cap L^2_{(m)}(\Omega): \phi'(c)\in L^2(\Omega),\exists f\in L^2(\Omega):\\
& &\quad \ensuremath{\mathcal{E}}_\theta(c,\varphi) + \int_\Omega \phi'(c)\varphi \, dx= \int_{\Omega} f\varphi \, d x \quad\, \forall\,\varphi \in H^1(\Omega) \Big\}
\end{eqnarray*}
if $\theta > 0$ as well as
\begin{equation*}
\partial F_\theta (c) = -\theta \Delta c + \mathcal{L} c + P_0\phi'(c)\quad \text{in}\ \mathcal{D}'(\Omega) \qquad \text{for $\theta \geqslant 0$.}
\end{equation*}
Moreover, the following estimates hold
\begin{alignat}{1}\label{eq:DomEstim}
\theta \|c\|_{H^1}^2+ \|c\|_{H^{\alpha/2}}^2+ \|\phi'(c)\|_2^2&\leqslant C\left(\|\partial F_\theta(c)\|_2^2 + \|c\|_2^2+1\right)\\\nonumber
\int_\Omega\int_\Omega (\phi'(c(x))-\phi'(c(y)))&(c(x)-c(y))k(x,y,x-y)\, d x\, d y\\\nonumber
&\leqslant C\left(\|\partial F_\theta(c)\|_2^2 + \|c\|_2^2+1\right)\\\nonumber
\theta \int_\Omega \phi''(c)|\nabla c|^2 \, d x &\leqslant C\left(\|\partial F_\theta(c)\|_2^2 + \|c\|_2^2+1\right)
\end{alignat}
for some constant $C>0$ independent of $c\in \mathcal{D}(\partial F_\theta) $ and $\theta \geqslant 0$.
\end{theorem}
The result follows from \cite[Corollary~3.2 and Theorem~3.3]{ABG15}.
\section{Weak Solutions and Main Result} \label{secexistence}
In this section we define weak solutions for the system \eqref{eq:1}-\eqref{eq:4}, \eqref{eq:5}-\eqref{eq:7} together with a natural boundary condition for $\varphi$ given by the bilinear form $\ensuremath{\mathcal{E}}$, summarize the assumptions and state the main result.
\begin{assumption} \label{assumptions}
Let $\Omega \subset \mathbb{R}^d$, $d=2,3$, be a bounded domain with $C^2$-boundary. The following conditions hold true:
\begin{enumerate}
\item $\rho(\varphi) = \frac{1}{2}(\tilde{\rho}_1 + \tilde{\rho}_2) + \frac{1}{2} (\tilde{\rho}_2 - \tilde{\rho}_1) \varphi$ for all $\varphi \in [-1,1]$.
\item $m \in C^1(\mathbb{R})$, $\eta \in C^0(\mathbb{R})$ and there are constants $m_0,K>0$ such that $0 < m_0 \leq m(s),\eta(s) \leq K$ for
all $s\in\mathbb{R}$.
\item
$\Psi \in C([-1,1]) \cap C^2((-1,1))$ and
\begin{align} \label{assumptionsphi}
\lim_{s \to \pm 1} \Psi'(s) = \pm\infty \,, \quad
\Psi''(s) \geq -\kappa \; \mbox{ for some $\kappa \in \mathbb{R}$} \,.
\end{align}
\end{enumerate}
\end{assumption}
A standard example for a homogeneous free energy density $\Psi$ satisfying the previous conditions is given by \eqref{logpot}.
Since for solutions we will have $\varphi(x,t)\in [-1, 1]$ almost everywhere, we only need the functions $m,\eta$
on this interval. But for simplicity we assume $m,\eta$ to be defined on $\mathbb{R}$.
\begin{definition} \label{defweaksolution}
Let $\mathbf{v}_0 \in L^2_\sigma(\Omega)$ and $\varphi_0 \in H^{\alpha/2}(\Omega)$ with $|\varphi_0| \leq 1$ almost everywhere in $\Omega$ and let Assumption \ref{assumptions} be satisfied. Then $(\mathbf{v},\varphi,\mu)$
such that
\begin{align*}
& \mathbf{v} \in BC_w([0,\infty);L^2_\sigma(\Omega)) \cap L^2(0,\infty;H_0^1(\Omega)^d) \,, \\
& \varphi \in BC_w([0,\infty);H^{\alpha/2}(\Omega)) \cap L^2_{\mbox{\footnotesize uloc}}([0,\infty);H^\alpha_{\operatorname{loc}}(\Omega)) \,, \; \
\Psi'(\varphi) \in L^2_{\mbox{\footnotesize uloc}}([0,\infty);L^2(\Omega)) \,, \\
& \mu \in L^2_{\mbox{\footnotesize uloc}}([0,\infty);H^1(\Omega))
\; \mbox{ with } \; \nabla \mu \in L^2(0,\infty;L^2(\Omega))
\end{align*}
is called a weak solution of \eqref{eq:1}-\eqref{eq:4}, \eqref{eq:5}-\eqref{eq:7}
if the following conditions hold true:
\begin{align}
- \left(\rho \mathbf{v} , \partial_t \boldsymbol{\psi} \right)_{Q}
&+ \left( \operatorname{div}(\rho \mathbf{v} \otimes \mathbf{v}) , \boldsymbol{\psi} \right)_{Q}
+ \left(2 \eta(\varphi) D\mathbf{v} , D\boldsymbol{\psi} \right)_{Q}
- \left( (\mathbf{v} \otimes \widetilde{\mathbf{J}}) , \nabla \boldsymbol{\psi} \right)_{Q} \nonumber \\
&= -\left( \varphi\nabla \mu , \boldsymbol{\psi} \right)_{Q} \label{weakline1}
\end{align}
for all $\boldsymbol{\psi} \in C_0^\infty(\Omega \times (0,\infty))^d$ with $\operatorname{div} \boldsymbol{\psi} = 0$,
\begin{align}
- \left(\varphi , \partial_t \psi \right)_{Q}
+ \left( \mathbf{v} \cdot \nabla \varphi , \psi \right)_{Q}
&= - \left(m(\varphi) \nabla \mu , \nabla \psi \right)_{Q} \label{weakline2} \\
\int_0^\infty \int_\Omega \mu \psi \, dx \, dt = \int_0^\infty \int_\Omega\Psi'(\varphi) \psi \, dx\, dt &+ \int_0^\infty \ensuremath{\mathcal{E}}(\varphi(t),\psi(t))\, dt \label{weakline3}
\end{align}
for all $\psi \in C_0^\infty((0,\infty);C^1(\overline{\Omega}))$ and
\begin{align}
\left.\left( \mathbf{v},\varphi \right)\right|_{t=0} &= \left( \mathbf{v}_0 , \varphi_0 \right) \,. \label{weakline4}
\end{align}
\changes{Recall $ \widetilde{\mathbf{J}} = -\frac{\tilde{\rho}_2 - \tilde{\rho}_1}{2} m(\varphi) \nabla \mu.$}
Finally, the energy inequality
\begin{align}
E_{\mbox{\footnotesize tot}}(\varphi(t),\mathbf{v}(t)) &+ \int_s^t\int_{\Omega} 2 \eta(\varphi) \, |D\mathbf{v}|^2 \, dx\, d\tau
+ \int_s^t\int_{\Omega} m(\varphi) |\nabla \mu|^2 \, dx\, d\tau \nonumber \\
&\leq E_{\mbox{\footnotesize tot}}(\varphi(s),\mathbf{v}(s)) \label{weakline5}
\end{align}
holds true for all $t \in [s,\infty)$ and almost all $s \in [0,\infty)$ (including $s=0$). Here $E_{\mbox{\footnotesize tot}}$ is as in \eqref{totalenergy}.
\end{definition}
The main result of this contribution is:
\begin{theorem}[Existence of Weak Solutions] \label{existenceweaksolution}~\\
Let Assumption \ref{assumptions} hold \changes{and $\alpha\in (1,2)$}.
Then for every $\mathbf{v}_0 \in L^2_\sigma(\Omega)$ and $\varphi_0 \in H^{\alpha/2}(\Omega)$
such that $|\varphi_0| \leq 1$ almost everywhere and $\changes{(\varphi_0)_{\Omega}} \in (-1,1)$ there exists a weak solution $(\mathbf{v},\varphi,\mu)$ of \eqref{eq:1}-\eqref{eq:4}, \eqref{eq:5}-\eqref{eq:7}.
\end{theorem}\changes{
\begin{remark}
Using e.g.\ $\varphi \nabla \mu \in L^2(0,\infty; L^2(\Omega))$ one can consider this term in \eqref{weakline1} as a given right-hand side and obtain the existence of a pressure such that \eqref{eq:1} holds in the sense of distributions, in the same way as for the Navier-Stokes equations alone, cf.\ e.g.\ \cite{Soh01}.
\end{remark}}
\section{Approximation by an Implicit Time Discretization}\label{sec:Implicit}
Let $\Psi$ be as in Assumption~\ref{assumptions}. We define $\Psi_0\colon [-1,1]\to \mathbb{R}$ by $\Psi_0(s)= \Psi(s)+ \kappa \frac{s^2}2$ for all $s\in [-1,1]$. Then $\Psi_0\colon [-1,1]\to\mathbb{R}$ is convex and $\lim_{s\to\pm 1}\Psi'_0(s)= \pm \infty$. A basic idea for the following is to use this decomposition to split the free energy $E_{\mbox{\footnotesize free}}$ into a singular convex part $E$ and a quadratic perturbation. In the equations this yields a decomposition into a singular monotone operator and a linear remainder.
To this end we define an energy $E \colon L^2(\Omega) \to \mathbb{R}\cup\{+\infty\}$
with domain $$\mbox{dom}\, E = \{\varphi \in H^{\alpha/2}(\Omega) \;|\; -1 \leq \varphi \leq 1 \,\mbox{ a.e.} \}$$
given by
\begin{align} \label{helpenergy}
E(\varphi) = \left\{
\begin{array}{cl}
\frac12\ensuremath{\mathcal{E}}(\varphi,\varphi) + \int_\Omega \Psi_0(\varphi) \, dx
& \mbox{for } \; \varphi \in \mbox{dom}\, E \,, \\
+ \infty & \mbox{else} \,.
\end{array}
\right.
\end{align}
This yields the decomposition
\begin{align*}
E_{\mbox{\footnotesize free}}(\varphi) &= E(\varphi) - \frac{{\kappa}}{2} \|\varphi\|_{L^2}^2 \qquad \text{for all }\varphi\in \mbox{dom}\ E.
\end{align*}
Moreover, $E$ is convex and $E=F_0$ if one chooses $\phi=\Psi_0$, with $F_0$ as in Subsection~\ref{S:nonlocal_operator}. This is a key relation for the following analysis, as it allows us to make use of Theorem~\ref{thm:Regularity}, which in particular implies that $\partial E=\partial F_0$ is a maximal monotone operator.
To prove our main result we discretize our system semi-implicitly in time in a suitable manner.
To this end, let $h=\frac{1}{N}$ for $N \in \mathbb{N}$ and $\mathbf{v}_k \in L^2_\sigma(\Omega)$,
$\varphi_k \in H^1(\Omega)$ with $\varphi_k(x)\in [-1,1]$ almost everywhere and $\rho_k =
\frac{1}{2}(\tilde{\rho}_1 + \tilde{\rho}_2) + \frac{1}{2} (\tilde{\rho}_2 - \tilde{\rho}_1) \varphi_k$
be given. Then $\Psi(\varphi_k)\in L^1(\Omega)$. We also define a smoothing operator $P_h$ on $L^2(\Omega)$ as follows.
We choose $u$ as the solution of the following heat equation
\begin{eqnarray*}
\left\{ \begin{array}{rcll}
\partial_t u - \Delta u &=& 0 & \mbox{in } \, \Omega \times (0,T) \,, \\
u|_{t=0} &=& \varphi' & \mbox{in } \, \Omega \,, \\
\left.\partial_\nu u\right|_{\partial \Omega} &=& 0 & \mbox{on } \, \partial \Omega \times (0,T),
\end{array}
\right.
\end{eqnarray*}
where $ \varphi' \in L^2(\Omega) $, and set $ P_h \varphi' := u|_{t=h} $. Then $P_h \varphi'
\in H^2(\Omega)$ and $P_h \varphi' \to \varphi'$ in $L^2(\Omega)$ as $h\to 0$ for all $\varphi'\in L^2(\Omega)$. Moreover, we have $|P_h \varphi'| \leq 1$ in $\Omega$ if $|\varphi'(x)|\leq 1$ almost everywhere, and $P_h \varphi' \to \varphi'$ in $H^{\frac{\alpha}{2}}(\Omega)$ as $h\to 0$ for all $\varphi'\in H^{\frac{\alpha}{2}}(\Omega)$.
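These properties follow from standard facts about the Neumann heat semigroup $(e^{t\Delta_N})_{t\geq 0}$: parabolic smoothing yields $P_h\varphi' = e^{h\Delta_N}\varphi'\in H^2(\Omega)$ for $h>0$, strong continuity of the semigroup on $L^2(\Omega)$ and on $H^{\frac{\alpha}{2}}(\Omega)$ gives the stated convergences, and the comparison principle together with the invariance of constants implies
\begin{equation*}
-1 = e^{h\Delta_N}(-1) \leq P_h\varphi' \leq e^{h\Delta_N} 1 = 1 \qquad \text{in }\Omega \quad \text{if } |\varphi'|\leq 1 \text{ almost everywhere.}
\end{equation*}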
Now we determine $(\mathbf{v},\varphi,\mu)=(\mathbf{v}_{k+1},\varphi_{k+1},\mu_{k+1})$, $k\in\ensuremath{\mathbb{N}}$, successively as solution of the
following problem:
Find $\mathbf{v} \in H_0^1(\Omega)^d \cap L^2_\sigma(\Omega)$, $\varphi \in \mathcal{D}(\partial E)$
and
$$
\mu \in H^2_n(\Omega) = \{u \in H^2(\Omega) \,|\, \left. \partial_\mathbf{n} u \right|_{\partial \Omega} = 0 \mbox{ on } \partial \Omega\},
$$
such that
\begin{align}
\left( \frac{\rho \mathbf{v} - \rho_k \mathbf{v}_k}{h} , \boldsymbol{\psi}\right)_\Omega
&+ \left( \mbox{div}(\rho(P_h \varphi_k) \mathbf{v} \otimes \mathbf{v}) , \boldsymbol{\psi} \right)_{\Omega}
+ \left(2 \eta(\varphi_k) D\mathbf{v} , D \boldsymbol{\psi} \right)_\Omega
+ \left( \mbox{div}( \mathbf{v} \otimes \widetilde{\mathbf{J}}) , \boldsymbol{\psi} \right)_{\Omega} \nonumber \\
&= -\left((P_h \varphi_k) \nabla \mu , \boldsymbol{\psi} \right)_\Omega \label{timediscretizationline1}
\end{align}
for all $\boldsymbol{\psi} \in C_{0,\sigma}^\infty(\Omega)$,
\begin{align}
&\frac{\varphi - \varphi_k}{h} + \mathbf{v} \cdot \nabla P_h \varphi_k = \mbox{div} \left( m(P_h \varphi_k) \nabla \mu \right)
\; \mbox{ almost everywhere in $\Omega$ }, \label{timediscretizationline2}
\end{align}
and
\begin{align}
& \int_\Omega \left(\mu + {\kappa} \, \frac{\varphi + \varphi_k}{2}\right)\psi \, dx
= \ensuremath{\mathcal{E}}(\varphi,\psi) +\int_\Omega {\Psi}_0'(\varphi)\psi\, dx + h \int_\Omega \nabla \varphi \cdot \nabla \psi \, dx \label{timediscretizationline3}
\end{align}
for all $\psi\in H^{\alpha/2}(\Omega)$, where
\begin{align*}
\widetilde{\mathbf{J}} \equiv \widetilde{\mathbf{J}}_{k+1} := - \tfrac{\tilde{\rho}_2 - \tilde{\rho}_1}{2} m(P_h \varphi_k) \nabla \mu_{k+1}
= - \tfrac{\tilde{\rho}_2 - \tilde{\rho}_1}{2} m(P_h \varphi_k) \nabla \mu \,.
\end{align*}
For the following let
\begin{align}
E_{\mbox{\footnotesize tot}, h}(\varphi, \mathbf{v})=\int_{\Omega} \rho \frac{|\mathbf{v}|^2}{2} \, dx + \int_{\Omega} \Psi(\varphi) \, dx + \frac12 \ensuremath{\mathcal{E}}(\varphi, \varphi) + \frac{h}{2} \int_{\Omega} |\nabla \varphi|^2 \, dx
\end{align}
denote the total energy of the system \eqref{timediscretizationline1}-\eqref{timediscretizationline3}.
\begin{remark}
\begin{enumerate}
\item As in \cite{ADG13} we obtain the important relation
\begin{align*}
-\frac{\rho - \rho_k}{h} - \mathbf{v} \cdot \nabla \rho(P_h \varphi_k) = \operatorname{div} \widetilde{\mathbf{J}} \,,
\end{align*}
by multiplying \eqref{timediscretizationline2} by $-\frac{\tilde{\rho}_2 - \tilde{\rho}_1}{2}= -\frac{\partial\rho(\varphi)}{\partial \varphi}$.
Because of $\operatorname{div}(\mathbf{v} \otimes \widetilde{\mathbf{J}}) = (\operatorname{div} \widetilde{\mathbf{J}}) \mathbf{v}
+ \left(\widetilde{\mathbf{J}} \cdot \nabla \right) \mathbf{v}$ this yields that
\begin{align} \label{equivtimediscretizationline1}
&\left( \frac{\rho \mathbf{v} - \rho_k \mathbf{v}_k}{h} , \boldsymbol{\psi}\right)_\Omega
+ \left( \operatorname{div}(\rho(P_h \varphi_k) \mathbf{v} \otimes \mathbf{v}) , \boldsymbol{\psi} \right)_{\Omega}
+ \left(2 \eta(\varphi_k) D\mathbf{v} , D \boldsymbol{\psi} \right)_\Omega \\
+& \left( \left(\operatorname{div} \widetilde{\mathbf{J}} - \frac{\rho - \rho_k}{h} - \mathbf{v} \cdot \nabla \rho(P_h \varphi_k) \right) \frac{\mathbf{v}}{2}
, \boldsymbol{\psi} \right)_{\Omega}
+ \left( \left( \widetilde{\mathbf{J}} \cdot \nabla \right) \mathbf{v} , \boldsymbol{\psi} \right)_{\Omega}
= - \left( (P_h \varphi_k) \nabla \mu , \boldsymbol{\psi} \right)_\Omega \nonumber
\end{align}
for all $\boldsymbol{\psi} \in C_{0,\sigma}^\infty(\Omega)$ to \eqref{timediscretizationline1}, which will be used to derive suitable
a-priori estimates.
\item Integrating \eqref{timediscretizationline2} in space one obtains $\int_\Omega \varphi \, dx = \int_\Omega \varphi_k \, dx$ because of $\operatorname{div} \, \mathbf{v} = 0$ and the boundary
conditions.
\end{enumerate}
\end{remark}
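For the reader's convenience we verify the relation in the first part of the remark: multiplying \eqref{timediscretizationline2} by $-\frac{\tilde{\rho}_2-\tilde{\rho}_1}{2}$ and using $\rho-\rho_k = \frac{\tilde{\rho}_2-\tilde{\rho}_1}{2}(\varphi-\varphi_k)$ as well as $\nabla\rho(P_h\varphi_k) = \frac{\tilde{\rho}_2-\tilde{\rho}_1}{2}\nabla P_h\varphi_k$, we obtain
\begin{equation*}
-\frac{\rho-\rho_k}{h} - \mathbf{v}\cdot\nabla\rho(P_h\varphi_k)
= -\frac{\tilde{\rho}_2-\tilde{\rho}_1}{2}\operatorname{div}\big(m(P_h\varphi_k)\nabla\mu\big)
= \operatorname{div}\widetilde{\mathbf{J}}
\end{equation*}
by the definition of $\widetilde{\mathbf{J}}_{k+1}$.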
The following lemma is important to control the derivative of the singular free energy density $\Psi'(\varphi)$.
\begin{lemma} \label{derivativeforchempot}
Let $\varphi \in \mathcal{D}(\partial F_h)$ and
$\mu \in H^1(\Omega)$ be a solution of \eqref{timediscretizationline3} for given $\varphi_k \in H^1(\Omega)$
with $|\varphi_k(x)| \leq 1$ almost everywhere in $\Omega$ such that
\begin{align*}
\changes{\varphi_{\Omega}=}\tfrac{1}{|\Omega|} \int_\Omega \varphi \, dx = \tfrac{1}{|\Omega|} \int_\Omega \varphi_k \, dx \in (-1,1) \,.
\end{align*}
Then there is a constant $C=C(\int_\Omega \varphi_k \, dx, \Omega)>0$, independent of $\varphi$ and $\mu$, such that
\begin{align*}
\| \Psi_0'(\varphi) \|_{L^2(\Omega)} + \left| \int_\Omega \mu \, dx \right|
& \leq C (\|\nabla \mu\|_{L^2} + \|\nabla \varphi\|_{L^2}^2 + 1) \; \mbox{and} \\
\|\partial F_h(\varphi) \|_{L^2(\Omega)} & \leq C \left( \|\mu\|_{L^2} + 1 \right) \,.
\end{align*}
\end{lemma}
\begin{proof}
The proof is an adaptation of the corresponding result in \cite{ADG13}. For the convenience of the reader we give the details.
First we choose $\psi = \varphi - \changes{\varphi_{\Omega}}$ in \eqref{timediscretizationline3} and get
\begin{align} \label{testedwithFanddiff}
&\int_\Omega \mu (\varphi-\changes{\varphi_{\Omega}} ) \, dx
+ \int_\Omega \kappa \frac{\varphi + \varphi_k}{2} (\varphi - \changes{\varphi_{\Omega}}) \, dx \nonumber \\
=& \ensuremath{\mathcal{E}}(\varphi, \varphi)
+ \int_\Omega \Psi_0'(\varphi) (\varphi - \changes{\varphi_{\Omega}}) \, dx \, +h \int_{\Omega} \nabla \varphi \cdot \nabla \varphi \, dx \,.
\end{align}
Let $\mu_0 = \mu - \changes{\mu_{\Omega}}$. Then $\int_\Omega \mu (\varphi - \changes{\varphi_{\Omega}}) \, dx = \int_\Omega \mu_0 \varphi \, dx$.
In order to estimate the second term on the right-hand side of \eqref{testedwithFanddiff} we
use that $\changes{\varphi_{\Omega}} \in (-1+\varepsilon,1-\varepsilon)$ for sufficiently small $\varepsilon > 0$ and that $\lim_{s \to \pm1} \Psi_0'(s) = \pm \infty$. Hence for sufficiently small $\varepsilon$ one obtains the inequality
$\Psi_0'(\varphi) (\varphi - \changes{\varphi_{\Omega}}) \geq C_\varepsilon |\Psi_0'(\varphi)| - \tilde{C}_\varepsilon$, which implies
\begin{align*}
\int_\Omega \Psi_0'(\varphi) (\varphi - \changes{\varphi_{\Omega}}) \, dx
\geq C \int_\Omega |\Psi_0'(\varphi)| \, dx - C_1 \,.
\end{align*}
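The pointwise inequality used above can be verified as follows: choose $\varepsilon>0$ so small that $\changes{\varphi_{\Omega}}\in(-1+2\varepsilon,1-2\varepsilon)$, that $\Psi_0'\geq 0$ on $[1-\varepsilon,1)$ and that $\Psi_0'\leq 0$ on $(-1,-1+\varepsilon]$; this is possible since $\lim_{s\to\pm 1}\Psi_0'(s)=\pm\infty$. Then
\begin{equation*}
\Psi_0'(\varphi)(\varphi-\changes{\varphi_{\Omega}}) \geq
\begin{cases}
\varepsilon\, |\Psi_0'(\varphi)| & \text{where } |\varphi|\geq 1-\varepsilon,\\
-2 \sup_{|s|\leq 1-\varepsilon} |\Psi_0'(s)| & \text{where } |\varphi| < 1-\varepsilon,
\end{cases}
\end{equation*}
since on the first set $\varphi-\changes{\varphi_{\Omega}}$ and $\Psi_0'(\varphi)$ have the same sign and $|\varphi-\changes{\varphi_{\Omega}}|\geq\varepsilon$. This yields the inequality with $C_\varepsilon=\varepsilon$ and a suitable $\tilde{C}_\varepsilon$.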
Together with \eqref{testedwithFanddiff} we obtain
\begin{align*}
\int_\Omega |\Psi_0'(\varphi)| \, dx
&\leq \; C \|\mu_0\|_{L^2(\Omega)} \|\varphi\|_{L^2(\Omega)}
+ C \int_{\Omega} \frac{\kappa}{2} | \varphi + \varphi_k| | \varphi - \changes{\varphi_{\Omega}} | \, dx
+ C_1 \\
&\leq \; C (\|\mu_0\|_{L^2(\Omega)} + \|\varphi\|_{L^2(\Omega)}^2 + 1) \\
&\leq \; C (\|\nabla \mu\|_{L^2(\Omega)} + 1) \,,
\end{align*}
because of $|\varphi|$, $|\varphi_k| \leq 1$.
Next we choose $ \psi \equiv 1$ in \eqref{timediscretizationline3}. This yields
\begin{align*}
\int_\Omega \mu \, dx =
\int_\Omega \Psi_0'(\varphi) \, dx
- \int_\Omega \frac{\kappa}{2} \left(\varphi + \varphi_k \right) \, dx \,.
\end{align*}
Altogether this leads to
\begin{align*}
\left| \int_\Omega \mu \, dx \right| \leq &
C (\|\nabla \mu\|_{L^2(\Omega)} + 1) \,.
\end{align*}
Finally, the estimates of $\partial F_h(\varphi)$ and $\Psi_0'(\varphi)$ in $L^2(\Omega)$
follow directly from \eqref{timediscretizationline3} and \eqref{eq:DomEstim}.
\end{proof}
Now we will prove existence of a solution to the time-discrete system.
We basically follow the lines of the corresponding arguments in \cite{ADG13}. As before we denote
\begin{equation*}
H^2_n(\Omega):=\{u\in H^2(\Omega):\mathbf{n}\cdot \nabla u|_{\partial\Omega}=0\}.
\end{equation*}
\begin{lemma} \label{timediscreteexistence}
For every $\mathbf{v}_k \in L^2_\sigma(\Omega)$, $\varphi_k \in H^1(\Omega)$ with $|\varphi_k(x)|\leq 1$ almost everywhere,
and $\rho_k = \frac{1}{2} (\tilde{\rho}_1 + \tilde{\rho}_2) + \frac{1}{2} (\tilde{\rho}_2 - \tilde{\rho}_1) \varphi_k$
there is some solution $(\mathbf{v},\varphi,\mu) \in \left( H^1_0(\Omega)^d \cap L^2_\sigma(\Omega) \right)
\times \mathcal{D}(\partial F_h) \times H^2_n(\Omega)$ of the system \eqref{timediscretizationline2}-\eqref{timediscretizationline3} and \eqref{equivtimediscretizationline1}. Moreover, the solution
satisfies the discrete energy estimate
\begin{align} \label{discreteenergyestimate}
E_{\mbox{\footnotesize tot,h}}&(\varphi,\mathbf{v}) + \int_\Omega \rho_k \frac{|\mathbf{v} - \mathbf{v}_k|^2}{2} \, dx
+ \int_\Omega \frac{|\nabla \varphi - \nabla \varphi_k|^2}{2} \, dx
+ \frac12 \ensuremath{\mathcal{E}}(\varphi - \varphi_k, \varphi - \varphi_k)
\nonumber \\
&+ h \int_\Omega 2 \eta(\varphi_k) |D\mathbf{v}|^2 \, dx + h \int_\Omega m(\varphi_k) |\nabla \mu|^2 \, dx
\leq E_{\mbox{\footnotesize tot,h}}(\varphi_k,\mathbf{v}_k) \,.
\end{align}
\end{lemma}
\begin{proof}
As first step we prove the energy estimate \eqref{discreteenergyestimate} for any solution
$(\mathbf{v},\varphi,\mu) \in \left( H^1_0(\Omega)^d \cap L^2_\sigma(\Omega) \right) \times \mathcal{D}(\partial F_h) \times H^2_n(\Omega)$
of \eqref{timediscretizationline2}-\eqref{timediscretizationline3} and
\eqref{equivtimediscretizationline1}.
We choose $\boldsymbol{\psi} = \mathbf{v}$ in \eqref{equivtimediscretizationline1} and use that
\begin{align*}
\int_\Omega \left( (\mbox{div} \, \widetilde{\mathbf{J}}) \frac{\mathbf{v}}{2} + \left( \widetilde{\mathbf{J}} \cdot \nabla \right) \mathbf{v} \right) \cdot \mathbf{v} \, dx
= \int_\Omega \mbox{div} \left( \widetilde{\mathbf{J}} \frac{|\mathbf{v}|^2}{2} \right) \, dx = 0.
\end{align*}
Then we derive as in \cite[Proof of Lemma 4.3]{ADG13}
\begin{align*}
\int_\Omega & \left( \mbox{div}(\rho(P_h \varphi_k) \mathbf{v} \otimes \mathbf{v}) - (\nabla \rho(P_h \varphi_k) \cdot \mathbf{v}) \frac{\mathbf{v}}{2} \right) \cdot \mathbf{v} \, dx
= \int_\Omega \mbox{div} \left( \rho(P_h \varphi_k) \mathbf{v} \frac{|\mathbf{v}|^2}{2} \right) \, dx
= 0 \,,
\end{align*}
due to $\operatorname{div} \mathbf{v} = 0$. Next
one easily gets
\begin{align*}
\frac{1}{h} \left(\rho \mathbf{v} - \rho_k \mathbf{v}_k \right) \cdot \mathbf{v} = \frac{1}{h} \left( \rho \frac{|\mathbf{v}|^2}{2} - \rho_k \frac{|\mathbf{v}_k|^2}{2} \right)
+ \frac{1}{h} \, (\rho - \rho_k) \, \frac{|\mathbf{v}|^2}{2}
+ \frac{1}{h} \rho_k \frac{|\mathbf{v}-\mathbf{v}_k|^2}{2} \,.
\end{align*}
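The last identity can be checked directly: since
\begin{align*}
(\rho\mathbf{v}-\rho_k\mathbf{v}_k)\cdot\mathbf{v}
&= \rho|\mathbf{v}|^2 - \rho_k\,\mathbf{v}_k\cdot\mathbf{v}\\
&= \left(\rho\frac{|\mathbf{v}|^2}{2} - \rho_k\frac{|\mathbf{v}_k|^2}{2}\right)
+ (\rho-\rho_k)\frac{|\mathbf{v}|^2}{2}
+ \rho_k\,\frac{|\mathbf{v}|^2 - 2\,\mathbf{v}\cdot\mathbf{v}_k + |\mathbf{v}_k|^2}{2}
\end{align*}
and the last summand equals $\rho_k\frac{|\mathbf{v}-\mathbf{v}_k|^2}{2}$, dividing by $h$ gives the claimed decomposition.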
Therefore \eqref{equivtimediscretizationline1} with $\boldsymbol{\psi} = \mathbf{v}$ yields
\begin{align} \label{testline1}
0 = \int_\Omega \frac{\rho |\mathbf{v}|^2 - \rho_k |\mathbf{v}_k|^2}{2h} \, dx
+ \int_\Omega \rho_k \frac{|\mathbf{v}-\mathbf{v}_k|^2}{2h} \, dx
+ \int_\Omega 2 \eta(\varphi_k) |D\mathbf{v}|^2 \, dx
+ \int_\Omega P_h \varphi_k \nabla \mu \cdot \mathbf{v} \, dx \,.
\end{align}
Moreover, multiplying \eqref{timediscretizationline2} with $\mu$ and using the boundary condition for $\mu$, one concludes
\begin{align} \label{testline2}
0 = \int_\Omega \frac{\varphi - \varphi_k}{h} \, \mu \, dx
+ \int_\Omega (\mathbf{v} \cdot \nabla P_h \varphi_k) \, \mu \, dx
+ \int_\Omega m(P_h \varphi_k) |\nabla \mu|^2 \, dx \,.
\end{align}
Furthermore choosing $\psi=\frac{1}{h} (\varphi - \varphi_k)$ in \eqref{timediscretizationline3} we obtain
\begin{align} \label{testline3}
0 = & \int_\Omega \nabla \varphi \cdot \nabla (\varphi - \varphi_k) \, dx
+ \int_\Omega {\Psi}_0'(\varphi) \frac{\varphi - \varphi_k}{h} \, dx
+ \frac{1}{h} \ensuremath{\mathcal{E}}(\varphi, \varphi-\varphi_k)
\nonumber \\
& - \int_\Omega \mu \, \frac{\varphi - \varphi_k}{h} \, dx
- \int_\Omega {\kappa} \frac{\varphi^2 - \varphi_k^2}{2h} \, dx \,.
\end{align}
Summation of \eqref{testline1}-\eqref{testline3} yields
\begin{align*}
0 =& \int_\Omega \frac{\rho |\mathbf{v}|^2 - \rho_k |\mathbf{v}_k|^2}{2h} \, dx
+ \int_\Omega \rho_k \frac{|\mathbf{v}-\mathbf{v}_k|^2}{2h} \, dx
+ \int_\Omega 2 \eta(\varphi_k) |D\mathbf{v}|^2 \, dx
+ \int_\Omega m(P_h \varphi_k) |\nabla \mu|^2 \, dx \\
& + \int_\Omega {\Psi}_0'(\varphi) \frac{\varphi-\varphi_k}{h} \, dx
- \int_\Omega {\kappa} \frac{\varphi^2 - \varphi_k^2}{2h} \, dx \\
& + \int_\Omega \nabla \varphi \cdot \nabla (\varphi - \varphi_k) \, dx
+ \frac{1}{h} \ensuremath{\mathcal{E}}(\varphi, \varphi - \varphi_k)
\\
\geq & \int_\Omega \frac{\rho |\mathbf{v}|^2 - \rho_k |\mathbf{v}_k|^2}{2h} \, dx
+ \int_\Omega \rho_k \frac{|\mathbf{v}-\mathbf{v}_k|^2}{2h} \, dx
+ \int_\Omega 2 \eta(\varphi_k) |D\mathbf{v}|^2 \, dx
+ \int_\Omega m(P_h \varphi_k) |\nabla \mu|^2 \, dx \\
& + \frac{1}{h} \int_\Omega \left( {\Psi}_0(\varphi) - {\Psi}_0(\varphi_k) \right) \, dx
- \int_\Omega \frac{{\kappa}}{2} \, \frac{\varphi^2 - \varphi_k^2}{h} \, dx \\
& + \int_\Omega \frac{|\nabla \varphi - \nabla \varphi_k|^2}{2} \, dx
+ \int_\Omega \left( \frac{|\nabla \varphi|^2}{2} - \frac{|\nabla \varphi_k|^2}{2} \right) dx \\
& + \frac{1}{h} \frac{\ensuremath{\mathcal{E}}(\varphi, \varphi)}{2} - \frac{1}{h}\frac{\ensuremath{\mathcal{E}}(\varphi_k, \varphi_k)}{2} + \frac{1}{h}\frac{\ensuremath{\mathcal{E}}(\varphi-\varphi_k, \varphi-\varphi_k)}{2} \,,
\end{align*}
because of $\int_\Omega P_h \varphi_k \nabla \mu \cdot \mathbf{v} \, dx= - \int_\Omega (\mathbf{v}\cdot \nabla P_h \varphi_k) \mu \, dx$,
\begin{align*}
{\Psi}_0'(\varphi) \, (\varphi - \varphi_k) &\geq {\Psi}_0(\varphi) - {\Psi}_0(\varphi_k) \,, \\
\nabla \varphi \cdot \nabla (\varphi - \varphi_k) &= \frac{|\nabla \varphi|^2}{2} - \frac{|\nabla \varphi_k|^2}{2}
+ \frac{|\nabla \varphi - \nabla \varphi_k|^2}{2}\,, \quad \mbox{and} \\
\ensuremath{\mathcal{E}}(\varphi, \varphi - \varphi_k) &= \frac{\ensuremath{\mathcal{E}}(\varphi, \varphi)}{2} - \frac{\ensuremath{\mathcal{E}}(\varphi_k, \varphi_k)}{2}
+ \frac{\ensuremath{\mathcal{E}}(\varphi - \varphi_k, \varphi - \varphi_k)}{2} \, .
\end{align*}
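These elementary relations can be verified directly: the first is the convexity inequality $\Psi_0'(s)(s-t)\geq \Psi_0(s)-\Psi_0(t)$ for the convex function $\Psi_0$, the second is the polarization identity for $|\cdot|^2$, and the third follows from the bilinearity and symmetry of $\ensuremath{\mathcal{E}}$, since
\begin{equation*}
\frac{\ensuremath{\mathcal{E}}(\varphi, \varphi)}{2} - \frac{\ensuremath{\mathcal{E}}(\varphi_k, \varphi_k)}{2} + \frac{\ensuremath{\mathcal{E}}(\varphi - \varphi_k, \varphi - \varphi_k)}{2}
= \ensuremath{\mathcal{E}}(\varphi,\varphi) - \ensuremath{\mathcal{E}}(\varphi,\varphi_k)
= \ensuremath{\mathcal{E}}(\varphi, \varphi-\varphi_k)
\end{equation*}
by expanding $\ensuremath{\mathcal{E}}(\varphi-\varphi_k,\varphi-\varphi_k)=\ensuremath{\mathcal{E}}(\varphi,\varphi)-2\ensuremath{\mathcal{E}}(\varphi,\varphi_k)+\ensuremath{\mathcal{E}}(\varphi_k,\varphi_k)$.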
This shows \eqref{discreteenergyestimate}.
We will prove existence of weak solutions with the aid of the Leray-Schauder principle.
In order to obtain a suitable reformulation of our time-discrete system we define operators
$\mathcal{L}_{k},\mathcal{F}_k : X \to Y$, where
\begin{align*}
X &= \left(H^1_0(\Omega)^d \cap L^2_\sigma(\Omega) \right) \times \mathcal{D}(\partial F_h) \times H^2_n(\Omega) \,, \\
Y &= \left(H^1_0(\Omega)^d \cap L^2_\sigma(\Omega) \right)' \times L^2(\Omega) \times L^2(\Omega)
\end{align*}
and
\begin{align*}
\mathcal{L}_k (\mathbf{w}) = \begin{pmatrix}
L_k(\mathbf{v}) \\
-\mbox{div}(m(P_h \varphi_k) \nabla \mu) + \int_\Omega \mu \, dx \\
\varphi + \partial F_h(\varphi)
\end{pmatrix}
\end{align*}
for every $\mathbf{w} = (\mathbf{v},\varphi,\mu) \in X$ and
\begin{align*}
\left< L_k(\mathbf{v}),\boldsymbol{\psi} \right> &= \int_\Omega 2 \eta(\varphi_k) D\mathbf{v} : D\boldsymbol{\psi} \, dx \quad \mbox{for all} \;
\boldsymbol{\psi} \in H^1_0(\Omega)^d \cap L^2_\sigma(\Omega).
\end{align*}
Moreover we define
\begin{align*}
\mathcal{F}_k(\mathbf{w}) = \begin{pmatrix}
- \frac{\rho \mathbf{v} - \rho_k \mathbf{v}_k}{h} - \mbox{div}(\rho(P_h \varphi_k) \mathbf{v} \otimes \mathbf{v}) - \nabla \mu P_h \varphi_k
- \left(\operatorname{div} \widetilde{\mathbf{J}} - \frac{\rho - \rho_k}{h} - \mathbf{v} \cdot \nabla \rho(P_h \varphi_k) \right) \frac{\mathbf{v}}{2}
- \left( \widetilde{\mathbf{J}} \cdot \nabla \right) \mathbf{v} \\
-\frac{\varphi - \varphi_k}{h} - \mathbf{v} \cdot \nabla P_h \varphi_k + \int_\Omega \mu \, dx \rule{0cm}{0,5cm} \\
\varphi + \mu + {\kappa} \frac{\varphi+\varphi_k}{2} \rule{0cm}{0,6cm}
\end{pmatrix}
\end{align*}
for $\mathbf{w} = (\mathbf{v},\varphi,\mu) \in X$.
By construction $\mathbf{w} = (\mathbf{v},\varphi,\mu) \in X$ is a solution of \eqref{timediscretizationline1}-\eqref{timediscretizationline3} if and only if
\begin{align*}
\mathcal{L}_k (\mathbf{w}) - \mathcal{F}_k(\mathbf{w}) = 0 \,.
\end{align*}
In \cite[Section~4.2]{ADG13} it is shown that
\begin{equation*}
L_k \colon H^1_0(\Omega)^d \cap L^2_\sigma(\Omega) \to \left(H^1_0(\Omega)^d \cap L^2_\sigma(\Omega)\right)'
\end{equation*}
is invertible
and that for every $f \in L^2(\Omega)$
\begin{align} \label{ell1}
- \mbox{div}(m(P_h \varphi_k) \nabla \mu) + \int_\Omega \mu \, dx = f \, \mbox{ in } \Omega \,, \quad
\left. \partial_\mathbf{n} \mu \right|_{\partial \Omega} = 0
\end{align}
has a unique solution $\mu \in H^2_n(\Omega)$. This follows from the Lax-Milgram Theorem and elliptic regularity theory.
Moreover, in \cite[Section~4.2]{ADG13} the estimate
\begin{align} \label{H2estimatemu}
\|\mu\|_{H^2(\Omega)} \leq \changes{C_k} \left( \|\mu\|_{H^1(\Omega)} + \|f\|_{L^2(\Omega)} \right)
\end{align}
is shown.
Because of Theorem~\ref{thm:Regularity}, $\partial F_h$ is maximal monotone and therefore
\begin{align*}
I + \partial F_h : \mathcal{D}(\partial F_h) \rightarrow L^2(\Omega)
\end{align*}
is invertible. Moreover, $ (I+\partial F_h)^{-1}\colon L^2(\Omega) \to H^{1}(\Omega)$ is continuous, which can be shown as in the proof of Proposition 7.5.5 in~\cite{Abe07}. Since now a nonlocal operator is involved, we provide the details for the convenience of the reader. Let $f_l \to f$ in $L^2(\Omega)$ as $l\to\infty$, where
$f_l = u_l + \partial F_h(u_l)$ and $f = u + \partial F_h(u)$. Then
$u_l \to u$ in $H^1(\Omega)$ since
\begin{align*}
\|u_l - u\|_{L^2}^2 + h \|\nabla u_l - \nabla u\|_{L^2}^2 + \ensuremath{\mathcal{E}}(u_l - u, u_l - u)
& \leq \|u_l - u\|_{L^2}^2 + \left( \partial F_h(u_l) - \partial F_h(u) , u_l - u \right)_{L^2} \\
& \leq \|u_l + \partial F_h(u_l) - (u + \partial F_h(u)) \|_{L^2} \, \|u_l - u\|_{L^2} \\
& \leq \frac{1}{2} \|f_l - f\|_{L^2}^2 + \frac{1}{2} \|u_l - u\|_{L^2}^2 \,.
\end{align*}
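Indeed, absorbing the term $\frac{1}{2} \|u_l - u\|_{L^2}^2$ into the left-hand side, this chain of inequalities yields
\begin{align*}
\frac{1}{2} \|u_l - u\|_{L^2}^2 + h \|\nabla u_l - \nabla u\|_{L^2}^2 + \ensuremath{\mathcal{E}}(u_l - u, u_l - u)
\leq \frac{1}{2} \|f_l - f\|_{L^2}^2 \to_{l\to\infty} 0 \,,
\end{align*}
which gives $u_l \to u$ in $H^1(\Omega)$ since $h > 0$ is fixed.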
Altogether $\mathcal{L}_k : X \to Y$ is invertible with continuous inverse $\mathcal{L}_k^{-1} : Y \to X$.
We introduce the following auxiliary Banach spaces
\begin{align*}
\widetilde{X} &\mathrel{\mathop:\!\!=} \left( H_0^1(\Omega)^d \cap L_\sigma^2(\Omega) \right)
\times H^{1}(\Omega) \times H^2_n(\Omega) \,, \\
\widetilde{Y} &\mathrel{\mathop:\!\!=} L^{\frac{3}{2}}(\Omega)^d \times W^{1}_{\frac{3}{2}}(\Omega) \times H^1(\Omega) \,
\end{align*}
in order to obtain a completely continuous mapping in the following.
Because of the considerations above $\mathcal{L}_k^{-1} : Y \to \widetilde{X}$ is continuous.
Because of the compact embedding $\widetilde{Y} \hookrightarrow \hookrightarrow Y$,
$\mathcal{L}_k^{-1} : \widetilde{Y} \to \widetilde{X}$ is compact.
Next we show that $\mathcal{F}_k : \widetilde{X} \to \widetilde{Y}$ is
continuous and bounded. To this end one uses the estimates:
\begin{align*}
\|\rho \mathbf{v}\|_{L^{\frac{3}{2}}(\Omega)} &\leq C \|\mathbf{v}\|_{H^1(\Omega)} (\|\varphi\|_{L^2(\Omega)} + 1) \,, &
\|\mbox{div}(\rho(P_h \varphi_k) \mathbf{v} \otimes \mathbf{v}) \|_{L^{\frac{3}{2}}(\Omega)} &\leq C_k \|\mathbf{v}\|^2_{H^1(\Omega)} \,, \\
\|\nabla \mu P_h \varphi_k \|_{L^{\frac{3}{2}}(\Omega)} &\leq C_k \|\nabla \mu\|_{L^2(\Omega)} \,, &
\|(\operatorname{div}\widetilde{\mathbf{J}}) \mathbf{v}\|_{L^{\frac{3}{2}}(\Omega)} &\leq C_k \|\mathbf{v}\|_{H^1(\Omega)} \|\mu\|_{H^2(\Omega)} \,, \\
\|(\widetilde{\mathbf{J}} \cdot \nabla) \mathbf{v}\|_{L^{\frac{3}{2}}(\Omega)} &\leq C \|\mathbf{v}\|_{H^1(\Omega)} \|\mu\|_{H^2(\Omega)} \,, &
\|\mathbf{v} \cdot \nabla \varphi_k\|_{W^{1}_{\frac{3}{2}}(\Omega)} &\leq C_k \|\mathbf{v}\|_{H^1(\Omega)} \,.
\end{align*}
Note that $P_h \varphi_k$ and therefore $\rho(P_h \varphi_k)$
belong to $H^2(\Omega)$.
More precisely:
\begin{enumerate}
\item For the estimate of $\operatorname{div}(\rho (P_h \varphi_k) \mathbf{v} \otimes \mathbf{v})$ in $L^{\frac{3}{2}}(\Omega)$, one has to estimate terms of the form $\rho(P_h \varphi_k) \partial_l \mathbf{v}_i \mathbf{v}_j$
in $L^{\frac{3}{2}}(\Omega)$, which are products of functions in $L^\infty(\Omega)$, $L^2(\Omega)$ and $L^6(\Omega)$ and hence bounded in $L^{\frac{3}{2}}(\Omega)$.
Moreover, there are terms of the form $\partial_l \rho(P_h \varphi_k) \mathbf{v}_i \mathbf{v}_j$, where each factor belongs to $L^6(\Omega)$.
\item To estimate $(\operatorname{div}\widetilde{\mathbf{J}}) \mathbf{v}$ in $L^{\frac{3}{2}}(\Omega)$ one has terms of the form $m'(P_h \varphi_k) \partial_i P_h \varphi_k \partial_j \mu \mathbf{v}_l$ and of the form $m(P_h \varphi_k) \partial_i \partial_j \mu \mathbf{v}_l$. For the first type of terms the first factor is in $L^\infty(\Omega)$ and the other three are in $L^6(\Omega)$, which yields the bound in $L^{\frac32}(\Omega)$. The second type are products of functions
in $L^\infty(\Omega)$, $L^2(\Omega)$ and $L^6(\Omega)$.
\item The bound of $(\widetilde{\mathbf{J}} \cdot \nabla) \mathbf{v}$ in $L^{\frac{3}{2}}(\Omega)$ follows easily since the factors in $m(P_h \varphi_k) \partial_i \mu \partial_j \mathbf{v}_l$
are bounded in $L^\infty(\Omega)$, $L^6(\Omega)$ and $L^2(\Omega)$, respectively.
\end{enumerate}
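In most cases the H\"older bookkeeping reduces to the same computation: assuming, as throughout, that $d \leq 3$ so that $H^1(\Omega) \hookrightarrow L^6(\Omega)$, one uses
\begin{align*}
\frac{2}{3} = \frac{1}{\infty} + \frac{1}{2} + \frac{1}{6} = \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} \,,
\end{align*}
i.e., both a product of an $L^\infty$-, an $L^2$- and an $L^6$-function and a product of four $L^6$-functions belong to $L^{\frac{3}{2}}(\Omega)$.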
The estimates of the other terms are easier and left to the reader.
These estimates show the boundedness of $ \mathcal{F}_k $. Using analogous estimates for differences of the terms, one can show the continuity of $ \mathcal{F}_k : \widetilde{X} \to \widetilde{Y} $.
We will now apply the Leray-Schauder principle on $\widetilde{Y}$. To this end we use that
$\mathcal{L}_k(\mathbf{w}) - \mathcal{F}_k(\mathbf{w}) = 0$ for $\mathbf{w} \in X$ is equivalent to
\begin{align} \label{abstractproblem}
\mathbf{f} - \mathcal{F}_k \circ \mathcal{L}_k^{-1}(\mathbf{f}) = 0 \quad \mbox{ for }\; \mathbf{f} = \mathcal{L}_k(\mathbf{w}) \,.
\end{align}
Therefore we define $\mathcal{K}_k := \mathcal{F}_k \circ \mathcal{L}_k^{-1} : \widetilde{Y} \to \widetilde{Y}$. We remark that $\mathcal{K}_k$ is
a compact operator since $\mathcal{L}_k^{-1} : \widetilde{Y} \to \widetilde{X}$ is compact and $\mathcal{F}_k : \widetilde{X} \to \widetilde{Y}$ is continuous.
Hence \eqref{abstractproblem} is equivalent to the fixed-point equation
\begin{align*}
\mathbf{f} = \mathcal{K}_k(\mathbf{f}) \,\qquad \text{for }\mathbf{f} \in \widetilde{Y}.
\end{align*}
Now we have to show that there is some $R>0$ such that:
\begin{align} \label{requirementLeraySchauder}
\mbox{ If } \, \mathbf{f} \in \widetilde{Y} \, \mbox{ and } \, 0 \leq \lambda \leq 1
\, \mbox{ fulfill } \, \mathbf{f} = \lambda \mathcal{K}_k \mathbf{f} \,, \mbox{ then } \, \|\mathbf{f} \|_{\widetilde{Y}} \leq R \,.
\end{align}
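For the reader's convenience we recall the version of the Leray-Schauder principle used here: if $\mathcal{K} \colon \widetilde{Y} \to \widetilde{Y}$ is a compact operator and
\begin{align*}
\sup \left\{ \|\mathbf{f}\|_{\widetilde{Y}} \;:\; \mathbf{f} = \lambda \mathcal{K} \mathbf{f} \, \mbox{ for some } \lambda \in [0,1] \right\} < \infty \,,
\end{align*}
then $\mathcal{K}$ possesses a fixed point $\mathbf{f} = \mathcal{K}(\mathbf{f})$.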
To this end we assume that $\mathbf{f} \in \widetilde{Y}$ and $0 \leq \lambda \leq 1$ are such that $\mathbf{f} = \lambda \mathcal{K}_k \mathbf{f}$.
Let $\mathbf{w} = \mathcal{L}_k^{-1}(\mathbf{f})$. Then
\begin{align*}
\mathbf{f} = \lambda \mathcal{K}_k (\mathbf{f}) \quad \Longleftrightarrow \quad \mathcal{L}_k(\mathbf{w}) - \lambda \mathcal{F}_k(\mathbf{w}) = 0 \,.
\end{align*}
The latter equation is equivalent to
\begin{align}
&\int_\Omega 2 \eta(\varphi_k) D\mathbf{v} : D\boldsymbol{\psi} \, dx
+ \lambda \int_\Omega \frac{\rho \mathbf{v} - \rho_k \mathbf{v}_k}{h} \cdot \boldsymbol{\psi} \, dx
+ \lambda \int_\Omega \mbox{div}(\rho(P_h \varphi_k) \mathbf{v} \otimes \mathbf{v}) \cdot \boldsymbol{\psi} \, dx \nonumber \\
& \hspace*{10pt} + \lambda \int_\Omega \left( \mbox{div} \widetilde{\mathbf{J}} - \frac{\rho - \rho_k}{h} - \mathbf{v} \cdot \nabla \rho(P_h \varphi_k) \right) \frac{\mathbf{v}}{2} \cdot \boldsymbol{\psi} \, dx
+ \lambda \int_\Omega \left( \widetilde{\mathbf{J}} \cdot \nabla \right) \mathbf{v} \cdot \boldsymbol{\psi} \, dx\nonumber\\
& = - \lambda \int_\Omega \nabla \mu P_h \varphi_k \cdot \boldsymbol{\psi} \, dx \label{lambdaproblem1}
\end{align}
for all $\boldsymbol{\psi} \in H_0^1(\Omega)^d \cap L^2_\sigma(\Omega)$ and
\begin{align}
& \lambda \frac{\varphi - \varphi_k}{h}
+ \lambda \mathbf{v} \cdot \nabla P_h \varphi_k
- \lambda \int_\Omega \mu \, dx
= \mbox{div}(m(P_h \varphi_k) \nabla \mu)
- \int_\Omega \mu \, dx \,, \label{lambdaproblem2} \\
& \varphi + \partial F_h(\varphi)
= \lambda \varphi + \lambda \mu
+ \lambda \widetilde{\kappa} \frac{\varphi + \varphi_k}{2} \,. \label{lambdaproblem3}
\end{align}
{\allowdisplaybreaks
As in the proof of \eqref{discreteenergyestimate} we choose
$\boldsymbol{\psi} = \mathbf{v}$ in \eqref{lambdaproblem1}, test \eqref{lambdaproblem2} with $\mu$ and multiply
\eqref{lambdaproblem3} with $\frac{1}{h}(\varphi-\varphi_k)$. In the same way as before one obtains:
\begin{align*}
0 \; = \; & \lambda \frac{1}{h} \int_\Omega \left( \frac{\rho |\mathbf{v}|^2}{2} - \frac{\rho_k |\mathbf{v}_k|^2}{2} \right)
+ \lambda \frac{1}{h} \int_\Omega \rho_k \frac{|\mathbf{v} - \mathbf{v}_k|^2}{2}
+ \int_\Omega 2 \eta(\varphi_k) |D\mathbf{v}|^2
+ (1-\lambda) \left( \int_\Omega \mu \right)^2 \\
& + \int_\Omega m(\varphi_k) |\nabla \mu|^2
+ (1-\lambda) \frac{1}{h} \int_\Omega \varphi( \varphi-\varphi_k )
+ \int_\Omega \nabla \varphi \cdot \left( \nabla \varphi - \nabla \varphi_k \right) \\
& +\frac1h \ensuremath{\mathcal{E}}(\varphi, \varphi - \varphi_k)
+ \frac{1}{h} \int_\Omega {\Psi}_0'(\varphi) (\varphi - \varphi_k)
- \lambda \frac{1}{h} \int_\Omega \widetilde{\kappa} \frac{\varphi^2 - \varphi_k^2}{2} \\
\; \geq \; & \lambda \frac{1}{h} \int_\Omega \left( \frac{\rho |\mathbf{v}|^2}{2} - \frac{\rho_k |\mathbf{v}_k|^2}{2} \right)
+ \lambda \frac{1}{h} \int_\Omega \rho_k \frac{|\mathbf{v} - \mathbf{v}_k|^2}{2}
+ \int_\Omega 2 \eta(\varphi_k) |D\mathbf{v}|^2
+ (1-\lambda) \left( \int_\Omega \mu \right)^2 \\
& + \int_\Omega m(\varphi_k) |\nabla \mu|^2
+ (1-\lambda) \frac{1}{h} \int_\Omega \left( \frac{\varphi^2}{2} - \frac{\varphi_k^2}{2} \right)
+ \int_\Omega \left( \frac{|\nabla \varphi|^2}{2} - \frac{|\nabla \varphi_k|^2}{2} \right) \\
& +\frac{1}{h} \frac{\ensuremath{\mathcal{E}}(\varphi, \varphi)}{2} - \frac{1}{h}\frac{\ensuremath{\mathcal{E}}(\varphi_k, \varphi_k)}{2}
+ \frac{1}{h}\frac{\ensuremath{\mathcal{E}}(\varphi - \varphi_k, \varphi - \varphi_k)}{2} \\
& + \frac{1}{h} \int_\Omega \left( {\Psi}_0(\varphi) - {\Psi}_0(\varphi_k) \right)
- \lambda \frac{1}{h} \int_\Omega \widetilde{\kappa} \frac{\varphi^2 - \varphi_k^2}{2} \,.
\end{align*}}
For brevity we omitted the integration element $dx$.
Thus we obtain
\begin{align*}
& h \int_\Omega 2 \eta(\varphi_k) |D\mathbf{v}|^2 + h \int_\Omega m(\varphi_k) |\nabla \mu|^2
+ \frac{h}{2} \int_\Omega |\nabla \varphi|^2 \\
& + \int_\Omega {\Psi}(\varphi) + (1-\lambda)\left(\int_\Omega \mu \, dx \right)^2 + \frac{\ensuremath{\mathcal{E}}(\varphi, \varphi)}{2} \\
& \leq \int_\Omega \frac{\rho_k |\mathbf{v}_k|^2}{2} + \frac{1}{2} \int_\Omega \varphi_k^2
+ \frac{h}{2} \int_\Omega |\nabla \varphi_k|^2 + \int_\Omega {\Psi}_0(\varphi_k)
+ \int_\Omega |\widetilde{\kappa}| \frac{\varphi_k^2}{2} + \frac{\ensuremath{\mathcal{E}}(\varphi_k, \varphi_k)}{2} \,.
\end{align*}
Here we used $-\lambda \int_\Omega \widetilde{\kappa} \frac{\varphi_k^2}{2} \, dx \leq
\lambda \int_\Omega |\widetilde{\kappa}| \frac{\varphi_k^2}{2} \, dx$ and in addition estimated each factor
$\lambda$ and $(1-\lambda)$ on the right-hand side by $1$.
Because of $\mathbf{w}=(\mathbf{v},\varphi,\mu) = \mathcal{L}_k^{-1}(\mathbf{f}) \in X$,
$\varphi \in \mathcal{D}(\partial F_h)$ and therefore $\varphi \in [-1,1]$ almost everywhere. In particular we have $\rho \geq 0$. Moreover,
$\int_\Omega {\Psi}(\varphi) \, dx$ is bounded.
Altogether we conclude
\begin{align} \label{apriorilambda}
\nonumber& (1-\lambda)\left(\int_\Omega \mu \, dx \right)^2 + h \int_\Omega 2 \eta(\varphi_k) |D\mathbf{v}|^2 \, dx + h \int_\Omega m(\varphi_k) |\nabla \mu|^2 \, dx \\
& + \frac{h}{2} \int_\Omega |\nabla \varphi|^2 \, dx + \frac{\ensuremath{\mathcal{E}}(\varphi, \varphi)}{2} \leq C_k
\end{align}
for some $C_k$ independent of $(\mathbf{v}, \varphi,\mu)$.
Using $\|\varphi\|_{L^\infty} \leq 1$, Korn's inequality, \eqref{eq:EquivNorm2},
and the fact that $\eta$, $m$ and $a$ are bounded
from below by a positive constant, we obtain
\begin{align} \label{apriorilambdashort}
\sqrt{1-\lambda}\left|\int_\Omega \mu \, dx \right| + \|\mathbf{v}\|_{H^1(\Omega)} + \|\nabla \mu\|_{L^2(\Omega)} + \|\varphi\|_{H^1(\Omega)} \leq C_{k} \,.
\end{align}
In order to estimate $\|\mu\|_{L^2}$, we distinguish the cases $\lambda\in [\frac12,1]$ and $\lambda\in [0,\frac12)$. In the case
$\lambda \in [\frac{1}{2},1]$, we simply use
$\frac{1}{2} | \int_\Omega \mu \, dx| \leq \lambda |\int_\Omega \mu \, dx|$
and conclude as in the proof of Lemma \ref{derivativeforchempot} together with \eqref{apriorilambdashort} from
\eqref{lambdaproblem3} that
\begin{align*}
\left| \int_\Omega \mu \, dx \right| \leq C_k \,.
\end{align*}
In the case $\lambda \in [0,\frac{1}{2})$ we conclude directly from \eqref{apriorilambdashort} that
$\left| \int_\Omega \mu \, dx \right| \leq C_k$.
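In both cases, the passage from bounds on $\nabla \mu$ and on the mean value of $\mu$ to a full $H^1(\Omega)$-bound rests on the Poincar\'e inequality in the form
\begin{align*}
\|\mu\|_{L^2(\Omega)} \leq C \left( \|\nabla \mu\|_{L^2(\Omega)} + \left| \int_\Omega \mu \, dx \right| \right) \qquad \mbox{for all } \mu \in H^1(\Omega) \,,
\end{align*}
where $C$ depends only on the bounded domain $\Omega$.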
Thus \eqref{apriorilambdashort} can be improved to
\begin{align} \label{apriorilambdashortfinal}
\|\mathbf{v}\|_{H^1(\Omega)} + \|\mu\|_{H^1(\Omega)} + \|\varphi\|_{H^1(\Omega)} \leq C_k \,.
\end{align}
With the help of \eqref{H2estimatemu} we can estimate $\|\mu\|_{H^2(\Omega)}$ and derive
\begin{align} \label{apriorilambdashortfinally}
\|\mathbf{v}\|_{H^1(\Omega)} + \|\mu\|_{H^2(\Omega)} + \|\varphi\|_{H^1(\Omega)} \leq C_k \,.
\end{align}
Using \eqref{lambdaproblem3} we also have $\| \partial F_h(\varphi) \|_{L^2(\Omega)} \leq C_k$.
Altogether we conclude
\begin{align*}
\|\mathbf{w}\|_{\widetilde{X}} + \|\partial F_h(\varphi)\|_{L^2(\Omega)}
= \|(\mathbf{v},\varphi,\mu)\|_{\widetilde{X}} + \|\partial F_h(\varphi)\|_{L^2(\Omega)} \leq C_k \,.
\end{align*}
Finally we can estimate $\mathbf{f} = \mathcal{L}_k(\mathbf{w})$ in $\widetilde{Y}$ by using that
$\mathbf{f} - \lambda \mathcal{F}_k \mathcal{L}_k^{-1} (\mathbf{f}) = 0$ implies $\mathbf{f} = \lambda \mathcal{F}_k(\mathbf{w})$ together with the boundedness of
$\mathcal{F}_k : \widetilde{X} \to \widetilde{Y}$. Thus we obtain
\begin{align*}
\|\mathbf{f}\|_{\widetilde{Y}} = \|\lambda \mathcal{F}_k(\mathbf{w}) \|_{\widetilde{Y}} \leq C'_k \,.
\end{align*}
Thus the condition of the Leray-Schauder principle is satisfied, which proves the existence of a solution.
\end{proof}
\section{Proof of Theorem~\ref{existenceweaksolution}}\label{sec:proof}
\subsection{Compactness in Time}
In order to prove our main result Theorem \ref{existenceweaksolution} we send $h \to 0$,
i.e., $N \to \infty$, for the approximate solutions, which are obtained by suitable interpolations of our time-discrete solutions. To this end let $N \in \mathbb{N}$ be given and let
$(\mathbf{v}_{k+1},\varphi_{k+1},\mu_{k+1})$, $k\in \ensuremath{\mathbb{N}}$, be chosen successively as a solution of
\eqref{timediscretizationline1}-\eqref{timediscretizationline3} with $h = \frac{1}{N}$ and $(\mathbf{v}_0,\varphi_0^N)$ \changes{where $ \varphi_0^N = P_h \varphi_0 $} as initial value.
As in \cite{ADG13} we define $f^N(t)$ for $t\in[-h,\infty)$ by the relation $f^N(t) = f_k$ for $t \in [(k-1)h,kh)$, where
$k \in \mathbb{N}_0$ and $f \in \{\mathbf{v},\varphi,\mu\}$.
Moreover, let $\rho^N = \frac{1}{2}(\tilde{\rho}_1 + \tilde{\rho}_2) + \frac{1}{2}(\tilde{\rho}_2 - \tilde{\rho}_1) \varphi^N$.
Furthermore we introduce the notation
\begin{align*}
\left( \Delta^+_h f \right)(t) &:= f(t+h)-f(t) \,,& \left( \Delta^-_h f \right)(t) &:= f(t)-f(t-h) \,, \\
\partial^\pm_{t,h} f(t) &:= \frac{1}{h} \left( \Delta_h^\pm f \right)(t) \,, &
f_h(t) &:= \left( \tau_h^\ast f\right)(t) = f(t-h) \,.
\end{align*}
In order to derive the weak formulation in the limit let $\boldsymbol{\psi} \in \left( C^\infty_0(\Omega \times (0,\infty)) \right)^d$ with $\operatorname{div} \boldsymbol{\psi} = 0$ be arbitrary and choose
$\widetilde{\boldsymbol{\psi}} := \int_{kh}^{(k+1)h} \boldsymbol{\psi} \, dt$ as test function in
\eqref{timediscretizationline1}. By summation with respect to $k \in \mathbb{N}_0$ this yields
\begin{align}
\int_0^\infty &\hspace*{-5pt} \int_\Omega \partial^-_{t,h}(\rho^N \mathbf{v}^N) \cdot \boldsymbol{\psi} \, dx \,dt
+ \int_0^\infty \hspace*{-5pt} \int_\Omega \mbox{div} \left(\rho^N_h \mathbf{v}^N \otimes \mathbf{v}^N \right) \cdot \boldsymbol{\psi} \, dx \,dt
+ \int_0^\infty \hspace*{-5pt} \int_\Omega 2 \eta(\varphi^N_h) D\mathbf{v}^N : D\boldsymbol{\psi} \, dx \,dt \nonumber \\
& - \int_0^\infty \hspace*{-5pt} \int_\Omega \left( \mathbf{v}^N \otimes \widetilde{\mathbf{J}}^N \right) : D\boldsymbol{\psi} \, dx \,dt
= - \int_0^\infty \hspace*{-5pt} \int_\Omega \nabla \mu^N \varphi^N_h \cdot \boldsymbol{\psi} \, dx \,dt \label{timeintegratedline1}
\end{align}
for all $\boldsymbol{\psi} \in \left( C^\infty_0(\Omega \times (0,\infty)) \right)^d$ with $\operatorname{div} \boldsymbol{\psi} = 0$.
\changes{Here $ \rho^N_h = (\rho^N)_h $ and $ \varphi^N_h = (\varphi^N)_h. $}
Using a simple change of variables, one sees
\begin{align*}
\int_0^\infty \hspace*{-5pt} \int_\Omega \partial^-_{t,h}(\rho^N \mathbf{v}^N) \cdot \boldsymbol{\psi} \, dx \,dt
= - \int_0^\infty \hspace*{-5pt} \int_\Omega (\rho^N \mathbf{v}^N) \cdot \partial^+_{t,h}\boldsymbol{\psi} \, dx \,dt \,
\end{align*}
for sufficiently small $h>0$.
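Indeed, abbreviating $g := \rho^N \mathbf{v}^N$ and suppressing the spatial integral, the substitution $s = t - h$ gives
\begin{align*}
\int_0^\infty \frac{g(t) - g(t-h)}{h} \cdot \boldsymbol{\psi}(t) \, dt
= \int_0^\infty g(t) \cdot \frac{\boldsymbol{\psi}(t) - \boldsymbol{\psi}(t+h)}{h} \, dt
= - \int_0^\infty g(t) \cdot \partial^+_{t,h} \boldsymbol{\psi}(t) \, dt \,,
\end{align*}
where no boundary terms appear since $\boldsymbol{\psi}$ is compactly supported in $\Omega \times (0,\infty)$ and hence $\boldsymbol{\psi}(\cdot,t)$ and $\boldsymbol{\psi}(\cdot,t+h)$ vanish for $t$ near $0$ if $h$ is sufficiently small.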
In the same way one derives
\begin{align} \label{timeintegratedline2}
\int_0^\infty \hspace*{-5pt} \int_\Omega \partial^-_{t,h} \varphi^N \, \zeta \, dx \,dt
+ \int_0^\infty \hspace*{-5pt} \int_{\Omega} \mathbf{v}^N \varphi^N_h \cdot \nabla \zeta \, dx \,dt
= \int_0^\infty \hspace*{-5pt} \int_\Omega m(\varphi^N_h) \nabla \mu^N \cdot \nabla \zeta \, dx \,dt
\end{align}
for all $\zeta \in C^\infty_0((0,\infty);C^1(\overline{\Omega}))$ as well as
\begin{align} \label{timeintegratedline3}
\int_{0}^{\infty}
\int_\Omega (\mu^N + {\kappa} \, \frac{\varphi^N + \varphi_h^N}{2})\psi \, dx \, dt
= & \int_{0}^{\infty} \ensuremath{\mathcal{E}}(\varphi^N,\psi) \,dt + \int_{0}^{\infty} \int_\Omega {\Psi}_0'(\varphi^N)\psi\, dx \, dt \nonumber\\
& +h \int_{0}^{\infty} \int_\Omega \nabla \varphi^N \cdot \nabla \psi \, dx \, dt
\end{align}
for all $\psi\in C^\infty_0((0,\infty);C^1(\overline{\Omega}))$.
Let $E^N(t)$ be defined as
\begin{align*}
E^N(t) = \frac{(k+1)h - t}{h} E_{\mbox{\footnotesize tot}}(\varphi_k,\mathbf{v}_k)
+ \frac{t - kh}{h} E_{\mbox{\footnotesize tot}}(\varphi_{k+1},\mathbf{v}_{k+1}) \; \mbox{ for } \; t \in [kh,(k+1)h)
\end{align*}
and set
\begin{align*}
D^N(t) := \int_\Omega 2 \eta(\varphi_k) |D\mathbf{v}_{k+1}|^2 \, dx + \int_\Omega m(\varphi_k) |\nabla \mu_{k+1}|^2 \, dx
\end{align*}
for all $t \in (t_k,t_{k+1})$, $k \in \mathbb{N}_0$.
Then \eqref{discreteenergyestimate} yields
\begin{align} \label{inequalityforEandD}
- \frac{d}{d t} E^N(t) = \frac{E_{\mbox{\footnotesize tot}}(\varphi_k,\mathbf{v}_k) - E_{\mbox{\footnotesize tot}}(\varphi_{k+1},\mathbf{v}_{k+1})}{h}
\geq D^N(t)
\end{align}
for all $t \in (t_k,t_{k+1})$, $k \in \mathbb{N}_0$.
Integration implies
\begin{align}
E_{\mbox{\footnotesize tot}}(\varphi^N(t),\mathbf{v}^N(t))
&+ \int_s^t \int_\Omega \left( 2 \eta(\varphi^N_h) |D\mathbf{v}^N|^2 + m(\varphi^N_h) |\nabla \mu^N|^2 \right) dx \,d\tau \nonumber \\
&\leq E_{\mbox{\footnotesize tot}}(\varphi^N(s),\mathbf{v}^N(s)) \label{inequforEN}
\end{align}
for all $0 \leq s \leq t < \infty$ with $s,t \in h\mathbb{N}_0$.
Because of Lemma \ref{derivativeforchempot} and since $E_{\mbox{\footnotesize tot}}(\varphi_0^N,\mathbf{v}_0)$
is bounded, we conclude that
\begin{align} \label{timediscrbounds}
\begin{array}{l}
(\mathbf{v}^N)_{N\in \ensuremath{\mathbb{N}}} \subseteq L^2(0,\infty;H^1(\Omega)^d)\cap L^\infty(0,\infty; L^2(\Omega)^d) \,, \\
(\nabla \mu^N)_{N\in\ensuremath{\mathbb{N}}} \subseteq L^2(0,\infty;L^2(\Omega)^d) \,, \rule{0cm}{0.5cm}\\
(\varphi^N)_{N\in\ensuremath{\mathbb{N}}} \subseteq L^\infty(0,\infty;H^{\frac{\alpha}{2}}(\Omega)) \,, \mbox{ and } \rule{0cm}{0.5cm} \\
(h^{\frac12}\nabla \varphi^N)_{N\in\ensuremath{\mathbb{N}}} \subseteq L^{\infty}(0, \infty;L^2(\Omega))
\end{array}
\end{align}
are bounded. Moreover, there is
a nondecreasing $C\colon (0,\infty) \to (0,\infty)$ such that
\begin{equation*}
\int_0^T \left| \int_\Omega \mu^N \, dx \right| dt \leq C(T) \, \mbox{ for all } \, 0<T<\infty\,. \rule{0cm}{0.5cm}
\end{equation*}
Therefore there are subsequences (denoted again by the index $N\in\ensuremath{\mathbb{N}}$, $h>0$, respectively) such that
\begin{align*}
\mathbf{v}^N \rightharpoonup \mathbf{v} \; &\mbox{ in } \, L^2(0,\infty;H^1(\Omega)^d) \,, \\
\mathbf{v}^N \rightharpoonup^\ast \mathbf{v} \; &\mbox{ in } \, L^\infty(0,\infty;L^2(\Omega)^d)
\,, \\
\varphi^N \rightharpoonup^\ast \varphi \; &\mbox{ in } \, L^\infty(0,\infty;H^{\frac{\alpha}{2}}(\Omega))
\,, \\
\mu^N \rightharpoonup \mu \; &\mbox{ in } \, L^2(0,T;H^1(\Omega)) \, \mbox{ for all } \, 0<T<\infty \,, \\
\nabla \mu^N \rightharpoonup \nabla \mu \; &\mbox{ in } \, L^2(0,\infty;L^2(\Omega)^d) \,,
\end{align*}
where $\mu\in L^2_{uloc}([0,\infty);H^1(\Omega))$.
In the following $\widetilde{\varphi}^N$ denotes the piecewise linear interpolant of $\varphi^N(t_k)$ in time, where
$t_k = kh$, $k \in \mathbb{N}_0$.
Then $\partial_t \widetilde{\varphi}^N = \partial_{t,h}^- \varphi^N$ and therefore
\begin{align} \label{estofdiffint}
\|\widetilde{\varphi}^N - \varphi^N \|_{H^{-1}(\Omega)} \leq h \|\partial_t \widetilde{\varphi}^N\|_{H^{-1}(\Omega)} \,.
\end{align}
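This follows from the explicit form of the interpolant: with $t_k = kh$ and the nodal values $\widetilde{\varphi}^N(t_k) = \varphi_k$, one has for $t \in [t_{k-1}, t_k)$ that $\varphi^N(t) = \varphi_k$ and
\begin{align*}
\widetilde{\varphi}^N(t) - \varphi^N(t)
= \varphi_{k-1} + \frac{t - t_{k-1}}{h} \left( \varphi_k - \varphi_{k-1} \right) - \varphi_k
= (t - t_k) \, \partial_t \widetilde{\varphi}^N(t) \,,
\end{align*}
so that $|\widetilde{\varphi}^N(t) - \varphi^N(t)| \leq h \, |\partial_t \widetilde{\varphi}^N(t)|$ pointwise in time, which yields \eqref{estofdiffint} after taking the $H^{-1}(\Omega)$-norm.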
Using that $\mathbf{v}^N \varphi^N$ and $\nabla \mu^N$ are bounded in $L^2(0,\infty; L^2(\Omega)^d)$ and \eqref{timeintegratedline2}, we conclude that $(\partial_t \widetilde{\varphi}^N)_{N\in\ensuremath{\mathbb{N}}}$ is bounded in $L^2(0,\infty; \changes{H_{(0)}^{-1}}(\Omega))$.
Since $(\varphi^N)_{N\in\ensuremath{\mathbb{N}}}$ and therefore $(\widetilde{\varphi}^N)_{N\in\ensuremath{\mathbb{N}}}$ are bounded in $L^\infty(0,\infty;H^{\frac{\alpha}{2}}(\Omega))$, the Aubin-Lions Lemma yields
\begin{align}\label{p-convergence1}
\widetilde{\varphi}^N \to \widetilde{\varphi} \; \mbox{ in } \; L^2(0,T;L^2(\Omega))
\end{align}
for all $0<T<\infty$ for some $\widetilde{\varphi} \in L^\infty(0,\infty;L^2(\Omega))$ (and a suitable subsequence).
In particular, $\widetilde{\varphi}^N(x,t) \to \widetilde{\varphi}(x,t)$ for
almost every $ (x,t)\in \Omega\times (0,\infty)$.
Because of \eqref{estofdiffint},
\begin{align}\label{p-convergence2}
\| \widetilde{\varphi}^N - \varphi^N \|_{L^2(-h, \infty;H^{-1}(\Omega))} \to 0
\end{align}
and thus $\widetilde{\varphi} = \varphi$.
Since $(\widetilde{\varphi}^N)_{N\in\ensuremath{\mathbb{N}}}$ is bounded in $H^1_{\mbox{\footnotesize uloc}}([0,\infty);H^{-1}(\Omega))
\cap L^{\infty}([0,\infty);H^{\frac{\alpha}{2}}(\Omega))
\hookrightarrow BUC([0,\infty);L^2(\Omega))$,
Lemma \ref{lem:CwEmbedding} implies $\varphi \in BC_w([0,\infty);H^{\frac{\alpha}{2}}(\Omega))$.
Moreover, $ (\widetilde{\varphi}^N - {\varphi}^N)_{N\in\ensuremath{\mathbb{N}}} \subseteq L^\infty(-h,\infty;H^{\frac{\alpha}{2}}(\Omega))$ is bounded
since $ ({\varphi}^N)_{N\in\ensuremath{\mathbb{N}}}, (\widetilde{\varphi}^N)_{N\in\ensuremath{\mathbb{N}}} \subseteq L^\infty(-h,\infty;H^{\frac{\alpha}{2}}(\Omega)) $ are bounded. By interpolation with
\eqref{p-convergence2} we conclude
\begin{align}\label{p-convergence3}
\widetilde{\varphi}^N - \varphi^N \to 0 \; \mbox{in} \; L^2(-h, T;L^2(\Omega)) \,
\end{align}
and therefore
\begin{align}\label{p-convergence4}
{\varphi}^N \to \varphi \; \mbox{in} \; L^2(0, T;L^2(\Omega)) \,
\end{align}
for all $ 0<T<\infty. $
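For the convenience of the reader we note the form of the interpolation inequality behind \eqref{p-convergence3}:
\begin{align*}
\|u\|_{L^2(\Omega)} \leq C \, \|u\|_{H^{-1}(\Omega)}^{\theta} \, \|u\|_{H^{\frac{\alpha}{2}}(\Omega)}^{1-\theta} \qquad \mbox{with } \; \theta = \frac{\alpha}{2+\alpha} \,,
\end{align*}
applied to $u = \widetilde{\varphi}^N(t) - \varphi^N(t)$ for almost every $t$; the first factor tends to zero by \eqref{p-convergence2}, while the second factor stays bounded.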
Moreover, we have
\begin{align}\label{p-convergence5}
&\| \varphi^N_h - \varphi \|_{L^2(0, T; L^2(\Omega))} \leq \|\varphi^N_h - \varphi_h\|_{L^2(0, T; L^2(\Omega))} + \| \varphi_h - \varphi \|_{L^2(0, T;L^2(\Omega))} \nonumber\\
&\leq h^{\frac12}\|\varphi_0^N\|_{L^2(\Omega)}+\|\varphi^N - \varphi \|_{L^2(0, T-h; L^2(\Omega))}+ \| \varphi_h - \varphi \|_{L^2(0, T;L^2(\Omega))}.
\end{align}
Because of $\| \varphi_h - \varphi \|_{L^2(0, T;L^2(\Omega))}\to_{h\to 0} 0$,
we obtain $ \| \varphi^N_h - \varphi \|_{L^2(0, T; L^2(\Omega))} \to_{h\to 0} 0$.
Finally using the bounds of $\widetilde{\varphi}^N$ in $H^1(0,T;H^{-1}(\Omega))\cap L^\infty(0,T;\changes{H^{\frac{\alpha}{2}}}(\Omega))$
for all $0<T<\infty$ as well as $\widetilde{\varphi}^N \to \varphi$ in $L^2(0,T;L^2(\Omega))$ we conclude
$\widetilde{\varphi}^N(0) \to \varphi(0)$ in $L^2(\Omega)$. Since $\widetilde{\varphi}^N(0)=\varphi_0^N\to_{N\to \infty}\varphi_0$ in $L^2(\Omega)$, we derive $\varphi(0) = \varphi_0$.
Since $\rho^N$ depends affinely on $\varphi^N$, the same convergence results hold for $\rho^N$.
To pass to the limit in \eqref{timeintegratedline3},
we closely follow the corresponding argument in \cite{ABG15}. The only difference is that \changes{we} work on the space-time domain directly, while in \cite{ABG15} the argument is carried out on the spatial domain for fixed time. We include the argument here for completeness.
\changes{We first observe that
$ (\Psi_0'(\varphi^N))_{N\in\ensuremath{\mathbb{N}}} $ is bounded
in $L^2_{uloc}([0, \infty);L^2(\Omega))$ using Lemma
\ref{derivativeforchempot} and the boundedness of $ \nabla \mu^N $
in $ L^{2}(0, \infty; L^2(\Omega)).$}
Using this bound, we can pass to
a subsequence such that $ \Psi_0'(\varphi^N) $ converges weakly in $L^2(0,T;L^2(\Omega))$ to $\chi$ for all $0<T<\infty$ as $N$ tends to infinity.
Let $\psi\in C^\infty_0((0,\infty);C^1(\overline{\Omega}))$.
Thanks to the convergences listed above, we can pass to the limit $N \rightarrow \infty$ in \eqref{timeintegratedline3} to find
\begin{align*}
\int_{0}^{\infty} \int_{\Omega} (\mu + \kappa \varphi) \psi \,dx \,dt
= \int_{0}^{\infty} \ensuremath{\mathcal{E}}(\varphi, \psi) \,dt + (\chi, \psi)_{L^2((0, \infty) \times \Omega)}.
\end{align*}
To show \eqref{weakline3}, we only have to identify the weak limit
$ \chi = \lim_{N \rightarrow \infty} \Psi_0'(\varphi^N) $.
Let $ T>0 $. Since \eqref{p-convergence4} holds, passing to a subsequence,
we have $ \varphi^N \rightarrow \varphi $ almost everywhere in $ \Omega \times (0, T) $.
On the other hand, thanks to Egorov's theorem, there exists a set $Q_m
\subset \Omega \times (0, T)$ such that $ |Q_m| \geq |\Omega \times (0, T)| - \frac{1}{2m} $ and on which $ \varphi^N \rightarrow \varphi$ uniformly. We now use the (uniform with respect to $N$) estimate on $\Psi_0'(\varphi^N)$ in $L^2(\Omega \times (0, T))$.
By definition, the quantity
\begin{align*}
M_{\delta, N} = \left| \left\{ (x, t) \in \Omega \times (0, T) \mid |\varphi^N(x, t)| > 1- \delta \right\} \right|
\end{align*}
is nondecreasing in $\delta$ for every $ N \in \ensuremath{\mathbb{N}} $.
Since $\Psi_0'(y)$ is unbounded as $ y \rightarrow \pm 1$, we have
$$ c_{\delta} := \inf_{|c| \geq 1- \delta} |\Psi_0'(c)| \rightarrow_{\delta \rightarrow 0} \infty \,, $$
and by the Chebyshev inequality
\begin{align*}
\int_{\Omega \times (0, T)} | \Psi_0'(\varphi^N)|^2\,dx\,dt \geq c_{\delta}^2 \, M_{\delta, N} \,.
\end{align*}
From the uniform (with respect to $N$) estimate of the norm of $ \Psi_0'(\varphi^N) $ in $ L^2(\Omega \times (0, T)) $ we therefore deduce
\begin{align*}
\lim_{\delta \rightarrow 0} M_{\delta, N} = \lim_{\delta \rightarrow 0} \left| \left\{ (x, t) \in \Omega \times (0, T) \mid |\varphi^N(x, t)| > 1- \delta \right\} \right| = 0
\end{align*}
uniformly in $ N \in \ensuremath{\mathbb{N}} $.
Thus there exists $ \delta = \delta(m) $ independent of $N$, such that
\begin{align*}
\left| \left\{ (x, t) \in \Omega \times (0, T) \mid | \varphi^N(x, t) | > 1- \delta \right\} \right| \leq \frac{1}{2m} \quad \mbox{ for all } \, N \in \ensuremath{\mathbb{N}} \,.
\end{align*}
Consider now $ N \in \ensuremath{\mathbb{N}} $ so large that by uniform convergence we have $ |\varphi^{N'}(x,t) - \varphi^N(x,t)| < \frac{\delta}{2}$ for all $N' \geq N$ and all $(x,t)\in Q_m $. Moreover, let $ Q_{mN}' \subset Q_m $ be defined by
\begin{align*}
Q_{mN}' = Q_m \cap \left\{ (x, t) \in \Omega \times (0, T) \mid
| \varphi^{N}(x, t) | \leq 1 - \delta \right\}.
\end{align*}
By the above construction, we immediately deduce that $ | Q_{mN}' | \geq |\Omega \times (0, T) | - \frac{1}{m} $ and that $ \left| \varphi^{N'}(x, t) \right| < 1- \frac{\delta}{2} $ for all $ N' \geq N $ and for all $ (x,t) \in Q_{mN}' $.
Therefore by the regularity assumptions on the potential $ \Psi_0', $ we deduce
that $ \Psi_0'(\varphi^N) \rightarrow \Psi_0'(\varphi) $ uniformly on $ Q_{mN}'. $
Since $m $ is arbitrary, we have $ \Psi_0'(\varphi^N) \rightarrow \Psi_0'(\varphi) $ almost everywhere in $ \Omega \times (0, T). $ By a diagonal argument,
passing to a subsequence, we have $ \Psi_0'(\varphi^N) \rightarrow \Psi_0'(\varphi) $
almost everywhere in $ \Omega \times (0, \infty) $ and $\Psi_0'(\varphi^N) \rightarrow \Psi_0'(\varphi)$ as $N\to \infty$ in $L^q(Q_T)$ for every $1\leq q<2$ and $0<T<\infty$. Finally, the uniqueness of weak and strong limits gives $ \chi = \Psi_0'(\varphi) $ as claimed.
Next we show $\mathbf{v}^N \to \mathbf{v}$ in $L^2(0,T;L^2(\Omega)^d)$ for all
$0<T<\infty$ and almost everywhere.
We note that
$\partial_t \left(\widetilde{\rho \mathbf{v}}^N\right) = \partial_{t,h}^- \left(\rho^N \mathbf{v}^N\right)$ since $\widetilde{\rho \mathbf{v}}^N$ is the piecewise linear interpolant of $\left(\rho^N \mathbf{v}^N\right)(t_k)$.
Using that
\begin{align*}
\rho^N_h \mathbf{v}^N \otimes \mathbf{v}^N & \; \mbox{ is bounded in } \; L^2(0,T;L^\frac{3}{2}(\Omega)) \,, \\
D\mathbf{v}^N & \; \mbox{ is bounded in } \; L^{2}(0,T;L^2(\Omega)) \,, \\
\mathbf{v}^N \otimes \nabla \mu^N & \; \mbox{ is bounded in } \; L^{\frac{8}{7}}(0,T;L^{\frac{4}{3}}(\Omega)) \,, \\
\nabla \mu^N \varphi^N_h & \; \mbox{ is bounded in } \; \changes{L^2(0,T;L^2(\Omega))} \,
\end{align*}
together with \eqref{timeintegratedline1}, we obtain that
$\partial_t \left( \mathbb{P}_\sigma(\widetilde{\rho \mathbf{v}}^N) \right)$ is bounded in
$L^{\frac{8}{7}}(0,T; (W^1_6(\Omega))')$ for all $0<T<\infty$.
Here we remark that the boundedness of $\nabla \mu^N \in L^2(0,T;L^2(\Omega))$ and $\varphi^N_h \in \changes{L^\infty(0,T;L^{\infty}(\Omega))}$ imply that $\nabla \mu^N\varphi^N_h \in \changes{L^2(0,T;L^{2}(\Omega))} $ is bounded.
\changes{Since $ \rho^N $ is bounded in $ L^{\infty}(0, T; H^{\frac{\alpha}{2}}(\Omega)^d) $ and $ \mathbf{v}^N $ is bounded in $ L^2(0, T; H^{1}(\Omega)^d) $,
using a product rule for Besov spaces, cf.}~\cite{RS96}\changes{,
suitable Sobolev embeddings and the boundedness of $\mathbb{P}_{\sigma}$ in Sobolev spaces, we have the boundedness of
$ \mathbb{P}_\sigma (\widetilde{\rho \mathbf{v}}^N) $ in $ L^2(0,T;H^{\epsilon}(\Omega)^d) $ for some $ \epsilon > 0.$
Hence} the Aubin-Lions Lemma implies
\begin{align*}
\mathbb{P}_\sigma (\widetilde{\rho \mathbf{v}}^N) \to \mathbf{w}
\; \mbox{ in } \; L^2(0,T;L^2(\Omega)^d)
\end{align*}
for all $0<T<\infty$ for some $\mathbf{w} \in L^\infty(0,\infty;L^2(\Omega)^d)$.
Since the projection $\mathbb{P}_\sigma : L^2(0,T;L^2(\Omega)^d) \to L^2(0,T;L^2_\sigma(\Omega))$ is weakly continuous, we
conclude from the weak convergence $\widetilde{\rho \mathbf{v}}^N \rightharpoonup \rho \mathbf{v}$
in $L^2(0,T;L^2(\Omega))$ that $\mathbf{w} = \mathbb{P}_\sigma (\rho \mathbf{v})$.
This yields
\begin{align*}
\int_0^T \int_\Omega \rho^N |\mathbf{v}^N|^2 = \int_0^T \int_\Omega \mathbb{P}_\sigma (\rho^N \mathbf{v}^N ) \cdot \mathbf{v}^N
\longrightarrow \int_0^T \int_\Omega \mathbb{P}_\sigma(\rho \mathbf{v}) \cdot \mathbf{v}
= \int_0^T \int_\Omega \rho \, |\mathbf{v}|^2 \,
\end{align*}
because of $\mathbb{P}_\sigma(\rho^N \mathbf{v}^N)\to_{N\to\infty } \mathbb{P}_\sigma (\rho \mathbf{v})$ in $L^2(0,T;L^2(\Omega)^d)$.
Since weak convergence and convergence of the norms imply strong convergence in a Hilbert space, we conclude $(\rho^N)^{\frac{1}{2}} \mathbf{v}^N \to (\rho)^{\frac{1}{2}} \mathbf{v}$ in $L^2(0,T;L^2(\Omega)^d)$.
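The last step is the elementary Hilbert space identity: writing $x_N := (\rho^N)^{\frac{1}{2}} \mathbf{v}^N$ and $x := \rho^{\frac{1}{2}} \mathbf{v}$ in $L^2(0,T;L^2(\Omega)^d)$, the weak convergence $x_N \rightharpoonup x$ (which follows from $\mathbf{v}^N \rightharpoonup \mathbf{v}$ and the almost everywhere convergence of the bounded sequence $\rho^N$) and the convergence of the norms shown above give
\begin{align*}
\|x_N - x\|^2 = \|x_N\|^2 - 2 \left( x_N, x \right) + \|x\|^2
\to_{N\to\infty} \|x\|^2 - 2 \|x\|^2 + \|x\|^2 = 0 \,.
\end{align*}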
Because of
\begin{align*}
\rho^N \to \rho \; \mbox{ almost everywhere in } \, (0,\infty) \times \Omega \; \mbox{ and } \;
|\rho^N| \geq c > 0 \,,
\end{align*}
we derive
\begin{align*}
\mathbf{v}^N = (\rho^N)^{-\frac{1}{2}} \left( (\rho^N)^{\frac{1}{2}} \mathbf{v}^N \right)
\to_{N\to\infty} \mathbf{v} \; \mbox{ in } \; L^2(0,T;L^2(\Omega)^d) \,.
\end{align*}
This yields $\mathbf{v}^N \to_{N\to\infty} \mathbf{v}$ almost everywhere in $(0,\infty) \times \Omega$
(for a subsequence).
Now we can pass to the limit in \eqref{timeintegratedline1},
\eqref{timeintegratedline2} to get \eqref{weakline1}, \eqref{weakline2} with the aid of the previous results using
that for all divergence free $\boldsymbol{\psi}$
\begin{align*}
\int_0^T \int_\Omega \nabla \mu^N P_N \varphi^N_h \cdot \boldsymbol{\psi} \, dx\, dt \to_{N\to \infty}
\int_0^T \int_\Omega \nabla \mu \varphi \cdot \boldsymbol{\psi} \, dx\, dt\,.
\end{align*}
The initial condition $ \mathbf{v}(0) = \mathbf{v}_0 $ in $ L^2(\Omega)^d$ is shown in the same
way as in \cite{ADG13}. Therefore we omit the proof.
\changes{Finally, using \eqref{eq:4}, $\Psi'(\varphi)\in L^2_{uloc}([0,\infty); L^2(\Omega))$ and the local regularity result due to \cite[Lemma~4.3]{AK07} we obtain $\varphi \in L^2_{uloc}([0,\infty);H^{\alpha}(\Omega'))$ for every open $\Omega'$ with $\overline{\Omega'}\subseteq \Omega$, i.e., $\varphi \in L^2_{uloc}([0,\infty);H^{\alpha}_{loc}(\Omega))$.}
\subsection{Proof of the Energy Inequality}
It remains to show the energy inequality \eqref{weakline5}.
If we show that $ \varphi^N(t) \to \varphi(t) $ in $ H^{\frac{\alpha}{2}}_{(m)} $ for
almost every $ t \in (0, \infty) $ and $ \sqrt{h} \nabla \varphi^N \to 0 $ in
$ (L^2(\Omega))^d $ for almost every $ t \in (0, \infty), $ the rest of the proof is
almost the same as in \cite{ADG13} and we omit it.
To this end it suffices to show that
$ (\varphi^N, \sqrt{h} \nabla \varphi^N) $ converges strongly to $ (\varphi, 0) $ in $L^2(0, T; H^{\frac{\alpha}{2}}_{(m)}(\Omega) \times (L^2(\Omega))^d) $ for every $T>0$.
If we take $ \psi = \varphi^N $ in \eqref{timeintegratedline3} \changes{(after a standard approximation)}, we have
\begin{align}
\int_{0}^{\infty}
\int_\Omega \left(\mu^N + {\kappa} \, \frac{\varphi^N + \varphi_h^N}{2}\right)\varphi^N \, dx \, dt
= & \int_{0}^{\infty} \ensuremath{\mathcal{E}}(\varphi^N,\varphi^N) \,dt + \int_{0}^{\infty} \int_\Omega {\Psi}_0'(\varphi^N)\varphi^N\, dx \, dt \nonumber\\
& +h \int_{0}^{\infty} \int_\Omega \nabla \varphi^N \cdot \nabla \varphi^N \, dx \, dt\,.
\end{align}
Since $ \varphi^N \to \varphi $ in $ L^2(Q_T) $, $ \mu^N \rightharpoonup \mu$ in $L^2(Q_T) $ and $ \Psi_{0}'(\varphi^N) \rightharpoonup \Psi_{0}'(\varphi) $ in $ L^2(Q_T) $ as $ N \to \infty $, we have
\begin{align}\nonumber
& \lim_{N \to \infty} \left\{ \int_{0}^{\infty} \ensuremath{\mathcal{E}}(\varphi^N(t), \varphi^N(t))\,dt + h \int_{0}^{\infty}
\int_{\Omega} \nabla \varphi^N\cdot \nabla \varphi^N\,dx\,dt \right\} \\\label{limitnorm}
& = \int_{0}^{\infty} \int_{\Omega} (\mu \varphi + \kappa \varphi^2)\,dx\,dt - \int_{0}^{\infty} \int_{\Omega} \Psi_{0}'(\varphi)\varphi\,dx\,dt
= \int_{0}^{\infty} \ensuremath{\mathcal{E}}(\varphi(t), \varphi(t))\,dt
\end{align}
because of \eqref{weakline3}.
Next we show $ \varphi^N \rightharpoonup \varphi $ in $ L^2(0, T; H^{\frac{\alpha}{2}}_{(m)}) $
and $ \sqrt{h} \nabla \varphi^N \rightharpoonup 0 $ in $ L^2(0, T; L^2) $ as $ N \to \infty $ for any $ T>0 $. Let $T>0$ be arbitrarily fixed.
$ ( \varphi^N )_{N \in \ensuremath{\mathbb{N}}} $ is bounded in $ L^{\infty}(0, T; H^{\frac{\alpha}{2}}_{(m)}) $, hence also in $ L^2(0, T; H^{\frac{\alpha}{2}}_{(m)}) $. Then there exists some $ \varphi' \in L^2(0, T; H^{\frac{\alpha}{2}}_{(m)}) $ such that $ \varphi^N \rightharpoonup \varphi' $ in $ L^2(0, T; H^{\frac{\alpha}{2}}_{(m)}) $.
Since $ \varphi^N \to \varphi $ in $ L^2(Q_T) $, $ \varphi = \varphi' $. Hence $\varphi^N \rightharpoonup \varphi$ in $L^2(0, T; H^{\frac{\alpha}{2}}_{(m)}) $.
For any fixed $ \boldsymbol{\psi} \in C_0^{\infty}(Q_T)^d $,
$$ \int_{Q_T} \sqrt{h}~\nabla \varphi^N \cdot \boldsymbol{\psi} \, d(x, t) = - \int_{Q_T} \sqrt{h}~ \varphi^N~\mathrm{div}~\boldsymbol{\psi} \, d(x, t) $$
tends to zero as $N \to \infty$ since
$ \varphi^N \to \varphi $ in $ L^2(Q_T) $.
Since $ \sup_{N \in \ensuremath{\mathbb{N}}} \| \sqrt{h} \nabla \varphi^N \|_{L^2(Q_T)^d} < \infty $ and
$ \overline{C_0^{\infty}(Q_T)^d}^{~\|\cdot\|_{L^2(Q_T)^d}} = L^2(Q_T)^d $, we have
$\sqrt{h} \nabla \varphi^N \rightharpoonup 0$ in $L^2(Q_T)^d$. Hence we have
$ (\varphi^N, \sqrt{h} \nabla \varphi^N) \rightharpoonup (\varphi, 0) $ in $ L^2(0, T; H^{\frac{\alpha}{2}}_{(m)} \times (L^2)^d) $.
Because of \eqref{limitnorm}, we also have the convergence of the norms of $ (\varphi^N, \sqrt{h} \nabla \varphi^N) $ to that of $ (\varphi, 0)$ in $ L^2(0, T; H^{\frac{\alpha}{2}}_{(m)} \times (L^2)^d) $. Hence we have shown the claim.
\section*{Acknowledgments}
The results of this contribution were mainly obtained during a research stay of the second author at the University of Regensburg, which was partly supported by the ``Universit\"atsstiftung Hans Vielberth''.
The second author would like to thank Professor Mitsuru Sugimoto for offering him support through JSPS KAKENHI Grant Number 26287022.
The second author was also supported by JSPS KAKENHI Grant Number 17K17804.
This support is gratefully acknowledged.
\section{Introduction}
The incorporation of time in a fully quantum framework \cite{PaW.83} has recently attracted wide attention
\cite{Ga.09,GL.15,Mo.14,Ma.15,FC.13,BR.16,Er.17,Pa.17,Ni.18,Ll.18}. On the one hand, it is relevant as a fundamental
problem and a key issue
in the search
for a coherent theory of quantum gravity \cite{DW.67,Ro.04, Ku.11, Is.93,JT.11,Bo.11, Ho.12}. On the other hand,
a quantum description of time makes it possible to exploit the quantum features of superposition and entanglement
in the development of new models of parallel-in-time simulation \cite{FC.13,BR.16}.
The concept of time is related to the quantification of evolution through a reference physical system called clock.
Historically, the readings of this clock provided an external classical parameter, called time. Nonetheless,
if we aim to introduce time into a fully quantum framework,
the clock has to be a quantum system itself. This is even more important in attempts to quantize gravity
where time has to be described by a dynamical entity \cite{Ro.04, Ku.11, Is.93,JT.11, Bo.11, Ho.12}.
Here we describe the system and the reference clock through a discrete system-time history state which enforces a
discrete unitary evolution on the system states. We consider the general case where the system-clock pairs can interact.
This scenario provides a more general starting point,
better suited to those quantum gravity and cosmological models in which interactions between an internal relational
clock and the evolving degrees of freedom cannot be excluded \cite{Bo.11,Ho.12}.
We first discuss different representations of the history state, showing that
for a fixed initial state there is always an adequate selection of clock basis for
which the resultant evolution corresponds to a constant Hamiltonian,
with the history state satisfying a discrete counterpart of a standard Wheeler-DeWitt type equation \cite{DW.67}.
The general interacting formalism opens, however, new possibilities. The entanglement of the history state is a measure
of the number of orthogonal states visited by the system at orthogonal times \cite{BR.16}, and for a constant Hamiltonian
clearly depends on the seed system state. This dependence becomes, however, attenuated when the Hamiltonian is not constant
in time, and in the case where the evolution operators form a complete orthogonal set, it is in fact always {\it maximum},
irrespective of the initial state. The corresponding history state admits, nonetheless, a simple generation through a
two-clock scenario, where the clocks are linked to conjugate system variables.
We then analyze the quadratic entanglement entropy of history states, which, as opposed to the standard entropy,
can be explicitly evaluated in the general case, enabling one to characterize the system evolution and also to
connect the entanglement of states and operators. For a general constant Hamiltonian it can be analytically
determined for any number of steps. Moreover, we show that it is upper bounded by the quadratic entropy of the energy spread
of the initial state and lower bounded by that of the geodesic evolution connecting the initial and final states according
to the Fubini-Study metric \cite{AA.90}. Moreover, its average over all initial system states is directly proportional
to the quadratic operator entanglement entropy \cite{Z.00,N.03,P.07,M.13}
of the unitary gate that generates the history state. Through the channel-state
duality \cite{Ja.72,Ch.75,Gr.05,Du.05,Mi.13}, it is also shown that the pure state which represents the latter
is itself an operator history state, whose quadratic entanglement entropy determines its entangling power.
Finally, we show that through measurements on the clock it is possible to use both system and operator history states
to efficiently determine the overlap between system states and also the trace of the evolution operator between
any two-times. The latter reduces to the trace of a unitary operator (result of the DQC1 circuit \cite{KL.98})
for the simple case of a qubit clock. The properties of general discrete history states and their entanglement are
discussed in section \ref{II}, whereas the entanglement and history states
of unitary operators are discussed in section \ref{III}. Conclusions are finally given in section \ref{IV}.
\section{Discrete history states\label{II}}
We consider a system $S$ and a reference clock system $T$ in a joint pure state $|\Psi\rangle\in {\cal H}_S\otimes {\cal H}_T$,
with ${\cal H}_T$ of finite dimension $N$. Any such state can be written as
\begin{eqnarray}
|\Psi\rangle&=&{\textstyle\frac{1}{\sqrt{N}}}
\sum_{t} |S_t\rangle|t\rangle\,,\label{St1}\end{eqnarray}
where $|t\rangle$, $t=0,\ldots,N-1$, are orthogonal states of $T$ ($\langle t|t'\rangle=\delta_{tt'}$) and $|S_t\rangle$
are states of $S$, not necessarily orthogonal or normalized, yet satisfying $\sum_{t}\langle S_t|S_t\rangle/N=\langle \Psi|\Psi\rangle=1$.
Consider now a unitary operator ${\cal U}$ for the whole system of the form
\begin{equation}
{\cal U}=\sum_{t=1}^N U_{t,t-1}\otimes |t\rangle\langle t-1|\,,
\label{Upsi}
\end{equation}
where $t=N$ is identified with $t=0$ and $U_{t,t-1}$ are arbitrary unitary operators on $S$
satisfying $U_{0,N-1}\ldots U_{1,0}=\mathbb{1}$. If $|\Psi\rangle$ fulfills the eigenvalue equation
\begin{equation}
{\cal U}|\Psi\rangle=|\Psi\rangle\,,\label{Ueig}
\end{equation}
the states $|S_t\rangle$ will undergo a {\it unitary} evolution with $t$:
\begin{eqnarray}
|S_t\rangle=\sqrt{N}\langle t|\Psi\rangle&=&\sqrt{N}\langle t|{\cal U}|\Psi\rangle\nonumber\\&=&
U_{t,t-1}|S_{t-1}\rangle\label{Ux1}=U_t|S_{0}\rangle\,,\label{utw}
\end{eqnarray}
where $U_t=U_{t,t-1}\ldots U_{1,0}$,
with $U_0=\mathbb{1}$. The states $|S_t\rangle$ will then have a unit norm if $|\Psi\rangle$ is normalized.
Thus, the state (\ref{St1}) is a discrete finite dimensional version of the history state of
the Page-Wootters formalism \cite{PaW.83,GL.15}. Moreover,
writing ${\cal U}=\exp[-i{\cal J}]$, with ${\cal J}$ hermitian
(and spectrum $\subset[0,2\pi)$), Eq.\ (\ref{Ueig}) is equivalent to
\begin{equation}
{\cal J}|\Psi\rangle=0\label{WDW}\,,
\end{equation}
which is a discrete cyclic version of a Wheeler-DeWitt type equation \cite{DW.67}. Note, however,
that ${\cal J}$ will contain $S-T$ interaction terms in the general case where $U_{t,t-1}$ depends on $t$.
A unitary evolution of the states $|S_t\rangle$ actually occurs if $|\Psi\rangle$ is {\it any} eigenstate of ${\cal U}$:
Its eigenvalues are $e^{-i2\pi k/N}$, $k=0,\ldots,N-1$, and its eigenstates have all the form (\ref{St1}) with $|S_t\rangle$
satisfying a {\it shifted} unitary evolution: $|S_t\rangle=e^{i2\pi k/N}U_{t,t-1}|S_{t-1}\rangle=e^{i2\pi kt/N}U_t|S_0\rangle$.
Each eigenvalue has degeneracy equal to the dimension $d_S={\rm dim}{\cal H}_S$ of the system space, with its eigenspace spanned by
orthogonal history states $|\Psi^l_k\rangle$ generated by $d_S$ orthogonal initial states
$|S_{0}^l\rangle$: $\langle \Psi^l_k|\Psi^{l'}_{k'}\rangle=\langle S_0^l|S_0^{l'}\rangle=\delta^{ll'}$ \cite{BR.16}.
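The eigenvalue equation (\ref{Ueig}) can be checked numerically. The sketch below is illustrative (hypothetical dimensions $d_S=2$, $N=4$, Haar-random step unitaries): it builds $\mathcal{U}$ from steps $U_{t,t-1}$ whose cyclic product is the identity, constructs the history state from a seed $|S_0\rangle$, and one can then verify that it is a fixed point of $\mathcal{U}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    # Haar-random unitary via QR of a complex Gaussian matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

dS, N = 2, 4
# step unitaries U_{t,t-1}, t = 1..N-1, closed cyclically by U_{0,N-1}
steps = [haar_unitary(dS) for _ in range(N - 1)]
prod = np.eye(dS, dtype=complex)
for U in steps:
    prod = U @ prod                      # U_{N-1,N-2} ... U_{1,0}
steps.append(prod.conj().T)              # U_{0,N-1}, so that the full product is 1

# calU = sum_t U_{t,t-1} (x) |t><t-1|, with t = N identified with t = 0
calU = np.zeros((dS * N, dS * N), dtype=complex)
for t in range(1, N + 1):
    ket = np.zeros(N); ket[t % N] = 1.0
    bra = np.zeros(N); bra[t - 1] = 1.0
    calU += np.kron(steps[t - 1], np.outer(ket, bra))

# history state |Psi> = N^{-1/2} sum_t U_t|S_0>|t>, U_t = U_{t,t-1}...U_{1,0}
S0 = np.array([1.0, 0.0], dtype=complex)
Psi = np.zeros(dS * N, dtype=complex)
Ut = np.eye(dS, dtype=complex)
for t in range(N):
    if t > 0:
        Ut = steps[t - 1] @ Ut
    ket = np.zeros(N); ket[t] = 1.0
    Psi += np.kron(Ut @ S0, ket)
Psi /= np.sqrt(N)
# calU @ Psi == Psi: the history state satisfies the eigenvalue equation
```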
If $U_{t,t-1}$ is independent of $t$
$\forall$ $t=1,\ldots,N$, then
\begin{equation}U_{t,t-1}=\exp[-iH_S]\,,\label{UHS}
\end{equation}
with $H_S$ a fixed hermitian Hamiltonian for system $S$ with eigenvalues $2\pi k/N$, $k$ {\it integer}.
The operator (\ref{Upsi}) becomes then {\it separable}: ${\cal U}=\exp[-i H_S]\otimes \exp[-iP_T]$, implying
\begin{equation}
{\cal J}=H_S\otimes \mathbbm{1}+\mathbbm{1}\otimes P_T\,,\label{J}\end{equation}
which contains no interaction terms. Here $P_T$ is the generator of time translations,
satisfying $e^{-iP_T}|t-1\rangle=|t\rangle$ $\forall$ $t$ and
$P_T|k\rangle_T=\frac{2\pi k}{N} |k\rangle_T$, with $|k\rangle_T$ the discrete Fourier transform (DFT) of the states $|t\rangle$:
\begin{equation}|k\rangle_T=\frac{1}{\sqrt{N}}\!\sum_t e^{i2\pi kt/N}|t\rangle\,,\;\;k=0,\ldots,N-1\,.\label{DFT}\end{equation}
Eqs.\ (\ref{WDW})--(\ref{J}) then become an exact discrete version of the usual static Wheeler-DeWitt equation \cite{GL.15}.
The ensuing condition $\langle t|{\cal J}|\Psi\rangle=0$ implies
\begin{equation}-\langle t|P_T|\Psi\rangle=H_S|S_t\rangle\;
\,,\end{equation}
which is a discrete version of Schr\"odinger's equation: As $-\langle t|P_T|t'\rangle=i\frac{\partial}
{\partial t}\frac{1}{N}\sum_{k}e^{i2\pi k(t-t')/N}$,
for $N\rightarrow\infty$ $-\langle t|P_T|t'\rangle\rightarrow i\delta'(t-t')$
and $-\langle t|P_T|\Psi\rangle\rightarrow i\frac{\partial}{\partial t}|S_t\rangle$.
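A minimal numerical check of the generator $P_T$ (illustrative, with $N=8$ chosen arbitrarily): building $P_T$ from the DFT states (\ref{DFT}), one can verify that $e^{-iP_T}$ is exactly the cyclic shift $|t-1\rangle\mapsto|t\rangle$ of the time basis.

```python
import numpy as np

N = 8
t = np.arange(N)
# DFT clock states (columns): F[t, k] = e^{i 2 pi k t / N} / sqrt(N)
F = np.exp(2j * np.pi * np.outer(t, t) / N) / np.sqrt(N)

# P_T |k>_T = (2 pi k / N)|k>_T, expressed in the time basis
PT = F @ np.diag(2 * np.pi * t / N) @ F.conj().T

# exp(-i P_T), built spectrally, should be the cyclic shift |t-1> -> |t>
U_T = F @ np.diag(np.exp(-2j * np.pi * t / N)) @ F.conj().T
shift = np.roll(np.eye(N), 1, axis=0)    # (shift v)[t] = v[t-1]
```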
\subsection{Representations and entanglement of the history state}
By considering an arbitrary orthogonal basis $\{|q\rangle\}$
of ${\cal H}_S$, we may first rewrite $|\Psi\rangle$ as
\begin{equation}
|\Psi\rangle=\frac{1}{\sqrt{N}}\sum_{q,t}\psi(q,t)
|qt\rangle\label{St2}\,, \end{equation}
where $|qt\rangle=|q\rangle|t\rangle$ and $\psi(q,t)=\langle q|S_t\rangle=\sqrt{N}\langle qt|\Psi\rangle$ is a
``wave function'' satisfying a unitary evolution with $t$: $\psi(q,t)=\sum_{q'}\langle q|U_{t,t-1}|q'\rangle\psi(q',t-1)$.
We may then obtain the Schmidt decomposition of $|\Psi\rangle$, which we will here write as
\begin{equation}|\Psi\rangle=\sum_k \lambda_k\,|k\rangle_S\,|-k\rangle_T\label{Scm}\,,\end{equation}
where $\lambda_k>0$ are the singular values of the matrix $\psi(q,t)/\sqrt{N}$ and $|k\rangle_{S(T)}$ orthonormal
states of $S$ ($T$) derived from the singular value decomposition of $\psi(q,t)$, with $|-k\rangle\equiv|N-k\rangle$.
They are eigenstates of the reduced states $\rho_{S(T)}={\rm Tr}_{T(S)}\,|\Psi\rangle\langle\Psi|$, with
$\lambda_k^2$ their non-zero eigenvalues. While the states $|S_t\rangle\propto \langle t|\Psi\rangle$ are not necessarily
orthogonal but are equally probable, the states $|k\rangle_S\propto\, {_T}\langle-k|\Psi\rangle$ are all orthogonal but
not equally probable, with $\lambda_k^2$ representing a ``permanence'' probability.
In the constant case (\ref{UHS})--(\ref{J}), the Schmidt states $|k\rangle_S$ and $|k\rangle_T$ are
just the eigenstates of $H_S$ and $P_T$:
\begin{equation}
H_S|k\rangle_S=\frac{2\pi k}{N}|k\rangle_S\,,\;\;P_T|k\rangle_T=\frac{2\pi k}{N}|k\rangle_T\,,
\label{KST}\end{equation}
since $|S_t\rangle=e^{-i H_S t}|S_0\rangle=\sum_{k} \lambda_k e^{-i2\pi k t/N}|k\rangle_S$ with
$\lambda_k={_{S}}\langle k|S_0\rangle$, and hence
$|\Psi\rangle=\frac{1}{\sqrt{N}}\sum_{k,t}\lambda_k e^{-i2\pi kt/N}|k\rangle_S |t\rangle$ becomes Eq.\ (\ref{Scm}),
with $|k\rangle_T$ the strictly orthogonal states (\ref{DFT}).
The Schmidt coefficients $\lambda_k$ represent in this case the distribution of $|S_0\rangle$ over distinct energy eigenstates
(in case of degeneracy, $\lambda_k|k\rangle_S$ denotes the projection of $|S_0\rangle$ onto
the eigenspace of energy $2\pi k/N$ ($mod\, 2\pi$), with $\lambda_k^2$ the total probability of measuring this
energy in $|S_0\rangle$). It is then apparent from Eqs.\ (\ref{J}) and (\ref{Scm}) that $|\Psi\rangle$
satisfies Eq.\ (\ref{WDW}), which becomes a zero ``total momentum'' condition: $k_S+k_T=0$ ($mod\, N$).
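For the constant case this identification of the Schmidt coefficients with the energy distribution can be verified directly. The sketch below (illustrative; $d_S=N=4$ and a random eigenbasis are arbitrary choices) builds $\psi(q,t)=\langle q|e^{-iH_S t}|S_0\rangle$ and compares the singular values of $\psi/\sqrt{N}$ with $|{}_S\langle k|S_0\rangle|$.

```python
import numpy as np

rng = np.random.default_rng(1)
dS = N = 4                        # d_S = N, spectrum of H_S equal to {2 pi k / N}

z = rng.normal(size=(dS, dS)) + 1j * rng.normal(size=(dS, dS))
V, _ = np.linalg.qr(z)            # random orthonormal eigenbasis |k>_S (columns)
k = np.arange(N)

S0 = rng.normal(size=dS) + 1j * rng.normal(size=dS)
S0 /= np.linalg.norm(S0)
c = V.conj().T @ S0               # energy amplitudes <k|S_0>

# wave function psi(q, t) = <q| e^{-i H_S t} |S_0>; columns are the evolved states
psi = np.stack([V @ (np.exp(-2j * np.pi * k * t / N) * c) for t in range(N)], axis=1)

# Schmidt coefficients of |Psi> = singular values of psi / sqrt(N);
# for constant H_S they coincide with the energy distribution |<k|S_0>|
lam = np.linalg.svd(psi / np.sqrt(N), compute_uv=False)
```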
In the case of arbitrary unitary operators $U_{t,t-1}$ in (\ref{Upsi}), for any given initial state $|S_0\rangle$ there is always,
however, a special orthogonal basis of ${\cal H}_T$ for which the corresponding states of $S$ {\it evolve according
to a constant Hamiltonian $H_S$} satisfying (\ref{KST}). It is just necessary to use the inverse DFT
of the Schmidt states $|k\rangle_T$ of (\ref{Scm}),
\begin{equation}
|\tau\rangle={\textstyle\frac{1}{\sqrt{N}}}\sum_k\,e^{-i2\pi k\tau/N}|k\rangle_T\,,\label{tau}
\end{equation}
with $k,\tau=0,\ldots,N-1$ (if the Schmidt rank
is less than $N$, the states $|k{\rangle}_T$ of (\ref{Scm}) can be completed with orthogonal states),
which will not coincide in general with the original states $|t\rangle$. The state (\ref{Scm}) then becomes
\begin{equation}
|\Psi\rangle={\frac{1}{\sqrt{N}}}
\sum_{\tau,k}\lambda_k\,e^{-i2\pi k\tau/N}|k\rangle_S|\tau\rangle=
\frac{1}{\sqrt{N}}\sum_{\tau}|S_{\tau}\rangle|\tau\rangle
\label{St22a}\,,\end{equation}
where $|S_\tau\rangle=\sum_k e^{-i2\pi k\tau/N}\lambda_k|k\rangle_S$ satisfies
\begin{eqnarray}|S_\tau\rangle=\sqrt{N}\langle \tau|\Psi\rangle&=&
\exp[-i\tau H_S]|S_{\tau=0}\rangle\,,\label{HStau}\end{eqnarray}
with $|S_{\tau=0}\rangle=\sum_{k}\lambda_k\,|k\rangle_S$
and $H_S$ defined over the Schmidt states $|k\rangle_S$ by Eq.\ (\ref{KST}). The Schmidt coefficients $\lambda_k$ can then be interpreted
as the distribution of $|S_{\tau=0}\rangle$ over these energy eigenstates. In terms of the operators
$H_S$ and $P_T$ defined by (\ref{KST}), $|\Psi\rangle$ satisfies Eq.\ (\ref{WDW}) also for an effective non-interacting
${\cal J}$ of the form (\ref{J}), and can be generated from $|S_{\tau=0}\rangle|0_\tau\rangle$ with the circuit of Fig.\ (\ref{f1}).
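The existence of this special clock basis can be checked numerically. The following sketch is illustrative (hypothetical $d_S=2$, $N=4$, Haar-random non-commuting steps): it extracts the Schmidt vectors of the history state by SVD, builds the states $|\tau\rangle$ of (\ref{tau}), and verifies that $|S_\tau\rangle=\sqrt{N}\langle\tau|\Psi\rangle$ evolves under the constant Hamiltonian of (\ref{KST}).

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(d):
    # Haar-random unitary via QR of a complex Gaussian matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

dS, N = 2, 4
steps = [haar_unitary(dS) for _ in range(N - 1)]   # generic t-dependent U_{t,t-1}
S0 = np.array([1.0, 1j]) / np.sqrt(2)

Ss = [S0]
for U in steps:
    Ss.append(U @ Ss[-1])                          # |S_t> = U_{t,t-1}|S_{t-1}>
M = np.stack(Ss, axis=1) / np.sqrt(N)              # M[q, t] = psi(q, t)/sqrt(N)

# Schmidt decomposition: M[q, t] = sum_k s_k u[q, k] w[t, k]
u, s, vh = np.linalg.svd(M, full_matrices=False)
w = vh.T                                           # clock Schmidt vectors (columns)

# special clock states |tau>, built from the rank-d_S clock Schmidt vectors
# (padding the clock basis with extra orthogonal states does not change <tau|Psi>)
taus = np.stack([sum(np.exp(2j * np.pi * k * tau / N) * w[:, k] for k in range(dS))
                 for tau in range(N)], axis=1) / np.sqrt(N)

# effective constant Hamiltonian H_S |k>_S = (2 pi k / N)|k>_S on the Schmidt basis
def expHS(tau):
    ph = np.exp(-2j * np.pi * np.arange(dS) * tau / N)
    return u @ np.diag(ph) @ u.conj().T

S_tau0 = u @ s                                     # |S_{tau=0}> = sum_k s_k |k>_S
S_tau = [np.sqrt(N) * (M @ taus[:, tau].conj()) for tau in range(N)]
# check: S_tau[tau] == expHS(tau) @ S_tau0 for every tau
```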
Assuming now $d_S= N$ (the Schmidt decomposition selects in any case subspaces of equal dimension on $S$ and $T$) we can also
consider the inverse DFT of the system Schmidt states, $|\xi\rangle=\frac{1}{\sqrt{N}}\sum_k e^{-i2\pi k \xi/N}|k\rangle_S$,
which satisfy $e^{-iH_S}|\xi\rangle=|\xi+1\rangle$
and are the special system states analogous to $|\tau\rangle$. We can then also rewrite $|\Psi\rangle$ as
\begin{eqnarray}
|\Psi\rangle&=&{\textstyle\frac{1}{\sqrt{N}}}
\sum_{\xi,\tau}\Lambda_{\xi-\tau}|\xi\tau\rangle
=\sum_{\xi}\Lambda_\xi|\Psi_\xi\rangle
\label{Psit}\,,
\end{eqnarray}
where $\sqrt{N}\langle \xi\tau|\Psi\rangle=\Lambda_{\xi-\tau}$ depends just on $\xi-\tau$,
and
\begin{equation}\Lambda_\xi=\frac{1}{\sqrt{N}}\sum_k e^{i2\pi k \xi/N}\lambda_k\,,\end{equation}
is the DFT of the
Schmidt coefficients $\lambda_k$, with $|\Psi_\xi\rangle=\frac{1}{\sqrt{N}}\sum_\tau|\xi+\tau\rangle|\tau\rangle$
orthogonal
maximally entangled history states: $\langle\Psi_\xi|\Psi_{\xi'}\rangle=\delta_{\xi\xi'}$
($|\xi+\tau\rangle\equiv|\xi+\tau-N\rangle$ if $\xi+\tau\geq N$).
The representation (\ref{Psit}) is then ``conjugate'' to (\ref{Scm}), expressing $|\Psi\rangle$ as a superposition of
maximally entangled orthogonal history states. Like (\ref{Scm}), it is {\it symmetric} in $S-T$: States
$|S_\tau\rangle=\sqrt{N}\langle \tau|\Psi\rangle=\sum_\xi\Lambda_{\xi-\tau}|\xi\rangle$ evolve unitarily with $\tau$
(Eq.\ (\ref{HStau})) while clock states $|T_\xi\rangle=\sqrt{N}\langle\xi|\Psi\rangle=\sum_{\tau}\Lambda_{\xi-\tau}|\tau\rangle$ evolve
unitarily with $\xi$:
\begin{eqnarray}|T_\xi\rangle=\sqrt{N}\langle\xi|\Psi\rangle&=&
\exp[-i\xi P_T]|T_{\xi=0}\rangle\,,\label{PTxi}\end{eqnarray}
where $|T_{\xi=0}\rangle=\sum_k \lambda_k|-k\rangle_T$,
complementing Eq.\ (\ref{HStau}). Both $\xi$ and $\tau$ always run from $0$ to $N-1$ with uniform weight, irrespective of the seed state.
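The conjugate representation (\ref{Psit}) can be verified directly in the Schmidt bases. The sketch below (illustrative; $N=6$ and the random Schmidt coefficients are arbitrary) computes $\sqrt{N}\langle\xi\tau|\Psi\rangle$ and checks that it depends only on $\xi-\tau$ through $\Lambda_{\xi-\tau}$.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 6
lam = np.abs(rng.normal(size=N)) + 1e-3
lam /= np.linalg.norm(lam)

k = np.arange(N)
# |Psi> = sum_k lam_k |k>_S |-k>_T as a matrix in the Schmidt bases
Psi = np.zeros((N, N), dtype=complex)
Psi[k, (-k) % N] = lam

# |xi>, |tau>: inverse DFTs of the system / clock Schmidt bases (columns of Finv)
Finv = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)
amp = np.sqrt(N) * Finv.conj().T @ Psi @ Finv.conj()   # sqrt(N) <xi tau|Psi>

# Lambda_xi = N^{-1/2} sum_k e^{i 2 pi k xi / N} lam_k (DFT of the lambda_k);
# amp[xi, tau] should equal Lambda_{xi - tau}
Lam = np.fft.ifft(lam) * np.sqrt(N)
expected = Lam[np.subtract.outer(k, k) % N]
```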
From the Schmidt decomposition (\ref{Scm}) we can evaluate the system-time
entanglement entropy \cite{BR.16}
\begin{equation}
E(S,T)=S(\rho_S)=S(\rho_T)=-\sum_k \lambda_k^2\log_2 \lambda_k^2\label{S}\,,
\end{equation}
where $S(\rho)=-{\rm Tr}\,\rho\log_2\rho$. If $|S_0\rangle$ happens to be a common eigenstate of all $U_{t,t-1}$,
such that $|S_t\rangle=e^{-i\phi_t}|S_0\rangle$
$\forall$ $t$, then $|\Psi\rangle\propto|S_0\rangle\sum_t e^{-i\phi_t}|t\rangle$ becomes separable and $E(S,T)=0$ (stationary state),
whereas if
all $|S_t\rangle$ are orthogonal (i.e.\ fully distinguishable), $|\Psi\rangle$ becomes maximally entangled, with (\ref{St1})
already the Schmidt decomposition and $E(S,T)=\log_2 N$ maximum. Thus, $2^{E(S,T)}$ measures the actual system evolution time,
in the sense of counting the number of effective equally probable orthogonal states the system visits at
orthogonal times. For constant $U_{t,t-1}$ (Eq.\ (\ref{UHS})), $E(S,T)$ is just a measure of the {\it energy spread} ($mod\,2\pi$)
of the initial state, as $\lambda_k= {_S}\langle k|S_0\rangle$. A similar interpretation holds for the general case in terms
of the effective $H_S$ defined by (\ref{KST}).
On the other hand, the entropy determined by the conjugate distribution $|\Lambda_\xi|^2$,
\begin{equation}\tilde{E}(S,T)=-\sum_\xi |\Lambda_\xi|^2\log_2|\Lambda_\xi|^2\,, \label{Sc}
\end{equation}
measures the spread of $|\Psi\rangle$ over maximally entangled evolutions, or equivalently, the spread of system states
$|\xi\rangle$ for a given clock state $|\tau\rangle$ (or vice versa), and is a measure of {\it time uncertainty}.
It vanishes when $|\Psi\rangle$ is maximally entangled ($\Lambda_\xi=\delta_{\xi,0}$ if
$\lambda_k=\frac{1}{\sqrt{N}}$ $\forall\,k$), in which case there is complete synchronization
between the special system and clock basis states ($|\Psi\rangle=\frac{1}{\sqrt{N}}\sum_\tau |\tau\rangle|\tau\rangle)$,
and becomes maximum for a product state ($\Lambda_\xi=\frac{1}{\sqrt{N}}$ $\forall$ $\xi$ if
$\lambda_k=\delta_{k,0}$), in which case system and clock states are completely uncorrelated, as seen from (\ref{Psit}).
These two entropies satisfy the entropic uncertainty relation
\cite{BR.16} (see also \cite{DCT.91,PDO.01,Hi.57,Ll.18})
\begin{equation} E(S,T)+\tilde{E}(S,T)\geq \log_2\,N\,,
\end{equation}
which is saturated in the previous limits.
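The two entropies and the uncertainty relation are easy to evaluate numerically. The sketch below is illustrative ($N=8$ and random Schmidt coefficients are arbitrary choices): it computes $E(S,T)$ from the $\lambda_k$, obtains $\Lambda_\xi$ as their DFT, and also checks the saturating maximally entangled case $\lambda_k=1/\sqrt{N}$.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8

# random Schmidt coefficients lambda_k > 0 with sum_k lambda_k^2 = 1
lam = np.abs(rng.normal(size=N)) + 1e-3
lam /= np.linalg.norm(lam)

p = lam ** 2
E = -np.sum(p * np.log2(p))                    # E(S, T)

# conjugate distribution: Lambda_xi = N^{-1/2} sum_k e^{i 2 pi k xi/N} lambda_k
Lam = np.fft.ifft(lam) * np.sqrt(N)
q = np.maximum(np.abs(Lam) ** 2, 1e-30)        # guard the logarithm
Etilde = -np.sum(q * np.log2(q))               # tilde E(S, T)
# entropic uncertainty relation: E + Etilde >= log2 N

# saturating case: uniform lambda_k gives Lambda_xi = delta_{xi, 0}
lam_u = np.ones(N) / np.sqrt(N)
Lam_u = np.fft.ifft(lam_u) * np.sqrt(N)
```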
\begin{figure}[h!]
\centering
\includegraphics{figr1.ps}
\caption{Schematic circuit representing the generation of the {history state} (\ref{St22a}) in the special time basis,
where the system evolves according to a constant Hamiltonian $H_S$. Here $H^{\otimes n}$
denotes the Hadamard operator over $n$ qubits, with $2^n=N$.} \label{f1}
\end{figure}
\subsection{The case of a complete set of evolution operators\label{IIbb}}
While for a constant Hamiltonian the system-time entanglement (\ref{S}) clearly
depends on the seed state $|S_0\rangle$, such dependence becomes softened in the more
general case where the operators $U_{t,t-1}$ depend on $t$ and do not commute
among themselves, i.e.\ when the `Hamiltonian' $H_t\propto \ln U_{t,t-1}$ is
time-dependent and $[H_t,H_{t'}]\neq 0$ for some pairs $t\neq t'$. If they
have no common eigenstate, $|\Psi\rangle$ will be entangled for {\it any} $|S_0\rangle$.
The extreme case is that where the $U_t$'s of (\ref{utw})
form a {\it complete} set of {\it orthogonal} unitaries on $S$, such that
\begin{equation}{\rm Tr}\,[U_t^\dagger U_{t'}]=d_S\delta_{tt'}\,,\;\;t,t'=0,\ldots,d_S^2-1,\label{Uto}\end{equation}
implying $N=d_S^2$. In this case the history state (\ref{St1})
becomes {\it maximally entangled} for {\it any} initial state $|S_0\rangle$:
\begin{equation} E(S,T)=\log_2 d_S\,,\label{Eds}\end{equation}
such that $|\Psi\rangle=\frac{1}{\sqrt{d_S}}\sum_k |k\rangle_S|-k\rangle_T$ $\forall$ $|S_0\rangle$. \\
{\it Proof:} We may view Eq.\ (\ref{Uto}) as the scalar product between column vectors $\frac{1}{\sqrt{d_S}}\bm{U}_t$
of a $d_S^2\times d_S^2$ unitary matrix
${\bm U}$ of elements ${\bm U}_{ij,t}=\frac{1}{\sqrt{d_S}}\langle i|U_t|j\rangle$, with $\{|i\rangle\}$
any orthonormal basis of $S$,
such that (\ref{Uto}) is equivalent to ${\bm U}^\dagger{\bm U}=\mathbbm{1} _{d_S^2}$. This matrix then satisfies as well
${\bm U}{\bm U}^\dagger=\mathbbm{1}_{d_S^2}$, i.e.\
$\sum_{t}\langle i|U_t|j\rangle\langle l|U_t^\dagger|k\rangle=d_S\delta_{ik}\delta_{jl}$, which implies
$\sum_t U_t|j\rangle \langle l|U_t^\dagger=d_S\delta_{jl}\mathbbm{1}_{S}$ and hence
\begin{equation}
\sum_t U_t|S_0\rangle\langle S_0'| U_t^\dagger=d_S\,\langle S_0'|S_0\rangle\,\mathbbm{1}_{S}\label{res}\,,
\end{equation}
for any two states $|S_0\rangle$, $|S_0'\rangle$ of $S$. In particular,
for $|S_0\rangle=|S_0'\rangle$, Eq.\ (\ref{res}) implies a {\it maximally mixed}
reduced state $\rho_S={\rm Tr}_T|\Psi\rangle\langle\Psi|$ for {\it any} seed state $|S_0\rangle$:
\begin{equation}
\rho_S=\frac{1}{d_S^2}\sum_t U_t|S_0\rangle\langle S_0|U_t^\dagger=\frac{1}{d_S}\mathbbm{1}_S\,.\label{rhosm}
\end{equation}
Eq.\ (\ref{rhosm}) then leads to Eq.\ (\ref{Eds}). \qed
Therefore, a complete orthogonal set of $U_t$'s ensures that the system will visit $d_S$ orthogonal states
irrespective of the initial state $|S_0\rangle$. The Schmidt decomposition (\ref{Scm})
will then select a subspace of ${\cal H}_T$ of
dimension $d_S$ connected with $S$ through $|\Psi\rangle$. Due to the $d_S$-fold degeneracy
$\lambda_k=\frac{1}{\sqrt{d_S}}$ $\forall$ $k$, any orthogonal basis $\{|k\rangle_T\}$
of this subspace can be used in (\ref{Scm}), with
all states $|k\rangle_S=\sqrt{d_S}\,{_T}\langle-k|\Psi\rangle$ directly orthogonal.
A convenient choice of complete orthogonal set is provided by the Weyl operators \cite{W,Ga.88,Er.16}
\begin{equation}
U_{t}\equiv U_{pq}= \exp[i 2\pi pQ/d_S]\exp[-i 2\pi q P/d_S]\,,\label{UW}
\end{equation}
where $p,q=0,\ldots,d_S-1$, $t=qd_S+p$, $Q|q\rangle=q|q\rangle$, $P|p\rangle=p|p\rangle$
and $\{|q\rangle\}$, $\{|p\rangle\}$ are orthogonal
bases of $S$ related through a DFT: $|p\rangle=\frac{1}{\sqrt{d_S}}\sum_q e^{i2\pi pq/d_S}|q\rangle$.
They satisfy, for any eigenstate $|q_0\rangle$ of $Q$,
\begin{equation}U_{pq}|q_0\rangle=e^{i2\pi p(q_0+q)/d_S}|q_0+q\rangle\end{equation}
which implies Eq.\ (\ref{Uto}), i.e.\ ${\rm Tr}\,U_{p'q'}^\dagger U_{pq}=d_S\delta_{q'q}\delta_{p'p}$.
The discrete evolution under these operators can then be achieved by application of
just {\it two} different unitaries $U_{t,t-1}$ to the preceding
state (here $m\geq 1$, integer):
\begin{equation} U_{t,t-1}=\left\{\begin{array}{lr}e^{i2\pi Q/d_S}&t\neq m d_S\\
e^{-i2\pi P/d_S}e^{i2\pi Q/d_S}&t=m d_S\end{array}\right.\,.\label{MS}
\end{equation}
For instance, if $S$ is a qubit ($d_S=2$) we
may take
$Q=(\mathbbm{1}-\sigma_z)/2$, $P=(\mathbbm{1}-\sigma_x)/2$, with $e^{i2\pi Q/d_S}=\sigma_z$, $e^{-i2\pi P/d_S}=\sigma_x$.
Hence, $|\Psi\rangle=
\frac{1}{2}[|S_0\rangle|0\rangle+\sigma_z|S_0\rangle|1\rangle+
\sigma_x|S_0\rangle|2\rangle+i\sigma_y|S_0\rangle|3\rangle]$ is maximally entangled $\forall$ $|S_0\rangle$ ($E(S,T)=1$),
with $|S_1\rangle=\sigma_z|S_0\rangle$, $|S_2\rangle=\sigma_x|S_0\rangle=
-i\sigma_y|S_1\rangle$, $|S_3\rangle=i\sigma_y|S_0\rangle=\sigma_z|S_2\rangle$
and $|S_0\rangle=-i\sigma_y|S_3\rangle$.
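This qubit example can be verified in a few lines. The sketch below (illustrative, random seed state) checks the trace orthogonality (\ref{Uto}) of the set $\{\mathbbm{1},\sigma_z,\sigma_x,i\sigma_y\}$ and confirms that the reduced state is $\mathbbm{1}/2$, i.e.\ maximal entanglement, for an arbitrary $|S_0\rangle$.

```python
import numpy as np

rng = np.random.default_rng(5)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
ops = [np.eye(2, dtype=complex), sz, sx, 1j * sy]   # U_0, ..., U_3 of the example

# trace orthogonality: Tr[U_t^dag U_t'] = d_S delta_{tt'} with d_S = 2
gram = np.array([[np.trace(A.conj().T @ B) for B in ops] for A in ops])

# random seed state
S0 = rng.normal(size=2) + 1j * rng.normal(size=2)
S0 /= np.linalg.norm(S0)

# columns are (1/2) U_t |S_0>, i.e. psi(q, t)/sqrt(N) with N = d_S^2 = 4
Psi = np.stack([U @ S0 for U in ops], axis=1) / 2.0
rhoS = Psi @ Psi.conj().T                           # reduced state of S
E2 = 2 * (1 - np.trace(rhoS @ rhoS).real)           # quadratic entanglement entropy
```

For any seed, `rhoS` equals $\mathbbm{1}/2$ and $E_2(S,T)=2(1-1/d_S)=1$ is maximal, as stated.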
In the general case, it is here natural to view system $T$ as formed by two clocks with identical
Hilbert space dimension $d_S$, which govern
{\it time-independent} Hamiltonians $H_1=-2\pi Q/d_S$ and $H_2=2\pi P/d_S$ associated with conjugate
operators $Q$, $P$ on $S$. Then
we may write the history state (\ref{St1}) for the operators (\ref{UW}) as
\begin{equation}
|\Psi\rangle=\frac{1}{d_S^2}\sum_{p,q}U_{pq}|S_0\rangle|p\rangle_{T_1}|q\rangle_{T_2}\,,
\end{equation}
which represents a history state of history states. It can then be implemented with the circuit of Fig.\ \ref{f2}.
\begin{figure}[h!]
\includegraphics[scale=0.8]{figr2.ps}
\caption{Schematic circuit representing the generation of a {maximally entangled} history state $|\Psi\rangle$,
for any initial system state $|S_0\rangle$. Here $U_P=e^{-i2\pi P/d_S}$, $U_Q=e^{i2\pi Q/d_S}$, with $P,Q$
conjugate operators on $S$ and $2^n=d_S$.} \label{f2}
\end{figure}
\subsection{The quadratic $S-T$ entanglement entropy: Analytic evaluation and bounds\label{IIc}}
The analytic evaluation of the entropy (\ref{S}) in the general case requires the
determination of the singular values $\lambda_k$, i.e.,
the eigenvalues $\lambda_k^2$
of $\rho_S$ or $\rho_T$, which is difficult in most cases. It is then convenient
to use the quadratic (also called linear)
entropy $S_2(\rho)=2{\rm Tr}[\rho(\mathbbm{1}-\rho)]=2(1-{\rm Tr}\,\rho^2)$,
which does not require explicit knowledge of the eigenvalues and is a linear function of the purity ${\rm Tr}\,\rho^2$.
Like $S(\rho)$, it vanishes iff $\rho$ is pure and is maximum iff $\rho$ is
maximally mixed (with $S_2(\rho)=1$ for a maximally
mixed single qubit state), satisfying the majorization relation
$S_2(\rho')\geq S_2(\rho)$ if $\rho'\prec \rho$ \cite{Bha.97,CR.03}.
The associated $S-T$ entanglement entropy is
\begin{eqnarray}
E_2(S,T)&=&S_2(\rho_S)=S_2(\rho_T)=2(1-\sum_k \lambda_k^4)\label{S21}\\
&=&2(1-{\textstyle\frac{1}{N^2}}\sum_{t,t'}|\langle S_t|S_{t'}\rangle|^2)\,,\label{S22}
\end{eqnarray}
and can be determined just from the overlaps between the evolved states.
For the complete orthogonal set (\ref{Uto}), it is easily verified that $\sum_{t,t'}|\langle S_t|S_{t'}\rangle|^2=d_S^3$, so that
$E_2(S,T)=2(1-\frac{1}{d_S})$ becomes maximum.
The overlaps $\langle S_t|S_{t'}\rangle$ are also experimentally accessible
through a measurement at the clock $T$ of the non-diagonal operators
$|t'\rangle\langle t|$ ($t\neq t'$):
\begin{equation}\frac{1}{N}\langle S_{t'}|S_{t}\rangle=\langle \Psi|\mathbbm{1}_{S}\otimes |t'\rangle\langle t||\Psi\rangle
={\textstyle\frac{1}{2}}\left(\langle \sigma_{t't}^x\rangle+i\langle\sigma_{t't}^y\rangle\right)\,,\end{equation}
where $\sigma_{t't}^x=|t'\rangle\langle t|+|t\rangle\langle t'|$, $\sigma_{t't}^y=(|t'\rangle\langle t|-|t\rangle\langle t'|)/i$
are hermitian Pauli operators for the pair $t\neq t'$.
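This overlap identity is straightforward to check numerically. The sketch below is illustrative (hypothetical $d_S=3$, $N=5$, random non-orthogonal evolved states): it compares $\frac1N\langle S_{t'}|S_t\rangle$ with the expectation of $\mathbbm{1}_S\otimes|t'\rangle\langle t|$ and with the hermitian pair, using $|t'\rangle\langle t|=(\sigma^x_{t't}+i\sigma^y_{t't})/2$.

```python
import numpy as np

rng = np.random.default_rng(6)
dS, N = 3, 5
# arbitrary normalized evolved states |S_t> (columns), not necessarily orthogonal
S = rng.normal(size=(dS, N)) + 1j * rng.normal(size=(dS, N))
S /= np.linalg.norm(S, axis=0)

Psi = (S / np.sqrt(N)).reshape(-1)        # |Psi> with S (x) T index ordering

t, tp = 1, 3
e_t = np.zeros(N); e_t[t] = 1.0
e_tp = np.zeros(N); e_tp[tp] = 1.0
proj = np.kron(np.eye(dS), np.outer(e_tp, e_t))    # 1_S (x) |t'><t|
lhs = (S[:, tp].conj() @ S[:, t]) / N              # (1/N) <S_t'|S_t>
rhs = Psi.conj() @ (proj @ Psi)                    # <Psi|1_S (x) |t'><t||Psi>

# hermitian pair for (t, t'): |t'><t| = (sigma^x + i sigma^y)/2
sig_x = np.kron(np.eye(dS), np.outer(e_tp, e_t) + np.outer(e_t, e_tp))
sig_y = np.kron(np.eye(dS), (np.outer(e_tp, e_t) - np.outer(e_t, e_tp)) / 1j)
from_paulis = (Psi.conj() @ (sig_x @ Psi) + 1j * (Psi.conj() @ (sig_y @ Psi))) / 2
```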
Let us now consider the evolution for a general constant Hamiltonian $H$ of arbitrary spectrum for system $S$, such that
$U_t=e^{-i Ht}$ $\forall$ $t$. In contrast with (\ref{S}), Eq.\ (\ref{S22}) can in this case be explicitly evaluated. Writing
\begin{equation}
|S_0\rangle=\sum_k c_k |E_k\rangle,\;\;\;H|E_k\rangle=E_k|E_k\rangle\,,\label{SE}
\end{equation}
with $E_k\neq E_{k'}$ if $k\neq k'$ (in case of degenerate states $|k_l\rangle$, $c_k|E_k\rangle=\sum_l c_{kl}|k_l\rangle$,
with $|c_k|^2=\sum_l |c_{kl}|^2$),
then $|S_t\rangle=\sum_k e^{-iE_k t} c_k |E_k\rangle$ and Eq.\ (\ref{S22}) becomes, for equally spaced
times $t=t_f\frac{j}{N-1}$, $j=0,\ldots, N-1$,
\begin{eqnarray}E_2(S,T)&=&2(1-\frac{1}{N^2}\sum_{t,t'}|\sum_k |c_k|^2 e^{-iE_k(t-t')}|^2)\\
&=&2\sum_{k\neq k'}|c_k c_{k'}|^2\left[1-\frac{\sin^2\frac{(E_k-E_{k'})t_f N}{2(N-1)}}{N^2\sin^2\frac{(E_k-E_{k'})t_f}{2(N-1)}}\right].\;\;\;\;\;\;\;
\label{E2x}
\end{eqnarray}
The exact result for a continuous evolution can also be obtained from (\ref{E2x}),
by taking the limit $N\rightarrow\infty$:
\begin{eqnarray}E_2(S,T)&\underset{N\rightarrow\infty}{\rightarrow}&2\sum_{k\neq k'}|c_k c_{k'}|^2\left[1-\frac{\sin^2\left(\frac{(E_k-E_{k'})t_f}{2}\right)}{(\frac{(E_k-E_{k'})t_f}{2})^2}\right]\;\;\;\;\;\;
\label{E23}
\end{eqnarray}
Eq.\ (\ref{E23}) provides a good approximation to (\ref{E2x}) if $\frac{|E_k-E_{k'}|t_f}{N-1}\ll 1$ $\forall$ $k\neq k'$
with finite weight $|c_kc_{k'}|^2> 0$.
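The closed form above can be tested against a direct evaluation of the overlaps. The sketch below is illustrative (hypothetical $d=4$, $N=6$, $t_f=2$, random nondegenerate spectrum and amplitudes): it compares the sinc-type sum with the definition through $\langle S_t|S_{t'}\rangle$ and checks the upper bound by the quadratic entropy of the energy distribution.

```python
import numpy as np

rng = np.random.default_rng(7)
d, N, tf = 4, 6, 2.0
En = np.sort(rng.uniform(0.0, 5.0, size=d))   # nondegenerate energies E_k
c = rng.normal(size=d) + 1j * rng.normal(size=d)
c /= np.linalg.norm(c)
p = np.abs(c) ** 2                            # energy distribution |c_k|^2

ts = tf * np.arange(N) / (N - 1)              # N equally spaced times
# direct evaluation from the overlaps <S_t|S_t'>
ov = np.array([[np.sum(p * np.exp(-1j * En * (ta - tb))) for tb in ts] for ta in ts])
E2_direct = 2 * (1 - np.sum(np.abs(ov) ** 2) / N ** 2)

# closed sinc-type form (sum over k != k')
E2_closed = 0.0
for a in range(d):
    for b in range(d):
        if a != b:
            x = (En[a] - En[b]) * tf / (2 * (N - 1))
            E2_closed += 2 * p[a] * p[b] * (1 - np.sin(N * x) ** 2 / (N * np.sin(x)) ** 2)

E2_bound = 2 * (1 - np.sum(p ** 2))           # upper bound from |c_k|^2
```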
Eqs.\ (\ref{E2x})--(\ref{E23}) are essentially measures of the {\it spread} of $|S_0\rangle$ over distinct
energy eigenstates. For small $t_f$ such that
$|E_k-E_{k'}|t_f\ll 1$ $\forall\,k,k'$, a second order expansion shows they
are proportional to the {\it energy fluctuation} in $|S_0\rangle$:
$|\langle S_{t}|S_{t'}\rangle|^2\approx 1-\langle (\Delta H)^2\rangle(t-t')^2$,
with $\Delta H=H-\langle H\rangle$ and $\langle O\rangle=\langle S_0|O|S_0\rangle$, implying
\begin{equation}
E_2(S,T)\approx \frac{N+1}{3(N-1)}\langle (\Delta H)^2\rangle\,t_f^2
\underset{N\rightarrow\infty}{\rightarrow}\frac{1}{3}\langle(\Delta H)^2\rangle\,t_f^2 \,.
\end{equation}
It then becomes proportional to the square of the speed $\sqrt{\langle (\Delta H)^2\rangle}$
of the continuous quantum evolution according to the Fubini-Study metric \cite{AA.90,La.17}.
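The small-$t_f$ expansion can be checked numerically as well. The sketch below is illustrative (random real symmetric $H$, $t_f=10^{-3}$, $N=10$, all hypothetical choices): it compares the exact quadratic entropy with the energy-fluctuation approximation.

```python
import numpy as np

rng = np.random.default_rng(8)
d, N, tf = 4, 10, 1e-3
H = rng.normal(size=(d, d)); H = (H + H.T) / 2      # random real symmetric Hamiltonian
S0 = rng.normal(size=d); S0 /= np.linalg.norm(S0)

mean_H = S0 @ (H @ S0)
varH = S0 @ (H @ (H @ S0)) - mean_H ** 2            # <(Delta H)^2> in |S_0>

evals, evecs = np.linalg.eigh(H)
p = (evecs.T @ S0) ** 2                             # energy distribution |c_k|^2
ts = tf * np.arange(N) / (N - 1)
ov = np.array([[np.sum(p * np.exp(-1j * evals * (ta - tb))) for tb in ts] for ta in ts])
E2 = 2 * (1 - np.sum(np.abs(ov) ** 2) / N ** 2)

# second-order (small t_f) approximation proportional to the energy fluctuation
E2_small_tf = (N + 1) / (3 * (N - 1)) * varH * tf ** 2
```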
It is also apparent from (\ref{E2x}) that $E_2(S,T)$ is upper bounded by the quadratic entropy of the energy distribution $|c_k|^2$:
\begin{eqnarray}E_2(S,T)\leq 2\sum_{k\neq k'}|c_k c_{k'}|^2=2(1-\sum_k |c_k|^4)\,.
\label{E22}
\end{eqnarray}
The maximum (\ref{E22}) for a fixed distribution $|c_k|^2$ is reached for an equally spaced spectrum of the form
\begin{equation} E_k={\frac{N-1}{t_f}} \frac{2\pi k}{N}+C\,,\label{spec}\end{equation}
with $k$ integer $\in[0,N-1]$, since in this case the bracket in (\ref{E2x}) takes its maximum value $1$ $\forall$ $k\neq k'$.
The spectrum (\ref{spec}) is just Eq.\ (\ref{KST}) for the scaled Hamiltonian
$H_S=\frac{t_f}{N-1}(H-C)$ (for which $t=0,\ldots,N-1$), so that the
energy states $|E_k\rangle$ become the Schmidt states $|k\rangle_S$ of (\ref{Scm})
and $|c_k|$ the Schmidt coefficients $\lambda_k$. For other spectra,
the states $|\tilde{k}\rangle_T=
\frac{1}{\sqrt{N}}\sum_t e^{-i E_k t}|t\rangle$
in
\begin{equation}|\Psi\rangle=\frac{1}{\sqrt{N}}\sum_{k,t} c_k e^{-iE_k t}|E_k\rangle |t\rangle=\sum_k c_k|E_k\rangle|\tilde{k}\rangle_T\,,\end{equation}
are not necessarily all orthogonal, so that $E(S,T)$ will become normally smaller \cite{BR.16}.
Nonetheless, for large $N$ and not too small $t_f$, the
states $|\tilde{k}\rangle_T$ will typically be almost orthogonal, so that the deviation from the upper
bound (\ref{E22}) will not be large, becoming significant only in the presence of quasidegeneracies in the spectrum:
The bracket in (\ref{E23}) vanishes just for $E_k\rightarrow E_{k'}$, becoming close to $1$ for $|E_k-E_{k'}|t_f/2>\pi$,
while that in (\ref{E2x}), which is a periodic function of $E_k-E_{k'}$ with period $\Delta_N=2\pi\frac{N-1}{t_f}$,
vanishes for $E_k\rightarrow E_{k'}+m\Delta_N$, with $m$ an integer,
becoming close to $1$ whenever $|E_k-E_{k'}-m\Delta_N|t_f/2>\pi$.
On the other hand, Eq.\ (\ref{E23}) also admits a {\it lower bound} for fixed initial and final states $|S_0\rangle$
and $|S_{t_f}\rangle=\sum_k c_k e^{-i E_k t_f}|E_k\rangle$,
reached when the evolution (over $N$ equally spaced times $t=t_f \frac{j}{N-1}$ under a constant $H$)
remains in the subspace spanned by $|S_0\rangle$ and $|S_{t_f}\rangle$:
\begin{eqnarray}E_2(S,T)\geq E^{\rm min}_2(S,T) =1-\frac{\sin^2\frac{N\phi}{N-1}}{N^2\sin^2\frac{\phi}{N-1}}\,,
\label{E24}
\end{eqnarray}
where $\phi\in[0,\pi/2]$ is determined by the overlap between the initial and final states:
\begin{equation}
\cos\phi=|\langle S_0|S_{t_f}\rangle|=|\sum_k |c_k|^2 e^{-iE_k t_f}|\,.\label{ov}
\end{equation}
Writing the final state as
\begin{equation}
|S_{t_f}\rangle=e^{-i\gamma}(\cos\phi |S_0\rangle+\sin\phi|S_0^\perp\rangle),
\label{Stf}\end{equation}
where $\langle S_0^\perp|S_0\rangle=0$, $E^{\rm min}_2(S,T)$ is the result of Eq.\ (\ref{E2x})
for an evolution under a two level Hamiltonian
\begin{equation}
H^{\rm min}=\frac{\phi}{t_f}\sigma_y+\frac{\gamma}{t_f}\,,
\;\;\sigma_y=-i(|S_0\rangle\langle S_0^\perp|-|S_0^\perp\rangle\langle S_0|)\,,
\label{Hmin}\end{equation}
such that
\begin{eqnarray}|S^{\rm min}_t\rangle&\equiv&\exp[-i H^{\rm min} t]|S_0\rangle\nonumber\\&=&
{\textstyle e^{-i\gamma t/t_f}(\cos\frac{\phi t}{t_f}|S_0\rangle+
\sin \frac{\phi t}{t_f}|S_0^\perp\rangle)}\,,\label{Stg}\end{eqnarray}
with $|S^{\rm min}_{t_f}\rangle=|S_{t_f}\rangle$.
The demonstration of (\ref{E24}) is given in the appendix, but the result is physically clear:
The $S-T$ entanglement is a measure of the distinguishability
between the evolved states, and the minimum value is then obtained for an evolution within the subspace
containing the initial and final states,
where all intermediate states will be closer than in a general evolution.
Such evolution, Eq.\ (\ref{Stg}), proceeds precisely along the geodesic
determined by the Fubini-Study metric \cite{AA.90, La.17}, saturating the Mandelstam-Tamm bound
\cite{Bt.83}
$\Delta t \Delta E\geq \cos^{-1}(|\langle S_0|S_{t_f}\rangle|)=\phi$ ($\Delta t=t_f$,
$\Delta E=\sqrt{\langle (\Delta H^{\rm min})^2\rangle}=\phi/t_f$).
As a check, for small $t_f$ such that $|E_k-\!E_{k'}|t_f\ll 1$ $\forall\, k\neq k'$,
a fourth order expansion of (\ref{E2x}) and (\ref{E24}) leads to
\begin{equation}
E_2(S,T)-E^{\rm min}_2(S,T)\approx\kappa[
\langle (\Delta H)^4\rangle-\langle (\Delta H)^2\rangle^2]t_f^4\geq 0\,, \label{46}
\end{equation}
where
$\kappa=\frac{(N+1)(N-2)(N-4/3)}{60(N-1)^3}>0$ $\forall\,N>2$.
Hence, the difference (\ref{46}) is verified to be non-negative and of fourth order in $t_f$, being proportional
to the fluctuation of $(\Delta H)^2$. The latter vanishes just for the geodesic evolution, where
$\Delta H=\Delta H^{\rm min}=\frac{\phi}{t_f}\sigma_y$ and hence
$\langle(\Delta H^{\rm min})^4\rangle=\langle (\Delta H^{\rm min})^2\rangle^2=\phi^4/t_f^4$,
implying $E_2(S,T)=E^{\rm min}_2(S,T)$.
Such fluctuation represents a curvature coefficient which measures the deviation from the geodesic \cite{La.17,Br.96}.
For $\phi\in[0,\pi/2]$, the bound (\ref{E24}) is, of course, an increasing function of $\phi$ for $N\geq 2$, i.e.\
of the Wootters distance \cite{Wo.81} $s(|S_0\rangle,|S_{t_f}\rangle)=2\arccos(
|\langle S_0|S_{t_f}\rangle|)=2\phi$, and hence a decreasing function of
the overlap $|\langle S_{t_f}|S_0\rangle|$. It is also a {\it decreasing} function of $N\geq 2$ for $\phi\in(0,\pi/2]$.
The minimum value is thus achieved in the continuous limit $N\rightarrow\infty$,
where $E_2^{\rm min}(S,T)\rightarrow 1-(\sin^2\phi)/\phi^2$. Then, we may also write, for any $N\geq 2$,
\begin{eqnarray}
E_2(S,T)\geq 1-\frac{\sin^2\phi}{\phi^2}\,.
\label{E25}
\end{eqnarray}
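Both bounds can be verified numerically for a generic spectrum. The sketch below (with randomly chosen $E_k$ and $|c_k|^2$, equally spaced times $t_j=t_f\,j/(N-1)$, and the overlap form $E_2(S,T)=2(1-N^{-2}\sum_{t,t'}|\langle S_t|S_{t'}\rangle|^2)$ of Eq.\ (\ref{S22})) evaluates the exact entropy together with the upper bound (\ref{E22}) and the $N$-independent lower bound (\ref{E25}):

```python
import numpy as np

rng = np.random.default_rng(1)
N, tf = 8, 2.0
E = rng.uniform(0.0, 4.0, N)         # arbitrary energies E_k (illustrative)
c2 = rng.dirichlet(np.ones(N))       # arbitrary weights |c_k|^2
ts = tf*np.arange(N)/(N - 1)

ov = np.array([[np.sum(c2*np.exp(-1j*E*(t - tp))) for tp in ts] for t in ts])
E2 = 2*(1 - np.mean(np.abs(ov)**2))

phi = np.arccos(np.abs(np.sum(c2*np.exp(-1j*E*tf))))   # overlap angle, Eq. (ov)
lower = 1 - np.sin(phi)**2/phi**2                       # Eq. (E25)
upper = 2*(1 - np.sum(c2**2))                           # Eq. (E22)
print(lower, E2, upper)
```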
\section{Entanglement and history states of evolution operators \label{III}}
We now examine the application of the previous formalism to the evolution operators themselves.
The aim is to link properties of the previous history states with those of the operators that generate them.
For this purpose the pure state representation of operators \cite{Ja.72,Ch.75,Gr.05,Du.05,Mi.13}
provides a convenient approach, enabling a direct derivation of their entanglement properties \cite{Z.00,N.03,P.07,M.13}.
\subsection{Entanglement of operators and pure state representation}
We first briefly review the concept of operator entanglement and its pure state representation.
Any operator ${\cal W}$ for a bipartite system A+B can be expanded as
\begin{equation}
{\cal W}=\sum_{i,j} M_{ij}C_i\otimes D_j,\label{Ucd}
\end{equation}
where $C_i$ and $D_j$ are orthogonal operators for A and B respectively,
satisfying
\begin{equation}{\rm Tr}\,C_i^\dagger C_j=\delta_{ij}d_A\,,\;\;{\rm Tr}\,D_i^\dagger D_j=\delta_{ij}d_B\,.\label{ort}\end{equation}
Hence, $M_{ij}=\frac{1}{d_A d_B}{\rm Tr}\,[C_i^\dagger\otimes D_j^\dagger\,{\cal W}]$. We can use, for instance, the
Weyl operators (\ref{UW}) for the sets $\{C_i\}$, $\{D_i\}$.
Eqs.\ (\ref{ort}) imply ${\rm Tr}\,[{\cal W}^\dagger {\cal W}]=
d_A d_B{\rm Tr}\,[M^\dagger M]$. If ${\cal W}$ is unitary, then ${\rm Tr}\,[M^\dagger M]=1$, entailing that the numbers $\{|M_{ij}|^2\}$
are in this case standard probabilities. By means of the singular value decomposition, we can write the $d_A^2\times d_B^2$ matrix $M$ as
$M=UDV^\dagger$,
where $U$ and $V$ are unitary matrices and $D$ a diagonal matrix with nonnegative entries $\lambda_k^{\cal W}$
satisfying $\sum_k (\lambda^{\cal W}_k)^2={\rm Tr}\,M^\dagger M=1$.
We can then rewrite ${\cal W}$ in the Schmidt form
\begin{equation}
{\cal W}=\sum_k \lambda^{\cal W}_k A_k \otimes B_k \,,\label{Sf}
\end{equation}
where $A_k \equiv \sum_{i}U_{ik}C_i$ and $B_k \equiv \sum_{j}V^*_{jk}D_j$ are again orthogonal operator bases for $A$ and $B$ satisfying
${\rm Tr}\,A_k^\dagger A_l=d_A\delta_{kl}$, ${\rm Tr}\,B_k^\dagger B_l=d_B\delta_{kl}$.
The von Neumann entanglement entropy of ${\cal W}$ can then be defined as
\begin{equation}
E({\cal W})=-\sum_k (\lambda^{\cal W}_k)^2\log_2 (\lambda^{\cal W}_k)^2\,.\label{EU}
\end{equation}
Similarly, $E_2({\cal W})=2(1-\sum_k (\lambda_k^{\cal W})^4)$. These entropies vanish when ${\cal W}$ is a product of local
unitaries, and are maximum when ${\cal W}$ is a uniform sum of $d^2$ products $A_k\otimes B_k$, with $d={\rm Min}[d_A,d_B]$.
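For small systems the Schmidt coefficients $\lambda_k^{\cal W}$ can be computed directly: rearranging the matrix elements of ${\cal W}$ so that the two $A$ indices label rows and the two $B$ indices label columns (the realignment map) turns Eq.\ (\ref{Sf}) into an ordinary singular value decomposition. A minimal NumPy sketch, using the CNOT gate as a standard illustration:

```python
import numpy as np

def op_schmidt(W, dA, dB):
    """Operator Schmidt coefficients of W acting on H_A (dim dA) x H_B (dim dB),
    normalized so that sum_k lambda_k^2 = 1 for unitary W."""
    W4 = W.reshape(dA, dB, dA, dB)                        # indices (i, a, j, b)
    R = W4.transpose(0, 2, 1, 3).reshape(dA*dA, dB*dB)    # rows (i,j), cols (a,b)
    return np.linalg.svd(R, compute_uv=False)/np.sqrt(dA*dB)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
lam = op_schmidt(CNOT, 2, 2)
p = lam[lam > 1e-12]**2
E = -np.sum(p*np.log2(p))      # entanglement entropy E(W)
print(lam, E)                  # E = 1.0 for CNOT
```

The CNOT gate has operator Schmidt rank $2$ and one ebit of operator entanglement, half the maximum $2\log_2 2=2$.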
The previous analogy between operators and states can be made explicit
through the Choi isomorphism \cite{Ja.72,Ch.75,Gr.05,Du.05,Mi.13}. Any operator $O$ in a system with Hilbert space ${\cal H}$ of
dimension $d$ can be associated with a pure state $|O\rangle\in{\cal H}\otimes {\cal H}$, given by
\begin{equation}|O\rangle=(O\otimes \mathbb{1})|\mathbb{1}\rangle=\frac{1}{\sqrt{d}}
\sum_q(O|q\rangle)|q\rangle=\frac{1}{\sqrt{d}}\sum_{q,q'}\langle q'|O|q\rangle|q'\rangle|q\rangle\,,
\end{equation}
where $|\mathbb{1}\rangle=\frac{1}{\sqrt{d}}\sum_q |q\rangle |q\rangle$ is a maximally entangled state in ${\cal H}\otimes {\cal H}$ and $\{|q\rangle\}$
an orthonormal set. In this way,
\begin{equation}\langle O|O'\rangle=\frac{1}{d}\rm Tr\,[O^\dagger O']\,.\end{equation}
Therefore, orthogonal operators satisfying ${\rm Tr}\,[O_i^\dagger O_j]=d\delta_{ij}$ correspond to orthonormal states, $\langle O_i|O_j\rangle=\delta_{ij}$,
and unitary operators $U$ to normalized states $|U\rangle$.
The operator (\ref{Ucd}) can then be associated with the pure state (note that $|\mathbbm{1}_{AB}\rangle=|\mathbbm{1}_A\rangle|\mathbbm{1}_B\rangle$)
\begin{equation}
|{\cal W}\rangle=({\cal W}\otimes\mathbbm{1}_{A'B'}) |\mathbbm{1}_A\rangle|\mathbbm{1}_B\rangle
=\sum_{ij} M_{ij}|C_i\rangle |D_j\rangle\,,\label{StU}
\end{equation}
where $|C_i\rangle=(C_i\otimes \mathbbm{1}_{A'})|\mathbbm{1}_A\rangle$,
$|D_j\rangle=(D_j\otimes \mathbbm{1}_{B'})|\mathbbm{1}_B\rangle$ form orthogonal sets:
$\langle C_k|C_i\rangle=\delta_{ki}$,
$\langle D_k|D_j\rangle=\delta_{kj}$. Thus, $M_{ij}=\langle C_i,D_j|{\cal W}\rangle$,
with $\langle {\cal W}|{\cal W}\rangle={\rm Tr}\,[M^\dagger M]$.
The state representation of the Schmidt form (\ref{Sf}) acquires then the standard appearance
\begin{equation}|{\cal W}\rangle=\sum_k \lambda^{\cal W}_k |A_k\rangle|B_k\rangle\,,\end{equation}
with $\langle A_k|A_l\rangle=\delta_{kl}=\langle B_k|B_l\rangle$, and the entanglement entropy (\ref{EU}) of a unitary
${\cal W}$ can be also expressed as
\begin{equation}
E({\cal W})=S(\rho_A^{\cal W})=S(\rho_B^{\cal W})\,,\;\;
\rho_{A(B)}^{\cal W}={\rm Tr}_{B(A)}\,|{\cal W}\rangle\langle {\cal W}|\,,\label{EU2}
\end{equation}
with $S(\rho)=-{\rm Tr}\rho\log_2\rho$. Similarly,
$E_2({\cal W})=S_2(\rho_A^{\cal W})=S_2(\rho_B^{\cal W})$, with
$S_2(\rho)=2(1-{\rm Tr}\,\rho^2)$.
\subsection{Generating operators and operator history states}
The history state (\ref{St1}) can be generated from an initial product state $|S_0\rangle|0\rangle$ as
\begin{equation}
|\Psi\rangle={\cal W}(I\otimes H^{\otimes n})|S_0\rangle|0\rangle\,, \label{CW}
\end{equation}
where $H^{\otimes n}$ denotes the Hadamard operator acting on the clock
($H^{\otimes n}|0\rangle=\frac{1}{\sqrt{N}}\sum_{t=0}^{N-1}|t\rangle$, with $N=2^n$) and
\begin{equation}
{\cal W}=\sum_t U_t\otimes |t\rangle\langle t|,\label{U1}
\end{equation}
the control-$U_t$ operator. By expanding $U_t$ in an orthogonal basis of operators $C_i$, we have
\begin{equation}{\cal W}=\sum_{t,i} {M}_{ti} C_i \otimes |t\rangle\langle t|,\;\;
{M}_{ti}=\frac{1}{d_S}{\rm Tr}\,C_i^\dagger U_t\,,\label{U12}
\end{equation}
where the coefficients ${M}_{tj}$ satisfy $\sum_j |{M}_{tj}|^2 =\frac{1}{d_S}{\rm Tr}\,U_t^\dagger U_t=1$,
and are hence standard probabilities at fixed $t$.
Since the projectors $|t\rangle\langle t|$ are also orthogonal and have unit trace,
the Schmidt coefficients $\lambda_k^{\cal W}$ of (\ref{Sf}) are here just the singular values of the matrix ${M}/\sqrt{N}$.
The ensuing entanglement entropy (\ref{EU}) is the same as that of ${\cal W}(I\otimes H^{\otimes n})$,
as they differ just by a local unitary.
The pure state (\ref{StU}) associated with the operator (\ref{U1})
is itself an {\it operator history state}:
\begin{equation}
|{\cal W}\rangle=\frac{1}{\sqrt{N}}\sum_t
|U_t\rangle|T_t\rangle\label{UH}\,,
\end{equation}
where $|U_t\rangle=(U_t\otimes\mathbbm{1}_{S'})|\mathbbm{1}_S\rangle=\frac{1}{\sqrt{d_S}}\sum_q U_t|q\rangle|q\rangle$
and $|T_t\rangle=(T_t\otimes\mathbbm{1}_{T'})|
\mathbbm{1}_T\rangle=|tt\rangle$, with $T_t=\sqrt{N}|t\rangle\langle t|$
and $\langle T_t|T_{t'}\rangle=\delta_{tt'}$.
Writing $|tt\rangle$ simply as $|t\rangle$, Eq.\ (\ref{UH}) is the standard history state (\ref{St1}) for a
maximally entangled initial state $|\mathbbm{1}_{S}\rangle=\frac{1}{\sqrt{d_s}}\sum_q|q\rangle|q\rangle$
of a bipartite system under a local evolution
$U_t\otimes \mathbbm{1}_{S'}$, so that it can be generated with the circuit depicted in Fig.\ \ref{f3}.
\begin{figure}[h!]
\centering
\includegraphics{figr3.ps}
\caption{(Color online) Schematic circuit representing the generation of the operator history state (\ref{UH}). } \label{f3}
\end{figure}
The entanglement of the history state (\ref{UH}) is the operator entanglement (\ref{EU}) of ${\cal W}$, which is then a measure
of the distinguishability of the operator states $|U_t\rangle$. Its quadratic entanglement
can be directly evaluated with Eq.\ (\ref{S22}), where now $\langle U_t|U_{t'}\rangle=\frac{1}{d_S}{\rm Tr}\,[U_t^\dagger U_{t'}]$:
\begin{equation}
E_2({\cal W})=2(1-{\textstyle\frac{1}{N^2}}\sum_{t,t'}|\langle U_t|U_{t'}\rangle|^2)\,.\label{E2W}
\end{equation}
It is now immediate to see that if $N=d_S^2$ and the operators $\{U_t\}$ form a {\it complete orthogonal set}
(Eq.\ (\ref{Uto})), the operator history state (\ref{UH}) is {\it maximally entangled}:
\begin{equation}
E({\cal W})=\log_2 d_S^2=2\log_2 d_S\,,
\end{equation}
while $E_2({\cal W})=2(1-\frac{1}{d_S^2})$,
since all states $|U_t\rangle$ become orthogonal: $\langle U_t|U_{t'}\rangle=\delta_{tt'}$.
The form (\ref{UH}) is then already the Schmidt decomposition of $|{\cal W}\rangle$. Since in this case
the original history state (\ref{St1}) has maximum entanglement $E(S,T)=\log_2 d_S$ for any initial state
$|S_0\rangle$, this result indicates a
close relation between the entangling power of ${\cal W}$ and its operator entanglement, which will be discussed below.
It is also apparent that if the $d_S^2$ operators $U_t$ are not all orthogonal, then $E({\cal W})<2\log_2 d_S$.
For a smaller number $N< d_S^2$ of times, $E({\cal W})$ will be maximum if all $N$ states $|U_t\rangle$ are orthogonal.
In the case of a {\it constant} Hamiltonian with energies $E_k$, such that $U_t=e^{-iHt}$ $\forall$ $t$, then
\begin{equation}\langle U_t|U_{t'}\rangle=\frac{1}{d_S}\sum_k e^{-iE_k(t-t')}\,.\end{equation}
For $N=d_S$, an equally spaced spectrum $E_k=2\pi k/N+C$, $k=0,\ldots,N-1$, (i.e., Eq.\ (\ref{spec})
if $t\rightarrow\frac{t_f}{N-1}j$) ensures that all $|U_t\rangle$ are strictly orthogonal:
$\langle U_t|U_{t'}\rangle=\delta_{tt'}$ $\forall$ $t,t'$ (the ensuing operators $U_t$ are in fact the first $d_S$ operators
of the Weyl set (\ref{UW})). Hence, $E({\cal W})$ will reach for this spectrum the maximum value
\begin{equation}E({\cal W}) =\log_2 d_S\,,\end{equation}
compatible with a fixed $H$ and $N=d_S$ times. The same holds for $E_2({\cal W})$. This result correlates
with the extremal properties of this spectrum discussed in \ref{IIc}.
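The orthogonality of the operator states for this spectrum is straightforward to verify numerically (a sketch, taking $C=0$ and $N=d_S$):

```python
import numpy as np

dS = N = 8
Ek = 2*np.pi*np.arange(N)/N       # equally spaced spectrum, C = 0
# <U_t|U_t'> = (1/d_S) sum_k exp(-i E_k (t - t'))
ov = np.array([[np.sum(np.exp(-1j*Ek*(t - tp)))/dS for tp in range(N)]
               for t in range(N)])
E2W = 2*(1 - np.mean(np.abs(ov)**2))
print(np.allclose(ov, np.eye(N)), E2W)    # True, 2*(1 - 1/N)
```

The geometric sums vanish for all $t\neq t'$, so the overlap matrix is exactly the identity and $E_2({\cal W})$ takes its maximum value $2(1-1/N)$ for $N$ orthogonal operators.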
On the other hand, since $U_{t,t-1}=U_tU_{t-1}^\dagger$, the operator ${\cal U}$ of Eq.\ (\ref{Upsi})
is related with ${\cal W}$ by
\begin{equation}
{\cal U}={\cal W}(I\otimes \exp[-iP_T]){\cal W}^\dagger\,,
\end{equation}
where $\exp[-iP_T]=\sum_t |t\rangle\langle t-1|$. The associated pure state is also a history state,
\begin{equation}
|{\cal U}\rangle=\frac{1}{\sqrt{N}}\sum_t |U_{t,t-1}\rangle|T_{t,t-1}\rangle\,,\label{Utm1}
\end{equation}
where $|T_{t,t-1}\rangle=\sqrt{N}(|t\rangle \langle t-1|\otimes \mathbb{1}_{T'})|\mathbb{1}_T\rangle=|t,t-1\rangle$ are again orthogonal states.
Its entanglement is then a measure of the distinguishability of the step evolution operator states $|U_{t,t-1}\rangle$,
and depends on the {\it order} of the operators $U_t$, in contrast with $E({\cal W})$. It vanishes in the constant case (\ref{UHS})--(\ref{J}).
\subsection{Operator entanglement and entangling power \label{IIIc}}
We have seen that there is a relation between the entanglement of the operator ${\cal W}$ and that of the history states it generates,
$|\Psi\rangle=\frac{1}{\sqrt{N}}\sum_{t}U_t|S_0\rangle |t\rangle$.
We will here prove that the quadratic operator entanglement entropy $E_2(U,T)\equiv E_2({\cal W})$, Eq.\ (\ref{E2W}), is
proportional to the {\it entangling power} of ${\cal W}$, defined as the average quadratic entanglement it generates when applied (as in Eq.\ (\ref{CW})) to initial product states $|S_0\rangle|0\rangle$:
\begin{equation}
\langle E_2 (S,T)\rangle=\frac{d_S}{d_S+1}E_2({\cal W})\,, \label{rel}
\end{equation}
where
\begin{equation}
\langle E_2 (S,T)\rangle=
\int_{\cal H} 2(1- {\rm Tr}\,\rho_S^2) d S_0\,, \label{ME2}
\end{equation}
is the average over all initial states $|S_0\rangle$ of the quadratic entanglement entropy $E_2(S,T)$ of the history state:
The integral runs over the whole set of initial states $|S_0\rangle$ with the Haar measure $dS_0$ (the only normalized unitarily
invariant measure over the Hilbert space) and $\rho_S$ is the reduced state of $S$ in $|\Psi\rangle$. \\
{\it Proof.} Since $\rho_S=\frac{1}{N}\sum_t U_t|S_0\rangle \langle S_0|U_t^\dagger$, we obtain
\begin{equation}
\langle {\rm Tr}\,\rho_S^2\rangle=
\frac{1}{N^2}\sum_{t,t'} \int_{\cal H}\langle S_0| U_t^{\dagger}U_{t'}|S_0\rangle\langle S_0| U_{t'}^{\dagger}U_t|S_0\rangle d S_0\,.
\label{ME21}
\end{equation}
Here we can define $O=U_t^{\dagger}U_{t'}$ and $P=U_{t'}^{\dagger}U_t=O^\dagger$ to use the relation \cite{RBKSC.04}
\begin{equation}
\int_{{\cal H}} \langle S_0|O|S_0\rangle\langle S_0|P|S_0\rangle d S_0=\frac{{\rm Tr}[O] {\rm Tr}[P]+{\rm Tr}[OP]}{d_S(d_S+1)} \,.
\label{MEtr}
\end{equation}
Since in this case $OP=\mathbb{1}_S$, we obtain
\begin{equation}
\langle {\rm Tr}\,\rho_S^2\rangle= \frac{\frac{1}{N^2}\sum_{t,t'} |{\rm Tr}\,[U_t^{\dagger}U_{t'}]|^2 + d_S}{d_S(d_S+1)}. \label{ME22}
\end{equation}
On the other hand, $E_2 ({\cal W})=2(1-{\rm Tr}\,\rho_U^2)$,
with $\rho_U=\frac{1}{N}\sum_{t} |U_t\rangle\langle U_t|$. Thus,
\begin{equation}
{\rm Tr}\,\rho_U^2
=\frac{1}{N^2}\sum_{t,t'} |\langle U_t|U_{t'}\rangle|^2 =
\frac{1}{(d_S N)^2}\sum_{t,t'} |{\rm Tr} [U_t^{\dagger}U_{t'}]|^2\,. \label{TrUt}
\end{equation}
Replacing (\ref{TrUt}) in (\ref{ME22}) leads to
$\langle {\rm Tr}\,\rho_S^2\rangle=\frac{d_S{\rm Tr}\,(\rho_U^2)+1}{d_S+1}$ and hence to Eq.\ (\ref{rel}). \qed
Therefore, the average over all initial system states of the quadratic $S-T$ entanglement is just that of the
generating unitary operator times $\frac{d_S}{d_S+1}$.
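Relation (\ref{rel}) is easy to verify by direct Monte Carlo sampling. The sketch below draws Haar-random initial states (normalized complex Gaussian vectors) for a fixed set of random unitaries $U_t$ (obtained via QR decomposition; only the average over $|S_0\rangle$ needs to be Haar), and compares the sampled average with the prediction:

```python
import numpy as np

rng = np.random.default_rng(2)
dS, N = 4, 4

def random_unitary(d):
    """Unitary from the QR decomposition of a complex Gaussian matrix."""
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d)))
    return q

Us = [np.eye(dS)] + [random_unitary(dS) for _ in range(N - 1)]

# quadratic operator entanglement E_2(W), Eq. (E2W)
ovW = np.array([[abs(np.trace(Ua.conj().T @ Ub))/dS for Ub in Us] for Ua in Us])
E2W = 2*(1 - np.mean(ovW**2))

# Monte Carlo average of E_2(S,T) over Haar-random |S_0>
M, acc = 5000, 0.0
for _ in range(M):
    s0 = rng.normal(size=dS) + 1j*rng.normal(size=dS)
    s0 /= np.linalg.norm(s0)                    # Haar-distributed pure state
    states = [Ut @ s0 for Ut in Us]
    ovS = np.array([[abs(np.vdot(a, b))**2 for b in states] for a in states])
    acc += 2*(1 - np.mean(ovS))
mean_E2 = acc/M
print(mean_E2, dS/(dS + 1)*E2W)    # agree up to sampling error
```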
It is first verified that if the operators $U_t$ form a complete orthogonal set, $E_2({\cal W})=2(1-d_S^{-2})$
is maximum and Eq.\ (\ref{rel}) yields $\langle E_2(S,T)\rangle=2(1-d_S^{-1})$,
the maximum attainable value in a $d_S$-dimensional space, entailing that the entanglement is maximal irrespective of the
initial state (Sec.\ \ref{IIbb}).
In general, for a reduced set of $d$ orthogonal unitaries
$U_t$, with $N=d\leq d_S^2$, $E_2({\cal W})=2(1-d^{-1})$ and hence
\begin{equation}
\langle E_2(S,T)\rangle=2\frac{d_S(d-1)}{d(d_S+1)}\,.\label{rel2}
\end{equation}
In order to visualize this relation we define the effective average number of orthogonal states the system
visits as
\begin{equation}
\overline{d}_{S,T}=\frac{1}{1-\frac{1}{2}\langle E_2(S,T)\rangle}=\frac{d(d_S+1)}{d_S+d}\label{rel3}\,,
\end{equation}
such that $\langle E_2(S,T)\rangle=2(1-\frac{1}
{\overline{d}_{S,T}})$. If $d=d_S^2$, $\overline{d}_{S,T}=d_S$ becomes maximum, while if $d=d_S$,
which is, for instance, the case of a constant
Hamiltonian with spectrum $2\pi k/N$ ($d_S$ orthogonal operators $U_t=\exp[-iHt]$), Eq.\ (\ref{rel3})
leads to $\overline{d}_{S,T}=(d_S+1)/2$, i.e.,
just {\it half} the maximum value for large $d_S$. For any other spectrum and $N=d_S$, $\overline{d}_{S,T}\leq (d_S+1)/2$, i.e.,
\begin{equation}
\langle E_2(S,T)\rangle\leq 2\frac{d_S-1}{d_S+1}\label{rel4}\;\;\;\;\;(U_t=e^{-iHt},\;\;N=d_S)\,.
\end{equation}
Noticeably, it is sufficient to have $d\propto d_S$ ($\ll d_S^2$ for large $d_S$) to reach a high $\overline{d}_{S,T}$, i.e.,
$\overline{d}_{S,T}=\frac{m}{m+1}(d_S+1)$ if $d=m d_S$ (and $m\leq d_S$), as seen from (\ref{rel3}).
\subsection{Measuring operator overlaps}
The overlaps $\langle U_t|U_{t'}\rangle$,
which are the operator fidelities defined in \cite{Wa.09} and are involved in the quadratic entanglement
(\ref{E2W}) of the generating operator ${\cal W}$, can be experimentally obtained by measuring
$|T_t\rangle\langle T_{t'}|$ in the time part $T$ (Fig.\ \ref{f4}).
Remarkably, it is sufficient to start with the system in a {\it maximally mixed} state: If we trace out system $S'$
in the operator history state (\ref{UH}), we obtain
\begin{equation}\rho_{ST}=\frac{1}{N d_S}\sum_{t,t'}U_t U_{t'}^{\dagger}\otimes|t\rangle\langle t'|\label{rst}\,,\end{equation}
where we have written $|T_t\rangle$ as $|t\rangle$. Hence, tracing over $S$,
\begin{equation}\rho_{T}=
\frac{1}{N}\sum_{t,t'}\langle U_{t'}| U_{t}\rangle|t\rangle\langle t'|\,.\end{equation}
Thus, setting $U_{t,t'}=U_t U_{t'}^\dagger$,
\begin{equation}
\langle|t'\rangle \langle t|\rangle=\frac{1}{N}\langle U_{t'}|U_{t}\rangle =
\frac{1}{N d_S}{\rm Tr\,}[U_{t'}^\dagger U_{t}]=\frac{1}{N d_S}{\rm Tr\,}[U_{t,t'}]\,.\label{Uttp}
\end{equation}
Using again $\sigma^x_{t't}=
|t'\rangle\langle t|+|t\rangle\langle t'|$, $\sigma^y_{t't}=-i(|t'\rangle\langle t|-|t\rangle\langle t'|)$,
the trace of the evolution operator between any two times can then be obtained by measuring the averages of
$\sigma_{tt'}^x$ and $\sigma^y_{t't}$, which provide the real and imaginary parts:
\begin{equation}
\langle \sigma^x_{t't}\rangle=
\frac{2}{N}{\rm Re}[\langle U_t|U_{t'}\rangle],\;\;\langle \sigma^y_{t't}\rangle=
\frac{2}{N}{\rm Im}[\langle U_t|U_{t'}\rangle]\,.\label{Uov}
\end{equation}
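A direct simulation of this protocol (a sketch, with a randomly chosen Hermitian $H$ standing in for an actual Hamiltonian) confirms that the clock averages recover the trace of the evolution operator; only the magnitude is compared below, which is insensitive to phase conventions:

```python
import numpy as np

rng = np.random.default_rng(3)
dS, N = 3, 4
A = rng.normal(size=(dS, dS)) + 1j*rng.normal(size=(dS, dS))
H = (A + A.conj().T)/2                            # random Hamiltonian (illustrative)
w, V = np.linalg.eigh(H)
U = lambda t: (V*np.exp(-1j*w*t)) @ V.conj().T    # U_t = exp(-i H t)

# rho_T = (1/N) sum_{t,t'} <U_t'|U_t> |t><t'|, with <U_t'|U_t> = Tr[U_t'^dag U_t]/d_S
rhoT = np.array([[np.trace(U(tp).conj().T @ U(t))/(N*dS) for tp in range(N)]
                 for t in range(N)])

t, tp = 1, 3
sx = np.zeros((N, N), complex); sx[tp, t] = sx[t, tp] = 1
sy = np.zeros((N, N), complex); sy[tp, t] = -1j; sy[t, tp] = 1j
mx = np.trace(rhoT @ sx).real                     # <sigma^x_{t't}>
my = np.trace(rhoT @ sy).real                     # <sigma^y_{t't}>
# the magnitude (N/2) sqrt(mx^2 + my^2) recovers |Tr[U_t^dag U_t']|/d_S
print(N/2*np.hypot(mx, my), abs(np.trace(U(t).conj().T @ U(tp)))/dS)
```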
Of course, the state (\ref{rst}) can be generated just by preparing system $S$ in the maximally mixed state,
as the purifying system $S'$ of the original operator state is traced out. Note also that $U_0=\mathbbm{1}$,
so that the averages $\langle\sigma^\mu_{0t}\rangle$ determine ${\rm Tr}\,U_t$.
\begin{figure}[h!]
\centering
\includegraphics{figr4.ps}
\caption{(Color online) Schematic circuit representing the measurement of the operator overlaps (\ref{Uov}).} \label{f4}
\end{figure}
In the special case $N=2$,
$T$ is a single qubit and we recover the standard DQC1 scheme for measuring the trace of an operator \cite{KL.98}.
The ensuing operator history state is
$|{\cal W}\rangle=
\frac{1}{\sqrt{2}}(|U_0\rangle|T_0\rangle+|U_1\rangle|T_1\rangle)$, and its quadratic entanglement is
\begin{equation}
E_{2}({\cal W})=1-|\langle U_0|U_1\rangle|^2=1-|{\rm Tr}\,U|^2/d_S^2\,.
\end{equation}
Its square root is just the entangling power of the DQC1 circuit defined in \cite{Yi.13}.
\section{Conclusions\label{IV}}
Quantum mechanics has mostly considered time as an external classical parameter.
In this work we have determined some fundamental properties concerning the generation and entanglement of discrete history states within
a parallel-in-time discrete model of quantum evolution, based on a finite dimensional quantum clock \cite{BR.16}.
It was first shown that a general unitary evolution for the system states follows from a static eigenvalue equation,
which can be recast as a generalized discrete version of a Wheeler-DeWitt equation.
The ensuing system-time entanglement is a measure of the actual number of distinguishable states visited by the system
at distinguishable times and satisfies an entropic energy-time uncertainty relation. Its dependence on the initial system
state becomes attenuated for non-constant non-commuting Hamiltonians, and in particular we have presented a simple two-clock scheme which
generates a {\it maximally entangled} history state irrespective of the seed state. Thus, history states essentially independent
of initial conditions can be generated. On the other hand, for any fixed seed system state there is always a
special clock basis selection for which the evolution corresponds to a constant Hamiltonian.
We have also shown that the quadratic entropy provides a convenient measure of the system-time entanglement entropy. It can be
evaluated analytically and satisfies strict and physical upper and lower bounds, the former
connected with the energy spread of the initial state and the latter determined by the
evolution along the geodesic path between the initial and final states. Hence, such path, which provides the minimum evolution
time \cite{Bt.83}, minimizes as well the quadratic $S-T$ entanglement entropy.
Finally, by means of the channel-state duality we have shown that the unitary operator generating the history state corresponds to an operator history state, with its quadratic entanglement entropy representing its entangling power.
We have also provided a simple scheme which allows one to efficiently obtain the overlaps between system states and the traces of the
evolution operator between any two times through measurements on the clock.
The present formalism is interesting as a fundamental aspect of quantum theory, where there are possible
scenarios to explore further in connection with quantum gravity, such as the interaction between relational clocks
\cite{Bo.11,Ho.12} and the emergence of causality \cite{Br.11,CR.17}.
The incorporation of time in a discrete quantum clock system also enables the development of new models
of parallel-in-time simulation, taking advantage of the quantum features of superposition and entanglement.
This description of time could also be suitable for
applications in Floquet systems, and in particular Floquet time crystals \cite{El.16}.
\section{Appendix}
{\it Proof of the lower bound of Eq.\ (\ref{E24})}.
We first assume a sufficiently short final time $t_f$ such that $|\frac{(E_k-E_{k'})t_f}{2}|\leq \pi$ $\forall$ $k\neq k'$.
Note that the overlap $|\langle S_0|S_{t_f}\rangle|$, Eq.\ (\ref{ov}), is unaffected by any translation
$E_k\rightarrow E_k+2j\pi/t_f$ $\forall j\in\mathbb{Z}$, for a given $k$.
The angle $\phi\in[0,\pi/2]$ determined by this overlap can also be rewritten as
\begin{eqnarray}
\phi&=&\arcsin\sqrt{1-|\langle S_0|S_{t_f}\rangle|^2}\\
&=&\arcsin\sqrt{\textstyle 2\sum_{k\neq k'}|c_k c_{k'}|^2\sin^2\frac{(E_k-E_{k'})t_f}{2}}\,.\label{ov2}
\end{eqnarray}
It is now expected that the overlap between any pair of intermediate states will be smaller than those between states
$|S^{\rm min}_t\rangle=e^{-iH_{\rm min}t}|S_0\rangle$
along the geodesic, such that (Eq.\ (\ref{Stg}))
$|\langle S_t|S_{t'}\rangle|\leq |\langle S^{\rm min}_t|S^{\rm min}_{t'}\rangle|=|\cos[\phi\frac{t-t'}{t_f}]|$.
This inequality is verified since the function
\begin{equation}
F(s)=\arcsin\sqrt{\textstyle2\sum_{k\neq k'}|c_k c_{k'}|^2\sin^2\frac{(E_k-E_{k'})t_fs}{2}}-\phi s
\end{equation}
where $s=|\frac{t-t'}{t_f}|\leq 1$, is a concave function of $s$ for $s\in [0,1]$ and satisfies $F(0)=F(1)=0$,
so that $F(s)\geq 0$ $\forall$ $s\in [0,1]$.
Hence, for short times $t_f$ such that all relative phases have not yet completed one period
($|E_k-E_{k'}|t_f<2\pi$ $\forall$ $k,k'$), all intermediate overlaps of the actual evolution are smaller than
those along the geodesic, and hence the actual $E_2(S,T)$ entropy is larger than that along the geodesic path.
For larger times $t_f$, the inequality (\ref{E24}) also holds but for a different reason: If $|\frac{(E_k-E_{k'})t_f}{2}|>\pi$
for some pairs $k,k'$, $F(s)$ may not be concave and can also be negative for some values of $s$. However, the relevant term of
the exact expression for $E_2(S,T)$ satisfies
\begin{equation}
\frac{\sin^2\frac{\gamma N}{N-1}}{N^2\sin^2\frac{\gamma}{N-1}}\leq
\frac{\sin^2\frac{(\gamma-j\pi) N}{N-1}}{N^2\sin^2\frac{(\gamma-j\pi)}{N-1}}
\end{equation}
where $\gamma=\frac{(E_k-E_{k'})t_f}{2}$ and $j$ is such that $|\gamma-j\pi|\in [0,\pi/2]$. This translation
of the energy difference does not affect the
overlap (Eq.\ (\ref{ov2})), but shows that the actual entropy $E_2(S,T)$ for large times will not become lower
than the bound previously obtained. In this case
some relative phases may have completed one or more periods, but the final effect will be to decrease the average
overlap and hence to increase $E_2(S,T)$.
\acknowledgments
The authors acknowledge support from CIC (RR) and CONICET (AB) of Argentina, and CONICET Grant PIP 112201501-00732.
\section{Introduction}\label{sec:intro}
\IEEEPARstart{T}{he} transition to Web 2.0 transformed the business models of online marketing
from a global, ad-based approach to individual opinions and targeted campaigns \cite{morgan1994commitment, scoble2006naked, Abdullah:Incorporating, gegenhuber2017making}.
Web 2.0 not only took traditional marketing strategies to the extreme via viral marketing campaigns \cite{kurultay2012dynamics, wilde2013viral, rakic2014viral},
but it also gave rise to new techniques of brand building and audience targeting via influencer marketing \cite{brown2013influence, zietek2016influencer}.
In fact, the use of micro-influencers, trusted individuals within their communities,
has been seen as a more effective way to build a brand in terms of audience reception and return on investment
\cite{ha2015experiment, bijen2017ad, lisichkova2017impact}.
Instagram, which is a visual content sharing online social network (OSN), has become a focal point for influencer marketing.
With power users and micro-influencers publishing sponsored content, companies need to rate these influencers and determine their value \cite{richardson2013cigar, cornet2016instagram, chen2016rise}.
Most of today's scoring schemes rely on graph-based algorithms over a known network graph.
Such graphs are not always available, and building them for Instagram users requires a great deal of resources,
e.g., crawling time and computing costs.
A possible solution would be to infer the underlying network structure using the user activity logs,
as described by Barbieri et al.\cite{barbieri2013influence},
but even in the event a graph is constructed it would not necessarily be of much use
given that information decays exponentially along the graph even under optimal passive information propagation,
which is not the case.
The rest of the paper is organized as follows:
In Section~\ref{sec:background} we describe OSNs in greater detail as well as current influence measuring schemes.
We then present our notation and a formal description of the problem of measuring and ranking influence in Section~\ref{sec:prob}.
The dataset of Instagram users and their posts is described in Section~\ref{sec:Dataset},
followed by a discussion of the extracted and aggregated features of the testable data in Section~\ref{sec:Features}.
Following this, we present our testing methodology, baselines, regression models and experimental results in Section~\ref{sec:EXPERIMENTAL}.
Finally, we discuss our conclusion and possible future work in Section~\ref{sec:conclusions}.
\section{Background and Related Work}
\label{sec:background}
Online social media networks are often described as a directed graph
with entities such as users acting as nodes and relationships as the edges.
Such edges can be unidirectional or bi-directional, e.g.,
an Instagram "follower" and a Facebook "friendship", respectively.
These edges do not need to represent a long-lasting relationship;
they can signal a one-time engagement, e.g., a "like" or a "comment".
Following this, link prediction in OSNs became an active research field
focused on community detection, in the case of users as nodes \cite{Budalakoti:2012:BIF:2187836.2187932,Hsieh:2013:OOS:2488388.2488439},
or content suggestion otherwise \cite{Amin:2012:SRL:2365952.2366013, Reda:2012:MSR:2396761.2396847, Rodriguez:2012:MOO:2365952.2365961}.
In most OSNs, user-generated content is ``pushed", i.e., propagated via interaction.
When a user uploads a post, their followers can see the post and choose to pass it along,
creating a pyramid-formed cascade of information.
Thus, if user A follows user B who, in turn, follows user C, and user C posts some content user B chooses to share,
user A is passively influenced by user C.
These social micro-networks tend to grow around influential, active users \cite{ethan2009differing, lu2016vital}.
Instagram content, however, is ``pulled", i.e., information propagation requires activity along the pyramid,
such that, using our earlier example, for user C's post to reach user A, user A must look for content suggested by trusted users.
This situation raises the question of how to rank users in OSNs.
As OSNs are traditionally described as graphs, ranking has been done using various graph statistics,
from simple in/out degree to node closeness \cite{cha2010measuring, chen2012identifying},
as is the case with the work of Anger and Kittl in Twitter \cite{anger2011measuring}
and Agarwal et al. in the context of influential blogs \cite{agarwal2008identifying}.
Other techniques extend to existing link analysis algorithms -
the most popular one being PageRank \cite{Bianchini:2005:IP:1052934.1052938, Brin:1998:ALH:297805.297827}.
Weng et al. suggested twitterRank \cite{weng2010twitterrank} and Khrabrov et al. introduced starRank \cite{khrabrov2010discovering},
both extensions of PageRank working on Twitter's follower and engagements graphs, respectively.
On LinkedIn, a professional OSN,
Budalakoti and Bekkerman suggested a fair-bets model for ranking via transfer of authority \cite{budalakoti2012bimodal},
and on Instagram Egger suggested a PageRank extension for influencer ranking \cite{egger2016identifying}.
On Instagram, unlike Twitter, follower data is not publicly available.
While this information can still be collected via crawling, it is a long and expensive process.
A possible solution would be to infer the underlying network structure using the user activity logs,
as described by Barbieri et al.\cite{barbieri2013influence}.
Even if such a graph were constructed, it would not necessarily be of much use:
information decays exponentially along the graph even under optimal passive information propagation,
and Instagram's pull model falls short of that optimum.
\section{Problem Formulation}
\label{sec:prob}
The influence of a user in an OSN has been described either via simple, intuitive measures
or via non-intuitive measurable graph statistics with no real-world meaning \cite{weng2010twitterrank, goyal2010learning, egger2016identifying}.
One such measure is the user's expected post engagements.
We extend this definition, recognizing that being exposed to specific content often does not lead to active engagement.
We say the influence of an Instagram user (\textit{Instagrammer}) is the expected exposure their content would receive,
i.e., their expected number of views per post.
By the law of large numbers, we can estimate a user's influence using Definition~\ref{def:influence}:
\begin{definition}
\label{def:influence}
Let $U$ be the set of all Instagrammers, ${\mathcal{C}}$ the set of all content posted on Instagram,
$v_c$ the number of Instagrammers that saw post $c \in {\mathcal{C}}$, and ${\mathcal{C}}_u \subset {\mathcal{C}}$ the content posted by $u \in U$.
We say that the \textit{influence} of Instagrammer $u$ is:
\begin{displaymath}
Inf_u = \frac{\sum_{c \in {\mathcal{C}}_u} v_c}{\left| {\mathcal{C}}_u \right|}.
\end{displaymath}
\end{definition}
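Definition~\ref{def:influence} amounts to a simple per-user average; a minimal sketch in Python (the function name is ours):

```python
def influence(view_counts):
    """Estimate Inf_u as the average number of views per post.

    view_counts: list of view counts v_c for each post c in C_u.
    """
    if not view_counts:
        raise ValueError("user has no posts")
    return sum(view_counts) / len(view_counts)

# Example: a user whose three posts received 500, 700, and 900 views.
print(influence([500, 700, 900]))  # -> 700.0
```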
\section{Instagram Dataset}
\label{sec:Dataset}
For the purpose of this study, a set of Instagram data was prepared in April 2017,
including posts published during 2015-2016 but prior to September 2016.
We focused on a subset of Instagram posts where view counts were accessible.
Independent studies have shown that 50\% of engagements of an Instagram post happen within 72 minutes of publication and 95\% within the following week\footnote{\url{https://blog.takumi.com/the-half-life-of-instagram-posts-3db61fb1db75}}.
As the change of feed ranking in March 2016 did not cause statistically significant changes to activity,
and as all posts examined by us were over 6 months old,
we say that the data is stable, meaning that all posts have reached at least 95\% of their potential views and engagements.
The data was prepared as follows:
\begin{enumerate}
\item We gathered information on videos
\footnote{We used the Instagram API to collect user statistics.
We did not use the API to gather data for the posts themselves due to API limits.
Instead, we parsed each post's web page.}
published by a set of randomly selected Instagrammers with publicly accessible profiles. Denote the set of users as $U$.
Each of these Instagrammers must have published a minimum of 10 video posts before September 2016.
\item For each video $c \in {\mathcal{C}}$, we collected the following metrics:
\begin{itemize}
\item \textit{$likes_c$} - Number of likes awarded to post $c$.
\item \textit{$comments_c$} - Number of comments given to post $c$.
\item \textit{$v_c$} - Number of Instagrammers who watched part of the video.
\end{itemize}
\end{enumerate}
A total of $940,439$ posts by $115,044$ Instagrammers was collected\footnote{
This collection of anonymized public information is available at
\url{https://klear.com/sigir/instagram_data.zip}}.
\subsection{Instagram Statistics}
\begin{figure*}
\centering
\subfloat[][\centering Views Histogram\label{fig:distViews}]{
\includegraphics[width=0.24\textwidth]{count_views}
}
\subfloat[][\centering Views per Followers\label{fig:distFollowers}]{
\includegraphics[width=0.37\textwidth]{view2foll}
}
\subfloat[][\centering Views per Likes\label{fig:distLikes}]{
\includegraphics[width=0.37\textwidth]{view2eng}
}
\caption{Distributions per Instagrammer}
\end{figure*}
The distribution of log average views per Instagrammer is presented in Figure~\ref{fig:distViews},
which shows that this statistic follows a log-normal distribution with a mean of $748$ views.
Furthermore, as this distribution is so close to normal,
we conclude that our sample of Instagrammers is representative of real-world influence,
with micro-influencers populating the dense mean
and casual users and celebrities appearing at the distribution's extremes.
Post views per followers and per engagement appear in Figures~\ref{fig:distFollowers} and \ref{fig:distLikes}, respectively;
these reveal some underlying truths about Instagram.
It can be seen that, normally, the number of followers a user has outnumbers their views,
as we expect given the described flow of information.
However, we found that this is not the case for sponsored posts,
massively engaged content, or externally referenced content.
Another unlikely situation is that of posts having more engagements than views.
This stems either from bought engagements, often via automation tools and fake accounts,
or from an interesting phenomenon on Instagram known as
``Like You, Like Me", where content is engaged with simply to reciprocate prior engagements.
The issue diminishes as the number of engagements increases.
To avoid these odd behaviors, we performed univariate outlier removal,
ignoring the top and bottom posts for users with post statistics above 2 standard deviations.
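A minimal sketch of such univariate outlier removal, assuming a per-user rule of dropping posts whose view count lies more than 2 standard deviations from the user's mean (the paper's exact rule may differ):

```python
import statistics

def remove_outliers(view_counts, z=2.0):
    """Drop posts whose view count lies more than z population
    standard deviations from the user's mean view count."""
    if len(view_counts) < 3:
        return list(view_counts)
    mu = statistics.mean(view_counts)
    sd = statistics.pstdev(view_counts)
    if sd == 0:
        return list(view_counts)
    return [v for v in view_counts if abs(v - mu) <= z * sd]
```

Note that with a population standard deviation, a single extreme value among $n$ posts has a z-score of at most $(n-1)/\sqrt{n}$, so this rule only bites for users with enough posts (the dataset requires at least 10 per user).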
\subsection{Features Collected}
\label{sec:Features}
For the purpose of this work, we collected basic features directly from Instagram.
Expanding on the posts features mentioned above, we also collected user specific statistics.
We then considered each user as a data point with the following statistics:
\begin{itemize}
\item \textit{$likes$} - The average number of user post likes.
\item \textit{$comments$} - The average number of comments per user post.
\item \textit{$followers$} - The user's audience size.
\item \textit{$\sqrt{likes \cdot followers}$} - Geometric mean of likes and followers,
taken as neither statistic is an exact representation of influence.
\item \textit{$\frac{followers}{post}$} - Used to flag odd behavior,
as influencers at the same level should have similar ratios.
\item \textit{$\frac{comments}{likes}$} - Another odd-behavior indicator,
as bought engagements tend to affect likes more than comments.
\item \textit{$focus$} - The difference and ratio between the most and least engaged posts;
these features test the variance and stability of a user's engagement level.
\end{itemize}
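The feature vector above can be sketched as follows; the exact \textit{focus} formula is not spelled out in the text, so the difference/ratio of most and least liked posts used here is an assumption:

```python
import math

def user_features(likes, comments, followers, posts):
    """Build the per-user feature vector described above.

    likes, comments: per-post lists; followers, posts: scalars.
    The two `focus` entries (difference and ratio of the most and
    least liked post) are our reading of the paper's description.
    """
    avg_likes = sum(likes) / len(likes)
    avg_comments = sum(comments) / len(comments)
    return {
        "likes": avg_likes,
        "comments": avg_comments,
        "followers": followers,
        "geo_mean": math.sqrt(avg_likes * followers),
        "followers_per_post": followers / posts,
        "comments_per_like": avg_comments / avg_likes if avg_likes else 0.0,
        "focus_diff": max(likes) - min(likes),
        "focus_ratio": max(likes) / min(likes) if min(likes) else float("inf"),
    }
```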
\section{Regression Models}
We attempt to measure influence using well-known regression models
via the features described in Section~\ref{sec:Features}.
Furthermore, as some models are sensitive to redundant features,
we perform recursive feature elimination, generating a subset of informative features
for the problem at hand.
The models tested include:
\begin{itemize}
\item \textit{Ridge Regression (RR)} - An extension of Linear Regression,
RR attempts to overcome Linear Regression's problems with feature multi-collinearity
by adding l2-norm regularization of the coefficients to the minimization problem~\cite{hoerl1970ridge}.
\item \textit{Random Forest (RF)} - A non-linear algorithm that relies on
ensembles of decision trees, with randomness injected into the model in both feature
and instance selection~\cite{breiman2001random}.
\end{itemize}
We also introduce a meta-algorithm expansion of our own.
It is clear that not all influencers should be handled in the same manner:
celebrities' statistics differ vastly from those of micro-influencers.
We propose a Multiple-Regression model, where the data is separated into subsets,
in our case using the K-Means clustering algorithm on the followers statistic \cite{forgy1965cluster},
and a regression model is built for each subset.
Finally, it can be seen in Figures~\ref{fig:distFollowers} and \ref{fig:distLikes}
that the likes and followers statistics grow in an exponential manner.
To mitigate potential bias towards these features, both in clustering and regression,
we transform these statistics using a log scale, i.e., $f\left(x\right)=\frac{x}{\ln x}$.
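A minimal sketch of the Multiple-Regression idea, assuming a single follower feature, a tiny 1-D K-Means, and one-variable least squares per cluster (the paper's actual models are Ridge Regression and Random Forest over the full feature set):

```python
import math

def kmeans_1d(xs, k=2, iters=50):
    """Tiny 1-D Lloyd's algorithm; returns k cluster centers."""
    centers = sorted(xs)[::max(1, len(xs) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in xs:
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

def fit_line(pairs):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def multi_regression(data, k=2):
    """Split users into k clusters on the log-scaled follower count,
    then fit one regression per cluster; returns a predict() function.
    `data` is a list of (followers, views) pairs with followers > 1."""
    logf = [f / math.log(f) for f, _ in data]  # the paper's log scaling
    centers = kmeans_1d(logf, k)
    nearest = lambda x: min(range(len(centers)), key=lambda j: abs(x - centers[j]))
    models = {}
    for i in range(len(centers)):
        cluster = [(x, y) for (_, y), x in zip(data, logf) if nearest(x) == i]
        if len(cluster) >= 2:
            models[i] = fit_line(cluster)
    def predict(followers):
        x = followers / math.log(followers)
        a, b = models[min(models, key=lambda j: abs(x - centers[j]))]
        return a * x + b
    return predict
```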
\section{Experimental Results}
\label{sec:EXPERIMENTAL}
In this section, we present the methodology for evaluating different techniques
and introduce two simple yet commonly used baselines.
We test our models and present the results of our attempt to measure the influence of Instagram users.
\subsection{Methodology}
To compare between different models, we employ two commonly used statistics.
To test the model's ability to measure influence, we employ the coefficient of determination, denoted $R^2$.
Bounded above by 1, higher $R^2$ scores indicate lower error variance and hence a tighter model.
Comparing the order of the predicted influence with the real influence allows us to rank users.
To test the resulting ranking created we use Spearman's rank correlation coefficient, denoted $r_s$.
To avoid a model tuned specifically to the test data,
we use a five-fold cross-validation technique.
We randomly split $U$ into five equally sized, disjoint sets of Instagrammers
and use them as five train-test datasets:
each test set contains roughly 20\% of the original set of users $U$,
and the train set is made of the remaining 80\%.
The results are averaged on the five test cases.
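The two evaluation statistics can be sketched in a few lines of plain Python (a simplified version: a real evaluation would use a library implementation, and this Spearman sketch ignores rank ties):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 (bounded above by 1)."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def spearman(y_true, y_pred):
    """Spearman's r_s via the rank-difference formula,
    sketched here without tie handling (real data would
    need average ranks for ties)."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(y_true), ranks(y_pred)
    n = len(y_true)
    d2 = sum((a - b) ** 2 for a, b in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```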
\subsection{Baselines}
Two natural baselines for measuring influence are to use
the user's audience size (followers) or engagement level (number of likes).
We use both statistics as baselines, each with a Linear Regression model.
While outside our scope,
for completeness we also used the PageRank extension suggested by Egger \cite{egger2016identifying}.
For this, we crawled Instagram, creating a commentators graph around our test users.
\subsection{Comparison of Techniques}
The results of the $R^2$ and $r_s$ statistics for the regression models
and baselines are provided in Table~\ref{tab:regression}.
These results include both clustered and unclustered attempts,
as well as the results of the feature-reduced models.
It is clear that the followers statistic, while intuitive and often used in real-world scenarios,
is the weakest on any given metric.
This correlates with previous findings by Cha et al.~\cite{cha2010measuring}.
The engagement baseline is the best choice for a direct ranking approach:
it is nearly the best performer, certainly within error range,
and is much simpler to use than the full regression models.
Amongst our suggested models, Multi-Regression was not a useful approach,
while feature reduction still resulted in strong models with only half the features.
When comparing RR and RF, we clearly see that RR is the more accurate model.
This is due to a limitation of the RF model: while RR can return any possible value,
RF models can only return combinations of values seen in the training set,
and while this results in a better ranker, the predicted values more often overshoot.
Due to resource and time constraints, we ran the PageRank algorithm on a subset of 10\% of the users,
resulting in an $r_s$ score of 0.673.
These results, only better than the followers baseline, are to be expected given Instagram's flow of information,
as discussed in Section~\ref{sec:background}.
\begin{table} [bth]
\caption{$R^2$ and $r_s$ statistics for regression models}
\label{tab:regression}
\begin{tabular}{l|c|c|c|c}
& \multicolumn{2}{|c|}{Regression} & \multicolumn{2}{|c}{Multi-Regression} \\ \hline
& $R^2$ & $r_s$ & $R^2$ & $r_s$ \\ \hline
full Ridge Regression & $\bf{0.725}$ & $0.848$ & $\bf{0.727}$ & $0.821$ \\ \hline
full Random Forest & $0.626$ & $\bf{0.869}$ & $0.621$ & $\bf{0.861}$ \\ \hline
minimal Ridge Regression & $0.723$ & $0.818$ & $0.727$ & $0.818$ \\ \hline
minimal Random Forest & $0.616$ & $0.864$ & $0.611$ & $0.859$ \\ \hline
\hline
Followers Baseline & $0.211$ & $0.757$ & $0.204$ & $0.725$ \\ \hline
Likes Baseline & $0.666$ & $0.859$ & $0.654$ & $0.853$
\end{tabular}
\end{table}
\section{Conclusions and Future work}
\label{sec:conclusions}
This work focused on measuring influence and influencer ranking on Instagram, a content sharing OSN.
Our definition of influence (Def.~\ref{def:influence}) and the features extracted from public information
allowed us to use out-of-the-box regression models to create what is, to our knowledge,
the first influence ranking algorithm based on an intuitive score derived from network-oblivious statistics.
We have shown general truths regarding Instagram, such as that the commonly sought-after audience size is a poor metric for influence.
In our work, we did not consider the temporal nature of influence, i.e.,
the influence of a user is likely to change over time.
The rate of change may even depend on the influence itself,
as per the rich get richer phenomenon \cite{araujo2014not}.
Lastly, only simple user and posts statistics were used in this work.
We believe the use of more complex features would result in stronger models and a better ranking algorithm.
These features can be post specific, from the simple "day of the week" to complex "contains faces" \cite{silva2013picture, bakhshi2014faces},
user specific, e.g. the user's age or common content type \cite{jang2015generation, hu2014we},
or features relating to a user's audience, such as audience location or age \cite{ferrara2014online,manikonda2014analyzing}.
\bibliographystyle{IEEEtran}
\section{Introduction}
In the following, we work over the field ${\mathbb C}$ of complex numbers. However, by the Lefschetz principle or flat base change, all the vanishing results in this paper are valid over any field of characteristic zero.\\
Let $X$ be a smooth projective variety, $A$ an integral ample divisor on $X$, and $K_X$ the canonical divisor. In this setting, the classical Kodaira Vanishing Theorem (\cite{Kodaira}) states that
\begin{equation*}\label{KV}
{\rm H}^i(X,{\mathcal O}_X(K_X + A)) = 0 \text{ for }i > 0.
\end{equation*}
According to the Iitaka philosophy (cf. \cite{Matsuki}), we obtain its logarithmic version (\cite{Norimatsu}) by adding a simple normal crossings divisor $D = \sum D_i$, called the boundary, on $X$:
\begin{equation}\label{KVlog}
{\rm H}^i(X,{\mathcal O}_X(K_X + D+A)) = 0 \text{ for }i > 0.
\end{equation}
The celebrated Kawamata-Viehweg Vanishing Theorem (\cite{Kawamata, Viehweg}) further generalizes \eqref{KVlog} to the setting where $A$ is a ${\mathbb Q}$-divisor, by allowing the boundary divisor to have a fractional part $F = \sum f_jF_j \hskip.1in (0 < f_j < 1)$ such that $F + A$ is integral (i.e., $F = \lceil A\rceil - A$) as well as an integral part $B = \sum_kB_k$. It states that
\begin{equation*}\label{KV}
{\rm H}^i(X,{\mathcal O}_X(K_X + B + F + A)) = {\rm H}^i(X,{\mathcal O}_X(K_X + B + \lceil A\rceil)) = 0 \text{ for }i > 0,
\end{equation*}
where $B$ and $F$ share no common components, and $\text{Supp}(B \cup F)$ is a simple normal crossings divisor.\\
Given the preceding discussion, it is natural to ask for an analog of the previous picture in the setting of the Akizuki-Nakano Vanishing.
More precisely, recall that the Akizuki-Nakano Vanishing Theorem (\cite{Akizuki-Nakano}) states that
\begin{equation}\label{AKvanishing}
{\rm H}^i(X,\Omega_X^j( A)) = 0 \text{ for }i + j > \dim X,
\end{equation}
where $A$ is an integral ample divisor on $X$.
As before, the Iitaka philosophy suggests that one would obtain a logarithmic version of \eqref{AKvanishing} by considering a simple normal crossings boundary divisor $D = \sum D_i$, and replacing the usual sheaf of differential forms with the sheaf of logarithmic differential forms. This leads precisely to the Esnault-Viehweg Vanishing Theorem (\cite{EVB, EVI}):
\begin{equation*}\label{EVvanishing}
{\rm H}^i(X,\Omega_X^j(\log(D))( A)) = 0 \text{ for }i + j > \dim X.
\end{equation*}
At about the same time, Steenbrink \cite{Steenbrink} proved that
\begin{equation*}\label{Stvanishing}
{\rm H}^i(X,\Omega_X^j(\log(D))( A-D)) = 0 \text{ for }i + j > \dim X.
\end{equation*}
Sometime later the first author \cite{Arapura2} found a ``fractional'' version, which will be explained in section \ref{section:pvanishing}.
The statement of this fractional version does not seem to yield directly, as its special cases, all the known classical vanishing theorems mentioned above. However, a slight modification of the statement does, and this is the main result that we want to explain.
Suppose that $F$ is a fractional divisor with support contained in $D$, and let $G$ be an integral divisor such that $F \leq G \leq D$.
Then we will show in the text that
\begin{equation*}
{\rm H}^i(X,\Omega_X^j(\log(D))(F + A - G)) = {\rm H}^i(X,\Omega_X^j(\log(D))(\lceil A\rceil - G)) = 0
\text{ for } i + j > \dim X.
\end{equation*}
This statement should not come as a surprise to the experts. For
example, it readily follows, via some standard arguments, from theorem
6.2 of the beautiful book by Esnault-Viehweg (\cite{EVB}). We also recently
learned of a nice paper by C. Huang, K. Liu, X. Wan, and X. Yang
\cite{HLWY}, which proves a similar result by $L^2$-methods in the K\"ahler setting.
Other authors have also considered certain cases in the
presence of singularities (e.g. \cite{Kov2}).
We do not consider such cases in this article. We are not claiming much originality in the result. Our goal here is to present a Kawamata-Viehweg type formulation of the Kodaira-Akizuki-Nakano Vanishing Theorem in a way that is easily accessible to non-experts. \\
We present two proofs of the main result.
The first is elementary, and involves a reduction to Steenbrink's Vanishing Theorem by the Kawamata Covering Lemma. Furthermore, since the logarithmic differential forms have no ramification under the Kummer covers, this proof makes it clear that the process of taking the round up ``$\lceil {A}\rceil$'' does not stem from the ramification, but rather from the subtraction of some effective divisor ``$G$'' appearing in our formulation. We note that this fact is not readily visible in the classical proof of the Kawamata-Viehweg Vanishing Theorem (\cite{Kawamata,KMM,Matsuki}) when reducing the fractional case to the integral case via the covering technique. This is caused by the fact that the Kawamata-Viehweg Vanishing Theorem only deals with top degree forms, while the Akizuki-Nakano Vanishing Theorem deals also with lower degree differential forms. We emphasize again that all the essential ideas of reducing the fractional case to the integral case via covering already appear in \cite{EVB} as well as in \cite{Kawamata}. \\
The second proof uses a simplified version of an argument in \cite{Arapura2} to establish a fractional form of the Steenbrink Vanishing theorem. It uses the method of Deligne-Illusie \cite{DI} in positive characteristic. It is also worth noting that instead of the Kawamata Covering Lemma, it uses a lemma of Hara \cite{Hara} to handle the fractional parts. Once the fractional version of the Steenbrink Vanishing is proved, our main result follows immediately as an easy corollary via some ``round up'' tricks in \ref{helmke}.\\
We now briefly outline the contents. In the next section, we state our Kawamata-Viehweg type formulation of the logarithmic Akizuki-Nakano Vanishing Theorem, and explain how to obtain the classical vanishing theorems discussed above as special cases. We also discuss the failure of some naive versions of such a formulation. In the third section, we present the first proof of the main result with some remarks and an alternate argument. In the fourth section, we then present the second proof of the main result. Finally, in the last section, we discuss some potential applications and further generalizations.
\section{The main vanishing result.}
\subsection{Statement of the main vanishing result}
Let $X$ be a smooth projective variety, $D = \sum D_i$ a simple normal crossings divisor, $A$ an ample ${\mathbb Q}$-divisor, and
$F: = \lceil A\rceil - A$.
\begin{theorem}\label{mainvan} Let $X, D, A,$ and $F$ be as above. Suppose that $F = \lceil A\rceil - A \leq D$ (i.e. the support of $F$ is contained in $D$), and let $G$ be an integral divisor such that $F \leq G \leq D$.
Then we have:
\begin{equation*}
{\rm H}^i(X,\Omega_X^j(\log(D))(F + A - G)) = {\rm H}^i(X,\Omega_X^j(\log(D))(\lceil A\rceil - G)) = 0
\text{ for } i + j > \dim X.
\end{equation*}
\end{theorem}
The following relative version easily follows from the above absolute version (see, e.g., an argument in the proof of Theorem 1-2-3 \cite{KMM}).
\begin{corollary}\label{relmainvan} Let $f:X \rightarrow Y$ be a projective morphism from a nonsingular variety $X$ to a variety $Y$, $D = \sum D_i$ a simple normal crossings divisor on $X$, $A$ an $f$-ample ${\mathbb Q}$-divisor, and
$F: = \lceil A\rceil - A$. Suppose that $F \leq D$, and let $G$ be an integral divisor such that $F \leq G \leq D$. Then we have:
\begin{equation*}
\mathrm{R}^if_*(\Omega_X^j(\log(D))(F + A - G)) = \mathrm{R}^if_*(\Omega_X^j(\log(D))(\lceil A\rceil - G)) = 0
\text{ for } i + j > \dim X.
\end{equation*}
\end{corollary}
\subsection{Failure of a stronger version}
Consider the statement of the Kawamata-Viehweg Vanishing Theorem:
$${\rm H}^i(X,{\mathcal O}_X(K_X + B + F + A)) = {\rm H}^i(X,{\mathcal O}_X(K_X + B + \lceil A\rceil)) = 0 \text{ for }i > 0.$$
In this case, we observe that the only conditions on $B$ and $F$ are:
\begin{enumerate}
\item[(i)] $\text{Supp}(B \cup F)$ is a simple normal crossings divisor, and
\item[(ii)] $B$ and $F$ share no common components.
\end{enumerate}
\begin{remark}
Suppose $B$ and $F$ had a common component. If this common component is locally defined by $\{x = 0\}$, then $K_X + B + F$ has a local generator of the form $\dfrac{dx}{x^{1 + \delta}} \wedge \cdots$ with $\delta > 0$. However, this violates the standard philosophy that, in an appropriate logarithmic formulation, one should have no worse than simple poles (i.e., $dx/x^1 = d(\log x)$).
\end{remark}
Note that setting $j = \dim X$ and $B:= D -G$, Theorem \ref{mainvan} implies the Kawamata-Viehweg Vanishing Theorem recalled above. In this case, condition (ii) above follows from the condition $F \leq G \leq D$.\\
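This specialization uses the standard identification of top-degree logarithmic forms: writing $n = \dim X$,

```latex
\Omega_X^{n}(\log(D)) \cong {\mathcal O}_X(K_X + D),
\qquad\text{so}\qquad
\Omega_X^{n}(\log(D))(\lceil A\rceil - G)
\cong {\mathcal O}_X(K_X + (D - G) + \lceil A\rceil)
= {\mathcal O}_X(K_X + B + \lceil A\rceil),
```

with $B = D - G$, which is exactly the twist appearing in the Kawamata-Viehweg statement.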
On the other hand, still staying in line with the above philosophy, one could imagine the following stronger formulation of Theorem \ref{mainvan}. Let $X$ be as before, $D = \sum_i D_i$ a simple normal crossings divisor on $X$, $A$ an ample ${\mathbb Q}$-divisor, and $F:= \lceil A\rceil - A$ such that $\text{Supp}(D \cup F)$ is also a simple normal crossings divisor. Let $G$ be an integral divisor such that \text{$D \cap F \leq G \leq D$}, where $D \cap F := \sum_{D_i \subset \text{Supp}(F)}D_i$. Then one is led to consider the following stronger vanishing statement where we do not require $F$ to be contained in $G$ or $D$:
\begin{equation*}
{\rm H}^i(X,\Omega_X^j(\log(D))(F + A - G)) = {\rm H}^i(X,\Omega_X^j(\log(D))(\lceil A\rceil - G)) = 0
\text{ for } i + j > \dim X.
\end{equation*}
Note that, if $j = \dim X$, then this stronger formulation is actually equivalent to Theorem \ref{mainvan}. Moreover, in this case, they are also both equivalent to the Kawamata-Viehweg Vanishing Theorem.\\
On the other hand, if $j < \dim X$, then the stronger formulation above differs from Theorem \ref{mainvan}.
In fact, if $D=0$, then the stronger formulation would imply that
$${\rm H}^i(X,\Omega_X^j(\lceil A\rceil)) = 0 \text{ for }i + j > \dim X.$$
In view of the following statement of the Kawamata-Viehweg Vanishing (without any integral part $B$ of the boundary divisor)
$${\rm H}^i(X,\Omega_X^{\dim X}(\lceil A\rceil)) = {\rm H}^i(X,{\mathcal O}_X(K_X + \lceil A\rceil)) = 0 \text{ for }i > 0,$$
this could be interpreted as a Kawamata-Viehweg type formulation of the Akizuki-Nakano Vanishing Theorem.
However, this naive formulation, as well as the afore-mentioned stronger formulation, fails to hold!\\
In fact, one can also consider the following relative version of the stronger formulation with $D = 0$:
$${\rm R}^if_*\Omega_X^j(\lceil A\rceil) = 0 \text{ for }i + j > \dim X,$$
where $f:X \rightarrow Y$ is a projective morphism, $A$ is an $f$-ample ${\mathbb Q}$-divisor, and where $F = \lceil A\rceil - A$ is a simple normal crossings divisor on $X$.
However, the following example demonstrates that this statement fails.\\
\begin{example}\label{cexamp1}
Let $Y$ be a non-singular 3-fold, $f:X \rightarrow Y$ be the blow up of a point $P \in Y$, and $E := f^{-1}(P)$ be the exceptional divisor. Then $A = - \epsilon E$ is an $f$-ample ${\mathbb Q}$-divisor for some sufficiently small and positive rational number $0 < \epsilon << 1$. According to the stronger formulation, we should have
$${\rm R}^2f_*\Omega_X^2(\lceil A\rceil) = {\rm R}^2f_*\Omega_X^2 = 0.$$
On the other hand, we have an exact sequence of coherent ${\mathcal O}_X$-modules
$$0 \longrightarrow {\mathcal K}\ \longrightarrow \Omega_X^2 \overset{\phi}\longrightarrow \Omega_E^2 \rightarrow 0,$$
where $\phi$ is the restriction map and ${\mathcal K}$ is the kernel of the map $\phi$. The associated long exact sequence gives
$${\rm R}^2f_*\Omega_X^2 \longrightarrow {\rm R}^2f_*\Omega_E^2 \cong {\rm H}^2({\mathbb P}^2,\Omega_{{\mathbb P}^2}^2) \longrightarrow {\rm R}^3f_*{\mathcal K} = 0.$$
Here the last term vanishes as the fibers of $f$ have dimension at most $2$.
Since by the Serre duality
$${\rm H}^2({\mathbb P}^2,\Omega_{{\mathbb P}^2}^2) \cong {\rm H}^0({\mathbb P}^2, {\mathcal O}_{{\mathbb P}^2}) \cong {\mathbb C} \neq 0,$$
we conclude that
$${\rm R}^2f_*\Omega_X^2 \neq 0.$$
\end{example}
\subsection{Replacing the condition of $A$ being ample with nef and big}
The statement of the Kodaira Vanishing holds even if we replace an ample divisor $A$ with a nef and big divisor $L$:
$${\rm H}^i(X,{\mathcal O}_X(K_X + L)) = 0 \text{ for }i > 0,$$
where $X$ is a nonsingular projective variety and $L$ is an (integral) nef and big divisor on $X$. \\
The proof of this statement via the Kawamata-Viehweg Vanishing for a klt pair $(X,\Delta)$ (``klt'' is short for ``Kawamata log terminal'' singularities) goes as follows. Since $L$ is big, by the so-called Kodaira Lemma, we can write $L$ as a ${\mathbb Q}$-divisor
$$L = M + H,$$
where $M$ is an effective divisor and $H$ is an ample divisor. For $n \in {\mathbb N}$, we have another equation of ${\mathbb Q}$-divisors:
$$L = \dfrac{1}{n}\{L + (n - 1)L\} = \dfrac{1}{n}\{M + H + (n - 1)L\} = \dfrac{1}{n}M + \dfrac{1}{n}\{H + (n - 1)L\}.$$
Here $A := \dfrac{1}{n}\{H + (n - 1)L\}$ is an ample ${\mathbb Q}$-divisor, and \text{the pair $(X, \Delta = \dfrac{1}{n}M)$} is klt for $n$ sufficiently large. As an application of the Kawamata-Viehweg Vanishing Theorem to the klt pair $(X,\Delta)$ we obtain:
$${\rm H}^i(X,{\mathcal O}_X(K_X + L)) = {\rm H}^i(X,{\mathcal O}_X(K_X + \Delta + A)) = 0 \text{ for }i > 0.$$
Note that in the original setting with the SNC divisor $F = \lceil A\rceil - A$, the klt pair we consider is $(X,\Delta = F)$, and that we obtain
$$\begin{array}{rcl}
{\rm H}^i(X,{\mathcal O}_X(K_X + \lceil A\rceil)) &=& {\rm H}^i(X,{\mathcal O}_X(K_X + F + A)) \\
&=& {\rm H}^i(X,{\mathcal O}_X(K_X + \Delta + A)) = 0 \text{ for }i > 0.\\
\end{array}$$\\
However, it is well-known that the Akizuki-Nakano Vanishing fails if we replace, in its formulation, an ample divisor $A$ with a nef and big divisor $L$ (cf. 4.3.4 \cite{Lazarsfeld}, see also examples \ref{cexamp2.3.1}, \ref{cexamp2.3.2}, \ref{cexamp2.3.3} below.). In particular, there is an example where we have
$${\rm H}^i(X,\Omega_X^j(L)) \neq 0 \text{ for some }i + j > \dim X,$$
where $X$ is a nonsingular projective variety over ${\mathbb C}$ and $L$ is an integral nef and big divisor on $X$. One might consider this to be a ``pathology'' if one expects that the Akizuki-Nakano Vanishing for a klt pair $(X,\Delta)$ should hold, and hence that one should have for a nef and big divisor $L = \Delta + A$ as above
$$\begin{array}{rcl}
{\rm H}^i(X,\Omega_X^j(L)) &=& {\rm H}^i(X,\Omega_X^j(\Delta + A)) \\
&=& {\rm H}^i(X,\Omega_X^j(\lceil A\rceil)) = 0 \text{ for }i + j > \dim X.\\
\end{array}$$
However, this is exactly the statement of the stronger version of our main theorem discussed above, which we saw fails to hold. Therefore, in the above sense, we may say that the failure of the Akizuki-Nakano Vanishing for a nef and big divisor and the failure of the stronger version of its Kawamata-Viehweg type formulation share the same origin.\\
\begin{example}\label{cexamp2.3.1}
Let $f:X \rightarrow Y$ be the blow up of a point $P \in Y$ on a nonsingular 3-fold $Y$ as in Example \ref{cexamp1}. Let $L = f^*H$ be the pull-back of an ample divisor $H$ on $Y$. In this case, $L$ is nef and big. Then
$${\rm R}^2f_*(\Omega_X^2(L)) = {\rm R}^2f_*(\Omega_X^2(f^*H)) \cong {\rm R}^2f_*(\Omega_X^2) \otimes {\mathcal O}_Y(H) \neq 0,$$
since ${\rm R}^2f_*\Omega_X^2 \neq 0$. In particular, this shows the failure of the (relative) Akizuki-Nakano Vanishing, when we replace an ample divisor $A$ with a nef and big divisor $L$.
\end{example}
\begin{example}\label{cexamp2.3.2}
In the previous example, we can also take $L$ to be the structure sheaf ${\mathcal O}_X$, which is $f$-nef and $f$-big. Then we have
$${\rm R}^2f_*(\Omega_X^2(L)) = {\rm R}^2f_*(\Omega_X^2) \neq 0.$$
In particular, this shows the failure of the (relative) Akizuki-Nakano Vanishing, when we replace a relative ample divisor $A$ with a relative nef and big divisor $L$.
\end{example}
\begin{example}\label{cexamp2.3.3} Let $f:X \rightarrow Y$ be as in Example \ref{cexamp2.3.1}. Consider an ample divisor $H$ on $Y$ and a sufficiently small positive rational number $0 < \epsilon << 1$ such that $A = f^*H - \epsilon E$ is ample on $X$. Then looking at the Leray spectral sequence, one immediately sees that
$${\rm H}^2(X,\Omega_X^2(\lceil A\rceil)) = {\rm H}^2(X,\Omega_X^2(L)) \neq 0.$$
This provides counter-examples to the stronger version of our formulation and the Akizuki-Nakano Vanishing for a nef and big line bundle in the absolute setting.
\end{example}
\subsection{Special cases of the main vanishing result}\
We discuss various special cases of Theorem \ref{mainvan}.
\begin{case} $A$ is integral and $G = 0$.\\
This case yields the Esnault-Viehweg Vanishing Theorem
$${\rm H}^i(X,\Omega_X^j(\log(D))( A)) = 0 \text{ for }i + j > \dim X.$$
When $j = \dim X$, it yields the logarithmic version of \text{the Kodaira Vanishing Theorem.}
\end{case}
\begin{case} $j = \dim X$\\
By setting $B = D - G$, this case yields the Kawamata-Viehweg Vanishing Theorem:
$$
{\rm H}^i(X,\Omega_X^{\dim X}(D + \lceil A\rceil - G)) = {\rm H}^i(X,{\mathcal O}_X(K_X + B + F + A)) = 0 \text{ for }i > 0.$$
Here $B$ and $F$ share no common components because of the condition $F \leq G \leq D$.
\end{case}
\begin{case} $G = D$.\\
This case yields
$$\begin{array}{rcl}
{\rm H}^i(X,\Omega_X^j(\log(D))(F + A - D)) &=& {\rm H}^i(X,\Omega_X^j(\log(D))(\lceil A\rceil - D))\\
&=& 0 \hskip.4in \text{ for }i + j > \dim X.\\
\end{array}$$
This is the fractional version of the Steenbrink Vanishing Theorem, which appears in \cite{Arapura2}. We note that when $j = \dim X$ and $A$ is integral, we recover the Kodaira Vanishing Theorem, but not its logarithmic version (unless we use the round up trick \ref{helmke}).
\end{case}
\begin{case} $D = G = E$, where $E$ is the exceptional divisor of a projective birational morphism $f: X \rightarrow Y$.\\
Consider a projective birational map $f: X \rightarrow Y$ from a nonsingular variety $X$. Then Corollary \ref{relmainvan} implies
$${\rm R}^if_*\Omega^j_X(\log(E))(\lceil A\rceil - E) = {\rm R}^if_*\Omega^j_X(\log(E))(- E) = 0 \text{ for }i + j > \dim X$$
where $E = \sum E_i$ is the exceptional divisor (which is assumed to be a simple normal crossings divisor), $A = \sum - e_iE_i$ is an $f$-ample divisor with $\lceil A\rceil = 0$. When $j = \dim X$, the statement becomes
$${\rm R}^if_*\omega_X = 0 \text{ for }i > 0,$$
which is nothing but the Grauert-Riemenschneider Vanishing Theorem.
\end{case}
\section{Proof of Theorem \ref{mainvan} by Kawamata Covering Lemma}
In this section, we provide a proof of Theorem \ref{mainvan} using the Kawamata Covering Lemma.
\subsection{The case when $A$ is integral}
In this subsection, we prove Theorem \ref{mainvan} in the setting where $A$ is integral. We shall further split this case into subcases.
\begin{subcase} $G = D$.\\
In this case, the statement is nothing but the Steenbrink Vanishing Theorem. Note that, when $G = D = 0$, we obtain the Akizuki-Nakano Vanishing Theorem.
\end{subcase}
\begin{subcase} $G \leq D' = D - D_1 = D_2 + \cdots + D_l < D = D_1 + D_2 + \cdots + D_l$.\\
In this case, one proceeds via induction on the number of components in $D$ and the dimension of $X$.
Consider the residue sequence
$$
0 \rightarrow \Omega_X^j(\log(D'))(A - G)
\rightarrow \Omega_X^j(\log(D))(A - G)
\xrightarrow{\psi} \Omega_{D_1}^{j-1}(\log(D'|_{D_1}))((A - G)|_{D_1})
\rightarrow 0,\\
$$
where $\psi$ is the residue map.
\begin{comment}
Recall, on local generators, $\psi$ is defined as follows:
$$\left\{\begin{array}{l}
\dfrac{dx_1}{x_1} \wedge_{s \in S} \dfrac{dx_s}{x_s} \wedge_{t \in T} dx_t \overset{\psi}\longrightarrow \wedge_{s \in S} \dfrac{dx_s}{x_s} \wedge_{t \in T} dx_t \\
\text{where } \{x_s = 0\} \subset D' \hskip.05in \forall s \in S \\
\text{and where }\{x_t = 0\} \not\subset D \hskip.05in \forall t \in T \text{ with } 1 + \# S + \# T = j, \\
\wedge_{s \in S} \dfrac{dx_s}{x_s} \wedge_{t \in T} dx_t \hskip.3in \overset{\psi}\longrightarrow 0 \\
\text{where } \{x_s = 0\} \subset D' \hskip.05in \forall s \in S \\
\text{and where } \{x_t = 0\} \not\subset D \hskip.05in \forall t \in T \text{ with } \# S + \# T = j. \\
\end{array}\right.$$
\end{comment}
The corresponding long exact sequence in cohomology gives
\begin{align*}
\cdots \rightarrow {\rm H}^i(X,\Omega_X^j(\log(D'))(A - G))
\rightarrow {\rm H}^i(X, \Omega_X^j(\log(D))(A - G)) \\
\rightarrow {\rm H}^i(D_1,\Omega_{D_1}^{j-1}(\log(D'|_{D_1}))((A - G)|_{D_1})) \rightarrow \cdots.
\end{align*}
If $i + j > \dim X$, then the first term is $0$ by induction on the number of components in $D$ (since the number of components in $D'$ is one less than that of $D$). On the other hand, if $i + j > \dim X$, the last term is also $0$ by induction on the dimension of $X$ (since $\dim D_1 = \dim X - 1$ and since $i + (j-1) = i + j - 1 > \dim X - 1 = \dim D_1$). Therefore, we conclude that
$${\rm H}^i(X, \Omega_X^j(\log(D))(A - G)) = 0$$
if $i + j > \dim X$.
\begin{remark}
Note that using the residue sequence above with $G = 0$, one can derive the Esnault-Viehweg Vanishing from the Akizuki-Nakano Vanishing via induction on the number of components in $D$ and the dimension of $X$. However, it seems that the Steenbrink Vanishing cannot be derived from the Akizuki-Nakano Vanishing via a simple inductive argument using this residue sequence.
\end{remark}
\end{subcase}
\subsection{The case when $A$ is fractional}
We reduce the case where $A$ is fractional to the case where $A$ is integral, using the following Kawamata Covering Lemma.
\begin{lemma}[{\bf Kawamata Covering Lemma} (\cite{Kawamata,KMM,Matsuki})]\label{covlem}
There exists a finite morphism $\pi:Y \rightarrow X$ with the extension of the function fields ${\mathbb C}(Y)/{\mathbb C}(X)$ being Galois (and hence $\Gamma := \mathrm{Gal}({\mathbb C}(Y)/{\mathbb C}(X))$ acts on $Y$ over $X$) such that:\begin{enumerate}
\item[(i)] $Y$ is nonsingular projective.
\item[(ii)] $\pi^*A$ is integral.
\item[(iii)] $\pi$ is ramified only along $D \cup M$, which forms an SNC divisor for some auxiliary divisor $M$, with $D$ and $M$ sharing no common components.
\item[(iv)] There is a sufficiently divisible and large integer $m \in {\mathbb N}$ such that, for any irreducible component $B$ in $D \cup M$, we have
$$\pi^*B = mB_Y$$
where $B_Y = \pi^{-1}(B)_{\text{red}}$ and that, if $B \subset F$, we have
$$(\star)\ a_B + \dfrac{m - 1}{m} \geq \lceil a_B\rceil.$$
Here $a_B$ is the coefficient of $B$ in $A$.
\item[(v)] For any closed point $P \in X$ there exists a regular system of parameters \linebreak $(x_1, \ldots, x_l, x_{l+1}, \ldots, x_n)$ such that
$\bullet$ $\{\prod_{\alpha = 1}^l x_{\alpha} = 0\} = (D \cup M)_P$, and
$\bullet$ any closed point $Q \in \pi^{-1}(P)$ has a regular system of parameters of the form $(y_1 = x_1^{1/m}, \ldots, y_l = x_l^{1/m}, x_{l+1}, \ldots, x_n)$ \text{(for the same integer ``$m$'' mentioned in condition (iv)).}
\end{enumerate}
\end{lemma}
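For a numerical illustration of condition $(\star)$ (our own toy example, not part of the lemma): suppose $a_B = \frac{2}{3}$, so that $\lceil a_B\rceil = 1$. Then $(\star)$ reads
$$\frac{2}{3} + \frac{m-1}{m} \geq 1 \Longleftrightarrow \frac{m-1}{m} \geq \frac{1}{3},$$
which holds for every $m \geq 2$; combined with the integrality of $\pi^*A$ in condition (ii), which forces $3 \mid m$ for this coefficient, any sufficiently divisible $m$ works. In general, for a fractional coefficient $a_B$ one has $\lceil a_B\rceil - a_B < 1$, so $(\star)$ holds as soon as $\frac{m-1}{m} \geq \lceil a_B\rceil - a_B$, i.e., for all sufficiently large $m$.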
\begin{lemma}\label{Ginv} With notation as in Lemma \ref{covlem}, we have
$$\left[\pi_*\{\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y)\}\right]^{\Gamma} = \Omega_X^j(\mathrm{log}(D))(\lceil A\rceil - G).$$
\end{lemma}
\begin{proof}
First note that
$$\begin{array}{rcl}
\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y) &\subset& \Omega_Y^j(\mathrm{log}((D \cup M)_Y)) \otimes_{{\mathcal O}_Y}{\mathbb C}(Y) \\
&=& \pi^*\{\Omega_X^j(\mathrm{log}(D \cup M))\} \otimes_{{\mathcal O}_Y}{\mathbb C}(Y) \\
\end{array}$$
and hence that
$$\pi_*\{\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y)\} \subset \Omega_X^j(\mathrm{log}(D \cup M)) \otimes_{{\mathcal O}_X} {\mathbb C}(Y).$$
The $\Gamma$-action on the left-hand side is induced from the $\Gamma$-action on the right-hand side, where $\Gamma$ acts trivially on the first factor $\Omega_X^j(\mathrm{log}(D \cup M))$ and $\Gamma$ acts on the second factor ${\mathbb C}(Y)$ as the Galois group $\mathrm{Gal}({\mathbb C}(Y)/{\mathbb C}(X))$. Therefore, we conclude
$$\left[\pi_*\{\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y)\}\right]^{\Gamma} \subset \Omega_X^j(\mathrm{log}(D \cup M)) \otimes_{{\mathcal O}_X} {\mathbb C}(X).$$
Our task is to identify the left-hand side with another subsheaf of the right-hand side
$$\Omega_X^j(\mathrm{log}(D))(\lceil A\rceil - G) \subset \Omega_X^j(\mathrm{log}(D \cup M)) \otimes_{{\mathcal O}_X} {\mathbb C}(X).$$
For a closed point $P \in X$, we choose a regular system of parameters
$$(\{x_s\}_{s \in S},\{x_t\}_{t \in T}, \{x_v\}_{v \in V}, \{x_w\}_{w \in W}, \{x_z\}_{z \in Z})$$
as in condition (v) of the Kawamata Covering Lemma and an affine open neighborhood $P \in U$ such that
$$\left\{\begin{array}{lcl}
\{x_s = 0\}_{s \in S} &=& (D \setminus G)_P = (D \setminus G)|_U \\
\{x_t = 0\}_{t \in T} &=& (G \setminus F)_P = (G \setminus F)|_U \\
\{x_v = 0\}_{v \in V} &=& F_P = F|_U \\
\{x_w = 0\}_{w \in W} &=& M_P = M|_U \\
\{x_z = 0\}_{z \in Z} &&\text{shares no components with }(D \cup M)_P \text{ or }(D \cup M)|_U,\\
\end{array}\right.$$
and that
$$\left\{\bigwedge_{s \in S_{\alpha} \subset S}\dfrac{dx_s}{x_s} \bigwedge_{t \in T_{\beta} \subset T}\dfrac{dx_t}{x_t} \bigwedge_{v \in V_{\gamma} \subset V}\dfrac{dx_v}{x_v} \bigwedge_{w \in W_{\delta} \subset W}\dfrac{dx_w}{x_w} \bigwedge_{z \in Z_{\epsilon} \subset Z}dx_z\right\},$$
where the collection of the subsets $S_{\alpha} \subset S, T_{\beta} \subset T, V_{\gamma} \subset V, W_{\delta} \subset W, Z_{\epsilon} \subset Z$ is the one of all those with $\# S_{\alpha} + \# T_{\beta} + \# V_{\gamma} + \# W_{\delta} + \# Z_{\epsilon} = j$, forms a basis of $\Omega_X^j(\log(D \cup M))$ as a free ${\mathcal O}_X$-module over $U$, while
$$\left\{\bigwedge_{s \in S_{\alpha} \subset S}\dfrac{dx_s}{x_s} \bigwedge_{t \in T_{\beta} \subset T}\dfrac{dx_t}{x_t} \bigwedge_{v \in V_{\gamma} \subset V}\dfrac{dx_v}{x_v} \bigwedge_{w \in W_{\delta} \subset W}dx_w \bigwedge_{z \in Z_{\epsilon} \subset Z}dx_z\right\}$$
forms a basis of $\Omega_X^j(\log(D))$ as a free ${\mathcal O}_X$-module over $U$.
Since $\pi$ ramifies only over $D \cup M$, we conclude that
$$\begin{array}{l}
\left\{\pi^*\left[\bigwedge_{s \in S_{\alpha} \subset S}\dfrac{dx_s}{x_s} \bigwedge_{t \in T_{\beta} \subset T}\dfrac{dx_t}{x_t} \bigwedge_{v \in V_{\gamma} \subset V}\dfrac{dx_v}{x_v} \bigwedge_{w \in W_{\delta} \subset W}\dfrac{dx_w}{x_w} \bigwedge_{z \in Z_{\epsilon} \subset Z}dx_z\right] \right\} \\
= \left\{\bigwedge_{s \in S_{\alpha} \subset S}\dfrac{dy_s}{y_s} \bigwedge_{t \in T_{\beta} \subset T}\dfrac{dy_t}{y_t} \bigwedge_{v \in V_{\gamma} \subset V}\dfrac{dy_v}{y_v} \bigwedge_{w \in W_{\delta} \subset W}\dfrac{dy_w}{y_w} \bigwedge_{z \in Z_{\epsilon} \subset Z}dx_z \right\} \\
\end{array}$$
forms a basis of $\Omega_Y^j(\log((D\cup M)_Y))$ as a free ${\mathcal O}_Y$-module over $\pi^{-1}(U)$, while
$$\left\{\bigwedge_{s \in S_{\alpha} \subset S}\dfrac{dy_s}{y_s} \bigwedge_{t \in T_{\beta} \subset T}\dfrac{dy_t}{y_t} \bigwedge_{v \in V_{\gamma} \subset V}\dfrac{dy_v}{y_v} \bigwedge_{w \in W_{\delta} \subset W}dy_w \bigwedge_{z \in Z_{\epsilon} \subset Z}dx_z \right\}$$
forms a basis of $\Omega_Y^j(\log((D)_Y))$ as a free ${\mathcal O}_Y$-module over $\pi^{-1}(U)$.
Take a section
$$\begin{array}{rcl}
f &\in& \Gamma(\pi^{-1}(U), \Omega_Y^j(\mathrm{log}((D \cup M)_Y)) \otimes_{{\mathcal O}_Y}{\mathbb C}(Y)) \\
&=& \Gamma(\pi^{-1}(U), \pi^*\{\Omega_X^j(\mathrm{log}(D \cup M))\} \otimes_{{\mathcal O}_Y}{\mathbb C}(Y)) \\
\end{array}$$
and write
$$f = \sum_{\alpha, \beta, \gamma, \delta, \epsilon} \left(\pi^*\left[\bigwedge_{s \in S_{\alpha} \subset S}\dfrac{dx_s}{x_s} \bigwedge_{t \in T_{\beta} \subset T}\dfrac{dx_t}{x_t} \bigwedge_{v \in V_{\gamma} \subset V}\dfrac{dx_v}{x_v} \bigwedge_{w \in W_{\delta} \subset W} \dfrac{dx_w}{x_w} \bigwedge_{z \in Z_{\epsilon} \subset Z}dx_z\right] \otimes f_{\alpha,\beta,\gamma,\delta,\epsilon}\right)$$
with $f_{\alpha,\beta,\gamma,\delta,\epsilon} \in {\mathbb C}(Y)$.
\vskip.03in
Observe
$$\begin{array}{rcl}
f &\in& \Gamma(U, \pi_*\{\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y)\}) \\
&=& \Gamma(\pi^{-1}(U), \Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y)) \\
&\Longleftrightarrow& \text{div}\left(f_{\alpha,\beta,\gamma,\delta,\epsilon}/\prod_{w \in W_{\delta} \subset W}y_w\right) + \pi^*A - G_Y|_{\pi^{-1}(U)} \geq 0, \\
&& f_{\alpha,\beta,\gamma,\delta,\epsilon} \in {\mathbb C}(Y), \forall \alpha, \beta, \gamma, \delta, \epsilon \\
&\Longleftrightarrow& \text{div}\left(f_{\alpha,\beta,\gamma,\delta,\epsilon} /\prod_{w \in W_{\delta} \subset W}\pi^*(x_w)^{1/m}\right) + \pi^*A - G_Y|_{\pi^{-1}(U)} \geq 0, \\
&& f_{\alpha,\beta,\gamma,\delta,\epsilon} \in {\mathbb C}(Y), \forall \alpha, \beta, \gamma, \delta, \epsilon.\\
\end{array}$$
Therefore, we conclude
$$\begin{array}{rcl}
f &\in& \Gamma(U, \left[\pi_*\{\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y)\}\right]^{\Gamma}) \\
&\Longleftrightarrow& \text{div}\left(f_{\alpha,\beta,\gamma,\delta,\epsilon}/\prod_{w \in W_{\delta} \subset W}\pi^*(x_w)^{1/m}\right) + \pi^*A - G_Y|_{\pi^{-1}(U)} \geq 0, \\
&& f_{\alpha,\beta,\gamma,\delta,\epsilon} \in {\mathbb C}(Y)^\Gamma = {\mathbb C}(X), \forall \alpha, \beta, \gamma, \delta, \epsilon\\
&\Longleftrightarrow& \text{div}\left(f_{\alpha,\beta,\gamma,\delta,\epsilon}/\prod_{w \in W_{\delta} \subset W}\pi^*(x_w)^{1/m}\right) + \pi^*A - \pi^*(G) + \pi^*\left(\dfrac{m - 1}{m}G\right)|_{\pi^{-1}(U)} \geq 0, \\
&& f_{\alpha,\beta,\gamma,\delta,\epsilon} \in {\mathbb C}(Y)^\Gamma = {\mathbb C}(X), \forall \alpha, \beta, \gamma, \delta, \epsilon\\
&\Longleftrightarrow& \text{div}\left(f_{\alpha,\beta,\gamma,\delta,\epsilon} / \prod_{w \in W_{\delta} \subset W}x_w^{1/m}\right) + A - G + \dfrac{m - 1}{m}G|_U \geq 0, \\
&& f_{\alpha,\beta,\gamma,\delta,\epsilon} \in {\mathbb C}(X), \forall \alpha, \beta, \gamma, \delta, \epsilon\\
&\Longleftrightarrow& \text{div}\left(f_{\alpha,\beta,\gamma,\delta,\epsilon} / \prod_{w \in W_{\delta} \subset W}x_w\right) + \lceil A\rceil - G|_U \geq 0, \\
&& f_{\alpha,\beta,\gamma,\delta,\epsilon} \in {\mathbb C}(X), \forall \alpha, \beta, \gamma, \delta, \epsilon\\
&\Longleftrightarrow& \\
f &\in& \Gamma(U, \Omega_X^j(\mathrm{log}(D))(\lceil A\rceil - G)). \\
\end{array}$$
We may add the following explanation for the second-to-last equivalence: let $B$ vary among all the irreducible components of $(D \cup M)|_U$. Then the third-to-last condition
$$\text{div}\left(f_{\alpha,\beta,\gamma,\delta,\epsilon} / \prod_{w \in W_{\delta} \subset W}x_w^{1/m}\right) + A - G + \dfrac{m - 1}{m}G|_U \geq 0$$
reads for the component $B$:
$$\left\{\begin{array}{cll}
\bullet &v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) + a_B - 0 + \dfrac{m-1}{m} \cdot 0 \geq 0 & \\
\Longleftrightarrow & v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) + a_B - 0 = v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) + \lceil a_B\rceil - 0 \geq 0 &\text{ if } B \subset D \setminus G \\
a_B \in {\mathbb Z}&& \\
\bullet &v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) + a_B - 1 + \dfrac{m-1}{m} \cdot 1 \geq 0 & \\
\Longleftrightarrow & v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) + a_B - 1 = v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) + \lceil a_B\rceil - 1 \geq 0 &\text{ if } B \subset G \setminus F \\
a_B \in {\mathbb Z}&& \\
\bullet &v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) + a_B - 1 + \dfrac{m-1}{m} \cdot 1 & \\
=& v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) + a_B + \dfrac{m-1}{m} \cdot 1 - 1 \geq 0 & \\
\Longleftrightarrow & v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) + \lceil a_B\rceil - 1 \geq 0 & \text{ if } B \subset F \\
\text{by condition }(\star)&& \\
\bullet &v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) - \dfrac{1}{m} + a_B - 0 + \dfrac{m-1}{m} \cdot 0 \geq 0 & \\
\Longleftrightarrow & v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) - 1 + a_B - 0 = v_B(f_{\alpha,\beta,\gamma,\delta,\epsilon}) - 1 + \lceil a_B\rceil - 0 \geq 0 & \text{ if } B \subset M \\
a_B \in {\mathbb Z}&& \\
\end{array}\right.$$
where $A = \sum a_BB$.
\end{proof}
\vskip.1in
Now Theorem \ref{mainvan} in the fractional case is an immediate consequence of Lemma \ref{Ginv}, as follows:
$$\begin{array}{rcl}
{\rm H}^i(X,\Omega_X^j(\mathrm{log}(D))(\lceil A\rceil - G)) &=& {\rm H}^i\left(X,\left[\pi_*\{\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y)\}\right]^\Gamma\right) \\
&=& {\rm H}^i(X,\pi_*\{\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y)\})^\Gamma \\
&=& {\rm H}^i(Y,\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y))^\Gamma \\
&=& 0, \\
\end{array}$$
since we have
$$ {\rm H}^i(Y,\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y)) = 0 \text{ for }i + j > \dim X = \dim Y,$$
using the vanishing statement for the case where $\pi^*A$ is integral.
This completes the proof of the main theorem in the case where $A$ is fractional.
\subsection{Some remarks on the first proof}
\subsubsection{Basic Idea of the proof}
If we pretend that $\pi$ is ramified only over $D$, then the idea of the proof of \ref{Ginv} becomes more transparent. Under this pretense, since the logarithmic differential forms do not ramify and $G_Y = \dfrac{1}{m}\pi^*G = \pi^*G - \dfrac{m-1}{m}\pi^*G$, we have
$$\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y) = \pi^*\{\Omega_X^j(\mathrm{log}(D))\}\left(\pi^*A + \dfrac{m-1}{m}\pi^*G - \pi^*G \right).$$
By taking $\pi_*$ and the $\Gamma$-invariant part, we conclude
$$\begin{array}{rcl}
\left[\pi_*\{\Omega_Y^j(\mathrm{log}(D_Y))(\pi^*A - G_Y)\}\right]^\Gamma &=& \left[\pi_*\left\{\pi^*\{\Omega_X^j(\mathrm{log}(D))\}\left(\pi^*A + \dfrac{m-1}{m}\pi^*G - \pi^*G\right)\right\}\right]^\Gamma \\
&=& \Omega_X^j(\mathrm{log}(D))\left(A + \dfrac{m-1}{m}G - G\right) \\
&=& \Omega_X^j(\mathrm{log}(D))\left(\lceil A\rceil - G\right).\\
\end{array}$$
Here the last equality, replacing $A + \dfrac{m-1}{m}G$ with $\lceil A\rceil$, results from the fact that only the fractional part of $A$ is affected and, hence, that the coefficients of the components reach their round-ups when we add $\dfrac{m-1}{m}G$.\\
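As a numerical check of this round-up (our own illustration, for a component $B \subset G$ where $A$ has a fractional coefficient): take $a_B = \frac{2}{3}$ and $m = 6$. The coefficient of $B$ in $A + \frac{m-1}{m}G$ is then
$$\frac{2}{3} + \frac{5}{6} = \frac{3}{2}, \qquad \left\lfloor \frac{3}{2}\right\rfloor = 1 = \left\lceil \frac{2}{3}\right\rceil,$$
so the implicit rounding down in the twisting divisor recovers exactly $\lceil a_B\rceil$: the added $\frac{m-1}{m}$ pushes a fractional coefficient to or past its round-up (by condition $(\star)$) but never as far as the next integer, since $a_B + \frac{m-1}{m} < a_B + 1$.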
In the actual proof without the pretension, we have to analyze in more detail how a basis of the free ${\mathcal O}_X$-module $\Omega_X^j(\mathrm{log}(D))$ ramifies over $M$, when pulled back by $\pi$, compared to a basis of the free ${\mathcal O}_Y$-module $\Omega_Y^j(\mathrm{log}(D_Y))$ (and conclude that the ramification does not affect the conclusion at all). The basic idea, however, is the same.
\subsubsection{Use of the logarithmic forms and subtraction of the divisor $G$}\label{uselog}
In contrast to the logarithmic differential forms, if we use the usual differential forms, the basis of the free ${\mathcal O}_X$-module $\Omega_X^j$
$$\left\{\bigwedge_{\{x_{\alpha} = 0\} \subset (D \cup M)_P}dx_{\alpha} \bigwedge_{\{x_{\beta} = 0\} \not\subset (D \cup M)_P}dx_{\beta}\right\}$$
has varying ramification factors, and gives rise to the following corresponding basis of the free ${\mathcal O}_Y$-module $\Omega_Y^j$
$$\left\{\prod_{\{x_{\alpha} = 0\} \subset (D \cup M)_P}\dfrac{1}{\pi^*x_{\alpha}^{(m-1)/m}} \cdot \pi^*\left[\bigwedge_{\{x_{\alpha} = 0\} \subset (D \cup M)_P}dx_{\alpha} \bigwedge_{\{x_{\beta} = 0\} \not\subset (D \cup M)_P}dx_{\beta}\right]\right\}.$$
The varying ramifications cannot be expressed by the twist of a single (${\mathbb Q}$-)divisor. This is why one is led to the use of logarithmic differential forms.\\
On the other hand, if we use the logarithmic differential forms, since there is no ramification, there is no ``push'' from the ramification to raise $A$ to $\lceil A\rceil$. This is where the subtraction of the divisor $G$ comes in. The difference between $- G_Y = - \pi^*G + \dfrac{m-1}{m}\pi^*G$ and $- \pi^*G$, which is $\dfrac{m-1}{m}\pi^*G$, gives the push \text{to raise $A$ to $\lceil A\rceil$.}
\subsubsection{Comparison with the classical argument}
In the classical argument for the proof of the Kawamata-Viehweg Vanishing, where we only have to deal with the top form, the free ${\mathcal O}_X$-module $\Omega_X^{n = \dim X}$ is of rank one, having one generator
$$\bigwedge_{\alpha = 1}^l dx_{\alpha} \bigwedge_{\beta = l + 1}^{n = \dim X}dx_{\beta}.$$
Therefore, it has a unique ramification factor giving rise to the following unique basis of the free ${\mathcal O}_Y$-module $\Omega_Y^n$:
$$\prod_{\alpha = 1}^l\dfrac{1}{\pi^*x_{\alpha}^{(m-1)/m}} \cdot \pi^*\left[\bigwedge_{\alpha = 1}^l dx_{\alpha} \bigwedge_{\beta = l + 1}^{n = \dim X}dx_{\beta}\right].$$
Moreover, the reciprocal $\prod_{\alpha = 1}^lx_{\alpha}^{(m-1)/m}$ of the ramification factor gives the ``push'' to raise $A$ to $\lceil A\rceil$. However, the classical argument of looking at the usual differential forms would face trouble in the case of lower-degree forms, as the basis has varying ramification factors (as discussed in \ref{uselog}).\\
Our new argument using the logarithmic forms and subtraction of the divisor $G$ applies to the lower differential forms in the setting dealing with the Kawamata-Viehweg type formulation of the (log) Akizuki-Nakano Vanishing as well as to the top differential form in the setting dealing with the classical Kawamata-Viehweg Vanishing. This also gives a slightly different view point towards the classical argument for the Kawamata-Viehweg Vanishing Theorem.
\subsubsection{An Alternative line of argument}\label{helmke} As suggested by Prof. Helmke, one could follow the following line of argument to prove our main vanishing result:
\begin{enumerate}
\item[(1)] Prove the case $G = D$ of our formulation, i.e., the vanishing $${\rm H}^i(X,\Omega_X^j(\log(D))(\lceil A\rceil - D)) = 0 \text{ for } i + j > \dim X,$$ reducing its verification to the Steenbrink Vanishing Theorem via the Kawamata Covering Lemma as in our argument above.
\item[(2)]In order to prove the general case $F \leq G \leq D$, we set $A' = A + \epsilon(D - G)$ for a sufficiently small positive number $0 < \epsilon << 1$ so that $A'$ is again ample with $F' = \lceil A'\rceil - A' \leq D$. Now, using (1), we conclude
$$0 = {\rm H}^i(X,\Omega_X^j(\mathrm{log}(D))(\lceil A'\rceil - D)) = {\rm H}^i(X,\Omega_X^j(\mathrm{log}(D))(\lceil A\rceil - G))$$
as required.
\end{enumerate}
This line of argument avoids the use of the residue sequence. It also makes it clearer that what is essential is
$\bullet$ the Steenbrink Vanishing, and
$\bullet$ its fractional version as in \cite{Arapura2}.
\section{Proof of Theorem \ref{mainvan} by reduction mod $p$ via the result of Deligne-Illusie \cite{DI}}\label{section:pvanishing}
Here we explain how to obtain the fractional version of the Steenbrink Vanishing, mentioned above, by reduction mod $p$. We prove the following theorem in characteristic $p$, via the results of Deligne-Illusie and Raynaud \cite{DI} and a lemma of Hara \cite{Hara}. The following is a special case of \cite[Theorem 8.2]{Arapura2}. \\
\begin{theorem}\label{pvanishing} Let $X$ be a nonsingular projective variety over an algebraically closed field $k$ of characteristic $p>\dim X$. Let
$D = \sum D_i$ be a simple normal crossings divisor such that the pair $(X,D)$ is liftable modulo $p^2$. If $L$ is a line bundle such that
$L(-\Delta)$ is ample
for some ${\mathbb Q}$-divisor $\Delta$ supported on $D$ with coefficients in $[0,1)$,
then
\begin{equation*}
{\rm H}^i(X,\Omega_X^j(\log D)(-D) \otimes L)=0
\end{equation*}
for $i+j>\dim X$.
\end{theorem}
Now, by a standard ``spreading out'' argument, we obtain the following result in characteristic $0$, which implies the main theorem (Theorem \ref{mainvan}), as explained in \ref{helmke}. We note that the line bundle $L$ and the $\mathbb{Q}$-divisor $\Delta$ correspond to $\lceil A\rceil$ and $F$ in the notation of \S 2.
\begin{corollary}
Let $X,D$, and $L$ be as above, but defined over an algebraically closed field of characteristic $0$. Then
\begin{equation*}
{\rm H}^i(X,\Omega_X^j(\log D)(-D) \otimes L)=0
\end{equation*}
for $i+j>\dim X$.
\end{corollary}
We give a short self-contained proof of \ref{pvanishing} (via the results and the lemma mentioned above), extracting the ideas from the proof of \cite[Theorem 8.2]{Arapura2}. First let us quote the following lemma of Hara \cite{Hara}.
\begin{lemma}[Hara {\cite[3.3]{Hara}}]
If $D'$ is an integral divisor satisfying $0\leq D'\leq (p-1)D$, then there is a quasi-isomorphism
$$\Omega_X^{\bullet}(\log D)\cong \Omega_X^\bullet(\log D)(D').$$
\end{lemma}
Now using the results of \cite{DI} and the above lemma, we obtain the following.
\begin{lemma}\label{lemma:boot1}
Let $M$ be a line bundle on $X$. Suppose that $D'$ is an integral divisor and that $0\le D'\le (p-1)D$.
Then the following inequalities hold.
\begin{enumerate}
\item[(a)] For all $r$,
$$\sum_{i+j=r}h^i(X, \Omega_X^j(\log D) \otimes M)\le \sum_{i+j=r}h^i(X,
\Omega_X^j(\log D)(D')\otimes M^p)$$
\item[(b)] For all $r$,
$$\sum_{i+j=r}h^i(X, \Omega_X^j(\log D)(-D) \otimes M)\le \sum_{i+j=r}h^i(X,
\Omega_X^j(\log D)(-D-D')\otimes M^p)$$
\end{enumerate}
\end{lemma}
\begin{proof}
Let $F:X\to X$ denote the absolute Frobenius map.
By \cite[\S 4.2]{DI}, the projection formula, and the previous lemma, we have
\begin{equation*}
\begin{split}
{\rm H}^i(X, \bigoplus_j \Omega_X^j(\log D)[-j] \otimes M) &\cong {\mathbb H}^i(X,
(F_*\Omega_X^\bullet(\log D) )\otimes M) \\
&\cong {\mathbb H}^i(X,
F_*(\Omega_X^\bullet(\log D) \otimes M^p)) \\
&\cong {\mathbb H}^i(X,\Omega_X^\bullet(\log D) \otimes M^p)\\
&\cong {\mathbb H}^i(X,\Omega_X^\bullet(\log D)(D') \otimes M^p).
\end{split}
\end{equation*}
These isomorphisms together with the spectral sequence
$$E_1^{ab}= {\rm H}^b(X,\Omega_X^a(\log D)(D')\otimes M^p)\Rightarrow
{\mathbb H}^{a+b}(X,\Omega_X^\bullet(\log D)(D') \otimes M^p)$$
prove the first inequality. We obtain the second inequality from the first using Serre duality.
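In more detail, the duality step can be spelled out as follows (a standard computation, with $n = \dim X$, using the perfect pairing $\Omega_X^j(\log D) \otimes \Omega_X^{n-j}(\log D) \rightarrow \omega_X(D)$, which gives $\Omega_X^j(\log D)^\vee \cong \Omega_X^{n-j}(\log D)(-D) \otimes \omega_X^{-1}$): Serre duality yields
$$h^i(X, \Omega_X^j(\log D)(-D)\otimes M) = h^{n-i}(X, \Omega_X^{n-j}(\log D)\otimes M^{-1})$$
and
$$h^i(X, \Omega_X^j(\log D)(-D-D')\otimes M^p) = h^{n-i}(X, \Omega_X^{n-j}(\log D)(D')\otimes M^{-p}),$$
so the inequality in (b) for a given $r$ is the inequality in (a) for $M^{-1}$ and $2n - r$, read through duality.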
\end{proof}
\begin{lemma}\label{lemma:boot2}
With the same notation as in the previous lemma for $X, D$ and $M$, suppose this time that $D'$ is an integral divisor and that $0\le D'\le (p^n-1)D$.
Then
$$\sum_{i+j=r}h^i(X, \Omega_X^j(\log D)(-D) \otimes M)\le \sum_{i+j=r}h^i(X,
\Omega_X^j(\log D)(-D-D')\otimes M^{p^n})$$
\end{lemma}
\begin{proof}
We may write $D' = p^{n-1} D_1' + p^{n-2} D_2' + \cdots + D_n'$, where $0\le D_i'\le (p-1)D$. Then repeatedly
applying lemma \ref{lemma:boot1} gives
\begin{equation*}
\begin{split}
\sum_{i+j=r}h^i(X, \Omega_X^j(\log D)(-D) \otimes M) &\le
\sum_{i+j=r}h^i(X, \Omega_X^j(\log D)(-D) \otimes M^p(-D_1'))\\
&\le
\sum_{i+j=r}h^i(X, \Omega_X^j(\log D)(-D) \otimes M^{p^2}(-pD_1'-D_2'))\\
&\ldots
\end{split}
\end{equation*}
\end{proof}
\begin{proof}[Proof of \ref{pvanishing}]
By assumption, $L(-\Delta)$ is ample for some $\Delta=\sum r_iD_i$ with $r_i\in [0,1)\cap {\mathbb Q}$.
Using Kleiman's ampleness criterion (cf.\ \cite{Lazarsfeld}), we see that $L(-\sum r_i'D_i)$ remains ample whenever
$r'_i$ is sufficiently close to $r_i$. Therefore, we can assume that the coefficients $r_i$ lie in $[0,1)\cap {\mathbb Z}[\frac{1}{p^l}]$ for some sufficiently large integer $l$.
Thus,
$L^{p^n}(-D')$ is ample for some integer $n>0$ and some integral divisor $0 \le D' = p^n(\sum r_iD_i) \le (p^n-1)D$. We may also assume, taking $n$ sufficiently large, that
$${\rm H}^i(X, \Omega_X^j(\log D)(-D)\otimes L^{p^n}(-D'))=0$$
for all $i>0$ by the Serre Vanishing. Now \ref{pvanishing} is a consequence of lemma \ref{lemma:boot2}, noting that, if $i + j = r > \dim X$, then either $i > 0$ or $i = 0$ with $j > \dim X$ and hence ${\rm H}^0(X, \Omega_X^j(\log D)(-D)\otimes L) = {\rm H}^0(X, \Omega_X^j(\log D)(-D)\otimes L^{p^n}(-D'))=0$.
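For completeness, here is an elementary check (assuming $n \geq l$) that $D'$ is integral and satisfies $0 \leq D' \leq (p^n - 1)D$: writing $r_i = c_i/p^l$ with integers $0 \leq c_i \leq p^l - 1$, the coefficient of $D_i$ in $D'$ is
$$p^n r_i = p^{n-l}c_i \in {\mathbb Z}, \qquad p^{n-l}c_i \leq p^{n-l}(p^l - 1) = p^n - p^{n-l} \leq p^n - 1.$$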
\end{proof}
\section{Stacky version}
\cite{MO} gives an interpretation of the Kawamata-Viehweg Vanishing as (an application of) the Kodaira Vanishing for a certain Deligne-Mumford stack. In the same spirit, our main vanishing result can be interpreted as (an application of) the Steenbrink Vanishing for a certain Deligne-Mumford stack.
\section{Applications/Future directions}
The application of the Kawamata-Viehweg Vanishing Theorem in the Minimal Model Program is one of the most remarkable stories in the modern development of the subject of Algebraic Geometry. Here we list some of the well-known applications of the Akizuki-Nakano Vanishing and the Steenbrink Vanishing in the hope that our KV-type formulation of the (log) Akizuki-Nakano Vanishing will find some interesting applications in the future.
\subsection{Unobstructedness of the deformation of Fano manifolds}\
\begin{case} (Classical unobstructedness of the deformation of Fano manifolds)\\
Let $X$ be a Fano manifold, i.e., a nonsingular projective variety with $- K_X$ being ample. Then we have
$${\rm H}^2(X, T_X) \cong {\rm H}^2(X,\Omega^{\dim X - 1}(- K_X)) = 0,$$
and hence the deformation of the Fano manifold is unobstructed \cite{Mori-Mukai}, by the Akizuki-Nakano Vanishing.
\end{case}
\begin{case}(Unobstructedness of the deformation of log ${\mathbb Q}$-Fano manifolds)\\ Let $(X,B+F)$ be a pair consisting of a nonsingular projective variety and an effective ${\mathbb Q}$-divisor $B + F = \sum B_k + \sum f_jF_j \hskip.1in (0 < f_j < 1)$ with the support $D = \sum B_k + \sum F_j$ being a simple normal crossings divisor on $X$. Assume $(X,B+F)$ is a log ${\mathbb Q}$-Fano manifold, i.e., $- (K_X + B + F)$ is ample. Then the deformation of the pair $(X,B+F)$ is unobstructed, since
$$\begin{array}{rcl}
{\rm H}^2(X,T_X(- \log(D))) &\cong& {\rm H}^2(X,\Omega^{\dim X - 1}(\log(D))(- (K_X + D))) \\
&=& {\rm H}^2(X, \Omega^{\dim X - 1}(\log(D))(\lceil - (K_X + B + F)\rceil - G)) = 0,\\
\end{array}$$
where $G = \sum F_j$, by our Theorem \ref{mainvan}.
\end{case}
\subsection{Extension of the Akizuki-Nakano Vanishing to singular varieties, and a theorem by Flenner}
Steenbrink's motivation to prove his vanishing theorem was to give a simple proof of the vanishing theorem of Guillen, Navarro, Pascual and Puerta \cite{Nav}, which can be considered a natural extension (from a certain point of view) of the Kodaira-Akizuki-Nakano Vanishing to singular varieties involving the du Bois complex. A very nice application of their vanishing theorem is due to Flenner \cite{Flen}, who proves that the regular $l$-forms on the smooth locus of a singular variety extend to the regular forms on any resolution of singularities for
$l$ less than the codimension of the singular set minus $1$. Flenner uses the Steenbrink Vanishing only indirectly; a different argument, where the vanishing is used more explicitly, can be found in \cite{Arapura1}.
\subsection{Kovacs' singular version of the Esnault-Viehweg Vanishing and its application to the study of the family of canonically polarized varieties} Kovacs \cite{Kov2} proves a singular version of the Esnault-Viehweg Vanishing (and others). As an application of these vanishing theorems, he proves an Arakelov-Parshin type boundedness result for the families of canonically polarized varieties with rational Gorenstein singularities (cf.\cite{Kov1, Viehweg-Zuo}). See also Kovacs' extension \cite{Kov3} of the Steenbrink Vanishing Theorem.
\subsection{Analysis of the zero locus of the (log) 1-forms by Wei, extending the previous results of Hacon-Kovacs \cite{Hacon-Kovacs} and Popa-Schnell \cite{Popa-Schnell}} Wei \cite{Wei1, Wei2} proves that the zero locus of any global holomorphic log-one-form on a projective log-smooth pair $(X, D)$ of log-general type must be non-empty, using some generalizations of the Kodaira-Saito Vanishing Theorem \cite{Saito}. This is a generalization of the results proved in \cite{Hacon-Kovacs} and \cite{Popa-Schnell}.
\subsection{Generalization in terms of the ``multiplier ideal sheaf''} The Kawamata-Viehweg Vanishing Theorem has a generalization in terms of the ``multiplier ideal sheaf''. It is an interesting question what the proper definition of a ``multiplier ideal sheaf'' and a generalization would be in the context of our vanishing result. We will discuss the answer to this question in the subsequent papers.
\vskip.1in
\paragraph{\bf Acknowledgements.} We thank Professors O. Fujino, C. Hacon, S. Helmke, Y. Kawamata, Y. Namikawa, A. Moriwaki, S. Mori, S. Mukai, and M. Nori for invaluable comments, suggestions and support. We also thank the members of our seminar, P. Coupek, H. Li, and H. Wang for listening to the talks about the subject.
|
{
"timestamp": "2018-09-05T02:23:37",
"yymm": "1806",
"arxiv_id": "1806.01137",
"language": "en",
"url": "https://arxiv.org/abs/1806.01137"
}
|
\section{Introduction}
In this article, we always consider simple graphs, which have neither loops nor multiple edges. For a graph $G$, let $V(G)$ and $E(G)$ denote the set of vertices and the set of edges of $G$, respectively. We write $|G|$ for the order of $G$ (i.e., $|G| = |V(G)|$). For a vertex $v$ of $G$, we denote by $\deg_{G}(v)$ the degree of $v$ in $G.$ \\
For an integer $k\geqslant 2$, we define
$$\sigma_k(G)=\min\left\lbrace \sum_{x\in S}\deg_{G}(x): S \text{ is an independent set in } V(G),\ |S|=k\right\rbrace.$$
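For a quick illustration of this definition (our own toy example): in the cycle $C_6$, every vertex has degree $2$, and independent sets of size $2$ exist (e.g., two antipodal vertices), so $\sigma_2(C_6) = 2 + 2 = 4$.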
In a tree, a vertex of degree one is called a \textit{leaf} and a vertex of degree at least three is called a \textit{branch vertex}. Many researchers have investigated the degree sum conditions for the existence of a spanning tree with a bounded number of branch vertices (see the survey article \cite{OY} for more details).
Moreover, many analogous results for claw-free graphs have been studied (see \cite{MS}, \cite{R}, \cite{GHHSV}, \cite{FKKLR} and \cite{MOY} for examples). In particular, in 2004, Gargano, Hammar, Hell, Stacho and Vaccaro gave a sufficient condition for a connected claw-free graph to have a spanning tree with few branch vertices. They proved the following theorem.
\begin{theorem}[Gargano et al.\ {\cite{GHHSV}}]\label{thm1}
Let $k$ be a non-negative integer and let $G$ be a connected claw-free graph of order $n$. If $\sigma_{k+3}(G)\geq n-k-2$, then $G$ has a spanning tree with at most $k$ branch vertices.
\end{theorem}
After that, under the same degree condition as in Theorem \ref{thm1}, Kano,
Kyaw, Matsuda, Ozeki, Saito and Yamashita showed the existence of a spanning tree with a bounded number of leaves, which is a slightly stronger conclusion than the existence of a spanning tree with a bounded number of branch vertices. They proved the following.
\begin{theorem}[Kano et al. {\cite{KKMO}}]\label{theorem KKMO}
Let $k$ be a non-negative integer and let $G$ be a connected claw-free graph of order $n$. If $\sigma_{k+3}(G)\geq n-k-2$, then $G$ has a spanning tree with at most $k+2$ leaves.
\end{theorem}
On the other hand, in 2014, Matsuda, Ozeki and Yamashita proposed the following conjecture.
\begin{conjecture}[Matsuda et al. {\cite{MOY}}]\label{conj1}
Let $k$ be a non-negative integer and let $G$ be a connected claw-free graph of order $n$. If $\sigma_{2k+3}(G)\geq n-2$, then $G$ has a spanning tree with at most $k$ branch vertices.
\end{conjecture}
In \cite{MOY}, the authors gave examples showing that Conjecture \ref{conj1}, if true, is sharp, and they also proved the conjecture for the case $k=1$. Motivated by the techniques in \cite{MOY}, \cite{Kyaw1} and \cite{CHH}, we prove Conjecture \ref{conj1} for the case $k=2$. In particular, our main result is the following.
\begin{theorem}\label{thm-mainB}
Let $G$ be a connected claw-free graph of order $n$. If $\sigma_{7}(G)\geq n-2$, then $G$ has a spanning tree with at most two branch vertices.
\end{theorem}
Moreover, using a part of the proof of Theorem \ref{thm-mainB}, we also give another result, which improves Theorem \ref{thm1} for the case $k=3.$ We will prove the following theorem.
\begin{theorem}\label{thm-mainA}
Let $G$ be a connected claw-free graph of order $n$. If $\sigma_{6}(G)\geq n-5$, then $G$ has a spanning tree with at most two branch vertices.
\end{theorem}
\section{Proofs of Theorem \ref{thm-mainB} and Theorem \ref{thm-mainA}}
\vspace{0.5cm}
Before proving the theorems we give some notations for convenience.
Let $T$ be a spanning tree of $G.$ Let $L(T)$ and $B(T)$ denote the set of leaves and the set of branch vertices of the tree $T,$ respectively. For $u, v \in V(T),$ denote by $P_T[u, v]$ the unique path in $T$ connecting $u$ and $v.$ We assign an orientation to $P_T[u, v]$ from $u$ to $v.$
For a subset $X$ in $V(G),$ set $N(X) = \{x\in V(G) |\ xy \in E(G) \text{ for some $y \in X$}\}$ and $\deg (X) = \sum_{x\in X}\deg_G(x)$. For an integer $k \geq 1,$ we denote $N_k(X) = \{x \in V(G)\ |\ |N(x) \cap X| = k\}.$
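These notations can be illustrated on a small example; the helper below (our own sketch, not part of the proof, with graphs represented as adjacency dictionaries) computes $N(X)$, $\deg(X)$ and $N_k(X)$ for a given vertex subset $X$:

```python
def neighbourhood_stats(adj, X, k):
    """Compute N(X), deg(X) and N_k(X) from the proof preliminaries.

    adj: dict mapping each vertex to the set of its neighbours;
    X: a set of vertices of the graph; k: a positive integer.
    """
    NX = {v for v in adj if adj[v] & X}            # some neighbour lies in X
    degX = sum(len(adj[x]) for x in X)             # degree sum over X
    Nk = {v for v in adj if len(adj[v] & X) == k}  # exactly k neighbours in X
    return NX, degX, Nk

# Path a-b-c-d with X = {a, d}: N(X) = {b, c}, deg(X) = 1 + 1 = 2,
# and N_1(X) = {b, c}, since b and c each have exactly one neighbour in X.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
NX, degX, N1 = neighbourhood_stats(path, {"a", "d"}, 1)
print(sorted(NX), degX, sorted(N1))  # ['b', 'c'] 2 ['b', 'c']
```

Note that, as defined, $N(X)$ may contain vertices of $X$ itself when two members of $X$ are adjacent.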
{\bf Proof of Theorem \ref{thm-mainB}}
Suppose that $G$ has no spanning tree with at most 2 branch vertices. Let $T$ be a spanning tree of $G.$ Then $|B(T)| \geq 3$ and we have the following
\begin{equation*}
|L(T)|=2+\sum\limits_{v\in B(T)}(\deg_T(v)-2)\geq 2+3\cdot(3-2)=5.
\end{equation*}
On the other hand, by Theorem \ref{theorem KKMO} we conclude that $G$ has a spanning tree with at most 6 leaves. Therefore, $G$ has a spanning tree $T$ such that $5 \leq |L(T)| \leq 6.$\\
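The identity $|L(T)|=2+\sum_{v\in B(T)}(\deg_T(v)-2)$ used above can be verified on a concrete tree; the following sketch (our own illustration; the double-star tree and its vertex names are a hypothetical example) counts leaves and branch vertices directly:

```python
def leaves_and_branches(adj):
    """Return (leaves, branch vertices) of a tree.

    adj: dict mapping each vertex of the tree to the set of its
    neighbours.  A leaf has degree 1; a branch vertex has degree >= 3.
    """
    leaves = {v for v, nb in adj.items() if len(nb) == 1}
    branches = {v for v, nb in adj.items() if len(nb) >= 3}
    return leaves, branches

# A double star (hypothetical example): centres a and b are joined by
# an edge and each carries three pendant leaves, so deg(a) = deg(b) = 4.
tree = {
    "a": {"b", "a1", "a2", "a3"},
    "b": {"a", "b1", "b2", "b3"},
    "a1": {"a"}, "a2": {"a"}, "a3": {"a"},
    "b1": {"b"}, "b2": {"b"}, "b3": {"b"},
}
L, B = leaves_and_branches(tree)
# |L(T)| = 2 + sum over B(T) of (deg - 2) = 2 + 2 + 2 = 6
assert len(L) == 2 + sum(len(tree[v]) - 2 for v in B)
print(len(L), len(B))  # 6 2
```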
Now we will prove Theorem \ref{thm-mainB} by deriving a contradiction in each of four steps.
\vspace{0.3cm}
{\bf Step 1}. Suppose that there exists a spanning tree $T$ of $G$ such that $|L(T)|= 5$ and $T$ has exactly 3 branch vertices $s, w, t$ of degree 3, where $w\in P_T[t,s]$ (see Figure \ref{Pic1}).
\begin{figure}[h]
\centering
\includegraphics[width=0.4\linewidth]{./pic1}
\caption[Tree T]{The tree $T$ in Step 1}
\label{Pic1}
\end{figure}
Let $L(T)=\{u_1, u_2, u_3, u_4, u_5\}$ be the set of leaves of $T$.
Let $B_i$ be the vertex set of the component of $T-\{s,w,t\}$ with $L(T)\cap B_i=\{u_i\}$ for $1\leq i\leq 5$, and denote by $v_i$ the only vertex of $N_T(\{s,w,t\})\cap B_i$. Without loss of generality, we may assume that $B_i\cap N_T(s) \not= \emptyset\ (1\leq i \leq 2)$, $B_j\cap N_T(t)\not= \emptyset\ (3\leq j \leq 4)$ and $B_5\cap N_T(w)\not= \emptyset.$ Set $P_1=V(P_T[w, s]-\{w,s\})$, $P_2=V(P_T[t, w]-\{t,w\})$ and $P=P_1\cup P_2$. Set $r_1=d_T(s,t)$, $r_2=d_T(s,w)$. For each $x\in P_T[t, s]$, $P_T[s, u_i]\ (1\leq i \leq 2)$, $P_T[t, u_j]\ (3\leq j \leq 4)$ or $P_T[w, u_5]$, its successor $x^{+}$ and its predecessor $x^{-}$ are defined, if they exist.
We choose the tree $T$ such that:\\
(C1) $(r_1;r_2)$ is as small as possible in lexicographic order.
Set $I=\{u_1;u_2;u_3;u_4;u_5;u_6=t;u_7=s\}$.
\begin{claim}\label{claim1} We have\
$v_1s^{-}, v_2s^{-}, v_3t^{+}, v_4t^{+}\notin E(G)$ and $v_1v_2, v_3v_4\in E(G)$.
\end{claim}
\begin{proof}
If $v_1s^{-}\in E(G)$ then we consider the tree $T'=T+v_1s^{-}-sv_1.$ Then either $T'$ contradicts the condition (C1) or $T'$ is a spanning tree with two branch vertices, a contradiction. Hence $v_1s^{-}\notin E(G)$.\\
Similarly, we also have \ $v_2s^{-}, v_3t^{+}, v_4t^+\notin E(G)$.\\
Since $G$ is claw-free, neither $\{ss^{-}, sv_1, sv_2\}$ nor $\{tt^{+}, tv_3, tv_4\}$ induces a claw, and hence $v_1v_2, v_3v_4\in E(G)$.
\end{proof}
\begin{claim}\label{claim2}
$I$ is an independent set.
\end{claim}
\begin{proof}
For each $1\leq i< j\leq 5,$ if $u_iu_j\in E(G)$ then we consider the tree
$$T' = T+u_iu_j-v_iv_i^{-}.$$
The resulting tree is a spanning tree of $G$ with two branch vertices, which gives a contradiction. \\
For $i\in \{1;2;5;7\},$ if $u_iu_6\in E(G)$ then we consider the tree
$T'=T+u_iu_6-ww^{-}.$
The resulting tree is a spanning tree of $G$ with two branch vertices, which is a contradiction. If $u_iu_6\in E(G)$ for some $i\in \{3; 4\}$ then by Claim \ref{claim1} the tree $T'=T+u_6u_i+v_3v_4-u_6v_3-u_6v_4$ is a spanning tree of $G$ with two branch vertices. This also gives a contradiction.\\
Similarly, we also have $u_7u_i\notin E(G)$ for all $1\leq i\leq 5$.\\
Claim \ref{claim2} is proved.
\end{proof}
Since $G$ is claw-free and $I$ is independent by Claim \ref{claim2}, we have $N_3(I)=\emptyset$ (a vertex with three neighbours in $I$ would be the centre of a claw).
\begin{claim}\label{claim3}
$v_i\notin N(u_j)$ for all \ $1\leq i\not= j\leq 5$, and $u_6v_1, u_6v_2, u_6v_5, u_7v_3, u_7v_4, u_7v_5\not\in E(G)$.
\end{claim}
\begin{proof}
For all $1\leq i\not= j\leq 5$, if $v_iu_j\in E(G)$ then we consider the tree
$$ T' =T+v_iu_j-v_iv_i^{-}.$$
The resulting tree has two branch vertices. This gives a contradiction.\\
Now if $u_6v_i\in E(G)$ for some $i\in\{1;2;5\}$ then the tree
$ T' = T+u_6v_i-v_iv_i^{-} $ is a spanning tree of $G$ with two branch vertices, a contradiction.\\
Similarly, we also get $u_7v_3, u_7v_4, u_7v_5\notin E(G)$.
\end{proof}
\begin{claim}\label{claim4}
For all $1\leq i\leq 5$, $1\leq j\leq 7$, $j\not= i,$ if $x\in B_i\cap N(u_j)$ then
\begin{enumerate}
\item[{\rm(a)}] $x\not= u_i$,
\item[{\rm(b)}] $x\not= v_i$ if $j\in \{1,2,3,4,5\}$,
\item[{\rm(c)}] $x\not= v_1, v_2, v_5$ if $j=6$,
\item[{\rm(d)}] $x\not= v_3, v_4, v_5$ if $j=7$,
\item[{\rm(e)}] $x^{-}\notin N(I-\{u_j\})$.
\end{enumerate}
\end{claim}
\begin{proof}
Statements (a), (b), (c) and (d) follow from Claims \ref{claim2} and \ref{claim3}.\\
Now, suppose that $u_kx^{-}\in E(G)$ with $k\not= j$.\\
If $i=5$ then the tree $T'=T+u_jx+u_kx^{-}-xx^{-}-wv_5$ is a spanning tree of $G$ with two branch vertices, a contradiction.\\
Otherwise, by the symmetry of the roles of $s$ and $t$, we may assume that $i\in \{1;2\}$.\\
Case 1. $j\not=7$. We consider the tree
\begin{align*}
T' =
\begin{cases}
T+u_jx+x^{-}u_k-xx^{-}-sv_i & \text { if $k\not=7$},\\
T+u_jx+x^{-}u_7+v_1v_2-xx^{-}-sv_1-sv_2 & \text { if $k=7$}.\\
\end{cases}
\end{align*}
Then the resulting tree is a spanning tree of $G$ with two branch vertices, a contradiction.\\
Case 2. $j=7$. Set $h\in\{1;2\}-\{i\}$.\\
If $k\not= h$ then the tree $T'=T+sx+u_kx^{-}+v_1v_2-xx^{-}-sv_1-sv_2$ is a spanning tree of $G$ with two branch vertices, a contradiction.\\
If $k=h$ then, since $\{ss^{-}, sv_i, sx\}$ does not induce a claw, we have either $xs^{-}\in E(G)$ or $xv_i\in E(G)$.\\
Subcase 2.1. If $xs^{-}\in E(G)$ then the tree $T'=T+s^{-}x+u_hx^{-}-xx^{-}-sv_i$ either contradicts the condition (C1) or has two branch vertices, a contradiction.\\
Subcase 2.2. If $xv_i\in E(G)$ then we consider the tree $T'=T+xv_i+u_hx^{-}-xx^{-}-sv_i.$ This gives a contradiction, since $T'$ is a spanning tree of $G$ with two branch vertices.\\
Claim \ref{claim4} is proved.
\end{proof}
\begin{claim}\label{claim5}
We have $N_2(I-\{u_i\})\cap B_i=\emptyset$ for all $1\leq i\leq 5$.
\end{claim}
\begin{proof}
Suppose that there exists $x\in N_2(I-\{u_i\})\cap B_i$. Then $xu_j, xu_k\in E(G)$ for some $j,k\not=i$. By Claim \ref{claim4} we get $x^{-}\notin N(I-\{u_j\}) \cup N(I-\{u_k\})=N(I)$. Hence, $\{xu_j, xu_k, xx^{-}\}$ induces a claw, which gives a contradiction. Therefore $N_2(I-\{u_i\})\cap B_i=\emptyset$.
\end{proof}
By Claims \ref{claim4} and \ref{claim5}, for $1\leq i \leq 5,$ the sets $\{u_i\}$, $N(u_i)\cap B_i$ and $(N(I-\{u_i\}))^{-}\cap B_i$ are pairwise disjoint subsets
of $B_i$, where $(N(I-\{u_i\}))^{-}\cap B_i=\{x^{-}:\ x\in N(I-\{u_i\})\cap B_i\}$; moreover, $N_3(I)=\emptyset$ and $(N_2(I)- N(u_i))\cap B_i=\emptyset.$ Thus, for each $i\in \{1;2;3;4\},$ we have
\begin{equation}\label{eq11}
\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|\leq |B_i|.
\end{equation}
Moreover, when $i=5$ we have
\begin{equation}\label{eq12}
\sum\limits_{j=1}^7 |N_G(u_j)\cap B_5|\leq |B_5|-1.
\end{equation}
Now we consider the set $N(I) \cap V(P_T[t,s]-\{s,t\})$.
\begin{claim}\label{claim6}
For all $1\leq i\leq 4$, we have $N(u_i)\cap P=\emptyset$.
\end{claim}
\begin{proof}
For each $1\leq i\leq 4$, if there exists $y\in N(u_i)\cap P$ then we consider the tree
$$ T'= T+yu_i-v_iv_i^{-}.$$
This contradicts the condition (C1). Hence Claim \ref{claim6} holds.
\end{proof}
\begin{claim}\label{claim7}
For all $1\leq i\leq 4$, we have $w\notin N(u_i)$.
\end{claim}
\begin{proof}
For each $1\leq i\leq 4$, if $wu_i\in E(G)$ then we consider the tree
$$ T'= T+wu_i-v_iv_i^{-} .$$
Then $T'$ is a spanning tree of $G$ with two branch vertices. This gives a contradiction.
\end{proof}
\begin{claim}\label{claim8}
$v_5w^{-}\in E(G), wu_5 \notin E(G), wu_6\notin E(G)$.
\end{claim}
\begin{proof}
If $w^{+}v_5\in E(G)$ then the tree $T'=T+w^{+}v_5-wv_5$ either has two branch vertices or contradicts the condition (C1). If $w^{+}w^{-}\in E(G)$ then the tree $T'=T+w^{+}w^{-}-ww^{-}$ likewise either has two branch vertices or contradicts the condition (C1). Then, since $\{ww^{+}, ww^{-}, wv_5\}$ does not induce a claw, we obtain $v_5w^{-}\in E(G)$.\\
For $i \in \{5; 6\},$ if $u_iw\in E(G)$ then the tree
$T'=T+u_iw+v_5w^{-}-v_5w-ww^{-}$ is a spanning tree of $G$ with two branch vertices, a contradiction. Hence $u_iw\notin E(G)$ for all $5\leq i \leq 6.$
\end{proof}
By Claims \ref{claim7} and \ref{claim8} we conclude that $wu_i\notin E(G)$ for all $1\leq i\leq 6.$ Then, we get
\begin{equation}\label{eq13}
\sum\limits_{j=1}^7 |N_G(u_j)\cap \{w\}|=|N(s)\cap \{w\}|\leq 1.
\end{equation}
\begin{claim}\label{claim9} We have
\begin{equation}\label{eq14}
\sum\limits_{j=1}^7 |N_G(u_j)\cap P_1|\leq |P_1|.
\end{equation}
\end{claim}
\begin{proof}
By Claim \ref{claim6} we have
\begin{equation*}
\sum\limits_{j=1}^7 |N_G(u_j)\cap P_1|=|N(u_5)\cap P_1|+|N(s)\cap P_1|+|N(t)\cap P_1|.
\end{equation*}
Assume that there exists $x\in N(u_5)\cap P_1$. Then we obtain a contradiction with the condition (C1) by considering the tree $T'=T+xu_5-v_5w.$ So $N(u_5)\cap P_1=\emptyset$.\\
Now we will prove that $N(s)\cap N(t)\cap P_1=\emptyset.$ Indeed, suppose that there exists $x\in N(s)\cap N(t)\cap P_1$. If $x^{-}t\in E(G)$ then we consider the tree $T'=T+xt+x^{-}t-xx^{-}-ww^{-}.$ Hence $T'$ is a spanning tree of $G$ with two branch vertices, a contradiction. This implies that $x^{-}t\notin E(G)$. Since $I$ is independent, we also have $st\notin E(G)$. Combining these with the fact that $G$ is claw-free and considering the four vertices $\{x,t,s,x^{-}\}$, we get $sx^{-}\in E(G)$. Now we consider the tree $T'=T+xt+sx^{-}-xx^{-}-ww^{-}$. Hence $T'$ is a spanning tree of $G$ with two branch vertices, which is a contradiction. Therefore, $N(s)\cap N(t)\cap P_1=\emptyset.$ This completes the proof of (\ref{eq14}).
\end{proof}
\begin{claim}\label{claim10}
If $z\in N(u_5)\cap P_2$ then $z\not= w^{-}$ and $w^{-}\notin N(I)$.
\end{claim}
\begin{proof}
If $z=w^-$ then the tree $T'=T+u_5w^{-}-ww^{-}$ is a spanning tree of $G$ with two branch vertices, which is a contradiction. Hence $z\not= w^{-}$.\\
If $w^{-}t\in E(G)$ then the tree $T'=T+u_5z+tw^{-}-ww^{-}-zz^{-}$ is a spanning tree of $G$ with two branch vertices, which gives a contradiction. Hence $w^{-}t\notin E(G)$.\\
By Claim \ref{claim6} we have $u_iw^{-}\notin E(G)$ for all $1\leq i\leq 4$.\\
If $w^{-}s\in E(G)$ then the tree $T'=T+sw^{-}-ww^{-}$ is a spanning tree of $G$ with two branch vertices, a contradiction. Hence $w^{-}s\notin E(G)$. Therefore $w^{-}\notin N(I)$.
\end{proof}
\begin{claim}\label{claim11}
$|N(u_5)\cap N(t)\cap P_2|\leq 1$.
\end{claim}
\begin{proof}
Suppose that there exist $x,y\in N(u_5)\cap N(t)\cap P_2$, $x\not =y$. Without loss of generality, we may assume $y\in P_T[t,x]$.\\
If $u_5x^{-} \in E(G)$ then, since $xt\in E(G)$, we consider the tree $T'=T+xt+u_5x^{-} -xx^{-}-wv_5.$ The resulting tree has two branch vertices, which is a contradiction. Hence $u_5x^{-}\notin E(G)$; in particular $x^{-}\not= y$. Similarly, we also obtain $u_5y^{-}\notin E(G)$. Since $\{xu_5, xt, xx^{-}\}$ does not induce a claw, we obtain $tx^{-}\in E(G)$. By the condition (C1) it is easy to see that $N(v_3)\cap P_2=\emptyset$. Then, since $\{tv_3, ty, tx^{-}\}$ does not induce a claw, we obtain $yx^{-}\in E(G)$. Therefore, since $\{yu_5, yy^{-}, yx^{-}\}$ does not induce a claw, we conclude $x^{-}y^{-}\in E(G)$. Hence we consider the tree $T'=T+x^{-}y^{-}+u_5y-yy^{-}-wv_5$ to obtain a contradiction with the condition (C1).
\end{proof}
\begin{claim}\label{claim12}
$N(s)\cap N(t)\cap P_2=\emptyset$.
\end{claim}
\begin{proof}
Assume that $x\in N(s)\cap N(t)\cap P_2$. \\
If $x^{+}t\in E(G)$ then the tree $T'=T+sx+tx^{+}-xx^{+}-ww^{+}$ is a spanning tree of $G$ with two branch vertices. This gives a contradiction. Hence $tx^{+}\notin E(G)$. Thus, since $\{xs,xt,xx^{+}\}$ does not induce a claw, we have $sx^{+}\in E(G)$.\\
We consider the tree $T'=T+sx+sx^{+}-xx^{+}-ww^{+}.$ Hence, $T'$ is a spanning tree of $G$ with two branch vertices, a contradiction. Therefore, $N(s)\cap N(t)\cap P_2=\emptyset$.
\end{proof}
\begin{claim}\label{claim13}
$N(s)\cap N(u_5)\cap P_2=\emptyset$.
\end{claim}
\begin{proof}
Assume that $x\in N(s)\cap N(u_5)\cap P_2$.\\
If $sx^{-}\in E(G)$ then we consider the tree $T'=T+sx+sx^{-}-xx^{-}-ww^{+}.$ The resulting tree is a spanning tree of $G$ with two branch vertices, a contradiction. Therefore, $sx^{-}\notin E(G)$.
Thus, since $\{xs,xu_5,xx^{-}\}$ does not induce a claw, we obtain $u_5x^{-}\in E(G)$.\\
Now we consider the tree $T'=T+u_5x^{-}+sx-xx^{-}-ww^{+}$; then $T'$ is a spanning tree of $G$ with two branch vertices, which is a contradiction. Claim \ref{claim13} is proved.
\end{proof}
By Claims \ref{claim10}-\ref{claim13}, we have $N(s)\cap N(t)\cap P_2=N(s)\cap N(u_5)\cap P_2=\emptyset$, $|N(t)\cap N(u_5)\cap P_2|\leq 1$ and $w^{-}\notin N(I)$ if $|N(t)\cap N(u_5)\cap P_2|=1.$ Hence, combining with Claim \ref{claim6}, we obtain
\begin{equation}\label{eq15}
\sum\limits_{j=1}^7 |N_G(u_j)\cap P_2|\leq |P_2|.
\end{equation}
By (\ref{eq11})-(\ref{eq15}) we conclude that
\begin{align*}
|G|&=\sum\limits_{i=1}^5|B_i|+|P_T[s,t]|\\
&\geq (1+\sum\limits_{i=1}^5\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|)+3+|P_1|+|P_2|\\
&\geq 3+\sum\limits_{i=1}^5\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|+\sum\limits_{j=1}^7 |N_G(u_j)\cap P_1|+\sum\limits_{j=1}^7 |N_G(u_j)\cap P_2|+\sum\limits_{j=1}^7 |N_G(u_j)\cap \{w\}|\\
& \geq 3+\deg (I)\geq 3+\sigma_7(G)\geq 3+(|G|-2)=1+|G|.
\end{align*}
This gives a contradiction. Step 1 is proved.
\vspace{0.5cm}
{\bf Step 2.} Suppose that $|L(T)| = 6$ and $T$ has three branch vertices $s,w,t$ with $\deg_T(s)=4, \deg_T(w)=\deg_T(t)=3$ and $w\in P_T[t,s]$ (see Figure \ref{Pic2}).
\begin{figure}[h]
\centering
\includegraphics[width=0.4\linewidth]{./pic2}
\caption[Tree T]{The tree $T$ in Step 2}
\label{Pic2}
\end{figure}
Let $L(T)=\{u_1,u_2,u_3,u_4,u_5,u_6\}$ be the set of leaves of $T$.
Let $B_i$ be the vertex set of the component of $T-\{s,w,t\}$ with $L(T)\cap B_i=\{u_i\}$ for $1\leq i\leq 6$, and denote by $v_i$ the only vertex of $N_T(\{s,w,t\})\cap B_i$. In this step, we may assume $B_i\cap N_T(s) \not= \emptyset\ (1\leq i \leq 3)$, $B_j\cap N_T(t)\not= \emptyset\ (4\leq j \leq 5)$ and $B_6\cap N_T(w)\not= \emptyset.$ Set $P_1=V(P_T[w, s]-\{w,s\})$, $P_2=V(P_T[t, w]-\{t,w\})$ and $P=P_1\cup P_2$. Set $r_1=d_T(s,t)$, $r_2=d_T(s,w)$. For each $x\in P_T[t, s]$, $P_T[s, u_i]\ (1\leq i \leq 3)$, $P_T[t, u_j]\ (4\leq j \leq 5)$ or $P_T[w, u_6]$, its successor $x^{+}$ and its predecessor $x^{-}$ are defined, if they exist.
We choose the tree $T$ such that:\\
(C2)\ $(r_1;r_2)$ is as small as possible in lexicographic order.
Using arguments similar to those in the proofs of Claims \ref{claim1} and \ref{claim8}, we obtain the following.
\begin{claim}\label{claim14}
$t^{+}v_4, t^{+}v_5, v_6w^{+}\notin E(G)$, $v_4v_5, v_6w^{-}\in E(G)$ and $u_6w\notin E(G)$.
\end{claim}
Set $u_7=t$ and $ I=\{u_1;u_2;u_3;u_4;u_5;u_6;u_7\}$.
\begin{claim}\label{claim15}
$I$ is an independent set and $N_3(I)=\emptyset.$
\end{claim}
\begin{proof}
For $1\leq i<j\leq 6$, if $u_iu_j\in E(G)$ then we consider the tree
$$ T' = T+u_iu_j-v_iv_i^{-}.$$
Then either $T'$ is a spanning tree of $G$ with two branch vertices, a contradiction, or we are back in the situation of Step 1, which again gives a contradiction. Hence $u_iu_j\notin E(G)$ for all $1\leq i<j\leq 6.$\\
If $u_7u_j\in E(G)$, $j\in \{4;5\}$ then by Claim \ref{claim14} we can see that the tree $T'=T+u_ju_7+v_4v_5-u_7v_4-u_7v_5$ is a spanning tree of $G$ with two branch vertices, a contradiction.\\
Now if $u_7u_i\in E(G)$ for some $i\in\{1;2;3;6\}$ then the tree $T'=T+u_7u_i-ww^{-}$
is a spanning tree of $G$ with two branch vertices, a contradiction. Hence $u_7u_i\notin E(G)$ for all $i\in\{1;2;3;6\}$. Therefore, $I$ is an independent set.\\
Moreover, since $G$ is claw-free and $I$ is an independent set we obtain $N_3(I)=\emptyset.$ \\
Claim \ref{claim15} is proved.
\end{proof}
\begin{claim}\label{claim16}
$v_i\notin N(u_j)$ for all \ $1\leq i\not= j\leq 6$, $u_7v_6\not\in E(G)$ and $\sum\limits_{i=1}^3 |N_G(u_7)\cap \{v_i\}| \leq 1$.
\end{claim}
\begin{proof}
The facts that $v_i\notin N(u_j)$ for all \ $1\leq i\not= j\leq 6$ and $u_7v_6\not\in E(G)$ are proved by arguments similar to those in the proof of Claim \ref{claim3}.\\
If there exist $u_7v_i, u_7v_j\in E(G)$ for some $i,j\in \{1;2;3\}, i\not=j$ then $T'=T+u_7v_i+u_7v_j-sv_i-sv_j$ is a spanning tree of $G$ with two branch vertices, a contradiction. Hence $\sum\limits_{i=1}^3 |N_G(u_7)\cap \{v_i\}| \leq 1$.
\end{proof}
Using the same arguments as in the proofs of Claims \ref{claim4} and \ref{claim5}, we may prove the following claims.
\begin{claim}\label{claim17}
For all $1\leq i\leq 6$, $1\leq j\leq 7$, $j\not= i,$ if $x\in B_i\cap N_G(u_j)$ then $x\not= u_i$ and $x^{-}\notin N(I-\{u_j\})$.
\end{claim}
\begin{claim}\label{claim18}
We have $N_2(I-\{u_i\})\cap B_i=\emptyset$ for all $1\leq i\leq 6$.
\end{claim}
By Claims \ref{claim17} and \ref{claim18} we first have
\begin{equation}\label{eq21}
\sum\limits_{i\in\{4;5\}}\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|\leq \sum\limits_{i\in\{4;5\}}|B_i|.
\end{equation}
After that, by Claim \ref{claim16} we also obtain
\begin{equation}\label{eq22}
\sum\limits_{i\in\{1;2;3;6\}}\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|\leq \sum\limits_{i\in\{1;2;3;6\}}|B_i|-3.
\end{equation}
Since $\{sv_1,sv_2,sv_3\}$ does not induce a claw, two of the vertices $v_1,v_2,v_3$ are adjacent; we may assume that $v_1v_2\in E(G)$.
\begin{claim}\label{claim19}
$N_G(u_i)\cap P_T[t,s]=\emptyset$ for $i\in\{1;2\}$.
\end{claim}
\begin{proof}
Suppose that there exists $x\in N(u_i)\cap P_T[t,s]$ for some $i\in\{1;2\}$. \\
If either $x=w$ or $x=t$ then we consider the tree $T'=T+xu_i+v_1v_2 - sv_1-sv_2.$ The tree $T'$ is a spanning tree of $G$ with two branch vertices, a contradiction. \\
If either $x=s$ or $x\in P$ then we consider the tree $T'=T+xu_i+v_1v_2-sv_1-sv_2.$ Now, using the same arguments as in Step 1 we obtain a contradiction.\\
Claim \ref{claim19} is proved.
\end{proof}
\begin{claim}\label{claim20} We have
\begin{equation}\label{eq23}
\sum\limits_{j=1}^7 |N_G(u_j)\cap \{s\}|=0.
\end{equation}
\end{claim}
\begin{proof}
By Claim \ref{claim19} we get $su_1, su_2\notin E(G)$.\\
If $su_i\in E(G)$ for some $i\in\{4;5;6;7\}$ then the tree $T'=T+su_i-ww^{+}$
is a spanning tree of $G$ with two branch vertices, which gives a contradiction. Hence $su_i\notin E(G)$ for all $4 \leq i \leq 7$.\\
Now, assume that $u_3s\in E(G)$. If $u_3s^{-}\in E(G)$ then we consider the tree $T'=T+u_3s^{-}-ss^{-}.$ Hence, using the same arguments as in Step 1 we obtain a contradiction. Therefore $u_3s^{-}\notin E(G)$. Thus, since $\{ss^{-}, su_3, sv_1\}$ does not induce a claw, we have $s^{-}v_1\in E(G)$.\\
Repeating the same arguments we also have $s^{-}v_2\in E(G)$.\\
Now we consider the tree $T'=T+s^{-}v_1+s^{-}v_2-sv_1-sv_2.$ Then the resulting tree $T'$ either has two branch vertices or contradicts the condition (C2), a contradiction. Hence $u_3s\notin E(G)$.\\
Claim \ref{claim20} is completed.
\end{proof}
\begin{claim}\label{claim21} We have
\begin{equation}\label{eq24}
\sum\limits_{j=1}^7 |N_G(u_j)\cap \{w\}|\leq 1.
\end{equation}
\end{claim}
\begin{proof}
By Claims \ref{claim19} and \ref{claim14} we have $wu_1, wu_2, wu_6\notin E(G)$.\\
Now, for $i\in\{4;5;7\}$, if $wu_i\in E(G)$ then by Claim \ref{claim14} we may consider the tree $T'=T+wu_i+w^{-}v_6-ww^{-}-wv_6.$ Then $T'$ is a spanning tree of $G$ with two branch vertices, a contradiction. Hence $wu_i\notin E(G)$. We thus obtain the following
\[\sum\limits_{j=1}^7 |N_G(u_j)\cap \{w\}|=|N(u_3)\cap \{w\}|\leq 1.\]
\end{proof}
Next, we consider the set $N(I) \cap P = N(I) \cap (P_1\cup P_2)$.\\
By the condition (C2), we have the following.
\begin{claim}\label{claim22}
$N(u_4)\cap P = N(u_5)\cap P=\emptyset$.
\end{claim}
\begin{claim}\label{claim23} We have
\begin{equation}\label{eq25}
\sum\limits_{j=1}^7 |N_G(u_j)\cap P_1|\leq |P_1|.
\end{equation}
\end{claim}
\begin{proof}
By Claim \ref{claim19} and Claim \ref{claim22} we have $N(u_i)\cap P_1=\emptyset$ for all $i\in \{1;2; 4; 5\}$. By the condition (C2) we get $N(u_6)\cap P_1=\emptyset$. Hence
\[\sum\limits_{j=1}^7 |N_G(u_j)\cap P_1|=|N(u_3)\cap P_1|+|N(u_7)\cap P_1|.\]
Now, suppose that there exists $x\in N(u_3)\cap N(u_7)\cap P_1$. \\
If $u_7x^{-}\in E(G)$ then the tree $T'=T+u_7x+u_7x^{-}-xx^{-}-ww^{-}$ is a spanning tree of $G$ with two branch vertices, which is a contradiction. Hence $u_7x^{-}\notin E(G)$. Then, since $\{xu_3, xx^{-}, xu_7\}$ does not induce a claw, we have $u_3x^{-}\in E(G).$ We consider the tree $T'=T+u_3x^{-}+xu_7-xx^{-}-ww^{-}.$ So $T'$ is a spanning tree of $G$ with two branch vertices, a contradiction.
Therefore, $N(u_3)\cap N(u_7)\cap P_1=\emptyset.$ Hence we get (\ref{eq25}).\\
Claim \ref{claim23} is proved.
\end{proof}
\begin{claim}\label{claim24} We have
\begin{equation}\label{eq26}
\sum\limits_{j=1}^7 |N_G(u_j)\cap P_2|\leq |P_2|+2.
\end{equation}
\end{claim}
\begin{proof}
By Claim \ref{claim19} and Claim \ref{claim22} we have the following
\begin{equation*}
\sum\limits_{j=1}^7 |N_G(u_j)\cap P_2|=|N(u_3)\cap P_2|+|N(u_6)\cap P_2|+|N(u_7)\cap P_2|.
\end{equation*}
Suppose that $x\in N(u_3)\cap N(u_6)\cap P_2$. If $u_6x^{-}\in E(G)$ then the tree $T'=T+u_6x^{-}+u_3x-ww^{+}-xx^{-}$ is a spanning tree of $G$ with exactly two branch vertices, a contradiction. Therefore $u_6x^{-}\notin E(G)$. Thus, since $\{xu_3,xu_6,xx^{-}\}$ does not induce a claw, we obtain $u_3x^{-}\in E(G)$. Then $T'=T+u_6x+u_3x^{-}-xx^{-}-ww^{-}$ is a spanning tree of $G$ with exactly two branch vertices, a contradiction. We conclude that $N(u_3)\cap N(u_6)\cap P_2=\emptyset.$
Next, we prove $|N(u_3)\cap N(u_7)\cap P_2|\leq 1$. Suppose that there exist $x,y\in N(u_3)\cap N(u_7)\cap P_2$, $x\not= y$. Without loss of generality we may assume $y\in P_T[t,x]$.\\
If $u_3x^{-}\in E(G)$ then $T'=T+xu_7+u_3x^{-}-xx^{-}-ww^{+}$ is a spanning tree of $G$ with exactly two branch vertices, which is a contradiction. Hence $u_3x^{-}\notin E(G)$. Then, since $\{xu_7, xu_3, xx^{-}\}$ does not induce a claw, we have $u_7x^{-}\in E(G)$. Now we consider the tree $T'=T+u_3y+u_7x^{-}+u_7x-xx^{-}-yy^{-}-ww^{+}$. Then the resulting tree $T'$ is a spanning tree of $G$ with exactly two branch vertices, a contradiction. So $|N(u_3)\cap N(u_7)\cap P_2|\leq 1$.
Finally, we prove $|N(u_6)\cap N(u_7)\cap P_2|\leq 1$. Suppose that there exist $x,y\in N(u_6)\cap N(u_7)\cap P_2$, $x\not= y$. Without loss of generality we may assume $y\in P_T[t,x]$. If $u_6x^{-}\in E(G)$ then $T'=T+xu_7+u_6x^{-}-xx^{-}-wv_6$ is a spanning tree of $G$ with exactly two branch vertices, a contradiction. Hence $u_6x^{-}\notin E(G)$. Then, since $\{xu_7, xu_6, xx^{-}\}$ does not induce a claw, we have $u_7x^{-}\in E(G)$. Now we consider the tree $T'=T+u_6y+u_7x^{-}+u_7x-xx^{-}-yy^{-}-wv_6$. Then the resulting tree $T'$ is a spanning tree of $G$ with exactly two branch vertices, which is a contradiction. So $|N(u_6)\cap N(u_7)\cap P_2|\leq 1$.
Combining the three bounds above, we complete the proof of Claim \ref{claim24}.
\end{proof}
Summing the inequalities (\ref{eq21})-(\ref{eq26}) yields
\begin{align*}
|G|&=\sum\limits_{i=1}^6|B_i|+|P_T[t,s]|\\
&\geq (3+\sum\limits_{i=1}^6\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|)+3+|P_1|+|P_2|\\
&\geq 3+\sum\limits_{i=1}^6\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|+\sum\limits_{j=1}^7 |N_G(u_j)\cap \{s\}|+\sum\limits_{j=1}^7 |N_G(u_j)\cap \{w\}|\\
&+\sum\limits_{j=1}^7 |N_G(u_j)\cap P_1|+\sum\limits_{j=1}^7 |N_G(u_j)\cap P_2|\\
& \geq 3+\deg (I)\geq 3+\sigma_7(G)\geq 3+(|G|-2)=1+|G|.
\end{align*}
This is a contradiction. Step 2 is completed.
\vspace{0.5cm}
{\bf Step 3}. Suppose that $T$ has two branch vertices $s$ and $t$ of degree 3 and two branches which touch $P_T[t,s]-\{t,s\}$ at $w$ and $z$. Without loss of generality we may assume $z\in P_T[t,w]$ (here $z$ may coincide with $w$; see Figure \ref{Pic3}).
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{./pic3}
\caption[Tree T]{The tree $T$ in Step 3}
\label{Pic3}
\end{figure}
Let $L(T)=\{u_1,u_2,u_3,u_4,u_5,u_6\}$ be the set of leaves of $T$.
Let $B_i$ be the vertex set of the component of $T-\{s,w,z,t\}$ with $L(T)\cap B_i=\{u_i\}$ for $1\leq i\leq 6$, and denote by $v_i$ the only vertex of $N_T(\{s,w,z,t\})\cap B_i$. In this step, we may assume $B_i\cap N_T(s) \not= \emptyset\ (1\leq i \leq 2)$, $B_j\cap N_T(t)\not= \emptyset\ (3\leq j \leq 4),$ $B_5\cap N_T(w)\not= \emptyset$ and $B_6\cap N_T(z)\not= \emptyset.$ Set $Q_1=V(P_T[w, s]-\{w,s\})$, $Q_2=V(P_T[z,w]-\{z,w\})$, $P_1 = Q_1 \cup Q_2$, $P_2=V(P_T[t, z]-\{t,z\})$ and $P=P_1\cup P_2$. Set $r_1=d_T(s,t)$, $r_2=d_T(s,w)$, $r_3=d_T(s,z).$ For each $x\in P_T[t, s]$, $P_T[s, u_i]\ (1\leq i \leq 2)$, $P_T[t, u_j]\ (3\leq j \leq 4)$, $P_T[w, u_5]$ or $P_T[z, u_6]$, its successor $x^{+}$ and its predecessor $x^{-}$ are defined, if they exist.
We choose the tree $T$ such that:\\
(C3) $(r_1;r_2; r_3)$ is as small as possible in lexicographic order.
Repeating the same arguments as in the proof of Claim \ref{claim1}, we have the following.
\begin{claim}\label{claim25a} We have\
$v_1s^{-}, v_2s^{-}, v_3t^{+}, v_4t^{+}\notin E(G)$ and $v_1v_2, v_3v_4\in E(G)$.
\end{claim}
\begin{claim}\label{claim25}
$N(u_i)\cap P=\emptyset$ for all $1\leq i\leq 4$.
\end{claim}
\begin{proof}
Suppose that there exists $x\in N(u_i)\cap P$ for some $1\leq i\leq 4$. We consider the tree
$ T'= T+xu_i-v_iv_i^{-} $
to obtain a contradiction with the condition (C3).
\end{proof}
\begin{claim}\label{claim26}
$N(u_i)\cap \{s,t,w,z\}=\emptyset$ for all $1\leq i\leq 4$ and $N(u_j)\cap \{s,t\}=\emptyset$ for all $j\in \{5;6\}$.
\end{claim}
\begin{proof}
For each $i \in \{1,2,3,4\}$, if either $w \in N(u_i)$ or $z \in N(u_i)$, then without loss of generality, we assume that $wu_i \in E(G).$\\
If $w\not= z$ then we consider the tree $T'=T+wu_i-sv_i$ for the case $i\in \{1;2\}$ and $T'=T+wu_i-tv_i$ for the case $i\in \{3;4\}.$ The latter case gives a contradiction with (C3), and in the former case we use the same arguments as in Step 2 to obtain a contradiction.\\
If $w=z$ then the tree
$T'= T+wu_i-v_iv_i^{-} $
is a spanning tree of $G$ with two branch vertices, a contradiction.
Now, for each $1\leq i \leq 6$, if either $s \in N(u_i)$ or $t \in N(u_i),$ then without loss of generality, we assume that $su_i \in E(G).$ By Claim \ref{claim25a}, we can set
\begin{align*}
T' =
\begin{cases}
T+su_i+v_1v_2-sv_1-sv_2 & \text { if $i\in \{1;2\}$},\\
T+su_i-ww^{+}& \text { if $i\in \{3;4;5;6\}$}.
\end{cases}
\end{align*}
If $w=z$ then the resulting tree is a spanning tree of $G$ with two branch vertices, a contradiction. Otherwise we use arguments similar to those in Step 1 or Step 2 to obtain a contradiction.\\
This completes Claim \ref{claim26}.
\end{proof}
Set $u_7=t$ and $I=\{u_1;u_2;u_3;u_4;u_5;u_6;u_7\}$.
\begin{claim}\label{claim27}
$I$ is an independent set.
\end{claim}
\begin{proof}
By Claim \ref{claim26} we have $u_iu_7 \not\in E(G)$ for all $i \in \{1;2;3;4;5;6\}.$ \\
If $u_iu_j\in E(G)$ where $1\leq i\not=j\leq 6$ then we consider the tree
$ T' = T+u_iu_j-v_jv_j^{-}.$
Then either $T'$ has two branch vertices or we can use the arguments from the proofs of Steps 1 and 2; in both cases we obtain a contradiction. This implies Claim \ref{claim27}.
\end{proof}
Using arguments similar to those in the proof of Claim \ref{claim4}, we may obtain the following.
\begin{claim}\label{claim28}
For all $1\leq i\not= j\leq 6,$ if $x\in B_i\cap N(u_j)$ then $x\not= u_i, v_i$ and $x^{-}\notin N(I-\{u_j\}).$ \\
For all $i\in \{1;2;5;6\}, $ if $x\in B_i\cap N(u_7)$ then $x\not= u_i, v_i$ and $x^{-}\notin N(I-\{u_7\}).$ \\
For all $i\in \{3;4\},$ if $x\in B_i\cap N(u_7)$ then $x\not= u_i$ and $x^{-}\notin N(I-\{u_7\}).$
\end{claim}
By Claim \ref{claim28}, for $i\in \{3;4\}$ we obtain
\begin{equation}\label{eq31}
\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|\leq |B_i|.
\end{equation}
Moreover, for $i\in \{1;2;5;6\}$ we have
\begin{equation}\label{eq32}
\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|\leq |B_i|-1.
\end{equation}
{\bf Case 1.} $z=w.$
\begin{claim}\label{claim29}
If $z=w$ then $v_5v_6\in E(G)$, $w\notin N(u_i)$ and $N(u_i)\cap P=\emptyset$ for all $5\leq i\leq 6$.
\end{claim}
\begin{proof}
If $w^{+}v_5\in E(G)$ then we consider the tree $T'=T+w^{+}v_5-wv_5.$ This contradicts the condition (C3). Hence $w^{+}v_5\notin E(G)$. Similarly, we also get $w^{+}v_6\notin E(G)$. Then, since $\{ww^{+}, wv_5, wv_6\}$ does not induce a claw, we obtain $v_5v_6\in E(G)$.\\
For $i\in \{5;6\},$ if $wu_i\in E(G)$ then we return to the situation of Step 1 with the tree $T'=T+u_iw+v_5v_6-wv_5-wv_6,$ which gives a contradiction. Hence $wu_5, wu_6\notin E(G)$. \\
Now suppose that there exists $x\in N(u_i)\cap P$ for some $i\in \{5;6\}$. Then we also return to the situation of Step 1 with the tree $T'=T+xu_i+v_5v_6-wv_5-wv_6,$ which gives a contradiction.\\
Claim \ref{claim29} is proved.
\end{proof}
By Claims \ref{claim26} and \ref{claim29} we have
\begin{equation}\label{eq33}
\sum\limits_{j=1}^7 |N_G(u_j)\cap \{s,w\}|=|N(u_7)\cap \{s,w\}|= |N(t)\cap \{s,w\}|\leq 2.
\end{equation}
By Claims \ref{claim25}, \ref{claim29} we also have
\begin{equation}\label{eq34}
\sum\limits_{j=1}^7 |N_G(u_j)\cap (P_1\cup P_2)|= |N(t)\cap (P_1\cup P_2)|\leq |P_1\cup P_2|=|P_1|+|P_2|.
\end{equation}
By (\ref{eq31})-(\ref{eq34}) we obtain
\begin{align*}
|G|&=\sum\limits_{i=1}^6|B_i|+|P_T[s,t]|\\
&\geq (4+\sum\limits_{i=1}^6\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|)+3+|P_1|+|P_2|\\
&\geq 5+\sum\limits_{i=1}^6\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|+\sum\limits_{j=1}^7|N(u_j)\cap \{s,w\}|+ \sum\limits_{j=1}^7 |N_G(u_j)\cap (P_1\cup P_2)|\\
& \geq 5+\deg (I)\geq 5+\sigma_7(G)\geq 5+(|G|-2)=3+|G|.
\end{align*}
This gives a contradiction.
{\bf Case 2.} $w\not= z$.
\begin{claim}\label{claim30}
If $z\not=w$ then $w^{+}v_5, z^{+}v_6, u_5w^{-}, u_5w, u_6z^{-}, u_6z, u_6w\notin E(G)$ and $v_5w^{-}, v_6z^{-}\in E(G)$.
\end{claim}
\begin{proof}
By the condition (C3), we get $w^{+}v_5, z^{+}v_6, u_6w, w^{+}w^{-}, z^{+}z^{-}\notin E(G)$.\\
Then, since $G$ is claw-free, considering the vertices $w$ and $z$ we get $v_5w^{-}\in E(G)$ and $v_6z^{-}\in E(G)$.\\
Now, if $u_5w^{-}\in E(G)$ (or $u_6z^{-}\in E(G)$) then we may use the proof of Step 1 with the tree $T'=T+u_5w^{-}-ww^{-}$ (or $T'=T+u_6z^{-}-zz^{-}$, respectively) to obtain a contradiction.\\
If $u_5w\in E(G)$ then we obtain a contradiction as in Step 1 by using the tree $T'=T+u_5w+v_5w^{-}-wv_5-ww^{-}.$ Hence $u_5w\notin E(G)$.\\
Similarly, we also get $u_6z\notin E(G)$. Claim \ref{claim30} is proved.
\end{proof}
\begin{claim}\label{claim31} When $z\not= w$, we have
\begin{equation}\label{eq41}
\sum\limits_{j=1}^7 |N_G(u_j)\cap \{s,w,z\}|\leq 4.
\end{equation}
\end{claim}
\begin{proof}
By Claim \ref{claim26} and Claim \ref{claim30} we have the following
\[\sum\limits_{j=1}^7 |N_G(u_j)\cap \{s,w,z\}|=|N(t)\cap\{s,w,z\}|+|N(u_5)\cap \{z\}|\leq 4.\]
\end{proof}
\begin{claim}\label{claim32} When $z\not= w$, we have
\begin{equation}\label{eq42}
\sum\limits_{j=1}^7 |N_G(u_j)\cap Q_1|\leq |Q_1|.
\end{equation}
\end{claim}
\begin{proof}
By the condition (C3) we have $N(u_5)\cap Q_1=N(u_6)\cap Q_1=\emptyset$. Combining with Claim \ref{claim25} we get
\[\sum\limits_{j=1}^7 |N_G(u_j)\cap Q_1|=|N(t)\cap Q_1|\leq |Q_1|.\]
\end{proof}
\begin{claim}\label{claim33} When $z\not= w$, we have
\begin{equation}\label{eq43}
\sum\limits_{j=1}^7 |N_G(u_j)\cap Q_2|\leq |Q_2|.
\end{equation}
\end{claim}
\begin{proof}
By the condition (C3) we have $N(u_6)\cap Q_2=\emptyset$. Combining with Claim \ref{claim25} we obtain
\[\sum\limits_{j=1}^7 |N_G(u_j)\cap Q_2|=|N(t)\cap Q_2|+|N(u_5)\cap Q_2|.\]
First, we will prove that $|N(u_5)\cap N(t)\cap Q_2|\leq 1$. Indeed, suppose that there exist $x,y\in N(u_5)\cap N(t)\cap Q_2$, $x\not= y$. Without loss of generality, we may assume $x\in P_T[y,s]$.\\
If $u_5x^{-}\in E(G)$ then we return to the previous steps with the tree $T'=T+xt+u_5x^{-}-xx^{-}-zz^{-}$ to obtain a contradiction. Hence $u_5x^{-}\notin E(G)$. Then, since $\{xu_5, xt, xx^{-}\}$ does not induce a claw, we obtain $tx^{-}\in E(G)$. Similarly, we also get $ty^{-}\in E(G)$.\\
If $v_3x^{-}\in E(G)$ then the tree $T'=T+x^{-}v_3-tv_3$ yields a contradiction with the condition (C3) or with the proof of Step 2. Hence $v_3x^{-}\notin E(G)$. Similarly, we also have $v_3y^{-}\notin E(G)$.\\
Now, since $\{tv_3, tx^{-}, ty^{-}\}$ does not induce a claw, we get $x^{-}y^{-}\in E(G)$. Then the tree $T'=T+x^{-}y^{-}+u_5y-x^{-}(x^{-})^{-}-yy^{-}$ yields a contradiction with the condition (C3). So $|N(u_5)\cap N(t)\cap Q_2|\leq 1$.
Now, if $N(u_5)\cap N(t)\cap Q_2=\emptyset$ then (\ref{eq43}) holds.
If $|N(u_5)\cap N(t)\cap Q_2|=1$, set $N(u_5)\cap N(t)\cap Q_2=\{x\}$. To complete the proof of (\ref{eq43}), we will prove that $w^{-}\notin N(I)$. Indeed, by Claims \ref{claim25} and \ref{claim30} we get $w^{-}\notin N(u_i)$ for all $i\in \{1;2;3;4;5\}$. Moreover, by the condition (C3) we obtain $w^{-}\notin N(u_6)$. On the other hand, if $w^{-}u_7\in E(G)$ then the tree $T'=T+u_5x+w^{-}u_7-ww^{-}-zz^{-}$ yields a contradiction by Step 1 or Step 2. Hence if $|N(u_5)\cap N(t)\cap Q_2|=1$ then $w^{-}\notin N(I)$. Therefore, Claim \ref{claim33} is proved.
\end{proof}
\begin{claim}\label{claim34}
When $z\not=w$, if $N(u_5)\cap P_2\not= \emptyset$ then $z^{-}\notin N(I)$.
\end{claim}
\begin{proof}
Suppose that there exists $x\in N(u_5)\cap P_2$. By Claim \ref{claim25} and Claim \ref{claim30} we have $z^{-}\notin N(u_i)$ for all $i\in\{1;2;3;4;6\}$.\\
If $u_5z^{-}\in E(G)$ then the tree $T'=T+u_5z^{-}-zz^{-}$ gives a contradiction by the proof of Step 1. Hence $z^{-}\notin N(u_5)$.\\
If $u_7z^{-}\in E(G)$ then we consider the tree $T'=T+u_5x+u_7z^{-}-zz^{-}-xx^{-}$, returning to Step 1 when $x=t^{+}$ and to Step 2 when $x\not= t^{+}$. This gives a contradiction. Hence $z^{-}\notin N(u_7)$.\\
Therefore, Claim \ref{claim34} holds.
\end{proof}
\begin{claim}\label{claim35}
When $z\not=w$, we have $N(u_5)\cap N(u_6)\cap P_2=\emptyset$.
\end{claim}
\begin{proof}
First, we will show that if $y\in N(u_j)\cap P_2$ then $y^{+}\notin N(u_k)$, where $\{j;k\}=\{5;6\}$. Indeed, if $y^{+}u_k\in E(G)$ then we obtain a contradiction as in Step 1 by considering the tree $T'=T+u_jy+u_ky^{+}-yy^{+}-zz^{-}$. Now, suppose that there exists $y\in N(u_5)\cap N(u_6)\cap P_2$. Then $y^{+}\notin N(u_5)\cup N(u_6)$, so $G$ contains the claw $\{yu_5, yu_6, yy^{+}\}$, a contradiction. Therefore $N(u_5)\cap N(u_6)\cap P_2=\emptyset$.
\end{proof}
\begin{claim}\label{claim36}
When $z\not=w,$ we have $|N(u_i)\cap N(t)\cap P_2|\leq 1$ for $i\in\{5,6\}$.
\end{claim}
\begin{proof}
For $i\in\{5,6\}$, suppose that there exist $x,y\in N(u_i)\cap N(t)\cap P_2$, $x\not= y$. Without loss of generality, we may assume that $y\in P_T[t,x]$. \\
If $u_ix^{-}\in E(G)$ then we return to Step 2 with the tree $T'=T+xt+u_ix^{-}-xx^{-}-zz^{-}$, a contradiction. Hence $u_ix^{-}\notin E(G)$. Then, since $\{xu_i, xt, xx^{-}\}$ does not induce a claw, we obtain $tx^{-}\in E(G)$. Similarly, we also get $ty^{-}\in E(G)$.\\
If $v_3x^{-}\in E(G)$ then the tree $T'=T+x^{-}v_3-tv_3$ contradicts the condition (C3). Hence $v_3x^{-}\notin E(G)$. Similarly, we also have $v_3y^{-}\notin E(G)$.\\
Now, since $\{tv_3, tx^{-}, ty^{-}\}$ does not induce a claw, we get $x^{-}y^{-}\in E(G)$. Then the tree
$$ T' = T+x^{-}y^{-}+u_iy-v_iv_i^{-}-yy^{-}$$
contradicts the condition (C3). Claim \ref{claim36} is proved.
\end{proof}
By Claim \ref{claim25} and Claims \ref{claim34}-\ref{claim36} we have
\begin{equation}\label{eq44}
\sum\limits_{j=1}^7 |N_G(u_j)\cap P_2|= |N(u_5)\cap P_2|+|N(u_6)\cap P_2|+|N(t)\cap P_2|\leq |P_2|+1.
\end{equation}
Summing the inequalities (\ref{eq31}), (\ref{eq32}) and (\ref{eq41})-(\ref{eq44}) yields
\begin{align*}
|G|&=\sum\limits_{i=1}^6|B_i|+|P_T[s,t]|\\
&\geq (4+\sum\limits_{i=1}^6\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|)+4+|P_1|+|P_2|\\
&\geq 3+\sum\limits_{i=1}^6\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|+\sum\limits_{j=1}^7|N(u_j)\cap \{s,w,z\}|+ \sum\limits_{j=1}^7 |N_G(u_j)\cap (P_1\cup P_2)|\\
& \geq 3+\deg (I)\geq 3+\sigma_7(G)\geq 3+(|G|-2)=1+|G|.
\end{align*}
This contradicts the assumptions. Step 3 is proved.
\vspace{0.5cm}
{\bf Step 4}. $|L(T)|=6$ and the tree $T$ has exactly four branch vertices of degree 3, called $z,s,t,w$, such that $\{z\}= P_T{[t,s]} \cap P_T{[t,w]}$ (see Figure \ref{Pic4}).
\begin{figure}[h]
\centering
\includegraphics[width=0.4\linewidth]{./pic4}
\caption[Tree T]{The tree $T$ in Step 4}
\label{Pic4}
\end{figure}
Let $L(T)=\{u_1, u_2, u_3,u_4,u_5,u_6\}$ be the set of leaves of $T.$ \\
Let $B_i$ be the vertex set of the component of $T-\{s,w,z,t\}$ such that $L(T)\cap B_i=\{u_i\}$ for $1\leq i\leq 6$, and let $v_i$ denote the unique vertex of $N_T(\{s,w,z,t\})\cap B_i$. In this step, we may assume $B_i\cap N_T(s) \not= \emptyset\ (1\leq i \leq 2)$, $B_j\cap N_T(t)\not= \emptyset\ (3\leq j \leq 4)$ and $B_k\cap N_T(w)\not= \emptyset\ (5\leq k \leq 6)$. Set $P_1=V(P_T[z,s]-\{z,s\})$, $P_2=V(P_T[z,t]-\{z,t\})$, $P_3=V(P_T[z,w]-\{z,w\})$.
We choose the tree $T$ such that:\\
(C4)\ $|P_1|+|P_2|+|P_3|$ is as small as possible.
By the condition (C4), or by returning to Steps 1, 2 and 3 if necessary, we have the following.
\begin{claim}\label{claim41}
$v_1v_2, v_3v_4, v_5v_6\in E(G)$ and $N(u_i)\cap (P_1\cup P_2\cup P_3\cup \{s,t,w\})=\emptyset$ for all $1\leq i\leq 6$.
\end{claim}
Set $u_7=z,\ I=\{u_1;u_2;u_3;u_4;u_5;u_6;u_7\}$. \\
Repeating arguments similar to those in the previous steps and combining them with Claim \ref{claim41}, we obtain that $I$ is an independent set and
\begin{equation}\label{eq141}
\sum\limits_{j=1}^7 |N_G(u_j)\cap (P_1\cup P_2\cup P_3)|= |N(z)\cap (P_1\cup P_2\cup P_3)|\leq |P_1\cup P_2\cup P_3|=|P_1|+|P_2|+|P_3|.
\end{equation}
Moreover, using the same arguments as in the previous steps combined with Claim \ref{claim41}, we also have the following.
\begin{equation}\label{eq142}
\sum\limits_{j=1}^7 |N_G(u_j)\cap \{s,t,w\}|= |N(z)\cap \{s,t,w\}|\leq 3.
\end{equation}
\begin{equation}\label{eq143}
\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|\leq |B_i|-1 \text{ for all $i\in \{1;2; 3; 4; 5; 6\}$}.
\end{equation}
By (\ref{eq141}), (\ref{eq142}) and (\ref{eq143}) we obtain
\begin{align*}
|G|&=\sum\limits_{i=1}^6|B_i|+4+|P_1|+|P_2|+|P_3|\\
&\geq (6+\sum\limits_{i=1}^6\sum\limits_{j=1}^7 |N_G(u_j)\cap B_i|)+1+\sum\limits_{j=1}^7 |N_G(u_j)\cap \{s,t,w\}|+\sum\limits_{j=1}^7 |N_G(u_j)\cap (P_1\cup P_2\cup P_3)|\\
&\geq 7+\deg(I)\geq 7+\sigma_7(G)\geq 7+(|G|-2)=5+|G|.
\end{align*}
This is impossible. Step 4 is completed.
This completes the proof of Theorem \ref{thm-mainB}.
\vspace{0.5cm}
{\bf Proof of Theorem \ref{thm-mainA}}.
Suppose, to the contrary, that $G$ has no spanning tree with at most 2 branch vertices. Let $T$ be a spanning tree of $G.$ Then $|B(T)| \geq 3.$ So we have the following
\begin{equation*}
|L(T)|=2+\sum\limits_{v\in B(T)}(\deg_T(v)-2)\geq 2+3\cdot(3-2)=5.
\end{equation*}
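As a quick numerical aside (not part of the proof), the identity $|L(T)|=2+\sum_{v\in B(T)}(\deg_T(v)-2)$ relating leaves and branch vertices of a tree can be checked on a small hypothetical example:

```python
# Numerical check of |L(T)| = 2 + sum_{v in B(T)} (deg_T(v) - 2).
# The tree below is a made-up example, not one from the paper.
from collections import defaultdict

def leaf_branch_identity(edges):
    """Return (#leaves, 2 + sum over branch vertices of (deg - 2))."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    leaves = sum(1 for d in deg.values() if d == 1)
    branch_sum = 2 + sum(d - 2 for d in deg.values() if d >= 3)
    return leaves, branch_sum

# Two adjacent branch vertices c and d, each of degree 3:
edges = [("c", "a"), ("c", "b"), ("c", "d"), ("d", "e"), ("d", "f")]
print(leaf_branch_identity(edges))  # leaves a, b, e, f -> (4, 4)
```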
On the other hand, by the assumptions of Theorem \ref{thm-mainA}, we may apply Theorem \ref{theorem KKMO} with $k=3$ to show that $G$ has a spanning tree $T$ with at most $5$ leaves. Hence $|L(T)|= 5$ and $T$ has exactly three branch vertices $s, w, t$ of degree 3. Here, we may assume that $w\in P_T[t,s]$.
Now we use the same notations as in the proof of Step 1 of Theorem \ref{thm-mainB}. Set $X=\{u_1, u_2, u_3, u_4, u_5, u_6=t\}.$ Then $X$ is an independent set.
Repeating the same arguments as in the proof of Step 1 of Theorem \ref{thm-mainB}, we obtain the following.
\begin{equation}\label{eq51}
\sum\limits_{j=1}^6 |N_G(u_j)\cap B_i|\leq |B_i|, \text{ for all $i\in \{3; 4\}$}.
\end{equation}
\begin{equation}\label{eq52}
\sum\limits_{j=1}^6|N_G(u_j)\cap B_i|\leq
|B_i|-1, \text{ for all $i\in \{1; 2;5\}$}.
\end{equation}
\begin{equation}\label{eq53}
\sum\limits_{j=1}^6 |N_G(u_j)\cap \{s,w\}|=0.
\end{equation}
\begin{equation}\label{eq54}
\sum\limits_{j=1}^6 |N_G(u_j)\cap P_1|\leq \sum\limits_{j=1}^7|N_G(u_j)\cap P_1|\leq |P_1|.
\end{equation}
\begin{equation}\label{eq55}
\sum\limits_{j=1}^6 |N_G(u_j)\cap P_2|\leq \sum\limits_{j=1}^7|N_G(u_j)\cap P_2|\leq |P_2|.
\end{equation}
By (\ref{eq51})-(\ref{eq55}) we conclude that
\begin{align*}
|G|&=\sum\limits_{i=1}^5|B_i|+|P_T[t,s]|\\
&\geq (3+\sum\limits_{i=1}^5\sum\limits_{j=1}^6 |N_G(u_j)\cap B_i|)+3+|P_1|+|P_2|\\
&\geq 6+\sum\limits_{i=1}^5\sum\limits_{j=1}^6 |N_G(u_j)\cap B_i|+\sum\limits_{j=1}^6 |N_G(u_j)\cap P_T[t,s]|\\
& \geq 6+\deg (X)\geq 6+\sigma_6(G)\geq 6+(|G|-5)=1+|G|.
\end{align*}
This gives a contradiction.\\
Theorem \ref{thm-mainA} is proved.
{\bf Acknowledgements.} The research is supported by the NAFOSTED Grant of Vietnam (No.101.04-2018.03).
\section{Introduction}
The Clean Air Act, one of the most comprehensive and expensive air quality regulations in the world, mandates that the National Ambient Air Quality Standards (NAAQS) be routinely reviewed. If evidence of the adverse health effects of exposure to ambient air pollution at levels below the NAAQS is established based on the peer reviewed literature, then the NAAQS must be lowered, even at the cost of hundreds of millions of dollars. For that reason, researchers routinely address the following question: is exposure to levels of air pollution, even below the NAAQS, harmful to human health?
With the next review of the NAAQS for fine particulate matter (PM$_{2.5}$) scheduled to be completed by the end of the year 2020, the determination of whether exposure to PM$_{2.5}$ at levels below the NAAQS is harmful to human health is subject to an unprecedented level of scrutiny. More recently, because of the highly contentious nature surrounding air pollution regulations and the lowering of the NAAQS particularly, there is increasing pressure to cast this question within a causal inference framework \citep{Zigler2014a}. The method in this paper is motivated by the need to address this critically important question by flexibly estimating an exposure response curve while reliably eliminating confounding bias, especially \textit{at low levels} of exposure.
The literature on the harmful effects of air pollution is very extensive (see, for example, \cite{Dominici2002, Eftim2008, Zeger2008, Zanobetti2007,Crouse2015, Crouse2016, Di2017association, Di2017air, Berger2017, Makar2018, Lim2018association}). However, significant methodological gaps remain in the context of estimating health effects at very low levels.
Environmental research studying the health effects of exposure to low levels of ambient air pollution has either examined the relationship in the subset of the sample residing in areas with ambient concentrations below a pre-specified threshold \citep{Lee2016acute, Shi2016, Di2017association, Di2017air, Schwartz2017estimating, Makar2018, Wang2018longterm, Schwartz2018national}, or has employed regression approaches for ER estimation across the observed range of pollution concentrations \citep{Daniels2000, Dominici2002, Schwartz2002, Bell2006, Hart2015association, Thurston2016ambient, Jerrett2017comparing, Weichenthal2017biomass, Lim2018association}.
In either case, confounding adjustment in air pollution studies is most often performed using either a pre-specified set of covariates, or a set of covariates which is decided upon using an ad-hoc variable selection procedure. Such a procedure is often based on the statistical significance of covariates' coefficients in an outcome regression, or on the change in the pollution concentration's coefficient in an outcome model including and excluding sets of covariates \citep{Devries2016, Pinault2016, Garcia2016, Weichenthal2017biomass}.
Generally, regression and semi-parametric modeling approaches for ER estimation such as generalized linear models or generalized additive models \citep{Hastie1986, Daniels2004, Shaddick2008, Shi2016, Dominici2002} make the following assumptions:
1) the set of potential confounders that are included into the regression model among a potentially large set of available covariates is specified a priori;
2) uncertainty arising from the variable selection techniques is not accounted for;
3) the same potential confounders with constant confounding strength are considered when estimating the health effects across all exposure levels (we refer to this as \textit{global confounding adjustment});
and 4) the shape of the ER function is modelled as a spline, a polynomial, or linear with a threshold.
Even though ER estimation in air pollution research has mostly remained outside the potential outcome framework, there has been substantial work in ER estimation within the causal inference literature. \cite{Hirano2004} introduced the generalized propensity score (GPS) in order to adjust for confounding when estimating the effects of a continuous exposure. \cite{Flores2012estimating} estimated a causal ER function employing a weighted locally linear regression with weights defined based on the GPS. \cite{Kennedy2017} introduced a doubly robust approach for estimating the causal ER function using flexible machine learning tools.
These approaches are very promising and manifest the growing interest in principled causal inference methods for continuous exposures. However, none of the existing approaches explicitly accommodates the possibility that in ER estimation, and in contrast to binary treatments, confounding \textit{might differ across levels of the exposure}. In fact, even though some of the approaches could be altered to allow for a different set of confounders or different confounding strength across exposure levels, current implementations of causal methodology for ER estimation have assumed global confounding of pre-selected covariates. Furthermore, it is unclear how these approaches perform in the case of confounding that varies across exposure levels. To address this, confounding adjustment and confounder selection need to be meaningfully extended to the case of a continuous exposure to provide useful scientific guidance with regard to covariates' confounding importance \textit{at different exposure levels}.
In our exploratory analyses (\cref{sec:data_description}), we report that the relationship between ambient PM$_{2.5}$ concentrations and the rate of hospitalization for cardiovascular diseases might be confounded by a {\it different set of covariates} at the low versus at the high exposure levels, or by covariates with \textit{different confounding strength}. We refer to this phenomenon as \textit{local confounding}. We argue that --especially in the context of estimating causal effects at low levels-- local confounding adjustment is deemed necessary.
To target local confounding, if exposure levels with different confounding were known, one could adopt a separate model at each level and adjust for \textit{all} measured variables using one of the approaches described above. However, even if the number of covariates and local sample size rendered such an approach computationally feasible, including unnecessary confounders in the regression model could lead to inefficient estimation of causal effects, especially at very low levels of exposure where data are sparse. Data driven methods to select a minimal necessary set of covariates to be included into an outcome model for estimation of causal effects of binary treatments have been proposed \citep{Luna2011a, Wang2012, Wilson2014}, but to our knowledge, they have not been extended to the context of ER estimation with local confounding adjustment.
The goal of this paper is to overcome the challenges described above by introducing
a Bayesian framework for the estimation of a causal ER curve called LERCA (Local Exposure Response Confounding Adjustment).
We cast our approach within a causal inference framework by introducing the concept of {\it experiment configuration}
$\bm{\bar{s}} = (s_0, s_1, \dots, s_{K + 1})$, where $[s_{k-1}, s_k)$ denotes a specific range of exposure values.
We use the term {\it experiment} to mimic the hypothetical assignment of a unit to exposure value within $[s_{k-1}, s_k)$. Within each experiment, i.e. \textit{locally} in the exposure range $[s_{k-1}, s_k)$, we assume that: 1) the ER is linear; 2) the potential confounders of the ER relationship are unknown but observed; and 3) the strength of the local confounding is also unknown. Across experiments, we require that the ER is continuous at the points $\bm{\bar s}$. Importantly, the internal points of the experiment configuration, $s_1, s_2, \dots, s_K$, are themselves unknown and have to be estimated from the data.
Our work contributes to various components in the literature. First, we contribute to the estimation of causal effects of continuous treatments by extending our understanding of confounding in these settings. Second, our work has connections to the literature on Bayesian free-knot splines \citep{Denison1998automatic, Dimatteo2001bayesian}. The location of the knots (internal points of the experiment configuration) is informed by both the ER fit, and the necessity for local confounding adjustment. Lastly, our work contributes to the highly controversial and politically charged issue of estimating the causal effects of population exposure to low levels of ambient air pollution.
Even though our motivation and focus is the effects of air pollution, the statistical challenges related to ER estimation at low exposure levels are common across many fields, such as toxicology \citep{Scholze2001}, and clinical trials \citep{Babb1998}.
In fact, the methodology presented in this paper can be used to evaluate regulatory settings of potential harmful substances, and can be routinely used to assess health effects of low level exposures.
Such applications include the effects of lead \citep{Chiodo2004, Jusko2008}, environmental contaminants \citep{vanderoost2003}, radiation \citep{National2006, Fazel2009}, and pesticides \citep{Mackenzie2010, Androutsopoulos2012}.
In \cref{sec:data_description} we introduce our motivating data set, discuss the difference between personal exposures and ambient concentrations in air pollution studies, and illustrate that local confounding is likely to be present in our study. In \cref{sec:notation}, we introduce the notation and assumptions on which LERCA in \cref{sec:method} is based. In \cref{sec:simulations} we show through simulations that both off-the-shelf and state of the art approaches for ER estimation perform poorly when local confounding is present, and we compare LERCA to alternatives in the presence of global confounding. Finally, in \cref{sec:application}, we use LERCA to estimate the causal ER function relating ambient PM$_{2.5}$ concentrations with log cardiovascular hospitalization rates in the Medicare population of 5,362 zip codes. Limitations and potential extensions are discussed in \cref{sec:discussion}.
\section{Data description, ambient concentrations, local confounding}
\label{sec:data_description}
In this section we illustrate that, in our study, there might exist a different set of confounders at the low and the high levels of ambient pollution concentrations. LERCA is motivated to overcome this particular challenge.
\subsection{Data description}
We start by briefly describing our data set, which is a collection of linked data from many sources. The unit of observation is the zip code $i$, with sample size $N= 5,362$. For each zip code,
we acquired information on several potential confounders, denoted by $C_{ij}$ for $j=1, 2, \ldots, p$ and $p=27$, capturing socio-economic, demographic, climate, and risk factor information. The full set of zip code level covariates is described in Table \ref{app_tab:Table1}.
We calculate the outcome $Y_i$ defined as log hospitalization rate for cardiovascular diseases (codes ICD-9 390 to 459) among Medicare beneficiaries residing in zip code $i$ in the year 2013.
Since Medicare beneficiaries are, in their plurality, individuals over the age of 65, our focus is on the health effect of PM$_{2.5}${} on the elderly. For each zip code $i$, we assign exposure $X_i$ defined as the average of daily levels of ambient PM$_{2.5}$ concentrations for the years 2011 and 2012 recorded by EPA (U.S. Environmental Protection Agency) monitors within a 6 mile radius of zip code $i$'s centroid.
The values of $X_i$ range from $2.7$ to $18.3\ \mu g \slash m^3$ (see \cref{fig:PM_2013}).
We define $X_i$ using the two years prior to the year whose outcome we analyze in order to respect the temporal ordering of treatment and outcome when drawing causal conclusions. Longer time lags could be considered, but, in such settings, our analysis would potentially be more susceptible to population mobility.
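The exposure assignment described above can be sketched as follows. The monitor coordinates and readings are made up, and the actual linkage is more involved than this toy version:

```python
# Sketch (with invented data) of the exposure assignment: average all
# daily PM2.5 readings from monitors within 6 miles of a zip centroid.
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def zip_exposure(centroid, monitors, radius=6.0):
    """Mean of all daily readings from monitors within `radius` miles."""
    readings = [x for (lat, lon, daily) in monitors
                if haversine_miles(centroid[0], centroid[1], lat, lon) <= radius
                for x in daily]
    return sum(readings) / len(readings) if readings else None

centroid = (42.36, -71.06)                     # hypothetical zip centroid
monitors = [(42.37, -71.05, [8.1, 9.3, 7.6]),  # ~1 mile away: included
            (43.00, -71.06, [15.0, 14.2])]     # ~44 miles away: excluded
print(zip_exposure(centroid, monitors))        # mean of [8.1, 9.3, 7.6]
```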
\begin{figure}[!t]
\begin{center}
\includegraphics[width = 0.65\textwidth]{application_ZIP_PM.pdf}
\caption{Average levels of PM$_{2.5}$ for the years 2011-2012 for each zip code $i$ included into the analysis.}
\label{fig:PM_2013}
\end{center}
\end{figure}
Since our definition of $X_i$ requires the presence of an EPA monitor within 6 miles of a zip code's centroid, the zip codes included in our study are a subset of the full set of zip codes in the continental U.S. Supplement \ref{sec:data_details} includes a detailed description of data linkage (EPA monitors, Medicare, others), and descriptive statistics across zip codes in the whole continental U.S. and only those included in our study. Excluded zip codes resemble, in general, those included in our analysis, but are perhaps in more rural areas, with lower population density, and higher proportions of white population and unemployment.
\subsection{Ambient concentrations versus personal exposures}
In order to agree with existing causal inference literature for continuous treatments, we often refer to measurements $X_i$ as zip code $i$'s \textit{exposure}, and a range of ambient pollution concentration as an \textit{exposure level}. However, a zip code's measurement of ambient concentration $X_i$ might be substantially different from the personal exposure of an individual residing in that zip code. \cref{fig:outdoor_personal} shows a hypothetical DAG relating ambient and indoor pollution concentrations with individuals' personal exposures and health outcomes. Ambient pollution concentrations act on an individual's outcome only through the individuals' personal exposures.
In this paper, we focus on estimating the causal effects of ambient PM$_{2.5}${} on cardiovascular health outcomes.
This is, potentially, the most interesting question from a policy perspective, since policy regulations (and the NAAQS) are set based on knowledge of the effect of ambient concentrations.
The implications of using ambient concentrations instead of personal exposures to study the effect of pollution concentrations are discussed in \cref{sec:discussion}.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node[text centered] (useless) {};
\node[above = 0.8 of useless, text centered] (indoor) {Indoor PM$_{2.5}$};
\node[right = 2 of useless, text centered] (personal) {Personal PM$_{2.5}$};
\node[right = 1 of personal, text centered] (useless2) {};
\node[above = 0.8 of useless2, text centered] (conf2) {$\widetilde{\bm C}_2$};
\node[above = 0.8 of conf2, text centered] (conf3) {$\widetilde{\bm C}_3$};
\node[right = 2 of personal, text centered] (health) {Health};
\node[below = 0.8 of personal, text centered] (conf1) {$\widetilde{\bm C}_1$};
\node[left = 2 of useless, text centered] (pm) {Ambient PM$_{2.5}$};
\draw[->] (pm) -- (personal);
\draw[->] (indoor) -- (personal);
\draw[->] (conf3) -- (indoor);
\draw[->] (conf3) -- (health);
\draw[->] (conf2) -- (personal);
\draw[->] (conf2) -- (health);
\draw[->] (pm) -- (indoor);
\draw[->] (personal) -- (health);
\draw[->] (conf1) -- (pm);
\draw[->] (conf1) -- (health);
\end{tikzpicture}
\caption{DAG relating exposure to ambient PM$_{2.5}${} concentrations to personal exposures and health outcomes. Covariate sets $\widetilde{\bm C}_1$, $\widetilde{\bm C}_2$, $\widetilde{\bm C}_3$ represent potentially different sets of confounders between the health outcome and ambient, personal or indoor PM$_{2.5}${} concentrations or exposures. Arrows represent potential (but not necessarily present) relationships.}
\label{fig:outdoor_personal}
\end{figure}
\subsection{Potential presence of local confounding in our study}
In the case of binary treatments, whether a covariate acts as a confounder is often evaluated by checking whether there exists significant imbalance in the covariate distribution of the treated and control groups. For continuous exposures, there is no direct counterpart to covariate balance since units are not separated into two groups. Instead, exploratory analyses for the presence of confounding are often based on covariates' strength in predicting the exposure through regression models \citep{Imai2004causal}. Then, a covariate's p-value in a model for the exposure is used to investigate whether it might be a confounding variable.
We use a related approach to illustrate the potential presence of local confounding in our data. We considered two subsets of zip codes: 1) zip codes with low ambient concentrations ($<$8$\mu g\slash m^3$; 817 observations); and 2) zip codes with high ambient concentrations ($>$11.5$\mu g\slash m^3$; 672 observations). Even though this definition of the low and high exposure levels is arbitrary for the purpose of our illustration, this choice ensures a similar number of observations and similar range of exposure values within the two levels. Within each exposure level \textit{separately}, we considered a linear regression of ambient pollution concentration on each covariate, and evaluated the covariate's predictive strength through its p-value.
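The exploratory check just described can be sketched on simulated data as follows. The cutpoints of 8 and 11.5 $\mu g/m^3$ mirror those in the text, but the covariates and their effects are invented for illustration:

```python
# Sketch of the local-confounding check: within each exposure stratum,
# regress the exposure on each covariate and record the slope p-value.
import numpy as np
from scipy.stats import linregress

def stratum_pvalues(x, covariates, strata):
    """p-value of each covariate's slope in a regression of x on that
    covariate, computed separately within each exposure stratum."""
    out = {}
    for name, lo, hi in strata:
        mask = (x > lo) & (x < hi)
        out[name] = {cov: linregress(c[mask], x[mask]).pvalue
                     for cov, c in covariates.items()}
    return out

rng = np.random.default_rng(0)
n = 2000
c1 = rng.normal(size=n)            # will predict x only at high levels
c2 = rng.normal(size=n)            # pure noise covariate
x = 8.0 + 3.0 * rng.normal(size=n)
high = x > 11.5
x[high] += 0.8 * c1[high]          # induce a purely local association

pv = stratum_pvalues(x, {"c1": c1, "c2": c2},
                     [("low", -np.inf, 8.0), ("high", 11.5, np.inf)])
print(pv["high"]["c1"] < 0.05)     # c1 is a strong predictor at high levels
```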
\begin{figure}[!b]
\centering
\includegraphics[width=0.95\textwidth]{application_illustrate_confounding_app_pvalues.pdf}
\vspace{-20pt}
\caption{Covariate p-values in predicting the exposure, separately at the low (blue: $<8\mu g/m^3$) and high (red: $>11.5\mu g/m^3$) exposure levels.}
\label{fig:app_pvalues}
\end{figure}
\cref{fig:app_pvalues} shows the p-value of the covariates in the $p$ regression models, and for the two exposure levels.
We see that some variables such as population density (\texttt{Population/SQM}) and the percentage of the population with less than a high school education (\texttt{\% Below HS}) are predictive of ambient concentrations in both low and high exposure levels. However, other variables, such as the median household value (in logarithm -- \texttt{House Value}), are only predictive of ambient pollution concentrations at the high exposure levels. The opposite is true for the percentage of population that is white (\texttt{\% White}).
Such initial investigation indicates that different variables might act as predictors of the ambient exposure at different exposure levels.
In Supplement \ref{app_sec:local_confounding}, we show the estimated covariates' coefficients whose p-values are shown in \cref{fig:app_pvalues}. Since the two exposure levels are relatively balanced in terms of number of observations and range of exposure values, the magnitude of the p-values in \cref{fig:app_pvalues} is directly comparable to the magnitude of the estimated coefficients. This indicates that initial investigation of local confounding could be equivalently performed in terms of estimated coefficients or p-values.
In Supplement \ref{app_sec:local_confounding}, we also consider a similar exploratory analysis to investigate which covariates are predictors of the health outcome at the low and high exposure levels separately. Combining the results presented there with the ones in \cref{fig:app_pvalues}, there is evidence that the variables that confound the ER relationship might differ across levels of the exposure, leading to {\it local confounding}. For example, the zip code median household value (\texttt{House Value}) is predictive of both ambient air pollution and cardiovascular hospitalization rates at the high exposure levels, but is not predictive of ambient air pollution at low exposure levels. Additionally, there is indication that the percentage of the population with less than a high school degree (\texttt{\% Below HS}) is a confounder at the low exposure levels, whereas the same variable is not predictive of the health outcome at the high exposure levels.
\section{Causal ER, the experiment configuration, and the local ignorability assumption}
\label{sec:notation}
We follow the potential outcome framework \citep{Neyman1923,Rubin1974,Hirano2004}, and under the stable unit treatment value assumption (SUTVA: no interference, no hidden versions of the treatment \citep{Rubin1980}), we use $Y_i(x)$ to denote the potential outcome for observation $i$ at exposure $x \in \mathcal{X}$, where $\mathcal X \subset \mathbb{R}$ is the interval including all possible exposure values. Then, $\{Y_i(x), x \in \mathcal{X} \}$ is unit $i$'s ER curve, and $\{ \overline Y(x) = E[Y_i(x)], x \in \mathcal{X}\}$ is the population average ER curve.
Assuming $\overline Y(x)$ is differentiable as a function of $x$, we define the instantaneous causal effect
\[
\Delta(x) = \lim_{h \rightarrow 0} \frac{\overline Y(x+ h) - \overline Y(x)} h.
\]
A $\Delta(x) \neq 0$ implies that variation in the exposure in a neighborhood of $x$ has a causal effect on the expected outcome. We also define the population average causal effect of an exposure shift from $x$ to $x + \delta$, as $CE_\delta(x) = \overline Y(x + \delta) - \overline Y(x) = \int_x^{x + \delta} \Delta(t) \mathrm{d}t$.
The observed outcome $Y_i$ is equal to the potential outcome at the observed exposure $Y_i(X_i)$.
Under the weak ignorability assumption, which states that the treatment is as good as randomized conditional on observed covariates,
$X \perp\!\!\!\perp Y(x) | \bm C$,
and provided that every subject in the population can experience any $x \in \mathcal{X}$, $\overline Y(x)$ is identifiable using the observed data \citep{Hirano2004}.
Then, a minimal confounding adjustment set $\bm C^* \subseteq \bm C$ is a set of covariates which satisfies $X \perp\!\!\!\perp Y(x) | \bm C^*$, but $X \not\!\perp\!\!\!\perp Y(x) | \bm C^{**}$ for any $\bm C^{**}$ strict subset of $\bm C^*$ \citep{Luna2011a,Wang2012,Vansteelandt2012}.
In this paper, we are interested in addressing the possibility that the minimal sufficient adjustment set $\bm C^*$ varies across exposure levels. We formalize this by introducing the \textit{experiment configuration}.
Let $K$ denote a fixed positive integer, and $\text{min}= \inf \mathcal X$ and $\text{max}=\sup \mathcal{X}$ denote the known and fixed minimum and maximum values of the exposure range $\mathcal{X}$. Then, $\mathbf{\bar{s}} = (s_0 = \min, s_1, s_2, \dots, $ $s_K, s_{K + 1} = \max)$ is the experiment configuration which defines a partition of the exposure range in $K + 1$ experiments $g_k = [s_{k-1}, s_k)$. We use $\bm s$ to denote the internal points $s_1, s_2, \dots, s_K$. In \cref{fig:ER_example}, a hypothetical exposure response function is plotted where $\mathbf{\bar{s}}$ defines a total of 4 experiments ($K = 3$).
Then,
$\bm C^*_k$ is a minimal sufficient adjustment set in experiment $k$ if it satisfies
\begin{equation}
X \perp\!\!\!\perp Y(x) | \bm C^*_k,\ \text{for all } x \in g_k,
\label{eq:ignor_exper}
\end{equation}
and (\ref{eq:ignor_exper}) does not hold for any strict subset of $\bm C^*_k$.
The sets $\bm C^*_k$ can be disjoint or identical, and they overlap if the same variable is necessary for confounding adjustment in more than one experiment.
\begin{figure}[H]
\begin{center}
\includegraphics[width = 0.65\textwidth]{ER_RJ_example.pdf}
\end{center}
\caption{ER curve with exposure range partitioned by $\bar{\bm s}$ in 4 experiments.}
\label{fig:ER_example}
\end{figure}
\section{ER estimation in the presence of local confounding}
\label{sec:method}
Motivated by the evidence of local confounding between ambient PM$_{2.5}${} concentrations and cardiovascular hospitalizations discussed in \cref{sec:data_description}, we introduce
LERCA: Local Exposure Response Confounding Adjustment. In order to build intuition, we do so for a fixed experiment configuration in \cref{sec:fixed_s}. LERCA with unknown $\bm s$ is presented in \cref{sec:unknown_s}. The choice of $K$ is discussed in \cref{subsec:choosing_K}.
\subsection{Known experiment configuration}
\label{sec:fixed_s}
Assume for now a known experiment configuration $\mathbf{\bar{s}}$. Then, locally, that is for $x \in g_k = [s_{k-1}, s_k)$, we assume the following pair of exposure and outcome models:
\begin{equation}{\small
\begin{aligned}
p(x |\bm C = \bm c, x \in g_k) & =
\phi \big(x;\ \delta_{k0}^X + \textstyle{\sum_{j = 1}^p} \alpha_{kj}^X \delta_{kj}^X c_{j},\ \sigma_{k,X}^2 \big)\\
p(y|X = x, \bm C = \bm c, x \in g_k) & =
\phi \big( y;\ \delta_{k0}^Y + \beta_k (x - s_{k - 1}) + \textstyle{\sum_{j = 1}^p} \alpha_{kj}^Y \delta_{kj}^Y c_j,\ \sigma_{k,Y}^2 \big)
\end{aligned}}
\label{eq:likelihoods}
\end{equation}
where $\phi(\cdot;\mu, \sigma^2)$ denotes the normal density with mean $\mu$ and variance $\sigma^2$, and $\alpha_{kj}^X \in \{0, 1\}$ indicates whether covariate $C_j$ is included in the exposure model of the $k^{th}$ experiment ($\alpha_{kj}^X=1$) or not ($\alpha_{kj}^X=0$). The parameter $\alpha_{kj}^Y$ has the same interpretation, but for the outcome model.
The parameter $\beta_k$ denotes the instantaneous change in the expected outcome associated with a local variation in exposure for $x \in g_k$, adjusted for the $C_j$s that have $\alpha^Y_{kj}=1$.
Even though all parameters depend on which covariates are included in the corresponding model, we do not explicitly state this dependence for notational simplicity.
Model (\ref{eq:likelihoods}) allows for a different set of variables and different covariate coefficients across experiments.
If a minimal confounding adjustment set for experiment $k$ is included in the outcome model and the mean functional form is correctly specified, $\beta_k$ is an unbiased estimator of the instantaneous effect $\Delta(x)$, for $x \in g_k$. Similarly, $\overline Y(x)$ is identified by $E_{\bm C}\{ E[Y | X = x, \bm C] \}$, which can be estimated by averaging the conditional expectation $E[Y | X = x, \bm C_i]$ over the units in our sample.
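For concreteness, the pair of local models in \cref{eq:likelihoods} can be sketched numerically as below; the parameter container and all field names are our own notational choices, not the implementation in the LERCA software:

```python
import numpy as np

def normal_logpdf(z, mu, sig2):
    return -0.5 * (np.log(2 * np.pi * sig2) + (z - mu) ** 2 / sig2)

def loglik_experiment_k(x, y, C, s_km1, th):
    """Log-likelihood of the local exposure and outcome models for units
    with x in g_k.

    th holds the experiment-k parameters (hypothetical names): inclusion
    indicators a_x, a_y in {0,1}^p, coefficients d_x, d_y, intercepts
    d0_x, d0_y, slope beta, and variances s2_x, s2_y.
    """
    mu_x = th['d0_x'] + C @ (th['a_x'] * th['d_x'])
    mu_y = (th['d0_y'] + th['beta'] * (x - s_km1)
            + C @ (th['a_y'] * th['d_y']))
    return (normal_logpdf(x, mu_x, th['s2_x']).sum()
            + normal_logpdf(y, mu_y, th['s2_y']).sum())
```

Note that a covariate contributes to a mean only when its inclusion indicator equals one, mirroring the role of $\alpha^X_{kj}$ and $\alpha^Y_{kj}$.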
In \cref{subsec:prior_alphas} we discuss how the prior distribution on the inclusion indicators is chosen to target confounding adjustment. In \cref{subsec:prior_continuity}, we discuss prior specification for outcome model coefficients that ensures borrowing of information across experiments and ER continuity across the exposure range. But first we address two questions that naturally arise from the specification of model \cref{eq:likelihoods}. First, we clarify the connection between LERCA and a model that specifies the ER relationship using linear splines in \cref{subsec:connection_splines}. Then, in \cref{subsec:connection_separate}, we discuss how LERCA compares to a model that is fit separately within each experiment $g_k$.
\subsubsection{Connection to linear splines}
\label{subsec:connection_splines}
In the outcome specification of model \cref{eq:likelihoods}, the term $\beta_k (x - s_{k-1})$ in the mean functional could be substituted by $\beta_k x$, with $- \beta_k s_{k - 1}$ absorbed in the intercept. However, specifying the model so as to include $\beta_k (x - s_{k-1})$ makes explicit the connection between model \cref{eq:likelihoods} and a model where the ER relationship is specified using linear splines with knots $\bm s$. Furthermore, this specification significantly simplifies prior elicitation to ensure ER continuity (see \cref{subsec:prior_continuity}), and posterior sampling satisfying the continuity condition (see Supplement \ref{app_sec:MCMC}).
Even though the outcome model in \cref{eq:likelihoods} resembles a linear splines model, there is a \textit{key} distinction between the two models.
In model \cref{eq:likelihoods}, different experiments $g_k$ are allowed to have a different slope for the exposure ($\beta_k$), a different set of outcome predictors (covariates with $\alpha_{kj}^Y = 1$), or the same set of predictors but with different coefficients ($\delta_{kj}^Y$). Therefore, points $\bm s$ in \cref{eq:likelihoods} represent a change in the slope or a change in the outcome model covariate adjustment.
On the other hand, a model that uses splines for the exposure-response relationship only allows $\beta_k$ to vary with $k$. In this sense, a splines model is a sub-case of model \cref{eq:likelihoods}, namely the one with $\alpha_{kj}^Y$ and $\delta_{kj}^Y$ held constant across $k$.
The assumption of local linearity (a linear effect of the exposure on the outcome within each experiment) still permits global non-linearity, and can be relaxed using higher order splines. However, for our study of the health effects of ambient air pollution at low concentrations, previous research indicates that the relationship between air pollution and cardiovascular outcomes is linear \citep{Thurston2016ambient, Lim2018association} or supra-linear \citep{Crouse2015, Pinault2017associations}, shapes that our model can accommodate.
\subsubsection{Connection to a separate model across experiments}
\label{subsec:connection_separate}
A natural question that arises from the LERCA model specification in \cref{eq:likelihoods} is how LERCA compares to fitting a separate outcome model within each experiment $g_k$. Doing so would still allow for different confounders and different confounding strength at different exposure levels.
However, a separate model within each experiment would not borrow any information across exposure levels, and could estimate an ER that is not continuous at the points $\bm s$.
In contrast, LERCA borrows information across exposure levels by ensuring that the estimated ER is continuous everywhere (see \cref{subsec:prior_continuity}). If higher order polynomials were used within each experiment, LERCA, similarly to splines, could be altered to accommodate higher order smoothness across the exposure range.
\subsubsection{Prior distribution on inclusion indicators for confounding adjustment}
\label{subsec:prior_alphas}
We build upon the work by \cite{Wang2012, Wang2015} to assign an informative prior on covariates' local inclusion indicators $(\alpha^X_{kj}, \alpha^Y_{kj})$. This prior choice ensures that model averaging assigns high posterior weight to outcome models that include a minimal confounding adjustment set separately for each exposure range, and it specifies
\begin{align*}
& \frac{P(\alpha^Y_{kj} = 1 | \alpha^X_{kj} = 1)}{P(\alpha^Y_{kj} = 0 | \alpha^X_{kj} = 1)} = \omega \; \mbox{with} \; \omega > 1, \; \mbox{independently for all} \; j, k.
\numberthis
\label{eq:BAC_prior}
\end{align*}
By specifying \cref{eq:BAC_prior}, a variable $C_j$ is assigned a high prior probability of inclusion in the outcome model if it is also included in the exposure model ($x \in g_k$ \& $\alpha^X_{kj} = 1$). \citet{Wang2012} and \cite{Antonelli2017guided} show that, for binary treatments, this informative prior leads to outcome models that include the minimal set of true confounders with higher posterior weights than model selection approaches that are based solely on the outcome model. In our context, this experiment-specific prior specification ensures that, locally, covariates in the minimal set $\bm C^*_k$ are included in the outcome model of experiment $k$ with high posterior probability.
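A sketch of sampling inclusion indicators under the prior odds in \cref{eq:BAC_prior} follows. Treating the marginal exposure-model inclusion probability as fixed, and taking even odds for outcome-model inclusion when $\alpha^X_{kj} = 0$, are simplifying assumptions on our part:

```python
import numpy as np

def sample_alphas(p, omega, p_alpha_x=0.5, rng=None):
    # Given alpha_X = 1, the outcome-model inclusion odds are omega > 1,
    # i.e. P(alpha_Y = 1 | alpha_X = 1) = omega / (1 + omega).
    # Even odds when alpha_X = 0 is our own simplifying assumption.
    rng = rng or np.random.default_rng()
    a_x = rng.random(p) < p_alpha_x
    p_y_given_x = np.where(a_x, omega / (1.0 + omega), 0.5)
    a_y = rng.random(p) < p_y_given_x
    return a_x.astype(int), a_y.astype(int)
```

With $\omega = 10$, a covariate in the exposure model enters the outcome model with prior probability $10/11$, pulling likely confounders into the adjustment set.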
\subsubsection{Ensuring ER continuity}
\label{subsec:prior_continuity}
In most applications, it is expected that the causal ER relationship $\overline Y(x)$ is continuous in $x$. Therefore, estimates of $\overline Y(x)$, in our case $E_{\bm C}\{ E[Y | X = x, \bm C]\}$, should also be continuous. If the covariates $C_j$ are centered, and under model \cref{eq:likelihoods}, continuity of the estimated ER function is satisfied if
\begin{align*}
& \lim_{x \rightarrow s_k^+} E_{\bm C} \{ E[Y | X = x, \bm C] \} = \lim_{x \rightarrow s_k^-} E_{\bm C} \{ E[Y | X = x, \bm C] \} \\
& \hspace{20pt} \iff
\delta_{(k+1)0}^Y = \delta_{k0}^Y + \beta_{k}(s_k - s_{k - 1}).
\numberthis
\label{eq:intercept_prior}
\end{align*}
This is ensured by assuming a point-mass recursive prior on $\delta_{k0}^Y, k \geq 2$.
Then, conditional on $\bm s$, the outcome model intercept of experiment $k \geq 2$ is a deterministic function of the outcome model intercept of the first experiment $\delta_{10}^Y$, and the slopes $\beta_1, \beta_2, \dots, \beta_{k - 1}$. These parameters are assigned independent non-informative normal prior distributions.
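The recursive construction of the intercepts, and the resulting continuity of the piecewise-linear ER mean at the knots, can be sketched as follows (function names are our own, and covariates are assumed centered so they drop out of the mean):

```python
import numpy as np

def continuous_intercepts(d10, betas, s):
    # s = (s_0, ..., s_{K+1}); betas = (beta_1, ..., beta_{K+1}).
    # Each intercept is the previous one plus the rise accumulated
    # over the previous experiment, enforcing continuity at the knots.
    d0 = [d10]
    for k in range(1, len(betas)):
        d0.append(d0[-1] + betas[k - 1] * (s[k] - s[k - 1]))
    return np.array(d0)

def er_mean(x, d10, betas, s):
    # Piecewise-linear ER mean under the continuity constraint.
    k = np.clip(np.searchsorted(s, x, side='right'), 1, len(betas)) - 1
    d0 = continuous_intercepts(d10, betas, s)
    return d0[k] + betas[k] * (x - s[k])
```

Only $\delta_{10}^Y$ and the slopes are free parameters; the remaining intercepts are deterministic given $\bm s$.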
\subsubsection{Prior distributions of the remaining coefficients}
Prior distributions on the remaining regression coefficients (exposure model coefficients, outcome model covariates' coefficients) and variance terms are chosen such that they lead to known forms of the full conditional posterior distributions to simplify sampling.
We use independent non-informative inverse gamma prior distributions on $\sigma^2_{k, X}, \sigma^2_{k, Y}$.
Non-informative normal prior is chosen for the exposure model intercepts $\delta_{k0}^X$.
Conditional on the inclusion indicators, the prior on the regression coefficient $\delta_{kj}^Y$ is a point mass at 0 when $\alpha_{kj}^Y = 0$, or a non-informative normal distribution when $\alpha_{kj}^Y = 1$. Similarly for the exposure model covariates' coefficients $\delta_{kj}^X$.
Default hyperparameter values are set to 0.001 for the inverse gamma distribution, and to (0, 100) for the mean and standard deviation of the normal distribution.
Details on the prior specifications can be found in Supplement \ref{app_sec:priors}.
\subsection{Unknown experiment configuration}
\label{sec:unknown_s}
For a fixed experiment configuration $\mathbf{\bar s}$, each experiment is treated separately in terms of confounder \textit{selection and strength} of the confounding adjustment. However, the configuration itself is a key component of the fitted exposure response curve, and fixing it a priori could lead to bias and to underestimation of uncertainty. Instead, we assume that, a priori, the internal points of the experiment configuration $\bm{s}$ are distributed as the even-numbered order statistics of $2K + 1$ samples from a uniform distribution on the interval $(s_0, s_{K + 1})$.
This prior choice of $\bm s$ discourages specifications of $\bm s$ that include values that are too close to each other \citep{Green1995}.
The prior is augmented with indicator functions requiring that consecutive points $s_k, s_{k+1}$ be at least some distance $d_k$ apart.
Conditional on $\bm s$, we follow the model specification and prior distributions described in \cref{sec:fixed_s}.
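Sampling from this prior can be sketched as below; using a common minimum spacing $d$ in place of the pair-specific distances $d_k$, and enforcing the constraint by rejection, are simplifications of our own:

```python
import numpy as np

def sample_s(K, s0, sK1, d=0.0, rng=None, max_tries=10_000):
    # Draw 2K + 1 uniforms on (s0, sK1) and keep the even-numbered order
    # statistics u_(2), u_(4), ..., u_(2K); reject configurations whose
    # consecutive points (including the endpoints) are closer than d.
    rng = rng or np.random.default_rng()
    for _ in range(max_tries):
        u = np.sort(rng.uniform(s0, sK1, size=2 * K + 1))
        s = u[1::2]
        if np.all(np.diff(np.concatenate(([s0], s, [sK1]))) >= d):
            return s
    raise RuntimeError("minimum-spacing constraint too strict")
```

As noted above, this order-statistics construction discourages internal points that are too close to each other.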
\subsection{MCMC scheme and convergence diagnostics}
Markov Chain Monte Carlo (MCMC) methods are used to acquire samples from the posterior distribution of model parameters. A detailed description of the MCMC scheme including computational challenges and contributions can be found in Supplement \ref{app_sec:MCMC}. There, we also discuss MCMC convergence diagnostics based on the potential scale reduction factor (PSR; \cite{Gelman1992}) for quantities that do not directly depend on the experiment configuration.
\subsection{Number of points in the experiment configuration}
\label{subsec:choosing_K}
As presented previously, LERCA requires the specification of the number of internal points $K$ in the experiment configuration. Since the number of parameters grows with $K$, possible values for $K$ could be bounded by considering the maximum number of coefficients we are willing to entertain.
Cross validation methods to choose values of tuning parameters are often infeasible in the Bayesian framework due to time and computational resource constraints.
In a comprehensive review, \cite{Gelman2014} discusses methods of estimating the expected out of sample prediction error for Bayesian methods.
The widely-applicable information criterion (WAIC; \cite{Watanabe2010}) provides an estimate of the out-of-sample prediction error based on one MCMC run. It is defined as $WAIC = - 2 \left(\mbox{lppd}- p_{WAIC} \right)$,
where $\mbox{lppd}$ and $p_{WAIC}$ denote the
log point-wise posterior predictive density and the penalty:
\begin{align*}
\mbox{lppd} = &\sum_{i = 1}^n \log E_{post}p(x_i, y_i | \theta)\\
p_{WAIC} = & \sum_{i = 1}^n \mathrm{var}_{post} \left( \log p(x_i, y_i | \theta) \right).
\end{align*}
Here, $\theta$ denotes the full vector of parameters, and $E_{post}$, $\mathrm{var}_{post}$ denote the posterior mean and variance.
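Given an $S \times n$ matrix of pointwise log-likelihood evaluations over $S$ posterior draws, the WAIC can be computed as sketched below; the stabilized log-mean-exp and the use of the sample variance are standard implementation choices, not prescriptions from the text:

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (S x n) matrix of log p(x_i, y_i | theta^(s))."""
    # lppd_i = log( (1/S) sum_s exp(log_lik[s, i]) ), computed stably
    # by factoring out the column-wise maximum.
    m = log_lik.max(axis=0)
    lppd = np.sum(m + np.log(np.mean(np.exp(log_lik - m), axis=0)))
    # Penalty: pointwise posterior sample variance of the log-likelihood.
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)
```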
In order to choose $K$ for LERCA, LERCA is fit \textit{once} for each candidate value of $K$, and $K$ is chosen as the value that minimizes the WAIC estimate.
\section{Simulation Studies}
\label{sec:simulations}
The main goal of our simulation study is to illustrate that local confounding is an important issue that both commonly-used and flexible approaches for ER estimation fail to adjust for, returning biased results. The results from our simulation study indicate that methodology that directly accommodates local confounding is necessary in order to correctly estimate the causal effect of a continuous exposure. An R package which can be used to generate data with local confounding and fit LERCA is available at \url{https://github.com/gpapadog/LERCA}.
Additionally, in \cref{sec:sims_global} we discuss results from a simulation study under a generative model \textit{without} local confounding. In this case, traditional approaches and global confounding adjustment suffice for ER estimation, and the question is how comparably LERCA performs.
The approaches we considered are:
\begin{enumerate}
\item Generalized Additive Model (\texttt{GAM}): Regressing the outcome $Y$ on flexible functions of the exposure $X$ and all potential confounders (4 degrees of freedom for each predictor).
\item Spline Model (\texttt{SPLINE}): Additive spline estimator described in \cite{Bia2014}. The generalized propensity score (gps) is modelled as a linear regression on all covariates. The ER function is estimated using additive spline bases of the exposure and gps.
\item The Hirano and Imbens estimator \citep{Hirano2004} (\texttt{HI-GPS}):
ER estimation is obtained by fitting an outcome regression model including quadratic terms for both the exposure and the gps, and the exposure-gps interaction. The gps is estimated as in \texttt{SPLINE}.
\item Inverse Probability Weighting estimator (\texttt{IPW}): The generalized propensity score is used to weight observations in an outcome regression model that includes linear and quadratic terms of the exposure. The gps is estimated as in \texttt{SPLINE}.
\item The doubly-robust approach of \cite{Kennedy2017} (\texttt{KENNEDY}): The gps and outcome models are estimated using the Super Learner algorithm \citep{vanderlaan2007} combining the sample mean, linear regression with and without two-way interactions, generalized additive models, multivariate adaptive regression splines, and random forests.
Based on the gps and outcome model estimates, the pseudo-outcome is calculated and is regressed on the exposure using kernel smoothing. This approach is chosen to represent state-of-the-art methods in ER estimation that are based on flexible, machine-learning and non-parametric approaches.
\end{enumerate}
\subsection{Data generation with local confounding}
We generate data with exposure values which range from 0 to 10 and are uniformly distributed over the exposure range. Even though a uniform distribution is not accurate for the exposure variable in our study (ambient air pollution concentrations), we consider a uniformly distributed exposure to ensure that methods' performance is solely affected by the presence of local confounding, and not by the presence of limited sample size at some exposure levels. We consider a quadratic ER, and true experiment configuration $\bm{\bar{s}} = (0, 2, 4, 7, 10)$. Table \ref{tab:sim_cov} summarizes which of the 8 potential confounders are predictive of the exposure and$\slash$or the outcome within each experiment (correlations and regression coefficients are summarized in \cref{app_table:sims_covs}). Note that in this data generating mechanism the minimal set of confounders vary across the four experiments. We simulate 400 data sets of 800 observations each. Details on the data generating mechanism are in Supplement \ref{app_sec:sim_mech}.
\begin{table}[H]
\centering
\caption{Representation of which covariates are predictive of the exposure and $\slash$ or the outcome within each experiment (denoted by a \checkmark). Covariates with \checkmark in both models within the same experiment are local confounders.}
\label{tab:sim_cov}
\begin{tabular}{cr|cccccccc}
Experiment & Model & $C_1$ & $C_2$ & $C_3$ & $C_4$ & $C_5$ & $C_6$ & $C_7$ & $C_8$ \\ \hline
1 & $X | \bm C$ & \checkmark & \checkmark & \checkmark &&&& \\
& $Y|X, \bm C$ & \checkmark & \checkmark & \checkmark & & & & & \\ \hline
2 & $X | \bm C$ & \checkmark & \checkmark & & \checkmark & & & & \\
& $Y | X, \bm C$ & & \checkmark & \checkmark & \checkmark & & & & \\ \hline
3 & $X | \bm C$ &\checkmark & & \checkmark & & \checkmark & & & \\
& $Y | X, \bm C$ & & \checkmark & \checkmark & & \checkmark &&& \\ \hline
4 & $X | \bm C$ & & \checkmark & & & \checkmark & \checkmark & & \\
& $Y | X, \bm C$ & & \checkmark & \checkmark & & & \checkmark && \\ \hline
\end{tabular}
\end{table}
\subsection{Fitting the methods}
The different methods are fit using the \texttt{gam} and \texttt{causaldrf} R packages \citep{Hastie2017,Schafer2015}, and the code available on \cite{Kennedy2017}. LERCA is fit for $K \in \{2, 3, 4\}$, and for each data set the results shown correspond to the $K$ that minimized the WAIC.
Using each method, we estimate the population average ER curve $\overline Y(x)$ over an equally spaced grid of points on the interval $(0, 10)$, and compare the root mean squared error (rMSE) as a function of $x$. We also assess whether LERCA can recover the correct location of the points $\bm s$, identify the true confounders within each experiment, and choose the correct value for $K$.
\subsection{Simulation Results}
\cref{fig:sims_others} shows the estimated ER curves using the alternative methods.
In \cref{fig:sims_LERCA} we summarize the LERCA results including the estimated ER, the internal points of the experiment configuration and outcome model inclusion indicators of covariates $C_1, C_4$ as a function of exposure $x \in (0, 10)$. We choose $C_1$ and $C_4$ because, in this data generating mechanism, $C_1$ is a confounder in experiment 1 ($x < 2$), and $C_4$ is a confounder in experiment 2 only ($2 < x < 4$).
Grey lines correspond to results for individual data sets, whereas black solid lines correspond to averages across simulated data sets.
\begin{figure}[!t]
\centering
\includegraphics[width = 0.98\textwidth]{sims44_waic_res_others_Kennedy.pdf}
\caption{The true mean ER function (dashed line), estimated ER functions from each simulated data set (gray), and the mean of the estimated ER functions (solid lines) using all alternative methods.}
\label{fig:sims_others}
\vspace{10pt}
\includegraphics[width = 0.98 \textwidth]{sims44_waic_res_LERCA.pdf}
\caption{LERCA results. (Left) Mean ER estimates. (Center) Posterior distribution of the internal locations $\bm s$. (Right) Outcome model posterior inclusion probability of $C_1$ and $C_4$. Gray lines correspond to simulated data sets separately, and black solid lines correspond to averages across data sets.}
\label{fig:sims_LERCA}
\end{figure}
In \cref{fig:sims_others} we see that the alternative methods return biased results, especially at very low or very high levels of the exposure. These results indicate that neither commonly-used nor flexible approaches utilizing machine learning tools appropriately accommodate local confounding adjustment for ER estimation.
The root MSE of LERCA was consistently lower than that of the alternative methods at low exposure levels; all approaches performed similarly at middle exposure levels, and GAM slightly outperformed LERCA at high levels (\cref{fig:sims_rootMSE}). In Supplement \ref{app_subsec:sims_local_reverse}, we show that the relative performance of GAM and LERCA is reversed when the confounding structure is also reversed. These results indicate that local confounding is an issue across all exposure levels, and that, since the true confounding structure is never known for a real data set, LERCA should be preferred if local confounding is of concern.
As shown in \cref{fig:sims_LERCA}, even though the true ER is quadratic and LERCA is formulated as piece-wise linear, LERCA is able to identify the correct shape of the exposure-response function. We find that using WAIC to choose the value of $K$ led to choosing the correct value $K = 3$ 40\% of the time, and $K = 2$ 58\% of the time, indicating that WAIC tends to over-penalize large values of $K$. Regardless, the correct internal locations $\bm s = \{ 2, 4, 7\}$ are located at the modes of the posterior distribution (second panel in \cref{fig:sims_LERCA}).
By examining the posterior inclusion probabilities of $C_1, C_4$, we observe that instrumental variables (e.g., $C_1$ in experiments 2 and 3) are often included in the outcome model. However, LERCA includes the minimal confounding set within each experiment with very high probability. On average (across the points in the exposure range and across all the simulated data sets) the minimal confounding set was included in the adjustment set 99\% of the time (ranging from 89--100\% across simulated data sets), indicating that the variables necessary for confounding adjustment are almost always included.
Lastly, the point-wise 95\% and 50\% credible intervals cover the true mean ER values 84\% and 39\% of the time, respectively. The observed under-coverage is largely due to the underestimation of $K$.
\subsection{Simulation results in the absence of local confounding}
\label{sec:sims_global}
The previous generative scenario compared methods' performance in the presence of local confounding. In Supplement \ref{app_subsec:sims_global}, LERCA is compared to the alternative methods in the more traditional setting of global confounding, that is, in the setting more favorable to the other methods. In this context, LERCA with $K = 3$ (fixed) performed similarly in terms of root MSE compared to GAM and Kennedy's doubly-robust estimator, but better than the remaining alternative methods. These results indicate that LERCA offers a protection against bias arising from local confounding, without sacrificing efficiency when local confounding is not present.
\section{Estimating the effect of ambient PM$_{2.5}${} concentrations on zip code cardiovascular hospitalization rates}
\label{sec:application}
We estimate the relationship between the average ambient PM$_{2.5}${} concentrations for the years 2011-2012 and log cardiovascular hospitalization rates in 2013, using the data set introduced in \cref{sec:data_description}, and allowing for local confounding adjustment. Here, a unit $i$ from \cref{sec:notation} corresponds to the areal unit of a zip code.
\subsection{Plausibility of the causal assumptions in our study}
\label{subsec:plausibility}
The interpretation of estimated results as causal is bound by the plausibility of the causal assumptions within the study's setting. Here, we examine these assumptions in the evaluation of the causal relationship between ambient PM$_{2.5}${} and cardiovascular hospitalization rates.
One assumption discussed in \cref{sec:notation} is SUTVA, which states that a zip code's potential outcomes are only a function of the zip code's own PM$_{2.5}${} levels. If Medicare beneficiaries residing within a zip code travel outside of it, then other zip codes' ambient PM$_{2.5}${} concentrations can affect the personal exposures of zip code $i$'s beneficiaries and, as a consequence, the zip code's hospitalization rates, invalidating SUTVA. This phenomenon is referred to in the literature as \textit{interference}. Since PM$_{2.5}${} concentrations in nearby zip codes are similar, and Medicare beneficiaries are expected to spend most of their time within a relatively close distance of their home, interference can be assumed to be limited (dashed arrow in \cref{fig:assumption_plausibility}). When interference is limited, \cite{Savje2018average} showed that ignoring it returns estimates that are close to an average treatment effect.
The most commonly invoked causal assumption is that of ignorability. In our setting, ignorability implies that, conditional on measured covariates, a zip code's ``assignment'' to a specific level of ambient PM$_{2.5}${} does not depend on its \textit{potential outcomes} (see \cref{eq:ignor_exper}), and any zip code can experience any ambient PM$_{2.5}${} within the observed range.
One natural question is whether the spatial correlation of ambient pollution concentrations invalidates ignorability, either through confounding or through positivity.
The no unmeasured confounding assumption is expected to hold if the set of measured covariates includes all confounders.
In our study, we might expect that the covariates of nearby zip codes (such as nearby weather conditions) affect a zip code's ambient PM$_{2.5}${} concentrations (arrow from $\bm C_j$ to $X_i$ in \cref{fig:assumption_plausibility}).
If, in addition, a zip code's outcome directly depends on the covariates of other zip codes (arrow from $\bm C_j$ to $Y_i$) then $\bm C_j$ has to be included in the model for $Y_i$. We assume that such direct dependence does not exist in our study. For example, weather conditions near but not in zip code $i$ only affect zip code $i$'s hospitalization rates through their effect on ambient PM$_{2.5}${} concentrations.
Even though ambient PM$_{2.5}${} concentrations are spatially correlated, the positivity assumption requires that zip codes can experience any PM$_{2.5}${} concentration level \textit{marginally}, and the spatial correlation of PM$_{2.5}${} does not further complicate the plausibility of the positivity assumption.
Lastly, the interpretation of our study results as causal is bound by the specification of the model in \cref{eq:likelihoods}. If the mean functionals are not correctly specified, estimates of $\beta_k$ can be biased for the causal effect of PM$_{2.5}${} within that exposure level. Even though model \cref{eq:likelihoods} assumes independence of PM$_{2.5}${} concentrations, the spatial dependence structure is not expected to affect estimation of the model's regression coefficients or variable selection.
The results presented next can only be interpreted as causal under the assumptions discussed here. If any of the assumptions is violated, the study results should be interpreted as associational.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node[text centered] (X1) {$X_i$};
\node[left = 2 of X1, text centered] (C1) {$\bm C_i$};
\node[below = 1.2 of X1, text centered] (X2) {$X_j$};
\node[left = 2 of X2, text centered] (C2) {$\bm C_j$};
\node[right = 2 of X1, text centered] (Y1) {$Y_i$};
\draw[->] (C1) -- (X1);
\draw[->] (C1) to [out=30,in=150,looseness=1] (Y1);
\draw[->] (C2) -- (X2);
\draw[->] (C1) -- (X2);
\draw[->] (C2) -- (X1);
\draw[-] (X1) -- (X2);
\draw[-] (C1) -- (C2);
\draw[->] (X1) -- (Y1);
\draw[->, dashed] (X2) -- (Y1);
\end{tikzpicture}
\caption{Subgraph relating zip codes $i$ and $j$'s covariates and ambient PM$_{2.5}${} concentrations to $i$'s potential outcomes. The dashed arrow from $X_j$ to $Y_i$ represents a weaker effect than that from $X_i$.}
\label{fig:assumption_plausibility}
\end{figure}
\subsection{Study results}
\label{subsec:study_results}
\begin{figure}[!t]
\centering
\includegraphics[width = 0.98\textwidth]{application_results1_revision.pdf}
\caption{ Top: Mean ER curve of PM$_{2.5}$ exposure (x-axis) and log all-cause cardiovascular hospitalizations (y-axis) --solid line-- with 95\% pointwise credible intervals. The rug of points shows the distribution of observed PM$_{2.5}$ values. Bottom: The posterior mean and 95\% credible interval of the $\beta$ coefficient within the four experiments. The rug of points shows the posterior distribution for $\bm s$ for $K = 3$.}
\label{fig:app_result}
\end{figure}
We fit LERCA for $K \in \{2, 3, \dots, 6\}$ and report the results for $K=3$, which corresponds to the model with the lowest WAIC.
\cref{fig:app_result} shows the posterior mean and the 95\% credible intervals of the ER, the posterior distribution of the internal points of the experiment configuration, and the posterior mean and 95\% credible interval of $\beta_k$ within each experiment. Positive values of $\beta_k$ imply that an increase in ambient PM$_{2.5}$ concentrations leads to an increase in hospitalization rates.
In \cref{fig:app_result}, we see that the estimated ER is supra-linear with steeper incline at low concentrations. Examining the 95\% credible intervals for $\beta_k$, there is evidence that an increase in PM$_{2.5}${} at the low levels ($\leq 9.9\mu g/m^3$) leads to an increase in log hospitalization rates. However, 95\% credible intervals for $x \geq 9.9\mu g/m^3$ include zero.
Note that the current NAAQS for long term exposure to ambient PM$_{2.5}$ is equal to $12 \mu g\slash m^3$.
These results indicate that there is no exposure threshold for the effect of PM$_{2.5}$ on cardiovascular outcomes, which means that reductions in ambient PM$_{2.5}$ would lead to further health improvements, even at the low levels.
These results are consistent with other epidemiological studies which have found that the strength of the association between PM$_{2.5}${} and health outcomes is larger at low concentration levels \citep{Dominici2002, Shi2016,Di2017air}.
Lastly, the posterior distribution of $\bm s$ shows that observations below $8 \mu g / m^3$ and over $11.5 \mu g/m^3$ are always in the same experiment.
\subsection{Variability of the covariates' posterior inclusion across PM$_{2.5}${} concentration levels}
We investigated whether local confounding was present by examining the variability of the covariates' inclusion probabilities in the exposure and outcome models as a function of PM$_{2.5}${}. \cref{fig:post_inclusion} shows the posterior inclusion probabilities for three covariates as a function of PM$_{2.5}${} providing a measure of the covariates' confounding importance across the PM$_{2.5}${} concentration range.
\begin{figure}[!t]
\includegraphics[width = \textwidth]{application_1results1_inc.pdf}
\caption{Posterior inclusion probability of zip code population percentage with less than a high school education, population density, and median house value in the exposure and outcome model as a function of PM$_{2.5}${}.}
\label{fig:post_inclusion}
\end{figure}
The posterior inclusion probabilities vary substantially at different concentration levels indicating that local confounding is likely to be present.
In \cref{fig:app_pvalues}, the exploratory analysis showed that the zip code median household value (\texttt{House Value}) was predictive of both PM$_{2.5}${} and hospitalization rates at the high ambient concentration levels, but only of the outcome at the low levels. The LERCA results in \cref{fig:post_inclusion} lead to the same conclusion. Similarly, the posterior inclusion probability for the variable representing the zip code's percentage of the population with less than a high school education (\texttt{\% Below HS}) indicates that this variable is an important confounder only at the low levels, in accordance with the exploratory analysis. LERCA returns a similar conclusion about the variable representing population density (\texttt{Population/SQM}), in disagreement with the analysis in \cref{sec:data_description}, which showed that population density was predictive of both PM$_{2.5}${} concentrations and the outcome at both low and high levels.
Comparisons between the results in \cref{fig:app_pvalues} and the outcome model posterior inclusion probabilities were performed for all variables. LERCA tends to include in the outcome model a smaller number of variables than what one might have assumed based on the exploratory analysis. This is expected since LERCA considers the confounding importance of all variables simultaneously.
\subsection{Variability of the covariates' posterior inclusion within the low experiment}
With the focus of our study being the evaluation of the effect of ambient PM$_{2.5}${} at the low concentration levels, we studied the interpretation of $\beta_1$ across MCMC samples. Since the interpretation of $\beta_1$ as a causal effect requires that a sufficient adjustment set is included in the outcome model, we examined the variability of the covariates' outcome model inclusion indicators within the low experiment across iterations of the MCMC.
Across MCMC samples, 174 combinations of the covariates were included in the outcome model (out of the $2^{27}$ possible ones).
Even though this is a large number of potential models, 51\% of the posterior weight was given to the model with the following 9 covariates: the zip code's median house value and percentage of the population with at most a high school education, as measured in the 2000 Census and its extrapolation between 2000 and 2013, the proportion of the population who have ever smoked, the zip code's population density, the average dew point, the average age of Medicare beneficiaries, and the percentage of them who are women. We refer to this model as \textit{Model 1}. The model with the second highest posterior probability included the same covariates except for smoking rate, and accounted for 7\% of the MCMC samples. Therefore, there is evidence that Model 1 outperformed the rest in confounding adjustment at low levels.
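The tally of covariate combinations described above can be computed directly from MCMC output. The function below is an illustrative reconstruction (not the LERCA implementation), assuming the outcome-model inclusion indicators are available as a binary matrix with one row per posterior sample.

```python
import numpy as np

def model_weights(inclusion_draws):
    """Tally the posterior weight of each distinct covariate subset.

    inclusion_draws: (n_samples, n_covariates) binary array of outcome-model
    inclusion indicators from the MCMC. Returns (subset, weight) pairs
    sorted by decreasing posterior weight.
    """
    draws = np.asarray(inclusion_draws, dtype=int)
    subsets, counts = np.unique(draws, axis=0, return_counts=True)
    weights = counts / draws.shape[0]
    order = np.argsort(-weights)
    return [(tuple(int(v) for v in subsets[i]), float(weights[i])) for i in order]

# Toy example with 3 covariates: the subset {0, 2} receives 60% of the weight.
draws = np.array([[1, 0, 1]] * 6 + [[1, 0, 0]] * 3 + [[1, 1, 1]] * 1)
top_subset, top_weight = model_weights(draws)[0]
```

The highest-weight subset plays the role of Model 1 in the analysis above.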
In order to evaluate the impact of model averaging on our final estimates, we compared the posterior distribution of $\beta_1$ across all MCMC samples to its distribution based on the MCMC samples for which Model 1 was chosen. Across all samples, $\beta_1$ was estimated to be equal to $0.035$ with 95\% credible interval $0.012-0.06$, and posterior probability that it is greater than 0 equal to 99.7\%. Among posterior samples for which Model 1 was chosen, $\beta_1$ was estimated to be $0.034$ with 95\% credible interval $0.011-0.056$ and posterior probability that $\beta_1$ is greater than 0 also equal to 99.7\%. The consistency of the Model 1 estimates and the model averaged estimates is an indication that model averaging, in this situation, did not lead to averaging over incompatible models.
\section{Discussion}
\label{sec:discussion}
We have introduced an innovative Bayesian approach for flexible estimation of the ER curve in observational studies that has the following features: 1) it casts the formulation of the ER within a potential outcome framework, and mimics several randomized experiments across exposure levels;
2) it uses the data to inform the experiment configuration;
3) given the current experiment configuration, it allows for the possibility, which is a reality in our study (\cref{fig:app_pvalues} and \cref{fig:post_inclusion}), that different sets of covariates are confounders at different exposure levels;
4) it allows for a varying confounding effect across levels of the exposure;
5) it performs local covariate selection to increase efficiency;
6) it propagates model uncertainty for the experiment configuration and covariate selection in the posterior inference on the whole ER curve;
and finally,
7) it provides important scientific guidance as to which covariates are confounders at different exposure levels.
Although non-parametric and varying coefficient approaches \citep{Hastie1993} for ER estimation could, in theory, allow for differential confounding across different exposure levels, none of the existing methods for ER estimation explicitly accommodates local confounding, nor provides guidance for which covariates are confounders of the effect of interest at different levels of the exposure. Furthermore, the use of non-parametric methods to estimate a generalized propensity score or model the outcome of interest could prove unfruitful in situations where most of the available data are over a specific range of the exposure variable, the number of potential confounders is large, and interest lies in the estimation of causal effects for a change in the exposure in the tails of the exposure distribution.
In such situations, LERCA provides a way to model the outcome acknowledging that the exposure-response relationship might be confounded by different covariates at different exposure levels.
Lastly, it is worth noting that LERCA should not be seen as a direct competitor to the approach by \cite{Kennedy2017}.
In fact, since the Super Learner algorithm combines different approaches for modeling the outcome, LERCA could be incorporated in the algorithm as an approach that allows for the presence of local confounding.
The main contribution of this paper is in addressing the issue of local confounding in ER estimation, and in providing guidance of covariates' confounding importance at different exposure levels. In doing so, LERCA is based on several modeling decisions that can be easily altered.
First, within each experiment and thus locally within a narrow exposure range, LERCA assumes linearity for both the outcome and exposure models. Local linearity could be relaxed by using higher order polynomials.
Second, the informative prior on the inclusion indicators could lead to the inclusion of instrumental variables in the outcome model, which will not lead to bias, but will decrease the efficiency of our estimators. In the study of air pollution, strong instrumental variables are not expected to be present. Alternative strategies for local confounder selection can be accommodated here, extending, for example, work by \cite{Wilson2014,Cefalu2017} and \cite{Antonelli2018}.
An interesting line of research is to explore LERCA's extensions to more flexible functional specifications, and to evaluate the performance of different approaches to model selection (via prior specifications or penalization techniques) for different confounding scenarios.
An alternative modification of LERCA could enforce that the ER curve is monotone, by assuming prior distributions on $\beta_k$ that are left (or right)-truncated at zero. This modification could be of explicit interest for environmental and toxicological research, and in studies of air pollution in particular where the ER relationship is often believed to be supra-linear \citep{Pinault2017associations, Vodonos2018concentration}. In risk assessment studies, the shape of the ER can greatly affect conclusions \citep{Pope2015health}, and is often assumed to be linear, log-linear, log-log, or power function with or without threshold \citep{Devos2016effect, Burnett2014integrated}. \cite{Nasari2016class} proposed a class of models that can capture various ER shapes and are easy to implement in large data sets. Even though LERCA can capture effectively any ER shape, development of a faster and computationally efficient estimation procedure is required for very large data sets. Future work could focus on incorporating local confounding adjustment in air pollution analyses including the whole United States and including zip codes that are not located near an air pollution monitor.
The results of our study are in agreement with a supra-linear ER shape, indicating that there is a larger health impact of ambient air pollution concentrations at low exposure levels, and that, if the causal assumptions hold, lowering ambient PM$_{2.5}${} concentrations would lead to a reduction in cardiovascular hospitalization rates.
Even though our analysis addresses a key question in air pollution epidemiology, it is also met with its own challenges.
First, even though focusing on ambient pollution concentrations is important from a policy perspective (since regulations can more directly control ambient concentrations), the results of this study are not directly interpretable for evaluating the health effects of \textit{personal} exposure to PM$_{2.5}${}. The relationship between ambient concentrations and personal exposures is complicated, with studies showing that there is substantial variability in personal exposures among individuals of similar ambient exposure concentrations \citep{Dockery1981personal, Clayton1993particle}, largely due to the individuals' activities \citep{Meng2009determinands}.
The correlation between ambient concentrations and personal exposures might differ by exposure levels if, for example, individuals residing in highly polluted areas are more likely to avoid outdoor activities. This further complicates translating results on the relationship between ambient concentrations and health into statements about personal exposures.
Furthermore, variables that are expected to be confounders of personal exposures and health outcomes (such as an individual's smoking habits, variables $\widetilde{\bm C}_2$ in \cref{fig:outdoor_personal}) are not necessarily confounders of ambient PM$_{2.5}${} concentrations and outcomes (variables $\widetilde{\bm C}_1$ in \cref{fig:outdoor_personal}). Indication of confounding of the relationship between ambient concentrations and health outcomes by variables such as the median household value (see \cref{fig:post_inclusion}) implies that these variables are not confounders in the classic sense (since they are not driving ambient PM$_{2.5}${} concentrations) but are correlated with variables that are. Therefore, a variable's confounding strength for ambient concentrations is not directly interpretable as its confounding strength for personal exposures.
Second, this analysis has used log event rates as the outcome of interest in a linear regression setting, with all zip codes contributing equally to the estimation of the models irrespective of their size. Linear regression for the analysis of rate data has been used in various settings, for example in \cite{Joshua1990estimating, Mohamedshah1993truck, Chua2009pediatric, Wang2012} and \cite{LiuSmith2016gender}.
A Poisson regression model where the number of hospitalizations is the response variable and the Medicare population size within a zip code is the offset would be more in agreement with the literature on count outcomes (and within air pollution epidemiology specifically).
However, extending local confounding adjustment to Poisson regression is computationally complicated: (a) enforcing ER continuity for Poisson regression is not straightforward since $E_{\bm C}\{ E[Y | X = x, \bm C] \}$ is not easily acquired (see \cref{subsec:prior_continuity}), (b) posterior sampling of the experiment configuration and regression coefficient involves marginal densities of regression models, and (c) no conjugate prior distribution exists for regression coefficients in Poisson regression.
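Setting aside the local-confounding machinery, the single-experiment version of the Poisson alternative discussed above can be fit with a standard iteratively reweighted least squares (IRLS) loop. The sketch below is illustrative only (it is not the model used in this paper, which is a linear regression on log rates), uses synthetic data, and takes the log Medicare population as the offset.

```python
import numpy as np

def poisson_irls(X, y, offset, n_iter=50):
    """Fit log E[y] = offset + X @ beta by iteratively reweighted least squares
    (Fisher scoring for the Poisson GLM with canonical log link)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = offset + X @ beta
        mu = np.exp(eta)
        # Working response (with the offset removed) and working weights.
        z = eta + (y - mu) / mu - offset
        W = mu
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one exposure
pop = rng.integers(100, 1000, size=n)                   # Medicare population per zip code
offset = np.log(pop)
beta_true = np.array([-4.0, 0.3])
y = rng.poisson(np.exp(offset + X @ beta_true))         # hospitalization counts
beta_hat = poisson_irls(X, y, offset)
```

With enough data, `beta_hat` recovers the true log-rate coefficients; points (a)-(c) above concern what is lost when this model replaces the conjugate linear setup, not the fit itself.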
\bibliographystyle{plainnat}
\section{Introduction}
\label{sec:introduction}
Visual question answering (VQA) is an increasingly popular research domain that unites two traditionally disparate machine learning subfields: natural language processing and computer vision. The goal of VQA is to generate a natural language answer to a question about an image. While a number of existing approaches perform well on VQA \citep{Gupta17survey,wu2017survey}, it is unclear to what extent current models are capable of making semantic distinctions between visually-similar images.
\begin{figure}[ht]
\vspace{0.25cm}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{viz1_final.png}}
\vspace{-0.25cm}
\caption{Counterexample task. The goal is to identify a counterexample (green border) that results in a different answer to the question from a set of 24 visually-similar images.}
\label{fig:cx_task}
\end{center}
\vspace{-0.70cm}
\end{figure}
In this work, we explore a reformulation of the VQA task that more directly evaluates a model's capacity to reason about the underlying concepts encoded in images. Under the standard VQA task, given the question ``What color is the fire hydrant?'' and an image of a street scene, a model might answer ``red.'' Under the alternative task, the model must produce a counterexample; e.g., an image of a fire hydrant that is not red. Successful performance on the visual counterexample prediction task (abbreviated VQA-CX) requires reasoning about how subtle visual differences between images affect the high-level semantics of a scene.
The VQA-CX task was originally proposed in \citet{VQA2} as a useful explanation modality for VQA models. However, despite its applicability as a powerful tool for model introspection, this idea has remained largely under-explored by the research community. To our knowledge, this work represents the first follow-up attempt to operationalize the VQA-CX paradigm originally proposed by \citet{VQA2}.
We introduce two plug-and-play approaches for evaluating the performance of existing, pretrained VQA models on VQA-CX. The first method is an unsupervised model that requires no training and works out of the box with a pretrained VQA model. The second method is a supervised neural model that can be used with or without a pretrained VQA model. The unsupervised model outperforms the baselines proposed in \citet{VQA2}. Meanwhile, the supervised model outperforms all existing unsupervised and supervised methods for counterexample prediction.
Crucially, while we use a state-of-the-art VQA model to facilitate counterexample prediction, we find that our methods perform almost as well without receiving any information from this model. In other words, the multimodal representation learned by the VQA model contributes only marginally (approximately 2\%) to performance on VQA-CX. These results challenge the assumption that successful performance on VQA is indicative of more general visual-semantic reasoning abilities.
\section{Background}
Contemporary research interest in VQA began with the release of DAQUAR, the DAtaset for QUestion Answering
on Real-world images \citep{malinowski2014daquar}. Since then, at least five other major VQA benchmarks have been proposed. These include COCO-VQA \citep{ren2015cocovqa}, FM-IQA \citep{gao2015fm-iqa}, VisualGenome \citep{krishna2017visualgenome}, Visual7w \citep{zhu2016visual7w}, and VQA \citep{VQA1,VQA2}. With the exception of DAQUAR, all of these datasets include images from the Common Objects in Context (COCO) dataset \citep{lin2014coco}, which contains 330K images.
The VQA dataset was first introduced in \citet{VQA1} as a more free-form, open-ended VQA benchmark. Previous datasets placed constraints on the kinds of questions authored by human annotators (e.g., Visual7w, VisualGenome), or relied on image captioning models to generate questions (e.g., COCO-VQA). In contrast, the crowdsourcing method employed by \citet{VQA1} was designed to generate a more diverse range of question types requiring both visual reasoning and common knowledge. However, owing in part to the lack of constraints on question generation, the original VQA dataset contains several conspicuous biases. For instance, for questions beginning with the phrase, ``What sport is...'', the correct answer is ``tennis'' 41\% of the time. Additionally, question generation was impacted by a visual priming bias \citep{zhang2016yin}, which selected for questions with affirmative answers. For instance, for questions beginning with ``Do you see a...,'' the correct answer is ``yes'' 87\% of the time. Models that exploit these biases can achieve high accuracy on VQA without understanding the content of the accompanying images \citep{VQA2}.
In an effort to balance the VQA dataset, \citet{VQA2} introduced VQA 2.0, which is built on pairs of visually-similar images that result in different answers to the same question. Specifically, for each image $I$ in the original dataset, \citet{VQA2} determined the 24 nearest neighbor images $I_\mathrm{NN} = \{I'_1, \ldots, I'_{24}\}$ using convolutional image features $V$ derived from VGGNet \citep{simonyan2014VGG}. For each image/question/answer pair $(I, Q, A)$ in the original VQA dataset, crowd workers were asked to select a complementary image $I^* \in I_\mathrm{NN}$ that produced a different answer $A^*$ for the same $Q$. The most commonly selected $I^*$ was then included as a new example $(I^*, Q, A^*)$ in VQA 2.0, resulting in a dataset that is roughly double the size of the original. In addition to reducing language biases in the data, the pair-based composition of VQA 2.0 provides a convenient approach for supervised training and evaluation of counterexample prediction models.
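The nearest-neighbor construction described above reduces to ranking images by L2 distance in convolutional feature space. The following is an illustrative numpy reconstruction (not the original VQA 2.0 pipeline), assuming the image features are available as rows of a matrix.

```python
import numpy as np

def nearest_neighbors(V, k=24):
    """Return indices of the k nearest images (L2 distance in feature space)
    for every image, excluding the image itself.

    V: (n_images, d) array of convolutional image features.
    """
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (V ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * V @ V.T
    np.fill_diagonal(d2, np.inf)          # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]

rng = np.random.default_rng(0)
V = rng.normal(size=(30, 8))              # toy stand-in for VGG features
nn = nearest_neighbors(V, k=24)           # shape (30, 24)
```

Note that the relation computed this way is not symmetric: $I^1$ may appear among $I^2$'s neighbors without the converse holding, a property that matters when reconstructing the dataset (see Methods).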
\section{Approach}
We treat VQA-CX as a supervised learning problem, which can be formalized as follows. For each image, question, and answer $(I, Q, A)$ in the original VQA task, the model is presented with the $K=24$ nearest neighbor images $I_\mathrm{NN} = \{I'_1, \ldots, I'_{K} \}$ of the original image. The model assigns scores $\mathcal{S} = S(I'_1), \ldots, S(I'_{K})$ to each candidate counterexample. The crowd-selected counterexample $I^* \in I_\mathrm{NN}$ serves as ground truth. For notational clarity, we distinguish between raw images $I$ and convolutional image features $V$. Additionally, we use prime notation ($I', V', A'$) to denote candidate counterexamples, asterisk notation to denote the ground truth counterexample ($I^*, V^*, A^*$), and no superscript when referring to the original example $(I, V, A)$. We do not use any superscripts for $Q$, since the question is the same in all cases.
Both of our VQA-CX models use an existing VQA model as a submodule. While there exist many diverse solutions for VQA \citep{wu2017survey}, we mostly treat the VQA model as a black box that can be expressed as a function of its inputs. We make only two assumptions about the architecture. First, we assume the model outputs a distribution $P(\mathcal{A}|I, Q)$ over a discrete number of answer classes (where $|\mathcal{A}|$ is a hyperparameter).\footnote{While most models treat VQA as a discrete classification task, some adopt a generative approach (e.g., \citet{wu2016ask,zhu2015building,wang2017fvqa}), which is not compatible with our methods.} Second, we assume the model internally combines its inputs into some multimodal representation $Z$, which we can access. (Note that this second assumption, which violates the black box principle, is only used optionally in the NeuralCX model.) We therefore treat a VQA model as a function $\text{VQA}(I, Q) = P(\mathcal{A}|I, Q), Z$.
In order to establish a basis for comparison with \citet{VQA2}, we began by reproducing their baselines, described in the following section. We then developed two architectures for VQA-CX. Both models can be used in conjunction with any VQA model that meets the above two criteria. The first architecture, which we call the Embedding Model, compares the semantic similarity between candidate answers in an embedding space, weighing different answers by $P(\mathcal{A}|I, Q)$. Since the Embedding Model relies solely on a pretrained VQA model and a pretrained answer embedding, it is fully unsupervised and requires no training. The second architecture is a straightforward multilayer perceptron that takes as input features related to $I$, $I'$, $Q$, and $A$, including the outputs of a VQA model, and returns a score $S(I')$. This NeuralCX model is trained in a pairwise fashion using standard supervised learning methods.
\section{Models}
\subsection{Prior Work}
To our knowledge, the only previous work on VQA-CX was carried out by the authors of the VQA 2.0 dataset. \citet{VQA2} present a two-headed model that simultaneously answers questions and predicts counterexamples. The model consists of three components:
\smallskip
\noindent\textbf{Shared base: }Produces a multimodal embedding of the question and image via pointwise multiplication, as in \citet{lu2015deeper}.
\[Z = \text{CNN}(I) \cdot \text{LSTM}(Q)\]
During a single inference step, a total of $K + 1$ images (the original image and its $K$ nearest neighbors) are passed through this component.
\smallskip
\noindent\textbf{Answering head: }Predicts a probability distribution over answer classes.
\[P(\mathcal{A}|Z) = \sigma(W_{out}Z + b_{out})\]
Only the $Z$ corresponding to the original image is used in the answering head.
\smallskip
\noindent\textbf{Explaining head: }Predicts counterexample scores for each of $K$ nearest neighbor images.
\[S(I'_i) = (W_{zd} Z_i + b_{zd}) \cdot (W_{ad} A + b_{ad})\]
This component can be seen as computing vector alignment between a candidate counterexample and the ground truth answer. To allow for the dot product computation, $Z_i$ and $A$ are both projected into a common embedding space of dimensionality $d$. Note that in the final layer of the network, all $K$ scores $\mathcal{S} = S(I'_1), \ldots, S(I'_{K})$ are passed through a $K \times K$ fully-connected layer, presumably to allow the model to learn the distribution over the rank of $I^*$ within $I_\mathrm{NN}$.
The two-headed model is trained on a joint loss that combines supervision signals from both heads.
\[\mathcal{L}(\mathcal{S}) = -\log P(A | I, Q) + \lambda \sum_{I'_i \neq I^*} \max\big(0, M - (S(I^*) - S(I'_i))\big)\]
The answer loss is simply the cross entropy loss induced by the ground truth answer $A \in \mathcal{A}$. Meanwhile, the explanation loss is a pairwise hinge ranking loss \citep{chopra2005similarity}, which encourages the model to assign the ground-truth counterexample $I^*$ a higher score than the other candidates.
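The explanation loss can be stated in a few lines. The following is an illustrative numpy version of the pairwise hinge term alone (the $\lambda$-weighting and the answer cross-entropy are omitted).

```python
import numpy as np

def hinge_ranking_loss(scores, gt_index, margin=1.0):
    """Pairwise hinge ranking loss: penalize any candidate whose score comes
    within `margin` of the ground-truth counterexample's score."""
    s_star = scores[gt_index]
    losses = np.maximum(0.0, margin - (s_star - scores))
    losses[gt_index] = 0.0                 # the sum excludes I'_i = I*
    return losses.sum()

scores = np.array([0.2, 2.0, 0.5, 1.8])    # toy candidate scores
loss = hinge_ranking_loss(scores, gt_index=1, margin=1.0)
```

Only the candidate scoring 1.8 falls within the margin of the ground truth's 2.0, so it alone contributes to the loss.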
\subsection{Baselines}
In addition to their counterexample model, \citet{VQA2} introduce three key baselines for VQA-CX:
\begin{itemize}
\item \textbf{Random Baseline:} Rank $I_\mathrm{NN}$ randomly.
\item \textbf{Distance Baseline:} Rank $I_\mathrm{NN}$ by L2 distance from $I$. Closer images are assigned higher scores.
\item \textbf{Hard Negative Mining:} For each $I'_i \in I_\mathrm{NN}$, determine the probability of the original answer $P(A)_i = \text{VQA}(I'_i, Q)$ using a pretrained VQA model. Rank the $I'_i$ according to \textit{negative} probability $-P(A)_i$. In other words, choose counterexamples for which the VQA model assigns a low probability to the original answer.
\end{itemize}
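The Distance and Hard Negative Mining baselines reduce to simple scoring rules. The sketch below (illustrative, with toy inputs in place of real features and VQA outputs) makes the rankings explicit.

```python
import numpy as np

def distance_baseline(V, V_nn):
    """Score candidates by negative L2 distance from the original image's
    features; closer images rank higher."""
    return -np.linalg.norm(V_nn - V[None, :], axis=1)

def hard_negative_scores(p_original_answer):
    """Score candidates by the negative probability a VQA model assigns to
    the original answer A on each neighbor image."""
    return -np.asarray(p_original_answer, dtype=float)

# Toy check: place the 24 neighbors at strictly increasing distance from V.
V = np.zeros(16)
V_nn = np.outer(np.arange(1, 25, dtype=float), np.eye(16)[0])
scores = distance_baseline(V, V_nn)
ranking = np.argsort(-scores)              # best-scoring candidate first
```

In this toy setup the distance baseline recovers the construction order exactly, ranking the closest neighbor first.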
\subsection{Unsupervised Embedding Model}
Successful performance on VQA-CX requires a nuanced treatment of the semantic relationship between answers. While the counterexample answer $A^*$ is distinct from the original answer $A$, the two are often close neighbors in semantic space. For example, for the question-answer pair ($Q=$ ``What animal is in the tree?''; $A=$``cat''), the counterexample answer is more likely to be ``dog'' than ``meatball,'' even though the semantic distance between ``cat'' and ``meatball'' is greater. Ideally, a VQA-CX model should take into account this linguistic prior.
The Embedding Model counterbalances the goal of identifying a semantically-similar counterexample answer with the necessity that the answer not be identical to the original. The model uses answer-class predictions $P(\mathcal{A} | I', Q)$ from a pretrained VQA model, and answer embeddings $W_\mathcal{A}$ from a pretrained Skip-Thoughts model \citep{skipthoughts} to assign a score to each nearest neighbor image:
\begin{multline}
S(I'_i) = \lambda \sum_{\substack{a \in \mathcal{A};\\ a \neq A}} \text{cossim} \left(a, A\right) P(a|I', Q) - (1-\lambda)\log P(A|I', Q)
\end{multline}
The term to the left of the subtraction encourages the model to select counterexamples that produce answers similar to the original. Meanwhile, the term to the right discourages the model from selecting the exact same answer as the original. The $\lambda$ hyperparameter, chosen empirically, determines the relative weight of these terms.
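The score above can be computed directly from a VQA model's answer distribution and an answer-embedding matrix. The function below is an illustrative sketch, assuming the embedding rows are unit-norm (so a dot product equals cosine similarity) and using toy values in place of Skip-Thoughts embeddings and MUTAN outputs.

```python
import numpy as np

def embedding_score(p_answers, A_index, emb, lam=0.5, eps=1e-12):
    """Embedding Model score for one candidate image.

    p_answers: answer distribution P(a | I', Q) from a VQA model.
    A_index:   index of the original answer A in the vocabulary.
    emb:       (|A|, d) answer embeddings, rows assumed unit-norm.
    """
    cos = emb @ emb[A_index]                    # cossim(a, A) for every answer a
    mask = np.ones(len(p_answers), dtype=bool)
    mask[A_index] = False                       # the sum runs over a != A
    similarity_term = np.sum(cos[mask] * np.asarray(p_answers)[mask])
    return lam * similarity_term - (1.0 - lam) * np.log(p_answers[A_index] + eps)

# Toy vocabulary {A, B, C}; B is semantically close to A, C is not.
emb = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
repeats_A = np.array([0.9, 0.05, 0.05])    # candidate that mostly repeats A
similar_B = np.array([0.1, 0.8, 0.1])      # candidate answering the similar B
s_repeat = embedding_score(repeats_A, 0, emb)
s_similar = embedding_score(similar_B, 0, emb)
```

As intended, the candidate producing the similar-but-distinct answer B scores higher than the candidate that repeats the original answer.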
\subsection{Supervised NeuralCX Model}
NeuralCX is a fully-connected network that takes as input 10 features derived from $I$, $I'$, $Q$, and $A$. Some of these features, such as $V$, $Q$, and $A$, are representations of the original image, question, and answer. Others, such as $Z$ and $P(\mathcal{A'})$, are computed by a VQA model. Table \ref{table:features} summarizes the input features.
All features are concatenated into a single input vector and passed through a series of hidden layers, where the size $h$ and number $N$ of layers are hyperparameters. All layers share the same $h$ and use ReLU activation. The output of the last hidden layer is projected to an unnormalized scalar score $S(I')$. Fig. \ref{fig:neuralcx} depicts the NeuralCX architecture.
A single training iteration for NeuralCX consists of $K$ forward passes of the network to produce a score for each candidate $I'_i \in I_\mathrm{NN}$. We compute the cross-entropy loss for the ground truth $I^*$, and optimize the parameters of the network via backpropagation.
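The per-example loss amounts to a softmax cross-entropy over the $K$ candidate scores. An illustrative numpy version:

```python
import numpy as np

def cx_cross_entropy(scores, gt_index):
    """Softmax the K candidate scores and return the negative log-probability
    of the ground-truth counterexample."""
    shifted = scores - scores.max()            # subtract max for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[gt_index]

scores = np.zeros(24)                          # an untrained net: uniform scores
loss = cx_cross_entropy(scores, gt_index=7)    # = log 24 under uniform scores
```

Gradients of this loss with respect to the network parameters are what backpropagation computes in the training iteration described above.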
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{neuralcx.png}}
\vspace{-0.15cm}
\caption{Diagram of NeuralCX architecture. The model is a fully-connected neural network that takes visual, question, answer, and multimodal features as input and produces a score indicating the relevance of $I'$ as a counterexample.}
\label{fig:neuralcx}
\end{center}
\vspace{-0.15cm}
\end{figure}
\begin{table}[ht]
\vspace{-0.5cm}
\begin{center}
\begin{sc}
\begin{tabular}{@{}llll@{}}
\toprule
Feature & Definition & Size & Origin \\ \midrule \midrule
$V$ & $\text{CNN}(I)$ & $2048$ & CNN \\
$V'$ & $\text{CNN}(I')$ & $2048$ & CNN \\
$V_M$ & $V \odot V'$ & $2048$ & Computed \\
$V_D$ & $||V' - V||$ & $1$ & Computed \\
$\text{Rank}$ & $\text{onehot}(i)$ & $24$ & Computed \\ \midrule
$Q$ & $\text{LSTM}(Q)$ & $2400$ & LSTM \\
$A$ & $W_\mathcal{A} A$ & $2400$ & $A_\text{emb}$ \\
$A'$ & $(W_\mathcal{A})^T P(\mathcal{A'})$ & $2400$ & $A_\text{emb}$, VQA \\ \midrule
$Z$ & $\text{VQA}(I, Q)$ & $360$ & VQA \\
$Z'$ & $\text{VQA}(I', Q)$ & $360$ & VQA \\ \bottomrule
\end{tabular}
\end{sc}
\end{center}
\vspace{0.25cm}
\caption{Full set of features input to NeuralCX model.}
\label{table:features}
\vspace{-0.5cm}
\end{table}
\section{Methods}
Our VQA-CX dataset consists of 211K training examples and 118K test examples, of which 10K were reserved as a validation set. A single example in our dataset consists of the original VQA 2.0 example $(I, Q, A)$, and the 24 nearest neighbor images $I_\mathrm{NN}$, which contain the ground truth counterexample $I^* \in I_\mathrm{NN}$ and its corresponding answer $A^*$.
Our train and test data are, by necessity, proper subsets of the VQA 2.0 training and validation datasets, respectively. To construct our training set, we first identified the examples for which the image $I$ had a corresponding $I^*$. Approximately 22\% of the images in VQA 2.0 do not have a labeled complement.\footnote{These images correspond to instances in which crowd workers indicated that it was not possible to select a counterexample from among $I_\mathrm{NN}$ \citep{VQA2}.} Next, we filtered out examples for which $I^*$ did not appear in $I_\mathrm{NN}$. Since we used the nearest neighbors data provided by \citet{VQA2}, $I^*$ should theoretically always appear in $I_\mathrm{NN}$. However, because the KNN relation is not symmetric (i.e., $I^1 \in I^2_\mathrm{NN} \nRightarrow I^2 \in I^1_\mathrm{NN}$), we found that in certain cases, $I^* \notin I_\mathrm{NN}$. After filtering, we were left with $211,626 / 433,757$ train examples and $118,499 / 214,354$ validation examples. Note that while \citet{VQA2} collected labeled counterexamples for the VQA 2.0 dataset, this data is not public. As a result, we did not make use of the VQA 2.0 test set, instead testing on the VQA 2.0 validation set.
We implemented our models and experiments in Pytorch \citep{paszke2017pytorch}. For all experiments involving VQA models, we used MUTAN \citep{MUTAN}, a state-of-the-art VQA model that uses Tucker decomposition to parametrize a bilinear interaction between $Q$ and $V$. We pretrained MUTAN separately on VQA 2.0 for 100 epochs with early stopping to a peak test accuracy of 47.70. Unfortunately, because we needed to train the model on only the VQA 2.0 training set (and not the validation set), this accuracy is considerably lower than the 58.16 single-model accuracy obtained by \citet{MUTAN}. Additionally, since the VQA-CX task requires us to load all 24 $V_{\mathrm{NN}}$ features into memory simultaneously, we opted to use a no-attention variant of MUTAN that is more space-efficient, but lower-performing. We used a pretrained ResNet-152 model \citep{resnet152} to precompute visual features for all images, and a pretrained Skip-Thoughts model \citep{skipthoughts} to compute question and answer embeddings. We also utilized framework code from the vqa.pytorch Github repository.\footnote{https://github.com/Cadene/vqa.pytorch}
For all experiments with the NeuralCX model, we trained for a maximum of 20 epochs with early stopping. We optimize the model parameters with standard stochastic gradient descent methods, using the Pytorch library implementation of Adam \citep{DBLP:journals/corr/KingmaB14} with learning rate $0.0001$ and batch size $64$. We also employed dropout regularization ($p=0.25$) between hidden layers \citep{srivastava2014dropout}.
We experimented with different numbers of hidden layers $N = 1, 2, 3$ and hidden units $h = 256, 512, 1024$, but found that larger architectures resulted in substantial training time increases with negligible performance gains. We therefore use a moderate-sized architecture of $N=2, h=512$ for all reported results. This model takes about 35 minutes to train to peak performance on a single Tesla K80 GPU.
We evaluate the performance of our models and baselines with recall@$k$, which measures the percentage of the ground truth counterexamples that the model ranks in the top $k$ out of the $24$ candidate counterexamples. Results on the test set for the NeuralCX Model, Embedding Model, and baseline models are reported in Table \ref{table:results}. To better understand the relative importance of the different inputs to the NeuralCX model, we selectively ablated different features by replacing them with noise vectors drawn randomly from a uniform distribution. We chose to randomize inputs, rather than remove them entirely, so as to keep the model architecture constant across experiments. In each ablation experiment, the model was fully retrained. Results from these experiments are reported in Table \ref{table:ablation}.
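The recall@$k$ metric can be stated compactly. The function below is an illustrative sketch, shown here on toy score lists rather than model outputs.

```python
import numpy as np

def recall_at_k(all_scores, gt_indices, k=5):
    """Fraction of examples whose ground-truth counterexample is ranked
    in the model's top k candidates."""
    hits = 0
    for scores, gt in zip(all_scores, gt_indices):
        top_k = np.argsort(-np.asarray(scores))[:k]
        hits += int(gt in top_k)
    return hits / len(gt_indices)

scores = [[3, 1, 2, 0], [0, 5, 1, 2], [1, 0, 4, 3]]
gts = [0, 3, 1]
r1 = recall_at_k(scores, gts, k=1)   # only the first example ranks its gt first
r2 = recall_at_k(scores, gts, k=2)
```

Under a random ranking over 24 candidates, recall@5 is approximately $5/24 \approx 0.208$, the chance level referenced in the Results section.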
\section{Results}
We began by reimplementing the baselines presented by \citet{VQA2} and comparing our results with theirs. As expected, the Random Baseline performed approximately at chance (recall@5 $\approx \frac{5}{24}$ or $0.2083$). Our Distance Baseline was comparable with, but slightly higher than, the result reported by \citeauthor{VQA2}. This discrepancy suggests that the distribution over the rank of the ground-truth counterexample is more skewed in our dataset than in the one used by \citeauthor{VQA2}. Notably, in both cases, the strategy of ranking counterexample images based on distance in feature space is more than two times better than chance, and serves as a strong baseline.
As in \citet{VQA2}, we found Hard Negative Mining to be a relatively under-performing approach. Since we used a different VQA model from \citeauthor{VQA2}, our results on this baseline are not directly comparable. Nevertheless, in both cases, Hard Negative Mining performed only marginally above chance. To isolate the impact of the VQA model, we computed the Hard Negative Mining baseline using an untrained (randomly initialized) VQA model. After this change, the performance dropped to random.
\begin{table*}[ht]
\begin{center}
\begin{sc}
\begin{tabular}{@{}lllll@{}}
\toprule
& & \multicolumn{2}{c}{Our Results} & \multicolumn{1}{c}{Goyal et al. (2016)} \\
\multicolumn{1}{l}{CX Model} & VQA Model & \multicolumn{1}{l}{Recall@1} & \multicolumn{1}{l}{Recall@5} & \multicolumn{1}{l}{Recall@5} \\ \midrule
Random Baseline & - & $4.20$ & $20.85$ & $20.79$ \\
Hard Negative Mining & untrained & $4.06$ & $20.73$ & - \\
Hard Negative Mining & pretrained & $4.34$ & $22.06$ & $21.65$ \\
Embedding Model & untrained & $4.20$ & $21.02$ & - \\
Embedding Model & pretrained & $7.77$ & $30.26$ & - \\
Distance Baseline & - & $11.51$ & $44.48$ & $42.84$ \\ \midrule
Two-headed CX (Goyal et al.) & trainable & - & - & $43.39$ \\
NeuralCX & untrained & $16.30$ & $52.48$ & - \\
NeuralCX & pretrained & $18.27$ & $54.87$ & - \\
\textbf{NeuralCX} & \textbf{trainable} & $\mathbf{18.47}$ & $\mathbf{55.14}$ & - \\ \bottomrule
\end{tabular}
\end{sc}
\end{center}
\vspace{0.25cm}
\caption{Results of VQA-CX models and baselines. Where applicable, we compare our results with those reported in \citet{VQA2}. The midline separates models that were evaluated without training (above) from those that were trained on the VQA-CX dataset (below). Untrained denotes that the VQA model parameters were randomly initialized and immutable. Pretrained denotes parameters that were learned on the VQA task and then made immutable. Trainable denotes parameters that were first learned on VQA, and then fine-tuned on VQA-CX.}
\label{table:results}
\vspace{-0.5cm}
\end{table*}
\begin{table}[ht]
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{@{}cc|ccccccc@{}}
\toprule
\multicolumn{2}{c|}{Performance} & \multicolumn{7}{c}{Ablated Features} \\
R@5 & R@1 & $V$ & $V_{\text{M}}$ & $V_{\text{D}}$ & Rank & $Q$ & $A$ & $Z$ \\ \midrule
$43.05$ & $12.33$ & \xmark & \xmark & \xmark & \xmark & & & \\
$44.48$ & $11.42$ & \xmark & & & & & & \\
$44.48$ & $11.51$ & & \xmark & \xmark & \xmark & & & \\
$44.48$ & $11.52$ & \xmark & \xmark & \xmark & & \xmark & \xmark & \xmark \\
$44.55$ & $13.17$ & & & & \xmark & & & \\
$47.09$ & $13.29$ & & & & & \xmark & \xmark & \xmark \\
$52.18$ & $16.48$ & & & & & & \xmark & \\
$54.87$ & $18.27$ & & & & & \xmark & & \\
$54.87$ & $18.27$ & & & & & & & \xmark \\
$54.87$ & $18.27$ & & & & & & & \\ \bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vspace{0.25cm}
\caption{Selective ablation of NeuralCX inputs. Features that are marked \xmark\hspace{0.025cm} are replaced with noise. Ablations are sorted from top to bottom in order of disruptiveness, with the bottom row showing results from an undisturbed model. The different features are defined in Table \ref{table:features}.}
\label{table:ablation}
\vspace{-0.5cm}
\end{table}
The Embedding Model performed between Hard Negative Mining and the Distance Baseline. Interestingly, the value of $\lambda$ that maximized performance was 1.0, meaning that integrating the overt probability of $A$ under the VQA model only hurt accuracy. We observed a smooth increase in performance as we varied $\lambda$ between 0 and 1. Clearly, there is some signal in the relative position of the candidate answer embeddings around the ground truth answer, but not enough to improve on the information captured in the visual feature distance.
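A minimal sketch of this $\lambda$-interpolation (the exact normalization of the two terms in our implementation is not shown; treat the combination rule as illustrative):

```python
import numpy as np

def embedding_model_scores(answer_sims, vqa_probs, lam):
    """Interpolate between answer-embedding similarity and the VQA
    model's overt probability of each candidate answer.  Both inputs
    are length-24 arrays, one entry per candidate counterexample.
    With lam = 1.0 (the best value we found), the VQA probability
    term is ignored entirely."""
    a = np.asarray(answer_sims, dtype=float)
    p = np.asarray(vqa_probs, dtype=float)
    return lam * a + (1.0 - lam) * p
```

Sweeping `lam` from 0 to 1 reproduces the smooth performance increase described above.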
The NeuralCX model significantly outperformed both the Distance Baseline and the two-headed model from \citet{VQA2}. To quantify the impact of the VQA model on the performance of NeuralCX, we tested three conditions for the underlying VQA model: untrained, pretrained, and trainable. In the untrained condition, we initialized NeuralCX with an untrained VQA model. In the pretrained condition, we initialized NeuralCX with a pretrained VQA model, which was frozen during VQA-CX training. In the trainable condition, we allowed gradients generated by the loss layer of NeuralCX to backpropagate through the VQA model. We found that fine-tuning the VQA model in this manner produced small gains over the pretrained model. Meanwhile, with an untrained VQA model, the recall@5 of NeuralCX was only 2.39 points lower than with a trained model.
In the NeuralCX ablation experiments, we found that visual features were crucial to strong performance. Without any visual features, recall fell below the Distance Baseline. Both $V$ and the rank embedding appear to be especially important to the task. Intriguingly, these features also appear to be interdependent; ablating either $V$ or the rank embedding was almost as disruptive as ablating both. Meanwhile, we found that ablating the non-visual features produced a much smaller impact. While ablating $A$ resulted in a small performance drop, ablating $Q$ and $Z$ did not affect performance at all.
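The noise-replacement ablation used in these experiments can be sketched as follows (an illustrative helper, not the exact training code):

```python
import numpy as np

def ablate_features(features, to_ablate, rng):
    """Replace the named input features with uniform-noise vectors of
    the same shape, leaving the remaining features untouched.
    Randomizing rather than removing inputs keeps the model
    architecture constant across experiments; the ablated model is
    then fully retrained.  `features` maps names such as 'V', 'Q',
    'A', 'Z' to feature vectors."""
    out = {}
    for name, vec in features.items():
        vec = np.asarray(vec, dtype=float)
        out[name] = rng.uniform(size=vec.shape) if name in to_ablate else vec
    return out
```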
\section{Discussion}
\begin{figure*}[hbtp]
\begin{center}
\centerline{\includegraphics[width=1.75\columnwidth]{viz2_final.png}}
\caption{Qualitative results on the counterexample prediction task. Left: The original image and ground truth counterexample from VQA 2.0, along with the question and ground truth answers. Right: the top 5 counterexamples selected by NeuralCX, with the top 3 answers (as scored by a VQA model) underneath. In the top 4 rows, the model correctly identifies the correct counterexample (green outline). In the bottom 4 rows, the model fails. See the Discussion for a review of common failure modes.}
\label{fig:qualitative}
\end{center}
\end{figure*}
Our results highlight both promises and challenges associated with VQA-CX. On the one hand, the fact that NeuralCX outperforms the methods from \citet{VQA2} demonstrates that the data contain enough signal to support supervised learning. This result is especially important in light of the pronounced skew in the distribution over the rank of $I^*$ in the dataset, which makes approaches based merely on image distance unreasonably dominant. Given that the supervised neural model from \citet{VQA2} barely surpasses the Distance Baseline, it seems likely that this model overfits to the $I^*$ rank distribution. Indeed, the $K \times K$ fully-connected layer of this model inherently limits the information that can pass through to the output. Due to this bottleneck, it is unlikely that this network learns anything other than the optimal activation biases of the $K$ output units.
In contrast, we observed that NeuralCX effectively leverages both visual and semantic information. When provided with only visual features, the recall@5 for NeuralCX was 7.78 percentage points lower than when the model was provided with both visual and semantic features (Table \ref{table:ablation}). In particular, the answer embedding provides information about the semantic similarity between $A'$ and $A$, which we hypothesize allows the model to select counterexamples that are semantically distinct from the original example. The strong performance of the Embedding Model---which bases its predictions solely on answer similarity, and does not model image distance---also supports this hypothesis. Thus, while visual similarity remains a crucial feature for VQA-CX, our findings demonstrate that in order to achieve peak performance on this task, a model must also leverage semantic information.
While our results indicate that the answer embeddings encode task-relevant information, the same cannot be said for the multimodal embeddings $Z$ produced by the VQA model. In our ablation experiments, we found that replacing $Z$ and $Z'$ with noise did not affect the performance of NeuralCX. Since $Z$ is, by definition, a joint embedding of $Q$ and $V$, it is possible that $Z$ encodes redundant information. However, if this were the case, we would expect $Z$ to help the model in cases where visual features are not available. Instead, we see a significant drop in accuracy when we ablate the visual features but leave $Z$, suggesting that $Z$ does not support the recovery of visual features.
Our experiments with untrained VQA models suggest that the representations learned by the VQA model do not contain useful information for identifying counterexamples. Replacing the pretrained VQA model with an untrained version results in a decrease of only 2.39 recall@5. (Based on our ablation experiments, this performance hit is not due to the loss of $Z$, but rather to the loss of the distribution over the counterexample answer $P(\mathcal{A'})$, which is used to weight the embedding representation of $A'$.) One could argue that it is unfair to expect the VQA model to provide useful information for VQA-CX, since it was not trained on this task. However, when we co-train the VQA model with NeuralCX, we find only a small performance improvement compared to the pretrained model. This result holds regardless of whether the VQA model is initialized from pretrained weights when trained on VQA-CX.
This transfer failure raises questions about the extent to which models that perform well on the VQA dataset actually learn semantic distinctions between visually-similar images. In our qualitative analysis, we found that while the VQA model often produces the correct answer, it also assigns high probability to semantically-opposite answers. For instance, when answering ``yes,'' the model's other top guesses are almost always ``no'' and ``unsure.'' Similarly, for counting questions, the VQA model often hedges by guessing a range of numbers; e.g., ``1, 2, 3'' (see Fig. \ref{fig:qualitative}). While this strategy may be optimal for the VQA task, it suggests that the VQA model is effectively memorizing what types of answers are likely to result from questions and images. In other words, it is unclear from these results whether the VQA model can actually distinguish between the correct answer and other answers with opposite meanings.
While our results expose issues with existing approaches to VQA, it is important to consider two external failure modes that also affect performance on VQA-CX. First, in some cases, NeuralCX fails to fully utilize information from the VQA model. On certain examples, even when the VQA model correctly identified a particular $I'$ as producing the same answer as the original, NeuralCX still chose $I'$ as the counterexample. In other cases, NeuralCX incorrectly assigned high scores to images for which $A' \approx A$; e.g., an image of children ``playing'' was selected as a counterexample to an image of children ``playing game.'' These failures indicate that NeuralCX does not optimally leverage the semantic information provided by the VQA model.
The second failure mode arises from issues with the data itself. While the complementary pairs data in VQA 2.0 makes it possible to formalize counterexample prediction as its own machine learning task, several idiosyncrasies in the data make VQA-CX a partially ill-posed problem.
\begin{itemize}
\item There may be multiple images in $I_\mathrm{NN}$ that could plausibly serve as counterexamples. This is particularly evident for questions that involve counting (e.g., for $Q=$ ``How many windows does the house have?'' the majority of images in $I_\mathrm{NN}$ are likely to contain a different number of windows than the original image.) In many cases, our models identified valid counterexamples that were scored as incorrect, since only a single $I^* \in I_\mathrm{NN}$ is labeled as the ground truth.
\item For approximately 9\% of the examples, the counterexample answer $A^*$ is the same as $A$. This irregularity is due to the fact that the tasks of identifying counterexamples and assigning answer labels were assigned to different groups of crowd workers \citep{VQA2}. In addition to potential inter-group disagreement, the latter group had no way of knowing the intentions of the former. This discontinuity resulted in a subset of degenerate ground truth counterexamples.
\item The distribution over the rank of $I^*$ within $I_\mathrm{NN}$ is not uniform; there is a strong bias towards closer nearest neighbors. In the training set, $I^*$ falls within the top 5 nearest neighbors roughly 44\% of the time.
\item Certain questions require common knowledge that VQA models are unlikely to possess (e.g., ``Is this a common animal to ride in the US?''; ``Does this vehicle require gasoline?'').
\item Other questions require specialized visual reasoning skills that, while within reach for current machine learning methods, are unlikely to be learned by general VQA architectures (e.g., ``What is the second word on the sign?'' or ``What time is on the clock?'')
\item Finally, a small portion of the questions in VQA 2.0 simply do not admit a counterexample. For instance, given the question, ``Do zebras have horses as ancestors?'' it is impossible to select an image, zebra or otherwise, that reverses biological fact.
\end{itemize}
While these idiosyncrasies in the data make VQA 2.0 a less than ideal domain for this task, we nevertheless view work on VQA-CX as crucial to the broader goals of representation learning. As leaderboard-based competitions like the VQA Challenge\footnote{http://www.visualqa.org/challenge.html} continue to steer research efforts toward singular objectives, we feel that auxiliary VQA tasks like counterexample prediction offer important means for cross-validating progress.
\section{Conclusion}
In this work, we explored VQA-CX: a reformulation of the VQA task that requires models to identify counterexamples. We introduced two architectures for evaluating the performance of existing VQA models on this task. Using these models, we established a new state-of-the-art for counterexample prediction on VQA 2.0. While we used a top-performing VQA model in our experiments, we found that the representations learned by this model did not contribute to performance on VQA-CX. Our results suggest that VQA models that perform well on the original task do not necessarily learn conceptual distinctions between visually-similar images. These findings raise important questions about the effectiveness of VQA in general, and the VQA 2.0 dataset in particular, as a benchmark of multimodal reasoning.
These issues occur amidst general concerns about AI interpretability. As machine learning systems assume increasing levels of decision-making responsibility in society, it is imperative that they be able to provide human-interpretable explanations \citep{doshi2017towards}. Within the domain of VQA, explanation by counterexample serves as a potentially-useful way for machines to build trust among their users \citep{VQA2}. However, this explanation modality will only serve its intended purpose if the underlying systems can meaningfully represent and distinguish between the semantic concepts encoded in images. We hope that this work will motivate the development of new benchmarks that adopt a more nuanced approach to the evaluation of visual-semantic reasoning.
\section{Introduction}
The paper deals with homogenization problem for integral operators of convolution type in $\mathbb R^d$ with dispersal kernels
that have random statistically homogeneous ergodic coefficients. For such operators, under natural integrability, moment and uniform
ellipticity conditions as well as the symmetry condition we prove the homogenization result and study the properties of the limit operator.
The integral operators with a kernel of convolution type are of great interest both from the mathematical point of view and due to various important applications in other fields. Among such applications are models of population dynamics and ecological models, see \cite{OFetal}, \cite{DEE} and references therein,
non-local diffusion problems, see \cite{AMRT, BCF},
continuous particle systems, see \cite{ FKK, KPZ}, image processing algorithms, see \cite{GiOs}. In the cited works only the case of homogeneous environments has been considered. In this case the corresponding dispersal kernel depends only on the displacement $y-x$. However, many applications deal with non-homogeneous environments.
Such environments are described in terms of integral operator whose dispersal kernels depend not only on the displacement $x-y$ but also
on the starting and the ending positions $x, y$.
When studying the large-time behaviour of evolution processes in these environments it is natural to make the diffusive scaling
in the corresponding integral operators and to consider the homogenization problem for the obtained family of operators with a small
positive parameter. In what follows we call this parameter $\eps$.
The case of environments with periodic characteristics has been studied in the recent work
\cite{PiZhi17}. It has been shown that under natural moment and symmetry conditions on the kernel the family of rescaled operators admits homogenization, and that for the corresponding jump Markov process the Central Limit Theorem and the Invariance Principle hold. Interesting homogenization problems for periodic operators containing both a second order elliptic operator
and a nonlocal L\'evy type operator have been considered in \cite{Arisawa} and \cite{Sandric2016}.
In the present paper we consider the more realistic case of environments with random statistically homogeneous characteristics.
More precisely, we assume that the dispersal kernel of the studied operators has the form $\Lambda(x,y)a(x-y)$, $x,\,y\in\mathbb R^d$,
where $a(z)$ is a deterministic even function that belongs to $L^1(\mathbb R^d)\cap L^2_{\rm loc}(\mathbb R^d)$ and has finite second moments,
while $\Lambda(x,y)=\Lambda(x,y,\omega)$ is a statistically homogeneous symmetric ergodic random field that satisfies the uniform ellipticity conditions
$0<\Lambda^-\leq \Lambda(x,y)\leq \Lambda^+$.\\
Making a diffusive scaling we obtain the family of operators
\begin{equation}\label{L_u_biseps}
(L^\eps u)(x) \ = \ \eps^{-d-2} \int\limits_{\mathbb R^d} a\Big(\frac{x-y}{\eps}\Big)
\Lambda\Big(\frac{x}{\eps},\frac{y}{\eps}\Big) (u(y) - u(x)) dy,
\end{equation}
where the positive scaling factor $\eps$ is a small parameter.
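To see why this scaling is natural, consider the spatially homogeneous case $\Lambda\equiv 1$; a formal Taylor expansion (a heuristic sketch only, not used in the proofs below) gives

```latex
(L^\eps u)(x) \;=\; \eps^{-2}\!\int_{\mathbb R^d} a(z)\,\big(u(x-\eps z)-u(x)\big)\,dz
\;=\; -\,\eps^{-1}\,\nabla u(x)\cdot\!\int_{\mathbb R^d} z\,a(z)\,dz
\;+\;\frac12\sum_{i,j=1}^{d}\Big(\int_{\mathbb R^d} z_i z_j\,a(z)\,dz\Big)
\frac{\partial^2 u}{\partial x_i\,\partial x_j}(x)\;+\;O(\eps).
```

Since $a$ is even, the first moment vanishes, and by the finiteness of the second moments the limit is a second order operator with constant coefficients. The content of the homogenization theorem is that the rapidly oscillating factor $\Lambda\big(\frac{x}{\eps},\frac{y}{\eps}\big)$ turns these coefficients into an effective matrix $\Theta$.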
For simplicity of presentation we assume in this paper that $\Lambda(x,y)=\mu(x)\mu(y)$ with a statistically homogeneous ergodic
field $\mu$. However, all our results remain valid for generic statistically homogeneous symmetric random fields $\Lambda(x,y)$
that satisfy the above ellipticity conditions.
The main goal of this work is to investigate
the limit behaviour of $L^\eps$ as $\eps\to 0$.
We are going to show that the family $L^\eps$ converges almost surely to a second order
elliptic operator with constant deterministic coefficient in the so-called $G$-topology, that is
for any $m>0$ the family of operators $(-L^\eps+m)^{-1}$ almost surely converges strongly in $L^2(\mathbb R^d)$
to the operator $(-L^0+m)^{-1}$ where $L^0=\Theta^{ij}\frac{\partial^2}{\partial x^i\partial x^j}$,
and $\Theta$ is a positive definite constant matrix.
There is a vast literature devoted to
the homogenization theory of differential operators; at present it is a well-developed area, see for instance the monographs \cite{BLP} and \cite{JKO}. The first homogenization results for divergence form differential operators with random coefficients were obtained
in the pioneering works \cite{Ko78} and \cite{PaVa79}. In these works it was shown that the generic divergence form second order elliptic
operator with random statistically homogeneous coefficients admits homogenization. Moreover, the limit operator has constant coefficients,
in the ergodic case these coefficients are deterministic.
Later on a number of important homogenization results have been obtained for various elliptic and parabolic differential equations and system of equations in random stationary media. The reader can find many references in the book \cite{JKO}.
Homogenization of elliptic difference schemes and discrete operators in statistically homogeneous media has been performed in
\cite{Ko87}, \cite{Ko86}. Also, in \cite{Ko86} several limit theorems have been proved for random walks in stationary discrete
random media that possess different types of symmetry.
To the best of our knowledge, the existing literature contains no results on stochastic homogenization of convolution type integral operators with a dispersal kernel that has stationary rapidly oscillating coefficients.
In the one-dimensional case a homogenization problem for the operators that have both local and non-local parts has been
considered in the work \cite{Rho_Var2008}. This work deals with scaling limits of the solutions to stochastic differential
equations in dimension one with stationary coefficients driven by Poisson random measures and Brownian
motions. The annealed convergence theorem is proved, in which the limit exhibits
a diffusive or superdiffusive behavior, depending on whether the Poisson random measure has a finite second moment
or not. It is important in this paper that the diffusion coefficient does not degenerate.
Our approach relies on asymptotic expansion techniques and on the use of the so-called corrector. As often happens in the case of
random environments, we cannot claim the existence of a stationary corrector. Instead, we construct a corrector which is a random field
in $\mathbb R^d$ with stationary increments that almost surely has sublinear growth in $L^2(\mathbb R^d)$. \\
When we substitute the two leading terms of the expansion into the original equation, the resulting
discrepancies are oscillating functions with zero average.
Some of these functions are not stationary.
In order to show that the contributions of these discrepancies
are asymptotically negligible we add to the expansion two extra terms. The necessity of constructing these terms is essentially
related to the fact that, in contrast with the case of elliptic differential equations, the resolvent of the studied operator is not
locally compact in $L^2(\mathbb R^d)$.
The paper is organized as follows:
In Section \ref{s_pbmset} we provide the detailed setting of the problem
and formulate the main result of this work.
The leading terms of the ansatz for a solution of equation $(L^\eps-m)u^\eps=f$ with $f\in C_0^\infty(\mathbb R^d)$ are introduced in Section \ref{s_asyexp}. Also in this section we outline the main steps of the proof of our homogenization theorem.
Then in Section \ref{s_corr} we construct the principal corrector in the asymptotic expansion and study the properties
of this corrector.
Section \ref{s_addterms} is devoted to constructing two additional terms of the expansion of $u^\eps$. Then we introduce
the effective matrix and prove its positive definiteness.
Estimates for the remainder in the asymptotic expansion are obtained in Section \ref{s_estrem}.
Finally, in Section \ref{s_proofmain} we complete the proof of the homogenization theorem.
\section{Problem setup and main result}\label{s_pbmset}
\noindent
We consider a homogenization problem for a random convolution type operator of the form
\begin{equation}\label{L_u}
(L_\omega u)(x) \ = \ \mu(x,\omega) \int\limits_{\mathbb R^d} a(x-y) \mu(y,\omega) (u(y) - u(x)) dy.
\end{equation}
For the function $a(z)$ we assume the following:
\begin{equation}\label{A1}
a(z) \in L^{1}(\mathbb R^d) \cap L^{2}_{\rm loc}(\mathbb R^d), \quad a(z) \ge 0; \quad a(-z) = a(z),
\end{equation}
and
\begin{equation}\label{M2}
\| a \|_{L^1(\mathbb R^d)} = \int\limits_{\mathbb R^d} a(z) \ dz = a_1 < \infty; \quad \sigma^2 = \int\limits_{\mathbb R^d} |z|^2 a(z) \ dz < \infty.
\end{equation}
We also assume that
\begin{equation}\label{add}
\mbox{there exists a constant} \; c_0>0 \; \mbox{ and a cube } \; {\bf B} \subset \mathbb R^d, \; \mbox{ such that } \; a(z) \ge c_0 \quad \mbox{for all } \; z \in {\bf B}.
\end{equation}
This additional condition on $a(z)$ is naturally satisfied for regular kernels; we introduce \eqref{add} for simplicity of presentation. Assumption \eqref{add} essentially simplifies the derivation of inequality \eqref{L2B}, on which the proof of the smallness of the first corrector is based, see Proposition \ref{1corrector} below. We note that inequality \eqref{L2B} can also be derived without assumption \eqref{add}, but in this case additional measure-theoretic arguments are required.
\\[5pt]
Let $(\Omega,\mathcal{F}, \mathbb P)$ be a standard probability space.
We assume that the random field $ \mu(x,\omega)= {\bm\mu} (T_x \omega) $ is stationary and bounded from above and from below:
\begin{equation}\label{lm}
0< \alpha_1 \le \mu(x,\omega) \le \alpha_2 < \infty;
\end{equation}
here ${\bm\mu} (\omega) $ is a random variable, and $T_x$, $x\in \mathbb R^d$, is an ergodic group of measurable transformations acting in $\omega$-space $\Omega$, $T_x:\Omega
\mapsto\Omega$, and possessing the following properties:
\begin{itemize}
\item $T_{x+y}=T_x\circ T_y\quad\hbox{for all }x,\,y\in\mathbb R^d,\quad T_0={\rm Id}$,
\item $\mathbb P(A)=\mathbb P(T_xA)$ for any $A\in\mathcal{F}$ and any $x\in\mathbb R^d$,
\item $T_x$ is a measurable map from $\mathbb R^d\times \Omega$ to $\Omega$, where $\mathbb R^d$ is equipped
with the Borel $\sigma$-algebra.
\end{itemize}
\medskip
Let us consider the following family of operators
\begin{equation}\label{L_eps}
(L^{\varepsilon}_\omega u)(x) \ = \ \frac{1}{\varepsilon^{d+2}} \int\limits_{\mathbb R^d} a \Big( \frac{x-y}{\varepsilon} \Big) \mu \Big( \frac{x}{\varepsilon},\omega \Big) \mu \Big( \frac{y}{\varepsilon},\omega \Big) \Big( u(y) - u(x) \Big) dy.
\end{equation}
We are interested in the limit behavior of the operators $L^{\varepsilon}_\omega$ as $\varepsilon \to 0$.
We are going to show that for a.e. $\omega$ the operators $L^{\varepsilon}_\omega$ converge, in the topology of resolvent convergence, to a differential operator with constant coefficients. Let us fix $m>0$ and $f \in L^2(\mathbb R^d)$, and define $u^{\varepsilon}$ as the solution of the equation
\begin{equation}\label{u_eps}
(L^{\varepsilon}_\omega - m) u^{\varepsilon} \ = \ f, \quad \mbox{ i.e. } \; u^{\varepsilon} \ = \ (L^{\varepsilon}_\omega - m)^{-1} f
\end{equation}
with $f \in L^2(\mathbb R^d)$. Denote by $\hat L$ the following operator in $L^2(\mathbb R^d)$:
\begin{equation}\label{L_hat}
\hat L u \ = \ \sum_{i,j = 1}^d \Theta_{i j} \frac{\partial^2 u}{\partial x_i \ \partial x_j}, \quad {\cal D}(\hat L) = H^2(\mathbb R^d)
\end{equation}
with a positive definite matrix $\Theta = \{ \Theta_{i j} \}, \ i,j = 1, \ldots, d,$ defined below, see (\ref{Positive}). Let $u_0(x)$ be the solution of equation
\begin{equation}\label{u_0}
\sum_{i,j = 1}^d \Theta_{i j} \frac{\partial^2 u_0}{\partial x_i \ \partial x_j} - m u_0 = f, \quad \mbox{ i.e. } \; u_0 \ = \ (\hat L - m)^{-1} f
\end{equation}
with the same right-hand side $f$ as in (\ref{u_eps}).
\begin{theorem}\label{T1} Almost surely for any $f \in L^2(\mathbb R^d)$ and any $m>0$ the convergence holds:
\begin{equation}\label{t1}
\| (L^{\varepsilon}_\omega - m)^{-1} f - (\hat L - m)^{-1} f \|_{L^2(\mathbb R^d)} \ \to 0 \quad \mbox{ as } \; \varepsilon \to 0.
\end{equation}
\end{theorem}
The statement of Theorem \ref{T1} remains valid in the case of non-symmetric operators $L^\eps$ of the form
\begin{equation}\label{L_eps_ns}
(L^{\varepsilon,{\rm ns}}_\omega u)(x) \ = \ \frac{1}{\varepsilon^{d+2}} \int\limits_{\mathbb R^d} a \Big( \frac{x-y}{\varepsilon} \Big) \lambda \Big( \frac{x}{\varepsilon},\omega \Big) \mu \Big( \frac{y}{\varepsilon},\omega \Big) \Big( u(y) - u(x) \Big) dy
\end{equation}
with $\lambda(z,\omega)=\bm{\lambda}(T_z\omega)$ such that $0< \alpha_1 \le \lambda(x,\omega) \le \alpha_2 < \infty$.
In this case the equation \eqref{u_eps} reads
\begin{equation}\label{u_eps_nssss}
(L^{\varepsilon,{\rm ns}}_\omega - m) u^{\varepsilon} \ = \ f.
\end{equation}
\begin{corollary}\label{cor_main}
Let $\lambda(z,\omega)$ and $\mu(z,\omega)$ satisfy condition \eqref{lm}. Then a.s. for any $f\in L^2(\mathbb R^d)$ and any $m>0$ the limit relation in \eqref{t1} holds true with $\hat L^{\rm ns} u \ = \ \sum_{i,j = 1}^d \Theta^{\rm ns}_{i j} \frac{\partial^2 u}{\partial x_i \ \partial x_j}$, \
$\Theta^{\rm ns}=\big(\mathbb E \big\{\frac{\bm\mu}{\bm\lambda}\big\}\big)^{-1} \Theta$, and $\Theta$ defined in
\eqref{Positive}.
\end{corollary}
\section{Asymptotic expansion for $u^\eps$ }\label{s_asyexp}
We begin this section by introducing a set of functions $f \in C_0^\infty(\mathbb R^d)$ such that
$u_0 \ = \ (\hat L - m)^{-1} f\in C_0^\infty(\mathbb R^d)$. We denote this set by $ {\cal S}_0(\mathbb R^d)$. Observe that
this set is dense in $L^2(\mathbb R^d)$. Indeed, if we take $\varphi(x)\in C^\infty(\mathbb R)$ such that $0\leq\varphi \leq 1$,
$\varphi=1$ for $x\leq 0$ and $\varphi=0$ for $x\geq 1$, then letting $f_n=(\hat L-m)\big(\varphi(|x|-n)(\hat L-m)^{-1}f(x)\big)$
one can easily check that $f_n\in C_0^\infty(\mathbb R^d)$ and $\|f_n-f\|_{L^2(\mathbb R^d)}\to0$, as $n\to\infty$.\\
We consider first the case when $f \in {\cal S}_0(\mathbb R^d)$
and denote by $Q$ a cube centered at the origin such that $\mathrm{supp}(u_0)\subset Q$.
We want to
prove the convergence
\begin{equation}\label{convergence1}
\| u^{\varepsilon} - u_0 \|_{L^2(\mathbb R^d)} \ \to 0, \quad \mbox{ as } \ \varepsilon \to 0,
\end{equation}
where the functions $u^\varepsilon$ and $u_0$ are defined in (\ref{u_eps}) and (\ref{u_0}), respectively.
To this end we approximate the function $ u^\varepsilon (x, \omega)$ by means of the following ansatz
\begin{equation}\label{v_eps}
w^{\varepsilon}(x, \omega) \ = \ v^\varepsilon (x, \omega) + u_2^\varepsilon (x, \omega) + u_3^\varepsilon(x, \omega), \quad \mbox{ with } \; v^{\varepsilon}(x, \omega) \ = \ u_0(x)+ \varepsilon \theta \big(\frac{x}{\varepsilon}, \omega\big) \nabla u_0(x),
\end{equation}
where $\theta \big(z, \omega\big) $ is a vector function which is often called a corrector. It will be introduced later on as a solution of an auxiliary problem that does not depend on $\eps$, see \eqref{korrkappa1}. A solution of this problem, $\theta(z,\omega)$ say, is defined up to an additive
constant vector. \\ We set
\begin{equation}\label{hi}
\chi^\varepsilon (z,\omega) = \theta (z,\omega)+ c^\varepsilon (\omega), \quad c^\varepsilon (\omega) = - \frac{1}{|Q|} \int\limits_Q \theta \big( \frac{x}{\varepsilon},\omega \big) dx.
\end{equation}
Observe that under such a choice of the vector $c^\eps$
the function $\chi^\varepsilon \big(\frac x\eps,\omega\big)$ has zero average in $Q$. We show in Proposition \ref{1corrector} that $\eps c^\eps\to 0$ a.s.
It should be emphasized that $\theta (y, \omega)$ need not be a stationary field, that is we do not claim that $\theta(y, \omega) = {\bm\theta} (T_y \omega)$ for some random vector ${\bm\theta}(\omega)$.
Two other functions, $u_2^\varepsilon$ and $u_3^\varepsilon$, that appear in the ansatz in \eqref{v_eps} will be introduced in \eqref{corr-u2}, \eqref{u3}, respectively.
Substituting $v^{\varepsilon}$ for $u$ in (\ref{L_eps}) we get
$$
(L^{\varepsilon} v^{\varepsilon})(x) \ = \ \frac{1}{\varepsilon^{d+2}} \int\limits_{\mathbb R^d} a \big( \frac{x-y}{\varepsilon} \big) \mu \big( \frac{x}{\varepsilon} \big) \mu \big( \frac{y}{\varepsilon} \big)
\Big( u_0(y)+ \varepsilon \theta \big(\frac{y}{\varepsilon}\big) \nabla u_0(y)
- u_0(x)-\varepsilon \theta \big(\frac{x}{\varepsilon} \big) \nabla u_0(x)
\Big) dy;
$$
here and in what follows we drop the argument $\omega$ in the random fields $\mu(y,\omega)$, $\theta(y,\omega)$, etc.,
if it does not lead to ambiguity.
After the change of variables $\frac{x-y}{\varepsilon}=z$ we get
\begin{equation}\label{ml_1}
(L^{\varepsilon} v^{\varepsilon})(x) \ = \ \frac{1}{\varepsilon^{2}} \int\limits_{\mathbb R^d} dz \ a (z) \mu \big( \frac{x}{\varepsilon} \big) \mu \big( \frac{x}{\varepsilon} -z \big) \Big( u_0(x-\varepsilon z) - u_0(x) + \varepsilon \theta \big(\frac{x}{\varepsilon}-z \big) \nabla u_0 (x-\varepsilon z) -\varepsilon \theta \big( \frac{x}{\varepsilon} \big) \nabla u_0(x) \Big).
\end{equation}
The Taylor expansion of a function $u(y)$ with the remainder in integral form reads
$$
\begin{array}{c}
u(y) \ = \ u(x) + \int_0^1 \nabla u (x + (y-x)t) \cdot (y-x) \ dt \\[3pt]
= \ u(x) + \nabla u(x) \cdot (y-x) + \int_0^1 \nabla \nabla u(x+(y-x)t) \cdot (y-x)\!\otimes\!(y-x) \, (1-t) \ dt
\end{array}
$$
and is valid for any $x, y \in \mathbb R^d$. Thus we can rewrite (\ref{ml_1}) as follows
\begin{eqnarray}
(L^{\varepsilon} v^{\varepsilon})(x) \hskip -1.7cm &&\nonumber\\[1.6mm]
\label{K2_1}
&&\!\!\!\!\!=\, \frac{1}{\varepsilon} \mu \Big( \frac{x}{\varepsilon}, \omega \Big)\nabla u_0(x)\! \cdot\! \int\limits_{\mathbb R^d} \Big[ -z + \theta \Big(\frac{x}{\varepsilon}-z, \omega \Big) - \theta \Big(\frac{x}{\varepsilon}, \omega \Big) \Big] a (z) \mu \Big( \frac{x}{\varepsilon} -z, \omega \Big) \, dz
\\[1mm]
\nonumber
&&\!\!\!\!\! +\,\mu \Big(\! \frac{x}{\varepsilon}, \omega \Big) \nabla \nabla u_0 (x)\!\cdot\! \int\limits_{\mathbb R^d}\! \Big[ \frac12 z\!\otimes\!z\! - z \!\otimes\!\theta \Big(\frac{x}{\varepsilon}\!-\!z,\omega \Big) \Big] a (z) \mu \Big( \frac{x}{\varepsilon}\! -\!z, \omega \Big) \, dz
+\, \ \phi_\varepsilon (x) \hfill\\
\nonumber
&&=: \frac{1}{\varepsilon} I^\varepsilon_{-1} + \varepsilon^0 I^\varepsilon_0 + \phi_\varepsilon
\end{eqnarray}
with
\begin{equation}\label{14}
\begin{array}{rl} \displaystyle
\!\!\!\!&\hbox{ }\!\!\!\!\!\!\!\!\!\!\!\!\phi_\varepsilon (x, \omega) =\\[3mm]
& \!\!\!\!\!\!\!\!\displaystyle
\!\! \int\limits_{\mathbb R^d}\! a (z) \mu \Big( \frac{x}{\varepsilon},\omega \Big) \mu \Big( \frac{x}{\varepsilon}\! -\!z,\omega \Big) \bigg(\int\limits_0^{1} \nabla \nabla u_0(x-\varepsilon z t) \!\cdot\! z\!\otimes\!z \,(1-t) \ dt - \frac{1}{2} \nabla \nabla u_0(x)\!\cdot\! z\!\otimes\!z \bigg) \, dz
\\[4mm] &\!\!\!\!\!\!\!\!\! \displaystyle
+\, \frac{1}{\varepsilon} \mu \Big( \frac{x}{\varepsilon},\omega \Big) \int\limits_{\mathbb R^d} \ a (z) \mu \Big( \frac{x}{\varepsilon} -z, \omega \Big) \theta \Big(\frac{x}{\varepsilon}\!-\!z,\omega \Big)\! \Big(\nabla u_0(x- \varepsilon z) - \nabla u_0(x) \Big)\, dz
\\[4mm] &\!\!\!\!\!\!\!\!\! \displaystyle
+ \mu \Big( \frac{x}{\varepsilon},\omega \Big) \nabla \nabla u_0(x) \int\limits_{\mathbb R^d} \ a (z) \mu \Big( \frac{x}{\varepsilon} -z, \omega \Big) z \otimes \theta \Big(\frac{x}{\varepsilon}\!-\!z,\omega \Big)\, dz.
\end{array}
\end{equation}
Here and in what follows $z\otimes z$ stands for the matrix $\{z_iz_j\}_{i,j=1}^d$.
Let us outline the main steps of the proof of relation \eqref{convergence1}.
In order to make the term $I^\eps_{-1}$ in \eqref{K2_1} equal to zero, we should
construct a random field $\theta \big(z, \omega\big)$ that satisfies the following equation
\begin{equation}\label{korr1}
\int\limits_{\mathbb R^d} \Big( -z + \theta \big(\frac{x}{\varepsilon}-z, \omega \big) - \theta \big(\frac{x}{\varepsilon}, \omega\big) \Big) \, a (z) \mu \big( \frac{x}{\varepsilon} -z,\omega \big) \ dz \ = \ 0.
\end{equation}
The goal of the first step is to construct such a random field $\theta(z,\omega)$.
Next we show that the second term $I^\varepsilon_0$ can be represented as a sum
$$
I^\varepsilon_0 = \hat L u_0 + S\Big(\frac x\eps,\omega\Big)\nabla\nabla u_0 + f_2^\varepsilon (x,\omega),
$$
where $S(z,\omega)$ is a stationary matrix-field with zero average, and $f_2^\varepsilon (x,\omega)$ is a non-stationary term; both of them are introduced below. We define $u_2^\varepsilon$ and $u_3^\varepsilon$ by
$$
(L^\varepsilon - m) u_2^\varepsilon = - S\Big(\frac x\eps,\omega\Big)\nabla\nabla u_0, \quad (L^\varepsilon - m) u_3^\varepsilon = - f_2^\varepsilon (x,\omega),
$$
and prove that $\| u_2^\varepsilon \|_{L^2(\mathbb R^d)} \to 0$, $\| u_3^\varepsilon \|_{L^2(\mathbb R^d)} \to 0$. Then,
using the properties of the corrector $\theta$, see Theorem \ref{t_corrector}, we derive the limit relation
$\|\varepsilon \theta\big(\frac x\eps\big) \nabla u_0(x) \|_{L^2(\mathbb R^d)} \to 0$ as $\varepsilon \to 0$.
This yields $\| w^\varepsilon - u_0 \|_{L^2(\mathbb R^d)} \to 0$.
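For the reader's convenience we spell out this step. Recalling the decomposition $w^\varepsilon = v^\varepsilon + u_2^\varepsilon + u_3^\varepsilon$ used below with $v^\varepsilon(x) = u_0(x) + \varepsilon \theta\big(\frac x\eps\big) \nabla u_0(x)$, the triangle inequality gives
$$
\| w^\varepsilon - u_0 \|_{L^2(\mathbb R^d)} \ \le \ \Big\| \varepsilon \theta\big(\frac x\eps\big) \nabla u_0 \Big\|_{L^2(\mathbb R^d)} + \| u_2^\varepsilon \|_{L^2(\mathbb R^d)} + \| u_3^\varepsilon \|_{L^2(\mathbb R^d)} \ \to \ 0, \quad \varepsilon \to 0.
$$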
With this choice of $\theta$, $u_2^\varepsilon$ and $u_3^\varepsilon$ the expression $(L^\varepsilon - m) w^\varepsilon$ can be rearranged as follows:
$$
(L^\varepsilon - m) w^\varepsilon = (L^\varepsilon - m) v^\varepsilon + (L^\varepsilon - m) (u_2^\varepsilon + u_3^\varepsilon) =
(\hat L - m) u_0 + \phi_\varepsilon - m \varepsilon \theta \nabla u_0
$$
$$
= f + \phi_\varepsilon - m \varepsilon \theta \nabla u_0 = (L^\varepsilon - m) u^\varepsilon + \phi_\varepsilon - m \varepsilon \theta \nabla u_0.
$$
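In more detail, the passage to $(\hat L - m) u_0 + \phi_\varepsilon - m \varepsilon \theta \nabla u_0$ above follows from \eqref{K2_1} with $I^\varepsilon_{-1}=0$, the decomposition of $I^\varepsilon_0$ and the equations defining $u_2^\varepsilon$ and $u_3^\varepsilon$:
$$
(L^\varepsilon - m) w^\varepsilon = \hat L u_0 + S\Big(\frac x\eps\Big)\nabla\nabla u_0 + f_2^\varepsilon + \phi_\varepsilon - m u_0 - m \varepsilon \theta \nabla u_0 - S\Big(\frac x\eps\Big)\nabla\nabla u_0 - f_2^\varepsilon
= (\hat L - m) u_0 + \phi_\varepsilon - m \varepsilon \theta \nabla u_0.
$$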
We prove below in Lemma \ref{reminder} that $\|\phi_\varepsilon\|\big._{L^2(\mathbb R^d)}$ vanishes as $\varepsilon \to 0$.
This implies the convergence $\| w^\varepsilon - u^\varepsilon \|\big._{L^2(\mathbb R^d)} \to 0$ and, by the triangle inequality, the required relation in \eqref{convergence1}.
\medskip
\section{First corrector}\label{s_corr}
\medskip
In this Section we construct a solution of equation \eqref{korr1}.
Denote
\begin{equation}\label{fkorr1}
r \big(\frac{x}{\varepsilon}, \omega\big) = \int\limits_{\mathbb R^d} z \, a (z) \, \mu \big( \frac{x}{\varepsilon} -z,\omega \big) \ dz,
\end{equation}
then $r(\xi, \omega) = \mathbf{r}(T_\xi \omega), \; \xi = \frac{x}{\varepsilon},$ is a stationary field. Moreover, since $\mathbb{E} \mu ( \xi -z,\omega )= \mathbb{E}{\bm\mu}(T_{\xi-z} \omega)$ does not depend on $z$, and $a(-z)=a(z)$, we have
$$
\mathbb{E} r (\xi, \omega) = \int\limits_{\mathbb R^d} z \, a (z) \, \mathbb{E}\mu ( \xi -z,\omega ) \ dz \ = \ \mathbb{E}{\bm\mu}(\omega) \int\limits_{\mathbb R^d} z \, a (z) \ dz \ = \ 0.
$$
Equation \eqref{korr1} takes the form
\begin{equation}\label{korrkappa1}
r (\xi, \omega) \ = \ \int\limits_{\mathbb R^d} a (z) \mu ( \xi -z,\omega ) \, \big( \theta (\xi-z, \omega ) - \theta (\xi, \omega) \big) \ dz.
\end{equation}
We are going to show now that equation \eqref{korrkappa1} has a solution that possesses the following properties: \\[1.5mm]
{\bf A}) the increments $\zeta_z(\xi, \omega)
= \theta (z+\xi, \omega ) - \theta (\xi, \omega)$ are stationary for any given $z$, i.e.
$$\zeta_z(\xi, \omega) = \zeta_z(0, T_\xi \omega);$$
{\bf B})
$\eps \theta\big(\frac x\eps,\omega\big) $
is a function of sub-linear growth in $L_{\rm loc}^2(\mathbb R^d)$: for any bounded Lipschitz domain $Q\subset \mathbb R^d$
$$
\Big\| \varepsilon \, \theta \big(\frac{x}{\varepsilon}, \omega \big) \Big\|_{L^2(Q)} \to 0 \quad \mbox{a.s.} \; \omega \in \Omega.
$$
Here and in the sequel for presentation simplicity we write for the $L^2$ norm of a vector-function just $L^2(Q)$ instead of
$L^2(Q\,;\,\mathbb R^d)$.
\medskip
\begin{theorem}\label{t_corrector}
There exists a unique (up to an additive constant vector) solution $\theta\in L^2_{\rm loc}(\mathbb R^d)$ of equation \eqref{korrkappa1} that satisfies conditions {\bf A}{\rm )} -- {\bf B}{\rm )}.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{t_corrector}]
We divide the proof into several steps.\\
{\sl Step 1.} Consider the following operator
acting in $L^2(\Omega)$:
\begin{equation}\label{A-omega}
(A \varphi)(\omega) = \int\limits_{\mathbb R^d} a(z) {\bm\mu}(T_z \omega) \big( \varphi (T_z \omega) - \varphi(\omega) \big) dz.
\end{equation}
\begin{proposition}\label{spectrA}
The spectrum $\sigma(A) \subset (-\infty, 0]$.
\end{proposition}
\begin{proof}
It is straightforward to check that the operator $A$ is bounded and symmetric in the weighted space
$L^2(\Omega, P_\mu) = L^2_\mu(\Omega)$ with $d P_\mu(\omega) = {\bm\mu}(\omega) d P(\omega)$.
Denoting $\tilde \omega = T_z \omega, \ s=-z$, using the stationarity of $\mu$ and the relation $a(-z) = a(z)$, we get
\begin{equation}\label{PropA1}
\begin{array}{c}
\displaystyle
\int\limits_\Omega \int\limits_{\mathbb R^d} a(z){\bm\mu}(T_z \omega){\bm\mu}(\omega) \varphi^2(T_z \omega) \, dz \, dP(\omega)=
\int\limits_\Omega \int\limits_{\mathbb R^d} a(z) {\bm\mu}(\tilde \omega) {\bm\mu}(T_{-z} \tilde\omega) \varphi^2(\tilde\omega) \, dz \, dP(\tilde\omega) \\[3pt]
\displaystyle
= \int\limits_\Omega \int\limits_{\mathbb R^d} a(s){\bm\mu}( \omega){\bm\mu}(T_s \omega) \varphi^2(\omega)\, ds \, dP(\omega).
\end{array}
\end{equation}
Thus
\begin{equation}\label{PropA1bis}
\begin{array}{c}
\displaystyle
\big( A\varphi, \varphi \big)_{L^2_\mu} = \int\limits_\Omega \int\limits_{\mathbb R^d} a(z) {\bm\mu}(T_z \omega) \big( \varphi(T_z \omega) - \varphi(\omega) \big) \varphi(\omega) {\bm\mu}(\omega) dz dP(\omega)
\\ \displaystyle
= -\frac12 \int\limits_\Omega \int\limits_{\mathbb R^d} a(z) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \big( \varphi(T_z \omega) - \varphi(\omega) \big)^2 dz dP(\omega)\le 0.
\end{array}
\end{equation}
Since the norms in $L^2(\Omega)$ and $L^2_\mu(\Omega)$ are equivalent, the desired statement follows.
\end{proof}
Let us consider for any $\delta>0$ the equation
\begin{equation}\label{A-delta}
\delta \varphi(\omega) - \int\limits_{\mathbb R^d} a (z) {\bm\mu} ( T_z \omega ) ( \varphi (T_z \omega ) - \varphi ( \omega) ) \ dz = r(\omega), \quad r(\omega) = \int\limits_{\mathbb R^d} z a (z) {\bm\mu} (T_z \omega ) \ dz.
\end{equation}
By Proposition \ref{spectrA} the operator $(\delta I - A)^{-1}$ is bounded, and hence there exists a unique solution $\varkappa^\delta (\omega) = (\delta I - A)^{-1} r (\omega)$ of \eqref{A-delta}.
For any given $z \in \mathbb R^d$ we set
$$
u^\delta(z,\omega) = \varkappa^\delta(T_z \omega) - \varkappa^\delta(\omega).
$$
Then
\begin{equation}\label{u-delta}
u^\delta(z_1 + z_2,\omega) = u^\delta(z_2,\omega) + u^\delta(z_1, T_{z_2} \omega) \quad \forall \ z_1, z_2 \in \mathbb R^d.
\end{equation}
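Identity \eqref{u-delta} is a direct consequence of the group property $T_{z_1 + z_2} = T_{z_1} T_{z_2}$ of the dynamical system. Indeed,
$$
u^\delta(z_1 + z_2,\omega) = \big( \varkappa^\delta(T_{z_1} T_{z_2} \omega) - \varkappa^\delta(T_{z_2} \omega) \big) + \big( \varkappa^\delta(T_{z_2} \omega) - \varkappa^\delta(\omega) \big) = u^\delta(z_1, T_{z_2} \omega) + u^\delta(z_2,\omega).
$$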
For any $\xi \in\mathbb R^d$ as an immediate consequence of \eqref{A-delta} we have
\begin{equation}\label{A-delta-xi}
\delta \varkappa^\delta (T_\xi \omega) - \int\limits_{\mathbb R^d} a (z) {\bm\mu} ( T_{\xi+z} \omega ) ( \varkappa^\delta (T_{\xi+z} \omega ) - \varkappa^\delta ( T_\xi \omega) ) \ dz = \int\limits_{\mathbb R^d} z a (z) {\bm\mu} (T_{\xi+z} \omega ) \ dz.
\end{equation}
\medskip
Next we obtain a priori estimates for $\| \varkappa^\delta (T_z \omega) - \varkappa^\delta (\omega)\|_{L^2_M}$ with $dM(z, \omega) = a(z) dz dP(\omega)$.
\begin{proposition}\label{boundM}
The following estimate holds:
\begin{equation}\label{AB}
\| u^\delta(z,\omega) \|_{L^2_M} = \| \varkappa^\delta (T_z \omega) - \varkappa^\delta (\omega) \|_{L^2_M} \ \le \ C
\end{equation}
with a constant $C$ that does not depend on $\delta$.
\end{proposition}
\begin{proof}
Multiplying equation \eqref{A-delta} by $\varphi(\omega)={\bm\mu}(\omega)\varkappa^\delta(\omega)$ and integrating
the resulting relation over $\Omega$ yields
\begin{equation}\label{Prop2}
\begin{array}{c}
\displaystyle
\delta \int\limits_\Omega \big(\varkappa^\delta(\omega)\big)^2{\bm\mu}(\omega)\, dP(\omega)
- \int\limits_{\mathbb R^d} \int\limits_\Omega a (z) {\bm\mu} ( T_z \omega ) \big( \varkappa^\delta (T_z \omega ) - \varkappa^\delta ( \omega) \big) \varkappa^\delta(\omega){\bm\mu}(\omega) \, dz \, dP(\omega) \\ \displaystyle
= \int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) \varkappa^\delta(\omega) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \, dz \, dP(\omega).
\end{array}
\end{equation}
The same change of variables as in \eqref{PropA1} results in the relation
\begin{equation}\label{Prop2_eq}
\int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) \varkappa^\delta (\omega) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \, dz \, dP(\omega)= -
\int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) \varkappa^\delta (T_z \omega) {\bm\mu}(\omega) {\bm\mu}(T_z \omega)\, dz \, dP(\omega),
\end{equation}
therefore, the right-hand side of \eqref{Prop2} takes the form
\begin{equation}\label{RHS}
\!\int\limits_{\mathbb R^d}\! \int\limits_\Omega z a(z) \varkappa^\delta(\omega) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) dz dP(\omega)= -\frac12
\int\limits_{\mathbb R^d}\! \int\limits_\Omega z a(z) \big( \varkappa^\delta(T_z \omega) - \varkappa^\delta(\omega) \big) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) dz dP(\omega).
\end{equation}
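Indeed, denoting by $X^\delta$ the integral on the left-hand side of \eqref{Prop2_eq}, writing $X^\delta = \frac12 (X^\delta + X^\delta)$ and replacing the second summand by the right-hand side of \eqref{Prop2_eq}, we obtain
$$
X^\delta = -\frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) \big( \varkappa^\delta(T_z \omega) - \varkappa^\delta(\omega) \big) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \, dz \, dP(\omega),
$$
which is exactly \eqref{RHS}.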
Equality \eqref{PropA1bis} implies that the second term on the left-hand side of \eqref{Prop2} can be rearranged in the following way
\begin{equation}\label{LHS2}
\begin{array}{c}
\displaystyle
- \int\limits_{\mathbb R^d} \int\limits_\Omega a (z) {\bm\mu} ( T_z \omega ) \big( \varkappa^\delta (T_z \omega ) - \varkappa^\delta ( \omega) \big) \varkappa^\delta(\omega){\bm\mu}(\omega) \, dz \, dP(\omega)
\\ \displaystyle
= \frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega a(z) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \big( \varkappa^\delta( T_z \omega) - \varkappa^\delta (\omega) \big)^2 dz \, dP(\omega).
\end{array}
\end{equation}
Let us denote
$$
J^\delta = \int\limits_{\mathbb R^d} \int\limits_\Omega {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \big( \varkappa^\delta( T_z \omega) - \varkappa^\delta (\omega) \big)^2 a(z) dz \, dP(\omega) = \int\limits_{\mathbb R^d} \int\limits_\Omega {\bm\mu}(T_z \omega) {\bm\mu}(\omega) (u^\delta (z,\omega))^2 dM(z,\omega)
$$
and
$$
\int\limits_{\mathbb R^d} \int\limits_\Omega \big( \varkappa^\delta( T_z \omega) - \varkappa^\delta (\omega) \big)^2 a(z) dz \, dP(\omega) = \int\limits_{\mathbb R^d} \int\limits_\Omega (u^\delta (z,\omega))^2 dM(z,\omega) = \| u^\delta \|^2_{L^2_M},
$$
where $dM(z, \omega) = a(z) dz dP(\omega)$.
Then
\begin{equation}\label{B1}
J^\delta = \int\limits_{\mathbb R^d} \int\limits_\Omega {\bm\mu}(T_z \omega) {\bm\mu}(\omega) (u^\delta (z,\omega))^2 dM(z,\omega) \ge \alpha_1^2 \| u^\delta \|^2_{L^2_M}
\end{equation}
and on the other hand, relations \eqref{Prop2} - \eqref{LHS2} imply the following upper bound on $J^\delta$:
\begin{equation}\label{B2}
J^\delta = \int\limits_{\mathbb R^d} \int\limits_\Omega {\bm\mu}(T_z \omega) {\bm\mu}(\omega) (u^\delta (z,\omega))^2 dM(z,\omega) \le \frac12 \alpha_2^2 \sigma \| u^\delta \|_{L^2_M}.
\end{equation}
Bounds \eqref{B1} - \eqref{B2} together yield
$$
\alpha_1^2 \| u^\delta \|^2_{L^2_M} \le J^\delta \le \frac12 \alpha_2^2 \sigma \| u^\delta \|_{L^2_M}.
$$
Consequently we obtain the estimate \eqref{AB} with $C = \frac{\alpha_2^2}{2 \alpha_1^2} \sigma$, and this estimate is uniform in $\delta$.
\end{proof}
\begin{corollary}
For any $\delta>0$ the following upper bound holds:
\begin{equation}\label{u-norm}
\sqrt{\delta} \, \| \varkappa^\delta \|_{L^2_\mu} \le C.
\end{equation}
\end{corollary}
\begin{proof}
From \eqref{Prop2} we have
\begin{equation}\label{Prop2-norm}
\begin{array}{c}
\displaystyle
\delta \int\limits_\Omega \big(\varkappa^\delta(\omega)\big)^2{\bm\mu}(\omega)\, dP(\omega)
=\int\limits_{\mathbb R^d} \int\limits_\Omega a (z) {\bm\mu} ( T_z \omega ) \big( \varkappa^\delta (T_z \omega ) - \varkappa^\delta ( \omega) \big) \varkappa^\delta(\omega){\bm\mu}(\omega) \, dz \, dP(\omega) \\ \displaystyle
+\int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) \varkappa^\delta(\omega) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \, dz \, dP(\omega).
\end{array}
\end{equation}
Then using \eqref{RHS}, \eqref{LHS2}, \eqref{B2} together with the Cauchy--Schwarz inequality and bound \eqref{AB}, we obtain that the expression on the right-hand side of \eqref{Prop2-norm} is uniformly bounded in $\delta$.
\end{proof}
Proposition \ref{boundM} implies that the family $\{ u^\delta(z, \omega) \}_{\delta>0}$ is bounded in $L^2_M$. Consequently there exists a subsequence $u_j (z, \omega) = u^{\delta_j} (z, \omega)$, $j=1,2, \ldots,$ that converges in the weak topology of $L^2_M$ as $\delta_j \to 0$. We denote this limit by $\theta(z,\omega)$:
\begin{equation}\label{theta}
w\,\mbox{-}\!\!\lim_{j \to \infty} u_j (z,\omega) = w\,\mbox{-}\!\!\lim_{\delta_j \to 0} \big( \varkappa^{\delta_j}(T_z \omega) - \varkappa^{\delta_j}(\omega) \big) = \theta(z,\omega).
\end{equation}
Clearly, $\theta(z,\omega) \in L^2_M$, i.e.
\begin{equation}\label{thetaLM}
\int\limits_{\mathbb R^d} \int\limits_\Omega \theta^2 (z,\omega) a(z) dz dP(\omega) < \infty,
\end{equation}
and by the Fubini theorem $\theta (z, \omega) \in L^2 (\Omega)$ for almost all $z$ from the support of the function $a(z)$. In addition $\theta(0,\omega) \equiv 0$ and for any $z$
\begin{equation}\label{Etheta}
\mathbb{E} \theta(z,\omega) = \lim_{\delta_j \to 0} \Big( \mathbb{E} \varkappa^{\delta_j} (T_z \omega) - \mathbb{E} \varkappa^{\delta_j}(\omega) \Big) = 0.
\end{equation}
\medskip\noindent
{\sl Step 2.} {\sl Property A}. The function $\theta(z,\omega)$ introduced in \eqref{theta} is not originally defined on the set
$\{z\in\mathbb R^d\,:\,a(z)=0\}$.
\begin{proposition}\label{statincrements}
The function $\theta(z, \omega)$, given by \eqref{theta}, can be extended to $\mathbb R^d\times\Omega$ in such a way that $\theta(z, \omega)$ satisfies relation \eqref{u-delta}, i.e. $\theta(z, \omega)$ has stationary increments:
\begin{equation}\label{thetaVIP}
\theta(z+\xi,\omega) - \theta (\xi,\omega) = \theta(z, T_\xi \omega) = \theta(z, T_\xi \omega) - \theta(0, T_\xi \omega).
\end{equation}
\end{proposition}
\begin{proof}
Applying Mazur's theorem \cite[Section V.1]{Yo65} we conclude that $\theta(z, \omega) = s\,\hbox{-}\!\lim\limits_{n \to \infty} w_n$ is the strong limit of a sequence $w_n$ of convex combinations of elements $u_j(z,\omega) = u^{\delta_j} (z,\omega)$.
The strong convergence implies that there exists a subsequence of $\{w_n \}$ that converges a.s. to the same limit $\theta(z, \omega)$:
$$
\lim\limits_{n_k \to \infty} w_{n_k} (z, \omega) = \theta(z, \omega) \quad \mbox{for a.e. } \; z \; \mbox{ and a.e.} \; \omega.
$$
Since equality \eqref{u-delta} holds for all $u_j$, it also holds for any convex linear combination $w_n$ of $u_j$:
\begin{equation}\label{wn}
w_n (z_1 + z_2,\omega) = w_n(z_2,\omega) + w_n (z_1, T_{z_2} \omega) \quad \forall \ n.
\end{equation}
Thus taking the subsequence $\{w_{n_k} \}$ in equality \eqref{wn} and
passing to the point-wise limit $n_k \to \infty$ in each term of this equality
we obtain \eqref{thetaVIP}, first only for those $z_1, z_2$ such that $z_1, z_2, z_1+ z_2$ belong to $\mathrm{supp}(a)$.
Then we extend the function $\theta(z, \omega)$ to a.e. $z \in \mathbb R^d$ using relation \eqref{thetaVIP}:
\begin{equation}\label{lim_sh_inv}
\theta(z_1 + z_2, \omega) = \theta(z_2, \omega) + \theta(z_1, T_{z_2} \omega).
\end{equation}
Observe that this extension is well-defined because relation \eqref{thetaVIP} holds on the support of $a$.\\[1.5mm]
Let us show that $\theta(z,\omega)$ is defined for all $z\in\mathbb R^d$. To this end we observe that, due to the properties
of the dynamical system $T_z$, the function $\theta(z_1,T_{z_2}\omega)$ is a well-defined measurable function
of $z_1$ and $\omega$ for all $z_2\in\mathbb R^d$. The function $\theta(z_1+z_2,\omega)$ possesses the same property
due to its particular structure. Then according to \eqref{lim_sh_inv} the function $\theta(z, \omega)$ is defined
for all $z\in\mathbb R^d$.
\end{proof}
Denote $\zeta_z (\xi, \omega)= \theta(z+\xi,\omega) - \theta (\xi,\omega) $;
then for $z\in\mathbb R^d$ relation \eqref{thetaVIP} yields
\begin{equation}\label{thetaVIPbis}
\zeta_z (\xi, \omega) = \zeta_z(0, T_\xi \omega) ,
\end{equation}
i.e. for all $z\in\mathbb R^d$ the field $\zeta_z(\xi,\omega)$ is statistically homogeneous in $\xi$, and
\begin{equation}\label{zetatheta}
\zeta_z(0, \omega) = \theta(z, \omega).
\end{equation}
Thus by \eqref{theta}, \eqref{thetaVIP} -- \eqref{thetaVIPbis} the random function $\theta(z,\omega)$ is not stationary, but its increments $\zeta_z(\xi, \omega) = \theta (z+\xi, \omega ) - \theta (\xi, \omega)$ form a stationary field for any given $z$.
\bigskip\noindent
{\sl Step 3.} At this step we show that $\theta(z,\omega)$, defined by \eqref{theta}, is a solution of equation \eqref{korr1} (or, equivalently, \eqref{korrkappa1}).\\
To this end, for an arbitrary function $\psi(\omega) \in L^2(\Omega)$ we multiply equality \eqref{A-delta-xi} by
$\psi(\omega){\bm\mu}(\omega)$ and integrate the resulting relation over $\Omega$; this yields
\begin{equation}\label{Solution}
\begin{array}{c}
\displaystyle
\delta \int\limits_\Omega \varkappa^\delta(T_\xi \omega) \psi(\omega) {\bm\mu}(\omega)\, dP(\omega) \!=\!
\int\limits_{\mathbb R^d} \int\limits_\Omega a (z) {\bm\mu} ( T_{\xi+z} \omega ) \big( \varkappa^\delta (T_{\xi+z} \omega ) - \varkappa^\delta (T_\xi \omega) \big) dz \psi(\omega) {\bm\mu}(\omega) dP(\omega) \\ \displaystyle
+\int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) {\bm\mu}(T_{\xi+z} \omega) dz \, \psi(\omega) {\bm\mu}(\omega) \, dP(\omega).
\end{array}
\end{equation}
By estimate \eqref{u-norm} and the Cauchy--Schwarz inequality, for any $\psi \in L^2(\Omega)$ we get
\begin{equation}\label{ud-norm}
\delta \int\limits_\Omega \varkappa^\delta(T_\xi \omega) \psi(\omega) {\bm\mu}(\omega)\, dP(\omega) \to 0 \quad \mbox{as } \delta \to 0.
\end{equation}
Passing to the limit $\delta \to 0$ in equation \eqref{Solution}
and taking into account \eqref{theta} and \eqref{ud-norm}, we obtain that for a.e. $\omega$ the function $\theta(z,T_\xi \omega)$ satisfies the equation
\begin{equation*}\label{A-delta-xibis}
\int\limits_{\mathbb R^d} a (z) {\bm\mu} ( T_{\xi+z} \omega ) \theta(z, T_\xi \omega) \ dz = - \int\limits_{\mathbb R^d} z a (z) {\bm\mu} (T_{\xi+z} \omega ) \ dz.
\end{equation*}
Using \eqref{thetaVIP} we get after the change of variables $z \to -z$
\begin{equation}\label{theta-xi-z}
-\int\limits_{\mathbb R^d} a (z) {\bm\mu} ( T_{\xi-z} \omega ) ( \theta (\xi-z, \omega ) - \theta ( \xi, \omega) ) \ dz + \int\limits_{\mathbb R^d} z a (z) {\bm\mu} (T_{\xi-z} \omega ) \ dz =0,
\end{equation}
which coincides with \eqref{korr1}. Thus we have proved that $\theta(z,\omega)$ is a solution of \eqref{korrkappa1}.
\medskip
\noindent
{\sl Step 4}. Property B.
Assumption \eqref{add} and inequality \eqref{thetaLM} imply that
$$
c_0 \int\limits_{{\bf B}} \int\limits_\Omega \theta^2 (z,\omega) dz dP(\omega) < \int\limits_{\mathbb R^d} \int\limits_\Omega \theta^2 (z,\omega) a(z) dz dP(\omega) < \infty,
$$
and by the Fubini theorem we conclude that a.s.
\begin{equation}\label{L2B}
\int\limits_{{\bf B}} \theta^2 (z,\omega) dz < \infty.
\end{equation}
Thus $\theta(\cdot,\omega) \in L^2({\bf B})$ with $\| \theta (\cdot, \omega) \|_{L^2({\bf B})} = K(\omega)$ for a.e. $\omega$, and
${\mathbb E} (K(\omega))^2< \infty$.
\begin{proposition} [Sublinear growth of $\eps\theta(\frac x\eps) $ in $L_{\rm loc}^2(\mathbb R^d)$] \label{1corrector}
Denote $\varphi_\eps (z, \omega) = \eps\, \theta \big(\frac z\eps, \omega\big)$.
Then a.s.
\begin{equation}\label{1corrsmall}
\| \varphi_\eps (\cdot, \omega) \|_{L^2(\mathcal{Q})} \ \to \ 0 \quad \mbox{ as } \; \eps \to 0
\end{equation}
for any bounded Lipschitz domain $\mathcal{Q}\subset\mathbb R^d$.
\end{proposition}
\begin{proof}
We use in the proof inequality \eqref{L2B}
and assume in what follows, without loss of generality, that ${\bf B}=[0,1]^d$.
\begin{lemma}\label{LemmaC}
The family of functions $\varphi_\eps (z, \omega) = \eps\, \theta \big(\frac z\eps, \omega\big)$ is bounded and compact in $L^2(Q)$.
\end{lemma}
\begin{proof}
Using the change of variables $\frac z\eps = y$ we have
$$
\|\varphi_\eps \|^2_{L^2(Q)} = \| \eps \, \theta \big(\frac z\eps, \omega\big) \|^2_{L^2(Q)} = \int\limits_Q \eps^2 \, \theta^2 \big(\frac z\eps, \omega\big) dz =
\int\limits_{\eps^{-1} Q} \eps^{d+2} \, \theta^2 (y, \omega) dy
$$
$$
= \eps^{d+2} \sum\limits_{j \in \mathbb{Z}_{ Q/\eps}} \ \int\limits_{B_j} \, \theta^2 (y, \omega) dy = \eps^{d+2} \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \ \int\limits_{B_j} \, (\theta (y, \omega) - \theta(j,\omega) + \theta(j,\omega))^2 dy
$$
\begin{equation}\label{L-1}
\le {2}\eps^{d+2} \sum\limits_{j \in \mathbb{Z}_{ Q/\eps}} \ \int\limits_{B_j} (\theta (y, \omega) -
\theta(j,\omega))^2 dy \ + \ {2}\eps^{d+2} \sum\limits_{j \in \mathbb{Z}_{ Q/\eps}} \theta^2 (j,\omega) \, |B_j|.
\end{equation}
Here $j \in \mathbb{Z}^d \cap \frac1\eps Q = \mathbb{Z}_{ Q/\eps}$, $B_j=j+[0,1)^d$.
If $y \in B_j$, then $y = j+z, \; z \in {\bf B} = [0,1)^d$, and we can rewrite the first term on the right-hand side of \eqref{L-1} as follows
$$
{2}\,\eps^{d+2} \sum\limits_{j \in \mathbb{Z}_{ Q/\eps}} \ \int\limits_{{\bf B}} (\theta (j + z, \omega) - \theta(j,\omega))^2 dz =
{2}\,\eps^{d+2} \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \ \int\limits_{{\bf B}} \theta^2 (z, T_j \omega) dz.
$$
Using the fact that $ \theta_B(j,\omega):=\int\limits_{{\bf B}} \theta^2 (z, T_j \omega) dz$ is a stationary field and $\theta(z,\omega) \in L^2({\bf B})$, by the Birkhoff
ergodic theorem we obtain that
$$
{2}\,\eps^{d} \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \ \int\limits_{{\bf B}} \theta^2 (z, T_j \omega) dz \ \to \ 2 |Q| \ \mathbb{E} \int\limits_{{\bf B}} \theta^2 (z, \omega) dz<\infty.
$$
Consequently, the first term in \eqref{L-1} vanishes as $\eps \to 0$:
\begin{equation}\label{L-2}
{2}\eps^{d+2} \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \ \int\limits_{{\bf B}} \theta^2 (z, T_j \omega) dz \ \to \ 0.
\end{equation}
Let us prove now that a.s. the second term in \eqref{L-1} is bounded. Denoting
$$
\widehat \varphi_\eps (z) =\eps \, \widehat \theta \big(\frac z\eps, \omega\big),
$$
where $\widehat \theta$ is a piecewise constant function: $\widehat \theta \big(\frac z\eps,\omega\big) =
\theta \big([\frac z\eps],\omega\big) = \theta (j,\omega)$ for $z \in \eps B_j$, the second term in \eqref{L-1} equals
\begin{equation}\label{L-3}
{2}\,\eps^{d+2} \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \theta^2 (j,\omega) = 2 \, \| \eps \, \widehat \theta \big(\frac z\eps, \omega\big) \|^2_{L^2(Q)}
=2\|\widehat \varphi_\eps(z)\|^2_{L^2(Q)}.
\end{equation}
Let us estimate the difference gradient of $ \widehat \varphi_\eps$:
$$
\| {\rm grad} \, \widehat \varphi_\eps\|^2_{(L^2(Q))^d} = \eps^2 \int\limits_Q \sum_{k=1}^d \frac{\big(
\theta\big([\frac1\eps(z+\eps e_k)], \omega\big) - \theta\big([\frac z\eps],\omega\big) \big)^2}{\eps^2} \, dz
$$
$$
= \int\limits_Q \sum_{k=1}^d \big(\theta\big(\big[\frac z\eps\big] + e_k, \omega\big) - \theta\big(\big[\frac z\eps\big],\omega\big) \big)^2 \, dz = \eps^d \sum_{k=1}^d \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \big(\theta(j+ e_k, \omega) - \theta(j,\omega) \big)^2.
$$
But $\theta(j+ e_k, \omega) - \theta(j,\omega) = \theta(e_k, T_j \omega)$ is stationary for any given $e_k$, thus
\begin{equation}\label{L-4}
\| {\rm grad} \, \widehat \varphi_\eps\|^2_{(L^2(Q))^d} = \eps^d \sum_{k=1}^d \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \big(\theta(j+ e_k, \omega) - \theta(j,\omega) \big)^2 \ \to \ |Q| \sum_{k=1}^d C_k,
\end{equation}
where $C_k = \mathbb{E} \theta^2 (e_k, \omega)$.
Next we prove that a.s. the following estimate holds:
\begin{equation}\label{L-5}
\big| \bar \theta_\eps (\omega) \big| = \Big| \int\limits_Q \widehat \varphi_\eps (z, \omega) dz \Big| =
\Big| \eps^d \sum\limits_{j \in \mathbb{Z}_{ Q/\eps}} \eps \, \theta(j,\omega) \Big| \le \widetilde C(\omega).
\end{equation}
We proceed by induction, starting with $d=1$. Using the stationarity of $\theta(j+1,\omega) - \theta(j,\omega)$ we have by the ergodic theorem
$$
\eps^2 \, \Big| \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \theta(j,\omega) \Big| \le \eps^2 \,
\sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \sum_{k=0}^{j-1} |\theta(k+1,\omega) - \theta(k,\omega) |
$$
$$
\le \eps^2 \, \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \sum\limits_{k \in \mathbb{Z}_{Q/\eps}} |\theta(k+1,\omega) - \theta(k,\omega) | = \eps^2\frac{|Q|}\eps \sum\limits_{k \in \mathbb{Z}_{Q/\eps}} |\theta(e_1, T_k\omega) | \ \to \ |Q|^2 \mathbb{E} |\theta (e_1, \omega)| = \bar C_1.
$$
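The first inequality here relies on the telescoping identity, valid for $j \ge 1$ since $\theta(0,\omega) = 0$ (negative $j$ are treated in the same way):
$$
\theta(j,\omega) = \sum_{k=0}^{j-1} \big( \theta(k+1,\omega) - \theta(k,\omega) \big).
$$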
Thus
$$
\overline{\lim\limits_{\eps \to 0}}\ \eps^2 \, \Big| \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \theta(j,\omega) \Big| \le \bar C_1,
$$
and this implies that for a.e. $\omega$
\begin{equation}\label{L-5A}
\sup_\eps \Big| \eps^2 \, \sum\limits_{j \in \mathbb{Z}_{Q/\eps}} \theta(j,\omega) \Big| \le \widetilde C_1(\omega),
\end{equation}
where the constant $\widetilde C_1 (\omega)$ depends only on $\omega$.
Let us show how to derive the required upper bound in the dimension $d=2$ using \eqref{L-5A}. In this case $j~\in~\mathbb{Z}_{Q/\eps}, \ j=(j_1, j_2)$, and we assume without loss of generality that $Q \subset [-q, q]^2$. Then
$$
\theta ((j_1, j_2), \omega) = \sum_{k=0}^{j_2 -1} \big( \theta ((j_1, k+1), \omega) - \theta ((j_1, k), \omega) \big) \ + \ \theta ((j_1, 0), \omega),
$$
and for any $j=(j_1, j_2) \in \mathbb{Z}_{Q/\eps}$ we get
$$
| \theta ((j_1, j_2), \omega)| \le \sum_{k= - q/\eps}^{q/\eps} \big| \theta ((j_1, k+1), \omega) - \theta ((j_1, k), \omega) \big| \ + \ |\theta ((j_1, 0), \omega)|.
$$
Using \eqref{L-5A} and the ergodic property of the field $| \theta (e_2, T_j\omega)|$ we obtain the following upper bound
$$
\eps^3 \, \Big| \sum\limits_{(j_1, j_2) \in \mathbb{Z}_{Q/\eps}} \theta ((j_1, j_2), \omega) \Big| \le \eps^3 \sum_{j_1= - q/\eps}^{q/\eps} \frac{2q}\eps \sum_{k= - q/\eps}^{q/\eps} | \theta (e_2, T_{(j_1, k)} \omega)| \ + \ \eps^3
\sum_{j_1=- q/\eps}^{q/\eps} \frac{2q}\eps |\theta ((j_1, 0), \omega)|
$$
$$
= 2q\eps^2 \sum\limits_{(j_1, k) \in \mathbb{Z}_{Q/\eps}} | \theta (e_2, T_{(j_1, k)} \omega)| + 2q\eps^2
\sum_{j_1=- q/\eps}^{q/\eps} |\theta ((j_1, 0), \omega)| \le \widetilde C_2(\omega) + 2q \widetilde C_1(\omega),
$$
where $2q$ is the 1-d volume of slices of $Q$ that are orthogonal to $e_1$.
The case of $d>2$ is considered in the same way.
\medskip
Applying the standard discrete Poincar\'e inequality or the Poincar\'e inequality for piece-wise linear approximations of discrete
functions we obtain from \eqref{L-4} - \eqref{L-5} that a.s.
\begin{equation}\label{L-6}
\| \widehat \varphi_\eps \|^2_{L^2(Q)} \le g_1 \Big(\int\limits_Q \widehat \varphi_\eps (z, \omega) dz \Big)^2 + g_2
\| {\rm grad} \, \widehat \varphi_\eps\|^2_{(L^2(Q))^d} \le K(\omega),
\end{equation}
where the constants $g_1, \; g_2$, and $K(\omega)$ do not depend on $\eps$.
Thus using the same piece-wise linear approximations and the compactness of the embedding of $H^1(Q)$ into $L^2(Q)$ we derive from \eqref{L-4} and \eqref{L-6} that
the set of functions $\{ \widehat \varphi_\eps \}$ is compact in $L^2(Q)$. As follows from \eqref{L-1} -- \eqref{L-2}
$$
\varphi_\eps = \widehat \varphi_\eps + \breve{ \varphi}_\eps, \quad \mbox{where } \; \breve{ \varphi}_\eps(x) =
\eps \big(\theta\big(\frac x\eps\big) - \widehat \theta\big(\frac x\eps\big)\big), \quad \| \breve{ \varphi_\eps} \|_{L^2(Q)} \to 0 \; (\eps \to 0).
$$
This together with the compactness of $\{ \widehat \varphi_\eps \}$ implies the compactness of the family $\{ \varphi_\eps \}$. The lemma is proved.
\end{proof}
Next we show that any limit point of the family $\{\varphi_\eps\}$ as $\eps\to0$ is a constant function.
\begin{lemma}\label{Prop_constfun}
Let $\{ \varphi_\eps \}$ converge for a subsequence to $\varphi_0$ in $L^2(Q)$. Then
$\varphi_0=const$.
\end{lemma}
\begin{proof}
According to \cite{LadSol} the set $\{\mathrm{div}\phi\,:\,\phi\in (C_0^\infty(Q))^d\}$ is dense in the subspace
of functions from $L^2(Q)$ with zero average. It suffices to show that
\begin{equation}\label{ortog_con}
\int\limits_Q \mathrm{div}\phi(x) \varphi_\eps(x)\,dx\longrightarrow 0, \ \ \hbox{as }\eps\to0,
\end{equation}
for any $\phi=(\phi^1,\,\phi^2,\ldots,\phi^d)\in (C_0^\infty(Q))^d$. Clearly,
$$
\frac 1\eps(\phi^j(x+\eps e_j)-\phi^j(x))=\partial_{x_j}\phi^j(x)+\eps\upsilon_\eps,
$$
where $\|\upsilon_\eps\|_{L^\infty(Q)}\leq C$. Then, for sufficiently small $\eps$, we have
$$
\int\limits_Q \mathrm{div}\phi(x) \varphi_\eps(x)\,dx=\int\limits_Q (\phi^j(x+\eps e_j)-\phi^j(x))
\theta\big(\frac x\eps,\omega\big)\,dx\,+\,o(1)
$$
$$
=\int\limits_Q \phi^j(x)\big(\theta\big(\frac x\eps-e_j,\omega\big)-\theta\big(\frac x\eps,\omega\big)\big)\,dx\,+\,o(1),
$$
where $o(1)$ tends to zero as $\eps\to0$ by Lemma \ref{LemmaC}. Since $\theta(z-e_j,\omega)-\theta(z,\omega)$ is a stationary field,
by the Birkhoff ergodic theorem the integral on the right-hand side converges to zero a.s. as $\eps\to 0$, and the desired statement follows.
\end{proof}
Our next goal is to show that
almost surely the limit relation in \eqref{1corrsmall} holds.
By Lemma \ref{LemmaC} the constants $\eps c^\eps$ with $c^\eps$ defined in \eqref{hi} are a.s. bounded uniformly in $\eps$, that is
\begin{equation}\label{co_bou}
|\eps c^\eps|\leq K(\omega)
\end{equation}
for all sufficiently small $\eps>0$.\\
Consider a convergent subsequence $\{\varphi_{\eps_n}\}_{n=1}^\infty$.
By Lemma \ref{Prop_constfun} the limit function is a constant,
denote this constant by $\varphi_0$. Assume that $\varphi_0\not=0$. Then
$$
\varphi_{\eps_n}(z)=\varphi_0+\rho_{\eps_n}(z),
$$
where $\|\rho_{\eps_n}\|_{L^2({Q})}\to0$ as $\eps_n\to0$. Clearly, we have
$$
\varphi_{2\eps_n}(z)=2\eps_n\theta\Big(\frac z{2\eps_n}\Big)=2\eps_n\theta\Big(\frac{z/2}{\eps_n}\Big)
=2\varphi_0+2\rho_{\eps_n}\Big(\frac{z}{2}\Big)\to 2\varphi_0,
$$
because $\|\rho_{\eps_n}(\cdot/2)\|_{L^2({Q})}\to 0$ as $\eps_n\to0$. Similarly, for any $M\in \mathbb Z^+$
we have
$$
\varphi_{M\eps_n}(z)\,\to\, M\varphi_0 \qquad \hbox{in }L^2({Q}).
$$
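Let us record the scaling identity behind this convergence: for every $M\in \mathbb Z^+$,
$$
\varphi_{M\eps_n}(z)=M\eps_n\theta\Big(\frac{z/M}{\eps_n}\Big)=M\varphi_{\eps_n}\Big(\frac zM\Big)=M\varphi_0+M\rho_{\eps_n}\Big(\frac zM\Big),
$$
and $\|\rho_{\eps_n}(\cdot/M)\|_{L^2(Q)}\to 0$ as $\eps_n\to0$ by the same dilation argument as in the case $M=2$.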
Choosing $M$ in such a way that $M|\varphi_0|> K(\omega)$ we arrive at a contradiction with \eqref{co_bou}.
Therefore, $\varphi_0=0$ for any convergent subsequence.
This yields the desired convergence
in \eqref{1corrsmall} and completes the proof of Proposition \ref{1corrector}.
\end{proof}
\noindent
{\sl Step 5}. Uniqueness of $\theta$.
\begin{proposition}
[Uniqueness]\label{uniqueness}
Problem \eqref{korrkappa1} has a solution $\theta(z,\omega)$, $\theta \in L^2_M$,
with statistically homogeneous increments
such that \eqref{1corrsmall} holds true; this solution is unique up to an additive constant.
\end{proposition}
\begin{proof}
Consider two arbitrary solutions $\theta_1(z,\omega)$ and $\theta_2(z,\omega)$ of problem \eqref{korrkappa1}.
Then the difference $\Delta (z,\omega)=\theta_1(z,\omega)-\theta_2(z,\omega)$ satisfies the equation
\begin{equation}\label{1A}
\int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \big(\Delta (\xi+z,\omega ) - \Delta(\xi, \omega) \big) \ dz =0
\end{equation}
for a.e. $\omega$ and for all $\xi \in \mathbb R^d$.
Let us remark that the function $\Delta (z,\omega)$ inherits properties {\bf A)} and {\bf B)} of $\theta_1(z,\omega)$ and
$\theta_2(z,\omega)$.
Consider a cut-off function $ \varphi (\frac{|\xi|}{R})$ parameterized by $R>0$, where $\varphi(r)$, $r\in\mathbb R$, is a function
defined by
$$
\varphi(r) = \left\{
\begin{array}{c}
1, \quad r \le 1, \\ 2 - r, \quad 1<r<2, \\ 0, \quad r \ge 2.
\end{array}
\right.
$$
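Let us note that $\varphi$ is $1$-Lipschitz, so that
$$
\Big|\varphi\Big(\frac{|x|}{R}\Big) - \varphi\Big(\frac{|y|}{R}\Big)\Big| \le \frac{\big||x|-|y|\big|}{R}\le\frac{|x-y|}{R},\qquad x,y\in\mathbb R^d;
$$
this elementary bound is used repeatedly in the estimates below.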
For any $R>0$, multiplying equation \eqref{1A} by $\mu(\xi, \omega) \Delta (\xi, \omega ) \varphi (\frac{|\xi|}{R})$ and integrating
the resulting relation in $\xi$ over $ \mathbb R^d$, we obtain the following equality
\begin{equation}\label{1B}
\int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \big(\Delta (\xi+z,\omega ) - \Delta(\xi, \omega) \big) \Delta(\xi, \omega) \varphi (\frac{|\xi|}{R}) \, dz \, d \xi =0.
\end{equation}
Using the relation $a(-z)=a(z)$, after the change of variables $z \to -z$, $\xi - z = \xi'$, we get
\begin{equation}\label{2B}
\int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi'+ z, \omega ) \mu (\xi', \omega ) \big(\Delta (\xi',\omega ) - \Delta(\xi'+z, \omega) \big) \Delta(\xi'+z, \omega) \varphi (\frac{|\xi'+z|}{R}) \, dz \, d \xi' =0.
\end{equation}
Renaming $\xi'$ back to $\xi$ in the last equation and taking the sum of \eqref{1B} and \eqref{2B} we obtain
$$
\int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \big(\Delta (\xi+z,\omega ) - \Delta(\xi, \omega) \big) \Big( \Delta(\xi+z, \omega) \varphi (\frac{|\xi+z|}{R}) - \Delta(\xi, \omega) \varphi (\frac{|\xi|}{R}) \Big) dz \, d \xi
$$
$$
= \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Big(\Delta (\xi+z,\omega ) - \Delta(\xi, \omega) \Big)^2 \varphi (\frac{|\xi|}{R}) \, dz \, d \xi
$$
$$
+ \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \big(\Delta (\xi+z,\omega ) - \Delta(\xi, \omega) \big) \Delta(\xi+z, \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) dz \, d \xi
$$
\begin{equation}\label{2C}
= J_1^R \ + \ J_2^R = 0.
\end{equation}
Letting $R=\eps^{-1}$, we first estimate the contribution of $J_2^R $.
\begin{lemma}\label{J2} The following limit relation holds a.s.:
\begin{equation}\label{3A}
\frac{1}{R^d} |J_2^R| \ \to \ 0 \quad \mbox{ as } \; R \to \infty.
\end{equation}
\end{lemma}
\begin{proof}
Denote $\Delta_z (T_\xi \omega ) = \Delta (\xi+z,\omega ) - \Delta(\xi, \omega)$; then $\Delta_z (T_\xi \omega )$ is stationary in $\xi$ for any given $z$.
We consider separately the integration over $|\xi| > 3R$ and $|\xi| \le 3R$ in the integral $J_2^R$:
$$
J_2^R = \int\limits_{\mathbb R^d} \int\limits_{|\xi|>3R} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z(T_\xi \omega) \Delta(\xi+z, \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) dz \, d \xi
$$
$$
+ \int\limits_{\mathbb R^d} \int\limits_{|\xi|\le 3R} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z(T_\xi \omega) \Delta(\xi+z, \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) dz \, d \xi.
$$
If $|\xi| > 3R$, then $\varphi (\frac{|\xi|}{R}) = 0$. Also, if $|\xi| > 3R$ and $|z|\le R$, then $|\xi+z|>2R$ and thus $\varphi (\frac{|\xi+z|}{R})=0$, so that only the region $|z|>R$ contributes to the first integral.
Then we obtain the following upper bound
$$
\frac{1}{R^d} \int\limits_{\mathbb R^d} \int\limits_{|\xi|> 3R} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) |\Delta_z (T_\xi \omega) | |\Delta(\xi+z, \omega)| \varphi (\frac{|\xi+z|}{R}) d\xi \, dz
$$
\begin{equation}\label{estimm}
\le \frac{\alpha_2^2 }{R^d} \int\limits_{|\eta|\le 2R} \Big( \int\limits_{|z|>R} |z| a (z) |\Delta_z (T_{\eta-z} \omega) |\, dz \Big) \frac1R |\Delta(\eta, \omega)| \varphi (\frac{|\eta|}{R}) d\eta
\end{equation}
$$
\le \frac{\alpha_2^2 }{R^d} \int\limits_{|\eta|\le 2R} \phi (T_\eta \omega) \frac1R |\Delta(\eta, \omega)| \varphi (\frac{|\eta|}{R}) \, d\eta,
$$
where $\eta=\xi+z$,
$$
\phi (T_\eta \omega) = \int\limits_{\mathbb R^d} |z| a (z) |\Delta_z (T_{\eta -z} \omega)| \, dz,
$$
and in the first inequality we have used the fact that $1< \frac{|z|}{R}$ if $|z|>R$.
Since $\Delta_z(\omega) \in L^2_M$, we have $\phi(\omega) \in L^2(\Omega)$.
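Indeed, by the Minkowski integral inequality and the Cauchy-Schwarz inequality,
$$
\|\phi\|_{L^2(\Omega)}\le \int\limits_{\mathbb R^d} |z|\, a(z)\, \|\Delta_z\|_{L^2(\Omega)}\,dz
\le \Big(\int\limits_{\mathbb R^d} |z|^2 a(z)\,dz\Big)^{\frac12}\Big(\int\limits_{\mathbb R^d} a(z)\,\|\Delta_z\|^2_{L^2(\Omega)}\,dz\Big)^{\frac12}<\infty,
$$
both factors on the right-hand side being finite by the moment condition on $a$ and the inclusion $\Delta_z(\omega)\in L^2_M$.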
Applying the Cauchy-Schwarz inequality to the last integral in \eqref{estimm} and recalling the relation $R=\eps^{-1}$ we have
\begin{equation}\label{5B}
\frac{\alpha_2^2 }{R^d} \int\limits_{|\eta|\le 2R} \phi (T_\eta \omega) \frac{|\Delta(\eta, \omega)|}{R} \varphi (\frac{|\eta|}{R}) \, d \eta \le \alpha_2^2 \Big( \frac{1}{R^d} \int\limits_{|\eta|\le 2R} \phi^2 (T_\eta \omega) d\eta \Big)^{\frac12} \Big( \frac{1}{R^d} \int\limits_{|\eta|\le 2R} \big(\frac{|\Delta(\eta, \omega)|}{R} \big)^2 d \eta\Big)^{\frac12} \to 0,
\end{equation}
as $R \to \infty$, because the first integral on the right-hand side is bounded due to the stationarity of $\phi (T_\eta \omega)$, and the second integral tends to 0 due to the sublinear growth of $\Delta(\eta, \omega)$, see \eqref{1corrsmall}.
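To make the last claim explicit, the substitution $\eta=\frac x\eps$ with $\eps=R^{-1}$ yields
$$
\frac{1}{R^d} \int\limits_{|\eta|\le 2R} \Big(\frac{|\Delta(\eta, \omega)|}{R} \Big)^2 d \eta
=\int\limits_{|x|\le 2}\Big|\eps\,\Delta\big(\tfrac x\eps,\omega\big)\Big|^2 dx\;\longrightarrow\;0,
$$
since $\Delta=\theta_1-\theta_2$ and both $\eps\,\theta_i\big(\tfrac{\cdot}{\eps},\omega\big)$ tend to zero in $L^2$ on bounded sets by \eqref{1corrsmall}.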
If $|\xi| \le 3R$, then the corresponding part of $R^{-d} J_2^R$ can be rewritten as a sum of two terms
$$
\frac{1}{R^d}
\int\limits_{\mathbb R^d} \int\limits_{|\xi| \le 3R } a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z(T_\xi \omega) (\Delta(\xi+z, \omega) - \Delta(\xi, \omega)) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) d\xi \, dz
$$
$$
+ \frac{1}{R^d}\int\limits_{\mathbb R^d} \int\limits_{| \xi | \le 3R} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z(T_\xi \omega) \Delta(\xi, \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) d\xi \, dz = I_1 + I_2.
$$
We estimate $|I_1|$ and $|I_2|$ separately. Using the inequality $|\varphi( \frac{|x|}{R}) - \varphi (\frac{|y|}{R}) | \le \frac{|x-y|}{R}$ and
the same arguments as above, we get
$$
|I_2| \le \frac{\alpha_2^2}{R^d} \int\limits_{\mathbb R^d} \int\limits_{|\xi| \le 3R} a (z) |\Delta_z(T_\xi \omega)| |\Delta(\xi, \omega)| \frac{|z|}{R} d\xi \, d z
$$
$$
\le \alpha_2^2 \Big( \frac{1}{R^d} \int\limits_{|\xi|\le 3R} \phi^2 (T_\xi \omega) d\xi \Big)^{\frac12} \Big( \frac{1}{R^d} \int\limits_{|\xi|\le 3R} \big(\frac{|\Delta(\xi, \omega)|}{R} \big)^2 d \xi\Big)^{\frac12} \to 0.
$$
To estimate $I_1$ we divide the area of integration in $z$ into two parts: $|z|< \sqrt{R}$ and $|z| \ge \sqrt{R}$, and first consider the integral
$$
I_1^{(<)} = \frac{1}{R^d} \int\limits_{|z| < \sqrt{R}} \int\limits_{|\xi| \le 3R } a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z^2(T_\xi \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) d\xi \, dz.
$$
Since $|z|< \sqrt{R}$, we have $|\varphi( \frac{|\xi +z|}{R}) - \varphi (\frac{|\xi|}{R}) | \le \frac{1}{\sqrt{R}}$. Therefore,
$$
|I_1^{(<)}| \le \alpha_2^2 \frac{1}{\sqrt{R}} \ \frac{1}{R^d} \int\limits_{|\xi| \le 3R } \int\limits_{\mathbb{R}^d} a (z) \Delta_z^2(T_\xi \omega) dz \, d \xi \to 0,
$$
as $R \to \infty$; here we have used the fact that
$$
\frac{1}{R^d} \int\limits_{|\xi| \le 3R } \int\limits_{\mathbb{R}^d} a (z) \Delta_z^2(T_\xi \omega) dz \, d \xi \to c_0 \mathbb{E} \Big( \int\limits_{\mathbb{R}^d} a (z) \Delta_z^2(\omega) dz \Big)
$$
with a constant $c_0$ equal to the volume of a ball of radius $3$ in $\mathbb R^d$. We turn to the second integral
$$
I_1^{(>)} = \frac{1}{R^d} \int\limits_{|z| \ge \sqrt{R}} \int\limits_{|\xi| \le 3R } a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z^2(T_\xi \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) d\xi \, dz.
$$
Using the inequality $|\varphi( \frac{|\xi +z|}{R}) - \varphi (\frac{|\xi|}{R}) | \le 1$ we obtain
\begin{equation}\label{7A}
|I_1^{(>)}| \le \alpha_2^2 \frac{1}{R^d} \int\limits_{|\xi| \le 3R } \int\limits_{|z| \ge \sqrt{R}} a (z) \Delta_z^2(T_\xi \omega) \, dz \, d \xi.
\end{equation}
Denote by $\psi_{R}(\omega)$ the stationary function defined by
$$
\psi_{R}(\omega) = \int\limits_{|z| \ge \sqrt{R}} a (z) \Delta_z^2( \omega) \, dz.
$$
Since $ \Delta_z( \omega) \in L^2_M$, then
\begin{equation}\label{5A}
\mathbb{E} \psi_{R}(\omega) \to 0 \quad \mbox{ as } \; R \to \infty.
\end{equation}
Moreover, the function $\psi_{R}(\omega)$ is a.s. non-increasing in $R$.
Using the ergodic theorem, \eqref{7A} and \eqref{5A}, we conclude that $ |I_1^{(>)}| $ tends to zero as $R \to \infty$.
Thus we have proved that $|I_1| +|I_2| \to 0 $ as $R \to \infty$ a.s.
Together with \eqref{5B}
this implies \eqref{3A}.
\end{proof}
We proceed with the term $J_1^R$ in \eqref{2C}:
$$
J_1^R = \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z^2 (\xi,\omega ) \varphi (\frac{|\xi|}{R}) \, dz \, d \xi.
$$
Using the ergodic theorem we get as $R \to \infty$
\begin{equation}\label{6A}
\frac{1}{R^d} J_1^R =
\frac{1}{R^d} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z^2 (\xi,\omega ) \varphi (\frac{|\xi|}{R}) \, dz \, d \xi \to c_1 \mathbb{E} \int\limits_{\mathbb R^d} a (z) {\bm\mu} ( T_z \omega ) {\bm\mu} (\omega ) \Delta_z^2 (\omega )dz,
\end{equation}
where $c_1=\int_{\mathbb R^d}\varphi(|\xi|)d\xi>0$.
Consequently, from \eqref{2C} and \eqref{3A} it follows that
\begin{equation}\label{6B}
\frac{1}{R^d} |J_1^R| \ \to \ 0 \quad \mbox{ as } \; R \to \infty,
\end{equation}
and together with \eqref{6A} this implies that
\begin{equation}\label{6C}
\mathbb{E} \int\limits_{\mathbb R^d} a (z) {\bm\mu}( T_z \omega ) {\bm\mu} (\omega ) \Delta_z^2 (\omega )dz = 0.
\end{equation}
Using condition \eqref{add} we conclude from \eqref{6C} that $\Delta_z (\omega)
\equiv 0$ for a.e. $z$ and a.e. $\omega$; hence $\theta_1(z,\omega)-\theta_2(z,\omega)$ does not depend on $z$, that is, $\theta_1$ and $\theta_2$ coincide up to an additive constant.
Proposition is proved.
\end{proof}
${ }$\\[-0.8cm]
This completes the proof of Theorem \ref{t_corrector}.\end{proof}
\section{Additional terms of the asymptotic expansion}\label{s_addterms}
Recall that $I_0^\eps$ stands for the sum of all terms of order $\varepsilon^{0}$ in \eqref{K2_1} and that $u_0\in C_0^\infty(\mathbb R^d)$.
Our first goal is to determine the coefficients of the effective elliptic operator $\hat L$.
To this end we consider the following scalar product of $I_0^\eps$ with a function $\varphi \in L^2(\mathbb R^d)$:
\begin{equation}\label{hatK2_1}
(I^\varepsilon_0, \varphi) =
\int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \Big( \frac12 z\otimes z - z \otimes \theta
\big(\frac{x}{\varepsilon}-z, \omega \big) \Big) \ a (z) \mu \big( \frac{x}{\varepsilon}, \omega \big) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \ dz \ \nabla \nabla u_0 (x) \varphi(x) dx.
\end{equation}
After change of variables $x = \varepsilon \eta$ we have
\begin{equation}\label{hatK2_2}
\begin{array}{l}
\displaystyle
(I^\varepsilon_0, \varphi) =
\varepsilon^d \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \frac12 a (z) \,z\otimes z \, \mu ( \eta, \omega ) \mu ( \eta -z, \omega ) \, dz \, \nabla \nabla u_0 (\varepsilon\eta) \, \varphi (\varepsilon \eta) \, d\eta \\ \displaystyle
- \varepsilon^d \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \,z \otimes \theta (\eta-z, \omega ) \mu ( \eta, \omega ) \mu ( \eta -z, \omega ) \, dz \, \nabla \nabla u_0 (\varepsilon\eta) \, \varphi (\varepsilon \eta) \, d\eta = I^\eps_1(\varphi) - I^\eps_2(\varphi).
\end{array}
\end{equation}
We consider the integrals $I^\eps_1(\varphi)$ and $I^\eps_2(\varphi)$ separately.
Since $\int_{\mathbb R^d}|z|^2a(z)\,dz<\infty$, we have
$$
\int\limits_{\mathbb R^d} z\otimes z \,a(z) \mu (0,\omega)\mu(-z,\omega)\,dz \in (L^\infty(\Omega))^{d^2}.
$$
Therefore, by the Birkhoff ergodic theorem a.s.
$$
\int\limits_{\mathbb R^d} \frac12\, z\otimes z\,a(z) \mu (\frac{x}{\eps},\omega)\mu(\frac{x}{\eps}-z,\omega)\,dz \rightharpoonup
D_1\quad\hbox{weakly in } \ (L^2_{\rm loc}(\mathbb R^d))^{d^2}
$$
with
\begin{equation}\label{J_1}
D_1 = \int\limits_{\mathbb R^d} \frac12 \, z\otimes z \, a (z) \, E\{ \mu ( 0, \omega ) \mu ( -z, \omega )\} \, dz.
\end{equation}
Recalling that $u_0\in C_0^\infty(\mathbb R^d)$, we obtain
\begin{equation}\label{I_1}
I^\eps_1(\varphi)\to \int\limits_{\mathbb R^d}D_1\nabla\nabla u_0(x)\varphi(x)\,dx.
\end{equation}
The second integral in \eqref{hatK2_2} contains the non-stationary random field $ \theta (z,\omega)$; we rewrite $I^\eps_2(\varphi)$ as a sum of two terms such that the first term contains the stationary field $\zeta_z (\eta, \omega)$ and the contribution of the second one is asymptotically negligible. In order to estimate the contribution of the second term we construct an additional corrector $u_2^\varepsilon$, see formula \eqref{corr-u2} below.\\
We have
\begin{equation}\label{I_2appr}
\begin{array}{l}
\displaystyle
I^\varepsilon_2 (\varphi) = \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \theta (\frac{x}{\varepsilon} - z, \omega ) \nabla \nabla u_0(x) \varphi(x) \, d x \, dz \\ \displaystyle
= \frac12 \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \theta (\frac{x}{\varepsilon} - z, \omega ) \nabla \nabla u_0(x) \varphi(x) \, d x \, dz \\ \displaystyle
- \, \frac12 \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{y}{\varepsilon}, \omega ) \mu (\frac{y}{\varepsilon} -z, \omega ) \theta (\frac{y}{\varepsilon}, \omega ) \nabla \nabla u_0(y - \varepsilon z) \varphi(y-\varepsilon z) \, d y \, dz \\ \displaystyle
= \frac12 \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} \! a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \Big( \theta (\frac{x}{\varepsilon} - z, \omega ) \nabla \nabla u_0(x) \varphi(x) - \theta (\frac{x}{\varepsilon}, \omega ) \nabla \nabla u_0(x - \varepsilon z) \varphi (x-\varepsilon z)\! \Big) d x dz \\ \displaystyle
= \frac12 \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \big( \theta (\frac{x}{\varepsilon} - z, \omega ) - \theta (\frac{x}{\varepsilon}, \omega ) \big) \nabla \nabla u_0(x) \varphi(x) d x \, dz \\ \displaystyle
+ \frac12 \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) \varphi(x) - \nabla \nabla u_0(x - \varepsilon z) \varphi (x-\varepsilon z) \big) d x \, dz,
\end{array}
\end{equation}
here and in what follows $z\theta(z)\nabla\nabla u_0(x)$ stands for $z^i\theta^j(z)\partial_{x_i}\partial_{x_j}u_0(x)$.
The field $\zeta_{-z} (\eta, \omega)= \theta(\eta -z,\omega) - \theta (\eta,\omega)$ is stationary for any given $z$, and
\begin{equation}\label{PL1}
\int\limits_{\mathbb R^d} a (z) z \otimes \zeta_{-z} (0, \omega) \mu ( 0, \omega ) \mu ( -z, \omega ) \, dz \in (L^2(\Omega))^{d^2}.
\end{equation}
Indeed, in view of \eqref{thetaLM} and \eqref{zetatheta} by the Cauchy-Schwarz inequality we have
$$
\int\limits_{\Omega}\bigg(\int\limits_{\mathbb R^d} |a (z) z \otimes \zeta_{-z} (0, \omega) \mu ( 0, \omega ) \mu ( -z, \omega )| \, dz\bigg)^2 d P(\omega) \le
$$
$$
\alpha_2^2 \Big(\int\limits_{\mathbb R^d} a (z) |z|^2 dz \Big) \Big( \int\limits_{\mathbb R^d} \int\limits_{\Omega} a (z) \, |\theta(-z, \omega)|^2 dz d P(\omega) \Big) < \infty.
$$
Consequently, applying the ergodic theorem to the stationary field \eqref{PL1}, we obtain for the first integral in \eqref{I_2appr}, as $\varepsilon \to 0$,
\begin{equation}\label{I2-stationary}
\begin{array}{l}
\displaystyle
\frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \zeta_{-z} (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \nabla \nabla u_0(x) \varphi(x) d x \, dz \ \to
\\ \displaystyle
\frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) z E\{ \zeta_{-z} (0, \omega) \mu ( 0, \omega ) \mu ( -z, \omega ) \} \nabla \nabla u_0(x) \varphi(x) d x \, dz = \int\limits_{\mathbb R^d} D_2 \, \nabla \nabla u_0 (x) \varphi(x) \, dx,
\end{array}
\end{equation}
where we have used the notation
\begin{equation}\label{D_2}
D_2 = \frac12 \, \int\limits_{\mathbb R^d} a (z) z \otimes E\{ \zeta_{-z} (0, \omega) \mu ( 0, \omega ) \mu ( -z, \omega )\} \, dz.
\end{equation}
Denote the last integral on the right-hand side in \eqref{I_2appr} by $J_2^\varepsilon (\varphi)$:
\begin{equation}\label{J2eps}
J_2^\varepsilon (\varphi) = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) \varphi(x) - \nabla \nabla u_0(x - \varepsilon z) \varphi (x-\varepsilon z) \big) d x \, dz
\end{equation}
and consider this expression as a functional on $L^2(\mathbb R^d)$ acting on the function $\varphi$.
In order to show that for each $\eps>0$ the functional $J_2^\varepsilon$
is a bounded linear functional on $L^2(\mathbb R^d)$ we represent $J_2^\varepsilon$ as a sum $J_2^\varepsilon=J_2^{1,\varepsilon}
+J_2^{2,\varepsilon}+J_2^{3,\varepsilon}$ with $J_2^{1,\varepsilon}$,
$J_2^{2,\varepsilon}$ and $J_2^{3,\varepsilon}$ introduced below and estimate each of these functionals separately. By Proposition \ref{1corrector} a.s.
$ \theta (\frac{x}{\eps},\omega)\in L^2_{\rm loc}(\mathbb R^d)$ for all $\varepsilon>0$. Therefore,
$$
J_2^{1,\varepsilon} (\varphi) = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \nabla \nabla u_0(x) \varphi(x) d x \, dz
$$
is a.s. a bounded linear functional on $L^2(\mathbb R^d)$. Similarly,
$$
J_2^{2,\varepsilon} (\varphi) = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}-z, \omega ) \nabla \nabla u_0(x-\eps z)
\varphi(x-\eps z) d x \, dz
$$
is a.s. a bounded linear functional on $L^2(\mathbb R^d)$. Due to \eqref{thetaLM} and the Birkhoff ergodic theorem, the linear functional
$$
J_2^{3,\varepsilon} (\varphi) = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \Big( \theta (\frac{x}{\varepsilon}, \omega )-
\theta (\frac{x}{\varepsilon}-z, \omega )\Big) \nabla \nabla u_0(x-\eps z)
\varphi(x-\eps z) d x \, dz
$$
$$
= \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}+z, \omega ) \mu (\frac{x}{\varepsilon} , \omega ) \, \Big( \theta (\frac{x}{\varepsilon}+z, \omega )-
\theta (\frac{x}{\varepsilon}, \omega )\Big) \nabla \nabla u_0(x)
\varphi(x) d x \, dz
$$
$$
= \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}+z, \omega ) \mu (\frac{x}{\varepsilon} , \omega ) \, \theta
(z,T_{\frac{x}{\varepsilon}} \omega ) \nabla \nabla u_0(x)
\varphi(x) d x \, dz
$$
is a.s. a bounded linear functional on $L^2(\mathbb R^d)$. Since $J_2^{\varepsilon} (\varphi) =J_2^{1,\varepsilon} (\varphi)+ J_2^{2,\varepsilon} (\varphi)+ J_2^{3,\varepsilon} (\varphi)$, the desired boundedness of $J_2^{\varepsilon}$ follows.
Then by the Riesz theorem for a.e. $\omega$ there exists a function $f_2^\varepsilon = f_2^\varepsilon(u_0) \in L^2(\mathbb R^d)$ such that $J_2^\varepsilon(\varphi) = (f_2^\varepsilon,\varphi)$. We emphasize that we do not claim here that the norm of $J_2^\eps$ admits
a bound uniform in $\eps$.
Next we show that the contribution of $f_2^\varepsilon$ to $w^\varepsilon$ is vanishing. To this end consider the function (additional corrector)
\begin{equation}\label{corr-u2}
u_2^\varepsilon (x,\omega) = (-L^\varepsilon +m)^{-1} f_2^\varepsilon (x, \omega).
\end{equation}
\begin{lemma}\label{l_u2small}
$\| u_2^\varepsilon\|_{L^2(\mathbb R^d)} \to 0$ as $\varepsilon \to 0$ for a.e. $\omega$.
\end{lemma}
\begin{proof}
Taking $\varphi = u_2^\varepsilon$ we get
\begin{equation}\label{L1}
((-L^\varepsilon +m) u_2^\varepsilon, u_2^\varepsilon) = (f_2^\varepsilon, u_2^\varepsilon).
\end{equation}
Considering \eqref{L_eps} the left-hand side of \eqref{L1} can be rearranged as follows:
\begin{equation}\label{L1-LHS}
\begin{array}{l}
\displaystyle
- \frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) ( u_2^\varepsilon (x-\varepsilon z) - u_2^\varepsilon(x)) dz \, u_2^\varepsilon (x) dx + m \int\limits_{\mathbb R^d} (u_2^\varepsilon)^2 (x) dx \\ \displaystyle
= \, \frac12 \frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) ( u_2^\varepsilon (x-\varepsilon z) - u_2^\varepsilon(x))^2 dz dx + m \int\limits_{\mathbb R^d} (u_2^\varepsilon)^2 (x) dx.
\end{array}
\end{equation}
We denote
$$
G_1^2 = \frac{1}{2 \varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) ( u_2^\varepsilon (x-\varepsilon z) - u_2^\varepsilon(x))^2 dz dx, \quad G_2^2= m \int\limits_{\mathbb R^d} (u_2^\varepsilon)^2 (x) dx.
$$
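With this notation, \eqref{L1} and \eqref{L1-LHS} combine into the identity
$$
G_1^2+G_2^2=(f_2^\varepsilon, u_2^\varepsilon)=J_2^\varepsilon(u_2^\varepsilon),
$$
which we use at the end of the proof.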
It follows from \eqref{J2eps} that the right-hand side of \eqref{L1} takes the form
\begin{equation}\label{L1-RHS}
\begin{array}{l}
\displaystyle
J_2^\varepsilon (u_2^\varepsilon) = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) u_2^\varepsilon(x) - \nabla \nabla u_0(x - \varepsilon z) u_2^\varepsilon (x-\varepsilon z) \big) d x \, dz \\ \displaystyle
= \frac12 \, \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \nabla \nabla u_0(x) \big( u_2^\varepsilon(x) - u_2^\varepsilon (x-\varepsilon z) \big) d x \, dz \\[6mm] \displaystyle
+ \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) - \nabla \nabla u_0(x - \varepsilon z) \big) u_2^\varepsilon (x-\varepsilon z) d x \, dz =\! \frac12 (I_1 + I_2).
\end{array}
\end{equation}
It is proved in Proposition \ref{1corrector} that a.s. $\|\eps \theta (\frac x\eps,\omega)\|_{L^2(B)}\to 0$ as $\eps\to0$ for any
ball $B\subset\mathbb R^d$.
By the Cauchy-Schwarz inequality we obtain the following upper bound for $I_1$:
\begin{equation}\label{L1-RHS-I1}
\begin{array}{l}
\displaystyle
I_1 \le \left( \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \big( u_2^\varepsilon(x) - u_2^\varepsilon (x-\varepsilon z) \big)^2 d x \, dz \right)^{1/2} \\
\displaystyle
\left( \frac{1}{\varepsilon^2 } \, \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z)|z|^2 \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \varepsilon^2 \big|\theta (\frac{x}{\varepsilon}, \omega )\big|^2 (\nabla \nabla u_0(x))^2 d x \, dz \right)^{1/2} \\
\displaystyle
\le \frac{1}{\varepsilon} \, o(1) \ \left(\frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \big( u_2^\varepsilon(x) - u_2^\varepsilon (x-\varepsilon z) \big)^2 d x \, dz \right)^{1/2} = G_1 \cdot o(1),
\end{array}
\end{equation}
where $o(1)\to0$ as $\eps\to0$. We turn to the second integral $I_2$. Let $B$ be a ball centered at the origin and such that
$\mathrm{supp}(u_0)\subset B$, $\mathrm{dist}(\mathrm{supp}(u_0),\partial B)>1$. Then
$$
\Big|\int\limits_{\mathbb R^d} \int\limits_{B} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) - \nabla \nabla u_0(x - \varepsilon z) \big) u_2^\varepsilon (x-\varepsilon z) d x \, dz\Big|
$$
\begin{equation}\label{aaa1}
\leq C\int\limits_{\mathbb R^d} \int\limits_{B} \, a (z) |z|^2 \, \big|\eps \theta (\frac{x}{\varepsilon}, \omega )\big|\,
| u_2^\varepsilon (x-\varepsilon z)| d x \, dz\le \| u_2^\varepsilon \|_{L^2(\mathbb R^d)} \cdot o(1) = G_2 \cdot o(1).
\end{equation}
The integral over $B^c=\mathbb R^d\setminus B$ can be estimated in the following way. Since $\nabla\nabla u_0$ vanishes on $B^c$, and since $x\in B^c$ together with $x-\varepsilon z\in \mathrm{supp}(u_0)$ implies $\eps|z|>1$, we have
$$
\Big|\int\limits_{\mathbb R^d} \int\limits_{B^c} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) - \nabla \nabla u_0(x - \varepsilon z) \big) u_2^\varepsilon (x-\varepsilon z) d x \, dz\Big|
$$
$$
=\Big|\int\limits_{\mathbb R^d} \int\limits_{B^c} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \nabla \nabla u_0(x - \varepsilon z) u_2^\varepsilon (x-\varepsilon z) d x \, dz\Big|
$$
\begin{equation}\label{aaa2}
\leq C\int\limits_{|z|\geq \frac1\eps} \int\limits_{B^c} \, a (z) |z| \, \big| \theta (\frac{x}{\varepsilon}, \omega )\big|\,
|\nabla \nabla u_0(x - \varepsilon z)|\, |u_2^\varepsilon (x-\varepsilon z)|\, d x \, dz
\end{equation}
$$
\leq C\int\limits_{|z|\geq \frac1\eps} \int\limits_{\mathbb R^d} \, a (z) |z| \, \big| \theta (\frac{x}{\varepsilon}+z, \omega )\big|\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz
$$
$$
\leq C\int\limits_{|z|\geq \frac1\eps} \int\limits_{\mathbb R^d} \, a (z) |z| \,\Big[ \big| \theta (\frac{x}{\varepsilon}+z, \omega )
- \theta (\frac{x}{\varepsilon}, \omega )\big|+\big| \theta (\frac{x}{\varepsilon}, \omega )\big|\Big]\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz.
$$
We have
$$
\int\limits_{|z|\geq \frac1\eps} \int\limits_{\mathbb R^d} \, a (z) |z| \,\big| \theta (\frac{x}{\varepsilon}, \omega )\big|\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz
$$
$$
\leq
\int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) |z|^2 \,\big|\eps \theta (\frac{x}{\varepsilon}, \omega )\big|\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz
\leq G_2\cdot o(1)
$$
and
$$
\int\limits_{|z|\geq \frac1\eps} \int\limits_{\mathbb R^d} \, a (z) |z| \,\Big[ \big| \theta (\frac{x}{\varepsilon}+z, \omega )
- \theta (\frac{x}{\varepsilon}, \omega )\big|\Big]\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz
$$
$$
\leq \int\limits_{|z|\geq \frac1\eps} \int\limits_{\mathbb R^d} \, a (z) |z| \, \big| \zeta_z (T_{\frac{x}{\varepsilon}}\omega )
\big|\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz
$$
$$
\leq \left( \int\limits_{|z|\geq \frac1\eps} \, a (z) |z|^2 \, dz \right)^{\frac12} \int\limits_{\mathbb R^d} \left( \int\limits_{\mathbb R^d} a(z) \big| \zeta_z (T_{\frac{x}{\varepsilon}}\omega )
\big|^2 \, dz \right)^{\frac12}
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x
$$
$$
\leq o(1) \, \left( \int\limits_{\mathbb R^d} \, |u_2^\varepsilon (x)|^2 \, dx \right)^{\frac12} \left( \int\limits_{\mathbb R^d} \left( \int\limits_{\mathbb R^d} a(z) \big| \zeta_z (T_{\frac{x}{\varepsilon}}\omega )
\big|^2 \, dz \right)
|\nabla \nabla u_0(x)|^2\, d x \right)^{\frac12} = G_2\cdot o(1).
$$
Since $\zeta_z (\omega) \in L^2_M$, the second integral on the right-hand side here converges to a constant by the ergodic theorem.
Combining the last two estimates we conclude that the term on the right-hand side in \eqref{aaa2} does not exceed $G_2\cdot o(1)$.
Therefore, considering \eqref{aaa1}, we obtain $I_2\leq G_2\cdot o(1)$. This estimate and \eqref{L1-RHS-I1} imply that
$$
G_1^2 + G_2^2 = \frac12\,(I_1 + I_2) \le (G_1 + G_2) \cdot o(1).
$$
Consequently, $G_1 \to 0$ and $G_2 = m^{1/2} \| u_2^\varepsilon \|_{L^2(\mathbb R^d)} \to 0$ as $\varepsilon \to 0$. Lemma is proved.
\end{proof}
\medskip
Thus we can rewrite $I^\varepsilon_0$ (all the terms of the order $\varepsilon^{0}$) as follows
\begin{equation}\label{VV}
I^\varepsilon_0 = (D_1 - D_2) \cdot \nabla\nabla u_0 + f_2^\varepsilon + S(\frac{x}{\varepsilon}, \omega) \cdot \nabla\nabla u_0, \qquad S(\frac{x}{\varepsilon}, \omega) = \Psi_1(\frac{x}{\varepsilon}, \omega) - \Psi_2(\frac{x}{\varepsilon}, \omega),
\end{equation}
where the matrices $D_1$ and $D_2$ are defined in \eqref{J_1} and \eqref{D_2} respectively, and $ S(\frac{x}{\varepsilon}, \omega), \Psi_1(\frac{x}{\varepsilon}, \omega), \Psi_2(\frac{x}{\varepsilon}, \omega)$ are stationary fields with zero mean which are given by
\begin{equation}\label{Psi-1}
\Psi_1(\frac{x}{\varepsilon}, \omega) = \frac12 \int\limits_{\mathbb R^d} \, a (z)\, z\otimes z \Big[ \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) -
E\{ \mu ( 0, \omega ) \mu ( -z, \omega ) \} \Big] dz,
\end{equation}
\begin{equation}\label{Psi-2}
\Psi_2(\frac{x}{\varepsilon}, \omega) = \frac12 \int\limits_{\mathbb R^d} \, a (z)\, z \otimes \Big[ \zeta_{-z} (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) -
E\{ \zeta_{-z} (0, \omega) \mu ( 0, \omega ) \mu ( -z, \omega )\} \Big] dz.
\end{equation}
Denote
\begin{equation}\label{u3}
u_3^\varepsilon(x,\omega) = (-L^\varepsilon+m)^{-1} F^\varepsilon(x,\omega), \quad \mbox{where } \; F^\varepsilon(x, \omega) = S(\frac{x}{\varepsilon}, \omega) \cdot \nabla\nabla u_0(x).
\end{equation}
Since $ {\rm supp} \, u_0 \subset B$ is a bounded subset of $\mathbb R^d$ and
$$
\int\limits_{\mathbb R^d} \, a (z) |z|\, \big|\zeta_{-z} ( \omega )\big| \,dz \in L^2(\Omega),
$$
it follows by the Birkhoff theorem that $u_3^\varepsilon \in L^2(\mathbb R^d)$. Our goal is to prove that $\|u_3^\varepsilon \|_{L^2(\mathbb R^d)} \to 0$ as $\varepsilon \to 0$. We first show that the family $\{u_3^\varepsilon\}$ is bounded in $L^2(\mathbb R^d)$.
\begin{lemma}\label{Bound}
The family of functions $u_3^\varepsilon$ defined by \eqref{u3} is uniformly bounded in $L^2(\mathbb R^d)$ for a.e. $\omega$: $\|u_3^\varepsilon \|_{L^2(\mathbb R^d)} \le C$ for any $0<\varepsilon<1$.
\end{lemma}
\begin{proof}
Since the operator $(-L^\varepsilon+m)^{-1} $ is bounded ($\| (-L^\varepsilon+m)^{-1} \| \le \frac{1}{m}$), it is sufficient to prove that $\| F^\varepsilon(x,\omega) \|_{L^2(\mathbb R^d)} \le C$ uniformly in $\varepsilon$. By the Birkhoff ergodic theorem the functions $ \Psi_1(\frac{x}{\varepsilon}, \omega)$ and $\Psi_2(\frac{x}{\varepsilon}, \omega)$ a.s. converge
to zero weakly in $L^2(B)$, and so does $S(\frac{x}{\varepsilon}, \omega)$. Then $S(\frac{x}{\varepsilon}, \omega)\cdot \nabla\nabla
u_0$ a.s. converges to zero weakly in $L^2(\mathbb R^d)$; since a weakly convergent family is bounded in norm,
this implies the desired boundedness.
\end{proof}
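The weak convergence of oscillating stationary fields used in this proof can be illustrated numerically. In the sketch below the oscillating field $\cos(x/\varepsilon)$ and the Gaussian test function are our own illustrative choices (not objects from the paper); the point is only that the pairing of a fixed test function with a zero-mean oscillating field vanishes as $\varepsilon \to 0$.

```python
import math

def pair(eps, n=100001, l=6.0):
    """Midpoint-rule approximation of int exp(-x^2) cos(x/eps) dx over [-l, l]."""
    h = 2.0 * l / n
    s = 0.0
    for i in range(n):
        x = -l + (i + 0.5) * h
        s += math.exp(-x * x) * math.cos(x / eps)
    return s * h

# The pairing decays as eps -> 0; the exact value is sqrt(pi) * exp(-1/(4 eps^2)).
vals = [pair(e) for e in (1.0, 0.5, 0.1)]
```

The same computation with $\sin(x/\varepsilon)$ vanishes exactly by symmetry; the cosine case shows the generic decay of the pairing.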
\begin{lemma}\label{Convergence} For any cube $B$ centered at the origin
$\|u_3^\varepsilon \|_{L^2(B)} \ \to \ 0$ as $\varepsilon \to 0$ for a.e. $\omega$.
\end{lemma}
\begin{proof}
The first step of the proof is to show that any sequence $\{u_3^{\varepsilon_j} \}$, $\varepsilon_j \to 0$, is compact in $L^2(B)$.
Using definition \eqref{u3} we have
$$
( (-L^\varepsilon+m) u_3^\varepsilon, u_3^\varepsilon) \ = \ ( F^\varepsilon, u_3^\varepsilon).
$$
The left-hand side of this relation can be rewritten as
\begin{equation}\label{L2-rhs}
\begin{array}{l}
\displaystyle
\int\limits_{\mathbb R^d} (-L^\varepsilon+m) u_3^\varepsilon(x) u_3^\varepsilon(x) dx \\ \displaystyle
= \, m \int\limits_{\mathbb R^d} (u_3^\varepsilon(x))^2 dx - \frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega) ( u_3^\varepsilon (x-\varepsilon z) - u_3^\varepsilon(x)) u_3^\varepsilon (x) dz dx \\ \displaystyle
= \, m \int\limits_{\mathbb R^d} (u_3^\varepsilon(x))^2 dx + \frac{1}{2 \varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega) ( u_3^\varepsilon (x-\varepsilon z) - u_3^\varepsilon(x))^2 dz dx.
\end{array}
\end{equation}
Consequently we obtain the following equality
\begin{equation}\label{u3-main}
m \int\limits_{\mathbb R^d} (u_3^\varepsilon(x))^2 dx + \frac{1}{2 \varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega) ( u_3^\varepsilon (x-\varepsilon z) - u_3^\varepsilon(x))^2 dz dx = ( F^\varepsilon, u_3^\varepsilon).
\end{equation}
Considering the uniform boundedness of $F^\varepsilon$ and $ u_3^\varepsilon$, see Lemma \ref{Bound}, we immediately conclude that
\begin{equation}\label{C-main}
\frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega) ( u_3^\varepsilon (x-\varepsilon z) - u_3^\varepsilon(x))^2 dz dx < K
\end{equation}
uniformly in $\varepsilon$ and for a.e. $\omega$. Therefore,
\begin{equation}\label{C-main_pure}
m \int\limits_{\mathbb R^d} (u_3^\varepsilon(x))^2 dx+\frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) ( u_3^\varepsilon (x-\varepsilon z) - u_3^\varepsilon(x))^2 dz dx < K.
\end{equation}
For the sake of definiteness assume that $B=[-1,1]^d$. Cubes of other sizes can be treated in exactly the same way.
Let $\phi(s)$ be an even $C_0^\infty(\mathbb R)$ function such that $0\leq \phi\leq 1$, $\phi(s)=1$ for $|s|\leq 1$,
$\phi(s)=0$ for $|s|\geq 2$, and $|\phi'(s)|\leq 2$. Denote $\tilde u_3^\varepsilon(x)= \phi(|x|)u_3^\varepsilon(x)$.
It is straightforward to check that
\begin{equation}\label{C-main_modi1}
m \int\limits_{\mathbb R^d} (\tilde u_3^\varepsilon(x))^2 dx+\frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) (\tilde u_3^\varepsilon (x-\varepsilon z) - \tilde u_3^\varepsilon(x))^2 dz dx < K.
\end{equation}
We also choose $\mathcal{R}$ in such a way that $\int_{|z|\leq \mathcal{R}}a(z)dz\geq \frac12$ and introduce
$$
\tilde a(z) ={\bf 1}_{\{|z|\leq \mathcal{R}\}}\,a(z)\,\Big(\int_{|z|\leq \mathcal{R}}a(z)dz\Big)^{-1}.
$$
Then
\begin{equation}\label{C-main_cut}
m \int\limits_{\mathbb R^d} (\tilde u_3^\varepsilon(x))^2 dx+\frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, \tilde a (z) (\tilde u_3^\varepsilon (x-\varepsilon z) - \tilde u_3^\varepsilon(x))^2 dz dx < K.
\end{equation}
Letting $\tilde B = [-\pi, \pi]^d$, we denote by $\hat u_3^\varepsilon(x)$ the $\tilde B$-periodic extension of
$\tilde u_3^\varepsilon(x)$.
For the extended function we have
\begin{equation}\label{C-main_per}
m \int\limits_{\tilde B} (\hat u_3^\varepsilon(x))^2 dx+\frac{1}{\varepsilon^2} \int\limits_{\tilde B} \int\limits_{\mathbb R^d} \, \tilde a (z) (\hat u_3^\varepsilon (x-\varepsilon z) - \hat u_3^\varepsilon(x))^2 dz dx < K.
\end{equation}
The functions $e_k(x) = \frac{1}{(2 \pi)^{d/2}} e^{ikx}, \; k \in \mathbb Z^d$, form an orthonormal basis in $L^2(\tilde B)$, and
$$
\hat u_3^\varepsilon(x) = \sum_k \alpha_k^\varepsilon e_k(x), \quad \hat u_3^\varepsilon (x-\varepsilon z) = \sum_k \alpha_k^\varepsilon e^{-i\varepsilon kz} e_k(x);
$$
$$
\| \hat u_3^\varepsilon(x)\|^2 = \sum_k |\alpha_k^\varepsilon|^2, \quad \|\hat u_3^\varepsilon (x-\varepsilon z) -
\hat u_3^\varepsilon(x) \|^2 =\sum_k |\alpha_k^\varepsilon|^2 |e^{-i\varepsilon k z} - 1|^2.
$$
Then inequality \eqref{C-main_per} implies the following bound
\begin{equation}\label{AAA1}
\frac{1}{\varepsilon^2} \sum_k |\alpha_k^\varepsilon|^2 \, \int\limits_{\mathbb R^d} \, \tilde a (z) |e^{-i\varepsilon k z} - 1|^2 dz < C.
\end{equation}
\begin{lemma}\label{Propc1c2}
There exist constants $C_1, \ C_2$ (depending on $d$) such that for any $k \in \mathbb Z^d$ and any $0<\varepsilon<1$
\begin{equation}\label{A2}
\int\limits_{\mathbb R^d} \, \tilde a (z) |e^{-i\varepsilon k z} - 1|^2 dz \ge \min \{ C_1 k^2 \varepsilon^2, \ C_2 \}.
\end{equation}
\end{lemma}
\begin{proof}
For small $\varepsilon |k|$, the lower bound by $C_1 k^2 \varepsilon^2$ follows from the Taylor expansion of $e^{-i \varepsilon k z}$ around $0$. For $\varepsilon |k|\ge \varkappa_0>1$ we use the following inequality
$$
\int\limits_{\mathbb R^d} \, \tilde a (z) |e^{-i\varepsilon k z} - 1|^2 dz \ge c_0 \int\limits_{[0,1]^d} |e^{-i\varepsilon k z} - 1|^2 dz \ge c_0 \big(2-\frac{2}{\varkappa_0}\big)^d.
$$
\end{proof}
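The two regimes in Lemma \ref{Propc1c2} are easy to observe numerically in dimension $d=1$. In the sketch below the Gaussian profile for $a(z)$ and the truncation radius $R=3$ are our own illustrative assumptions, not data from the paper; the symbol $\int \tilde a(z)\,|e^{-isz}-1|^2\,dz$ (with $s=\varepsilon k$) is evaluated by a midpoint rule.

```python
import math

R = 3.0  # truncation radius of the cut-off kernel (illustrative choice)

def a(z):
    # kernel profile with finite second moment (illustrative choice)
    return math.exp(-z * z / 2.0)

# midpoint-rule grid on [-R, R]
N = 20000
h = 2.0 * R / N
zs = [-R + (i + 0.5) * h for i in range(N)]
mass = sum(a(z) for z in zs) * h               # normalizing constant of tilde_a
m2 = sum(a(z) * z * z for z in zs) * h / mass  # second moment of tilde_a

def symbol(s):
    """int tilde_a(z) |exp(-i s z) - 1|^2 dz = 2 int tilde_a(z) (1 - cos(s z)) dz."""
    return 2.0 * sum(a(z) * (1.0 - math.cos(s * z)) for z in zs) * h / mass
```

For small $s$ the symbol behaves like $m_2 s^2$, while for large $s$ it stays bounded away from zero, in agreement with the bound $\min\{C_1 k^2\varepsilon^2, \ C_2\}$.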
Let us consider a sequence $\varepsilon_j \to 0$. Using inequalities \eqref{AAA1}-\eqref{A2}, for any $\delta>0$ we now construct a finite $2 \delta$-net covering all elements of the sequence $\{u_3^{\varepsilon_j}\}$. To this end we take $|k_0|$ and $j_0$ such that
\begin{equation}\label{A3}
\frac{C}{\delta} < C_1 |k_0|^2 < \frac{C_2}{\varepsilon_{j_0}^2},
\end{equation}
where $C,\, C_1, \, C_2$ are the same constants as in \eqref{AAA1}-\eqref{A2}. Then it follows from \eqref{AAA1}-\eqref{A3} that
$$
\sum_{k:|k| \ge |k_0|} C_1 |k_0|^2 |\alpha_k^{\varepsilon_j}|^2 < \sum_{k: |k| \ge |k_0|} \min \Big\{ C_1 |k|^2, \, \frac{C_2}{\varepsilon_j^2} \Big\} \, |\alpha_k^{\varepsilon_j}|^2 < C \quad \mbox{ for any } \; j>j_0.
$$
Consequently we obtain the uniform bound on the tails of $\hat u_3^{\varepsilon_j}$ for all $j>j_0$:
\begin{equation}\label{A4}
\sum_{k:|k| \ge |k_0|} |\alpha_k^{\varepsilon_j}|^2 < \frac{C}{C_1 |k_0|^2} < \delta.
\end{equation}
Denote by ${\cal H}_{k_0} \subset L^2(\tilde B)$ the linear span of the basis vectors $\{ e_k, \ |k|<|k_0| \}$. Evidently, it is a finite-dimensional subspace. Then we have
$$
\hat u_3^\varepsilon = w_{k_0}^\varepsilon + \sum_{k:|k| \ge |k_0|} \alpha_k^{\varepsilon} e_k, \quad \mbox{ where } \; w_{k_0}^\varepsilon = P_{{\cal H}_{k_0}} \hat u_3^\varepsilon.
$$
Since we already know from Lemma \ref{Bound} that the functions $\hat u_3^{\varepsilon_j}$ are uniformly bounded in $L^2(\tilde B)$, the functions $w_{k_0}^{\varepsilon_j}$ are also uniformly bounded. Therefore there exists in ${\cal H}_{k_0}$ a finite $\delta$-net covering the functions $\{ w_{k_0}^{\varepsilon_j}, \, j>j_0 \}$. Estimate \eqref{A4} implies that the same net is a $2 \delta$-net for the functions $\{\hat u_3^{\varepsilon_j}, \, j>j_0 \}$. It remains to add to this net $j_0$ elements covering the first $j_0$ functions $\hat u_3^{\varepsilon_j}, \, j=1, \ldots, j_0$.
Thus we have constructed a finite $2 \delta$-net for any $\delta>0$, which proves the compactness of $\{\hat u_3^{\varepsilon} \}$ as $\varepsilon \to 0 $ in $L^2(\tilde B)$.
Since $u_3^{\varepsilon}(x)=\hat u_3^{\varepsilon}(x)$ for $x\in B$, we conclude that the family $\{u_3^{\varepsilon}\}$ is compact
in $L^2(B)$. In the same way one can show that this family is compact on any cube $B=[-L,L]^d$.
This completes the proof of the lemma.
\end{proof}
\begin{lemma}\label{l_u3small}
The following limit relation holds: $\|u_3^\eps\|_{L^2(\mathbb R^d)}\to 0$, as $\eps\to0$.
\end{lemma}
\begin{proof}
We go back to formula \eqref{u3-main}. On the right-hand side of this equality we have the inner product of the two sequences $F^\varepsilon$ and $u_3^\varepsilon$. Since $F^\eps \rightharpoonup 0$ weakly in $L^2(B)$ and the sequence $u_3^\varepsilon$ is compact in $L^2(B)$, the product $(F^\varepsilon, u_3^\varepsilon) \to 0$ as $\varepsilon \to 0$.
Therefore, both integrals on the left-hand side of \eqref{u3-main} also tend to zero as $\varepsilon \to 0$, and we obtain that $\| u_3^\varepsilon \|_{L^2(\mathbb R^d)} \to 0, \ \varepsilon \to 0$.
\end{proof}
Denote $\Theta = D_1 - D_2$, where $D_1, \, D_2$ are defined by \eqref{J_1}, \eqref{D_2}. Our next goal is to show that $\Theta$ is a positive definite matrix.
\begin{proposition}
The matrix $\Theta = D_1 - D_2$ is positive definite:
\begin{equation}\label{Positive}
\Theta \ = \ \frac12 \, \int\limits_{\mathbb R^d} \int\limits_{\Omega} \big(z\otimes z - z \otimes \zeta_{-z} (0, \omega ) \big) \, a (z) \, \mu (0, \omega ) \mu ( -z, \omega) \, dz \, d P(\omega) > 0.
\end{equation}
\end{proposition}
\begin{proof}
We recall that $\varkappa^\delta(\omega)$ stands for the unique solution of equation \eqref{A-delta}. Letting
$\varkappa_\eta^\delta(\omega)=\eta\cdot\varkappa^\delta(\omega)$, $\eta\in\mathbb R^d\setminus \{0\}$,
one can easily obtain
\begin{equation}\label{Prop2_eta}
\begin{array}{c}
\displaystyle
\delta \int\limits_\Omega \big(\varkappa_\eta^\delta(\omega)\big)^2\mu(\omega)\, dP(\omega)
- \int\limits_{\mathbb R^d} \int\limits_\Omega a (z) \mu ( T_z \omega ) \big( \varkappa_\eta^\delta (T_z \omega ) - \varkappa_\eta^\delta ( \omega) \big) \varkappa_\eta^\delta ( \omega)\mu(\omega) \, dz \, dP(\omega) \\ \displaystyle
= \int\limits_{\mathbb R^d} \int\limits_\Omega (\eta\cdot z) a(z) \varkappa_\eta^\delta(\omega) \mu(T_z \omega) \mu(\omega) \, dz \, dP(\omega).
\end{array}
\end{equation}
In the same way as in the proof of Proposition \ref{spectrA},
we derive the following relation:
\begin{equation}\label{Prop2_etabis}
\begin{array}{c}
\displaystyle
\delta \int\limits_\Omega \big(\varkappa_\eta^\delta(\omega)\big)^2\mu(\omega)\, dP(\omega)
+\frac12\int\limits_{\mathbb R^d} \int\limits_\Omega a (z) \mu ( T_z \omega ) \big( \varkappa_\eta^\delta (T_z \omega ) - \varkappa_\eta^\delta ( \omega) \big)^2\mu(\omega) \, dz \, dP(\omega) \\ \displaystyle
= - \frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega (\eta\cdot z) a(z)\big( \varkappa_\eta^\delta (T_z \omega ) - \varkappa_\eta^\delta ( \omega) \big) \mu(T_z \omega) \mu(\omega) \, dz \, dP(\omega).
\end{array}
\end{equation}
According to \eqref{theta} the sequence $\eta\cdot(\varkappa_\eta^{\delta_j} (T_z \omega ) - \varkappa_\eta^{\delta_j} ( \omega))$
converges weakly in $L^2_M $ as $\delta_j\to 0$ to $\eta\cdot\theta(z,\omega)$. Passing to the limit $\delta_j\to0$
in relation \eqref{Prop2_etabis} and considering the lower semicontinuity of the $L^2_M$ norm with respect to the weak
topology, we arrive at the following inequality
\begin{equation}\label{est_ineq}
\frac12\int\limits_{\mathbb R^d} \int\limits_\Omega a (z) \mu ( T_z \omega ) \big(\eta\cdot\theta(z,\omega) \big)^2\mu(\omega) \, dz \, dP(\omega) \leq
- \frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega (\eta\cdot z) a(z)\big( \eta\cdot\theta(z,\omega) \big) \mu(T_z \omega) \mu(\omega) \, dz \, dP(\omega).
\end{equation}
Therefore,
$$
\Theta \eta\cdot\eta= \frac12 \, \eta_i\eta_j\int\limits_{\mathbb R^d} \int\limits_{\Omega} \big(z^i z^j - z^i \zeta^j_{-z} (0, \omega ) \big) \, a (z) \, \mu (0, \omega ) \mu ( -z, \omega) \, dz \, d P(\omega)
$$
$$
=\frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega \big((\eta\cdot z)^2+(\eta\cdot z) (\eta\cdot \theta(z,\omega))\big) \, a (z) \, \mu (0, \omega ) \mu ( z, \omega) \, dz \, d P(\omega).
$$
Combining the latter relation with \eqref{est_ineq} we obtain
$$
\Theta \eta\cdot\eta\geq \frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega \big((\eta\cdot z)+\eta\cdot \theta(z,\omega)\big)^2 \, a (z) \, \mu (0, \omega ) \mu ( z, \omega) \, dz \, d P(\omega).
$$
Since $\theta(z, \omega)$ is a.s. a function of sublinear growth in $z$, we conclude that $ \eta\cdot\theta(z, \omega) \not \equiv -\eta\cdot z$; consequently, the integral on the right-hand side here is strictly positive.
This yields the desired positive definiteness.
\end{proof}
\section{Estimation of the remainder $ \phi_\varepsilon $}\label{s_estrem}
In this section we consider the remainder $ \phi_\varepsilon (x, \omega)$ given by (\ref{14}) and prove that $\|\phi_\varepsilon\|_{L^2(\mathbb R^d)}$ vanishes a.s. as $\varepsilon \to 0$.
\begin{lemma}\label{reminder}
Let $u_0 \in {\cal{S}}(\mathbb R^d)$. Then a.s.
\begin{equation}\label{fi}
\| \phi_\varepsilon (\cdot, \omega) \|_{L^2(\mathbb R^d)} \ \to \ 0 \quad \mbox{ as } \; \varepsilon \to 0.
\end{equation}
\end{lemma}
\begin{proof}
The first term in (\ref{14}) can be written as
$$
\phi_\varepsilon^{(1)} (x, \omega) = \int\limits_{\mathbb R^d} dz \ a (z) \mu \Big( \frac{x}{\varepsilon}, \omega \Big) \mu \Big( \frac{x}{\varepsilon} -z, \omega \Big) \int_0^{1} \ \Big( \nabla \nabla u_0(x - \varepsilon z t) - \nabla \nabla u_0(x) \Big) z \otimes z (1-t) \ dt.
$$
It does not depend on the random corrector $ \theta$ and can be treated in exactly the same way as in \cite[Proposition 5]{PiZhi17}.
Thus we have
\begin{equation}\label{phi_1bis}
\| \phi_\varepsilon^{(1)} \|_{L^2(\mathbb R^d)} \to 0 \quad \mbox{ as } \; \varepsilon \to 0.
\end{equation}
Let us denote by $\phi_\varepsilon^{(2)}$ the sum of the second and the third terms in (\ref{14}):
\begin{equation}\label{reminder-2}
\begin{array}{rl} \displaystyle
\!\!\!\!&\hbox{ }\!\!\!\!\!\!\!\!\!\!\!\!\phi_\varepsilon^{(2)} (x, \omega) =\\[3mm]
&\!\!\!\!\!\!\!\!\! \displaystyle
\mu \big( \frac{x}{\varepsilon},\omega \big) \int\limits_{\mathbb R^d} \ a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \Big( \frac{1}{\varepsilon} \big(\nabla u_0(x- \varepsilon z) - \nabla u_0(x)\big) + z \, \nabla \nabla u_0(x) \Big)\, dz.
\end{array}
\end{equation}
We take a sufficiently large $L>0$ such that ${\rm supp} \, u_0 \subset \{|x|<\frac12 L \}$ and estimate $\phi_\varepsilon^{(2)} (x, \omega)$ separately in the sets $\{|x|<L\}$ and $\{|x|>L\}$.
If $|x|>L$, then $u_0(x) = 0$. Since $a(z)$ has a finite second moment in $\mathbb R^d$, for any $c>0$ we have
\begin{equation}\label{ineqz2}
\frac{1}{\varepsilon^2} \int\limits_{|z|> \frac{c}{\varepsilon}} a (z) \, dz = \frac{1}{\varepsilon^2} \int\limits_{|z|> \frac{c}{\varepsilon}} a (z) \frac{z^2}{z^2} \, dz \le \frac{1}{c^2} \int\limits_{|z|> \frac{c}{\varepsilon}} a (z) z^2 \, dz \to 0 \quad \mbox{as } \; \varepsilon \to 0.
\end{equation}
Therefore,
\begin{equation}\label{r-2out}
\begin{array}{l}
\displaystyle
\| \phi_\varepsilon^{(2)} \, \chi_{|x|>L} \|^2_{L^2(\mathbb R^d)}
=\!\! \int\limits_{|x|>L} \Big(\!\! \int\limits_{|x - \varepsilon z|< \frac12 L}\!\! \frac{1}{\varepsilon}
\mu \big( \frac{x}{\varepsilon},\omega \big) a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \nabla u_0(x- \varepsilon z) \, dz \Big)^2 dx
\\[3mm] \displaystyle
< \alpha_2^4 \, \Big( \frac{1}{\varepsilon^2}
\int\limits_{|z|> \frac{L}{2\varepsilon}} \ a (z) \, dz \, \Big)^2 \|\varepsilon \theta \big(\frac{y}{\varepsilon},\omega \big)
\nabla u_0(y)\|_{L^2(\mathbb R^d)}^2 \to 0.
\end{array}
\end{equation}
Here we have also used the limit relation $\| \varepsilon \theta \big(\frac{y}{\varepsilon},\omega) \nabla u_0(y) \|_{L^2(\mathbb R^d)} \to 0$ that is ensured by Proposition \ref{1corrector}.
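The tail estimate \eqref{ineqz2} becomes fully explicit for kernels with closed-form tails. The sketch below uses the illustrative choice $a(z)=e^{-|z|}$ in $d=1$ (our assumption, not the kernel of the paper), for which both tail integrals are elementary; it confirms that $\varepsilon^{-2}\int_{|z|>c/\varepsilon}a(z)\,dz$ is dominated by the second-moment tail and vanishes as $\varepsilon\to 0$.

```python
import math

def tail_mass(M):
    """int_{|z|>M} exp(-|z|) dz = 2 exp(-M)  (d = 1)."""
    return 2.0 * math.exp(-M)

def tail_second_moment(M):
    """int_{|z|>M} z^2 exp(-|z|) dz = 2 exp(-M) (M^2 + 2M + 2)."""
    return 2.0 * math.exp(-M) * (M * M + 2.0 * M + 2.0)

def lhs(eps, c=1.0):
    # left-hand side of the estimate: eps^{-2} * tail mass beyond c/eps
    return tail_mass(c / eps) / eps ** 2

def rhs(eps, c=1.0):
    # dominating quantity: c^{-2} * second-moment tail beyond c/eps
    return tail_second_moment(c / eps) / c ** 2
```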
Denote $\chi_{<L}(x) = \chi_{\{|x|<L\}}(x)$ and represent the function $\phi_\varepsilon^{(2)} (x,\omega) \, \chi_{<L}(x)$ as follows:
\begin{equation}\label{r-2in-bis}
\phi_\varepsilon^{(2)} (x, \omega) \, \chi_{<L} (x) = \gamma_\varepsilon^{<} (x, \omega) + \gamma_\varepsilon^{>} (x, \omega),
\end{equation}
where
\begin{equation}\label{r-2in}
\begin{array}{l} \displaystyle
\gamma_\varepsilon^{<} (x, \omega) =\mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x)\\[3mm]
\displaystyle
\times\int\limits_{|\varepsilon z|< 2L } \ a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \Big( \frac{1}{\varepsilon} \big(\nabla u_0(x- \varepsilon z) - \nabla u_0(x)\big) + z \, \nabla \nabla u_0(x) \Big)\, dz;
\\[9mm]
\displaystyle
\gamma_\varepsilon^{>} (x, \omega) = \mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \\[3mm]
\displaystyle
\times\int\limits_{|\varepsilon z|> 2L } \ a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \Big( \frac{1}{\varepsilon} \big(\nabla u_0(x- \varepsilon z) - \nabla u_0(x)\big) + z \, \nabla \nabla u_0(x) \Big)\, dz.
\end{array}
\end{equation}
Since $u_0\in C_0^\infty(\mathbb R^d)$, the Taylor expansion applies to $\nabla u_0 (x- \varepsilon z)$, and we get
$$
\frac{1}{\varepsilon} \big(\nabla u_0 (x- \varepsilon z) - \nabla u_0(x)\big) + z \, \nabla \nabla u_0(x) = \frac{\varepsilon}{2} \nabla\nabla\nabla u_0 (\xi)\, z \otimes z
$$
with some $\xi \in \mbox{supp} \, u_0$; here the notation $\nabla\nabla\nabla u_0 (\xi)\, z \otimes z$ is used for the vector function
$(\nabla\nabla\nabla u_0 (\xi)\, z \otimes z)^i=\partial_{x^j}\partial_{x^k}\partial_{x^i}u_0(\xi)z^jz^k$. Then the right-hand side of the first formula in \eqref{r-2in} admits the estimate
\begin{equation}\label{r-2in1}
\begin{array}{l} \displaystyle
\mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \Big|\!\!\!\int\limits_{|\varepsilon z|< 2L } \!\!\!\!\!\! a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \Big( \frac{1}{\varepsilon} \big(\nabla u_0(x- \varepsilon z) - \nabla u_0(x)\big) + z \nabla \nabla u_0(x)\! \Big) dz \Big|
\\[3mm]
\displaystyle
\le \frac{\alpha_2^2}{2} \max | \nabla\nabla\nabla u_0 | \int\limits_{\mathbb R^d } \, \varepsilon | \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big)| \, \chi_{<3L}(x-\varepsilon z) \, a (z) z^2 \, dz.
\end{array}
\end{equation}
Taking into account the relation
\begin{equation}\label{r-2in1add}
\begin{array}{l} \displaystyle
\int\limits_{\mathbb R^d } \Big( \int\limits_{\mathbb R^d } \, \varepsilon | \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big)| \, \chi_{<3L}(x-\varepsilon z) \, a (z) z^2 \, dz \Big)^2 dx
\\[3mm]
\displaystyle
= \int\limits_{\mathbb R^d } a (z_1) z_1^2 dz_1 \int\limits_{\mathbb R^d } a (z_2) z_2^2 dz_2 \int\limits_{\mathbb R^d } \varepsilon^2 | \theta \big(\frac{x}{\varepsilon}\!-\!z_1,\omega \big)| | \theta \big(\frac{x}{\varepsilon}\!-\!z_2,\omega \big)| \chi_{<3L}(x-\varepsilon z_1) \chi_{<3L}(x-\varepsilon z_2) dx
\end{array}
\end{equation}
and applying the Cauchy-Schwarz inequality to the last integral on its right-hand side
we conclude with the help of Proposition \ref{1corrector} that $\|\gamma_\varepsilon^{<} (x, \omega) \|_{L^2(\mathbb R^d) } \to 0$ as $\varepsilon \to 0$.
If $|x|<L$ and $|\varepsilon z|>2L$, then $|x-\varepsilon z|>L$, and $u_0 (x-\varepsilon z)=0$. The right-hand side of the second formula in \eqref{r-2in} can be rearranged as follows:
\begin{equation}\label{r-2in2}
\begin{array}{l} \displaystyle
\gamma_\varepsilon^{>} (x, \omega) =
\mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \int\limits_{|z|> \frac{2L}{\varepsilon} }\!\!\!\! a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \Big( - \frac{1}{\varepsilon} \nabla u_0(x) + z \, \nabla \nabla u_0(x) \Big)\, dz
\\[3mm] \displaystyle
=\mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \!\! \int\limits_{|z|> \frac{2L}{\varepsilon} } \!\!\!\! a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \big( \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) - \theta \big(\frac{x}{\varepsilon},\omega \big) \big) \Big(\!\! - \frac{1}{\varepsilon} \nabla u_0(x) + z \nabla \nabla u_0(x)\!\Big) dz
\\[3mm] \displaystyle
+\mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \!\! \int\limits_{|z|> \frac{2L}{\varepsilon} } \!\!\!\! a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon},\omega \big) \Big( - \frac{1}{\varepsilon} \nabla u_0(x) + z \, \nabla \nabla u_0(x) \Big)\, dz.
\end{array}
\end{equation}
The second term on the right-hand side in \eqref{r-2in2} is estimated in the same way as the function $\phi_\varepsilon^{(2)} \, \chi_{|x|>L}$ in \eqref{r-2out}.
Thus the $L^2(\mathbb R^d)$ norm of this term tends to 0 as $\varepsilon \to 0$.
The first term on the right-hand side of \eqref{r-2in2} admits the following upper bound:
\begin{equation}\label{r-2in2bis}
\begin{array}{l} \displaystyle
\Big| \mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \int\limits_{|z|> \frac{2L}{\varepsilon} } \ a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \zeta_{-z} \big(T_{\frac{x}{\varepsilon}}\omega \big) \Big( - \frac{1}{\varepsilon} \nabla u_0(x) + z \, \nabla \nabla u_0(x) \Big)\, dz \Big|
\\[3mm] \displaystyle
\leq \alpha_2^2 \int\limits_{|z|> \frac{2L}{\varepsilon} } \ a (z) \Big| \zeta_{-z} \big(T_{\frac{x}{\varepsilon}}\omega \big)\Big|\ \Big| - \frac{1}{\varepsilon} \nabla u_0(x) + z \, \nabla \nabla u_0(x)\Big|\, dz
\\[3mm] \displaystyle
\leq \alpha_2^2 C(L) \int\limits_{|z|> \frac{2L}{\varepsilon} } |z| a (z) \Big| \zeta_{-z} \big(T_{\frac{x}{\varepsilon}}\omega \big)\Big| \, dz\ \big(\big| \nabla u_0(x)\big| + \big| \nabla \nabla u_0(x)\big|\big).
\\[3mm] \displaystyle
\leq \alpha_2^2 C(L) \Big(\int\limits_{|z|> \frac{2L}{\varepsilon} } |z|^2 a (z)dz\Big)^\frac12
\Big(\int\limits_{\mathbb R^d} a (z)
\big|\zeta_{-z} \big(T_{\frac{x}{\varepsilon}}\omega \big)\big|^2 \, dz\Big)^\frac12\ \big(\big| \nabla u_0(x)\big| + \big| \nabla \nabla u_0(x)\big|\big).
\end{array}
\end{equation}
Since $\zeta_{-z} (\omega)\in L^2_M$, we have
$$
\mathbb E\int\limits_{\mathbb R^d} a (z)
| \zeta_{-z} (\omega )|^2 \, dz<\infty.
$$
Taking into account the convergence
$$
\int\limits_{|z|> \frac{2L}{\varepsilon} } |z|^2 a (z)dz\to 0,\quad \hbox{as }\eps\to0,
$$
by the Birkhoff ergodic theorem we obtain that the $L^2(\mathbb R^d)$ norm of the first term on the right-hand side of \eqref{r-2in2}
tends to zero a.s., as $\eps\to0$. Therefore, $\|\gamma_\varepsilon^{>} (x, \omega) \|_{L^2(\mathbb R^d) } \to 0$ as $\varepsilon \to 0$.
From \eqref{r-2in-bis} it follows that $\| \phi_\varepsilon^{(2)}(x,\omega) \chi_{<L} (x) \|_{L^2(\mathbb R^d)} \to 0$ as $ \varepsilon \to 0$, and together with \eqref{r-2out} this implies that
\begin{equation}\label{rr}
\| \phi_\varepsilon^{(2)}(x,\omega) \|_{L^2(\mathbb R^d)} \to 0 \quad \mbox{as } \; \varepsilon \to 0.
\end{equation}
Finally, \eqref{fi} follows from \eqref{phi_1bis} and \eqref{rr}. The lemma is proved.
\end{proof}
\section{Proof of the main results}\label{s_proofmain}
We begin this section by proving relation \eqref{convergence1} for $f\in \mathcal{S}_0(\mathbb R^d)$. For such $f$ we have
$u_0\in C_0^\infty(\mathbb R^d)$. It follows from \eqref{v_eps}, Proposition \ref{1corrector} and Lemmas \ref{l_u2small}, \ref{l_u3small}
that
\begin{equation}\label{frstconv}
\|w^\eps-u_0\|_{L^2(\mathbb R^d)}\to 0,\quad\hbox{as }\eps\to 0.
\end{equation}
By the definition of $v^\eps$, $u_2^\eps$ and $u_3^\eps$,
$$
(L^\eps-m)w^\eps=(\hat L-m)u_0-m\eps \theta \Big(\frac x\eps\Big)\cdot\nabla u_0+\phi_\eps
=f-m\eps \theta \Big(\frac x\eps\Big)\cdot\nabla u_0+\phi_\eps
$$
$$
=(L^\eps-m)u^\eps-m\eps \theta \Big(\frac x\eps\Big)\cdot\nabla u_0+\phi_\eps.
$$
Therefore,
$$
(L^\eps-m)(w^\eps-u^\eps)=-m\eps \theta \Big(\frac x\eps\Big)\cdot\nabla u_0+\phi_\eps.
$$
According to Proposition \ref{1corrector} and Lemma \ref{reminder} the $L^2$ norm of the functions on the right-hand side
of the last formula tends to zero as $\eps\to0$. Consequently,
$$
\|w^\eps-u^\eps\|_{L^2({\mathbb R^d})}\to 0,\quad\hbox{as }\eps\to 0.
$$
Combining this relation with \eqref{frstconv} yields the desired relation \eqref{convergence1} for $f\in\mathcal{S}_0(\mathbb R^d)$.
To complete the proof of Theorem \ref{T1} it remains to show that the last convergence holds for any $f\in L^2(\mathbb R^d)$.
For any $f \in L^2(\mathbb R^d)$ there exists $f_\delta \in \mathcal{S}_0$ such that $\| f - f_\delta\|_{L^2(\mathbb R^d)} <\delta$.
Since the operator $(L^\varepsilon - m)^{-1}$ is bounded uniformly in $\varepsilon$, we have
\begin{equation}\label{delta_1}
\| u^{\varepsilon}_\delta - u^\varepsilon \|_{L^2(\mathbb R^d)} \le C_1 \delta,
\qquad
\| u_{0,\delta} - u_0 \|_{L^2(\mathbb R^d)} \le C_1 \delta,
\end{equation}
where
$$
u^{\varepsilon} \ = \ (L^{\varepsilon} - m)^{-1} f, \; \; u_{0} \ = \ (\hat L - m)^{-1} f, \; \;
u^{\varepsilon}_\delta \ = \ (L^{\varepsilon} - m)^{-1} f_\delta, \; \; u_{0,\delta} \ = \ (\hat L - m)^{-1} f_\delta.
$$
Recalling that $f_\delta\in\mathcal{S}_0$, we obtain $\| u^{\varepsilon}_\delta - u_{0, \delta} \|_{L^2(\mathbb R^d)} \to 0 $. Therefore, by (\ref{delta_1})
$$
\mathop{ \overline{\rm lim}}\limits_{\varepsilon \to 0} \| u^{\varepsilon} - u_0 \|_{L^2(\mathbb R^d)} \le 2 C_1 \delta
$$
with an arbitrary $\delta>0$. This implies the desired convergence in \eqref{t1} for an arbitrary $f\in L^2(\mathbb R^d)$
and completes the proof of the main theorem.
\subsection{Proof of Corollary \ref{cor_main}}
Here we assume that the operator $L^{\eps,{\rm ns}}$ is defined by \eqref{L_eps_ns}. Multiplying equation \eqref{u_eps_nssss}
by $\rho^\eps(x,\omega)=\rho\big(\frac{x}{\eps},\omega\big)=
\mu\big(\frac{x}{\eps},\omega\big)\big(\lambda\big(\frac{x}{\eps},\omega\big)\big)^{-1}$
we obtain
\begin{equation}\label{eq_modfd}
L^{\eps}u_\eps -m\rho^\eps u_\eps=\rho^\eps f,
\end{equation}
where the symmetrized operator $L^{\eps}$ is given by \eqref{L_eps}.
Letting $\langle\rho\rangle=\mathbb E\bm{\rho}
=\mathbb E\big(\frac{\bm{\mu}}{\bm{\lambda}}\big)$ we consider an auxiliary equation
\begin{equation}\label{eq_ns_aux}
L^{\eps}g_\eps -m\langle\rho\rangle g_\eps=\langle\rho\rangle f.
\end{equation}
By Theorem \ref{T1} the functions $g_\eps$ converge a.s. in $L^2(\mathbb R^d)$, as $\eps\to0$, to the solution of the equation $\hat Lg -m\langle\rho\rangle g=\langle\rho\rangle f$. Our goal is to show that $\|g_\eps-u_\eps\|_{L^2(\mathbb R^d)}\to0$
as $\eps\to0$. To this end we subtract equation \eqref{eq_modfd} from \eqref{eq_ns_aux}.
After simple rearrangements this yields
\begin{equation}\label{eq_ns_alpha}
L^{\eps}\alpha_\eps -m\rho^\eps \alpha_\eps=\big(\langle\rho\rangle-\rho^\eps\big)g_\eps +\big(\langle\rho\rangle-\rho^\eps\big) f
\end{equation}
with $\alpha_\eps(x)=g_\eps(x)-u_\eps(x)$. In a standard way one can derive the following estimate
\begin{equation}\label{C_ns_pure}
m \int\limits_{\mathbb R^d} (\alpha_\varepsilon(x))^2 dx+\frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) ( \alpha_\varepsilon (x-\varepsilon z) - \alpha_\varepsilon(x))^2 dz dx < C.
\end{equation}
As was shown in the proof of Lemma \ref{Convergence}, this estimate implies compactness of the family $\{\alpha_\eps\}$
in $L^2(B)$ for any cube $B$. Multiplying \eqref{eq_ns_alpha} by $\alpha_\eps$ and integrating the resulting relation
over $\mathbb R^d$ we obtain
\begin{equation}\label{al_al}
\|\alpha_\eps\|^2_{L^2(\mathbb R^d)}\leq C_1
\big|\big((\langle\rho\rangle-\rho_\eps)g_\eps, \alpha_\eps\big)_{L^2(\mathbb R^d)}\big| +\big|\big((\langle\rho\rangle-\rho_\eps) f,\alpha_\eps\big)_{L^2(\mathbb R^d)}\big|
\end{equation}
By the Birkhoff ergodic theorem $(\langle\rho\rangle-\rho^\eps)$ converges to zero weakly in $L^2_{\rm loc}(\mathbb R^d)$.
Considering the boundedness of $(\langle\rho\rangle-\rho^\eps)$ and the properties of $\alpha_\eps$ and $g_\eps$, we conclude that both terms on the right-hand side in \eqref{al_al} tend to zero as $\eps\to0$, and so does
$\|\alpha_\eps\|^2_{L^2(\mathbb R^d)}$. Therefore, $u_\eps$ converges to the solution of equation
$\hat Lu -m\langle\rho\rangle u=\langle\rho\rangle f$. Dividing this equation by $\langle\rho\rangle$, we rewrite
the limit equation as follows
$$
\Big(\mathbb E\big\{\frac{\bm\mu}{\bm\lambda}\big\}\Big)^{-1}\Theta_{ij}\frac{\partial^2 u}{\partial x_i\partial x_j}-mu=f
$$
with $\Theta$ defined in \eqref{Positive}. This completes the proof of the corollary.
\noindent
{\large \bf Acknowledgements}\\[2mm]
The work on this project was completed during the visit of Elena Zhizhina at the Arctic University of Norway, campus Narvik. She expresses her gratitude to the colleagues at this university for hospitality.
\section{Introduction}
The adaptive choice of a sampling policy lies at the heart of many fields of \textit{Machine Learning} where former Monte Carlo
experiments guide the forthcoming ones. This includes for instance \textit{reinforcement learning} \cite{jie+a:2010,peters+m+a:2010,schulman+l+a+j+m:2015} where the optimal policy maximizes the reward; inference in \textit{Bayesian} \cite{delmoral+d+j:2006} or \textit{graphical models}
\cite{lou2017dynamic}; \textit{optimization} based on stochastic gradient descent \cite{zhao+z:2015} or without using the gradient \cite{hashimoto2018derivative}; \textit{rejection sampling} \cite{erraqabi+v+c+m:2016}. \textit{Adaptive importance sampling} (AIS) \cite{ho+b:1992,cappe+d+g+m+r:2008}, which extends the basic Monte Carlo integration approach, offers a natural probabilistic framework to describe the evolution of sampling policies. The present paper establishes, under fairly reasonable conditions, that AIS is asymptotically optimal, i.e., learning the sampling policy has no cost asymptotically.
Suppose we are interested in computing some integral value $ \int \varphi $, where $\varphi:\mathbb R^d \to \mathbb R$ is called the integrand.
The importance sampling estimate of $ \int \varphi $ based on the sampling policy $q$ is given by
\begin{align}\label{eq:unnormalization}
n^{-1} \sum_{i=1}^n \frac{\varphi(x_i)}{q(x_i)},
\end{align}
where $(x_1,\ldots x_n)\overset{\text{i.i.d.}}{\sim} q$. The previous estimate is unbiased. It is well known, e.g., \cite{hammersley+h:1964,evans:2000}, that the optimal sampling policy with respect to the variance is the one where $q$ is proportional to $ |\varphi| $. A slightly different context where importance sampling still applies is Bayesian estimation. Here the targeted quantity is $ \int \varphi \pi $ and we only have access to an unnormalized version $\pi_u$ of the density $\pi= \pi_u / \int \pi_u $. The estimator usually employed is
\begin{align}\label{eq:normalization}
{ \sum_{i=1}^n \frac{\varphi(x_i) \pi_u (x_i)}{q(x_i)} } \left/ { \sum_{i=1}^n \frac{ \pi_u (x_i)}{q(x_i)} } \right. .
\end{align}
In this case, the optimal sampling policy $q$ is proportional to $ |\varphi - \int \varphi \pi |\pi $ (see \cite{douc+g+m+r:2007b} or Remark \ref{rk:opt_pol_norm} below).
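Both estimators can be exercised on a toy problem. In the sketch below every concrete ingredient (the integrand $\varphi(x)=e^{-x^2}$, the Gaussian policies, the sample sizes) is our illustrative choice: the unbiased estimate \eqref{eq:unnormalization} with $q\propto|\varphi|$ has numerically vanishing variance, and the self-normalized estimate \eqref{eq:normalization} recovers $\int x^2\,\pi$ for the unnormalized target $\pi_u(x)=e^{-x^2}$, i.e.\ $\pi=N(0,1/2)$.

```python
import math, random

random.seed(1)

def gauss_pdf(x, sigma):
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def phi(x):
    return math.exp(-x * x)  # integrand; the exact integral is sqrt(pi)

def is_estimate(sigma, n=5000):
    """Unbiased importance sampling estimate with policy q = N(0, sigma^2)."""
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    ws = [phi(x) / gauss_pdf(x, sigma) for x in xs]
    mean = sum(ws) / n
    var = sum((w - mean) ** 2 for w in ws) / n
    return mean, var

est_wide, var_wide = is_estimate(3.0)                 # poorly matched policy
est_opt, var_opt = is_estimate(1.0 / math.sqrt(2.0))  # q proportional to |phi|

def snis_estimate(sigma, n=20000):
    """Self-normalized estimate of E_pi[x^2] for pi_u(x) = exp(-x^2)."""
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    ws = [math.exp(-x * x) / gauss_pdf(x, sigma) for x in xs]
    return sum(w * x * x for w, x in zip(ws, xs)) / sum(ws)

post_mean_sq = snis_estimate(1.5)  # exact value is 1/2 since pi = N(0, 1/2)
```

With $q\propto|\varphi|$ (here $\varphi\ge 0$) the weight $\varphi/q$ is the constant $\sqrt\pi$, which is exactly why the variance-optimal policy yields a zero-variance estimator in this example.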
Both previous frameworks, namely, the classical integration problem and the Bayesian estimation problem, are examples where the sampling policy can be chosen appropriately. Because appropriate policies naturally depend on $\varphi$ or $\pi$, we generally cannot simulate from them. They are then approximated adaptively, by densities from which we can simulate, using the information gathered from the past stages. This is the very spirit of adaptive importance sampling (AIS). At each stage $t$, the value $ I_t$, standing for the current estimate, is updated using new i.i.d. samples $x_{t,1} ,\ldots x_{t,n_t} $ from $ q_{t}$, where $ q_{t}$ is a probability density function that might depend on the past stages $1,\ldots t-1$. The distribution $ q_t$, called the \textit{sampling policy}, approximates some optimal, or at least suitable, sampling policy.
The sequence $(n_t)\subset \mathbb N^*$, called the \textit{allocation policy}, contains the number of particles generated at each stage.
The following algorithm describes the AIS schemes for the classical integration problem. For the Bayesian problem, it suffices to change the estimate according to (\ref{eq:normalization}). This is a generic representation of AIS as no explicit update rule is specified (this will be discussed just below).
\begin{algorithm}[AIS]\label{algo:pop_monte_carlo}\ \\
\begin{minipage}{13cm}
\textbf{Inputs}: The number of stages $T \in \mathbb N^*$, the allocation policy $(n_t)_{t=1,\ldots T}\subset \mathbb N^*$, the sampler update procedure, the initial density $q_0$.
\medskip\hrule\medskip
Set $ S_0 = 0$, $N_0 = 0$. For $t$ in $1,\ldots T$:
\begin{enumerate}[(i)]
\item (Explore) Generate $(x_{t,1},\ldots x_{t,n_t})$ from $ q_{t-1}$
\item (Exploit)
\begin{enumerate}[(a)]
\item \begin{minipage}[t]{.4\textwidth} Update the estimate: \\ \end{minipage}\begin{minipage}[t]{.4\textwidth} \vspace{-.8cm} \begin{align*}
& S_{t} = S_{t-1} + \sum_{i = 1} ^ {n_t} \frac{\varphi(x_{t,i}) }{ q_{t-1}( x_{t,i}) }\\
& N_t = N_{t-1} + n_t\\
& I_t = N _t^{-1} S_{t}
\end{align*}
\end{minipage}
\item Update the sampler $ q_{t}$
\end{enumerate}
\end{enumerate}
\hrule
\end{minipage}
\end{algorithm}
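The loop above can be sketched as follows; the one-dimensional Gaussian policy, the update rule (a self-normalized weighted mean targeting $f\propto|\varphi|$, in the spirit of section \ref{subsection:examples}) and the test integrand are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def npdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

def ais(phi, T, n_t, mu0=0.0, sig=3.0):
    """Generic AIS loop with a one-dimensional Gaussian policy N(mu, sig^2).

    Hypothetical update rule: mu is refit, over all past stages, to the
    self-normalized weighted mean of the samples under f proportional to |phi|."""
    S, N, mu = 0.0, 0, mu0
    xs_all, w_all = [], []
    for t in range(T):
        x = rng.normal(mu, sig, size=n_t)      # (i) explore with q_{t-1}
        q = npdf(x, mu, sig)
        S += np.sum(phi(x) / q)                # (ii-a) update the estimate
        N += n_t
        xs_all.append(x)
        w_all.append(np.abs(phi(x)) / q)       # weights targeting f proportional to |phi|
        xs, ws = np.concatenate(xs_all), np.concatenate(w_all)
        mu = np.sum(ws * xs) / np.sum(ws)      # (ii-b) update the sampler
    return S / N, mu

# phi = density of N(5,1): \int phi = 1, and the policy mean should drift to 5.
phi = lambda x: npdf(x, 5.0, 1.0)
I, mu = ais(phi, T=20, n_t=1000)
```

Starting from the poorly located policy $\mathcal N(0,9)$, the estimate `I` approaches $1$ while the fitted location `mu` drifts towards $5$.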
Pioneering works on adaptive schemes include \cite{kloek+v:1978} where, within a two-stage procedure, the sampling policy is chosen out of a parametric family; this is further formalized in \cite{geweke:1989};
\cite{ho+b:1992} introduces the idea of a multi-stage approach where all the previous stages are used to update
the sampling policy (see also \cite{richard+z:2007} regarding the choice of the loss function);
\cite{owen+z:2000} investigates the use of control variates coupled with importance sampling;
the \textit{population Monte Carlo} approach \cite{cappe+g+m+r:2004,cappe+d+g+m+r:2008} offers a general framework for AIS
and has been further studied using parametric mixtures \cite{douc+g+m+r:2007a,douc+g+m+r:2007b};
see also \cite{cornuet+m+m+r:2012,veach+g:1995} for a variant called \textit{multiple adaptive importance sampling};
see \cite{elvira+m+l+b:2015} for a recent review.
In \cite{zhang:1996,neddermeyer:2009}, \textit{nonparametric importance sampling} is introduced using kernel smoothing. The approach of choosing $q_t$ out of a parametric family should also be contrasted with the nonparametric approach based on particles, often referred to as \textit{sequential Monte Carlo} \cite{delmoral+d+j:2006,chopin:2004,douc+m:2008}, whose context is different as traditionally the targeted distribution changes with $t$. The distribution $q_{t-1}$ is then a weighted sum of Dirac masses $ \sum_i w_{t-1,i} \delta_{x_{t-1,i}}$, and updating $q_t$
follows from an adjustment of the weights.
The theoretical properties of adaptive schemes are difficult to derive due to the recycling of the past samples at each stage and hence to the lack of independence between samples.
Among updates based on a parametric family, the convergence properties of the Kullback-Leibler divergence
between the estimated and the targeted distribution are studied in \cite{douc+g+m+r:2007a}.
Properties related to the asymptotic variance are given in \cite{douc+g+m+r:2007b}. Among nonparametric updates, \cite{zhang:1996} establishes fast convergence rates in a two-stage strategy where the number of samples used in each stage goes to infinity. {For sequential Monte Carlo, limit theorems are given for instance in \cite{delmoral+d+j:2006,chopin:2004,douc+m:2008}. }
{All these results are obtained when $T$ is fixed and $n_T\to \infty$ and therefore miss the true nature of adaptive schemes, for which the asymptotics should be taken with respect to $T$. }
Recently, a more realistic asymptotic regime was considered in \cite{marin+p+s:2012}, in which the allocation policy $(n_t)$ is a fixed growing sequence of integers.
The authors establish the consistency of the estimate when the update
is conducted with respect to a parametric family but depends \textit{only} on the last stage. They focus on multiple adaptive importance sampling \cite{cornuet+m+m+r:2012,veach+g:1995}, which differs from AIS (see Remark \ref{rk:multiple} below for more details).
In this paper, following the same spirit as \cite{douc+g+m+r:2007a,douc+g+m+r:2007b,cappe+d+g+m+r:2008}, we study \textit{parametric} AIS as presented in the AIS algorithm, when the policy is chosen out of a parametric family of probability density functions.
Our analysis focuses on the following three key points, which are new to the best of our knowledge.
\begin{itemize}
\item A central limit theorem is established for the AIS estimate $I_t$. It involves high-level conditions on the sampling policy estimate $q_t$ (which will be easily satisfied for parametric updates). Based on the martingale property associated with some sequences of interest, the asymptotics are not taken with $T$ fixed and $n_T\to\infty$,
but with the total number of samples $n_1+\dots + n_T\to\infty$. In particular,
the allocation policy $(n_t)$ is not required to grow to infinity. This is presented in section \ref{sec:eff_AIS}.
\item The high-level conditions are verified in the case of parametric sampling policies with updates taking place in a general framework inspired by the paradigm of empirical risk minimization (several concrete examples are provided). This establishes the asymptotic optimality of AIS in the sense that the rate and the asymptotic variance coincide with some ``oracle'' procedure
where the targeted policy is known from the beginning.
The details are given in section \ref{sec:consistency_samp_pol}.
\item A new method, called weighted AIS (wAIS), is designed in section \ref{sec:weightedAIS} to eventually forget bad samples drawn during the early stages of AIS. Our numerical experiments show that (i) wAIS significantly accelerates the convergence of AIS and (ii) small allocation policies $(n_t)$ (implying more frequent updates) give better results than large ones (for an equal number of requests to $\varphi$). This last point empirically supports the theoretical framework adopted in the paper.
\end{itemize}
All the proofs are given in the supplementary material.
\section{Central limit theorems for AIS}\label{sec:eff_AIS}
For the sake of generality, and because it will be useful in the treatment of normalized estimators, we consider the multivariate case where $\varphi = (\varphi_1,\ldots \varphi_p) : \mathbb R^d \to \mathbb R^p$. In the whole paper, $\int \varphi $ is taken with respect to the Lebesgue measure and $\|\cdot\|$ denotes the Euclidean norm.
To study the AIS algorithm, it is appropriate to work at the sample time scale, as described below, rather than at the sampling policy scale used in the introduction. The sample $x_{t,i}$ (resp. the policy $q_t$) of the previous section ($t$ is the block index and $i$ the sample index within the block) is now simply denoted $x_j$ (resp. $q_j$), where $j=n_1+\dots + n_{t-1}+i$
is the sample index in the whole sequence $1,\ldots n $, with $n=N_T$. The following algorithm is the same as Algorithm \ref{algo:pop_monte_carlo} (no explicit update rule is provided) but is expressed at the sample scale.
\begin{algorithm}[AIS at sample scale]\label{algo:pop_monte_carlobis} ~ \\
\begin{minipage}{13cm}
\textbf{Inputs}: The number of stages $T \in \mathbb N^*$, the allocation policy $(n_t)_{t=1,\ldots T}\subset \mathbb N^*$, the sampler update procedure, the initial density $q_0$.
\medskip\hrule\medskip
Set $ S_0 = 0$. For $j$ in $1,\ldots n$ :
\begin{enumerate}[(i)]
\item (Explore) Generate $x_j$ from $ q_{j-1}$
\item (Exploit)
\begin{enumerate}[(a)]
\item \begin{minipage}[t]{.4\textwidth} Update the estimate: \\ \end{minipage}\begin{minipage}[t]{.4\textwidth} \vspace{-.8cm}\begin{align*}
& S_j = S_{j-1} + \frac{\varphi(x_j)}{ q_{j-1}(x_j)}\\
& I_j = j^{-1} S_j
\end{align*}
\end{minipage}
\item Update the sampler $ q_j$ whenever $j\in \{ N_t = \sum_{s=1}^tn_s: t\ge 1\}$
\end{enumerate}
\end{enumerate}
\hrule
\end{minipage}
\end{algorithm}
\subsection{The martingale property}
Define $\Delta_j$ as the $j$-th centered contribution to the sum $ S_j$: $\Delta_j= {\varphi (x_j)} / { q_{j-1} (x_j) }-\int \varphi $. Define, for all $n\geq 1$,
\begin{align*}
M_n = \sum_{j=1}^{n}\Delta_j.
\end{align*}
The filtration we consider is given by $\mathscr F_{n}=\sigma(x_1,\ldots x_n)$. The quadratic variation of $ M$ is given by $\langle M\rangle_n=\sum_{j=1}^n\mathbb E\big[\Delta_j\Delta_j^T\,|\,\mathscr F_{j-1}\big]$. Set
\begin{align}\label{vari}
V (q,\varphi) = \int\frac{\left(\varphi(x)- q(x) \int \varphi \right)\left(\varphi(x)- q(x) \int \varphi \right)^T }{ q(x)}dx.
\end{align}
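As a quick numerical sanity check of (\ref{vari}) in dimension $p=d=1$: if $\varphi$ has constant sign and $q\propto|\varphi|$, then $V(q,\varphi)=0$, while a mismatched policy yields a positive variance. The snippet below evaluates $V$ by a simple Riemann sum; the Gaussian shapes are arbitrary choices for the illustration.

```python
import numpy as np

def npdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

x = np.linspace(-30.0, 30.0, 200_001)
dx = x[1] - x[0]

def V(q_vals, phi_vals):
    # Riemann-sum evaluation of V(q, phi) for p = 1
    I = np.sum(phi_vals) * dx
    return np.sum((phi_vals - q_vals * I) ** 2 / q_vals) * dx

phi = npdf(x, 5.0, 1.0)                  # positive integrand, \int phi = 1

v_opt = V(npdf(x, 5.0, 1.0), phi)        # q proportional to |phi|: V = 0
v_bad = V(npdf(x, 0.0, 3.0), phi)        # mismatched policy: V > 0
```

Here `v_opt` vanishes up to quadrature error, whereas the mismatched policy gives a strictly positive asymptotic variance.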
\begin{lemma}\label{prop:mg}
Assume that for all $1\leq j\leq n$, the support of $ q_j$ contains the support of $\varphi$,
then the sequence $( M_n,\mathscr F_n)$ is a martingale.
In particular, $ I_n $ is an unbiased estimate of $ \int \varphi $. In addition, the quadratic variation of $ M$ satisfies
$\langle M\rangle_n =\sum_{j=1}^n V ( q_{j-1},\varphi)$.
\end{lemma}
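The key computation behind Lemma \ref{prop:mg} is elementary and worth recording. Since $q_{j-1}$ is $\mathscr F_{j-1}$-measurable and $x_j\sim q_{j-1}$ conditionally on $\mathscr F_{j-1}$, the support condition gives
\begin{align*}
\mathbb E\big[\Delta_j\,\big|\,\mathscr F_{j-1}\big]
= \int \frac{\varphi (x)}{ q_{j-1} (x) }\, q_{j-1}(x)\,dx-\int \varphi
= \int \varphi - \int \varphi = 0 ,
\end{align*}
so that each increment of $ M$ is conditionally centered; the unbiasedness of $I_n$ then follows from the tower property of conditional expectation.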
\subsection{A central limit theorem for AIS}
The following theorem describes the asymptotic behavior of AIS. The conditions will be verified for parametric updates in section \ref{sec:consistency_samp_pol} (see Theorem \ref{thm:final_th}).
\begin{theorem}[central limit theorem for AIS]\label{clt}
Assume that the sequence $q_n$ satisfies
\begin{align}\label{vcond}
&V( q_n,\varphi) \to V_*,\qquad \text{a.s.}
\end{align}
for some $V_*\geq 0$ and that there exists $\eta >0$ such that
\begin{align}\label{cond:lindeberg}
&\sup_{j \in \mathbb N} \int \frac{\|\varphi\|^{2+\eta}}{ q_{j}^{1+\eta}}<\infty,\qquad \text{a.s.}
\end{align}
Then we have
\begin{align*}
\sqrt n \,\Big( I_n-\int \varphi \Big)\overset{\mathrm{d}}{\to} \mathcal N(0,V_*).
\end{align*}
\end{theorem}
\begin{remark}[zero-variance estimate]
Suppose that $p=1$ (recalling that $\varphi : \mathbb R ^d \to \mathbb R^p$). Theorem \ref{clt} includes the degenerate case $V_*= 0$.
This happens when the integrand has constant sign
and the sampling policy is well chosen, i.e., $ q_n \to |\varphi| /\int|\varphi|$. In this case,
we have that $ \sqrt n (I_n-\int \varphi ) = o_p(1)$, meaning that the standard Monte Carlo convergence rate ($1/\sqrt n $) has been improved. This is in line with the results presented in \cite{zhang:1996}, where fast rates of convergence (compared to standard Monte Carlo) are obtained under restrictive conditions on the allocation policy $(n_t)$. Note that other techniques such as \textit{control variates}, \textit{kernel smoothing} or \textit{Gaussian quadrature} can also achieve fast convergence rates \cite{oates:2016,portier+s:2018,bardenet+h:2016,delyon+p:2016}.
\end{remark}
\begin{remark}[adaptive multiple importance sampling]\label{rk:multiple}
Another way to compute the importance weights, called multiple adaptive importance
sampling, has been introduced in \cite{veach+g:1995} and has been successfully used in \cite{owen+z:2000,cornuet+m+m+r:2012}.
This consists in replacing $ q_{j-1}$ in the computation of $S_j$ by $\bar q_{j-1}=\sum_{i=1}^j q_{i-1}/j$,
$x_j$ still being drawn under $ q_{j-1}$.
The intuition is that this averaging will reduce the effect of exceptional points $x_j$ for which
$|\varphi(x_j)|\gg q_{j-1}( x_j)$ (but $|\varphi(x_j)|\not\!\gg\bar q_{j-1}( x_j)$). Our approach cannot handle this variant,
simply because the martingale property described previously is no longer satisfied.
\end{remark}
\subsection{Normalized AIS}\label{sec:norm_ais}
The normalization technique described in (\ref{eq:normalization}) is designed to compute $\int \varphi \pi$, where $\pi$ is a density. It is useful in the Bayesian context where $\pi$ is only known up to a constant. As this technique seems to provide substantial improvements compared to unnormalized estimates (i.e., (\ref{eq:unnormalization}) with $\varphi $ replaced by $\varphi \pi$), we recommend using it even when the normalizing constant of $\pi$ is known. Normalized estimators are given by
\begin{align*}
&I ^{(\text{norm})}_n = \frac{ I_n(\varphi \pi) }{ I_n(\pi)},\qquad\text{with}\quad I_n(\psi ) = n^{-1} \sum_{j=1}^ n {\psi(x_{j}) }/{ q_{j-1}(x_{j})}.
\end{align*}
Interestingly, normalized estimators are weighted least-squares estimates, as they minimize the function $a\mapsto \sum_{j=1}^n ({\pi(x_{j})}/{ q_{j-1}(x_{j})}) (\varphi(x_{j}) - a)^2$. In contrast with $I_n$, $I ^{(\text{norm})}_n $ has the following shift-invariance property: whenever $\varphi $ is shifted by $\mu$, $I ^{(\text{norm})}_n $ simply becomes $I ^{(\text{norm})}_n + \mu$.
Because $I_n(\psi )$ is of the same kind as $I_n$ defined in the second AIS algorithm, a straightforward application of Theorem \ref{clt} (with $(\varphi^T \pi,\pi)^T$ in place of $\varphi$) coupled with the delta-method \cite[chapter 3]{vandervaart:1998} yields the following result.
\begin{corollary}[central limit theorem for normalized AIS]\label{cor:normalized}
Suppose that (\ref{vcond}) and (\ref{cond:lindeberg}) hold with $(\varphi^T \pi,\pi)^T$ (in place of $\varphi$). Then we have
\begin{align*}
\sqrt n \Big(I ^{(\text{norm})}_n - \int \varphi \pi\Big) \overset{\mathrm{d} } {\to} \mathcal N (0, u^T V_* u ),
\end{align*}
with $u = (1,-\int \varphi^T \pi )^T$.
\end{corollary}
\section{Parametric sampling policy}\label{sec:consistency_samp_pol}
From this point forward, the sampling policies $q_t$, $t=1,\ldots T$ (we are back again to the sampling policy scale as in Algorithm \ref{algo:pop_monte_carlo}), are chosen out of a parametric family of probability density functions $\{q_\theta\,:\, \theta\in \Theta\}$. All our examples fit the general framework of empirical risk minimization over the parameter space $\Theta\subset \mathbb R^q$, where $ \theta_t$ is given by
\begin{align}
& \theta_t \in \argmin_{\theta\in \Theta} \, R_{t}(\theta),\label{eq:min_risk}\\
\nonumber& R_{t}(\theta) = \sum_{s=1}^t \sum_{i=1}^{n_s} \frac{m_\theta(x_{s,i} )}{ q_{s-1}(x_{s,i})},
\end{align}
where $ q_{s}$ is a shortcut for $q_{\theta_s}$, $m_\theta : \mathbb R^d\to \mathbb R $ might be understood as a loss function (see the next section for examples). Note that $R_t/N_t$ is an unbiased estimate of the risk
$r(\theta) = \int m_{\theta} $.
\subsection{Examples of sampling policy}\label{subsection:examples}
We start by introducing a particular case, which is one of the simplest ways to implement AIS. Then we will provide more general approaches. In what follows, the targeted policy, denoted by $f$, is chosen by the user and represents the distribution from which we wish to sample. It often reflects some prior knowledge on the problem of interest. If $\varphi : \mathbb R^d \to \mathbb R^p$, with $p=1$, then (as discussed in the introduction) $f\propto |\varphi| $ is optimal for (\ref{eq:unnormalization}) and $f \propto |\varphi - \int \varphi \pi | \pi $ is optimal for (\ref{eq:normalization}). In the Bayesian context where many integrals $\int (\varphi_1,\ldots \varphi_p) d\pi$ need to be computed, a usual choice is $f = \pi$. All the following methods only require calls to an unnormalized version of $f$.
\paragraph{Exact method of moments with Student distributions.} In this case $(q_\theta)_{\theta\in\Theta}$ is just
the family of multivariate Student distributions with $\nu>2$ degrees of freedom (fixed parameter).
The parameter $\theta = (\mu,\Sigma)$ consists of a location parameter $\mu$ and a scale parameter $\Sigma$. This family has two advantages:
the parameter $\nu$ allows tuning for heavy tails, and estimation is easy
because the moments of $q_\theta$ are explicitly related to $\theta$. A simple unbiased estimate for $\mu$ is $ (1/ {N_t} ) \sum_{s=1}^t \sum_{i=1}^{n_s} x_{s,i} {f(x_{s,i} )} / { q_{s-1}(x_{s,i} )}$,
but, as mentioned in section \ref{sec:norm_ais}, we prefer to use the normalized estimate (using the shortcut $ q_{s}$ for $q_{\theta_s}$):
\begin{align}\label{muhat}
&\mu_t
= {\sum_{s=1}^t \sum_{i=1}^{n_s} x_{s,i} \frac{f(x_{s,i} )} { q_{s-1}(x_{s,i} )}} \left/ {\sum_{s=1}^t \sum_{i=1}^{n_s} \frac{f(x_{s,i} )} { q_{s-1}(x_{s,i} )}} \right. ,\\
& \Sigma_t
= \left(\frac{\nu-2 }{\nu}\right) {\sum_{s=1}^t\sum_{i=1}^{n_s} (x_{s,i} - \mu_t )(x_{s,i} - \mu_t )^T \frac{f(x_{s,i} )} { q_{s-1}(x_{s,i} )}} \left/ {\sum_{s=1}^t \sum_{i=1}^{n_s} \frac{f(x_{s,i} )} { q_{s-1}(x_{s,i} )}}\right. .
\label{sigmahat}
\end{align}
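Once the samples of all past stages and the corresponding ratios $f(x_{s,i})/q_{s-1}(x_{s,i})$ are stacked into arrays, the updates (\ref{muhat}) and (\ref{sigmahat}) amount to a handful of array operations; the function below is a sketch of this bookkeeping (the array shapes are our own convention).

```python
import numpy as np

def student_moment_update(xs, ws, nu):
    """Normalized moment updates for a multivariate Student policy.

    xs: (N, d) array stacking all samples x_{s,i} drawn so far;
    ws: (N,) array of the corresponding ratios f(x)/q(x);
    nu: degrees of freedom (> 2) of the Student family."""
    w = ws / np.sum(ws)                                  # self-normalized weights
    mu = w @ xs                                          # weighted location estimate
    c = xs - mu
    sigma = ((nu - 2.0) / nu) * (c * w[:, None]).T @ c   # weighted scale estimate
    return mu, sigma
```

With equal weights, `mu` reduces to the sample mean and `sigma` to $\frac{\nu-2}{\nu}$ times the empirical covariance, as expected from (\ref{muhat}) and (\ref{sigmahat}).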
\paragraph{Generalized method of moments (GMM).} This approach includes the previous example. The policy is chosen according to a moment matching condition, i.e., $\int g q_\theta = \int g f$ for some function $g:\mathbb R^d \to \mathbb R^D $. For instance, $g$ might be given by $x\mapsto x$ or $x\mapsto xx^T$ (both are considered in the Student case). As noted in \cite{hansen:1982}, choosing $\theta$ such that the empirical moments of $g$ coincide exactly with $\int g q_\theta$ might be impossible. We rather compute $ \theta_t $ as the minimizer of
\begin{align*}
\left\| \mathbb E_\theta (g) - \left( {\sum_{s=1}^t \sum_{i=1}^{n_s} g(x_{s,i}) \frac{f(x_{s,i} )} { q_{s-1}(x_{s,i} )}} \left/ { \sum_{s=1}^t \sum_{i=1}^{n_s} \frac{f(x_{s,i} )} { q_{s-1}(x_{s,i} ) }} \right. \right) \right\|^2.
\end{align*}
Equivalently,
\begin{align*}
\theta_t \in \argmin_{\theta \in \Theta} \, \sum_{s=1}^t\sum_{i = 1}^{n_s} \left\| \mathbb E_\theta (g) - g(x_{s,i})\right\|^2 \frac{f(x_{s,i} )} { q_{s-1}(x_{s,i} ) } ,
\end{align*}
which embraces the form given by \eqref{eq:min_risk}, with $ m_\theta = \|\mathbb E_\theta (g) - g \|^2 f$.
\paragraph{Kullback-Leibler approach.} Following \cite[section 5.5]{vandervaart:1998}, define the Kullback-Leibler risk as
$r(\theta) = - \int \log( q_\theta ) f $.
The update of $\theta_t$ is performed by minimizing the current estimate of $N_t r(\theta)$, given by
\begin{align}
& R_{t}(\theta)= R_{t-1}(\theta)-\sum_{i=1}^{n_t}\frac{\log( q_\theta (x_{t,i})) f (x_{t,i})}{ q_{t-1}(x_{t,i})}.\label{eq:risk_update_kl}
\end{align}
\paragraph{Variance approach.} Another approach, when $\varphi : \mathbb R^d \to \mathbb R^p$ with $p=1$, consists in minimizing the variance over the class of sampling policies. In this case, define
$r(\theta) = \int {\varphi^2}/ {q_\theta} $, and follow a similar approach as before by minimizing at each stage,
\begin{align}\label{eq:risk_update_var}
& R_{t}(\theta)= R_{t-1}(\theta)+\sum_{i=1}^{n_t}\frac{\varphi(x_{t,i})^2 }{ q_{\theta}(x_{t,i}) q_{t-1}(x_{t,i})}.
\end{align}
This case represents a different situation from the Kullback-Leibler approach and the GMM. Here, the sampling policy is selected optimally with respect to a particular function $\varphi$, whereas for KL and GMM the sampling policy is driven by a targeted distribution $f$.
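A minimal one-dimensional sketch of this variance-based update: within a hypothetical Gaussian location family $\{q_\mu = \mathcal N(\mu,1)\}$, the estimated risk of (\ref{eq:risk_update_var}) is minimized by a simple grid search (the integrand, the previous policy and the grid are arbitrary choices for the illustration).

```python
import numpy as np

rng = np.random.default_rng(2)

def npdf(x, mu, sig=1.0):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

def variance_update(xs, q_prev, phi, mu_grid):
    """Pick the location mu of q_theta = N(mu, 1) minimizing the estimated
    variance risk sum_i phi(x_i)^2 / (q_theta(x_i) q_prev(x_i))."""
    a = phi(xs) ** 2 / q_prev                  # part of the summand fixed in theta
    risks = [np.sum(a / npdf(xs, mu)) for mu in mu_grid]
    return mu_grid[int(np.argmin(risks))]

phi = lambda x: npdf(x, 5.0)                   # positive integrand centered at 5
xs = rng.normal(0.0, 3.0, size=5000)           # drawn from q_prev = N(0, 9)
mu_hat = variance_update(xs, npdf(xs, 0.0, 3.0), phi, np.linspace(-2.0, 8.0, 101))
# For this phi, the population risk over {N(mu, 1)} is minimized at mu = 5.
```

The grid search stands in for a proper optimizer; in practice (\ref{eq:min_risk}) would be solved by any numerical minimization routine.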
\begin{remark}[computation cost]
The update rule (\ref{eq:min_risk}) might be computationally costly but alternatives exist. For instance, when $q_\theta$ is a family of Gaussian distributions, closed formulas are available for (\ref{eq:risk_update_kl}). In fact, we are then in the case of weighted maximum likelihood estimation, for which we find
exactly (\ref{muhat}) and (\ref{sigmahat}), with $\nu = \infty$.
These estimates can be computed online at no cost. Another strategy to reduce the computation time is to use online stochastic gradient descent in (\ref{eq:min_risk}).
\end{remark}
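To illustrate this remark on a toy one-dimensional case: minimizing the estimated Kullback-Leibler risk (\ref{eq:risk_update_kl}) over a hypothetical Gaussian location family coincides with the weighted-mean closed form, as the code below checks (all distributions are arbitrary choices for the illustration).

```python
import numpy as np

rng = np.random.default_rng(3)

def npdf(x, mu, sig=1.0):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

# Samples from a previous policy q_prev = N(0, 4); target density f = N(3, 1).
xs = rng.normal(0.0, 2.0, size=10_000)
w = npdf(xs, 3.0) / npdf(xs, 0.0, 2.0)         # ratios f(x_i) / q_prev(x_i)

# Grid-search minimization of the estimated KL risk over {N(mu, 1)} ...
mu_grid = np.linspace(-1.0, 6.0, 701)
risks = [-np.sum(w * np.log(npdf(xs, mu))) for mu in mu_grid]
mu_kl = mu_grid[int(np.argmin(risks))]

# ... agrees (up to the grid resolution) with the weighted-MLE closed form.
mu_closed = np.sum(w * xs) / np.sum(w)
```

Since the estimated KL risk is an exact quadratic in $\mu$ for this family, the grid minimizer is the grid point closest to the weighted mean.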
\begin{remark}[block estimator]
In \cite{marin+p+s:2012}, the authors suggest updating $ \theta$ based only on the particles from the last stage. For the Kullback-Leibler update, (\ref{eq:risk_update_kl}) would be replaced by $ R_{t}(\theta) = - \sum_{i = 1}^{n_t} {\log(q_\theta(x_{t,i})) f(x_{t,i})} / { q_{t-1}( x_{t,i} ) }$.
While this update makes the theoretical analysis easier (assuming that $n_t \to \infty$), its main drawback is that most of the computing effort is forgotten at each stage, as the previous computations are not reused.
\end{remark}
\subsection{Consistency of the sampling policy and asymptotic optimality of AIS}\label{consit_sampling_policy}
The updates described before, using GMM, the Kullback-Leibler divergence or the variance, all fit within the framework of empirical risk minimization given by (\ref{eq:min_risk}), which, rewritten at the sample scale, gives
\begin{align*}
& R_j(\theta) = R_{j-1}(\theta) + \frac{m_\theta(x_j)}{ q_{j-1}( x_j) } \\
&- \text{ if } j\in \{ N_t: t\ge 1\} \text{ then } : ~~~~~~~~~~~~~ \theta_j \in \argmin_{\theta\in \Theta} \, R_j(\theta)\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ q_j= q_{ \theta_j}\\
&- \text { else :} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ q_j= q_{j-1}.
\end{align*}
The proof follows a standard approach from $M$-estimation theory \cite[Theorem 5.7]{vandervaart:1998}, but particular attention must be paid to the uniform law of large numbers because the sequences of interest are not i.i.d.
\begin{theorem}[consistency of the sampling policy]\label{theorem:convergence_theta}
Set $M(x) =\sup_{\theta\in \Theta} m_\theta(x)$. Assume that $\Theta\subset \mathbb R^q $ is a compact set and that
\begin{align}\label{Mhyp}
&\int M(x) d x<\infty , \quad \sup_{\theta\in\Theta} \int \frac{M(x)^2}{q_{\theta}(x)}dx<\infty,\quad\text{and } \quad \forall\theta\ne\theta_*,~~ r(\theta) = \int m_\theta > \int m_{\theta_*}.
\end{align}
If moreover, for any $x\in \mathbb R^d $, the function $ \theta\mapsto m_{\theta}(x)$ is continuous on $\mathbb R^q$, then
\begin{align*}
\theta_n \to \theta_*,\qquad \text{a.s.}
\end{align*}
\end{theorem}
The conclusion of Theorem \ref{theorem:convergence_theta} allows us to check the conditions of Theorem \ref{clt}. This leads to the following result.
\begin{theorem}[asymptotic optimality of AIS]\label{thm:final_th}
Under the assumptions of Theorem~\ref{theorem:convergence_theta},
if there exists $\eta >0$ such that $\sup_{\theta\in \Theta} \int {\|\varphi\|^{2+\eta}} / { q_{\theta}^{1+\eta}}<\infty$,
then, we have
\begin{align*}
\sqrt n \,\Big( I_n -\int \varphi \Big)\overset{\mathrm{d} } {\to} \mathcal N\big(0,V(q_{\theta_*},\varphi)\big),
\end{align*}
where $V(\cdot,\cdot)$ is defined in Equation~(\ref{vari}).
\end{theorem}
\begin{remark}[the oracle property]\label{rk:opt_pol}
From (\ref{Mhyp}), we deduce that $q_{\theta_*}$ is the unique minimizer of the risk function $r$. The risk function based on GMM or the Kullback-Leibler approach (described in section \ref{subsection:examples}) is derived from a certain targeted density $f$ in such a way that if $q_\theta = f $, then $r(\theta) $ is minimal. Hence, under the identifiability conditions of Theorem \ref{theorem:convergence_theta}, whenever $f\in \{q_\theta\,:\, \theta\in \Theta\}$, we have $q_{\theta_*} = f$. This means that, asymptotically, AIS achieves the same variance as the ``oracle'' importance sampling method based on the (fixed) sampler $f$.
\end{remark}
\begin{remark}[optimal policy for normalized AIS]\label{rk:opt_pol_norm}
For normalized AIS, the asymptotic variance is $ u^T V(q_{\theta_*} , (\varphi^T\pi,\pi)^T) u $, where $V$ and $u$ are given in Corollary \ref{cor:normalized}. Minimizing w.r.t. $q_{\theta_*} $, we obtain the result (recalled in the introduction for nonadaptive strategies) that the optimal sampling policy for normalized AIS is proportional to $ |\varphi - \int \varphi \pi |\pi $ (see section \ref{sec:rk_opt_pol} in the supplementary material).
\end{remark}
\section{Weighted AIS}\label{sec:weightedAIS}
We follow ideas from \cite[section 4]{douc+g+m+r:2007b} to develop a novel method to estimate $\int \varphi \pi$. The method, called weighted adaptive importance sampling (wAIS), automatically re-weights each sample depending on its accuracy. In practice, it allows forgetting poor samples generated during the early stages. For clarity, suppose that $\varphi : \mathbb R^d\to \mathbb R ^p$ with $p=1$.
Define the weighted estimate, for any function $\psi$,
\begin{align*}
& I_T^{(\alpha)}(\psi) = N _T^{-1} \sum_{t=1}^T\alpha_{T,t} \sum_{i=1}^{n_t} \frac{\psi(x_{t,i})}{ q_{t-1}(x_{t,i})}.
\end{align*}
Note that for any sequence $(\alpha_{T,1},\ldots \alpha_{T,T})$ such that $\sum_{t=1}^T n_t \alpha_{T,t}=N_T $, $I_T^{(\alpha)}(\psi)$ is an unbiased estimate of $\int \psi$. Let $\sigma_t^2 = \mathbb E [ V(q_{t-1},\varphi) ]$, where $V(\cdot,\cdot)$ is defined in Equation (\ref{vari}). The variance of $I_T^{(\alpha)}(\varphi) $ is $ N_T^{-2} \sum_{t=1}^T \alpha_{T,t}^2 n_t \sigma_t^2$,
which, minimized w.r.t. the weights under this constraint, gives $\alpha_{T,t }\propto \sigma_t^{-2}$, for each $t=1,\ldots T$. In \cite{douc+g+m+r:2007b}, a re-weighting is proposed using estimates of $\sigma_t$ (based on the samples of the $t$-th stage).
We propose the following weights
\begin{align}\label{eq:weights_alpha}
\alpha_{T,t}^{-1} \propto \sum_{i=1}^{n_t} \left(\frac{\pi(x_{t,i})}{q_{t-1}(x_{t,i}) } - 1\right) ^2,
\end{align}
normalized so that the constraint $\sum_{t=1}^T n_t \alpha_{T,t}=N_T $ holds. The wAIS estimate is the (weighted and normalized) AIS estimate given by
\begin{align}\label{eq:wAIS}
I_T^{(\alpha)}(\varphi\pi) / I_T^{(\alpha)}(\pi) .
\end{align}
In contrast with the approach in \cite{douc+g+m+r:2007b}, because our weights are based on the estimated variance of $\pi/q_{t-1}$, our proposal is free from the integrand $\varphi$ and thus reflects the overall quality of the $t$-th sample. This makes sense whenever many functions need to be integrated, in which case a re-weighting depending on a specific function would be inappropriate.
Another difference with \cite{douc+g+m+r:2007b} is that we use the true expectation, $1$, in the estimate of the variance, rather than the estimate $(1/n_t) \sum_{i=1}^{n_t} {\pi(x_{t,i})} / {q_{t-1}(x_{t,i}) }$. This avoids the situation (common in high-dimensional settings) where a poor sampler $q_{t-1}$ is such that ${\pi(x_{t,i}}) / {q_{t-1}(x_{t,i}) }\simeq 0$ for all $i=1,\ldots n_t$, implying that the classical estimate of the variance is near $0$, leading (unfortunately) to a large weight.
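The wAIS computation can be sketched as follows, assuming per-stage arrays of the evaluations $\varphi(x_{t,i})$, $\pi(x_{t,i})$ and $q_{t-1}(x_{t,i})$ have been stored (the helper and the toy two-stage check below are our own constructions):

```python
import numpy as np

def npdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

def wais_estimate(stage_phi, stage_pi, stage_q, stage_n):
    """Weighted, normalized AIS estimate of \\int phi pi.

    Each argument is a list with one entry per stage t, holding the arrays
    phi(x_{t,i}), pi(x_{t,i}), q_{t-1}(x_{t,i}) and the integer n_t."""
    # alpha_{T,t} inversely proportional to the estimated variance of pi/q_{t-1}
    inv_alpha = np.array([np.sum((pi / q - 1.0) ** 2)
                          for pi, q in zip(stage_pi, stage_q)])
    alpha = 1.0 / inv_alpha
    n = np.asarray(stage_n, dtype=float)
    alpha *= n.sum() / np.sum(n * alpha)       # enforce sum_t n_t alpha_t = N_T
    num = sum(a * np.sum(ph * pi / q)
              for a, ph, pi, q in zip(alpha, stage_phi, stage_pi, stage_q))
    den = sum(a * np.sum(pi / q)
              for a, pi, q in zip(alpha, stage_pi, stage_q))
    return num / den

# Toy two-stage check: pi = N(3, 1), phi(x) = x, so \int phi pi = 3.
rng = np.random.default_rng(4)
x1 = rng.normal(0.0, 2.0, 5000)                # stage 1: poor sampler N(0, 4)
x2 = rng.normal(3.0, 1.5, 5000)                # stage 2: better sampler N(3, 2.25)
est = wais_estimate([x1, x2],
                    [npdf(x1, 3.0, 1.0), npdf(x2, 3.0, 1.0)],
                    [npdf(x1, 0.0, 2.0), npdf(x2, 3.0, 1.5)],
                    [5000, 5000])
```

The first stage, whose ratios $\pi/q_0$ fluctuate more, automatically receives the smaller weight $\alpha_{T,1}$.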
\section{Numerical experiments}
\begin{figure}
\centering\includegraphics[height=7cm,width = 6.5cm]{err_log_NOsigT_50p_2.pdf}\includegraphics[height=7cm,width = 6.5cm]{err_log_NOsigT_50p_4.pdf}\\
\centering\includegraphics[height=7cm,width = 6.5cm]{err_log_NOsigT_50p_8.pdf}\includegraphics[height=7cm,width = 6.5cm]{err_log_NOsigT_50p_16.pdf}
\caption{From left to right $d=2,4,8,16$. AIS and wAIS are computed with $T=50$ with a constant allocation policy $n_t = 2e3$. Plotted is the logarithm of the MSE (computed for each method over $100$ replicates) with respect to the number of requests to the integrand.}\label{fig:variance_stab}
\end{figure}
In this section, we study a toy Gaussian example to illustrate the practical behavior of AIS. Special interest is dedicated to the effect of the dimension $d$, the practical choice of $(n_t)$ and
the gain provided by wAIS, introduced in the previous section. We set $N_T = 1e5$ and we consider $d=2,4,8,16$. The code is made available at \url{https://github.com/portierf/AIS}.
The aim is to compute $\mu_* = \int x \phi_{\mu_*,\sigma_*}(x)d x $ where $\phi_{\mu,\sigma}:\mathbb R^d\to \mathbb R $ is the probability density of $\mathcal N (\mu, \sigma^2I_d)$, $\mu_* = (5,\ldots 5)^T\in \mathbb R^d$, $\sigma_* = 1$, and $I_d$ is the identity matrix of size $(d,d)$. The sampling policy is taken in the collection of multivariate Student distributions with $\nu = 3$ degrees of freedom, denoted by $\{q_{\mu,\Sigma_0}\,:\, \mu \in \mathbb R^d \}$ with $\Sigma_0 =\sigma_0I_d(\nu-2) /\nu$ and $\sigma_0 = 5$. The initial sampling policy is set with $\mu_0 = (0,\ldots 0)\in \mathbb R^d$. The mean $\mu_t$ is updated at each stage $t=1,\ldots T$ following the GMM approach described in section \ref{sec:consistency_samp_pol}, leading to the simple update formula
\begin{align*}
&\mu_t
= {\sum_{s=1}^t \sum_{i=1}^{n_s} x_{s,i} \frac{f(x_{s,i} )} { q_{s-1}(x_{s,i} )}} \left/ {\sum_{s=1}^t \sum_{i=1}^{n_s} \frac{f(x_{s,i} )} { q_{s-1}(x_{s,i} )}} \right. ,
\end{align*}
with $f = \phi_{\mu_*,\sigma_*}$. In section \ref{sec:supp_update_variance} of the supplementary file, other results considering the update of the variance within the Student family are provided.
As the results for the unnormalized approaches were far from being competitive with the normalized ones, we consider only normalized estimators. The (normalized) AIS estimate of $\mu_* $ is simply given by $ \mu_{t}$ as displayed above. The wAIS estimate of $\mu_*$ is computed using (\ref{eq:wAIS}) with weights (\ref{eq:weights_alpha}).
We also include the adaptive MH algorithm proposed in \cite{haario+s+t:2001}, where the proposal, assuming that $X_{i-1} = x$, is given by $\mathcal N \left(x , (2.4)^2 ( C_{i} + \epsilon I_d )/d \right)$, if $ i>i_0$, and $ \mathcal N (x , I_d)$, if $ i\leq i_0$, with $C_i$ the empirical covariance matrix of $(X_0,X_1,\ldots X_{i-1})$, $i_0 = 1000$ and $\epsilon = 0.05$ (other configurations, for instance using only half of the chain, have been tested without improving the results). Finally, we consider a so-called ``oracle'' method: importance sampling with the fixed policy $\phi_{\mu_*,\sigma_*}$.
For each method that returns $\mu$, the mean square error (MSE) is computed as the average of $\|\mu - \mu_*\|^2$ computed over $100$ replicates of $\mu$.
\begin{figure}
\centering\includegraphics[height=7cm,width = 6.5cm]{err_log_com_Np_2.pdf}\includegraphics[height=7cm,width = 6.5cm]{err_log_com_Np_4.pdf}\\
\centering\includegraphics[height=7cm,width = 6.5cm]{err_log_com_Np_8.pdf}\includegraphics[height=7cm,width = 6.5cm]{err_log_com_Np_16.pdf}
\caption{From left to right $d=2,4,8,16$. AIS and wAIS are computed with $T=5,20,50$, each with a constant allocation policy, resp. $n_t = 2e4,5e3,2e3$. Plotted is the logarithm of the MSE (computed for each method over $100$ replicates) with respect to the number of requests to the integrand.}\label{fig:comp_alloc}
\end{figure}
In Figure \ref{fig:variance_stab}, we compare the evolution of all the mentioned algorithms with respect to the stages $t=1,\ldots T= 50$ with constant allocation policy $n_t = 2e3$ (for AIS and wAIS). The clear winner is wAIS. Note that the policy $\phi_{\mu_*,\sigma_*}$, which is not the optimal one (see Remark \ref{rk:opt_pol_norm}), seems to give worse results than the policy $\phi_{\mu_*,5}$, as wAIS with \texttt{sig\_0} performs better than the ``oracle'' after some time.
In Figure \ref{fig:comp_alloc}, we examine $3$ constant allocation policies given by $T = 50$ and $n_t = 2e3$; $T= 20 $ and $n_t = 5e3$; $T =5$ and $n_t = 2e4$. We clearly notice that the rate of convergence is influenced by the number of update steps (at least at the beginning). The results call for updating the sampling policy as early and as often as possible. This empirical evidence supports the theoretical framework studied in the paper, which imposes no condition on the growth of $(n_t)$.
\subsubsection*{Acknowledgments} The authors are grateful to R\'emi Bardenet for useful comments and additional references.
\bibliographystyle{plain}
\section{Introduction}
The Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence or duality, also known as the string/gauge or gauge/gravity duality, or holography, was proposed by Juan Maldacena in 1997 \cite{mal} and brought various perspectives for hadronic physics outside the perturbative regime, in the sense that one can relate a type IIB superstring theory to a supersymmetric Yang-Mills (SYM) theory.
Correspondences or dualities are not new in the history of physics. These correspondences relate two physical theories, generally distinct from one another, through certain characteristics belonging to both. As examples, we recall two very well-known dualities: the first is the duality between the quantum sine-Gordon model and the massive Thirring model \cite{ref10, ref11}, and the second is the electric-magnetic duality or Seiberg's duality \cite{Seiberg:1994pq}.
The major difference between the AdS/CFT correspondence and those mentioned above is that the latter relate two quantum field theories to each other, while the AdS/CFT correspondence relates a quantum field theory in a $d$-dimensional space to a theory of supergravity in a curved $D$-dimensional space, with $D> d$.
More generally, at a high level of mathematical abstraction, it can be said that the AdS/CFT correspondence relates a superstring theory or M-theory on a background of the form $AdS_d \times {\cal M}^{D-d}$, where $AdS_d$ is the $d$-dimensional anti-de Sitter space and ${\cal M}^{D-d}$ is some compactification of the remaining $(D-d)$-dimensional space, to a conformal field theory (CFT) on the AdS boundary. Note that for $D=10$ one has a superstring theory, while for $D=11$ one has an M-theory.
In a more concrete view, the AdS/CFT correspondence can be seen as a duality between a conformal SYM theory (its $\beta$ function vanishes) with extended supersymmetry $({\cal N} = 4)$ and gauge group $SU(N)$ with $N \rightarrow \infty$, living in flat $(3+1)$-dimensional Minkowski spacetime, and a type IIB superstring theory in a curved $10$-dimensional spacetime, mathematically described as $AdS_5 \times S^5$, which in the low energy limit can be associated with a theory of supergravity. Furthermore, there are many well-known references dealing with AdS/CFT, as one can see in \cite{Gubser:1998bc,Witten:1998qj,Witten:1998zw,Aharony:1999ti}, for instance.
The use of the AdS/CFT correspondence allows us to investigate many aspects of the hadronic physics described by QCD outside the perturbative regime. QCD is the best known theory to describe the strong interactions. Among other characteristics of QCD there is confinement, meaning that in the infrared limit, i.e., at low energies or large distances, quarks and gluons are strongly bound to each other, with strong coupling $g \gg 1$, inaccessible to the perturbative approach. Calculations involving bound states, such as the glueball masses and their related Regge trajectories, are therefore features of the non-perturbative regime.
Glueball states are bound states of gluons predicted by QCD, but not detected so far, characterised by $J^{PC}$, where $J$ is the total angular momentum and $P$ and $C$ are the $P$-parity (spatial inversion) and $C$-parity (charge conjugation) eigenvalues, respectively.
Regge trajectories are well known approximate linear relations between the total angular momentum $J$ and the square of the mass $m^2$, such as:
\begin{equation}
J(m^2) \approx \alpha' \, m^2 + \alpha_0 \, ,
\end{equation}
\noindent with $\alpha_0$ and $\alpha'$ constants.
In order to ease the comparison between our results and those coming from other approaches, we provide the best-known Regge trajectory for the soft pomeron \cite{Donnachie:1984xq, Donnachie:1985iz}, given by:
\begin{equation}\label{land}
J(m^2) \approx 0.25 \, m^2 + 1.08\,,
\end{equation}
\noindent where the masses throughout this work are expressed in GeV. The pomeron is related to even spin glueballs, with $P=C=+1$. In the $J \times m^2$ plane, known as the Chew-Frautschi plane, the masses of glueball states lie on the pomeron Regge trajectory.
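As a quick numerical illustration of how Eq.\eqref{land} is used (our own sketch, not part of the original analysis), the $2^{++}$ glueball mass predicted by the soft pomeron trajectory follows from inverting $J(m^2)=0.25\,m^2+1.08$ at $J=2$:

```python
import math

# Invert the soft pomeron trajectory J(m^2) = alpha' m^2 + alpha_0 at J = 2
# to estimate the 2^{++} glueball mass (in GeV).
alpha_prime, alpha_0 = 0.25, 1.08
m_2pp = math.sqrt((2 - alpha_0) / alpha_prime)
print(round(m_2pp, 2))  # 1.92
```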
Of course, there are also other models to describe the pomeron, providing Regge trajectories pretty close to Eq.\eqref{land}, as one can see for instance in \cite{Cudell:2001ii, Levin:1998pk}.
On the other hand, for the odderon, related to odd spin glueballs with $P=C=-1$, there are also many models, such as the isotropic lattice \cite{Meyer:2004jc}, the anisotropic lattice \cite{Chen:2005mg}, the relativistic many body model \cite{LlanesEstrada:2005jf} and the non-relativistic constituent model \cite{LlanesEstrada:2005jf}, for which:
\begin{equation}\label{nr_odderon}
J(m^2) \approx 0.18 \, m^2 + 0.25 \,,
\end{equation}
We are going to compare our results for the Regge trajectory of the odderon with this trajectory.
As mentioned before, the AdS/CFT correspondence involves a superconformal field theory and thus cannot be used directly to tackle QCD, since QCD is not a conformal theory. So one must break the conformal invariance, and afterwards one can construct phenomenological models that describe (large $N$) QCD approximately. These models are known as AdS/QCD models.
Some proposals appeared in order to deal with the conformal invariance, such as the ``Witten black hole'' \cite{Witten:1998zw} and the introduction of an IR hard cutoff at a certain value $z_{max}$ of the holographic coordinate $z$, considering a slice of $AdS_5$ space in the region $0 \leq z \leq z_{max}$ with suitable boundary conditions \cite{Polchinski:2001tt, Polchinski:2002jw, BoschiFilho:2002vd, BoschiFilho:2002ta}. From these last two works emerges the idea of what is nowadays known as the hardwall model. There are many results for even and odd glueball state masses, as well as for the Regge trajectories associated with the pomeron and the odderon within the hardwall model, as one can see in \cite{BoschiFilho:2005yh, Capossoli:2013kb, Rodrigues:2016cdb}.
For our purposes, in this work we focus on another approach, known as the softwall model, in its dynamical version, to break the conformal invariance and investigate the hadronic physics, as can be seen in the following Sections.
The softwall model arises from the need to break the conformal invariance: one introduces in the action of the fields a decreasing exponential factor of the dilatonic field, which represents a soft IR cutoff. The original softwall model was proposed in \cite{Karch:2006pv} to study vector mesons, and was subsequently extended to glueballs \cite{Colangelo:2007pt}, to other mesons and baryons \cite{Forkel:2007tz}, and even to deep inelastic scattering \cite{Capossoli:2015sfa}. The main feature of this model is that it produces linear Regge trajectories.
As discussed in \cite{BoschiFilho:2012xr, Li:2013oda, Capossoli:2015ywa}, the Regge trajectories for glueballs coming from the original softwall model, although linear, are not in agreement with lattice data. In particular, reference \cite{FolcoCapossoli:2016ejd} suggested an $AdS_5$ mass renormalisation in order to get a unified treatment of both scalar and high even spin glueballs. Due to this, modifications of the original softwall model were proposed, as can be seen in the following Sections.
This work is organised as follows. In Section \ref{dsw} we present a dynamical modification of the holographic softwall model, used to calculate the masses of the even and odd spin glueballs as well as the Regge trajectories related to the pomeron and the odderon; we also explore the analytically solvable version of this model. In Section \ref{ano} we present a modification of the dynamical softwall model that takes into account the anomalous dimension related to the QCD beta function, and solve it numerically, again obtaining the masses of the even and odd spin glueballs and the Regge trajectories related to the pomeron and the odderon. Finally, in Section \ref{con} we present our conclusions and last comments.
\section{Dynamical Modification in the Holographic Softwall Model} \label{dsw}
As mentioned in the previous Section, references \cite{BoschiFilho:2012xr, Li:2013oda, Capossoli:2015ywa} showed that the original softwall model does not seem to work well for glueball states, because both the masses and the Regge trajectories are not in agreement with those found in the literature. In this Section we present a modification of the softwall model.
The dynamical modification of the holographic softwall model consists in making the dilaton field dynamical, so that both the dilaton and the metric structure are solved consistently from Einstein's equations in five dimensions.
To do this, let us start by writing the $5D$ graviton-dilaton action in the string frame:
\begin{equation}\label{acao_corda}
S = \frac{G_5^{-1}}{16 \pi } \int d^5 x \sqrt{-g_s} \; e^{-2\Phi(z)} ({\cal R}_s + 4 \partial_M \Phi \partial^M \Phi - V^s_G(\Phi))
\end{equation}
\noindent where $G_5$ is Newton's constant in five dimensions, $g_s$ is the determinant of the metric tensor in the $5$-dimensional space, $\Phi(z) = k z^2$ is the dilatonic field, with $k \sim \Lambda^2_{QCD}$, and $V^s_G(\Phi)$ is the dilatonic potential. All of these quantities are in the string frame, and the metric tensor has the following form:
\begin{equation}\label{g_s}
ds^2 = g^s_{MN} dx^M dx^N = b^2_s(z)(dz^2 + \eta_{\mu \nu}dx^\mu dx^\nu); \; \; \;b_s(z) \equiv e^{A_s(z)}.
\end{equation}
\noindent with $M,N = 0,1,2,3,4; \; \mu, \nu = 0,1,2,3,$ and $\eta_{\mu \nu} =$ diag $(-1, 1, 1, 1)$ the metric of the four-dimensional Minkowski space.
After a Weyl rescaling, from the string frame to the Einstein frame, one can write Eq.(\ref{acao_corda}) as:
\begin{equation}\label{acao_einstein}
S = \frac{1}{16 \pi G_5} \int d^5 x \sqrt{-g_E} \; (R_E -\frac{4}{3} \partial_M \Phi \partial^M \Phi - V^E_G(\Phi))\;,
\end{equation}
\noindent where
\begin{equation}\label{weyl}
g^E_{MN} = g^s_{MN}e^{-\frac{2}{3}\Phi}\;; \qquad V^E_G = e^{\frac{4}{3}\Phi}V^s_G\;.
\end{equation}
The equations of motion derived from \eqref{acao_einstein} can be written as:
\begin{equation}\label{eq_mov_e_2_1}
-A''_E + A'^2_E - \frac{4}{9}\Phi'^2 = 0\;;
\end{equation}
\begin{equation}\label{eq_mov_e_2_2}
\Phi'' + 3A'_E \Phi' - \frac{3}{8}e^{2A_E}\partial_\Phi V^E(\Phi) = 0\;,
\end{equation}
\noindent where we defined $\Phi'=\partial \Phi/ \partial z$, $A'=\partial A/ \partial z$ and
\begin{equation}\label{redef}
b_E (z) = b_s(z)e^{-\frac{2}{3}\Phi(z)} = e^{A_E(z)}\;; \qquad A_E(z) = A_s(z) - \frac{2}{3}\Phi(z)\;.
\end{equation}
\noindent Solving Eqs. (\ref{eq_mov_e_2_1}) and (\ref{eq_mov_e_2_2}) for the quadratic dilaton background, $\Phi(z)=kz^2$, one finds:
\begin{equation}\label{sol_eq_mov_e_2_1}
A_E(z) = \log{\left( \frac{R}{z} \right)} - \log{\left(_0F_1\left(\frac 54, \frac{\Phi^2}{9}\right)\right)}\;,
\end{equation}
\noindent and
\begin{equation}\label{sol_eq_mov_e_2_2}
V^E_G(\Phi) = -\frac{12 ~ _0F_1(\frac14, \frac{\Phi^2}{9})^2}{R^2} + \frac{16 ~ _0F_1(\frac 54, \frac{\Phi^2}{9})^2\, \Phi^2}{3 R^2}\;,
\end{equation}
where $_0F_1(a,z)$ is the Kummer confluent hypergeometric function.
Using (\ref{redef}) and (\ref{sol_eq_mov_e_2_1}), one can note that the warp factor in the string frame is
\begin{equation}\label{redef_2}
A_s(z) = \log{\left( \frac{R}{z} \right)} + \frac{2}{3}\Phi(z) - \log{\left[_0F_1\left(\frac 54, \frac{\Phi^2}{9}\right)\right]}\,,
\end{equation}
\noindent which means that the metric (\ref{g_s}) is a deformed AdS space. Using \eqref{weyl} one has
\begin{equation}\label{vs}
V^s_G(\Phi) =\exp\{-\frac 43 \Phi\} \left[ -\frac{12 ~ _0F_1(1/4, \frac{\Phi^2}{9})^2}{R^2} + \frac{16 ~ _0F_1(5/4, \frac{\Phi^2}{9})^2 \Phi^2}{3 R^2}\right]
\end{equation}
\noindent so that this potential generates the desired quadratic dilaton where $R$ is the AdS radius.
Returning to string frame, the $5D$ action for the scalar glueball field ${\cal G}$ is given by \cite{Colangelo:2007pt}:
\begin{equation}\label{acao_ori_soft}
S = \int d^5 x \sqrt{-g_s} \; \frac{1}{2} e^{-\Phi(z)} [\partial_M {\cal G}\partial^M {\cal G} + M^2_{5} {\cal G}^2]
\end{equation}
\noindent which leads to the following equation of motion:
\begin{equation}\label{eom_1}
\partial_M[\sqrt{-g_s} \; e^{-\Phi(z)} g^{MN} \partial_N {\cal G}] - \sqrt{-g_s} e^{-\Phi(z)} M^2_{5} {\cal G} = 0\,.
\end{equation}
Representing the scalar field through a $4d$ Fourier transform ${\cal \tilde{G}}(q,z)$ and performing a change of function ${\cal \tilde{G}} = \psi (z) e^{\frac{B(z)}{2}}$, where $B(z) = \Phi(z) - 3A_s(z) $, one gets the following $1d$ Schr\"odinger-like equation
\begin{equation}\label{equ_5}
- \psi''(z) + \left[ \frac{B'^2(z)}{4} - \frac{B''(z)}{2} + M^2_{5} \left( \frac{R}{z}\right)^2 e^{4kz^2/3} {\cal A}^{-2} \right] \psi(z) = - q^2 \psi(z)
\end{equation}
\noindent or explicitly for the quadratic dilaton $\Phi(z)= k z^2$:
\begin{equation}\label{equ_7}
- \psi''(z) + \left[ k^2 z^2 + \frac{15}{4z^2} - 2k + M^2_{5} \left( \frac{R}{z}\right)^2 e^{4kz^2/3}\right] \psi(z) = (- q^2 )\psi(z),
\end{equation}
with ${\cal A}$ = $_0F_1(5/4, \frac{\Phi^2}{9})$. This equation was solved numerically in \cite{Li:2013oda,Capossoli:2016kcr, Capossoli:2016ydo}.
Since, for our purposes in this Section, we are interested in analytical solutions, we will use the action \eqref{acao_ori_soft} with the metric tensor (\ref{g_s}), $\Phi(z)$ still given by $k z^2$, but with the function $A_s(z)$ replaced by:
\begin{equation}\label{am}
{A}_M(z) = \log{\left( \frac{R}{z} \right)} + \frac{2}{3}\Phi(z)\,.
\end{equation}
One can conclude, by looking at (\ref{g_s}) and (\ref{am}), that this modification produces a deformation of the original $AdS_5$ space, meaning that this dynamical softwall model no longer lives in $AdS_5$. Of course, we are now dealing with an asymptotically $AdS_5$ space, since in the UV limit $z\rightarrow 0$ one has $A_M(z)|_{(z\rightarrow 0 )}\propto \log \left( \frac{R}{z} \right)$.
The Schr\"odinger-like equation \eqref{equ_7} has an effective potential given by:
$${\cal V}(z) = \left[ k^2 z^2 + \frac{15}{4z^2} - 2k + M^2_{5} \left( \frac{R}{z}\right)^2 e^{4kz^2/3} \right]\,.$$
This potential is still not exactly solvable, so we expand the exponential in the last term in the brackets and retain terms only up to first order in the parameter $k$. In fact, we could retain terms up to second order in $k$ without breaking exact solvability, but this contribution would not significantly modify our subsequent analysis. This procedure gives us the equation
\begin{equation}\label{equ_7_1_new}
- \psi''(z) + \left[ k^2 z^2 + \frac{15}{4z^2} - 2k + M^2_{5} \left( \frac{R}{z}\right)^2 + \frac{4 kz^2}{3}M^2_{5} \left( \frac{R}{z}\right)^2\right] \psi(z) = (- q^2 )\psi(z).
\end{equation}
which is exactly solvable and represents the dynamical and analytical softwall model that we consider here. From the eigenenergies, associating $-q^2_n$ with the squared masses of the $4D$ glueball states, one has:
\begin{equation}\label{adsw_1}
m_n^2 = \left[ 4n + 2\sqrt{4 +M^2_{5} R^2} + \frac{4}{3}R^2M^2_{5} \right]k; \;\;\;\; (n=0, 1, 2, \cdots).
\end{equation}
Since the lightest scalar glueball $0^{++}$ is dual to massless fields $(M^2_5 = 0 )$ in the $AdS_5$ space, Eq.(\ref{adsw_1}) becomes:
\begin{equation}\label{adsw}
m_n^2 = \left[ 4n +4 \right] k \,.
\end{equation}
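This massless spectrum can be verified numerically. As a cross-check of our own (not part of the original analysis), we diagonalise the Schr\"odinger operator $-\partial_z^2 + k^2 z^2 + \frac{15}{4z^2} - 2k$ on a finite-difference grid, with an arbitrary unit $k=1$, and compare with the analytic eigenvalues $(4n+4)k$:

```python
import numpy as np

# Finite-difference check of the analytic spectrum -q_n^2 = (4n + 4) k
# for M_5 = 0, i.e. -psi'' + [k^2 z^2 + 15/(4 z^2) - 2k] psi = E psi.
k = 1.0                       # dilaton scale, arbitrary units for this check
N, L = 1000, 10.0             # grid points and box size
z = np.linspace(L / N, L, N)  # exclude z = 0, where the potential diverges
h = z[1] - z[0]

V = k**2 * z**2 + 15.0 / (4.0 * z**2) - 2.0 * k

# Tridiagonal discretisation of -d^2/dz^2 + V with Dirichlet boundaries
H = (np.diag(2.0 / h**2 + V)
     - np.diag(np.ones(N - 1) / h**2, 1)
     - np.diag(np.ones(N - 1) / h**2, -1))
E = np.sort(np.linalg.eigvalsh(H))

print(E[:3])  # close to [4, 8, 12] = (4n + 4) k
```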
The results obtained from \eqref{adsw} for the masses of the lightest scalar glueball $(n = 0)$ and its radial excitations $(n=1, 2, \cdots)$, using $k = 0.85$ GeV$^2$, are presented in Table \ref{t1}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& \multicolumn{4}{c|}{Glueball States $J^{PC}$} & \\
\cline{2-5}
& $0^{++}$ & $0^{++*} $ & $0^{++**}$ & $0^{++***}$ & $k$ \\ \hline
$n$ & 0 & 1 & 2 & 3 & \\
\hline \hline
\, $m_n$ \,
&\, 1.84\, &\, 2.61 \,&\, 3.19 \,& \, 3.69 \, & \, 0.85 \, \\ \hline
\end{tabular}
\caption{\em Masses expressed in GeV for the glueball states $J^{PC}$ of the lightest scalar glueball and its radial excitations from the dynamical and analytical softwall model using Eq.(\ref{adsw}) for $k = 0.85$ GeV$^2$.}
\label{t1}
\end{table}
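The entries of Table \ref{t1} can be reproduced directly from Eq.(\ref{adsw}); a one-line check:

```python
import math

k = 0.85  # GeV^2
# m_n = sqrt((4n + 4) k), Eq. (adsw), for n = 0, 1, 2, 3
masses = [math.sqrt((4 * n + 4) * k) for n in range(4)]
print([round(m, 2) for m in masses])  # [1.84, 2.61, 3.19, 3.69]
```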
The values found for the masses of the lightest scalar glueball and its radial excitations are in agreement with those found in the literature from lattice calculations, as one can see in \cite{Meyer:2004jc, Chen:2005mg, Morningstar:1999rf, Lucini:2001ej}.
In order to deal with higher spin glueballs, one can recall the AdS/CFT dictionary, which tells us how to relate operators in the gauge theory to fields in the $AdS_{5} \times S^5$ space. The conformal dimension $\Delta$ of a boundary operator is given by:
\begin{equation}\label{dim_delta}
\Delta = 2 + \sqrt{4 + R^2 M^2_{5}}
\end{equation}
For a pure SYM theory defined on the boundary, one has that the scalar glueball state $0^{++}$ is represented by the operator ${\cal O}_4$, given by:
\begin{equation}\label{fmn}
{\cal O}_4 = Tr(F^{\mu\nu}F_{\mu \nu})
\end{equation}
which has conformal dimension $\Delta = 4$. So, the lightest scalar glueball $0^{++}$ is dual to the fields with zero mass $(M^2_{5} = 0 )$ in the $AdS_5$ space, as mentioned before.
After this explanation, we follow the approach of \cite{deTeramond:2005su}, where the glueball operator with spin $\ell$ is obtained by the insertion of symmetrised covariant derivatives in the operator ${\cal O}_{4} = F^2$, such that:
\begin{equation}
{\cal O}_{4+ \ell} = FD_{\lbrace\mu_1 \cdots} D_{\mu_\ell \rbrace}F
\end{equation}
%
\noindent with conformal dimension $\Delta = 4 + \ell$.
This approach was used within the holographic hardwall model in two cases: first, to calculate the masses of the even glueball states $0^{++}$, $2^{++}$, $4^{++}$, $6^{++}, \cdots$ and to obtain the corresponding pomeron Regge trajectory \cite{BoschiFilho:2005yh, Rodrigues:2016cdb}; second, to calculate the masses of the odd glueball states $1^{--}$, $3^{--}$, $5^{--}$, $7^{--}, \cdots$ and to obtain the corresponding odderon Regge trajectory \cite{Capossoli:2013kb}.
For even spin glueball states after the insertion of symmetrised covariant derivatives, and using \eqref{dim_delta}, one has:
\begin{equation}
M^2_{5}R^2 = \ell(\ell+4)\,; \qquad ({\rm even}\, \ell)\,.
\end{equation}
\noindent Plugging this result in Eq.(\ref{adsw_1}), one gets:
\begin{equation}\label{adsw_1_even}
m_n^2 = \left[ 4n + 2\sqrt{4 +\ell(\ell+4)} + \frac{4}{3}\ell(\ell+4) \right]k\,; \qquad ({\rm even}\, \ell).
\end{equation}
For the particular case of non-excited states $(n=0)$, one has:
\begin{equation}\label{kkssparn}
m_0^2 = \left[ 2\sqrt{4 +\ell(\ell+4)} + \frac{4}{3}\ell(\ell+4) \right]k\,; \qquad ({\rm even}\, \ell).
\end{equation}
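Evaluating Eq.\eqref{kkssparn} numerically with $k = 0.2$ GeV$^2$ (the value used below) reproduces, to within rounding, the even spin masses tabulated later in this Section; a minimal sketch of our own:

```python
import math

k = 0.2  # GeV^2

def m0_even(ell):
    # Eq. (kkssparn) with M_5^2 R^2 = ell (ell + 4), even ell, n = 0
    x = ell * (ell + 4)
    return math.sqrt((2.0 * math.sqrt(4.0 + x) + 4.0 * x / 3.0) * k)

print([round(m0_even(l), 2) for l in range(0, 11, 2)])
```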
In the case of odd spin glueballs, the operator ${\cal O}_6$ that describes the glueball state $1^{--}$ is given by:
\begin{equation}
{\cal O}_{6} =SymTr\left( {\tilde{F}_{\mu \nu}}F^2\right),
\end{equation}
\noindent and inserting the symmetrised covariant derivatives one has:
\begin{equation}
{\cal O}_{6 + J} = SymTr\left( {\tilde{F}_{\mu \nu}}F D_{\lbrace\mu_1 \cdots} D_{\mu_\ell \rbrace}F\right),
\end{equation}
\noindent with conformal dimension $\Delta = 6 + J$ and spin $1+\ell$. Then, for the case of the odd spin glueball states, using again \eqref{dim_delta}, one finds:
\begin{equation}
M^2_{5}R^2 = (J+6)(J+2)\,; \qquad ({\rm odd}\, J)\,.
\end{equation}
\noindent Plugging this result in Eq.(\ref{adsw_1}), one gets:
\begin{equation}\label{adsw_1_odd}
m_n^2 = \left[ 4n + 2\sqrt{4 +(J+6)(J+2)} + \frac{4}{3}(J+6)(J+2) \right]k\,; \qquad ({\rm odd}\, J).
\end{equation}
\noindent One can read for the non-excited odd spin glueball states $(n=0)$
\begin{equation}\label{kkssimpn}
m_0^2 = \left[ 2\sqrt{4 +(J+6)(J+2)} + \frac{4}{3}(J+6)(J+2) \right]k\,; \qquad ({\rm odd}\, J).
\end{equation}
In Table \ref{t5} we present the values of the masses for the even spin glueball states, from \eqref{kkssparn}, and in Table \ref{t6} the values for the odd spin glueball states, from \eqref{kkssimpn}. For both calculations we used $k= 0.2$ GeV$^2$.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& \multicolumn{6}{c|}{Glueball States $J^{PC}$} & \\
\cline{2-7}
& $0^{++}$ & $2^{++} $ & $4^{++}$ & $6^{++}$ & $8^{++}$ & $10^{++}$ & $ k $ \\
\hline \hline
Masses
&\, 0.89\, &\, 2.19 \,&\, 3.30 \,& \, 4.38 \, &\, 5.44 &\, 6.49 \, & \, 0.20 \, \\ \hline
\end{tabular}
\caption{\em Masses expressed in GeV for the glueball states $J^{PC}$ with even $J$ from the dynamical and analytical softwall model using Eq.(\ref{kkssparn}) with $k= 0.2$ GeV$^2$.}
\label{t5}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& \multicolumn{6}{c|}{Glueball States $J^{PC}$} & \\
\cline{2-7}
& $1^{--}$ & $3^{--} $ & $5^{--}$ & $7^{--}$ & $9^{--}$ & $11^{--}$ & $ k $ \\
\hline \hline
Masses
&\, 2.82\, &\, 3.94 \,&\, 5.03 \,& \, 6.11 \, &\, 7.19&\, 8.26\, & \, 0.20 \, \\ \hline
\end{tabular}
\caption{\em Masses expressed in GeV for the glueball states $J^{PC}$ with odd $J$ from the dynamical and analytical softwall model using Eq.(\ref{kkssimpn}) with $k= 0.2$ GeV$^2$.}
\label{t6}
\end{table}
From the results presented in Table \ref{t5}, one can derive the Regge trajectory for the even spin glueball states, which can be associated with the pomeron:
\begin{equation}\label{rtpadsw}
J(m^2) = (0.23 \pm 0.02) \, m^2 + (0.8 \pm 0.5)
\end{equation}
\noindent The errors for the slope and the intercept come from the linear fit. This Regge trajectory is in agreement with the one presented in \eqref{land}.
In the same way, using the results from Table \ref{t6}, one can derive the Regge trajectory for the odd spin glueball states, which can be associated with the odderon:
\begin{equation}\label{rtoadsw}
J(m^2) = (0.17 \pm 0.01) \, m^2 + (0.4 \pm 0.4)\;.
\end{equation}
\noindent The errors for the slope and intercept come from the linear fit. This Regge trajectory for the odderon is in agreement with the one presented in \eqref{nr_odderon}, within the nonrelativistic constituent model.
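Both trajectories can be reproduced by an unweighted least-squares fit of $J$ against $m^2$ using the masses of Tables \ref{t5} and \ref{t6}; the sketch below (our own check) recovers the slopes and intercepts within the quoted uncertainties:

```python
import numpy as np

# Linear Regge fits J(m^2) = a m^2 + b from the tabulated masses (GeV)
J_even = np.array([0, 2, 4, 6, 8, 10])
m_even = np.array([0.89, 2.19, 3.30, 4.38, 5.44, 6.49])
J_odd = np.array([1, 3, 5, 7, 9, 11])
m_odd = np.array([2.82, 3.94, 5.03, 6.11, 7.19, 8.26])

a_e, b_e = np.polyfit(m_even**2, J_even, 1)  # pomeron: slope ~0.23, intercept ~0.8
a_o, b_o = np.polyfit(m_odd**2, J_odd, 1)    # odderon: slope ~0.16-0.17, intercept ~0.4
print(a_e, b_e)
print(a_o, b_o)
```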
\section{Dynamical Corrections to the Anomalous Holographic Softwall Model} \label{ano}
In this Section we calculate numerically the masses of higher even and odd spin glueball states, and construct the Regge trajectories related to the pomeron and the odderon, now taking into account the anomalous dimensions coming from a chosen QCD beta function, namely a beta function with an IR fixed point at finite coupling, in addition to the dynamical corrections to the holographic softwall model introduced in the previous Section.
Reference \cite{Gursoy:2007cb} introduced the idea of using QCD beta functions to get an interesting UV behaviour for the softwall model, modified by convenient superpotentials for the dilaton field. In particular, in \cite{BoschiFilho:2012xr} the authors took into account the anomalous dimensions, also related to QCD beta functions, and obtained masses for the scalar glueball and its radial excitations in agreement with those presented in the literature.
As we are dealing with dynamical corrections to the softwall model, we follow the same steps used in Section \ref{dsw}. Let us then recall the warp factor that will be used:
\begin{equation}\label{redef_2_1}
A_s(z) = \log{\left( \frac{R}{z} \right)} + \frac{2}{3}\Phi(z) - \log{\left[_0F_1\left(\frac 54, \frac{\Phi^2}{9}\right)\right]}\,,
\end{equation}
\noindent and the action for the scalar glueball field ${\cal G}$ that also will be used:
\begin{equation}\label{acao_ori_soft_1}
S = \int d^5 x \sqrt{-g_s} \; \frac{1}{2} e^{-\Phi(z)} [\partial_M {\cal G}\partial^M {\cal G} + M^2_{5} {\cal G}^2].
\end{equation}
The dilatonic field still remains as $\Phi = k z^2$.
Proceeding as in Section \ref{dsw}, one obtains the Schr\"odinger-like equation:
\begin{equation}\label{equ_7_1}
- \psi''(z) + \left[ k^2 z^2 + \frac{15}{4z^2} - 2k + M^2_{5} \left( \frac{R}{z}\right)^2 e^{4kz^2/3}\right] \psi(z) = (- q^2 )\psi(z).
\end{equation}
Also recalling the AdS/CFT dictionary, the classical (non-anomalous) conformal dimension $\Delta_{\rm{class.}}$ of a super Yang-Mills (SYM) scalar operator is given by:
\begin{equation}\label{di}
\Delta_{\rm{class.}} = 2 + \sqrt{4 + R^2 M^2_{5}}\;.
\end{equation}
Therefore, one can write:
\begin{equation}\label{dim}
R^2 M^2_{5} = \Delta_{\rm{class.}}( \Delta_{\rm{class.}} - 4) \;.
\end{equation}
SYM is a conformal theory, so its beta function vanishes and the conformal dimensions have no anomalous contributions; therefore the operators keep only their classical dimensions.
On the other side, within the QCD approach, the scalar glueball operator has its full dimension given by the trace anomaly of the energy-momentum tensor \cite{Narison:1988ts,Gubser:2008yx}:
\begin{equation}\label{beta1}
T^{\mu}_{\mu} = \frac{\beta(\alpha)}{16 \pi \alpha^ 2} Tr F^2 + {\rm fermionic \;\;terms}
\end{equation}
\noindent and the beta function is defined as:
\begin{equation}\label{beta2}
\beta(\alpha(\mu) )\equiv \frac{d \alpha(\mu)}{d \ln(\mu)},
\end{equation}
where $\mu$ is a renormalisation scale, $\alpha \equiv g_{YM}^2 /4 \pi$ and $g_{YM}$ is the Yang-Mills coupling constant.
The fermionic part in (\ref{beta1}) can be disregarded because only the operator $Tr F^2$ is relevant for our purposes.
Besides, the scaling behaviour of a generic operator ${\cal O}$ can be written as:
\begin{equation}\label{beta3}
\Delta_{\cal O} = - \frac{d {\cal O}}{d \ln \mu}.
\end{equation}
It is appropriate to mention that the full dimension $\Delta_{\cal O}$ can also be represented as the sum of the classical dimension $\Delta_{\rm{class.}}$ and the anomalous dimension $\gamma(\mu)$:
\begin{equation}\label{beta4}
\Delta_{\cal O} = \Delta_{\rm{class.}} + \gamma(\mu).
\end{equation}
In particular, for the scalar glueball operator, inserting Eq.(\ref{beta1}), with the fermionic part disregarded, in (\ref{beta3}), we obtain:
\begin{equation}
\Delta_{T^{\mu}_{\mu}} \left( \frac{\beta(\alpha)}{8 \pi \alpha^ 2} Tr F^2 \right) = - (\beta'(\alpha) - \frac{2}{\alpha} \beta(\alpha) - \Delta_{F^2}) \frac{\beta(\alpha)}{8 \pi \alpha^ 2} Tr F^2\;,
\end{equation}
\noindent where the prime represents the derivative with respect to $\alpha$.
Then, the scalar glueball operator $Tr F^2$ has the full dimension:
\begin{equation}\label{beta6}
\Delta_{F^2} = 4 + \beta'(\alpha) - \frac{2}{\alpha} \beta(\alpha)
\end{equation}
Using the 't Hooft coupling $\lambda \equiv N_C g_{YM}^2 = 4 \pi N_C \alpha$, one gets
\begin{equation}\label{beta7}
\Delta_{F^2} = 4 + \beta'(\lambda) - \frac{2}{\lambda} \beta(\lambda)
\end{equation}
\noindent where the prime represents the derivative with respect to $\lambda$ and the beta function is given by:
\begin{equation}
\beta(\lambda(\mu)) \equiv \frac{d \lambda(\mu)}{d \ln(\mu)}.
\end{equation}
As our concern is with higher spin glueballs, of even and odd spin, and with their corresponding Regge trajectories related to the pomeron and the odderon, we will use the same procedure as in Section \ref{dsw}: we insert symmetrised covariant derivatives in a given operator in order to raise the total angular momentum. In the particular case of the operator ${\cal O}_4 = F^2$, one gets once again:
\begin{equation}\label{4+J}
{\cal O}_{4 + J} = FD_{\lbrace\mu_1 \cdots} D_{\mu_\ell \rbrace}F,
\end{equation}
\noindent with conformal dimension $\Delta_{\rm{class.}} = 4 + \ell$ and spin $\ell$.
In a similar way, one can write the full dimension $\Delta^{even\,\ell}_{T^{\mu}_{\mu}} = 4 + \ell$,
and now Eq.(\ref{beta7}) can be written as:
\begin{equation}\label{beta8}
\Delta^{even\, \ell}_{F^2} = 4 + \ell + \beta'(\lambda) - \frac{2}{\lambda} \beta(\lambda).
\end{equation}
Using (\ref{dim}), the full dimension for a glueball state with higher even spin $\ell$, taking into account the beta function is:
\begin{equation}\label{dfullp}
R^2M^2_{5} = \Delta^{even\, \ell}_{F^2} (\Delta^{even\, \ell}_{F^2} -4)
\end{equation}
or explicitly:
\begin{equation}\label{r2par}
R^2M^2_{5} = \left[ 4 + \ell + \beta'(\lambda) - \frac{2}{\lambda} \beta(\lambda)\right] \left[ \ell + \beta'(\lambda) - \frac{2}{\lambda} \beta(\lambda)\right]\,; \qquad ({\rm even}\, \ell)\,.
\end{equation}
One has to replace \eqref{r2par} in the Schr\"odinger-like equation \eqref{equ_7_1} to get the masses for even glueball states.
On the other hand, for odd spin glueballs, as also shown in Section \ref{dsw} the operator ${\cal O}_6$ that describes the glueball state $1^{--}$ is given by:
\begin{equation}
{\cal O}_{6} =SymTr\left( {\tilde{F}_{\mu \nu}}F^2\right),
\end{equation}
\noindent and after the insertion of symmetrised covariant derivatives one gets:
\begin{equation}\label{6+J}
{\cal O}_{6 + J} = SymTr\left( {\tilde{F}_{\mu \nu}}F D_{\lbrace\mu_1 \cdots} D_{\mu_\ell \rbrace}F\right),
\end{equation}
\noindent with conformal dimension $\Delta_{\rm{class.}} = 6 + \ell$ and spin $1+\ell$.
Then one can write the full dimension $\Delta^{odd\, \ell}_{T^{\mu}_{\mu}} = 6 + \ell$\,,
and now Eq.(\ref{beta7}) becomes:
\begin{equation}\label{beta8_1}
\Delta^{odd\, \ell}_{F^2} = 6 + \ell + \beta'(\lambda) - \frac{2}{\lambda} \beta(\lambda).
\end{equation}
Using (\ref{dim}), one can write the full dimension for a glueball state with higher odd spin $\ell$, taking into account the beta function:
\begin{equation}\label{dfulli}
R^2M^2_{5} = \Delta^{odd\, \ell}_{F^2} (\Delta^{odd\, \ell}_{F^2} -4)
\end{equation}
and explicitly:
\begin{equation}\label{r2impar}
R^2M^2_{5} = \left[ 6 + \ell + \beta'(\lambda) - \frac{2}{\lambda} \beta(\lambda)\right] \left[ 2 + \ell + \beta'(\lambda) - \frac{2}{\lambda} \beta(\lambda)\right]\,; \qquad ({\rm odd}\, \ell).
\end{equation}
One has to replace \eqref{r2impar} in \eqref{equ_7_1} to get the masses of the odd spin glueball states.
At this point, let us discuss the QCD beta function chosen for this work, which was proposed in \cite{Alanen:2009na}:
\begin{equation}\label{beta11}
\beta(\lambda) = - b_0 \lambda^2 \left[ 1 - \frac{\lambda}{\lambda_{\ast}}\right] \;;\;\;\; {\rm for}\;\;\; \lambda_{\ast} > 0\;.
\end{equation}
This beta function fulfils the necessary IR and UV requirements: at the IR fixed point $\lambda = \lambda_{\ast}$ it vanishes, it reproduces the perturbative $1$-loop behaviour $\beta(\lambda) \sim - b_0 \lambda^2$ in the ultraviolet, and it behaves as $\beta (\lambda) \sim + \lambda^3$ at large coupling.
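For this particular beta function the anomalous combination in Eq.\eqref{beta7} simplifies to $\beta'(\lambda) - \tfrac{2}{\lambda}\beta(\lambda) = b_0\lambda^2/\lambda_{\ast}$, so that $\Delta_{F^2} = 4 + b_0 \lambda^2/\lambda_{\ast}$. A quick numerical check of this identity (our own, with illustrative parameter values):

```python
# Check that beta'(lam) - 2 beta(lam)/lam = b0 lam^2 / lam_star
# for beta(lam) = -b0 lam^2 (1 - lam/lam_star).
b0, lam_star = 1.0, 350.0  # illustrative values

def beta(lam):
    return -b0 * lam**2 * (1.0 - lam / lam_star)

def anomalous(lam, h=1e-5):
    beta_prime = (beta(lam + h) - beta(lam - h)) / (2.0 * h)  # central difference
    return beta_prime - 2.0 * beta(lam) / lam

for lam in (1.0, 10.5, 100.0):
    print(lam, anomalous(lam), b0 * lam**2 / lam_star)
```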
Since one can relate the holographic or radial coordinate $z$ of the $AdS_5$ space with $\mu^{-1}$ where $\mu$ was defined as the renormalisation group scale, one can write the relationship between the beta function and coordinate $z$, given by:
\begin{equation}\label{beta10}
\beta(\lambda(\mu)) = \mu \frac{d \lambda(\mu)}{d \mu} \Rightarrow \beta(\lambda(z)) = - z \frac{d \lambda(z)}{dz},
\end{equation}
\noindent where the integration constant will be fixed by $\lambda(z_0) \equiv \lambda_0$ at a particular energy scale $z_0$.
Eq. (\ref{beta10}) can also be solved exactly for this beta function, so that:
\begin{equation}\label{beta12}
\lambda(z) = \frac{\lambda_{\ast}}{1 + W\left(\left( \frac{z_0}{z}\right)^{b_0 \lambda_{\ast}} \left( \frac{\lambda_{\ast} - \lambda_0}{\lambda_0}\right) \exp\left( \frac{\lambda_{\ast} - \lambda_0}{\lambda_0}\right)\right) }
\end{equation}
\noindent where $W$ is the Lambert function and $\lambda(z_0) = \lambda_0$ fixes the integration constant.
This equation leads to the expected QCD asymptotic behaviour at short distances when $z$ is close to the boundary $(z\to 0)$:
\begin{equation}
\lambda(z) \sim - 1/(b_0 \ln z).
\end{equation}
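Eq.\eqref{beta12} is straightforward to evaluate numerically. The sketch below (our own, with an illustrative value of $b_0$ that the discussion above does not fix) solves $w e^w = y$ by Newton iteration and checks that $\lambda(z_0)=\lambda_0$ and that $\lambda(z)$ runs from small values in the UV towards $\lambda_{\ast}$ in the IR:

```python
import math

b0, lam0, lam_star, z0 = 0.01, 10.5, 350.0, 1.0  # b0 is illustrative

def lambert_w(y):
    # Principal branch for y > e, via Newton iteration on w + ln(w) = ln(y)
    ln_y = math.log(y)
    w = max(ln_y, 1e-8)
    for _ in range(100):
        w -= (w + math.log(w) - ln_y) / (1.0 + 1.0 / w)
    return w

def lam(z):
    # Eq. (beta12) for the running 't Hooft coupling
    c = (lam_star - lam0) / lam0
    y = (z0 / z) ** (b0 * lam_star) * c * math.exp(c)
    return lam_star / (1.0 + lambert_w(y))

print(lam(z0))  # recovers lambda_0 = 10.5, since W(x e^x) = x
print(lam(0.1 * z0) < lam0 < lam(10.0 * z0) < lam_star)  # True
```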
Finally, replacing (\ref{beta11}) and (\ref{beta12}) in Eqs. (\ref{r2par}) and (\ref{r2impar}), solving numerically the Schr\"odinger-like equation \eqref{equ_7_1} and using suitable values for $k$, $\lambda_0$ and $\lambda_{\ast}$, one can get the masses of even and odd glueball states, respectively.
The results obtained for the masses of even and odd glueball states are presented in Table \ref{t7} and Table \ref{t8}, respectively.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c||c|c|c|c|c|c|} \hline
\multicolumn{3}{|c||}{Parameters}
& \multicolumn{6}{c|}{Glueball States $J^{PC}$ Masses} \\ \hline
\cline{4-9}
\hline $k$ & $\lambda_0$ & $\lambda_{\ast}$ & $0^{++}$ & $2^{++} $ & $4^{++}$ & $6^{++}$ & $8^{++}$ & $10^{++}$ \\
\hline \hline
\hline
$ 0.09$ & $10.5$ & $350$ & 0.79 & 2.13 & 3.28 & 4.39 & 5.48 & 6.57 \\ \hline
\end{tabular}
\caption{\em Masses {\rm (GeV)} for the glueball states $J^{PC}$ with even $\ell$ with $P=C=+1$ calculated numerically from dynamical corrections to the anomalous holographic softwall model using \eqref{equ_7_1} with mass relationship \eqref{r2par} and the beta function with an IR fixed point at finite coupling, \eqref{beta11}, using suitable values for the parameters $k$ {\rm (GeV$^{2}$)}, $\lambda_0$ and $\lambda_{\ast}$ (dimensionless).}
\label{t7}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c||c|c|c|c|c|} \hline
\multicolumn{3}{|c||}{Parameters}
& \multicolumn{5}{c|}{Glueball States $J^{PC}$ Masses} \\ \hline
$k$ & $\lambda_0$ & $\lambda_{\ast}$ & $1^{--}$ & $3^{--}$ & $5^{--}$ & $7^{--}$ & $9^{--}$ \\
\hline \hline
$ 0.09$ & $10.5$ & $350$ & 2.72 & 3.84 & 4.94 & 6.03 & 7.11 \\ \hline
\end{tabular}
\caption{\em Masses {\rm (GeV)} for the glueball states $J^{PC}$ with odd $\ell$ and $P=C=-1$ calculated numerically from dynamical corrections to the anomalous holographic softwall model using \eqref{equ_7_1} with mass relationship \eqref{r2impar} and the beta function with an IR fixed point at finite coupling, \eqref{beta11}, using suitable values for the parameters $k$ {\rm (GeV$^{2}$)}, $\lambda_0$ and $\lambda_{\ast}$ (dimensionless).}
\label{t8}
\end{table}
From Table \ref{t7} one can derive the following Regge trajectory related to the pomeron:
\begin{equation}\label{pad}
J(m^2) = (0.23 \pm 0.02) \, m^2 + (0.9 \pm 0.5) \,,
\end{equation}
in agreement with the one presented in \eqref{land}. The errors for the slope and the intercept come from the linear fit.
In the same way, from Table \ref{t8} one can derive the following Regge trajectory related to the odderon:
\begin{equation}\label{oad}
J(m^2) = (0.18 \pm 0.01)\, m^2 + (0.1 \pm 0.4) \, ,
\end{equation}
in agreement with the one presented in \eqref{nr_odderon}. The errors for the slope and the intercept come from the linear fit.
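Both linear fits can be reproduced directly from the tabulated masses. As an illustration, the following sketch recovers the pomeron trajectory \eqref{pad} by a least-squares fit of the spins against the squared masses of Table \ref{t7}:

```python
import numpy as np

# Spins and masses (GeV) of the even-spin glueball states from Table t7.
J = np.array([0, 2, 4, 6, 8, 10])
m = np.array([0.79, 2.13, 3.28, 4.39, 5.48, 6.57])

# Linear Regge trajectory J(m^2) = alpha' m^2 + alpha_0.
slope, intercept = np.polyfit(m**2, J, 1)
print(f"J(m^2) = {slope:.2f} m^2 + {intercept:.1f}")
# Reproduces the quoted fit: slope ~ 0.23 GeV^-2, intercept ~ 0.9.
```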
\section{Conclusions} \label{con}
In this work we presented two examples of how type IIB superstring theory, via the AdS/CFT correspondence, can be used to investigate hadronic physics away from the perturbative regime.
The approach used in this work was based on two dynamical versions of the holographic softwall model: the first one analytically solvable, and the second one numerically solvable and taking into account the corrections coming from the anomalous dimension. Both approaches give results for the masses of higher even and odd spin glueballs, as well as Regge trajectories associated with the pomeron and the odderon, compatible with those found in the literature.
\section{Acknowledgements}
E.F.C. is partially supported by PROPGPEC-CPII. E.F.C. would like to thank Henrique Boschi-Filho for his suggestions and comments.
\section{Introduction}
Many recently introduced machine learning techniques in the context of dynamical problems have much in common with system identification procedures developed in the last decades for applications in signal treatment, circuit theory and, in general, systems theory. In these problems, system knowledge is only available in the form of input-output observations and the task consists in finding or {\it learning} a model that approximates it, mainly for forecasting or classification purposes. An important goal in that context is finding families of transformations that are both computationally feasible and versatile enough to reproduce a rich number of patterns just by modifying a limited number of procedural parameters.
The versatility or flexibility of a given machine learning paradigm is usually established by proving its {\bfseries\itshape universality}. We say that a family of transformations is universal when its elements can approximate as accurately as one wants all the elements of a sufficiently rich class containing, for example, all continuous or even all measurable transformations. In the language of learning theory, this is equivalent to the possibility of making approximation errors arbitrarily small \cite{cucker:smale, Smale2003, cucker:zhou:book}. In more mathematical terms, the universality of a family amounts to its density in a rich class of the type mentioned above. Well-known universality results are, for example, the uniform approximation properties of feedforward neural networks established in \cite{cybenko, hornik, hornik1991} in the context of static continuous and, more generally, measurable real functions.
A first solution to this problem in the dynamic context was pioneered in the works of Fr\'echet \cite{frechet:volterra_series} and Volterra \cite{volterra:book} one century ago when they proved that finite Volterra series can be used to uniformly approximate continuous functionals defined on compact sets of continuous functions. These results were further extended in the 1950s by the MIT school led by N. Wiener \cite{wiener:book, brilliant:volterra, george:volterra} but always under compactness assumptions on the input space and the time interval in which inputs are defined. A major breakthrough was the generalization to infinite time intervals carried out by Boyd and Chua in \cite{Boyd1985}, who formulated a uniform approximation theorem using Volterra series for operators endowed with the so called {\bfseries\itshape fading memory property} on continuous time inputs. An input/output system is said to have fading memory when the outputs associated to inputs that are close in the recent past are close, even when those inputs may be very different in the distant past.
In this paper we address the universality or the uniform approximation problem for transformations or {\bfseries\itshape filters} of discrete time signals of infinite length that have the fading memory property. The approximating set that we use is generated by nonlinear state-space transformations and is referred to as
{\bfseries\itshape reservoir computers (RC)}~\cite{jaeger2001, Jaeger04, maass1, maass2, Crook2007, verstraeten, lukosevicius} or {\bfseries\itshape reservoir systems}. These are special types of recurrent neural networks determined by two maps, namely a {\bfseries\itshape reservoir} $F: \mathbb{R} ^N\times \mathbb{R} ^n\longrightarrow \mathbb{R} ^N$, $n,N \in \mathbb{N} $, and a {\bfseries\itshape readout} map $h: \mathbb{R}^N \rightarrow \mathbb{R}^d$ that under certain hypotheses transform (or filter) an infinite discrete-time input ${\bf z}=(\ldots, {\bf z} _{-1}, {\bf z} _0, {\bf z} _1, \ldots) \in (\mathbb{R}^n) ^{\Bbb Z } $ into an output signal ${\bf y} \in (\mathbb{R} ^d)^{\Bbb Z } $ of the same type using the state-space transformation given by:
\begin{empheq}[left={\empheqlbrace}]{align}
\mathbf{x} _t &=F(\mathbf{x}_{t-1}, {\bf z} _t),\label{reservoir equation}\\
{\bf y} _t &= h (\mathbf{x} _t), \label{readout}
\end{empheq}
where $t \in \Bbb Z $ and the dimension $N \in \mathbb{N} $ of the state vectors $\mathbf{x} _t \in \mathbb{R} ^N $ is referred to as the number of virtual {\bfseries\itshape neurons} of the system. When a RC system has a uniquely determined filter associated to it, we refer to it as the {\bfseries\itshape RC filter}.
An important advantage of the RC approach is that, under certain hypotheses, intrinsically infinite dimensional problems regarding filters can be translated into analogous questions related to the reservoir and readout maps that generate them and that are defined on much simpler finite dimensional spaces. This strategy has already been used in the literature in relation to the universality question in, for instance, \cite{sandberg:esn, sandberg:esn:paper, Matthews:thesis, Matthews1993, perryman:thesis, Stubberuda}.
The universal approximation properties of feedforward neural networks~\cite{komogorovnn, arnoldnn, sprecherthesis, sprecherI, sprecherII, cybenko, hornik, hornik:derivatives, hornik1991, hornik:new:results, rueschendorf:thomsen} were used in those works to find neural network-based families of filters that are dense in the set of approximately finite memory filters with inputs defined on the positive real half-line. Other works in connection with the universality problem in the dynamic context are~\cite{Maass2000, maass1, corticalMaass, MaassUniversality}, where RC is referred to as Liquid State Machines. In those references, and in the same vein as in \cite{Boyd1985}, universal families of RC systems with inputs defined on infinite continuous time intervals were identified in the fading memory category as a corollary of the Stone-Weierstrass theorem. This approach required invoking the natural hypotheses associated to this result, like the pointwise separation property or the compactness of the input space, which was obtained as a consequence of the fading memory property. Another strand of interesting literature that we will not explore in this work has to do with the Turing computability capabilities of the systems of the type that we just introduced; recent relevant works in this direction are \cite{kilian:1996, siegelmann:1997, cabessa:2015, cabessa:2016}, and references therein.
The main contribution of this paper is showing that a particularly simple type of RC systems called {\bfseries\itshape echo state networks (ESNs)} can be used as {\it universal approximants in the context of discrete-time fading memory filters with uniformly bounded inputs defined on negative infinite times}. ESNs are RC systems of the form \eqref{reservoir equation}-\eqref{readout} given by:
\begin{empheq}[left={\empheqlbrace}]{align}
\mathbf{x} _t &=\sigma \left(A\mathbf{x}_{t-1}+ C{\bf z} _t+ \boldsymbol{\zeta}\right),\label{esn reservoir equation}\\
{\bf y} _t &= W\mathbf{x} _t. \label{esn readout}
\end{empheq}
In these equations, $C \in \mathbb{M}_{N, n} $ is called the {\bfseries\itshape input mask}, $\boldsymbol{\zeta} \in \mathbb{R} ^N $ is the {\bfseries\itshape input shift}, and $A \in \mathbb{M}_{N,N} $ is referred to as the {\bfseries\itshape reservoir matrix}.
The map $\sigma $ in the state-space equation \eqref{esn reservoir equation} is constructed by componentwise application of a sigmoid function (like the hyperbolic tangent or the logistic function) and is called the {\bfseries\itshape activation function}. Finally, the readout map is linear in this case and implemented via the {\bfseries\itshape readout matrix} $W \in \mathbb{M}_{d, N}$. ESNs already appear in \cite{Matthews:thesis, Matthews1993} under the name of {\it recurrent networks} but it was only more recently, in the works of H. Jaeger \cite{Jaeger04}, that their outstanding performance in machine learning applications was demonstrated.
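As an illustration of Eqs. \eqref{esn reservoir equation}--\eqref{esn readout}, the following sketch iterates a randomly generated ESN whose reservoir matrix is rescaled so that $\|A\|_2 < 1$. Since $\tanh$ is $1$-Lipschitz, the state map is then a contraction in the state variable and the influence of the initial condition dies out, so the state (and hence the output $W\mathbf{x}_t$) is determined by the input alone. This is an illustrative construction of the echo state property, not the formal sufficient conditions discussed later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 50, 1                       # neurons, input dimension

# Random ESN parameters; rescaling A to spectral norm 0.9 makes the
# state map x |-> tanh(A x + C z + zeta) a contraction in x.
A = rng.standard_normal((N, N))
A *= 0.9 / np.linalg.norm(A, 2)
C = rng.standard_normal((N, n))
zeta = rng.standard_normal(N)

def esn_state(z, x0):
    """Iterate x_t = tanh(A x_{t-1} + C z_t + zeta) along an input z."""
    x = x0.copy()
    for zt in z:
        x = np.tanh(A @ x + C @ np.atleast_1d(zt) + zeta)
    return x

z = rng.uniform(-1, 1, 200)        # a (truncated) bounded input sequence
xa = esn_state(z, np.zeros(N))
xb = esn_state(z, rng.standard_normal(N))
# Different initial states converge to the same trajectory; the gap is
# contracted by a factor 0.9 at every step.
print(np.linalg.norm(xa - xb))     # ~ 0
```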
The strategy that we follow to prove that statement is a combination of what the literature refers to as {\bfseries\itshape internal} and {\bfseries\itshape external approximation}. External approximation is the construction of a RC filter that approximates a given (not necessarily RC) filter. In the internal approximation problem, one is given a RC filter and builds another RC filter that approximates it by finding reservoir and readout maps that are close to those of the given one. In the external part of our proof we use a previous work \cite{RC6} where we constructed a family of RC systems with linear readouts that we called {\bfseries\itshape non-homogeneous state affine systems (SAS)}. We showed in that paper that the RC filters associated to SAS systems uniformly approximate any discrete-time fading memory filter with uniformly bounded inputs defined on negative infinite times. Regarding the internal approximation, we show that any RC filter, in particular SAS filters, can be approximated by ESN filters using the universal approximation property of neural networks. These two facts put together allow us to conclude that ESN filters are capable of uniformly approximating any discrete-time fading memory filter with uniformly bounded inputs. We emphasize that this result is shown exclusively for deterministic inputs using a uniform approximation criterion; an extension of this statement that accommodates stochastic inputs and $L ^p $ approximation criteria can be found in \cite{RC8}.
The paper is structured in three sections:
\begin{itemize}
\item Section \ref{Fading memory is a topological property} introduces the notation that we use all along the paper and, more importantly, specifies the topologies and Banach space structures that we need in order to talk about continuity in the context of discrete-time filters. It is worth mentioning that we characterize the fading memory property as a continuity condition of the filters that have it with respect to the product topology in the input space. In other words, {\it the fading memory property is not a metric property, as it is usually presented in the literature, but a topological one}. An important conceptual consequence of this fact is that the fading memory property does not contain any information about the rate at which systems that have it ``forget'' inputs. Several corollaries that are very instrumental in the developments of the paper can be formulated as a consequence of this fact.
\item Section \ref{Internal approximation of reservoir filters} contains a collection of general results in relation with the properties of the RC systems generated by continuous reservoir maps. In particular, we provide conditions that guarantee that a unique reservoir filter can be associated to them (the so called {\bfseries\itshape echo state property}) and we identify situations in which those filters are themselves continuous (they satisfy automatically the fading memory property). We also point out large classes of RC systems for which internal approximation is possible, that is, if the RC systems are close then so are the associated reservoir filters.
\item Section \ref{Echo state networks as universal uniform approximants} shows that {\it echo state networks are universal uniform approximants in the category of discrete-time fading memory filters with uniformly bounded inputs}.
\end{itemize}
\section{Continuous and fading memory filters}
\label{Fading memory is a topological property}
This section introduces the notation of the paper as well as general facts about filters and functionals needed in the developments that follow. The new results are contained in Section \ref{Continuity and the fading memory property}, where we characterize the fading memory property as a continuity condition when the sequence spaces where inputs and outputs are defined are uniformly bounded and are endowed with the product topology. This feature makes this property independent of the weighting sequences that are usually introduced to define it.
\subsection{Notation}
\paragraph{Vectors and matrices.}
A column vector is denoted by a bold lower case symbol like $\mathbf{r}$ and $\mathbf{r} ^\top $ indicates its transpose. Given a vector $\mathbf{v} \in \mathbb{R} ^n $, we denote its entries by $v_i$, with $i \in \left\{ 1, \dots, n
\right\} $; we also write $\mathbf{v}=(v _i)_{i \in \left\{ 1, \dots, n\right\} }$.
We denote by $\mathbb{M}_{n , m }$ the space of real $n\times m$ matrices with $m, n \in \mathbb{N} $. When $n=m$, we use the symbol $\mathbb{M} _n $ to refer to the space of square matrices of order
$n$. Given a matrix $A \in \mathbb{M} _{n , m} $, we denote its components by $A _{ij} $ and we write $A=(A_{ij})$, with $i \in \left\{ 1, \dots, n\right\} $, $j \in \left\{ 1, \dots, m\right\} $. Given a vector $\mathbf{v} \in \mathbb{R} ^n $, the symbol $\| \mathbf{v}\| $ stands for any norm in $\mathbb{R} ^n $ (they are all equivalent) and is not necessarily the Euclidean one, unless it is explicitly mentioned. The open balls with respect to a given norm $\left\|\cdot \right\| $, center $\mathbf{v} \in \mathbb{R} ^n $, and radius $r>0$ will be denoted by $B_{\left\|\cdot \right\|}(\mathbf{v}, r) $; their closures by $\overline{B_{\left\|\cdot \right\|}(\mathbf{v}, r)} $. For any $A \in \mathbb{M} _{n , m} $, $\|A\| _2 $ denotes its matrix norm induced by the Euclidean norms in $\mathbb{R}^m $ and $\mathbb{R} ^n $, and satisfies~\cite[Example 5.6.6]{horn:matrix:analysis} that $\|A\| _2=\sigma_{{\rm max}}(A)$, with $\sigma_{{\rm max}}(A)$ the largest singular value of $A$. $\|A\| _2 $ is sometimes referred to as the spectral norm of $A$. The symbol $\vertiii{\cdot}$ is reserved for the norms of operators or functionals defined on infinite dimensional spaces.
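The identity $\|A\|_2 = \sigma_{\rm max}(A)$ and the defining bound $\|A \mathbf{v}\| \leq \|A\|_2 \|\mathbf{v}\|$ (Euclidean norms) can be checked numerically on a small example:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
spec = np.linalg.norm(A, 2)                     # induced (spectral) norm
sigma_max = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value
print(spec, sigma_max)                          # identical values

# The induced norm bounds the stretching of any vector.
v = np.array([1.0, -2.0])
print(np.linalg.norm(A @ v) <= spec * np.linalg.norm(v) + 1e-12)  # True
```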
\paragraph{Sequence spaces.}
$\mathbb{N}$ denotes the set of natural numbers with the zero element included. $\Bbb Z $ (respectively, $\Bbb Z _+ $ and $\Bbb Z _- $) are the integers (respectively, the positive and the negative integers). The symbol $(\mathbb{R}^n) ^{\Bbb Z } $ denotes the set of infinite real sequences of the form ${\bf z}=(\ldots, {\bf z} _{-1}, {\bf z} _0, {\bf z} _1, \ldots) $, $ {\bf z} _i \in \mathbb{R}^n $, $i \in \Bbb Z $; $(\mathbb{R}^n) ^{\Bbb Z _ -} $ and $(\mathbb{R}^n) ^{\Bbb Z _ +} $ are the subspaces consisting of, respectively, left and right infinite sequences: $(\mathbb{R}^n) ^{\Bbb Z _ -}=\{{\bf z}=(\ldots, {\bf z} _{-2}, {\bf z} _{-1}, {\bf z} _0) \mid {\bf z} _i \in \mathbb{R}^n, i \in \mathbb{Z}_{-}\}$, $(\mathbb{R}^n) ^{\Bbb Z _ +}=\{{\bf z}=({\bf z} _0, {\bf z} _1, {\bf z} _2, \ldots) \mid {\bf z} _i \in \mathbb{R}^n, i \in \mathbb{Z}_{+}\}$. Analogously, $(D_n) ^{\Bbb Z } $, $(D_n) ^{\Bbb Z _ -} $, and $(D_n) ^{\Bbb Z _ +} $ stand for (semi-)infinite sequences with elements in the subset $D_n\subset \mathbb{R}^n $. In most cases we endow these infinite product spaces with the Banach space structures associated to one of the following two norms:
\begin{itemize}
\item The {\bfseries\itshape supremum norm}: define $\| {\bf z}\| _{\infty}:= {\rm sup}_{ t \in \Bbb Z} \left\{\| {\bf z} _t
\|\right\}$. The symbols $\ell ^{\infty}(\mathbb{R}^n) $ and $\ell_{\pm} ^{\infty}(\mathbb{R}^n) $ are used to denote the Banach spaces formed by the elements in the corresponding infinite product spaces that have a finite supremum norm.
\item The {\bfseries\itshape weighted norm}: let $w : \mathbb{N} \longrightarrow (0,1] $ be a decreasing sequence with zero limit. We define the associated {\bfseries\itshape weighted norm } $\| \cdot \| _w $ on $(\mathbb{R}^n)^{\Bbb Z _{-}}$ associated to the {\bfseries\itshape weighting sequence} $w$ as the map:
\begin{eqnarray*}
\begin{array}{cccc}
\| \cdot \| _w :& (\mathbb{R}^n)^{\Bbb Z _{-}} & \longrightarrow & \overline{\mathbb{R}^+}\\
&{\bf z} &\longmapsto &\| {\bf z} \| _w:= \sup_{t \in \Bbb Z_-}\{\| {\bf z}_t w_{-t}\|\}.
\end{array}
\end{eqnarray*}
Proposition \ref{lw is a banach space} in Appendix \ref{lw is a banach space appendix} shows that the space
\begin{equation*}
\ell ^{w}_-({\Bbb R}^n):= \left\{{\bf z}\in \left(\mathbb{R}^n\right)^{\mathbb{Z}_{-}}\mid \| {\bf z}\| _w< \infty\right\},
\end{equation*}
endowed with the weighted norm $\| \cdot \| _w $, also forms a Banach space.
\end{itemize}
It is straightforward to show that $\left\| {\bf z} \right\|_{w}\leq \left\| {\bf z} \right\|_{\infty} $, for all ${\bf z} \in (\mathbb{R}^n) ^{\Bbb Z _ -} $. This implies that $\ell_{-} ^{\infty}(\mathbb{R}^n) \subset \ell ^{w}_-({\Bbb R}^n)$ and that the inclusion map $(\ell_{-} ^{\infty}(\mathbb{R}^n), \left\| \cdot \right\|_{\infty}) \hookrightarrow (\ell ^{w}_-({\Bbb R}^n), \left\| \cdot \right\|_{w})$ is continuous.
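The weighted norm is easy to compute on truncated left-infinite sequences. A minimal sketch with the illustrative weighting sequence $w_k = 0.8^k$ (decreasing to zero, with values in $(0,1]$), verifying the inequality $\|{\bf z}\|_w \leq \|{\bf z}\|_{\infty}$ stated above:

```python
import numpy as np

# Illustrative weighting sequence w: N -> (0,1], decreasing to zero.
decay = 0.8
w = lambda k: decay ** k

def weighted_norm(z):
    """||z||_w = sup_{t<=0} |z_t| w_{-t} for a truncated left-infinite
    scalar sequence z = (z_{-(T-1)}, ..., z_{-1}, z_0), stored as an
    array whose last entry is z_0 (the most recent value)."""
    T = len(z)
    weights = np.array([w(T - 1 - i) for i in range(T)])  # w_{-t}
    return np.max(np.abs(z) * weights)

rng = np.random.default_rng(1)
z = rng.uniform(-2.0, 2.0, 100)
# Since every weight lies in (0,1], the weighted norm never exceeds
# the supremum norm of the truncation.
print(weighted_norm(z) <= np.max(np.abs(z)))   # True
```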
\subsection{Filters and systems}
\paragraph{Filters.}
Let $D_n \subset \mathbb{R}^n $ and $D_N \subset \mathbb{R}^N $. We refer to the maps of the type $U: (D _n) ^{\Bbb Z} \longrightarrow (D_N) ^{\Bbb Z} $ as {\bfseries\itshape filters} or {\bfseries\itshape operators} and to those like $H: (D _n) ^{\Bbb Z} \longrightarrow D_N $ (or $H: (D _n) ^{\Bbb Z_\pm} \longrightarrow D_N $) as $\mathbb{R}^N $-valued {\bfseries\itshape functionals}. These definitions will be sometimes extended to accommodate situations where the domains and the targets of the filters are not necessarily product spaces but just arbitrary subsets of $\left({\Bbb R}^n\right)^{\mathbb{Z}} $ and $\left({\Bbb R}^N\right)^{\mathbb{Z}} $ like, for instance, $\ell ^{\infty}(\mathbb{R}^n) $ and $\ell ^{\infty}(\mathbb{R}^N) $.
A filter $U: (D _n) ^{\Bbb Z} \longrightarrow (D_N) ^{\Bbb Z} $ is called {\bfseries\itshape causal} when for any two elements ${\bf z} , \mathbf{w} \in (D _n) ^{\Bbb Z} $ that satisfy that ${\bf z} _\tau = \mathbf{w} _\tau$ for any $\tau \leq t $, for a given $t \in \Bbb Z $, we have that $U ({\bf z}) _t= U ({\bf w}) _t $. Let $T_\tau:(D _n) ^{\Bbb Z} \longrightarrow(D _n) ^{\Bbb Z} $ be the {\bfseries\itshape time delay} operator defined by $T_\tau( {\bf z}) _t:= {\bf z}_{t- \tau}$. The filter $U$ is called {\bfseries\itshape time-invariant} (TI) when it commutes with the time delay operator, that is, $T_\tau \circ U=U \circ T_\tau $, for any $\tau\in \Bbb Z $ (in this expression, the two operators $T_\tau $ have to be understood as defined in the appropriate sequence spaces).
We recall (see for instance~\cite{Boyd1985}) that there is a bijection between causal time-invariant filters and functionals on $(D_n)^{\Bbb Z _-} $. Indeed, consider the sets $\mathbb{F}_{(D_n)^{\mathbb{Z}_{-}}}$ and $\mathbb{H}_{(D_n)^{\mathbb{Z}_{-}}}$ defined by
\begin{eqnarray}
\mathbb{F}_{(D_n)^{\mathbb{Z}_{-}}} &:= & \left\{U: (D_n) ^{\Bbb Z} \longrightarrow (\mathbb{R}^N) ^{\Bbb Z} \mid \mbox{$U$ is causal and time-invariant}\right\},\label{f set}\\
\mathbb{H}_{(D_n)^{\mathbb{Z}_{-}}} &:= & \left\{H: (D_n) ^{\Bbb Z_-} \longrightarrow \mathbb{R} ^N\right\}.\label{h set}
\end{eqnarray}
Then, given a time-invariant filter $U:(D_n) ^{\Bbb Z} \longrightarrow (\mathbb{R}^N) ^{\Bbb Z}$, we can associate to it a functional $H _U: (D_n) ^{\Bbb Z_-} \longrightarrow \mathbb{R} ^N$ via the assignment $H _U ({\bf z}):= U({\bf z} ^e) _0 $, where ${\bf z} ^e \in (\mathbb{R}^n)^{\Bbb Z } $ is an arbitrary extension of ${\bf z} \in (D_n)^{\Bbb Z _-} $ to $ (D_n)^{\Bbb Z } $. Let $ \boldsymbol{\Psi}: \mathbb{F}_{(D_n)^{\mathbb{Z}_{-}}} \longrightarrow \mathbb{H}_{(D_n)^{\mathbb{Z}_{-}}} $ be the map such that $\boldsymbol{\Psi}(U):= H _U $. Conversely, for any functional $H: (D_n) ^{\Bbb Z_-} \longrightarrow \mathbb{R} ^N$, we can define a time-invariant causal filter $U_H:(D_n) ^{\Bbb Z} \longrightarrow (\mathbb{R}^N) ^{\Bbb Z}$ by $U_H({\bf z}) _t:= H((\mathbb{P}_{\Bbb Z_-} \circ T _{-t}) ({\bf z})) $, where $T _{-t} $ is the $(-t)$-time delay operator and $\mathbb{P}_{\Bbb Z_-}: (\mathbb{R}^n)^{\Bbb Z} \longrightarrow (\mathbb{R}^n)^{\Bbb Z _-} $ is the natural projection. Let $ \boldsymbol{\Phi}: \mathbb{H}_{(D_n)^{\mathbb{Z}_{-}}} \longrightarrow \mathbb{F} _{(D_n)^{\mathbb{Z}_{-}}}$ be the map such that $\boldsymbol{\Phi}(H):= U_H $. It is easy to verify that:
\begin{eqnarray*}
\boldsymbol{\Psi}\circ \boldsymbol{\Phi}&=& \mathbb{I}_{\mathbb{H}_{(D_n)^{\mathbb{Z}_{-}}}} \quad \mbox{or, equivalently,} \quad H_{U _H}= H,\quad \mbox{for any functional} \quad H: (D_n)^{\Bbb Z _-} \rightarrow \mathbb{R}^N, \\
\boldsymbol{\Phi}\circ \boldsymbol{\Psi}&=& \mathbb{I}_{\mathbb{F}_{(D_n)^{\mathbb{Z}_{-}}}} \quad \mbox{or, equivalently,} \quad U_{H _U} = U, \quad \mbox{for any causal TI filter} \quad U: (D_n) ^{\Bbb Z} \rightarrow (\mathbb{R}^N) ^{\Bbb Z},
\end{eqnarray*}
that is, $\boldsymbol{\Psi} $ and $\boldsymbol{\Phi}$ are inverses of each other and hence are both bijections.
Additionally, we note that the sets $\mathbb{F}_{(D_n)^{\mathbb{Z}_{-}}} $ and $ \mathbb{H}_{(D_n)^{\mathbb{Z}_{-}}} $ are vector spaces with naturally defined operations and that $\boldsymbol{\Psi} $ and $\boldsymbol{\Phi}$ are linear maps between them, which allows us to conclude that $\mathbb{F}_{(D_n)^{\mathbb{Z}_{-}}} $ and $ \mathbb{H} _{(D_n)^{\mathbb{Z}_{-}}} $ are linearly isomorphic.
When a filter is causal and time-invariant, we work in many situations just with the restriction $U: (D_n) ^{\Bbb Z_-} \longrightarrow (D_N) ^{\Bbb Z_-} $ instead of the original filter $U: (D_n) ^{\Bbb Z} \longrightarrow (D_N) ^{\Bbb Z} $ without making the distinction, since the former uniquely determines the latter. Indeed, by definition, for any ${\bf z} \in ( D _n) ^{\Bbb Z} $ and $t \in \Bbb Z $:
\begin{equation}
\label{why we can restrict to zminus}
U ({\bf z})_t= \left(T_{-t} \left(U({\bf z})\right)\right)_0= \left(U \left(T_{-t}({\bf z})\right)\right)_0,
\end{equation}
where the second equality holds by the time-invariance of $U$ and the value in the right-hand side depends only on $\mathbb{P}_{\mathbb{Z}_{-}}\left(T_{-t}({\bf z})\right) \in (D_n) ^{\Bbb Z_-}$, by causality.
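The correspondence $H \mapsto U_H$ and the restriction \eqref{why we can restrict to zminus} can be illustrated on finite truncations. The sketch below uses a hypothetical fading-memory functional (an exponential moving average, not taken from the text) and checks that delaying the input delays the output, as time-invariance requires:

```python
import numpy as np

# A concrete causal, time-invariant example: the functional
# H(z) = sum_{k>=0} decay^k z_{-k}, evaluated on finite truncations.
decay = 0.5

def H(z):
    """z is a truncation (..., z_{-1}, z_0); the last entry is z_0."""
    powers = np.arange(len(z))[::-1]       # len-1, ..., 1, 0
    return np.sum(decay ** powers * np.asarray(z))

def U_H(z):
    """Induced filter: U_H(z)_t = H applied to the history up to t."""
    return np.array([H(z[: t + 1]) for t in range(len(z))])

rng = np.random.default_rng(2)
z = rng.uniform(-1.0, 1.0, 30)
y = U_H(z)

# Time-invariance: prepending a zero sample delays the input by one
# step, and the output is delayed identically (exactly so here, since
# the prepended zero contributes nothing to the sums).
y_shift = U_H(np.concatenate(([0.0], z)))
print(np.max(np.abs(y_shift[1:] - y)))     # ~ 0
```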
\paragraph{Reservoir systems and filters.}
Consider now the RC system determined by~\eqref{reservoir equation}--\eqref{readout} with reservoir map defined on subsets $D _N, D' _N \subset \mathbb{R}^N $ and $D_n\subset \mathbb{R}^n $, that is, $F: D _N\times D_n\longrightarrow D' _N$ and $h: D'_N \rightarrow \mathbb{R}^d$. There are two properties of reservoir systems that will be crucial in what follows:
\begin{itemize}
\item {\bfseries\itshape Existence of solutions} property: this property holds when for each ${\bf z} \in \left(D_n\right)^{\mathbb{Z}} $ there exists an element ${\bf x} \in \left(D _N\right)^{\mathbb{Z}} $ that satisfies the relation~\eqref{reservoir equation} for each $t \in \Bbb Z $.
\item {\bfseries\itshape Uniqueness of solutions} or {\bfseries\itshape echo state} property {\bfseries\itshape (ESP)}: it holds when the system has the existence of solutions property and, additionally, these solutions are unique.
\end{itemize}
The echo state property has received much attention in the context of echo state networks~\cite{jaeger2001, Jaeger04, Buehner:ESN, Yildiz2012, zhang:echo, Wainrib2016, Manjunath:Jaeger, gallicchio:esp}. We emphasize that these two properties are genuine conditions that are not automatically satisfied by all RC systems. Later on in the paper, Theorem \ref{uniform approx theorem} specifies sufficient conditions for them to hold.
The combination of the existence of solutions with the axiom of choice allows us to associate filters $U ^F: (D_n)^{\Bbb Z} \longrightarrow(D_N)^{\Bbb Z} $ to each RC system with that property via the reservoir map and~\eqref{reservoir equation}, that is, $U ^F ({\bf z}) _t := \mathbf{x} _t \in \mathbb{R} ^N $, for all $t \in \Bbb Z $. We will denote by $U ^F _h: (D_n)^{\Bbb Z} \longrightarrow(D_d)^{\Bbb Z} $ the corresponding filter determined by the entire reservoir system, that is, $U ^F_h ({\bf z}) _t =h \left(U ^F ({\bf z}) _t\right):= {\bf y} _t \in \mathbb{R} ^d$. $ U ^F_h $ is said to be a {\bfseries\itshape reservoir filter} or a {\bfseries\itshape response map} associated to the RC system~\eqref{reservoir equation}--\eqref{readout}. The filters $U ^F $ and $U ^F _h $ are causal by construction. A unique reservoir filter can be associated to a reservoir system when the echo state property holds. We warn the reader that reservoir filters appear in the literature only in the presence of the ESP; that is why we sometimes make the distinction between those that come from reservoir systems that do and do not satisfy the ESP by referring to them as {\bfseries\itshape reservoir filters} and {\bfseries\itshape generalized reservoir filters}, respectively.
In the systems theory literature, the RC equations~\eqref{reservoir equation}--\eqref{readout} are referred to as the {\bfseries\itshape state-variable} or the {\bfseries\itshape internal representation} point of view and associated filters as the {\bfseries\itshape external representation} of the system.
The next proposition shows that in the presence of the ESP, reservoir filters are not only causal but also time-invariant. In that situation we can hence associate to $U ^F _h $ a {\bfseries\itshape reservoir functional} $H^F _h : (D _n)^{\Bbb Z _-} \longrightarrow \mathbb{R}^d$ determined by $H^F _h:=H_{U ^F _h} $.
\begin{proposition}
\label{esp implies ti}
Let $D_N \subset \mathbb{R}^N $, $D_n \subset \mathbb{R}^n $, and $F:D_N \times D_n \longrightarrow D_N $ be a reservoir map that satisfies the echo state property for all the elements in $\left(D_n\right)^{\mathbb{Z}} $. Then, the corresponding filter $U ^F: \left(D_n\right)^{\mathbb{Z}} \longrightarrow \left(D_N\right)^{\mathbb{Z}} $ is causal and time-invariant.
\end{proposition}
We emphasize that, as can be seen in the proof in the appendix, it is the autonomous character of the reservoir map that guarantees time-invariance in the previous proposition. An explicit dependence on time in that map would spoil that conclusion.
\paragraph{Reservoir system morphisms.}
Let $N _1, N _2, n, d \in \mathbb{N} $ and let $F _1: D _{N _1}\times D_n\longrightarrow D_{N _1}$, $h_1: D_{N _1} \rightarrow \mathbb{R}^d$ and $F_2: D _{N _2}\times D_n\longrightarrow D_{N _2}$, $h_2: D_{N _2} \rightarrow \mathbb{R}^d$ be two reservoir systems. We say that a map $f:D _{N _1} \longrightarrow D_{N _2} $ is a morphism between the two systems when it satisfies the following two properties:
\begin{description}
\item [(i)] {\bfseries\itshape Reservoir equivariance:}
$
f(F _1(\mathbf{x} _1, {\bf z}))=F _2(f(\mathbf{x}_1), {\bf z}),
$
for all $ \mathbf{x}_1 \in D _{N _1}$, and ${\bf z} \in D _n$.
\item [(ii)] {\bfseries\itshape Readout invariance:} $h _1(\mathbf{x}_1)= h _2(f (\mathbf{x}_1)) $, for all $ \mathbf{x}_1 \in D _{N _1}$.
\end{description}
When the map $f$ has an inverse and it is also a morphism between the systems determined by the pairs $(F _2, h _2) $ and $(F _1, h _1) $ we say that $f$ is a {\bfseries\itshape system isomorphism} and that the systems $(F _1, h _1) $ and $(F _2, h _2) $ are {\bfseries\itshape isomorphic}. Given a system $F _1: D _{N _1}\times D_n\longrightarrow D_{N _1}$, $h_1: D_{N _1} \rightarrow \mathbb{R}^d$ and a bijection $f:D _{N _1} \longrightarrow D_{N _2} $, the map $f$ is a system isomorphism with respect to the system $F_2: D _{N _2}\times D_n\longrightarrow D_{N _2}$, $h_2: D_{N _2} \rightarrow \mathbb{R}^d$ defined by
\begin{eqnarray}
F_2(\mathbf{x} _2, {\bf z}) &:= & f \left(F _1(f ^{-1}(\mathbf{x} _2), {\bf z})\right), \quad \mbox{for all} \quad \mathbf{x} _2\in D _{N _2}, {\bf z} \in D _n,\label{isom system 1}\\
h_2(\mathbf{x} _2) &:= & h _1(f ^{-1}(\mathbf{x} _2)), \quad \mbox{for all} \quad \mathbf{x} _2\in D _{N _2}.\label{isom system 2}
\end{eqnarray}
The proof of the following statement is a straightforward consequence of the definitions.
\begin{proposition}
\label{morphisms consequences}
Let $F_1: D _{N _1}\times D_n\longrightarrow D_{N _1}$, $h_1: D_{N _1} \rightarrow \mathbb{R}^d$ and $F_2: D _{N _2}\times D_n\longrightarrow D_{N _2}$, $h_2: D_{N _2} \rightarrow \mathbb{R}^d$ be two reservoir systems. Let $f:D _{N _1} \longrightarrow D_{N _2} $ be a morphism between them. Then:
\begin{description}
\item [(i)] If ${\bf x}^1 \in \left(D _{N _1}\right)^{\mathbb{Z}} $ is a solution for the reservoir map $F _1$ associated to the input ${\bf z} \in \left(D_n\right)^{\mathbb{Z}} $, then the sequence ${\bf x}^2 \in \left(D _{N _2}\right)^{\mathbb{Z}} $ defined by ${\bf x}^2_t:= f \left({\bf x}^1 _t\right) $, $t \in \Bbb Z $, is a solution for the reservoir map $F _2$ associated to the same input.
\item [(ii)] If $U_{h _1}^{F _1}$ is a generalized reservoir filter for the system determined by the pair $(F _1, h _1 )$ then it is also a reservoir filter for the system $(F _2, h _2)$. Equivalently, given a generalized reservoir filter $U_{h _1}^{F _1}$ determined by $(F _1, h _1)$, there exists a generalized reservoir filter $U_{h _2}^{F _2}$ determined by $(F _2, h _2)$ such that $U_{h _1}^{F _1}= U_{h _2}^{F _2}$.
\item [(iii)] If $f$ is a system isomorphism then the implications in the previous two points are reversible.
\end{description}
\end{proposition}
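A minimal numerical illustration of points (i) and (ii), for hypothetical scalar linear systems: $F_1(x,z) = ax + cz$ with readout $h_1(x) = x$, pushed forward through the bijection $f(x) = 2x$ via Eqs. \eqref{isom system 1}--\eqref{isom system 2}:

```python
import numpy as np

# Hypothetical scalar systems used only for illustration.
a, c = 0.5, 1.0
F1 = lambda x, z: a * x + c * z
h1 = lambda x: x
f = lambda x: 2.0 * x
f_inv = lambda x: x / 2.0
F2 = lambda x, z: f(F1(f_inv(x), z))   # pushed-forward reservoir map
h2 = lambda x: h1(f_inv(x))            # pushed-forward readout

rng = np.random.default_rng(3)
z = rng.uniform(-1.0, 1.0, 100)

# Build a solution of F1 and map it through f.
x1 = [0.0]
for zt in z:
    x1.append(F1(x1[-1], zt))
x2 = [f(xt) for xt in x1]

# (i) f maps solutions of F1 to solutions of F2 ...
ok_states = all(abs(x2[t + 1] - F2(x2[t], z[t])) < 1e-12
                for t in range(len(z)))
# (ii) ... and the associated filters coincide: h1(x1_t) = h2(x2_t).
ok_readout = all(abs(h1(x1[t]) - h2(x2[t])) < 1e-12
                 for t in range(len(z)))
print(ok_states, ok_readout)   # True True
```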
\subsection{Continuity and the fading memory property}
\label{Continuity and the fading memory property}
In agreement with the notation introduced in the previous section, in the following paragraphs the symbol $U : \left(D_n\right)^{\mathbb{Z}_-} \longrightarrow \left(D_N\right)^{\mathbb{Z}_-} $ stands for a causal and time-invariant filter or, strictly speaking, for the restriction of $U : \left(D_n\right)^{\mathbb{Z}} \longrightarrow \left(D_N\right)^{\mathbb{Z}} $ to $\mathbb{Z}_{-} $, see \eqref{why we can restrict to zminus}; $H _U:\left(D_n\right)^{\mathbb{Z}_-} \longrightarrow D_N $ is the associated functional, for some $D_N \subset \mathbb{R}^N $ and $D_n \subset \mathbb{R}^n $. Analogously, $U_H $ is the filter associated to a given functional $H$.
\begin{definition}[{\bfseries\itshape Continuous filters and functionals}]
\label{Continuous filters and functionals}
Let $D_N \subset \mathbb{R}^N $ and $D_n \subset \mathbb{R}^n $ be bounded subsets such that $\left(D_n\right)^{\mathbb{Z}_-} \subset \ell ^{\infty}_-(\mathbb{R}^n) $ and $\left(D_N\right)^{\mathbb{Z}_-} \subset \ell ^{\infty}_-(\mathbb{R}^N) $. A causal and time-invariant filter $U : \left(D_n\right)^{\mathbb{Z}_-} \longrightarrow \left(D_N\right)^{\mathbb{Z}_-} $ is called {\bfseries\itshape continuous} when it is a continuous map between the metric spaces $\left(\left(D_n\right)^{\mathbb{Z}_-}, \left\|\cdot \right\|_{\infty} \right)$ and $\left(\left(D_N\right)^{\mathbb{Z}_-}, \left\|\cdot \right\|_{\infty}\right)$. An analogous prescription can be used to define {\bfseries\itshape continuous functionals} $H : \left(\left(D_n\right)^{\mathbb{Z}_-}, \left\|\cdot \right\|_{\infty} \right) \longrightarrow \left(D_N, \left\|\cdot \right\|\right) $.
\end{definition}
The following proposition shows that when filters are causal and time-invariant, their continuity can be read out of their corresponding functionals and vice versa.
\begin{proposition}
\label{continuous functional iff filter}
Let $D_n \subset \mathbb{R}^n $ and $D_N \subset \mathbb{R}^N $ be such that $\left(D_n\right)^{\mathbb{Z}_-} \subset \ell ^{\infty}_-(\mathbb{R}^n) $ and $\left(D_N\right)^{\mathbb{Z}_-} \subset \ell ^{\infty}_-(\mathbb{R}^N) $. Let $U : \left(D_n\right)^{\mathbb{Z}_-} \longrightarrow \left(D_N\right)^{\mathbb{Z}_-} $ be a causal and time-invariant filter, $H : \left(D_n\right)^{\mathbb{Z}_-} \longrightarrow D_N$ a functional, and let $\boldsymbol{\Phi} $ and $\boldsymbol{\Psi} $ be the maps defined in the previous section. If the filter $U$ is continuous, then so is the associated functional $\boldsymbol{\Psi} (U) =:H _U $. Conversely, if $H$ is continuous, then so is $\boldsymbol{\Phi} (H) =:U_H $.
Define now the vector spaces
\begin{eqnarray}
\mathbb{F}_{(D_n)^{\mathbb{Z}_{-}}} ^{\infty}&:= & \left\{U: (D_n) ^{\Bbb Z_-} \longrightarrow \ell ^{\infty}_-(\mathbb{R}^N) \mid \mbox{$U$ is causal, time-invariant, and continuous}\right\},\label{f set continuous}\\
\mathbb{H}_{(D_n)^{\mathbb{Z}_{-}}} ^{\infty}&:= & \left\{H: (D_n) ^{\Bbb Z_-} \longrightarrow \mathbb{R} ^N \mid \mbox{$H$ is continuous}\right\}.\label{h set continuous}
\end{eqnarray}
The previous statements guarantee that the maps $\boldsymbol{\Psi} $ and $\boldsymbol{\Phi} $ restrict to the maps (that we denote with the same symbols)
$\boldsymbol{\Psi}:\mathbb{F}_{(D_n)^{\mathbb{Z}_{-}}} ^{\infty} \longrightarrow \mathbb{H}_{(D_n)^{\mathbb{Z}_{-}}} ^{\infty}$ and $\boldsymbol{\Phi}:\mathbb{H}_{(D_n)^{\mathbb{Z}_{-}}} ^{\infty} \longrightarrow \mathbb{F}_{(D_n)^{\mathbb{Z}_{-}}} ^{\infty}$ that are linear isomorphisms and are inverses of each other.
\end{proposition}
\begin{definition}[{\bfseries\itshape Fading memory filters and functionals}]
\label{Fading memory filters and functionals}
Let $w : \mathbb{N} \longrightarrow (0,1] $ be a weighting sequence and let $D_N \subset \mathbb{R}^N $ and $D_n \subset \mathbb{R}^n $ be such that $\left(D_n\right)^{\mathbb{Z}_-} \subset \ell ^{w}_-({\Bbb R}^n)$ and $\left(D_N\right)^{\mathbb{Z}_-} \subset \ell ^{w}_-(\mathbb{R}^N) $. We say that a causal and time-invariant filter $U : \left(D_n\right)^{\mathbb{Z}_-} \longrightarrow \left(D_N\right)^{\mathbb{Z}_-} $ (respectively, a functional $H : \left(D_n\right)^{\mathbb{Z}_-} \longrightarrow D_N $) satisfies the {\bfseries\itshape fading memory property (FMP)} with respect to the sequence $w$ when it is a continuous map between the metric spaces $\left(\left(D_n\right)^{\mathbb{Z}_-}, \left\|\cdot \right\|_w \right)$ and $\left(\left(D_N\right)^{\mathbb{Z}_-}, \left\|\cdot \right\|_{w}\right)$ (respectively, $\left(\left(D_n\right)^{\mathbb{Z}_-}, \left\|\cdot \right\|_w \right)$ and $\left(D_N, \left\|\cdot \right\|\right)$).
If the weighting sequence $w$ is such that $w _t= \lambda ^t $, for some $\lambda\in (0,1) $ and all $t \in \mathbb{N} $, then $U$ is said to have the $\lambda $-{\bfseries\itshape exponential fading memory property}. We define the sets
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!\mathbb{F}_{(D_n)^{\mathbb{Z}_{-}},(D_N)^{\mathbb{Z}_{-}}} ^{w}&:= &\left\{U: (D_n) ^{\Bbb Z_-} \longrightarrow (D_N)^{\mathbb{Z}_{-}} \mid \mbox{$U$ causal, time-invariant, and FMP w.r.t. $w$}\right\},\label{f set fmp 1}\\
\!\!\!\!\!\!\!\!\!\!\!\!\mathbb{H}_{(D_n)^{\mathbb{Z}_{-}},(D_N)^{\mathbb{Z}_{-}}} ^{w}&:= &\left\{H: (D_n) ^{\Bbb Z_-} \longrightarrow D_N \mid \mbox{$H$ is FMP with respect to $w$}\right\}.\label{h set fmp 1}
\end{eqnarray}
These definitions can be extended by replacing the product set $\left(D_N\right)^{\mathbb{Z}_-} $ by any subset of $\ell ^{w}_-(\mathbb{R}^N)$ that is not necessarily a product space. In particular, we define the sets
\begin{eqnarray}
\mathbb{F}_{(D_n)^{\mathbb{Z}_{-}}} ^{w}&:= & \left\{U: (D_n) ^{\Bbb Z_-} \longrightarrow \ell ^{w}_-(\mathbb{R}^N) \mid \mbox{$U$ is causal, time-invariant, and FMP w.r.t. $w$}\right\},\label{f set fmp}\\
\mathbb{H}_{(D_n)^{\mathbb{Z}_{-}}} ^{w}&:= & \left\{H: (D_n) ^{\Bbb Z_-} \longrightarrow \mathbb{R} ^N \mid \mbox{$H$ is FMP with respect to $w$}\right\}.\label{h set fmp}
\end{eqnarray}
\end{definition}
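To make the role of the weighting sequence concrete, here is a small Python sketch (a finite truncation of our own; the function name and constants are illustrative) that evaluates the weighted norm $\left\| {\bf z}\right\|_{w}=\sup_{t \in \Bbb Z_-}\|{\bf z}_t\|w_{-t}$ for the exponential weighting $w_t=\lambda^t$ of Definition \ref{Fading memory filters and functionals}:

```python
import numpy as np

# Sketch of the weighted norm ||z||_w = sup_{t<=0} ||z_t|| * w_{-t} for the
# exponential weighting w_t = lam**t, on a finite truncation of a
# left-infinite sequence; z[k] stores the entry at time t = k - T.
def weighted_norm(z, lam):
    T = len(z) - 1
    return max(abs(z[k]) * lam ** (T - k) for k in range(len(z)))

lam = 0.5
z = np.zeros(11); z[-1] = 1.0            # z_0 = 1, all other entries zero
s = np.zeros(11); s[0] = 1.0             # s_{-10} = 1, all other entries zero

# A unit disturbance at the present counts fully; the same disturbance ten
# steps in the past is damped by lam**10: the norm "fades" the memory.
assert weighted_norm(z, lam) == 1.0
assert weighted_norm(s, lam) == lam ** 10
```

The comparison between $\mathbf{z}$ and $\mathbf{s}$ shows why continuity with respect to $\left\|\cdot\right\|_w$ formalizes the forgetting of remote inputs.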
Definitions \ref{Continuous filters and functionals} and \ref{Fading memory filters and functionals} can be easily reformulated in terms of more familiar $\epsilon$-$\delta $-type criteria, as they were introduced in \cite{Boyd1985}. For example, the continuity of the functional $H : \left(D_n\right)^{\mathbb{Z}_-} \longrightarrow D_N $ is equivalent to stating that for any ${\bf z} \in (D_n)^{\Bbb Z _{-}} $ and any $\epsilon>0 $, there exists a $\delta(\epsilon)> 0 $ such that for any ${\bf s} \in (D_n)^{\Bbb Z _{-}}$ that satisfies
\begin{equation}
\label{continuity epsilon delta}
\| {\bf z} - {\bf s}\|_{\infty}=\sup_{t \in \Bbb Z_-}\{\| {\bf z}_t-{\bf s}_t \|\}< \delta(\epsilon), \quad \mbox{then} \quad \|H ({\bf z})-H ({\bf s})\|< \epsilon.
\end{equation}
Regarding the fading memory property, it suffices to replace the implication in \eqref{continuity epsilon delta} by
\begin{equation}
\label{fmp epsilon delta}
\| {\bf z} - {\bf s}\|_{w}=\sup_{t \in \Bbb Z_-}\{\| {\bf z}_t-{\bf s}_t \|w_{-t}\}< \delta(\epsilon), \quad \mbox{then} \quad \|H ({\bf z})-H ({\bf s})\|< \epsilon.
\end{equation}
A very important part of the results that follow concern {\bfseries\itshape uniformly bounded} families of sequences, that is, subsets of $\left({\Bbb R}^n\right)^{\mathbb{Z}_{-}} $ of the form
\begin{equation}
\label{Kset}
K_{M}:=\left\{ {\bf z} \in \left({\Bbb R}^n\right)^{\mathbb{Z}_{-}} \mid \| {\bf z}_t\| \leq M \quad \mbox{for all} \quad t \in \Bbb Z _{-} \right\}, \quad \mbox{for some $M>0 $.}
\end{equation}
It is straightforward to show that $K _M\subset\ell_{-} ^{\infty}(\mathbb{R}^n) \subset \ell ^{w}_-({\Bbb R}^n)$, for all $M>0 $ and any weighting sequence $w$. A very useful fact is that the relative topology induced by $(\ell ^{w}_-({\Bbb R}^n), \left\|\cdot \right\|_w) $ in $K _M $ coincides with the one induced by the product topology in $\left({\Bbb R}^n\right)^{\mathbb{Z}_{-}} $. This is a consequence of the following result that is a slight generalization of \cite[Theorem 20.5]{Munkres:topology}. A proof is provided in Appendix \ref{proof of product topology for uniformly bounded} for the sake of completeness.
\begin{theorem}
\label{product topology for uniformly bounded}
Let $\left\|\cdot \right\|: \mathbb{R}^n \longrightarrow [0, \infty) $ be a norm in ${\Bbb R}^n$, $M>0 $, and let $w : \mathbb{N} \longrightarrow (0,1] $ be a weighting sequence. Let $\overline{d} _M(\mathbf{a},\mathbf{b}):=\min \left\{\left\|\mathbf{a}-\mathbf{b}\right\|, M\right\}$, $\mathbf{a} ,\mathbf{b} \in {\Bbb R}^n $, be a bounded metric on ${\Bbb R}^n $ and define the $w$-{\bfseries\itshape weighted metric} $D_w^M $ on $({\Bbb R}^n)^{\mathbb{Z}_{-}} $ as
\begin{equation}
\label{def of weighted metric}
D_w^M(\mathbf{x}, {\bf y}):=\sup_{t \in \mathbb{Z}_{-}} \left\{\overline{d} _M(\mathbf{x}_t, {\bf y}_t)w_{-t}\right\}, \quad \mathbf{x}, {\bf y} \in ({\Bbb R}^n)^{\mathbb{Z}_{-}}.
\end{equation}
Then $D_w^M$ is a metric that induces the product topology on $({\Bbb R}^n)^{\mathbb{Z}_{-}} $. The space $({\Bbb R}^n)^{\mathbb{Z}_{-}} $ is complete relative to this metric.
\end{theorem}
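The behavior of the weighted metric $D_w^M$ in \eqref{def of weighted metric} can be illustrated numerically. In the following Python sketch (ours; all names and constants are assumptions for illustration), sequences that agree with a target on longer and longer windows $[-k,0]$ get closer and closer in $D_w^M$, exactly as they do in the product topology:

```python
import numpy as np

# The bounded metric d_M(a,b) = min(|a-b|, M) and the weighted metric
# D_w^M(x,y) = sup_t d_M(x_t, y_t) * w_{-t}, with w_t = lam**t, on finite
# truncations; x[k] stores the entry at time t = k - T.
def D_w_M(x, y, lam, M):
    T = len(x) - 1
    return max(min(abs(x[k] - y[k]), M) * lam ** (T - k) for k in range(len(x)))

lam, M, T = 0.9, 2.0, 200
x = np.ones(T + 1)                        # the constant sequence x_t = 1
dists = []
for k in (5, 20, 80):
    y = x.copy()
    y[: T + 1 - k] = 0.0                  # y agrees with x only on [-k+1, 0]
    dists.append(D_w_M(x, y, lam, M))     # equals lam**k here, as |x_t-y_t|=1<M

# Longer windows of agreement give smaller D_w^M distances.
assert dists[0] > dists[1] > dists[2]
assert np.isclose(dists[2], lam ** 80)
```

This is the mechanism behind the coincidence of the $D_w^M$-topology with the product topology: closeness in $D_w^M$ amounts to closeness on a long finite window of recent coordinates.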
An important consequence of this theorem is that all weighted norms induce the same topology on subspaces formed by uniformly bounded sequences. In particular, continuity with respect to this topology can be defined without invoking weighting sequences: filters or functionals with uniformly bounded inputs that have the fading memory property with respect to one weighting sequence have it with respect to any other. We make this more specific in the following statements.
\begin{corollary}
\label{all weighted norms are the same}
Let $M>0 $ and let $K_{M}:=\left\{ {\bf z} \in \left({\Bbb R}^n\right)^{\mathbb{Z}_{-}} \mid \| {\bf z}_t\| \leq M \quad \mbox{for all} \quad t \in \Bbb Z _{-} \right\} $ be a subset of $\left({\Bbb R}^n\right)^{\mathbb{Z}_{-}} $ formed by uniformly bounded sequences. Let $w : \mathbb{N} \longrightarrow (0,1] $ be an arbitrary weighting sequence. Then, the metric induced by the weighted norm $\left\|\cdot \right\|_w $ on $K _M $ coincides with $D_w^{2M} $. Moreover, since $D_w^{2M} $ induces the product topology on $K _M=\left(\overline{B_{\left\|\cdot \right\|}(\mathbf{0}, M)}\right)^{\mathbb{Z}_{-}} $, we can conclude that all the weighted norms induce the same topology on $K _M $. We recall that $\overline{B_{\left\|\cdot \right\|}(\mathbf{0}, M)} $ is the closure of the ball with radius $M$ centered at the origin, with respect to the norm $\left\|\cdot \right\|$ in ${\Bbb R}^n$. The same conclusion holds when instead of $K _M $ we consider the set $(D_n) ^{\mathbb{Z}_{-}} $, with $D_n $ a compact subset of ${\Bbb R}^n $.
\end{corollary}
Theorem \ref{product topology for uniformly bounded} can also be used to give a quick alternative proof in discrete time of an important compactness result originally formulated by Boyd and Chua in \cite[Lemma 1]{Boyd1985} for continuous time and, later on, in \cite{RC6} for discrete time. The next corollary contains an additional completeness statement.
\begin{corollary}
\label{km compact complete}
Let $K _M$ be the set of uniformly bounded sequences, defined as in \eqref{Kset}, and let $w : \mathbb{N} \longrightarrow (0,1] $ be a weighting sequence. Then, $ \left(K _M, \left\|\cdot \right\|_w\right) $ is a compact, complete, and convex subset of the Banach space $(\ell ^{w}_-({\Bbb R}^n), \left\|\cdot \right\|_w) $. The compactness and the completeness statements also hold when instead of $K _M $ we consider the set $(D_n) ^{\mathbb{Z}_{-}} $, with $D_n $ a compact subset of ${\Bbb R}^n $; if $D_n $ is additionally convex then the convexity of $(D_n) ^{\mathbb{Z}_{-}} $ is also guaranteed.
\end{corollary}
It is important to point out that the coincidence between the product topology and the topologies induced by weighted norms that we described in Corollary \ref{all weighted norms are the same} only occurs for uniformly bounded sets of the type introduced in \eqref{Kset}. As we state in the next result, the norm topology in $\ell ^{w}_-({\Bbb R}^n) $ is strictly finer than the one induced by the product topology in $ \left(\mathbb{R}^n\right) ^{\mathbb{Z}_-} $.
\begin{proposition}
\label{in lww norm finer than product}
Let $w : \mathbb{N} \longrightarrow (0,1] $ be a weighting sequence and let $(\ell ^{w}_-({\Bbb R}^n), \left\|\cdot \right\|_w) $ be the Banach space constructed using the corresponding weighted norm on the space of left infinite sequences with elements in $\mathbb{R}^n $. The norm topology in $\ell ^{w}_-({\Bbb R}^n) $ is strictly finer than the subspace topology induced by the product topology in $\left(\mathbb{R}^n\right)^{\mathbb{Z}_{-}} $ on $\ell ^{w}_-({\Bbb R}^n) \subset \left(\mathbb{R}^n\right)^{\mathbb{Z}_{-}}$.
\end{proposition}
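The strictness in the previous proposition can be seen in a small numerical experiment (ours; the spike construction is the standard one). With $w_t=\lambda^t$, the sequences ${\bf x}^k$ consisting of a single spike of height $\lambda^{-k}$ at time $t=-k$ converge to ${\bf 0}$ coordinate-wise, hence in the product topology, yet $\left\|{\bf x}^k\right\|_w=1$ for every $k$, so they do not converge in the weighted norm:

```python
import numpy as np

# In l^w_- with w_t = lam**t, a spike of height (1/lam)**k at time t = -k
# has weighted norm exactly 1, for every k. Coordinate-wise these spikes
# escape to the remote past and vanish, but in norm they never get small.
lam, T = 0.5, 50

def weighted_norm(x):
    # x[j] stores the entry at time t = j - T
    return max(abs(x[j]) * lam ** (T - j) for j in range(len(x)))

for k in (1, 10, 40):
    xk = np.zeros(T + 1)
    xk[T - k] = (1 / lam) ** k            # spike at time t = -k
    assert np.isclose(weighted_norm(xk), 1.0)
```

Note that such spikes are not uniformly bounded in $k$, which is consistent with Corollary \ref{all weighted norms are the same}: on the sets $K_M$ this phenomenon cannot occur.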
The previous results imply an elementary property of the sets defined in \eqref{f set fmp 1}-\eqref{h set fmp 1} and \eqref{f set fmp}-\eqref{h set fmp}, which we state in the following lemma.
\begin{lemma}
\label{fs and ^sfor w}
Let $M>0$ and let $w $ be a weighting sequence. Let $U:K _M \longrightarrow \ell ^{w}_-(\mathbb{R}^N) $ (respectively, $H: K _M \longrightarrow \mathbb{R}^N $) be an element of $\mathbb{F}_{K _M} ^{w} $ (respectively, $\mathbb{H}_{K _M} ^{w} $). Then there exists $L>0 $ such that $U(K _M) \subset K _L $ (respectively, $H(K_M)\subset \overline{B_{\left\|\cdot \right\|}(\mathbf{0}, L)}$) and we can hence conclude that $U \in \mathbb{F}_{K _M, K _L} ^{w} $ (respectively, $H \in \mathbb{H}_{K _M, K _L} ^{w} $). Conversely, the inclusion $\mathbb{F}_{K _M, K _L} ^{w} \subset \mathbb{F}_{K _M} ^{w} $ (respectively, $\mathbb{H}_{K _M, K _L} ^{w} \subset \mathbb{H}_{K _M} ^{w} $) holds true for any $M>0 $. The sets $\mathbb{F}_{K _M} ^{w} $ and $\mathbb{H}_{K _M} ^{w} $ are vector spaces.
\end{lemma}
The next proposition spells out how the fading memory property is independent of the weighting sequence that is used to define it, which shows its intrinsically topological nature. A conceptual consequence of this fact is that the fading memory property does not contain any information about the rate at which systems that have it ``forget'' inputs. A similar statement in the continuous time setup has been formulated in \cite{sandberg:fmp}. Additionally, there is a bijection between FMP filters and functionals.
\begin{proposition}
\label{FMP independent of w}
Let $K _M \subset \left({\Bbb R}^n\right)^{\mathbb{Z}_{-}}$ and $K _L \subset \left({\Bbb R}^N\right)^{\mathbb{Z}_{-}} $ be subsets of uniformly bounded sequences defined as in \eqref{Kset} and let $w : \mathbb{N} \longrightarrow (0,1] $ be a weighting sequence. Let $U: K _M \longrightarrow K _L$ be a causal and time-invariant filter and let $H:K _M \longrightarrow \overline{B_{\left\|\cdot \right\|}( {\bf 0},L)}$ be a functional. Then:
\begin{description}
\item [(i)] If $U$ (respectively $H$) has the fading memory property with respect to the weighting sequence $w$, then it has the same property with respect to any other weighting sequence. In particular, this implies that
\begin{equation*}
\mathbb{F}_{K _M, K _L} ^{w}=\mathbb{F}_{K _M, K _L} ^{w'}\quad \mbox{and} \quad\mathbb{H}_{K _M, K _L} ^{w}=\mathbb{H}_{K _M, K _L} ^{w'}, \quad \mbox{for any weighting sequence $w'$.}
\end{equation*}
In what follows we just say that $U$ (respectively $H$) has the fading memory property and denote
\begin{equation*}
\mathbb{F}_{K _M, K _L} ^{{\rm FMP}}:=\mathbb{F}_{K _M, K _L} ^{w}\quad \mbox{and} \quad\mathbb{H}_{K _M, K _L} ^{{\rm FMP}}:=\mathbb{H}_{K _M, K _L} ^{w}, \quad \mbox{for any weighting sequence $w$.}
\end{equation*}
The same statement holds true for the vector spaces $\mathbb{F}_{K _M } ^{w} $ and $\mathbb{H}_{K _M } ^{w} $, that will be denoted in the sequel by $\mathbb{F}_{K _M } ^{{\rm FMP}} $ and $\mathbb{H}_{K _M } ^{{\rm FMP}} $, respectively.
\item [(ii)] Let $\boldsymbol{\Phi} $ and $\boldsymbol{\Psi} $ be the maps defined in the previous section. Then, if the filter $U$ has the fading memory property then so does the associated functional $\boldsymbol{\Psi} (U) =:H _U $. Analogously, if $H$ has the fading memory property, then so does $\boldsymbol{\Phi} (H) =:U_H $. This implies that the maps $\boldsymbol{\Psi} $ and $\boldsymbol{\Phi} $ restrict to maps (that we denote with the same symbols) $\boldsymbol{\Psi}:\mathbb{F}_{K _M, K _L} ^{{\rm FMP}} \longrightarrow \mathbb{H}_{K _M, K _L} ^{{\rm FMP}}$ and $\boldsymbol{\Phi}:\mathbb{H}_{K _M, K _L} ^{{\rm FMP}} \longrightarrow \mathbb{F}_{K _M, K _L} ^{{\rm FMP}}$ that are inverses of each other. The same applies to $\boldsymbol{\Psi}:\mathbb{F}_{K _M} ^{{\rm FMP}} \longrightarrow \mathbb{H}_{K _M} ^{{\rm FMP}}$ and $\boldsymbol{\Phi}:\mathbb{H}_{K _M} ^{{\rm FMP}} \longrightarrow \mathbb{F}_{K _M} ^{{\rm FMP}}$ that, in this case, are linear isomorphisms.
\end{description}
The same statements can be formulated when instead of $K _M $ and $K _L $ we consider the sets $(D_n) ^{\mathbb{Z}_{-}} $ and $(D_N) ^{\mathbb{Z}_{-}} $, with $D_n $ and $D_N $ compact subsets of ${\Bbb R}^n $ and $\mathbb{R}^N $, respectively.
\end{proposition}
In the conditions of the previous proposition, the vector spaces $\mathbb{F}_{K _M} ^{{\rm FMP}} $ and $ \mathbb{H}_{K _M}^{{\rm FMP}}$ can be endowed with a norm. More specifically, let $U: K _M \longrightarrow \ell ^{w}_-({\Bbb R}^n)$ be a filter and let $H:K _M \longrightarrow \mathbb{R}^N$ be a functional that have the FMP. Define:
\begin{eqnarray}
\vertiii{U}_{\infty} &:=&\sup_{{\bf z} \in K _M} \left\{\left\|U ({\bf z})\right\|_ \infty\right\}=\sup_{{\bf z} \in K _M} \left\{\sup_{t \in \mathbb{Z}_{-}}\left\{\left\|U ({\bf z})_t\right\|\right\}\right\},\label{norm of U}\\
\vertiii{H}_{\infty} &:=&\sup_{{\bf z} \in K _M} \left\{\left\|H ({\bf z})\right\|\right\}.\label{norm of H}
\end{eqnarray}
The compactness of $(K _M, \left\|\cdot \right\|_w) $ guaranteed by Corollary \ref{km compact complete} and the fact that, by Lemma \ref{fs and ^sfor w}, $U$ and $H$ map into uniformly bounded sequences and a compact subspace of $\mathbb{R}^N $, respectively, ensure that the values in \eqref{norm of U} and \eqref{norm of H} are finite. This makes $\left(\mathbb{F}_{K _M} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right)$ and $ \left(\mathbb{H}^{{\rm FMP}}_{K _M}, \vertiii{\cdot }_{\infty} \right)$ into normed spaces that, as we will see in the next result, are linearly homeomorphic. For any $L>0 $ these norms restrict to the spaces $\mathbb{F}_{K _M, K _L} ^{{\rm FMP}} $ and $\mathbb{H}_{K _M, K _L} ^{{\rm FMP}} $, which are in general not linear subspaces but nevertheless become metric spaces.
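As an illustration of \eqref{norm of U} (with a toy filter of our own choosing, not one from the text), consider the linear reservoir filter $U_F({\bf z})_t=\sum_{k\geq 0}a^k b\, {\bf z}_{t-k}$ induced by $F(x,z)=ax+bz$ with $0<a<1$ and $b>0$. Its norm over $K_M$ is attained at the constant input ${\bf z}_t=M$ and equals $bM/(1-a)$, which the following Python sketch confirms on a finite truncation:

```python
import numpy as np

# Toy linear filter U_F(z)_t = sum_{k>=0} a**k * b * z_{t-k}, 0 < a < 1.
# By the triangle inequality, |||U_F|||_inf over K_M equals b*M/(1-a),
# attained at the constant input z_t = M.
a, b, M = 0.8, 1.0, 2.0

def U_F_at_0(z):
    # value of the filter at t = 0 on a truncated input z[-T], ..., z[0]
    return sum(a ** k * b * z[-1 - k] for k in range(len(z)))

z_const = np.full(400, M)                 # (truncated) worst-case input
val = U_F_at_0(z_const)                   # approaches b*M/(1-a)
assert np.isclose(val, b * M / (1 - a))

# Any other input in K_M gives a value that is no larger in absolute value.
rng = np.random.default_rng(1)
z = rng.uniform(-M, M, size=400)
assert abs(U_F_at_0(z)) <= val + 1e-9
```

The finiteness of $\vertiii{U_F}_{\infty}$ here is exactly the boundedness phenomenon that Lemma \ref{fs and ^sfor w} guarantees in general.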
\begin{proposition}
\label{linear homeomorphism prop}
The linear isomorphism $\boldsymbol{\Psi}: \left(\mathbb{F}_{K _M} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right)\longrightarrow \left(\mathbb{H}_{K _M}^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right)$ and its inverse $\boldsymbol{\Phi} $ satisfy that
\begin{eqnarray}
\vertiii{\boldsymbol{\Psi}(U )}_{\infty}&\leq&
\vertiii{U}_ \infty, \quad \mbox{for any} \quad U \in \mathbb{F}_{K _M} ^{{\rm FMP}}, \label{first ineq psis}\\
\vertiii{\boldsymbol{\Phi}(H)}_{\infty}&\leq& \vertiii{H}_{\infty}, \quad \mbox{for any} \quad H \in \mathbb{H}_{K _M}^{{\rm FMP}}.\label{second ineq psis}
\end{eqnarray}
These inequalities imply that these two maps are continuous linear bijections and hence the spaces $\left(\mathbb{F}_{K _M} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right)$ and $ \left(\mathbb{H}_{K _M}^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right)$ are linearly homeomorphic. Equivalently, the following diagram commutes and all the maps in it are linear and continuous
$$\minCDarrowwidth55pt
\begin{CD}
\left(\mathbb{F}_{K _M} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right) @>\boldsymbol{\Psi}>> \left(\mathbb{H}_{K _M}^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right)\\
@A{\rm Id}_{\mathbb{F}_{K _M} ^{{\rm FMP}}}AA @VV{\rm Id}_{\mathbb{H}_{K _M} ^{{\rm FMP}}}V\\
\left(\mathbb{F}_{K _M} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right) @<\boldsymbol{\Phi}<< \left(\mathbb{H}_{K _M}^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right).
\end{CD}$$
For any $L>0 $, the inclusions $\left(\mathbb{F}_{K _M, K _L} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right) \hookrightarrow \left(\mathbb{F}_{K _M} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right)$ and $\left(\mathbb{H}_{K _M, K _L} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right) \hookrightarrow \left(\mathbb{H}_{K _M} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right)$ (see Lemma \ref{fs and ^sfor w}) are continuous and so are the restricted bijections (that we denote with the same symbols) $\boldsymbol{\Psi}:(\mathbb{F}_{K _M, K _L} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty}) \longrightarrow (\mathbb{H}_{K _M, K _L} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty}) $ and $\boldsymbol{\Phi}:(\mathbb{H}_{K _M, K _L} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty}) \longrightarrow (\mathbb{F}_{K _M, K _L} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty}) $ that are inverses of each other. The last statement is a consequence of the following inequalities:
\begin{eqnarray}
\vertiii{\boldsymbol{\Psi}(U _1 )-\boldsymbol{\Psi}(U _2)}_{\infty}&\leq&
\vertiii{U_1- U _2}_ \infty, \quad \mbox{for any} \quad U_1, U _2 \in \mathbb{F}_{K _M, K _L} ^{{\rm FMP}}, \label{first ineq psis kl}\\
\vertiii{\boldsymbol{\Phi}(H_1)-\boldsymbol{\Phi}(H_2)}_{\infty}&\leq& \vertiii{H_1- H _2}_{\infty}, \quad \mbox{for any} \quad H _1, H _2 \in \mathbb{H}_{K _M, K _L}^{{\rm FMP}}.\label{second ineq psis kl}
\end{eqnarray}
The same statements can be formulated when instead of $K _M $ and $K _L $ we consider the sets $(D_n) ^{\mathbb{Z}_{-}} $ and $(D_N) ^{\mathbb{Z}_{-}} $, with $D_n $ and $D_N $ compact subsets of ${\Bbb R}^n $ and ${\Bbb R}^N$, respectively.
\end{proposition}
\section{Internal approximation of reservoir filters}
\label{Internal approximation of reservoir filters}
This section characterizes situations under which reservoir filters can be uniformly approximated by finding uniform approximants for the corresponding reservoir systems. Such a statement is part of the next theorem that also identifies criteria for the availability of the echo state and the fading memory properties (recall that we used the acronyms ESP and FMP, respectively). As it was already mentioned, a reservoir system has the ESP when it has a unique semi-infinite solution for each semi-infinite input. We also recall that in the presence of uniformly bounded inputs, as it was shown in Section \ref{Continuity and the fading memory property}, the FMP amounts to the continuity of a reservoir filter with respect to the product topologies on the input and output spaces. The completeness and compactness of those spaces established in Corollary \ref{km compact complete} allow us to use various fixed point theorems to show that solutions for reservoir systems exist under very weak hypotheses and that for contracting and continuous reservoir maps (a notion we define below) these solutions are unique and depend continuously on the inputs. Said differently, {\it contracting continuous reservoir maps induce reservoir filters that automatically have the echo state and the fading memory properties}.
\begin{theorem}
\label{uniform approx theorem}
Let $K _M \subset \left({\Bbb R}^n\right)^{\mathbb{Z}_{-}}$ and $K _L \subset \left({\Bbb R}^N\right)^{\mathbb{Z}_{-}} $ be subsets of uniformly bounded sequences defined as in \eqref{Kset} and let $F: \overline{B_{\left\|\cdot \right\|}({\bf 0}, L)} \times \overline{B_{\left\|\cdot \right\|}({\bf 0}, M)} \longrightarrow \overline{B_{\left\|\cdot \right\|}({\bf 0}, L)} $ be a continuous reservoir map.
\begin{description}
\item [(i)] {\bf Existence of solutions:} for each ${\bf z} \in K _M$ there exists an $\mathbf{x} \in K _L$ (not necessarily unique) that solves the reservoir equation associated to $F$, that is,
\begin{equation*}
\mathbf{x}_t=F( \mathbf{x}_{t-1}, {\bf z}_t), \quad \mbox{for all $t \in \mathbb{Z}_{-}$.}
\end{equation*}
\item [(ii)] {\bf Uniqueness and continuity of solutions (ESP and FMP):} suppose that the reservoir map $F$ is a contraction, that is, there exists $0<r<1$ such that for all $\mathbf{u}, \mathbf{v} \in \overline{B_{\left\|\cdot \right\|}({\bf 0}, L)}$, $\mathbf{z} \in \overline{B_{\left\|\cdot \right\|}({\bf 0}, M)}$, one has
\begin{equation*}
\left\|F(\mathbf{u}, {\bf z})-F(\mathbf{v}, {\bf z})\right\|\leq r \left\|\mathbf{u}- \mathbf{v}\right\|.
\end{equation*}
Then, the reservoir system associated to $F$ has the echo state property. Moreover, this system has a unique associated causal and time-invariant filter $U _F:K _M \longrightarrow K _L $ that has the fading memory property, that is, $U _F \in \mathbb{F}_{K _M, K _L} ^{{\rm FMP}} $. The set $U _F (K _M)$ of accessible states of the filter $U _F $ is compact.
\item [(iii)] {\bf Internal approximation property:}
let $F _1, F _2:\overline{B_{\left\|\cdot \right\|}({\bf 0}, L)} \times \overline{B_{\left\|\cdot \right\|}({\bf 0}, M)} \longrightarrow \overline{B_{\left\|\cdot \right\|}({\bf 0}, L)} $ be two continuous reservoir maps such that $F _1 $ is a contraction with constant $0<r<1$ and $F _2 $ has the existence of solutions property. Let $U _{F _1}, U _{F _2}:K _M \longrightarrow K _L $ be the corresponding filters (if $F _2 $ does not have the ESP, then $U _{F _2} $ is just a generalized filter). Then, for any $\epsilon>0 $, we have that
\begin{equation}
\label{uniform mathema statement}
\left\|F _1-F _2\right\|_{\infty}< \delta(\epsilon):=(1-r) \epsilon \quad \mbox{implies that} \quad
\vertiii{U_{F _1}-U_{F _2}}_{\infty}< \epsilon.
\end{equation}
\end{description}
Part {\bf (i)} also holds true when instead of $K _M $ and $K _L $ we consider the sets $(D_n) ^{\mathbb{Z}_{-}} $ and $(D_N) ^{\mathbb{Z}_{-}} $, with $D_n $ and $D_N $ compact and convex subsets of ${\Bbb R}^n $ and $\mathbb{R}^N $, respectively, that replace the closed balls $\overline{B_{\left\|\cdot \right\|}({\bf 0}, M)} $ and $\overline{B_{\left\|\cdot \right\|}({\bf 0}, L)} $. The same applies to parts {\bf (ii)} and {\bf (iii)} but, this time, the convexity hypothesis is not needed.
\end{theorem}
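A numerical sketch of parts {\bf (ii)} and {\bf (iii)} is given below (a toy example of ours; the map $F(x,z)=\tanh(rx+z)$ and all constants are illustrative assumptions, chosen because this $F$ is an $r$-contraction in its first argument):

```python
import numpy as np

# Toy contracting reservoir map F(x, z) = tanh(r*x + z), with r < 1;
# since |tanh'| <= 1, F is an r-contraction in the state variable x.
r = 0.6

def run(F, z, x0):
    # iterate the reservoir equation forward over the truncated input z,
    # starting from x0; after transients die out, the final state
    # approximates the filter value U_F(z)_0 (contraction => ESP)
    x = x0
    for zt in z:
        x = F(x, zt)
    return x

F1 = lambda x, z: np.tanh(r * x + z)
rng = np.random.default_rng(2)
z = rng.uniform(-1, 1, size=300)

# (ii) the solution forgets the initial condition: echo state property
assert np.isclose(run(F1, z, 10.0), run(F1, z, -10.0))

# (iii) internal approximation: if ||F1 - F2||_inf < delta, the induced
# filters are uniformly delta/(1-r)-close, as in the theorem
delta = 1e-3
F2 = lambda x, z: np.tanh(r * x + z) + delta / 2
assert abs(run(F1, z, 0.0) - run(F2, z, 0.0)) <= delta / (1 - r)
```

The last assertion instantiates \eqref{uniform mathema statement}: the perturbed map $F_2$ is $\delta/2$-close to $F_1$ in the supremum norm, and the induced filter values stay within $\delta/(1-r)$ of each other.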
\noindent Define the set $\mathbb{K}_{K _M, K _L}:= \left\{F: \overline{B_{\left\|\cdot \right\|}({\bf 0}, L)} \times \overline{B_{\left\|\cdot \right\|}({\bf 0}, M)} \longrightarrow \overline{B_{\left\|\cdot \right\|}({\bf 0}, L)}\mid \mbox{$F$ is a continuous contraction} \right\} $. Using the notation introduced in the previous section, the statement in \eqref{uniform mathema statement} and part {\bf (ii)} of the theorem automatically imply that the map
\begin{equation*}
\begin{array}{cccc}
\Xi : &(\mathbb{K}_{K _M, K _L}, \left\| \cdot \right\|_{\infty})& \longrightarrow &\left(\mathbb{F}_{K _M, K _L} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right)\\
& F&\longmapsto &U _F
\end{array}
\end{equation*}
is continuous and by Proposition \ref{linear homeomorphism prop}, the map that associates to each $F \in \mathbb{K}_{K _M, K _L}$ the corresponding functional $H _F $, that is,
\begin{equation*}
\begin{array}{cccc}
\boldsymbol{\Psi} \circ \Xi : &(\mathbb{K}_{K _M, K _L}, \left\| \cdot \right\|_{\infty})& \longrightarrow &\left(\mathbb{H}_{K _M, K _L} ^{{\rm FMP}}, \vertiii{\cdot }_{\infty} \right)\\
& F&\longmapsto &H _F,
\end{array}
\end{equation*}
is also continuous.
\medskip
\noindent\textbf{Proof of the theorem. \ \ }{\bf (i)} We start by defining, for each ${\bf z} \in K _M$, the map given by
\begin{equation*}
\begin{array}{cccc}
\mathcal{F}_{\bf z}: & K _L &\longrightarrow &K _L\\
&\mathbf{x}&\longmapsto & \left(\mathcal{F}_{\bf z}(\mathbf{x})\right)_t:=F(\mathbf{x} _{t-1}, {\bf z}_t).
\end{array}
\end{equation*}
We show first that $\mathcal{F}_{\bf z} $ can be written as a product of continuous functions. Indeed:
\begin{equation}
\label{product decomposition}
\mathcal{F}_{\bf z}=\prod_{t \in \mathbb{Z}_{-}}F(\cdot , {\bf z} _t) \circ p _{t-1},
\end{equation}
where the projections $p _t: K _L \longrightarrow \overline{B_{\left\|\cdot \right\|}( {\bf 0},L)} $ are given by $p _t({\bf x})= {\bf x}_t $. These projections are continuous when we consider in $K _L $ the product topology. Additionally, the continuity of the reservoir map $F$ implies that $\mathcal{F}_{\bf z} $ is a product of continuous functions, which ensures that $\mathcal{F}_{\bf z} $ is itself continuous \cite[Theorem 19.6]{Munkres:topology}. Moreover, by Corollaries \ref{all weighted norms are the same} and \ref{km compact complete}, the space $K _L $ is a compact and convex subset of the Banach space $\left(\ell ^{w}_-({\Bbb R}^N), \| \cdot \| _w\right) $ (see Proposition \ref{lw is a banach space}), for any weighting sequence $w$. Schauder's Fixed Point Theorem (see \cite[Theorem 7.1, page 75]{Shapiro:Farrago}) then guarantees that $\mathcal{F}_{\bf z} $ has at least one fixed point, that is, a point $\mathbf{x} \in K _L$ that satisfies $\mathcal{F}_{\bf z} (\mathbf{x})= \mathbf{x}$ or, equivalently,
\begin{equation*}
\mathbf{x}_t=F(\mathbf{x}_{t-1}, {\bf z}_t), \quad \mbox{for all $t \in \Bbb Z_-$},
\end{equation*}
which implies that $\mathbf{x} $ is a solution of $F $ for ${\bf z} $, as required.
\medskip
\noindent {\bf Proof of part (ii)} The main tool in the proof of this part is a parameter-dependent version of the Contraction Fixed Point Theorem, which we include here for the sake of completeness and whose proof can be found in \cite[Theorem 6.4.1, page 137]{Sternberg:dynamical:book}.
\medskip
\noindent {\bf Lemma} {\it
Let $(X, d_X)$ be a complete metric space and let $Z$ be a metric space. Let $K:X \times Z \longrightarrow X $ be a continuous map such that for each $z \in Z $, the map $K _z:X \longrightarrow X $ given by $K _z(x):=K(x,z)$ is a contraction with a constant $0<r<1 $ (independent of $z$), that is, $d_X(K(x,z), K(y,z))\leq r\, d_X(x,y) $, for all $x,y \in X $ and all $z \in Z $. Then:
\begin{description}
\item [(i)] For each $z \in Z $, the map $K _z $ has a unique fixed point in $X$.
\item [(ii)] The map $U _K:Z \longrightarrow X $ that associates to each point $z \in Z $ the unique fixed point of $K _z $ is continuous.
\end{description}
}
\medskip
\noindent Consider now the map
\begin{equation*}
\begin{array}{cccc}
\mathcal{F}: & K _L \times K _M&\longrightarrow &K _L\\
&(\mathbf{x}, {\bf z})&\longmapsto & \left(\mathcal{F}(\mathbf{x}, {\bf z})\right)_t:=F(\mathbf{x} _{t-1}, {\bf z}_t).
\end{array}
\end{equation*}
First, as we did in \eqref{product decomposition}, it is easy to show that $\mathcal{F} $ is continuous with respect to the product topologies in $K _M $ and $K _L $ by writing it as a product of compositions of continuous functions. Second, we show that the map $\mathcal{F} $ is a contraction. Indeed, since by Corollary \ref{all weighted norms are the same} we can choose an arbitrary weighting sequence to generate the product topologies in $K _M $ and $K _L $, we select $w: \mathbb{N} \longrightarrow (0, 1]$ given by $w _t:= \lambda ^t $, $t \in \mathbb{N} $, with $\lambda$ satisfying $ 0<r< \lambda<1 $. Then, for any $\mathbf{x}, {\bf y} \in K _L $ and any ${\bf z} \in K _M $, we have
\begin{equation*}
\left\|\mathcal{F}(\mathbf{x}, {\bf z})-\mathcal{F}(\mathbf{y}, {\bf z})\right\|_w=
\sup_{t \in \mathbb{Z}_{-}}\left\{\left\|F(\mathbf{x}_{t-1}, {\bf z}_t)-F(\mathbf{y}_{t-1}, {\bf z}_t)\right\|\lambda^{-t}\right\}\leq
\sup_{t \in \mathbb{Z}_{-}}\left\{\left\|\mathbf{x}_{t-1}-\mathbf{y}_{t-1}\right\|r\lambda^{-t}\right\},
\end{equation*}
where we used that $F$ is a contraction. Now, since $ 0<r< \lambda<1 $ and hence $r/ \lambda<1 $, we have
\begin{equation*}
\sup_{t \in \mathbb{Z}_{-}}\left\{\left\|\mathbf{x}_{t-1}-\mathbf{y}_{t-1}\right\|r\lambda^{-t}\right\}=
\sup_{t \in \mathbb{Z}_{-}}\left\{\left\|\mathbf{x}_{t-1}-\mathbf{y}_{t-1}\right\|\lambda^{-(t-1)}\frac{r}{\lambda}\right\}\leq
\frac{r}{\lambda} \left\|\mathbf{x}- {\bf y}\right\|_w.
\end{equation*}
This shows that $\mathcal{F} $ defines a family of contractions with constant $r/ \lambda<1 $ that is continuously parametrized by the elements in $K _M$. The lemma above implies the existence of a continuous map $U _F: \left(K _M, \left\|\cdot \right\|_w\right)\longrightarrow\left(K _L, \left\|\cdot \right\|_w\right) $ that is uniquely determined by the identity
\begin{equation*}
\mathcal{F} \left(U _F({\bf z}), {\bf z}\right)=U _F({\bf z}), \quad \mbox{for all ${\bf z}\in K _M $}.
\end{equation*}
Proposition \ref{esp implies ti} implies that $U _F $ is causal and time-invariant. The set $U _F (K _M)$ of accessible states of the filter $U _F $ is compact because it is the image of a compact set (see Corollary \ref{km compact complete}) by a continuous map (see \cite[Theorem 26.5, page 166]{Munkres:topology}).
\medskip
\noindent {\bf Proof of part (iii)} Let ${\bf z} \in K _M$ and let $U_{F _1}({\bf z}) $ be the unique solution for ${\bf z} $ of the reservoir system associated to $F _1 $, provided by part {\bf (ii)} of the theorem that we just proved. Additionally, let $U_{F _2}({\bf z}) $ be the value at ${\bf z} $ of a generalized filter associated to $F _2 $, which exists by hypothesis. Then, for any $t \in \mathbb{Z}_{-} $, we have:
\begin{align*}
\|U_{F _1}({\bf z})_t&-U_{F _2}({\bf z})_t\| = \left\|F _1(U_{F _1}({\bf z})_{t-1}, {\bf z} _t)-F _2(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)\right\|\\
&= \left\|F _1(U_{F _1}({\bf z})_{t-1}, {\bf z} _t)-F _1(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)+F _1(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)-F _2(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)\right\|\\
&\leq \left\|F _1(U_{F _1}({\bf z})_{t-1}, {\bf z} _t)-F _1(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)\right\|+\left\|F _1(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)-F _2(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)\right\|\\
&\leq r\left\|U_{F _1}({\bf z})_{t-1}-U_{F _2}({\bf z})_{t-1}\right\|+\left\|F _1(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)-F _2(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)\right\|.
\end{align*}
If we now recursively apply $n$ times the same procedure to the first summand of this expression, we obtain that
\begin{multline}
\label{decomposition ineqs}
\|U_{F _1}({\bf z})_t-U_{F _2}({\bf z})_t\| \leq r ^n \|U_{F _1}({\bf z})_{t-n}-U_{F _2}({\bf z})_{t-n}\|+\left\|F _1(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)-F _2(U_{F _2}({\bf z})_{t-1}, {\bf z} _t)\right\|\\
+r\left\|F _1(U_{F _2}({\bf z})_{t-2}, {\bf z} _{t-1})-F _2(U_{F _2}({\bf z})_{t-2}, {\bf z} _{t-1})\right\|\\
+ \cdots+r^{n-1}\left\|F _1(U_{F _2}({\bf z})_{t-n}, {\bf z} _{t-(n-1)})-F _2(U_{F _2}({\bf z})_{t-n}, {\bf z} _{t-(n-1)})\right\|.
\end{multline}
If we combine the inequality \eqref{decomposition ineqs} with the hypothesis
\begin{equation*}
\left\|F _1-F _2\right\|_{\infty}=\sup_{\mathbf{x} \in \overline{B_{\left\|\cdot \right\|}({\bf 0}, L)},\, \mathbf{z} \in \overline{B_{\left\|\cdot \right\|}({\bf 0}, M)}} \left\{\left\|F _1(\mathbf{x}, {\bf z})-F _2(\mathbf{x}, {\bf z})\right\|\right\}< \delta(\epsilon):=(1-r) \epsilon,
\end{equation*}
we obtain
\begin{multline}
\label{decomposition ineqs second}
\|U_{F _1}({\bf z})-U_{F _2}({\bf z})\| _\infty = \sup _{t \in \mathbb{Z}_{-}} \left\{
\|U_{F _1}({\bf z})_t-U_{F _2}({\bf z})_t\|
\right\}\\
\leq 2 L r ^n +(1+ r+ \cdots + r^{n-1}) \delta(\epsilon)=2 L r ^n +\frac{1-r ^n}{1-r} \delta(\epsilon).
\end{multline}
Since this inequality is valid for any $n \in \mathbb{N} $, we can take the limit $n \longrightarrow \infty $ and we obtain that
\begin{equation*}
\|U_{F _1}({\bf z})-U_{F _2}({\bf z})\| _\infty\leq \frac{\delta(\epsilon)}{1-r}= \epsilon.
\end{equation*}
Additionally, as this relation is valid for any ${\bf z} \in K _M $, we can conclude that
\begin{equation*}
\vertiii{U_{F _1}-U_{F _2}}_{\infty}=\sup_{{\bf z} \in K _M} \left\{\left\|U_{F _1} ({\bf z})-U_{F _2} ({\bf z})\right\|_\infty\right\}\leq \epsilon,
\end{equation*}
as required. \quad $\blacksquare$
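The stability estimate just proved can be checked numerically: perturbing a contracting reservoir map uniformly by $\delta$ moves the associated solution by at most $\delta/(1-r)$. A sketch under the same illustrative scalar setup (the map and the perturbation $0.001$ are hypothetical choices):

```python
import math

r = 0.5
def F1(x, z): return r * math.tanh(x) + z
def F2(x, z): return r * math.tanh(x) + z + 0.001   # uniform perturbation delta

def solution(F, z):
    # long forward iteration approximates the solution at the final time
    x = 0.0
    for zt in z:
        x = F(x, zt)
    return x

z = [math.sin(0.3 * t) for t in range(400)]
gap = abs(solution(F1, z) - solution(F2, z))
# the bound from the proof: the solutions differ by at most delta / (1 - r)
assert gap <= 0.001 / (1 - r) + 1e-9
```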
\medskip
As a straightforward corollary of the first part of the previous theorem, it is easy to show that echo state networks always have (generalized) reservoir filters associated to them, as well as to formulate conditions that simultaneously ensure the echo state and the fading memory properties.
We recall that a map $\sigma: \mathbb{R} \longrightarrow [-1,1] $ is a {\bfseries\itshape squashing function} if it is non-decreasing, $\lim_{x \rightarrow -\infty} \sigma(x)=-1 $, and $\lim_{x \rightarrow \infty} \sigma(x)=1 $.
\begin{corollary}
\label{esns have filters}
Consider the echo state network given by
\begin{empheq}[left={\empheqlbrace}]{align}
\mathbf{x} _t &=\sigma \left(A\mathbf{x}_{t-1}+ C{\bf z} _t+ \boldsymbol{\zeta}\right),\label{esn reservoir equation theorem prep}\\
{\bf y} _t &= {W}\mathbf{x} _t, \label{esn readout theorem prep}
\end{empheq}
where $C \in \mathbb{M}_{N, n} $ for some $N \in \mathbb{N} $, $\boldsymbol{\zeta} \in \mathbb{R} ^N $, $A \in \mathbb{M}_{N,N}$, $W \in \mathbb{M}_{d, N} $, and the input signal ${\bf z} \in \left(D_n\right)^{\mathbb{Z}}$, with $D_n \subset \mathbb{R}^n$ a compact and convex subset.
The function $\sigma :\mathbb{R}^N \longrightarrow [-1,1] ^N$ in \eqref{esn reservoir equation theorem prep} is constructed by componentwise application of a squashing function that we also call $\sigma$. Then:
\begin{description}
\item [(i)] If the squashing function $\sigma $ is continuous, then the reservoir equation \eqref{esn reservoir equation theorem prep} has the existence of solutions property and we can hence associate to the system \eqref{esn reservoir equation theorem prep}-\eqref{esn readout theorem prep} a generalized reservoir filter.
\item [(ii)] If the squashing function $\sigma $ is differentiable with Lipschitz constant $L _\sigma:=\sup_{x \in \mathbb{R}}\{| \sigma' (x)|\} < \infty$ and the matrix $A$ is such that $\left\|A\right\|_2 L _\sigma= \sigma_{{\rm max}}(A) L _\sigma<1$, then the reservoir system \eqref{esn reservoir equation theorem prep}-\eqref{esn readout theorem prep} has the echo state and the fading memory properties and we can hence associate to it a unique time-invariant reservoir filter.
\end{description}
The statement in part {\bf (i)} remains valid when $[-1,1] ^N$ is replaced by a compact and convex subset $D_N \subset [-1,1] ^N$ that is left invariant by the reservoir equation \eqref{esn reservoir equation theorem prep}, that is, such that $ \sigma \left(A\mathbf{x} + C{\bf z} + \boldsymbol{\zeta}\right) \in D_N$ for any $\mathbf{x} \in D_N$ and any ${\bf z} \in D _n$. The same applies to part {\bf (ii)}, for which only the compactness hypothesis is needed.
\end{corollary}
\begin{remark}
\normalfont
The hypothesis $\left\|A\right\|_2 L _\sigma<1 $ appears in the literature as a sufficient condition to ensure the echo state property, which has been extensively studied in the ESN literature~\cite{jaeger2001, Jaeger04, Buehner:ESN, zhang:echo, Yildiz2012, Wainrib2016, Manjunath:Jaeger}. Our result shows that this condition
implies automatically the fading memory property. Nevertheless, that condition is far from being sharp and has been significantly improved in \cite{Buehner:ESN, Yildiz2012}. We point out that the enhanced sufficient conditions for the echo state property contained in those references also imply the fading memory property via part {\bf (ii)} of Theorem \ref{uniform approx theorem}.
\end{remark}
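Part {\bf (ii)} of the corollary is easy to test numerically: rescaling a random connectivity matrix so that $\|A\|_2 L_\sigma < 1$ (here $\sigma=\tanh$, so $L_\sigma = 1$) yields a state map that forgets its initial condition. A minimal Python sketch with illustrative dimensions and random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 20, 3
A = rng.standard_normal((N, N))
A *= 0.9 / np.linalg.norm(A, 2)        # rescale so that ||A||_2 = 0.9 < 1
C = rng.standard_normal((N, n))
zeta = rng.standard_normal(N)

def esn_step(x, z):
    # tanh is a differentiable squashing function with L_sigma = 1, so
    # ||A||_2 * L_sigma = 0.9 < 1 and part (ii) of the corollary applies
    return np.tanh(A @ x + C @ z + zeta)

z = rng.uniform(-1, 1, size=(500, n))
x, y = np.zeros(N), np.ones(N) * 5.0   # two different initial conditions
for zt in z:
    x, y = esn_step(x, zt), esn_step(y, zt)
assert np.linalg.norm(x - y) < 1e-10   # echo state property: states coincide
```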
\section{Echo state networks as universal uniform approximants}
\label{Echo state networks as universal uniform approximants}
The internal approximation property that we introduced in part {\bf (ii)} of Theorem \ref{uniform approx theorem} tells us that we can approximate any reservoir filter by finding an approximant for the reservoir system that generates it. This reduces the problem of proving a density statement in a space of operators between infinite-dimensional spaces to one in a space of functions with finite-dimensional domains and targets, a topic that is the subject of many results in approximation theory, some of which we mentioned in the introduction. This strategy allows one to find simple approximating reservoir filters for any reservoir system that has the fading memory property. In the next result we use as approximating family the echo state networks that we presented in the introduction and that, as we see later on, are the natural generalizations of neural networks in a dynamic learning setup, with the important added feature that they are constructed using linear readouts. The combination of this approach with a previously obtained result \cite{RC6} on the density of reservoir filters in the fading memory category allows us to prove in the next theorem that echo state networks can approximate any fading memory filter. In other words, {\it echo state networks are universal}.
All along this section, we use the Euclidean norm for the finite dimensional spaces, that is, for each $\mathbf{x}\in \mathbb{R} ^n $, we write $\left\|\mathbf{x}\right\|:=\left(\sum_{i=1}^n x _i^2 \right)^{1/2} $. For any $M>0 $, the symbol $B_{\left\|\cdot \right\|} ({\bf 0}, M)$ (respectively $\overline{B_{\left\|\cdot \right\|} ({\bf 0}, M)}$) denotes here the open (respectively closed) balls with respect to that norm. Additionally, we set $I _n:=B_{\left\|\cdot \right\|} ({\bf 0}, 1)$.
\begin{theorem}
\label{ESN universality theorem}
Let $U:I _n^{\mathbb{Z}_{-}} \longrightarrow \left(\mathbb{R}^d\right)^{\mathbb{Z}_{-}} $ be a causal and time-invariant filter that has the fading memory property. Then, for any $\epsilon>0 $ and any weighting sequence $w$, there is an echo state network
\begin{empheq}[left={\empheqlbrace}]{align}
\mathbf{x} _t &=\sigma \left(A\mathbf{x}_{t-1}+ C{\bf z} _t+ \boldsymbol{\zeta}\right),\label{esn reservoir equation theorem}\\
{\bf y} _t &= {W}\mathbf{x} _t, \label{esn readout theorem}
\end{empheq}
whose associated generalized filters $U_{{\rm ESN}}:I _n^{\mathbb{Z}_{-}} \longrightarrow \left(\mathbb{R}^d\right)^{\mathbb{Z}_{-}} $ satisfy that
\begin{equation}
\label{esn approx}
\vertiii{U-U_{{\rm ESN}}}_{\infty}< \epsilon.
\end{equation}
In these expressions $C \in \mathbb{M}_{N, n} $ for some $N \in \mathbb{N} $, $\boldsymbol{\zeta} \in \mathbb{R} ^N $, $A \in \mathbb{M}_{N,N}$, and $W \in \mathbb{M}_{d, N} $.
The function $\sigma :\mathbb{R}^N \longrightarrow [-1,1]^N$ in \eqref{esn reservoir equation theorem} is constructed by componentwise application of a continuous squashing function $\sigma:\mathbb{R} \longrightarrow [-1,1]$ that we denote with the same symbol.
When the approximating echo state network \eqref{esn reservoir equation theorem}-\eqref{esn readout theorem} satisfies the echo state property, it has a unique associated filter $U_{{\rm ESN}} $, which is necessarily time-invariant. The corresponding reservoir functional $H_{{\rm ESN}}:I _n^{\mathbb{Z}_{-}} \longrightarrow \mathbb{R}^d $ satisfies
\begin{equation}
\label{esn approx functional}
\vertiii{H _U-H_{{\rm ESN}}}_{\infty}< \epsilon.
\end{equation}
\end{theorem}
\begin{remark}
\normalfont
Echo state networks are generally used in practice in the following way: the architecture parameters $A$, $C$, and $\boldsymbol{\zeta}$ are drawn at random from a given distribution and it is only the readout matrix $W$ that is trained using a teaching signal by solving a linear regression problem. It is important to emphasize that the universality theorem that we just stated does not completely explain the empirically observed robustness of ESNs with respect to the choice of those parameters. In the context of standard feedforward neural networks this feature has been addressed using, for example, the so called extreme learning machines \cite{Huang2006}. In dynamical setups and for ESNs this question remains an open problem that will be addressed in future works.
\end{remark}
\noindent\textbf{Proof of the theorem.\ \ }As we already explained, we proceed by first approximating the filter $U$ by one of the non-homogeneous state-affine system (SAS) reservoir filters introduced in \cite{RC6}, and we later on show that we can approximate that reservoir filter by an echo state network like the one in \eqref{esn reservoir equation theorem}-\eqref{esn readout theorem}.
We start by recalling that a non-homogeneous state-affine system is a reservoir system determined by the state-space transformation:
\begin{empheq}[left={\empheqlbrace}]{align}
\mathbf{x} _t &=p({\bf z _t})\mathbf{x}_{t-1}+q( {{\bf z}} _t),\label{sas reservoir equation rc7}\\
{\bf y} _t &= W_1\mathbf{x} _t, \label{sas readout rc7}
\end{empheq}
where the inputs ${\bf z _t} \in I _n:=B_{\left\|\cdot \right\|} ({\bf 0}, 1)$, the states $\mathbf{x} _t \in \mathbb{R}^{N _1}$, for some $N _1 \in \mathbb{N} $, and $W _1\in \mathbb{M}_{d,N _1}$. The symbols $p({\bf z _t}) $ and $q({\bf z _t})$ stand for polynomials with matrix coefficients and degrees $r$ and $s$, respectively, of the form:
\begin{eqnarray*}
p({\bf z})&=&\sum_{{i _1, \ldots, i _n \in \left\{0, \ldots, r\right\} \above 0 pt i _1+ \cdots + i _n\leq r}}
z _1^{i _1} \cdots z _n^{i _n} A_{{i _1, \ldots, i _n}}, \quad A_{{i _1, \ldots, i _n}} \in \mathbb{M}_{N_1}, \quad {\bf z} \in I _n\\
q({\bf z})&=&\sum_{{i _1, \ldots, i _n \in \left\{0, \ldots, s\right\} \above 0 pt i _1+ \cdots + i _n\leq s}}
z _1^{i _1} \cdots z _n^{i _n} B_{{i _1, \ldots, i _{n}}}, \quad B_{{i _1, \ldots, i _n}} \in \mathbb{M}_{N_1,1}, \quad {\bf z} \in I _n.
\end{eqnarray*}
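For concreteness, the SAS state equation \eqref{sas reservoir equation rc7} can be simulated by evaluating the matrix polynomials at each input. A sketch assuming, purely for illustration, degree-one polynomials $p$ and $q$ in a scalar input, with small random coefficient matrices so that contraction-type bounds of the kind discussed below plausibly hold:

```python
import numpy as np

rng = np.random.default_rng(1)
N1 = 4
# illustrative degree-one matrix polynomials p(z) = A0 + z*A1, q(z) = B0 + z*B1
A0, A1 = rng.standard_normal((2, N1, N1)) * 0.05
B0, B1 = rng.standard_normal((2, N1, 1)) * 0.05

def p(z): return A0 + z * A1
def q(z): return B0 + z * B1

x = np.zeros((N1, 1))
for z in np.sin(0.2 * np.arange(100)):
    x = p(z) @ x + q(z)      # SAS state equation: x_t = p(z_t) x_{t-1} + q(z_t)

W1 = rng.standard_normal((1, N1))
y = W1 @ x                   # linear readout: y_t = W1 x_t
assert x.shape == (N1, 1) and y.shape == (1, 1)
assert np.all(np.isfinite(x))
```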
Let $L>0$ and choose a real number $K$ such that
\begin{equation}
\label{condition on k}
0<K< \frac{L}{L+1}<1.
\end{equation}
Consider now SAS filters that satisfy that $\max _{{\bf z} \in I_n}\sigma_{{\rm max}}(p ({\bf z}))<K $ and $ \max _{{\bf z} \in I_n}\sigma_{{\rm max}}(q ({\bf z})) <K $. It can be shown \cite[Proposition 3.7]{RC6} that under those hypotheses, the reservoir system~\eqref{sas reservoir equation rc7}-\eqref{sas readout rc7} has the echo state property and defines a unique causal, time-invariant, and fading memory filter $U_{{W_{1}}}^{p,q}:I_n^{\Bbb Z_-} \longrightarrow (\mathbb{R} ^d)^{\Bbb Z_-} $. Moreover, Theorem 3.12 in \cite{RC6} shows that for any $\epsilon_1>0 $, there exists a SAS filter $U_{{W_{1}}}^{p,q} $ satisfying the hypotheses that we just discussed, for which
\begin{equation}
\label{first approximation U by functionals}
\vertiii{H_U-H_{{W_{1}}}^{p,q}}_{\infty}< \epsilon _1,
\end{equation}
where $H_U$ and $H_{{W_{1}}}^{p,q}$ are the reservoir functionals associated to $U$ and $U_{{ W_{1}}}^{p,q} $, respectively. Proposition \ref{linear homeomorphism prop} together with this inequality imply that
\begin{equation}
\label{first approximation U}
\vertiii{U-U_{{W_{1}}}^{p,q}}_{\infty}< \epsilon _1.
\end{equation}
We now show that the SAS filter $U_{{W_{1}}}^{p,q} $ can be approximated by the filters generated by an echo state network. Define the map
\begin{equation}
\label{sas first version}
\begin{array}{cccc}
F_{{\rm SAS}}: &\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} \times I _n &\longrightarrow &\mathbb{R}^{N _1} \\
&(\mathbf{x}, {\bf z})&\longmapsto & p({\bf z })\mathbf{x}+q( {{\bf z}}),
\end{array}
\end{equation}
with $B_{\left\|\cdot \right\|} ({\bf 0}, L) \subset \mathbb{R}^{N _1} $ and $p$ and $q$ the polynomials associated to the approximating SAS filter $U_{{W_{1}}}^{p,q} $ in \eqref{first approximation U}.
The prescription on the choice of the constant $K$ in \eqref{condition on k} has two main consequences. Firstly, the map $F_{{\rm SAS}} $ is a contraction. Indeed, for any $(\mathbf{x}, {\bf z}), (\mathbf{y}, {\bf z}) \in \overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} \times I _n $:
\begin{equation}
\label{fsas contraction}
\left\|F_{{\rm SAS}}(\mathbf{x}, {\bf z})- F_{{\rm SAS}}(\mathbf{y}, {\bf z})\right\|\leq \left\|p({\bf z })\mathbf{x}-p({\bf z })\mathbf{y} \right\|\leq \left\|p({\bf z})\right\|_2 \left\|\mathbf{x}- {\bf y}\right\|\leq K\left\|\mathbf{x}- {\bf y}\right\|.
\end{equation}
The map $F_{{\rm SAS}} $ is hence a contraction since $K<1 $ by hypothesis. Secondly, $\left\|F_{{\rm SAS}}\right\|_ \infty<L $ because, by \eqref{condition on k},
\begin{equation*}
\left\|F_{{\rm SAS}}\right\|_ \infty=\sup_{(\mathbf{x}, {\bf z})\in \overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} \times I _n}\{ \left\|p({\bf z })\mathbf{x}+q( {{\bf z}})\right\|\}\leq
\sup_{(\mathbf{x}, {\bf z})\in \overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} \times I _n} \{\left\|p({\bf z })\right\|_2
\left\|\mathbf{x}\right\|
+\left\|q( {{\bf z}})\right\|\}\leq KL+K<L.
\end{equation*}
This implies, in particular, that the map $F_{{\rm SAS}} $ maps into $\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} $ and hence \eqref{sas first version} can be rewritten as
\begin{equation*}
F_{{\rm SAS}}: \overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} \times I _n \longrightarrow \overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)}.
\end{equation*}
Additionally, we set
\begin{equation}
\label{def of l_1}
L _1:=\left\|F_{{\rm SAS}}\right\|_ \infty<L.
\end{equation}
The uniform density on compacta of the family of feedforward neural networks with one hidden layer proved in \cite{cybenko, hornik} guarantees that for any $\epsilon_2>0 $, there exists $N \in \mathbb{N}$, $G \in \mathbb{M}_{N, N_1} $, $C \in \mathbb{M}_{N ,n} $, $E \in \mathbb{M}_{N_1, N} $, and $\boldsymbol{\zeta} \in {\Bbb R}^N $, such that the map defined by
\begin{equation}
\label{NN first version}
\begin{array}{cccc}
F_{{\rm NN}}: &\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} \times I _n &\longrightarrow &\mathbb{R}^{N _1} \\
&(\mathbf{x}, {\bf z})&\longmapsto & E \sigma \left(G \mathbf{x}+ C {\bf z}+ \boldsymbol{\zeta}\right),
\end{array}
\end{equation}
satisfies that
\begin{equation}
\label{approx of f by fnn}
\left\|F_{{\rm NN}} -F _{{\rm SAS}}\right\|_{\infty}=\sup_{\mathbf{x} \in \overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)},\, \mathbf{z} \in I _n} \left\{\left\|F_{{\rm NN}} (\mathbf{x}, {\bf z})-F _{{\rm SAS}}(\mathbf{x}, {\bf z})\right\|\right\}< \epsilon _2.
\end{equation}
The combination of \eqref{approx of f by fnn} with the reverse triangle inequality implies that $\left\|F_{{\rm NN}}\right\|_{\infty} -\left\|F _{{\rm SAS}}\right\|_{\infty}< \epsilon _2 $ or, equivalently,
\begin{equation}
\label{fnn and fsas epsilon}
\left\|F_{{\rm NN}}\right\|_{\infty} <\left\|F _{{\rm SAS}}\right\|_{\infty}+ \epsilon _2.
\end{equation}
Given that $\left\|F_{{\rm SAS}}\right\|_ \infty=L _1<L $, if we choose $\epsilon _2>0 $ small enough so that
$L _1+ \epsilon _2< L $ or, equivalently,
\begin{equation}
\label{condition on epsilon2}
\epsilon _2< L- L _1,
\end{equation}
then \eqref{fnn and fsas epsilon} guarantees that $\left\|F_{{\rm NN}}\right\|_ \infty<L $, which shows that $F_{{\rm NN}} $ maps into $B_{\left\|\cdot \right\|} ({\bf 0}, L) $, that is, we can write that
\begin{equation}
\label{fnn maps to ball}
F_{{\rm NN}}: \overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} \times I _n \longrightarrow \overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)}.
\end{equation}
The continuity of the map $F _{{\rm NN}}$ and the first part of Theorem \ref{uniform approx theorem} imply that the corresponding reservoir equation has the existence of solutions property and that we can hence associate to it a (generalized) filter $U_{F _{{\rm NN}}} $. At the same time, as we proved in \eqref{fsas contraction}, the map $F_{{\rm SAS}}$ is a contraction with constant $K<1$. These facts, together with \eqref{approx of f by fnn} and the internal approximation property in Theorem \ref{uniform approx theorem} allow us to conclude that the (unique) reservoir filter $U_{F _{{\rm SAS}}} $ associated to the reservoir map $F _{{\rm SAS}} $ is such that
\begin{equation}
\label{eps with 2 nana}
\vertiii{U_{F_{{\rm NN}}} -U_{F _{{\rm SAS}}}}_{\infty}< \epsilon _2/(1-K).
\end{equation}
Consider now the readout map $h_{W _1}: \mathbb{R}^{N _1}\longrightarrow \mathbb{R}^d $ given by $h_{W _1}(\mathbf{x}):=W _1 \mathbf{x} $ and let $U_{F_{{\rm NN}}}^{h_{W _1}}:(I _n) ^{\mathbb{Z}_{-}} \longrightarrow (\mathbb{R} ^d) ^{\mathbb{Z}_{-}} $ be the filter given by $U_{F_{{\rm NN}}}^{h_{W _1}}({\bf z})_t:=W _1U_{F_{{\rm NN}}}({\bf z})_t $, $t \in \mathbb{Z}_{-} $. Analogously, define $U_{F_{{\rm SAS}}}^{h_{W _1}}:(I _n) ^{\mathbb{Z}_{-}} \longrightarrow (\mathbb{R} ^d) ^{\mathbb{Z}_{-}} $ and notice that $U_{F_{{\rm SAS}}}^{h_{W _1}}=U^{p,q}_{W _1} $. Using these observations and \eqref{eps with 2 nana} we have proved that for any $\epsilon _2> 0$ we can find a filter of the type $U_{F_{{\rm NN}}}^{h_{W _1}} $ that satisfies that
\begin{equation}
\label{eps with 2 nana2}
\vertiii{U^{p,q}_{W _1}-U_{F_{{\rm NN}}}^{h_{W _1}} }_{\infty}\leq \left\|W _1\right\|_2 \vertiii{ U_{F _{{\rm SAS}}}- U_{F_{{\rm NN}}}}_{\infty}< \left\|W _1\right\|_2\epsilon _2/(1-K).
\end{equation}
Consequently, for any $\epsilon>0 $, if we first set $\epsilon_1= \epsilon/2 $ in \eqref{first approximation U by functionals} and we then choose
\begin{equation}
\label{epsilon2 22}
\epsilon_2:=\min \left\{\frac{\epsilon(1-K) }{2\left\|W _1\right\|_2}, \frac{L-L _1}{2}\right\},
\end{equation}
in view of \eqref{condition on epsilon2} and \eqref{eps with 2 nana2}, we can guarantee using \eqref{first approximation U} and \eqref{eps with 2 nana2} that
\begin{equation}
\label{epsilon approx by NSNs}
\vertiii{U-U_{F_{{\rm NN}}}^{h_{W _1}} }_{\infty}\leq \vertiii{U-U^{p,q}_{W _1} }_{\infty} + \vertiii{U^{p,q}_{W _1}-U_{F_{{\rm NN}}}^{h_{W _1}} }_{\infty}\leq \frac{\epsilon}{2}+\frac{\epsilon}{2}= \epsilon.
\end{equation}
In order to conclude the proof it suffices to show that the filter $U_{F_{{\rm NN}}}^{h_{W _1}} $ can be realized as the reservoir filter associated to an echo state network of the type presented in the statement. We carry this out by using the elements that appeared in the construction of the reservoir map $F_{{\rm NN}} $ in \eqref{NN first version} to define a new reservoir map $F_{{\rm ESN}} $ with the architecture of an echo state network. Let $A:=GE \in \mathbb{M}_N $ and define
\begin{equation}
\label{ESN first version}
\begin{array}{cccc}
F_{{\rm ESN}}: &D _N \times I _n &\longrightarrow &\mathbb{R}^{N } \\
&(\mathbf{x}, {\bf z})&\longmapsto & \sigma \left(A \mathbf{x}+ C {\bf z}+ \boldsymbol{\zeta}\right).
\end{array}
\end{equation}
The set $D_N $ in the domain of $F_{{\rm ESN}} $ is given by
\begin{equation}
\label{domain of ESN}
D_N:=[-1,1]^N\cap E ^{-1}(\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)}),
\end{equation}
where $E ^{-1}(\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)}) $ denotes the preimage of the set $\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} \subset \mathbb{R}^{N _1} $ by the linear map $E: \mathbb{R}^N \longrightarrow \mathbb{R}^{N _1} $ associated to the matrix $E \in \mathbb{M}_{N_1, N} $. This set is compact: $E ^{-1}(\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)}) $ is closed, $[-1,1]^N $ is compact, and hence $D_N $ is a closed subset of a compact space, which is always compact \cite[Theorem 26.2]{Munkres:topology}. Additionally, $D_N $ is convex because $[-1,1]^N $ is convex and $E ^{-1}(\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)})$ is the preimage of a convex set by a linear map, which is always convex.
We note now that the image of $F_{{\rm ESN}} $ is contained in $D _N $. First, as the squashing function maps into the interval $[-1,1]$, it is clear that
\begin{equation}
\label{step 1 esn inclusion}
F_{{\rm ESN}} \left(D_N, I _n\right)\subset [-1,1] ^N.
\end{equation}
Second, for any $\mathbf{x} \in D _N $ we have by construction that $\mathbf{x} \in E ^{-1}(\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)}) $ and hence $E \mathbf{x} \in \overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} $. Since by \eqref{fnn maps to ball} $F _{{\rm NN}} $ maps into $\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} $, we can ensure that for any ${\bf z} \in I _n $, the image $F _{{\rm NN}}(E \mathbf{x}, {\bf z}) =E \sigma \left(GE \mathbf{x}+ C {\bf z}+ \boldsymbol{\zeta}\right)=E \sigma \left(A \mathbf{x}+ C {\bf z}+ \boldsymbol{\zeta}\right) \in \overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} $ or, equivalently,
\begin{equation}
\label{step 2 esn inclusion}
F_{{\rm ESN}}(\mathbf{x}, {\bf z})=\sigma \left(A \mathbf{x}+ C {\bf z}+ \boldsymbol{\zeta}\right) \in E ^{-1}(\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)}).
\end{equation}
The relations \eqref{step 1 esn inclusion} and \eqref{step 2 esn inclusion} imply that
\begin{equation}
\label{step 3 esn inclusion}
F_{{\rm ESN}} \left(D_N, I _n\right)\subset D_N,
\end{equation}
and hence, we can rewrite \eqref{ESN first version} as
\begin{equation*}
F_{{\rm ESN}} :D _N \times I _n \longrightarrow D _N.
\end{equation*}
The continuity of the map $F _{{\rm ESN}}$ and the compactness and convexity of the set $D_N \subset \mathbb{R}^N$ that we established above allow us to use the first part of Theorem \ref{uniform approx theorem} to conclude that the corresponding reservoir equation has the existence of solutions property and that we can hence associate to it a (generalized) filter $U_{F _{{\rm ESN}}} $. Let $W:=W _1E \in \mathbb{M}_{ d,N}$ and define the readout map $h_{{\rm ESN}}:D _N \longrightarrow \mathbb{R}^d $ by $h_{{\rm ESN}}(\mathbf{x}):=W \mathbf{x}= W _1E \mathbf{x} $. Denote by $U _{{\rm ESN} } $ any generalized reservoir filter associated to the echo state network system $\left(F_{{\rm ESN}},h_{{\rm ESN}}\right) $ that, by construction, satisfies $U_{{\rm ESN}}({\bf z})_t:= h_{{\rm ESN}}(U_{F _{{\rm ESN}}}({\bf z})_t)=WU_{F _{{\rm ESN}}}({\bf z})_t$, for any ${\bf z} \in I _n$ and $t \in \Bbb Z_- $.
We next show that the map $f: D_N=[-1,1]^N\cap E ^{-1}(\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)}) \longrightarrow\overline{B_{\left\|\cdot \right\|} ({\bf 0}, L)} $ given by $f (\mathbf{x}):= E \mathbf{x} $ is a morphism between the echo state network system $\left(F_{{\rm ESN}},h_{{\rm ESN}}\right) $ and the reservoir system $\left(F_{{\rm NN}},h_{W _1}\right) $. Indeed, the reservoir equivariance property holds because, for any $(\mathbf{x}, {\bf z}) \in D _N \times I _n $, the definitions \eqref{NN first version} and \eqref{ESN first version} ensure that
\begin{equation*}
f(F_{{\rm ESN}}(\mathbf{x}, {\bf z}))=E\sigma \left(A \mathbf{x}+ C {\bf z}+ \boldsymbol{\zeta}\right)=E\sigma \left(GE \mathbf{x}+ C {\bf z}+ \boldsymbol{\zeta}\right)=F_{{\rm NN}}(E\mathbf{x}, {\bf z})=F_{{\rm NN}}(f(\mathbf{x}), {\bf z}).
\end{equation*}
The readout invariance is obvious. This fact and the second part of Proposition \ref{morphisms consequences} imply that all the generalized filters $U _{{\rm ESN} } $ associated to the echo state network are actually filters generated by the system $\left(F_{{\rm NN}},h_{W _1}\right) $. This means that for each generalized filter $U _{{\rm ESN} } $ there exists a generalized filter of the type $U_{F_{{\rm NN}}}^{h_{W _1}} $ such that $U _{{\rm ESN} }=U_{F_{{\rm NN}}}^{h_{W _1}} $. The inequality \eqref{epsilon approx by NSNs} then proves \eqref{esn approx} in the statement of the theorem. The last claim in the theorem is a straightforward consequence of Propositions \ref{esp implies ti} and \ref{linear homeomorphism prop}.\quad $\blacksquare$
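The morphism property used in the last step can be verified numerically: with $A := GE$, applying $E$ after the echo state network step agrees with the neural network step applied to $E\mathbf{x}$, up to floating-point rounding. A sketch with illustrative random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
N1, N, n = 3, 8, 2
G = rng.standard_normal((N, N1))
E = rng.standard_normal((N1, N))
C = rng.standard_normal((N, n))
zeta = rng.standard_normal(N)
A = G @ E                                   # the key definition A := GE

def F_NN(x, z):  return E @ np.tanh(G @ x + C @ z + zeta)
def F_ESN(x, z): return np.tanh(A @ x + C @ z + zeta)

x = rng.standard_normal(N)
z = rng.uniform(-1, 1, n)
# reservoir equivariance: f(F_ESN(x, z)) = F_NN(f(x), z) with f(x) = Ex
assert np.allclose(E @ F_ESN(x, z), F_NN(E @ x, z))
```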
\section{Appendices}
\subsection{Proof of Proposition \ref{esp implies ti}}
\label{proof of lemma esp implies ti}
Let $\tau \in \mathbb{N} $ and let $T_\tau^n:(D_n) ^{\Bbb Z} \longrightarrow(D_n) ^{\Bbb Z} $ and $T_\tau^N:(D_N) ^{\Bbb Z} \longrightarrow(D_N) ^{\Bbb Z} $ be the corresponding time delay operators. For any ${\bf z} \in (D_n) ^{\Bbb Z} $, let ${\bf x} \in (D_N) ^{\Bbb Z} $ be the unique solution of the reservoir system determined by $F$, that is,
\begin{equation}
\label{ttau 1}
\mathbf{x}:=U ^F({\bf z}).
\end{equation}
Then, for any $t \in \Bbb Z $,
\begin{equation}
\label{ttau 2}
\left( T _\tau^N \circ U ^F \right)({\bf z})_t= \mathbf{x}_{t- \tau}.
\end{equation}
Analogously, let $\widetilde{ \mathbf{x}} \in (D_N) ^{\Bbb Z} $ be the unique solution of $F$ associated to the input $T _\tau ^n({\bf z}) $, that is,
\begin{equation}
\label{ttau 3}
\widetilde{\mathbf{x}} _t=\left( U ^F\circ T _\tau ^n({\bf z}) \right)_t, \quad \mbox{for any} \quad t \in \Bbb Z.
\end{equation}
By construction, the sequence $\widetilde{\mathbf{x}} $ satisfies that
\begin{equation*}
\widetilde{\mathbf{x}} _t= F \left(\widetilde{\mathbf{x}} _{t-1},T _\tau ^n({\bf z})_t\right)= F \left(\widetilde{\mathbf{x}} _{t-1},{\bf z}_{t- \tau}\right), \quad \mbox{for any} \quad t \in \Bbb Z.
\end{equation*}
If we set $s:=t - \tau $, this expression can be rewritten as
\begin{equation}
\label{ttau 4}
\widetilde{\mathbf{x}} _{s+ \tau}= F \left(\widetilde{\mathbf{x}} _{s+ \tau-1},{\bf z}_{s}\right), \quad \mbox{for any} \quad s \in \Bbb Z,
\end{equation}
and if we define $\widehat{\mathbf{x}} _s:= \widetilde{\mathbf{x}} _{s+ \tau} $, the equality \eqref{ttau 4} becomes
\begin{equation*}
\widehat{\mathbf{x}} _s=F \left(\widehat{\mathbf{x}} _{s-1}, {\bf z} _s \right), \quad \mbox{for any} \quad s \in \Bbb Z,
\end{equation*}
which shows that $\widehat{\mathbf{x}} \in (D_N) ^{\Bbb Z}$ is a solution of $F$ determined by the input ${\bf z}\in (D_n) ^{\Bbb Z} $. Since the sequence $\mathbf{x} \in (D_N) ^{\Bbb Z}$ in \eqref{ttau 1} is also a solution of $F$ for the same input, the echo state property hypothesis on the systems determined by $F$ implies that, necessarily, $\mathbf{x}= \widehat{\mathbf{x}} $. This implies that $\mathbf{x}_{t- \tau}= \widehat{\mathbf{x}}_{t- \tau} $ for all $t \in \Bbb Z $, which is equivalent to $\widetilde{\mathbf{x}} _t= \mathbf{x}_{t- \tau}$. This equality guarantees that \eqref{ttau 2} and \eqref{ttau 3} coincide and, since ${\bf z} \in (D_n) ^{\Bbb Z}$ is arbitrary, we have that
\begin{equation*}
T _\tau^N \circ U ^F=U ^F\circ T _\tau ^n,
\end{equation*}
as required. \quad $\blacksquare$
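The reindexing in this proof can be checked on a finite truncation: feeding a delayed copy of the input into a contracting system yields, after a transient, the delayed state trajectory. A sketch with a hypothetical scalar contraction (the first $\tau$ entries of the shifted input are arbitrary and are forgotten by the contraction):

```python
import math

def F(x, z):
    return 0.5 * math.tanh(x) + z          # contraction with r = 0.5

T, tau = 400, 7
z = [math.cos(0.05 * t) for t in range(T)]
z_shift = z[:tau] + z[:-tau]               # stand-in for the delayed input T_tau(z)

def trajectory(F, z, x0):
    xs, x = [], x0
    for zt in z:
        x = F(x, zt)
        xs.append(x)
    return xs

x  = trajectory(F, z, 0.0)
xt = trajectory(F, z_shift, 0.0)
# after a transient, x~_t = x_{t - tau}: the filter commutes with time delays
assert all(abs(xt[t] - x[t - tau]) < 1e-9 for t in range(100, T))
```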
\subsection{Proof of Proposition \ref{continuous functional iff filter}}
Suppose first that $U$ is continuous. This implies the existence of a positive function $\delta_U (\epsilon) $ such that if $\mathbf{u},\mathbf{v} \in \left(D_n\right)^{\mathbb{Z}_-} $ are such that $\left\|\mathbf{u}- \mathbf{v}\right\|_{\infty} <\delta _U(\epsilon)$, then $\left\|U(\mathbf{u})-U(\mathbf{v})\right\|_{\infty}< \epsilon $. Under that hypothesis, it is clear that:
\begin{equation*}
\left\| H _U(\mathbf{u})-H _U(\mathbf{v})\right\|= \left\|U(\mathbf{u})_0-U(\mathbf{v})_0\right\|\leq \sup _{t \in \mathbb{Z}_{-} } \left\{ \left\|U(\mathbf{u})_t-U(\mathbf{v})_t\right\|\right\}=\left\|U(\mathbf{u})-U(\mathbf{v})\right\|_{\infty}< \epsilon,
\end{equation*}
which shows the continuity of $H_U : \left(\left(D_n\right)^{\mathbb{Z}_-}, \left\|\cdot \right\|_{\infty} \right) \longrightarrow \left(D_N, \left\|\cdot \right\|\right) $.
Conversely, suppose that $H : \left(\left(D_n\right)^{\mathbb{Z}_-}, \left\|\cdot \right\|_{\infty} \right) \longrightarrow \left(D_N, \left\|\cdot \right\|\right) $ is continuous and let $\delta_{H} (\epsilon)>0 $ be such that if $\left\|\mathbf{u}- \mathbf{v}\right\|_{\infty} <\delta _{H}(\epsilon)$ then $\left\| H (\mathbf{u})-H (\mathbf{v})\right\|< \epsilon$. Then, for any $t \in \mathbb{Z}_{-} $,
\begin{equation}
\label{intermediate hu}
\left\|U_H(\mathbf{u})_t-U_H(\mathbf{v})_t\right\|=
\left\| H ((\mathbb{P} _{\mathbb{Z}_{-}}\circ T_{-t})(\mathbf{u}))-H ((\mathbb{P} _{\mathbb{Z}_{-}}\circ T_{-t})(\mathbf{v}))\right\|< \epsilon,
\end{equation}
which proves the continuity of $U_H$.
The inequality in \eqref{intermediate hu} follows from the fact that for any $\mathbf{u}\in \left(D_n\right)^{\mathbb{Z}_-} $, the components of the sequence $(\mathbb{P} _{\mathbb{Z}_{-}}\circ T_{-t})(\mathbf{u}) $ are included in those of $\mathbf{u} $ and hence $\sup _{s \in \Bbb Z_-} \left\{\left\|((\mathbb{P} _{\mathbb{Z}_{-}}\circ T_{-t})(\mathbf{u}))_s\right\|\right\} \leq \sup _{s \in \Bbb Z_-} \left\{\left\|\mathbf{u}_s\right\|\right\}$ or, equivalently, $\left\|(\mathbb{P} _{\mathbb{Z}_{-}}\circ T_{-t})(\mathbf{u})\right\|_{\infty}\leq \left\|\mathbf{u}\right\|_{\infty} $. This implies that if $\left\|\mathbf{u}- \mathbf{v}\right\|_{\infty} <\delta _{H}(\epsilon)$ then $\left\|(\mathbb{P} _{\mathbb{Z}_{-}}\circ T_{-t})(\mathbf{u})- (\mathbb{P} _{\mathbb{Z}_{-}}\circ T_{-t})(\mathbf{v})\right\|_{\infty} <\delta _{H}(\epsilon)$ and hence \eqref{intermediate hu} holds. \quad $\blacksquare$
\subsection{Proof of Theorem \ref{product topology for uniformly bounded}}
\label{proof of product topology for uniformly bounded}
We first show that the map $D_w^M:({\Bbb R}^n)^{\mathbb{Z}_{-}} \times ({\Bbb R}^n)^{\mathbb{Z}_{-}} \longrightarrow [0, \infty) $ defined in \eqref{def of weighted metric} is indeed a metric. It is clear that $D_w^M(\mathbf{x}, {\bf y})\geq 0 $ and that $D_w^M(\mathbf{x}, {\bf x})= 0 $, for any $\mathbf{x}, {\bf y} \in ({\Bbb R}^n)^{\mathbb{Z}_{-}} $. Conversely, if $D_w^M(\mathbf{x}, {\bf y})= 0 $, this implies that $\overline{d} _M(\mathbf{x}_t, {\bf y}_t)w_{-t}\leq \sup_{t \in \mathbb{Z}_{-}} \left\{\overline{d} _M(\mathbf{x}_t, {\bf y}_t)w_{-t}\right\} =D_w^M(\mathbf{x}, {\bf y})=0$, which ensures that $\overline{d} _M(\mathbf{x}_t, {\bf y}_t)=0 $, for any $t \in \mathbb{Z}_{-} $, and hence $\mathbf{x}= {\bf y} $ necessarily since the map $\overline{d} _M$ is a metric in $\mathbb{R}^n$~\cite[Chapter 2, \textsection{20}]{Munkres:topology}. It is also obvious that $D_w^M(\mathbf{x}, {\bf y})=D_w^M(\mathbf{y}, {\bf x}) $. Regarding the triangle inequality, notice that for any $\mathbf{x}, {\bf y}, {\bf z} \in ({\Bbb R}^n)^{\mathbb{Z}_{-}} $ and $t \in \mathbb{Z}_{-} $:
\begin{equation*}
\overline{d} _M(\mathbf{x}_t, {\bf z}_t)w_{-t}\leq \overline{d} _M(\mathbf{x}_t, {\bf y}_t)w_{-t}+\overline{d} _M(\mathbf{y}_t, {\bf z}_t)w_{-t}\leq D_w^M(\mathbf{x}, {\bf y})+D_w^M(\mathbf{y}, {\bf z}),
\end{equation*}
which implies that
\begin{equation*}
D_w^M(\mathbf{x}, {\bf z})=\sup_{t \in \mathbb{Z}_{-}} \left\{\overline{d} _M(\mathbf{x}_t, {\bf z}_t)w_{-t}\right\}\leq D_w^M(\mathbf{x}, {\bf y})+D_w^M(\mathbf{y}, {\bf z}).
\end{equation*}
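Although no numerics appear in the original argument, the metric axioms just verified are easy to experiment with. The following Python sketch is purely illustrative and not part of the proof: the geometric weighting sequence, the truncation length, and the random data are our own choices, and the left-infinite sequences are approximated by finite windows in which `x[-1]` plays the role of the time-zero entry $\mathbf{x}_0$.

```python
import numpy as np

def d_bar(a, b, M):
    """Bounded metric on R^n: Euclidean distance truncated at M."""
    return min(np.linalg.norm(a - b), M)

def D_w_M(x, y, w, M):
    """sup_t d_bar(x_t, y_t) * w_{-t} over a finite truncation window."""
    T = len(w)
    return max(d_bar(x[-1 - k], y[-1 - k], M) * w[k] for k in range(T))

rng = np.random.default_rng(0)
T, n, M = 50, 3, 2.0
w = 0.9 ** np.arange(T)          # illustrative geometric weighting sequence
x, y, z = (rng.uniform(-1, 1, (T, n)) for _ in range(3))

assert D_w_M(x, x, w, M) == 0                                 # definiteness
assert abs(D_w_M(x, y, w, M) - D_w_M(y, x, w, M)) < 1e-12     # symmetry
assert D_w_M(x, z, w, M) <= D_w_M(x, y, w, M) + D_w_M(y, z, w, M) + 1e-12
```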
We now show that the metric topology on $({\Bbb R}^n)^{\mathbb{Z}_{-}} $ associated to $D_w^M $ coincides with the product topology. Let ${\bf x} \in ({\Bbb R}^n)^{\mathbb{Z}_{-}} $ and let $B_{D_w^M}(\mathbf{x}, \epsilon) $ be an $\epsilon $-ball around it with respect to the metric $D_w^M $. Let now $N \in \mathbb{N} $ be large enough so that $w _N < \epsilon/M $. We then show that the basis element $V$ for the product topology in $(\mathbb{R}^n)^{\mathbb{Z}_{-}} $ given by
\begin{equation*}
V:= \cdots \times {\Bbb R}^n\times {\Bbb R}^n\times B_{\overline{d} _M}(\mathbf{x}_{-N}, \epsilon) \times \cdots \times B_{\overline{d} _M}(\mathbf{x}_{-1}, \epsilon) \times B_{\overline{d} _M}(\mathbf{x}_{0}, \epsilon)
\end{equation*}
and that obviously contains the element $\mathbf{x}\in (\mathbb{R}^n)^{\mathbb{Z}_{-}} $, satisfies $V \subset B_{D_w^M}(\mathbf{x}, \epsilon) $. Indeed, since for any $ {\bf y}\in (\mathbb{R}^n)^{\mathbb{Z}_{-}} $ and any $t \in \mathbb{Z}_{-} $ we have that $\overline{d} _M(\mathbf{x} _t, {\bf y}_t)\leq M $, we can conclude that
\begin{equation*}
\overline{d} _M(\mathbf{x} _t, {\bf y}_t)w_{-t}\leq M w_{N}, \quad \mbox{for all} \quad t\leq -N.
\end{equation*}
Therefore, $D_w^M(\mathbf{x}, {\bf y}) \leq \max \left\{M w_{N}, \overline{d} _M(\mathbf{x} _{-N}, {\bf y}_{-N})w_{N}, \ldots, \overline{d} _M(\mathbf{x} _{-1}, {\bf y}_{-1})w_{1}, \overline{d} _M(\mathbf{x} _{0}, {\bf y}_{0})w_{0}\right\}$ and hence if
${\bf y}\in V $ this expression is smaller than $\epsilon $ which allows us to conclude the desired inclusion $V \subset B_{D_w^M}(\mathbf{x}, \epsilon) $.
Conversely, consider a basis element of the product topology given by $U=\prod_{t \in \mathbb{Z}_{-}}U _t $ where $U _t=B_{\overline{d} _M}(\mathbf{x}_{t}, \epsilon_t) $ for a finite set of indices $t \in \left\{\alpha _1, \ldots, \alpha_r\right\} $, $\epsilon _t\leq 1 $, and $U _t= \mathbb{R}^n $ for the rest. Let $\epsilon := {\rm min}_{t \in \left\{\alpha _1, \ldots, \alpha_r\right\}} \left\{\epsilon _t w _{-t}\right\}$. We now show that $B_{D_w^M}(\mathbf{x}, \epsilon) \subset U$. Indeed, if ${\bf y} \in B_{D_w^M}(\mathbf{x}, \epsilon) $ then $\overline{d} _M(\mathbf{x} _t, {\bf y}_t)w_{-t}\leq D_w^M(\mathbf{x}, {\bf y}) < \epsilon$, for all $t \in \mathbb{Z}_{-} $. If $t \in \left\{\alpha _1, \ldots, \alpha_r\right\} $ then $\epsilon\leq \epsilon _t w _{-t} $ and hence $\overline{d} _M(\mathbf{x} _t, {\bf y}_t)w_{-t}< \epsilon _t w _{-t} $, which ensures that $\overline{d} _M(\mathbf{x} _t, {\bf y}_t) < \epsilon _t $ and hence ${\bf y} \in U $, as desired.
We conclude by showing that $\left(({\Bbb R}^n)^{\mathbb{Z}_{-}}, D_w^M\right) $ is a complete metric space. First, notice that since for any $\mathbf{x}, {\bf y} \in ({\Bbb R}^n)^{\mathbb{Z}_{-}} $ and any given $t \in \mathbb{Z}_{-} $ we have that
\begin{equation*}
\overline{d} _M(\mathbf{x}_t, {\bf y}_t)\leq \frac{D_w^M(\mathbf{x}, {\bf y})}{w_{-t}},
\end{equation*}
we can conclude that if $ \left\{\mathbf{x} (i)\right\}_{i \in \mathbb{N}} $ is a Cauchy sequence in $({\Bbb R}^n)^{\mathbb{Z}_{-}} $, then so are the sequences $ \left\{\mathbf{x}_t (i)\right\}_{i \in \mathbb{N}} $ in ${\Bbb R}^n $, for any $t \in \mathbb{Z}_{-} $, with respect to the bounded metric $\overline{d} _M$. Since the completeness with respect to the bounded metric $\overline{d} _M$ and the Euclidean metric are equivalent~\cite[Chapter 7, \textsection{43}]{Munkres:topology} we can ensure that $ \left\{\mathbf{x}_t (i)\right\}_{i \in \mathbb{N}} $ converges to an element $\mathbf{a} _t \in {\Bbb R}^n $ with respect to the Euclidean metric for any $t \in \mathbb{Z}_{-} $. We now show that $ \left\{\mathbf{x} (i)\right\}_{i \in \mathbb{N}} $ converges to $\mathbf{a}:= \left(\mathbf{a} _t\right)_{t \in \mathbb{Z}_{-}} \in ({\Bbb R}^n)^{\mathbb{Z}_{-}} $, with respect to the metric $D_w^M$, which proves the completeness statement.
Indeed, since the metric $D_w^M $ generates the product topology, let $U=\prod_{t \in \mathbb{Z}_{-}}U _t $ be a basis element such that ${\bf a} \in U $ and, as before, $U _t=B_{\overline{d} _M}(\mathbf{a}_{t}, \epsilon_t) $ for a finite set of indices $t \in \left\{\alpha _1, \ldots, \alpha_r\right\} $, $\epsilon _t\leq 1 $, and $U _t= \mathbb{R}^n $ for the rest. Let $\epsilon=\min \left\{\epsilon _{\alpha _1}, \ldots, \epsilon_{\alpha _r}\right\} $. Since for each $t \in \mathbb{Z}_{-} $ the sequence $\mathbf{x}_t (i) \overset{i \rightarrow \infty}{\longrightarrow} \mathbf{a} _t $, there exists $N _t \in \mathbb{N} $ such that for any $k> N _t $ we have that $\left\|\mathbf{x} _t(k)- \mathbf{a} _t\right\|< \epsilon$. If we take $N _\epsilon=\max \left\{N _{\alpha _1}, \ldots, N_{\alpha _r}\right\} $ then it is clear that $\mathbf{x} (i) \in U$, for all $i > N _\epsilon $, as required. \quad $\blacksquare$
\subsection{Proof of Corollary \ref{all weighted norms are the same}}
\label{proof of all weighted norms are the same}
Notice first that for any $\mathbf{x}, {\bf y} \in K _M $, we have that $\left\|\mathbf{x}_t- {\bf y}_t\right\|\leq 2M $, $t \in \mathbb{Z}_{-} $, and hence
\begin{equation*}
D_w^{2M}(\mathbf{x}, {\bf y}):=\sup_{t \in \mathbb{Z}_{-}} \left\{\overline{d} _{2M}(\mathbf{x}_t, {\bf y}_t)w_{-t}\right\}=\sup_{t \in \mathbb{Z}_{-}} \left\{\left\|\mathbf{x}_t- {\bf y}_t\right\|w_{-t}\right\}= \left\|\mathbf{x}- {\bf y}\right\|_w.
\end{equation*}
Hence, the topology induced by the weighted norm $\left\|\cdot \right\|_w $ on $K _M $ coincides with the metric topology induced by the restricted metric $D_w^{2M}|_{K _M \times K _M} $ which, by Theorem \ref{product topology for uniformly bounded}, is the subspace topology induced by the product topology on $ \left({\Bbb R}^n\right)^{\mathbb{Z}_{-}} $ on $K _M$ (see \cite[Exercise 1, page 133]{Munkres:topology}), as well as the product topology on the product $K _M=\left(\overline{B_{\left\|\cdot \right\|}(\mathbf{0}, M)}\right)^{\mathbb{Z}_{-}}$ (see \cite[Theorem 19.3, page 116]{Munkres:topology}).\quad $\blacksquare$
\subsection{Proof of Corollary \ref{km compact complete}}
First, since $K _M=\left(\overline{B_{\left\|\cdot \right\|}(\mathbf{0}, M)}\right)^{\mathbb{Z}_{-}} $, it is clearly the product of compact spaces. By Tychonoff's Theorem (\cite[Chapter 5]{Munkres:topology}) $K _M $ is compact when endowed with the product topology which, by Corollary \ref{all weighted norms are the same}, coincides with the topology associated to the restriction of the norm $\left\|\cdot \right\|_w $ to $K _M $, as well as with the metric topology given by $D^{2M}_w|_{K _M \times K _M} $.
Second, since $ \left(K _M, \left\|\cdot \right\|_w\right) $ is metrizable it is a Hausdorff space. This implies (see \cite[Theorem 26.3]{Munkres:topology}) that, as $K _M $ is a compact subspace of the Banach space $(\ell ^{w}_-({\Bbb R}^n), \left\|\cdot \right\|_w) $ (see Proposition \ref{lw is a banach space}), it is necessarily closed. This in turn implies (\cite[Theorem B, page 72]{simmons:topology}) that $ \left(K _M, \left\|\cdot \right\|_w\right) $ is complete.
Finally, the convexity statement follows from the fact that the product of convex sets is always convex.
\quad $\blacksquare$
\subsection{Proof of Proposition \ref{in lww norm finer than product}}
Let $d_w $ be the metric on $\ell ^{w}_-({\Bbb R}^n)$ induced by the weighted norm $ \left\|\cdot \right\|_w $ and let $D_w:=D_w ^1 $ be the $w$-weighted metric on $({\Bbb R}^n)^{\mathbb{Z}_{-}} $ with constant $M=1 $ introduced in Theorem \ref{product topology for uniformly bounded} and defined using the same underlying norm in ${\Bbb R}^n $ as the one associated to $\left\|\cdot \right\|_w $. As we saw in that theorem, the metric $D_w$ induces the product topology on $({\Bbb R}^n)^{\mathbb{Z}_{-}} $.
Let now $\mathbf{u} \in \ell ^{w}_-({\Bbb R}^n)$ and let $\epsilon >0$. Let now $\mathbf{v} \in \ell ^{w}_-({\Bbb R}^n)$ be such that $d_w(\mathbf{u}, \mathbf{v})< \epsilon $. By definition, we have that
\begin{equation*}
D_w(\mathbf{u}, \mathbf{v})=\sup_{t \in \mathbb{Z}_{-}} \left\{\overline{d} _1(\mathbf{u}_t, \mathbf{v}_t)w_{-t}\right\}=\sup_{t \in \mathbb{Z}_{-}} \left\{(\min \{\|\mathbf{u}_t- \mathbf{v}_t\|, 1\})w_{-t}\right\}\leq
\sup_{t \in \mathbb{Z}_{-}} \left\{\|\mathbf{u}_t- \mathbf{v}_t\|w_{-t}\right\}=d_w(\mathbf{u}, \mathbf{v})< \epsilon,
\end{equation*}
which shows that $B_{d_w}(\mathbf{u}, \epsilon)\subset B_{D_w}(\mathbf{u}, \epsilon) $ and allows us to conclude that the norm topology in $\ell ^{w}_-({\Bbb R}^n) $ is finer than the subspace topology induced by the product topology in $\left(\mathbb{R}^n\right)^{\mathbb{Z}_{-}} $.
We now show that this inclusion is strict. Since the weighting sequence $w$ converges to zero, there exists an element $t _0 \in \mathbb{Z}_{-} $ such that $w_{-t _0}< \epsilon/2 $. Let $\lambda > 0 $ be arbitrary and define the element $\mathbf{v} ^\lambda \in ({\Bbb R}^n)^{\mathbb{Z}_{-}} $ by setting $\mathbf{v} ^\lambda_{t _0}:= \lambda \mathbf{u}_{t _0} $ and $\mathbf{v} ^\lambda_{t}:= \mathbf{u}_{t } $ when $t \neq t _0 $. We now show that $\mathbf{v} ^ \lambda \in B_{D_w}(\mathbf{u}, \epsilon) $ for any $\lambda>0 $. Indeed,
\begin{equation*}
D_w(\mathbf{u}, \mathbf{v}^\lambda)=\min \{| \lambda-1|\|\mathbf{u}_{t_0}\|, 1\}w_{-t _0}\leq
1 \cdot w_{-t _0}< \epsilon/2< \epsilon.
\end{equation*}
At the same time, by definition,
\begin{equation*}
d_w(\mathbf{u}, \mathbf{v}^\lambda)=| \lambda-1|\|\mathbf{u}_{t_0}\|w_{-t _0}< \infty,
\end{equation*}
which shows that $\mathbf{v}^\lambda \in \ell ^{w}_-({\Bbb R}^n)$. However, since $| \lambda-1|\|\mathbf{u}_{t_0}\|w_{-t _0} $ can be made as large as desired by choosing $\lambda $ big enough, we have proved that for any ball $B_{d_w}(\mathbf{u}, \epsilon') $, with $\epsilon'>0 $ arbitrary, the ball $B_{D_w}(\mathbf{u}, \epsilon) $ always contains an element of $\ell ^{w}_-({\Bbb R}^n)$ that is not included in $B_{d_w}(\mathbf{u}, \epsilon') $. This argument allows us to conclude that the norm topology in $\ell ^{w}_-({\Bbb R}^n) $ is strictly finer than the subspace topology induced by the product topology. \quad $\blacksquare$
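The strictness construction above can be reproduced numerically. The following Python sketch is a finite-window illustration with parameters of our own choosing (scalar sequences, $n=1$, and geometric weights are assumptions, not part of the proof): perturbing a single far-in-the-past coordinate $t_0$ leaves the sequence inside a small $D_w$-ball while its $d_w$-distance grows linearly in $\lambda$.

```python
import numpy as np

T = 60
w = 0.8 ** np.arange(T)          # w_{-t} for t = 0, -1, ..., -(T-1)
u = np.ones(T)                   # scalar sequence (n = 1), u_t = 1

t0 = 40                          # far-in-the-past index: w[t0] is tiny
lam = 1e6
v = u.copy()
v[t0] = lam * u[t0]              # v^lambda agrees with u except at t_0

d_w = (np.abs(u - v) * w).max()                        # weighted-norm distance
D_w = (np.minimum(np.abs(u - v), 1.0) * w).max()       # metric with M = 1

assert D_w <= w[t0]                    # stays in a small product-topology ball
assert d_w == abs(lam - 1) * w[t0]     # but is far away in the weighted norm
```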
\subsection{Proof of Lemma \ref{fs and ^sfor w}}
The proof requires the following preparatory lemma that will also be used later on in the proof of Proposition \ref{FMP independent of w}.
\begin{lemma}
\label{ operator is continuous}
Let $M>0 $ and let $w $ be a weighting sequence. Then:
\begin{description}
\item [(i)] The operator $\mathbb{P} _{\mathbb{Z}_{-}} \circ T _{-t}: (K _M, \left\|\cdot \right\|_w) \longrightarrow (K _M, \left\|\cdot \right\|_w)$ is a continuous map, for any $t \in \mathbb{Z}_{-}$.
\item [(ii)] The projections $p _i: (\ell ^{w}_-(\mathbb{R}^n), \left\|\cdot \right\|_w) \longrightarrow (\mathbb{R}^n, \left\|\cdot \right\|) $, $i \in \mathbb{Z}_{-}$, given by $p _i({\bf z})= {\bf z}_i $, are continuous.
\end{description}
\end{lemma}
\noindent\textbf{Proof of the lemma.\ \ (i)} We show that this statement is true by characterizing $\mathbb{P} _{\mathbb{Z}_{-}} \circ T _{-t} $ as a Cartesian product of continuous maps between two product spaces endowed with the product topologies and by using Corollary \ref{all weighted norms are the same}. Indeed, notice first that the projections $p _i: (K _M, \left\|\cdot \right\|_w) \longrightarrow \overline{B_{\left\|\cdot \right\|}( {\bf 0},M)} $ are continuous since by Corollary \ref{all weighted norms are the same} the topology induced on $K _M $ by the weighted norm $\left\|\cdot \right\|_w $ is the product topology. Since $\mathbb{P} _{\mathbb{Z}_{-}} \circ T _{-t} $ can be written as the infinite Cartesian product of continuous maps $\mathbb{P} _{\mathbb{Z}_{-}} \circ T _{-t}=\prod _{i=t}^{- \infty} p _i = \left(\ldots,p_{t-2},p_{t-1}, p _t\right)$ it is hence continuous when using the product topology induced by $\left\|\cdot \right\|_w $ (see \cite[Theorem 19.6]{Munkres:topology}).
\medskip
\noindent {\bf (ii)} Notice first that the projections $p _i: (\ell ^{w}_-(\mathbb{R}^n), \left\|\cdot \right\|_w) \longrightarrow (\mathbb{R}^n, \left\|\cdot \right\|) $ are obviously continuous when we consider in $\ell ^{w}_-(\mathbb{R}^n) $ the subspace topology induced by the product topology in $({\Bbb R}^n)^{\mathbb{Z}_{-}} $. The continuity of $p _i: (\ell ^{w}_-(\mathbb{R}^n), \left\|\cdot \right\|_w) \longrightarrow (\mathbb{R}^n, \left\|\cdot \right\|) $ then follows directly from Proposition \ref{in lww norm finer than product}. $\blacktriangledown $
\medskip
We now proceed with the proof of Lemma \ref{fs and ^sfor w}. Let first $H \in \mathbb{H}_{K _M} ^{w} $. The FMP hypothesis implies that the map $H:(K _M, \left\|\cdot \right\|_w) \longrightarrow (\mathbb{R}^N, \left\|\cdot \right\| )$ is continuous. Given that $K _M $ is compact by Corollary \ref{km compact complete}, so is $H(K _M) \subset \mathbb{R} ^N$. This in turn implies that $H(K _M) $ is closed and bounded \cite[Theorem 27.3]{Munkres:topology}, which guarantees the existence of $L>0 $ such that $H(K_M)\subset \overline{B_{\left\|\cdot \right\|}(\mathbf{0}, L)} $. The map obtained out of $H$ by restriction of its target to $\overline{B_{\left\|\cdot \right\|}(\mathbf{0}, L)} $ (that we denote with the same symbol) is also continuous and hence $H \in \mathbb{H}_{K _M, K _L} ^{w} $.
Let now $U:K _M \longrightarrow \ell ^{w}_-(\mathbb{R}^N) $ in $\mathbb{F}_{K _M} ^{w} $ and consider the composition $p _0 \circ U: K _M \longrightarrow \mathbb{R}^N $. The FMP hypothesis on $U$ and the continuity of $p _0: (\ell ^{w}_-(\mathbb{R}^N), \left\|\cdot \right\|_w) \longrightarrow (\mathbb{R}^N, \left\|\cdot \right\|) $ that we established in the second part of Lemma \ref{ operator is continuous} imply that $p _0 \circ U $ is continuous. This implies, together with the compactness of $K _M $ that we proved in Corollary \ref{km compact complete}, the existence of $L>0 $ such that $p _0 \circ U(K_M)\subset \overline{B_{\left\|\cdot \right\|}(\mathbf{0}, L)} $. Equivalently, for any ${\bf z} \in K _M $, we have that $U ({\bf z})_0 \in \overline{B_{\left\|\cdot \right\|}(\mathbf{0}, L)}$. Now, since $U$ is by hypothesis time invariant, we have by \eqref{why we can restrict to zminus} that
\begin{equation*}
U ({\bf z})_t= \left(T_{-t} \left(U({\bf z})\right)\right)_0= U \left(T_{-t}({\bf z})\right)_0 \in \overline{B_{\left\|\cdot \right\|}(\mathbf{0}, L)}, \ t \in \mathbb{Z}_{-}, \ \mbox{since $T_{-t}({\bf z}) \in K _M $},
\end{equation*}
which proves that $U (K _M) \subset K _L $. The map obtained out of $U$ by restriction of its target to $K _L$ (that we denote with the same symbol) is also continuous since $(K _L, \left\|\cdot \right\|_w)$ is a topological subspace of $(\ell ^{w}_-(\mathbb{R}^N), \left\|\cdot \right\|_w) $ and hence $U \in \mathbb{F}_{K _M, K _L} ^{w} $, as required.
The inclusion $\mathbb{F}_{K _M, K _L} ^{w} \subset \mathbb{F}_{K _M} ^{w} $ (respectively, $\mathbb{H}_{K _M, K _L} ^{w} \subset \mathbb{H}_{K _M} ^{w} $) is a consequence of the continuity of the inclusion map $(K _L, \left\|\cdot \right\|_w)\hookrightarrow (\ell ^{w}_-(\mathbb{R}^N), \left\|\cdot \right\|_w) $ (respectively, $(\overline{B_{\left\|\cdot \right\|}(\mathbf{0}, L)}, \left\|\cdot \right\|)\hookrightarrow (\mathbb{R}^N, \left\|\cdot \right\|)$). \quad $\blacksquare$
\subsection{Proof of Proposition \ref{FMP independent of w}}
\noindent {\bf Proof of part (i)} The FMP of $U$ with respect to the sequence $w$ is, by definition, equivalent to the continuity of the map $U:(K _M, \left\|\cdot \right\|_w)\longrightarrow (K _L, \left\|\cdot \right\|_w) $ (respectively, $H:(K _M, \left\|\cdot \right\|_w)\longrightarrow (\overline{B_{\left\|\cdot \right\|}( {\bf 0},L)}, \left\|\cdot \right\|) $). By Corollary \ref{all weighted norms are the same}, this is equivalent to the continuity of these maps when $K _M$ and $K _L $ are endowed with the product topology which is, by the same result, generated by any arbitrary weighting sequence.
Consider now $U:(K _M, \left\|\cdot \right\|_w)\longrightarrow (\ell ^{w}_-({\Bbb R}^n), \left\|\cdot \right\|_w)$ in $\mathbb{F}_{K _M } ^{w} $ (respectively, $H:(K _M, \left\|\cdot \right\|_w)\longrightarrow ({\Bbb R}^n, \left\|\cdot \right\|) $ in $\mathbb{H}_{K _M } ^{w} $). By Lemma \ref{fs and ^sfor w} there exists an $L >0 $ such that $U$ (respectively, $H$) can be considered an element of $\mathbb{F}_{K _M , K _L} ^{w} $ (respectively, $\mathbb{H}_{K _M , K _L } ^{w} $) by restriction of the target. Using the statement that we just proved about the space $\mathbb{F}_{K _M , K _L} ^{w} $ (respectively, $\mathbb{H}_{K _M , K _L } ^{w} $) we can conclude that $U$ (respectively, $H $) has the FMP with respect to any weighting sequence. Since, again by Lemma \ref{fs and ^sfor w}, the inclusion $\mathbb{F}_{K _M, K _L} ^{w} \subset \mathbb{F}_{K _M} ^{w} $ (respectively, $\mathbb{H}_{K _M, K _L} ^{w} \subset \mathbb{H}_{K _M} ^{w} $) holds true for any $M>0 $, and any weighting sequence $w$, we can conclude that $U$ (respectively, $H$) is continuous as an element of $\mathbb{F}_{K _M } ^{w} $ (respectively, $\mathbb{H}_{K _M } ^{w} $) for any weighting sequence $w$, as required.
\medskip
\noindent {\bf Proof of part (ii)} First, suppose that $H:(K _M, \left\|\cdot \right\|_w)\longrightarrow (\overline{B_{\left\|\cdot \right\|}( {\bf 0},L)}, \left\|\cdot \right\|) $ has the FMP and that this map is hence continuous. Given that the associated filter $U _H:(K _M, \left\|\cdot \right\|_w) \longrightarrow (K _L, \left\|\cdot \right\|_w)$ can be written as $U _H= \prod _{t=0}^{- \infty} H \circ \left(\mathbb{P} _{\mathbb{Z}_{-}} \circ T _{-t}\right) $ we can also conclude that it is continuous. Indeed, by part {\bf (i)} of Lemma \ref{ operator is continuous}, the map $H \circ \left(\mathbb{P} _{\mathbb{Z}_{-}} \circ T _{-t}\right) $, $t \in \mathbb{Z}_{-}$, is a composition of continuous functions and it is hence continuous. Additionally, the product $\prod _{t=0}^{- \infty} H \circ \left(\mathbb{P} _{\mathbb{Z}_{-}} \circ T _{-t}\right) :(K _M, \left\|\cdot \right\|_w) \longrightarrow (K _L, \left\|\cdot \right\|_w)$ is also continuous because the topology of $(K _L, \left\|\cdot \right\|_w)$ coincides with the product topology by Corollary \ref{all weighted norms are the same} and hence the continuity follows from \cite[Theorem 19.6]{Munkres:topology}, which shows that $U _H $ has the FMP. Conversely, if $U :(K _M, \left\|\cdot \right\|_w) \longrightarrow (K _L, \left\|\cdot \right\|_w)$ has the FMP, then so does $H _U=p _0 \circ U: (K _M, \left\|\cdot \right\|_w) \longrightarrow (\overline{B_{\left\|\cdot \right\|}( {\bf 0},L)}, \left\|\cdot \right\|) $, as it is the composition of two continuous maps.
These arguments show that $\boldsymbol{\Psi}(\mathbb{F}_{K _M, K _L} ^{{\rm FMP}}) \subset \mathbb{H}_{K _M, K _L} ^{{\rm FMP}}$ and $\boldsymbol{\Phi}(\mathbb{H}_{K _M, K _L} ^{{\rm FMP}} ) \subset \mathbb{F}_{K _M, K _L} ^{{\rm FMP}}$, and that the maps $\boldsymbol{\Psi}:\mathbb{F}_{K _M, K _L} ^{{\rm FMP}} \longrightarrow \mathbb{H}_{K _M, K _L} ^{{\rm FMP}}$ and $\boldsymbol{\Phi}:\mathbb{H}_{K _M, K _L} ^{{\rm FMP}} \longrightarrow \mathbb{F}_{K _M, K _L} ^{{\rm FMP}}$ are hence inverses of each other.
The parallel statement regarding the spaces $\mathbb{F}_{K _M} ^{{\rm FMP}}$ and $ \mathbb{H}_{K _M} ^{{\rm FMP}}$ can be easily established by mimicking the proof of part {\bf (i)} using Lemma \ref{fs and ^sfor w}.
\quad $\blacksquare$
\subsection{Proof of Proposition \ref{linear homeomorphism prop}}
We start by proving the continuity of $\boldsymbol{\Psi} $ by establishing the inequality \eqref{first ineq psis}. Let $U \in \mathbb{F}_{K _M} ^{{\rm FMP}} $. By definition
\begin{equation}
\label{step 1 for continuity}
\vertiii{\boldsymbol{\Psi}(U)}_{\infty} =\sup_{{\bf z} \in K _M} \left\{\left\|\boldsymbol{\Psi}(U)({\bf z})\right\|\right\}=
\sup_{{\bf z} \in K _M} \left\{\left\|U ({\bf z})_0\right\|\right\}.
\end{equation}
Since we have that
\begin{equation*}
\sup_{{\bf z} \in K _M} \left\{\left\|U ({\bf z})_0\right\|\right\}\leq
\sup_{{\bf z} \in K _M} \{\sup_{t \in \Bbb Z_-} \{
\left\|U ({\bf z})_t\right\| \}
\}=
\sup_{{\bf z} \in K _M} \left\{
\left\|U ({\bf z})\right\|_\infty\right\}=
\vertiii{U}_ \infty,
\end{equation*}
this shows, together with \eqref{step 1 for continuity}, that
\begin{equation*}
\vertiii{\boldsymbol{\Psi}(U)}_{\infty}\leq
\vertiii{U}_ \infty,
\end{equation*}
which implies the continuity of $\boldsymbol{\Psi} $. Regarding the inequality \eqref{second ineq psis}, let $H \in \mathbb{H}_{K _M}^{{\rm FMP}} $. We have:
\begin{multline*}
\vertiii{\boldsymbol{\Phi}(H )}_{\infty}=
\sup_{{\bf z} \in K _M} \left\{\sup_{t \in \mathbb{Z}_{-}}\left\{\left\|\boldsymbol{\Phi}(H ) ({\bf z})_t\right\|\right\}\right\}=
\sup_{{\bf z} \in K _M} \left\{\sup_{t \in \mathbb{Z}_{-}}\left\{\left\|H ((\mathbb{P} _{\mathbb{Z}_{-}}\circ T_{-t})({\bf z}))\right\|\right\}\right\}\\
\leq
\sup_{{\bf z} \in K _M} \left\{\left\|H ({\bf z})\right\|\right\}=\vertiii{H }_{\infty},
\end{multline*}
which proves the continuity of $\boldsymbol{\Phi} $. The inequality is a consequence of the fact that the sequence $(\mathbb{P} _{\mathbb{Z}_{-}}\circ T_{-t})({\bf z}) \in K _M $. The inequalities \eqref{first ineq psis kl} and \eqref{second ineq psis kl} are proved in a similar fashion.\quad $\blacksquare$
\subsection{Proof of Corollary \ref{esns have filters}}
\noindent\textbf{(i)} Consider the reservoir map $F_{{\rm ESN}}:[-1,1]^N \times D _n \longrightarrow [-1,1]^N $ given by $F_{{\rm ESN}}(\mathbf{x}, {\bf z}):= \sigma \left(A\mathbf{x} + C{\bf z} + \boldsymbol{\zeta}\right)$.
The statement is a direct consequence of the continuity of $F_{{\rm ESN}} $, the compactness and convexity of $[-1,1]^N $ and $D _n $, and of part {\bf (i)} of Theorem \ref{uniform approx theorem}.
\medskip
\noindent {\bf (ii)} The result follows from part {\bf (ii)} of Theorem \ref{uniform approx theorem} since the hypotheses in the statement imply that the reservoir map $F_{{\rm ESN}} $ is in those circumstances a contraction. Indeed, let $\mathbf{x}, {\bf y} \in [-1,1]^N $ and let ${\bf z} \in D _n $, then
\begin{equation*}
\left\|F_{{\rm ESN}} (\mathbf{x}, {\bf z})-F_{{\rm ESN}} (\mathbf{y}, {\bf z})\right\|= \left\|\sigma \left(A\mathbf{x} + C{\bf z} + \boldsymbol{\zeta}\right)-\sigma \left(A\mathbf{y} + C{\bf z} + \boldsymbol{\zeta}\right)\right\|\leq
L _\sigma \left\|A\right\|_2 \left\|\mathbf{x}- {\bf y}\right\|.
\end{equation*}
Since by hypothesis $L _\sigma \left\|A\right\|_2<1 $, we can conclude that $F_{{\rm ESN}} $ is a contraction, as required. The norm $ \left\|\cdot \right\| $ in the previous expression is the Euclidean norm in $\mathbb{R}^N $. The time invariance of the resulting unique fading memory reservoir filter is a consequence of Proposition \ref{esp implies ti}. \quad $\blacksquare$
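The contraction estimate in part {\bf (ii)} can be tested empirically. The following Python sketch is purely illustrative (the random reservoir matrices and the choice $\sigma = \tanh$, for which $L_\sigma = 1$, are our own assumptions): it rescales $A$ so that $\left\|A\right\|_2 < 1$ and checks the bound $\left\|F_{\rm ESN}(\mathbf{x},\mathbf{z}) - F_{\rm ESN}(\mathbf{y},\mathbf{z})\right\| \leq L_\sigma \left\|A\right\|_2 \left\|\mathbf{x}-\mathbf{y}\right\|$ on random states.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 20, 4

# Random reservoir with the spectral-norm condition L_sigma * ||A||_2 < 1
# (for sigma = tanh the Lipschitz constant is L_sigma = 1).
A = rng.normal(size=(N, N))
A *= 0.9 / np.linalg.norm(A, 2)          # rescale so that ||A||_2 = 0.9 < 1
C = rng.normal(size=(N, n))
zeta = rng.normal(size=N)

def F_esn(x, z):
    return np.tanh(A @ x + C @ z + zeta)

# Empirical check of the contraction estimate in the state variable
z = rng.uniform(-1, 1, n)
for _ in range(100):
    x, y = rng.uniform(-1, 1, (2, N))
    lhs = np.linalg.norm(F_esn(x, z) - F_esn(y, z))
    rhs = np.linalg.norm(A, 2) * np.linalg.norm(x - y)
    assert lhs <= rhs + 1e-12
```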
\subsection{$\left(\ell ^{w}_-({\Bbb R}^n), \| \cdot \| _w\right) $ is a Banach space}
\label{lw is a banach space appendix}
\begin{proposition}
\label{lw is a banach space}
Let $w : \mathbb{N} \longrightarrow (0,1] $ be a weighting sequence and let $\| \cdot \| _w : (\mathbb{R}^n)^{\Bbb Z _{-}} \longrightarrow \overline{\mathbb{R}^+} $ be the corresponding weighted norm. Then, the space $\left(\ell ^{w}_-({\Bbb R}^n), \| \cdot \| _w\right) $ defined by
\begin{equation*}
\ell ^{w}_-({\Bbb R}^n):= \left\{{\bf z}\in \left(\mathbb{R}^n\right)^{\mathbb{Z}_{-}}\mid \| {\bf z}\| _w< \infty\right\},
\end{equation*}
endowed with weighted norm $\| \cdot \| _w $ is a Banach space.
\end{proposition}
\noindent\textbf{Proof.\ \ } We first show that $\ell ^{w}_- ({\Bbb R}^n)$ is a linear subspace of $(\mathbb{R}^n)^{\Bbb Z _{-}} $. Let $ \mathbf{u}, \mathbf{v} \in \ell ^{w}_- ({\Bbb R}^n) $ and let $\lambda \in \mathbb{R} $. Then,
\begin{multline*}
\left\|\mathbf{u}+ \lambda \mathbf{v}\right\|_w= \sup_{t \in \Bbb Z_-} \left\{\left\|\mathbf{u}_t+ \lambda \mathbf{v} _t\right\|w_{-t}\right\}\leq \sup_{t \in \Bbb Z_-} \left\{\left\|\mathbf{u}_t\right\|w_{-t}+ |\lambda| \left\|\mathbf{v} _t\right\|w_{-t}\right\}\\
\leq
\sup_{t \in \Bbb Z_-} \left\{\left\|\mathbf{u}_t\right\|w_{-t}\right\}+ |\lambda| \sup_{t \in \Bbb Z_-} \left\{\left\|\mathbf{v} _t\right\|w_{-t}\right\}= \left\|\mathbf{u}\right\|_w+ |\lambda| \left\|\mathbf{v}\right\|_w.
\end{multline*}
We now show that this space is complete. Let $\left\{\mathbf{u} (n)\right\}_{n \in \mathbb{N}} \subset \ell ^{w}_- ({\Bbb R}^n)$ be a Cauchy sequence. This implies that for any $\epsilon>0 $, there exists $N(\epsilon) \in \mathbb{N} $ such that for all $m,n >N(\epsilon) $ we have $\left\|\mathbf{u}(n)- \mathbf{u} (m)\right\|_w< \epsilon $. Hence, for any $t \in \Bbb Z_- $,
\begin{equation}
\label{ineq for t}
\left\|\mathbf{u}_t(n)- \mathbf{u}_t (m)\right\|w_{-t}\leq\sup_{t \in \Bbb Z_-} \left\{\left\|\mathbf{u}_t(n)- \mathbf{u}_t (m)\right\|w_{-t}\right\}= \left\|\mathbf{u}(n)- \mathbf{u} (m)\right\|_w< \epsilon.
\end{equation}
This implies that taking for each fixed $t \in \mathbb{Z}_{-} $ the value $N(\epsilon w_{-t}) $, the sequences $\left\{\mathbf{u}_t(n)\right\}_{n \in \mathbb{N}} $ in $\mathbb{R}^n $ are Cauchy and hence convergent to values $\mathbf{u}_t\in \mathbb{R}^n $. We now show that $\left\{\mathbf{u} (n)\right\}_{n \in \mathbb{N}} $ converges to $\mathbf{u} \in (\mathbb{R}^n)^{\Bbb Z _{-}}$. Using \eqref{ineq for t}, take $N(\epsilon/2) $ so that for all $m,n>N(\epsilon/2) $ and any $t \in \mathbb{Z}_{-} $ one has $\left\|\mathbf{u}_t(n)- \mathbf{u}_t (m)\right\|w_{-t}\leq \epsilon/2 $. If we take the limit $m \rightarrow \infty $ in this inequality, we obtain
\begin{equation*}
\left\|\mathbf{u}_t(n)- \mathbf{u}_t \right\|w_{-t}\leq \epsilon/2, \quad \mbox{for all $t \in \mathbb{Z}_{-}$.}
\end{equation*}
This implies that
\begin{equation*}
\left\|\mathbf{u}(n)- \mathbf{u} \right\|_w=\sup_{t \in \Bbb Z_-} \left\{\left\|\mathbf{u}_t(n)- \mathbf{u}_t \right\|w_{-t}\right\}\leq \epsilon/2< \epsilon,
\end{equation*}
which proves that $\left\{\mathbf{u} (n)\right\}_{n \in \mathbb{N}} $ converges to $\mathbf{u} $, as required. It remains to be shown that $\mathbf{u} \in \ell ^{w}_-({\Bbb R}^n) $, that is, that $\left\|\mathbf{u}\right\|_w< \infty $. In order to show that this is indeed the case, let $n \in \mathbb{N} $ be such that $\left\| \mathbf{u}- \mathbf{u}(n)\right\|_w< \epsilon $. This implies that
\begin{equation*}
\left\| \mathbf{u}\right\|_w- \left\|\mathbf{u}(n)\right\|_w\leq \left|\left\| \mathbf{u}\right\|_w- \left\|\mathbf{u}(n)\right\|_w\right|\leq \left\|\mathbf{u}- \mathbf{u}(n) \right\|_w < \epsilon,
\end{equation*}
and hence $\left\| \mathbf{u}\right\|_w< \left\| \mathbf{u}(n)\right\|_w + \epsilon < \infty $, as required. \quad $\blacksquare$
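The componentwise estimate \eqref{ineq for t} that drives the completeness argument can also be checked numerically. The sketch below is illustrative only (truncated sequences, geometric weights, and random data are our own choices): it verifies that closeness in $\left\|\cdot\right\|_w$ controls each component up to the factor $1/w_{-t}$.

```python
import numpy as np

# Check of the bound ||u_t - v_t|| <= ||u - v||_w / w_{-t} on truncated
# sequences with T entries, indexed so that x[-1 - k] plays the role of x_{-k}.
rng = np.random.default_rng(3)
T, n = 30, 2
w = 0.85 ** np.arange(T)

def norm_w(x):
    """Truncated weighted norm sup_t ||x_t|| w_{-t}."""
    return max(np.linalg.norm(x[-1 - k]) * w[k] for k in range(T))

u, v = rng.normal(size=(2, T, n))
d = norm_w(u - v)
for k in range(T):
    assert np.linalg.norm(u[-1 - k] - v[-1 - k]) <= d / w[k] + 1e-12
```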
\medskip
\noindent {\bf Acknowledgments:} We thank Herbert Jaeger and Josef Teichmann for carefully looking at early versions of this work and for making suggestions that have significantly improved some of our results. We also thank the editor and two anonymous referees whose input has significantly improved the presentation and the contents of the paper. The authors acknowledge partial financial support of the French ANR ``BIPHOPROC'' project (ANR-14-OHRI-0002-02) as well as the hospitality of the Centre Interfacultaire Bernoulli of the Ecole Polytechnique F\'ed\'erale de Lausanne during the program ``Stochastic Dynamical Models in Mathematical Finance, Econometrics, and Actuarial Sciences'' that made possible the collaboration that led to some of the results included in this paper. LG acknowledges partial financial support of the Graduate School of Decision Sciences and the Young Scholar Fund AFF of the Universit\"at Konstanz. JPO acknowledges partial financial support coming from the Research Commission of the Universit\"at Sankt Gallen and the Swiss National Science Foundation (grant number 200021\_175801/1).
\noindent
\addcontentsline{toc}{section}{Bibliography}
\bibliographystyle{wmaainf}
\section{Introduction}
Colin de Verdi\`ere and Saint-Raymond~\cite{SC} recently found an interesting connection between modeling of internal waves in stratified fluids and spectral theory of zeroth order pseudodifferential operators on compact
manifolds. In other problems of fluid mechanics relevance of such operators
has been known for a long time, for instance in the work of Ralston \cite{Ra73}. We refer to \cite{SC} for pointers to current physics literature on internal waves and for numerical and experimental illustrations.
\begin{figure}
\includegraphics[width=7.5cm]{ex1.jpg}
\includegraphics[width=6.5cm]{flop.1}
\caption{On the left: the plot of the real part of $ u ( 50) $ for $ P =
\langle D \rangle^{-1} D_{x_2} + 2 \cos x_1 $ on $\mathbb T^2$ and $ f $ given by a smooth bump function centered at $ ( -\pi/2, 0 ) $.
We see the singularity formation on the line $ x_1 = -\pi/2 $.
On the right: $ \Sigma :=
\kappa ( p^{-1} ( 0 ) ) \subset \partial \overline T^* \mathbb T^2$.
The
attracting Lagrangian, $ \Lambda^+_0 $, comes from the highlighted circles.
See \S \ref{exa} for a discussion of the examples shown in the figures.
}
\label{f:1}
\end{figure}
The purpose of this note is to show how the main result of \cite{SC} (see
also \cite{C}) follows from
the now standard radial estimates for pseudodifferential operators. In particular, we avoid the use of Mourre theory, normal forms and Fourier integral operators and do not assume that the subprincipal symbols vanish. We also relax some geometric assumptions. The conclusions are formulated
in terms of Lagrangian regularity in the sense of
H\"ormander \cite[\S 25.1]{H3}. We illustrate the results with numerical examples. There are many possibilities for refinements but we restrict ourselves to applying off-the-shelf results at this stage.
Radial estimates were introduced by Melrose \cite{mel} for the study
of asymptotically Euclidean scattering and have been developed further in
various settings. We only mention some of the more relevant ones: scattering by zeroth order potentials (very close in spirit to the problems considered in \cite{SC}) by Hassell--Melrose--Vasy \cite{hmv}, asymptotically hyperbolic scattering by Vasy \cite{Va} (see also \cite[Chapter 5]{DZ} and \cite{V4D}) and
by Datchev--Dyatlov \cite{DaD}, in general relativity by Vasy \cite{Va}, Dyatlov \cite{dy} and Hintz--Vasy \cite{HiV}, and in hyperbolic dynamics by Dyatlov--Zworski \cite{DZ}.
Particularly useful here is the work of Haber--Vasy \cite{hb} which generalized some of the results of \cite{hmv}. A very general version of radial estimates is presented ``textbook style'' in \cite[\S E.4]{res}.
\subsection{The main result}
Motivated by internal waves in linearized fluids the authors of \cite{SC} considered long time behaviour of solutions to
\begin{equation}
\label{eq:SC1}
\begin{gathered}
( i \partial_t - P ) u ( t ) = f , \ \ u (0) =0, \ \
f \in {C^\infty} ( M ) , \\
P \in \Psi^0 ( M ) , \ \ P= P^* \end{gathered}
\end{equation}
where $M$ is a closed surface and
$ P $ satisfies dynamical assumptions presented in \S \ref{ass}. By changing
$ P $ to $ P - \omega_0 $ we can change $ f $ to the more physically relevant
oscillatory forcing term, $ e^{ - i \omega_0 t } f $.
Since the solution $ u ( t ) $ is given by
\begin{equation}
\label{eq:uoft} u ( t ) = -i\int_0^t e^{ - i s P } f \, ds
= P^{-1} ( e^{ - it P }- 1)f
,\end{equation}
(where the operator $ P^{-1} (e^{ -i t P }-1 ) $
is well defined for all $ t $ using the spectral theorem), the properties of the spectrum of~$P$ play a crucial role in the description of the long time behaviour of $ u ( t ) $. Referring to~\S\ref{ass} for the precise assumptions we state
\medskip
\noindent
{\bf Theorem.} {\em Suppose that the operator $ P $ satisfies assumptions~\eqref{eq:assP},
\eqref{eq:dynaSC} below and that $ 0 \notin \Spec_{\rm{pp}} ( P ) $. Then, for any $ f \in C^\infty ( M ) $, the solution to \eqref{eq:SC1} satisfies
\begin{equation}
\label{eq:SC2}
\begin{gathered}
u ( t ) = u_\infty + b ( t ) + \epsilon ( t) ,\ \
\| b ( t ) \|_{L^2 } \leq C, \ \ \| \epsilon ( t ) \|_{ H^{-\frac12 - }}
\to 0 , \ \ t \to \infty ,
\end{gathered}
\end{equation}
where {(denoting by $H^{-\frac 12-}$ the intersection of the spaces $H^{-\frac 12-\varepsilon}$ over $\varepsilon>0$)}
\begin{equation}
\label{eq:DZ1}
u_\infty \in I^{0} ( M ; \Lambda^+_0 ) \subset H^{-\frac12-} ( M)
\end{equation}
and $ I^{0} ( M ; \Lambda^+_0 ) $ is the space of Lagrangian distributions of order~$ 0 $ (see~\S\ref{s:lagrangian-basic}) associated
to the attracting Lagrangian $ \Lambda^+_0 $ defined in \eqref{eq:Laplus}.}
The proof gives other results obtained in~\cite{SC}. In particular, we see that
in the neighbourhood of $ 0 $ the spectrum of~$ P$ is absolutely continuous except for finitely many eigenvalues with smooth eigenfunctions -- see~\S \ref{eig}.
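The two expressions for $ u ( t ) $ in \eqref{eq:uoft} can be compared numerically. The following is a minimal sketch in which a random Hermitian matrix stands in for $ P $ (a finite-dimensional toy model, not the operator of the theorem): it checks the spectral-theorem formula $ P^{-1} ( e^{-itP} - 1 ) f $ against direct time integration of \eqref{eq:SC1}.

```python
import numpy as np

# Random Hermitian matrix as a finite-dimensional stand-in for P
# (illustration only; almost surely invertible).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
P = (A + A.conj().T) / 2
f = rng.standard_normal(6).astype(complex)
t = 5.0

# u(t) = P^{-1}(e^{-itP} - 1) f via the spectral decomposition of P
lam, V = np.linalg.eigh(P)
u_spec = V @ (((np.exp(-1j * t * lam) - 1) / lam) * (V.conj().T @ f))

# Direct RK4 integration of (i d/dt - P) u = f, u(0) = 0,
# i.e. u' = -i (P u + f)
def rhs(u):
    return -1j * (P @ u + f)

u_ode = np.zeros(6, dtype=complex)
n = 5000
dt = t / n
for _ in range(n):
    k1 = rhs(u_ode)
    k2 = rhs(u_ode + dt / 2 * k1)
    k3 = rhs(u_ode + dt / 2 * k2)
    k4 = rhs(u_ode + dt * k3)
    u_ode = u_ode + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

err = np.max(np.abs(u_spec - u_ode))  # time-stepping error only
```

The two answers agree up to the time-stepping error of the RK4 scheme, as they must, since both compute the same solution of \eqref{eq:SC1}.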
In the case of {general} Morse--Smale flows {(allowing for fixed points)}, Colin de Verdi\`ere \cite[Theorem 4.3]{C} used a hybrid of
Mourre estimates (in particular their finer version
given by Jensen--Mourre--Perry \cite{jemp}) and of the radial estimates \cite[\S E.4]{res} to obtain a version of \eqref{eq:SC2} with an estimate on
$ \WF ( u_\infty ) $. At this stage the purely microlocal approach of this paper would only give
$ \| \epsilon ( t ) \|_{ H^{-\frac32 - }}
\to 0 $.
\subsection{Assumptions on $ P $}
\label{ass}
We assume that $M$ is a compact surface without boundary
and $ P\in\Psi^0(M) $ is a 0th order pseudodifferential operator with
principal symbol $ p \in S^0 ( T^* M \setminus 0;\mathbb R ) $ which is homogeneous (of order 0)
{and has $0$ as a regular value}. We also assume that for some smooth density, $ dm ( x ) $, on $M $, $ P $ is self-adjoint:
\begin{equation}
\label{eq:assP}
\begin{gathered}
P \in \Psi^0 ( M ) , \ \ P = P^* \text{ on $L^2 ( M , dm(x) ) $}, \\ p := \sigma (P ), \ \ p ( x , t \xi ) = p ( x, \xi) , \ t > 0, \ \ dp |_{p^{-1} ( 0 ) } \neq 0.
\end{gathered}
\end{equation}
The homogeneity assumption on $ p $ can be removed as the results of
\cite[\S E.4]{res} and \cite{zazi} we use do not require it. That would however complicate the statement of the dynamical assumptions.
We use the notation of \cite[\S E.1.3]{res}, denoting by $ \overline T^* M $ the fiber-radially
compactified cotangent bundle. Define the quotient map for the
$ {\mathbb R}^+ $ action, $ ( x, \xi ) \mapsto ( x , t \xi )$, $ t > 0 $,
\begin{equation}
\label{eq:kap} \kappa : \overline T^* M\setminus 0 \longrightarrow
\partial \overline T^* M .
\end{equation}
{Denote by $|\xi|$ the norm of a covector $\xi\in T_x^*M$ with respect to some fixed Riemannian metric on~$M$.}
The rescaled Hamiltonian vector field $|\xi|H_p$ commutes with the $\mathbb R^+$ action and
\begin{equation}
\label{eq:defSig}
X:= \kappa_*(| \xi | H_p )\quad\text{is tangent to}\quad \Sigma := \kappa ( p^{-1} ( 0 ) ) .
\end{equation}
Note that $\Sigma$ is an orientable surface since it is defined by the equation $p=0$
in the orientable 3-manifold $\partial \overline T^*M$.
We now recall the dynamical assumption made by Colin de Verdi\`ere and Saint-Raymond \cite{SC}:
\begin{equation}
\label{eq:dynaSC}
\text{ The flow of $ X $ on $
\Sigma $ is a Morse--Smale flow with {\em no} fixed points. }
\end{equation}
For the reader's convenience we recall the definition of Morse--Smale flows
generated by $ X $ on a surface $ \Sigma $ (see \cite[Definition 5.1.1]{Surf}):
\begin{enumerate}
\item
$ X $ has a finite number of fixed points, all of which are hyperbolic;
\item
$ X $ has a finite number of hyperbolic limit cycles;
\item
there are no separatrix connections between saddle fixed points;
\item every trajectory other than those in (1) and (2) has a unique trajectory
from (1) or (2) as its $\alpha$-limit set and as its $\omega$-limit set.
\end{enumerate}
As stressed in \cite{SC}, Morse--Smale flows enjoy stability and genericity
properties -- see \cite[Theorem 5.1.1]{Surf}. At this stage, following \cite{SC}, we make the strong assumption that there are no fixed points.
By the Poincar\'e--Hopf theorem, this forces $ \Sigma $ to be a union of tori.
\begin{figure}
\includegraphics[width=7.5cm]{ex2.jpg}\includegraphics[width=6.5cm]{flop.2}
\caption{On the left: the plot of the real part of $ u ( 50) $ for
$P$ given by~\eqref{eq:P2} and $ f $ given by a smooth bump function centered at $ ( -\pi/2, 0 ) $.
We see the singularity formation on the line $ x_1 = -\pi/2 $ and the slower
formation of singularity at $ x_1 = \pi/2 $.
On the right: $ \Sigma :=
\kappa ( p^{-1} ( 0 ) )$. The
attracting Lagrangian $ \Lambda^+_0 $ comes from the highlighted circles. }
\label{f:2}
\end{figure}
Under the assumption \eqref{eq:dynaSC}, the flow of~$X$ on~$ \Sigma $ has an attractor $ L^+_0$, which is a union of closed attracting curves. We define
the following {\em conic Lagrangian submanifold} of $ T^* M \setminus 0 $ (see \cite[\S 21.2]{H3}
and Lemma~\ref{l:sink-established}):
\begin{equation}
\label{eq:Laplus} \Lambda^+_0 := \kappa^{-1} ( L^+_0 ) .
\end{equation}
\subsection{Examples}
\label{exa}
We illustrate the result with two simple examples
on $M:=\mathbb T^2=\mathbb S^1\times\mathbb S^1$
where $\mathbb S^1=\mathbb R/(2\pi\mathbb Z)$.
Denote $D:={1\over i}\partial$.
Consider first
\begin{equation}
\label{eq:P1}
\begin{gathered}
P := \langle D \rangle^{-1} D_{x_2} - 2 \cos x_1 , \ \
p = |\xi|^{-1} \xi_2 -2 \cos x_1 , \\
|\xi|H_p=-{\xi_1\xi_2\over |\xi|^2}\partial_{x_1}+{\xi_1^2\over|\xi|^2}\partial_{x_2}
-2(\sin x_1)|\xi|\partial_{\xi_1},\\
\Lambda^+_0 = \{ ( \pm \pi/2 , x_2; \xi_1 , 0 ) : x_2 \in \mathbb S^1 ,\ \pm \xi_1 < 0 \} .
\end{gathered}
\end{equation}
In this case $ \kappa ( p^{-1} ( 0 ) ) $ (with $ \kappa $ given in \eqref{eq:kap}) is a union of two tori which do {\em not} cover~$ \mathbb T^2 $ (so this example does not satisfy the assumptions of~\cite{SC}, but it is covered by the treatment here and in \cite{C}). See Figure \ref{f:1} for the plot of $ \Re u ( t ) $, $ t = 50 $, and for a schematic visualization of $ \Sigma=\kappa (p^{-1} ( 0) ) $.
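The expression for $ |\xi| H_p $ in \eqref{eq:P1} can be verified symbolically from $ H_p = \partial_\xi p \cdot \partial_x - \partial_x p \cdot \partial_\xi $; a short sketch:

```python
import sympy as sp

x1, x2, xi1, xi2 = sp.symbols('x1 x2 xi1 xi2', real=True)
nrm = sp.sqrt(xi1**2 + xi2**2)
p = xi2 / nrm - 2 * sp.cos(x1)

# components of |xi| H_p, with H_p = dp/dxi . d/dx - dp/dx . d/dxi
c_x1 = sp.simplify(nrm * sp.diff(p, xi1))    # coefficient of d/dx1
c_x2 = sp.simplify(nrm * sp.diff(p, xi2))    # coefficient of d/dx2
c_xi1 = sp.simplify(-nrm * sp.diff(p, x1))   # coefficient of d/dxi1
c_xi2 = sp.simplify(-nrm * sp.diff(p, x2))   # coefficient of d/dxi2

# compare with the displayed formula in (eq:P1)
assert sp.simplify(c_x1 + xi1 * xi2 / nrm**2) == 0
assert sp.simplify(c_x2 - xi1**2 / nrm**2) == 0
assert sp.simplify(c_xi1 + 2 * sp.sin(x1) * nrm) == 0
assert c_xi2 == 0
```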
Our result applies also to the closely related operator
\begin{equation}
\label{eq:P2}
\begin{gathered}
P := \langle D \rangle^{-1} D_{x_2} - \tfrac12 \cos x_1 , \quad
p = |\xi|^{-1} \xi_2 - \tfrac 12 \cos x_1 ,\\
|\xi|H_p=-{\xi_1\xi_2\over |\xi|^2}\partial_{x_1}+{\xi_1^2\over|\xi|^2}\partial_{x_2}
-\tfrac12 {\sin x_1}|\xi|\partial_{\xi_1}.
\end{gathered}
\end{equation}
The attracting Lagrangians are the same but the energy surface $ \kappa ( p^{-1} ( 0 ) )$ consists of two tori covering $ \mathbb T^2 $ (and hence satisfying the assumptions of~\cite{SC}) -- see Figure \ref{f:2}.
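A crude numerical illustration of the evolution for \eqref{eq:P2} can be obtained by discretizing $ P $ on a Fourier grid; in the sketch below $ \langle \xi \rangle := (1+|\xi|^2)^{1/2} $, and the grid size and the bump profile for $ f $ are illustrative choices rather than the ones used for the figures.

```python
import numpy as np

# Coarse Fourier discretization of P = <D>^{-1} D_{x2} - (1/2) cos x1 on T^2.
N = 16
x = 2 * np.pi * np.arange(N) / N
X1, X2 = np.meshgrid(x, x, indexing='ij')
k = np.fft.fftfreq(N, d=1.0 / N)            # integer Fourier frequencies
K1, K2 = np.meshgrid(k, k, indexing='ij')
mult = K2 / np.sqrt(1.0 + K1**2 + K2**2)    # symbol of <D>^{-1} D_{x2}

def apply_P(u):
    u = u.reshape(N, N)
    return (np.fft.ifft2(mult * np.fft.fft2(u)) - 0.5 * np.cos(X1) * u).ravel()

# dense (N^2 x N^2) matrix of P; Hermitian up to round-off
Pm_raw = np.column_stack([apply_P(e) for e in np.eye(N * N)])
Pm = (Pm_raw + Pm_raw.conj().T) / 2

# smooth periodic bump centered at (-pi/2, 0) (an illustrative choice)
f = np.exp(8 * (np.cos(X1 + np.pi / 2) + np.cos(X2) - 2)).ravel()

# u(t) = P^{-1}(e^{-itP} - 1) f through the spectral theorem; the quotient
# is regularized at eigenvalues near 0, where it tends to -it
t = 50.0
lam, V = np.linalg.eigh(Pm)
den = np.where(np.abs(lam) < 1e-10, 1.0, lam)
phi = np.where(np.abs(lam) < 1e-10, -1j * t, (np.exp(-1j * t * lam) - 1) / den)
u = (V @ (phi * (V.conj().T @ f))).reshape(N, N)
```

On such a coarse grid this can only caricature the singularity formation along $ x_1 = \pm\pi/2 $ visible in Figure~\ref{f:2}; increasing $ N $ sharpens the picture at the cost of a dense $ N^2 \times N^2 $ eigenproblem.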
\section{Geometric structure of attracting Lagrangians}
In this section we prove geometric properties of the attracting and repulsive
Lagrangians for the flow $e^{t|\xi|H_p}$ where $p$ satisfies~\eqref{eq:dynaSC}.
\subsection{Sink and source structure}
\label{s:sink-source}
Let $ \Sigma ( \omega ) := \kappa ( p^{-1} ( \omega ) ) $. If $ \delta>0 $ is sufficiently small then
stability of Morse--Smale flows (and the stability of non-vanishing of
$ X $) shows that \eqref{eq:dynaSC} is satisfied for $ \Sigma ( \omega ) $,
$|\omega|\leq 2\delta$.
Let $ L^\pm_ \omega \subset \Sigma ( \omega) $ be the attractive ($+$) and repulsive ($-$) hyperbolic cycles for the flow of $ X $ on $ \Sigma ( \omega ) $.
We first establish dynamical properties needed for the application of
radial estimates in \S \ref{reso}:
\begin{lemm}
\label{l:sink-established}
$L^+ _ \omega$ is a radial sink and $ L^-_\omega $ a radial source for the Hamiltonian flow
of $ |\xi|(p - \omega) = |\xi|\sigma ( P - \omega ) $
in the sense of \cite[Definition~E.50]{res}. The conic submanifolds
\[ \Lambda^\pm_\omega := \kappa^{-1} ( L^\pm_\omega )
\subset T^* M \setminus 0 \]
are Lagrangian.
\end{lemm}
\Remark It is not true that $L^\pm_\omega$ are radial sinks/sources
for the Hamiltonian flow of $p-\omega$ since~\cite[Definition~E.50]{res}
requires convergence of all nearby Hamiltonian trajectories,
not just those on the characteristic set $p^{-1}(\omega)$.
See Remark~3 following~\cite[Definition~E.50]{res} for details.
The singular behavior of $|\xi|$ at $\xi=0$ is irrelevant here
since we are considering a neighbourhood of the fiber infinity.
\begin{proof}
We consider the case of $L^+_\omega$ as that of $L^-_\omega$ is similar.
To simplify the formulas below we put $\omega:=0$.
To see that $ \Lambda^+_0 $ is a Lagrangian submanifold we note that
$ H_p $ and $ \xi \partial_\xi $ are tangent to $ \Lambda^+_0$
and independent (since $X$ does not vanish on $L^+_0$). Denoting the symplectic form by
$ \sigma $, we have $ \sigma ( H_p , \xi \partial_\xi ) = -dp ( \xi \partial_\xi) = 0 $, that is $ \sigma $ vanishes on the tangent space to $ \Lambda^+_0 $.
We next show that $L^+_0$ is a radial sink. For simplicity assume that it consists of a single attractive closed trajectory of $X$ of period $T>0$, in particular $e^{TX}=I$ on~$L^+_0$.
Define the vector field
$$
Y:=H_{|\xi|p}
$$
which is homogeneous of order~0
on $T^*M\setminus 0$ and thus extends smoothly to the fiber-radial compactification
$\overline T^*M\setminus 0$, see~\cite[Proposition~E.5]{res}. We have
$Y=X$ on $\partial\overline T^*M\cap p^{-1}(0)$, thus $L^+_0\subset\partial\overline T^*M$
is a closed trajectory of $Y$ of period $T$.
Fix arbitrary $(x_0,\xi_0)\in L^+_0$ and define the linearized Poincar\'e map
$\mathcal P$ induced by $de^{TY}(x_0,\xi_0)$
on the quotient space $T_{(x_0,\xi_0)}(\overline T^*M)/\mathbb RY_{(x_0,\xi_0)}$.
The adjoint map $\mathcal P^*$
acts on covectors in $T^*_{(x_0,\xi_0)}(\overline T^*M)$ which annihilate $Y_{(x_0,\xi_0)}$.
To prove that $L^+_0$ is a radial sink it suffices to show that
the spectral radius of~$\mathcal P$ is strictly less than~1.
Put $\rho:=|\xi|^{-1}$
which is a boundary defining function on $\overline T^*M$,
then $\Sigma=\partial\overline T^*M\cap p^{-1}(0)$
is given by $\{p=0,\ \rho=0\}$.
Since $Y=X$ on $\Sigma$ and $L^+_0$ is an attractive cycle for~$X$ on~$\Sigma$, we have
$$
\mathcal P|_{\ker(dp)\cap\ker(d\rho)}=c_1\quad\text{for some }c_1\in\mathbb R,\ |c_1|<1.
$$
Since $Y$ is tangent to $\partial\overline T^*M=\rho^{-1}(0)$,
we have $Y\rho=f_2\rho$ for some $f_2\in C^\infty(\overline T^*M\setminus 0;\mathbb R)$.
Recalling that $Y=H_{|\xi|p}$ we compute
$Yp=pH_{|\xi|}p=-pH_p(\rho^{-1})=f_2p$. Denoting $c_2:=f_2(x_0,\xi_0)$ we then have
$$
\mathcal P^*(dp(x_0,\xi_0))=c_2 dp(x_0,\xi_0),\quad
\mathcal P^*(d\rho(x_0,\xi_0))=c_2 d\rho(x_0,\xi_0).
$$
Thus $\mathcal P$ has eigenvalues $c_1,c_2,c_2$. On the other hand,
$e^{TY}$ preserves the symplectic density $|\sigma\wedge\sigma|$ which has the form
$\rho^{-3} d\vol$ for some density $d\vol$ on $\overline T^*M$ which is smooth
up to the boundary. Taking the limit of this statement
at $(x_0,\xi_0)$ we obtain $\det\mathcal P=\det de^{TY}(x_0,\xi_0)=c_2^3$.
It follows that $c_1=c_2$ and thus $\mathcal P$ has spectral
radius $|c_1|<1$ as needed.
\end{proof}
For future use we define the conic hypersurfaces in $T^*M\setminus 0$
\begin{equation}
\label{e:Lambda-pm-def}
\Lambda^\pm := \bigcup_{|\omega|<2\delta}\Lambda^\pm_\omega.
\end{equation}
\subsection{Geometry of Lagrangian families}
\label{s:lagrangian-geometry}
We next establish some facts
about families of Lagrangian submanifolds which do not need the dynamical assumptions~\eqref{eq:dynaSC}.
Instead we assume that:
\begin{itemize}
\item $p:T^*M\setminus 0\to\mathbb R$ is homogeneous of order~0;
\item $\Lambda\subset T^*M\setminus 0$ is a conic hypersurface;
\item $dp|_{T\Lambda}\neq 0$ everywhere;
\item the Hamiltonian vector field $H_p$ is tangent to $\Lambda$.
\end{itemize}
Under these assumptions, the sets
$$
\Lambda_\omega:=\Lambda\cap p^{-1}(\omega)
$$
are two-dimensional conic submanifolds of $T^*M\setminus 0$. Moreover, similarly
to Lemma~\ref{l:sink-established}, each $\Lambda_\omega$ is Lagrangian.
Indeed, if $G$ is a (local) defining function of $\Lambda$, namely
$G|_{\Lambda}=0$ and $dG|_{\Lambda}\neq 0$, then
$H_p$ being tangent to $\Lambda$ implies
\begin{equation}
\label{e:p-G-commute}
\{p,G\}=0\quad\text{on}\quad \Lambda.
\end{equation}
Thus $H_p,H_G$ form a tangent frame on $\Lambda_\omega$ and $\sigma(H_p,H_G)=0$ on~$\Lambda$, where
$\sigma$ denotes the symplectic form.
Since $\xi\partial_\xi$ is tangent to each $\Lambda_\omega$, for any choice
of local defining function $G$ of $\Lambda$ we can write
\begin{equation}
\label{e:Phi-new-def}
\xi\partial_\xi=\Phi H_p+\Theta H_G\quad\text{on}\quad\Lambda
\end{equation}
for some functions $\Phi,\Theta$ on $\Lambda$.
Since the one-dimensional subbundle
$\mathbb RH_G\subset T\Lambda$ is invariantly defined
we see that $\Phi\in C^\infty(\Lambda;\mathbb R)$ does not depend on the choice of $G$.
The function $\Phi$ is homogeneous of order~1. Indeed, we can choose $G$ to be homogeneous of order~1 which implies
that $[\xi\partial_\xi,H_G]=0$; we also have $[\xi\partial_\xi,H_p]=-H_p$.
By taking the commutator of both sides of~\eqref{e:Phi-new-def}
with $\xi\partial_\xi$ we see that $ \xi \partial_\xi \Phi = \Phi $.
{Similarly we see that $\Theta$ is homogeneous of order~0.}
On the other hand, taking the commutators of both sides of~\eqref{e:Phi-new-def}
with $H_p$ and $H_G$ and using the following consequence of~\eqref{e:p-G-commute},
$$
[H_p,H_G]=H_{\{p,G\}}\in \mathbb RH_G\quad\text{on}\quad \Lambda,
$$
we get the following identities:
\begin{equation}
\label{e:Phi-prop-1}
H_p\Phi\equiv 1,\quad
H_G\Phi\equiv 0\quad\text{on}\quad\Lambda.
\end{equation}
The function $\Phi$ is related to the $\omega$-derivative of a generating function of $\Lambda_\omega$
(see~\eqref{e:lm-par}):
\begin{lemm}
\label{l:phase-der}
Assume that $\Lambda_\omega$ is locally given (in some coordinate system on~$M$) by
\begin{equation}
\label{e:phase-der-1}
\Lambda_\omega=\{(x,\xi)\colon x=\partial_\xi F(\omega,\xi),\ \xi\in \Gamma_0\},
\end{equation}
where $ \xi \mapsto F ( \omega, \xi ) $ is a family of homogeneous functions of order~1 and $ \Gamma_0 \subset {\mathbb R}^2 \setminus 0 $ is a cone. Then we have
\begin{equation}
\label{e:phase-der-2}
\partial_\omega F(\omega,\xi)=-\Phi(\partial_\xi F(\omega,\xi),\xi).
\end{equation}
\end{lemm}
\begin{proof}
Let $G$ be a (local) defining function of $\Lambda$. Taking the $\partial_\xi$-component of~\eqref{e:Phi-new-def}
at a point $\zeta:=(\partial_\xi F(\omega,\xi),\xi)\in\Lambda$ we have
\begin{equation}
\label{e:clafouti-1}
\xi=-\Phi(\zeta)\partial_x p(\zeta)-\Theta(\zeta)\partial_x G(\zeta).
\end{equation}
On the other hand, differentiating in~$\omega$ the identities
$$
p(\partial_\xi F(\omega,\xi),\xi)=\omega,\quad
G(\partial_\xi F(\omega,\xi),\xi)=0
$$
we get
\begin{equation}
\label{e:clafouti-2}
\langle \partial_x p(\zeta),\partial_\xi\partial_\omega F(\omega,\xi) \rangle=1,\quad
\langle \partial_x G(\zeta),\partial_\xi\partial_\omega F(\omega,\xi) \rangle=0.
\end{equation}
Combining~\eqref{e:clafouti-1} and~\eqref{e:clafouti-2} we arrive {at}
$$
\langle\xi,\partial_\xi \partial_\omega F(\omega,\xi)\rangle=-\Phi(\zeta)
=-\Phi(\partial_\xi F(\omega,\xi),\xi)
$$
which implies~\eqref{e:phase-der-2} since the function $\xi\mapsto \partial_\omega F(\omega,\xi)$
is homogeneous of order~1.
\end{proof}
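The last step of the proof invokes Euler's identity for the order-1 homogeneous function $ \xi \mapsto \partial_\omega F(\omega,\xi) $; a quick symbolic check on a sample family (the particular $ F $ below is a hypothetical illustration):

```python
import sympy as sp

omega, xi1, xi2 = sp.symbols('omega xi1 xi2', positive=True)
# a sample family, homogeneous of order 1 in xi for each omega
F = sp.sqrt(xi1**2 + xi2**2) * sp.exp(omega) + omega * xi1

dF = sp.diff(F, omega)                      # also homogeneous of order 1
euler = xi1 * sp.diff(dF, xi1) + xi2 * sp.diff(dF, xi2)
# Euler's identity: <xi, d_xi(d_omega F)> = d_omega F
assert sp.simplify(euler - dF) == 0
```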
Now we specialize to the Lagrangian families used in this paper.
We start with a sign condition on $\Phi$ which will be used in \S \ref{asr}:
\begin{lemm}
\label{l:Phi-sign}
Let $\Lambda=\Lambda^+$ or $\Lambda=\Lambda^-$, with $\Lambda^\pm$ given in~\eqref{e:Lambda-pm-def},
and define $ \Phi^\pm $ using~\eqref{e:Phi-new-def}.
Then for some constant $c>0$
\begin{equation}
\label{e:Phi-sign}
\pm \Phi^\pm(x,\xi)\geq c|\xi|\quad\text{on}\quad \Lambda^\pm.
\end{equation}
\end{lemm}
\begin{proof}
We consider the case of $\Phi^+$ as the case of $\Phi^-$ is handled by
replacing $p$ with $-p$.
Recall from Lemma~\ref{l:sink-established}
that each $L^+_\omega=\kappa(\Lambda^+\cap p^{-1}(\omega))$ is a radial sink
for the flow $e^{t|\xi|H_p}$. Take $(x,\xi)\in \Lambda^+$ with $|\xi|$ large.
Then (with $S^*M$ denoting the cosphere bundle with respect to any fixed metric on~$M$)
\begin{equation}
\label{e:lynmar}
e^{-tH_p}(x,\xi)\in S^*M\quad\text{for some}\quad t>0,\quad
t\sim |\xi|.
\end{equation}
Recall from~\eqref{e:Phi-prop-1} that $H_p\Phi^+=1$ on $\Lambda^+$. Thus
$$
\Phi^+(x,\xi)=\Phi^+(e^{-tH_p}(x,\xi))+t\geq {c}|\xi|-C.
$$
It follows that $\Phi^+(x,\xi)\geq c|\xi|$ for large $|\xi|$; since $\Phi^+$ is homogeneous
of order~1, this inequality then holds on the entire $\Lambda^+$.
\end{proof}
We next construct adapted global defining functions of $\Lambda^\pm$ used in~\S\ref{s:lagreg}:
\begin{lemm}
\label{l:G-construction}
Let $\Lambda^\pm$ be defined in~\eqref{e:Lambda-pm-def}. Then there exist
$G_\pm\in C^\infty(T^*M\setminus 0;\mathbb R)$ such that:
\begin{enumerate}
\item $G_\pm$ are homogeneous of order~1;
\item $G_\pm|_{\Lambda^\pm}=0$ and $dG_\pm|_{\Lambda^\pm}\neq 0$;
\item $H_p G_\pm = a_\pm G_\pm$ in a neighborhood of $\Lambda^\pm$,
where $a_\pm\in C^\infty(T^*M\setminus 0;\mathbb R)$ are homogeneous
of order~$-1$ and $a_\pm|_{\Lambda^\pm}=0$.
\end{enumerate}
\end{lemm}
\begin{proof} We construct $G_+$, with $G_-$ constructed similarly.
Fix some function $\widetilde G_+$ which satisfies conditions~(1)~and~(2) of the
present lemma. It exists since $\Lambda^+$ is conic and orientable
(each of its connected components is diffeomorphic to~$[-\delta,\delta]\times \mathbb S^1\times\mathbb R^+$).
Let $\Theta_+$ be defined in~\eqref{e:Phi-new-def}:
\begin{equation}
\label{e:gcon-1}
\xi\partial_\xi=\Phi_+H_p+\Theta_+H_{\widetilde G_+}\quad\text{on}\quad\Lambda^+.
\end{equation}
Commuting both sides of~\eqref{e:Phi-new-def} with $\xi\partial_\xi$
we see that $\Theta_+$ is homogeneous of order~0.
Moreover $\Theta_+$ does not vanish on $\Lambda^+$ since
$H_p$ is not radial there (because the flow of $ X $ in \eqref{eq:defSig} has no fixed points). Choose $G_+$ satisfying conditions~(1)~and~(2) and such that
$$
G_+=\Theta_+\widetilde G_+\quad\text{near }\Lambda^+.
$$
Then~\eqref{e:gcon-1} gives
\begin{equation}
\label{e:gcon-2}
\xi\partial_\xi=\Phi_+ H_p+H_{G_+}\quad\text{on}\quad\Lambda^+.
\end{equation}
We have $H_p G_+|_{\Lambda^+}=0$
{(since $H_p$ is tangent to $\Lambda^+$)}, therefore $H_pG_+=a_+G_+$ near $\Lambda^+$
for some function $a_+$. Commuting both sides of~\eqref{e:gcon-2} with $H_p$
and using that $H_p\Phi_+\equiv 1$ on $\Lambda^+$ from~\eqref{e:Phi-prop-1} we have
$$
H_p=[H_p,\xi\partial_\xi]=H_p+[H_p,H_{G_+}]=H_p+H_{\{p,G_+\}}
=H_p+a_+H_{G_+}\quad\text{on}\quad\Lambda^+.
$$
Since $ H_{G_+} $ does not vanish on $ \Lambda^+ $, this gives $a_+|_{\Lambda^+}=0$
as needed.
\end{proof}
One application of Lemma~\ref{l:G-construction} is the existence of an $H_p$-invariant
density on $\Lambda^\pm$:
\begin{lemm}
\label{l:density}
There exist densities $\nu^\pm_\omega$ on $\Lambda^\pm_\omega$, $\omega\in [-\delta,\delta]$, such that:
\begin{itemize}
\item $\nu^\pm_\omega$ are homogeneous of order~1, that is $\mathcal L_{\xi\partial_\xi}\nu^\pm_\omega=\nu^\pm_\omega$;
\item $\nu^\pm_\omega$ are invariant under $H_p$, that is $\mathcal L_{H_p}\nu^\pm_\omega=0$.
\end{itemize}
\end{lemm}
\begin{proof}
In the notation of Lemma~\ref{l:G-construction} define $\nu^\pm_\omega$ by
$
|\sigma\wedge \sigma|=|dp\wedge dG_\pm|\times \nu^\pm_\omega
$
where $\sigma$ is the symplectic form. The properties of $\nu^\pm_\omega$ follow from the identities
$$
\mathcal L_{\xi\partial_\xi}\sigma=\sigma,\quad
\mathcal L_{\xi\partial_\xi}dp=0,\quad
\mathcal L_{\xi\partial_\xi}dG_\pm=dG_\pm,\quad
\mathcal L_{H_p}\sigma=0
$$
and the following statement which holds on $\Lambda^\pm$:
$$
\hspace{1.6in} \mathcal L_{H_p}(dp\wedge dG_\pm)=dp\wedge d(a_\pm G_\pm)=0. \hspace{1.6in}\qedhere
$$
\end{proof}
\section{Resolvent estimates}
\label{reso}
Here we recall the radial estimates as presented in \cite[\S E.4]{res}
specializing to the setting of \S \ref{ass}. We use the notation of~\cite[Appendix~E]{res}
and we write
$\|u\|_s:=\|u\|_{H^s(M)}$.
Since we are not in the semiclassical setting
of \cite[\S E.4]{res} we will only use the usual notion of the wave front
set: for $ u \in \mathscr D' ( M ) $, $ \WF ( u ) \subset
T^* M \setminus 0 $ -- see \cite[Exercise~E.16]{res}.
Similarly, for
$A\in \Psi^k(M)$
we denote by $\Ell(A)\subset T^*M\setminus 0$ its (nonsemiclassical) elliptic set.
Both sets are conic.
\subsection{Radial estimates uniformly up to the real axis}
\label{rad}
Since $ L^-_\omega $ is a radial source we can apply
\cite[Theorem~E.52]{res} (with $ h := 1 $)
to the operator
$$
\widetilde P_\epsilon:=\widetilde P-i\epsilon\langle D\rangle\in\Psi^1(M),\quad
\widetilde P:=\langle D\rangle^{1/2} (P - \omega)\langle D\rangle^{1/2},\quad
{0\leq\epsilon\ll 1}.
$$
Here, since $\widetilde P$ is self-adjoint, the threshold regularity condition
\cite[(E.4.39)]{res} is satisfied for $\widetilde P$ with any $ s >0 $.
Strictly speaking one has to modify the proof of~\cite[Theorem~E.52]{res}
to include the antiselfadjoint part $-i\epsilon \langle D\rangle$ which has a favorable
sign but is of the same differential order as $\widetilde P$.
(In~\cite{res} it was assumed that the principal symbol of $P$ is real-valued
near $L^-_\omega$.)
More precisely, we put $\mathbf P:=\widetilde P$ and $f:=\widetilde P_\epsilon u$
(instead of $f:=\widetilde P u$)
in~\cite[Theorem~E.52]{res}.
Since $\widetilde P_\epsilon$ satisfies the sign condition
for propagation of singularities~\cite[Theorem~E.47]{res},
it suffices to check that the positive commutator estimate~\cite[Lemma~E.49]{res}
holds. For that we write
\begin{equation}
\label{e:crown}
\Im\langle f,G^*Gu\rangle_{L^2}=\Im\langle \widetilde Pu,G^*Gu\rangle_{L^2}-
\epsilon\Re\big\langle \langle D\rangle u,G^*Gu\big\rangle_{L^2}.
\end{equation}
Here $G\in\Psi^s(M)$ is the quantization of an escape function
used in the proof of~\cite[Lemma~E.49]{res};
recall that we put $h:=1$. We now estimate
the additional term in~\eqref{e:crown}:
$$
\begin{aligned}
-\Re\big\langle \langle D\rangle u,G^*Gu\big\rangle_{L^2}
&=-\|\langle D\rangle^{1/2}Gu\|_{L^2}^2+\langle \Re(G^*[\langle D\rangle,G])u,u\rangle_{L^2}\\
&\leq C\|B_1u\|_{s-1/2}^2+C\|u\|_{H^{-N}}^2
\end{aligned}
$$
where {$B_1$ satisfies the properties in the statement of~\cite[Lemma~E.49]{res}} and in the last line we used that
$G^*[\langle D\rangle,G]\in\Psi^{2s}(M)$ has purely imaginary principal symbol
and thus $\Re(G^*[\langle D\rangle,G])\in \Psi^{2s-1}(M)$.
The rest of the proof of~\cite[Lemma~E.49]{res} applies without changes.
See also~\cite[Lemma~3.7]{DG}.
Applying the radial estimate in~\cite[Theorem~E.52]{res} for the operator
$\widetilde P_\epsilon=\langle D\rangle^{1/2}(P-\omega-i\epsilon)\langle D\rangle^{1/2}$ to $\langle D\rangle^{-1/2}u$
we see that for every $ \widetilde B_- \in \Psi^0(M) $, $
\Lambda^- \subset \Ell ( \widetilde B_- ) $ there exists
$ A_- \in \Psi^0 ( M ) $, $ \Lambda^-
\subset \Ell( A_- ) $, such that
\begin{equation}
\label{eq:rad_source}
\begin{gathered}
\| A_- u \|_{ s } \leq C \|\widetilde B_-( P - \omega - i \epsilon ) u \|_{ s+1} +
C \| u \|_{ -N } , \\
u \in C^\infty ( M ) , \quad s > -\tfrac12 ,\quad
|\omega|\leq\delta,\quad
\epsilon \geq 0 ,
\end{gathered}
\end{equation}
where $ C $ does not depend on $ \epsilon, \omega $ and $ N$ can be chosen arbitrarily large.
The supports of $ A_-$, $\widetilde B_-$ are shown on Figure~\ref{f:3}.
The inequality~\eqref{eq:rad_source} can be
extended to a larger class of distributions
{(as opposed to $u\in C^\infty(M)$)}: it suffices that $ \widetilde B_- (P -\omega - i \epsilon )u \in H^{ s + 1} ( M ) $ and that
$ A_- u \in H^{s'} ( M ) $ for some $ s' > -\frac12 $.
See Remark~5 after \cite[Theorem~E.52]{res} or
\cite[Proposition~2.6]{DZ}, \cite[Proposition~2.3]{Va}.
\begin{figure}
\includegraphics{flop.3}
\qquad
\includegraphics{flop.4}
\caption{An illustration of the supports of the operators
appearing in \eqref{eq:rad_source} (left: radial sources)
and \eqref{eq:rad_sink} (right: radial sinks). The horizontal line
on the top denotes $\partial\overline T^*M$, the arrows denote
flow lines of $|\xi|H_p$.}
\label{f:3}
\end{figure}
Similarly we have estimates near radial sinks~\cite[Theorem~E.54]{res}
for $L^+_\omega$.
Namely, for every $ \widetilde B_+ \in \Psi^0 ( M ) $,
$ \Lambda^+ \subset \Ell (\widetilde B_+ ) $, there exist $ A_+,B_+ \in \Psi^0(M) $, such that $ \Lambda^+
\subset \Ell( A_+ ) $,
$ \WF ( B_+ ) \cap \Lambda^+ =\emptyset $, and
\begin{equation}
\label{eq:rad_sink}
\begin{gathered}
\| A_+ u \|_{ s } \leq C \| \widetilde B_+ ( P - \omega - i \epsilon ) u \|_{ s+1} +
C \| B_+ u \|_{ s } +
C \| u \|_{ -N } , \\
u \in C^\infty ( M ) , \quad s < -\tfrac12 , \quad
|\omega|\leq\delta,\quad
\epsilon \geq 0 ,
\end{gathered}
\end{equation}
where $ C $ does not depend on $ \epsilon,\omega$ and $ N$ can be chosen arbitrarily large.
The inequality is also valid for distributions $ u $ such that
$ \widetilde B_+ ( P - \omega - i \epsilon ) u \in H^{s+1}( M ) $ and $ B_+ u \in H^{s} ( M) $ and it then provides (unconditionally) $ A_+ u \in H^s ( M ) $~--
see Remark~2 after~\cite[Theorem~E.54]{res} or
\cite[Proposition~2.7]{DZ}, \cite[Proposition~2.4]{Va}.
Away from radial points we have the now standard propagation results of
Duistermaat--H\"ormander \cite[Theorem~E.47]{res}: if
$ A, B, \widetilde B \in \Psi^0 ( M ) $ and
for each $(x,\xi)\in \WF(A)$ there exists
$T\geq 0$ such that
\[ e^{-T|\xi|H_p}(x,\xi)\in\Ell(B), \ \ e^{-t |\xi| H_p } ( x, \xi) \in
\Ell ( \widetilde B ) , \ 0 \leq t \leq T , \]
then
\begin{equation}
\label{eq:DH}
\begin{gathered}
\| A u \|_{ s } \leq C \| \widetilde B ( P - \omega - i \epsilon ) u \|_{ s+1} +
C \| B u \|_{ s } +
C \| u \|_{ -N } , \\
u \in C^\infty ( M ) , \quad s \in {\mathbb R} , \quad |\omega|\leq\delta , \quad
\epsilon \geq 0 ,
\end{gathered}
\end{equation}
with $ C $ independent of $ \epsilon,\omega $.
{We also have the elliptic estimate~\cite[Theorem~E.33]{res}:
\eqref{eq:DH} holds with $B=0$ if $\WF(A)\cap p^{-1}([-\delta,\delta])=\emptyset$
and $\WF(A)\subset\Ell(\widetilde B)$.}
Let us now consider
\[
u_\epsilon = u_\epsilon ( \omega ) := ( P - \omega - i \epsilon )^{-1} f , \ \
f \in C^\infty ( M ) , \quad|\omega|\leq\delta,\quad \epsilon > 0 .
\]
For any fixed $ \epsilon > 0 $, $ P - \omega - i \epsilon \in \Psi^0(M) $ is an elliptic operator
(its principal symbol equals $p-\omega-i\epsilon$ and $p$ is real-valued), thus by elliptic regularity $ u_\epsilon \in C^\infty ( M ) $. Combining
\eqref{eq:rad_source}, \eqref{eq:rad_sink} and \eqref{eq:DH} we see that
for any $ \beta > 0 $
\begin{equation}
\label{eq:uep1}
\| u_\epsilon \|_{ - \frac12 - \beta } \leq C \| f \|_{ \frac12 + \beta} + C \| u_\epsilon \|_{ -N} ,
\end{equation}
and that
\begin{equation}
\label{eq:uep2} \| A u_\epsilon \|_{ s } \leq C \| f \|_{s+1} + C \| u_\epsilon \|_{ -N } , \ \
\WF ( A ) \cap \Lambda^+ = \emptyset, \ \ s > - \tfrac12 .
\end{equation}
Here the constant $C$ depends on $\beta,s$ but does not depend on $\epsilon,\omega$.
Indeed, by our dynamical assumption~\eqref{eq:dynaSC}
every trajectory $e^{t|\xi|H_p}(x,\xi)$ with $(x,\xi)\in p^{-1}([-\delta,\delta])\setminus\Lambda^+$
converges to $\Lambda^-$ as $t\to -\infty$ (see Figure~\ref{f:global}).
Applying~\eqref{eq:DH} with $B:=A_-$ and using~\eqref{eq:rad_source} we get~\eqref{eq:uep2}.
Putting $A:=B_+$ in~\eqref{eq:uep2} and using~\eqref{eq:rad_sink} we get~\eqref{eq:uep1}.
\begin{figure}
\includegraphics{flop.5}
\caption{A schematic representation of the flow
$e^{t|\xi|H_p}$ on the fiber infinity $\partial\overline T^*M$
intersected with the energy surface $p^{-1}(\omega)$,
with the regularity thresholds for the estimates~\eqref{eq:rad_source}
and~\eqref{eq:rad_sink}.}
\label{f:global}
\end{figure}
In particular, we obtain a regularity statement for the limits of the family $(u_\epsilon)$:
\begin{equation}
\label{eq:uesj} \exists \, \epsilon_j \to 0 , \ u \in \mathscr D' ( M ) , \
u_{\epsilon_j } \xrightarrow{ \mathscr D' ( M) } u \quad
\Longrightarrow \quad u \in H^{ -\frac12 - } ( M ) , \ \
\WF ( u ) \subset \Lambda^+ .
\end{equation}
Note also that every $u$ in~\eqref{eq:uesj} solves the equation $(P-\omega)u=f$.
\subsection{Regularity of eigenfunctions}
\label{eig}
Motivated by \eqref{eq:uesj} we have the following regularity statement.
The proof is an immediate modification of the proof of
\cite[Lemma 2.3]{zazi}: replace $ P $ there by
$ A^{-1} ( P - \omega ) A^{-1} $ where $ A \in \Psi^{-\frac12}
( M ) $ is elliptic, self-adjoint on $ L^2 (M , dm ( x ) ) $
(the same density with respect to which $ P $ is self-adjoint) and invertible.
We record this as
\begin{lemm}
\label{l:zazi}
Suppose that $P $ satisfies \eqref{eq:assP} and \eqref{eq:dynaSC}. Then for
$ \omega $ sufficiently small and for $ u \in \mathscr D' ( M ) $
\[
( P - \omega ) u \in C^\infty , \quad
\WF ( u ) \subset \Lambda^+ , \quad
\Im \langle ( P - \omega ) u , u \rangle \geq 0 ,\quad
|\omega|\leq\delta
\]
implies that $ u \in C^\infty ( M ) $.
\end{lemm}
In particular this shows that if $ ( P - \omega ) u = 0 $ and
$ \WF ( u) \subset \Lambda^+ $ then $ u \in L^2 $, that is
$ \omega$ lies in the point spectrum $\Spec_{\rm{pp}} ( P ) $. Radial estimates then show that
the number of such $ \omega$'s is finite in a neighbourhood of $ 0 $:
\begin{lemm}
\label{l:spec}
Under the assumptions \eqref{eq:assP} and \eqref{eq:dynaSC}, with $ \delta $ sufficiently small,
\begin{equation}
\label{eq:spec}
\begin{gathered}
| \Spec_{\rm{pp}} ( P ) \cap [- \delta, \delta ] | < \infty ; \\
( P - \omega ) u = 0 , \ u \in L^2 ( M ) , \
|\omega | \leq \delta \quad \Longrightarrow\quad u \in C^\infty ( M ) .
\end{gathered}
\end{equation}
\end{lemm}
\begin{proof}
If $ u \in L^2 ( M ) $ then the threshold assumption in
\eqref{eq:rad_source} is satisfied for $ P - \omega $ near $ \Lambda^- $ and for $ - ( P - \omega ) $ near $ \Lambda^+ $.
Using the remark about regularity after \eqref{eq:rad_source},
as well as~\eqref{eq:DH} away from sinks and sources,
we conclude that
\begin{equation}
\label{eq:N2s} \| u \|_{ s} \leq C \| u \|_{ -N }
\end{equation}
for any $ s $ and $ N $. That implies that $ u \in C^\infty ( M ) $.
Now, suppose that there exists an infinite set of $ L^2 $ eigenfunctions with
eigenvalues in $ [ - \delta, \delta ] $:
\[
( P - \omega_j ) u_j = 0 , \ \ \ \langle u_k, u_j \rangle_{ L^2 ( M) }
= \delta_{kj} , \ \ \ | \omega_j | \leq \delta.
\]
Since $ u_j \rightharpoonup 0 $ weakly in $ L^2 $, we have $ u_j \to 0 $ strongly in
$ H^{-1} $. But this contradicts \eqref{eq:N2s} applied with $ s = 0 $ and
$ N = 1 $.
\end{proof}
{From now on we make the assumption that $P$ has no eigenvalues in $[-\delta,\delta]$:
\begin{equation}
\label{e:no-spectrum}
\Spec_{\rm{pp}} ( P ) \cap [- \delta, \delta ]=\emptyset.
\end{equation}
By Lemma~\ref{l:spec} we see that~\eqref{e:no-spectrum} holds for $\delta$ small enough
as long as $0\notin\Spec_{\rm{pp}}(P)$.}
\subsection{Limiting absorption principle}
\label{lap}
Using results of \S\S\ref{rad},\ref{eig} we obtain a version of the limiting absorption principle sufficient for proving~\eqref{eq:SC2}. Radial estimates
can also easily give existence of $ ( P - \omega - i 0)^{-1} :
H^{\frac12+} ( M ) \to H^{-\frac12 - } ( M ) $ but we restrict ourselves to the simpler version and follow Melrose \cite[\S 14]{mel}.
The only modification lies in replacing scattering asymptotics by the regularity result given in Lemma \ref{l:zazi}.
\begin{lemm}
\label{l:lap}
Suppose that $ P $ satisfies \eqref{eq:assP}, \eqref{eq:dynaSC}, and~\eqref{e:no-spectrum}.
Then for $ |\omega | \leq \delta $ and $ f \in C^\infty ( M ) $, the limit
\[
( P - \omega - i \epsilon )^{-1}f
\xrightarrow{ H^{-\frac12 - } ( M) } ( P - \omega - i 0 )^{-1} f,\quad
\epsilon\to 0+
\]
exists. This limit is the unique solution to the equation
\begin{equation}
\label{e:lapidus}
(P-\omega)u=f,\quad \WF(u)\subset\Lambda^+,
\end{equation}
and the map $\omega\mapsto (P-\omega-i0)^{-1}f\in H^{-\frac 12-}(M)$
is continuous in $\omega\in [-\delta,\delta]$.
\end{lemm}
\Remark Replacing $P$ with $-P$ we see that there is also a limit
$$
( P - \omega + i \epsilon )^{-1}f
\xrightarrow{ H^{-\frac12 - } ( M) } ( P - \omega + i 0 )^{-1} f,\quad
\epsilon\to 0+
$$
which satisfies~\eqref{e:lapidus} with $\Lambda^+$ replaced by $\Lambda^-$.
\begin{proof}
We first note that Lemma~\ref{l:zazi} and the spectral assumption~\eqref{e:no-spectrum}
imply that~\eqref{e:lapidus} has no more than one solution.
By~\eqref{eq:uesj}, if the (distributional) limit of
$ ( P - \omega-i \epsilon_j )^{-1} f $ along some sequence $ \epsilon_j \to 0+ $ exists, then it solves~\eqref{e:lapidus}.
To show that the limit exists put $ u_\epsilon := ( P - \omega- i \epsilon )^{-1} f $ and suppose first that $ \| u_\epsilon \|_{ -\frac12 - \alpha }$
is not bounded as $ \epsilon \to 0 + $ for some $ \alpha > 0 $. Hence there exists $ \epsilon_j \to 0+ $ such that $ \| u_{\epsilon_j} \|_{ -\frac12 - \alpha } \to \infty $.
Putting
$ v_j := u_{\epsilon_j} / \| u_{\epsilon_j} \|_{ -\frac12 - \alpha } $ we obtain
\begin{equation}
\label{eq:Pie}
( P - \omega-i \epsilon_j ) v_j = f_j , \ \ \| v_j \|_{ {-\frac12 - \alpha }} = 1,
\ \ f_j \xrightarrow{ C^\infty ( M) } 0 .
\end{equation}
Applying \eqref{eq:uep1} with $ N = \frac12 + \alpha $
we see that $ v_j $ is bounded in $ H^{-\frac12 - \beta } ( M ) $ for any $ \beta > 0 $.
Since the embedding $ H^{-\frac12 - \beta } ( M ) \hookrightarrow H^{-\frac12 - \alpha } ( M ) $, $ \beta < \alpha $, is compact, we can assume, by passing to a subsequence, that $ v_j \to v $ in
$ H^{-\frac12 - \alpha } ( M ) $. Then
$ (P-\omega) v = 0 $ and the same reasoning that led to \eqref{eq:uesj} shows that
$ \WF ( v ) \subset \Lambda^+ $. Thus $v$ solves~\eqref{e:lapidus} with $f\equiv 0$,
implying that $ v \equiv 0 $. This gives a contradiction with the normalization $\|v_j\|_{ -\frac12 - \alpha }=1$.
We conclude that $ u_\epsilon $ is bounded in $ H^{-\frac12 - \alpha }
( M ) $ for all $ \alpha > 0 $. Then, arguing as in the previous paragraph, the family
$(u_\epsilon)_{0<\epsilon\leq 1}$ is precompact in $H^{-\frac 12-\alpha}(M)$ for all $\alpha>0$.
Since every limit point has to be the (unique) solution to~\eqref{e:lapidus},
we see that $u_\epsilon$ converges as $\epsilon\to 0+$ in $H^{-\frac 12-\alpha}(M)$
to that solution.
As for continuity in $\omega$, we note that the above proof
gives the stronger statement
\begin{equation}
\label{e:continuor}
(P-\omega_j-i\epsilon_j)^{-1}f\xrightarrow{ H^{-\frac12 - } ( M) } (P-\omega-i0)^{-1}f
\end{equation}
for all $\epsilon_j\to 0+$,
$\omega_j\to \omega$,
and
$|\omega_j|\leq \delta$.
\end{proof}
In~\S\ref{s:lagreg} we will need the following
upgraded version of Lemma~\ref{l:lap}:
\begin{lemm}
\label{l:lapup}
Suppose that $ P $ satisfies \eqref{eq:assP}, \eqref{eq:dynaSC}, and~\eqref{e:no-spectrum}.
Let $ s < -\frac12 $ and
$ g \in H^{ s + 1 } ( M ) $, $ \WF ( g) \subset \Lambda^+ $,
where $ \Lambda^+ $ is defined by \eqref{e:Lambda-pm-def}.
Then for $|\omega|\leq\delta$ the limit
\begin{equation}
\label{eq:lapup} ( P - \omega - i \epsilon )^{-1} g
\xrightarrow{ H^{s-} ( M) } ( P - \omega - i 0 )^{-1} g,\quad
\epsilon\to 0+
\end{equation}
exists, and $ \WF ( ( P - \omega - i 0 )^{-1} g ) \subset
\Lambda^+ $. In particular, for $k\geq 1$ and $ f \in C^\infty ( M ) $
the limit
\begin{equation}
\label{e:lapup2}
( P - \omega - i \epsilon )^{-k}f
\xrightarrow{ H^{-k + \frac12 - } ( M) } ( P - \omega - i 0 )^{-k} f,\quad
\epsilon\to 0+ ,
\end{equation}
exists. Finally, $ ( P - \omega - i 0 )^{-1} f \in C^{k}_\omega ( [- \delta, \delta];H^{ -k- \frac12 - } ( M ) ) $
with $\partial_\omega^k( P - \omega - i 0 )^{-1} f=k!( P - \omega - i 0 )^{-k-1} f$.
\end{lemm}
\begin{proof}
We follow closely the proof of Lemma~\ref{l:lap} and put $ u_\epsilon := ( P - \omega- i \epsilon )^{-1} g $.
Since $P-\omega-i\epsilon$ is elliptic
for every $\epsilon>0$, we have $u_\epsilon\in H^{s+1}(M)$ and $\WF(u_\epsilon)\subset \WF(g)\subset\Lambda^+$,
so it remains to establish uniformity as $\epsilon\to 0+$.
We use the following version of~\eqref{eq:uep2} (which follows
from the same proof): for every $A\in\Psi^0(M)$ with $\WF(A)\cap \Lambda^+=\emptyset$
there exists $\widetilde B\in\Psi^0(M)$ with $\WF(\widetilde B)\cap\Lambda^+=\emptyset$ such that
\begin{equation}
\label{e:uep2-adv}
\|Au_\epsilon\|_{s'}\leq C\|\widetilde B g\|_{s'+1}+C\|u_\epsilon\|_{-N},\quad
s'>-\textstyle{1\over 2}
\end{equation}
where the constant $C$ does not depend on $\omega,\epsilon$. We also have the following
version of~\eqref{eq:uep1}: there exists $B'\in\Psi^0(M)$ with $\WF(B')\cap\Lambda^+=\emptyset$ such that
\begin{equation}
\label{e:uep1-adv}
\|u_\epsilon\|_s\leq
C\|g\|_{s+1}+C\|B' g\|_{1}+C\|u_\epsilon\|_{-N},\quad
s<-\textstyle{1\over 2}.
\end{equation}
Here the norms $\|\widetilde Bg\|_{s'+1}$ and $\|B'g\|_1$
are finite since $\WF(g)\subset\Lambda^+$. From~\eqref{e:uep2-adv} and~\eqref{e:uep1-adv} we get
regularity for limit points of $u_{\epsilon_j}$ similarly to~\eqref{eq:uesj}:
\[
\exists \, \epsilon_j \to 0+ , \ u \in \mathscr D' ( M ) , \
u_{\epsilon_j } \xrightarrow{ \mathscr D' ( M) } u \quad
\Longrightarrow \quad u \in H^{ s } ( M ) , \ \
\WF ( u ) \subset \Lambda^+ .
\]
The existence of the limit~\eqref{eq:lapup} follows as in the proof of Lemma~\ref{l:lap},
replacing $-{1\over 2}$ by $s$ in Sobolev space orders; here
$u=(P-\omega-i0)^{-1}g$ is the unique solution to
$$
(P-\omega)u=g,\quad
\WF(u)\subset\Lambda^+.
$$
Iterating this argument, we get existence of the limit~\eqref{e:lapup2}
and continuous dependence of $(P-\omega-i0)^{-k}f\in H^{-k+{1\over 2}-}$
on $\omega\in [-\delta,\delta]$ similarly to~\eqref{e:continuor},
with $u=(P-\omega-i0)^{-k}f$ being the unique solution to
$$
(P-\omega)^k u=f,\quad
\WF(u)\subset\Lambda^+.
$$
It remains to show differentiability in~$\omega$.
For simplicity we assume that $\omega=0$ and show that
for $f\in C^\infty(M)$,
\begin{equation}
\label{e:diffor}
\partial_\omega \big[(P-\omega-i0)^{-1}f\big]\big|_{\omega=0}
=(P-\omega-i0)^{-2}f\quad\text{in}\quad H^{-\frac 32-}.
\end{equation}
The case of higher derivatives is handled by iteration.
To show~\eqref{e:diffor} we denote $u_\epsilon(\omega):=(P-\omega-i\epsilon)^{-1}f$ and write for $\omega\neq 0$, with limits in $ H^{-\frac32 - } $
\begin{equation}
\label{e:diffor2}
\begin{aligned}
{u_0(\omega)-u_0(0)\over\omega}
&=\lim_{\epsilon\to 0+}{u_\epsilon(\omega)-u_\epsilon(0)\over\omega}
=\lim_{\epsilon\to 0+}(P-\omega-i\epsilon)^{-1}(P-i\epsilon)^{-1}f
\\&=(P-\omega-i0)^{-1}(P-i0)^{-1}f.
\end{aligned}
\end{equation}
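Here the second equality follows from the resolvent identity: for $\epsilon>0$,
$$
\begin{aligned}
u_\epsilon(\omega)-u_\epsilon(0)&=(P-\omega-i\epsilon)^{-1}\big[(P-i\epsilon)-(P-\omega-i\epsilon)\big](P-i\epsilon)^{-1}f\\
&=\omega\,(P-\omega-i\epsilon)^{-1}(P-i\epsilon)^{-1}f .
\end{aligned}
$$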
To show the last equality above we first note
that the family $(P-\omega-i\epsilon)^{-1}(P-i\epsilon)^{-1}f$ is precompact
in $H^{-{3\over 2}-\alpha}(M)$ for any $\alpha>0$ as follows from iterating~\eqref{e:uep1-adv}.
By~\eqref{e:uep2-adv} every limit point $u$ of this family as $\epsilon\to 0+$ satisfies
$P(P-\omega)u=f$, $\WF(u)\subset\Lambda^+$ and thus equals
$(P-\omega-i0)^{-1}(P-i0)^{-1}f$. Finally, letting $\omega\to 0$ in~\eqref{e:diffor2}
we get~\eqref{e:diffor}.
\end{proof}
\section{Lagrangian structure of the resolvent}
\label{lare}
In this section we describe the Lagrangian structure of the resolvent
refining the results of Haber--Vasy~\cite{hb} in our special case.
To start, we briefly review basic theory of Lagrangian distributions following~\cite[\S25.1]{H4}.
\subsection{Lagrangian distributions}
\label{s:lagrangian-basic}
Let $M$ be a compact surface and $\Lambda_0\subset T^*M\setminus 0$
a conic Lagrangian submanifold without boundary.
Denote by $I^s(M;\Lambda_0)\subset\mathcal D'(M)$ the space of Lagrangian distributions
of order~$s$ on $M$
associated to $\Lambda_0$. They have the following properties:
\begin{enumerate}
\item $I^s(M;\Lambda_0)\subset H^{-{1\over 2}-s-}(M)$;
\item for all $u\in I^s(M;\Lambda_0)$ we have $\WF(u)\subset\Lambda_0$;
\item if $\Lambda_1\subset \Lambda_0$ is an open conic subset
and $u\in I^s(M;\Lambda_0)$, then $u\in I^s(M;\Lambda_1)$
if and only if $\WF(u)\subset \Lambda_1$;
\item for all $A\in \Psi^k(M)$ and $u\in I^s(M;\Lambda_0)$
we have $Au\in I^{s+k}(M;\Lambda_0)$;
\item if additionally $\sigma(A)|_{\Lambda_0}=0$, then $Au\in I^{s+k-1}(M;\Lambda_0)$.
\end{enumerate}
Denote
$$
I^{s+}(M;\Lambda_0):=\bigcap_{s'>s} I^{s'}(M;\Lambda_0).
$$
A simple example on a torus (in the notation of \S \ref{exa}) is given by
\begin{equation}
\label{eq:exala}
u ( x ) := ( x_1 - \tfrac{\pi}2 - i 0 )^{-1} \varphi(x), \ \
\varphi \in C^\infty_{\rm{c}} ( B ( 0 , 1 ) ) , \ \
u \in I^{0} ( \mathbb T^2 ; \Lambda_0^+ ) \subset H^{-\frac12 - } (
\mathbb T^2 ) ,
\end{equation}
where $ \Lambda_0^+ $ is given in \eqref{eq:P1}.
To define Lagrangian distributions we use Melrose's iterative
characterization~\cite[Definition~25.1.1]{H4}:
$u\in\mathcal D'(M)$ lies in
$I^{s+}(M;\Lambda_0)$ if and only if $\WF(u)\subset\Lambda_0$ and
\begin{equation}
\label{e:lagr-char}
A_1\dots A_\ell \, u\in H^{-{1\over 2}-s-}(M)\quad\text{for any}\quad
A_1,\dots,A_\ell\in\Psi^1(M),\
\sigma(A_j)|_{\Lambda_0}=0.
\end{equation}
Note that~\cite{H4} uses Besov spaces ${}^\infty H^s$, however this does not
make a difference in~\eqref{e:lagr-char} since $H^s\subset {}^\infty H^s\subset H^{s'}$
for all $s'<s$, see~\cite[Proposition~B.1.2]{H3}.
We also need oscillatory integral representations for Lagrangian distributions.
Assume that in some local coordinate system on $M$, $\Lambda_0$ is given by
\begin{equation}
\label{e:lm-par}
\Lambda_0=\{(x,\xi)\colon x=\partial_\xi F(\xi),\ \xi\in\Gamma_0\}
\end{equation}
where $\Gamma_0\subset\mathbb R^2\setminus 0$ is an open cone and $F:\Gamma_0\to\mathbb R$ is
homogeneous of order~1. (Every conic Lagrangian can locally be written in this form after a change of the base variables $ x $~-- see~\cite[Theorem 21.2.16]{H3}. Using a pseudodifferential partition of unity
we can then write every Lagrangian distribution as a sum of expressions of the form~\eqref{e:lagros}.)
Then $u\in I^s(M;\Lambda_0)$ if and only if $u$ can be written (modulo a $C^\infty$ function)
as
\begin{equation}
\label{e:lagros}
u(x)=\int_{\Gamma_0}e^{i(\langle x,\xi\rangle-F(\xi))}a(\xi)\,d\xi
\end{equation}
where $a(\xi)\in C^\infty(\mathbb R^2)$ is a symbol of order $s-{1\over 2}$, namely
\begin{equation}
\label{eq:symb}
|\partial^\alpha_\xi a(\xi)|\leq C_\alpha \langle\xi\rangle^{s-{1\over 2}-|\alpha|},\quad
\xi\in\mathbb R^2
\end{equation}
and $a$ is supported in a closed cone contained in $\Gamma_0$. See~\cite[Proposition~25.1.3]{H4}.
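For example, taking $\Gamma_0=\mathbb R^2\setminus 0$ and $F(\xi)=|\xi|$ in~\eqref{e:lm-par} gives
$$
\Lambda_0=\{(x,\xi)\colon x=\xi/|\xi|,\ \xi\neq 0\},
$$
the half of the conormal bundle of the unit circle $\{|x|=1\}$ consisting of outward pointing covectors; the corresponding distributions~\eqref{e:lagros} are then conormal distributions associated to this circle.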
An equivalent way of stating~\eqref{e:lagros} is in terms of the Fourier transform $\hat u$:
$e^{iF(\xi)}\hat u(\xi)$ is a symbol, that is, satisfies estimates \eqref{eq:symb}.
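Indeed, if $u$ is given by~\eqref{e:lagros} modulo a compactly supported $C^\infty$ function (whose Fourier transform is Schwartz), then, formally exchanging the $x$ and $\xi$ integrations,
$$
\hat u(\eta)=\int_{\mathbb R^2}\int_{\Gamma_0}e^{i(\langle x,\xi-\eta\rangle-F(\xi))}a(\xi)\,d\xi\,dx
=(2\pi)^2e^{-iF(\eta)}a(\eta),
$$
so that $e^{iF(\eta)}\hat u(\eta)=(2\pi)^2a(\eta)$ satisfies~\eqref{eq:symb}; the exchange is justified by the support condition and the estimates~\eqref{eq:symb} on~$a$.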
We finally review properties of the principal symbol of a Lagrangian distribution,
used in the proof of Lemma~\ref{l:lagreg-plus} below, referring the reader to~\cite[Chapter~25]{H4}
for details.
The principal symbol of a Lagrangian distribution, $ u $, with values in half-densities, $u\in I^s(M, \Lambda; \Omega^{\frac12}_M )$, is the equivalence class
$$ \sigma(u)\in S^{s+{1\over 2}}(\Lambda;\mathcal M_\Lambda \otimes \Omega_\Lambda^{1\over 2})/
S^{s-{1\over 2}}(\Lambda;\mathcal M_\Lambda \otimes \Omega_\Lambda^{1\over 2}),
$$
see \cite[Theorem~25.1.9]{H4}, where
\begin{itemize}
\item $\Omega_\Lambda^{1\over 2}$ is the line bundle of half-densities on $\Lambda$;
\item $\mathcal M_\Lambda$ is the Maslov line bundle; it has a finite number of prescribed local frames with ratios of any two prescribed frames given by a constant of absolute value one. Consequently it has a canonical inner product and does not enter into the calculations below;
\item $S^k(\Lambda;\mathcal M_\Lambda\otimes\Omega_\Lambda^{1\over 2})$ is the space
of sections in $C^\infty(\Lambda;\mathcal M_\Lambda\otimes\Omega_\Lambda^{1\over 2})$
which are symbols of order~$k$, defined using the dilation operator
$(x,\xi)\mapsto (x,\lambda\xi)$, $\lambda>0$, see the discussion on~\cite[page~13]{H4}.
In the parametrization~\eqref{e:lagros} we have
$\sigma(u |dx|^{\frac12} )=(2\pi)^{-\frac12}a(\xi)|d\xi|^{\frac12}$. The factor $|d\xi|^{\frac12}$
accounts for the difference in the order of the symbol.
\end{itemize}
If $P\in\Psi^\ell(M; \Omega_M^{\frac12} )$ satisfies $\sigma(P)|_{\Lambda}=0$ and $u\in I^s(M,\Lambda;\Omega_M^{\frac12} )$ then
\begin{equation}
\label{e:transport-eqn}
Pu\in I^{s+\ell-1}(M, \Lambda; \Omega_M^{\frac12} ),\quad
\sigma(Pu)= \tfrac 1 i L \sigma(u)
\end{equation}
where $L$ is a first order differential operator on $C^\infty(\Lambda;\mathcal M_\Lambda\otimes\Omega_\Lambda^{1\over 2})$ with principal part $H_p$. The equation \eqref{e:transport-eqn} is the {\em transport equation} for $P $
(the {\em eikonal equation} corresponds to $ \sigma ( P ) |_\Lambda = 0 $)~-- see~\cite[Theorem~25.2.4]{H4}.
If $P$ is self-adjoint, then its subprincipal symbol is real-valued by~\cite[Theorem~18.1.34]{H3}
and thus by~\cite[(25.2.12)]{H4}
\begin{equation}
\label{eq:LLst}
L^* = -L \quad \text{on }
L^2 ( \Lambda; \mathcal M_\Lambda \otimes \Omega_\Lambda^{\frac12} ) .
\end{equation}
\subsection{Lagrangian regularity}
\label{s:lagreg}
We now establish Lagrangian regularity for elements in the range
of the operators $(P-\omega\mp i0)^{-1}$ constructed in~\S\ref{lap}:
\begin{lemm}
\label{l:lagreg}
Suppose that $ P $ satisfies \eqref{eq:assP}, \eqref{eq:dynaSC}, and~\eqref{e:no-spectrum}.
Let $f\in C^\infty(M)$ and
$$
u^\pm(\omega):=(P-\omega\mp i0)^{-1}f\in H^{-{1\over 2}-}(M),\quad
|\omega|\leq\delta.
$$
Then $u^\pm(\omega)\in I^{0}(M ; \Lambda^\pm_\omega)$.
Moreover, the symbols of $u^\pm(\omega)$ depend smoothly on $\omega$:
\begin{equation}
\label{e:lagreg}
u^\pm(\omega)\in C^\infty_\omega\big([-\delta,\delta];I^{0}(M;\Lambda^\pm_\omega)\big) , \end{equation}
where the precise meaning of~\eqref{e:lagreg} is explained in Lemma~\ref{l:lagreg-oi} below (see~\eqref{e:lagreg-oi-2} and Remark~2).
\end{lemm}
\Remark
Lemma~\ref{l:lagreg} is similar to the results
of Haber and Vasy~\cite[Theorem 1.7, Theorem 6.3]{hb}. There are two differences:
\cite{hb} makes the assumption that the Hamiltonian field $H_p$ is radial on $\Lambda^\pm_\omega$
(which is not true in our case) and it also does not prove smooth dependence of the symbols of $u^\pm(\omega)$ on $\omega$.
Because of these we give a self-contained proof of Lemma~\ref{l:lagreg} below,
noting that the argument is simpler in our situation.
We focus on the case of $u^+(\omega)$, with
regularity of $u^-(\omega)$ proved by replacing $P, \, \omega$ with $-P, \, - \omega$, respectively.
By Lemma~\ref{l:lapup} we have for every $k\geq 0$
\begin{equation}
\label{e:lag-apriori}
u^+(\omega)\in C^k_\omega([-\delta,\delta];H^{-k-{1\over 2}-}(M)),\quad
\WF(\partial^k_\omega u^+(\omega))\subset \Lambda^+
\end{equation}
where the wavefront set statement is uniform in $\omega$.
To upgrade~\eqref{e:lag-apriori} to Lagrangian regularity, we use the criterion~\eqref{e:lagr-char},
applying first order operators $W$ and $D_\omega-Q$ to $u^+(\omega)$ (see Lemma~\ref{l:lagr-iter} below). Here,
\begin{equation}
\label{eq:defWQ}
W,Q\in\Psi^1(M),\quad
\sigma(W)=G_+,\quad
\sigma(Q)|_{\Lambda^+}=\Phi_+
\end{equation}
where $G_+$ is the defining function of $\Lambda^+$ constructed in Lemma~\ref{l:G-construction}
and $\Phi_+$ is defined in~\eqref{e:Phi-new-def}.
The operator $D_\omega-Q$, where $D_\omega:={1\over i}\partial_\omega$, is used to establish smoothness in~$\omega$.
Our proof uses the following corollary of~\eqref{eq:rad_sink}:
\begin{equation}
\label{e:step}
\begin{gathered}
\text{if}\quad Z\in\Psi^{-1}(M),\quad
\sigma(Z)|_{\Lambda^+}=0,\quad
s<-\textstyle{1\over 2}
\quad\text{then}
\\
v\in\mathcal D'(M),\quad
\WF(v)\subset\Lambda^+,\quad
(P+Z-\omega)v\in H^{s+1}\quad\Longrightarrow\quad
v\in H^s.
\end{gathered}
\end{equation}
The addition of $Z$ does not change the validity of~\eqref{eq:rad_sink}
since it is a subprincipal term whose symbol vanishes on $\Lambda^+$,
see~\cite[Theorem~E.54]{res}.
We also use the following identity valid for any operators $A,B$ on $\mathcal D'(M)$:
\begin{equation}
\label{e:a-bit-of-algebra}
B^mA=\sum_{j=0}^m \binom{m}{j}(\ad^j_B A)B^{m-j}, \ \ \ \
\ad_B A := [ B, A ] ,\quad
{\ad^0_BA:=A}.
\end{equation}
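For the first values of $m$, \eqref{e:a-bit-of-algebra} reads
$$
BA=AB+[B,A],\qquad
B^2A=AB^2+2[B,A]B+[B,[B,A]],
$$
and the general case follows by induction on~$m$, using $B(\ad^j_BA)=(\ad^j_BA)B+\ad^{j+1}_BA$ and the Pascal identity $\binom{m}{j}+\binom{m}{j-1}=\binom{m+1}{j}$.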
The first step of the proof is to establish regularity with respect to powers of $W$:
\begin{lemm}
\label{l:lagr-W}
Assume that $v\in\mathcal D'(M)$ satisfies for some $\ell\geq 0$
and $s<-{1\over 2}$
\begin{equation}
\label{e:lagr-W}
\WF(v)\subset\Lambda^+,\quad
W^j(P-\omega)v\in H^{s+1}\quad\text{for}\quad j=0,\dots,\ell.
\end{equation}
Then $W^\ell v\in H^s$, where $ W $ is defined in \eqref{eq:defWQ}.
\end{lemm}
\begin{proof}
We argue by induction on~$\ell$. For $\ell=0$ the lemma follows immediately from~\eqref{e:step}.
We thus assume that $\ell>0$ and the lemma is true for all smaller values of $\ell$, in particular
$W^kv\in H^s$ for $0\leq k\leq \ell-1$.
Using~\eqref{e:a-bit-of-algebra} we write
\begin{equation}
\label{e:lW-1}
W^\ell (P-\omega)=(P-\omega)W^\ell+\sum_{j=1}^\ell \binom{\ell}{j}(\ad^j_W P)W^{\ell-j}.
\end{equation}
We recall from Lemma~\ref{l:G-construction}
that near $\Lambda^+$ we have $H_{G_+}p=-a_+G_+$ where $a_+$ is homogeneous
of order~$-1$ and $a_+|_{\Lambda^+}=0$. Therefore for $j\geq 1$ we have
$H_{G_+}^j p=-(H_{G_+}^{j-1}a_+)G_+$ near $\Lambda^+$.
Motivated by this we take
$$
B_j\in \Psi^{-1}(M),\quad
\sigma(B_j)=(-1)^{j-1}i^jH_{G_+}^{j-1}a_+, \quad
1 \leq j \leq \ell .
$$
Then, for $1\leq j\leq \ell$
\begin{equation}
\label{e:lW-2}
\ad^j_WP=B_jW+R_j,\quad
R_j\in \Psi^{-1}\quad\text{microlocally near }\Lambda^+.
\end{equation}
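For example, for $j=1$: since $W\in\Psi^1(M)$ and $P\in\Psi^0(M)$, we have $\ad_WP=[W,P]\in\Psi^0(M)$ with
$$
\sigma([W,P])=\tfrac1i H_{G_+}p=ia_+G_+=\sigma(B_1)\,\sigma(W)\quad\text{near }\Lambda^+,
$$
so that $R_1:=\ad_WP-B_1W\in\Psi^{-1}(M)$ microlocally near $\Lambda^+$; the cases $j\geq 2$ follow by iterating this computation.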
Combining~\eqref{e:lW-1} and~\eqref{e:lW-2} we get
\begin{equation}
\label{e:lW-3}
(P-\omega)W^\ell=W^\ell (P-\omega)-\sum_{j=1}^\ell \binom{\ell}{j} (B_j W^{\ell+1-j}+R_j W^{\ell-j}).
\end{equation}
Applying both sides of~\eqref{e:lW-3} to $v$ and using that $W^kv\in H^s$ for $0\leq k\leq\ell-1$
and that $W^\ell (P-\omega)v\in H^{s+1}$
we get
$$
(P+\ell B_1-\omega)W^\ell v\in H^{s+1}.
$$
Since $\sigma(B_1)=ia_+$ vanishes on $\Lambda^+$, we apply~\eqref{e:step}
to conclude that $W^\ell v\in H^s$ as needed.
\end{proof}
Since $(P-\omega)u^+(\omega)=f\in C^\infty(M)$, Lemma~\ref{l:lagr-W} implies that
\begin{equation}
\label{e:limo}
W^\ell u^+(\omega)\in H^{-{1\over 2}-}(M)\quad\text{for all}\quad\ell\geq 0.
\end{equation}
This can be generalized as follows:
\begin{equation}
\label{e:upgrador}
A_1\dots A_\ell u^+(\omega)\in H^{-{1\over 2}-}(M)\quad\text{for all}\quad
A_1,\dots,A_\ell\in \Psi^1(M),\
\sigma(A_j)|_{\Lambda^+}=0.
\end{equation}
To see~\eqref{e:upgrador}, we argue by induction on~$\ell$.
We have $\sigma(A_j)=\tilde a_j G_+$ near $\WF(u^+(\omega))\subset\Lambda^+$
for some $\tilde a_j$ which is homogeneous of order~0.
Taking
$\widetilde A_j\in\Psi^0(M)$ with $\sigma(\widetilde A_j)=\tilde a_j$ we have
$$
A_j=\widetilde A_j W+\widetilde R_j\quad\text{where}\quad \widetilde R_j\in \Psi^0(M)\quad\text{microlocally near}\quad \WF(u^+(\omega)).
$$
Then we can write $A_1\dots A_\ell u^+(\omega)$
as the sum of two kinds of terms (plus a $C^\infty$ remainder):
\begin{itemize}
\item the term $\widetilde A_1\dots \widetilde A_\ell W^\ell u^+(\omega)$,
which lies in $H^{-{1\over 2}-}(M)$ by~\eqref{e:limo}, and
\item terms of the form $A'_1\dots A'_m u^+(\omega)$ where
$0\leq m\leq \ell-1$, $A'_j\in \Psi^1(M)$, and $\sigma(A'_j)|_{\Lambda^+}=0$,
which lie in $H^{-{1\over 2}-}(M)$ by the inductive hypothesis.
\end{itemize}
From~\eqref{e:upgrador} we can deduce (similarly to the proof of Lemma~\ref{l:lagreg-oi} below)
that $u^+(\omega)\in I^{0+}(M;\Lambda^+_\omega)$ for each $\omega\in[-\delta,\delta]$.
To obtain the smooth dependence of the symbol of $u^+(\omega)$ on~$\omega$
we generalize~\eqref{e:limo} by additionally applying powers of $D_\omega-Q$:
\begin{lemm}
\label{l:lagr-iter}
For all integers $\ell,m\geq 0$ we have
\begin{equation}
\label{e:lagr-iter}
W^\ell (D_\omega-Q)^m u^+(\omega)\in H^{-{1\over 2}-}(M),\quad
|\omega|\leq \delta,
\end{equation}
and the corresponding norms are bounded uniformly in $\omega$.
\end{lemm}
\begin{proof}
We argue by induction on $m$, with the case $m=0$ following from~\eqref{e:limo}. Put
$$
u_j(\omega):=(D_\omega-Q)^j u^+(\omega)\in\mathcal D'(M),\quad
0\leq j\leq m.
$$
By~\eqref{e:lag-apriori} we have $\WF(u_j(\omega))\subset\Lambda^+$ for all~$j$.
Moreover, by the inductive hypothesis
\begin{equation}
\label{e:liter-ind}
W^\ell u_j(\omega)\in H^{-{1\over 2}-}(M)\quad\text{for all}\quad \ell,\
0\leq j\leq m-1.
\end{equation}
Put
$$
Y:=[P-\omega,D_\omega-Q]=-i-[P,Q]\in\Psi^0(M)
$$
and note that since $ \sigma ( Q )|_{\Lambda^+} = \Phi_+ $ and $H_p\Phi_+\equiv 1$ on $\Lambda^+$ by~\eqref{e:Phi-prop-1},
\begin{equation}
\label{e:Y-vanisher}
\sigma(Y)|_{\Lambda^+}=0.
\end{equation}
Moreover, by~\eqref{e:Phi-prop-1} we have $H_{G_+}\Phi_+\equiv 0$ on $\Lambda^+$,
thus the Hamiltonian vector field $H_{\Phi_+}$ is tangent to $\Lambda^+$. This implies that
\begin{equation}
\label{e:Y-vanisher-2}
\sigma(\ad_Q^j Y)=(-i)^j H_{\Phi_+}^j \sigma(Y)\equiv 0\quad\text{on}\quad \Lambda^+\quad\text{for all}\quad
j\geq 0.
\end{equation}
Applying~\eqref{e:a-bit-of-algebra} with $A:=P-\omega$ and $B:=D_\omega-Q$
to $u^+(\omega)$ we get
\begin{equation}
\label{e:rightor}
(P-\omega)u_m(\omega)=(D_\omega-Q)^m f+
\sum_{j=1}^m (-1)^{j-1}\binom{m}{j}(\ad_Q^{j-1}Y)u_{m-j}(\omega).
\end{equation}
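For instance, for $m=1$, \eqref{e:rightor} is the identity
$$
(P-\omega)u_1(\omega)=(D_\omega-Q)(P-\omega)u^+(\omega)+[P-\omega,D_\omega-Q]u^+(\omega)
=(D_\omega-Q)f+Yu_0(\omega);
$$
the general case follows from~\eqref{e:a-bit-of-algebra} with $A:=P-\omega$, $B:=D_\omega-Q$, using $\ad^j_BA=(-1)^j\ad_Q^{j-1}Y$ for $j\geq 1$ (here $[D_\omega,Y]=0$ since $Y$ does not depend on~$\omega$).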
Since $f\in C^\infty$ does not depend on~$\omega$, we have $(D_\omega-Q)^mf\in C^\infty$.
Next, by the inductive hypothesis~\eqref{e:liter-ind} we have
$W^\ell u_{m-j}(\omega)\in H^{-{1\over 2}-}$ for all $\ell\geq 0$ and $1\leq j\leq m$.
Arguing similarly to~\eqref{e:upgrador} and using~\eqref{e:Y-vanisher-2} we see
that $W^\ell (\ad_Q^{j-1}Y)u_{m-j}(\omega)\in H^{{1\over 2}-}$ as well
(here $\ad_Q^{j-1}Y\in \Psi^0(M)$ which explains the stronger regularity).
Thus \eqref{e:rightor} implies
$$
W^\ell (P-\omega)u_m(\omega)\in H^{{1\over 2}-}(M)\quad\text{for all}\quad\ell\geq 0.
$$
Now Lemma~\ref{l:lagr-W} gives $W^\ell u_m(\omega)\in H^{-{1\over 2}-}$ for all $\ell\geq 0$
as needed.
Finally, uniformity of~\eqref{e:lagr-iter} in $\omega$ follows immediately
from the proof since the estimates~\eqref{e:lag-apriori} and~\eqref{eq:rad_sink}
that we used are uniform in~$\omega$.
\end{proof}
We now deduce from Lemma~\ref{l:lagr-iter} that $u^+(\omega)$
has microlocal oscillatory integral representations~\eqref{e:lagros}
with symbols depending smoothly on~$\omega$. This shows the weaker version of~\eqref{e:lagreg}
with $I^0$ replaced by $I^{0+}$.
\begin{lemm}
\label{l:lagreg-oi}
Assume that $\mathcal U\subset T^*M\setminus 0$ is an open conic set such that
$\Lambda^+_\omega\cap \mathcal U$ are given in the form~\eqref{e:phase-der-1}
in some local coordinate system on~$M$:
\begin{equation}
\label{e:lagreg-oi-1}
\Lambda^+_\omega\cap\mathcal U=\{(x,\xi)\colon x=\partial_\xi F(\omega,\xi),\ \xi\in\Gamma_0\},\quad
|\omega|\leq\delta
\end{equation}
where $\xi\mapsto F(\omega,\xi)$ is homogeneous of order~1 and $\Gamma_0\subset\mathbb R^2\setminus 0$
is an open cone. Let $A\in\Psi^0(M)$, $\WF(A)\subset\mathcal U$.
Then,
\begin{equation}
\label{e:lagreg-oi-2}
Au^+(\omega,x)=\int_{\Gamma_0}e^{i(\langle x,\xi\rangle-F(\omega,\xi))} a(\omega,\xi)\,d\xi
+C^\infty_{\omega,x},\quad
|\omega|\leq\delta
\end{equation}
where $a(\omega,\xi)$ is a smooth in $\omega$ family of symbols of order $-{1\over 2}+$ in $\xi$ supported in a closed cone inside $\Gamma_0$, see~\eqref{eq:symb}.
\end{lemm}
\Remarks
1. The statement \eqref{e:lagreg-oi-2} means that
$ u^+ ( \omega ) $ can be represented as~\eqref{e:lagros},
{\em microlocally} in every closed cone contained in $ \mathcal U $.
\noindent
2. When \eqref{e:lagreg-oi-2} holds for every choice of parametrization~\eqref{e:lagreg-oi-1} we write
\[ u^+(\omega)\in C^\infty_\omega\big([-\delta,\delta];I^{0+}(M;\Lambda^+_\omega)\big) , \]
with the analogous notation in the case of $ u^- ( \omega ) $. That explains the statement of Lemma~\ref{l:lagreg}.
\begin{proof}
Since $(P-\omega)u^+(\omega)=f\in C^\infty(M)$,
it follows from Lemma~\ref{l:lagr-iter} that for all $m,\ell,r\geq 0$
$$
(D_\omega-Q)^mW^\ell (P-\omega)^r u^+(\omega)\in H^{-{1\over 2}-}(M).
$$
This can be generalized as follows:
\begin{equation}
\label{e:oi-1}
(D_\omega-Q(\omega))^mA_1(\omega)\dots A_\ell(\omega)u^+(\omega)\in H^{-{1\over 2}-}(M)
\end{equation}
for all $m$ and all $A_1(\omega),\dots,A_\ell(\omega),Q(\omega)\in\Psi^1(M)$ depending smoothly
on~$\omega\in [-\delta,\delta]$ and such that $\sigma(A_j(\omega))|_{\Lambda^+_\omega}=0$,
$\sigma(Q(\omega))|_{\Lambda^+_\omega}=\Phi_+$.
The proof is similar to the proof of~\eqref{e:upgrador}, using the decomposition
$$
\begin{gathered}
A_j(\omega)=A'_j(\omega)W+A''_j(\omega)(P-\omega)+R_j(\omega)\\
\text{where}\quad
R_j(\omega)\in\Psi^0\quad\text{microlocally near}\quad \WF(u^+(\omega))
\end{gathered}
$$
for some $A'_j(\omega),A''_j(\omega)\in \Psi^0(M)$ depending smoothly
on $\omega\in [-\delta,\delta]$.
Since $\WF(A\partial^k_\omega u^+(\omega))\subset \Lambda^+\cap p^{-1}([-\delta,\delta])\cap \mathcal U$
for all $k$, by
the Fourier inversion formula we can write $Au^+(\omega)$ in the form~\eqref{e:lagreg-oi-2}
for some $a(\omega,\xi)$ which is smooth in $\omega,\xi$ and supported in $\xi\in\Gamma_1$
where $\Gamma_1\subset\Gamma_0$ is some closed cone.
It remains to show the following growth bounds as $\xi\to \infty$: for every $\varepsilon>0$
\begin{equation}
\label{e:derb-l2}
\langle\xi\rangle^{-{1\over 2}+|\alpha|-\varepsilon} \partial^m_\omega \partial^\alpha_\xi a(\omega,\xi)
\in L^\infty_\omega([-\delta,\delta]; L^2_\xi(\mathbb R^2)).
\end{equation}
(From~\eqref{e:derb-l2} one can get $L^\infty_\xi$ bounds using Sobolev embedding
as in the proof of~\cite[Proposition~25.1.3]{H4}.)
Denote by $\mathcal I(a)$ the integral on the right-hand side of~\eqref{e:lagreg-oi-2}.
By Lemma~\ref{l:phase-der} we have
$\partial_\omega F(\omega,\xi)=-\Phi_+(\partial_\xi F(\omega,\xi),\xi)$, therefore
we may take $Q(\omega):=-\partial_\omega F(\omega, D_x)$ to be a Fourier multiplier.
The operators
$$
A_{jk}(\omega):=D_{x_k}\big((\partial_{\xi_j}F)(\omega,D_x)-x_j\big),\quad
j,k\in \{1,2\},
$$
lie in $\Psi^1$ and
satisfy $\sigma(A_{jk}(\omega))|_{\Lambda^+_\omega}=0$. We have
$$
(D_\omega-Q(\omega))\mathcal I(a)=\mathcal I(D_\omega a),\quad
A_{jk}(\omega)\mathcal I(a)=\mathcal I(\xi_k D_{\xi_j}a).
$$
Also, if $\mathcal I(a)\in H^{-{1\over 2}-}$ uniformly in $\omega$, then
$\langle\xi\rangle^{-{1\over 2}-\varepsilon}a(\omega,\xi)\in L^\infty_\omega([-\delta,\delta];L^2_\xi(\mathbb R^2))$.
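Here the last claim follows from Plancherel's theorem: since $\widehat{\mathcal I(a)}(\xi)=(2\pi)^2e^{-iF(\omega,\xi)}a(\omega,\xi)$,
$$
\|\mathcal I(a)\|^2_{H^{-{1\over2}-\varepsilon}}=(2\pi)^2\int_{\mathbb R^2}\langle\xi\rangle^{-1-2\varepsilon}|a(\omega,\xi)|^2\,d\xi,
$$
with the left-hand side bounded uniformly in $\omega\in[-\delta,\delta]$.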
Applying~\eqref{e:oi-1} with the operators $D_\omega-Q(\omega)$ and $A_{jk}(\omega)$
we get~\eqref{e:derb-l2}, finishing the proof.
\end{proof}
We finally show the stronger statement of Lemma~\ref{l:lagreg} (with $I^0$ instead of $I^{0+}$)
using the transport equation satisfied by the principal symbol:
\begin{lemm}
\label{l:lagreg-plus}
We have
$$
u^+(\omega)\in
C^\infty_\omega\big([-\delta,\delta];
I^0(M;\Lambda_\omega^+)
\big),
$$
that is, \eqref{e:lagreg-oi-2} holds with $a(\omega,\xi)$ a symbol of order $-{1\over 2}$ in~$\xi$.
\end{lemm}
\begin{proof}
In our setting $ P \in \Psi^0 ( M ) $ is self-adjoint with respect to a smooth density on~$ M$ -- see \eqref{eq:assP}.
Using that density to trivialize the half-density bundle we
obtain a self-adjoint operator
$P\in \Psi^0(M;\Omega^{1\over 2}_M)$.
Let $a^+\in S^{{1\over 2}+}(\Lambda^+_\omega;\mathcal M_{\Lambda^+_\omega} \otimes \Omega_{\Lambda^+_\omega}^{1\over 2})$ be a representative of $\sigma(u^+(\omega))$.
Using the transport equation~\eqref{e:transport-eqn}
and
$(P-\omega)u^+(\omega)=f\in C^\infty(M)$, we have
\begin{equation}
\label{e:tbone}
b^+:=La^+\in S^{-{3\over 2}+}(\Lambda^+_\omega;\mathcal M_{\Lambda^+_\omega} \otimes \Omega_{\Lambda^+_\omega}^{1\over 2}),
\end{equation}
where $L$ is a first-order differential operator on
$C^\infty(\Lambda^+_\omega;\mathcal M_{\Lambda^+_\omega} \otimes \Omega_{\Lambda^+_\omega}^{1\over 2})$
with principal part given by $H_p$ and $L^*=-L$
by~\eqref{eq:LLst}.
We trivialize $\Omega^{1\over 2}_{\Lambda^+_\omega}$ using the density $\nu^+_\omega$ constructed in Lemma~\ref{l:density}
and write
$$
a^+=\tilde a^+\sqrt{\nu^+_\omega},\quad
b^+=\tilde b^+\sqrt{\nu^+_\omega}.
$$
where $\tilde a^+\in S^{0+}(\Lambda^+_\omega;\mathcal M_{\Lambda^+_\omega})$,
$\tilde b^+\in S^{-2+}(\Lambda^+_\omega;\mathcal M_{\Lambda^+_\omega})$. By~\eqref{e:tbone} we have
\begin{equation}
\label{e:hradish}
(H_p+V)\tilde a^+=\tilde b^+
\end{equation}
where $H_p$ naturally acts on sections of the locally constant bundle
$\mathcal M_{\Lambda^+_\omega}$ and $V\in C^\infty(\Lambda^+_\omega)$
is homogeneous of order~$-1$. Moreover, since $L^*=-L$
we have
$$
\Re V=\tfrac{1}{2}(\mathcal L_{H_p}\nu^+_\omega)/\nu^+_\omega=0
$$
using Lemma~\ref{l:density}.
By~\eqref{e:hradish} for all $(x,\xi)\in\Lambda^+_\omega$ and $t\geq 0$ we have
\begin{equation}
\label{e:flourish}
\tilde a^+(x,\xi)= {\big(}e^{-t(H_p+V)}\tilde a^+ {\big)}(x,\xi)+\int_0^t {\big(}e^{-s(H_p+V)}\tilde b^+ {\big)}(x,\xi)\,ds.
\end{equation}
Since $\Re V=0$ we have $|e^{-t(H_p+V)}\tilde a^+(x,\xi)|=|\tilde a^+(e^{-tH_p}(x,\xi))|$,
and the same is true for $\tilde b^+$.
Take $(x,\xi)\in \Lambda^+_\omega$ with $|\xi|$ large.
As in~\eqref{e:lynmar} choose $t\geq 0$, $t\sim |\xi|$, such that $e^{-tH_p}(x,\xi)\in S^*M$;
we next apply~\eqref{e:flourish}. The first term on the right-hand side is bounded uniformly
as $\xi\to\infty$. The same is true for the second term since the integrand
is $\mathcal O((t-s)^{-2+})$.
It follows that $\tilde a^+(x,\xi)$ is bounded as $\xi\to\infty$.
Since $[\xi\partial_\xi,H_p+V]=-H_p-V$, we have for all $j$
\begin{equation}
\label{e:heald}
(H_p+V)(\xi\partial_\xi)^j \tilde a^+=(\xi\partial_\xi+1)^j \tilde b^+\in S^{-2+}(\Lambda^+_\omega;\mathcal M_{\Lambda^+_\omega}).
\end{equation}
It follows that $(H_p+V)^\ell (\xi\partial_\xi)^j\tilde a^+=\mathcal O(\langle\xi\rangle^{-\ell})$
for all $j,\ell$: the case $\ell=0$ follows from~\eqref{e:flourish} applied to~\eqref{e:heald}
and the case $\ell\geq 1$ follows directly from~\eqref{e:heald}.
Since $\xi\partial_\xi$ and $H_p$ form a frame on $\Lambda^+_\omega$,
we have $\tilde a^+\in S^{0}(\Lambda^+_\omega;\mathcal M_{\Lambda^+_\omega})$
which implies that $u^+(\omega)\in I^0(M;\Lambda^+_\omega)$.
\end{proof}
\Remark It is instructive to consider the transport equation
\eqref{e:hradish} in the microlocal model used in \cite{SC}: near
a model sink $ \Lambda^+_\omega = \{ ( -\omega , x_2 ; \xi_1 , 0 ) : \xi_1 > 0 \}
\subset T^* ( {\mathbb R}_{x_1} \times \mathbb S^1_{x_2} ) \setminus 0 $ (see the global examples in
\S \ref{exa}) we consider $ p ( x, \xi ) := \xi_1^{-1} \xi_2 - x_1 $.
We are then solving $( p ( x , D )-\omega) u^+ ( \omega ) \equiv 0 $
microlocally near $ \Lambda^+_\omega $ (see \cite[Definition E.29]{res}) and
for that we expand the symbol of $ u^+_\omega $ into Fourier modes in $ x_2 $,
\[
u^+_\omega ( x ) = \frac{1}{2\pi} \int_{\mathbb R}
\sum_{ n \in {\mathbb Z} } \hat a_\omega^+ ( n , \xi_1 ) e^{ i (x_1+\omega) \xi_1 }e^{inx_2} \,d \xi_1, \ \ a_\omega^+ = \sum_{n\in {\mathbb Z}} \hat a_\omega^+ ( n
, \xi_1 )e^{inx_2} | d \xi_1 dx_2 |^{\frac12} .
\]
The Fourier coefficients should satisfy
$ ( \xi_1^{-1} n + D_{\xi_1} ) \hat a_\omega^+ ( n , \xi_1 ) = 0 $ for $ \xi_1 > 1 $ and $ \hat a_\omega^+ ( n, \xi_1 ) = 0 $ for $ \xi_1 < -1 $.
Hence the symbol is given by
\[
a_\omega^+=\tilde a^+ (\omega) |d x_2 d\xi_1|^{1\over 2},\quad
\tilde a^+ ( x_2, \xi_1 ) = \sum_{ n \in {\mathbb Z} } \xi_1^{ -i n }
a_n(\omega) e^{i n x_2} , \quad
a_n (\omega) = \mathcal O ( \langle n\rangle^{-\infty} ). \]
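Indeed, the general solution of the equation $(\xi_1^{-1}n+D_{\xi_1})\hat a_\omega^+(n,\xi_1)=0$ for $\xi_1>0$ is a constant multiple of $\xi_1^{-in}$:
$$
(\xi_1^{-1}n+D_{\xi_1})\,\xi_1^{-in}=n\xi_1^{-in-1}+\tfrac1i(-in)\xi_1^{-in-1}=0,
$$
and the rapid decay $a_n(\omega)=\mathcal O(\langle n\rangle^{-\infty})$ corresponds to smoothness of the symbol in~$x_2$.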
The symbol is thus very ``non-classical'' in the sense that it does not have an expansion in powers of $ \xi_1 $. In the general case an analogous conclusion follows from the structure of~\eqref{e:hradish}.
\section{An asymptotic result}
\label{asr}
We now place ourselves in the setting of Lemma \ref{l:lagreg} and
assume that $ u ( \omega ) \in C^\infty_\omega ( [ - \delta, \delta ] ;
I^{ 0} ( M ; \Lambda_\omega )) $ in the sense described in Lemma~\ref{l:lagreg-plus},
where $\Lambda_\omega=\Lambda^+_\omega$ or $\Lambda_\omega=\Lambda^-_\omega$.
We are interested in the asymptotic behaviour as $t\to\infty$ of
\begin{equation}
\label{eq:Lagrancon1}
I ( t ) :=
\int_0^t \int_{\mathbb R} e^{ - i s \omega } \varphi ( \omega )u ( \omega ) \, d \omega ds \in\mathcal D'(M), \ \ \varphi \in \CIc ( (-\delta,\delta) ).
\end{equation}
We have the following local asymptotic result.
\begin{lemm}
\label{lem}
Suppose that $ u ( \omega ) \in \mathcal D' ( {\mathbb R}^2 ) $ is given by
\begin{equation}
\label{eq:defxy}
\begin{gathered} u ( \omega ) = u ( \omega, x ) = \frac{1}{ (2 \pi)^2 } \int_{\Gamma_0}
e^{ i ( \langle x , \xi \rangle - F ( \omega , \xi ) ) }
a (\omega , \xi ) \,d \xi ,
\end{gathered}
\end{equation}
where $\Gamma_0$, $F $, and $ a $ satisfy the general conditions in \eqref{e:lagreg-oi-2}.
Suppose also that
\begin{equation}
\label{eq:assFo}
\varepsilon \partial_\omega F ( \omega, \xi ) < 0 , \quad
\varepsilon = \pm, \quad
\xi \in \Gamma_0,\quad
|\omega|\leq\delta .
\end{equation}
Then as $t\to\infty$,
\begin{equation}
\label{eq:Lagrancon2}
\begin{gathered}
I ( t ) = u_\infty + b ( t ) + v ( t ) ,
\ \ \| {b} ( t ) \|_{ H^{ \frac 12 - } } \leq C, \ \ v ( t ) \to 0 \text{ in $ H^{ - \frac 12 - } ( {\mathbb R}^2 )$}, \\
u_\infty = \left\{ \begin{array}{ll} 2 \pi \varphi ( 0 ) u( 0 ) ,
& \varepsilon = +; \\
\ \ \ \ \ 0, & \varepsilon = - .
\end{array} \right.
\end{gathered}
\end{equation}
\end{lemm}
\begin{proof}
We start by remarking
that
we can assume that the amplitude $ a $ is
supported away from $ \xi = 0 $.
The remaining contribution can be absorbed into $ b ( t ) $: if $ a = a (\omega, \xi ) = 0 $ for $ |\xi| > C $ then
\begin{equation*}
\begin{split}
\widehat w ( t, \xi ) & :=\int_0^t \int_{\mathbb R} e^{ - i s \omega } e^{ - i F ( \omega, \xi ) }
a ( \omega, \xi ) \varphi ( \omega ) d \omega ds \\
& =
\int_0^t \int_{\mathbb R} \left[( 1 + s^2)^{-1} ( 1 + D_\omega^2 ) e^{ - i s \omega }\right]
e^{ -i F ( \omega, \xi ) }
a ( \omega, \xi ) \varphi ( \omega ) d \omega ds ,
\end{split} \end{equation*}
which by integration by parts in $ \omega $ is bounded in $ t $ and
compactly supported in $ \xi $.
{Since $u(\omega,x)$ has nice structure on the Fourier transform side it is natural to} consider the Fourier transform of $ x \mapsto I ( t ) ( x ) $, $J ( t ,\xi ) := \mathcal F_{ x \to \xi } { I ( t ) }$,
where
\begin{equation}
\label{eq:defJt}
J ( t,\xi ) = \frac 1 h \int_0^{ h t } \int_{\mathbb R}
e^{ - \frac i h ( F ( \omega , \eta ) + r \omega ) } a ( \omega, \eta /h ) \varphi ( \omega ) \, d \omega dr , \quad \xi = \frac \eta h , \ \ \eta \in \mathbb S^{1} .
\end{equation}
{From the assumptions on $a$ we have $J(t,\xi)=0$ unless $\eta\in\Gamma_1$, where
$\Gamma_1\subset \Gamma_0$ is a closed cone.}
The phase in $ J ( t ) $ is stationary when
\begin{equation}
\label{eq:cretin}
\omega = 0, \ \ r = r ( \eta ) := - \partial_\omega F ( 0, \eta ) .
\end{equation}
From \eqref{eq:assFo}, $ \partial_\omega F ( \omega , \eta ) \neq 0 $ and this means that for some $ c, \gamma > 0 $,
\begin{equation}
\label{eq:lowFom}
| r + \partial_\omega F( \omega, \eta ) | > c \langle r \rangle , \ \
\eta \in \mathbb S^{1}\cap \Gamma_1 , \ \ | \omega | \leq \delta,
\ \ |r| \notin ( \gamma, 1/\gamma ) .
\end{equation}
Let $ \chi \in \CIc ( ( \gamma/2 , 2/\gamma) ; [ 0, 1 ] ) $
be equal to $ 1 $ on $ ( \gamma , 1/\gamma ) $.
Using integration by parts based on
\[ {h^N}\left( - ( r+ \partial_\omega F ( \omega , \eta ) )^{-1} D_\omega \right)^N
e^{ - \frac i h ( F ( \omega , \eta )+r\omega) } = e^{ - \frac i h ( F( \omega , \eta ) +r\omega) } , \]
and \eqref{eq:lowFom} we see that, by taking $ N \geq 2 $,
\[
\begin{split} & \frac1h \int_0^{h t } \int_{\mathbb R} ( 1 - \chi( r ) )
e^{ - \frac i h ( F ( \omega , \eta ) + r \omega ) } a ( \omega , \eta /h ) \varphi ( \omega ) \, d \omega dr
= \mathcal O ( h^{N- 1 } ) ,
\end{split}
\]
uniformly in $t\geq 0$. Hence, for all $N$
\[
\begin{gathered} J ( t ) = \widetilde J ( t ) +\mathcal F_{ x \to \xi } u_0 ( t ), \ \
\sup_{t\geq 0}\| u_0 ( t ) \|_{H^N} \leq C_N, \\
\widetilde J ( t,\xi ) := \frac 1 h \int_0^{ h t } \int_{\mathbb R} \chi ( r )
e^{ - \frac i h ( F ( \omega , \eta ) + r \omega ) } a ( \omega , \eta /h ) \varphi ( \omega ) \,d \omega dr , \quad
\xi = \frac \eta h , \ \eta \in \mathbb S^{1} . \end{gathered}
\]
When $ ht \geq 2/\gamma $, we have $ \widetilde J ( t,\xi ) =
\widetilde J ( \infty,\xi ) $ due to the support property of $\chi$.
In particular this implies that $\widetilde J(t,\xi)\to \widetilde J(\infty,\xi)$ as $t\to\infty$ pointwise in $\xi$.
We apply the standard
method of stationary phase to $\widetilde J(\infty)$ noting that
\[
- \partial^2_{ \omega, r } ( F ( \omega, \eta ) + r \omega ) =
\begin{bmatrix} - \partial_\omega^2 F & - 1 \\
-1 & 0 \end{bmatrix} , \ \ \ \sgn \partial^2_{ \omega, r } ( F ( \omega, \eta ) + r \omega ) = 0.
\]
Therefore
\begin{equation}
\label{e:asyy}
\widetilde J ( \infty ,\xi ) = \left\{ \begin{array}{ll}
2 \pi a ( 0 , \xi) \varphi ( 0 ) e^{- i F ( 0, \xi ) } + \mathcal O ( \langle \xi \rangle^{- \frac 32 + } ) ,
& \partial_\omega F ( 0 , \xi) < 0 , \\
\ \ \ \ \ \ \ \ \ \ \ \ \mathcal O ( \langle \xi\rangle^{-\infty } ), & \partial_\omega F ( 0 , \xi ) > 0.
\end{array} \right.
\end{equation}
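We record the bookkeeping behind \eqref{e:asyy}. At the critical point \eqref{eq:cretin} we have $r(\eta)=-\partial_\omega F(0,\eta)$, so the critical point satisfies $r>0$ precisely when $\partial_\omega F(0,\eta)<0$; in that case \eqref{eq:lowFom} forces $r(\eta)\in(\gamma,1/\gamma)$, where $\chi=1$. Stationary phase in the two variables $(\omega,r)$ then produces the factor
\[
(2\pi h)^{\frac22}\,\big|\det\partial^2_{\omega,r}(F+r\omega)\big|^{-\frac12}\,
e^{\frac{i\pi}4\sgn\partial^2_{\omega,r}(F+r\omega)}=2\pi h,
\]
which cancels the prefactor $h^{-1}$ in the definition of $\widetilde J$, while the value of the phase at the critical point is $-F(0,\eta)/h=-F(0,\xi)$ by degree one homogeneity of $F$ in $\xi$. When $\partial_\omega F(0,\eta)>0$ the critical point has $r(\eta)<0$, outside the support of $\chi$, and repeated integration by parts gives the $\mathcal O(\langle\xi\rangle^{-\infty})$ bound.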
Hence to obtain \eqref{eq:Lagrancon2} all we need to show is that $ \widetilde J ( t,\xi ) = \mathcal O (
\langle \xi\rangle ^{- \frac12 + } ) $ uniformly in $ t $ as then by dominated convergence,
\[
\begin{split} \langle \xi\rangle ^{ - \frac 12 - } \widetilde J (t )
& \xrightarrow{ L^2 ( {\mathbb R}^2, d\xi ) } \langle \xi \rangle^{ -\frac 12 - } \widetilde J ( \infty ) , \ \ \ t \to +\infty , \end{split}
\]
that is,
\[
\widetilde I(t):=\mathcal F^{-1} _{\xi \to x} \widetilde J ( t)
\xrightarrow{H^{ -\frac12- } ( {\mathbb R}^2 ) }
\mathcal F^{-1} _{\xi \to x} \widetilde J ( \infty ) , \ \ \ t \to + \infty .
\]
{Here the $\mathcal O(\langle \xi\rangle^{-\frac 32+})$ remainder in~\eqref{e:asyy}
can be put into $b(t)$ in~\eqref{eq:Lagrancon2}.}
The uniform boundedness of $\widetilde J(t,\xi)$ follows from
the following simple lemma:
\begin{lemm}
\label{l:trivial}
Suppose that $ A= A ( s , \omega ) \in \CIc ( {\mathbb R}^2 ) $ and
$ G \in {C^\infty} ( {\mathbb R}; {\mathbb R} ) $. Then as $h\to 0$
\begin{equation}
\label{eq:trivial}
L(h):= \int_0^\infty \int_{\mathbb R} e^{ \frac i h ( G ( \omega ) + s \omega ) }
A ( s, \omega ) \, d\omega ds = \mathcal O ( h \log (1/h)) .
\end{equation}
\end{lemm}
\begin{proof}
We define
\[
B ( \sigma, \omega ) := \int_0^\infty e^{ i s \sigma } A ( s , \omega ) \,
ds , \ \ B ( \sigma , \omega ) = i \sigma^{-1}{ A ( 0 , \omega ) } +
\mathcal O ( \sigma^{-2 }) , \ \ |\sigma| \to \infty.
\]
Hence,
\[
\begin{split} L ( h ) & = \int_{\mathbb R} e^{ \frac i h G ( \omega ) } B \left( \frac{ \omega} h ,
\omega \right) d \omega = h \int_{\mathbb R} e^{ \frac i h G ( h w ) }
B ( w , h w ) \,d w \\
& = \mathcal O ( h ) \int_{|w| \leq C/h} \,\frac{ dw}{ 1 + |w|} =
\mathcal O ( h \log (1/h ) ), \end{split}
\]
proving \eqref{eq:trivial}. (In fact we see that the estimate is sharp: if we take $ G \equiv 0 $ and an $ A $ which is {\em odd} in $ \omega $,
one does have logarithmic growth.)
\end{proof}
To use the lemma to show the bound $ \widetilde J ( t,\xi ) = \mathcal O (
\langle \xi\rangle ^{- \frac12 + } ) $, uniformly in $ t\geq 0 $,
it suffices to consider the case $ht\leq 2/\gamma$,
since otherwise $\widetilde J(t,\xi)=\widetilde J(\infty,\xi)$.
As before, we write $\xi=\eta/h$ where $\eta\in\mathbb S^1$. Then
\[
\widetilde J (t,\xi) = \frac1h \int_0^\infty \int_{\mathbb R}
e^{ \frac i h ( s\omega- ht\omega - F ( \omega, \eta ) ) }
\chi ( ht - s ) a ( \omega , \eta /h ) \varphi ( \omega )\, d\omega d s.
\]
We now apply Lemma~\ref{l:trivial} with $ A ( s, \omega ) := h^{\alpha - \frac12}
\chi ( ht - s ) a ( \omega , \eta /h ) \varphi ( \omega )$, $ \alpha >0 $ (and
arbitrary)
and
$ G ( \omega ) = -ht\omega-F ( \omega , \eta )$ to obtain
$ \widetilde J ( t) = \mathcal O ( h^{\frac12 - \alpha } \log(1/h) ) =
\mathcal O ( \langle \xi \rangle^{-\frac12 + 2 \alpha } ) $, which concludes the proof.
\end{proof}
\section{Proof of the Main Theorem}
In the approach of \cite{SC} the decomposition of $ u ( t ) $ is obtained using
\eqref{eq:uoft} and proving that for $ \varphi $ supported in a neighbourhood of
$ 0 $,
\begin{equation}
\label{e:scarab}
P^{-1} ( e^{ - i tP } - 1 ) \varphi ( P ) f
\xrightarrow{ H^{-\frac12 - } ( M) } -( P - i 0 ) ^{-1}\varphi( P ) f , \quad
t \longrightarrow \infty ,
\end{equation}
which makes formal sense if we think in terms of distributions.
The rigorous argument requires finer aspects of Mourre theory developed by
Jensen--Mourre--Perry~\cite{jemp}.
Here we take a more geometric approach and use Lemmas~\ref{l:lap} and~\ref{l:lagreg}
to study the behaviour of $ u ( t ) $. Fix $\delta>0$ small enough so that the results of~\S\ref{s:sink-source},
as well as~\eqref{e:no-spectrum}, hold.
Fix $\varphi\in C^\infty_{\rm{c}}((-\delta,\delta))$ such that $\varphi=1$ near 0.
By~\eqref{eq:uoft}, the spectral theorem,
and Stone's formula (see for instance \cite[Theorem B.8]{res}) we have
\begin{equation}
\label{eq:uoft1}
\begin{aligned}
u( t) &=
-i\int_0^t e^{-isP}\varphi(P)f\,ds
+P^{-1}(e^{-itP}-1)(1-\varphi(P))f
\\&=
\frac{1}{ 2 \pi }\int_0^t \int_{\mathbb R} e^{ - i s \omega } \varphi ( \omega )
(u^-(\omega)-u^+(\omega))\, d \omega ds
+ b_1 ( t) ,
\end{aligned}
\end{equation}
where $ \| b_1 ( t )\|_{L^2 } \leq C $ for all $t\geq 0$ and
$u^\pm(\omega):=(P-\omega\mp i0)^{-1}f\in H^{-1/2-}(M)$ are defined in
Lemma~\ref{l:lap}.
By Lemma~\ref{l:lagreg} we have
$u^\pm ( \omega ) \in C^\infty_{\omega} ( [ - \delta , \delta ] ; I^{0} ( M; \Lambda^\pm_\omega ))$.
The main result \eqref{eq:SC2}, \eqref{eq:DZ1} then follows from Lemma~\ref{lem}.
Here we use a pseudodifferential partition
of unity to write $u^\pm(\omega)$ as a finite
sum of oscillatory integrals~\eqref{eq:defxy} and the geometric condition~\eqref{eq:assFo} follows from
Lemmas~\ref{l:phase-der} and~\ref{l:Phi-sign}.
We obtain $u_\infty=-u^+(0)$ which is consistent with~\eqref{e:scarab}.
\medskip\noindent\textbf{Acknowledgements.}
This note is a result of a ``groupe de travail'' on \cite{SC} conducted in Berkeley in February and March of 2018. We would like to thank the participants of that seminar and in particular Thibault de Poyferr\'e for explaining the fluid mechanical motivation to us.
Thanks go also to Andr\'as Vasy for a helpful discussion of results of
\cite{hb}. We are also grateful to Micha\l{} Wrochna for pointing out to us
a mistake in Lemma~\ref{l:sink-established}~-- see
the remark following
that lemma~-- {and to the anonymous referee for many suggestions to improve the manuscript.}
This research was conducted during the period SD served as
a Clay Research Fellow and MZ was supported
by the National Science Foundation grant DMS-1500852 and by a Simons Fellowship.
\section{Setup}
Notation: $[k]$ stands for the set $\{1, ..., k\}$, $\mathcal{P}(S)$ is the power-set of $S$, and $\Delta_S$ is the diagonal of $S\times S$. If $S$ is a set and $n$ is a natural number, then $\binom{S}{n}$ is the set of $n$-element subsets of $S$. Also, $P_n$ is a directed path of length $n$, that is, the digraph $([n+1],E)$ with $E = \{(i,i+1) \mid 1 \le i \le n\}$.
\begin{defn} A \emph{directed hypergraph} of uniformity $k$ is a pair $(V,E)$ with $E \subseteq V^k$. The \emph{chromatic number} of a directed hypergraph is the chromatic number of the associated undirected hypergraph, that is, the least number $\chi$ such that there exists a function $f:V\rightarrow [\chi]$ such that for each edge $e \in E$, not all of $f(e_1), ..., f(e_k)$ are equal. We'll assume that no edge of $E$ has any two coordinates equal to avoid annoying technical details which end up not mattering.
\end{defn}
\begin{defn} A $k$-\emph{machine} $\mathcal{M}$ is a tuple $\mathcal{M} = (S,f,\mathcal{B})$ where $S$ is a finite set of \emph{states}, $f : S\times [k]^2 \rightarrow \mathcal{P}(S)$ is a \emph{transition function}, and $\mathcal{B} \subseteq S\times S$ is the set of \emph{bad transitions}. We say that the $k$-machine $\mathcal{M}$ is \emph{deterministic} if the value of $f(s,(i,j))$ always has size at most one, and is empty for $i = j$. If $\mathcal{M}$ is deterministic, we abuse notation and think of $f$ as a function $f : S\times ([k]^2\setminus \Delta_{[k]}) \rightarrow S \cup \{\emptyset\}$, and think of $\emptyset$ as a special ``accepting'' state.
\end{defn}
\begin{defn} A \emph{cycle} of a $k$-uniform directed hypergraph $\mathcal{H} = (V,E)$ is a sequence $c = (v_0, e_1, v_1, ..., e_n, v_n)$ with $v_n = v_0$, and $v_{i-1}, v_i \in \{(e_i)_1, ..., (e_i)_k\}$ for each $i$. We define $|c| = n$, and we define the \emph{trace} of $c$ by $\tr(c,i) = (a,b)$ where $(e_i)_a = v_{i-1}, (e_i)_b = v_i$. We say that the cycle $c$ of $\mathcal{H}$ is $\mathcal{M}$-\emph{bad} if there is a sequence of states $s_0, ..., s_n \in S$ such that for each $i$ we have
\[
s_i \in f(s_{i-1}, \tr(c,i)),
\]
and such that
\[
(s_0,s_n) \in \mathcal{B}.
\]
We say that $\mathcal{H}$ is $\mathcal{M}$-\emph{good} if $\mathcal{H}$ has no $\mathcal{M}$-bad cycles.
\end{defn}
\begin{prob}\label{main-prob} Given a $k$-machine $\mathcal{M}$, determine whether there exist $\mathcal{M}$-good $k$-uniform directed hypergraphs $\mathcal{H}$ of arbitrarily large chromatic number.
\end{prob}
\begin{ex} Let $k = 2$, and consider the deterministic $2$-machine $\mathcal{M} = (\{s,t,u,v\},f,\{s\}\times\{t,v\})$, with $f$ given by $f(s,(1,2)) = t, f(t,(1,2)) = t, f(t,(2,1)) = u, f(u,(1,2)) = v$, and all other values of $f$ are $\emptyset$. Then a directed graph $\mathcal{G}$ is $\mathcal{M}$-good if and only if $\mathcal{G}$ is the Hasse diagram of a poset.
It's well-known that Hasse diagrams can have arbitrarily large chromatic number (\cite{blanche}, \cite{coloring-lattices}, \cite{ramsey-lattices}, \cite{hasse-eyebrows}). An explicit poset whose Hasse diagram has chromatic number $n$ is the poset $(\binom{[2^n]}{2},\preceq)$ with $\{a,b\} \preceq \{c,d\}$ when $\max(a,b) \le \min(c,d)$ \cite{hasse-explicit}.
\end{ex}
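The $n = 2$ instance of this explicit example is small enough to examine by brute force. The following sketch (our own code, not from the sources cited above) builds the Hasse diagram of $(\binom{[4]}{2},\preceq)$ and confirms that its chromatic number is $2$:

```python
from itertools import combinations, product

def hasse_edges(n):
    """Hasse diagram (cover relations) of the poset on 2-subsets of
    [2^n] with {a,b} <= {c,d} iff max(a,b) <= min(c,d)."""
    verts = list(combinations(range(1, 2 ** n + 1), 2))
    less = lambda x, y: x != y and max(x) <= min(y)
    edges = [(x, y) for x in verts for y in verts
             if less(x, y) and not any(less(x, z) and less(z, y) for z in verts)]
    return verts, edges

def chromatic_number(verts, edges):
    """Brute-force chromatic number of the underlying undirected graph."""
    for chi in range(1, len(verts) + 1):
        for colors in product(range(chi), repeat=len(verts)):
            c = dict(zip(verts, colors))
            if all(c[x] != c[y] for x, y in edges):
                return chi

verts, edges = hasse_edges(2)
assert chromatic_number(verts, edges) == 2
```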
\section{Warm up: cycling $k$-machines}
\begin{defn} A \emph{cycling} $k$-\emph{machine} $\mathcal{M}$ is a $k$-machine $(S,f,\mathcal{B})$ such that $\mathcal{B} = \Delta_S$.
\end{defn}
In the context of cycling $k$-machines, we only consider a cycle $c$ to be $\mathcal{M}$-bad if it has $|c| > 0$. It's easy to modify a cycling $k$-machine such that it handles cycles of length $0$ correctly (while at most doubling the number of states), but this makes the definition clunky.
\begin{defn} If $\mathcal{M} = (S,f,\Delta_S)$ is a cycling $k$-machine, we say that $\prec$ is an $\mathcal{M}$-\emph{compatible order} on $S\times [k]$ if it is a total order such that the induced orderings on $S\times \{i\}$ agree for all $1 \le i \le k$, and for each $(s,i), (t,j) \in S\times [k]$ such that $t \in f(s,(i,j))$, we have $(s,i) \prec (t,j)$.
\end{defn}
\begin{thm}\label{cycling} If $\mathcal{M} = (S,f,\Delta_S)$ is a cycling $k$-machine, then there exist $\mathcal{M}$-good $k$-uniform directed hypergraphs $\mathcal{H}$ of arbitrarily large chromatic number if and only if there is an $\mathcal{M}$-compatible order $\prec$ on $S \times [k]$. Furthermore, if the chromatic number is bounded then it is bounded by $|S|!$.
\end{thm}
\begin{proof} First we show the necessity. Let $\mathcal{H} = (V,E)$ be an arbitrary $k$-uniform directed hypergraph. We define an auxiliary digraph $\mathcal{G}$ with vertex set $V\times S$ and edge set given by
\[
\{((a,s),(b,t)) \mid \exists e \in \mathcal{H},\ i, j \in [k]\text{ s.t. }e_i = a, e_j = b, t \in f(s,(i,j))\}.
\]
Any directed cycle in $\mathcal{G}$ corresponds to an $\mathcal{M}$-bad cycle in $\mathcal{H}$, and vice-versa. Therefore if $\mathcal{H}$ is $\mathcal{M}$-good, then $\mathcal{G}$ is a directed acyclic graph, so there exists a total order $\prec$ on $\mathcal{G}$ such that if $((a,s),(b,t))$ is an edge of $\mathcal{G}$ then $(a,s) \prec (b,t)$. Color the vertex $v \in V$ by the induced ordering $\prec\mid_{\{v\}\times S}$. If the chromatic number of $\mathcal{H}$ is greater than $|S|!$, then there must exist an edge $e \in E$ such that $e_1, ..., e_k$ all have the same induced orderings. We now define the ordering $\prec$ on $S\times [k]$ by $(s,i) \prec (t,j)$ if and only if $(e_i,s) \prec (e_j,t)$, and note that this is an $\mathcal{M}$-compatible order on $S\times [k]$.
Now we show the sufficiency. Fix an $\mathcal{M}$-compatible order $\prec$ on $S\times [k]$. We define $\mathcal{H} = (V,E)$ by taking $V = \binom{\mathbb{N}}{|S|}$, and defining $E$ by
\[
E = \{(\{a_{11}, ..., a_{1|S|}\}, ..., \{a_{k1}, ..., a_{k|S|}\}) \mid a_{is} < a_{jt} \iff (s,i) \prec (t,j)\}.
\]
It's easy to show that this $\mathcal{H}$ is $\mathcal{M}$-good (the auxiliary digraph $\mathcal{G}$ has vertices corresponding to elements of vertices of $\mathcal{H}$, with the correspondence determined by the restriction of $\prec$ to any $S\times \{i\}$, and every edge of $\mathcal{G}$ is increasing under the total ordering from $\mathbb{N}$). Finally, the chromatic number of $\mathcal{H}$ is infinite by Ramsey's theorem for hypergraphs (if we color the $k$-subsets of $\mathbb{N}$ by finitely many colors, then there is some subset $C$ of $\mathbb{N}$ of size $k|S|$ such that $\binom{C}{k}$ is monochromatic, and there is an edge $e \in E$ with $\cup_{i=1}^k e_i = C$).
\end{proof}
\begin{ex}\label{bounded-counter} Consider the family of cycling $2$-machines $\mathcal{M}_n$, with $\mathcal{M}_n = (\{0, ..., n\},f,\Delta_{\{0, ..., n\}})$ and $f(i,(1,2)) = \min(i+1,n)$ and $f(i,(2,1)) = i-2$ if $i \ge 2$, $f(0,(2,1)) = f(1,(2,1)) = \emptyset$. For $n \ge 2$, any $\mathcal{M}_n$-good digraph must be the Hasse diagram of a poset (but the converse is not true). We'll use Theorem \ref{cycling} to show that for each $n$, there is an $\mathcal{M}_n$-good digraph of infinite chromatic number. We just have to construct an $\mathcal{M}_n$-compatible order $\prec$ on $\{0, ..., n\} \times [2]$. We take the restriction of $\prec$ to $\{0, ..., n\}\times\{1\}$ to be the reverse of the usual ordering (and the same for $\{0, ..., n\} \times \{2\}$), and take $(i,1) \prec (j,2)$ if and only if $i > j-2$.
\end{ex}
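The order in this example can also be verified mechanically. In the sketch below (our own encoding), the numeric key $(i,1)\mapsto -i$, $(i,2)\mapsto -i+\tfrac32$ realizes $\prec$: it reverses the usual order on each copy of $\{0, ..., n\}$, and $(i,1)$ precedes $(j,2)$ exactly when $i > j-2$. We then check the compatibility condition $(s,i)\prec (t,j)$ for every transition $t \in f(s,(i,j))$ of $\mathcal{M}_n$:

```python
def check_Mn_compatible(n):
    """Verify the M_n-compatible order on {0,...,n} x {1,2} from the example."""
    # numeric key realizing the order: coordinate 1 at -s, coordinate 2 at -s + 3/2
    key = lambda s, coord: -s + (1.5 if coord == 2 else 0.0)

    def f(s, i, j):
        # transition function of the cycling 2-machine M_n
        if (i, j) == (1, 2):
            return min(s + 1, n)
        if (i, j) == (2, 1) and s >= 2:
            return s - 2
        return None  # the empty value

    return all(key(s, i) < key(f(s, i, j), j)
               for s in range(n + 1)
               for i, j in [(1, 2), (2, 1)]
               if f(s, i, j) is not None)

assert all(check_Mn_compatible(n) for n in range(2, 10))
```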
The following type of digraph, parametrized by a real number $\alpha > 1$, acts in the case $\alpha = 2$ like a limiting case of Example \ref{bounded-counter}.
\begin{defn} Let $\alpha > 1$. We say that a digraph is $\alpha$-\emph{balanced} if every cycle which has $k$ forward edges has strictly less than $\alpha k$ backwards edges.
\end{defn}
\begin{prop} A digraph is $2$-balanced if and only if it is $\mathcal{M}_n$-good for every $n$, with $\mathcal{M}_n$ defined as in Example \ref{bounded-counter}.
\end{prop}
\begin{thm} Any $\alpha$-balanced digraph $\mathcal{G} = (V,E)$ has chromatic number at most $\lceil \alpha\rceil +1$.
\end{thm}
\begin{proof} Assume WLOG that $\alpha$ is a whole number (an $\alpha$-balanced digraph is also $\lceil\alpha\rceil$-balanced) and that $\mathcal{G}$ is connected and finite. Pick some vertex $v_0 \in V$, and for every walk $w$ from $v_0$ to a vertex $v\in V$, we let $\ell(w)$ be the number of forward steps in $w$ minus $\alpha$ times the number of backward steps in $w$. For $v \in V$, we let $\ell(v)$ be the supremum of $\ell(w)$ over all walks $w$ from $v_0$ to $v$. To see that $\ell(v)$ is finite, note that for any walk $w$ containing a cycle, we can delete that cycle to get a walk $w'$ with the same endpoints such that $\ell(w') > \ell(w)$ (by the definition of an $\alpha$-balanced digraph, applied to the reverse traversal of the deleted cycle), and that only finitely many of the walks in $\mathcal{G}$ contain no cycles. Now for any edge $(a,b) \in E$, we have $\ell(b) \ge \ell(a) + 1$, and $\ell(a) \ge \ell(b) - \alpha$, by extending a walk to $a$ or $b$ by one forward or backward step, respectively, so
\[
\ell(a) + 1 \le \ell(b) \le \ell(a) + \alpha.
\]
In particular, we have
\[
(a,b) \in E \;\; \implies \;\; \ell(a) \not\equiv \ell(b) \pmod{\alpha+1},
\]
so coloring the vertices of $\mathcal{G}$ according to the remainder of $\ell(v) \pmod{\alpha+1}$ finishes the proof.
\end{proof}
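The potential $\ell$ in the proof above can be computed by a Bellman--Ford-style longest-walk pass on an auxiliary graph with arcs $a \to b$ of weight $1$ and $b \to a$ of weight $-\alpha$ for each edge $(a,b)$; the $\alpha$-balance condition is exactly what rules out positive-weight cycles. A minimal sketch with a toy digraph (our own code; whole-number $\alpha$, connected digraph):

```python
def balanced_coloring(vertices, edges, alpha):
    """Color an alpha-balanced connected digraph with alpha+1 colors,
    following the potential-function proof."""
    # arcs of the walk graph: a forward step gains 1, a backward step loses alpha
    arcs = [(a, b, 1) for a, b in edges] + [(b, a, -alpha) for a, b in edges]
    v0 = vertices[0]
    ell = {v: float("-inf") for v in vertices}
    ell[v0] = 0
    # Bellman-Ford for LONGEST walks; converges because alpha-balance
    # rules out positive-weight cycles
    for _ in range(len(vertices)):
        for a, b, w in arcs:
            if ell[a] + w > ell[b]:
                ell[b] = ell[a] + w
    for a, b, w in arcs:  # sanity check: no positive-weight cycle
        assert ell[a] + w <= ell[b]
    return {v: ell[v] % (alpha + 1) for v in vertices}

# toy example: the directed path 0 -> 1 -> 2 -> 3 is 2-balanced
verts, edges = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]
colors = balanced_coloring(verts, edges, 2)
assert all(colors[a] != colors[b] for a, b in edges)
```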
\subsection{Hardness of checking for a compatible ordering}
We would like to know how difficult it is to test whether a cycling $k$-machine has a compatible order. Our first result shows that if we allow the uniformity $k$ to vary, then this is NP-complete.
\begin{thm} Checking whether a given deterministic cycling $k$-machine $\mathcal{M}$ has a compatible order is NP-complete if $k$ is allowed to vary.
\end{thm}
\begin{proof} We'll reduce from 3-SAT. Suppose we have an instance with variables $V$ and constraints $C$, take $S$ to be the set of literals, take $k = 3|C|$, and number the constraints as $C_1, C_2, ...$. For each constraint $C_i$, we will introduce just three transitions for $\mathcal{M}$, and we will have all other transitions lead to $\emptyset$. Suppose that $C_i$ is the $\vee$ of the literals $a, b, c$, with negations $\overline{a}, \overline{b}, \overline{c}$. Then the transitions corresponding to $C_i$ are as follows:
\begin{align*}
f(a,(3i-2,3i-1)) &= \overline{b},\\
f(b,(3i-1,3i)) &= \overline{c},\\
f(c,(3i,3i-2)) &= \overline{a}.
\end{align*}
Now, if $\prec$ is an $\mathcal{M}$-compatible order on $S\times [k]$, then not all three of the inequalities $\overline{a} \prec a$, $\overline{b} \prec b$, $\overline{c} \prec c$ can be true, since these together with the above three transitions imply a directed cycle of inequalities. Thus, if we define the value for the literal $a$ to be true iff $a \prec \overline{a}$, then any $\mathcal{M}$-compatible order corresponds to a solution to our instance of 3-SAT. Conversely, given a solution to our 3-SAT instance, we can use it to first decide which of the inequalities $a \prec \overline{a}$ should hold, then extend this to an order on $S$; extending this further to an $\mathcal{M}$-compatible order on $S\times [k]$ is straightforward.
\end{proof}
Surprisingly, when $k = 2$ (i.e., in the case of digraphs), testing for a compatible ordering is equivalent to testing whether $P_n$ is $\mathcal{M}$-good for all $n$.
\begin{thm} If $\mathcal{M}$ is a cycling $2$-machine, then there are $\mathcal{M}$-good digraphs having arbitrarily large chromatic number if and only if the directed path $P_n$ is $\mathcal{M}$-good for all $n$. This can be tested in polynomial time (even if $\mathcal{M}$ is non-deterministic).
\end{thm}
\begin{proof} Let $\mathcal{M} = (S,f,\Delta_S)$. Suppose that $\prec$ is any total ordering on $S\times [2]$. Then there is an order preserving map $\iota : (S\times [2], \prec) \hookrightarrow (\mathbb{Q}, <)$. Thinking of this as a map $S \rightarrow \mathbb{Q}^2$, we can associate an interval $I_s \subset \mathbb{Q}$ to each element $s \in S$, with endpoints $\iota(s,1)$ and $\iota(s,2)$.
Suppose now that $\prec\mid_{S\times\{1\}}$ agrees with $\prec\mid_{S\times\{2\}}$. It's easy to check that there can't be any $s,t \in S$ with $I_s \subset I_t$. Therefore, by the fact that proper interval graphs are always unit interval graphs (\cite{indifference-graphs}, \cite{proper-interval-algorithmic}), we may assume without loss of generality that
\[
|\iota(s,2) - \iota(s,1)| = 1
\]
for all $s \in S$. Additionally, if $I_s$ overlaps with $I_t$ for any $s,t \in S$, then we can check that the endpoints of $I_s$ must be sorted in the same way as the endpoints of $I_t$. Thus, within any connected component of our unit interval graph, all the intervals must have their endpoints sorted the same way.
Now we associate a weighted digraph $\mathcal{G}$ to $\mathcal{M}$, as follows. For any $s,t \in S$ and any $i,j \in [2]$, if $t \in f(s,(i,j))$ then we draw an edge from $s$ to $t$ with weight $j-i$ in $\mathcal{G}$ (note that $\mathcal{G}$ might have multiple edges of different weights connecting a pair of vertices, and that some edges may have weight $0$). Note that if $\prec$ is $\mathcal{M}$-compatible, then every strongly connected component (ignoring the weights) of $\mathcal{G}$ must be mapped to a connected component of our unit interval graph. It's easy to see that an $\mathcal{M}$-compatible order on $S\times [2]$ exists if and only if each strongly connected component $C$ of $\mathcal{G}$ has an $\mathcal{M}$-compatible order on $C\times [2]$ (since we can linearly order the strongly connected components of $\mathcal{G}$), so we may assume without loss of generality that $\mathcal{G}$ is strongly connected.
If we have $\iota(s,2) > \iota(s,1)$ for all $s \in S$, then $\prec$ will be $\mathcal{M}$-compatible if and only if the system of inequalities
\[
\{x_s < x_t + w \mid (s,t)\text{ is an edge of }\mathcal{G}\text{ having weight }w\}
\]
is solved by taking $x_s = \iota(s,1)$. If $\iota(s,2) < \iota(s,1)$ for all $s \in S$, then the inequalities above must be replaced with $x_s < x_t - w$.
We will show that there is an $\mathcal{M}$-compatible order if and only if $\mathcal{G}$ has no directed cycles of total weight $0$ (an efficient way to test this is given in \cite{scaling-shortest-path}). First, if there is such a cycle, then adding the inequalities corresponding to its edges we see that the system of inequalities above has no solution (regardless of which way the endpoints of each interval are sorted). Conversely, if there is no solution to the above system of inequalities for either choice of how the endpoints of the intervals are sorted, then there must be a positive linear combination of these inequalities that comes out to $0 < 0$. Since each inequality has exactly one variable on each side, we can decompose this linear combination into positive linear combinations corresponding to directed cycles of $\mathcal{G}$, to see that $\mathcal{G}$ must have a directed cycle $c_+$ with nonnegative total weight and a directed cycle $c_-$ of nonpositive total weight. Since we have assumed that $\mathcal{G}$ is strongly connected, it isn't hard to show that in fact $\mathcal{G}$ must have a directed cycle of total weight $0$ (by finding a suitable positive linear combination of $c_+$, $c_-$, and any directed cycle that connects $c_+$ to $c_-$).
Finally, the directed cycles of $\mathcal{G}$ with total weight $0$ correspond exactly to the $\mathcal{M}$-bad cycles of the paths $P_n$: the weight $j-i \in \{-1,0,1\}$ of a transition records the displacement along the path, so a closed sequence of states whose weights sum to $0$ can be realized as a cycle in $P_n$ for $n$ large enough, and conversely.
\end{proof}
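The zero-weight-cycle test above can be carried out efficiently via \cite{scaling-shortest-path}; for small machines a naive search over simple directed cycles already suffices for experiments. A brute-force sketch (our own code; exponential in general):

```python
def has_zero_weight_cycle(n_vertices, arcs):
    """arcs: triples (u, v, w); for the digraph G built from a cycling
    2-machine the weights are w = j - i, i.e. in {-1, 0, 1}.
    Searches simple directed cycles of total weight 0."""
    out = {u: [] for u in range(n_vertices)}
    for u, v, w in arcs:
        out[u].append((v, w))

    def dfs(start, u, weight, visited):
        for v, w in out[u]:
            if v == start and weight + w == 0:
                return True
            # only explore cycles whose minimal vertex is `start`
            if v > start and v not in visited:
                if dfs(start, v, weight + w, visited | {v}):
                    return True
        return False

    return any(dfs(s, s, 0, {s}) for s in range(n_vertices))

# a 2-cycle with weights +1, -1 has total weight 0; with +1, +1 it does not
assert has_zero_weight_cycle(2, [(0, 1, 1), (1, 0, -1)])
assert not has_zero_weight_cycle(2, [(0, 1, 1), (1, 0, 1)])
```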
\begin{thm} Checking whether a given (non-deterministic) cycling $3$-machine $\mathcal{M}$ has a compatible order is NP-complete.
\end{thm}
\begin{proof}[Proof sketch] We restrict to the case of cycling $3$-machines such that for each state $s \in S$, we have $s \in f(s,(1,2))$, so that our compatible order $\prec$ must satisfy $(s,1) \prec (s,2)$. Using the fact that proper interval graphs are always unit interval graphs as in the proof of the previous theorem, to any compatible $\prec$ we can associate an order preserving map $\iota : (S\times [3], \prec) \rightarrow (\mathbb{Q}, <)$ such that $\iota(s,2) = \iota(s,1) + 1$ for all $s \in S$. Introduce variables $x_s$ with $x_s = \iota(s,1)$. Since
\[
x_s < x_t \iff \iota(s,3) < \iota(t,3)
\]
for compatible orders $\prec$, we can find an increasing function $u : (\mathbb{Q}, <) \rightarrow (\mathbb{Q}, <)$ such that
\[
\iota(s,3) = u(x_s).
\]
Thus, the existence of a compatible order is equivalent to the existence of rational numbers $x_s$ for $s \in S$ and an increasing function $u$ satisfying a system of inequalities where each side of each inequality is in one of the forms $x_s, x_s + 1,$ or $u(x_s)$. Our goal is to show that solving such a system (for the $x_s$s and the unknown function $u$) is NP-complete.
Using polynomially many auxiliary variables, we can also use inequalities of the form $x_s < x_t \pm n$, where $n$ is a natural number which is at most polynomially large. Our main gadget will be based on the following observation. Suppose that $a_1, ..., a_n, b$ satisfy the system
\begin{align*}
\forall i \le n-1, \;\;\; a_{i+1} &< a_i + 1,\\
a_1 &< a_n - (n-2),\\
\forall i, \;\;\; u(a_i) &< a_i,\\
b + 1 &< u(b).
\end{align*}
Then we must have $b \not\in [a_1-1,a_n]$: if $b \in [a_i - 1, a_i]$, then
\[
a_i \le b + 1 < u(b) \le u(a_i) < a_i,
\]
a contradiction. If we let $m, k$ be natural numbers and let $x,y$ be two more variables, and add the inequalities
\begin{align*}
y + m &< a_1,\\
a_n &< y+m+n,\\
x + k &< b + 1,\\
b &< x + k,
\end{align*}
to the above system, then we see that
\[
x-y \not\in [(m-k)+2, (m-k)+n-2].
\]
The strategy is to fix $m-k$ and take $m$ large enough that the interval $[a_1-1,a_n]$ will not be anywhere near any other variables, other than $b$, giving us a gadget that guarantees that the difference $x-y$ is not in a given interval with integer endpoints.
Now it is straightforward to find a reduction from 3-coloring. Given a graph $\mathcal{G} = (V,E)$, we introduce variables $x_v$ corresponding to the vertices of $V$, and use the gadget described above to force
\[
x_v - x_w \in [-21,-19] \cup [-11,-9] \cup [-1,1] \cup [9,11] \cup [19,21]
\]
for all $v,w \in V$. For each edge $\{v,w\} \in E$, we use the above gadget to add the additional constraint $x_v - x_w \not\in [-2,2]$. Given a solution to the above system, if we color the vertex $v$ of $\mathcal{G}$ based on the closest multiple of $10$ to $x_v - x_{v_0}$ for some fixed vertex $v_0$, we get a 3-coloring of $\mathcal{G}$, and conversely from a 3-coloring of $\mathcal{G}$ we can easily construct a solution to the above system.
\end{proof}
\section{The general case}
In the general case, it is technically convenient to require trivial cycles not to be $\mathcal{M}$-bad: without this convention, whenever $\mathcal{B} \cap \Delta_S \neq \emptyset$, every vertex would give a bad trivial cycle and no hypergraph with a nonempty vertex set could be $\mathcal{M}$-good.
\begin{defn} We define an \emph{order system} on a set $S$ to be a triple $(\sim, \preceq, \le)$ such that $\sim$ is an equivalence relation on $S$, $\preceq$ is a partial order on $S/\!\sim$, and $\le$ is an extension of $\preceq$ to a total order on $S/\!\sim$.
\end{defn}
\begin{defn} If $\mathcal{M} = (S,f,\mathcal{B})$ is a $k$-machine, then we say that the order system $(\sim, \preceq, \le)$ on $S \times [k]$ is \emph{compatible} with $\mathcal{M}$ if it satisfies the following three conditions:
\begin{itemize}
\item For any $(s,i), (t,j) \in S\times [k]$ with $t \in f(s,(i,j))$, we have $((s,i)/\!\sim) \preceq ((t,j)/\!\sim)$.
\item The induced order systems $(\sim, \preceq, \le)\!\mid_{S\times \{i\}}$ on $S$ are independent of $i$.
\item For any $s,t \in S$ with $(s/\!\sim) \preceq (t/\!\sim)$ in the induced order system on $S$, we have $(s,t) \not\in \mathcal{B}$.
\end{itemize}
\end{defn}
\begin{thm}\label{general} If $\mathcal{M} = (S,f,\mathcal{B})$ is a $k$-machine, then there exist $\mathcal{M}$-good $k$-uniform directed hypergraphs $\mathcal{H}$ of arbitrarily large chromatic number if and only if there is an order system $(\sim, \preceq, \le)$ on $S \times [k]$ which is compatible with $\mathcal{M}$. Furthermore, if the chromatic number is bounded then it is bounded by the number of possible order systems on $S$.
\end{thm}
\begin{proof} First we show the necessity. Let $\mathcal{H} = (V,E)$ be an arbitrary $k$-uniform directed hypergraph. As in the cycling case, we define an auxiliary digraph $\mathcal{G}$ with vertex set $V\times S$ and edge set given by
\[
\{((a,s),(b,t)) \mid \exists e \in \mathcal{H},\ i, j \in [k]\text{ s.t. }e_i = a, e_j = b, t \in f(s,(i,j))\}.
\]
We define an equivalence relation $\sim$ on the vertex set of $\mathcal{G}$ by partitioning $\mathcal{G}$ into its strongly connected components. Define a partial order $\preceq$ on $\mathcal{G}/\!\sim$ by $(u/\!\sim) \preceq (v/\!\sim)$ if there exists a directed path from $u$ to $v$ in $\mathcal{G}$. Finally, extend the partial order $\preceq$ to a total order $\le$ on $\mathcal{G}/\!\sim$. Note that $\mathcal{H}$ is $\mathcal{M}$-good if and only if, for any $v \in V$ and any $(s,t)$ with $((v,s)/\!\sim) \preceq ((v,t)/\!\sim)$, we have $(s,t) \not\in \mathcal{B}$.
Color the vertex $v \in V$ by the induced order system $(\sim, \preceq, \le)\!\mid_{\{v\}\times S}$. If the chromatic number of $\mathcal{H}$ is greater than the number of possible order systems on $S$, then there must exist an edge $e \in E$ such that $e_1, ..., e_k$ all have the same induced order systems. We now define the order system $(\sim,\preceq,\le)$ on $S\times [k]$ by $(s,i) \sim (t,j)$ if and only if $(e_i,s) \sim (e_j,t)$, and similarly for $\preceq, \le$, and note that this order system is compatible with $\mathcal{M}$.
Now we show the sufficiency. Fix an order system $(\sim,\preceq,\le)$ on $S\times [k]$ which is compatible with $\mathcal{M}$. Let $A$ be the structure $(S/\!\sim, \preceq\mid_{S/\!\sim}, \le\mid_{S/\!\sim})$, and let $B$ be the structure $((S\times[k])/\!\sim, \preceq, \le)$, so $A,B$ are both partial orders with linear extensions. Let $A_i$ be the induced copy of $A$ in $B$ coming from $S\times\{i\}$. By structural Ramsey theory for posets with a linear extension (Theorem 4.9 of \cite{all-those-ramsey}), there exists a partial order with linear extension $C$ such that for every way of coloring the set of induced copies of $A$ in $C$ by finitely many colors, there exists an induced copy $B'$ of $B$ in $C$ such that all induced copies of $A$ in $B'$ are colored with the same color.
We define $\mathcal{H} = (V,E)$ by taking $V$ to be the set of induced copies of $A$ in $C$, and defining $E$ to be the set of $k$-tuples $(A_1', ..., A_k')$ for which there is an induced copy $B'$ of $B$ whose isomorphism $B \xrightarrow{\sim} B'$ takes each $A_i$ to $A_i'$.
It's easy to show that this $\mathcal{H}$ is $\mathcal{M}$-good (the auxiliary digraph $\mathcal{G}$ has an equivalence relation $\sim$ such that the vertices of $\mathcal{G}/\!\sim$ correspond to the elements of the induced copies of $A$ in $C$, and all of the edges of $\mathcal{G}/\!\sim$ are non-decreasing with respect to the partial order $\preceq$), and the chromatic number of $\mathcal{H}$ is infinite by the choice of $C$.
\end{proof}
\begin{ex} Consider the $2$-machine $\mathcal{M} = (\{0,1\},f,\{(0,1)\})$, with $f(0,(1,2)) = f(0,(2,1)) = \{1\}, f(1,(1,2)) = \{0\}$, and $f(1,(2,1)) = \emptyset$. A digraph is $\mathcal{M}$-good if and only if it has no odd cycles such that every even-numbered edge points in the same direction (in particular, every odd cycle of an $\mathcal{M}$-good digraph must have length at least $7$). There is a unique order system $(\sim, \preceq, \le)$ on $\{0,1\}\times [2]$ which is compatible with $\mathcal{M}$: $(0,1) < (1,1) \sim (0,2) < (1,2)$, $0$ is incomparable with $1$ in the induced $\preceq$ on $\{0,1\}$, and $(0,1) \prec (1,2)$.
We can unwind the proof of Theorem \ref{general} to construct an explicit $\mathcal{M}$-good digraph with infinite chromatic number as follows. For our vertex set, we take the set of ordered pairs $(A,B)$ of finite subsets of $\mathbb{N}$ such that $A \not\subseteq B$ and $B \not\subseteq A$. For edges we take pairs of vertices of the form $((A,B),(B,C))$ such that $A \subset C$. It's easy to check that this digraph is $\mathcal{M}$-good. To see that it has infinite chromatic number, we apply structural Ramsey theory for posets with a linear extension and note that every finite poset has an induced copy inside the poset of finite subsets of $\mathbb{N}$.
\end{ex}
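To make the auxiliary-digraph construction concrete, the $\mathcal{M}$-goodness test for the $2$-machine of this example can be run mechanically: since $\mathcal{B} = \{(0,1)\}$, a digraph is bad exactly when $(v,1)$ is reachable from $(v,0)$ in $\mathcal{G}$ for some vertex $v$. A small Python sketch (the encoding and function names are mine, not from the text); it confirms that a directed path is $\mathcal{M}$-good while directed $3$- and $5$-cycles are not:

```python
from collections import deque

# Transition rules of the example 2-machine M = ({0,1}, f, {(0,1)}):
# f(0,(1,2)) = f(0,(2,1)) = {1},  f(1,(1,2)) = {0},  f(1,(2,1)) = {}.
F = {(0, (1, 2)): {1}, (0, (2, 1)): {1}, (1, (1, 2)): {0}, (1, (2, 1)): set()}
BAD = {(0, 1)}
STATES = (0, 1)

def aux_digraph(edges):
    """Auxiliary digraph G on V x S: edge ((a,s),(b,t)) whenever some
    directed edge e has e_i = a, e_j = b and t in f(s,(i,j))."""
    g = {}
    for (u, v) in edges:
        for (a, b, ij) in ((u, v, (1, 2)), (v, u, (2, 1))):
            for s in STATES:
                for t in F[(s, ij)]:
                    g.setdefault((a, s), set()).add((b, t))
    return g

def is_good(edges):
    """M-good iff for no vertex v is (v,t) reachable from (v,s)
    for some bad pair (s,t)."""
    g = aux_digraph(edges)
    verts = {x for e in edges for x in e}
    for v in verts:
        for (s, t) in BAD:
            seen, todo = {(v, s)}, deque([(v, s)])
            while todo:
                for y in g.get(todo.popleft(), ()):
                    if y not in seen:
                        seen.add(y)
                        todo.append(y)
            if (v, t) in seen:
                return False
    return True

print(is_good([(0, 1), (1, 2)]))           # True: a directed path is M-good
print(is_good([(0, 1), (1, 2), (2, 0)]))   # False: a directed triangle is not
```

Running the machine along a directed cycle with all edges in the same direction flips the state on every step, which is why every such odd cycle is detected as bad.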
\section{Application to constructing terms in bounded width algebras}
We follow the same general proof strategy as in Theorem 3.2 of \cite{optimal-maltsev}. Rather than $(2,3)$-consistency, we'll use the framework of $pq$-instances from \cite{slac}; this will allow us both to prove stronger results and to simplify the argument.
\begin{defn} We let $\mathcal{R}_n$ be the set of subdirect relations on the $n$-element set $\{x_1, ..., x_n\}$. For $R, S \in \mathcal{R}_n$, we define $R\circ S$ to be $\{(a,c) \mid \exists b\ (a,b) \in R,\ (b,c) \in S\}$, and we define $R^-$ to be $\{(b,a) \mid (a,b) \in R\}$.
\end{defn}
\begin{defn} We say a set $\mathcal{S} \subseteq \mathcal{R}_n$ of subdirect relations is $pq$-\emph{compatible} if $\mathcal{S}$ is closed under composition and reversal, and for any $P,Q\in \mathcal{S}$ there exists $j \ge 0$ such that
\[
\Delta_{\{x_1, ..., x_n\}} \subseteq P\circ (Q\circ P)^{\circ j}.
\]
\end{defn}
\begin{defn} For any $pq$-compatible set of subdirect relations $\mathcal{S}$, and any function $\pi : ([k]^2\setminus \Delta_{[k]}) \rightarrow \mathcal{R}_n$, we define the deterministic $k$-machine $\mathcal{M}_{\mathcal{S},\pi}$ to be $\mathcal{M}_{\mathcal{S},\pi} = (\mathcal{R}_n, f, \{\Delta_{\{x_1,...,x_n\}}\}\times(\mathcal{R}_n\setminus\mathcal{S}))$, where $f$ is defined by $f(R,(i,j)) = R\circ \pi(i,j)$.
\end{defn}
\begin{thm} Let $R \subseteq \{x_1, ..., x_n\}^k$ be subdirect, and define $\pi$ by $\pi(i,j) = \pi_{i,j}(R)$. For any $pq$-compatible set $\mathcal{S}$, if there are $\mathcal{M}_{\mathcal{S},\pi}$-good $k$-uniform directed hypergraphs of arbitrarily large chromatic number, then for any finite bounded width algebra $\mathbb{A}$ there exists a diagonal element in $\Sg_{\mathbb{A}}(R)$.
\end{thm}
\begin{proof} This follows from the definition of $\mathcal{M}_{\mathcal{S},\pi}$, the definition of a $pq$-compatible set of relations, and Theorem A.2 of \cite{slac}.
\end{proof}
\begin{cor} Every finite bounded width algebra has a $4$-ary term $t$ which satisfies $t(x,x,y,z) \approx t(y,z,z,x)$.
\end{cor}
\begin{proof} Let
\[
R = \left\{\begin{bmatrix} x \\ y \end{bmatrix}, \begin{bmatrix} x \\ z \end{bmatrix}, \begin{bmatrix} y \\ z \end{bmatrix}, \begin{bmatrix} z \\ x \end{bmatrix}\right\},
\]
define $\pi$ by $\pi(i,j) = \pi_{i,j}(R)$, and let $\mathcal{S}$ be the set of relations in the compositional semigroup generated by $R,R^-$ which correspond to words which either contain $R\circ R$ or $R^- \circ R^-$, or are equal to $(R\circ R^-)^{\circ j}$ or $(R^- \circ R)^{\circ j}$ for some $j \ge 0$. Since every element of $\mathcal{S}$ contains some power of the cyclic permutation $(x\; y\; z)$, and since both
\[
R\circ R\circ R = \left\{\begin{bmatrix} x \\ x \end{bmatrix}, \begin{bmatrix} x \\ y \end{bmatrix}, \begin{bmatrix} x \\ z \end{bmatrix}, \begin{bmatrix} y \\ y \end{bmatrix}, \begin{bmatrix} y \\ z \end{bmatrix}, \begin{bmatrix} z \\ x \end{bmatrix}, \begin{bmatrix} z \\ z \end{bmatrix}\right\}
\]
and
\[
R^- \circ R\circ R\circ R^- = \left\{\begin{bmatrix} x \\ x \end{bmatrix}, \begin{bmatrix} x \\ y \end{bmatrix}, \begin{bmatrix} y \\ x \end{bmatrix}, \begin{bmatrix} y \\ y \end{bmatrix}, \begin{bmatrix} y \\ z \end{bmatrix}, \begin{bmatrix} z \\ x \end{bmatrix}, \begin{bmatrix} z \\ y \end{bmatrix}, \begin{bmatrix} z \\ z \end{bmatrix}\right\}
\]
contain two distinct powers of the cyclic permutation $(x\; y\; z)$, $\mathcal{S}$ is $pq$-compatible.
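The two displayed compositions are finite computations and can be checked mechanically under the definitions of $\circ$ and $^-$ above. A short Python sketch (helper names are mine):

```python
# The relations of this proof as sets of ordered pairs over {x, y, z};
# compose and reverse follow the definitions of R o S and R^- above.
def compose(r, s):
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def reverse(r):
    return {(b, a) for (a, b) in r}

R = {('x', 'y'), ('x', 'z'), ('y', 'z'), ('z', 'x')}
Rm = reverse(R)

RRR = compose(compose(R, R), R)
print(sorted(RRR))      # the seven pairs of the first displayed relation

RmRRRm = compose(compose(compose(Rm, R), R), Rm)
print(sorted(RmRRRm))   # all nine pairs except (x, z), as displayed
```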
The $\mathcal{M}_{\mathcal{S},\pi}$-bad cycles in a digraph are now exactly the odd cycles which alternate between forward steps and backward steps (aside from a single vertex where they do not alternate), so we just have to construct a digraph $\mathcal{G} = (V,E)$ which has no odd alternating cycles and has infinite chromatic number. To finish the proof, we take $V = \{(a,b) \in \mathbb{N}^2 \mid a < b\}$ and $E = \{((a,b),(b,c)) \mid a < b < c\}$.
\end{proof}
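The digraph at the end of this proof is the directed shift graph on $\mathbb{N}$; its restriction to $\{0,\ldots,n-1\}$ is the classical shift graph, known to have chromatic number $\lceil\log_2 n\rceil$, hence unbounded. A brute-force Python check for tiny $n$ (function names are mine; exhaustive coloring search, so only feasible for very small $n$):

```python
from itertools import combinations, product

def shift_graph(n):
    """Digraph of the proof, restricted to {0,...,n-1}:
    V = {(a,b) : a < b}, edges (a,b) -> (b,c)."""
    verts = list(combinations(range(n), 2))
    edges = [((a, b), (b2, c)) for (a, b) in verts
             for (b2, c) in verts if b == b2]
    return verts, edges

def chromatic_number(verts, edges):
    """Smallest k admitting a proper coloring (brute force)."""
    for k in range(1, len(verts) + 1):
        for col in product(range(k), repeat=len(verts)):
            c = dict(zip(verts, col))
            if all(c[u] != c[v] for (u, v) in edges):
                return k
    return None

print(chromatic_number(*shift_graph(4)))  # 2
print(chromatic_number(*shift_graph(5)))  # 3 (grows like log2 n)
```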
We can generalize the previous result to give a new proof of the ``Loop Lemma'' (Theorem 3.5 of \cite{cyclic}, originally proved in \cite{smooth-digraph-dichotomy}) in the case of bounded width algebras.
\begin{prop} If $R \subseteq \{x_1, ..., x_n\}^2$, when viewed as a digraph, is smooth, weakly connected, and has algebraic length $1$, then there exists a number $k$ such that for all $l, m \ge k$,
\[
(R^{\circ l}\circ R^{-\circ m})^{\circ k} = \{x_1, ..., x_n\}^2.
\]
\end{prop}
\begin{defn} Say that a digraph $\mathcal{G}$ is $k$-\emph{unbalanced} if for every directed cycle $c$ of $\mathcal{G}$, either $c$ has exactly as many forward edges as backward edges, or there are two contiguous, non-overlapping stretches of $c$ such that one stretch has at least $k$ more forward edges than backward edges and the other stretch has at least $k$ fewer forward edges than backward edges.
\end{defn}
\begin{thm} For every fixed $k$, there exist digraphs which are $k$-unbalanced and have arbitrarily large chromatic number.
\end{thm}
\begin{proof} Define the deterministic $2$-machine $\mathcal{M}_k$ to be $\mathcal{M}_k = (S, f, \{a_0\}\times (S\setminus \{a_0\}))$, with
\[
S = \{a_{-k}, ..., a_k, b_0, ..., b_k\}
\]
and $f$ given by
\begin{align*}
\forall -k \le i < k, \;\;\; f(a_i,(1,2)) &= a_{i+1},\\
\forall -k < i \le k, \;\;\; f(a_i,(2,1)) &= a_{i-1},\\
f(a_k,(1,2)) &= b_0,\\
\forall 0 \le i < k, \;\;\; f(b_i,(2,1)) &= b_{i+1},\\
\forall 0 < i \le k, \;\;\; f(b_i,(1,2)) &= b_{i-1},\\
f(b_0,(1,2)) &= b_0,
\end{align*}
and all other values of $f$ are $\emptyset$. It's easy to see that any $\mathcal{M}_k$-good digraph is $k$-unbalanced. Thus, by Theorem \ref{general}, it suffices to exhibit an order system $(\sim, \preceq, \le)$ on $S\times [2]$ which is compatible with $\mathcal{M}_k$.
The equivalence relation $\sim$ and the total order $\le$ are given by
\begin{align*}
(a_{-k},2) &< (a_{-k},1) \sim (a_{-k+1},2) < \cdots \sim (a_k,2) < (a_k,1) < (b_0,1)\\
&< (b_0,2) \sim (b_1,1) < (b_1,2) \sim \cdots < (b_{k-1},2) \sim (b_k,1) < (b_k,2).
\end{align*}
The partial order $\preceq$ is a little bit more delicate. On $\{b_0, ..., b_k\}\times [2]/\!\sim$, $\preceq$ agrees with $\le$, while $\{a_{-k}, ..., a_k\}\times[2]/\!\sim$ forms a $\preceq$-antichain. Between the $a_i$s and the $b_j$s, we have
\[
(a_i,u) \prec (b_j,v) \iff i+j > k+u-v.
\]
In particular, in the induced partial order on $S$, $a_0$ is not comparable to any other element of $S$.
\end{proof}
\begin{cor} If $R \subseteq \{x_1, ..., x_n\}^2$ is smooth and has algebraic length $1$ when viewed as a digraph, and if $\mathbb{A}$ is a finite bounded width algebra, then there is a diagonal element in $\Sg_{\mathbb{A}}(R)$.
\end{cor}
\subsection*{Acknowledgement} I would have never explored this line of research if Jelena Jovanovi{\'{c}}, Petar Markovi{\'{c}}, Ralph McKenzie, and Matthew Moore hadn't first come up with the outrageous idea of using Ramsey-theoretic arguments to construct terms in bounded width algebras. I'd also like to thank Petar Markovi{\'{c}} and his students Vlado Uljarevi\'{c} and Samir Zahirovi\'{c} for being excellent hosts and for enjoyable discussions of this circle of ideas when I visited Novi Sad.
\section{Figures illustrating properties of simulated signal events and background estimation\label{app:suppMat}}
\input{supplemental_material}
}
\cleardoublepage \section{The CMS Collaboration \label{app:collab}}\begin{sloppypar}\hyphenpenalty=5000\widowpenalty=500\clubpenalty=5000\input{EXO-17-022-authorlist.tex}\end{sloppypar}
\end{document}
\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=9cm, angle=0]{OI_on_SUBARU_-30_25_paper.pdf}
\caption{Near-IR image (filters at J (1.25 $\mu$m), H (1.65 $\mu$m),
and K$'$ (2.15 $\mu$m)) of S106 taken with Subaru (Oasa et
al. 2006), outlining the bipolar emission nebula, with contours of
velocity integrated (--30 to 25 km s$^{-1}$) [O\,{\scriptsize I}]\ emission (136 to
456 K km s$^{-1}$ in steps of 64 K km s$^{-1}$). The area mapped
with SOFIA in [O\,{\scriptsize I}]\ and CO 16$\to$15 emission is indicated with a
white polygon. The star indicates the position of the S106 IR binary
system and the triangle the position of the young stellar object
(YSO) S106 FIR.}
\label{overview}
\end{figure}
\noindent {\bf Massive star formation} \\
\noindent Massive stars form by accretion of mass in one of three
ways: the core accretion model similar to low-mass stars via a
prominent disk \citep{McKee2002}, the competitive accretion scenario
in a clustered environment with a very small accretion disk
\citep{Bonnell2007,Bate2012}, or the fragmentation-induced starvation
view \citep{Peters2011} by fragmentation from gravitational
instability in dense accretion flows. Only the second and third
scenarios straightforwardly explain why massive stars often form in
multiple systems and are found preferentially at the centre of stellar
clusters \citep{Bontemps2010}. Regardless of the scenario, what is
known so far about how the different stages can be observationally
traced, in particular with regard to the far-infrared (FIR), can be
summarized as follows \citep[see also][]{Tan2014}:
\vspace{0.2cm}
\noindent {\sl Pre-stellar phase:} Massive dense clumps (masses from a
few tens to a few thousands of M$_\odot$, size $\sim$1 pc, density
n$\ge$10$^6$ cm$^{-3}$, temperature T$\sim$15 K) are the locations
where high-mass stars form
\citep[e.g.][]{Beuther2002,Tan2013,Motte2007}. They often show
subfragmentation into several smaller cores, i.e. pre- and
protostellar objects, of size scale $\ll$0.1 pc
\citep[e.g.][]{Bontemps2010}, but there are also examples of isolated
massive cores \citep[e.g.][]{Duarte-Cabral2013,Csengeri2017}.
Observationally, these cold dense clumps and cores are best traced by
dust emission and molecular lines (such as N$_2$H$^+$) in the mm and
sub-mm wavelength range. In contrast, molecular line emission in the
FIR is not observed in the cold dense gas phase because the energy
levels of rotational transitions are too high. Only light hydrides
such as CH$^+$, CH, NH, OH, etc., can be seen in more diffuse
gas. Molecular lines with lower excitation than the background
continuum temperature can be observed in absorption. A good example
is NH$_3$ with its transitions from non-metastable to metastable
levels, which have low excitation temperatures. For example,
\citet{Wyrowski2012} successfully observed the 1.8 THz NH$_3$ line in
absorption in high-mass star-forming clumps.
\vspace{0.2cm}
\noindent {\sl Protostellar/stellar phase:} Massive protostars
contract so quickly that there is virtually no pre-main sequence
stage. Hydrogen burning starts almost instantaneously, and they are
main sequence objects while still accreting. However, no well-defined
Keplerian disk has been detected yet, only disk-like structures on
size scales smaller than 2000 AU
\citep[e.g.][]{Beltran2011,Sanchez-Monge2013}. The envelope--disk
interaction leads to an atomic jet of at least partly ionized gas
orthogonal to the disk, which can then drive a molecular outflow
\citep[e.g.][]{Kuiper2011}. For low-mass protostars and massive
protostars in the core accretion model the bipolar radio jet and
molecular outflow are well collimated with a time-dependent activity
and opening angle \citep{Bontemps1996}. For massive protostars in the
competitive accretion scenario, the jet/outflow is often poorly
collimated. Surrounding lower-mass protostars can cause multiple
overlapping, randomly aligned outflows so that the overall outflow
pattern can be rather complex \citep{Hunter2008}. A gas distribution
with a highly fragmented and filamentary structure is predicted in the
fragmentation-induced starvation scenario where the dense accretion
flows should be directly observable. Radiative heating leads to a
higher Jeans mass so that fewer but more massive stars form
\citep{Peters2010a}.
Generally, outflows from massive protostars are more massive and
energetic \citep{Beuther2002,Duarte-Cabral2013} than those from
low-mass protostars. Extreme UV photons (EUV, energy $>$13.6 eV) lead
to ionization of the protostellar outflows and create an
outflow-confined H\,{\scriptsize II}\ region. One class of such H\,{\scriptsize II}\ regions comprises
those with the morphology of an hourglass-shaped parsec-scale bipolar
nebula, such as S106. Another example is associated with the evolved
star MWC349A \citep{Menten2012} which in addition shows a subparsec
(sub-pc) nebula that is explained as being due to a biconical outflow
of ionized gas pinched at the waist by a small disk seen edge-on
\citep{Cohen1985}. Typical of many bipolar nebulae is also a belt of
dense cold gas in the equatorial plane of the nebula
\citep{Gvaramadze2010}. Though it is commonly accepted that a disk
oriented perpendicular to the symmetry axis of the H\,{\scriptsize II}\ region plays
a crucial role in shaping the small-scale nebula/H\,{\scriptsize II}\ region and the
large-scale bipolar nebula, it is not clear how this process works in
detail. In addition, the nature of the dense gas structure in the
nebula waist, where the existing star or stars are embedded, is not
clear. It might be the remains of an accretion flow, as was shown in
numerical simulations \citep{Peters2010a}, or the result of a shock
that leads to an expansion and ultimately the dispersal of the belt
\citep{Menten2012}.
The earliest stages of hyper-compact,
ultra-compact, and compact H\,{\scriptsize II}\ regions are sometimes difficult to
identify because the massive protostar is deeply embedded in dense
gas, and even cm-emission can become optically thick. Ionized
collimated jets have been seen in radio continuum for some massive
protostars \citep[e.g.][]{Gibb2003,Guzman2014}. At the stage when a
larger cavity is blown out, entrained gas can also arise from the ablation
of gas at the cavity borders to the surrounding molecular cloud,
caused by the stellar wind. Generally, as soon as the massive star
reaches the main sequence, all forms of radiative feedback (thermal
heating, ionization, radiation pressure on dust) and mechanical
feedback (stellar winds from the stellar surface, protostellar
winds/outflows from the magneto-centrifugally driven flows powered by
accretion) become important. Observationally, it is challenging to
disentangle the different processes and to establish an evolutionary
sequence for the protostellar phase because it is short (a few
10$^3$ yr, \citealt{Duarte-Cabral2013}) and because of the small number
of objects to study. An additional complication is that most massive
stars form as binary or multiple systems, so that the observed jet and
outflow patterns are even more complex and depend on the properties of
each system.
It is the objective of this study and follow-up studies on the
star-forming region S106 to identify and potentially better
characterize these detailed phases by combining the spatial, velocity,
and intensity information of atomic and molecular line observations
and continuum from the mm to the FIR. In this first paper we focus on
interpreting the [O\,{\scriptsize I}]\ 63 $\mu$m emission observed in the immediate
environment of the exciting source S106 IR (see below) in the
framework of a photodissociation region (PDR). A possible shock
origin of the [O\,{\scriptsize I}]\ emission will be discussed and modelled in a
subsequent paper, followed by a study of the large-scale structure of
the bipolar nebula using [C\,{\scriptsize II}]\ 158 $\mu$m SOFIA data. \\
\begin{figure}
\centering
\includegraphics[width=7cm, angle=0]{cartoon-intro.pdf}
\vspace{-1cm}
\caption{Schematic view of the S106 region as seen on the sky.
The nebula is tilted $\approx$25$^\circ$ to the east in the plane of
the sky; the northern lobe, which is more obscured by foreground gas
and dust, is inclined ($\approx$15$^\circ$) away from the observer.
The southern lobe is inclined towards the observer and lacks the
front side. The bipolar nebula and the H\,{\scriptsize II}\ region are embedded in
an extended molecular cloud.}
\label{cartoon}
\end{figure}
\noindent{\bf S106} \\
\noindent
The bipolar H\,{\scriptsize II}\ region S106 in Cygnus X is an enigmatic object that
has received considerable attention owing to its eye-catching
appearance. Its distance was estimated to be 1.7 kpc
\citep{Schneider2007} and then determined with maser parallax
measurements to be 1.3$\pm$0.1 kpc \citep{Xu2013}. In the recent Gaia
DR2 catalogue, the distance is derived from the trigonometric parallax
to be 1.671 kpc (+738 pc, -392 pc). In this paper, we use a value of
1.3 kpc. S106 IR was always thought to be a single late O to early B
star (see \citealt{Hodapp2008} for a summary). However, recent
observations \citep{Comeron2018} show that it is a close (separation
$<$0.2 AU), massive binary system, most likely consisting of a late O
and a late B star, being responsible for the bipolar emission nebula
(Fig.~\ref{overview}). Hereafter, we refer to this binary system as
S106 IR. A small (only slightly larger than the beam of $\sim$30
mas), edge-on disk-like feature around the two stars was discovered by
cm-interferometry \citep{Hoare1996, Gibb2007}, but its exact
evolutionary status is not clear \citep{Adams2015}. The system
already shows signatures of main sequence stars, emitting copiously in
the UV (where the O star dominates) and driving an ionized wind with a
velocity of 100--200 km s$^{-1}$\ \citep{Bally1983,Simon1982} into the
bipolar cavity. Associated with this system is an IR cluster with
more than 160 members \citep{Hodapp1991}. The centroid of the cluster
lies about 30$''$ west and 15$''$ north of S106 IR and is
symmetrically distributed about this location.
Figure~\ref{cartoon} shows a cartoon of S106 with its known features
and those observed to date. The two lobes of the H\,{\scriptsize II}\ region are inclined
with respect to a vertical north--south axis. In addition, the northern
(southern) lobe is tilted away from (towards) the observer
\citep{Solf1982}. A large portion of the front part of the southern
cavity facing the observer has been eroded so that the backside of the
cavity is visible in the optical and IR. The northern lobe is
obscured by foreground gas and dust. The strongest nebular emission in
the optical and near-IR arises from the shell-like structures that
represent the ionization front and from the limb-brightened edges of
the cavity walls. High-velocity emission of the [C\,{\scriptsize II}]\ 158 $\mu$m fine
structure line and high-J CO lines \citep{Simon2012} was interpreted
as arising from swept-up material at the front and back sides of the
expanding wind-driven cavity.
Another prominent feature related to S106 is a dark lane dividing the
two lobes (indicated in dark blue in the cartoon) and lying apparently
in front of S106 IR. It was initially interpreted as the shadow of a
large-scale disk \citep{Bally1983,Bieging1984}, but dust continuum
observations \citep{Vallee2005,Motte2007,Simon2012,Adams2015} revealed
it to be an elongated high column-density feature associated with the
extended molecular cloud \citep{Schneider2007}. Dust observations
also show strong emission $\approx$15$''$ west of S106 IR, coinciding
with two clusters of H$_2$O masers \citep{Stutzki1982,Furuya1999}.
The dust peak was named S106 FIR by \citet{Richer1993}, and
interpreted as a Class 0 young stellar object (YSO). The molecular
gas close to S106 IR is highly clumped \citep{Schneider2002} and a
number of authors interpreted their observations of ionic and
fine-structure lines \citep{vandenAncker2000, Schneider2003,
Simon2012, Stock2015} as arising from photodissociation regions
(PDRs) on the surfaces of these individual clumps around S106
IR. However, most of the studies of the FIR fine-structure cooling
lines of S106 were hampered because only line integrated fluxes were
observed and used for PDR modelling. Here we take a different approach
in that we benefit from the unprecedented velocity resolution of the
German REceiver for Astronomy at Terahertz frequencies (GREAT) on
board the Stratospheric Observatory for Far-Infrared Astronomy
(SOFIA), and apply PDR modelling to line ratios in different velocity
ranges so that we can determine more precisely the density and
radiation field in individual portions of the gas.
In addition to getting a better understanding of the nature of the
S106 star-forming complex itself, one of the objectives of this paper
is putting our results into a larger context of how massive stars form
and evolve. The paper is structured as follows. Section~\ref{obs}
gives observational details and Sect.~\ref{pdr} outlines how we
realize the PDR modelling. In Sect.~\ref{results}, we present maps of
the observed lines and give results of PDR modelling. In
Sect.~\ref{discuss}, we develop a physical model of the S106 region
and put the findings into a larger context of massive star
formation. Section~\ref{summary} summarizes the results of the paper.
\section{Observations} \label{obs}
\begin{figure}
\centering
\includegraphics[width=8.5cm, angle=0]{continuum_withEB.pdf}
\caption{Example (position 3a, southern cavity) of the KOSMA-$\tau$
PDR model continuum emission fitted to the observed fluxes (black
points) of MAMBO (1.2mm), {\sl Herschel} (PACS:
70, 160, SPIRE: 250 $\mu$m), and FORCAST (19.7, 25.3, 31.5, 37.1
$\mu$m), assuming a unity beam filling factor. The error for the
flux values is generally small (see Sect.~2.2). All data were
smoothed to a common angular resolution of 15$''$. The {\sl
Herschel}/SPIRE 250 $\mu$m data point has a resolution of 18$''$.}
\label{Fig:cont}
\end{figure}
\subsection{SOFIA} \label{sofia}
The [O\,{\scriptsize I}]\ $^3$P$_1 \rightarrow$ $^3$P$_2$ atomic fine structure line at
4.74477749 THz (63.2 $\mu$m) and the CO 16$\to$15 rotational line at
1.841345 THz (162.8 $\mu$m) were observed with the heterodyne receiver
GREAT\footnote{German Receiver for Astronomy at Terahertz. GREAT is a
development by the MPI f\"ur Radioastronomie and the
KOSMA/Universit\"at zu K\"oln, in cooperation with the MPI f\"ur
Sonnensystemforschung and the DLR Institut f\"ur Planetenforschung.}
on board SOFIA during one flight on December 15, 2015, from
Palmdale/California. The [O\,{\scriptsize I}]\ line was observed in the single-pixel H
channel (now upgraded to a 7-pixel array), and the CO line in the
single-pixel L2 channel. The average system temperatures over the
total bandpass were 3641 K and 5059 K for [O\,{\scriptsize I}]\ and CO, respectively.
We employed a fast Fourier transform spectrometer (AFFTS). The centre
intermediate frequency (IF) was 1455 MHz for the [O\,{\scriptsize I}]\ line and 1000
MHz for the CO line. The frequency resolution of the original
[O\,{\scriptsize I}]\ and CO data sets is 0.244 MHz, giving velocity resolutions of
$\sim$0.015 km s$^{-1}$ and $\sim$0.04 km s$^{-1}$, respectively
(16384 spectrometer channels).
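The channel width translates into velocity resolution via the Doppler relation $\Delta v = c\,\Delta\nu/\nu$; a quick Python check with the line frequencies quoted above (the same 0.244 MHz channel is finer in velocity at the higher [O\,{\scriptsize I}]\ frequency than at the CO frequency):

```python
C_KMS = 299792.458   # speed of light in km/s
DNU = 0.244e6        # channel width in Hz

# Line frequencies from this section (Hz)
for name, nu in [("[OI] 63um", 4.74477749e12), ("CO 16-15", 1.841345e12)]:
    dv = C_KMS * DNU / nu   # km/s per channel
    print(f"{name}: {dv:.3f} km/s")
# [OI] 63um: 0.015 km/s
# CO 16-15: 0.040 km/s
```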
Beam-switched on-the-fly maps with a scan length of 90$''$, a step
size of 3$''$, and a dump time of 1 s were performed. The chopper throw
was 360$''$ with a chop angle of 45$^{\circ}$ anti-clockwise from
+R.A. The map centre position refers to S106 IR
(R.A., Dec.)(J2000)$=(20^h27^m26^s.74,37^\circ22'47''.9)$. The same
area was covered three times while scanning in R.A., and two times in
Dec. Procedures to determine the instrument alignment and telescope
efficiencies, antenna temperature and atmospheric calibration, as well
as the spectrometers used, are described in \citet{Heyminck2012} and
\citet{Guan2012}. Tracking was done on the H-channel and the
co-alignment between the H- and L2-channel is determined to be
$\sim$2$''$. All line intensities are reported as main beam
temperatures scaled with main-beam efficiencies of 0.69 and 0.68 for
[O\,{\scriptsize I}]\ and CO, respectively, and a forward efficiency of 0.97. The main
beam sizes are 15.3$''$ for the L2 channel and 6.1$''$ for the H
channel.
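The scaling to main beam temperature implied by the quoted efficiencies can be sketched as follows, assuming the standard single-dish convention $T_{\rm mb} = T_A^{*}\,\eta_{\rm f}/\eta_{\rm mb}$ (the convention is my assumption; the text only quotes the efficiencies):

```python
ETA_F = 0.97                       # forward efficiency quoted above
ETA_MB = {"OI": 0.69, "CO": 0.68}  # main-beam efficiencies quoted above

def to_main_beam(t_a_star, line):
    """Corrected antenna temperature T_A* -> main beam temperature T_mb,
    assuming T_mb = T_A* * eta_f / eta_mb."""
    return t_a_star * ETA_F / ETA_MB[line]

# A 10 K antenna temperature scales up by about 40% on the mb scale
print(round(to_main_beam(10.0, "OI"), 2))  # 14.06
```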
The telluric [O\,{\scriptsize I}]\ line, originating from the mesosphere, contributes
as a narrow feature at the velocity of the bulk emission of the cloud
(approximately +3 km s$^{-1}$). We corrected for the absorption
following the procedure described in Appendix A in
\citet{Leurini2015}, assuming that the profile can be characterized by
a Gaussian. In summary, for the opacity correction the absorption
strength was adjusted so as to achieve an adequate interpolation
between adjacent unaffected spectral channels.
The calibrated [O\,{\scriptsize I}]\ and CO spectra were further reduced and analysed
with the GILDAS software, developed and maintained by IRAM. From the
spectra, a third-order baseline was removed and spectra were then
averaged with 1/$\sigma^2$ weighting (baseline noise). The mean
r.m.s. noise temperatures per 0.6 km s$^{-1}$ velocity bin are 1.9 K
for [O\,{\scriptsize I}]\ (6$''$ beam) and 1.6 K for CO (15$''$ beam). We estimate
that the absolute calibration uncertainty is $\sim$20\%.
For some overlays and PDR modelling, we use [C\,{\scriptsize II}]\ 158 $\mu$m and
$^{12}$CO 11$\to$10 data observed with SOFIA from 2012
\citep{Simon2012} and new observations from May 2015 and December 2016
(Simon et al., in prep.).
\begin{figure*}
\centering
\includegraphics[width=13cm, angle=0]{s106-OI-map-grid6.pdf}
\caption{Spectral map of [O\,{\scriptsize I}]\ emission with a velocity resolution of 1
km s$^{-1}$ on a 6$''$ beam-sampled grid in the velocity range from -40
to 40 km s$^{-1}$ and main beam brightness temperature range from -4 to
40 K.}
\label{spectra-map}
\end{figure*}
\subsection{Complementary data} \label{other-data}
\noindent {\bf IRAM 30m} \\
We use molecular line data from the IRAM\footnote{IRAM is supported by
INSU/CNRS (France), MPG (Germany), and IGN (Spain)} 30m telescope,
obtained in January 2016 (PI: N. Schneider). The observations were
performed with two settings, one with the high-resolution spectrometer
FTS50 attached to the EMIR E0 receiver (at 3mm wavelength), and one
with the lower resolution FTS200 in combination with EMIR E1 (at 1mm
wavelength). A large number of molecular lines ($\sim$20) was
observed, covering a range from 85.3 to 106.1 GHz and from 89.8 to
110.5 GHz, respectively. We performed on-the-fly maps that consist of
one horizontally and one vertically scanned map of size 300$''$ at 3mm
and 168$''$ at 1mm (larger than the area shown in Fig.~\ref{overview}). The quality of
the data is very good, thus only baselines of first order were
removed. For the H$^{13}$CO$^+$ 1$\to$0 data we employ here, the
average system temperature during the observations was 88 K and the
average baseline rms for spectra smoothed to 15$''$ angular resolution
and with a velocity resolution of 0.17 km s$^{-1}$ is 0.16 K. \\
\noindent {\bf Other instruments} \\
For overlays and PDR modelling, we use published or archived continuum
data from {\sl Herschel} (70, 160, 250, 350, 500 $\mu$m) and FORCAST
(19.7, 25.3, 31.5, 37.1 $\mu$m) \citep{Adams2015}, {\sl Spitzer} (3.6,
4.5, 5.8, 8.0 $\mu$m), MAMBO (1.2mm) \citep{Motte2007}, the VLA
\citep{Bally1983}, and Subaru \citep{Oasa2006}. The flux uncertainties
for the continuum data are $\sim$10\% for PACS, SPIRE, and FORCAST,
and $\sim$20\% for MAMBO (see references above for these values).
\begin{table}[htb]
\begin{center}
\caption{Overview of the most important model parameters. All abundances
are given with respect to the total H abundance. }\label{parameter}
\begin{tabular}{lll}
\hline\hline
\multicolumn{3}{c}{\rule[-3mm]{0mm}{8mm}\bf Model Input Parameters}\\ \hline
\rule[2mm]{0mm}{2mm}He/H&0.0851&(1)\\
O/H & 4.47 10$^{-4}$ & (2)\\
C/H & 2.34 10$^{-4}$ & (2)\\
$^{13}$C/H & 3.52 10$^{-6}$ & (3)\tablefootmark{a}\\
$^{18}$O/H & 8.93 10$^{-7}$ & (4)\tablefootmark{b}\\
N/H & 8.32 10$^{-5}$ & (2)\\
S/H & 7.41 10$^{-6}$ &(2)\\
F/H & 6.68 10$^{-9}$ &(2)\\
$Z$ & 1 &solar metallicity\\
$\zeta_{CR}$ & 2 10$^{-16}$ s$^{-1}$ & CR ionization rate (5)\\
$R_\mathrm{V}$ &3.1 & visual extinction/reddening (7,8) \\
$\sigma_\mathrm{D}$& 1.75 10$^{-21}$~cm$^2$& UV dust cross section per H (8)\\
$\langle A(\lambda)/A_\mathrm{V} \rangle$&$3.339$&mean FUV extinction\\
$\tau_\mathrm{UV}$&$3.074 A_V$&FUV dust attenuation\\
$v_b$ & 1~km~s$^{-1}$&Doppler width\\
$n_0$&$10^{2,\ldots,7}$~cm$^{-3}$&total surface gas density\\
$M$&$10^2$~\msol&cloud mass\\
$\chi$ & $10^{0,\ldots,6}$&FUV intensity w.r.t. (6)\tablefootmark{c}\\
$\alpha$&1.5&density power law index\\
$R_\mathrm{core}$&$0.2 R_\mathrm{tot}$&size of constant density core\\
$N_\mathrm{tot}/A_\mathrm{V}$& 1.62 10$^{21}$~cm$^{-2}$&(8)\\
\hline\\
\end{tabular}
\end{center}
\vspace{-1cm}
\tablebib{
(1) \citet{Asplund2005}; (2) \citet{Simon-Diaz2011}; (3) \citet{Langer1990}; (4) \citet{Polehampton2005}; (5) \citet{Hollenbach2012};
(6) \citet{Draine1978}; (7) \citet{Roellig2013}; (8) \citet{Weingartner2001a}.}
\tablefoot{
\tablefoottext{a}{based on a $^{12}$C/$^{13}$C ratio of 67}
\tablefoottext{b}{based on a $^{16}$O/$^{18}$O ratio of 500}
\tablefoottext{c}{$\chi =1.71~G_0$ where $G_0$ is the mean ISRF from \citet{Draine1978}.}
}
\end{table}
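The abundance and extinction entries of Table~\ref{parameter} are related by simple scalings, which can be cross-checked with a minimal Python sketch (the numerical constants are taken from the table; the function names are ours):

```python
def isotope_abundance(parent_abundance, isotope_ratio):
    """For example, 13C/H = (C/H) / (12C/13C); see footnotes a and b."""
    return parent_abundance / isotope_ratio

def av_from_column(n_h_total):
    """Visual extinction from the total H column density [cm^-2],
    using the tabulated N_tot/A_V = 1.62e21 cm^-2."""
    return n_h_total / 1.62e21

def tau_uv_from_av(a_v):
    """FUV dust attenuation tau_UV = 3.074 A_V (tabulated relation)."""
    return 3.074 * a_v
```

For instance, C/H $=2.34\times10^{-4}$ and $^{12}$C/$^{13}$C $=67$ give $^{13}$C/H $\approx 3.5\times10^{-6}$, consistent with the tabulated value.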
\section{Photodissociation region modelling} \label{pdr}
We employ the KOSMA-$\tau$ PDR model \citep{Stoerzer1996,Roellig2006}
to derive local physical conditions from the observed line intensities
and continuum fluxes. In the following, we explain the model and our
approach in more detail because this methodology will also be used for
subsequent studies.
\subsection{Model description} \label{model}
The KOSMA-$\tau$ PDR model numerically computes the energetic and
chemical balance in a spherical cloud that is externally irradiated.
We here use a non-clumpy model approach, i.e. a single, spherical PDR
with a density gradient similar to a Bonnor--Ebert sphere, and a cloud
mass of $M=100$~M$_\odot$ (the results depend only weakly on the model
clump mass and we ran models with a parameter range between 10$^{-3}$
M$_\odot$ and 10$^3$ M$_\odot$). The full numerical computation
scheme of KOSMA-$\tau$ involves three steps. First, the continuum
radiative transfer code MCDRT \citep{Szczerba1997} is used to compute
the thermal balance of all dust components (see below) as well as the
far-ultraviolet (FUV) radiative transfer within the model cloud, and
the emergent continuum radiation \citep{Roellig2013}. We include a
variety of different dust models, for example the MRN model
\citep{mrn} and the dust models by \citet{Weingartner2001a}. By
assuming that the dust temperature is independent of the gas
temperature, we then use the MCDRT output as input for the second step
where the KOSMA-$\tau$ code computes the chemical and physical state
of the gas. In Fig.~\ref{Fig:cont} we show as an example a comparison
between the observed mid-IR and FIR data at the position 3b (offsets
5$''$,--25$''$) and the best fitting dust continuum model. We
determined in this way the dust continuum at all positions (nine in
total) where we performed PDR modelling.
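The radial clump structure used in these models (a power-law envelope with index $\alpha=1.5$ truncated by a constant-density core of radius $0.2\,R_\mathrm{tot}$, cf. Table~\ref{parameter}) can be sketched as follows; the surface density and clump radius below are illustrative free parameters, not values from this paper:

```python
import numpy as np

def clump_density(r, n_surface=1.0e4, alpha=1.5,
                  core_frac=0.2, r_tot=1.0):
    """Density profile n(r) = n_surface * (r/R_tot)**(-alpha) for
    R_core <= r <= R_tot, held constant inside the core
    R_core = core_frac * R_tot.  r and r_tot share arbitrary units."""
    r = np.asarray(r, dtype=float)
    r_core = core_frac * r_tot
    return n_surface * (np.maximum(r, r_core) / r_tot) ** (-alpha)
```

The profile is continuous at the core boundary and reaches the surface value $n_0$ at $r = R_\mathrm{tot}$.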
We assume a local chemical steady state, i.e. a balance between
chemical formation and destruction processes for all involved
species, and compute their local densities; we also assume thermal
equilibrium, i.e. a balance between all heating and cooling
processes, which yields the local gas and dust temperatures. This is
done iteratively for all radial grid positions until the numerical
convergence criterion is met, namely that the column density
deviations between iterations are less than 1\%. The result is the
radial density and temperature profile of
the model clump. In a last step the profile is used to perform the
radiative transfer computations, giving the spectral line emission of
the model cloud for comparison with observations
\citep{Gierens1992}.\footnote{The KOSMA-$\tau$ code was part of the
PDR comparison benchmark study \citep{Roellig2007}.}
We use the most recent CO self-shielding functions by
\citet{Visser2009} and assume a Doppler line width according to the
linewidth--size relation as given by \citet{Larson1981}. The
photo-electric heating is computed according to
\citet{Weingartner2001b} (model 4 from their Table 2). The formation
of H$_2$ on grain surfaces follows
\citet{Cazaux2002,Cazaux2004,Cazaux2010erratum} and formation on polyaromatic hydrocarbon (PAH)
surfaces is suppressed. For more details see \citet{Roellig2013}. In
addition, we use the UMIST Database for Astrochemistry chemical
network \citep{McElroy2013}, including all isotopic reaction variants
involving $^{13}$C and/or $^{18}$O \citep{Roellig2013b}. In
total 3766 reactions are considered and the 227 species that are
included in the chemistry are listed in
Table~\ref{species}. Numerically problematic reaction rate coefficients
have been rescaled according to \citet{Roellig2011}. The formation of
CH$^+$ and SH$^+$ is computed using state-to-state reaction rates
\citep{Agundez2010,Nagy2013}.
\subsection{Model fitting} \label{fit}
Table~\ref{parameter} summarizes the model parameters. To derive
local physical conditions, we compare observed line
ratios\footnote{All ratios are computed from intensities in erg
s$^{-1}$cm$^{-2}$sr$^{-1}$. See Table~2 for conversion factors.}
with the corresponding model line ratios to find the best fitting
model parameters. We use two line ratios, ${\rm [CII]}_{158\mu m}/{\rm
[OI]}_{63\mu m}$ and CO(16-15)/CO(11-10), and compare them against
$({\rm [CII]}_{158\mu m}+{\rm [OI]}_{63\mu m})/\Sigma_{FIR_{range}}$.
We note that we do not use here the total FIR intensity
$\Sigma_{FIR_{total}}$, which is commonly used in PDR models
\citep{Stock2015}, because we work with velocity-resolved data. The
parameter $\Sigma_{FIR_{range}}$ is thus only the continuum intensity
in one particular velocity range. We define five velocity intervals
for the [C\,{\scriptsize II}]\ and [O\,{\scriptsize I}]\ emission as explained in
Sect.~\ref{results}. Observationally, however, we can only determine
the total FIR intensity $\Sigma_{FIR_{total}}$ obtained over all
velocity ranges. For that, we derive $\Sigma_{FIR_{total}}$ by
numerically integrating the continuum data between 10 and 1000 $\mu$m,
stemming from FORCAST\footnote{To convert the FORCAST data from
MJy/pixel to MJy/sr we use the conversion factor $7.213\times
10^{10}$~pixel/sr.}, {\sl Herschel}, and MAMBO (see
Sec.~\ref{other-data}) using a linear
interpolation. Figure~\ref{Fig:cont} shows an example of this
procedure. For the determination of $\Sigma_{FIR_{range}}$ we then
assume that gas and dust are well mixed and that the absolute [O\,{\scriptsize I}]\ 63
$\mu$m line integrated intensity in all velocity ranges is higher than
that of [C\,{\scriptsize II}]\ and CO (which is the case in each velocity range for
each observed/analysed position, see Table~\ref{tab1}). With these two
assumptions, the fraction of [O\,{\scriptsize I}]\ line integrated intensity in one
velocity range to that in the total velocity range is the same as the
fraction of $\Sigma_{FIR_{range}}$ to $\Sigma_{FIR_{total}}$.
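The procedure described above can be summarized in a short sketch (the function names are ours; the SED sample points in practice come from the FORCAST, {\sl Herschel}, and MAMBO maps at each position):

```python
import numpy as np

def total_fir(wavelength_um, intensity_mjy_sr):
    """Integrate I_nu over frequency between 10 and 1000 micron using
    linear interpolation, returning erg s^-1 cm^-2 sr^-1.
    wavelength_um must be sorted in ascending order."""
    c = 2.99792458e10                        # speed of light [cm/s]
    wl = np.linspace(max(10.0, wavelength_um.min()),
                     min(1000.0, wavelength_um.max()), 2000)
    i_interp = np.interp(wl, wavelength_um, intensity_mjy_sr)
    nu = c / (wl * 1e-4)                     # [Hz], decreasing with wavelength
    i_cgs = i_interp * 1e-17                 # 1 MJy/sr = 1e-17 erg/s/cm2/Hz/sr
    order = np.argsort(nu)
    return np.trapz(i_cgs[order], nu[order])

def sigma_fir_range(sigma_fir_total, i_oi_range, i_oi_total):
    """Scale the total FIR flux by the [OI] 63um intensity fraction in the
    chosen velocity range (assumes gas and dust are well mixed and that
    [OI] dominates the line cooling in every range)."""
    return sigma_fir_total * i_oi_range / i_oi_total
```

For position 1, for example, 46\% of the [O\,{\scriptsize I}]\ intensity falls in the HV-blue range (Table~\ref{tab1}), so $\Sigma_{FIR_{range}} \approx 0.46\,\Sigma_{FIR_{total}}$ there.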
All maps are at the same angular resolution of 15$''$ (also the mid-
and FIR continuum data), except the CO 11$\to$10 map, which has a
resolution of $\sim$20$''$, and the {\sl Herschel}/SPIRE 250 $\mu$m data,
which have a resolution of 18$''$. In the following, we thus focus on
the values for density and UV field obtained from the ${\rm
[CII]}_{158\mu m}/{\rm [OI]}_{63\mu m}$ ratio since beam filling
factors are eliminated to the first order and because only the [O\,{\scriptsize I}]\ and
[C\,{\scriptsize II}]\ lines show emission in all velocity ranges, in contrast to the
CO lines that show no emission (or only marginal amounts) at the highest
velocities. The model continuum intensities are provided by MCDRT; the
model line intensities are computed by KOSMA-$\tau$.
\section{Results} \label{results}
\begin{figure}
\centering
\includegraphics[width=8.5cm, angle=0]{aver-spectra.pdf}
\caption{Spatially averaged spectrum of molecular and atomic lines
(the spectral resolution is typically 0.2--0.6 km s$^{-1}$). All
spectra within the central area of 40$''\times$40$''$ around S106 IR
were included. Based upon the different spectral features, we define
five major velocity ranges. The local standard of rest (lsr)
velocity of the S106 molecular cloud is approximately --3 km
s$^{-1}$.}
\label{spectra}
\end{figure}
\subsection{[O\,{\scriptsize I}]\ Spectral line map and average spectrum} \label{oi-line}
Figure~\ref{spectra-map} shows the [O\,{\scriptsize I}]\ spectral line map of the
region outlined in Fig.~\ref{overview} on a 6$''$ grid, representing
the original resolution of the data. The line profiles are very
complex, showing several velocity components and extended wing
emission, particularly in the blue range. Some of the spectral
features can be explained by self-absorption (see below).
Figure~\ref{spectra} displays the average spectrum (across the
40$''\times$40$''$ area indicated in Fig.~\ref{overview}) of
[O\,{\scriptsize I}]\ together with other tracers such as [C\,{\scriptsize II}]\ and molecular lines.
Based upon the [O\,{\scriptsize I}]\ line profile, the emission from other tracers, and
earlier studies of S106 \citep{Schneider2002, Schneider2003}, we
define and label five distinct velocity ranges to characterize the
velocity distribution in S106. We note that the terms `high-velocity' and
`outflow' used here are descriptive, comprising all gas dynamics
(molecular and atomic) provoked by stellar feedback
(radiation, wind, shocks). In the following sections, we single out
the possible origin of each of the different observed features.
\begin{itemize}
\item {\bf High-velocity blue emission (HV-blue)} \newline
v=--30 to --9 km s$^{-1}$, significant emission only observed
in [O\,{\scriptsize I}]\ and [C\,{\scriptsize II}], no emission in optically thin lines.
\item {\bf Blue outflow emission} \newline
v=--9 to --4 km s$^{-1}$, prominent emission in all
optically thick lines in the form of extended wings.
\item {\bf Bulk emission} \newline
v= --4 to 0.5 km s$^{-1}$, commonly defined velocity range for the
bulk emission of the associated molecular cloud, peak emission for
all observed lines, substructure in line profiles for optically thick
lines, strong self-absorption features in the [O\,{\scriptsize I}]\ line.
\item {\bf Red outflow emission} \newline
v= 0.5 to 8 km s$^{-1}$, broad line wings for CO lines, individual components in
[C\,{\scriptsize II}]\ and [O\,{\scriptsize I}]. At +3 km s$^{-1}$, there is a prominent dip in the
[O\,{\scriptsize I}]\ line and in all other line tracers that is due to
absorption\footnote{As already discussed and shown in more detail
in \citet{Schneider2003}, the dip seen in [O\,{\scriptsize I}], [C\,{\scriptsize II}],\, and
$^{12}$CO 2$\to$1 spectra gets filled in with emission in the
corresponding $^{13}$CO 2$\to$1 spectra. The ratio
$^{12}$CO/$^{13}$CO drops to values lower than 1 for this
velocity range over a large area on the western side of S106 IR
(see Fig.~8 in Schneider et al. 2003). The lack of $^{12}$CO
(and [O\,{\scriptsize I}]\ and [C\,{\scriptsize II}]) emission around +3 km s$^{-1}$ is thus due
to absorption of these lines becoming optically thick in cold
foreground material. This cold gas originates from a molecular
clump, associated with the extended molecular cloud.}.
\item {\bf High-velocity red emission (HV-red)} \newline
v=8 to 25 km s$^{-1}$, significant emission only observed in [C\,{\scriptsize II}]\ and [O\,{\scriptsize I}].
\end{itemize}
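The five velocity intervals defined above can be applied to any of the spectra as a simple windowed integration; a sketch, where \texttt{v} is the velocity axis in km s$^{-1}$ and \texttt{t\_mb} the brightness temperature in K:

```python
import numpy as np

# velocity intervals [km/s] as defined in the text
VELOCITY_RANGES = {
    "HV-blue":      (-30.0, -9.0),
    "outflow-blue": (-9.0, -4.0),
    "bulk":         (-4.0, 0.5),
    "outflow-red":  (0.5, 8.0),
    "HV-red":       (8.0, 25.0),
}

def integrate_ranges(v, t_mb):
    """Velocity-integrated intensity [K km/s] per range (trapezoidal
    rule).  v must be monotonically increasing."""
    result = {}
    for name, (v0, v1) in VELOCITY_RANGES.items():
        mask = (v >= v0) & (v <= v1)
        result[name] = float(np.trapz(t_mb[mask], v[mask])) if mask.sum() > 1 else 0.0
    return result
```

This windowing, applied per line and per position, yields the per-range entries of Table~\ref{tab1}.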
\begin{figure}
\centering
\includegraphics[width=8cm, angle=0]{OI-positions.pdf}
\caption{[O\,{\scriptsize I}]\ emission of the total velocity range from --30 to 25 km
s$^{-1}$ in contours overlaid on a map of 19 $\mu$m emission
(FORCAST, \citealt{Adams2015}). Positions for which we show
individual spectra of various lines and report the line integrated
intensities in Table~2 (in a 15$''$ beam) are indicated with {\bf a
cross} and labelled 1 to 6. The angular resolution of the 19
$\mu$m emission and the [O\,{\scriptsize I}]\ beam are indicated in the panel. For
comparison, the 15$''$ resolution we used for PDR modelling is also
given. Positions 1 to 6 do not always correspond to
peaks in [O\,{\scriptsize I}]\ emission, but also to peak emission in high- and low-J
CO lines (see text for details).}
\label{OI-F19}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=7.5cm, angle=0]{pos_POS1.pdf}
\includegraphics[width=7.5cm, angle=0]{pos_POS2.pdf}
\includegraphics[width=7.5cm, angle=0]{pos_POS3a.pdf}
\includegraphics[width=7.5cm, angle=0]{pos_POS3b.pdf}
\includegraphics[width=7.5cm, angle=0]{pos_POS3c.pdf}
\includegraphics[width=7.5cm, angle=0]{pos_POS4.pdf}
\includegraphics[width=7.5cm, angle=0]{pos_POS5a.pdf}
\includegraphics[width=7.5cm, angle=0]{pos_POS5b.pdf}
\includegraphics[width=7.5cm, angle=0]{pos_POS6.pdf}
\caption{Individual spectra with 15$''$ angular resolution at the positions indicated in Fig.~5.}
\label{spectra-single}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=15cm, angle=0]{channel_land_OI.pdf}
\caption{Channel map of [O\,{\scriptsize I}]\ emission in 2 km s$^{-1}$ velocity
bins. The contour levels are 5, 10, 20, 45, and 60 K km
s$^{-1}$. The logarithmic colour scale ranges from 1.5 to 60 K km
s$^{-1}$. The star indicates the binary system S106 IR and the
triangle S106 FIR.}
\label{oi-chan}
\end{figure*}
\subsection{Overview of [O\,{\scriptsize I}], [C\,{\scriptsize II}], and CO emission in S106} \label{lines}
Because the line profiles of [O\,{\scriptsize I}]\ and other atomic and molecular line
tracers are very complex and show a strong positional variation, we
show the spectra for the positions where we performed PDR modelling in
Fig.~\ref{spectra-single}, an [O\,{\scriptsize I}]\ channel map (Fig.~\ref{oi-chan}),
and overlays of different tracers in the main velocity ranges
(Appendix B). The spectra are extracted at the peak of emission in
different line tracers in the respective velocity range and reflect
different physical environments. They are indicated in
Fig.~\ref{OI-F19} where we plot the overall line integrated
[O\,{\scriptsize I}]\ emission as contours on a dust map at 19 $\mu$m
\citep{Adams2015}. This overlay indicates that the total [O\,{\scriptsize I}]\ emission
correlates very well with the warm dust, while the most prominent
peaks of [O\,{\scriptsize I}]\ emission (positions 2, 3a, 3b) are slightly shifted with
regard to the peaks of 19 $\mu$m emission. The line integrated
intensities and the total FIR continuum (used for PDR modelling) are
listed in Table~\ref{tab1}. We note that positions 3a and 3b appear more
than once in the table because there are peaks of emission in
[O\,{\scriptsize I}]\ at different velocities. \\
\begin{table*}
\caption{Velocity integrated intensities in a 15$''$ beam (20$''$
for CO 11$\to$10) at the locations indicated in Fig.~4. Each
position reflects a peak of emission in a certain velocity range
and one or more tracers and/or a source position: {\bf Pos 1}
is the southern HV-blue [O\,{\scriptsize I}]\ peak, closest to the central binary
system S106 IR; {\bf Pos 2} is the northern HV-blue [O\,{\scriptsize I}]\ peak;
{\bf Pos 3a} is the southern cavity blue outflow [O\,{\scriptsize I}]\ peak and the
southern HV-red [O\,{\scriptsize I}]\ peak; {\bf Pos 3b} corresponds to the YSO
S106 FIR, associated with H$_2$O maser emission; {\bf Pos 3c} is
the CO 16$\to$15 emission peak; {\bf Pos 4} is the northern cavity
blue outflow [O\,{\scriptsize I}]\ peak; {\bf Pos 5a} is the bulk CO 2$\to$1 peak
located in the dark lane; {\bf Pos 5b} is the NW bulk [O\,{\scriptsize I}]\ peak;
{\bf Pos 6} is the northern HV-red [O\,{\scriptsize I}]\ peak. The first column for
each line gives the intensity in the respective velocity range,
the second column the total line integrated (--30 to +25 km
s$^{-1}$) intensity, and the third column the percentage that the
velocity-integrated intensity in that range contributes to the
total line flux. The last column gives the total FIR flux,
determined as explained in Sect.~3.2. The values for the red
outflow velocity range (v=0.5 to 8 km s$^{-1}$) should be treated
with care because the [O\,{\scriptsize I}]\ and [C\,{\scriptsize II}]\ lines suffer from absorption
and the CO lines are weak or show no emission. The overall
uncertainty of the values given in this table is $\sim$20\%.}
\label{tab1}
\centering
\begin{tabular}{lc|lll|lll|lll|lll|l}
\hline\hline
& & & & & & & & & & & & & & \\
Pos. & Offset & [O\,{\scriptsize I}]\tablefootmark{a} & [O\,{\scriptsize I}]\tablefootmark{a} & \% & [C\,{\scriptsize II}]\tablefootmark{b} & [C\,{\scriptsize II}]\tablefootmark{b} & \% & $^{12}$CO\tablefootmark{b} &
$^{12}$CO\tablefootmark{b} & \% & $^{12}$CO\tablefootmark{b} & $^{12}$CO\tablefootmark{b} & \% & Total \\
& ($''$,$''$) & & & & & & & {\small 16$\to$15} & {\small 16$\to$15} & & {\small 11$\to$10} & {\small 11$\to$10} &
& FIR\tablefootmark{c}\\
& & & \tiny{(total\tablefootmark{d})} & & & \tiny{(total)} & & & \tiny{(total)} & & &\tiny{(total)} & & \\
\hline
\multicolumn{12}{l}{\rule[-3mm]{0mm}{8mm} {\bf High-velocity blue {\sl v=--30 to --9 km s$^{-1}$}}}\\
{\bf 1} & (-2,-2) & 13.4 & 28.9 & 46 & 45.0 & 308 & 15 & 3.6 & 45.7 & 8 & 4.7 & 36.3 & 13 & 24.1 \\
{\bf 2} & (7,12) & 8.0 & 24.2 & 33 & 30.0 & 314 & 10 & 1.0 & 25.8 & 4 & 3.9 & 30.0 & 13 & 23.4 \\
\hline
\multicolumn{12}{l}{\rule[-3mm]{0mm}{8mm} {\bf Outflow blue {\sl v=--9 to --4 km s$^{-1}$}}}\\
{\bf 3a} & (2,-20) & 4.3 & 21.5 & 20 & 82.8 & 292 & 28 & 4.6 & 25.4 & 18 & 6.2 & 24.8 & 25 & 15.3 \\
{\bf 3b} & (-15,-5) & 4.9 & 22.4 & 22 & 65.0 & 245 & 27 & 7.1 & 24.7 & 29 & 8.5 & 24.3 & 35 & 17.5 \\
{\bf 3c} & (8,8) & 6.7 & 24.7 & 27 & 69.9 & 331 & 21 & 22.3 & 34.1 & 65 & 15.4 & 34.0 & 45 & 24.8 \\
{\bf 4} & (0,15) & 7.2 & 23.2 & 31 & 50.5 & 273 & 18 & 13.6 & 16.1 & 84 & 12.0 & 25.5 & 47 & 19.7 \\
\hline
\multicolumn{12}{l}{\rule[-3mm]{0mm}{8mm} {\bf Cloud bulk emission {\sl v=--4 to 0.5 km s$^{-1}$}}}\\
{\bf 3b} & (-15,-5) & 9.6 & 22.4 & 43 & 96.4 & 245 & 39 & 4.6 & 24.6 & 19 & 10.9 & 24.3 & 45 & 17.5 \\
{\bf 5a} & (10,-5) & 4.8 & 22.1 & 21 & 98.9 & 337 & 29 & 4.6 & 43.1 & 11 & 14.8 & 37.7 & 39 & 20.4 \\
{\bf 5b} & (-10,15) & 7.4 & 19.0 & 39 & 57.4 & 205 & 28 & - & - & - & 8.1 & 20.3 & 40 & 12.4 \\
\hline
\multicolumn{12}{l}{\rule[-3mm]{0mm}{8mm} {\bf Outflow red {\sl v=0.5 to 8 km s$^{-1}$}}}\\
{\bf 3a} & (2,-20) & 4.7 & 21.5 & 22 & 38.2 & 292 & 13 & 6.9 & 25.7 & 27 & 4.0 & 24.5 & 16 & 15.3\\
\hline
\multicolumn{12}{l}{\rule[-3mm]{0mm}{8mm} {\bf High-velocity red {\sl v=8 to 25 km s$^{-1}$}}}\\
{\bf 6} & (10,25) & 5.4 & 14.7 & 37 & 16.8 & 170 & 10 & 2.1 & 19.5 & 11 & 0.6 & 1.50 & 4 & 9.8\\
{\bf 3a} & (2,-20) & 3.2 & 21.5 & 15 & 46.1 & 292 & 16 & 0.5 & 25.7 & 2 & 0.3 & 24.5 & 1 & 15.3\\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{In units of [10$^{-3}$ erg s$^{-1}$ sr$^{-1}$ cm$^{-2}$]. Conversion I(OI) [erg s$^{-1}$ sr$^{-1}$ cm$^{-2}$] = $1.0943\times10^{-4}$ I(OI) [K km s$^{-1}$].}
\tablefoottext{b}{In units of [10$^{-5}$ erg s$^{-1}$ sr$^{-1}$ cm$^{-2}$]. Conversion I(CII) [erg s$^{-1}$ sr$^{-1}$ cm$^{-2}$] = $7.0354\times10^{-6}$ I(CII) [K km s$^{-1}$].
Conversion I(CO16-15) [erg s$^{-1}$ sr$^{-1}$ cm$^{-2}$] = $6.3953\times10^{-6}$ I(CO16-15) [K km s$^{-1}$]. Conversion I(CO11-10) [erg s$^{-1}$ sr$^{-1}$ cm$^{-2}$] =
$2.0835\times10^{-6}$ I(CO11-10) [K km s$^{-1}$].}
\tablefoottext{c}{In units of [erg s$^{-1}$ sr$^{-1}$ cm$^{-2}$].}
\tablefoottext{d}{The total velocity range is from --30 to +25 km s$^{-1}$.}
}
\end{table*}
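The conversion factors in the table footnotes follow from the Rayleigh--Jeans brightness-temperature relation, $I\,[\mathrm{cgs}] = (2 k_\mathrm{B}\nu^3/c^3)\times 10^5 \cdot I\,[\mathrm{K\,km\,s^{-1}}]$; a sketch (the rest frequencies below are standard literature values, quoted here as assumptions rather than taken from this paper):

```python
K_BOLTZ = 1.380649e-16      # Boltzmann constant [erg/K]
C_LIGHT = 2.99792458e10     # speed of light [cm/s]

def kkms_to_cgs_factor(freq_hz):
    """Factor converting K km/s to erg s^-1 cm^-2 sr^-1 for a line at rest
    frequency freq_hz; the 1e5 converts km/s to cm/s."""
    return 2.0 * K_BOLTZ * freq_hz ** 3 / C_LIGHT ** 3 * 1.0e5

# assumed rest frequencies [Hz]
NU = {"CII": 1900.5369e9, "OI63": 4744.77749e9,
      "CO16-15": 1841.3455e9, "CO11-10": 1267.0145e9}
```

Evaluating the factor at these frequencies reproduces the four footnote coefficients to better than 0.1\%.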
\noindent {\bf High-velocity blue emission} \\
\noindent
The most prominent feature of the [O\,{\scriptsize I}]\ spectra is the {\sl
high-velocity blue (HV-blue)} emission that appears as a single
component with an extended blue wing at temperatures up to 20 K at
positions {\bf 1, 2, and 3c} in Fig.~\ref{spectra-single}. The dip in
emission at --9 km s$^{-1}$ is most likely not caused by
self-absorption because the [C\,{\scriptsize II}]\ line has a very similar emission
profile (though it cannot be excluded that both lines are affected by
self-absorption). The channel map (Fig.~\ref{oi-chan}) clearly shows
how the HV-blue emission starts south-west of S106 IR at v=--20.6 km
s$^{-1}$ and then gradually develops a north-eastern component. Both
peaks together are best visible at v=--12.6 km s$^{-1}$. \\
\noindent {\bf Blue outflow emission} \\
\noindent
The channel map shows how the northern and southern cavity lobes are
outlined in [O\,{\scriptsize I}]\ emission in the {\sl blue outflow} velocity range
(panels --8.6 km s$^{-1}$ to --4.6 km s$^{-1}$). Position {\bf Pos
3c} represents the peak of blue outflow emission in the CO 16$\to$15
line, and {\bf Pos 4} is the peak position of the blue outflow in
[O\,{\scriptsize I}]\ emission in the northern cavity. \\
\noindent {\bf Bulk emission} \\
\noindent
Generally, the {\sl bulk emission} velocity range shows a very clumpy
distribution around S106 IR (panels --2.6 and --0.6 km s$^{-1}$). The
`hole' in the emission around S106 IR is partly caused by
self-absorption, but also reflects a real lack of gas since all
molecular line tracers show no emission (see Sec.~\ref{lane}). The
[O\,{\scriptsize I}]\ spectra (Fig.~\ref{spectra-single}) are very complex for this
velocity range, showing mostly several components, which can be due to
several PDR layers on clump surfaces along the line of sight and/or
intrinsic self-absorption effects. Future observations of the more
optically thin [O\,{\scriptsize I}]\ 145 $\mu$m line may help to decide between the
different possibilities. The [O\,{\scriptsize I}]\ 145 $\mu$m/63 $\mu$m ratio
determined from velocity unresolved data \citep{Schneider2003,
Stock2015} yields a value of 0.15--0.17, indicating that one or both
of the lines is optically thick. We note that the molecular cloud
velocity is --1 km s$^{-1}$, indicated by the $^{13}$CO 2$\to$1 line
(Fig.~\ref{spectra-single}), while the [O\,{\scriptsize I}]\ line has a centre
velocity of typically --4 km s$^{-1}$. A very prominent peak of
emission is found at position {\bf Pos 3b} which arises from a single
clump SW of S106 IR (position S106 FIR, location of the H$_2$O maser),
clearly visible in panel --2.6 km s$^{-1}$ in the channel map.
Position {\bf 5a} represents the `dark lane', which is strong in
$^{13}$CO emission and weaker in the atomic and high-J CO lines, and
{\bf 5b} is the NW peak of [O\,{\scriptsize I}]\ emission. \\
\noindent {\bf Red outflow } \\
\noindent
The velocity range from 0.5 to 8 km s$^{-1}$ is strongly affected by
absorption; basically all [O\,{\scriptsize I}]\ spectra show a dip between 1 and 3 km
s$^{-1}$ (see footnote on page 6). Significant emission is only found
around v=8 km s$^{-1}$ in the southern cavity at {\bf Pos 3a}. \\
\noindent {\bf High-velocity red emission} \\
\noindent
The [O\,{\scriptsize I}]\ emission at velocities higher than v=8 km s$^{-1}$ is very
different from the HV-blue emission; it is more diffuse and extends
further away from S106 IR, mostly outlining the eastern cavity
walls. Position {\bf 6} represents the [O\,{\scriptsize I}]\ peak in the northern
cavity.
\begin{figure*}
\centering
\includegraphics[width=13cm, angle=0]{RatioPlotCIIvsOI_updated8.pdf}
\caption{PDR diagnostic diagrams for observed fine-structure lines
based on the KOSMA-$\tau$ PDR model for a clump of
$M=100$~M$_\odot$. $[CII]_{158\mu m}/[OI]_{63\mu m}$ against
$([CII]_{158\mu m}+[OI]_{63\mu m})/\Sigma_{FIR}$, where
$\Sigma_{FIR}$ is the continuum intensity between 10 and 1000~$\mu
m$ in the respective velocity range determined as described in
Sect.~3.2. The various markers indicate the different positions
corresponding to different velocity ranges. The density $n$ and
the Draine field $\chi$ are given for each position.}
\label{Fig:ratio1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=13cm, angle=0]{RatioPlotCO16vsCO11_updated8.pdf}
\caption{PDR diagnostic diagrams for observed fine-structure lines
based on the KOSMA-$\tau$ PDR model for a clump of
$M=100$~M$_\odot$. CO(16-15)/CO(11-10) against $([CII]_{158\mu
m}+[OI]_{63\mu m})/\Sigma_{FIR}$, where $\Sigma_{FIR}$ is the
continuum intensity between 10 and 1000~$\mu m$ in the respective
velocity range determined as described in Sec.~3.2. The various
markers indicate the different positions corresponding to
different velocity ranges. The density $n$ and the Draine field
$\chi$ are given for each position.}
\label{Fig:ratio2}
\end{figure*}
\subsection{Shocks in S106} \label{shocks}
We did not detect the SiO (2$\to$1) or (5$\to$4) lines, classical
tracers of stationary C-type (e.g. \citealt{Schilke97},
\citealt{Gusdorf081}), J-type (e.g. \citealt{Guillet09}), or
non-stationary CJ-type (e.g. \citealt{Gusdorf082}) shocks. However, the
presence of shocks that can give rise to the [O\,{\scriptsize I}]\ emission observed at
high blue and red velocities and at outflow velocities cannot be
excluded. These non-detections could indeed mean that the
shocks are either self-irradiated (see the study of shocks with
radiative precursors by \citealt{Hollenbach1989}) or externally
irradiated (e.g. \citealt{Lesaffre2013}). In the latter case, the
irradiation could come from the FUV field of the central
protostars. In shocks where irradiation plays a significant role, SiO
is expected to be converted to Si and Si$^+$. These species can be
observed through three transitions: the $^3$P$_1$--$^3$P$_0$ at
129.68~$\mu$m and the $^3$P$_2$--$^3$P$_1$ at 68.47~$\mu$m for the Si
lines, and the $^2$P$_{3/2}$--$^2$P$_{1/2}$ at 34.82~$\mu$m for the Si$^+$ line. The
ground-state Si transition at 129.68~$\mu$m has been marginally
detected by {\sl Herschel}-PACS, contrary to its excited counterpart
at 68.47~$\mu$m. None of the Si lines has been detected by ISO in
contrast to the Si$^+$ transition, which was observed. The presence
of atomic and ionized silicon in the gas phase, combined with the
detection of optical and near-infrared lines reported by
\citet{Riera89}, hints at the presence of irradiated shocks in the S106
region. Though we briefly discuss possible shocks in
Sect.~\ref{hv-blue-red}, a more sophisticated attempt to characterize
their type and the role they play in the excitation of [O\,{\scriptsize I}]\ and
[C\,{\scriptsize II}]\ lies outside the scope of the present study and will be
investigated in a separate publication.
\begin{table*}[htb]
\begin{center}
\caption{Data points and model results from Figure~\ref{Fig:ratio1}
for $\mathrm{[CII]}_{158\mu m}/\mathrm{[OI]}_{63\mu m}$ against
$(\mathrm{[CII]}_{158\mu m}+\mathrm{[OI]}_{63\mu
m})/\Sigma_\mathrm{FIR_{range}}$. The abbreviation `range'
signifies that we scale the total FIR flux with the fractional
[O\,{\scriptsize I}]\ intensity in this velocity range (see text for further
details). The values for density n and radiation field $\chi$
(Cols. 7 and 8) are given for the nominal observed [O\,{\scriptsize I}]\ intensity
and for an [O\,{\scriptsize I}]\ intensity increased by a factor of two (in
parentheses).} \label{tab:ratio1}
\begin{tabular}{lclccccc}
\hline\hline \rule[-3mm]{0mm}{8mm}
Position & velocity range & & Offset &[CII]/[OI]&$\frac{\mathrm{[CII]}+\mathrm{[OI]}}{\Sigma_\mathrm{FIR_{range}}}$&$\log n$&$\log \chi_\mathrm{Draine}$\\
& [km s$^{-1}$] & &$({''},{''})$&$(\times 10^{-2})$&$(\times 10^{-3})$& [cm$^{-3}$]&\\ \hline
\multicolumn{8}{c}{\rule[-3mm]{0mm}{8mm} {\bf high-velocity blue}}\\ \hline
{\bf 1} & -30/-9 & S106 IR & (-2,-2) & 3.4 & 1.24 & 4.8 (6.4) & 4.8 (4.4)\\
{\bf 2} & -30/-9 & northern HV-blue [O\,{\scriptsize I}]\ & (7,12) & 3.8 & 1.08 & 4.7 (5.0) & 4.9 (4.6) \\ \hline
\multicolumn{8}{c}{\rule[-3mm]{0mm}{8mm} {\bf outflow blue}}\\ \hline
{\bf 3a} & -9/-4 & southern cavity blue [O\,{\scriptsize I}]\ & (2,-20) & 19.4 & 1.68 & 4.1 (4.5) & 4.0 (4.0) \\
{\bf 3b} & -9/-4 & S106 FIR & (-15,-5) & 13.3 & 1.45 & 4.3 (4.6) & 4.3 (4.2) \\
{\bf 3c} & -9/-4 & CO 16$\to$15 peak & (8,8) & 10.5 & 1.10 & 4.3 (4.6) & 4.5 (4.4) \\
{\bf 4} & -9/-4 & northern cavity blue [O\,{\scriptsize I}]\ & (0,15) & 7.0 & 1.26 & 4.5 (4.8) & 4.6 (4.4) \\ \hline
\multicolumn{8}{c}{\rule[-3mm]{0mm}{8mm} {\bf cloud bulk emission}}\\ \hline
{\bf 3b} & -4/0.5 & S106 FIR & (-15,-5) & 10.1 & 1.41 & 4.4 (4.7) & 4.4 (4.3) \\
{\bf 5a} & -4/0.5 & dark lane & (10,-5) & 20.8 & 1.31 & 4.1 (4.4) & 4.1 (4.1) \\
{\bf 5b} & -4/0.5 & NW bulk [O\,{\scriptsize I}]\ & (-10,15) & 7.8 & 1.67 & 4.5 (4.8) & 4.4 (4.2) \\ \hline
\multicolumn{8}{c}{\rule[-3mm]{0mm}{8mm} {\bf outflow red}}\\ \hline
{\bf 3a} & 0.5/8 & southern cavity red [O\,{\scriptsize I}]\ & (2,-20) & 8.1 & 1.52 & 4.4 (4.8) & 4.5 (4.3) \\ \hline
\multicolumn{8}{c}{\rule[-3mm]{0mm}{8mm} {\bf high-velocity red}}\\ \hline
{\bf 6} & 8/25 & northern HV-red [O\,{\scriptsize I}]\ & (10, 25) & 3.1 & 1.54 & 4.8 (6.3) & 4.7 (4.4) \\
{\bf 3a} & 8/25 & southern HV-red [O\,{\scriptsize I}]\ & (2,-20) & 14.3 & 1.61 & 4.2 (4.6) & 4.2 (4.1) \\
\hline\\
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[htb]
\begin{center}
\caption{Data points and model results from Figure~\ref{Fig:ratio2}
for CO(16-15)/CO(11-10) against $(\mathrm{[CII]}_{158\mu
m}+\mathrm{[OI]}_{63\mu m})/\Sigma_\mathrm{FIR_{range}}$. The
values for density n and radiation field $\chi$ (Cols. 7 and 8)
are given for the nominal observed [O\,{\scriptsize I}]\ intensity and for an
[O\,{\scriptsize I}]\ intensity increased by a factor of two (in
parentheses).} \label{tab:ratio2}
\begin{tabular}{lclccccc}
\hline\hline \rule[-3mm]{0mm}{8mm}
Position & velocity range & & Offset &$\frac{\mathrm{CO}(16-15)}{\mathrm{CO}(11-10)}$&$\frac{\mathrm{[CII]}+\mathrm{[OI]}}{\Sigma_\mathrm{FIR_{range}}}$&$\log n$&$\log \chi_\mathrm{Draine}$\\
& [km s$^{-1}$] & &$({''},{''})$& &$(\times 10^{-3})$&[cm$^{-3}$] &\\ \hline
\multicolumn{8}{c}{\rule[-3mm]{0mm}{8mm} {\bf high-velocity blue}}\\ \hline
{\bf 1} & -30/-9 & S106 IR & (-2,-2) & 0.76 & 1.24 & 4.4 (4.6) & 4.6 (4.3) \\
{\bf 2} & -30/-9 & northern HV-blue [O\,{\scriptsize I}]\ & (7,12) & 0.27 & 1.08 & 4.2 (4.3) & 4.4 (4.1) \\ \hline
\multicolumn{8}{c}{\rule[-3mm]{0mm}{8mm} {\bf outflow blue}}\\ \hline
{\bf 3a} & -9/-4 & southern cavity blue [O\,{\scriptsize I}]\ & (2,-20) & 0.74 & 1.68 & 4.5 (4.6) & 4.4 (4.1) \\
{\bf 3b} & -9/-4 & S106 FIR & (-15,-5) & 0.84 & 1.45 & 4.5 (4.7) & 4.5 (4.2) \\
{\bf 3c} & -9/-4 & CO 16$\to$15 peak & (8,8) & 1.44 & 1.10 & 4.7 (4.9) & 4.9 (4.6) \\
{\bf 4} & -9/-4 & northern cavity blue [O\,{\scriptsize I}]\ & (0,15) & 1.13 & 1.26 & 4.6 (4.8) & 4.7 (4.4) \\ \hline
\multicolumn{8}{c}{\rule[-3mm]{0mm}{8mm} {\bf cloud bulk emission}}\\ \hline
{\bf 3b} & -4/0.5 & S106 FIR & (-15,-5) & 0.42 & 1.41 & 4.3 (4.4) & 4.4 (4.1) \\
{\bf 5a} & -4/0.5 & dark lane & (10,-5) & 0.31 & 1.31 & 4.2 (4.3) & 4.3 (4.0) \\
{\bf 5b} & -4/0.5 & NW bulk [O\,{\scriptsize I}]\ & (-10,15) & - & - & - & - \\ \hline
\multicolumn{8}{c}{\rule[-3mm]{0mm}{8mm} {\bf outflow red}}\\ \hline
{\bf 3a} & 0.5/8 & southern cavity red [O\,{\scriptsize I}]\ & (2,-20) & 1.74 & 1.52 & 4.9 (7.0) & 4.9 (3.2) \\ \hline
\multicolumn{8}{c}{\rule[-3mm]{0mm}{8mm} {\bf high-velocity red}}\\ \hline
{\bf 6} & 8/25 & northern HV-red [O\,{\scriptsize I}]\ & (10, 25) & 3.52 & 1.55 & 6.3 (6.3) & 6.0 (6.0) \\
{\bf 3a} & 8/25 & southern HV-red [O\,{\scriptsize I}]\ & (2,-20) & 2.11 & 1.61 & 5.3 (5.4) & 4.8 (4.5) \\
\hline\\
\end{tabular}
\end{center}
\end{table*}
\section{Analysis and discussion} \label{discuss}
\subsection{Emission from different velocity ranges} \label{ranges}
In this section we qualitatively and quantitatively discuss the
emission of the observed [C\,{\scriptsize II}], [O\,{\scriptsize I}], and CO lines in the different
velocity intervals. Each velocity range reflects different properties
of the gas in terms of radiation field, temperature, and density, and
the emission can have a different origin. In particular the [O\,{\scriptsize I}]\ and
high-J CO lines can originate from PDRs, i.e. UV-heated clump
surfaces and cavity walls, and/or from shocks due to disk--envelope
interactions (accretion shock) and local shocks when the stellar winds
hit the cavity walls. One way to determine the origin of the emission
is by studying its velocity structure (see Sects. \ref{hv-blue-red}
and \ref{outflow-blue-red}). The excitation energies for [C\,{\scriptsize II}]\ and
[O\,{\scriptsize I}]\ emission are 91 and 228 K, respectively. These transitions are
easily excited by collisions with H and H$_2$ in a PDR with different
temperature layers. The critical densities for [O\,{\scriptsize I}]\ and [C\,{\scriptsize II}]\ are then
$5\times10^5$ cm$^{-3}$ and $3\times10^3$ cm$^{-3}$, respectively. The high-J CO
lines (16$\to$15 and 11$\to$10) have higher excitation energies,
$\sim$750 K and $\sim$365 K, and when the emission is purely thermal
these lines probe hot gas associated with the PDR/molecular cloud
layer at critical densities of 2 10$^6$ cm$^{-3}$ and 4 10$^5$
cm$^{-3}$, respectively. Though we discuss a possible origin from
shocks in the following subsections, we use the observed FIR line
ratios and the FIR flux only for PDR modelling.
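The critical densities quoted above follow from the two-level estimate $n_{\rm crit} = A_{ul}/\gamma_{ul}$. A minimal numerical sketch: the Einstein $A$ coefficients are the standard values for these transitions, while the collisional rate coefficients $\gamma_{ul}$ are representative assumed values for collisions with H/H$_2$ at PDR temperatures.

```python
# Two-level critical density n_crit = A_ul / gamma_ul for the FIR
# fine-structure lines. Einstein A values are the standard ones; the
# collisional rate coefficients (cm^3 s^-1) are assumed, representative
# values for H/H2 collisions at PDR temperatures.
A = {"CII_158": 2.3e-6, "OI_63": 8.9e-5}        # Einstein A, s^-1
gamma = {"CII_158": 7.7e-10, "OI_63": 1.8e-10}  # assumed, cm^3 s^-1

n_crit = {line: A[line] / gamma[line] for line in A}
for line, n in n_crit.items():
    print(f"{line}: n_crit ~ {n:.1e} cm^-3")
```

With these rates the estimate recovers the quoted values of a few 10$^3$ cm$^{-3}$ for [C II] and several 10$^5$ cm$^{-3}$ for [O I].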
The observed line intensities in the different velocity ranges,
together with the total line integrated intensities and the fractional
intensities, are listed in Table~\ref{tab1}. All results from PDR
modelling for the
individual velocity ranges are summarized in Fig.~\ref{Fig:ratio1} and
Fig.~\ref{Fig:ratio2} where we show the KOSMA-$\tau$ results for the
${\rm [CII]}_{158\mu m}/{\rm [OI]}_{63\mu m}$ and CO(16-15)/CO(11-10)
line ratios as iso-contours. Dashed lines show contours of constant
FUV fields, while black contours represent lines of constant
density. In Table~\ref{tab:ratio1} and \ref{tab:ratio2} we summarize
the observed ratios and give the model parameters corresponding to
their position in the diagnostic Figs.~\ref{Fig:ratio1} and
\ref{Fig:ratio2}.
Using the observed line ratios as diagnostics for PDR modelling
involves several caveats which need to be kept in mind. Firstly, we
use a non-clumpy PDR model because a more sophisticated set-up, as was
used when studying the Orion Bar \citep{Andree2017}, is beyond the
scope of this paper. Secondly, the [O\,{\scriptsize I}]\ 63 $\mu$m line and to a lesser
extent the [C\,{\scriptsize II}]\ 158 $\mu$m line are affected by self-absorption. This
becomes obvious in the line profiles (Fig.~\ref{spectra-single}), and
is indicated by the low [O\,{\scriptsize I}]\ 145 $\mu$m/63 $\mu$m ratio of $<0.2$
\citep{Schneider2003, Stock2015}. If the [O\,{\scriptsize I}]\ 63 $\mu$m line is only
moderately optically thick ($\tau \sim$1--2), \citet{Liseau2006}
showed that self-absorption is responsible for the low ratio. We thus
additionally constrained the PDR models using an [O\,{\scriptsize I}]\ intensity for
all velocity ranges that is a factor of 2, 4, and 10 higher. This is only
a first approximation, but it allows us to estimate how the density
and the radiation field change with varying [O\,{\scriptsize I}]\ line intensity. Using
a much higher [O\,{\scriptsize I}]\ intensity (4 and 10 times the observed value)
results in higher densities (typically 10$^{5-6}$ cm$^{-3}$) for all
positions but in a much lower radiation field (typically $\chi
\sim$10$^{2-3}$). Such a low radiation field is not supported by
observations; for example, we determine from {\sl Herschel} flux
measurements (see Sec.~\ref{compare}) a value of $\chi \sim$(2-4)
10$^4$ as a lower limit. We thus only use the results of a PDR model
with the nominal [O\,{\scriptsize I}]\ intensity, and one that is a factor of 2 higher
(given in parentheses in Cols.~7 and 8 of Tables~\ref{tab:ratio1} and
\ref{tab:ratio2}).
\begin{figure*}
\centering
\includegraphics[width=7cm, angle=0]{OI_on_F19_-30_-9-paper.pdf}
\includegraphics[width=7cm, angle=0]{OI_on_F19_8_25-paper.pdf}
\caption{[O\,{\scriptsize I}]\ line integrated emission overlaid as contours on 19
$\mu$m emission from dust \citep{Adams2015}. The left panel shows
the high-velocity blue emission and the right panel the
high-velocity red emission. The dust emission ranges between 0 and
2.6 Jy/pix, the [O\,{\scriptsize I}]\ contour lines go from 30\% to 100\% of maximum
intensity (397 K km s$^{-1}$ for HV-blue and 121 K km s$^{-1}$ for
HV-red).}
\label{OI-HV}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=7cm, angle=0]{OI_on_VLA_-9_-4_paper.pdf}
\includegraphics[width=7cm, angle=0]{OI_on_VLA_05_8_paper.pdf}
\caption{[O\,{\scriptsize I}]\ line integrated emission overlaid as contours on cm
emission from the VLA \citep{Bally1983}. The left panel shows the
blue outflow range and the right panel the red emission. The
[O\,{\scriptsize I}]\ contour lines go from 36 to 156 in steps of 10 K km s$^{-1}$
for the blue outflow and from 40 to 180 in steps of 10 K km s$^{-1}$ for the
red outflow. The star indicates the binary system S106
IR and the triangle S106 FIR.}
\label{OI-outflow}
\end{figure*}
\subsubsection{High-velocity blue and red range} \label{hv-blue-red}
The HV-blue (v$<$--9 km s$^{-1}$) [O\,{\scriptsize I}]\ emission is spatially
concentrated in two regions north-east (Pos 1) and south-west (Pos 2)
of S106 IR (Fig.~\ref{OI-HV}). Position 1 is correlated with a 19 $\mu$m
dust peak, and Pos 2 is located very close to S106 IR. It is
remarkable that at Pos 1, the [O\,{\scriptsize I}]\ emission at these high blue
velocities contributes $\sim$50\% of the total line integrated
emission at this position (see Table~\ref{tab1}). The [O\,{\scriptsize I}]\ HV-blue
emission corresponds well to the [C\,{\scriptsize II}]\ emission distribution
(Fig.~\ref{channel-hv-blue} in Appendix B), though the latter is more
extended and outlines the cavity better. The $^{12}$CO 2$\to$1
emission in this velocity range (Fig.~\ref{channel-hv-blue} in
Appendix B) traces the molecular outflow and the cold gas of the dark
lane, but is probably more affected by beam dilution (11$''$ resolution
compared to 6$''$ for [O\,{\scriptsize I}]). The $^{12}$CO 16$\to$15 and 11$\to$10
lines show only very weak emission at some positions
(Fig.~\ref{spectra-single}).
The HV-red (v$>$8 km s$^{-1}$) [O\,{\scriptsize I}]\ emission is more dispersed and has
two peaks along the eastern limb-brightened cavity wall, and a
northern peak that is not correlated with the dust. While the overall
[C\,{\scriptsize II}]\ emission in this velocity range corresponds well with [O\,{\scriptsize I}], the
$^{12}$CO 2$\to$1 line is only visible at the position of the northern
clump (Fig. B.5).
There are three possible explanations for the origin of the
high-velocity blue and red [O\,{\scriptsize I}]\ emission: 1) disk wind in the context
of an accretion shock, 2) shocks caused by the stellar wind hitting
the inside of the dark lane, or 3) classical PDR emission from the
inner surface of the dark lane. Further details of each possibility
are given below: \\
\noindent {\bf 1.} An {\bf {\sl accretion shock}} can occur because
S106 IR is surrounded by a disk-like structure. However, considering
the separation of less than 0.2 AU between the two stars, the
existence of a circumbinary disk is very unlikely; it is probably
only one disk, or its remains, that was seen in
cm-interferometry. Adopting what is known for low-mass stars (see
\citealt{Hartmann2016} for a review), accretion of material from the
disk takes place in small-scale (sub-pc) flows along magnetic field
lines (see e.g. Fig.~1 in Hartmann et al.). The material has high
free-fall velocities (of the order of several hundred km s$^{-1}$)
that then cause shocks at the stellar surface. However, the resulting
shock heats the gas to very high temperatures ($\sim$10$^6$ K), which
is far above the excitation temperature of the [O\,{\scriptsize I}]\ 63 $\mu$m line. Alternatively, the
HV [O\,{\scriptsize I}]\ emission could originate from the disk that produces a bipolar
flow or jet driven by accretion energy. The observed HV-blue
[O\,{\scriptsize I}]\ emission is indeed very collimated and would fit into this
scenario of an atomic jet, emanating from the disk--envelope--star
system. The projected velocity (towards the observer) is ${\rm
v}_{lsr}\sim$25 km s$^{-1}$, so that the real velocity is ${\rm v} =
{\rm v}_{lsr}/\sin(\alpha),$ where $\alpha$ is the inclination angle
of the ionized lobes. \citet{Hippelein1981} and \citet{Solf1982}
determined the angle to be around 15$^\circ$ so that the velocity is
approximately 100 km s$^{-1}$ and thus consistent with high-velocity
dissociative J-type shocks (see \citealt{Hollenbach1985,
Hollenbach1989} for theory, and \citet{Nisini2015,Leurini2015} for
observations). However, there are two arguments against this shock
scenario on a sub-pc scale. Firstly, the HV-red [O\,{\scriptsize I}]\ emission is not
collimated in the same way as the HV-blue emission (Fig.~\ref{OI-HV}),
as would be expected in the case of a symmetric atomic jet. Moreover,
the emission is much more extended (up to several parsecs) and
correlates well with the cavity walls. Secondly, the CO 16$\to$15
line as a typical J-type shock tracer has a very low line intensity
(Table~\ref{tab1}) in this velocity range and is only visible as an
extended wing (Fig.~\ref{spectra-single}).\\
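The deprojection quoted in point 1 above can be checked numerically; this is a sketch using the values from the text (${\rm v}_{lsr}\sim$25 km s$^{-1}$ and an inclination of the ionized lobes of $\sim$15$^\circ$).

```python
import math

# Deprojection of the observed line-of-sight jet velocity:
# v = v_lsr / sin(alpha), with alpha the inclination angle of the
# ionized lobes (values taken from the text).
v_lsr = 25.0                # km/s, projected toward the observer
alpha = math.radians(15.0)  # inclination angle of the lobes

v_true = v_lsr / math.sin(alpha)
print(f"deprojected jet velocity ~ {v_true:.0f} km/s")
```

This reproduces the $\sim$100 km s$^{-1}$ quoted above, in the regime of dissociative J-type shocks.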
\noindent {\bf 2.} S106 IR consists of an OB star system that has a
strong {\bf {\sl stellar wind}} of around 200 km s$^{-1}$
\citep{Hippelein1981}. The wind is a source of high kinetic energy and
develops a shock when it expands into the ambient medium, in this case
hitting the inner part of the dark lane and the molecular cloud
fragments close to S106 IR. The shocked gas can then cool via the
[O\,{\scriptsize I}]\ 63 $\mu$m line. \\
\noindent {\bf 3.} Given the proximity of the exciting S106 IR binary
system, the HV [O\,{\scriptsize I}]\ emission can be explained by {\bf {\sl PDRs}} on
the surface of surrounding clumps and from the backside of the dark
lane. From the PDR model (Sec.~\ref{fit}), we derive a density of 5--6
10$^4$ cm$^{-3}$ and a UV field of $\chi \sim$7 10$^4$ from the ratio
[C\,{\scriptsize II}]/[O\,{\scriptsize I}]\ for S106 IR and the northern HV-blue and HV-red peak
emission positions. The southern HV-red [O\,{\scriptsize I}]\ peak has a lower density
of 10$^4$ cm$^{-3}$ and a UV field of $\chi$=1.6 10$^4$ from
[C\,{\scriptsize II}]/[O\,{\scriptsize I}]. These density values increase to 2 10$^6$ cm$^{-3}$ (while
the radiation field becomes slightly lower, typically $\chi$ around a
few times 10$^4$) when the [O\,{\scriptsize I}]\ intensity is increased by a factor of
two. The density and radiation field obtained from the
CO(16$\to$15)/CO(11$\to$10) line ratio for the HV-blue emission are
lower (n=1.5 and 2.8 10$^4$ cm$^{-3}$ and $\chi$=2--4 10$^4$) and stay
below 10$^5$ for the density even if the [O\,{\scriptsize I}]\ intensity
increases. Modelling the HV-red CO emission is tricky because of the
low line intensities (see Table~\ref{tab1}). For Pos 6, we derive a
density of n$\sim$2 10$^6$ cm$^{-3}$ for both original and increased
[O\,{\scriptsize I}]\ intensity, but the radiation field with a value of $\chi$=10$^6$
is uncharacteristically high. \\
At first sight, it is puzzling that in the high-velocity blue range we
observe strong [O\,{\scriptsize I}]\ emission but nearly no CO 16$\to$15 emission, even
though the two lines have similar critical densities. Because CO
16$\to$15 has a higher excitation temperature (750 K) than [O\,{\scriptsize I}]\ (228
K), and emits much less prominently in this velocity range, we assume
that this high-velocity gas is not strongly heated and consists only
of gas with high densities ($>$10$^6$ cm$^{-3}$) at T$\sim$300 K. The
CO 16$\to$15 line is then mostly subthermally excited.
\subsubsection{Outflow blue and red [O\,{\scriptsize I}]\ emission} \label{outflow-blue-red}
The [O\,{\scriptsize I}]\ emission in the blue and red outflow velocity range (--9 to --4
km s$^{-1}$ and 0.5 to 8 km s$^{-1}$, respectively) outlines precisely
the cavity of the bipolar nebula, illustrated in
Fig.~\ref{OI-outflow}, which shows [O\,{\scriptsize I}]\ contours overlaid on a cm VLA
image \citep{Bally1983}, and in Fig.~B.2. The cm emission delineates
the H\,{\scriptsize II}\ region but avoids the dark lane. \citet{Bally1983} argue
that the density in the lane is so high that all ionizing radiation is
absorbed (see Sec.~\ref{lane}). Similar to the [C\,{\scriptsize II}]\ emission
distribution \citep{Simon2012}, the red [O\,{\scriptsize I}]\ outflow emission is
absent in the northern lobe. Both [O\,{\scriptsize I}]\ and [C\,{\scriptsize II}]\ red outflow emission
thus stem mostly from swept-up gas of the cavity walls from the {\sl
backside} of the southern lobe (since the front side no longer
exists). In addition, we observe limb-brightened emission at the
cavity walls and, closer to S106 IR, the [O\,{\scriptsize I}]\ emission in the blue
outflow range is also correlated with the clump containing S106
FIR. What is causing the gas dynamics is not clear. The stellar wind
of S106 IR can provoke shocks when it hits (mechanically) the cavity
walls. However, the radiation of S106 IR ionizes the interface layer
between the H\,{\scriptsize II}\ region and the molecular cloud and creates a
classical PDR. A mixed scenario involving `irradiated shocks'
\citep{Lesaffre2013} and/or `moving PDRs' \citep{Stoerzer1998} is also
possible. From our modelling, we obtain average densities at the three
cavity positions (3a,b and 4) of 1.3--5.6 10$^4$ cm$^{-3}$ at a rather
constant radiation field of $\chi \sim$ a few 10$^4$, depending on
position and ratio ([C\,{\scriptsize II}]\ and [O\,{\scriptsize I}]\ or CO). Interestingly, the values
for density and radiation field do not change much if we increase the
[O\,{\scriptsize I}]\ intensity. Though the [C\,{\scriptsize II}]\ line and to a lesser extent the
[O\,{\scriptsize I}]\ line can also be excited by collisions with electrons, we assume
that only a very small fraction of [C\,{\scriptsize II}]\ emission arises from the
H\,{\scriptsize II}\ region because we detected only very weak [N\,{\scriptsize II}]\ 205 $\mu$m
emission. Generally, this line arises only from the H\,{\scriptsize II}\ region, so
that its detection, together with a sufficiently high [N\,{\scriptsize II}]\ to [C\,{\scriptsize II}]\ line
ratio, would indicate that a significant amount of [C\,{\scriptsize II}]\ also
emerges from the ionized gas phase \citep{Heiles1994}. This is not the
case for S106.
\begin{figure}
\centering
\includegraphics[width=7cm, angle=0]{s106-co1615-gtc-paper.pdf}
\caption{Optical image from the Gran Telescopio Canaries (colour) with
an overlay of CO 16$\to$15 line integrated emission (contour levels 18.6 to
83.8 in steps of 6.5 K km s$^{-1}$) in the blue outflow velocity
range (--9 to --4 km s$^{-1}$). The dark lane is indicated by one
black contour (3 Jy) of SHARC 350 $\mu$m emission (see Fig.~13).}
\label{blue-outflow-co1615}
\end{figure}
In contrast to [O\,{\scriptsize I}], the emission distribution of the CO 16$\to$15 line
(Fig.~\ref{blue-outflow-co1615}) in this velocity range shows a strong
spatial correlation with the `dark lane', peaking at position 3c, which
shows the highest values of CO 16$\to$15 emission
(Table~\ref{tab1}). Figure B.2 shows that the peak of emission in the
low-J CO lines ($^{12}$CO and $^{13}$CO 2$\to$1) is slightly shifted
towards S106 IR. Because the lane appears prominently in the optical
image as a dark feature and consists of cold gas (see SHARC image,
Fig.~\ref{bulk-co1615}, and overlays with the low-J CO lines,
Fig.~B.2), the CO 16$\to$15 emission can only arise from a PDR layer
of warm and dense gas on the backside of the lane towards S106 IR. The
rather low line velocities and small line width argue against a
high-velocity shock origin of the high-J CO emission, but low-velocity
shocks cannot be excluded. From PDR modelling, using the CO
16$\to$15/11$\to$10 ratio at position 3c, we derive a density of 5.6
10$^4$ cm$^{-3}$ and a radiation field of $\chi$=7.4 10$^4$. For the
red outflow velocity range (0.5 to 8 km s$^{-1}$) all lines with high
excitation temperature and high critical density ([O\,{\scriptsize I}], [C\,{\scriptsize II}], CO
16$\to$15) show a similar emission distribution (Fig.~B.4) with an
extended clump south of S106 IR. This velocity range is affected by
absorption so that the derived line intensities and ratios are not
reliable and we do not use those for PDR modelling.
Summarizing, we interpret the emission in the outflow velocity ranges
for all lines as arising from PDR surfaces of heated and
UV-illuminated clumps around S106 IR, from ablated gas from the cavity
walls, and from a PDR surface on the backside of the dark lane.
\begin{figure}
\centering
\includegraphics[width=7cm, angle=0]{s106-sharc-co1615.pdf}
\caption{SHARC 350 $\mu$m emission (colour wedge in Jy) with overlays
of CO 16$\to$15 line integrated emission (levels 14 to 47 by 3 K km
s$^{-1}$, grey contours) in the bulk velocity range (--4 to 0.5 km
s$^{-1}$).}
\label{bulk-co1615}
\end{figure}
\subsubsection{Bulk emission} \label{bulk}
The velocity range from --4 to 0.5 km s$^{-1}$ is dominated by widespread
but clumpy [O\,{\scriptsize I}]\ emission (see channel map, Fig.~\ref{oi-chan}) in
which the area around S106 IR is mostly devoid of emission (best
visible in panel v=--0.6 km s$^{-1}$). The most prominent clump is
the western one associated with S106 FIR. Two other clumps north-west
and south of S106 IR (Fig.~B.3) are also clearly defined, and the
overall emission distribution corresponds very well with that of
[C\,{\scriptsize II}]\ in this velocity range (Fig.~B.3). The peaks of emission in the
low-J CO lines and CO 16$\to$15 emission are close to the dark lane
and at the position of S106 FIR (Fig.~\ref{bulk-co1615} and B.3). The
CO 16$\to$15 peak corresponds to Pos 5a, the NW peak of [O\,{\scriptsize I}]\ emission
to Pos 5b, and the western [O\,{\scriptsize I}]\ peak to Pos 3b, i.e. the continuum
source S106 FIR \citep{Richer1993,Little1995}, possibly a low- or
high-mass Class 0 YSO. The absolute line intensities of [O\,{\scriptsize I}]\ and
[C\,{\scriptsize II}]\ emission are high and the line fractional emission is 20--40\%,
indicating that these lines are the most important cooling lines for
this portion of the gas. The CO 16$\to$15 line is weaker at these
positions (in contrast to the CO 11$\to$10 line) and is excited only in
the outermost surface PDR layer of the dark lane facing S106 IR.
Because the density and radiation field are not very high (see below),
it is a fair assumption that this line is subthermally
excited. \\ From PDR modelling, we derive a density of 1.1 (1.6)
10$^4$ cm$^{-3}$ (from the [C\,{\scriptsize II}]/[O\,{\scriptsize I}]\ and CO ratios, respectively), and
a radiation field of $\chi$=1.1 (2.1) 10$^4$ for Pos 5a, i.e. the CO
16$\to$15 peak at the inner working surface of the dark lane. The
density and radiation field (n=3.0 10$^4$ cm$^{-3}$ and $\chi$=2.7
10$^4$) are similar at the position of the NW [O\,{\scriptsize I}]\ peak. S106 FIR
shows peak emission in hot dust, but the gas density and radiation
field are not elevated, i.e. n$\sim$2 10$^4$ cm$^{-3}$ and $\chi
\sim$2.5 10$^4$. As for the blue and red outflow velocity range,
the density and radiation field do not change significantly when the
[O\,{\scriptsize I}]\ intensity is increased in the PDR model.
In summary, the [O\,{\scriptsize I}]\ emission in this velocity range seems to arise
from the inner cavity walls in combination with PDR surfaces of
individual clumps located within the cavity, close to S106 IR.
\subsection{Comparison between SOFIA results and earlier studies} \label{compare}
This is the first time that a spectrally resolved map of the [O\,{\scriptsize I}]\ 63
$\mu$m FIR cooling line is available for S106. A [C\,{\scriptsize II}]\ 158 $\mu$m map
obtained with GREAT on SOFIA was presented by \citet{Simon2012}. Other
observations (not spectrally resolved) of the [C\,{\scriptsize II}]\ and [O\,{\scriptsize I}]\ lines
with the Infrared Space Observatory (ISO), the Kuiper Airborne
Observatory (KAO), and {\sl Herschel} were reported by
\citet{vandenAncker2000,Schneider2003,Stock2015}, respectively. These
studies, however, have a much lower angular resolution (40$''$ to
80$''$) and thus contain the different physical regimes in S106
(H\,{\scriptsize II}\ region, molecular cloud bulk emission, cavity, etc.) in one
beam\footnote{As a consistency check, we determined the total [O\,{\scriptsize I}]\ 63
$\mu$m and [C\,{\scriptsize II}]\ 158 $\mu$m line intensity in a 40$''$ beam around
the position of S106 IR and compared it to the value obtained with
{\sl Herschel} \citep{Stock2015}. Our line intensities are slightly
lower but agree well, considering the calibration uncertainties for
both instruments (15--20\%). Our values of [O\,{\scriptsize I}]\ and [C\,{\scriptsize II}]\ emission
are 2.2 10$^{-2}$ erg cm$^{-2}$ sr$^{-1}$ s$^{-1}$ and 3.6 10$^{-3}$
erg cm$^{-2}$ sr$^{-1}$ s$^{-1}$, respectively, compared to 2.9
10$^{-2}$ erg cm$^{-2}$ sr$^{-1}$ s$^{-1}$ and 3.9 10$^{-3}$ erg
cm$^{-2}$ sr$^{-1}$ s$^{-1}$ from \citet{Stock2015}.}.
Nevertheless, all studies agree that there are different gas phases
with at least two temperature regimes. \citet{Schneider2003} proposed
a two-phase gas model with small ($<$0.2 pc) high-density (n=3 10$^5$
cm$^{-3}$ to 5 10$^6$ cm$^{-3}$) clumps with high surface temperature
(T$\sim$200-500 K) embedded in lower density (n$\sim$10$^4$ cm$^{-3}$)
gas. The radiation field in these two regimes is $\chi \sim$10$^5$ and
$\chi \sim$10$^3$, respectively. \citet{Stock2015}, based on their
PACS and SPIRE {\sl Herschel} study, support this scenario. They find
a two-phase regime with a hot (T$\sim$400 K) region characterized by a
high radiation field ($\chi >$10$^5$) and high density (n$>$10$^5$
cm$^{-3}$), based on CO rotational temperatures, and a less dense
region (n$\sim$10$^4$ cm$^{-3}$) with moderate UV field ($\chi
\sim$10$^4$) and a temperature of around 300 K, dominating the
emission of the atomic cooling lines. They argue that a third PDR gas
phase may be present with very high temperatures/radiation field and
densities to explain their excess emission in the CO rotational ladder
for the highest-J CO lines (J$>$20). From fitting the high-J CO lines
within a PDR model, they obtain a radiation field of $\chi \sim$10$^5$
and densities of 10$^8$ cm$^{-3}$. The assumption of such high
densities is somewhat problematic as they are not observed in any
other way; thus, excitation by shocks can also be the reason for the
strong CO emission. However, the contribution of shocks is not clear.
\citet{vandenAncker2000} and \citet{Stock2015} estimate that their
contribution is small, while \citet{Noel2005} attribute their H$_2$
observations to shocks. In a subsequent paper on S106, we will apply
irradiated shock models and investigate whether they reproduce our
observed line intensities and ratios more closely.
In this study, we explain the observed [O\,{\scriptsize I}], [C\,{\scriptsize II}], and CO line ratios
with a PDR model of a single gas phase with densities of a few 10$^4$
cm$^{-3}$, exposed to a radiation field $\chi$ of a few 10$^{4}$. This
would be consistent with the lower density PDR component that was
found by \citet{Schneider2003} and \citet{Stock2015}. If we consider
self-absorption of the [O\,{\scriptsize I}]\ line and approximate the missing emission
by doubling the [O\,{\scriptsize I}]\ intensity, we find that the HV-blue and HV-red
emission has its origin in much denser gas of n$\sim$10$^{6}$
cm$^{-3}$, but with a similar radiation field\footnote{An independent
measure of the radiation field is to use the {\sl Herschel} fluxes
at 70 and 160 $\mu$m (see e.g. \citealt{Schneider2016}). With this
method, we obtained a field of typically $\chi$=2-4 10$^{4}$ for
S106.} of $\chi$=2-3 10$^{4}$. This can correspond to the
high-density gas component found by \citet{vandenAncker2000,
Schneider2003,Stock2015}. Because these observations are velocity
unresolved, it was not possible to attribute one gas component to a
certain velocity range. Our data now point toward a possible scenario
in which the FIR line emission in the outflow and bulk velocity ranges
arises from PDRs at lower density (a few 10$^4$ cm$^{-3}$) exposed to
a radiation field $\chi$ of a few 10$^{4}$, while the high-density gas
component is only found at the high blue and red velocities. This
finding emphasizes the importance of {\sl velocity resolved}
observations of [O\,{\scriptsize I}], [C\,{\scriptsize II}], and high-J CO lines.
Our findings also show that we do not need to invoke a third, very
high-density gas phase \citep{Stock2015} nor a high-temperature two-gas
phase regime (at T$\sim$300 K and T$\sim$700 K) as was needed to model
outflow emission from low-mass protostars \citep{Kristensen2017}.
\begin{figure*}
\includegraphics[width=16cm, angle=0]{channel_h13co_subaru.pdf}
\caption{Channel map of H$^{13}$CO$^+$ 1$\to$0 emission (contours in K
km s$^{-1}$ from 0.25 to 1.75 in steps of 0.25) obtained with the
IRAM 30m telescope. The background image is the Subaru IR image from
Fig. 1. The blue contour is the 3 Jy level of the SHARC 350 $\mu$m
emission. The star marks the position of the binary system S106 IR;
the dashed red line indicates the run of the dark lane, i.e. the
possible accretion flow.}
\label{flow}
\end{figure*}
\subsection{The dark lane: an accretion flow or an evaporating filamentary structure? } \label{lane}
It was shown by dust observations in the mm-wavelength range
\citep{Vallee2005,Motte2007}, in the mid-IR with FORCAST
\citep{Adams2015}, and in the FIR with SHARC \citep{Simon2012} at 350
$\mu$m, that the dark lane east of S106 IR is a high column-density
feature (see Fig.~\ref{bulk-co1615} for the SHARC 350 $\mu$m
map). From a column density map derived from a SED fit to the 160 to
500 $\mu$m flux maps of {\sl Herschel} \citep{Schneider2016}, we
obtain an average column density\footnote{The visual extinction in the
lane has been estimated between A$_v$=12 mag and 21 mag
\citep{Eiroa1979,Hodapp1991,vandenAncker2000}.} of 4.5 10$^{22}$
cm$^{-2}$, a total mass of 275 M$_\odot$, and an average density of 3
10$^4$ cm$^{-3}$ for the lane.
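As a quick consistency sketch for the quantities quoted above, the average column and volume densities of the lane imply a line-of-sight depth $N/n$, which should be comparable to the projected width of the lane (the values below are those from the text).

```python
# Consistency check: the average column density N and volume density n
# quoted for the dark lane imply a line-of-sight depth N/n.
PC_CM = 3.086e18   # cm per parsec

N_lane = 4.5e22    # cm^-2, average column density (from the text)
n_lane = 3e4       # cm^-3, average volume density (from the text)

depth_pc = N_lane / n_lane / PC_CM
print(f"implied depth of the lane ~ {depth_pc:.2f} pc")
```

The implied depth of roughly half a parsec is indeed of the order of the lane's projected width.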
Our IRAM 30m observations of H$^{13}$CO$^+$ 1$\to$0 reveal the
dynamics of the dark lane more precisely than earlier $^{12}$CO and
$^{13}$CO data of the region \citep{Schneider2002}. Figure~\ref{flow}
shows a channel map of H$^{13}$CO$^+$ 1$\to$0 emission in the velocity
range between --1.3 km s$^{-1}$ and --3.2 km s$^{-1}$ overlaid on the
Subaru IR image (Fig.~1). The emission distribution has a close link
to S106 IR. It starts as more widespread emission around --1.3 km
s$^{-1}$ with two lobes of emission and develops into a single feature
between --2.3 and --3.2 km s$^{-1}$. The peak of emission moves
gradually from a position further away north-east of S106 IR at --1.8
km s$^{-1}$ to a peak close to S106 IR at --3.2 km s$^{-1}$. This
velocity distribution is thus consistent with two possible geometries
that we illustrate in Fig.~\ref{cartoon-lane}. Either the far end of
the lane is slightly tilted away from the observer and the gas flows
towards S106 IR (the accretion flow scenario, in green in
Fig.~\ref{cartoon-lane}) or the far end of the lane is oriented towards
the observer and the gas is streaming off the lane (the dispersal
scenario, in blue in Fig.~\ref{cartoon-lane})\footnote{See also
https://hera.ph1.uni-koeln.de/$\sim$nschneid/s106.html for an
animated version of this cartoon.}. In both scenarios, the lane is
located in front of the H\,{\scriptsize II}\ region because it is clearly visible as
a dark feature in front of a bright H\,{\scriptsize II}\ region
(e.g. Fig.~\ref{flow}). While this is straightforward to understand in
the dispersal scenario because the lane is inclined towards the
observer, the flow scenario requires that the lane should be only
slightly tilted away from the observer so that it can wrap around the
equatorial plane of the hourglass-shaped H\,{\scriptsize II}\ region, but still be
located in front of the ionized gas phase.
\begin{figure*}
\centering
\includegraphics[width=17cm, angle=0]{cartoon-lane.pdf}
\vspace{-4cm}
\caption{Schematic view of two possible scenarios (shown at the same
time) explaining the nature of the dark lane. The front view
(left), as S106 is seen on the sky, does not allow us to distinguish
between the infall scenario (dark lane in green) and the expansion
scenario (dark lane in blue) for the gas in the dark lane. The middle view
shows best the `accretion flow' scenario, i.e. how the (green)
dark lane wraps around the equatorial waist of the
hourglass-nebula. The right view shows how the (blue) dark lane,
which is more detached from the nebula, is tilted towards the
observer and dispersed.}
\label{cartoon-lane}
\end{figure*}
The {\sl {\bf dispersal scenario}} was originally proposed by
\citet{Hodapp1991}. They explain the bipolar H\,{\scriptsize II}\ region as arising
from an anisotropic circumstellar wind, absorbing the ionizing
radiation in the equatorial plane, and postulate that the single
massive star S106 IR must be in a mass-loss evolutionary state and
not in an accretion phase. However, such a scenario requires an energy
transfer of the stellar wind onto the dense and cold gas deep in the
dark lane. We observe in [O\,{\scriptsize I}]\ and [C\,{\scriptsize II}]\ high-velocity gas around --30
km s$^{-1}$ that stems from PDRs and possibly shocks, caused by the
stellar wind and radiation hitting the inner working surface of the
dark lane. However, the cold gas within the dark lane is found at
velocities around the bulk emission of the cloud, from --1 to --3 km s$^{-1}$, and is
thus decoupled from the high-velocity gas phase.
In contrast, an {\sl {\bf accretion flow scenario}}\footnote{We talk
here of a parsec-scale flow that connects the molecular cloud with
the disk(s) or the remains thereof, and not of possible sub-pc flows
that channel material from the disk(s) onto the star (Sec.~5.1.1).}
was proposed by \citet{Bally1983} based on their VLA 5 GHz data (see
Fig.~\ref{OI-outflow}). The area just around S106 IR shows only very
weak 5 GHz radio emission, indicating the absence of ionized gas
(because the 5 GHz emission is not attenuated by dust). The gas at
very high densities is then able to absorb the emitted ionizing
radiation. Our data show that the lane can be a flow that is
illuminated by S106 IR.
Both scenarios were also proposed by \citet{Balsara2001}, based on
interferometric $^{13}$CO observations. They emphasize the importance
of the magnetic field that can channel material in filamentary
structures into the deepest gravitational potential well. The
magnetic field was found to run parallel to the dark lane
\citep{Vallee2005}, thus consistent with an inflow of gas. A number of
recent observational studies support this dynamic scenario (e.g.
\citealt{Schneider2010,Kirk2013,Peretto2013,Rayner2017}). Despite the
higher angular resolution of their $^{13}$CO data compared to our
H$^{13}$CO$^+$ data at 30$''$ resolution, it is not possible to
discriminate between the two views, mostly because a low-J $^{13}$CO
line is not the best tracer for dense gas. It is also not possible to
tell whether accretion is still occurring. Interferometric
observations are required with a high-velocity resolution of
high-density tracers such as H$^{13}$CO$^+$, H$^{13}$CN, and in
particular N$_2$H$^+$ (a probe for cold dense gas). This sort of
observation would enable us to study the velocity structure of the
dark lane and to explore the immediate environment around S106 IR.
Assuming that we indeed observe a flow towards S106 IR, the projected
length $l$ of the flow is roughly 2.2$'$, corresponding to $\sim$1 pc
at a distance of 1.3 kpc. The velocity difference $\Delta$v along the
flow, determined from the H$^{13}$CO$^+$ map, is $\sim$1.4 km
s$^{-1}$. We assume a random distribution of orientation angles and
thus an average angle of the flow to the line of sight of 57.3$^\circ$,
so that the lifetime $t$ of the flow is given by
$t = l/(\Delta{\rm v}\,\tan(57.3^\circ))$. The lifetime is then approximately 1.1
10$^6$ yr, leading to a mass input rate of 2.5 10$^{-4}$ M$_\odot$/yr
(using the {\sl Herschel} determined mass of the lane). This rate is
lower than that found for the massive subfilaments of the DR21 ridge
(a few 10$^{-3}$ M$_\odot$/yr, \citealt{Schneider2010}), but similar
to that observed for Mon R2 (a few 10$^{-4}$ M$_\odot$/yr,
\citealt{Rayner2017}). We note, however, that this is a very crude
approximation, and does not consider the true inclination and bending
of the flow.
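As a back-of-the-envelope check, the lifetime and mass input rate quoted above can be reproduced numerically (a sketch with the values from the text; note that the quoted lifetime of $\sim$1.1 10$^6$ yr corresponds to the $\tan(57.3^\circ)$ deprojection factor entering as a multiplier of $l/\Delta{\rm v}$, i.e. to the angle measured from the plane of the sky; the inverse factor would give $\sim$4.5 10$^5$ yr).

```python
import math

# Numerical sketch of the flow-lifetime estimate, using the values
# quoted in the text: l ~ 1 pc (projected), Delta_v ~ 1.4 km/s, a mean
# orientation angle of 57.3 deg, and a lane mass of 275 Msun (Herschel).
PC_KM = 3.086e13   # km per parsec
YR_S = 3.156e7     # seconds per year

l = 1.0 * PC_KM    # projected flow length
dv = 1.4           # km/s, velocity difference along the flow
tan_a = math.tan(math.radians(57.3))

# The tan factor as a multiplier reproduces the quoted lifetime.
t_yr = l * tan_a / dv / YR_S
mdot = 275.0 / t_yr  # mass input rate, Msun/yr

print(f"t ~ {t_yr:.2e} yr, mass input rate ~ {mdot:.2e} Msun/yr")
```

This recovers the quoted $\sim$1.1 10$^6$ yr and 2.5 10$^{-4}$ M$_\odot$/yr; as stated above, the estimate ignores the true inclination and bending of the flow.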
The flow scenario is consistent with simulations of
\citet{Peters2010a,Peters2010b} who modelled the collapse of rotating,
massive cloud cores including radiative heating by both non-ionizing
and ionizing radiation. The simulations show fragmentation from
gravitational instability in the dense accretion flows, in which either
a single massive star or several massive stars, together with many
low-mass stars, form within the regions of the accretion flows. Because
there is competition for mass, though not in the classical sense of
`competitive accretion' \citep{Bate2012,Bonnell2007}, they named this
process `fragmentation-induced starvation'. The simulations
\citep{Peters2010b} also show the non-uniform expansion of
H\,{\scriptsize II}\ regions with the formation of bipolar lobes, very similar to
what is observed for S106. The bipolar outflow structure of the
H\,{\scriptsize II}\ region visible at early time steps of their runs A (single sink)
and B (multiple sinks) reflects qualitatively what is seen for
S106. From what is known so far, S106 IR is a binary system
\citep{Comeron2018} and is associated with a large cluster of low-mass
stars, detected in the IR \citep{Hodapp1991}. In addition, several
dense pre-stellar cores have formed in the dark lane (S20 to S24 in
\citealt{Motte2007}), indicating ongoing fragmentation.
In the scenario of the dark lane as an ionized accretion flow in the
fragmentation-induced starvation model, the observed emission
distribution of [O\,{\scriptsize I}], [C\,{\scriptsize II}], and other tracers fits very well. In
particular the high-velocity blue emission is consistent with gas that
moves downward perpendicularly to the accretion flow, down the
steepest density gradient. This causes the very focused emission seen
in various tracers (Fig.~\ref{OI-HV} and B.1). Interestingly, the very
complex morphology and velocity distribution of jets and outflowing
gas -- as we observe it for S106 -- is well reproduced in the models
of \citet{Kuruwita2017}. They produced simulations of the outflow
pattern of close (separation $<$10 AU) and wide ($>$10 AU) binaries
and showed that the geometry and velocity of the jets and outflows are
strongly modified with respect to a single star.
\section{Summary: a scenario for S106 IR} \label{summary}
Summarizing our observational results and what is already known about
S106, we develop the following scenario:
S106 IR is a binary system with two stars that are sources of a strong
UV field and stellar wind. The very small disk-like structure detected
in cm-interferometry can be either an intact accretion disk connected
to a large-scale accretion flow, known as the dark lane, or the remains
of a disk without a link to the dark lane. The lane is
illuminated by the more massive star of the system, presumably an O9
star with 20 M$_\sun$ (the companion has a preliminary classification
as a B8 star with $\sim$3 M$_\sun$; \citealt{Comeron2018}), and forms
a dense hot PDR that cools mostly via [O\,{\scriptsize I}]\ 63 $\mu$m and CO 16$\to$15
emission. The PDR gas is highly dynamic; it flows fast and follows the
steepest density gradients of the dark lane, and `escapes' the lane
close to S106 IR, giving rise to the very collimated emission
distribution in the [O\,{\scriptsize I}]\ lines at velocities from --30 to --9 km
s$^{-1}$ and 8 to 25 km s$^{-1}$. We obtain a radiation field of
$\chi$=2--7 10$^4$ at a density of 1.5--6 10$^4$ cm$^{-3}$ from PDR
modelling. Generally, modelling the [O\,{\scriptsize I}]\ and [C\,{\scriptsize II}]\ emission is
difficult because both lines show self-absorption features. By
increasing the [O\,{\scriptsize I}]\ emission by a factor of 2, we find that the
densities in the HV emission ranges increase up to 10$^6$ cm$^{-3}$,
which is more consistent with earlier findings
\citep{Schneider2003,Stock2015}. Whether the emission in this velocity
range could also arise from a shock (disk-envelope interaction and/or
radiation and stellar wind hitting the dark lane locally) cannot yet
be answered and will be addressed in an upcoming study.
The gas in the blue and red outflow velocity ranges (from --9 to --4
km s$^{-1}$ and 8 to 25 km s$^{-1}$) has a very similar emission
distribution to that of the optically visible lobes. The [O\,{\scriptsize I}]\ (and
[C\,{\scriptsize II}]) emission distributions at the blue and red velocities indicate
that the emission mostly arises from the back side of the southern
lobe and the front side of the northern lobe. The outflow gas is
entrained in the wind of S106 IR and ablated by radiation (and
possibly shocks) from the cavity walls. PDR modelling gives densities
typically of a few 10$^4$ cm$^{-3}$ at a radiation field $\chi$ of a
few 10$^4$. The low value for the density indicates that the CO
16$\to$15 line is most likely subthermally excited.
Molecular cloud clumps and possibly fragments of what once was the
larger-scale circumbinary disk around S106 IR are seen in the close
environment of the star. These clumps form PDRs on their surfaces (the
most prominent one is the western clump) that emit in all lines at the
cloud bulk velocity around --3 km s$^{-1}$. The clumps are exposed to
a radiation field $\chi$ of $\sim$2 10$^4$, and the density is
$\sim$2 10$^4$ cm$^{-3}$.
S106 IR and its bipolar H\,{\scriptsize II}\ region are embedded in a larger
molecular cloud that provides the gas reservoir for a possible
accretion flow onto S106 IR. Mapping of the high-density tracer
H$^{13}$CO$^+$ 1$\to$0 revealed a velocity gradient across the dark
lane that is consistent with either a flow onto S106 IR or gas
streaming away, depending on the geometry of the region. Only
interferometric observations can elucidate the nature of the dark
lane. The flow scenario is more consistent with the
fragmentation-induced starvation scenario of
\citet{Peters2010a,Peters2010b} than with the monolithic collapse
model of \citet{McKee2002}. As yet unclear is whether shocks driven by
the ionizing stellar wind that hits the accretion flow and the cavity
walls can also cause the observed emission of the FIR lines. It is
also not clear to what extent the [O\,{\scriptsize I}]\ 63 $\mu$m line is
self-absorbed. Higher line intensities will change the [C\,{\scriptsize II}]/[O\,{\scriptsize I}]\ line
ratios and thus modify the outcome of the PDR models. Observations of
the [O\,{\scriptsize I}]\ 145 $\mu$m line, if this line is optically thin, may help to
tackle this problem.
In summary, this study shows that the new detection of high-velocity
emission in the [O\,{\scriptsize I}]\ line, and the identification of various velocity
components in other FIR lines ([C\,{\scriptsize II}], high-J CO), which are only now
possible with the (up)GREAT receiver on SOFIA, help to diagnose more
precisely the physical properties of different gas phases in complex
star-forming regions.
\begin{acknowledgements}
This work was supported by the Agence Nationale de la Recherche
(ANR/France) and the Deutsche Forschungsgemeinschaft (DFG/Germany)
through the project `GENESIS' (ANR-16-CE92-0035-01/DFG1591/2-1).
N.S. acknowledges support from the BMBF, Projekt Number 50OR1714 (MOBS
- MOdellierung von Beobachtungsdaten SOFIA). This work is based on
observations made with the NASA/DLR Stratospheric Observatory for
Infrared Astronomy (SOFIA). SOFIA is jointly operated by the
Universities Space Research Association, Inc. (USRA), under NASA
contract NAS2-97001, and the Deutsches SOFIA Institut (DSI) under DLR
contract 50 OK 0901 to the University of Stuttgart. This work is
based on observations carried out under project number 140-15 with the
IRAM 30m telescope. IRAM is supported by INSU/CNRS (France), MPG
(Germany), and IGN (Spain). N.S. acknowledges support from the
Deutsche Forschungsgemeinschaft, DFG, through project number Os
177/2-1 and 177/2-2, and central funds of the DFG-priority program
1573 (ISM-SPP). This work was supported by the German \emph{Deut\-sche
For\-schungs\-ge\-mein\-schaft, DFG\/} project number SFB 956. We
thank B. Rumph for carefully reading the manuscript.
\end{acknowledgements}
\subsection{Predicting From Other Responses (Oracle)}
\noindent To get a better sense of ideal model performance, we also constructed an `oracle' baseline. In this baseline, we use all other observed variables in the data to predict one held-out variable. For illustration, consider predicting an enumeration area's access to piped water from its access to sewerage, electricity, roads, etc. For consistency throughout the analysis, we ran this model using the binary variables (e.g. electricity, sewerage, and piped water) of Afrobarometer, predicting each binary variable using all other binary variables. Splitting the data into 80\% training/validation and 20\% testing, we find that some infrastructure categories are highly predictable even with simple logistic regression. Table~\ref{cs325b:table:bs:other_resp:metric} displays the relevant metrics for some categories.
\\\\
\noindent This baseline performs very well, but it represents an ideal, as it still depends on survey data from each enumeration area. We believe the high scores are due to the high correlation between the different survey response variables. We also tried an SVM and a random forest on the same dataset, with performance broadly similar to that of logistic regression.
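A minimal sketch of this oracle baseline, using a synthetic stand-in for the Afrobarometer binary responses (the variable names and data below are illustrative, not the real survey):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
cols = ["electricity", "sewerage", "piped_water", "road"]

# Synthetic stand-in: a shared latent "development level" makes the
# binary responses correlated, as observed in the real survey data.
latent = rng.random((7023, 1))
X_all = (latent + 0.3 * rng.random((7023, len(cols))) > 0.65).astype(int)

for j, name in enumerate(cols):
    y = X_all[:, j]                      # held-out variable
    X = np.delete(X_all, j, axis=1)      # all other responses as features
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC {auc:.2f}")
```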
\subsubsection{OSM}
\noindent As OSM is highly related to infrastructure, we used a simple OSM baseline model for comparison purposes. Using the previously mentioned metrics of \emph{accuracy}, \emph{precision}, \emph{recall}, \emph{F1-score}, and \emph{AUROC}, we ran logistic regression on a set of input features. The features were various transformations of the number of highways and number of buildings in OSM in a given enumeration area. Feature transformation and normalization ensure that the input data are similarly scaled. The transformations from raw inputs to post-processed inputs (denoted $\phi(x)$) are listed in Table~\ref{cs325b:table:osm:bs:feat_trans} with accompanying interpretations.
\begin{table}[hpbt]
\centering
\caption{Feature transformation}
\label{cs325b:table:osm:bs:feat_trans}
\begin{tabular}{ccc}
\hline
feature & transformation & interpretation \\
\hline
\hline
$\phi_1(x)$ & $x_1$ & hwy \# \\
$\phi_2(x)$ & $x_2$ & bldg \# \\
$\phi_3(x)$ & $\log x_1$ & utility of hwy \\
$\phi_4(x)$ & $\log x_2$ & utility of bldg \\
$\phi_5(x)$ & $\frac{x_1}{x_2}$ & ratio of hwy per bldg \\
$\phi_6(x)$ & $\sqrt{x_1}$ & hwy \# along distance (km) \\
$\phi_7(x)$ & $\sqrt{x_2}$ & bldg \# along distance (km) \\
\hline
\end{tabular}
\end{table}
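The transformations in Table~\ref{cs325b:table:osm:bs:feat_trans} can be sketched as follows (an illustrative reimplementation; counts of zero would need guarding in practice):

```python
import math

def osm_features(n_highways: int, n_buildings: int) -> list:
    """Map raw OSM counts (x1, x2) to the seven baseline features."""
    x1, x2 = float(n_highways), float(n_buildings)
    return [
        x1,             # phi_1: highway count
        x2,             # phi_2: building count
        math.log(x1),   # phi_3: "utility" of highways
        math.log(x2),   # phi_4: "utility" of buildings
        x1 / x2,        # phi_5: highways per building
        math.sqrt(x1),  # phi_6: highway count per unit distance
        math.sqrt(x2),  # phi_7: building count per unit distance
    ]

print(osm_features(100, 400))
```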
\\
We display results in Table~\ref{cs325b:table:osm:metric:result}, truncated to include only variables whose AUROC is larger than 0.6. We also conducted experiments using a Support Vector Machine (SVM) and a random forest, finding performance similar to that of logistic regression.
\\\\
\noindent With AUROCs in the range of 0.7, the results indicate meaningful yet limited predictive power. Precision and recall values greater than 0.5 indicate that the model is making informative predictions. For categories whose accuracies are not much higher than the balance of the predicted variable, the predictions are only moderately better than random guessing and close to the prior distribution of the response variables. We find this reasonable given the nature of OSM as an infrastructure-related dataset that is both incomplete and contains features that are potentially less correlated with certain infrastructure measures.
\subsection{OpenStreetMap Data}
\noindent OpenStreetMap (OSM) is an open-source mapping project based on volunteer-supplied cartographic and infrastructure information from around the world. Humanitarian teams have used OSM extensively in health outreach projects, particularly in Africa. Consequently, OSM has the most complete coverage of road conditions of any major mapping project, including commercial map engines such as Google Maps \cite{HumanOSMTeam,Zhao2016report}. OSM includes measures beyond the presence of roads, such as buildings and road type. It should be noted that while OSM is extensive, it is not exhaustive in the same way as satellite imagery, as it relies on human-entered data.
\\\\
\noindent OSM encodes roads as `ways' that connect a collection of `nodes'. Each node has an associated latitude and longitude and includes other associated metadata, such as the number of proximate buildings. In our usage of OSM, we align survey data with an appropriately sized area in OSM. As with the satellite data, not every survey point had associated OSM data, but there were sufficient records to run our baseline model.
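The node/way structure described above can be sketched with simple record types (illustrative field names, not the full OSM schema):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A single OSM point with its coordinates and tag metadata."""
    id: int
    lat: float
    lon: float
    tags: dict = field(default_factory=dict)

@dataclass
class Way:
    """A road: an ordered list of node ids plus tags such as the road type."""
    id: int
    node_ids: list
    tags: dict = field(default_factory=dict)

road = Way(id=1, node_ids=[10, 11, 12], tags={"highway": "primary"})
```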
\section{Introduction}
Basic infrastructure availability in developing regions is a crucial indicator of quality of life \cite{pottas2014addressing}. Reliable infrastructure measurements create opportunities for effective planning and distribution of resources, as well as guiding policy decisions on the basis of improving the returns of infrastructure investments \cite{varshney2015targeting}.
Currently, the most reliable infrastructure data in the developing world comes from field surveys, and these surveys are expensive and logistically challenging \cite{dabalen2016mobile}. Some countries have not taken a census in decades \cite{xie2016transfer}, and data on key measures of infrastructure development
are still lacking for much of the developing world \cite{jean2016combining, ieag2014world}. Overcoming this data deficit with more frequent surveys is likely to be both prohibitively costly, perhaps costing hundreds of billions of U.S. dollars to measure every target of the United Nations Sustainable Development Goals in every country over a 15-year period \cite{jerven2014benefits}, and institutionally difficult, as some governments see little benefit in having their performance documented \cite{sandefur2015political,jean2016combining}.
\begin{figure*}[!hbpt]
\centering
\hspace*{-0.4cm}
\begin{tabular}{cccc}
\raisebox{5.5\normalbaselineskip}[0pt][0pt]{\rotatebox{90}{Labels}} &
{\includegraphics[scale=0.35]{elec_label.png}} &
{\includegraphics[scale=0.35]{water_label.png}} &
{\includegraphics[scale=0.35]{sewage_label.png}} \\
\raisebox{5.1\normalbaselineskip}[0pt][0pt]{\rotatebox{90}{Predictions}} &
{\includegraphics[scale=0.35]{elec_pred.png}} &
{\includegraphics[scale=0.35]{water_pred.png}} &
{\includegraphics[scale=0.35]{sewage_pred.png}} \\
& Electricity & Piped Water & Sewerage
\end{tabular}
\caption{We compare the labels for \emph{sewerage}, \emph{electricity}, and \emph{piped water}, in the top row, with our predictions for these variables in the bottom row. Positive labels are shown in blue and negative labels in green.}
\label{best_vis}
\end{figure*}
One emerging technology for the global observation of infrastructure quality is satellite imagery. As satellite monitoring becomes more ubiquitous, with an increasing number of commercial players in the sector, improvements in spatial and temporal resolution open up new applications, uses, and markets \cite{/content/publication/9789264217294-en}, including the possibility to monitor important sustainability outcomes at scale. Additionally, satellite imagery can observe developing countries that do not have significant prior data, containing a wealth of observations that can be harnessed for social development applications \cite{jean2016combining}, including infrastructure assessment.
Such rich and high quality image data enable advanced machine learning techniques to perform sophisticated tasks like object detection and classification, and deep learning in particular has shown great promise \cite{esteva2017dermatologist, he2016deep, dai2016r, albert2017using}. While a number of recent papers discuss the use of deep learning on satellite imagery for applications in land use cover \cite{albert2017using}, urban planning \cite{audebert_beyond_2017}, environmental science \cite{bragilevskydeep}, etc. \cite{DBLP:journals/corr/abs-1710-05483,you2017deep,pryzant2017monitoring}, many unanswered questions remain in the field, particularly in the application of deep learning to social and economic development.
Our contributions in this paper are to both the applied deep learning literature and to socioeconomic studies involving remote sensing data. We propose a new approach to using satellite imagery combined with field data to map infrastructure quality in Africa at 10m and 30m resolution. We explore multiple infrastructure outcomes, including but not limited to \emph{electricity}, \emph{sewerage}, \emph{piped water}, and \emph{road}, to identify the remote sensing predictability of different infrastructure categories on a continental level. Prediction maps for three outcomes are given in Figure~\ref{best_vis}. We show that, through fine-tuning a pretrained convolutional neural network (CNN), our models achieve 0.881, 0.862, 0.739, and 0.786 area under the receiver operating characteristic curve (AUROC) scores on these outcomes and perform better than nighttime lights intensity (nightlights) and OpenStreetMap (OSM). Our primary datasets are a combination of 10m and 30m resolution satellite imagery from Sentinel 1 and Landsat 8, respectively, as well as the georeferenced Afrobarometer Round 6 survey encompassing 36 countries in Africa. Our work provides the ability to assess infrastructure in an accurate and automated manner, to supplement the spatial extent of field survey data, and to generate predictions in unseen regions.
To the best of our knowledge, we are the first to use CNNs with Sentinel 1 imagery for social development research.
\subsection{Organization of Paper}
The remainder of the paper is organized as follows. Section 2 (Related Work) discusses recent applications of machine learning on satellite imagery and contextualizes previous work in infrastructure quality detection. Section 3 (Data) describes the survey data and satellite imagery data sources. Section 4 (Methodology) introduces the problem formulation and modeling techniques used in this paper. Section 5 (Experimental Results) presents the performance of our model. Section 6 (Baseline Models) benchmarks our model performance against three baselines. In Section 7 (Generalization Capabilities) we explore a few settings that test the deployment potential of the model, including its performance on urban and rural enumeration areas, as well as performance in countries that the model was not originally trained on. In Section 8 we discuss conclusions and future work.
\section{Related Work}
\noindent The application of CNNs to land use classification can be traced back to the work of \citet{Castelluccio2015land} and \citet{penatti2015deep} who trained deep models on the UC Merced land use dataset \cite{yang2010bag}, which consists of 2100 images spanning 21 classes. Similar early studies on land use classification that employ deep learning techniques are the works of \citet{romero2016unsupervised} and \citet{papadomanolaki2016benchmarking}. In \citet{liu2017learning} a spatial pyramid pooling technique is employed for land use classification using satellite imagery. These studies adapted architectures pre-trained to recognize natural images from the ImageNet dataset, such as VGGNet \cite{simonyan2014very}, to fine-tune them on their much smaller land use data. A more recent study \cite{albert2017using} uses state-of-the-art deep CNNs VGG-16 \cite{simonyan2014very} and Residual Neural Networks \cite{he2016deep} to analyze land use in urban neighborhoods with large scale satellite data.
A few recent works, which are related to infrastructure detection through deep learning, inspire us to use additional data sources such as OSM \cite{haklay2008openstreetmap} to support our investigation. One project that is closely related to our investigation is DeepOSM\footnote{https://github.com/trailbehind/DeepOSM}, in which the authors pair OSM labels with satellite imagery obtained from Google Maps and use a convolutional architecture for classification. In \cite{yuan2016automatic}, the authors show that their model can achieve a precision of 0.74 and a recall of 0.70 on building detection, training CNNs on 0.3 meter resolution OSM images. Their CNN consisted of 7 identical blocks of filtering, pooling, and convolutional layers. \citet{mnih2010learning, mnih2012learning} built satellite image models for road detection, obtaining almost 0.8 precision and 0.9 recall in the best case in one urban area.
Recently, \cite{albert2017using} predicted land use classes in urban environments with 0.7 to 0.8 accuracy, commenting on the inherent difficulty in the task of understanding high-level, subjective concepts of urban planning from satellite imagery.
Compared to prior work, our key contributions are that we train the first wide scale classifier of infrastructure quality via deep learning and publicly available satellite imagery on 11 infrastructure quality outcomes, and that we are able to achieve state-of-the-art performance in predicting infrastructure accessibility on a large imagery dataset in Africa.
\section{Data}
This project relates two data sources in a supervised learning setting: survey data from Afrobarometer as ground truth infrastructure quality labels, and satellite imagery from Landsat 8 and Sentinel 1 as input sources.
\subsection{Afrobarometer Round 6}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.85]{cropped__1_}
\caption{Distribution of \emph{piped water} in the Afrobarometer Round 6 survey. Our study is the first to conduct a large-scale infrastructure analysis at the continent scale across 36 nations in Africa. Positive examples are shown in green.}
\label{cs235b:report:afro:vis}
\end{figure}
\noindent Conducted over 2014 and 2015 across 36 countries in Africa, the Afrobarometer Round 6 survey collected surveyor-assessed quality indicators about infrastructure availability, access, and satisfaction based on respondent data\footnote{Available by request with AidData} \cite{benyishay2017geocoding}. The dataset surveys 7023 enumeration areas with 36 attributes regarding various aspects of welfare, infrastructure quality, and wealth \cite{AfrobarometerElectricityReport}. Afrobarometer data from previous rounds dates back to 1999; our application used only Round 6, but adding other rounds represents valuable further work. Each survey response is an aggregate of face-to-face surveys in the enumeration area, which can encompass a city, village, or rural town, and between 1200 and 2400 samples are collected over all enumeration areas for each country. Each country in the Round 6 survey has between 150 and 300 enumeration areas. Each enumeration area is georeferenced with its latitude and longitude, and we center a satellite image for each enumeration area around these coordinates \cite{AfrobarometerManual}. Figure \ref{cs235b:report:afro:vis} shows the spatial distribution of all enumeration areas.
The Afrobarometer Round 6 survey includes 11 binary infrastructure outcomes, with each denoting the availability and quality of that infrastructure in the enumeration area. We primarily focus on highlighting results in \emph{electricity}, \emph{sewerage}, \emph{piped water}, and \emph{road} for their novel contributions. We show results on all binary outcomes except for \emph{cellphone} and \emph{school} due to their high class imbalances. Due to the variation of class balances across all variables we assess performance on multiple metrics and stress AUROC due to its insensitivity to class imbalances. This helps assess comparability in performance between the outcomes.
\begin{table}[h!]
\centering
\caption{Overview of number of examples in each class for all binary infrastructure variables in the Afrobarometer Round 6 survey, including the balance (proportion of positive labels).}
\label{afro_number}
\begin{tabular}{|c|c|c|c|}
\hline
Infrastructure & Label 1 & Label 0 & Balance \\
\hline
Electricity & 4680& 2343 & 0.667 \\
Sewerage &2239& 4784 & 0.319 \\
Piped Water & 4303& 2720 & 0.613 \\
Road & 3886& 3137 & 0.553 \\
Post Office& 1728& 5295 & 0.246 \\
Market Stalls& 4811& 2212 & 0.685 \\
Police Station& 2553& 4470 & 0.364\\
Bank& 1875& 5148 & 0.267 \\
Cellphone & 6576 & 456 & 0.936\\
School & 6082 & 941 & 0.866\\
Health Clinic & 4115 & 2908 & 0.586 \\
\hline
\end{tabular}
\end{table}
\subsection{Satellite Imagery}
\noindent Two primary sources of satellite observations were used, both offering coverage of most of the enumeration areas. The satellite data is temporally consistent with the survey data, from 2014 and 2015. For a given enumeration area with sampling location (latitude, longitude) at the center, we collect $500\times 500$ pixel images.
\\\\
\noindent \textbf{Landsat 8}: Landsat 8 is a satellite with the objective of collecting publicly available multispectral imagery of the global landmass. Landsat 8 imagery has a 30m resolution providing $15km \times 15km$ coverage in the following six bands: Blue, Green, Red, Near Infrared, and two bands of Shortwave Infrared. Each pixel value represents the degree of reflectance in that specific band. Cloud cover removal is handled natively by the Landsat 7 Automatic Cloud Cover Assessment algorithm \cite{irish2000landsat}.
\\\\
\noindent \textbf{Sentinel 1}: Sentinel 1 is the first satellite in the Copernicus Programme satellite constellation. It uses a C-band synthetic-aperture radar (SAR) instrument to acquire imagery regardless of weather and light conditions. Imagery obtained from Sentinel 1 has a resolution of 10 meters, providing $5km \times 5km$ coverage. It is processed to five bands, comprising four polarizations and a look angle: VV, VH, Angle, VV$_{\gamma^0}$, and VH$_{\gamma^0}$. Each pixel value in the polarization channels represents the degree of backscatter in that specific band. For the Afrobarometer dataset, images were taken from two different orbital paths, ascending or descending, resulting in different look angles, though not every enumeration area had both images. We choose the ascending-path image when available and the descending-path image otherwise.
\section{Methodology}
\subsection{Problem Formulation}
The infrastructure detection task is a multi-label binary classification problem. The input is a satellite image $X$ and the outputs are binary labels $Y_1, ..., Y_k \in \{0, 1\}$, corresponding to quality indicators of different infrastructure outcomes. We optimize the mean binary cross entropy loss
\begin{equation}
L(X, \{Y_1, ..., Y_k\}) = -\frac{1}{k}\sum_{i=1}^k \Big[Y_i\log p(Y_i = 1 | X) + (1 - Y_i)\log p(Y_i = 0 | X)\Big]
\end{equation}
\noindent where $p(Y_i = j | X)$ is the probability that the model predicts that input $X$ has label $j$ for infrastructure outcome $Y_i$.
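This loss can be transcribed directly in NumPy (a minimal sketch, with the conventional minus sign so the loss is minimized; the clipping constant is a numerical guard, not part of the formula):

```python
import numpy as np

def mean_binary_cross_entropy(p: np.ndarray, y: np.ndarray) -> float:
    """p: model probabilities p(Y_i = 1 | X) for the k outcomes, shape (k,)
    y: binary labels Y_1..Y_k, shape (k,)"""
    eps = 1e-12                      # avoid log(0)
    p = np.clip(p, eps, 1.0 - eps)
    # negative log-likelihood averaged over the k infrastructure outcomes
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

# uninformative predictions give a loss of ln(2) per label
print(mean_binary_cross_entropy(np.array([0.5, 0.5]), np.array([1, 0])))  # ~0.693
```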
\subsection{Model}
We train deep learning models to learn useful representations of the input imagery. Convolutional Neural Networks (CNNs) have been particularly successful in vision tasks and have also been demonstrated to perform well on satellite imagery \cite{xie2016transfer, jean2016combining, albert2017using, bragilevskydeep}. For all experiments in the paper, we train a Residual Neural Network (ResNet) architecture \cite{he2016deep}. The following paragraphs describe further specifications of our model:
\textbf{ResNet}. ResNet has achieved state-of-the-art results in ImageNet \cite{he2016deep}, and its main contribution over previous convolutional neural networks is learning residual functions in every forward propagation step with reference to the layer inputs. We posit that this is useful in satellite imagery analysis for retaining low-level features in high-level classifications. We train an 18 layer network.
\textbf{Transfer Learning}. Instead of training the network from random initializations, we initialize our network weights with those of a ResNet pre-trained on ImageNet \cite{krizhevsky2012imagenet}. Even though the weights are initialized on an object recognition task, this approach has been demonstrated to be effective in training on new tasks compared with initializing using random weights \cite{oquab2014learning} and useful for learning low-level features like edges in satellite imagery.
\textbf{Multi Channel Inputs}. ImageNet architectures originally take inputs with three channels corresponding to RGB values. However, Landsat 8 and Sentinel 1 have six and five bands, respectively. We change the first convolution layer in the network to have more than three input channels by extending the convolutional filters to further channels. The number of output channels, stride, and padding for the first layer is the same as in the original ResNet. With Landsat 8, we initialize the RGB band parameters of the first layer with the same parameters as in the pre-trained ResNet weights, and initialize the non-RGB bands with Xavier initialization \cite{glorot2010understanding}. With Sentinel 1, which does not include RGB bands, we initialize three bands with the pretrained RGB channel weights and the other two with Xavier initialization. In Xavier initialization, each weight $W$ is sampled uniformly from
\begin{equation}
W \sim U\bigg[-\sqrt{\frac{6}{n_{\text{in}} + n_{\text{out}}}}, \sqrt{\frac{6}{n_{\text{in}} + n_{\text{out}}}}\bigg]
\end{equation}
\noindent where $n_{\text{in}}$ is the number of input connections (fan-in) of the layer and $n_{\text{out}}$ is the number of output connections (fan-out).
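The band-extension scheme can be sketched in NumPy (an illustrative sketch, not the training code; Xavier bounds of $\pm\sqrt{6/(n_{\text{in}}+n_{\text{out}})}$ and one common convolutional fan-in/fan-out convention are assumed):

```python
import numpy as np

def extend_first_conv(w_rgb: np.ndarray, n_bands: int, seed: int = 0) -> np.ndarray:
    """Extend pretrained first-layer filters of shape (out, 3, k, k) to
    n_bands input channels: the first three slots keep the pretrained RGB
    weights, the extra slots are Xavier-initialized."""
    rng = np.random.default_rng(seed)
    out_ch, _, k, _ = w_rgb.shape
    n_extra = n_bands - 3
    fan_in = n_extra * k * k      # inputs feeding each new filter slice
    fan_out = out_ch * k * k      # one common convention for conv layers
    bound = np.sqrt(6.0 / (fan_in + fan_out))
    w_extra = rng.uniform(-bound, bound, size=(out_ch, n_extra, k, k))
    return np.concatenate([w_rgb, w_extra], axis=1)

w6 = extend_first_conv(np.zeros((64, 3, 7, 7)), n_bands=6)  # Landsat 8: 6 bands
```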
\subsection{Data Processing}
Our pipeline includes several data processing steps and augmentation strategies:
\textbf{Unique Geocoded Images in Test Set}. Each enumerated area in the Afrobarometer survey has a unique geocode field, and enumerated areas with the same geocode field have substantial spatial overlap. To ensure that there is no spatial overlap between images observed in the training set and in the test set, we enforce that only points with a unique geocode appear in the test set.
\textbf{Cropping}. Our satellite images are ingested as $500 \times 500$ pixel bounding boxes. We try downsampling, cropping at random regions, and cropping around the center pixel to $224 \times 224$ pixels, and we find that center cropping gives the best performance and convergence.
\textbf{Horizontal Flipping}. To augment the limited size of our dataset, at training time we mirror the image left--right with 50\% probability.
\textbf{Normalization}. We normalize our data channel-wise to zero mean and unit standard deviation.
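The processing steps can be sketched end-to-end (a minimal sketch; per-image channel statistics stand in here for the dataset-wide statistics used for normalization):

```python
import numpy as np

def preprocess(img: np.ndarray, train: bool, rng=None) -> np.ndarray:
    """img: (bands, 500, 500) tile centred on an enumeration area."""
    rng = rng or np.random.default_rng(0)
    c = (img.shape[1] - 224) // 2
    img = img[:, c:c + 224, c:c + 224]           # crop around the center pixel
    if train and rng.random() < 0.5:
        img = img[:, :, ::-1]                    # horizontal (left-right) flip
    mean = img.mean(axis=(1, 2), keepdims=True)  # channel-wise normalization
    std = img.std(axis=(1, 2), keepdims=True) + 1e-8
    return (img - mean) / std

out = preprocess(np.random.default_rng(1).random((6, 500, 500)), train=True)
```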
\begin{table*}[!htb]
\centering
\caption{Our Landsat 8 model achieves AUROC scores above 0.85 on \emph{electricity} and \emph{sewerage}, and achieves scores above 0.7 on all but three outcome variables.}
\label{afro_test}
\begin{tabular}{|c|c|c||c|c|c|c|c|c|}
\hline
Satellite & Infrastructure & Balance & Accuracy & F1 Score & Precision & Recall & AUROC \\
\hline
L8 & \textbf{Electricity} & 0.667& 0.832 & 0.873 & 0.877 & 0.870 & \textbf{0.881} \\
L8 & \textbf{Sewerage} &0.319& 0.815 & 0.700& 0.756 & 0.650 & \textbf{0.862} \\
L8 & \textbf{Piped Water}& 0.613& 0.673 & 0.725& 0.730 & 0.720 & \textbf{0.739} \\
L8 & \textbf{Road}& 0.553& 0.705 & 0.704& 0.746 & 0.667 & \textbf{0.786} \\
L8 &Bank& 0.267& 0.767 & 0.364& 0.543 & 0.273 & 0.726 \\
L8 & Post Office& 0.246& 0.753 & 0.427& 0.434 & 0.420 & 0.712 \\
L8 &Market Stalls& 0.685& 0.681 & 0.791& 0.688 &0.930 & 0.665 \\
L8 &Health Clinic& 0.586& 0.622 & 0.719& 0.632 & 0.833 & 0.664 \\
L8 &Police Station& 0.364& 0.660 & 0.492& 0.490& 0.494 & 0.650\\
\hline
S1 & Electricity & 0.667& 0.769& 0.820& 0.820& 0.821 & 0.819 \\
S1 & \textbf{Sewerage} &0.319& 0.802 & 0.659& 0.678 & 0.842 & \textbf{0.862} \\
S1 & Piped Water& 0.613 & 0.663 & 0.722& 0.716 & 0.728 & 0.725\\
S1 & Road& 0.553 & 0.702 & 0.730& 0.681 & 0.786 & 0.779\\
\hline
\end{tabular}
\end{table*}
\subsection{Training}
We train independent models for Landsat 8 and Sentinel 1. Our models are trained as multi-label classifiers as well as single-label classifiers, and we report the higher performance for each variable. We train the network end-to-end using the Adam optimizer with $\beta_1 = 0.9$ and $\beta_2 = 0.999$ \cite{kingma2014adam}. We train the model with a batch size of 128, and we update our weights with a decaying learning rate starting at 0.0001. The model weights are regularized with L2 regularization of 0.001.
Due to the limited size of our dataset, we evaluate our model on all the data with K-fold cross validation where $K=5$, producing a train-test split for every fold with 80\% training and 20\% testing. We train a model for each fold, predict values on the test set, and once every fold has been tested compute our evaluation metrics.
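The evaluation protocol above can be sketched with scikit-learn. The features and labels below are synthetic stand-ins (the real model is the CNN described earlier); only the K-fold mechanics mirror the text.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Toy stand-in features/labels for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# 5-fold split: each fold gives an 80% train / 20% test partition.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
y_true, y_score = [], []
for train_idx, test_idx in kf.split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    y_true.extend(y[test_idx])
    y_score.extend(clf.predict_proba(X[test_idx])[:, 1])

# Metrics are computed once every fold has been tested, as in the paper.
auroc = roc_auc_score(y_true, y_score)
```

Pooling predictions across folds before scoring, rather than averaging per-fold scores, matches the "once every fold has been tested" phrasing above.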
\section{Experimental results}
\subsection{Evaluation Metrics}
Performance of the model was evaluated by a number of metrics: accuracy, F1-score, precision, recall, and AUROC \cite{ROC}. F1 is calculated as the harmonic mean of precision and recall. AUROC corresponds to the probability that a classifier will rank a randomly chosen positive example higher than a randomly chosen negative example and generally ranges between 0.5 being a random classifier and 1.0 being a perfect predictor.
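For concreteness, the reported metrics can be computed with scikit-learn; the labels below are purely illustrative.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
p = precision_score(y_true, y_pred)    # 3 TP / (3 TP + 1 FP) = 0.75
r = recall_score(y_true, y_pred)       # 3 TP / (3 TP + 1 FN) = 0.75
f1 = f1_score(y_true, y_pred)          # harmonic mean of p and r

# F1 is the harmonic mean of precision and recall, as stated above.
assert abs(f1 - 2 * p * r / (p + r)) < 1e-12
```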
\subsection{Classification Results}
In Table \ref{afro_test} we show the classification results our model achieves on each category. Landsat 8 performs better than Sentinel 1 on every category; we show results for Sentinel 1 on our four highest scoring categories. The first column displays the proportion of images that have label 1 in that category to compare with the accuracy our model achieves.
On \emph{electricity} and \emph{sewerage}, our best model achieves AUROC greater than 0.85. In particular, the model achieves an F1 score of 0.873 on \emph{electricity}. Our model does not perform as effectively on other variables, such as \emph{market stalls}, \emph{health clinic}, and \emph{police station}. This is not surprising since Landsat 8 and Sentinel 1 operate at a resolution lower than that needed to resolve individual objects that signify the presence of these facilities. With the better performing categories, the imagery still cannot resolve individual electricity lines, roads, or water tanks at 30m resolution; however, the structures in aggregate might contribute to different spectral signatures. This means that the classification is likely relying on large-scale proxies, such as urban sprawl and geographical features, that correlate with the class values.
\section{Baseline Models}
We compare our model's performance with several baselines built from different input sources. To gauge the difficulty of the task, we also compare against an idealized baseline that uses (expensive to collect) survey labels to make predictions. We suggest that this oracle defines a reasonable upper bound on attainable performance for this dataset.
\begin{table}[h!]
\centering
\caption{We compare our model with four baselines on AUROC scores. Our Landsat 8 models outperform nightlights and OSM models and performs slightly better or comparably with nearest neighbor spatial interpolation. Performance on three infrastructure outcomes is comparable with the oracle.}
\label{baseline_results}
\begin{tabular}{c|ccc|c|c}
\hline
Infrastructure & Nightlights & OSM & Spatial & Oracle & L8 \\
\hline
\hline
Electricity & 0.79 & 0.73 & 0.78 & 0.89 & \textbf{0.88} \\
Sewerage & 0.75 & 0.77 & 0.78 & 0.89 & \textbf{0.86} \\
Piped Water & 0.73 & 0.73 & 0.75 & 0.89 & \textbf{0.74} \\
Road & 0.67 & 0.68 & 0.74 & 0.79 & \textbf{0.79} \\
Bank & 0.57 & 0.70 & 0.67 & 0.93 & \textbf{0.73} \\
Post Office & 0.56 & 0.64 & 0.70 & 0.92 & \textbf{0.71} \\
Market Stalls & 0.50 & 0.62 & 0.66 & 0.84 & \textbf{0.66} \\
Health Clinic & 0.52 & 0.61 & 0.64 & 0.85 & \textbf{0.66} \\
Police Station & 0.54 & 0.63 & 0.66 & 0.90 & \textbf{0.65} \\
\hline
\end{tabular}
\end{table}
\begin{figure*}[!htb]
\centering
\begin{minipage}{0.22\textwidth}
\centering
\includegraphics[scale=0.23]{l8_3000.png}
\end{minipage}
\begin{minipage}{0.22\textwidth}
\centering
\includegraphics[scale=0.23]{s1_3000.png}
\end{minipage}
\begin{minipage}{0.22\textwidth}
\centering
\includegraphics[scale=0.23]{dmsp_3000.png}
\end{minipage}
\begin{minipage}{0.22\textwidth}
\centering
\includegraphics[scale=0.23]{viirs_3000.png}
\end{minipage}
\caption{Visualization of four satellite imagery sources we use in this paper. Each image is centered around the same geolocation. From left to right: Landsat 8 (multispectral), Sentinel 1 (synthetic aperture radar), DMSP (coarse nightlights), and VIIRS (nightlights). We find that when coupled with the neural network architectures we considered, Landsat 8 is the most informative source of information about infrastructure quality, followed by Sentinel 1.}
\label{satellites}
\end{figure*}
\subsection{Nightlights Intensity}
\citet{jean2016combining} used nighttime light intensities as a proxy for poverty level. Since poverty and infrastructure are closely related, we use nighttime lights as a baseline predictor for infrastructure level. For example, we expect nightlight intensity to be a good proxy for electricity access.
We use nighttime light intensity data from the Defense Meteorological Satellite Program (DMSP) \cite{dmsp}, imaged in 2013 with a resolution of 30 arc-seconds, and Visible Infrared Imaging Radiometer Suite (VIIRS) \cite{viirs}, imaged in 2015 with a resolution of 15 arc-seconds.
For each survey response, we take a $7\times 7$ DMSP or $14\times 14$ VIIRS patch of pixels centered on the geolocation. For both sources, this corresponds to roughly a $6km$ square, which matches the area coverage of the cropped Landsat images we used in our best model. Figure~\ref{satellites} visualizes all four satellite imagery sources used in this paper for one geolocation. We run a logistic regression classifier for each response variable using cross-validated parameters, and we take the prediction of the better-performing nightlights satellite for each variable. Table~\ref{baseline_results} shows full results from this baseline.
As expected, nightlights perform quite well at predicting \emph{electricity} (AUROC of 0.79), and has some predictive power with water, sewerage and roads. However, its performance in other outcomes is only slightly better than random chance (AUROC only a little better than 0.5). Using nightlights thus offers only a limited window into infrastructural provisions by proxying human activity as light emissions and fails to attend to facilities that may be present without such evidence.
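A minimal sketch of this baseline: flattened nightlight patches fed to a logistic regression with cross-validated regularization. The patches and labels here are synthetic stand-ins for the DMSP/VIIRS pixel values and survey responses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# Hypothetical stand-in: 7x7 nightlight patches; brighter patches labeled 1.
rng = np.random.default_rng(1)
patches = rng.random((400, 7, 7))
labels = (patches.mean(axis=(1, 2)) > 0.5).astype(int)

X = patches.reshape(len(patches), -1)        # flatten each patch to a feature vector
clf = LogisticRegressionCV(cv=5, max_iter=2000).fit(X, labels)
acc = clf.score(X, labels)
```

`LogisticRegressionCV` selects the regularization strength by internal cross-validation, matching the "cross-validated parameters" mentioned above.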
\subsection{OpenStreetMap}
OpenStreetMap (OSM) is a collaborative project for creating a map of the world with crowdsourced information. Users and organizations upload georeferenced tags about anything they would like to identify on the map. OSM contains a wealth of information on infrastructure where it is available. However, because of its crowdsourced nature, the data is less reliable compared to professional surveys \cite{helbich2012comparative}.
For each enumeration area in a $15km \times 15km$ bounding box around the center geocoordinate, we extract the total number of highways and buildings in OSM. We expand the set of input features for every area with several non-linear transformations on these counts, including $log$, \emph{square root}, and highway-to-building $ratios$. We normalize each feature to zero mean and unit standard deviation, and then train logistic regression, support vector machine, and random forest classifiers on the set of input features. The logistic regression performs best.
We display results in Table~\ref{baseline_results}.
With AUROCs above 0.7 for \emph{electricity}, \emph{piped water}, and \emph{sewerage}, the results indicate meaningful predictive power. OSM performs worse than nightlights on electricity access but generally better on the other tasks. Surprisingly, the OSM features are only weakly predictive of the \emph{road} category (AUROC 0.68), even though highway counts are among the inputs. Overall, although OSM is imperfect, it provides useful insight into infrastructure quality, achieving AUROCs between $0.62$ and $0.77$ and discriminating high from low infrastructure quality in enumeration areas across the African continent.
\subsection{Spatial Interpolation}
We also compute baseline performance on the infrastructure outcomes using spatial interpolation. For each enumeration area, we nonparametrically assess how predictive its latitude and longitude are of each infrastructure variable. We uniformly sample 80\% of the enumeration areas as the training set and, for each survey response in the test set, use nearest neighbor interpolation: the predicted value is the infrastructure label of the closest training neighbor.
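Nearest neighbor interpolation on coordinates is equivalent to a 1-nearest-neighbor classifier. A sketch with synthetic coordinates and a toy spatial label pattern:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy geolocated labels: a simple north/south spatial pattern.
rng = np.random.default_rng(3)
coords = rng.uniform(-10, 10, size=(500, 2))   # (latitude, longitude)
labels = (coords[:, 0] > 0).astype(int)

# 80/20 split, then label each test point with its nearest training neighbor.
split = rng.permutation(500)
train, test = split[:400], split[400:]
nn = KNeighborsClassifier(n_neighbors=1).fit(coords[train], labels[train])
pred = nn.predict(coords[test])
acc = float((pred == labels[test]).mean())
```

With `n_neighbors=1` the classifier simply copies the label of the closest training neighbor, which is the interpolation rule described above.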
Our spatial interpolation model achieves better performance than the nightlights and OSM models but has lower performance than the Landsat 8 models on the highest performing infrastructure outcomes, especially \emph{electricity} and \emph{sewerage}. This indicates that although geographic location is a non-negligible predictor of infrastructure development, satellite imagery is able to extract deeper and more useful insights. Additionally, since spatial interpolation methods are a long-established approach to survey data interpolation \cite{reibel2007geographic}, we suggest that our model can be used to improve how survey samples are interpolated to larger regions.
\begin{figure*}[!htb]
\centering
\begin{minipage}{0.22\textwidth}
\centering
\includegraphics[scale=0.22]{l8_tp.png}
\end{minipage}
\begin{minipage}{0.22\textwidth}
\centering
\includegraphics[scale=0.22]{l8_fp.png}
\end{minipage}
\begin{minipage}{0.22\textwidth}
\centering
\includegraphics[scale=0.295]{burundi_elec_tp.png}
\end{minipage}
\begin{minipage}{0.22\textwidth}
\centering
\includegraphics[scale=0.297]{elec_fp_1.png}
\end{minipage}
\caption{Predictions from left to right: true positive \emph{piped water} (Egypt, urban), false positive \emph{piped water} (Malawi, rural), true positive \emph{electricity} (Burundi, urban), false positive \emph{electricity} (Burkina Faso, rural).}
\label{vis}
\end{figure*}
\subsection{Cross-label Predictions}
Finally, we construct an \textit{oracle} baseline to assess how well the high quality infrastructure labels predict one another. In this baseline, we use all other observed variables in the survey data to predict one held-out variable. That is, if there are $n$ infrastructure quality variables, we learn parameters $W_{ij}$ such that for all $i \leq n$,
\begin{equation}
\tilde{Y}_i = \sigma\bigg(\sum_{j \neq i} W_{ij}Y_j\bigg)
\end{equation}
where $\sigma$ is the sigmoid function. We fit the parameters $W_{ij}$ by minimizing the cross-entropy loss between $\tilde{Y}_i$ and $Y_i$. We find high predictability between the infrastructure labels, shown in Table \ref{baseline_results}: \emph{electricity}, \emph{piped water}, and \emph{sewerage} each achieve 0.89 AUROC.
These results offer a useful comparison for our model. If our model achieves performance similar to the oracle's, then its predictive power is as potent as if it predicted a set of concrete infrastructure labels that were correlated with the target outcome variable. Additionally, the oracle represents the predictive performance using \emph{expensive} and \emph{limited} survey data, whereas satellite imagery is \emph{cheap} and \emph{widely available}. Our best model on \emph{electricity}, \emph{piped water}, and \emph{road} performs comparably to the oracle.
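The oracle amounts to a leave-one-label-out logistic regression. A sketch with synthetic correlated labels standing in for the survey variables (a shared "development level" induces the correlation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Toy correlated binary labels standing in for n infrastructure variables.
rng = np.random.default_rng(4)
n_areas, n_vars = 600, 5
base = rng.random(n_areas)                    # shared latent "development" level
Y = (base[:, None] + 0.3 * rng.normal(size=(n_areas, n_vars)) > 0.5).astype(int)

aurocs = []
for i in range(n_vars):
    others = np.delete(Y, i, axis=1)          # all labels except the held-out one
    clf = LogisticRegression(max_iter=1000).fit(others, Y[:, i])
    aurocs.append(roc_auc_score(Y[:, i], clf.predict_proba(others)[:, 1]))
```

A logistic regression on the remaining labels is exactly the sigmoid-of-weighted-sum model in the equation above.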
\section{Generalization Capabilities}
Ultimately, we are interested in deploying our deep learning approach to provide high-resolution maps of infrastructure quality that can be updated frequently based on relatively inexpensive remote sensing data. To this end, we evaluate the generalization capabilities of the model where we attempt to make predictions on data the model has not been explicitly trained on.
\subsection{Urban-Rural Split}
Each enumeration area in the Afrobarometer Round 6 survey is classified as being \emph{urban} or \emph{rural}. Urban and rural areas in Africa have significantly different infrastructural provisions. Urban areas are associated with improved water, and access to sanitation facilities is twice as great in urban areas compared with rural areas \cite{bentley2015inadequate}. Additionally, urban and rural areas have large visual differences in the satellite imagery that make them likely to be correlated with the other infrastructure metrics.
We measure the simple matching coefficient over all enumeration areas between the urban/rural variable and several infrastructure quality variables. Given binary variables $Y_i$ and $Y_j$, the simple matching coefficient over a set of observations is the number of samples on which $Y_i$ and $Y_j$ agree, divided by the total number of samples. The simple matching coefficients between \emph{urban/rural} and \emph{electricity}, \emph{sewerage}, and \emph{piped water} are 0.70, 0.79, and 0.71, respectively.
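The simple matching coefficient is a one-liner; for example:

```python
import numpy as np

def simple_matching_coefficient(a, b):
    """Fraction of samples on which the two binary variables take the same value."""
    a, b = np.asarray(a), np.asarray(b)
    return float((a == b).mean())

# Two matches out of four samples -> coefficient 0.5.
smc = simple_matching_coefficient([1, 0, 1, 1], [1, 1, 1, 0])
```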
To address the concern that our model might be classifying the urban and rural indicators as a proxy for infrastructure quality, we evaluate the performance of our model on infrastructure variables within the \emph{urban} and \emph{rural} classes. Table \ref{afro_test_urban} shows the classification results on our best performing infrastructure metrics when the model is trained on only urban or rural areas. The AUROC scores are lower for all infrastructure variables, but not by enough to suggest that the model in the original classification task is exclusively learning to classify based on the urban/rural category. The AUROC of our highest performing outcome \emph{electricity} drops from 0.88 across both classes to 0.76 in the \emph{urban} class and 0.82 in the \emph{rural} class.
\begin{table*}[!htb]
\centering
\caption{Results on \emph{electricity}, \emph{sewerage}, and \emph{piped water} when we stratify urban vs. rural areas. The model still performs well, indicating that it is not simply distinguishing urban and rural areas but is actually able to explain the variation within these classes.}
\label{afro_test_urban}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Satellite & Infrastructure & Urban/Rural & Balance & Accuracy & F1 Score & Precision & Recall & AUROC \\
\hline
L8 & Electricity & Urban & 0.953& 0.861 & 0.923& 0.971 & 0.880 & 0.763 \\
L8 & Electricity & Rural & 0.454& 0.741 & 0.724& 0.701 & 0.748 & 0.816 \\
L8 & Sewerage & Urban&0.661& 0.687 & 0.729& 0.853 & 0.636 & 0.794 \\
L8 & Sewerage & Rural&0.089& 0.897 & 0.430& 0.425 & 0.436 & 0.807 \\
L8 & Piped Water & Urban&0.861& 0.807 & 0.885& 0.907 & 0.864 & 0.758 \\
L8 & Piped Water & Rural&0.408& 0.628 & 0.599& 0.535 & 0.680 & 0.686 \\
\hline
\end{tabular}
\end{table*}
\subsection{Country Hold-out}
In the original classification task, we sample our training and test sets uniformly among all 36 countries. With high probability, every country has data points that appear in the training set. However, we would also like to know whether deploying our model in an unobserved country leads to similarly strong classification results.
We perform an experiment where we validate our model on new countries that it has not trained on before. In this experiment, we train on the enumeration areas of 35 countries, holding out one country, and then test our model on the enumeration areas of the held-out country.
Table \ref{country_ho} shows the results on held-out countries Uganda, Tanzania, and Kenya, three of the countries with the most representation in the Afrobarometer survey. We train with strong regularization values to prevent overfitting on the trained countries. The results are not as strong as with the uniform sampling strategy; for example, we go from AUROC of 0.853 on \emph{electricity} on the test set when training with uniform sampling to AUROC of 0.637 on Ugandan enumeration areas when Uganda is held-out.
\begin{table}[h!]
\centering
\caption{Country hold-out results. We evaluate the performance of our model in a country not seen during training, simulating a realistic but challenging deployment situation. Compared to Table \ref{afro_test}, performance drops but the model maintains its usefulness for some infrastructure variables.}
\label{country_ho}
\begin{tabular}{cccccc}
\hline
Country & Infrastructure & Balance & Accuracy & AUROC \\
\hline
\hline
Uganda & Electricity & 0.348 & 0.464 & 0.637 \\
& Sewerage & 0.076 & 0.424 & 0.774 \\
& Piped Water & 0.268 & 0.527 & 0.638 \\
Tanzania & Electricity & 0.521 & 0.500 & 0.541 \\
& Sewerage & 0.103 & 0.502 & 0.578 \\
& Piped Water & 0.432 & 0.445 & 0.588 \\
Kenya & Electricity & 0.846 & 0.703 & 0.518 \\
& Sewerage & 0.137 & 0.714 & 0.813 \\
& Piped Water & 0.418 & 0.473 & 0.602 \\
\hline
\end{tabular}
\end{table}
However, this is expected: the distribution of satellite images differs between countries, so enumeration areas in a held-out country likely have geographic differences that make the salient features learned for classification less predictive.
\subsection{Fine-tuning Held-out Countries}
Though the country hold-out results suggest that the model does not immediately generalize to new countries, we aim to show that transfer learning with a small labeled sample from a new country generalizes well to a significantly larger sample in that country, provided the training sample is representative.
In this experiment, we repeat the procedure of the country hold-out experiments, but fine-tune the trained model on samples of the held-out countries. We train with L2 regularization of 0.1 and with different proportions of uniformly sampled data from the held-out country, from 0\% up to 80\%, where the lower end (0\%) is equivalent to the hold-out experiment, and the upper end (80\%) trains with the same amount of data as if the country was trained on in the initial training phase. We freeze the weight updates of the ResNet's parameters to obtain the final layer as visual features and then train a logistic regression with those features to predict the class label. We require the training and testing distribution to have the same proportion of positive labels.
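The frozen-features fine-tuning step can be sketched as follows. The feature matrix is a synthetic stand-in for the frozen ResNet's final-layer activations, and the regularization constant is illustrative (scikit-learn's `C` is the inverse of the L2 penalty).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in: `features` plays the role of the frozen ResNet's
# final-layer activations for images from the held-out country.
rng = np.random.default_rng(5)
features = rng.normal(size=(1000, 32))
labels = (features[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)

accs = {}
for frac in (0.2, 0.4, 0.8):                   # fraction used for fine-tuning
    n = int(frac * len(features))
    clf = LogisticRegression(C=10.0, max_iter=1000)   # C = 1 / (L2 strength)
    clf.fit(features[:n], labels[:n])
    accs[frac] = clf.score(features[n:], labels[n:])
```

Because only the logistic-regression head is trained, the fine-tuning cost is small even when the base network is large.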
\begin{figure}[h!]
\centering
\includegraphics[scale=0.60]{electricity_holdout.png}
\caption{AUROC scores on held-out countries when fine-tuned on samples of data. The x-axis corresponds to the percentage of data from the held-out country that the model was fine-tuned with.}
\label{electricity_holdout}
\end{figure}
Our best results on Uganda, Tanzania, and Kenya show that when each of these countries is held out, only 20\% of the country's data is needed to yield approximately the same AUROC as if the country was sampled uniformly in the training set. Additionally, training with 80\% of the country's data yields test scores as good as or better than the average of all the other countries that the model was trained on, with AUROC up to 0.96 and accuracy up to 92\%.
Figure \ref{electricity_holdout} shows the AUROC results for Uganda, Tanzania, and Kenya on the \emph{electricity} category as a function of the amount of data trained on. These results suggest that the model can be fine-tuned on limited data in a new country to good performance on a much larger test set in that country.
\section{Conclusion and Future Work}
Data on infrastructure quality outcomes in developing countries is lacking, and this work explored the use of globally available remote sensing data for predicting such outcomes. Using Afrobarometer survey data, we introduced a deep learning approach that demonstrates good predictive ability.
We experimented with Landsat 8 (multispectral) and Sentinel 1 (SAR) data, and obtained the best overall performance on Landsat 8. We believe the superior performance of Landsat 8 when compared to Sentinel 1 follows from Landsat 8 having RGB bands, allowing it to better use the ResNet's pretrained parameters. Sentinel 1 has no RGB bands.
We found the best performance on \emph{electricity}, \emph{sewerage}, \emph{piped water} and \emph{road}. Accuracy far surpasses balance and random prediction, and the AUROC scores are greater than 0.85 on \emph{electricity} and \emph{sewerage}. The model significantly outperforms the OSM and nearest neighbor interpolation baselines on these two variables by an average of 0.1 AUROC. Results also surpass the nightlights baseline. Furthermore, these results are on par with the oracle baseline, indicating that the model is making meaningful and accurate predictions. Intuitively, and as our models show, these variables are feature-defined structures and infrastructure systems that are uniquely distinguishable from satellite imagery. Figure \ref{best_vis} shows the distribution of all test set predictions for \emph{sewerage}, \emph{electricity}, and \emph{piped water}.
The first two images from left to right in Figure \ref{vis} show sample satellite images where our model predicted a true positive and a false positive respectively for access to \emph{piped water}; the former shows clear indication of high activity with a large swath of developed buildings and roads, while the latter may have confused the model due to a high concentration of activity at the center of the image. Similarly, the right two images show true and false positive predictions for \emph{electricity}. These predictions both demonstrate a similar proclivity for developed areas of buildings and roads.
We found poor performance on outcomes like \emph{market stalls}, \emph{health clinic}, and \emph{police station}; such outcomes barely outperformed random guessing and often underperformed the OSM baseline. This makes sense, as there are few features resolving the presence of these particular buildings in satellite imagery, and OSM data may offer more informative features.
The model exhibits high confidence in most predictions, and training performance is significantly better than testing performance, but we do not observe complete overfitting. Hyperparameter tuning was not able to resolve these issues while maintaining optimal model performance. Turning towards the data, we found that images can appear highly similar even when they have different classifications. Possible solutions to this problem include both more data and deeper, more flexible models, although without sufficient data, the latter approach risks overfitting.
Our results demonstrate an exciting step forward in remote sensing and infrastructure mapping, far surpassing the OSM baseline. However, this task is presently underexplored, and we believe further improvements could be made. More local to the model, Sentinel 1 performance could likely be improved and more data could provide superior performance. Furthermore, transfer learning from other datasets, such as OSM, to these tasks offers a potential way to create a more effective model by learning to associate ground-level features and other observations with satellite imagery.
Within the more general task of infrastructure mapping, we have also identified valuable future work. First, using the previous rounds of Afrobarometer, this model could be tested on its ability to generalize temporally. The ability to extrapolate how infrastructure has developed over time using contemporaneous imagery would be another exciting step in development. Second, a model that simultaneously trains using images from different satellites is worthy of further investigation. Third, the 10m and 30m resolution imagery used in this project is far from the resolution of today's satellites. We expect that higher resolution data would lead to better results and believe such an approach worthy of investigation. Finally, a model that could take into account prior beliefs about infrastructure availability could offer a powerful tool for practical use.
For all these endeavors, data will form a core issue. The quality of a deep model heavily relies on adequate data available, and a large focus should be towards making better use of existing image and survey data, through strong cataloging and collating efforts. However, our results demonstrate the proof of concept that satellite imagery can be used to predict infrastructure quality.
\section{Acknowledgments}
We would like to acknowledge Zhongyi Tang, Hamza Husain, and George Azzari for support in data collection, and the Stanford Center on Global Poverty and Development for financial support.
\section{Introduction}
Simultaneous multiwavelength timing and spectral observations are very important in studying emission mechanisms of
astrophysical objects with high energy emission, such as X-ray
pulsars and X-ray binary systems \citep{YadiRomani95,MuslimovHarding03}, to model their phase resolved spectra \citep{HU17},
and to constrain their magnetic field structure.
Many of these objects, particularly pulsars with known radio pulsations, require high-precision alignment of their radio and high-energy light curves.
Apart from constraining the nature of pair-producing gaps, high-precision alignment of their light curves can shed light on the nature of giant pulses (GPs)\footnote{Intense nanosecond-wide pulses, with typical intensities about 1000 times the mean pulse intensity, seen sporadically at radio frequencies in PSR B0531+21 and some other pulsars.} in some of these pulsars \citep{Joshi2004,JohnstonRomani04,JhRomanireview04,lundgren1995giant,mikami2014,hankins2007radio}.
Unfortunately, no single instrument can observe across all these bands. In radio telescopes, the data are recorded in the form of voltages as a function of time, whereas high-energy detectors count individual photons. The variability of both the radio and the high-energy emission necessitates accurate time synchronization when radiation at different frequencies arrives at different observatories. This requires calibrating the delays in the data acquisition and processing pipelines of each observatory through observations of sources with known time alignment. The calibration of the fixed offsets for the instruments on board the first Indian multiwavelength space observatory, ASTROSAT, is presented in this paper for the first time.
ASTROSAT, launched in October 2015, has five instruments on board \citep{singh2014astrosat}. These are the
Cadmium Zinc Telluride Imager \citep[\textit{CZTI};][] {bhalerao2016cadmium},
Large Area X-ray Proportional Counter
\citep[\textit{LAXPC};][]{yadav2016large},
Soft X-ray Telescope \citep[\textit{SXT};][]{singh2016orbit}, Ultra Violet
Imaging Telescope \citep[\textit{UVIT};][]{hutchings2014uvit}, and the Scanning Sky Monitor (\textit{SSM}). This is a unique observatory providing multiwavelength coverage from 1300~\AA\ to 380 keV.
We have used Indian ground-based facilities such as the Giant Meterwave Radio Telescope \citep[\textit{GMRT};][]{swarup1991giant} and the Ooty Radio Telescope \citep[\textit{ORT};][]{swarup1971large} simultaneously with the ASTROSAT to calibrate the fixed timing offsets of various instruments. For this, we needed to use a standard calibrator with well known properties in all bands. We used the brightest high energy pulsar in the sky, the Crab pulsar (PSR B0531+21), for our calibration. PSR B0531+21 emits pulsed radiation from radio to very high energies \citep{abdo2010first}. The average light curve of the
pulsar shows two peaks, with the larger peak at 1.4 GHz defined as
the main pulse (\figurename~\ref{radio_profile}).
\begin{figure}
\includegraphics[scale=0.24]{all_prof_aligned.pdf}
\caption{Average pulse profile of PSR B0531+21, obtained as a phase-coherent average over all data from the different instruments used in this study. The data were aligned using the offsets estimated in this study. The larger peak is called the main pulse (MP), while the smaller peak is referred to as the inter-pulse (IP).
The panels show the average profiles obtained with FERMI archival data, the CZTI, the GMRT and the ORT, from bottom to top respectively. The radio profiles were obtained with the GMRT at 1390 MHz and with the ORT at 334.5 MHz.}
\label{radio_profile}
\end{figure}
The pulsar's profiles at low radio frequencies (334.5 MHz and 1.4 GHz) are aligned with the profiles at optical and high energies. The main pulse at high energies leads the radio main pulse by
241 $\pm$ 29 $\mu$s \citep[$>$ 30 MeV;][]{kuiper2003absolute}, 344 $\pm$ 40 $\mu$s \citep[2-30 keV;][]{rots2004absolute} and 280 $\pm$ 40 $\mu$s \citep{kuiper2003absolute}. We used these reported intrinsic offsets to calibrate the instrumental offsets of the instruments aboard ASTROSAT.
This paper is organized as follows. The instruments aboard ASTROSAT and the details of the observations used for this study are described in Section \ref{obs}. The ephemerides for PSR B0531+21 were obtained using high-cadence radio observations at 334.5 MHz with the Ooty Radio Telescope (ORT). The analysis of these data and our calibration method are discussed in Section
\ref{anal}. We conclude with results and discussion in Section \ref{result}.
\section{Observations}
\label{obs}
The radio observations used the ORT and the GMRT, whereas the X-ray observations were carried
out using the CZTI instrument aboard ASTROSAT. We also used publicly available archival data from the Fermi mission\footnote{https://fermi.gsfc.nasa.gov/ssc/data/access/}. These instruments, the observational setup, and the details of the observations are described in this section.
\subsection{The Ooty Radio Telescope (ORT) }
\label{ort}
The ORT is an offset parabolic
cylindrical antenna, 530 m long in the north-south direction and
30 m wide in the east-west direction, sensitive to a single
linear polarization, with a system temperature of
150 K and an antenna gain of 3.3 K/Jy \citep{swarup1971large}.
PSR B0531+21 has been observed as part of a larger pulsar-monitoring program \citep{kjm+18} since 2014 March
at 334.5 MHz with a bandwidth
of 16 MHz. The pulsar was observed daily for 15 minutes as part of this program
and these observations were used to obtain monthly ephemeris of the
pulsar as described below. The observations utilized the pulsar
back-end at the ORT, called PONDER \citep{pondernaidu}, which employed
real-time coherent dedispersion to obtain directly time-stamped average
profiles of the pulsar using the monthly ephemeris generated by us.
In PONDER, data acquisition is started at the rising edge of
the minute pulse derived from the Global Positioning System (GPS), and
data are sampled in synchronization with the observatory frequency standard,
which was a Rubidium clock. The typical instrumental uncertainty
on the time stamp was 200 ns. Observations from 2015 September 01
(MJD 57226) to 2017 January 14 (MJD 57767) were used in this work.
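For context on the coherent dedispersion performed by PONDER, the frequency-dependent delay it removes follows the standard cold-plasma dispersion law, $\Delta t = K\,\mathrm{DM}\,(f_1^{-2} - f_2^{-2})$ with $f$ in MHz. The sketch below evaluates this delay across the ORT's 16 MHz band; the dispersion measure used for the Crab pulsar is approximate.

```python
# Standard pulsar dispersion constant, K ~ 4.1488e3 s MHz^2 cm^3 pc^-1.
K = 4.1488e3
dm = 56.8                 # approximate dispersion measure of the Crab pulsar, pc cm^-3

def dispersion_delay(f_lo_mhz, f_hi_mhz, dm):
    """Extra arrival-time delay of the band's low edge relative to its high edge, in seconds."""
    return K * dm * (f_lo_mhz ** -2 - f_hi_mhz ** -2)

# Delay across the 16 MHz ORT band centred at 334.5 MHz:
delay = dispersion_delay(334.5 - 8.0, 334.5 + 8.0, dm)
```

The delay across the band is roughly 0.2 s, many times the pulsar's 33 ms period, which is why dedispersion is essential before folding the profile.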
\subsection{The Giant Meterwave Radio Telescope (GMRT)}
\label{gmrt}
The GMRT is an interferometer consisting of thirty 45-m fully
steerable antennas \citep{swarup1991giant}, 14 of which are arranged
in a compact array within 1 km while the rest are distributed along three
arms. We used the arm antennas nearest to the compact array along with the 14 compact
array antennas to form a phased array at 1390 MHz, with an overall
gain of 3.5 K/Jy. The two linear polarizations across 16 MHz bandwidth
from each antenna were digitized at Nyquist rate. The resultant
time series was transformed with a 512 point fast Fourier
transform (FFT) to obtain 256 channel voltages in the frequency
domain. These were then compensated in the Fourier domain
for the instrumental phase of each antenna, determined by observing a
point source (3C147) before each observation. The phase-compensated
voltages from all antennas in the phased array were then added,
and this coherent sum was recorded as 256-channel complex voltages,
with a time stamp for each block of 256 channels derived from
the observatory Rubidium frequency standard, disciplined using the one-pulse-per-minute
output of GPS. The recorded voltages were processed offline
as described in Section \ref{anal}. PSR B0531+21 was observed with the GMRT and the ORT simultaneously with the ASTROSAT observations at four epochs. At the other 13 epochs, GMRT observations were not possible, so the ASTROSAT observations were accompanied by the ORT only.
The details of observations used are given
in Table \ref{obsdet}.
\begin{table*}[ht]
\begin{tabular}{|l|l|l|l|}
\hline
{\bf Telescope}& {\bf BW or Energy range}& {\bf Start MJD}& { \bf Stop MJD}\\
\hline
ORT& 16 MHz& 57226& 57767\\
\hline
GMRT legacy& 16 MHz& 57316& 57772\\
\hline
ASTROSAT-CZTI& 20-150 keV& 57303& 57771\\
\hline
Fermi-LAT& 0.1-300 GeV& 57284&57800\\
\hline
\end{tabular}
\\
\caption{Brief summary of the observations. The participating telescopes and their payloads, bandwidth or the energy range employed for observations, and the range of MJD for which the data have been used are listed. } \label{obsdet}
\end{table*}
\subsection{ASTROSAT-CZTI}
\label{czt}
The Cadmium Zinc Telluride Imager (CZTI) instrument \citep{bhalerao2016cadmium} aboard ASTROSAT is a two-dimensional coded-mask imager with solid-state pixelated Cadmium Zinc Telluride detectors of 976 cm$^2$ total geometric area, divided into four quadrants of 4096 pixels each. The instrument operates in the energy range 20--150 keV for direct imaging, providing an angular resolution of $\sim 8$~arcmin within a field of view of $4.6^{\circ} \times 4.6^{\circ}$. Events recorded by the CZTI are time stamped with a resolution of 20 $\mu$s as per the instrument clock. On ASTROSAT, the primary time standard is provided by a satellite positioning system (SPS), which generates a GPS-synchronized UTC reference. A synchronizing pulse is sent to all X-ray payloads once every 16 UTC seconds. The
local clock values of all the instruments and of the SPS are recorded at each such pulse into a time correlation table (TCT). Events recorded by the CZTI are assigned UTC time stamps by interpolation in the TCT. The accuracy of the absolute time stamps thus assigned to CZTI events is estimated to be within $\sim 3~\mu$s (standard deviation) \citep{bhattacharya2017}. Unlike most other space observatories, the event time stamps in ASTROSAT are provided in the UTC system instead of TT. To derive the barycentric arrival time of each event, these UTC time stamps are processed, along with information on the orbital motion of ASTROSAT, through a modified version of the well-known AXBARY task of the NASA HEASOFT package. The modification, made available under the name ``as1bary'', takes into account the additional bookkeeping required for leap seconds while processing UTC time stamps.
\section{Analysis}
\label{anal}
\subsection{Analysis of radio data}
\label{analrad}
As mentioned in Section \ref{ort}, the data obtained at the ORT
were already available as coherently dedispersed time-stamped profiles,
which were used in the timing analysis described later. The GMRT
spectral voltage data were coherently dedispersed offline using a
pipeline developed by us. This pipeline first converts the spectral
voltages to a voltage time series by taking an inverse FFT. The
time series is then convolved with a unity-gain phase-delay
filter representing the dispersive effect of the interstellar medium,
as described in \cite{pondernaidu}. Both the coherently
dedispersed time series as well as an average profile, folded
using the monthly ephemeris generated with the ORT data translated to the start time of the observations, were
recorded for further analysis after integration to a resolution of
1 $\mu$s. The dispersion measure\footnote{Dispersion measure
is the integrated column density of electrons along the line
of sight to the pulsar, expressed in units of pc\,cm$^{-3}$.} (DM)
used for the real-time coherent dedispersion carried out in the
PONDER back-end at the ORT was the value provided in the
Jodrell Bank monthly
Jodrell Bank monthly
ephemeris\footnote{http://www.jb.man.ac.uk/pulsar/crab.html} \citep{lyne199323}
nearest to the epoch of
observations. As the GMRT data were coherently dedispersed
offline, subsequent to observations, DM derived from our
timing analysis was used in this case to obtain the folded average
profiles. The profiles, obtained with the ORT and the GMRT, were converted
to PSRFITS\footnote{PSRFITS is an open data storage format,
which is based on the flexible image transport system
(FITS) \citep{hsm04}} format.
In addition to pulse profiles, the off-line analysis of the GMRT data
also yielded data dedispersed to 64 sub-bands within the 16 MHz band-pass.
These were folded into 32 sub-integrations for each of the 600 s observations at the
GMRT and converted to PSRFITS.
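The coherent dedispersion step described above can be sketched in code. The following is a minimal, illustrative numpy implementation of the standard phase-only interstellar-medium transfer function (the coherent-dedispersion chirp, as described in \cite{pondernaidu}); the band parameters and DM are chosen only for the demonstration and are not taken from the actual pipeline. Because the filter is a pure phase rotation in the Fourier domain, applying its conjugate exactly undoes the (circular) dispersion of a test impulse:

```python
import numpy as np

# Standard dispersion constant (s MHz^2 pc^-1 cm^3)
D_CONST = 4.148808e3

def chirp(n, bw_mhz, f0_mhz, dm):
    """Phase-only ISM transfer function across the band [f0, f0+bw],
    sampled on the rfft grid (coherent-dedispersion chirp)."""
    df = np.fft.rfftfreq(n, d=1.0 / bw_mhz)  # frequency offset from f0, MHz
    phi = 2 * np.pi * 1e6 * D_CONST * dm * df**2 / (f0_mhz**2 * (f0_mhz + df))
    return np.exp(-1j * phi)

# illustrative parameters only: a 16 MHz band near 1390 MHz, Crab-like DM
n, bw, f0, dm = 4096, 16.0, 1382.0, 56.8
x = np.zeros(n)
x[n // 2] = 1.0                                    # impulse standing in for a pulse
h = chirp(n, bw, f0, dm)
dispersed = np.fft.irfft(np.fft.rfft(x) * h, n)    # ISM smears the impulse
recovered = np.fft.irfft(np.fft.rfft(dispersed) * np.conj(h), n)
print(int(np.argmax(recovered)) == n // 2)         # -> True
```

The real pipeline additionally handles overlap between FFT blocks and works on each of the 256 channels; the sketch only shows the core filter.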
First, a noise-free template was created from the observed
average profiles for a given telescope as described below. The pulse
at 334.5 MHz is broadened due to multipath propagation in the interstellar
medium (\figurename{ \ref{radio_profile}}). Furthermore,
the pulse suffers a variable scatter-broadening at this frequency
due to varying inhomogeneities in the Crab nebula. This
can introduce a systematic error into the estimation of the times-of-arrival (TOAs) depending
on the extent of scatter-broadening in the profiles used for forming
the template. Hence, we chose a high signal-to-noise ratio (S/N)
profile with minimum scatter broadening for
creating a template at this frequency. As the scatter-broadening
is negligible at 1390 MHz (\figurename{ \ref{radio_profile}}),
the template at this frequency was obtained from a
profile generated by aligning and averaging the best average profiles from several epochs of
observations. These profiles were then modeled as a sum of Gaussians,
using tools in the PSRCHIVE package \citep{hsm04}, to obtain noise-free templates.
Separate templates were obtained for the ORT and the GMRT, and these
were aligned with the main pulse (MP) positioned at pulse phase 0.24.
The average profile for each epoch at a given frequency was
cross-correlated with the noise-free template for that frequency
using a Fourier domain method \citep{taylor1992pulsar} to obtain the shift at
each epoch. The time-stamp at each epoch was adjusted by
this shift to obtain the TOA.
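The core of the Fourier-domain template-matching step \citep{taylor1992pulsar} can be illustrated with a short numpy sketch. It recovers an integer-bin shift from the peak of the circular cross-correlation computed via FFTs; the full method also interpolates the fractional-bin shift, which is omitted here, and the Gaussian template shape and bin count are purely illustrative:

```python
import numpy as np

def profile_shift(profile, template):
    """Estimate the phase shift (in bins) of `profile` relative to
    `template` from the peak of their circular cross-correlation,
    computed in the Fourier domain."""
    xc = np.fft.irfft(np.fft.rfft(profile) * np.conj(np.fft.rfft(template)))
    return int(np.argmax(xc))

# toy check: a template rolled by 7 bins is recovered at lag 7
template = np.exp(-0.5 * ((np.arange(256) - 60) / 4.0) ** 2)
profile = np.roll(template, 7)
print(profile_shift(profile, template))  # -> 7
```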
These TOAs were used to refine the pulsar rotation parameters using the pulsar timing package
TEMPO2 \citep{hobbs2006tempo2}\footnote{http://www.atnf.csiro.au/research/pulsar/tempo2/}.
In brief, this technique (called pulsar timing) compares
the observed TOAs with those predicted by an assumed rotation
model of the pulsar, keeping track of every rotation and minimizing the timing residuals through a least-squares fit.
PSR B0531+21 shows rotational irregularities in the form
of timing noise \citep{scotttimingnoise03} and
glitches \citep{crabglitchwang12}.
These irregularities can significantly affect the residuals
leading to phase ambiguities. Thus, closely spaced observations
of pulsar are required to keep track of pulse phase and maintain
phase connection. Our experiment used the ORT for high cadence
observations of the pulsar to achieve the required phase
connection.
\begin{table}[h]
\begin{tabular}{|l|l|}
\hline
Pulsar parameter & Value \\ \hline
RAJ (hh:mm:ss) & 05:34:31.973 \\
DECJ (dd:mm:ss) & +22:00:52.06 \\
F0 (Hz) & 29.6607409(4) E$-$7 \\
F1 (Hz s$^{-1}$) & $-$3.6937842(9) E$-$10 \\
F2 (Hz s$^{-2}$) & 1.1905(3) E$-$20 \\
PEPOCH (MJD) & 57311.000000136 \\
POSEPOCH (MJD) & 40675 \\
DMEPOCH (MJD) & 57311.000000136 \\
DM (pc\,cm$^{-3}$) & 56.7957 \\
PMRA (mas/year) & $-$14.7 \\
PMDEC (mas/year) & 2 \\
WAVE\_OM (year$^{-1}$) & 0.0054325986245627 \\
WAVEEPOCH (MJD) & 57311.000000136 \\
DMMODEL (pc\,cm$^{-3}$) & 56.7957 \\ \hline
\end{tabular}
\\
\caption{ Reference timing solution for PSR B0531+21
after accounting for the timing noise and DM variations using the
multiband observations presented in this paper.}
\label{timsol}
\end{table}
The TOAs from the ORT data were divided into 30-day intervals and
local fits to the spin frequency (F0) and its derivatives (F1 and F2)
were performed at an epoch at the center of each 30-day
interval. These 30-day ephemerides were then used for folding
the high energy data as well as the 1390 MHz GMRT data. The details
of full phase connected timing analysis are described in Section \ref{analtim}.
\begin{figure*}[ht]
\includegraphics[scale=.4]{Timing_noise_post_submission.pdf}
\caption {Phase connected TOAs from the Fermi-LAT (yellow cross markers), the ASTROSAT CZTI (green circles), the GMRT (purple plus markers) and the ORT (gray diamonds) observations. The phase connection was obtained with the high cadence TOAs derived from the ORT observations and then applied to TOAs from other telescopes. The systematic pattern in the timing residuals is due to timing noise. The TOAs for different telescopes are offset with each other due to relative delays in the data acquisition at each telescope.}
\label{ortphsplt}
\end{figure*}
\subsection{Analysis of high energy data}
\label{analhigh}
\subsubsection{Analysis of Fermi-LAT data}
\label{analhighfermi}
We used the available archival data from
Fermi-LAT\footnote{https://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi}
and extracted all events in a 3-deg radius around the
position of PSR B0531+21 in the energy range of 0.1 to 300 GeV.
These were then split into separate event files, each spanning
seven days using the Fermi science
tools\footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/overview.html}. The event times were referenced to the solar system barycenter
(SSB) and the events were folded using the Fermi
plugin \citep{rkp+11} of TEMPO2 with the ephemeris obtained in
Section \ref{analrad}. A template for the averaged
light curve in gamma-ray energies was constructed in a
manner similar to the radio data and was aligned with the
1390 MHz and the 334.5 MHz templates. The TOAs for each seven-day integration
were then derived by cross-correlating with this template and used
in the subsequent timing analysis.
\subsubsection{Analysis of ASTROSAT data}
\label{analhighasat}
As mentioned before, instruments on board ASTROSAT provide
individual photons with time stamps derived from a satellite
positioning system (SPS). The time tags of the photons were
converted to the solar system barycentre using the position
of the satellite in a code called {\it as1bary}.
The barycentered events were then binned across 256 pulse phase bins
using the ephemeris obtained in Section \ref{analrad}. The binned profiles were then written as PSRFITS files.
The CZTI instrument has four quadrant detectors; hence, for a good S/N,
we needed to combine data from all the quadrants.
We checked the alignment of the individual detector
data by folding the photons from each quadrant separately as well
as after combining the data from all quadrants. The profiles
so obtained are shown in \figurename{ \ref{cztquad}}, where the phases were
appropriately aligned using TEMPO2. All the analysis in
this paper uses the data combined from all four quadrants.
Separate profile templates were constructed for the CZTI data
in a manner similar to the radio templates. These were aligned with
the Fermi-LAT, the GMRT and the ORT templates. Finally, the CZTI
template was cross-correlated with the observed profiles for CZTI to obtain TOAs in a manner similar to Fermi data. These were
subsequently used in the timing analysis described in the next section.
Timing offsets evaluated from observed differences in the Crab pulsar phase may suffer from ambiguities amounting to integral multiples of the pulse period. To test whether the offset between AstroSat-CZTI and Fermi could be as large as or larger than 33 milliseconds, we compared the detection times of gamma-ray bursts by these two missions. In particular, the bright, short burst GRB170127C \citep{bissaldi,vidushi} provided the best S/N for this test. We binned the UTC light curves from Fermi-GBM and AstroSat-CZTI at 10~ms resolution. The cross-correlation function of these two light curves showed a sharp peak at a delay of 0.0~ms with a formal error of 2.3~ms ($1\sigma$). The relative distance between the two spacecraft, projected in the direction of the GRB, was 877~km at the time of this detection, corresponding to a travel time difference of 2.9~ms. We therefore conclude that the difference in the absolute time stamps of Fermi and AstroSat-CZTI is much less than 33~ms and hence no integral-period ambiguity is expected in the relative phase comparison of the Crab pulsar between these two missions.
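The quoted light travel-time difference follows directly from the projected spacecraft separation; a one-line arithmetic check, using the numbers from the text:

```python
# 877 km projected separation -> light travel-time difference in ms
c_km_s = 299_792.458            # speed of light, km/s
delay_ms = 877.0 / c_km_s * 1e3
print(round(delay_ms, 1))       # -> 2.9
```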
\begin{figure}[h]
\includegraphics[scale=0.22]{All_quads.pdf}
\caption{Phase-aligned profiles of PSR B0531+21 from the four quadrant detectors of the CZTI (Q0, Q1, Q2 and Q3).}
\label{cztquad}
\end{figure}
\subsection{Timing analysis }
\label{analtim}
All the radio and high energy TOAs, analyzed using
the high cadence timing solution obtained with the ORT, are
shown in \figurename{ \ref{ortphsplt}}. The timing noise is
clearly visible in this plot and so are the relative offsets
between the telescopes. The assumed parameters of the timing model are given
in Table \ref{timsol} along with the reference epochs.
The timing analysis was done using the pulsar-timing
package TEMPO2. First, a reference timing solution was
obtained by local fits to the ORT high cadence TOAs between
MJD 57282$-$57324 with a model involving the known
astrometric and rotational parameters and the DM of the pulsar.
The fitted ephemeris was then used to phase-connect
the TOAs of all the telescopes as shown in \figurename{\ref{ortphsplt}}. This reference ephemeris was the
starting point for the subsequent analysis described below.
The main objective of this work was to estimate the offset
in the data acquisition pipeline of ASTROSAT. This was done
by comparing the phase of the main pulse (or the TOAs) seen at the GMRT and ASTROSAT in simultaneous observations. The comparison is complicated by both time-dependent and frequency-dependent systematics in the TOAs. As is evident from \figurename{\ref{ortphsplt}}, the pulsar shows
considerable timing noise, which is independent of frequency. As the lower frequency TOAs are also affected by frequency-dependent propagation effects, the timing noise was modeled using the regular-cadence Fermi-LAT TOAs instead. These were fitted with a combination of eight sine waves, in addition to the already fitted parameters in the reference ephemeris, to model the red timing noise and obtain white timing residuals, using the FITWAVES model in TEMPO2 \citep{hobbs2006tempo2}.
As the pulsar is located in a dynamic pulsar wind nebula with
nebular filaments, with trapped charged particles, moving
across the line of sight, the DM of the pulsar and the pulse broadening
vary significantly from epoch to epoch. This
introduces a systematic frequency dependent shift in barycentered
TOAs, particularly significant for
those derived from low radio frequency ORT topocentric TOAs.
The typical variation in DM is of the order of
0.01 pc\,cm$^{-3}$, which is equivalent to shifts of 21 $\mu$s and
370 $\mu$s at 1390 MHz and 334.5 MHz, respectively. Thus, it
is essential to correct for DM variations to obtain
reliable estimates for rotational parameters and lower
post-fit timing residuals. We used the constrained DMMODEL
in TEMPO2 to estimate the offsets from the chosen reference
DM at the epochs where simultaneous ORT and GMRT observations were
available. Our measurements are plotted in \figurename{ \ref{dmvar}}.
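The shifts quoted above follow from the cold-plasma dispersion delay $t = K_{\rm DM}\,{\rm DM}/f^2$; a short numerical check using the standard dispersion constant (not a value specific to this work; the text rounds the 334.5 MHz figure to 370 $\mu$s):

```python
# Cold-plasma dispersion delay: t = K_DM * DM / f^2 (f in MHz, t in s)
K_DM = 4.148808e3  # s MHz^2 pc^-1 cm^3

def dm_shift_us(d_dm, f_mhz):
    """TOA shift in microseconds caused by a DM offset d_dm (pc cm^-3)."""
    return K_DM * d_dm / f_mhz**2 * 1e6

print(round(dm_shift_us(0.01, 1390.0)))  # -> 21
print(round(dm_shift_us(0.01, 334.5)))   # -> 371
```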
\begin{figure}[ht]
\includegraphics[scale=.23]{DM_plot_post_submission.pdf}
\caption{Dispersion measure (DM) variations with observing epoch.}
\label{dmvar}
\end{figure}
The DM model was used along with the timing
noise model and the astrometric and rotational model for PSR B0531+21
for a fit to TOAs from the Fermi-LAT, the ASTROSAT-CZTI,
the GMRT and the ORT. This corrects both the frequency independent
and frequency dependent systematics in these TOAs allowing a
more robust determination of relative offsets between the telescopes.
\section{Results and discussion}
\label{result}
TEMPO2 provides a way to fit the offsets between different telescopes
and the resulting timing residuals are shown in Figure \ref{postfitfinal}.
\cite{rots2004absolute} concluded that the X-ray main pulse leads its radio
counterpart by about 344 $\pm$ 40 $\mu$s. We can use this
measurement to determine the ASTROSAT pipeline offset. The relative offset
between the GMRT and the CZTI aboard ASTROSAT was found to be
$-$4716 $\pm$ 50 $\mu$s. While determining this offset
was the major objective of our project, we also determined in the process
the offsets between the GMRT and the ORT, and between the GMRT and Fermi, to be
$-$29639 $\pm$ 50 $\mu$s and $-$5368 $\pm$ 56 $\mu$s, respectively.
In addition, we verified that our timing solution fits the Jodrell Bank radio TOAs without introducing any time-variable pattern.
\begin{figure}[h]
\includegraphics[scale=0.23]{offset_removed_postsubmission.pdf}
\caption{Post-fit residuals after fitting the offsets between different telescopes. The symbols are the same as in \figurename{ \ref{ortphsplt}}.}
\label{postfitfinal}
\end{figure}
The calibration of the relative offsets between the radio and high energy
emission is also important for simultaneous radio$-$high energy
studies of giant pulses (GPs) in search of a radio$-$high energy correlation.
Such a study is currently underway.
\section{Acknowledgements}
This publication makes use of data from the Indian astronomy mission AstroSat, archived at the Indian Space Science Data Centre (ISSDC).
The CZT Imager instrument was built by a TIFR-led consortium of institutes across India, including VSSC, ISAC, IUCAA, SAC, and PRL. The Indian Space Research Organisation funded, managed and facilitated the project. We thank the staff of the Ooty Radio Telescope and the Giant Metrewave Radio Telescope for taking observations over such a large number of epochs. Both these telescopes are operated by the National Centre for
Radio Astrophysics of the Tata Institute of Fundamental Research. The PONDER backend, used in this work, was built with TIFR XII plan grants 12P0714 and 12P0716.
We thank the anonymous referee for useful comments and suggestions. AB
thanks Alessandro Ridolfi for an introduction to various techniques of the PSRCHIVE package and Surajit Mondal for fruitful discussions on computational issues. We also thank Yogesh Maan for his valuable suggestions. BCJ, PKM and MAK acknowledge support for this work from DST-SERB grant EMR/2015/000515.
\bibliographystyle{aa}
\section{Introduction} \label{sec-intro}
The spectrum $\sigma(G)$ of an undirected graph $G$ consists of the eigenvalues of its symmetric $n\times n$ adjacency matrix ${\mathcal A}(G)$, i.e. $\sigma(G)=\{\lambda_k(G),\ k=1,\cdots, n,\ \lambda_k(G)$ is an eigenvalue of ${\mathcal A}(G)\}$, where
$\lambda_1(G) \ge \cdots \ge \lambda_n(G)$ (cf. \cite{Cvetkovic1988, Brouwer2012}). If the spectrum does not contain zero, the adjacency matrix $A={\mathcal A}(G)$ has an inverse $A^{-1}$, and the graph $G_A$ is called invertible.
The concept of an inverse graph was introduced by Godsil \cite{Godsil1985}. In addition to invertibility of the adjacency matrix, it is required that $A^{-1}$ is diagonally similar to a nonnegative or nonpositive integral matrix (cf. Godsil \cite{Godsil1985}, Pavl\'{\i}kov\'a and \v{S}ev\v{c}ovi\v{c} \cite{Pavlikova2016}). Notice that the least positive eigenvalue of a graph is the reciprocal of the maximal eigenvalue of the inverse graph. Therefore, properties of inverse graphs can be used to estimate the least positive eigenvalue (cf. Pavl\'{\i}kov\'a et al. \cite{Pavlikova1990, Pavlikova2015, Pavlikova2016}).
In many applied fields, e.g. theoretical chemistry, biology, or statistics, spectral indices and properties of graphs representing the structure of chemical molecules or transition diagrams of finite Markov chains play an important role (cf. Cvetkovi\'c \cite{Cvetkovic1988,Cvetkovic2004}, Brouwer and Haemers \cite{Brouwer2012} and references therein). In recent decades, various graph energies and indices have been proposed and analyzed. For instance, the sum of the absolute values of the eigenvalues is referred to as the matching energy index (cf. Chen and Liu \cite{Lin2016}), the maximum of the absolute values of the least positive and largest negative eigenvalues is known as the HOMO-LUMO index (see Mohar \cite{Mohar2013,Mohar2015}, Li \emph{et al.} \cite{Li2013}, Jakli\'c \emph{et al.} \cite{Jaklic2012}, Fowler \emph{et al.} \cite{Fowler2010}), and their difference is the HOMO-LUMO separation gap (cf. Gutman and Rouvray \cite{Gutman1979}, Li \emph{et al.} \cite{Li2013}, Zhang and An \cite{Zhang2002}, Fowler \emph{et al.} \cite{Fowler2001}).
In computational chemistry, eigenvalues of a graph describing an organic molecule are related to energies of molecular orbitals. Following H\"uckel's molecular orbital method \cite{Huckel1931} (see also Pavl\'{\i}kov\'a and \v{S}ev\v{c}ovi\v{c} \cite{Pavlikova2016-CMMS}), the energies $E_k, k=1,\cdots, n$, are the eigenvalues of the Hamiltonian matrix $H$ and its eigenvectors are orbitals. The square symmetric matrix $H$ has the following elements:
\begin{itemize}
\item[] $H_{ii} = \alpha$ for the carbon C atom at the $i$-th vertex, and $H_{ii} = \alpha + h_A\beta$ for other atoms A, where $\alpha<0$ is the Coulomb integral and $\beta<0$ is the resonance integral;
\item[] $H_{ij} = \beta$ if both vertices $i$ and $j$ are carbon C atoms, $H_{ij} = k_{AB}\beta$ for other neighboring atoms A and B;
\item[] $H_{ij} = 0$ otherwise.
\end{itemize}
The atomic constants $h_A, k_{AB}$ have to be specified ($h_C=k_{CC}=0$). For instance, the molecule of pyridine contains one nitrogen atom N and five carbon atoms C. Clearly, in the case of a pure hydrocarbon we have $H = \alpha I + \beta A$, where $I$ is the identity and $A$ is the adjacency matrix of the molecular structural graph $G$. Hence $E_k=\alpha +\beta \lambda_k$. Now, the energy $E_{HOMO}$ of the highest occupied molecular orbital (HOMO) corresponds to the eigenvalue $\lambda_{HOMO}=\lambda_k$ where $k=n/2$ for $n$ even and $k=(n+1)/2$ for $n$ odd. The energy $E_{LUMO}$ of the lowest unoccupied molecular orbital (LUMO) corresponds to the subsequent eigenvalue $\lambda_{LUMO}=\lambda_{k+1}$ for $n$ even, and $\lambda_{LUMO}=\lambda_{k}$ for $n$ odd. The HOMO-LUMO separation gap is the difference between the $E_{LUMO}$ and $E_{HOMO}$ energies, i.e. $E_{LUMO} - E_{HOMO} = -\beta ( \lambda_{HOMO} - \lambda_{LUMO}) \ge 0$ because $\beta<0$. The so-called properly closed shells, in which each orbital contains either zero or two electrons, have the property $\lambda_{HOMO}>0>\lambda_{LUMO}$; for these, $n$ is even (cf. Fowler and Pisanski \cite{Fowler2010}). For such orbital systems, the HOMO-LUMO separation gap is equal to the energy difference $E_{LUMO} - E_{HOMO} = -\beta \Lambda_{HL}(G_A)$
where
\begin{equation}
\Lambda_{HL}(G_A) = \check{\lambda}^+(G_A) - \hat{\lambda}^-(G_A).
\label{HLgap}
\end{equation}
Here $\check{\lambda}^+(G_A)=\lambda_k$ is the smallest positive eigenvalue, and $\hat{\lambda}^-(G_A)=\lambda_{k+1}$ is the largest negative eigenvalue of the adjacency matrix $A$ of the structural molecular graph $G_A$ (cf. \cite{Fowler2010}). According to Aihara \cite{Aihara1999JCP,Aihara1999TCH} the large HOMO-LUMO gap implies high kinetic stability and low chemical reactivity of the molecule, because it is energetically unfavorable to add electrons to a high-lying LUMO orbital. Notice that the HOMO-LUMO energy gap is generally decreasing with the size $n$ of the structural graph (cf. Bacalis and Zdetsis \cite{Bacalis2009}).
In this paper, our goal is to investigate extremal properties of the HOMO-LUMO spectral gap $\Lambda_{HL}(G_A)$. We show how to represent $\Lambda_{HL}(G_A)$ by means of the optimal solution to a convex semidefinite programming problem (Section 2). We study spectral properties of graphs which can be constructed from two given (not necessarily bipartite) graphs by bridging them over a bipartite graph (Section 3). We analyze the HOMO-LUMO spectral gap of such a bridged graph and its dependence on the bridging bipartite graph. Finding an optimal bridging bipartite graph leads to a mixed integer nonconvex optimization problem with linear matrix inequality constraints (Section 4). We prove that the optimal HOMO-LUMO spectral gap can be obtained by solving a mixed integer convex semidefinite program. The optimization problem is, in general, NP-hard (Section 5). This is why we also derive upper (Section 6) and lower (Section 7) bounds for the optimal HOMO-LUMO spectral gap by means of semidefinite relaxation techniques, which can be solved in a fast and computationally efficient way. Various computational examples of construction of the optimal bridging graph are presented in Section 8.
\section{Semidefinite programming representation of the HOMO-LUMO spectral gap}
The HOMO-LUMO spectral gap of a graph $G_C$ is defined as follows:
\[
\Lambda_{HL}(G_C) = \check{\lambda}^+(G_C) - \hat{\lambda}^-(G_C),
\]
where $\check{\lambda}^+(G_C) \ge 0$ is the smallest nonnegative eigenvalue, and $\hat{\lambda}^-(G_C)\le 0$ is the largest nonpositive eigenvalue of the adjacency matrix $C$. Notice that the spectrum $\sigma(G_C) = \sigma(C)$ of a nontrivial graph $G_C$ without loops must contain negative as well as positive eigenvalues because the trace $Tr(C) =\sum_{\lambda\in\sigma(C)} \lambda = 0$.
Clearly, if the graph $G_C$ is invertible then $\check{\lambda}^+(G_C) > 0$ and $\hat{\lambda}^-(G_C)< 0$ and so $\Lambda_{HL}(G_C)>0$, otherwise $\Lambda_{HL}(G_C)=0$.
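For a concrete example, $\Lambda_{HL}(G_C)$ can be evaluated directly from the spectrum of the adjacency matrix. A minimal numpy sketch, where the path graph $P_4$ (eigenvalues $\pm(1\pm\sqrt{5})/2$, so $\Lambda_{HL}=\sqrt{5}-1$) serves only as an illustration:

```python
import numpy as np

def homo_lumo_gap(A):
    """HOMO-LUMO gap: smallest positive minus largest negative
    eigenvalue of the symmetric adjacency matrix A."""
    lam = np.linalg.eigvalsh(A)
    return lam[lam > 1e-12].min() - lam[lam < -1e-12].max()

# path graph P4: eigenvalues are ±(1±√5)/2, hence the gap is √5 - 1
P4 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
print(round(homo_lumo_gap(P4), 6))  # -> 1.236068
```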
\subsection{Semidefinite representation of the HOMO-LUMO gap}
Suppose that a graph $G_C$ is invertible. Following \cite{Pavlikova2016} the smallest positive and largest negative eigenvalues of $G_C$ can be expressed as follows:
\[
\check{\lambda}^+(G_C) = \frac{1}{\lambda_{max}(C^{-1})}, \qquad \hat{\lambda}^-(G_C) = \frac{1}{\lambda_{min}(C^{-1})},
\]
where $\lambda_{max}(C^{-1})>0$ and $\lambda_{min}(C^{-1})= - \lambda_{max}(-C^{-1})<0$ are the maximum and minimum eigenvalues of the inverse matrix $C^{-1}$, respectively. We denote by $\preceq$ the L\"owner partial ordering on symmetric matrices, i.e. $A\preceq B$ iff the matrix $B-A$ is a positive semidefinite matrix, that is $B-A\succeq 0$. The maximal and minimal eigenvalues of $C^{-1}$ can be expressed as follows:
\[
0< \lambda_{max}(C^{-1}) = \min_{C^{-1}\preceq t I} t, \qquad 0 > \lambda_{min}(C^{-1}) = \max_{ s I \preceq C^{-1}} s,
\]
(see e.g. \cite{bova}, \cite{Cvetkovic2004}).
Since $\{ t,\ C^{-1} \preceq t I \} \subset (0,\infty)$ and $\{ s,\ s I \preceq C^{-1} \} \subset (-\infty, 0)$, the substitution $\mu=1/t$, $\eta=-1/s$ yields the following characterization of the smallest positive and largest negative eigenvalues of the graph $G_C$:
\begin{equation}
\check{\lambda}^+(G_C) = \max_{\mu C^{-1}\preceq I} \mu, \qquad \hat{\lambda}^-(G_C) = - \max_{- \eta C^{-1}\preceq I} \eta.
\label{lambdapm}
\end{equation}
As a consequence, we obtain the following semidefinite representation of the HOMO-LUMO spectral gap for a vertex-labeled invertible graph $G_C$ without loops: the gap $\Lambda_{HL}(G_C)$ is the optimal value of the following semidefinite programming problem:
\begin{eqnarray}
\Lambda_{HL}(G_C) &=& \max_{\mu,\eta\ge0} \quad \mu+\eta
\\
&& s.t. \quad \mu C^{-1} \preceq I, \nonumber
\\
&& \quad\ \ -\eta C^{-1} \preceq I. \nonumber
\label{homolumo}
\end{eqnarray}
(cf. Pavl\'{\i}kov\'a and \v{S}ev\v{c}ovi\v{c} \cite{Pavlikova2016-CMMS}).
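The representation can be checked numerically via (\ref{lambdapm}): for an invertible graph the optimal values are $\mu^*=1/\lambda_{max}(C^{-1})$ and $\eta^*=1/\lambda_{max}(-C^{-1})$, and their sum reproduces the gap. A small numpy sketch, using the invertible path graph $P_4$ purely as an example:

```python
import numpy as np

C = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)    # path graph P4, invertible
Cinv = np.linalg.inv(C)
mu = 1.0 / np.linalg.eigvalsh(Cinv).max()    # = smallest positive eigenvalue of C
eta = 1.0 / np.linalg.eigvalsh(-Cinv).max()  # = minus the largest negative eigenvalue
lam = np.linalg.eigvalsh(C)
gap = lam[lam > 0].min() - lam[lam < 0].max()
print(abs(mu + eta - gap) < 1e-10)  # -> True
```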
\section{Graphs bridged over a bipartite graph}
In this section we introduce the notion of a graph which is constructed from two given graphs $G_A$ and $G_B$ by bridging vertices of $G_A$ to vertices of $G_B$. More precisely, let $G_A$ and $G_B$ be two undirected vertex-labeled graphs without loops on $n$ and $m$ vertices, respectively. In general, we do not assume that $G_A$ and $G_B$ are bipartite. Let $G_K$ be an $(n,m)$-bipartite graph on $n+m$ vertices with the adjacency matrix:
\begin{equation}
{\mathcal A}(G_K) = \left(
\begin{array}{cc}
0 & K\\
K^T & 0
\end{array}
\right),
\label{bipartite}
\end{equation}
where $K$ is an $n\times m$ matrix containing $\{0,1\}$-elements only.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=7truecm]{figures/obrazky-bipartite}
\end{center}
\caption{
A bridged graph $G_C={\mathcal B}_K(G_A,G_B)$ through a bipartite graph $G_K$.}
\label{fig-bipartite}
\end{figure}
By ${\mathcal B}_K(G_A,G_B)$ we shall denote the graph $G_C$ on $n+m$ vertices which is obtained by bridging the vertices of the graph $G_A$ to the vertices of $G_B$ through the $(n,m)$-bipartite graph $G_K$, i.e. its adjacency matrix $C={\mathcal A}(G_C)$ of the graph $G_C$ has the form:
\begin{equation}
C = \left(
\begin{array}{cc}
A & K\\
K^T & B
\end{array}
\right).
\label{matrixC}
\end{equation}
In what follows, we will assume that the adjacency matrices $A$ and $B$ are symmetric $n\times n$ and $m\times m$ invertible matrices, respectively.
\begin{theorem}\label{theo-1}
Let $G_A$ and $G_B$ be two undirected vertex-labeled invertible graphs on $n$ and $m$ vertices, respectively. Let $G_K$ be a $(n,m)$-bipartite graph. Let $G_C={\mathcal B}_K(G_A,G_B)$ be the graph which is constructed by bridging the graphs $G_A$ and $G_B$ through the bipartite graph $G_K$.
Then the graph $G_C$ is invertible if and only if the $n\times n$ matrix $S= A- K B^{-1} K^T$ is invertible. In this case we have
\begin{eqnarray}
C^{-1}&=&\left(
\begin{array}{cc}
A & K\\
K^T & B
\end{array}
\right)^{-1}
=
\left(
\begin{array}{cc}
S^{-1} & - S^{-1} K B^{-1} \\
- B^{-1} K^T S^{-1} & B^{-1} + B^{-1} K^T S^{-1} K B^{-1}
\end{array}
\right).
\nonumber
\\
&=&
Q^T \left(
\begin{array}{cc}
S^{-1} & 0 \\
0 & B^{-1}
\end{array}
\right)
Q,
\label{invC2}
\end{eqnarray}
where $Q$ is an invertible matrix with the inverse $Z=Q^{-1}$ given by:
\[
Q =
\left(
\begin{array}{cc}
I & - K B^{-1} \\
0 & I
\end{array}
\right), \qquad Z =
\left(
\begin{array}{cc}
I & K B^{-1} \\
0 & I
\end{array}
\right).
\]
\end{theorem}
\noindent P r o o f. The proof is a direct consequence of the Schur complement theorem (see e.~g. \cite[Theorem A.6]{Maja2013}). Indeed, $C\left(\begin{array}{c} x\\ y\end{array}\right) = \left(\begin{array}{c} 0\\ 0\end{array}\right)$ if and only if $Ax + Ky =0$ and $K^Tx + By=0$, i.e. $y=-B^{-1}K^T x$ and $S x = (A- K B^{-1} K^T)x =0$. As $x\not=0\Leftrightarrow y\not=0$, the matrix $C$ is invertible if and only if $S$ is invertible. The rest of the proof is a straightforward verification of the form of the inverse matrix $C^{-1}$.
\hfill $\diamondsuit$
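The factorization (\ref{invC2}) is easy to verify numerically. The sketch below uses random symmetric invertible blocks rather than actual adjacency matrices, since the identity only requires invertibility of $B$ and of the Schur complement $S$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n)); A = A + A.T + 5 * np.eye(n)  # symmetric, invertible
B = rng.standard_normal((m, m)); B = B + B.T + 5 * np.eye(m)
K = rng.integers(0, 2, size=(n, m)).astype(float)             # 0/1 bridging matrix

C = np.block([[A, K], [K.T, B]])
S = A - K @ np.linalg.inv(B) @ K.T                            # Schur complement of B
Q = np.block([[np.eye(n), -K @ np.linalg.inv(B)],
              [np.zeros((m, n)), np.eye(m)]])
D = np.block([[np.linalg.inv(S), np.zeros((n, m))],
              [np.zeros((m, n)), np.linalg.inv(B)]])
print(np.allclose(Q.T @ D @ Q, np.linalg.inv(C)))             # -> True
```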
\subsection{Semidefinite representation of the HOMO-LUMO gap for a bridged graph}
Now, let $G_C = {\mathcal B}_K(G_A,G_B)$ be the graph obtained from graphs $G_A$ and $G_B$ by bridging them through a bipartite graph $G_K$ with adjacency matrix $K$ (\ref{bipartite}).
Then, for any $\mu\ge 0$, we have $\mu C^{-1} \preceq I$ if and only if $\mu Z^T C^{-1} Z \preceq Z^T Z$, i.e.,
\[
\mu \left(
\begin{array}{cc}
S^{-1} & 0 \\
0 & B^{-1}
\end{array}
\right)
\preceq Z^T Z =
\left(
\begin{array}{cc}
I & K B^{-1} \\
B^{-1} K^T & I + B^{-1} K^T K B^{-1}
\end{array}
\right).
\]
Therefore,
\begin{equation}
\mu C^{-1} \preceq I \quad\Leftrightarrow \quad
\left(
\begin{array}{cc}
I - \mu S^{-1} & K B^{-1} \\
B^{-1} K^T & I - \mu B^{-1} + B^{-1} K^T K B^{-1}
\end{array}
\right) \succeq 0.
\label{ineqmu}
\end{equation}
Similarly,
\begin{equation}
-\eta C^{-1} \preceq I \quad\Leftrightarrow \quad
\left(
\begin{array}{cc}
I + \eta S^{-1} & K B^{-1} \\
B^{-1} K^T & I + \eta B^{-1} + B^{-1} K^T K B^{-1}
\end{array}
\right) \succeq 0.
\label{ineqeta}
\end{equation}
With regard to (\ref{homolumo}) we obtain the following representation of the HOMO-LUMO spectral gap $\Lambda_{HL}(G_C)$ for the bridged graph:
\begin{eqnarray}
\label{homolumobridged-S}
\Lambda_{HL}(G_C) &=& \max_{\scriptsize
\begin{array}{c}
\mu, \eta \ge 0
\end{array}
}
\quad \mu+\eta
\\
s.t.&&
\left(
\begin{array}{cc}
I - \mu S^{-1} & K B^{-1} \\
B^{-1} K^T & I - \mu B^{-1} + B^{-1} K^T K B^{-1}
\end{array}
\right) \succeq 0,
\nonumber
\\
&&
\left(
\begin{array}{cc}
I +\eta S^{-1} & K B^{-1} \\
B^{-1} K^T & I + \eta B^{-1} + B^{-1} K^T K B^{-1}
\end{array}
\right) \succeq 0.
\nonumber
\end{eqnarray}
Since the Schur complement is $S= A- K B^{-1} K^T$, the matrix inequality constraints appearing in (\ref{homolumobridged-S}) are, in general, nonconvex with respect to the matrix $K$. To overcome this difficulty we further restrict the class of bipartite graphs $G_K$ bridging $G_A$ to $G_B$ to those turning (\ref{homolumobridged-S}) into a convex semidefinite program in the $K$ variable.
\begin{definition}\cite{Pavlikova2016}
\label{def-arbitrarily}
Let $G_B$ be an undirected vertex-labeled graph on $m$ vertices with an invertible adjacency matrix $B$. We say that $G_B$ is arbitrarily bridgeable over the first $\{1, \cdots, k_B\}$ vertices of $G_B$ if the $k_B\times k_B$ upper principal sub-matrix of $B^{-1}$ is a null matrix, i.e. $E B^{-1} E^T=0$ where $E=(I, 0)$ is a $k_B\times m$ block matrix and $I$ is a $k_B\times k_B$ identity matrix.
A graph $G_B$ is said to be arbitrarily bridgeable over the subset $\{ i_1, \cdots, i_{k_B}\}$ of vertices of $G_B$ if there exists a permutation $P$ of its vertices such that $i_k\mapsto k, k=1,\cdots, k_B,$ and $E \tilde B^{-1} E^T=0$ where $\tilde B= P^T B P$.
\end{definition}
Notice that if $G_B$ is arbitrarily bridgeable then $k_B\le m/2$, because no invertible $m\times m$ matrix $B^{-1}$ can satisfy $E B^{-1} E^T =0$ for $k_B > m/2$.
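As a concrete illustration (our own, with numpy; the path $P_4$ is simply a convenient invertible graph), one can enumerate the vertex subsets of $P_4$ over which it is arbitrarily bridgeable by testing the principal submatrices of $B^{-1}$:

```python
# Enumerate subsets of vertices of P_4 whose principal submatrix of B^{-1}
# vanishes, i.e. subsets over which P_4 is arbitrarily bridgeable.
from itertools import combinations
import numpy as np

B = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])          # adjacency matrix of P_4 (invertible)
Binv = np.linalg.inv(B)

def bridgeable(subset):
    return np.allclose(Binv[np.ix_(subset, subset)], 0.0)

singles = [s for s in combinations(range(4), 1) if bridgeable(s)]
pairs   = [s for s in combinations(range(4), 2) if bridgeable(s)]
triples = [s for s in combinations(range(4), 3) if bridgeable(s)]
# diag(B^{-1}) = 0, so every single vertex works; only some pairs work;
# no triple works, consistent with the bound k_B <= m/2 = 2
```

In the paper's 1-based labeling the admissible pairs found here are $\{1,3\}$, $\{2,3\}$ and $\{2,4\}$, matching the use of $P_4$ bridged over $(2,3)$ in the computational section.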
Using the notion of an arbitrarily bridgeable graph we can state the following theorem:
\begin{theorem}\label{th-semidefinitebridged}
Let $G_A$ and $G_B$ be undirected vertex-labeled invertible graphs on $n$ and $m$ vertices without loops, respectively. Assume that $G_B$ is arbitrarily bridgeable over the first $\{1, \cdots, k_B\}$ vertices of $G_B$. If the last $m-k_B$ columns of the $n\times m$ matrix $K$ are zero, i.e. $K_{ij}=0$ for $j=k_B+1, \cdots, m$, then $K B^{-1} K^T = 0$, and, consequently, for the Schur complement $S$ we have $S= A- K B^{-1} K^T = A$ and $S^{-1}= A^{-1}$.
Moreover, the HOMO-LUMO spectral gap $\Lambda_{HL}(G_C)$ for the bridged graph $G_C={\mathcal B}_K(G_A,G_B)$ through the bipartite graph $G_K$ is the optimal value of the following semidefinite programming problem:
\begin{eqnarray}\small
\label{homolumobridged}
\Lambda_{HL}(G_C) &=& \max_{\scriptsize
\begin{array}{c}
\mu, \eta \ge 0
\end{array}
}
\quad \mu+\eta
\\
s.t.&&
\left(
\begin{array}{cc}
I - \mu A^{-1} & K B^{-1} \\
B^{-1} K^T & I - \mu B^{-1} + B^{-1} K^T K B^{-1}
\end{array}
\right) \succeq 0,
\nonumber
\\
&&
\left(
\begin{array}{cc}
I +\eta A^{-1} & K B^{-1} \\
B^{-1} K^T & I + \eta B^{-1} + B^{-1} K^T K B^{-1}
\end{array}
\right) \succeq 0.
\nonumber
\end{eqnarray}
\end{theorem}
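The key algebraic consequence of the theorem, namely $KB^{-1}K^T=0$ whenever $K$ touches only the bridgeable vertices, can be checked directly. The sketch below is an illustrative numpy example of ours with $G_B=P_4$ and $k_B=1$:

```python
# If K has zero last m-kB columns and the upper kB x kB block of B^{-1}
# vanishes, then K B^{-1} K^T = 0, and the Schur complement S reduces to A.
import numpy as np

B = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])         # P_4; (B^{-1})_{11} = 0, so kB = 1 works
Binv = np.linalg.inv(B)

K = np.array([[1., 0., 0., 0.],          # n = 3 vertices of G_A bridged
              [1., 0., 0., 0.],          # only to vertex 1 of G_B
              [0., 0., 0., 0.]])

assert np.isclose(Binv[0, 0], 0.0)       # G_B is bridgeable over vertex 1
assert np.allclose(K @ Binv @ K.T, 0.0)  # hence S = A - K B^{-1} K^T = A
```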
\section{Construction of an optimal bridging bipartite graph by means of a mixed integer nonlinear programming problem}
In this section we focus our attention on extremal properties of the HOMO-LUMO spectral gap for bridged graphs. Given an invertible graph $G_A$ and an invertible graph $G_B$ which is arbitrarily bridgeable over its first $\{1, \cdots, k_B\}$ vertices, our goal is to find an optimal bridging graph $G_K$ (see (\ref{bipartite})) such that $K_{ij}=0$ for $j=k_B+1, \cdots, m$ and the HOMO-LUMO spectral gap $\Lambda_{HL}(G_C)$ is maximal, where $G_C={\mathcal B}_K(G_A, G_B)$.
Using the representation of $\Lambda_{HL}(G_C)$ for the graph $G_C={\mathcal B}_K(G_A,G_B)$ (see Theorem~\ref{th-semidefinitebridged}), the maximal HOMO-LUMO gap $\Lambda^{opt}_{HL}=\Lambda^{opt}_{HL}(G_A, G_B)$ with respect to the bipartite matrix $K$ is given as the optimal value of the following mixed integer nonlinear optimization problem:
\begin{eqnarray}\small
\label{homolumoopt}
\Lambda^{opt}_{HL} &=& \max_{\scriptsize
\begin{array}{c}
\mu, \eta \ge 0 \\
K, W
\end{array}
}
\quad \mu+\eta
\\
s.t.&&
\left(
\begin{array}{cc}
I - \mu A^{-1} & K B^{-1} \\
B^{-1} K^T & I - \mu B^{-1} + B^{-1} W B^{-1}
\end{array}
\right) \succeq 0,
\nonumber
\\
&&
\left(
\begin{array}{cc}
I +\eta A^{-1} & K B^{-1} \\
B^{-1} K^T & I + \eta B^{-1} + B^{-1} W B^{-1}
\end{array}
\right) \succeq 0,
\nonumber
\\
&& \ \
W = K^T K, \quad K_{ij}\in \{0,1\}\quad\hbox{for all}\ i,j, \quad
\sum_{k,l} K_{kl} \ge 1,
\nonumber \\
&& K_{ij}=0 \ \ \hbox{for} \ j=k_B+1, \cdots, m,\ \ i=1,\cdots, n.
\nonumber
\end{eqnarray}
Notice that, for a binary matrix $K$, the condition $K\not=0$ is equivalent to $\sum_{k,l} K_{kl} \ge 1$. The objective function as well as the first two matrix inequality constraints in the optimization problem (\ref{homolumoopt}) are linear\footnote{Convex semidefinite problems with linear matrix inequality constraints can be solved by means of computational Matlab toolboxes available for semidefinite programming, e.g. the SeDuMi solver developed by J.~Sturm \cite{sturm} together with the Yalmip Matlab programming framework due to J.~L\"ofberg \cite{Lofberg2004}.} in the variables $\mu,\eta, K, W$. However, the last two constraints in (\ref{homolumoopt}) make the problem considerably harder to solve because of the nonconvex constraint $W = K^T K$ and the binary constraint $K_{ij}\in \{0,1\}$. Hence (\ref{homolumoopt}) is a mixed integer nonconvex programming problem which is, in general, NP-hard to solve.
\section{Construction of upper bounds for the HOMO-LUMO spectral gap by semidefinite relaxation techniques}
Various techniques for solving mixed integer nonconvex problems have been developed in the last decades. We refer the reader to the book \cite{bova} by Boyd and Vandenberghe for recent developments in semidefinite relaxation methods for solving nonconvex and mixed integer nonlinear optimization problems. In general, a semidefinite relaxation of the original nonconvex problem can be constructed by means of the second Lagrangian dual problem, which is already a convex semidefinite problem (see e.g. \v{S}ev\v{c}ovi\v{c} and Trnovsk\'a \cite{ST}).
\subsection{Mixed semidefinite-integer relaxation}
In order to construct a suitable convex programming relaxation of (\ref{homolumoopt}) we have to enlarge the domain of the variables $\mu,\eta, K, W$. Notice that the integer constraint $K_{ij}\in \{0,1\}$ is equivalent to the equality $K_{ij}=K^2_{ij}$. Moreover, from the constraint $W=K^T K$ we deduce $W_{ij}\in \mathbb{N}^+_0$ and $W_{jj} = \sum_{l} K^2_{lj} = \sum_{l} K_{lj}$. The nonconvex constraint $W=K^T K$ can be relaxed to the convex matrix inequality constraint $W\succeq K^T K$. Using the Schur complement theorem (cf. \cite{Maja2013}), it can be rewritten as a linear matrix inequality constraint:
\[
W\succeq K^T K \quad\Leftrightarrow\quad
\left( \begin{array}{cc}
W & K^T \\
K & I
\end{array}\right)\succeq 0.
\]
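This equivalence is easy to sanity-check numerically. The following sketch (our own illustration with an arbitrary binary matrix $K$) compares positive semidefiniteness of $W-K^TK$ with that of the block matrix:

```python
# Sanity check of the Schur complement equivalence
#   W >= K^T K   <=>   [[W, K^T], [K, I]] >= 0.
import numpy as np

def psd(M, tol=1e-9):
    return bool(np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol)

K = np.array([[1., 0.],
              [1., 1.],
              [0., 1.]])                    # an illustrative 3 x 2 binary matrix

for W in (K.T @ K,                          # satisfies W >= K^T K with equality
          K.T @ K + np.eye(2),              # satisfies it strictly
          K.T @ K - 0.5 * np.eye(2)):       # violates it
    lmi = np.block([[W, K.T], [K, np.eye(3)]])
    assert psd(W - K.T @ K) == psd(lmi)     # both sides agree in every case
```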
Hence the nonconvex integer programming problem (\ref{homolumoopt}) can be relaxed by the following mixed integer semidefinite programming problem with linear matrix inequality and integer constraints, yielding the upper bound approximation $\overline{\Lambda}^{sir}_{HL}=\overline{\Lambda}^{sir}_{HL}(G_A,G_B)$:
\begin{eqnarray}
\label{homolumosir}
\overline{\Lambda}^{sir}_{HL} &=& \max_{\scriptsize
\begin{array}{c}
\mu, \eta \ge 0 \\
K, W
\end{array}
}
\quad \mu+\eta
\nonumber
\\
s.t.&&
\left(
\begin{array}{cc}
I - \mu A^{-1} & K B^{-1} \\
B^{-1} K^T & I - \mu B^{-1} + B^{-1} W B^{-1}
\end{array}
\right) \succeq 0,
\nonumber
\\
&&
\left(
\begin{array}{cc}
I +\eta A^{-1} & K B^{-1} \\
B^{-1} K^T & I + \eta B^{-1} + B^{-1} W B^{-1}
\end{array}
\right) \succeq 0,
\\
&&
\left( \begin{array}{cc}
W & K^T \\
K & I
\end{array}\right)\succeq 0,\nonumber
\\
&&
K_{ij}\in \{0,1\},
\ \ W_{ij}\in\mathbb{N}^+_0, \ \ W_{jj} = \sum_{l} K_{lj}
\quad\hbox{for all}\ i,j, \ \sum_{k,l} K_{kl} \ge 1. \nonumber
\\
&& K_{ij}=0 \ \ \hbox{for} \ j=k_B+1, \cdots, m, \ \ i=1,\cdots, n.
\nonumber
\end{eqnarray}
It is worth noting that if $(\hat\mu, \hat\eta, \hat K, \hat W)$ is the optimal solution to the mixed integer semidefinite programming problem (\ref{homolumosir}) then it is also feasible for (\ref{homolumoopt}) because $\hat W=\hat K^T \hat K$. Indeed, if we denote $L=\hat W - \hat K^T \hat K$ then $L\succeq 0$ and $L_{jj}=\hat W_{jj} - \sum_l \hat K_{lj}^2 = \hat W_{jj} - \sum_l \hat K_{lj} =0$. Hence $diag(L)=0$ and, since any positive semidefinite matrix satisfies $L_{ij}^2\le L_{ii}L_{jj}$, we conclude $L=0$, as claimed. Consequently, the HOMO-LUMO gap $\Lambda_{HL}({\mathcal B}_{\hat K}(G_A,G_B) ) = \overline{\Lambda}^{sir}_{HL}(G_A,G_B)$. Hence
\[
\Lambda^{opt}_{HL}(G_A,G_B) = \overline{\Lambda}^{sir}_{HL}(G_A,G_B).
\]
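The exactness step above rests on the inequality $L_{ij}^2\le L_{ii}L_{jj}$, valid for any positive semidefinite matrix. A quick numerical sketch (ours, with a random Gram matrix) illustrates it:

```python
# For any PSD matrix L = X X^T one has L_ij^2 <= L_ii * L_jj (Cauchy-Schwarz
# applied to the rows of X); hence a PSD matrix with zero diagonal is zero.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
L = X @ X.T                                   # a generic PSD matrix

for i in range(5):
    for j in range(5):
        assert L[i, j] ** 2 <= L[i, i] * L[j, j] + 1e-12
```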
Next we present a sample code for solving the mixed integer semidefinite programming problem (\ref{homolumosir}), which constructs the optimal bridging maximizing the HOMO-LUMO spectral gap $\overline{\Lambda}^{sir}_{HL}(G_A, G_B) = \Lambda^{opt}_{HL}(G_A, G_B)$. We employed the Matlab programming environment Yalmip due to L\"ofberg \cite{Lofberg2004}, which is capable of solving mixed integer problems with semidefinite linear matrix inequality constraints. The structure of the code is shown in Table~\ref{tab-code}. After declaring the classes of variables and setting the constraints, the main solver routine {\tt solvesdp} is executed. It is designed for solving minimization problems. It employs the SeDuMi semidefinite programming solver (cf. Sturm \cite{sturm}) as the lower solver and a branch and bound integer rounding solver as the upper solver.
\begin{table}[htp]
\label{tab-code}
\caption{A sample Matlab code for computing mixed integer semidefinite programming problem (\ref{homolumosir}). The output of the program is the optimal value
$\Lambda^{opt}_{HL}(G_A,G_B) = \overline{\Lambda}^{sir}_{HL}(G_A,G_B)$.
}
\begin{center}
\hrule
\vskip 0.5truemm
\hrule
{\footnotesize
\begin{verbatim}
% decision variables: mu, eta real; W integer; K binary
mu=sdpvar(1); eta=sdpvar(1); W=intvar(m,m); K=binvar(n,m);
% branch-and-bound settings for the mixed integer solver
ops=sdpsettings('solver','bnb','bnb.maxiter', bnbmaxiter);
Fconstraints=[...
[[W, K'];
[K, eye(n,n)]
]>=0, ...            LMI relaxation of W = K'*K
mu>=0, eta>=0, ...
[[eye(n,n) - mu*inv(A), K*inv(B)];
[inv(B)*K', eye(m,m) - mu*inv(B) + inv(B)*W*inv(B)]
] >= 0, ...          constraint mu*inv(C) <= I
[[eye(n,n) + eta*inv(A), K*inv(B)];
[inv(B)*K', eye(m,m) + eta*inv(B) + inv(B)*W*inv(B)]
] >= 0, ...          constraint -eta*inv(C) <= I
sum(K(:,:))==diag(W)', sum(K(:))>=1, ...
vec(W(:))>=0, 0<=vec(K(:))<=1, ...
sum([[A, K]; [K', B] ])<=maxdegree*ones(1,n+m), ...  maximal degree constraint
K*[zeros(kB,m-kB); eye(m-kB,m-kB)] == zeros(n, m-kB), ...  bridge only to first kB vertices
];
% maximize mu+eta (solvesdp minimizes the objective)
solvesdp(Fconstraints, -mu-eta, ops)
LambdaSIR = double(mu + eta)
\end{verbatim}
}
\hrule
\end{center}
\end{table}
\subsection{Full semidefinite relaxation}
Next, we further relax the binary and integer constraints appearing in (\ref{homolumosir}). The integer constraint $K_{ij}\in\{0,1\}$ can be relaxed by the convex box constraints $0\le K_{ij}\le 1$ for all $i,j$. Clearly, such a relaxation may lead to a non-integer optimal matrix $K$. The maximization problem for the full semidefinite relaxation of the HOMO-LUMO spectral gap $\overline{\Lambda}_{HL}^{sdp}=\overline{\Lambda}_{HL}^{sdp}(G_A,G_B)$ can be formulated as follows:
\begin{eqnarray}
\label{homolumosdp}
\overline{\Lambda}_{HL}^{sdp} &=& \max_{\scriptsize
\begin{array}{c}
\mu, \eta \ge 0 \\
K, W
\end{array}
}
\quad \mu+\eta
\nonumber \\
s.t.&&
\left(
\begin{array}{cc}
I - \mu A^{-1} & K B^{-1} \\
B^{-1} K^T & I - \mu B^{-1} + B^{-1} W B^{-1}
\end{array}
\right) \succeq 0,
\nonumber
\\
&&
\left(
\begin{array}{cc}
I +\eta A^{-1} & K B^{-1} \\
B^{-1} K^T & I + \eta B^{-1} + B^{-1} W B^{-1}
\end{array}
\right) \succeq 0,
\\
&&
\left(
\begin{array}{cc}
W & K^T \\
K & I \\
\end{array}
\right)\succeq 0, \nonumber
\\
&&
0\le K_{ij} \le 1,\ \ W_{jj} = \sum_{l} K_{lj}, \ \
W_{ij}\ge 0\ \hbox{for all}\ i,j,\ \ \sum_{k,l} K_{kl}\ge 1,
\nonumber
\\
&& K_{ij}=0 \ \ \hbox{for} \ j=k_B+1, \cdots, m, \ \ i=1,\cdots, n.
\end{eqnarray}
In order to compute the full semidefinite relaxation (\ref{homolumosdp}) and its value $\overline{\Lambda}^{sdp}_{HL}(G_A, G_B)$ we have to change the specification of the real variables, i.e. {\tt W=sdpvar(m,m); K=sdpvar(n,m)}, and add the box constraint {\tt 0<=vec(K(:))<=1} to the code shown in Table~\ref{tab-code}.
\begin{remark}
Following the recent paper by Kim, Kojima and Toh \cite{KKT2016}, the box constraint $0\le K_{ij}\le 1$ can be further enhanced by introducing a slack variable $\tilde K$ with $\tilde K_{ij}= 1-K_{ij}$. Then $K_{ij}\in\{0,1\}$ if and only if $K_{ij}\tilde K_{ij}=0$ for all $i,j$. Since all entries of $K$ and $\tilde K$ are nonnegative, this is equivalent to the condition $V_{jj}=0$ for each $j$, where $V = \tilde K^T K$. Next, the nonconvex matrix constraints $W= K^T K, \tilde W = \tilde K^T \tilde K$ can be relaxed in the form of the following linear matrix inequality:
\[
\left(
\begin{array}{cc}
W & V^T \\
V & \tilde W \\
\end{array}
\right) \succeq
\left(
\begin{array}{c}
K^T \\
\tilde K^T \\
\end{array}
\right)
\left(
\begin{array}{cc}
K & \tilde K \\
\end{array}
\right)
\quad\Longleftrightarrow\quad
\left(
\begin{array}{ccc}
W & V^T & K^T \\
V & \tilde W & \tilde K^T \\
K & \tilde K & I \\
\end{array}
\right)\succeq 0,
\]
$W_{ij}, \tilde W_{ij}, V_{ij}\ge 0,\ V_{jj} =0$ for all $i,j$.
\end{remark}
\begin{theorem}\label{theo-3}
Let $G_A$ and $G_B$ be undirected vertex-labeled invertible graphs on $n$ and $m$ vertices without loops, respectively. Assume $G_B$ is arbitrarily bridgeable over the first $k_B$ vertices $\{1,\cdots, k_B\}$. Then
\[
\Lambda_{HL}(G_C)\le \Lambda^{opt}_{HL}(G_A,G_B)
\equiv \overline{\Lambda}_{HL}^{sir}(G_A,G_B)
\le \overline{\Lambda}_{HL}^{sdp}(G_A,G_B)
\le \Lambda_{HL}(G_A),
\]
for any graph $G_C={\mathcal B}_K(G_A,G_B)$ which is constructed from graphs $G_A, G_B$ by bridging the vertices of $G_A$ to the first $k_B$ vertices of $G_B$ through an $(n,m)$-bipartite graph $G_K$ such that $K_{ij}=0$ for $j=k_B+1, \cdots, m$.
\end{theorem}
\noindent P r o o f. The set
\[
\{ (K,W),\
K_{ij}\in \{0,1\}, \ \ W_{ij}\in\mathbb{N}^+_0, \ \ W_{jj} = \sum_{l} K_{lj}
\quad\hbox{for all}\ i,j,
\]
\[
\quad \sum_{k,l} K_{kl} \ge 1, W\succeq K^T K \}
\]
of feasible integer matrices $K,W$ for (\ref{homolumosir}) is a subset of the set:
\[
\{ (K,W),\
0\le K_{ij} \le 1,\ \ W_{ij}\ge 0,\ \ W_{jj} = \sum_{l} K_{lj},
\quad \hbox{for all}\ i,j,
\]
\[
\quad \sum_{k,l} K_{kl}\ge 1, W\succeq K^T K
\},
\]
of real matrices $K,W$ that are feasible for (\ref{homolumosdp}). From this fact we conclude the inequality $\overline{\Lambda}_{HL}^{sir}(G_A,G_B) \le \overline{\Lambda}_{HL}^{sdp}(G_A,G_B)$. The inequality $\overline{\Lambda}_{HL}^{sdp}(G_A,G_B) \le \Lambda_{HL}(G_A)$ follows from the fact that
\[
\left(
\begin{array}{cc}
I - \mu A^{-1} & K B^{-1} \\
B^{-1} K^T & I - \mu B^{-1} + B^{-1} W B^{-1}
\end{array}
\right) \succeq 0 \quad \Longrightarrow I - \mu A^{-1} \succeq 0,
\]
that is $1/\mu \ge \lambda_{max}(A^{-1})$
and so $\mu \le \check{\lambda}^+(G_A)$. Similarly, we obtain $I + \eta A^{-1} \succeq 0$ and, consequently, $\eta \le - \hat{\lambda}^-(G_A)$. Therefore,
$\mu+\eta\le \Lambda_{HL}(G_A)$, as claimed.
\hfill$\diamondsuit$
\medskip
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.3\textwidth]{figures/examplequartic}
\end{center}
\caption{
Simple graphs $G_A$ and $G_B$ (left) and the bridged graph $G_C$ with the maximal HOMO-LUMO spectral gap which can be constructed by bridging $G_A$ and $G_B$ over the vertex 1 of $G_B$ ($k_B=1$) to the vertices of $G_A$ (right).
}
\label{fig-quarticfamily}
\end{figure}
\begin{example}
In Figure~\ref{fig-quarticfamily} (left) we show two simple graphs $G_A$ and $G_B$ having the spectrum $\sigma(G_A) = \sigma(G_B) = \{1,-1\}$, i.e. $\Lambda_{HL}(G_A)=2$. The graph $G_B$ is arbitrarily bridgeable over the vertex $1$. The optimal bipartite graph $G_K$ bridging $G_B$ to $G_A$ with $k_B=1$ has the adjacency matrix $K=(1,1)^T$. The optimal bridged graph $G_C$ is shown in Figure~\ref{fig-quarticfamily} (right) and it has the spectrum $\sigma(G_C)=\{2.1701, 0.3111, -1, -1.4812\}$, i.e. $\Lambda^{opt}_{HL}(G_A,G_B) = \overline{\Lambda}_{HL}^{sir}(G_A,G_B)= 1.3111$. On the other hand, it turns out that $\overline{\Lambda}_{HL}^{sdp}(G_A,G_B)=1.67597$. Hence we have the strict inequalities
\[
\Lambda^{opt}_{HL}(G_A,G_B) \equiv \overline{\Lambda}_{HL}^{sir}(G_A,G_B)
< \overline{\Lambda}_{HL}^{sdp}(G_A,G_B)
< \Lambda_{HL}(G_A),
\]
in this example.
\end{example}
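The small size of this example makes it possible to bypass the solvers entirely. Assuming, as the spectrum $\{1,-1\}$ suggests, that $G_A$ and $G_B$ are both single edges $K_2$ with $k_B=1$, a brute-force enumeration of binary bridging matrices reproduces the optimal pattern $K=(1,1)^T$ and the gap $1.3111$ (an illustrative numpy sketch of ours, not the authors' code):

```python
# Brute-force the optimal bridging for G_A = G_B = K_2, k_B = 1: enumerate
# all nonzero binary 2x1 bridging patterns and compare HOMO-LUMO gaps.
from itertools import product
import numpy as np

A = np.array([[0., 1.], [1., 0.]])     # K_2, spectrum {1, -1}
B = A.copy()

def hl_gap(C):
    ev = np.linalg.eigvalsh(C)
    return ev[ev > 1e-9].min() - ev[ev < -1e-9].max()

best = max(
    (hl_gap(np.block([[A, K], [K.T, B]])), k)
    for k in product([0., 1.], repeat=2) if any(k)
    for K in [np.array([[k[0], 0.], [k[1], 0.]])]
)
# best[0] is the maximal gap, best[1] the optimal bridging pattern
```

The two single-edge bridgings each produce the path $P_4$ with gap $2/q\doteq 1.2361$, so the double bridging $(1,1)$ indeed wins.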
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.5truecm]{figures/fulvene}
\ \
\includegraphics[width=2truecm]{figures/fulvene-chem}
\end{center}
\caption{
An example of an invertible graph $F_0$ (left) representing the chemical organic molecule of fulvene (right).}
\label{fig-fulvene}
\end{figure}
In Figure~\ref{fig-fulvene} (left) we show the graph $F_0$ on 6 vertices representing the fulvene organic molecule (5-methylidenecyclopenta-1,3-diene) (right). The spectrum consists of the following eigenvalues:
\[
\sigma(F_0)= \{ 2.1149, 1, 1/q, -0.2541, -q, -1.8608 \},
\]
where $q=(\sqrt{5}+1)/2$ is the golden ratio. The HOMO-LUMO spectral gap is $\Lambda_{HL}(F_0)= 0.872134$. It is easy to verify that the graph $G_B\equiv F_0$ is arbitrarily bridgeable over the following subsets of vertices:
$\{5\}, \{4\}, \{3\}, \{2\}, \{1\}$ for $k_B=1$, $ \{4,5\}, \{2,5\}, \{3,4\}, \{2,4\}, \{1,4\}, \{1,3\}, \{1,2\}$ for $k_B=2$, and $\{2,4,5\}, \{1,3,4\}, \{1,2,4\}$ for $k_B=3$ (cf. Pavl\'{\i}kov\'a and \v{S}ev\v{c}ovi\v{c} \cite{Pavlikova2016}).
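These spectral data are easy to reproduce. The sketch below (with one illustrative labeling of the fulvene graph, a 5-cycle with a pendant vertex; the spectrum does not depend on the labeling) recomputes the eigenvalues and the gap:

```python
# Spectrum and HOMO-LUMO gap of the fulvene graph F_0: a 5-cycle with one
# pendant vertex attached to it (labeling is an illustrative choice).
import numpy as np

F0 = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5)]:
    F0[i, j] = F0[j, i] = 1.0

ev = np.sort(np.linalg.eigvalsh(F0))
q = (np.sqrt(5.) + 1.) / 2.                   # golden ratio

assert np.isclose(ev[1], -q) and np.isclose(ev[3], 1. / q)
gap = ev[ev > 0].min() - ev[ev < 0].max()     # smallest positive - largest negative
```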
\section{Lower bounds for the optimal HOMO-LUMO spectral gap}
In this section, our aim is to derive lower bounds for the optimal HOMO-LUMO spectral gap $\Lambda^{opt}_{HL}(G_A,G_B)$. As in the derivation of the upper bounds, we construct the lower bound by means of a solution to a certain nonlinear optimization problem.
The idea is based on constructing upper bounds for the maximal eigenvalues $\lambda_{max}(\pm C^{-1})$ of the matrices $C^{-1}$ and $-C^{-1}$, where $C$ is the adjacency matrix of the bridged graph $G_C={\mathcal B}_K(G_A, G_B)$. This yields lower bounds for the least positive and largest negative eigenvalues of $C$ and hence for the HOMO-LUMO spectral gap of $G_C$.
The maximal eigenvalue $\lambda_{max}(C^{-1})$ can be expressed by means of the Rayleigh quotient, and, consequently, it can be estimated as follows:
\begin{eqnarray*}
\lambda_{max}(C^{-1}) &=& \max_{\Vert z\Vert^2=1} z^T C^{-1} z
= \max_{\Vert z\Vert^2=1} (Q z)^T
\left(
\begin{array}{cc}
A^{-1} & 0 \\
0 & B^{-1}
\end{array}
\right)
Q z \\
&=& \max_{\Vert z\Vert^2=1} \left[ (x-K B^{-1} y)^T A^{-1} (x-K B^{-1} y) + y^T B^{-1} y \right]
\\
&\le& \max_{\Vert z\Vert^2=1} \left[ \lambda_{max}(A^{-1}) \Vert x-K B^{-1} y\Vert^2
+ \lambda_{max}(B^{-1}) \Vert y \Vert^2 \right],
\end{eqnarray*}
where $z=(x,y)\in \mathbb{R}^n\times\mathbb{R}^m$ and the matrix $Q$ is given as in (\ref{invC2}). Analogously,
\[
\lambda_{max}(-C^{-1})\le \max_{\Vert z\Vert^2=1} \left[ \lambda_{max}(-A^{-1}) \Vert x-K B^{-1} y\Vert^2
+ \lambda_{max}(-B^{-1}) \Vert y \Vert^2 \right].
\]
To estimate the right-hand sides of these bounds for $\lambda_{max}(\pm C^{-1})$ we apply the following auxiliary lemma proved in \cite{Pavlikova2016}.
\begin{lemma}\cite[Lemma 1]{Pavlikova2016}
\label{lemmaMax}
Assume that $D$ is an $n\times m$ matrix and $\alpha,\beta>0$ are positive constants. Then, for the optimal value $\gamma^*$ of the following constrained optimization problem:
\begin{equation}\label{minD}
\begin{array}{rl}
\gamma^*= \max & \alpha \Vert x- D y\Vert^2 +\beta \Vert y\Vert^2 \\
{\rm s. t.} & \Vert x\Vert^2 + \Vert y\Vert^2 = 1,\ \ x\in\mathbb{R}^n, y\in\mathbb{R}^m,
\end{array}
\end{equation}
we have the explicit expression:
\begin{eqnarray*}
\gamma^*
&=& \max\left\{ \gamma:\ \frac{(\gamma-\alpha)(\gamma-\beta)}{\alpha\gamma} \in\sigma(D^T D)\right\} \\
&=& \frac{\alpha(\omega^* + 1)+\beta + \sqrt{(\alpha(\omega^* +1) +\beta)^2-4\alpha\beta}}{2},
\end{eqnarray*}
where $\omega^*=\max\{\sigma(D^T D)\}$ is the maximal eigenvalue of the matrix $D^T D$.
\end{lemma}
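Lemma~\ref{lemmaMax} can be verified numerically: the objective in (\ref{minD}) is the quadratic form $z^T M z$ of a block matrix $M$ built from $\alpha$, $\beta$ and $D$, so $\gamma^*$ is simply the largest eigenvalue of $M$. A numpy sketch of ours with random illustrative data:

```python
# Check of the lemma: gamma* equals the largest eigenvalue of the matrix M of
# the quadratic form alpha*||x - D y||^2 + beta*||y||^2 on the unit sphere.
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
alpha, beta = 1.5, 0.7
D = rng.standard_normal((n, m))

M = np.block([[alpha * np.eye(n), -alpha * D],
              [-alpha * D.T, alpha * D.T @ D + beta * np.eye(m)]])
gamma_eig = np.linalg.eigvalsh(M).max()

omega = np.linalg.eigvalsh(D.T @ D).max()     # omega* = max sigma(D^T D)
s = alpha * (omega + 1.) + beta
gamma_formula = (s + np.sqrt(s * s - 4. * alpha * beta)) / 2.

assert np.isclose(gamma_eig, gamma_formula)   # closed form matches eigenvalue
```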
With the help of the previous lemma we obtain the upper estimate:
\[
\lambda_{max}(\pm C^{-1})\le \frac{\alpha^\pm(\omega^* + 1)+\beta^\pm + \sqrt{(\alpha^\pm(\omega^* +1) +\beta^\pm)^2-4\alpha^\pm\beta^\pm}}{2}
\]
where $\alpha^\pm=\lambda_{max}(\pm A^{-1})$,
$\beta^\pm=\lambda_{max}(\pm B^{-1})$, and,
\[
\omega^* = \max \sigma(B^{-1}K^T K B^{-1}).
\]
Indeed, for the matrix $D= K B^{-1}$ we have $D^T D = B^{-1} K^T K B^{-1}$.
The maximal eigenvalue of the matrix $B^{-1}K^T K B^{-1}$ can be expressed by means of a solution to the semidefinite programming problem:
\begin{eqnarray}
\label{omega}
\omega^* &=& \max \sigma(B^{-1} K^T K B^{-1})
= \min_{B^{-1} K^T K B^{-1}\preceq \omega I} \omega \nonumber
\\
&=& \min_{\omega} \ \ \omega
\\
&& s.t.\ \
\left(
\begin{array}{cc}
\omega I & B^{-1} K^T \\
K B^{-1} & I
\end{array}
\right) \succeq 0. \nonumber
\end{eqnarray}
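The identification $\omega^*=\sigma_{max}(KB^{-1})^2$ and the Schur complement form of the constraint in (\ref{omega}) can be sanity-checked as follows (an illustrative numpy sketch with $B$ the path $P_4$ and an arbitrary bridging matrix $K$):

```python
# omega* = max sigma(B^{-1} K^T K B^{-1}) is the squared largest singular
# value of D = K B^{-1}; the LMI in the SDP above is the Schur complement
# reformulation of D^T D <= omega * I.
import numpy as np

B = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])          # adjacency matrix of P_4
K = np.array([[1., 1., 0., 0.],           # an illustrative 2 x 4 bridging
              [0., 1., 1., 0.]])
D = K @ np.linalg.inv(B)

omega = np.linalg.eigvalsh(D.T @ D).max()
assert np.isclose(omega, np.linalg.svd(D, compute_uv=False).max() ** 2)

def psd(M, tol=1e-9):
    return bool(np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol)

lmi = lambda w: np.block([[w * np.eye(4), D.T], [D, np.eye(2)]])
assert psd(lmi(omega)) and not psd(lmi(omega - 0.1))   # omega* is the minimum
```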
Since
\begin{eqnarray*}
\Lambda_{HL}(G_C)&=&\Lambda_{HL}({\mathcal B}_K(G_A, G_B))
\\
&=& \check{\lambda}^+(G_C) - \hat{\lambda}^-(G_C) = \frac{1}{\lambda_{max}(C^{-1})} + \frac{1}{\lambda_{max}(-C^{-1})},
\end{eqnarray*}
and the optimal value $\gamma^*$ is an increasing function of $\omega^*$, we obtain the following lower bound $\underline{\Lambda}_{HL}^{sir}(G_A, G_B)\le \Lambda_{HL}^{opt}(G_A, G_B)$ for the optimal HOMO-LUMO spectral gap, where
\begin{eqnarray}
\label{lowerSIR}
\underline{\Lambda}_{HL}^{sir}(G_A, G_B)
&=&
\frac{2}{\alpha^+(\omega^* + 1)+\beta^+ + \sqrt{(\alpha^+(\omega^* +1) +\beta^+)^2-4\alpha^+\beta^+}} \nonumber \\
&& + \frac{2}{\alpha^-(\omega^* + 1)+\beta^- + \sqrt{(\alpha^-(\omega^* +1) +\beta^-)^2-4\alpha^-\beta^-}}, \nonumber
\\
&&\nonumber
\\
\hbox{where} && \omega^* = \min_{\omega, K} \ \ \omega \nonumber
\\
&& s.t.\ \
\left(
\begin{array}{cc}
\omega I & B^{-1} K^T \\
K B^{-1} & I
\end{array}
\right) \succeq 0. \label{lowerconsSIR}
\\
&& K_{i,j}\in\{0,1\}, \ \ \hbox{for each}\ i,j,\ \ \sum_{k,l} K_{kl}\ge 1.
\nonumber
\end{eqnarray}
Similarly as in the construction of the upper bound, we can relax the condition $K_{i,j}\in\{0,1\}$ by the box constraint
\begin{equation}
0\le K_{i,j} \le 1, \ \ \hbox{for each}\ i,j, \label{lowerconsSDP}
\end{equation}
in order to construct the full semidefinite relaxation for the lower bound $\underline{\Lambda}_{HL}^{sdp}(G_A, G_B)$.
\begin{theorem}\label{theo-4}
Let $G_A$ and $G_B$ be undirected vertex-labeled invertible graphs on $n$ and $m$ vertices without loops, respectively. Assume $G_B$ is arbitrarily bridgeable over the first $k_B$ vertices $\{1,\cdots, k_B\}$. Then
\[
\underline{\Lambda}_{HL}^{sdp}(G_A, G_B)
\le
\underline{\Lambda}_{HL}^{sir}(G_A, G_B)
\le
\Lambda^{opt}_{HL}(G_A,G_B).
\]
\end{theorem}
\section{Additional constraints imposed on the bridging bipartite graph}
In practical applications one may impose additional constraints on the bridging bipartite graph $G_K$. For example, in computational chemistry the so-called chemical graphs play an important role. The structural graph $G$ of a chemical molecule has all vertices of degree less than or equal to 3. If the goal is to construct a bridged graph $G_C={\mathcal B}_K(G_A, G_B)$ representing a chemical molecule with the maximal degree $M_{d}$, we can add the additional constraint:
\begin{equation}
\sum_k C_{ik} \le M_{d}, \quad \hbox{for all}\ i,\ \ \hbox{where}
\quad
C = \left(
\begin{array}{cc}
A & K \\
K^T & B
\end{array}
\right).
\label{constraintChemGraph}
\end{equation}
The inequality (\ref{constraintChemGraph}) is linear in the $K$ variable and can be easily added to any of the optimization problems (\ref{homolumoopt}), (\ref{homolumosir}), (\ref{homolumosdp}), (\ref{lowerconsSIR}), (\ref{lowerconsSDP}). Computational results for constructing graphs with the maximal degree $M_d=3$ are presented in the next section.
Another useful constraint imposed on the bridging graph $G_K$ consists of the min-max box constraints:
\begin{eqnarray}
\label{constraintMinMax}
&&\underline{L}^{A}_i\le \sum_k K_{ik} \le \overline{L}^{A}_i, \quad\hbox{for all}\ i=1,\cdots, n,
\\
&&\underline{L}^{B}_j\le \sum_k K_{kj} \le \overline{L}^{B}_j, \quad\hbox{for all}\ j=1,\cdots, k_B,
\end{eqnarray}
representing box constraints on the minimal and maximal numbers of edges in the bridging graph $G_K$ pointing from the graph $G_A$ to $G_B$. Again, such box constraints can be easily added to (\ref{homolumoopt}), (\ref{homolumosir}), (\ref{homolumosdp}), (\ref{lowerconsSIR}), (\ref{lowerconsSDP}).
\section{Computational results}
\begin{figure}[htp]
\begin{center}\includegraphics[width=0.5\textwidth]{figures/bridged-F0F0-1}
\\
(a)
\vskip 4truemm
\includegraphics[width=0.5\textwidth]{figures/bridged-F0F0-2}
\\
(b)
\vskip 4truemm
\includegraphics[width=0.75\textwidth]{figures/bridged-F0F1}
\\
(c)
\end{center}
\caption{Results of optimal bridging of the fulvene graph $G_B=F_0$ through the vertices $\{1,2\}$ to $G_A=F_0$ a); through the vertices $\{1,4\}$ to $G_A=F_0$ b); and through the vertices $\{1,2\}$ to $G_A=F_1$ c).}
\label{fig:opt-bridgeF0F0}
\end{figure}
\begin{table}[ht]
\caption{The computational results and comparison of various semidefinite relaxations. The first two columns describe the graph $G_A$ and $G_B$ with the chosen set of bridging vertices. The optimal value $\Lambda_{HL}^{opt}=\overline{\Lambda}_{HL}^{sir}$ is shown in bold in the middle column. The upper $\overline{\Lambda}_{HL}^{sdp}$ and lower bounds $\underline{\Lambda}_{HL}^{sdp}$, $\underline{\Lambda}_{HL}^{sir}$ are also presented together with computational times in seconds computed on Quad core Intel 1.5GHz CPU with 4 GB of memory.}
\label{tab-results}
\begin{center}
\scriptsize
\begin{tabular}{c|c||c|c|c|c|c}
$G_A$ & $G_B$ & $\underline{\Lambda}_{HL}^{sdp}$ & $\underline{\Lambda}_{HL}^{sir}$ & $\Lambda_{HL}^{opt}=\overline{\Lambda}_{HL}^{sir}$ & $\overline{\Lambda}_{HL}^{sdp}$ & \hbox{bridging $G_B \mapsto G_A$ } \\
\hline\hline
$F_0$ & $F_0$ & $0.233688$ & $0.531664$ & $\bf 0.74947$ & $0.87214$ & $1\mapsto 3,5;\ \ 2\mapsto 6$ \\
&$(1,2)$ & $(0.27s)$ & $(3.38s)$ & $(83s)$ & $(2.2s)$ & \\
\hline
$F_0$ & $F_0$ & $0.333126$ & $0.72678$ & $\bf 0.85828$ & $0.87214$ & $1\mapsto \emptyset;\ \ 4\mapsto 3,5,6$ \\
&$(1,4)$ & $(0.31s)$ & $(4.75s)$ & $(36s)$ & $(2.2s)$ & \\
\hline
$F_0$ & $F_0$ & $0.333126$ & $0.719668$ & $\bf 0.81389$ & $0.87214$ & $1\mapsto 4;\ \ 3\mapsto 4$ \\
&$(1,3)$ & $(0.31s)$ & $(4.27s)$ & $(75s)$ & $(2.2s)$ & \\
\hline
$F_1$ & $F_0$ & $0.163626$ & $0.450022$ & $\bf 0.56655$ & $0.56666$ & $1\mapsto \emptyset; \ \ 2\mapsto 9,11,12$ \\
&$(1,2)$ & $(0.28s)$ & $(7.65s)$ & $(12470s)$ & $(2.2s)$ & \\
\hline
$P_4$ & $P_4$ & $0.472136$ & $0.86953$& $\bf 1.06418$ & $1.23607$ & $2\mapsto 2,4;\ \ 3\mapsto 1,3$ \\
&$(2,3)$ & $(0.27s)$& $(2.18s)$ & $(12.6s)$ & $(2.2s)$ & \\
\hline
$P_6$ & $P_4$ & $0.367365$ & $0.811369$ & $\bf 0.87366$ & $0.89008$ & $1\mapsto 4,6;\ \ 3\mapsto 4,6$ \\
&$(1,3)$ & $(0.26s)$ & $(4.6s)$ & $(59s)$ & $(2.1s)$ & \\
\hline
$P_6$ & $P_4$ & $0.367365$ & $0.737641$ & $\bf 0.87321$ & $0.89008$ & $2\mapsto 4,6;\ \ 3\mapsto 1,3$ \\
&$(2,3)$ & $(0.26s)$ & $(3.41s)$ & $(57s)$ & $(2.1s)$ & \\
\hline
$P_{10}$&$P_4$ & $0.252282$ & $0.523808$ & $\bf 0.56837$ & $0.56926$ & $2\mapsto 8,10;\ \ 3\mapsto \emptyset$ \\
&$(2,3)$ & $(0.26s)$ & $(6.32s)$ & $(4109s)$ & $(2.6s)$ & \\
\hline
$T_4$ & $P_4$ & $0.38832$ & $0.73094$& $\bf 0.93258$ & $0.95452$ & $2\mapsto 3,8$ \\
&$(2)$ & $(0.31s)$& $(1.57s)$ & $(12s)$ & $(2.31s)$ & \\
\hline
\end{tabular}
\end{center}
\end{table}
In this section we present computational results. In Table~\ref{tab-results} we present the results of constructing the optimal bridging bipartite graph for various pairs of bridged graphs $G_A$ and $G_B$. First, we chose the fulvene graph $F_0$ as the graph $G_B$ and set $k_B=2$. The graph $G_B\equiv F_0$ is arbitrarily bridgeable through the pairs of vertices $\{1,2\}, \{1,3\}, \{1,4\}$ (cf. \cite{Pavlikova2016}). We show the optimal value $\Lambda_{HL}^{opt}=\overline{\Lambda}_{HL}^{sir}$ for the target graphs $G_A = F_0$ and $G_A=F_1$ (see Figure~\ref{fig:opt-bridgeF0F0}). We also present the upper and lower bounds obtained by means of the full semidefinite relaxation. Among the tested examples the maximal HOMO-LUMO gap was attained when $G_B=F_0$ was bridged to $G_A=F_0$ through the vertices $\{1,4\}$. Solving the mixed integer semidefinite program (\ref{homolumosir}) is time consuming (see Table~\ref{tab-results}). On the other hand, the upper and lower bounds were obtained efficiently by means of the full semidefinite relaxation technique. A graphical presentation of the optimal bridging of fulvene graphs is shown in Figure~\ref{fig:opt-bridgeF0F0}.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/bridged-P4P6-13}
\qquad
\includegraphics[width=0.4\textwidth]{figures/bridged-P4P6-23}
\\
(a) \hskip 3truecm (b)
\vskip 6truemm
\includegraphics[width=0.5\textwidth]{figures/bridged-P4T4}
\\
(c)
\end{center}
\caption{Results of optimal bridging of the graph $G_B=P_4$ through the vertices $\{1,3\}$ to $G_A=P_6$ a); through the vertices $\{2,3\}$ to $G_A=P_6$ b); and through the vertex $\{2\}$ to $G_A=T_4$ c).}
\label{fig:opt-bridgeP4P6}
\end{figure}
The next set of examples consists of bridging the simple path $G_B=P_4$ to the path $G_A=P_n$ for $n=4,6,10$. An illustration of the optimal bridging of $P_4$ to $P_6$ over various pairs of vertices is shown in Figure~\ref{fig:opt-bridgeP4P6}.
The last example is the optimal bridging of $G_B=P_4$ to the graph $G_A=T_{2k}$, where $T_{2k}$ is the comb graph on $2k$ vertices consisting of the simple path $P_k$ with a pendant vertex attached to each of its vertices. In this case, solving the optimal bridging problem yields a bridged graph $G_C$ containing a cycle $C_4$ (see Figure~\ref{fig:opt-bridgeP4P6}, c)).
In Section 7 we discussed additional constraints imposed on the bridging graph $G_K$. In what follows, we present results of computing the optimal HOMO-LUMO gap and its upper and lower bounds under the constraint that the resulting graph $G_C$ represents a chemical molecule with the maximal vertex degree $M_d=3$. The results are summarized in Table~\ref{tab-results-chem} and illustrative examples are shown in Figure~\ref{fig:opt-bridgeP4P6-chem}. In Figure~\ref{fig:opt-bridgeP4P6-chem}, c), we confirm the well known fact that the comb graph $T_{2k}$ has the maximal HOMO-LUMO gap among all trees on $2k$ vertices with perfect matchings. It was first proved by Kr\v{c} and Pavl\'{\i}kov\'a \cite[Theorem 7]{Pavlikova1990} (see also Zhang and An \cite{Zhang1999}). Interestingly, adding the additional constraint on the maximal degree of vertices considerably reduced the computational time for solving the mixed integer semidefinite problem (\ref{homolumosir}).
\begin{table}
\caption{The computational results and comparison of various relaxations. The chosen graphs and description of columns is the same as in Table~\ref{tab-results}. In this table we present results of optimization when additional constraint of the maximal degree 3 has been imposed.}
\label{tab-results-chem}
\begin{center}
\scriptsize
\begin{tabular}{c|c||c|c|c|c|c}
$G_A$ & $G_B$ & $\underline{\Lambda}_{HL}^{sdp}$ & $\underline{\Lambda}_{HL}^{sir}$ & $\Lambda_{HL}^{opt}=\overline{\Lambda}_{HL}^{sir}$ & $\overline{\Lambda}_{HL}^{sdp}$ & \hbox{bridging $G_B \mapsto G_A$ } \\
\hline\hline
$F_0$ & $F_0$ & $0.233688$ & $0.507678$ & $\bf 0.720830$ & $0.87214$ & $1\mapsto \emptyset;\ 2\mapsto 6$ \\
&$(1,2)$ & $(0.31s)$ & $(2.73s)$ & $(7.1s)$ & $(2.9s)$ & \\
\hline
$F_0$ & $F_0$ & $0.233688$ & $0.468053$ & $\bf 0.720830$ & $0.87214$ & $1\mapsto 6; 4\mapsto \emptyset$ \\
&$(1,4)$ & $(0.31s)$ & $(1.1s)$ & $(2.33s)$ & $(2.85s)$ & \\
\hline
$F_0$ & $F_0$ & $0.333126$ & $0.706635$ & $\bf 0.776875$ & $0.87214$ & $1\mapsto 6; 3\mapsto 6$ \\
&$(1,3)$ & $(0.35s)$ & $(2.45s)$ & $(8.4s)$ & $(2.82s)$ & \\
\hline
$F_1$ & $F_0$ & $0.163626$ & $0.389941$ & $\bf 0.493727$ & $0.566658$ & $1\mapsto 6; 2\mapsto \emptyset$ \\
&$(1,2)$ & $(0.38s)$ & $(3.67s)$ & $(13.4s)$ & $(2.83s)$ & \\
\hline
$P_4$ & $P_4$ & $0.472136$ & $0.869530$& $\bf 0.954520$ & $1.23607$ & $3\mapsto \emptyset; 2\mapsto 2$ \\
&$(2,3)$ & $(0.31s)$& $(1.86s)$ & $(7.8s)$ & $(2.86s)$ & \\
\hline
$P_6$ & $P_4$ & $0.367365$ & $0.811369$ & $\bf 0.828427$ & $0.89008$ & $1\mapsto 4,6; 3\mapsto 2$ \\
&$(1,3)$ & $(0.36s)$ & $(3.35s)$ & $(22.9s)$ & $(2.83s)$ & \\
\hline
$P_6$ & $P_4$ & $0.367365$ & $0.737641$ & $\bf 0.820751$ & $0.89008$ & $2\mapsto 5; 3\mapsto 2$ \\
&$(2,3)$ & $(0.33)$ & $(2.73s)$ & $(9.21s)$ & $(2.87s)$ & \\
\hline
$P_{10}$&$P_4$ & $0.252282$ & $0.523808$ & $\bf 0.559046$ & $0.56926$ & $2\mapsto \emptyset; 3\mapsto 11$ \\
&$(2,3)$ & $(0.33s)$ & $(4.78s)$ & $(13.87s)$ & $(2.86s)$ & \\
\hline
$T_4$ & $P_4$ & $0.38832$ & $0.692266$& $\bf 0.890084$ & $0.95452$ & $2\mapsto 4$ \\
&$(2)$ & $(0.31s)$& $(0.88s)$ & $(1.5s)$ & $(2.11s)$ & \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/bridged-P4P6-13-chem}
\qquad
\includegraphics[width=0.4\textwidth]{figures/bridged-P4P6-23-chem}
\\
(a) \hskip 3truecm (b)
\vskip 6truemm
\includegraphics[width=0.5\textwidth]{figures/bridged-P4T4-chem}
\\
(c)
\end{center}
\caption{Results of optimal bridging of the graph $G_B=P_4$ through the vertices $\{1,3\}$ to $G_A=P_6$ (a); through the vertices $\{2,3\}$ to $G_A=P_6$ (b); and through the vertex $\{2\}$ to $G_A=T_4$ (c); with the constraint of maximal degree equal to 3.}
\label{fig:opt-bridgeP4P6-chem}
\end{figure}
\section*{Conclusions}
We analyzed spectral properties of graphs constructed from two given invertible graphs by bridging them over a bipartite graph. We showed how the HOMO-LUMO spectral gap can be computed by means of a solution to a mixed integer semidefinite programming problem. We investigated the optimization problem of constructing a bridging graph that maximizes the HOMO-LUMO spectral gap. We also provided upper and lower bounds on the optimal value, again expressed as solutions to relaxed semidefinite programming problems. Various computational examples were presented throughout the paper.
\section{Introduction}
Graph processing plays an important role in many real-world applications, e.g., ranking web sites~\cite{shun2013ligra}, analyzing social networks~\cite{teixeira2015arabesque}, and streaming applications~\cite{liao}. Therefore, a large number of research efforts have been made to build dedicated hardware that can execute graph applications more efficiently than general-purpose processors and systems~\cite{nurvitadhi2014graphgen, ham2016graphicionado, ozdal2016energy, dai2017foregraph}.
Despite these efforts, graph algorithms may still suffer a considerable performance penalty from atomic protection. During a graph iteration, each vertex sends its value to all associated vertices; it is therefore common for many vertices to read/write the same vertex simultaneously, requiring a significant number of atomic protections in existing graph accelerators to preserve correctness. The overhead arising from these atomic operations can account for nearly half of the total graph execution time, as demonstrated in previous work~\cite{nai2017graphpim, wu2015g} and also witnessed in our motivating study in Section 2.
Much effort has been put into reducing the atomic overhead. By offloading atomic operations to specialized memory (e.g., hybrid memory cubes~\cite{lee2015bssync,nai2017graphpim}), data access overhead can be reduced. Speculative lock elision can expose fine-grained parallelism hidden by overly conservative atomic protection~\cite{Herlihy1993Transactional}. Recent studies also attempt to reduce the number of atomic operations through sophisticated preprocessing, e.g., graph partitioning~\cite{dai2017foregraph} and dynamic scheduling~\cite{ozdal2016energy}. Unlike these previous works, which concentrate on optimizing the cost of individual atomic operations, this work focuses on the strictly sequential execution enforced between atomic operations, which is under-studied in graph processing.
Interestingly, many graph algorithms (e.g., BFS, PageRank and WCC) share a significant, common feature in their atomic operations: 1) {\em incremental}--the atomic operations follow the commutative and associative laws; 2) {\em simplex}--all atomic operations are of the same kind. Instead of enforcing sequential execution of conflicting operations as in traditional designs, this observation makes it possible to parallelize massive conflicting vertex updates in an accumulative manner, that is, by simultaneously processing multiple operations and merging the results in parallel. In this paper, we address how to design such an efficient accumulator for parallelizing conflicting data accesses during vertex updating in graph processing.
We propose a novel accelerator that can simultaneously process multiple atomic operations, parallelizing vertex updates that have data conflicts while ensuring correctness. Considering that real-world graphs generally follow the power-law distribution~\cite{gonzalez2012powergraph}, a specialized accumulator is designed to distinguish the processing of low-degree and high-degree vertices. Internally, it executes multiple low-degree vertices in parallel for efficient edge-level parallelism and limits the vertex parallelism for high-degree vertices to avoid frequent synchronization. To keep the architecture balanced, our accelerator is built with a high-throughput on-chip memory that provides efficient vertex access for the accumulator. The memory evenly distributes the requests based on a rearranging mechanism and processes them in an out-of-order manner to ensure high throughput.
The contributions of this work are summarized as follows:
\begin{itemize}
\item We study a wide range of graph workloads and perform a detailed analysis of their atomic operations. We demonstrate that their distinct characteristics enable parallel execution of conflicting vertex updates.
\item We propose a graph-specific accelerator that supports parallel execution of atomic operations. A parallel accumulator is designed to guarantee efficient processing of vertices with different degrees. A high-throughput on-chip memory is also provided for its efficient use.
\item We compare our accelerator with the state-of-the-art ForeGraph. Experimental results with three graph algorithms on six real-world graphs show that our accelerator provides 2.36 GTEPS on average, outperforming ForeGraph by up to 3.14x.
\end{itemize}
The rest of this paper is organized as follows. In Section 2, we introduce the background of graph processing and provide our motivations and challenges in detail. Section 3 and Section 4 propose our parallel accumulator designs and optimizations in memory subsystem. The evaluation results are presented in Section 5. We survey related work in Section 6 and conclude the paper in Section 7.
\section{Background and Motivation}
This section first reviews the vertex updating mechanism of existing graph accelerators for conflicting data accesses. We next discuss its potential deficiency for graph processing through a motivating study, and finally present our approach.
\subsection{Modern Graph Accelerator and Its Data Conflict Management}
\begin{figure}
\centering
\subfigure[Pseudocode of BFS]{
\begin{minipage}[b]{0.5\textwidth}
\centering
\includegraphics[width=2.7in, height=1.6in]{fig/bfs_algorithms}
\label{fig_bfs_code}
\end{minipage}
}
\subfigure[Execution flow of BFS]{
\begin{minipage}[b]{0.5\textwidth}
\centering
\includegraphics[width=2.7in, height=0.7in]{fig/bfs_atomic}
\label{fig_bfs_atomic}
\end{minipage}
}
\caption{BFS pseudocode and its execution flow}
\vspace{-1.5em}
\end{figure}
A graph accelerator is customized hardware specially designed for iterative computation on graphs. In graph representation, each entity is defined as a {\em vertex}, and each connection as an {\em edge}. The {\em degree} of a vertex denotes the number of connections it has, and the degree distribution is the probability distribution of degrees over all vertices.
In existing graph accelerators with a shared memory architecture, all vertices in the graph are shared and can be accessed by multiple pipelines. As a result, data contention is frequent in graph processing, particularly for vertices associated with a large number of edges. To ensure the correctness of vertex updates, existing designs often use atomic structures (e.g., content addressable memory~\cite{ham2016graphicionado, ozdal2016energy, pagiamtzis2006content}), which atomically protect the update of a vertex once a conflicting data access to it has been detected at runtime.
A typical procedure of data conflict management used in many graph accelerators~\cite{ham2016graphicionado, ozdal2016energy} is as follows. Multiple edges of the given vertices are fetched and sent to the accelerator in each cycle. When receiving these edges, the accelerator first checks the pipeline states. If an edge is connected to a vertex that is currently executing in the pipeline, its processing is stalled until the prior one finishes. In this way, the same vertex cannot appear in more than one pipeline stage for vertex execution at the same time, thus ensuring atomicity.
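The stall-based procedure above can be sketched behaviorally (class and method names are ours, not taken from any accelerator): an edge enters the pipeline only if its destination vertex is not already in flight in some stage.

```python
from collections import deque

# Behavioral sketch of stall-based conflict management: an update for a
# vertex is issued only if that vertex is not in flight in any stage;
# otherwise a bubble is inserted and the stall is counted.

class AtomicPipeline:
    def __init__(self, depth):
        self.stages = deque([None] * depth)   # destination vertex per stage
        self.stalls = 0

    def tick(self, dst):
        """Issue an update for vertex dst, or a bubble if dst is in flight."""
        if dst is not None and dst in self.stages:
            self.stalls += 1
            dst = None                        # bubble: the edge must wait
        self.stages.appendleft(dst)
        return self.stages.pop()              # update completing this cycle

pipe = AtomicPipeline(depth=3)
for dst in (7, 7, 7):     # three consecutive edges target the same vertex 7
    pipe.tick(dst)
print(pipe.stalls)        # → 2
```

With a 3-stage pipeline, three back-to-back edges to the same vertex produce two stalls, illustrating how conflicts serialize the pipeline.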
\subsection{Inefficiency in Graph Processing}
Graphs often exhibit complex connections in which a vertex may be shared among many others. This is particularly true for natural graphs, which follow the power-law degree distribution: most vertices have low degree while a few have extremely large degree~\cite{gonzalez2012powergraph}. There is thus a high risk that a large number of low-degree vertices simultaneously access the same high-degree vertex, leading to serious data contention. Unfortunately, modern graph accelerators (e.g., ForeGraph~\cite{dai2017foregraph} and Graphicionado~\cite{ham2016graphicionado}) fall short in handling these highly frequent data conflicts due to the serial semantics of their atomic protection for vertex updates.
{\bf Atomic Protection Analysis}\quad
Figure~\ref{fig_bfs_code} illustrates the pseudocode of {\em Breadth-First Search} (BFS). It starts from a root vertex $r$ and iteratively traverses the graph to calculate the shortest distance from the root vertex to every other vertex. During the traversal, each vertex $v$ in the scheduling list receives values $dis[u]$ from its neighboring vertices and updates its own data based on these values (Line 7). At the end of the traversal, a new vector $Q^\prime$ is generated and used as the scheduling list of the next iteration.
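As a sequential reference model, the BFS relaxation just described can be sketched as follows, with the "CAS if less" update of Line 7 written out explicitly (function names are ours):

```python
import math

# Sequential reference sketch of BFS: each scheduled vertex u relaxes its
# neighbors v with a "CAS if less" update, and improved vertices form the
# scheduling list Q' of the next iteration.

def bfs_distances(neighbors, root):
    dis = [math.inf] * len(neighbors)
    dis[root] = 0
    q = [root]                      # scheduling list Q
    while q:
        q_next = []                 # Q' for the next iteration
        for u in q:
            for v in neighbors[u]:
                # CAS if less: write dis[u] + 1 only if it improves dis[v]
                if dis[u] + 1 < dis[v]:
                    dis[v] = dis[u] + 1
                    q_next.append(v)
        q = q_next
    return dis

# path 0-1-2-3 plus a shortcut edge 0-2
print(bfs_distances([[1, 2], [0, 2], [0, 1, 3], [2]], root=0))  # → [0, 1, 1, 2]
```

In hardware, the `if less` test and the write to `dis[v]` must happen atomically whenever several scheduled vertices target the same `v`, which is exactly the protection discussed next.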
Because of the atomic protection, the data received from neighboring vertices have to be applied one by one, one per cycle, to preserve the correctness of the final result. Figure~\ref{fig_bfs_atomic} shows the execution flow of BFS with atomic protection. Each scheduled vertex accesses data from itself and one of its neighbors, and writes back the updated data after finishing processing. The data of the other neighbors is cached and not released to the pipeline until the prior process completes. In other words, the processing inside each vertex is forced to be sequential, reducing data contention at the cost of performance.
\begin{figure}
\centering
\includegraphics[width=2.7in, height=1.6in]{fig/memory_syn}
\caption{Normalized performance overhead caused by sequential atomic operations}
\vspace{-1em}
\label{memory_syn}
\end{figure}
{\bf\em Experimental Demonstration}\quad We further conduct a set of experiments to investigate how much performance impact atomic protection may incur in graph processing. We use a cycle-accurate simulation to perform the vertex iteration with a parallel update over a maximal set of 16 edges\footnote{The simulation is conducted with a pipelined architecture similar to ForeGraph~\cite{dai2017foregraph}. Since the data width of an edge is usually 32 bits in BFS, we set 16-edge parallelism according to the memory access granularity (512 bits). The edge shuffling optimization~\cite{dai2017foregraph} is not covered in our simulation.}. Figure~\ref{memory_syn} depicts the comparative results.
It is observed that atomic protection alone leads to a significant performance degradation on all real-world graphs, with 45\% extra memory overhead on average in contrast to the 16-edge parallel vertex update. This is particularly serious for graphs with a higher average degree (e.g., {\em Orkut}).
{\bf Remark} There are also a number of potential solutions for reducing the performance impact of atomic operations. ForeGraph~\cite{dai2017foregraph} proposes a shuffling mechanism to rearrange edges with potential data conflicts. The design of~\cite{ozdal2016energy} over-schedules destination vertices and sends part of them to the processing unit based on a credit-based mechanism. The basic idea of both mechanisms is to avoid simultaneously scheduling edges with the same destination vertex. While they reduce the pipeline stalls caused by atomic protection, different edges of the same destination vertex are still processed sequentially.
Some work~\cite{ahn2016scalable, nai2017graphpim} uses the novel {\em processing-in-memory} (PIM) technology~\cite{gokhale1995processing} to offload atomic operations to a specialized memory region, reducing their processing time. However, it requires a specialized memory architecture and also increases the number of memory requests, since all atomic operations need to be sent to the memory.
\begin{table}[htbp]
\centering
\caption{Atomic operation types for the vertex update in different graph algorithms}
\vspace{-1em}
\label{atomic_type}
\begin{tabular}{|c|c|}
\multicolumn{1}{c}{}&\multicolumn{1}{c}{} \\ \hline
{\bf Algorithm} & {\bf Operation Type} \\ \hline
Breadth-First Search & CAS if less \\ \hline
Weakly Connected Components & CAS if less \\ \hline
Shortest Path & CAS if less \\ \hline
PageRank & Atomic add \\ \hline
Triangle Counting & Atomic add \\ \hline
Degree Centrality & Atomic add \\ \hline
Collaborative Filtering & Atomic add \\ \hline
\end{tabular}
\vspace{-0.5em}
\end{table}
\subsection{Potential of Accumulator}
The key insight of this work is that atomic operations for many graph algorithms can be parallelized in an accumulative manner.
Table~\ref{atomic_type} lists the typical operations that need atomic protection in seven popular graph algorithms. We observe that these atomic operations share two significant properties.
{\bf\em Observation 1}: {\em The atomic operations on different edges follow the commutative and associative laws}.
The commutative law means that the execution order of the operations has no effect on the result; associativity ensures the correctness of merging multiple operations. That is, any of the operations can be merged without changing the final result. For example, {\em PageRank} uses atomic-add operations: it updates every vertex by $Rank(v) = \varepsilon + \sum_{u \in neighbor(v)} Rank(u) / |neighbor(u)|$, where $\varepsilon$ is a constant. No matter how we reorder these atomic operations or merge successive ones, the final result remains consistent.
{\bf\em Observation 2}: {\em The atomic operations for updating the value of a conflicting vertex are simple and used repeatedly}.
Taking {\em PageRank} as the example, all of its atomic operations use the same atomic-add to sum values into the final result. This similarity makes it possible to use a single structure to merge all atomic operations.
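The two observations can be checked with a toy experiment (variable names are ours): because atomic-add is commutative and associative, conflicting contributions to one vertex may be reordered or merged in arbitrary groups without changing the final value.

```python
import math
import random

# Toy check of Observations 1 and 2: atomic-add updates to one vertex give
# the same result whether applied one by one, merged in arbitrary chunks,
# or reordered (up to floating-point rounding, hence isclose below).

random.seed(0)
updates = [random.random() for _ in range(16)]   # 16 conflicting contributions

sequential = 0.0
for u in updates:                 # one-by-one atomic adds
    sequential += u

# merge arbitrary chunks first, as a parallel accumulator would
merged = sum(sum(c) for c in (updates[0:5], updates[5:11], updates[11:16]))

shuffled = list(updates)
random.shuffle(shuffled)          # arbitrary execution order
reordered = sum(shuffled)

assert math.isclose(sequential, merged) and math.isclose(sequential, reordered)
print("all three schedules agree")
```

The same argument applies to the "CAS if less" operations of Table~\ref{atomic_type}, since `min` is likewise commutative and associative.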
\begin{figure}
\centering
\includegraphics[width=2.7in, height=1.6in]{fig/architecture_overview}
\caption{Architecture of Graph Accelerator. $P_i$ denotes the $i$th pipeline stage}
\label{architecture_overview}
\vspace{-1.5em}
\end{figure}
These two observations enable us to leverage existing well-developed accumulators to parallelize conflicting vertex updates. An accumulator is a hardware component that merges its inputs into a set of results with a specific function. Nevertheless, designing such an accumulator for large-scale graph processing remains tremendously challenging.
First, real-world graph topologies are often sparse, with a low average degree. Although traditional accumulator designs~\cite{knowles2001family, ladner1980parallel, blelloch1989scans} can provide desirable throughput, they establish a fixed mapping between inputs and results. In reality, the degrees of vertices change dynamically during the iteration, so such an accumulator may produce incorrect results when simultaneously processing multiple vertices. Therefore, a traditional accumulator can only accumulate the atomic operations of a single low-degree vertex at a time, leading to extremely low parallelism for graph processing. There remains a significant gap in applying the accumulation idea to graph processing without losing a wealth of edge-level parallelism.
Second, natural graphs often follow a power-law distribution. When processing low-degree vertices, the accumulator is expected to process multiple vertices simultaneously. However, for high-degree vertices, whose edge counts can easily exceed millions (e.g., {\tt twitter}), it is extremely difficult for an accumulator of limited width to handle so many edges at once. If multiple vertices are processed simultaneously in this case, the accumulator must be invoked several times, at the cost of increased synchronization overhead. Moreover, this may cause massive random edge accesses, since the edges of these vertices are more likely to be non-sequential. There is thus still no effective technique that reduces the synchronization overhead and random accesses for efficient accumulation.
Third, it is extremely difficult to predict the non-sequential neighboring vertices of each vertex in real-world graphs, so a large number of random accesses have to be issued before invoking the accumulator. Although the accumulator can largely reduce the atomic overhead and provide desirable execution performance, vertex access remains a potential bottleneck that significantly limits the throughput.
\subsection{Architectural Overview}
Figure~\ref{architecture_overview} shows an overview of our accelerator, which is designed as a pipeline with six stages in total. These stages serve two major objectives:
{\bf How to Design an Efficient Accumulator} (Section 3): As explained in the challenge discussion, the accumulator generally suffers from the sparse topology and power-law degree distribution of real-world graphs. To achieve desirable performance, the accumulator must efficiently process both low-degree and high-degree vertices.
When processing low-degree vertices, the accumulator is expected to process multiple vertices simultaneously for efficient parallelism. Since vertex degrees are mutable during processing, the accumulator should establish a dynamic relationship between the input vertices and the final results to ensure correctness.
When processing high-degree vertices, the number of scheduled vertices should be decreased to avoid random accesses. Therefore, the accumulator should be dynamically aware of degree changes and distinguish the processing of different vertices. Furthermore, there is a significant synchronization overhead between multiple accumulations of the same high-degree vertex, which calls for an efficient synchronization mechanism.
{\bf How to Use the Accumulator Efficiently} (Section 4): While the accumulator provides high execution efficiency, the on-chip memory is likely to become a performance bottleneck. To keep up with the throughput of the accumulator, the on-chip memory must be partitioned into independent parts that serve multiple accesses. Furthermore, given the randomness of vertex accesses, the address values of vertices may follow an unbalanced distribution: multiple requests can be sent to the same memory part in each cycle, causing significant throughput degradation. To ensure high throughput, a specialized mechanism is required to dynamically balance the memory requests to the on-chip memory.
\section{Parallel Accumulator Design}
This section discusses the design guidelines for parallel accumulation as well as its core components.
\subsection{Design Philosophy}
Since an accumulator has a fixed width, two situations must generally be considered: skewed graph vertices whose degree is less than the accumulator width and those whose degree exceeds it, each calling for a different parallel design.
\subsubsection{\bf Accumulation Design for Low-Degree Vertex}
As is known, most vertices of a natural graph have a very low degree, often no greater than the fixed number of ports of a typical accumulator. It is thus necessary to simultaneously process the update values of multiple low-degree vertices at a time for high parallelism.
{\bf Problem Definition:} Assume $N$ update values, belonging to $M$ vertices, need to be processed at once. The problem can be described by $p_j = \sum_{1 \le i \le N} a_i \cdot b_{ij}, 1 \le j \le M$, where $p_j$ denotes the accumulated result of vertex $j$, $a_i$ denotes update value $i$, and $b_{ij}$ denotes whether $a_i$ belongs to vertex $j$. The objective is to obtain $p$ with minimal latency.
Considering the locality of graph traversal, this problem can be further simplified. During traversal, edges of the same destination vertex are sequentially accessed in common graph representations, e.g., CSR/CSC~\cite{shun2013ligra}. It ensures that update values of the same destination vertex are sequentially received by the accumulator. Therefore, assuming that $C_j = [c_j^1, c_j^2]$ denotes the interval of vertex $j$'s update values in all $a_i$, the function of accumulator could be simplified by $p_j = f(c_j^2)$, where
\begin{align}
\label{compressed_dp}
f(i) = \left \{
\begin{aligned}
f(i - 1) + a_i, & \quad i \notin \{c_1^1, c_2^1, \ldots, c_M^1\} \\
a_i, & \quad i \in \{c_1^1, c_2^1, \ldots, c_M^1\}
\end{aligned}
\right.
\end{align}
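Equation~(\ref{compressed_dp}) admits a direct sequential reference implementation (function names are ours): the running sum $f$ restarts at every left endpoint $c_j^1$, and the result of vertex $j$ is read off at $f(c_j^2)$.

```python
# Sequential reference implementation of Equation (1): the running sum f
# restarts at every left endpoint c_j^1, and p_j = f(c_j^2).

def segmented_accumulate(a, intervals):
    """a: update values a_1..a_N; intervals: list of (c1, c2) giving each
    vertex's contiguous 1-indexed range of update values."""
    starts = {c1 for c1, _ in intervals}
    f = [0.0] * (len(a) + 1)
    for i in range(1, len(a) + 1):
        if i in starts:
            f[i] = a[i - 1]          # breakpoint: restart the running sum
        else:
            f[i] = f[i - 1] + a[i - 1]
    return [f[c2] for _, c2 in intervals]

# three vertices owning update values 1..2, 3..3, and 4..6
a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(segmented_accumulate(a, [(1, 2), (3, 3), (4, 6)]))  # → [3.0, 3.0, 15.0]
```

The hardware design below computes the same quantities, but in logarithmic depth rather than this linear scan.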
{\bf Solution Discussion:} A naive method for solving this problem is a Multi-N-Way~\cite{ma2017garaph} accumulator, which reserves an N-way binary-tree accumulator for each vertex. However, its hardware overhead is unacceptable for graph applications. First, its fanouts are too large to implement: up to 8192 when processing cacheline-width data for 16 vertices. Second, its resource utilization is extremely low, since only $N$ of the $N \times M$ received values are useful for the actual accumulation.
In Equation~(\ref{compressed_dp}), $f(i) = f(i - 1) + a_i$ is a typical prefix-sum problem, which has been extensively studied in previous work~\cite{sklansky1960conditional, kogge1973parallel, ladner1980parallel, brent1982regular, knowles2001family}. Beyond the prefix sum, we still need to handle the other case, which requires us to (1) dynamically recognize the breakpoints that {\em break} the sequential computation and cancel the related operations, and (2) select the results from the appropriate ports, since not all outputs are required. These are our additional contributions.
\subsubsection{\bf Accumulation Design for High-degree Vertex}
There are also many high-degree vertices whose degree exceeds the width of the accumulator. Invoking the accumulator multiple times, dividing the edges into multiple parts and processing one part at a time, is a plausible approach, but it incurs extra overhead.
First, iteratively reading the temporary vertex data and writing it back after merging with the accumulated result leads to extra synchronization. Second, graph edges are stored sequentially in common data structures (e.g., {\em CSR/CSC} or {\em adjacency lists}), which means that the edges of a vertex occupy many contiguous cachelines. When multiple vertices are processed simultaneously together with a high-degree vertex, their edges may reside in non-adjacent cachelines, leading to performance degradation.
We present a potential design with an efficient accumulation for solving these problems.
For the first problem, the update values of the same destination vertex arrive in sequence, which ensures that the results of multiple accumulations for the same high-degree vertex are also generated consecutively. Therefore, the write-back of the vertex data can be delayed until the accumulator emits a result for a different vertex.
For the second problem, the inefficiency mainly comes from the fixed granularity of vertex scheduling. Without considering differences in vertex degree, a fixed number of vertices is scheduled and their edges are accessed simultaneously in each cycle. Instead of accessing edges based on the scheduled vertices, a viable method is to access all edges sequentially and dynamically schedule the vertices based on the accessed edges.
\subsection{Parallel Accumulator Architecture}
\begin{figure}
\centering
\includegraphics[width=3.2in, height=1.8in]{fig/architecuture_acc}
\caption{Architecture of parallel accumulator}
\vspace{-1em}
\label{src_acc}
\end{figure}
Figure~\ref{src_acc} shows the overview of a parallel accumulator, consisting of four parts. The {\em source vertex accumulator} simultaneously accumulates update values of different destination vertices. The {\em multiplexer} is responsible for dynamically selecting accumulated data from appropriate ports of the source vertex accumulator. The {\em destination vertex accumulator} receives the selected data and fully accumulates each destination vertex. The {\em degree-aware accumulation} dynamically decides the number of vertices to be scheduled.
{\bf Source Vertex Accumulator: } Prefix sums have been extensively studied since the 1960s~\cite{sklansky1960conditional, kogge1973parallel, brent1982regular, ladner1980parallel}. In this work, we choose the Ladner-Fischer adder~\cite{ladner1980parallel} as the basis of our accumulator, among a large number of efficient designs, for three reasons as follows.
First, our main objective is to obtain the accumulated results with minimal latency, which rules out networks with depth larger than $\log N$. Second, among all minimal-latency networks, it has relatively few adders, which means we add fewer extra resources for breakpoint recognition and result selection. Finally, although its fanouts are relatively larger than others, it does not lengthen the critical path, since its delay and routing time are much smaller than those of an on-chip memory access.
The Ladner-Fischer adder opens a great opportunity for our graph-specific source vertex accumulator. The original design establishes a fixed mapping between inputs and outputs, which leads to incorrect results when multiple vertices with mutable degrees are processed. We therefore complement it with a breakpoint-recognition mechanism: we add a vector $V = (v_1, v_2, \ldots, v_N)$, where $v_i$ denotes the destination vertex that $a_i$ belongs to. With the vector $V$, the recognition condition can be implemented simply by comparing the destination vertices of two inputs:
\begin{align}
\label{new_dp}
f(i) = \left \{
\begin{aligned}
f(i - 1) + a_i, & \quad v_i = v_{i - 1} \\
a_i, & \quad v_i \neq v_{i - 1}
\end{aligned}
\right.
\end{align}
We attach each update value with the ID of its destination vertex in our source vertex accumulator. To further reduce resource usage, we compress the destination vertex ID to its last $\log m$ bits, where $m$ denotes the width of the accumulator. Based on Equation~(\ref{new_dp}), the adder nodes (the gray nodes) are modified to first compare the IDs of the two inputs. If the two IDs are the same, the adder node behaves as in the original design and directly accumulates the input values. Otherwise, it recognizes the second destination vertex as a breakpoint and forwards its update value to the output.
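The reason a prefix network such as Ladner-Fischer remains correct under this modification is that the breakpoint-aware combine of Equation~(\ref{new_dp}) can be phrased as an associative operator on (value, breakpoint-flag) pairs. A behavioral sketch (the linear loop below stands in for the log-depth network; names are ours):

```python
# The modified adder node of Equation (2), phrased as a pairwise combine on
# (value, flag) pairs, where flag marks a segment start (v_i != v_{i-1}).
# This combine is associative, which is what lets a parallel prefix network
# such as Ladner-Fischer compute the segmented sums correctly.

def combine(x, y):
    vx, fx = x
    vy, fy = y
    # if the right operand starts a new segment, the left part is cut off
    return (vy if fy else vx + vy, fx or fy)

def segmented_prefix(values, vertex_ids):
    flags = [i == 0 or vertex_ids[i] != vertex_ids[i - 1]
             for i in range(len(values))]
    out, acc = [], None
    for x in zip(values, flags):      # linear stand-in for the log-depth net
        acc = x if acc is None else combine(acc, x)
        out.append(acc[0])
    return out                        # out[i] equals f(i+1) of Equation (2)

vals = [1, 2, 3, 4, 5]
ids  = [7, 7, 9, 9, 9]                # two destination vertices (compressed IDs)
print(segmented_prefix(vals, ids))    # → [1, 3, 3, 7, 12]
```

Because `combine` is associative, the adder nodes may be arranged in any prefix topology without changing the outputs, which is exactly the property the hardware relies on.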
{\bf Multiplexer: } Once the results are accumulated, the next step is to dynamically select the accumulated result of each destination vertex from the output ports of the source vertex accumulator. We use an $N \times M$ multiplexer to implement this logic. Instead of directly comparing destination vertex IDs, the multiplexer selects the data based on edge offsets to simplify the conditional logic. When the edges in pipeline stage P2 are accessed, each scheduled vertex is attached with its right edge offset, indicating its last edge. Based on this information, the multiplexer naturally selects, for each scheduled destination vertex, the data from the port corresponding to its last edge. For example, if the update values $a_1, a_2, a_3$ belong to the same vertex, the multiplexer selects the accumulated data from the third port of the source vertex accumulator.
\begin{figure}
\centering
\includegraphics[width=3.3in, height=1.2in]{fig/edge_parallel}
\caption{Degree aware accumulation}
\vspace{-1.5em}
\label{fig_edge_parallel}
\end{figure}
{\bf Destination Vertex Accumulator: } In light of the sequential arrival of accumulated values, we can avoid synchronization on the temporary vertex data by delaying the write-back of the destination vertex data until the accumulated value of a different vertex is received.
We design a destination vertex accumulator to merge the partial results of the same vertex. The accumulator holds a destination vertex ID and its accumulated value in private registers. In each cycle, if the ID in the input matches the ID in the register, the accumulator adds the input value to the registered value; otherwise, the registered vertex data is written back and replaced by the input data. Furthermore, since the source vertex accumulator may process multiple destination vertices simultaneously, we replicate the destination vertex accumulators and connect them to the multiplexer through a crossbar switch, which routes the vertex data based on the destination vertex: the last $\log m$ bits of its ID select one of the $m$ replicas.
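The register-level behavior of one destination vertex accumulator can be sketched as follows (class and method names are ours): partial results for the same vertex arrive back-to-back, so one (ID, value) register pair suffices, and a write-back is emitted only when a different vertex shows up.

```python
# Register-level sketch of a destination vertex accumulator: merge inputs
# with a matching registered ID; on an ID change, write back the held pair
# and replace the register contents with the new input.

class DestAccumulator:
    def __init__(self):
        self.vid, self.val = None, 0

    def push(self, vid, val):
        """Returns a (vertex, value) pair to write back, or None."""
        if vid == self.vid:
            self.val += val            # merge with the held partial result
            return None
        done = None if self.vid is None else (self.vid, self.val)
        self.vid, self.val = vid, val  # replace register contents
        return done

    def flush(self):
        return (self.vid, self.val)

acc = DestAccumulator()
writes = [w for w in (acc.push(5, 10), acc.push(5, 7), acc.push(8, 1)) if w]
writes.append(acc.flush())
print(writes)  # → [(5, 17), (8, 1)]
```

Two partial results for vertex 5 merge into a single write-back, which is released only when vertex 8 arrives.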
{\bf Degree Aware Accumulation: } Figure~\ref{fig_edge_parallel} shows the specific design of degree-aware accumulation. The basic idea is to sequentially access all edges and dynamically schedule vertices based on the runtime information of their edge offsets (e.g., the edge ID table in CSR/CSC~\cite{ham2016graphicionado}, which records the location of the edges of each vertex). To make sure that multiple vertices can be accessed in each cycle, we replicate vertex units in stages P1 and P2. Furthermore, a special matching mechanism is implemented in the vertex units of stage P2 to dynamically decide which vertices to schedule.
More specifically, a specialized generator automatically produces memory addresses to access all edges sequentially. In each cycle, every vertex unit stores the received edge offsets and compares the generated memory address with the top entry of its FIFO. If the memory address is within the range of the two edge offsets, the top vertex is scheduled and sent to the next stage. Moreover, if the memory address equals the right edge offset, meaning that all edges of the vertex have been read, the top vertex is removed from the FIFO. In this way, the number of scheduled vertices is guaranteed to match the number of vertices contained in the requested cacheline. Furthermore, the edge units can be shared among all vertex units to improve resource utilization.
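The matching mechanism can be sketched behaviorally at cacheline granularity (function names are ours; offsets use the CSR convention of half-open ranges $[left, right)$): each requested line schedules exactly the vertices whose edges it contains, and a vertex is retired once its last edge has been read.

```python
from collections import deque

# Behavioral sketch of degree-aware accumulation: walk edge addresses one
# cacheline at a time and schedule, per line, the vertices whose edge range
# [left, right) overlaps that line; retire a vertex once its edges are read.

def degree_aware_schedule(offsets, line_width):
    """offsets: (vertex, left, right) per vertex, in edge order."""
    fifo = deque(offsets)
    schedule = []                            # (address, vertices scheduled)
    n_edges = offsets[-1][2]
    addr = 0
    while addr < n_edges:
        lo, hi = addr, addr + line_width     # edges fetched this cycle
        issued = []
        for vid, left, right in list(fifo):
            if left < hi and right > lo:     # vertex has edges in this line
                issued.append(vid)
            if right <= hi:                  # all its edges now read: retire
                fifo.popleft()
            else:
                break                        # it spills into the next line
        schedule.append((addr, issued))
        addr = hi
    return schedule

# vertex 0 owns edges [0,1), vertex 1 owns [1,4), vertex 2 owns [4,5)
print(degree_aware_schedule([(0, 0, 1), (1, 1, 4), (2, 4, 5)], line_width=4))
# → [(0, [0, 1]), (4, [2])]
```

With a 4-edge line, the first cycle schedules both low-degree vertices 0 and 1 together, while a vertex whose edges spill past the line simply stays scheduled into the next cycle, matching the degree-aware behavior described above.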
\section{Optimizations For Efficient Use}
In this section, we present several optimizations that are key to using the proposed parallel accumulator efficiently.
\begin{figure}
\centering
\includegraphics[width=2.6in, height=1.5in]{fig/memory_partition_overhead}
\caption{Normalized performance for processing 16 random memory requests}
\vspace{-1.5em}
\label{fig_memory_partition_overhead}
\end{figure}
\subsection{Source Vertex Access Parallelization}
While the above accumulator can provide reasonable execution efficiency, the memory access is likely to become a performance bottleneck. In practice, the neighbors of a vertex are scattered in memory, leading to significant randomness in vertex accesses. Consequently, the vertex data is typically stored in on-chip memory (e.g., BRAM in FPGA)~\cite{dai2017foregraph, nurvitadhi2014graphgen, ozdal2016energy} to improve memory performance.
Although on-chip storage efficiently reduces the latency of vertex access, the throughput of the on-chip memory is hard to keep up with that of the accumulator. For example, assume that the accumulator runs at 250 MHz with DDR4-2400 memory. In each cycle, the accelerator receives 16 32-bit edges and generates memory requests based on their source vertices, which means the on-chip memory needs to process 16 random read requests simultaneously. Nevertheless, a standard RAM module can process only one read and one write request per cycle. Considering the capacity and frequency limitations of on-chip memory in typical FPGA chips, memory partitioning~\cite{cong2011automatic, wang2013memory} is the most practical method to implement such multi-ported memory.
Typical memory partitioning mechanisms divide the memory into $n$ independent parts and shuffle the requests to achieve a maximal throughput of $n$ requests per cycle. Nevertheless, due to the randomness in vertex access, we find that a significant number of requests are shuffled to the same memory partition in each cycle, so the memory needs more than one cycle to process them. As shown in Figure~\ref{fig_memory_partition_overhead}, the unbalanced shuffling increases the cycle count by up to 70\%, even if we partition the memory into 128 parts.
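The effect can be illustrated with a small experiment of our own (not the paper's exact measurement): with a batch of random requests shuffled into $n$ single-ported partitions, the cycle count of the batch is set by the most-loaded partition.

```python
import random
from collections import Counter

# Estimate the bank-conflict overhead of shuffling random requests into
# n single-ported memory partitions (bank = address mod n).
def cycles_per_batch(requests, n_parts):
    """One cycle per request landing in the most-loaded partition."""
    counts = Counter(r % n_parts for r in requests)
    return max(counts.values())

def average_overhead(n_parts, batch=16, trials=10000, seed=0):
    """Average cycles per 16-request batch; the ideal value is 1.0."""
    rng = random.Random(seed)
    total = sum(
        cycles_per_batch([rng.randrange(1 << 20) for _ in range(batch)],
                         n_parts)
        for _ in range(trials))
    return total / trials
```

Even with 128 partitions, `average_overhead(128)` stays above the ideal 1.0 cycle per batch, which is consistent with the overhead reported in Figure~\ref{fig_memory_partition_overhead}.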
{\bf Optimizations: } By analysing the graph data, we find that this inefficiency is caused by unbalanced edge values: 1) the edge values are not evenly distributed when accessed at cacheline-width granularity, and 2) the edge values themselves are unbalanced when processed at single-vertex granularity.
Algorithm~\ref{alo_rearrange} presents the pseudocode of our mechanism for solving the first problem. The basic idea is to rearrange the edges of each vertex, before processing the graph, so that the address values are relatively balanced at cacheline-width granularity. Assuming that the memory is partitioned into 16 independent parts, we maintain 16 queues for each vertex, into which the edges are pushed based on the connected vertex's ID. During rearranging, we iteratively select edges from each queue in round-robin order for every vertex. The overhead of rearrangement is $O(|E|)$, the same as that of the compression formats commonly used in graph processing (e.g., CSR/CSC). With this mechanism, the address values are evenly distributed, thus improving memory performance.
\begin{figure}
\includegraphics[width=3in, height=1.6in]{fig/source_access}
\caption{Workflow of accessing source vertex data}
\vspace{-1.5em}
\label{fig_source_access}
\end{figure}
\begin{algorithm}
\caption{Pseudocode of the rearranging mechanism}
\label{alo_rearrange}
\small
\KwIn {Graph $G = (V, E)$, partition number $P$}
\KwOut {Rearranged edge list $NewEdge$}
\For {$v \in G$} {
\For {$u \in \{k | (k, v) \in E \}$} {
$Edge(v, u \ MOD \ P).push(u)$\;
}
$N(v) \leftarrow |\{k | (k, v) \in E \}|$\;
}
\For {$v \in G$} {
$i \leftarrow 0$\;
\While {$N(v) > 0$} {
\If {$Edge(v, i)$ {\rm is not empty}} {
$NewEdge(v).push(Edge(v, i).pop())$\;
$N(v) \leftarrow N(v) - 1$\;
}
$i \leftarrow (i + 1) \ MOD \ P$\;
}
}
\end{algorithm}
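A possible Python rendering of Algorithm~\ref{alo_rearrange} (the dictionary-based interface is our own choice) buckets each vertex's in-edges by neighbor ID modulo the partition count $P$, then drains the buckets round-robin so that consecutive edges hit different memory partitions:

```python
from collections import deque

# Rearrange the edges of each vertex so that consecutive edges map to
# different memory partitions (partition = neighbor ID mod P).
def rearrange_edges(in_neighbors, P):
    """in_neighbors[v] -> list of source vertices u with edge (u, v)."""
    new_edges = {}
    for v, neighbors in in_neighbors.items():
        queues = [deque() for _ in range(P)]
        for u in neighbors:
            queues[u % P].append(u)          # Edge(v, u MOD P).push(u)
        out, i, remaining = [], 0, len(neighbors)
        while remaining > 0:
            if queues[i]:                    # skip empty partition queues
                out.append(queues[i].popleft())
                remaining -= 1
            i = (i + 1) % P                  # round-robin over partitions
        new_edges[v] = out
    return new_edges
```

For instance, with $P=4$ the neighbor list `[0, 4, 8, 1]` (three IDs in partition 0, one in partition 1) is rearranged to `[0, 1, 4, 8]`, interleaving the two partitions as far as possible.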
For the second problem, we find that even though the address values of a single vertex are unbalanced, those of the whole graph are relatively balanced. Therefore, we change the processing granularity to deal with this imbalance. More specifically, we allow the on-chip memory to process requests in a non-blocking (out-of-order) manner. Through non-blocking processing, idle memory ports can be utilized by later requests, thus improving memory efficiency.
Figure~\ref{fig_source_access} shows the workflow of our mechanism. In each cycle, stage P3 receives $N$ edges from memory and shuffles them to different request FIFOs based on their values. The FIFOs cache these edges and send the requests generated by the top entries to the on-chip memory. To prevent the unblocked requests from breaking the sequentiality of edge access and thus producing incorrect results, a reorder stage is inserted after the source vertex data is accessed. The reorder stage caches the accessed vertex data, reorders it to match the sequence of the original requests, and sends the reordered data to stage P4. To implement this reordering logic, each memory request is tagged with a token taken from the last log$(m)$ bits of the original edge memory address, where $m$ denotes the size of the buffer in the reorder stage. All accessed data with the same token is stored in the same location in the reorder stage. Once the top data finishes reordering, i.e., all data of the oldest request has been received, it is sent to the next stage.
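The reorder stage can be sketched in software as follows (our own model; class and method names are hypothetical): tokens come from the low log$_2(m)$ address bits, responses arrive out of order, and data is released strictly in original request order.

```python
# Token-based reorder buffer: requests are tagged from the low log2(m)
# address bits, responses are stored by token, and data is released only
# in the original request order.
class ReorderBuffer:
    def __init__(self, m):
        self.m = m                  # buffer size (power of two)
        self.slots = [None] * m     # one data slot per token
        self.pending = []           # tokens in original request order

    def request(self, addr):
        token = addr & (self.m - 1)  # low log2(m) address bits
        self.pending.append(token)
        return token

    def respond(self, token, data):
        self.slots[token] = data     # out-of-order arrival from memory

    def release(self):
        """Emit data for the oldest requests whose responses have arrived."""
        out = []
        while self.pending and self.slots[self.pending[0]] is not None:
            token = self.pending.pop(0)
            out.append(self.slots[token])
            self.slots[token] = None
        return out
```

If the response to the second request arrives first, nothing is released until the first response lands, after which both are emitted in request order.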
\subsection{Source-Based Graph Partition}
While storing vertex data in on-chip memory avoids costly random accesses to main memory, it may require more resources than the chip can provide. Assuming a 4-byte vertex data width and 8 M vertices, the on-chip memory would need to be larger than 32 MB, which is impractical for most FPGAs. To enable processing of large-scale graphs without losing the benefit of on-chip memory, we partition the graph into several parts and process a single part at a time.
To ensure that all vertex data needed for each graph part can be held in on-chip memory, we use a source-based partition mechanism~\cite{gonzalez2012powergraph}. The partition mechanism works as follows. First, the vertices of the input graph are divided into $K$ parts based on their vertex IDs. The value of $K$ depends on the number of vertices and the capacity of the on-chip memory. Each part also includes the out-edges of its vertices. After the input graph is partitioned, our accelerator sequentially processes each graph part in each iteration. Since every edge is assigned to exactly one part together with its source vertex, no edge needs to be processed twice. The graph partitioning does incur some extra memory overhead, since the same destination vertex data might be read and written more than once. Its impact is discussed in more detail in Section 5.4.
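Under our reading of the scheme, the partitioning step can be sketched as follows (function name and the contiguous-ID-range split are our assumptions):

```python
# Source-based partitioning: vertices are split into K contiguous ID ranges,
# and each part carries the out-edges of its own vertices, so every edge
# appears in exactly one part.
def partition_graph(num_vertices, edges, K):
    """edges: list of (src, dst). Returns K parts of vertex ranges + edges."""
    per_part = -(-num_vertices // K)          # ceiling division
    parts = [{"vertices": range(p * per_part,
                                min((p + 1) * per_part, num_vertices)),
              "edges": []} for p in range(K)]
    for src, dst in edges:
        parts[src // per_part]["edges"].append((src, dst))
    return parts
```

Each edge lands in the part owning its source vertex, so the parts together cover every edge exactly once; destination vertex data, by contrast, may be touched by several parts, which is the extra memory overhead mentioned above.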
\section{Evaluation}
This section evaluates the effectiveness and efficiency of our graph accelerator on a wide variety of graph algorithms with real-world graph datasets.
\subsection{Experimental Settings}
{\bf Evaluation Tools: } We implement our accelerator on Xilinx Virtex Ultrascale+ XCVU9P-FLGA2104 FPGA with -2L speed grade. The target FPGA chip provides 1.18 M LUTs, 2.36 M registers, and 9.49 MB on-chip BRAM resources. We verify the correctness and get the clock rate as well as resource utilization using Xilinx Vivado 2017.1. All these results have passed post-place-and-route simulations. Our target off-chip memory is Micron 4GB DDR4 SDRAM (MT40A256M16GE-083E). We use DRAMSim2~\cite{rosenfeld2011dramsim2} to simulate the cycle-accurate behavior of the off-chip access. The memory has a running frequency of 1.2 GHz and a peak bandwidth of 19.2 GB/s.
\begin{table}[htbp]
\centering
\vspace{-0.5em}
\caption{Graph datasets}
\vspace{-1.5em}
\label{Graph_datasets}
\tabcolsep=0.1cm
\begin{tabular}{|c|c|c|c|}
\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{} \\ \hline
{\bf Names} & {\bf\# Vertices} & {\bf\# Edges} & {\bf Description} \\ \hline
Slashdot & 0.08 M & 0.95 M & Link Graph \\ \hline
DBLP & 0.32 M & 1.05 M & Collaboration Graph \\ \hline
Youtube & 1.13 M & 2.99 M & Social Network \\ \hline
Wiki & 2.39 M & 5.02 M & Website Graph \\ \hline
LiveJournal & 4.85 M & 69.0 M & Follower Graph\\ \hline
Orkut & 3.07 M & 117 M & Social Network\\ \hline
\end{tabular}
\vspace{-0.5em}
\end{table}
{\bf Graph Algorithms: } We implement three well-known graph algorithms on our accelerator, covering both CAS-if and atomic-add operation types in Table~\ref{atomic_type}.
\begin{itemize}[leftmargin=*]
\item {\em Breadth First Search (BFS)} is a basic traversal algorithm used by many graph algorithms. It iteratively traverses the input graph and computes the shortest-path distance from the root to every vertex.
\item {\em PageRank (PR)} is an important graph algorithm used to rank web pages according to their importance. In each iteration it updates every vertex based on the formula $Rank(v) = \varepsilon + \sum_{u \in \text{in-neighbor}(v)} Rank(u) / |\text{out-neighbor}(u)|$, where $\varepsilon$ is a constant.
\item {\em Weakly Connected Components (WCC)} is an algorithm that identifies the weakly connected components of a graph. During the traversal, every vertex receives the labels of all its neighbors and updates itself with the minimal one.
\end{itemize}
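As a plain-software reference for the PR update rule above (ours, not the accelerator implementation; the dictionary-based graph representation is an assumption):

```python
# One iteration of the PR update rule from the text:
#   Rank(v) = eps + sum over in-neighbors u of Rank(u) / |out-neighbor(u)|
def pagerank_iteration(rank, out_edges, eps=0.15):
    """rank: vertex -> score; out_edges: vertex -> list of out-neighbors."""
    new_rank = {v: eps for v in rank}
    for u, neighbors in out_edges.items():
        if not neighbors:
            continue
        share = rank[u] / len(neighbors)     # Rank(u) / |out-neighbor(u)|
        for v in neighbors:
            new_rank[v] += share             # accumulate over in-neighbors
    return new_rank
```

On a two-vertex cycle with unit initial ranks, one iteration yields a rank of $\varepsilon + 1$ for each vertex.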
{\bf Graph Datasets:} The graph datasets for the experiments are summarized in Table~\ref{Graph_datasets}. All are real-world graphs collected from SNAP~\cite{snapnets} and TAMU~\cite{DvisSparse}. In our implementation, each undirected edge is treated as two directed edges between its endpoints and is processed twice. Therefore, the edge counts for the undirected graphs ({\em DBLP}, {\em Youtube}, and {\em Orkut}) are doubled in our evaluation.
\begin{figure}
\centering
\includegraphics[width=2.8in, height=1.5in]{fig/performance_compare_foregraph}
\caption{Our accelerator normalized to the ForeGraph performance. YT denotes graph {\em Youtube}, Wk denotes graph {\em Wiki}, and LJ denotes graph {\em LiveJournal}. AVG presents the average speedup of all tested graphs}
\label{fig_performance_compare_foregraph}
\end{figure}
\subsection{Overall Performance}
{\bf Resource utilization: }
Table~\ref{resource_utilization} shows the resource utilization and clock rate of the FPGA design with 8 vertex pipelines and 16 edge pipelines, which maximizes throughput given the peak DRAM bandwidth. First of all, because of the shared edge pipeline design described in Section 3.2, the number of resources required is reduced. Therefore, the logic resource (LUT and register) consumption of our accelerator is relatively low. Secondly, we implement the on-chip memory with BRAM resources to maintain vertex data. Similar to prior work~\cite{dai2017foregraph}, we use 1 byte integer to represent the depth value in BFS, single-precision floating point (4 bytes) in PR, and 4 bytes integer in WCC. In this way, the maximal memory requirement is 1 $\times$ 4.85 $=$ 4.85 MB for 1 byte data and 4 $\times$ 4.85 $=$ 19.4 MB for 4 bytes data. Therefore, we hold all vertex data when running BFS and about 1.7 M vertex data for other algorithms, which consumes 57.9\% and 69.9\% of available BRAM resources, respectively. The UltraRAM resources are not used in our implementation.
\begin{table}[htbp]
\centering
\caption{Resource utilization and clock rate}
\vspace{-1.5em}
\label{resource_utilization}
\begin{tabular}{|c|c|c|c|}
\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{} \\ \hline
{\bf } & {\bf BFS} & {\bf PR} & {\bf WCC} \\ \hline
LUT & 7.39\% & 10.1\% & 8.26\% \\ \hline
registers & 2.53\% & 4.47\% & 3.02\% \\ \hline
BRAM & 57.9\% & 69.9\% & 69.9\% \\ \hline
Maximal clock rate & 256 MHz & 211 MHz & 251 MHz \\ \hline
Simulation clock rate & 250 MHz & 200 MHz & 250 MHz \\ \hline
\end{tabular}
\vspace{-0.5em}
\end{table}
{\bf Throughput:} Figure~\ref{fig_performance_compare_foregraph} shows our performance normalized to that of ForeGraph, one of the fastest FPGA-based graph processing accelerators in terms of throughput. By throughput, we refer to the number of {\em traversed edges per second} (TEPS)~\cite{Graph500}, a performance metric frequently used in graph processing. As described above, ForeGraph is a representative accelerator that sequentially processes different edges of the same destination vertex to ensure atomicity.
Since ForeGraph has not been open-sourced, we execute the same graph algorithms (BFS, PR, and WCC) and datasets ({\em youtube}, {\em wiki-talk} and {\em LiveJournal}) used in its evaluation on our accelerator, and compare against the performance reported in its paper (as previous work has also done~\cite{dai2017foregraph, zhou2016high}). When running PR and WCC on {\em Wiki}, the BRAM resources available in the FPGA chip used by ForeGraph (up to 16.6 MB) are large enough to hold all vertex data on-chip, which is not the case for our FPGA chip (9.49 MB). Therefore, for a fair comparison, we compress the vertex data to 2 bytes when running PR and WCC on {\em Wiki}.
As shown in Figure~\ref{fig_performance_compare_foregraph}, our accelerator achieves 1.36x $\sim$ 3.14x speedup compared to ForeGraph. As analysed in Section 2.2, the speedup mainly comes from the reduced synchronization overhead obtained by processing atomic operations simultaneously. Moreover, our accelerator achieves better load balance with degree-aware accumulation by dynamically deciding the number of scheduled vertices.
Across the different algorithms, we find that the speedup of PR is smaller. This is due to the lower clock rate caused by the complex floating-point units. Since the number of edge pipelines is fixed in our implementation, the clock rate directly influences the overall performance. Moreover, the floating-point units significantly increase the pipeline length and thus require more cycles to recover from pipeline stalls. Therefore, algorithms that use integer values achieve slightly higher performance.
\subsection{Sensitivity Study}
To obtain a more comprehensive performance picture, we execute all graphs listed in Table~\ref{Graph_datasets} on our accelerator. The structures of these graphs differ significantly from each other (e.g., in the number of vertices and edges, and in average degree), thus providing an in-depth view of the performance. As shown in Figure~\ref{fig_throughput}, our accelerator achieves 1.4 GTEPS $\sim$ 3.5 GTEPS over all graph algorithms and datasets.
\begin{figure}
\centering
\subfigure[Different graphs]{
\begin{minipage}[b]{0.46\linewidth}
\includegraphics[width=1.6in, height=1.2in]{fig/performance_throughput}
\label{fig_throughput}
\vspace{-1.5em}
\end{minipage}
}
\subfigure[Different average degrees]{
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=1.6in, height=1.2in]{fig/average_throughput}
\label{fig_averge_throughput}
\vspace{-1.5em}
\end{minipage}
}
\vspace{-1em}
\caption{Sensitivity study of throughput with different graphs and average degrees}
\vspace{-2em}
\end{figure}
Among all graph datasets, {\em Wiki}'s throughput is particularly low on our accelerator. This is because {\em Wiki} is extremely sparse, which makes the accelerator exhibit an imbalance between the vertex and edge pipelines. With a low average degree, the edges accessed from {\em Wiki} in each cycle tend to belong to many vertices (more than 8). Therefore, the vertex pipelines may need more than one cycle to process these edges, leading to lower performance.
As shown in Figure~\ref{fig_averge_throughput}, the performance increases almost linearly while the average degree is less than 16. This is because the percentage of low-degree vertices ($\le 2$) decreases. Moreover, the performance improves only slightly when the average degree increases from 16 to 76, because the memory bandwidth becomes the bottleneck in these cases: it can only deliver one cacheline of edges per cycle. In summary, the performance improves as the average degree increases, until the maximal memory bandwidth is reached.
Lastly, we observe a clear performance degradation for PR and WCC when the average degree is about 14 ({\em LiveJournal}). Moreover, the performance of PR and WCC is significantly lower than that of BFS when the average degree is larger than 14 ({\em LiveJournal} and {\em Orkut}). This is because the vertex data is too large to be held entirely in on-chip memory in these cases. Therefore, the graph partition mechanism is used when executing PR and WCC on these graphs, which involves more vertex accesses. A more detailed analysis of degree distribution and graph partitioning is presented in Section 5.4.
\subsection{Benefit Breakdown}
We next break down the respective benefits of our different graph accelerator designs as follows:
{\bf Benefits from Parallel Accumulation: } Figure~\ref{performance_accumulator} presents the normalized performance results.
The baseline represents the basic design without any of the optimizations described in Sections 3 and 4. It sequentially processes each edge and accumulates its value into the final result in each cycle. CFG 1 adds source vertex accumulation. CFG 2 further adds destination vertex accumulation on top of CFG 1.
\begin{figure}
\centering
\includegraphics[width=2.6in, height=1.5in]{fig/performance_accumulator}
\caption{Benefit of parallel accumulation}
\vspace{-1.5em}
\label{performance_accumulator}
\end{figure}
It is shown that CFG 1 achieves 1.9x $\sim$ 5.2x speedup compared to the baseline. Note that {\em Wiki} has the lowest performance among all graph workloads. This is because the number of vertex pipelines is set to one, so only one vertex can be scheduled in each cycle for CFG 1. Therefore, the number of edges sent to the accumulator in each cycle depends directly on the average degree. In short, graphs with higher degree experience higher speedup when using the source vertex accumulator.
For CFG 2, the destination vertex accumulator achieves about 1.3x speedup on most graphs, except for {\em Slashdot} (2.0x speedup). This is because {\em Slashdot} contains self-loops, i.e., edges that connect a vertex to itself. When processing these self-loops, the memory requests for the source and destination vertex are assigned to the same on-chip memory partition, increasing the number of memory cycles. With the destination vertex accumulator, the request for the destination vertex can be avoided, thus improving the overall performance.
{\bf Benefits from Degree-aware Accumulation: } Secondly, we explore the impact of degree-aware accumulation on the above accumulators. Figure~\ref{fig_performance_vertex_parallel} presents results under the assumption that the on-chip memory can process any 16 memory requests in each cycle. We analyse the speedup brought by different numbers of vertex pipelines, which bounds the maximal parallelism of the accumulation\footnote{When the number of vertex pipelines is set to $N$, the mechanism dynamically schedules $1 \sim N$ vertices based on the degree.}.
We observe that the performance improves sub-linearly as the number of vertex pipelines increases. This is caused by the power-law degree distribution of graphs. Assuming the number of vertex pipelines is $N$, our degree-aware mechanism covers the vertices with degree $\ge 16 / N$ with 16 edge pipelines. As depicted in Figure~\ref{degree_distribution}, the percentage of covered edges for most graphs increases sub-linearly because the high-degree vertices account for most of the edges. For {\em Wiki}, however, the skewness of the degree distribution is low, leading to an almost linear increase.
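The coverage argument above can be checked numerically with a small helper (ours, for illustration): given a degree list, it computes the fraction of edges belonging to vertices with degree at least $16/N$.

```python
# Fraction of edges covered when N vertex pipelines share 16 edge pipelines:
# only vertices with degree >= 16/N are fully served in a single cycle.
def covered_edge_fraction(degrees, n_vertex_pipes, n_edge_pipes=16):
    threshold = n_edge_pipes / n_vertex_pipes
    covered = sum(d for d in degrees if d >= threshold)
    return covered / sum(degrees)
```

For a skewed degree list such as `[1, 1, 2, 16]`, a single vertex pipeline already covers 80\% of the edges, while adding pipelines yields diminishing returns, mirroring the sub-linear trend discussed above.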
\begin{figure}
\centering
\subfigure[Performance]{
\begin{minipage}{0.42\linewidth}
\centering
\includegraphics[width=1.6in, height=1.2in]{fig/performance_vertex_parallel}
\label{fig_performance_vertex_parallel}
\end{minipage}
}
\hspace{0.1in}
\subfigure[Percentage of covered edges]{
\begin{minipage}{0.42\linewidth}
\centering
\includegraphics[width=1.6in, height=1.2in]{fig/degree_distribution}
\label{degree_distribution}
\end{minipage}
}
\caption{Benefit of degree-aware accumulation}
\vspace{-1.5em}
\end{figure}
{\bf Benefits from Vertex Access Parallelization: } Figure~\ref{fig_performance_memory} explores the impact of the different optimizations for parallel accumulation, this time accounting for the throughput limits of the on-chip memory. The leftmost bar in Figure~\ref{fig_performance_memory} represents the baseline case where only parallel accumulation is applied. CFG 3 adds degree-aware accumulation with 8 vertex pipelines on top of CFG 2. CFG 4 shows the effect of the rearranging mechanism, and CFG 5 the effect of the reordering mechanism discussed in Section 4.1.
The first observation is that the speedup of degree-aware accumulation drops to about 1.3x once the on-chip memory's throughput is taken into account. Without any optimizations, the unbalanced edge values cause a significant number of extra memory cycles, thus diminishing the impact of degree-aware accumulation. Another observation is that our rearranging mechanism achieves 1.5x speedup and the reordering mechanism another 1.5x $\sim$ 2.8x speedup. With these mechanisms, the extra memory requests are reduced to $\le 10\%$, which significantly improves memory efficiency.
\begin{figure}
\centering
\includegraphics[width=2.6in, height=1.5in]{fig/performance_memory}
\caption{Effect of different optimizations in memory subsystem discussed in Section 4}
\vspace{-1.6em}
\label{fig_performance_memory}
\end{figure}
{\bf Benefits from Graph Partition: } Figure~\ref{fig_performance_partition} explores the impact of graph partition described in Section 4.2. The leftmost bar represents the case where the on-chip memory size is enough to hold all vertex data, denoted as partition number = 1. The other bars represent cases where on-chip memory size is only enough to hold $1 / N$ of the total vertex data where $N$ represents the number of partitions.
In general, partitioning the graphs into 4 parts results in around 40\% performance degradation. Among all workloads, {\em Wiki} experiences the largest performance degradation, reaching about 61\%. This is because all vertices are traversed in each sub-iteration when processing each graph partition. As the average degree decreases, the added vertex access overhead accounts for a larger percentage of the total overhead. Therefore, the performance of graphs with lower average degree is more sensitive to the partition number.
\section{Related Work}
A wealth of recent studies~\cite{guo2014well, beamer2015locality, guo2015empirical} indicates that, even with extensive optimizations, graph processing is still subject to the underlying limitations of general-purpose processors. A vast body of research effort has therefore been put into graph-specific architectural innovations that improve execution efficiency. Graphicionado~\cite{ham2016graphicionado} proposes a pipelined graph accelerator which efficiently utilizes a large on-chip scratchpad memory. GraphGen and GraphOps~\cite{nurvitadhi2014graphgen, oguntebi2016graphops} propose FPGA-based frameworks which automatically compile graph algorithms to specialized graph processors. Compared with this prior research with strict atomic protection, we argue that the heavy reliance on atomic operations leads to significant performance degradation, and we propose a novel accelerator to reduce the atomic overhead.
There have also been many attempts to reduce the number or the execution time of atomic operations in graph processing. ForeGraph~\cite{dai2017foregraph} partitions the input graph in a grid manner~\cite{zhu2015gridgraph} to avoid simultaneously scheduling edges with the same vertex. Ozdal et al.~\cite{ozdal2016energy} propose a specialized synchronization mechanism to avoid scheduling conflicting edges. Zhou et al.~\cite{zhou2016high} use a combining network that filters unnecessary edges before processing to avoid the same vertex being scheduled simultaneously. In general, the basic idea of these works is to avoid scheduling edges with conflicting vertices through preprocessing. Speculative Lock Elision~\cite{rajwar2001speculative} speculatively removes lock operations to enable highly concurrent execution.
In comparison, we focus on the performance interplay among multiple atomic operations, rather than the cost of a single atomic operation itself. We find that these atomic operations can be parallelized by exploiting distinct characteristics of vertex updates in graph processing. We thus propose an efficient graph-specific accumulator to exploit the potential benefits of this insight.
Many other efforts also have been put into improving the execution time of atomic operations.
Tesseract~\cite{ahn2016scalable} offloads all graph operations to a memory-based accelerator to ensure atomicity without requiring software synchronization primitives. Other studies~\cite{ahn2015pim, nai2017graphpim} enable offloading at the instruction level: they statically or dynamically detect atomic instructions during processing and map them directly into the PIM region with only minor extensions to the host processors. Compared to these PIM-enabled graph architectures, our accelerator achieves efficient management of shared-data conflicts without introducing special memory components. Moreover, our parallel data conflict management can also be integrated into PIM-enabled graph accelerators and help reduce memory requests.
\begin{figure}
\centering
\includegraphics[width=2.6in, height=1.5in]{fig/performance_partition}
\caption{Effect of graph partition mechanism}
\vspace{-1.5em}
\label{fig_performance_partition}
\end{figure}
\section{Conclusion}
In this paper, we present a pipelined graph processing accelerator that enables massive parallelism of vertex updates. Our accelerator provides a parallel accumulator to simultaneously schedule and process multiple destination vertices without losing edge-level parallelism. Moreover, the accumulator is designed to be degree-aware and can adaptively adjust the vertex parallelism to different kinds of graphs. We also present vertex access parallelization and source-based graph partitioning to better support the efficient use of the graph accelerator. Our evaluation on a variety of graph algorithms shows that our accelerator achieves a throughput of 2.36 GTEPS on average, and up to 3.14x speedup compared to the state-of-the-art FPGA-based graph accelerator ForeGraph (single-chip version).
{
\bibliographystyle{IEEEtran}
}
\section{Introduction}
Quantum communications via satellite offers a paradigm shift in our ability to deploy quantum information protocols over very large scales, e.g. \cite{bedin,Neda1,Neda2, neda_rev}. Propagation through the atmosphere to and from LEO satellites can overcome the roughly $100$\,km distance limit that plagues point-to-point optical-fiber and free-space-optical links. Indeed, in the past few years great strides have been made towards actual deployments of quantum communications via satellites \cite{china1,china2,china3,gers,japan}.
These latter works on satellite-based quantum communications are largely based on the deployment of discrete-variable (DV) quantum information protocols, a technology that is dependent on the production of single-photon states.
Continuous-variable (CV) technology offers a different pathway to the implementation of quantum information protocols. The main advantage of CV technology over DV technology is that detection can be realized by more reliable and more efficient homodyne (or heterodyne) detectors, e.g., \cite{CV1, CV2, thesis, Weedbrook2012}. Indeed, it is argued by many that, relative to DV detectors, CV-based detectors offer the promise of a more pragmatic route to higher secret key rates for certain QKD protocols, e.g. \cite{pir}.
Currently, no experimental deployment of space-based CV quantum technology has been carried out, but this is expected to change soon (see \cite{neda_rev} for review). CV technologies are largely based around so-called Gaussian states, e.g. \cite{thesis, Weedbrook2012} - quantum states in which the quasi-probability distribution (the Wigner function) of the electromagnetic-field quadratures follow a Gaussian distribution.
However,
the use of non-Gaussian states in the implementation of CV quantum information protocols has also garnered interest, e.g. \cite{nG-modulation, nG1, nG2, nG-coherent, Neda3}.
Non-Gaussian operations such as photon subtraction (PS) \cite{1st_PSS, 2, telep, 3, 9, Oxford, beijing} on a mode of an incoming two mode squeezed vacuum (TMSV) state can lead to higher levels of entanglement, potentially higher secret (QKD) key rates, as well as forming a pivotal resource for quantum error correction.
In this work we will focus on single PS as a means to produce non-Gaussian states. We will be specifically focussed on the question as to whether PS at the transmitter offers a better pathway to improved QKD (higher secret key rates) when propagation between ground stations and LEO satellites is considered. The answer to this question has important implications not only for future space-based implementations of CV-QKD protocols, but also potentially for other space-based quantum information protocols that utilize non-Gaussian states.
The structure of the remainder of this paper is as follows. In Section~II, the nature of the quantum channel between terrestrial stations and LEO satellites is described. In Section~III, a model for CV-QKD with PS at the transmitter is described, whilst in Section~IV a system for PS at the receiver is described. In Section~V our performance analysis is described, and in Section VI our simulation results are presented, comparing key rates produced from both systems.
\section{Earth-Satellite Channels}
We consider the model of single uplink and single downlink satellite channels in an entanglement-based version of a CV-QKD protocol.\footnote{Each entanglement-based QKD protocol has an equivalent prepare and measure scheme that will give, in theory, exactly the same results.} Our quantum information carrier will be a pulsed optical beam.
For the uplink, we assume that Alice first prepares a TMSV state ($A_0-B_0$) at a ground station, subsequently sending one of her modes ($B_0$) to the satellite. For the downlink, the TMSV is prepared on the satellite with $B_0$ being sent to the ground station.
For optical signals in the uplink channel, the dominant loss mechanism will be beam-wander caused by turbulence in the Earth's atmosphere \cite{fso}. Assuming the beam spatially fluctuates around the receiver's center point, the fading of the signal as a consequence of the beam-wander can be described by a distribution of transmission coefficients (amplitude attenuation) $\eta$. The probability density of these coefficients, $p(\eta)$, can be approximated by the log-negative Weibull distribution, given by \cite{20,21}
\begin{equation}
p\left( \eta \right) = \frac{{2{L^2}}}{{\sigma _b^2\lambda \eta }}{\left( {2\ln \frac{{{\eta _0}}}{\eta }} \right)^{\left( {\frac{2}{\lambda }} \right) - 1}}\exp \left( { - \frac{{{L^2}}}{{2\sigma _b^2}}{{\left( {2\ln \frac{{{\eta _0}}}{\eta }} \right)}^{\left( {\frac{2}{\lambda }} \right)}}} \right)
\label{f1}
\end{equation}
for $\eta \in \left[ {0,\,{\eta _0}} \right]$, with $p\left( \eta \right) = 0$ otherwise.
Here, ${\sigma _b}^2$ is the beam wander variance,
$\lambda$ is the shape parameter, $L$ is the scale parameter, and ${\eta _0}$ is the maximum transmission value. The latter three parameters are given by
\begin{equation}
\begin{array}{*{20}{l}}
&\lambda = 8h\frac{{\exp \left( { - 4h} \right){I_1}\left[ {4h} \right]}}{{1 - \exp \left( { - 4h} \right){I_0}\left[ {4h} \right]}}{\left[ {\ln \left( {\frac{{2\eta _0^2}}{{1 - \exp \left( { - 4h} \right){I_0}\left[ {4h} \right]}}} \right)} \right]^{ - 1}},\\
\\
&L = \beta_r{\left[ {\ln \left( {\frac{{2\eta _0^2}}{{1 - \exp \left( { - 4h} \right){I_0}\left[ {4h} \right]}}} \right)} \right]^{ - 1/\lambda }},\\
\\
&{\eta _0}^2 = 1 - \exp \left( { - 2h} \right) ,
\end{array}
\label{f2}
\end{equation}
where ${I_0}\left[ . \right]$ and ${I_1}\left[ . \right]$ are the modified Bessel functions, and where $h = \left( {\beta_r / W} \right)^2$, with $\beta_r$ being the aperture radius and $W$ the beam-spot radius. Here we set $\beta_r=W=1$ unit length (which for typical configurations is 1 meter).
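To make the channel model concrete, the following stdlib-only Python sketch evaluates the parameters of equations~(\ref{f1})-(\ref{f2}) and the density $p(\eta)$. The series implementation of the modified Bessel functions, the function names, and the numerical normalization check are our own illustrative choices, not part of the protocol.

```python
import math

def bessel_i(order, x, terms=60):
    # Modified Bessel function of the first kind I_order(x) via its power series
    # (avoids an external dependency such as SciPy).
    total = 0.0
    for k in range(terms):
        total += (x / 2.0) ** (2 * k + order) / (math.factorial(k) * math.factorial(k + order))
    return total

def weibull_params(beta_r=1.0, W=1.0):
    # Shape (lambda), scale (L) and maximum-transmission (eta0) parameters of
    # Eq. (2), with h = (beta_r / W)^2.
    h = (beta_r / W) ** 2
    eta0 = math.sqrt(1.0 - math.exp(-2.0 * h))
    denom = 1.0 - math.exp(-4.0 * h) * bessel_i(0, 4.0 * h)
    log_term = math.log(2.0 * eta0 ** 2 / denom)
    lam = 8.0 * h * math.exp(-4.0 * h) * bessel_i(1, 4.0 * h) / denom / log_term
    L = beta_r * log_term ** (-1.0 / lam)
    return lam, L, eta0

def p_eta(eta, sigma_b, lam, L, eta0):
    # Log-negative Weibull density of Eq. (1); zero outside (0, eta0].
    if eta <= 0.0 or eta > eta0:
        return 0.0
    w = 2.0 * math.log(eta0 / eta)
    return (2.0 * L ** 2) / (sigma_b ** 2 * lam * eta) * w ** (2.0 / lam - 1.0) \
           * math.exp(-(L ** 2) / (2.0 * sigma_b ** 2) * w ** (2.0 / lam))
```

For $\beta_r=W=1$ this gives $\lambda\approx 2.3$ and $\eta_0\approx 0.93$, and a midpoint-rule quadrature confirms that the density integrates to unity over $(0,\eta_0]$.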
In the downlink satellite channel diffraction effects are anticipated to dominate. This is largely because beam-wander in the downlink is relatively suppressed since the beam-width, on entry into the atmosphere from space, is generally broader than the scale of the turbulent eddies \cite{fso}. As such, with well-engineered designs\footnote{This involves properly-dimensioned lenses, use of state-of-the-art adaptive optics, and use of feedback from concurrent classical channel measurements. On the latter measurements we note that fluctuations caused by turbulence are in the kHz range (compared to the MHz rate of the laser pulses), thus allowing for channel-coefficient measurements to be made dynamically (within the coherence time of the channel) by a ground receiver.} losses in the downlink can be as small as 5-10 dB, compared to the 20-30 dB losses that can be anticipated for well-engineered uplink channels. For simplicity, we model all losses by varying $\sigma_b$.
To investigate the effect of the PS we mainly consider three schemes. The first scheme is where there is no PS (No-PS). The second scheme is PS at the transmitter side (T-PS), where the PS is performed immediately after Alice prepares her TMSV state. The last scheme is PS at the receiver side (R-PS), where Bob performs the PS after he receives the mode from Alice, but before his homodyne measurement. We adopt the QKD protocol of \cite{5n}, modified as required for our additional T-PS scheme. Reverse reconciliation at Alice, in which both Alice and Bob undertake homodyne measurements, is always used. We will assume the asymptotic limit in the number of measurements taken.
\section {Photon subtraction at transmitter side }
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{Fig1.pdf}
\caption{Photon subtraction at transmitter side (T-PS).
Here Alice (ground station) prepares a TMSV ($A_0-B_0$), sending $B_0$ through a PS process using a beam-splitter with transmissivity $T_S$. The exiting mode $C$ is sent to a photodetector, whilst the exiting $B_1$ is sent to Bob (the satellite). The channel is controlled by Eve using a second beam-splitter with transmissivity $T_E$.
\label{TPS}}
\end{figure}
The system model for the CV-QKD protocol with photon subtraction is illustrated in Fig.~\ref{TPS}. We assume that Alice first prepares a TMSV $A_0-B_0$ at her ground station (for briefness we just describe the uplink). She then sends one of her modes ($B_0$) through a PS process in which $B_0$ interacts with a mode $C_0$ at a beam-splitter with transmissivity (intensity attenuation) $T_S$. One of the exiting modes ($C$) is sent to a photodetector (PD), whilst the other ($B_1$) is sent to Bob (the satellite). In the following we take mode $C_0$ to be a vacuum state.\footnote{We note that a PS at the transmitter in the context of a somewhat different QKD protocol from that studied here has been investigated for the Earth-satellite channel \cite{Neda5qkd}.}
In this work we assume that Eve performs a collective attack.\footnote{A collective attack is where Eve creates a series of ancillary modes with a member from this series independently entangling with each incoming mode sent by Alice. Following Bob's measurements Eve then takes an optimal collective measurement on her series of ancillary modes. In the asymptotic limit, security under collective attacks can be shown to be equivalent to security under coherent attacks (for many protocols) in which Eve's ancillary modes are no longer constrained to interact independently with Alice's modes.} The channel can then be modeled by Eve feeding one mode, $E_0$, of a TMSV state ($E_0-F$) prepared by her into a beam-splitter with transmissivity $T_E$, with $B_1$ being fed into the other input mode of the beam-splitter. After passing through Eve's beam-splitter, Eve retains the quantum state $F$-$E$, $E$ being one of the output modes of her beam-splitter. The other output mode of the beam-splitter is forwarded to Bob. Setting $T_E={\eta}^2$, we assume that Eve varies $T_E$ so that $\eta$ follows the probability density function given by equations~(\ref{f1})-(\ref{f2}). Following its traversal through the channel Bob then receives an ``attenuated" version of $B_1$, namely $B_2$.
Note that PS is not a Gaussian operation, but rather an operation that transforms a Gaussian state into a non-Gaussian state.
Because of this, the state following the PS cannot be fully described by the first and second moments of the quadrature operators $\hat x$
and $\hat q$ of the electromagnetic field. As such, a somewhat more complex state description is required relative to that used for quantum protocols based on Gaussian-states. We now describe this more complex quantum state.
Using the Fock basis, Alice's initial TMSV state ${\left| \psi \right\rangle _{A{B_0}}}$
has the form
\[{\left| \psi \right\rangle _{A{B_0}}} = \sum\limits_{n = 0}^\infty {{\alpha _n}} {\left| {n,n} \right\rangle _{A{B_0}}} \ ,\]
with
$${\alpha _n} = \sqrt {\frac{{{\alpha ^{2n}}}}{{{{\left( {1 + {\alpha ^2}} \right)}^{n + 1}}}}} \ ,$$
where ${\alpha ^2}$
is the mean photon number of Alice's mode. We note that $\alpha^2={\rm sinh}^2r$, where $r$ is the squeezing parameter of the two-mode squeezing operator $$S\left( \xi \right) = \exp \left( {\xi {\hat a} {\hat b}-\xi{\hat a}^\dag{\hat b}^\dag } \right), \ \xi = r{e^{i\theta} } \ ,$$ where $\theta$ represents the orientation of the squeezing, and where ${\hat a}$ and ${\hat a^\dagger }$ represent the annihilation and creation operators, respectively, of mode $A$. Here, we assume $\theta=0$.
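As a quick numerical sanity check (our own sketch, not part of the protocol), the Fock coefficients $\alpha_n$ can be verified to be normalized and to reproduce the mean photon number $\alpha^2$:

```python
import math

def tmsv_coeff(n, alpha_sq):
    # alpha_n = sqrt( alpha^(2n) / (1 + alpha^2)^(n+1) )
    return math.sqrt(alpha_sq ** n / (1.0 + alpha_sq) ** (n + 1))

alpha_sq = 1.3   # mean photon number used in the simulations later in the paper
N = 200          # Fock-space truncation; the geometric tail beyond this is negligible
norm = sum(tmsv_coeff(n, alpha_sq) ** 2 for n in range(N))
mean_n = sum(n * tmsv_coeff(n, alpha_sq) ** 2 for n in range(N))
```

Both identities follow from summing the geometric series $\sum_n \alpha^{2n}/(1+\alpha^2)^{n+1}$.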
\noindent\textbf{Result 1:} The quantum state after the channel can be written as
\[\begin{array}{*{20}{l}}
{{\left| \psi \right\rangle }_{TPS}}
=&- \frac{1}{{\sqrt {{P_1}} }}\sum\limits_{n = 1}^\infty {\sum\limits_{k = 0}^{n - 1} {\sum\limits_{m = 0}^\infty {\sum\limits_{l = 0}^m {s_{n,k,m,l}} } } } \\
&{\times {{\left| {n,n - 1 - k + l,k + m - l,m} \right\rangle }_{A{B_2}EF}},}\\
\end{array}\]
where
${s_{n,k,m,l}} = {\alpha _n}{\beta _m}{( - 1)^k}r_{n,1}^{T_S}r_{n - 1,k}^{{T_E}}r_{m,l}^{{T_E}}{z_{n - 1,k,m,l}}$, and the other variables introduced above are defined in the following proof.
\noindent\textbf{Proof:} Initially we have the following description of the combined $AB_0C_0B_1C$ mode
\[\begin{array}{*{20}{l}}
{\left| \psi \right\rangle _{A{B_0}{C_0}{B_1}C}} &= \sum\limits_{n = 0}^\infty {{\alpha _n}} {\left| {n,n} \right\rangle _{A{B_0}}}{\left| {0,0,0} \right\rangle _{{C_0}{B_1}C}} \\
&= \sum\limits_{n = 0}^\infty {{\alpha _n}} \frac{{{{\left( {\hat b_0^\dag } \right)}^n}}}{{\sqrt {n!} }}{\left| {n,0} \right\rangle _{A{B_0}}}{\left| {0,0,0} \right\rangle _{{C_0}{B_1}C}} \ .
\end{array}\]
The presence of the beam-splitter at the PS stage alters this combined mode to the form
\[\begin{array}{*{20}{l}}
&\sum\limits_{n = 0}^\infty {{\alpha _n}} \frac{{{{(\sqrt {{T_S}} \hat b_1^\dag - \sqrt {1 - {T_S}} {{\hat c}^\dag })}^n}}}{{\sqrt {n!} }}{\left| {n,0} \right\rangle _{A{B_0}}}{\left| {0,0,0} \right\rangle _{{C_0}{B_1}C}} \\
= &\sum\limits_{n = 0}^\infty {{\alpha _n}} \sum\limits_{k = 0}^n {{{( - 1)}^k}r_{n,k}^{{T_S}}} {\left| {n,0} \right\rangle _{A{B_0}}}{\left| {0,n - k,k} \right\rangle _{{C_0}{B_1}C}} \ ,
\end{array}\]
where
$r_{n,k}^T = \sqrt {\left( {\begin{array}{*{20}{c}}
n \\
k \\
\end{array}} \right)} {\left(\sqrt T\right)^{n - k}}{\left(\sqrt {1 - T}\right)^k}$.
We assume that the subtraction is for the single photon case (i.e. $k = 1$
and $C = \left| 1 \right\rangle $). Tracing out mode ${B_0}$,
$C$, and ${C_0}$
we have,
\[{\left| \psi \right\rangle _{A{B_1}}} = - \frac{1}{{\sqrt {{P_1}} }}\sum\limits_{n = 1}^\infty {{\alpha _n}r_{n,1}^{T_S}} {\left| {n,n - 1} \right\rangle _{A{B_1}}},\]
where
$$P_1=\sum\limits_{n = 1}^\infty \left( {{\alpha_n}r_{n,1}^{T_S}}\right )^2$$
is the probability of subtracting one photon.
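For the parameter values used later in the paper ($\alpha^2=1.3$, $T_S=0.9$), the truncated sum for $P_1$ can be checked against a closed form that one can derive by summing the underlying geometric series, namely $(1-T_S)\,\alpha^2/\big(1+\alpha^2(1-T_S)\big)^2\approx 0.10$. The sketch below (our own function names and truncation) illustrates this:

```python
import math

def r_coeff(n, k, T):
    # r_{n,k}^T = sqrt(binom(n,k)) (sqrt(T))^(n-k) (sqrt(1-T))^k
    return math.sqrt(math.comb(n, k)) * math.sqrt(T) ** (n - k) * math.sqrt(1.0 - T) ** k

def tmsv_coeff(n, alpha_sq):
    # Fock coefficients alpha_n of the TMSV state
    return math.sqrt(alpha_sq ** n / (1.0 + alpha_sq) ** (n + 1))

def p_single_subtraction(alpha_sq, T_S, N=200):
    # P_1 = sum_{n>=1} (alpha_n r_{n,1}^{T_S})^2, truncated at Fock number N
    return sum((tmsv_coeff(n, alpha_sq) * r_coeff(n, 1, T_S)) ** 2 for n in range(1, N))
```

The smallness of $P_1$ is what later penalizes the PS schemes once channel averaging is taken into account.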
Similar to Alice's, Eve's initial TMSV state is,
\[{\left| \psi \right\rangle _{{E_0}F}} = \sum\limits_{m = 0}^\infty {{\beta _m}{{\left| {m,m} \right\rangle }_{{E_0}F}}} \]
with
$${\beta _m} = \sqrt {\frac{{{\beta ^{2m}}}}{{{{\left( {1 + {\beta ^2}} \right)}^{m + 1}}}}} \ ,$$
where ${\beta ^2}$
is the mean photon number of Eve's mode - a parameter used to simulate the channel noise.
As it passes the channel, mode ${B_1}$
evolves to mode ${B_2}$. Prior to Eve acting on the incoming states we have the following description of the combined $AB_1E_0EFB_2$ mode
\[\begin{array}{*{20}{l}}
{\left| \psi \right\rangle _{A{B_1}{E_0}EF{B_2}}}
=& - \frac{1}{{\sqrt {{P_1}} }}\sum\limits_{n = 1}^\infty {{\alpha _n}r_{n,1}^{T_S}} {\left| {n,n - 1} \right\rangle _{A{B_1}}} \\
&\otimes \sum\limits_{m = 0}^\infty {{\beta _m}{{\left| {m,m} \right\rangle }_{{E_0}F}}} {\left| {0,0} \right\rangle _{{B_2}E}} \ .
\end{array}\]
The presence of the beam-splitter at Eve alters this combined mode to the form
\[\begin{array}{c}
- \frac{1}{{\sqrt {{P_1}} }}\sum\limits_{n = 1}^\infty {{\alpha _n}r_{n,1}^{T_S}} \frac{{{{(\sqrt {{T_E}} \hat b_2^\dag - \sqrt {1 - {T_E}} {{\hat e}^\dag })}^{n - 1}}}}{{\sqrt {(n - 1)!} }}{\left| {n,0} \right\rangle _{A{B_1}}} \\
\otimes \sum\limits_{m = 0}^\infty {{\beta _m}\frac{{{{(\sqrt {{T_E}} \hat e^\dag + \sqrt {1 - {T_E}} {{\hat b_2}^\dag })}^m}}}{{\sqrt {m!} }}{{\left| {0,m} \right\rangle }_{{E_0}F}}} {\left| {0,0} \right\rangle _{{B_2}E}} \\
= - \frac{1}{{\sqrt {{P_1}} }}\sum\limits_{n = 1}^\infty {{\alpha _n}r_{n,1}^{T_S}} \sum\limits_{k = 0}^{n - 1} {{{( - 1)}^k}r_{n - 1,k}^{{T_E}}} \\
\times \sum\limits_{m = 0}^\infty {{\beta _m}} \sum\limits_{l = 0}^m {r_{m,l}^{{T_E}}{z_{n - 1,k,m,l}} } \\
\times {\left| {n,n - 1 - k + l,k + m - l,m,0,0} \right\rangle _{A{B_2}EF{B_1}{E_0}}} \ , \\
\end{array}\]
where
$${z_{n,k,m,l}} = \sqrt {\left( {\begin{array}{*{20}{c}}
{n - k + l} \\
l \\
\end{array}} \right)} \sqrt {\left( {\begin{array}{*{20}{c}}
{k + m - l} \\
k \\
\end{array}} \right)} \ . $$
Rearranging the summation and tracing out ${B_1}$
and ${E_0}$
we arrive at Result~1.
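As an independent check of Result~1 (our own numerical sketch; the truncation levels are arbitrary), one can accumulate the amplitudes $s_{n,k,m,l}$ on the Fock basis (note that distinct $(k,l)$ can address the same ket, so amplitudes must be summed per ket) and verify that the state has squared norm $P_1$ before the $1/\sqrt{P_1}$ prefactor, as unitarity of Eve's beam-splitter requires:

```python
import math

def r_coeff(n, k, T):
    return math.sqrt(math.comb(n, k)) * math.sqrt(T) ** (n - k) * math.sqrt(1.0 - T) ** k

def z_coeff(n, k, m, l):
    return math.sqrt(math.comb(n - k + l, l)) * math.sqrt(math.comb(k + m - l, k))

def result1_amplitudes(alpha_sq, beta_sq, T_S, T_E, N=25, M=25):
    # Accumulate amplitudes of |psi>_TPS on the kets |n, n-1-k+l, k+m-l, m>.
    amp = {}
    for n in range(1, N):
        a_n = math.sqrt(alpha_sq ** n / (1 + alpha_sq) ** (n + 1))
        for m in range(M):
            b_m = math.sqrt(beta_sq ** m / (1 + beta_sq) ** (m + 1))
            for k in range(n):
                for l in range(m + 1):
                    s = (a_n * b_m * (-1) ** k * r_coeff(n, 1, T_S)
                         * r_coeff(n - 1, k, T_E) * r_coeff(m, l, T_E)
                         * z_coeff(n - 1, k, m, l))
                    ket = (n, n - 1 - k + l, k + m - l, m)
                    amp[ket] = amp.get(ket, 0.0) + s
    return amp
```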
\section {Photon subtraction at receiver side}
If the photon subtraction occurs at the receiver side instead of the transmitter side (Fig.~\ref{RPS}), a different outcome is achieved for the final state - a result previously derived in \cite{5n}. We simply provide that result here (the proof follows a similar path to that given for PS at the transmitter). However, we note the work of \cite{5n} considers the fixed-attenuation channel only, and therefore the results of that work cannot be directly utilized for the Earth-satellite channels we are concerned with here.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{Fig2.pdf}
\caption{Photon subtraction at receiver side (R-PS).
Here Alice (ground station) prepares a TMSV ($A_0-B_0$),
sending $B_0$ through a channel controlled by Eve using a beam-splitter with transmissivity $T_E$.
The exiting mode $B_1$ is sent by Eve to Bob (the satellite) who undertakes a PS process on $B_1$ using a beam-splitter with transmissivity $T_S$, leading to $B_2$.\label{RPS}}
\end{figure}
Prior to the PS at the receiver the quantum state is given by
\[\begin{array}{*{20}{l}}
{\left| \psi \right\rangle _{A{B_1}EF}} =& \sum\limits_{n = 0}^\infty {\sum\limits_{k = 0}^n {\sum\limits_{m = 0}^\infty {\sum\limits_{l = 0}^m {{\alpha _n}{\beta _m}{{( - 1)}^k}r_{n,k}^{{T_E}}}r_{m,l}^{{T_E}}{z_{n,k,m,l}} } } } \\
&\times {\left| {n,n - k + l,k + m - l,m} \right\rangle _{A{B_1}EF}} \ . \\
\end{array}\]
After the channel, Bob performs PS on $B_1$, leading to the $B_2$ mode. This latter mode is subsequently used in Bob's
homodyne detection.
\noindent{\textbf{Result 2}:} The photon subtracted quantum state at the receiver can be written
\[\begin{array}{*{20}{l}}
{{\left| \psi \right\rangle }_{RPS}}
=&- \frac{1}{{\sqrt {{P'_1}} }}\sum\limits_{n = 0}^\infty {\sum\limits_{k = 0}^{n } {\sum\limits_{m = 0}^\infty {\sum\limits_{l = 0}^m {s'_{n,k,m,l}} } } } \\
&{\times {{\left| {n,n - 1 - k + l,k + m - l,m} \right\rangle }_{A{B_2}EF}},}\\
\end{array}\]
where ${s'_{n,k,m,l}} = {\alpha _n}{\beta _m}{( - 1)^k}r_{n - k + l,1}^{{T_S}}r_{n,k}^{{T_E}}r_{m,l}^{{T_E}}{z_{n,k,m,l}}$
and $P'_1$ is a new normalization constant (\emph{cf}. Eq. (19) of \cite{5n}).
\section {Performance analysis}
\subsection {Covariance Matrix}
Before moving into our investigation of the secret key rate we note that the covariance matrix of a given two-mode state $\left| \psi \right\rangle_{AB}$
with modes $A$ and $B$ can be written as
$${{\bf{M}}_{{\bf{AB}}}} = \left[ {\begin{array}{*{20}{c}}
{{V_A}{\bf{I}}} & {{C_{AB}}{\bf{\sigma }}} \\
{{C_{AB}}{\bf{\sigma }}} & {{V_B}{\bf{I}}} \\
\end{array}} \right]\ , $$
where ${\bf{I}} = \mathrm{diag}(1,1)$ and ${\bf{\sigma }} = \mathrm{diag}(1, - 1)$. Here, \[{V_A} = \left\langle \psi \right|1 + 2{\hat a^\dag }\hat a\left| \psi \right\rangle_{AB} \]
is the variance of mode $A$ (likewise $V_B$), and \[{C_{AB}} = \left\langle \psi \right|\hat a\hat b + {\hat a^\dag }{\hat b^\dag }\left| \psi \right\rangle_{AB} \]
is the covariance between mode $A$ and mode $B$.
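The block structure above leads to a standard closed form for the symplectic eigenvalues, which we will need below for the Holevo bound. The following sketch (plain nested lists stand in for matrices; the TMSV test values are our own) builds ${\bf M}_{\bf AB}$ and computes its symplectic eigenvalues $\nu_\pm = \sqrt{(\Delta \pm \sqrt{\Delta^2 - 4\det {\bf M}})/2}$ with $\Delta = V_A^2 + V_B^2 - 2C_{AB}^2$:

```python
import math

def two_mode_cov(VA, VB, CAB):
    # M_AB = [[VA*I, CAB*sigma], [CAB*sigma, VB*I]], I = diag(1,1), sigma = diag(1,-1)
    return [[VA, 0, CAB, 0],
            [0, VA, 0, -CAB],
            [CAB, 0, VB, 0],
            [0, -CAB, 0, VB]]

def symplectic_eigenvalues(VA, VB, CAB):
    # Closed form for a covariance matrix in the above block form.
    delta = VA ** 2 + VB ** 2 - 2.0 * CAB ** 2
    detM = (VA * VB - CAB ** 2) ** 2
    root = math.sqrt(max(delta ** 2 - 4.0 * detM, 0.0))
    return (math.sqrt((delta + root) / 2.0), math.sqrt((delta - root) / 2.0))
```

For a pure TMSV, $V_A = V_B = \cosh 2r$ and $C_{AB} = \sinh 2r$, so both symplectic eigenvalues equal $1$, as expected for a pure Gaussian state.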
Consider next the variances of modes $A$ and $F$ following PS at the transmitter. Using the above, these variances can be written as
\[\begin{array}{*{5}{l}}
{V_A}
&= \left\langle \psi \right|1 + 2{{\hat a}^\dag }\hat a{\left| \psi \right\rangle _{TPS}} \\
&= 1 - \frac{2}{{\sqrt {{P_1}} }}\left\langle \psi \right|\sum\limits_{n = 1}^\infty {\sum\limits_{k = 0}^{n - 1} {\sum\limits_{m = 0}^\infty {\sum\limits_{l = 0}^m {n {s_{n,k,m,l}}} } } } \\
&\ \ \ \times {\left| {n,n - 1 - k + l,k + m - l,m} \right\rangle _{A{B_2}EF}} \ ,
\end{array}\]
\[\begin{array}{*{5}{l}}
{V_F}
&= \left\langle \psi \right|1 + 2{{\hat f}^\dag }\hat f{\left| \psi \right\rangle _{TPS}} \\
&= 1 - \frac{2}{{\sqrt {{P_1}} }}\left\langle \psi \right|\sum\limits_{n = 1}^\infty {\sum\limits_{k = 0}^{n - 1} {\sum\limits_{m = 0}^\infty {\sum\limits_{l = 0}^m {m {s_{n,k,m,l}}} } } } \\
&\ \ \ \times {\left| {n,n - 1 - k + l,k + m - l,m} \right\rangle _{A{B_2}EF}} \ , \\
\end{array}\ \]
respectively.
Likewise, the covariance between two different modes, say $E$ and $F$, can be given by
\[\begin{array}{*{20}{l}}
{C_{EF}} &= \left\langle \psi \right|\hat e\hat f + {{\hat e}^\dag }{{\hat f}^\dag }{\left| \psi \right\rangle _{TPS}} \\
&= - \frac{1}{{\sqrt {{P_1}} }}\left\langle \psi \right|\left. \varphi \right\rangle\ \ ,
\end{array}\]
where
\[\begin{array}{c}
\left| \varphi \right\rangle = \sum\limits_{n = 1}^\infty {\sum\limits_{k = 0}^{n - 1} {\sum\limits_{m = 0}^\infty {\sum\limits_{l = 0}^m {{s_{n,k,m,l}}\sqrt {m + 1} \sqrt {k + m - l + 1} } } } } \\
\times {\left| {n,n - 1 - k + l,k + m - l + 1,m + 1} \right\rangle _{A{B_2}EF}} \ + \\
\sum\limits_{n' = 1}^\infty {\sum\limits_{k' = 0}^{n' - 1} {\sum\limits_{m' = 1}^\infty {\sum\limits_{l' = 0}^{m'-1} {{s_{n',k',m',l'}}\sqrt {m'} \sqrt {k' + m' - l'} } } } } \\
\times {\left| {n',n' - 1 - k' + l',k' + m' - l' - 1,m' - 1} \right\rangle _{A{B_2}EF}} \ . \\
\end{array}\]
Similar variance and covariance terms can be derived for PS at the receiver. These terms can be calculated numerically simply by using the fact that $\left\langle{n,k,m,l}|{n',k',m',l'}\right\rangle = \delta_{nkml,n'k'm'l'}$. The usefulness of such terms will become evident when we calculate the key rates, an issue we turn to next.
\subsection{The Secret Key Rate}
Under a collective attack, the key rate is related to the difference between $I(A:{B_2})$, the mutual information between mode $A$ and mode $B_2$, and $\chi ({B_2}:EF)$, the Holevo information that Eve can extract from her measurement \cite{Weedbrook2012}. More specifically, the key rate (per pulse generated by the source laser) is
$$K({T_E}) = {P}\left[ {fI(A:{B_2}) - \chi ({B_2}:EF)} \right] \ ,$$
where $f$ is the decoding reconciliation efficiency, and $P$ is the probability of subtracting one photon in the PS.
However, calculation of the key rate for a non-Gaussian state is analytically not tractable since the non-Gaussian state has more than two non-zero moments. To make progress, we utilize the Gaussian state (metrics of which will be indicated by the subscript $G$) that produces the same covariance matrix ${\bf{{ M}}}$
as the non-Gaussian state ${\left| \psi \right\rangle _{A{B_2}EF}}$. This provides a lower bound for the key rate by the theorem of Gaussian optimality \cite{2n}. Emphasizing that all key rates discussed from this point on are bounds, we have\footnote{Note, the beam-splitter attack we use is the most pragmatic, but it is slightly sub-optimal. Under an optimal attack (purification), the key rate will be approximately 1.1 dB lower for all our schemes.}
\[K({T_E}) \ge {P}\left[ {f{I_G}(A:{B_2}) - {\chi _G}({B_2}:EF)} \right]\ , \]
where \cite{Weedbrook2012}
\[{I_G}(A:{B_2}) = \frac{1}{2}{\log _2}\frac{{{V_{{B_2}}}}}{{{V_{{B_2}|A}}}} \ , \]
and the conditional variance ${V_{{B_2}|A}}$ is
\[{V_{{B_2}|A}} = {V_{{B_2}}} - \frac{{{C_{A{B_2}}}^2}}{{{V_A}}} \ .\]
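The two displayed formulas combine into a one-line routine. The sketch below is our own encoding; as a sanity check, for a pure TMSV over a lossless channel $V_{B_2|A} = V - (V^2-1)/V = 1/V$, so $I_G = \log_2 V$:

```python
import math

def gaussian_mutual_info(V_A, V_B2, C_AB2):
    # I_G(A:B2) = (1/2) log2( V_B2 / V_{B2|A} ), V_{B2|A} = V_B2 - C_AB2^2 / V_A
    v_cond = V_B2 - C_AB2 ** 2 / V_A
    return 0.5 * math.log2(V_B2 / v_cond)
```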
For Eve's stolen information, we can write
\[{\chi _G}({B_2}:EF) = \sum\limits_i {g(v_i^{EF})} - \sum\limits_j {g(v_j^{EF|{B_2}})} \ , \]
where
$$g(v) = \frac{{v + 1}}{2}{\log _2}\frac{{v + 1}}{2} - \frac{{v - 1}}{2}{\log _2}\frac{{v - 1}}{2} \ .$$ In the above,
${v^{EF}}$
and ${v^{EF|{B_2}}}$
are the symplectic eigenvalues of the covariance matrices ${{\bf{M}}_{{\bf{EF}}}}$
and ${{\bf{M}}_{{\bf{EF|}}{{\bf{B}}_{\bf{2}}}}}$, respectively,
where \cite{Weedbrook2012}
\[{{\bf{M}}_{{\bf{EF|}}{{\bf{B}}_{\bf{2}}}}} = {{\bf{M}}_{{\bf{EF}}}} - \left[ {\begin{array}{*{20}{c}}
{{C_{E{B_2}}}{\bf{I}}} \\
{{C_{F{B_2}}}{\bf{\sigma }}} \\
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{V_{{B_2}}}^{ - 1}} & 0 \\
0 & 0 \\
\end{array}} \right]{\left[ {\begin{array}{*{20}{c}}
{{C_{E{B_2}}}{\bf{I}}} \\
{{C_{F{B_2}}}{\bf{\sigma }}} \\
\end{array}} \right]^T} \ . \]
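The Holevo quantity above can be assembled from two small utilities: the entropy function $g(v)$ and the homodyne conditioning of Eve's covariance matrix. The sketch below is our own encoding of the displayed formula (plain nested lists stand in for matrices; we clamp $v\le 1$ to the limit $g(1)=0$):

```python
import math

def g(v):
    # Entropy function of a thermal state with symplectic eigenvalue v; g(1) = 0.
    if v <= 1.0:
        return 0.0
    return (v + 1) / 2 * math.log2((v + 1) / 2) - (v - 1) / 2 * math.log2((v - 1) / 2)

def conditioned_cov(M_EF, C_EB2, C_FB2, V_B2):
    # M_{EF|B2} = M_EF - K diag(1/V_B2, 0) K^T, K = [[C_EB2*I], [C_FB2*sigma]].
    # Only column 0 of K contributes: x-homodyne affects the x-quadrature entries.
    K = [[C_EB2, 0], [0, C_EB2], [C_FB2, 0], [0, -C_FB2]]
    out = [row[:] for row in M_EF]
    for i in range(4):
        for j in range(4):
            out[i][j] -= K[i][0] * K[j][0] / V_B2
    return out
```

For example, a thermal state with mean photon number $1$ has $v=3$ and $g(3)=2$ bits, while homodyne conditioning leaves the $p$-quadrature entries of ${\bf M}_{\bf EF}$ untouched.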
Finally, we can now determine the bound on the key rate achieved in the satellite lossy channel by taking the average over all possible transmission coefficient values, namely,
${K_{avg}} = \int {p({\eta})K({\eta^2})d{\eta}}$. Making the initial squeezing dependent on $\eta$ would allow further optimization of the key rate - an issue we ignore for simplicity.
\section{Simulation results}
\begin{figure}[h]
\includegraphics[width=0.38\textwidth]{Fig3}
\centering
\caption{The key rate vs. transmissivity. \label{fixeda}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.38\textwidth]{Fig4}
\centering
\caption{The key rate vs. distance. \label{fixedb}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{Fig5}
\centering
\caption{The key rate over the fixed channel for different noise conditions. The top, middle, and bottom layers are No PS, T-PS and R-PS, respectively. \label{sat3}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{Fig6}
\centering
\caption{The key rate over the fixed channel for different mean photon number. The top, middle, and bottom layers are No PS, T-PS and R-PS, respectively. \label{sat4}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.38\textwidth]{Fig7}
\centering
\caption{The key rate averaged over the satellite channel as a function of the standard deviation of the beam wandering for range 0-20. \label{sat}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{Fig8}
\centering
\caption{A close up of the key rate averaged over the satellite channel as a function of the standard deviation of the beam wandering for the range 0-1. \label{sat2}}
\end{figure}
For comparison purposes we first consider a non-variable attenuation channel, before comparing the performance of our three schemes for the satellite channel we have discussed earlier in the paper.
Unless otherwise stated, the parameters utilized in the calculations shown are ${\alpha ^2} = 1.3$, ${\beta ^2} = 0.001$, $f = 0.95$, and $T_S = 0.9$ (for simplicity a detector efficiency of 1 is assumed). The infinite summation limits are constrained to 20 for $n$ and $m$ \cite{errorv1}.
As stated, we first consider a fixed attenuation channel. Here we fix the value of $\alpha^2$ for all attenuation conditions. We plot the key rate against transmissivity in Fig.~(\ref{fixeda}), and against distance in Fig.~(\ref{fixedb}).
In Fig.~(\ref{fixedb}) we assume that the channel has a fixed attenuation of $0.2$dB/km. The results of Figs.~(\ref{fixeda})-(\ref{fixedb}) show that the R-PS scheme has the longest key distribution range at a cost of a reduced key rate. That is, the R-PS scheme is in some sense the most robust against channel attenuation (provides a non-zero key rate at the largest distance).
We further compare the performance of the three schemes as a function of the noise ${\beta ^2}$
and the mean photon number ${\alpha ^2}$ (i.e. sinh$^2r$, $r$ being the squeezing parameter) - the results of which are shown in Figs.~(\ref{sat3}) and (\ref{sat4}), respectively. Note that in these figures the rates are not plotted in the logarithmic domain, so the comparison in the small rate region is not as apparent. As can be seen, for some parameter space we find distances where the T-PS scheme shows better key rate performance than the other schemes. We also find the T-PS and R-PS schemes can outperform the No-PS scheme in some parameter space (again we caution that optimisation of the initial squeezing can alter these conclusions).
We next investigate the key rates of the three schemes in the variable Earth-satellite channel, calculating their average key rates under different average channel fluctuations, quantified using ${\sigma _b}$ within equations~(\ref{f1})-(\ref{f2}). These results are shown in Fig.~(\ref{sat}).
The No-PS case shows better performance in terms of key rate for the entire range of channel conditions - a result not found for the fixed attenuation case. The PS cases (T-PS and R-PS) are impacted by the low probability of obtaining a subtracted photon in any given pulse, and this effect dominates when channel averaging over the fading channel is accounted for. The blue dashed curve (marked normalized) in Fig.~(\ref{sat}) shows the impact of a quantum memory in place such that the low probability for PS can be negated. Here the schemes are assumed to store the required states in memory \emph{a priori}, then send the same rate of quantum states into the satellite channel on-demand.
A close up at low $\sigma_b$ is shown in Fig.~(\ref{sat2}) for different noise conditions. These latter results show the rates possible in very-high quality downlinks from the satellite-to-Earth.\footnote{ Note that $\sigma_{b}=1$ corresponds to approximately 5dB of loss. Such low loss rates are possible for well-engineered systems in which diffraction of the beam is the major factor contributing to photon loss.}
A main aim of our study was to determine whether PS at the transmitter-side outperforms PS at the receiver-side for a range of Earth-Satellite channels (where no instantaneous channel-dependent optimisation of squeezing occurs at the transmitter). Figs.~(\ref{sat})-(\ref{sat2}) provide an answer to this question - yes. This result holds for all anticipated channel conditions (only at unrealistic noise levels is the opposite found).
\section{Conclusions}
We have studied the use of non-Gaussian CV quantum states - created via photon subtraction - in the context of a straightforward QKD protocol.
More specifically, we have studied the lower-bounds on secret key rates delivered by such states.
Contrary to what is found in fixed attenuation channels (such as optical fiber), we find that for the variable-channels anticipated for Earth-satellite communications, photon subtraction at the transmitter, for an initially fixed squeezing, outperforms photon subtraction at the receiver for all realistic conditions.
The authors acknowledge support from the UNSW, the CSC, and Northrop Grumman.
\section{Introduction}\label{sect 1}
Let $Q$ be a quiver without loops and 2-cycles and let $\mathcal{A}(Q)$ be the corresponding cluster algebra with trivial coefficients. We define a {\em frieze of type $Q$} to be a ring homomorphism $\mathcal{F}\colon\mathcal{A}(Q)\to R$ from the cluster algebra to an integral domain $R$. The frieze $\mathcal{F}$ is called \emph{non-zero} if every cluster variable is mapped to a non-zero element of $R$ and $\mathcal{F}$ is said to be \emph{unitary} if there exists a cluster $\mathbf{x}$ such that $\mathcal{F}(x)$ is a unit in $R$, for all $x\in\mathbf{x}$. Moreover $\mathcal{F}$ is called \emph{integral} if $R=\mathbb{Z}$, and \emph{positive} if $R=\mathbb{Z}$ and every cluster variable is mapped to a positive integer.
Positive integral friezes of Dynkin type $\mathbb{A}_n$ are precisely the classical Conway-Coxeter friezes, where the classical frieze pattern is obtained by displaying the values of $\mathcal{F}$ on the cluster variables in the shape of the Auslander-Reiten quiver of the cluster category.
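For concreteness, a classical Conway-Coxeter frieze can be generated row by row from a quiddity sequence via the diamond rule $ad-bc=1$, i.e. $d=(bc+1)/a$. The following sketch (with one common indexing convention; the function name and truncation are ours) reproduces the frieze of a fan-triangulated pentagon, whose quiddity sequence is $(3,1,2,2,1)$:

```python
def frieze_rows(quiddity, max_rows=50):
    # Generate rows of a classical frieze pattern from a quiddity sequence,
    # filling each new row with the diamond rule d = (b*c + 1)/a.
    n = len(quiddity)
    rows = [[1] * n, list(quiddity)]
    while len(rows) < max_rows:
        prev, cur = rows[-2], rows[-1]
        nxt = [(cur[i] * cur[(i + 1) % n] - 1) // prev[(i + 1) % n] for i in range(n)]
        rows.append(nxt)
        if all(v == 1 for v in nxt):  # a row of 1s closes the pattern
            break
    return rows
```

Starting from the row of $1$s and the quiddity row, the rule fills in $(2,1,3,1,2)$ and then closes with a row of $1$s; positivity and integrality of every entry is exactly the Conway-Coxeter phenomenon.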
Every non-zero frieze is determined by its values $\mathcal{F}(\mathbf{x})=(a_1,\ldots,a_n)$ on an arbitrary cluster $\mathbf{x}=(x_1,\ldots,x_n)$ in $\mathcal{A}(Q)$. It is therefore natural to ask which values $(a_1,\ldots,a_n)$ produce positive unitary integral friezes. We call such a vector $(a_1,\ldots,a_n)$ a \emph{unitary frieze vector relative to the cluster $\mathbf{x}$}. Our first main result is the following.
\begin{theoremA}\label{thm A}
Let $Q$ be a quiver without loops and 2-cycles and let $\mathbf{x}=(x_1,\ldots,x_n)$ be an arbitrary cluster of $\mathcal{A}(Q)$. Then there is a bijection
\[
\begin{array}{rcl}
\phi\colon\{\textup{unordered clusters in $\mathcal{A}(Q)$}\} &\longrightarrow&\{\textup{positive unitary frieze vectors relative to $\mathbf{x}$}\} \\
\mathbf{x}'=\{x_1',\ldots,x_n'\} &\longmapsto& \phi(\mathbf{x}')=(a_1,\ldots,a_n).
\end{array}\]
\end{theoremA}
Thus every cluster $\mathbf{x}'$ defines a unique unitary frieze vector. One can thus think of the frieze vectors as another parametrization of the clusters in the cluster algebra. The frieze vectors are different from other known vectors appearing in cluster algebra theory like denominator vectors, $c$-vectors or $g$-vectors.
\smallskip
Our second main result is about the unitarity of positive integral friezes. Since Conway and Coxeter's work in 1973, it is known that every positive integral frieze of Dynkin type $\mathbb{A}$ is unitary. For Dynkin types $\mathbb{D}$ and $\mathbb{E}$ there exist non-unitary positive integral friezes, see \cite{FP}. We extend these results to the affine Dynkin types as follows.
\begin{theoremA} \label{thm B}
Let $Q$ be a quiver of type $\widetilde{\mathbb{A}}_{p,q}$ and let $\mathcal{F}\colon\mathcal{A}(Q)\to \mathbb{Z}$ be a positive integral frieze. Then $\mathcal{F}$ is unitary.
\end{theoremA}
Our proof is constructive. We give an algorithm that starts from an arbitrary positive integral frieze $\mathcal{F}$ and produces the unique cluster $\mathbf{x}$ such that $\mathcal{F}(\mathbf{x})=(1,\ldots,1)$.
In the other affine types $\widetilde{\mathbb{D}}$ and $\widetilde{\mathbb{E}}$, there are non-unitary positive integral friezes.
It is natural to ask if friezes of types $\mathbb{A}$ and $\widetilde{\mathbb{A}}$ remain unitary if one replaces the ring of integers by other integral domains. However, already over the Gaussian integers we give an example of a non-unitary frieze of Dynkin type
$\mathbb{A}_2$. The classification of friezes over the Gaussian integers or other integral domains besides $\mathbb{Z}$ is open even in type $\mathbb{A}$. For type $\mathbb{A}_1$ there are 12 non-zero friezes over the Gaussian integers, see \cite{F}.
The paper is organized as follows. In section \ref{sect 2}, we give the formal definition of friezes and show how they are a generalization of Conway-Coxeter friezes. We also give several examples of friezes of type $\mathbb{A}_3$ over different rings. Section \ref{sect 3} is devoted to the definition of frieze vectors and the proof of Theorem \ref{thm A}, and Theorem \ref{thm B} is proved in section \ref{sect 4}.
\section{Friezes}\label{sect 2}
Friezes of type $\mathbb{A}_n$ were classified by Conway and Coxeter in \cite{CoxCon} in 1973. More than 30 years later, Caldero and Chapoton discovered a relation between friezes and cluster algebras in \cite{CC}. Since then friezes were studied by many authors, see for example \cite{BM, ARS, KS, MG1, MGOT, FP, BFGST, BRM, BFPT, GMV, LLMSS}.
For a survey we refer the reader to \cite{MG2}.
Usually classical friezes are defined as certain planar arrays of positive integers that satisfy a diamond relation. In this paper however, we take a different point of view and we define a frieze to be a homomorphism from an arbitrary cluster algebra to an arbitrary integral domain $R$. The usual planar array is obtained from the Auslander-Reiten quiver of the corresponding cluster category by replacing the indecomposable objects (i.e. the vertices of the Auslander-Reiten quiver) by the values of the homomorphism on the corresponding cluster algebra elements. Friezes as homomorphisms to the integers were also considered in \cite{F,FP}, and friezes with values in subsets of the complex numbers in \cite{CH}.
\subsection{Definition}
Let $Q$ be a quiver without loops and 2-cycles and let $\mathcal{A}(Q)$ be the corresponding cluster algebra with trivial coefficients, see \cite{FZ}. We could just as well include coefficients in our definition, but since we are not using them in this paper we impose trivial coefficients for simplicity.
\begin{definition}
(1) A \emph{frieze of type $Q$} is a ring homomorphism
\[\mathcal{F}\colon \mathcal{A}(Q)\longrightarrow R\]
from the cluster algebra to an integral domain $R$. The frieze is called \emph{integral} if $R=\mathbb{Z}$.
(2) A frieze $\mathcal{F}\colon \mathcal{A}(Q)\longrightarrow R$ is said to be \emph{unitary} if there exists a cluster $\mathbf{x}$ in $\mathcal{A}(Q)$ such that every cluster variable $x\in\mathbf{x}$ is mapped by $\mathcal{F}$ to a unit in $R$.
(3) A frieze is said to be \emph{non-zero} if every cluster variable in $\mathcal{A}(Q)$ is mapped by $\mathcal{F}$ to a non-zero element of $R$.
(4) An integral frieze is said to be \emph{positive} if every cluster variable in $\mathcal{A}(Q)$ is mapped by $\mathcal{F}$ to a positive integer.
\end{definition}
\begin{remark}
Our definition of unitary friezes agrees with that of \cite{MG1,FP} for positive integral friezes. Note however that if the integral frieze is not positive, we also allow specialization at $-1$.
\end{remark}
\subsection{Cluster category and Auslander-Reiten quiver} Let $Q$ be a quiver without loops and 2-cycles.
If the quiver $Q$ is mutation equivalent to an acyclic quiver $Q'$, we let $\mathcal{C}_Q$ be the cluster category $\mathcal{C}_Q=\mathcal{D}^b(\textup{mod}\,kQ')/\tau^{-1}[1]$ introduced in \cite{BMRRT} and in \cite{CCS} for type $\mathbb{A}$. More generally, if $Q$ comes with a non-degenerate potential, we let $\mathcal{C}_Q$ be the generalized cluster category introduced in \cite{A}. We denote by $\Gamma(\mathcal{C}_Q)$ the Auslander-Reiten quiver of $\mathcal{C}_Q$. Its vertices are the isoclasses of indecomposable objects in $\mathcal{C}_Q$ and its arrows are given by irreducible morphisms in $\mathcal{C}_Q$. If $Q$ is mutation equivalent to an acyclic quiver $Q'$, then $\Gamma(\mathcal{C}_Q)$ has a special connected component, called the \emph{transjective component}, that contains both the preprojective component and the preinjective component of $\textup{mod}\,kQ'$. In finite type, this transjective component is all of $\Gamma(\mathcal{C}_Q)$.
The cluster category is a triangulated category equipped with a Serre functor given by the Auslander-Reiten translation $\tau$. Moreover $\mathcal{C}_Q$ has Auslander-Reiten triangles and it is 2-Calabi-Yau, meaning that $\mathrm{Ext}^1_{\mathcal{C}_Q}(X,Y)\cong D\mathrm{Ext}^1_{\mathcal{C}_Q}(Y, X)$, where $D=\mathrm{Hom}(-,k)$ denotes the standard duality, see \cite{K,A}. An object $X\in \mathcal{C}_Q$ is called \emph{rigid} if $\mathrm{Ext}^1_{\mathcal{C}_Q}(X,X)=0$, and an indecomposable rigid object in $\mathcal{C}_Q$ is called \emph{reachable} if it can be reached under mutation from the initial cluster-tilting object. If $Q$ is mutation equivalent to an acyclic quiver, then all rigid indecomposable objects are reachable and all indecomposables in the transjective component are rigid.
The cluster character is a map $X_?\colon\mathcal{C}_Q\to \textup{Frac} \mathcal{A}(Q)$ from the cluster category to the field of fractions of the cluster algebra that maps (isoclasses of reachable) indecomposable rigid objects in $\mathcal{C}_Q$ bijectively to cluster variables in $\mathcal{A}(Q)$, see \cite{CC,CK,CK2,Palu,FK}. The key for the relation to classical friezes lies in the image of Auslander-Reiten triangles under the cluster character, which is expressed in the following proposition.
\begin{proposition}\label{prop 2.2} Let $Q$ be an acyclic quiver.
If $\tau N \to \oplus_{i\in I} M_i \to N\to \tau N[1]$ is an Auslander-Reiten triangle in the transjective component of $\mathcal{C}_Q$ with $\tau N, M_i, N$ indecomposable rigid objects, then we have the following identity in the cluster algebra.
\[ X_{\tau N}\, X_{N} = \prod_{i\in I} X_{M_i} +1.\]
\end{proposition}
\begin{proof} Since $N$ is rigid transjective, we have $\dim \mathrm{Ext}^1(N,\tau N)=1$ and therefore $N$ and $\tau N$ form an exchange pair
\cite[Theorem 7.5]{BMRRT}. This implies that there are unique (up to isomorphism) triangles
\[\tau N \to \oplus_{i\in I} M_i \to N\to \tau N[1]\quad \textup{and} \quad N \to \oplus_{i\in I'} M'_i \to \tau N\to N[1]\] such that $X_{\tau N} \,X_N = \prod_{i\in I} X_{M_i} +\prod_{i\in I'} X_{M'_i}$.
Now, in the cluster category, we have $\tau=[1]$, and thus the second triangle is isomorphic to $N \to 0\to N[1]\stackrel{1}{\to} N[1]$. This completes the proof.
\end{proof}
\begin{remark}
(1) This proposition gives the so-called diamond relation in the friezes.
(2) If we were considering cluster algebras with non-trivial coefficients the constant 1 on the right hand side of the equation in Proposition \ref{prop 2.2} would be replaced by a coefficient monomial. Friezes of that type were studied in \cite{BRM}.
\end{remark}
\subsection{Examples}\label{sect 2.3}
(1) The identity homomorphism $\mathcal{A}(Q)\to\mathcal{A}(Q)$ is a non-zero frieze of type $Q$. For example, if $Q$ is the type $\mathbb{A}_3$ quiver $1\to 2\leftarrow 3$, we can visualize this frieze in the Auslander-Reiten quiver of $\mathcal{C}_Q$ as follows.
First let us write down the Auslander-Reiten quiver.
\[\xymatrix@!@R10pt@C10pt{
& {\begin{smallmatrix} 3\\2 \end{smallmatrix}}{\scriptstyle[1]} \ar[rd] &&
{\begin{smallmatrix} 3\\2 \end{smallmatrix}} \ar[rd] &&
{\begin{smallmatrix} 1 \end{smallmatrix}} \ar[rd] &&
{\begin{smallmatrix} 1\\2 \end{smallmatrix}}{\scriptstyle[1]} \\
{\begin{smallmatrix} 2 \end{smallmatrix}}{\scriptstyle[1]} \ar[rd]\ar[ru] &&
{\begin{smallmatrix} 2 \end{smallmatrix}} \ar[rd]\ar[ru] &&
{\begin{smallmatrix} 1\ 3\\2 \end{smallmatrix}} \ar[rd]\ar[ru] &&
{\begin{smallmatrix} 2 \end{smallmatrix}}{\scriptstyle[1]} \ar[rd]\ar[ru] &&
\\
& {\begin{smallmatrix} 1\\2 \end{smallmatrix}}{\scriptstyle[1]} \ar[ru] &&
{\begin{smallmatrix} 1\\2 \end{smallmatrix}} \ar[ru] &&
{\begin{smallmatrix} 3 \end{smallmatrix}} \ar[ru] &&
{\begin{smallmatrix} 3\\2 \end{smallmatrix}}{\scriptstyle[1]} \\
}
\]
Here we use a standard notation for the representations of the quiver $Q$, see for example \cite{Schiffler}, and $[1]$ denotes the shift. Vertices with the same label are identified, so the quiver lies on a Moebius strip. The Auslander-Reiten translation $\tau$ is the horizontal translation to the left. For example $\tau {\begin{smallmatrix} 3 \end{smallmatrix}} ={\begin{smallmatrix} 1\\2 \end{smallmatrix}}$. The Auslander-Reiten triangles are given by the meshes in the Auslander-Reiten quiver, for example
\[{\begin{smallmatrix} 1\\2 \end{smallmatrix}}{\scriptstyle[1]} \to {\begin{smallmatrix} 2 \end{smallmatrix}} \to {\begin{smallmatrix} 1\\2 \end{smallmatrix}} \to
\qquad \textup{and} \qquad
{\begin{smallmatrix} 2 \end{smallmatrix}} \to {\begin{smallmatrix} 1\\2 \end{smallmatrix}} \oplus {\begin{smallmatrix} 3\\2\end{smallmatrix}} \to {\begin{smallmatrix} 1\ 3\\2 \end{smallmatrix}} \to
\]
are Auslander-Reiten triangles.
The identity homomorphism $\mathcal{A}(Q)\to\mathcal{A}(Q)$ gives the following frieze.\[\xymatrix@R-10pt@C-10pt{
&\hspace{10pt} x_3\hspace{10pt} \ar[rd] &&
\frac{x_1x_3+1+x_2}{x_2x_3} \ar[rd] &&
\frac{x_2+1}{x_1} \ar[rd] &&
\hspace{10pt} x_1
\hspace{10pt} \\
\hspace{5pt} x_2\hspace{5pt} \ar[rd]\ar[ru] &&
\frac{x_1x_3+1}{x_2} \ar[rd]\ar[ru] &&
\hspace{-5pt}\frac{x_2^2+2x_2+1+x_1x_3}{x_1x_2x_3}\hspace{-5pt} \ar[rd]\ar[ru] &&
\hspace{5pt} x_2\hspace{5pt} \ar[rd]\ar[ru] &&
\\
& x_1 \ar[ru] &&
\frac{x_1x_3+1+x_2}{x_1x_2} \ar[ru] &&
\frac{x_2+1}{x_3} \ar[ru] &&
x_3 \\
}
\]
This is an example of a non-zero frieze of type $\mathbb{A}_3$. Note that the Auslander-Reiten triangles give the usual diamond rules, for example
\[x_1
\ \frac{x_1x_3+1+x_2}{x_1x_2} = \frac{x_1x_3+1}{x_2} \ +\ 1
\qquad \textup{and} \qquad \]
\[\frac{x_1x_3+1}{x_2}\ \frac{x_2^2+2x_2+1+x_1x_3}{x_1x_2x_3}
\ =\ \frac{x_1x_3+1+x_2}{x_1x_2} \ \frac{x_1x_3+1+x_2}{x_2x_3} \ + \ 1
\]
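Both identities can be checked mechanically by exact rational arithmetic. The following sketch (our own illustration; the helper name is ours) evaluates both sides of the two displayed relations at random rational points.

```python
from fractions import Fraction
import random

def check_diamond_identities(trials=100):
    """Check the two displayed A_3 diamond relations at random rational points."""
    rng = random.Random(0)
    for _ in range(trials):
        x1, x2, x3 = (Fraction(rng.randint(1, 50), rng.randint(1, 50))
                      for _ in range(3))
        # x1 * (x1 x3 + 1 + x2)/(x1 x2)  ==  (x1 x3 + 1)/x2 + 1
        lhs1 = x1 * (x1*x3 + 1 + x2) / (x1*x2)
        rhs1 = (x1*x3 + 1) / x2 + 1
        # middle diamond: product of the two neighbours on each side, plus 1
        lhs2 = (x1*x3 + 1) / x2 * (x2**2 + 2*x2 + 1 + x1*x3) / (x1*x2*x3)
        rhs2 = (x1*x3 + 1 + x2) / (x1*x2) * (x1*x3 + 1 + x2) / (x2*x3) + 1
        if lhs1 != rhs1 or lhs2 != rhs2:
            return False
    return True
```

Since both sides are rational functions, agreement on sufficiently many points certifies the identities.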
\noindent (2) Specializations. We compute several specializations of the example above.
(i) Specializing $x_1=x_2=x_3=1$, we obtain the following unitary positive integral frieze.
\[\xymatrix{
& 1 \ar[rd] &&
3 \ar[rd] &&
2 \ar[rd] &&
1 &&
\\
1 \ar[rd]\ar[ru] &&
2 \ar[rd]\ar[ru] &&
5 \ar[rd]\ar[ru] &&
1 \ar[rd]\ar[ru] &&
\\
&
1\ar[ru] &&
3\ar[ru] &&
2\ar[ru] &&
1 &&
}
\]
Here the previous examples of the diamond rules become simply
\[1\cdot 3 = 2+1 \qquad \textup{and} \qquad 2\cdot 5 =3\cdot 3+1.\]
This is an example of a classical Conway-Coxeter frieze; let us point out that one can extend this frieze pattern by a row of 1's above and below the current pattern, which is how the Conway-Coxeter friezes are usually represented. We will not include these rows of 1's in this article.
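For experimentation, the diamond rule lets one rebuild a Conway-Coxeter frieze row by row from its quiddity row (the row adjacent to the bordering row of 1's). The sketch below is our own illustration (function name and cyclic-index convention are ours); applied to the quiddity sequence $(1,3,2,1,3,2)$ it reproduces the integral frieze above, up to a horizontal shift.

```python
from fractions import Fraction

def frieze_from_quiddity(quiddity, depth):
    """Build rows of a frieze pattern below a cyclic quiddity row.

    Row 0 is the bordering row of 1's; each new entry d is obtained from
    the diamond rule l*r = t*d + 1, i.e. d = (l*r - 1)/t, read downwards.
    """
    n = len(quiddity)
    rows = [[Fraction(1)] * n, [Fraction(q) for q in quiddity]]
    for _ in range(depth):
        prev, cur = rows[-2], rows[-1]
        rows.append([(cur[i] * cur[(i + 1) % n] - 1) / prev[(i + 1) % n]
                     for i in range(n)])
    return rows
```

For the quiddity $(1,3,2,1,3,2)$ the construction closes up with a row of 1's after three steps, matching the width of the $\mathbb{A}_3$ frieze.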
(ii) Specializing $x_1=x_2=1$ and $x_3=-1$, we obtain the following unitary integral frieze which is not positive; in fact, it is not even non-zero.
\[\xymatrix{
& -1 \ar[rd] &&
-1\ar[rd] &&
2 \ar[rd] &&
1 &&
\\
1 \ar[rd]\ar[ru] &&
0 \ar[rd]\ar[ru] &&
-3 \ar[rd]\ar[ru] &&
1 \ar[rd]\ar[ru] &&
\\
&
1\ar[ru] &&
1\ar[ru] &&
-2\ar[ru] &&
-1 &&
}
\]
Here our example diamond relations become $1\cdot 1 =0+1$ and $0\cdot(-3)=(-1)\cdot 1 +1$.
(iii) Specializing $x_1=1$, $x_2=i$, and $x_3=i$, we obtain the following unitary non-zero frieze in the Gaussian integers $\mathbb{Z}[i]$.
\[\xymatrix{
& \hspace{5pt} i \hspace{5pt} \ar[rd] &&
\hspace{-5pt}-1-2i \hspace{-5pt} \ar[rd] &&
1+i \ar[rd] &&
\hspace{5pt} 1 \hspace{5pt} &&
\\
i \ar[rd]\ar[ru] &&
1-i \ar[rd]\ar[ru] &&
-3i \ar[rd]\ar[ru] &&
i \ar[rd]\ar[ru] &&
\\
&
1\ar[ru] &&
2-i\ar[ru] &&
1-i\ar[ru] &&
i &&
}
\]
Here our example diamond relations become $1\cdot (2-i) =(1-i)+1$ and $(1-i)\cdot(-3i)=(-1-2i)\cdot (2-i) +1$.
(iv) Specializing $x_1=1$, $x_2=\frac{1+\sqrt{-3}}{2}$, $x_3=1$, we obtain the following unitary non-zero frieze in the ring of Eisenstein integers $\mathbb{Z}\big[\frac{1+\sqrt{-3}}{2}\big]$, the ring of integers of $\mathbb{Q}(\sqrt{-3})$. Recall that the units in this ring are $\{\pm 1,\frac{\pm1 \pm\sqrt{-3}}{2}\}$.
\[\xymatrix{
& 1 \ar[rd] &&
\scriptstyle 2-\sqrt{-3} \ar[rd] &&
\frac{3+\sqrt{-3}}{2} \ar[rd] &&
1 &&
\\
\frac{1+\sqrt{-3}}{2} \ar[rd]\ar[ru] &&
\scriptstyle 1-\sqrt{-3} \ar[rd]\ar[ru] &&
\frac{7-\sqrt{-3}}{2} \ar[rd]\ar[ru] &&
\frac{1+\sqrt{-3}}{2} \ar[rd]\ar[ru] &&
\\
& 1 \ar[ru] &&
\scriptstyle 2-\sqrt{-3} \ar[ru] &&
\frac{3+\sqrt{-3}}{2} \ar[ru] &&
1 &&
}
\]
In this case, the examples of the diamond relations become $1\cdot (2-\sqrt{-3}) = (1-\sqrt{-3})+1$ and $(1-\sqrt{-3})\cdot\frac{7-\sqrt{-3}}{2} =(2-\sqrt{-3})^2+1$.
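The integer specializations are easy to reproduce mechanically. This sketch (our own helper, not part of the text) evaluates the nine entries of the frieze from Section \ref{sect 2.3}(1) at the specializations (i) and (ii); the specializations (iii) and (iv) can be treated the same way using exact arithmetic in the corresponding rings.

```python
from fractions import Fraction

def a3_frieze(x1, x2, x3):
    """The nine entries of the A_3 frieze of Section 2.3(1), row by row."""
    x1, x2, x3 = Fraction(x1), Fraction(x2), Fraction(x3)
    top = [x3, (x1*x3 + 1 + x2) / (x2*x3), (x2 + 1) / x1, x1]
    mid = [x2, (x1*x3 + 1) / x2, (x2**2 + 2*x2 + 1 + x1*x3) / (x1*x2*x3), x2]
    bot = [x1, (x1*x3 + 1 + x2) / (x1*x2), (x2 + 1) / x3, x3]
    return top, mid, bot
```

Evaluating at $(1,1,1)$ and at $(1,1,-1)$ returns exactly the two integral friezes displayed above.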
\subsection{Positive unitary integral friezes}\label{sect 2.4} In this subsection we show that for a positive unitary integral frieze, the cluster that carries the unitarity property is unique.
\begin{proposition}
\label{prop 4}
Let $\mathcal{F}\colon\mathcal{A}(Q)\to \mathbb{Z}$ be a positive unitary integral frieze and let $\mathbf{x}$ be a cluster such that $\mathcal{F}(\mathbf{x})=(1,\ldots,1)$. Then for all cluster variables $u\notin \mathbf{x}$ we have $\mathcal{F}(u)>1$. In particular $\mathbf{x}$ is the unique cluster such that $\mathcal{F}(\mathbf{x})=(1,\ldots,1)$.
\end{proposition}
\begin{proof} Suppose $\mathcal{F}(u)=1$.
Since $u$ is a Laurent polynomial in $\mathbf{x}$ with positive coefficients, this implies that $u$ is a Laurent monomial in $\mathbf{x}$. Let $M(u)$ be the indecomposable reachable rigid object in the cluster category that is mapped to $u$ under the cluster character. Then the number of terms of the Laurent polynomial of $u$ is equal to the sum $\sum_{\underline{e}} \chi(Gr_{\underline{e}}(\mathrm{Ext}^1_{\mathcal{C}_Q}(T,M(u))))$, where $T$ is the cluster-tilting object corresponding to $\mathbf{x}$, $\mathrm{Ext}^1_{\mathcal{C}_Q}(T,M(u))$ is a module over the Jacobian algebra of the quiver with potential of the seed containing $\mathbf{x}$, $Gr_{\underline{e}}$ is the quiver Grassmannian of submodules of dimension vector $\underline{e}$, and $\chi$ is the Euler characteristic \cite{FK}. Whenever $\mathrm{Ext}^1_{\mathcal{C}_Q}(T,M(u))$ is nonzero, this sum has at least two terms, coming from the two trivial submodules $0$ and $\mathrm{Ext}^1_{\mathcal{C}_Q}(T,M(u))$. Using the positivity theorem \cite{LS4} we see that $\mathcal{F}(u)=1 $ if and only if $\mathrm{Ext}^1_{\mathcal{C}_Q}(T,M(u))=0$, which means that $M(u)$ is a summand of $T$, and thus $u\in \mathbf{x}$, a contradiction.
\end{proof}
\section{Frieze vectors}\label{sect 3}
In this section, we introduce a class of positive integer vectors and show that they are in bijection with the clusters of the cluster algebra.
\subsection{Definition}\label{sect 3.1}
We start with a general result on non-zero friezes.
\begin{proposition}\label{prop 1}
Every non-zero frieze $\mathcal{F}\colon \mathcal{A}(Q)\to R$ is completely determined by its values on an arbitrary cluster in $\mathcal{A}(Q)$.
\end{proposition}
\begin{proof}
Let $\mathbf{x}=(x_1,\ldots,x_n)$ be a cluster in $\mathcal{A}(Q)$ and let $u$ be an arbitrary cluster variable in $\mathcal{A}(Q)$ that does not lie in $\mathbf{x}$. By the Laurent phenomenon \cite{FZ}, we can write $u$ as a Laurent polynomial in $x_1,\ldots,x_n$, thus
\[ u= \frac{f(x_1,\ldots,x_n)}{x_1^{d_1}\cdots x_n^{d_n}} \qquad \textup{with } f\in \mathbb{Z}[x_1,\ldots,x_n], d_i\ge 0.\]
Hence \[\mathcal{F}(u) = \frac{f(\mathcal{F}(x_1),\ldots,\mathcal{F}(x_n))}{\mathcal{F}(x_1)^{d_1}\cdots \mathcal{F}(x_n)^{d_n}}\]
in the field of fractions of $R$. Note that this expression is well-defined since the frieze is non-zero. Therefore $\mathcal{F}(u)$ is determined by the values $\mathcal{F}(x_i)$. Since the cluster algebra is generated by its cluster variables, this completes the proof.
\end{proof}
Proposition \ref{prop 1} implies that given an arbitrary cluster $\mathbf{x}=(x_1,\ldots,x_n)$ we can obtain \emph{every} non-zero frieze by specializing the cluster variables $x_i$ of the cluster to certain ring elements $\mathcal{F}(x_i)=a_i\in R$.
However, not every choice of elements $a_i\in R$ produces a frieze with values in $R$: in general the values only lie in the field of fractions of $R$. It is natural to ask which choices $a_i\in R$ do. This leads us to the following definition.
\begin{definition} Let $\mathbf{x}=(x_1,\ldots,x_n)$ be a cluster of $\mathcal{A}(Q)$.
(1) A vector $(a_1,\ldots,a_n) \in R^n$ is called a \emph{frieze vector relative to $\mathbf{x}$} if the frieze $\mathcal{F}$ defined by $\mathcal{F}(x_i)=a_i$ has values in $R$. If the frieze $\mathcal{F}$ is unitary we say that the frieze vector $(a_1,\ldots,a_n)$ is \emph{unitary}.
(2) A vector $(a_1,\ldots,a_n) \in \mathbb{Z}_{>0}^n$ is called a \emph{positive frieze vector relative to $\mathbf{x}$} if the frieze $\mathcal{F}$ defined by $\mathcal{F}(x_i)=a_i$ is positive integral.
\end{definition}
\begin{proposition}
\label{prop 2}
{\rm (1)} Let $(a_1,\ldots,a_n)\in R^n$ such that every $a_i$ is a unit in $R$. Then $(a_1,\ldots,a_n)$ is a (unitary) frieze vector relative to every cluster $\mathbf{x}=(x_1,\ldots,x_n)$ in $\mathcal{A}(Q)$.
{\rm (2)} The vector $(1,\ldots,1)\in \mathbb{Z}_{>0}^n$ is a positive (unitary) frieze vector relative to every cluster $\mathbf{x}=(x_1,\ldots,x_n)$ in $\mathcal{A}(Q)$.
\end{proposition}
\begin{proof}
(1) By the Laurent phenomenon, every cluster variable is a Laurent polynomial in $\mathbf{x}$. Since each $x_i$ is specialized to a unit in $R$, the denominator of this Laurent polynomial also specializes to a unit in $R$. Therefore the image of every cluster variable lies in $R$, and hence $\mathcal{F}(\mathcal{A}(Q))\subset R$.
(2) The frieze is integral by part (1) and positivity follows from the positivity theorem for cluster variables \cite{LS4}.
\end{proof}
\subsection{Acyclic type}
In the case where the quiver $Q$ is mutation equivalent to an acyclic quiver, we have the following characterization of frieze vectors.
\begin{proposition}
\label{prop 3}
Let $(\mathbf{x}=(x_1,\ldots,x_n),Q)$ be an acyclic seed of the cluster algebra. Then a vector $(a_1,\ldots,a_n)\in R^n$ is a frieze vector relative to $\mathbf{x}$ if and only if $a_i$ divides $\prod_{i\to j} a_j +\prod_{i\leftarrow j} a_j$ in $R$, for all $i=1,\ldots,n$.
\end{proposition}
\begin{proof}
Let $x_i'$ denote the cluster variable obtained from $(\mathbf{x},Q)$ by mutating in direction $i$. Then
\[x_i'= \frac{\prod_{i\to j} x_j +\prod_{i\leftarrow j} x_j}{x_i}.\]
By \cite[Corollary 1.21]{BFZ}, the cluster algebra is generated by the $2n$ variables $x_1,\ldots,x_n,x_1',\ldots,x_n'$.
Let $\mathcal{F} $ be the homomorphism defined by $\mathcal{F}(x_i)=a_i$. Then
\[\mathcal{F}(\mathcal{A}(Q))\subset R \Leftrightarrow \mathcal{F}(x_i')\in R \textup{ for each $i$ }\Leftrightarrow a_i \textup{ divides } \prod_{i\to j} a_j +\prod_{i\leftarrow j} a_j \textup{ in $R$ for all $i$.}\qedhere\]
\end{proof}
\subsection{Main result on frieze vectors}
We are now ready to state and prove our first main result.
\begin{theorem}
\label{thm1}
Let $Q$ be a quiver without loops and 2-cycles and let $\mathbf{x}=(x_1,\ldots,x_n)$ be an arbitrary cluster of $\mathcal{A}(Q)$. Then there is a bijection
\[
\begin{array}{rcl}
\phi\colon\{\textup{unordered clusters in $\mathcal{A}(Q)$}\} &\longrightarrow&\{\textup{positive unitary frieze vectors relative to $\mathbf{x}$}\} \\
\mathbf{x}'=\{x_1',\ldots,x_n'\} &\longmapsto& \phi(\mathbf{x}')=(a_1,\ldots,a_n).
\end{array}\]
\end{theorem}
\begin{remark}
(1) The theorem implies that every cluster $\mathbf{x}'$ defines a unique positive unitary frieze vector in $\mathbb{Z}_{>0}^n$. This vector is different from the $g$-vector and the $c$-vector of the seed.
(2) We stress that, while the order of the cluster variables $x_1',\ldots,x_n'$ is irrelevant, the order of the entries of the frieze vector $\phi(\mathbf{x}')=(a_1,\ldots,a_n)$ is important. In other words, if $\sigma$ is a permutation then $\phi(\sigma \mathbf{x}')=\phi(\mathbf{x}')$, but $\sigma\phi(\mathbf{x}')\ne \phi(\mathbf{x}')$ in general.
\end{remark}
\begin{proof}
Each cluster variable $x_1,\ldots,x_n$ in the fixed cluster $\mathbf{x}$ can be expressed as a Laurent polynomial in the cluster $\mathbf{x}'$, say $x_i=\mathcal{L}_i(x_1',\ldots,x_n')$. We define the map $\phi$ by $\phi(\mathbf{x}')=(a_1,\ldots,a_n) $, with $a_i=\mathcal{L}_i(1,\ldots,1)$. In other words, $\phi(\mathbf{x}')$ is equal to the vector $\mathcal{F}(\mathbf{x})=(a_1,\ldots,a_n)$, where $\mathcal{F}$ is the frieze defined by specializing the cluster variables in $\mathbf{x}'$ to 1. By Proposition \ref{prop 2}, the frieze $\mathcal{F}$ is unitary, integral and positive. Thus $(a_1,\ldots,a_n)$ is a positive unitary frieze vector relative to $\mathbf{x}$. Furthermore, since every variable in $\mathbf{x}'$ is specialized to 1, we clearly have $\phi(\sigma \mathbf{x}')=\phi(\mathbf{x}')$, for every permutation $\sigma$. Thus the map $\phi $ is well-defined.
To show that $\phi$ is surjective, let $(a_1,\ldots,a_n)\in \mathbb{Z}_{>0}^n$ be any positive unitary frieze vector relative to $\mathbf{x}$. By definition, the corresponding frieze defined by $\mathcal{F}(x_i)=a_i$ is positive and unitary, which means that there exists a cluster $\mathbf{x}'=(x_1',\ldots,x_n')$ such that $\mathcal{F}(x_i')=1$, for $i=1,\ldots,n$. By construction of $\phi$, we have $\phi(\mathbf{x}')=(a_1,\ldots,a_n)$, so $\phi $ is surjective.
To show injectivity, let $\mathbf{x}',\mathbf{x}''$ be two clusters in $\mathcal{A}(Q)$ such that $\phi(\mathbf{x}')=\phi(\mathbf{x}'')$. Let $\mathcal{F}'$ and $\mathcal{F}''$ be the unitary friezes defined by $\mathcal{F}'(x_i')=1$ and $\mathcal{F}''(x_i'')=1$, respectively. Since $\phi(\mathbf{x}')=\phi(\mathbf{x}'')$, both friezes have the same values on $\mathbf{x}$, thus $\mathcal{F}'(\mathbf{x})=\mathcal{F}''(\mathbf{x})=(a_1,\ldots,a_n)$. Now Proposition \ref{prop 1} implies that $\mathcal{F}'=\mathcal{F}''$, and Proposition \ref{prop 4} yields $\mathbf{x}'=\mathbf{x}''$.
\end{proof}
\begin{remark}
The inverse of the bijection $\phi$ is given as follows. Given a positive unitary frieze vector $(a_1,\ldots, a_n)$, we compute the corresponding unitary frieze $\mathcal{F}$ by specializing $(x_1,\ldots,x_n)=(a_1,\ldots,a_n)$. By Proposition \ref{prop 4}, this frieze has a unique cluster $\mathbf{x}'$ such that $\mathcal{F}(\mathbf{x}')=(1,\ldots,1)$. Then $\phi^{-1}(a_1,\ldots,a_n)=\mathbf{x}'$.
\end{remark}
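In small examples this inverse can be carried out numerically: specialize to the given frieze vector and mutate greedily at an entry of maximal value until the all-ones vector appears. The sketch below is our own illustration (the function names are ours, and greedy termination is only guaranteed in cases, such as the type $\mathbb{A}$ examples of this paper, where every positive integral frieze is unitary); it tracks the skew-symmetric exchange matrix and the specialized cluster values under standard matrix mutation.

```python
from fractions import Fraction

def mutate(B, a, k):
    """Mutate the skew-symmetric exchange matrix B and value vector a at index k."""
    n = len(a)
    p = q = Fraction(1)
    for j in range(n):
        if B[j][k] > 0:          # exchange relation read off column k of B
            p *= a[j] ** B[j][k]
        elif B[j][k] < 0:
            q *= a[j] ** (-B[j][k])
    a2 = list(a)
    a2[k] = (p + q) / a[k]
    # Fomin-Zelevinsky matrix mutation
    B2 = [[-B[i][j] if k in (i, j)
           else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
           for j in range(n)] for i in range(n)]
    return B2, a2

def descend_to_ones(B, a, max_steps=100):
    """Greedily mutate at a maximal entry; return the number of steps to (1,...,1)."""
    a = [Fraction(x) for x in a]
    for step in range(max_steps):
        if all(x == 1 for x in a):
            return step
        k = max(range(len(a)), key=lambda i: a[i])
        B, a = mutate(B, a, k)
    return None

# exchange matrix of the quiver 1 -> 2 <- 3  (B[i][j] = arrows i -> j minus j -> i)
B_A3 = [[0, 1, 0], [-1, 0, -1], [0, 1, 0]]
```

For instance, starting from the frieze vector $(2,5,2)$ of the example below, the descent reaches $(1,1,1)$ in three mutation steps.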
\subsection{Example} Thanks to Proposition \ref{prop 3}, the positive frieze vectors $(a_1,a_2,a_3)$ relative to the seed $((x_1,x_2,x_3),\,1\to 2 \leftarrow 3)$ are characterized by the condition that the following three expressions are integers:
\[\frac{a_2+1}{a_1},\ \frac{a_1a_3+1}{a_2},\ \frac{a_2+1}{a_3}.\]
The 14 frieze vectors $(a_1,a_2,a_3)$ are the following.
\[
\begin{array}
{cccccccccccccc} (1,1,1)&(1,1,2)&(1,2,1)&(1,2,3)&(1,3,2) & (2,1,1) & (2,1,2) & (2,3,1)&(2,3,4)\\(2,5,2) & (3,2,1) &(3,2,3)&(3,5,3)&(4,3,2)
\end{array}
\]
Equivalently, we can think of the conditions as Diophantine equations in two sets of integers as follows.
\[a_1b_1=a_2+1,\ a_2b_2=a_1a_3+1,\ a_3b_3=a_2+1.\]
The vectors $(b_1,b_2,b_3)$, in the same order as the frieze vectors above, are the following.
\[\begin{array}
{cccccccccccccc} (2,2,2)&(2,3,1)&(3,1,3)&(3,2,1)&(4,1,2) & (1,3,2) & (1,5,1) & (2,1,4)&(2,3,1)\\(3,1,3) & (1,2,3) &(1,5,1)&(2,2,2)&(1,3,2)
\end{array} \]
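The list of frieze vectors above can be reproduced by a brute-force search based on the divisibility criterion of Proposition \ref{prop 3}: since $a_1$ and $a_3$ must divide $a_2+1$, it suffices to run over the divisors of $a_2+1$ for each candidate $a_2$. The sketch below is our own illustration; the search bound is a heuristic cut-off (for this quiver the solution set is finite).

```python
def positive_frieze_vectors(bound=60):
    """Positive frieze vectors (a1, a2, a3) for the seed 1 -> 2 <- 3.

    Criterion (Proposition 3): a1 | a2+1,  a2 | a1*a3+1,  a3 | a2+1.
    The search runs over a2 <= bound; a1 and a3 range over divisors of a2+1.
    """
    vectors = []
    for a2 in range(1, bound + 1):
        divisors = [d for d in range(1, a2 + 2) if (a2 + 1) % d == 0]
        for a1 in divisors:
            for a3 in divisors:
                if (a1 * a3 + 1) % a2 == 0:
                    vectors.append((a1, a2, a3))
    return vectors
```

Within this search range the function returns exactly the 14 vectors listed above; the complementary vectors $(b_1,b_2,b_3)$ are then recovered as the corresponding quotients.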
Figure \ref{fig 3} shows the frieze vectors and their clusters in the exchange graph, where the clusters are illustrated by their position in the Auslander-Reiten quiver of the cluster category.
\begin{figure}
\Large\scalebox{0.5}{\input{figure3a.pdf_tex}}
\caption{Frieze vectors relative to $(x_1,x_2,x_3),1\to 2 \leftarrow 3$ together with their clusters.}
\label{fig 3}
\end{figure}
\subsection{Mutation of frieze vectors in type $\mathbb{A}$}
Mutations of positive integral friezes are described in \cite{BFGST}, where the authors compute the effect of mutation on the whole frieze. Here, we are interested in describing the effect of mutation on the frieze vector relative to a fixed cluster $\mathbf{x}$. To give this description, we use the combinatorial formula of \cite{MS} to write the cluster variables of $\mathbf{x}$ with respect to the cluster $\mathbf{x}'$ in terms of perfect matchings of snake graphs. Then the values in the frieze vectors are simply given as the number of perfect matchings of the appropriate snake graph.
We will not define snake graphs here but rather refer to the survey \cite{S2}. For our purpose it suffices to say that a snake graph is a planar graph consisting of a sequence of square tiles that are glued together such that two consecutive tiles share exactly one edge which is either the north edge of the first tile and the south edge of the second tile or
the east edge of the first tile and the west edge of the second tile. We associate a snake graph to each cluster variable in $\mathbf{x}$. The tiles of the snake graph are labeled by the cluster variables in the cluster $\mathbf{x}'=(x_1',\ldots,x_n')$ and its edges are labeled by the cluster variables in $\mathbf{x}'$ or by the constant 1. Since our cluster algebra is of Dynkin type $\mathbb{A}$, no two tiles have the same label and no two interior edges are labeled by the same cluster variable.
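To illustrate the perfect-matching description (with all weights specialized to $1$, so that each frieze value becomes a matching count), the following sketch builds the snake graph determined by a word in $\{\mathrm{R},\mathrm{U}\}$, recording the direction in which each new tile is glued, and counts its perfect matchings by brute force. This encoding and the naive count are our own illustration, not the labeled construction of \cite{MS}.

```python
def snake_matchings(word):
    """Count perfect matchings of the snake graph whose tiles follow `word`.

    The first tile sits at (0, 0); each letter 'R' or 'U' glues the next
    tile to the right of, or on top of, the previous one.
    """
    tiles = [(0, 0)]
    for step in word:
        x, y = tiles[-1]
        tiles.append((x + 1, y) if step == 'R' else (x, y + 1))
    edges = set()
    for (x, y) in tiles:  # the four boundary edges of each unit square
        edges |= {((x, y), (x + 1, y)), ((x, y), (x, y + 1)),
                  ((x + 1, y), (x + 1, y + 1)), ((x, y + 1), (x + 1, y + 1))}
    vertices = sorted({v for e in edges for v in e})

    def count(remaining):
        # match the first remaining vertex along each incident edge
        if not remaining:
            return 1
        v, rest = remaining[0], remaining[1:]
        total = 0
        for (a, b) in edges:
            if a == v and b in rest:
                total += count([u for u in rest if u != b])
            elif b == v and a in rest:
                total += count([u for u in rest if u != a])
        return total

    return count(vertices)
```

A single tile has 2 perfect matchings and a straight snake graph grows Fibonacci-like, in line with the matching counts that appear as frieze values.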
The mutation from $\mathbf{x}'$ to $\mathbf{x}''=(\mathbf{x}'\setminus\{x_i'\})\cup \{x_i''\} $ has the following effect on the snake graphs in a cluster algebra of Dynkin type $\mathbb{A}$.
\begin{enumerate}
\item If the first or last tile of the snake graph has label $x_i'$ then this tile is removed and the new boundary edge is labeled by the new cluster variable $x_i''$, see the top row of Figure \ref{fig 5}.
Conversely, if the snake graph ends with an edge that is labeled $x_i'$ then a new tile with label $x_i''$ is glued to this edge.
\item If the snake graph has a tile labeled $x_i'$ that is the middle tile of a 3-tile subgraph then
\begin{enumerate}
\item if the 3-tile subgraph is straight then it transforms as shown in the second row of Figure \ref{fig 5};
\item if the 3-tile subgraph is not straight then it transforms as shown in the third row of Figure \ref{fig 5};
conversely, if the snake graph contains an interior edge labeled $x_i'$ shared by two tiles with labels $x_h',x_j'$ then a new tile labeled $x_i''$ is inserted such that the three consecutive tiles labeled $x_h',x_i'',x_j'$ do not form a straight subgraph.
\end{enumerate}
\begin{figure}
\scalebox{0.8}{\input{figure5.pdf_tex}}
\caption{Mutation of snake graphs in direction $x_i'$}
\label{fig 5}
\end{figure}
\end{enumerate}
\smallskip
The above description gives the mutations of frieze vectors in the example of Figure \ref{fig 3}.
For example the mutation $(2,5,2)\longleftrightarrow (3,5,3) $ is given by the snake graph mutation below.
\\ \nopagebreak
\centerline{ \input{figure6.pdf_tex}}
\section{Friezes of type $\widetilde{\mathbb{A}}$}\label{sect 4}
In this section, we study the special case of integral friezes of affine Dynkin type $\mathbb{A}$. We show that every positive integral frieze of this type is unitary.
Let $Q$ be a quiver that is mutation equivalent to a quiver $Q'$ of type $\widetilde{\mathbb{A}}_{p,q}$. The cluster algebra $\mathcal{A}(Q)$ is of surface type and the corresponding surface is an annulus with $p$ marked points on one boundary component and $q$ marked points on the other boundary component, see \cite{FST}. The cluster variables $x_\gamma$ in $\mathcal{A}(Q)$ are in bijection with the arcs $\gamma$ in the annulus. We call a cluster variable $x_\gamma$ \emph{transjective} if its arc $\gamma$ has its two endpoints on two different boundary components (bridging arc) and we call the cluster variable $x_\gamma$ \emph{regular} if the arc $\gamma$ has both endpoints on the same boundary component (peripheral arc). The terminology transjective versus regular comes from the cluster category $\mathcal{C}_Q$.
\begin{lemma}
\label{lem 4.1}
Let $\mathcal{F}\colon\mathcal{A}(Q)\to \mathbb{Z}$ be a positive integral frieze of type $\widetilde{\mathbb{A}}_{p,q}$ and let $\mathbf{x}=(x_1,\ldots,x_n)$ be a cluster such that $\mathcal{F}(x)=1$ for each regular cluster variable $x\in \mathbf{x}$ (if any). Let $i$ be such that $\mathcal{F}(x_i)\ge\mathcal{F}(x_j)$ for all $j$, and suppose that $\mathcal{F}(x_i)>1$. Let $x_i'$ be the cluster variable obtained from $\mathbf{x}$ by mutation in direction $i$. Then $\mathcal{F}(x_i')<\mathcal{F}(x_i)$, and if $x_i'$ is a regular cluster variable then $\mathcal{F}(x_i')=1$.
\end{lemma}
\begin{proof}
Let $\tau_j$ be the arc corresponding to the cluster variable $x_j$, so that $T=(\tau_1,\ldots,\tau_n)$ is the triangulation corresponding to the cluster $\mathbf{x}$. The mutation in direction $i$ is given by flipping the arc $\tau_i$ in $T$, and the exchange relation in the cluster algebra is of the form
\begin{equation}\label{eq 1} x_ix_i'=x_ax_c+x_bx_d\end{equation}
where $\tau_i$ is the diagonal in the quadrilateral in $T$ with sides $\tau_a,\tau_b,\tau_c,\tau_d$ as in Figure \ref{fig 1} some of which may be boundary edges.
\begin{figure}
\input{figure1.pdf_tex}
\caption{Quadrilateral in the triangulation $T$.}
\label{fig 1}
\end{figure}
Our assumptions that $\mathcal{F}(x_i)>1$ and that $\mathcal{F}(x)=1$ for every regular cluster variable $x\in \mathbf{x}$ imply that $x_i$ is transjective. Hence $\tau_i$ is a bridging arc, so its endpoints lie on different boundary components. Therefore one of the arcs $\tau_a,\tau_b$ is bridging and the other is peripheral (or a boundary edge), and also one of $\tau_c,\tau_d$ is bridging and the other is peripheral (or a boundary edge). We assume without loss of generality that $\tau_a$ is bridging and consider two cases.
Suppose first that $\tau_c$ is bridging. Then the relation (\ref{eq 1}) implies
\begin{equation}
\label{eq 2}
\mathcal{F}(x_i')=(\mathcal{F}(x_a)\mathcal{F}(x_c)+1)/\mathcal{F}(x_i)
\end{equation}
because the frieze has value 1 on the two regular variables (or boundary edge weights) $x_b$ and $x_d$. Note that in this case the flipped arc $\tau'_i$ is bridging. Recall that $\mathcal{F}(x_a)\le\mathcal{F}(x_i)$ and $\mathcal{F}(x_c)\le \mathcal{F}(x_i)$. If $\mathcal{F}(x_a)=\mathcal{F}(x_i)$ then the right hand side of (\ref{eq 2}) would be equal to $\mathcal{F}(x_c)+ 1/\mathcal{F}(x_i)$ which is not an integer. Thus $\mathcal{F}(x_a)<\mathcal{F}(x_i)$ and similarly $\mathcal{F}(x_c)<\mathcal{F}(x_i)$. Therefore the right hand side of (\ref{eq 2}) is at most $((\mathcal{F}(x_i)-1)^2+1)/\mathcal{F}(x_i)=\mathcal{F}(x_i)-2+(2/\mathcal{F}(x_i))$ which is strictly smaller than $\mathcal{F}(x_i)$, and we are done.
Suppose now that $\tau_c$ is a peripheral arc. Then $\tau_d$ is bridging and the relation (\ref{eq 1}) implies
\begin{equation}
\label{eq 3}
\mathcal{F}(x_i')=(\mathcal{F}(x_a)+\mathcal{F}(x_d))/\mathcal{F}(x_i)
\end{equation}
Note that in this case the arc $\tau_i'$ is peripheral and forms a triangle with the two peripheral arcs $\tau_b$ and $\tau_c$. We will show that $\mathcal{F}(x_i')=1$. Since $\mathcal{F}(x_i)$ is the maximal frieze value in $\mathbf{x}$, equation (\ref{eq 3}) yields
$ \mathcal{F}(x_i')\le 2\mathcal{F}(x_i)/\mathcal{F}(x_i)=2.
$
If $\mathcal{F}(x_i')=1$ we are done. Assume therefore that $\mathcal{F}(x_i')=2$.
Then equation (\ref{eq 3}) implies
\begin{equation}
\label{eq 4}
\mathcal{F}(x_a)=\mathcal{F}(x_d)=\mathcal{F}(x_i)\ge 2.
\end{equation}
Consider the quadrilateral in $T$ in which $\tau_d$ is the diagonal and denote its sides $\tau_i,\tau_c,\tau_e,\tau_f$ where $\tau_i,\tau_e $ are bridging arcs and $\tau_c,\tau_f$ are peripheral, see Figure \ref{fig 2}.
\begin{figure}
\scalebox{0.8}{ \input{figure2.pdf_tex}}
\caption{Two possible configurations in the triangulation $T$ when $\tau_c$ is a peripheral arc or a boundary edge.}
\label{fig 2}
\end{figure}
Let $x_d'$ be the cluster variable obtained by mutating $\mathbf{x}$ in direction $d$. Then in the situation of the left picture in Figure \ref{fig 2} we have
\[\mathcal{F}(x_d')=(\mathcal{F}(x_i)\mathcal{F}(x_e)+1)/\mathcal{F}(x_d) =\mathcal{F}(x_e) +1/\mathcal{F}(x_i),\]
where the last equality holds by (\ref{eq 4}). But since $\mathcal{F}(x_i)\ge 2$, this expression is not an integer, so we have a contradiction.
Therefore we must be in the situation of the right picture in Figure \ref{fig 2}, and we have
\[\mathcal{F}(x_d')=(\mathcal{F}(x_i)+\mathcal{F}(x_e))/\mathcal{F}(x_d) =1+\mathcal{F}(x_e)/\mathcal{F}(x_i),\]
where the last identity holds by (\ref{eq 4}). Since $\mathcal{F}(x_i)\ge\mathcal{F}(x_e)$ and $\mathcal{F}$ is a positive integral frieze, we must have $\mathcal{F}(x_i)=\mathcal{F}(x_e)$ and $\mathcal{F}(x_d')=2$.
We have thus shown that if $\mathcal{F}(x_i')=2$ then the triangulation $T$ contains a fan of bridging arcs $\tau_i,\tau_d,\tau_e$ and $\mathcal{F}(x_d')=2, \mathcal{F}(x_e)=\mathcal{F}(x_i)$.
We can now repeat this argument by considering the cluster variable $x_e'$ obtained by mutating $\mathbf{x}$ in direction $e$, and recursively with every new bridging arc in the fan and we obtain a fan of bridging arcs in $T$ and each arc in this fan has the same frieze value $\mathcal{F}(x_i)\ge 2$. Since $T$ is a triangulation of the annulus, this fan is finite, and the two arcs bounding it correspond to a sink and a source in the quiver $Q_T$. Mutating at one of those arcs again gives a contradiction as in the left picture of Figure \ref{fig 2}. We have shown that $\mathcal{F}(x_i')$ cannot be equal to 2, and thus $\mathcal{F}(x_i')=1$.
\end{proof}
We are now ready for the main theorem of this section.
\begin{theorem}
\label{thm 2} Let $Q$ be a quiver of type $\widetilde{\mathbb{A}}_{p,q}$ and let $\mathcal{F}\colon\mathcal{A}(Q)\to \mathbb{Z}$ be a positive integral frieze. Then $\mathcal{F}$ is unitary.
\end{theorem}
\begin{proof}
We need to show that there exists a cluster $\mathbf{x}'$ such that $\mathcal{F}(\mathbf{x}')=(1,\ldots,1)$. Let $\mathbf{x}_0$ be a cluster consisting entirely of transjective cluster variables. Its triangulation $T_0$ consists entirely of bridging arcs. Then $\mathbf{x}_0=(x_1,\ldots,x_n)$ is a cluster that satisfies the condition of Lemma \ref{lem 4.1}. If $\mathcal{F}(\mathbf{x}_0)=(1,\ldots,1)$ we are done. Otherwise Lemma \ref{lem 4.1} implies that mutating at a cluster variable $x_i$ with maximal frieze value will produce a cluster $\mathbf{x}_1=(\mathbf{x}_0\setminus\{x_i\})\cup\{x_i'\}$ such that $\mathcal{F}(x_i')<\mathcal{F}(x_i)$ and if $x_i'$ is regular then $\mathcal{F}(x_i')=1$. Therefore, if $\mathcal{F}(\mathbf{x}_1)\ne(1,\ldots,1)$ then the cluster $\mathbf{x}_1$ also satisfies the hypothesis of Lemma~\ref{lem 4.1}, and we can repeat this procedure to produce a sequence of clusters $\mathbf{x}_0,\mathbf{x}_1,\ldots,\mathbf{x}_s,\ldots$ such that $\mathbf{x}_s=(\mathbf{x}_{s-1}\setminus\{x\})\cup\{x'\}$ with $\mathcal{F}(\mathbf{x}_s)\ne(1,\ldots,1)$ and $\mathcal{F}(x')<\mathcal{F}(x)$. Since the frieze is positive integral this process must stop. Thus there is a cluster $\mathbf{x}_t$ such that $\mathcal{F}(\mathbf{x}_t)=(1,\ldots,1)$.
\end{proof}
\subsection{Friezes of type $\widetilde{\mathbb{A}}_{2,1}$}
There are precisely two positive integral friezes of type $\widetilde{\mathbb{A}}_{2,1}$ up to symmetry, and they are depicted in Figures \ref{fig:A1_2_acyclic} and \ref{fig:A1_2_cyclic}. By Theorem \ref{thm 2} both are unitary.
In the first example, the cluster $\mathbf{x}$ with $\mathcal{F}(\mathbf{x})=(1,1,1)$ is transjective and in the second example one of the cluster variables in $\mathbf{x}$ is regular. In the figures, we show the values of the friezes on the transjective component of the Auslander-Reiten quiver.
\begin{figure}[h]
\tiny
\begin{tikzpicture}[xscale=0.75]
\node at (-7.3,-1) {$\dots$};
\draw node at (12.5,-1) {$\dots$};
\foreach \n in {-2,...,5}
{
\foreach \vertex in
{0,1,2}
{
\path[black] (\n*0.5-\vertex+2*\n,-\vertex) node (x\vertex\n) {};
}
\foreach \source/\target in
{2/1,1/0}
{
\path[->,>=stealth] (x\source\n) edge[blue] (x\target\n);
}
\foreach \source/\target in
{2/0}
{
\path[->,>=stealth] (x\source\n) edge[blue, bend left=55] (x\target\n);
}
}
\foreach \nminusone/\n in
{-2/-1,-1/0,0/1,1/2,2/3,3/4,4/5}
{
\foreach \s/\t in
{1/2,0/2,0/1}
{
\path[->,>=stealth] (x\s\nminusone) edge[red] (x\t\n);
}
}
\foreach \vertex/\n/\weight in
{
0/-2/11,1/-2/26,2/-2/41,
0/-1/2,1/-1/3,2/-1/7,
0/0/1,1/0/1,2/0/1,
0/1/7,1/1/3,2/1/2,
0/2/41,1/2/26,2/2/11,
0/3/\ \ 362,1/3/153,2/3/97,
0/4/\ \ \ 2131,1/4/1351,2/4/571,
0/5/\ \ \ \ \ 18817,1/5/7953,2/5/5042
}
{
\path[black] (\n*0.5-\vertex+2*\n,-\vertex) node (x\vertex\n) {\weight};
}
\end{tikzpicture}
\caption{An $\widetilde{\mathbb{A}}_{2,1}$ frieze obtained by specializing the cluster variables of an acyclic seed to $1$. The two peripheral arcs have frieze values $2$ and $3$.}\label{fig:A1_2_acyclic}
\begin{tikzpicture}[xscale=0.75]
\node at (-7.3,-1) {$\dots$};
\draw node at (12.5,-1) {$\dots$};
\foreach \n in {-2,...,5}
{
\foreach \vertex in
{0,1,2}
{
\path[black] (\n*0.5-\vertex+2*\n,-\vertex) node (x\vertex\n) {};
}
\foreach \source/\target in
{2/1,1/0}
{
\path[->,>=stealth] (x\source\n) edge[blue] (x\target\n);
}
\foreach \source/\target in
{2/0}
{
\path[->,>=stealth] (x\source\n) edge[blue, bend left=55] (x\target\n);
}
}
\foreach \nminusone/\n in
{-2/-1,-1/0,0/1,1/2,2/3,3/4,4/5}
{
\foreach \s/\t in
{1/2,0/2,0/1}
{
\path[->,>=stealth] (x\s\nminusone) edge[red] (x\t\n);
}
}
\foreach \vertex/\n/\weight in
{
0/-2/5,1/-2/18,2/-2/13,
0/-1/3,1/-1/2,2/-1/7,
0/0/1,1/0/2,2/0/1,
0/1/7,1/1/2,2/1/3,
0/2/13,1/2/18,2/2/5,
0/3/\ \ 123,1/3/34,2/3/47,
0/4/\ \ 233,1/4/322,2/4/89,
0/5/\ \ \ \ 2207,1/5/610,2/5/843
}
{
\path[black] (\n*0.5-\vertex+2*\n,-\vertex) node (x\vertex\n) {\weight};
}
\end{tikzpicture}
\caption{An $\widetilde{\mathbb{A}}_{2,1}$ frieze obtained by specializing the cluster variables of a non-acyclic seed to $1$. The two peripheral arcs have frieze values $1$ and $5$.}
\label{fig:A1_2_cyclic}
\end{figure}
\subsection{Further unitarity questions}
It was shown in \cite{CoxCon} that every positive integral frieze of Dynkin type $\mathbb{A}_n$ is unitary, and by Theorem \ref{thm 2}, the same is true for affine type $\widetilde{\mathbb{A}}_{p,q}$. It is natural to ask whether these results can be extended to friezes with values in other integral domains, for example in quadratic integer rings. However, the following example shows that the result already fails over the Gaussian integers.
\begin{example}
Let $Q$ be the quiver $1\to 2$ and define a frieze $\mathcal{F}\colon\mathcal{A}(Q)\to\mathbb{Z}[i]$ by $\mathcal{F}(x_1)=1$ and $\mathcal{F}(x_2)=1+i$. We can visualize $\mathcal{F}$ as usual in the Auslander-Reiten quiver as follows
\[ \xymatrix{&&1+i\ar[rd]&&2-i\ar[rd]&&1\ar[rd]\\
&1\ar[ru]&&2+i\ar[ru]&&1-i\ar[ru] &&1+i}
\]
This is a non-unitary frieze of Dynkin type $\mathbb{A}_2$.
\end{example}
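The entries of this frieze can be checked directly: along the zigzag of the Auslander-Reiten quiver, a frieze of type $\mathbb{A}_2$ satisfies the recursion $a_{n-1}a_{n+1}=a_n+1$. A short Python check of ours, representing Gaussian integers by Python complex numbers:

```python
# Check (ours) that the displayed Z[i]-valued frieze of type A_2 satisfies the
# frieze recursion a_{n+1} = (a_n + 1) / a_{n-1}, and that it is 5-periodic.

def frieze_zigzag(a1, a2, length):
    """Generate frieze values along the zigzag from two initial values."""
    seq = [a1, a2]
    while len(seq) < length:
        seq.append((seq[-1] + 1) / seq[-2])
    return seq

seq = frieze_zigzag(1 + 0j, 1 + 1j, 12)
```

The generated sequence reproduces the zigzag $1,\,1+i,\,2+i,\,2-i,\,1-i,\,1,\,1+i,\ldots$ shown above.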
\subsubsection{Other Dynkin or affine types} For Dynkin types $\mathbb{D} $ and $\mathbb{E}$ there are non-unitary positive integral friezes, see \cite{FP}, and these examples also give rise to non-unitary positive integral friezes in the affine types $\widetilde{\mathbb{D}}$ and $\widetilde{\mathbb{E}}$.
\subsection*{Acknowledgements}
We thank A. Garc\'{i}a Elsener, G. Musiker and P.-G. Plamondon for helpful discussions.
\renewcommand{\thelemma}{\arabic{section}.\arabic{lemma}}
\renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}}
\renewcommand{\thecorollary}{\arabic{section}.\arabic{corollary}}
\renewcommand{\theproposition}{\arabic{section}.\arabic{proposition}}
\renewcommand{\thedefinition}{\arabic{section}.\arabic{definition}}
\begin{document}
\title{ $k$-Space Deep Learning for Parallel MRI: \\ Application to
Time-Resolved MR Angiography
}
\author{Eunju~Cha, Eung Yeop Kim,
and~Jong~Chul~Ye$^{*}$,~\IEEEmembership{Senior Member,~IEEE}
\thanks{EJC and JCY are with the Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST),
Daejeon 34141, Republic of Korea (e-mail: \{eunju.cha,jong.ye\}@kaist.ac.kr).
EYK is with Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, South Korea.
This work is supported by National Research Foundation of Korea, Grant number NRF2016R1A2B3008104.}}
\maketitle
\begin{abstract}
Time-resolved angiography with interleaved stochastic trajectories (TWIST) has been widely used
for dynamic contrast enhanced MRI (DCE-MRI). To achieve highly accelerated acquisitions,
TWIST combines the periphery of the $k$-space data from several adjacent frames to reconstruct one temporal frame.
However, this view-sharing scheme limits the true temporal resolution of TWIST.
Moreover, the $k$-space sampling patterns have been specially designed for a specific
generalized autocalibrating partial parallel acquisition (GRAPPA) factor, so it is not possible to reduce the number of view-sharing
once the $k$-space data are acquired.
To address these issues, this paper proposes a novel $k$-space deep learning approach for parallel MRI.
In particular,
we have designed our neural network so that accurate $k$-space interpolations are performed simultaneously for multiple coils by exploiting the redundancies along the coils and images.
Reconstruction results using an in vivo TWIST data set confirm that
the proposed method can immediately generate high-quality reconstruction results with various choices of view-sharing, allowing us to exploit the trade-off between spatial and temporal resolution
in time-resolved MR angiography.
\end{abstract}
\begin{IEEEkeywords}
Dynamic contrast enhanced MRI, Parallel imaging, Compressed Sensing, $k$-space, Deep learning
\end{IEEEkeywords}
\section{Introduction}
DCE-MRI is one of the important imaging protocols for clinical applications. In DCE-MRI, a series of MR images are acquired after the injection of the contrast agent to patients. DCE-MRI provides information on the physiological characteristics of tissues, such as status of blood vessels, so it is useful for stroke or cancer imaging \cite{turnbull2009dynamic,yankeelov2009dynamic}.
In particular, the time-resolved angiography with interleaved stochastic trajectories (TWIST) \cite{laub2006syngo} is widely used due to its superior temporal and spatial resolution.
In TWIST, the center and the periphery of $k$-space data are acquired at different rates. Specifically, the low frequency region is completely sampled to retain the overall image contrast, but the high frequency region is randomly sub-sampled at each time frame. Then, the high frequency regions of the $k$-space from multiple temporal frames are combined to obtain uniformly sub-sampled $k$-space data so that the missing data can be interpolated using GRAPPA \cite{griswold2002generalized}. Thanks to the aggressive under-sampling of the periphery of $k$-space data, TWIST offers a significant improvement in both temporal and spatial resolution. It is known that TWIST imaging can follow the perfusion dynamics more closely, which is useful for time-resolved MR angiography (tMRA)\cite{nael2009time}.
\begin{figure*}[!hbt]
\center{
\includegraphics[width=16cm]{./architecture_v2.png}
}
\caption{The architecture of $k$-space deep learning for parallel MRI. Here, IFT stands for inverse Fourier transform.}
\label{fig:scheme}
\end{figure*}
However, the nominal temporal resolution of TWIST is not the true one, owing to the extensive view-sharing across several adjacent frames, so the quantitative study of perfusion dynamics using TWIST alone is not usually recommended.
Another important disadvantage of TWIST is that the sampling pattern is designed for a specific GRAPPA acceleration factor, so it is not possible to
change the number of view sharing once the acquisition is done.
Despite the needs for new image reconstruction algorithms,
there are several technical difficulties in developing methods to address these problems.
Since the $k$-space samples from the reduced view-sharing are
a subset of uniformly subsampled $k$-space data, the sampling pattern is not incoherent, so the existing compressed sensing (CS) approaches\cite{jung2009k,lustig2007sparse} have difficulties in removing aliasing artifacts.
In our previous work \cite{cha2017true}, we therefore proposed to improve the temporal resolution of TWIST via $k$-space interpolation
using ALOHA \cite{7547372, lee2016acceleration, lee2016reference} that can synergistically combine parallel MRI (pMRI) and CS-MRI by exploiting the sparsity and the inter-coil redundancy in a unified matrix completion framework. However, the computational cost for TWIST reconstruction using ALOHA was very expensive due to the multiple matrix factorization to allow 4-dimensional TWIST imaging. Moreover, spatial resolution losses are often observed when the number of view sharing is not sufficient. Therefore, a new approach is required to overcome this limitation.
Recently, deep learning approaches have been extensively employed for computer vision applications thanks to the availability of the massive datasets and high performance graphical processing units (GPUs) \cite{krizhevsky2012imagenet, he2016deep}.
In MR literature, the works in \cite{wang2016accelerating,hammernik2018learning,kwon2017parallel} were among
the first that applied deep learning approaches to CS MRI.
These works were followed by novel
extension using deep residual learning \cite{lee2018deep}, domain adaptation \cite{han2017deep}, data consistency layers \cite{schlemper2018deep}, etc.
All these pioneering works have consistently demonstrated superior reconstruction performances over the compressed sensing approaches \cite{lustig2007sparse,jung2009k,lingala2011accelerated,jin2016general,lee2016acceleration}
at significantly lower run-time computational complexity.
Therefore, the purpose of this research is to develop a deep learning approach that
can improve the temporal resolution of TWIST imaging by reducing the number of view-sharing. Moreover,
we aim at developing an algorithm that can reconstruct images for various choices of view-sharing, so as to explore the trade-off between spatial and temporal resolution.
However, the application of deep learning for TWIST requires overcoming two major technical hurdles.
First, to be backward compatible with TWIST imaging as well as to allow reconstruction with various numbers of view-sharing,
the deep network is required to learn $k$-space interpolation kernels.
However, most of the popular deep learning MR reconstruction algorithms
are either in the form of image domain post-processing \cite{kwon2017parallel,lee2018deep,han2017deep}, or iterative updates between the $k$-space and the image domain using a cascaded network \cite{hammernik2018learning,wang2016accelerating,schlemper2018deep},
which are different from GRAPPA-based TWIST protocol.
Second, with reduced view sharing, standard GRAPPA fails to provide reasonable reconstruction quality,
so there are no ground-truth data that can be used as labels.
One of the main contributions of this paper is therefore to show that the recent $k$-space deep learning approach \cite{han2018k} is very versatile in meeting these technical challenges. More specifically, the $k$-space deep learning \cite{han2018k} was inspired by the recent mathematical discovery
that a deep convolutional neural network can be derived as a data-driven decomposition of Hankel matrix so that a neural network can be effectively designed
in the domain where Hankel matrix can have a low-rank structure \cite{ye2018deep}.
Recall that the basic idea of ALOHA for parallel MRI \cite{jin2016general}
is based on the observation that the extended Hankel matrix in the $k$-space domain constructed by stacking Hankel matrices from each coil side-by-side has a low-rank structure.
This implies that, in contrast to the common practice that the neural networks are implemented in the image domain,
a better neural network should be constructed in the $k$-space domain by stacking multi-coil $k$-space data along the channel direction of the neural network. Then,
our deep neural network is trained to learn the relationship between the multi-coil $k$-space channel data and the channel-by-channel reconstructed coil images as shown in Fig.~\ref{fig:scheme}.
To overcome the lack of ground-truth data for different temporal resolutions and allow flexible reconstruction for different numbers of view-sharing, our neural network is designed to learn the $k$-space interpolation relationship between the minimum number of $k$-space samples and the fully sampled $k$-space data from the GRAPPA reconstruction.
Interestingly, the trained neural network generalizes very well across all temporal resolutions, since
it learns the Fourier-domain structure rather than the image content.
As a byproduct, our theory and numerical verification can address some of the fundamental issues in designing a deep neural network for MR reconstruction.
In particular, the current practice of splitting real and imaginary channels as well as multi-coil data is valid for $k$-space interpolation using a neural network,
because it preserves the low-rank nature of the Hankel-structured matrix.
In addition, our theoretical analysis confirms that we do not need to invent any new non-linearity for complex-valued MR image reconstruction problems, because the main role of ReLU's positivity constraint is to allow a conic decomposition of the Hankel matrix.
The implication of conic decomposition will be detailed later.
\section{Theory}
\subsection{Problem Formulation }
In TWIST, the center and periphery of $k$-space data are sampled at different rates. More specifically, at each time frame, the center of $k$-space data (A region in Fig. \ref{fig:TWIST_concept}) is fully sampled while the periphery of $k$-space data (B region in Fig. \ref{fig:TWIST_concept}) is partially sampled. Thus, the acquired
$k$-space samples can be reduced for each frame, resulting in a reduced acquisition time. However, due to the strongly subsampled high frequency $k$-space data, individual reconstruction of each frame provides degraded images. Therefore, high frequency $k$-space data should be combined from adjacent frames to create a time frame. The predominant justification for this sliding window approach is that the image contrast is assumed to be dependent on the low-frequency $k$-space center, so that the dynamics of the contrast agent are not altered by such view sharing.
However, as shown in our experimental section, this common belief is not always true because the temporal dynamics of the contrast agents are degraded by sharing views. Therefore, the actual temporal resolution of TWIST imaging is determined by the number of view-sharing.
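The view-sharing scheme can be sketched in a few lines. The following Python fragment is illustrative only (the grid size, centre width, and interleave pattern are invented, not the actual TWIST parameters): it builds a 1-D analogue in which the centre A is sampled at every frame and the periphery is split into interleaved subsets, so that sharing five adjacent frames restores full periphery coverage.

```python
import numpy as np

# Hypothetical 1-D illustration of view-sharing (all parameters invented, not
# the actual TWIST protocol): the centre A is sampled at every frame, while the
# periphery is split into five interleaved subsets acquired in round-robin.

N, center = 60, 12                      # k-space lines; half-width of region A
k = np.arange(N) - N // 2
A = np.abs(k) < center                  # low-frequency region, fully sampled

def periphery_mask(frame, n_interleaves=5):
    """Periphery subset acquired at a given frame index."""
    return (~A) & (np.arange(N) % n_interleaves == frame % n_interleaves)

single = A | periphery_mask(0)          # coverage of one frame alone
shared = A.copy()
for f in range(5):                      # view-sharing over 5 adjacent frames
    shared = shared | periphery_mask(f)
```

A single frame leaves gaps in the periphery, while the union over five adjacent frames covers every $k$-space line, mirroring the trade-off between acquisition speed and true temporal resolution discussed above.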
\begin{figure}[!b]
\center{
\includegraphics[width=7cm]{./Figure_TWIST_vs_v3.png}
}
\caption{View-sharing scheme for our carotid vessel data. The center and periphery of $k$-space are designated A and B, respectively. (a) Conventional scheme for 2D GRAPPA reconstruction, and (b) an example of reduced view sharing.}
\label{fig:TWIST_concept}
\end{figure}
There are different types of TWIST sampling patterns. For example, the one in Fig. \ref{fig:TWIST_concept}(a) was developed specifically for 2-D GRAPPA reconstruction,
where the high frequency regions of five time frames ($B_{i-2}, \cdots, B_{i+2}$) are integrated to generate 2-D uniformly sub-sampled $k$-space data with downsampling factors of three and two along the $k_x$ and $k_y$ directions, respectively. Then, 2-D GRAPPA \cite{griswold2002generalized} is used to interpolate the missing elements in that $k$-space. Since the net sliding window corresponds to 9 frames (five B regions and four A
frames), the resulting temporal resolution is severely impaired.
Unlike the existing TWIST, which utilizes the five adjacent frames for the 2-D GRAPPA reconstruction, we are interested in using various numbers of
view-sharing.
For example, Fig. \ref{fig:TWIST_concept}(b) shows a case where the number of view-sharing is reduced to two frames, which is the minimum number of view-sharing considered in this paper.
The reduced view-sharing results in a highly under-sampled and irregular sampling pattern, to which the existing GRAPPA algorithm is difficult to apply.
Therefore, our goal is to develop a multi-coil deep learning approach to interpolate the missing $k$-space data.
To provide a mathematical formulation of our imaging problem,
the spatial Fourier transform of a function $x:\mathbb{R}^2\to\mathbb{R}$ is first defined by
\begin{align*}
\hat{x}(\mathbf{k})=\mathcal{F}[x](\mathbf{k}):=\int_{\mathbb{R}^2} e^{-\iota\mathbf{k}\cdot {\mathbf r}}x({\mathbf r})d{\mathbf r},
\end{align*}
with spatial frequency $\mathbf{k}\in\mathbb{R}^2$ and $\iota=\sqrt{-1}$.
Let $\{\mathbf{k}_n\}_{n=1}^N$, for some integer $N\in\mathbb{N}$, be a finite collection of sampling points of the $k$-space
conforming to the Nyquist sampling rate.
Accordingly, the discretized $k$-space data $\widehat{\mathbf x}\in {\mathbb C}^N$ is introduced by
\begin{equation}
\widehat {\mathbf x} = \begin{bmatrix} \widehat x({\mathbf k}_1) & \cdots & \widehat x({\mathbf k}_N) \end{bmatrix}^T \ .
\end{equation}
In parallel MRI, the unknown 2-D image $g_i({\mathbf r}), {\mathbf r}\in \mathbb{R}^2$ from the $i$-th coil can be represented as
\begin{equation}\label{eq:gi}
g_i({\mathbf r}) = s_i({\mathbf r}) x({\mathbf r}),\quad i=1,\cdots, P,
\end{equation}
where $s_i({\mathbf r})$ denotes the $i$-th coil sensitivity map, $x({\mathbf r})$ is an unknown image, and $P$ denotes the number of coils.
Then, the $k$-space data at the $i$-th coil
is defined by
\begin{align*}
\widehat g_i(\mathbf{k})= \mathcal{F}[g_i](\mathbf{k})
\end{align*}
whose discretized $k$-space data $\widehat{\mathbf g}_i\in {\mathbb C}^N$ is denoted by
\begin{equation}\label{eq:coil}
\widehat {\mathbf g}_i = \begin{bmatrix} \widehat g_i({\mathbf k}_1) & \cdots & \widehat g_i({\mathbf k}_N) \end{bmatrix}^T \ .
\end{equation}
For a given under-sampling pattern $\Lambda$ from the reduced view sharing, let
the downsampling operator ${{\mathcal P}}_\Lambda: {\mathbb C}^{N} \to {\mathbb C}^{N}$
be defined as
\begin{eqnarray}
\left[{{\mathcal P}}_\Lambda[\hat {\mathbf x}] \right]_j= \begin{cases} \left[\widehat{\mathbf x}\right]_j &j \in \Lambda \\
0, & \mbox{otherwise} \end{cases} \ .
\end{eqnarray}
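A minimal sketch of the sampling operator ${\mathcal P}_\Lambda$ (our own illustration, not code from the paper): it keeps the sampled $k$-space entries and zeroes out the rest, and, being a projection, it is idempotent.

```python
import numpy as np

# Minimal sketch (ours) of the sampling projection P_Lambda: keep the sampled
# k-space entries and zero out the rest.

def P_Lambda(x_hat, Lambda):
    """Project a k-space vector onto the sampling index set Lambda."""
    out = np.zeros_like(x_hat)
    out[Lambda] = x_hat[Lambda]
    return out

rng = np.random.default_rng(0)
x_hat = rng.standard_normal(16) + 1j * rng.standard_normal(16)
Lambda = np.array([0, 1, 4, 7, 11])   # illustrative sampling set
y = P_Lambda(x_hat, Lambda)
```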
The main goal of parallel imaging is then to exploit the common signal $x({\mathbf r})$ that is measured through multiple channels.
Specifically, our image reconstruction problem is given by
\begin{eqnarray}\label{eq:fwd}
\min_{x,\{g_i\}_{i=1}^{P}} & &R\left(x,\{g_i\}_{i=1}^{P}\right) \\
\mbox{subject to} &&\hat {\mathbf y}_i :={{\mathcal P}}_\Lambda[\hat {\mathbf g}_i] ,\quad i=1,\cdots, P
\end{eqnarray}
where $R\left(x,\{g_i\}_{i=1}^{P}\right)$ denotes some regularization term that depends on the unknown signal $x({\mathbf r})$
and coil images $g_i({\mathbf r}),i=1,\cdots, P$. In CS-MRI \cite{lustig2007sparse,jung2009k,lingala2011accelerated},
the regularization term is usually chosen to have minimum non-zero support in some sparsifying
transform domain.
\subsection{Multi-coil $k$-Space Low-Rank Hankel Matrix}
Although ALOHA \cite{jin2016general,lee2016acceleration} also takes advantage of an image-domain sparsifying transform as in the conventional CS-MRI algorithms,
in contrast to the CS-MRI approaches, ALOHA performs direct
$k$-space interpolation.
Specifically, according to the theory of ALOHA \cite{ye2017compressive,jin2016general},
if the underlying signal in the image domain is sparse and can be described as a signal with a finite rate of innovation (FRI) \cite{vetterli2002sampling},
the associated Hankel matrix from its $k$-space data
is low-rank.
Therefore, if some of $k$-space data are missing,
we can construct an appropriate weighted Hankel matrix with missing elements such that the missing elements are recovered
using low rank Hankel matrix completion approaches \cite{candes2009exact,gross2011recovering}.
Besides the low-rank property originating from sparsity in the transform domain, there exists an additional low-rank relationship that is unique to parallel MRI.
More specifically, Eq.~\eqref{eq:gi} leads to the following inter-coil relationship:
$$g_i({\mathbf r})s_j({\mathbf r}) - g_j({\mathbf r})s_i({\mathbf r}) = 0, \quad \forall i\neq j, $$
which is equivalent to the $k$-space inter-coil annihilating filter relationship \cite{jin2016general}:
\begin{eqnarray}\label{eq:intercoil}
\widehat g_i \circledast \widehat s_j - \widehat g_j\circledast \widehat s_i = 0,\quad \forall i\neq j .
\end{eqnarray}
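This annihilation relation is easy to verify numerically. In the discrete periodic setting, the identity $g_i s_j = g_j s_i$ in image space becomes an equality of circular convolutions between the coil spectra; a sketch of ours with random surrogate images and coil maps:

```python
import numpy as np

# Numerical check (ours, with random surrogate images and coil maps) of the
# inter-coil annihilation: g_i * s_j = g_j * s_i in image space, hence the
# circular convolutions of their k-space spectra cancel.

rng = np.random.default_rng(1)
N = 64
x = rng.standard_normal(N)                       # surrogate image
s1, s2 = rng.standard_normal((2, N))             # surrogate coil sensitivities
g1, g2 = s1 * x, s2 * x                          # coil images

def circ_conv(a, b):
    """Circular convolution computed via the DFT."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

g1h, g2h, s1h, s2h = (np.fft.fft(v) for v in (g1, g2, s1, s2))
lhs = circ_conv(g1h, s2h) - circ_conv(g2h, s1h)  # vanishes up to round-off
```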
In \cite{jin2016general}, we formally showed that
the inter-coil annihilating filter relationship in \eqref{eq:intercoil} leads to the low-rank property of the
following extended Hankel structured matrix:
\begin{eqnarray}\label{eq:PHankel}
\mathscr{H}_{d|P}(\widehat{\mathbf G}) = \begin{bmatrix} \mathscr{H}_d(\widehat{\mathbf g}_1) & \cdots & \mathscr{H}_d(\widehat{\mathbf g}_P) \end{bmatrix}
\end{eqnarray}
where $$\widehat{\mathbf G}=\begin{bmatrix} \widehat{\mathbf g}_1 & \cdots & \widehat{\mathbf g}_P \end{bmatrix} \in {\mathbb C}^{N\times P}$$
with the $k$-space measurement $\widehat {\mathbf g}_i$ in \eqref{eq:coil}, and
$\mathscr{H}_d(\widehat {\mathbf g}_i)$ denotes a
Hankel matrix constructed from $\widehat{\mathbf g}_i$ with $d$ denoting the
matrix pencil size. For more details on the construction of Hankel matrices and their relation to the convolution, see Appendix A in the Supplementary Material.
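The low-rank premise can be illustrated with a toy FRI signal: for an image-domain signal consisting of $r$ spikes, the Hankel matrix built from its $k$-space samples has rank $r$, far below its dimensions (a sketch of ours; the spike locations and pencil size are arbitrary).

```python
import numpy as np

# Illustration (ours): for an r-spike (FRI) signal, the Hankel matrix of its
# k-space samples has rank r, far below its dimensions -- the ALOHA premise.

N, d = 64, 16                                 # number of samples, pencil size
x = np.zeros(N)
x[[9, 30]] = [1.0, -2.5]                      # r = 2 spikes in the image domain
x_hat = np.fft.fft(x)

def hankel(v, d):
    """Build the (len(v) - d + 1) x d Hankel matrix H[i, j] = v[i + j]."""
    return np.array([v[i:i + d] for i in range(len(v) - d + 1)])

H = hankel(x_hat, d)
```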
Therefore, if some of $k$-space data are missing,
the missing elements are recovered
using low rank Hankel matrix completion approaches \cite{candes2009exact,gross2011recovering}:
\begin{eqnarray}\label{eq:EMaC}
(MC)
&\min\limits_{\widehat {\mathbf Z}\in {\mathbb C}^{N \times P}} & \textsc{rank}~ \mathscr{H}_{d|P} (\widehat {\mathbf Z}) \\
&\mbox{subject to } & {{\mathcal P}}_\Lambda[\widehat{\mathbf g}_i ] = {{\mathcal P}}_\Lambda[\widehat {\mathbf z}_i] ,\quad i=1,\cdots, P \nonumber \ .
\end{eqnarray}
The low-rank Hankel matrix completion problem $(MC)$ can be solved in various ways, and ALOHA employs
the matrix factorization approaches \cite{jin2016general,lee2016acceleration,lee2016reference}.
However, the main technical issue is its relatively expensive computational cost for matrix factorization. In the following section, we show that a deep learning approach can address this problem by handling the matrix decomposition in a fully data-driven way.
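As a toy stand-in for $(MC)$ (not the ALOHA algorithm itself), the following Cadzow-style alternation between a rank-$r$ truncation of the Hankel lift, the Hankel-structure projection, and data consistency on the sampled entries recovers a few missing $k$-space samples of an FRI signal; all sizes are invented for illustration.

```python
import numpy as np

# Toy stand-in (ours, not ALOHA) for low-rank Hankel completion: alternate
# rank-r truncation, Hankel-structure projection, and data consistency.

N, d, r = 32, 12, 2
x = np.zeros(N); x[[5, 17]] = [1.0, 2.0]             # 2-spike (FRI) signal
x_hat = np.fft.fft(x)
mask = np.ones(N, bool); mask[[3, 20, 27]] = False   # missing k-space entries

def hankel(v, d):
    return np.array([v[i:i + d] for i in range(len(v) - d + 1)])

def dehankel(H, N, d):
    """Average anti-diagonals back to a length-N vector (structure projection)."""
    v, cnt = np.zeros(N, complex), np.zeros(N)
    for i in range(N - d + 1):
        for j in range(d):
            v[i + j] += H[i, j]; cnt[i + j] += 1
    return v / cnt

z = np.where(mask, x_hat, 0)
err0 = np.linalg.norm(z - x_hat)                     # error of zero-filling
for _ in range(100):
    U, s, Vt = np.linalg.svd(hankel(z, d), full_matrices=False)
    z = dehankel((U[:, :r] * s[:r]) @ Vt[:r], N, d)  # rank-r truncation
    z[mask] = x_hat[mask]                            # data consistency
err = np.linalg.norm(z - x_hat)
```

On this well-conditioned toy problem the iteration drives the error far below that of zero-filling; ALOHA replaces the repeated SVD with matrix factorization for efficiency.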
\subsection{From ALOHA to Deep Neural Network}
Recently, we showed that the Hankel matrix decomposition in ALOHA is closely related to a deep neural network \cite{ye2018deep}.
To understand this link, the ALOHA optimization problem $(MC)$ is converted to the following regression problem:
\begin{eqnarray}
\label{eq:image_regression}
(MC') \quad \quad\quad \min_{\widehat{{\mathbf Z}} \in {\mathbb C}^{N\times P} } && \sum_{i=1}^P\left\|g_i- {{\mathcal F}}^{-1}[\widehat{{\mathbf z}}_i] \right\|^2 \\
\mbox{subject to}&& \textsc{rank}~ \mathscr{H}_{d|P} (\widehat {\mathbf Z}) = Q \label{eq:sol} \\
&&{{\mathcal P}}_\Lambda[\widehat{\mathbf g}_i ] = {{\mathcal P}}_\Lambda[\widehat {\mathbf z}_i] ,\quad i=1,\cdots, P \nonumber \ ,
\end{eqnarray}
where $Q$ is the estimated rank of the Hankel structured matrix and ${{\mathcal F}}$ denotes the Fourier transform. Note that
the low rankness is enforced in the $k$-space, whereas the cost function is defined as
the image reconstruction error for each coil.
Now, for any feasible solution $\widehat{\mathbf Z}$ for \eqref{eq:sol}, suppose that the singular value decomposition of the associated Hankel structured matrix is given by
$\mathscr{H}_{d|P}(\widehat{\mathbf Z})={\mathbf U} \bold{\Sigma} {\mathbf V}^\top$, where ${\mathbf U}=[{\mathbf u}_1~\cdots~{\mathbf u}_Q] \in {\mathbb R}^{N\times Q}$ and ${\mathbf V}=[{\mathbf v}_1~\cdots~{\mathbf v}_Q] \in {\mathbb R}^{d\times Q}$ are the left and right singular vector basis matrices, respectively; $\bold{\Sigma} = (\sigma_{ij}) \in {\mathbb R}^{Q \times Q}$ is the diagonal matrix with singular values. Let ${\boldsymbol {\Psi}}=[{\boldsymbol{\psi}}_1,\cdots,{\boldsymbol{\psi}}_Q]$ and $\widetilde{{\boldsymbol {\Psi}}}=[\widetilde{\boldsymbol{\psi}}_1,\cdots,\widetilde{\boldsymbol{\psi}}_Q]$ $\in {\mathbb R}^{d \times Q}$ be a pair of matrices satisfying the low-dimensional subspace constraint:
\begin{eqnarray}\label{eq:projection}
{\boldsymbol {\Psi}} \widetilde {\boldsymbol {\Psi}}^{\top} = {\mathbf P}_{R({\mathbf V})} ,
\end{eqnarray}
where ${\mathbf P}_{R({\mathbf V})}$ denotes the projection matrix to the range space of ${\mathbf V}$. Similarly, another pair of matrices ${\boldsymbol {\Phi}}=[{\boldsymbol{\phi}}_1,\cdots,{\boldsymbol{\phi}}_M]$ and $\widetilde{{\boldsymbol {\Phi}}}=[\widetilde{\boldsymbol{\phi}}_1,\cdots,\widetilde{\boldsymbol{\phi}}_M] \in {\mathbb R}^{N \times M}$
satisfy the so-called frame condition \cite{ye2018deep}:
\begin{eqnarray}\label{eq:projectionU}
{\boldsymbol {\Phi}} \widetilde {\boldsymbol {\Phi}}^{\top} = {\mathbf I}_{N},
\end{eqnarray}
where ${\mathbf I}_N$ denotes the $N\times N$ identity matrix.
Then, we can obtain the following matrix equality:
\begin{eqnarray}\label{eq:equiv}
\mathscr{H}_{d|P}\left(\widehat {\mathbf Z} \right) &=& \widetilde{\boldsymbol {\Phi}} {\boldsymbol {\Phi}}^{\top}\mathscr{H}_{d|P}\left(\widehat {\mathbf Z} \right) {\boldsymbol {\Psi}} \tilde {\boldsymbol {\Psi}}^{\top} = \widetilde{\boldsymbol {\Phi}} {\mathbf{C}} \tilde {\boldsymbol {\Psi}}^{\top} \\
&=& \sum_{k=1}^M\sum_{l=1}^Q [{\mathbf{C}}]_{kl}\widetilde {\mathbf B}^{kl}
\end{eqnarray}
with $ [{\mathbf{C}}]_{kl}$ denoting the $(k,l)$-element of ${\mathbf{C}} \in {\mathbb C}^{M\times Q}$,
where
\begin{eqnarray}\label{eq:C}
{\mathbf{C}} = {\boldsymbol {\Phi}}^{\top}\mathscr{H}_{d|P}\left(\widehat{\mathbf Z}\right) {\boldsymbol {\Psi}}
\end{eqnarray}
is the so-called convolution framelet coefficient, and
\begin{eqnarray}\label{eq:B}
\widetilde{\mathbf B}^{kl} = \widetilde{\boldsymbol{\phi}}_k\widetilde{\boldsymbol{\psi}}_l^\top,\quad k=1,\cdots, M, ~l=1,\cdots, Q
\end{eqnarray}
refers to the matrix bases that decompose the Hankel matrix.
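The decomposition identity \eqref{eq:equiv} can be sanity-checked numerically in the simplest setting, taking ${\boldsymbol \Phi}=\widetilde{\boldsymbol \Phi}={\mathbf I}$ and ${\boldsymbol \Psi}=\widetilde{\boldsymbol \Psi}={\mathbf V}$, so that ${\boldsymbol \Psi}\widetilde{\boldsymbol \Psi}^\top$ is exactly the projection onto the row space (an illustration of ours, with a generic rank-$Q$ matrix standing in for the Hankel lift):

```python
import numpy as np

# Sanity check (ours) of H = Phi_t Phi^T H Psi Psi_t^T with the trivial frame
# Phi = Phi_t = I and Psi = Psi_t = V, the top right singular vectors, so that
# Psi @ Psi_t.T is the projection onto the row space of H.

rng = np.random.default_rng(2)
Q, n, d = 3, 20, 8
H = rng.standard_normal((n, Q)) @ rng.standard_normal((Q, d))  # rank-Q lift
U, s, Vt = np.linalg.svd(H, full_matrices=False)
Psi = Psi_t = Vt[:Q].T            # Psi @ Psi_t.T projects onto range(V)
C = H @ Psi                       # framelet coefficients (here Phi = I)
H_rec = C @ Psi_t.T               # recomposition reproduces H exactly
```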
\begin{figure}[!tb]
\centering
\includegraphics[width=1.05\linewidth]{./geometry.png}
\caption{Geometry of a single-layer encoder-decoder network. The signal is first lifted into a higher dimensional space, where it is decomposed into a positive combination of
bases. During this procedure, the missing $k$-space data (black) are placed inside the conic hull of the bases, so that they can be interpolated
during the recomposition with the conic bases. When this high dimensional conic decomposition procedure is observed in the original signal space, it becomes a one-level encoder-decoder neural network with ReLU.
}
\label{fig:geometry}
\end{figure}
One of the most important discoveries in \cite{ye2018deep} is that if
this high-dimensional operation is un-lifted to the original signal space, it becomes
a single layer neural network with encoder-decoder architecture:
\begin{eqnarray} \label{eq:decomp0}
{\mathbf{C}} = {\boldsymbol {\Phi}}^\top \left( \widehat {\mathbf Z} \circledast \overline {\boldsymbol {\Psi}}\right) , ~\quad
\widehat{\mathbf Z} =
\left(\widetilde{\boldsymbol {\Phi}} {\mathbf{C}}\right) \circledast \nu(\tilde {\boldsymbol {\Psi}}),
\end{eqnarray}
where $\circledast$ is the multi-channel input multi-channel output convolution, and
the convolutional filters of encoder and decoder layers are respectively given as follows:
\begin{align} \label{eq:enc_dec_filter}
&\overline{\boldsymbol {\Psi}}:=
\begin{pmatrix}
\overline{\boldsymbol{\psi}}_{1} & \cdots & \overline{\boldsymbol{\psi}}_{Q}
\end{pmatrix}
\in {\mathbb C}^{d \times r}, ~~\quad
\nu(\widetilde{{\boldsymbol {\Psi}}}):=
\begin{pmatrix}
\widetilde{{\boldsymbol{\psi}}}_{1}
\\
\vdots
\\
\widetilde{{\boldsymbol{\psi}}}_{Q}
\end{pmatrix} \in {\mathbb C}^{dr \times 1}.
\end{align}
where $\overline{\boldsymbol{\psi}}_i$ denotes the flipped version of the vector ${\boldsymbol{\psi}}_i$.
The main advantage of this explicit representation of the low-rank Hankel matrix decomposition by the encoder-decoder structure is that it enables filter learning from the training data.
In particular, to prevent the learned bases from being significantly different from the training data, we enforce the following positivity constraint on the framelet coefficients:
$$[{\mathbf{C}} ]_{kl} \geq 0,~\forall k,l $$
Thus, the signal should live in the conical hull of the learned bases, so that learned bases and the signals are forced to live in geometric proximity (see Fig.~\ref{fig:geometry}).
Interestingly, this conic (nonnegative) coding scheme is known
to allow parts-based representation, which is the key idea of non-negative matrix factorization (NMF) \cite{
lee1997unsupervised,lee1999learning,lee2001algorithms}.
This positivity constraint can be implemented using rectified linear unit (ReLU) during training.
Under this constraint, $(MC')$ can be converted to the following equivalent problem:
\begin{eqnarray}
\label{eq:image_regression2}
\quad \quad\quad \min_{\widehat{{\mathbf Z}} \in {\boldsymbol{\mathcal H}}^0 } && \sum_{i=1}^P\left\|g_i- {{\mathcal F}}^{-1}[\widehat{{\mathbf z}}_i] \right\|^2 \\
\mbox{subject to}
&&{{\mathcal P}}_\Lambda[\widehat{\mathbf g}_i ] = {{\mathcal P}}_\Lambda[\widehat {\mathbf z}_i] ,\quad i=1,\cdots, P \nonumber \ .
\end{eqnarray}
where ${\boldsymbol{\mathcal H}}^0$ denotes a constrained signal space:
\begin{eqnarray}
{\boldsymbol{\mathcal H}}^0 &=& \left\{ {\mathbf G} \in {\mathbb R}^{N} \,\Big|\,\ {\mathbf G} = \left(\tilde{\boldsymbol {\Phi}} {\mathbf{C}} \right) \circledast \nu(\tilde {\boldsymbol {\Psi}} ), \right. \notag \\
&& \left. {\mathbf{C}} = {\boldsymbol {\Phi}}^{\top} \left( {\mathbf G} \circledast \overline {\boldsymbol {\Psi}} \right) ,\right. \notag\\
&& \left. [{\mathbf{C}} ]_{kl} \geq 0,~\forall k,l \right\} \notag
\end{eqnarray}
where the convolution framelet coefficients $ [{\mathbf{C}} ]_{kl}$ are enforced to be non-negative.
Then,
for a given $P$-channel training data set $\{\widehat{\mathbf y}_i^{(t)}, g_i^{(t)}\}_{i,t=1}^{P,T}$, where $\widehat {\mathbf y}_i^{(t)}$ and $g_i^{(t)}$ denote the $t$-th batch
under-sampled $k$-space data and the corresponding ground-truth image, respectively, from the $i$-th coil,
the network training problem can be formulated as follows:
\begin{eqnarray}\label{eq:newcost}
\min_{ {\boldsymbol {\Psi}}, \widetilde{\boldsymbol {\Psi}}\in {\mathbb R}^{2d \times Q}} \sum_{i=1}^P\sum_{t=1}^T\left\|g_i^{(t)}- {{\mathcal F}}^{-1}{{\mathcal K}}(\widehat{\mathbf y}_i^{(t)};{\boldsymbol {\Psi}},\widetilde{\boldsymbol {\Psi}})\right\|^2 .
\end{eqnarray}
Here, the operator ${{\mathcal K}}: {\mathbb C}^{N} \to {\mathbb C}^{N}$ denotes the encoder-decoder network.
The geometric implication of this training procedure for $k$-space interpolation is illustrated in
Fig.~\ref{fig:geometry}, where 1-D single coil $k$-space data is used as an example for simplicity.
Specifically, the original $k$-space data with missing element is first {\em lifted} to higher dimensional space via Hankel matrix,
which is then decomposed using the matrix bases $\widetilde{\mathbf B}_i^{kl}$ in \eqref{eq:B}.
Here, the training goal is to find the conic bases such that the measured and missing $k$-space data can
be represented as the conic (nonnegative) combination of the resulting basis so that the interpolation can be readily done during
the signal recomposition step using the bases.
When this conic coding procedure is unlifted to the original lower dimensional space, the interpolated $k$-space data
appear. When this lifting, conic decomposition, and unlifting
procedure is viewed from the original $k$-space domain,
it becomes a one-level encoder-decoder neural
network with ReLU. In other words, an encoder-decoder network can be understood as a signal space manifestation of the
conic coding of the signal being lifted to a higher-dimensional space.
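The one-level encoder-decoder/conic-coding equivalence described above can be illustrated with a toy numpy sketch. Here the Hankel lifting uses wrap-around windows, and the encoder filters are an orthogonal basis paired with its negation so that the conic (ReLU) representation is exact; these are illustrative assumptions, not the trained filters of the actual network.

```python
import numpy as np

def hankel_lift(x, d):
    """Lift a 1-D signal into an n x d wrap-around Hankel matrix."""
    n = len(x)
    return np.stack([x[(np.arange(d) + k) % n] for k in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(16)              # stand-in for 1-D k-space data
H = hankel_lift(x, d=4)                  # lifting to a higher-dimensional space

# Encoder/decoder filters: an orthogonal basis B paired with -B (illustrative
# assumption) so that every lifted signal has an exact conic representation.
B = np.linalg.qr(rng.standard_normal((4, 4)))[0]
Psi = np.concatenate([B, -B], axis=1)    # d x Q filters with Q = 2d
Psi_tilde = Psi                          # decoder filters (here: same as encoder)

C = np.maximum(H @ Psi, 0.0)             # ReLU = non-negative (conic) coefficients
H_hat = C @ Psi_tilde.T                  # conic recomposition
x_hat = H_hat[:, 0]                      # unlifting (first Hankel column)
```

Because $\mathrm{ReLU}(a)-\mathrm{ReLU}(-a)=a$, the recomposition reproduces the lifted matrix exactly even though all framelet coefficients are constrained to be non-negative.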
\begin{figure}[!t]
\center{
\includegraphics[width=9.cm]{./coordinate_v2.png}
}
\caption{Coordinate system for the data set. Here, VS stands for the number of view-sharing.}
\label{fig:coordinate}
\end{figure}
The idea can be further extended to the multi-layer deep convolutional framelet expansion, where the encoder and decoder convolution filters $\overline{\boldsymbol {\Psi}}, \nu(\tilde{\boldsymbol {\Psi}}) \in {\mathbb R}^{d\times Q}$
are represented as a cascaded convolution of short filters. For more details, see \cite{han2018k}.
\begin{figure*}[!hbt]
\center{
\includegraphics[width=16cm]{./network.png}
}
\caption{ Network architecture of tight-frame U-net.}
\label{fig:network}
\end{figure*}
\subsection{Sparsification}
Many MR images can be sparsified using finite differences
\cite{jin2016general}.
In this case, we can easily see that the extended Hankel matrix from the weighted $k$-space data given by
\begin{eqnarray}
\begin{bmatrix} \mathscr{H}_d(\widehat {\mathbf h}\odot\hat {\mathbf g}_1) &\cdots & \mathscr{H}_d(\widehat {\mathbf h}\odot\hat {\mathbf g}_P)\end{bmatrix}
\end{eqnarray}
has lower rank than \eqref{eq:PHankel}, where
$\odot$ refers to the element-wise multiplication and the weight $\widehat{\mathbf h}$ is given by \cite{jin2016general,ye2017compressive}:
\begin{eqnarray}\label{eq:h}
\widehat{\mathbf h} = \begin{bmatrix}\widehat h({\mathbf k}_1) & \cdots & \widehat h({\mathbf k}_N)\end{bmatrix}^T \in {\mathbb C}^{N}
\end{eqnarray}
with
\begin{eqnarray}
\widehat h({\mathbf k}) := {{\mathcal F}}[h]({\mathbf k}) = \sin(\pi|{\mathbf k}|),\quad |{\mathbf k}|\leq \frac{1}{2} \ .
\end{eqnarray}
Thus, the deep neural network is applied to the weighted $k$-space data to estimate
the missing spectral data $\widehat h({\mathbf k})\widehat g_i({\mathbf k})$, after which the original $k$-space data is obtained
by dividing by the same weight.
This can be easily implemented using a weighting and unweighting layer as shown in Fig.~\ref{fig:scheme}.
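A minimal numpy sketch of the weighting/unweighting layers follows; the grid size, the small-weight threshold, and the handling of the DC sample are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D grid of normalized k-space coordinates in [-1/2, 1/2).
N = 64
k = np.linspace(-0.5, 0.5, N, endpoint=False)
h_hat = np.sin(np.pi * np.abs(k))        # the weight: F[h](k) = sin(pi |k|)

g_hat = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # stand-in k-space data

weighted = h_hat * g_hat                 # weighting layer (element-wise)
# ... the CNN would interpolate the missing entries of `weighted` here ...

# Unweighting layer: divide by the weight; the DC sample, where h_hat = 0,
# is always measured in practice, so it is taken directly from the data.
nonzero = h_hat > 1e-12
recovered = np.where(nonzero, weighted / np.where(nonzero, h_hat, 1.0), g_hat)
```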
\section{Method}
\subsection{Training dataset}
Four sets of in vivo 3D DCE data were obtained with the TWIST sequence using Siemens 3T Verio scanners at Gachon University Gil Medical Center. The data sets were carotid vessel scans. The scanning parameters for two sets of carotid vessel data were as follows: repetition time (TR) = 2.5 ms, echo time (TE) = 0.94 ms, 159$\times$640$\times$80 matrix, 1.2 mm slice thickness, 16 coils, 30 temporal frames. For the other two sets of carotid vessel data, the acquisition parameters were the same as above, except for 2.5 mm slice thickness and 37 temporal frames. The sampling pattern of these data sets is described in Fig. \ref{fig:TWIST_concept}(a), where 24$\times$24 ACS regions were required for the conventional 2D GRAPPA reconstruction. In addition, partial Fourier acquisition was applied, so only 63$\%$ of the data was acquired. The downsampling rate was 3 and 2 along the $k_x$ and $k_y$ directions, respectively. The read-out direction is $k_z$, which is fully sampled (see Fig.~\ref{fig:coordinate}).
Then, the input $k$-space data for the neural network in Fig.~\ref{fig:scheme} is the $k_x - k_y$ data from the
$k_x-k_y-z$ volume in Fig.~\ref{fig:coordinate}, which is applied for each slice along the readout direction and the temporal frames.
Among the four patient data sets of TWIST acquisition, we used data from three patients for training and validation; the remaining patient's data was used for testing. More specifically, we used 33,810 slices for training and validation, and 12,210 slices for testing.
\subsection{Network architecture}
We now describe the CNN block in Fig.~\ref{fig:scheme}.
Note that the multi-channel convolution in Eq.~\eqref{eq:decomp0} is complex-valued convolution.
Thus, to convert the complex-valued multi-channel convolution to real-valued ones, we divide the complex-valued $k$-space data into real and imaginary channels similar to \cite{han2018k}.
So, the actual implementation of Eq.~\eqref{eq:decomp0} is as follows.
First, the $P$-channel multi-coil $k$-space data $\widehat{\mathbf Z} $ are split into a $2P$-channel input after each coil's $k$-space data has been separated into real and imaginary components.
Then, the encoder filters generate $Q$-channel outputs from these channel inputs using multi-channel convolution, after which
the pooling operation defined by ${\boldsymbol {\Phi}}^\top$ is applied to each $Q$-channel output.
The resulting $Q$-channel feature maps correspond to the convolutional framelet coefficients
(if there are multiple layers, this procedure is applied recursively).
Now, at the decoder, the $Q$-channel feature maps are processed by the unpooling layer represented by $\tilde{\boldsymbol {\Phi}}$
and then
convolved with the decoder filters to generate $P$ sets of real and imaginary channels of the estimated $k$-space data.
Finally, complex-valued $k$-space data are formed from each pair of real and imaginary channels. By doing this, all the successive
layers are implemented using real-valued machine learning toolboxes.
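The complex-to-real channel conversion can be sketched as follows; the array shapes (16 coils on a $32\times 32$ grid) are illustrative assumptions.

```python
import numpy as np

def complex_to_channels(z):
    """Split P complex coil channels into 2P real channels (real parts, then imaginary)."""
    return np.concatenate([z.real, z.imag], axis=0)

def channels_to_complex(c):
    """Recombine 2P real channels into P complex coil channels."""
    P = c.shape[0] // 2
    return c[:P] + 1j * c[P:]

rng = np.random.default_rng(0)
z = rng.standard_normal((16, 32, 32)) + 1j * rng.standard_normal((16, 32, 32))
c = complex_to_channels(z)        # shape (32, 32, 32): real-valued network input
z_back = channels_to_complex(c)   # inverse mapping at the network output
```

With 16 coils, this mapping yields the 32 real-valued channels mentioned below for the tight-frame U-net.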
In our prior work \cite{ye2018deep}, we also showed that deep networks with encoder-decoder architectures mainly differ in their implementation
of the pooling and unpooling layers.
Here, we employed the tight-frame U-net \cite{ye2018deep}, which has Haar wavelet decomposition and recomposition as pooling and unpooling layers.
This choice of pooling and unpooling satisfies the so-called frame condition \cite{ye2018deep},
preserving the detail of images.
This condition still holds even when we add additional by-pass connections \cite{ye2018deep}.
For example, as shown in Fig. \ref{fig:network}, the tight-frame U-net is composed of convolution, batch normalization \cite{ioffe2015batch}, ReLU \cite{nair2010rectified}, and skip connections with concatenation. Each stage is composed of three $3 \times 3$ convolution layers followed by batch normalization and ReLU, except for the layer indicated by a red arrow in Fig. \ref{fig:network}, which is a $1 \times 1 $ convolution layer.
In contrast to the standard U-net \cite{ronneberger2015u}, $2 \times 2$ pooling and unpooling were replaced by 2-D Haar wavelet decomposition and recomposition. Specifically, the wavelet decomposition yields four subbands (LL, LH, HL, and HH), of which only the LL band is processed by the following convolution layers, whereas the remaining subbands are bypassed to the wavelet recomposition. The number of convolutional filters increases from 64 in the first stage to 1024 in the final stage.
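The 2-D Haar pooling/unpooling used in place of standard pooling can be sketched as below; because all four subbands are retained (the LL band through the convolution path, the others through the skip path), the frame condition guarantees perfect reconstruction. This is a plain numpy sketch of the transform itself, not of the full network.

```python
import numpy as np

def haar_pool(x):
    """2-D Haar decomposition of an even-sized image into LL, LH, HL, HH subbands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2,   # LL: low-pass in both directions
            (a - b + c - d) / 2,   # LH
            (a + b - c - d) / 2,   # HL
            (a - b - c + d) / 2)   # HH

def haar_unpool(LL, LH, HL, HH):
    """Haar recomposition; with all four subbands it inverts haar_pool exactly."""
    h, w = LL.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (LL + LH + HL + HH) / 2
    x[0::2, 1::2] = (LL - LH + HL - HH) / 2
    x[1::2, 0::2] = (LL + LH - HL - HH) / 2
    x[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return x

x = np.arange(64.0).reshape(8, 8)
LL, LH, HL, HH = haar_pool(x)      # in the U-net, only LL enters the next stage
x_rec = haar_unpool(LL, LH, HL, HH)
```

The $\pm 1/2$ coefficients make the transform orthonormal, so energy is preserved across the subbands — the tight-frame property referred to above.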
Note that we have extra layers at the input and the output to transform complex-valued $k$-space data into real and imaginary channels, and vice versa.
Thus, the total number of channels for our tight-frame U-net is 32.
Since all data sets are from multi-channel acquisition, square root of sum of squares (SSoS) images were computed after the coil-by-coil reconstruction. In addition, a subtracted maximum intensity projection (MIP) of the SSoS images was obtained to focus on the dynamics of the contrast agent.
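The SSoS combination and the subtracted MIP can be sketched as follows; the frame layout `(time, slice, y, x)` and the projection axis are assumptions for illustration.

```python
import numpy as np

def ssos(coil_images):
    """Square root of sum of squares across the coil axis (axis 0)."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

def subtracted_mip(frames, axis=1):
    """Subtract the first (pre-contrast) frame, then maximum-intensity project."""
    sub = frames - frames[:1]
    return sub.max(axis=axis)

# Two coils, one pixel: the |3 + 4i|-style combination gives 5.
coils = np.array([[[3.0]], [[4.0]]])
combined = ssos(coils)

# Frames laid out as (time, slice, y, x); contrast appears in frame 1, slice 1.
frames = np.zeros((2, 3, 2, 2))
frames[1, 1, 0, 0] = 7.0
mip = subtracted_mip(frames)       # shape (2, 2, 2)
```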
\begin{figure*}[!bt]
\centerline{
\includegraphics[width=18cm]{./all_comp_axial.png}
}
\centerline{\mbox{(a)}}
\centerline{
\includegraphics[width=18cm]{./all_comp_coronal.png}
}
\centerline{\mbox{(b)}}
\caption{ (a) Axial and (b) coronal view of reconstruction results at various downsampling factors. }
\label{fig:all}
\end{figure*}
\begin{figure*}[!hbt]
\center{
\includegraphics[width=18cm]{./temporal_resol.jpg}
}
\caption{Time resolution comparison of the reconstruction results of GRAPPA, raw data and the proposed methods for different view-sharing numbers. Here VS stands for the number of view sharing.}
\label{fig:temporal_resol}
\end{figure*}
\subsection{Network training}
For our network training, we use the $k$-space data from two adjacent view-sharing frames as input (see Fig.~\ref{fig:TWIST_concept}(b) and Fig.~\ref{fig:coordinate}),
whereas the coil-by-coil reconstructed images using GRAPPA with five view-sharing are used as labels.
The input data for the inference stage are the downsampled $k$-space data using 2 to 5 contiguous frames, so that we can produce images with various temporal resolutions.
The parameters for GRAPPA reconstruction were chosen to provide the best results. The kernel size for GRAPPA is 5$\times$5 for the data sets.
As a representative CS reconstruction method, the ALOHA \cite{jin2016general} was used.
The parameters for ALOHA were as follows: annihilating filter size = 13$\times$5, 3 levels of pyramidal decomposition, decreasing LMaFit tolerance values ($10^{-3}, 10^{-4}, 10^{-5}$) at each level, and ADMM parameter $\mu$ = $10^{-1}$.
The network was trained using Adam optimization \cite{kingma2014adam} with momentum parameters $\beta _1 = 0.9$ and $\beta_2 = 0.999$. We used the $l_2$ loss in the image domain. The initial learning rate was $10^{-2}$, and it was halved every 50 epochs. The size of the mini-batch was 40. The number of training epochs was 150.
The proposed network was implemented in Python using TensorFlow library \cite{abadi2016tensorflow} and trained using an NVidia GeForce GTX 1080-Ti graphics processing unit.
It took about 6 days for the network training.
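The learning-rate schedule stated above (initial rate $10^{-2}$, halved every 50 epochs) amounts to the following one-line sketch:

```python
def learning_rate(epoch, lr0=1e-2, drop_every=50):
    """Initial rate lr0, halved every `drop_every` epochs."""
    return lr0 * 0.5 ** (epoch // drop_every)
```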
\subsection{Comparative Studies}
To evaluate the performance of the proposed method, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index \cite{wang2004image} were calculated as quantitative metrics. Since the reconstruction outputs from the network are multi-coil images, we combined all the coil images into sum-of-squares images before calculating
these quantitative metrics.
The PSNR is defined as
\begin{eqnarray}
PSNR
&=& 20 \cdot \log_{10} \left(\dfrac{MAX_{Y}}{\sqrt{MSE(\widehat{X}, Y)}}\right),
\label{eq:psnr}
\end{eqnarray}
where $\widehat{X}$ and $Y$ denote the reconstructed sum-of-squares image and noise-free sum-of-squares image (ground truth), respectively. $MAX_{Y}$ is the maximum value of noise-free sum-of-squares image.
SSIM is used to measure the similarity between the original image and an image distorted by deformation, and it is defined as
\begin{equation}
SSIM = \dfrac{(2\mu_{\widehat{X}}\mu_{Y}+c_1)(2\sigma_{\widehat{X}Y}+c_2)}{(\mu_{\widehat{X}}^2+\mu_{Y}^2+c_1)(\sigma_{\widehat{X}}^2+\sigma_{Y}^2+c_2)},
\end{equation}
where $\mu_{M}$ is the average of $M$, $\sigma_{M}^2$ is the variance of $M$, and $\sigma_{MN}$ is the covariance of $M$ and $N$.
To stabilize the division, $c_1=(k_1R)^2$ and $c_2=(k_2R)^2$ are defined in terms of $R$, which is the dynamic range of the pixel values. We followed the default values of $k_1 = 0.01$ and $k_2 = 0.03$.
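For reference, a numpy sketch of the two metrics follows; the SSIM here is computed over a single global window, whereas standard implementations (including \cite{wang2004image}) average over local windows.

```python
import numpy as np

def psnr(x_hat, y):
    """PSNR: 20 * log10(MAX_Y / sqrt(MSE)), with MAX_Y the ground-truth maximum."""
    mse = np.mean((np.asarray(x_hat) - np.asarray(y)) ** 2)
    return 20.0 * np.log10(np.max(y) / np.sqrt(mse))

def ssim_global(x_hat, y, k1=0.01, k2=0.03):
    """SSIM evaluated over a single global window."""
    R = np.max(y) - np.min(y)                       # dynamic range of pixel values
    c1, c2 = (k1 * R) ** 2, (k2 * R) ** 2
    mu_x, mu_y = np.mean(x_hat), np.mean(y)
    var_x, var_y = np.var(x_hat), np.var(y)
    cov = np.mean((x_hat - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
truth = rng.random((16, 16))
noisy = truth + 0.01 * rng.standard_normal((16, 16))
```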
\section{Results}\label{sec:result}
We first compare the performance of the proposed $k$-space deep learning for parallel MR image reconstruction.
In these experiments, the fully sampled $k$-space data generated
using GRAPPA were considered as the ground truth, from which retrospective sub-sampling was performed at various downsampling ratios, each corresponding
to a specific view-sharing number (VS).
Then, the proposed multi-coil $k$-space deep learning and ALOHA method were applied for comparative studies. The axial reconstruction results
and the coronal reformatted images in Fig.~\ref{fig:all} clearly show that the proposed method significantly outperformed ALOHA.
For example, near-perfect reconstructions are obtained at $R=10.36$ downsampling (i.e. VS=3) using the proposed method, while blurry images
are still observed in the ALOHA reconstruction.
In terms of PSNR and SSIM values, the proposed method significantly outperforms the ALOHA.
For example, from the axial reconstruction images,
at $R=13.93$ acceleration factor (i.e. VS=2), the proposed $k$-space deep learning is about $2\sim 4$dB better than ALOHA in terms of PSNR,
whereas at $R=10.36$ downsampling (i.e. VS=3), the proposed $k$-space deep learning is about $5\sim 13$dB better than ALOHA.
When we compare the results in coronal reformatted images, similar PSNR gains were observed.
At $R=13.93$ acceleration factor, the proposed $k$-space deep learning is about $2\sim 3$dB better than ALOHA in terms of PSNR,
whereas at $R=10.36$, the proposed $k$-space deep learning is about $6\sim 8$dB better than ALOHA.
{
Reconstruction results of the carotid vessel test data sets are shown in Fig. \ref{fig:temporal_resol}.
The temporal frames were chosen to illustrate the propagation of the contrast agent and to compare the temporal resolution.
In the proposed method, the same neural network can produce reconstruction results using various
view-sharing numbers, so we provide reconstruction results with VS=2, 3, and 5.
The raw data in Fig. \ref{fig:temporal_resol} were obtained by directly applying the inverse Fast Fourier Transform (FFT) to the $k$-space data without view-sharing; these images appear blurry but retain the true temporal resolution. The raw data reconstructions are therefore used as the reference for evaluating the temporal resolution.
By inspection, we can see that the contrast agent spread out abruptly in the GRAPPA reconstruction. For example, there was a rapid propagation of contrast agent from the $T=11$ frame to the $T=12$ frame as shown in Fig. \ref{fig:temporal_resol}. This is because the combination of several temporal frames blurred the temporal resolution. Therefore, the flow of the contrast agent changed abruptly from one frame to the next. This degradation of the temporal dynamics was found more frequently as the number of view-sharing increased.
In the reconstructed images using the proposed method, the flow of the contrast agent and the details of the dynamics were captured correctly with
VS=2, and minor temporal blurring starts with VS=3. With VS=5, which is equal to the view sharing used in GRAPPA, the spatial and temporal resolution of the proposed method were nearly identical to those of GRAPPA.
More specifically, the contrast agent shown in the GRAPPA reconstruction was not visible in the raw image, and the proposed reconstructions with VS = 2 to VS = 5 at the $ T = 12 $ frame clearly show the temporal resolution degradation as the number of view sharing increases.
In the GRAPPA reconstruction, the detail of the spread of the contrast agent was influenced by the high-frequency region in the $k$-space so that the temporal dynamics of the future frame was erroneously incorporated in the current frame.
}
In addition, the proposed method is computationally more efficient than GRAPPA and ALOHA as shown in
Table~\ref{tab:time}. Specifically, the proposed method is several orders of magnitude faster than GRAPPA and ALOHA.
\begin{table}[!hbt]
\centering
\resizebox{0.45\textwidth}{!}{
\begin{tabular}{c|ccc}
\hline
& GRAPPA & ALOHA & Proposed \\ \hline\hline
Time/slice (sec) & 6.09 & 84.61& 0.029 \\ \hline
\end{tabular}}
\caption{Computational time for 16-coil $k$-space interpolation using various methods.}
\label{tab:time}
\end{table}
\section{Discussion}
Although we have mainly focused on the TWIST reconstruction, the proposed $k$-space deep learning approach
can be used for various parallel imaging applications. Similar to the results in Fig.~\ref{fig:all}, we expect
significant gains compared to the existing parallel imaging and compressed sensing approaches.
ALOHA improved the image quality for TWIST imaging \cite{7547372}, but we found that
the spatial resolution of the ALOHA reconstruction was not sufficiently good at the high acceleration factors resulting from reduced view sharing.
On the other hand, the proposed method significantly improves the performance at high acceleration factor.
This may lead to an interesting question: why does the proposed $k$-space deep learning outperform ALOHA even though the two are related to each other?
It is important to note that the proposed neural
network is not merely another implementation of ALOHA for computational saving;
rather, it is a new algorithm that significantly improves the interpolation performance of ALOHA by exploiting the exponential expressiveness that a deep neural network
gains as the number of layers increases. In fact, our geometric view in Fig.~\ref{fig:geometry} is for the single-layer encoder-decoder architecture, and the deep
neural network is obtained by recursively applying this high-dimensional lifting, conic coding, and un-lifting as shown in Fig.~\ref{fig:multilayer}. During this recursive lifting to higher dimensional
space, it is believed that
the complicated input $k$-space data come to lie on a simpler manifold that can be exploited for better interpolation.
\begin{figure}[!t]
\center{
\includegraphics[width=7cm]{./multilayer.png}
}
\caption{Geometry of multi-layer encoder decoder architecture.}
\label{fig:multilayer}
\end{figure}
As shown in the Table~\ref{tab:time}, the computational time of the proposed method is more than 100 times faster than the GRAPPA reconstruction because we do not need to compute the interpolation kernel from the data on the fly.
This is because the neural network has already learned the interpolation kernel from the training data, so all necessary calculations are simple convolution and pooling. The significant computational savings, in addition to the flexibility of obtaining reconstruction results at various numbers of view sharing, may suggest a new paradigm for DCE-MRI.
In particular, instead of using a fixed spatial and temporal resolution for dynamic studies, our methods can immediately generate reconstruction results for all possible combinations of spatial and temporal resolutions by simply changing the view-sharing number.
Thus, the proposed method may allow more accurate and quantitative time-resolved angiography studies.
Given the flexibility to generate images at various temporal resolutions, one may wonder how the trained
neural network can generalize to unseen images with faster temporal resolution.
Recall that our neural network is trained to learn
the structure of the weighted $k$-space data based on the observation
that the Hankel matrix has low rank. This low-rankness is not a feature specific to the
GRAPPA reconstruction; rather, it is a general feature of the $k$-space data of MR images. Therefore,
as long as our neural network is trained to learn the structure of the Fourier data, it generalizes well to images at any intermediate time frame.
This is another important advantage of the proposed $k$-space deep learning.
\section{Conclusion}\label{sec:conclusion}
The purpose of this study was to improve the temporal resolution of TWIST imaging
and to propose an algorithm that generates reconstruction results at various sliding window sizes.
To address this problem, we developed a novel $k$-space deep learning algorithm for parallel MRI.
Specifically, based on the recent mathematical discovery that a deep neural network
can be developed as a multilayer extension of data-driven Hankel matrix decomposition,
our $k$-space deep neural network was designed to simultaneously exploit the
multi-coil diversity and spatial domain redundancy.
The improvement of temporal resolution in TWIST imaging was verified by the reconstruction of in vivo data sets. Moreover,
it was demonstrated that one trained network can immediately generate multiple reconstruction results with various spatial and temporal resolution trade-off
by simply changing the number of view sharing at the inference stage.
As this method can
be used with the existing TWIST acquisition protocol without any modification of pulse sequence, we believe that the method provides an important new research direction
that can significantly extend the clinical applications.
{
"timestamp": "2018-06-12T02:08:59",
"yymm": "1806",
"arxiv_id": "1806.00806",
"language": "en",
"url": "https://arxiv.org/abs/1806.00806"
}
\section{Introduction}
At the Sun, transient events on magnetic field open to interplanetary space manifest themselves as jets, i.e. collimated, sudden ejections of chromospheric or coronal plasma. These are commonly thought to arise by interchange reconnection between closed and open field, allowing plasma and accelerated electrons to escape the lower corona.
Jets are fundamental solar phenomena, occurring in all layers of the atmosphere and all regions (coronal holes, active regions, and the quiet Sun). These events often offer a convenient magnetic configuration in which to study solar flare particle acceleration, particularly the acceleration of escaping energetic electrons.
In the Shibata two-dimensional model of interchange reconnection, emerging flux reconnects with the overlying coronal field and causes the open field to switch footpoints \citep[e.g.][]{shibata1992}. This occurrence can drive a hot (several MK) jet generated in the corona at the location of the upper reconnection outflow shock, and/or a relatively cooler jet of chromospheric temperature via sudden chromospheric evaporation \citep{yokoyama1995, yokoyama1996}. In the 3D model of \citet{pariat2015}, the jet arises in a fan-spine geometry, with reconnection occurring at a separatrix layer. This model can generate straight, linear jets, in which the ejection is simple and rotationless, but these straight jets quickly devolve into more complicated, untwisting, helical jets. The untwisting arises as the newly open field, no longer constrained by two photospheric footpoints, is able to shed its twist, and the propagation of this twist upward can help to drive the jet. \citet{pariat2015} surmise that these two types of jets, straight and helical, might account for the dichotomy identified by \citet{moore2010}, which those authors called ``standard'' and ``blowout'' jets. \citet{sterling2015, sterling2016} have proposed that jets are miniature filament eruptions, akin to a small-scale CME, and \citet{wyper2017} have performed 3D simulations suggesting that CMEs and jets result from the same processes.
Jets are often detected at soft X-ray (SXR) wavelengths \citep{shimojo1996, shimojo2000}, with high occurrence rates in the polar regions and coronal holes \citep{sako2012, savcheva2007}, presumably because of the prevalence of open field.
Jets in active regions may be accompanied by flares \citep[e.g.][]{kundu1999} and can accelerate electrons to energies at least up to hundreds of keV. These accelerated electrons travel along the path of the jet, escaping the low corona, as indicated by the strong correlation between jets and Type III radio bursts \citep[e.g.][]{kundu1995, raulin1996}, with Type III sources appearing along the jet path in order of decreasing frequency (i.e. decreasing density) with height. Jets are sometimes associated with impulsive, electron/He$^3$-rich solar energetic particle (SEP) events \citep{wang2006, nitta2015}, emphasizing that jets accelerate particles on field open to interplanetary space.
Gyrosynchrotron microwaves and bremsstrahlung hard X-rays (HXRs) are complementary tools for studying flare-accelerated electron distributions, as they can both quantitatively measure those distributions. In principle, both should be useful in studying electrons accelerated in jets, but in practice neither is typically observed from coronal jet tracks due to observational difficulties. HXR observations are dominated by footpoint emission in the (dense) chromosphere from downward-directed beams; emission from escaping electron beams in the low-density corona is typically too faint to be observed \citep{saint-hilaire2009}. Gyrosynchrotron observation requires a ground-based observatory measuring at the right time and right frequency range for the magnetic field specific to each event, and so is also usually not observed from jets. Type III radio emission is an excellent qualitative marker of accelerated electrons and their paths \citep[e.g.][]{reid2014, chen2013}, but due to the nonlinear processes in its generation cannot be used to quantitatively measure the emitting electron distributions.
The few jets studied in HXRs have shed light on the physical mechanism. \citet{krucker2011} studied jet HXR footpoints and found the configuration to be consistent with interchange reconnection; most of the 16 events, identified via prompt in-situ electron detections, exhibited three footpoints, as opposed to typical two-ribbon flares. All HXR sources observed in that study were footpoint sources due to downward-directed electron beams impacting the chromosphere; any coronal sources in the jet were too faint to be observed. \citet{bain2009} and \citet{glesener2012} did find coronal HXRs in jets, emphasizing the accelerated electrons' access to open field. These rare observations were made possible due to an extraordinarily large electron flux in the former case and, in the latter, the fact that the flare footpoints were occulted by the solar limb. The partly occulted observation revealed a double coronal source early in the flare, suggesting a reconnection site between the sources, followed by a relatively intense, extended HXR source cospatial and cotemporal with the emerging jet. These HXRs imply accelerated electrons near the base of the jet, lower in altitude than the reconnection region, but it is not clear how electrons might access this site from the reconnection region.
In that event, HXR emission was not strong enough for detailed spectroscopy to precisely determine the parameters of the emitting electron distribution.
\begin{figure}[tb]
\centering
\quad\includegraphics[width=0.6\columnwidth,trim=0 3cm 0 0,clip]{f1.eps}
\caption{\label{f_20020819_overview} Overview of the 2002 August 19 flare. (a) Dynamic spectra from the Culgoora Radio Observatory, showing Type III radio bursts. Although Type III emission is prevalent in this jet, the most intense plasma emission is not concurrent with the most intense broadband emission. (b) Broadband emission observed in OVSA microwaves, showing strong and quickly changing spectral variability. The dotted rectangle indicates the range of frequencies and times used to obtain the {microwave}\ image shown in Figures~\ref{f_Jet_2002_08_19_images} and \ref{f_Jet_2002_08_19_OVSA_decon}. The vertical dashed lines indicate the times for the instantaneous spectra
used in our 3D modeling; (c) SXR light curves in two \textit{GOES} channels.
(d) HXR count time profiles from {Konus-\textit{Wind}}\, and {\textit{RHESSI}}. The {\textit{RHESSI}}\, curve includes its 8 segmented detectors (out of 9) and the counts have been scaled for comparison with {Konus-\textit{Wind}}. The shaded area indicates the time interval used for {\textit{RHESSI}}\ imaging shown in Figure~\ref{f_Jet_2002_08_19_images}.
}
\end{figure}
In this work we continue the progression of knowledge on accelerated electrons escaping the Sun in jets. While context measurements from Type III observations and in-situ studies have long established the existence of these escaping electrons, it was not until the HXR studies of \citet{glesener2012} and \citet{bain2009} that energetic estimates of the escaping populations could begin to be performed. These limited previous observations rarely allowed detailed spectroscopy due to limited HXR imaging dynamic range, and did not have cotemporal spectrally and spatially resolved microwave observations. In this work, we combine imaging and spectral HXR observations by the \textit{Reuven Ramaty High Energy Solar Spectroscopic Imager} ({\textit{RHESSI}}) with high cadence spectroscopy from {Konus-\textit{Wind}}, {microwave}\ spectral and imaging data from the Owens Valley Solar Array (OVSA), and extreme ultraviolet (EUV) data from the \textit{Transition Region and Coronal Explorer} ({\textit{TRACE}}). We do not know of any previous flare-related jet that has been studied using this set of observational tools. The observations are augmented by 3D modeling utilizing photospheric line-of-sight measurements of the magnetic field and linear force-free reconstruction of the coronal magnetic environment for one of the 16 jet events reported by \citet{krucker2011}. With these tools we investigate accelerated electron distributions on open field, the relation of these accelerated electrons to the flare and the jet, and fast spectral changes in the HXRs and microwaves emitted by these particles. The GX Simulator framework \citep{Nita_etal_2015, 2018ApJ...853...66N} is used to develop 3D models of the flare and jet sources, which include open and closed magnetic flux tubes and distributions of thermal and nonthermal electrons within the sources. These models are validated via comparison with X-ray, EUV, and {microwave}\ data.
The work demonstrates the use of HXRs and microwaves together to \textit{quantitatively} constrain accelerated electron distributions in a jet.
\section{Observations}
\begin{figure*}\centering
\quad\includegraphics[width=0.95\textwidth,clip]{f2a_NEW.eps}
\includegraphics[width=\textwidth,clip,angle=0]{f2b.eps}
\caption{\label{f_Jet_2002_08_19_images} Top panels: {\textit{TRACE}}\, images at 195\AA\, as the jet evolves. (An animated form of these panels is available in the supplementary materials). Middle row: {\textit{RHESSI}}\, and OVSA contours overlaid on a {\textit{TRACE}}\, 195\AA\, image of the emerging jet. All {\textit{RHESSI}}\ and OVSA contour levels are 30, 50, 70, and 90\% of their respective maxima, and {\textit{RHESSI}}\, images were produced using the CLEAN algorithm. At low energies, HXR emission is thermal and likely emanates from flaring coronal loops, while the highest energies are nonthermal and probably denote footpoints at the base of the flare/jet. In the intermediate range (18-30 keV), some HXR emission is elongated along the jet. Bottom row: {\textit{RHESSI}}\ and {\textit{TRACE}}\ emission overlaid on MDI magnetograms.
}
\end{figure*}
\subsection{EUV data from {\textit{TRACE}}}
\label{S_EUV_data}
The {\textit{TRACE}}\ spacecraft was an EUV imager that operated from 1998 to 2010 with an 8.5 arcminute field of view and a spatial resolution of 1 arcsecond. {\textit{TRACE}}\, measured flux in three EUV and several UV wavelengths sensitive to selected temperatures from 6000 K to 10 MK \citep{trace}. On 2002 August 19, {\textit{TRACE}}\, observed a solar jet associated with a flare from active region 10069 (see \textit{GOES} curve in Figure \ref{f_20020819_overview}), with coverage in the 195~\AA\ filter at approximately 23 second cadence during the impulsive part of the event. This passband is sensitive to Fe XII and Fe XXIV lines with peak temperature sensitivity at log[T(MK)]=6.2 and 7.2, respectively \citep{landi2013}. The top set of panels in Figure~\ref{f_Jet_2002_08_19_images} shows {\textit{TRACE}}\, snapshots of the jet. Some panels evidence saturation and diffraction during the bright flare, which had a \textit{GOES} class of M3.1. EUV jet emission continued for several minutes after the initial bright phase.
{\textit{TRACE}}\, pointing knowledge is not precise and could be incorrect by a few arcsec \citep{trace}. For the 2002 August 19 event, there are no context observations at a similar time and wavelength that can be used for absolute calibration of this pointing. Instead, we coaligned quiescent Fe XII plage features observed by {\textit{TRACE}}\, to \textit{SOHO}/MDI magnetic data. The primary feature utilized can be seen in Figure \ref{f_Jet_2002_08_19_images} extending southeast from $\sim$[520, -310] arcseconds. All {\textit{TRACE}}\ images shown in this paper include this alignment correction.
\subsection{Hard X-ray data from {\textit{RHESSI}}\ and {Konus-\textit{Wind}}}
\label{S_Xray_data}
The flare/jet event was observed by the {\it RHESSI} spacecraft, which provides high-resolution X-ray spectra and full-disk images of the Sun from 3 keV to 17 MeV \citep{lin2002}. {\textit{RHESSI}}\, utilizes high-purity germanium detectors and rotation modulation collimation, an indirect, Fourier-based imaging system \citep{hurford2002}. {\textit{RHESSI}}\, emission comes from two types of populations: hot thermal ($\gtrsim$10 MK) plasma and accelerated electrons. The brightest nonthermal HXR sources customarily occur at flare footpoints; due to limited imaging dynamic range, {\textit{RHESSI}}\, only occasionally observes fainter nonthermal sources in the corona.
\begin{figure*}[b]\centering
\includegraphics[width=\textwidth]{f3.eps}
\caption{\label{f_light_curves_OVSA_KW} HXR spectral evolution. (a) {Konus-\textit{Wind}}\ HXR light curves obtained with 256~ms time resolution in wide energy channels G1 ($\sim$20--70~keV) and G2 ($\sim$70--300~keV). A 10.6 GHz OVSA light curve (shifted in time by 2.224~s to correct for OVSA clock error) with 4~s time resolution is shown for comparison. Sub-second time variability of the HXR emission is apparent. (b) (Green) evolution of the effective spectral index defined using the {Konus-\textit{Wind}}\ hardness ratio as explained in \citet{Fl_etal_2016coldFl}. This spectral index displays time variability similar to that of the HXR light curves (blue, red), with the spectrum hardening at most HXR peaks. On average, the effective spectral index agrees well with that determined from the {\textit{RHESSI}}\ fit (black) in 2-second time bins.
}
\end{figure*}
\begin{figure*}\centering
\includegraphics[width=0.8\textwidth]{f4.eps}
\caption{\label{f_cross_correlation} Comparison of HXR flux and spectral index. Panel (a) shows the correlation between the {Konus-\textit{Wind}}\ spectral index and the flux in channel G1 (20--70~keV), making evident the spectral hardening at times of high flux. Panel (b) plots the cross-correlation between the two quantities. Panels (c) and (d) show the same for channel G2 (70--300 keV). The spectral hardening is delayed relative to the G1 intensity by roughly 0.2~s, while there is no delay relative to G2 intensity. }
\end{figure*}
The middle row of panels of Figure \ref{f_Jet_2002_08_19_images} shows {\textit{RHESSI}}\, images in three energy ranges for the 2002 August 19 flare overlaid on a {\textit{TRACE}}\, 195~\AA\ image of the emerging jet. Images were produced using the CLEAN method with subcollimators 1, 3, 4, 5, 6, and 7 and a clean beam width factor of 0.9, integrated over 28 seconds. This set of subcollimators is chosen to elucidate fine structure in the flare and jet. All three {\textit{RHESSI}}\, images show contour levels at 30, 50, 70, and 90\% of their respective maxima. At low energies (10--18 keV), HXR emission is dominated by the thermal flare, while the highest energies (30--100 keV) are nonthermal and most likely trace out footpoints at the base of the flare/jet. In the intermediate range (18--30 keV), HXR emission is elongated along the jet, as indicated by the 30\% green contour. (No imaging was attempted below 10 keV because {\textit{RHESSI}}'s thickest attenuator was inserted.) The {\textit{RHESSI}}\, sources at the flare and the jet are not sufficiently isolated for imaging spectroscopy to separate them. X-ray power-law spectral fits to the spatially integrated emission were performed using the {\textit{RHESSI}}\, OSPEX software and are used as upper limits on the emission from any sub-component; see Section \ref{S_modeling}. Integrated spectra (not shown) indicate that the 10--18 keV and 30--100 keV emissions are part of the thermal and nonthermal spectral components, respectively, but the spectral shape of the 18--30 keV emission cannot be determined.
While {\textit{RHESSI}}\, provides detailed images and spectra throughout the evolution of the event, its time resolution is limited by its rotation modulation. Straightforward time profiles (i.e. without attempting demodulation) can be produced on a cadence $\ge$ 2 seconds (half a spacecraft rotation). For high-cadence spectroscopy (though not imaging), we turn to HXR data from the Konus instrument aboard the \textit{Wind} spacecraft \citep[{Konus-\textit{Wind}}, in operation since 1994;][]{Aptekar1995, Palshin_etal_2014}. {Konus-\textit{Wind}}\, uses NaI(Tl) crystals to measure photons from astrophysical sources from $\sim$10 keV to 15 MeV. {Konus-\textit{Wind}}\, lightcurves in two high-energy bands are shown in Figure \ref{f_20020819_overview} and Figure \ref{f_light_curves_OVSA_KW}. The instrument records energy spectra independently from the light curves \citep{Aptekar1995} using an adaptive spectrum accumulation duration based on the signal intensity; thus, the spectra are taken over a number of uneven time intervals. However, as has been demonstrated by \citet[][Eq.~4]{Fl_etal_2016coldFl}, the electron spectral index can be accurately recovered from the hardness ratio determined using two wide channels only, G1 and G2 (20--70 and 70--300 keV, respectively).
This ratio is available with a sub-second cadence of 16--256~ms.
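As a sketch of how such an effective index can be recovered (an idealized version that assumes a pure power-law photon spectrum and ignores the detector response, which the actual calibration of \citet{Fl_etal_2016coldFl} accounts for), the G1/G2 ratio is a monotonic function of the photon index and can therefore be inverted numerically:

```python
import math

def band_flux(gamma, e1, e2):
    """Photon flux integral of E^-gamma between e1 and e2 (arbitrary norm)."""
    if abs(gamma - 1.0) < 1e-12:
        return math.log(e2 / e1)
    return (e2**(1 - gamma) - e1**(1 - gamma)) / (1 - gamma)

def hardness_ratio(gamma, g1=(20.0, 70.0), g2=(70.0, 300.0)):
    """G1/G2 count ratio for an ideal power-law photon spectrum."""
    return band_flux(gamma, *g1) / band_flux(gamma, *g2)

def effective_index(ratio, lo=1.01, hi=15.0, tol=1e-8):
    """Invert the hardness ratio by bisection (the ratio grows with gamma)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hardness_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A softer spectrum (larger index) puts relatively more counts in G1, so the ratio increases monotonically with the index, which is what makes the two-channel inversion well defined.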
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.9\linewidth]{f5.eps}
\caption{ Autocovariance of the {Konus-\textit{Wind}}\, 70--300 keV lightcurve for the 1-minute impulsive phase shown in Figure \ref{f_light_curves_OVSA_KW}, panel (b). The average pulse duration is given by the width of the autocovariance peak. (Left) Autocovariance for the raw {Konus-\textit{Wind}}\, data and for data that have been smoothed over 3 seconds (to remove the effect of the fast time variation). (Right) Autocovariance of the difference of the two time profiles (raw minus smoothed) isolates the contribution of the quickly-varying component. A characteristic pulse duration is measured from the FWHM of this curve, which is 1.05 seconds, though note that individual bursts can be longer or shorter than this average duration. The short pulses, which are too fast for {\textit{RHESSI}}\, to resolve, could indicate electron acceleration timescales. }
\label{fig:autocorrelation}
\end{center}
\end{figure}
The time history of the {Konus-\textit{Wind}}\ power-law index $\delta$ of the electron distribution is shown in the second panel of Figure \ref{f_light_curves_OVSA_KW}, with the {\textit{RHESSI}}-derived power-law index, calculated at a two-second cadence, overplotted.
Fast time variations down to subsecond timescales are apparent in the {Konus-\textit{Wind}}\, intensity and spectral index. Most notably, each HXR peak corresponds to a decrease in $\delta$, i.e. a momentary spectral hardening. The relationship between HXR flux peaks and spectral hardening is explored more quantitatively in Figure \ref{f_cross_correlation}, where clear correlations are evident between the spectral index and the flux measured in {Konus-\textit{Wind}}\, channels G1 and G2. Figure \ref{f_cross_correlation} also shows a lag-correlation analysis, suggesting that the spectral hardness is delayed by roughly 0.2~s relative to the intensity peaks in G1 (20--70~keV), while the data are consistent with zero time lag between the intensity peaks in G2 (70--300 keV) and spectral hardening. This implies that a duration of $\sim 0.2$~s is needed for the electrons with energy $\gtrsim 20$~keV to be accelerated up to $\gtrsim 100$~keV.
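The lag analysis itself is straightforward; a minimal sketch (with hypothetical helper names, not the actual pipeline used here) correlates the flux with the hardness, i.e.\ the negated spectral index, over a range of lags:

```python
import numpy as np

def lag_correlation(flux, index, dt, max_lag_s=1.0):
    """Pearson correlation of hardness (-index) vs. flux as a function of lag.

    A positive lag means the hardening follows the flux peak.
    Returns (lags in seconds, correlation coefficients).
    """
    hard = -(index - index.mean())   # hardness: spectral index decreases when hard
    f = flux - flux.mean()
    nmax = int(max_lag_s / dt)
    lags, corr = [], []
    for k in range(-nmax, nmax + 1):
        if k >= 0:
            a, b = f[:len(f) - k], hard[k:]   # hardness shifted later by k bins
        else:
            a, b = f[-k:], hard[:len(hard) + k]
        lags.append(k * dt)
        corr.append(np.corrcoef(a, b)[0, 1])
    return np.array(lags), np.array(corr)
```

The lag of maximum correlation then gives the delay of the spectral hardening relative to the flux, analogous to the $\sim$0.2~s delay quoted above.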
Figure \ref{fig:autocorrelation} examines the timescales of the fast HXR fluctuations. A typical pulse duration can be measured as the width of the autocovariance of the time profile. The lefthand panel shows the autocovariance of the {Konus-\textit{Wind}}\, 70--300~keV lightcurve for the 1-minute impulsive phase shown in Figure \ref{f_light_curves_OVSA_KW}, panel (b), as well as that for the same profile smoothed over 3 seconds (effectively a low-pass filter that removes the fast time variation). In the righthand panel, an autocovariance for the difference of the raw and smoothed time profiles isolates the contribution of the quickly-varying component. The FWHM of this curve (with height measured from peak to valley) is 1.05 seconds. This is the average pulse duration; note that visual inspection of the lightcurve in Figure \ref{f_light_curves_OVSA_KW}, panel (b) shows that some peaks are subsecond. At longer time scales, the autocovariance deviates from that of the smoothed curve at widths of 11.3 and 18.4 seconds, with smaller deviations at widths of 6.4 and 24.6 seconds. The short pulses are too fast for {\textit{RHESSI}}\, to resolve using traditional {\textit{RHESSI}}\, 4- or 2-second time binning, making this a rare observation enabled by the use of {Konus-\textit{Wind}}\, data.
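The pulse-width estimate can be sketched as follows (a simplified stand-in for the actual analysis; the function name and the boxcar smoother are illustrative choices):

```python
import numpy as np

def pulse_fwhm(lightcurve, dt, smooth_s=3.0):
    """Estimate the mean pulse duration from the autocovariance of the
    fast-varying component (raw minus boxcar-smoothed lightcurve)."""
    n = max(1, int(round(smooth_s / dt)))
    kernel = np.ones(n) / n
    smooth = np.convolve(lightcurve, kernel, mode="same")
    fast = lightcurve - smooth          # isolate the quickly-varying component
    fast = fast - fast.mean()
    ac = np.correlate(fast, fast, mode="full")[len(fast) - 1:]
    ac /= ac[0]                         # normalized autocovariance, lag >= 0
    valley = ac.min()
    half = valley + 0.5 * (1.0 - valley)    # half height, peak to valley
    k = np.argmax(ac < half)                # first lag below half height
    return 2 * k * dt                       # two-sided FWHM in seconds
```

Wider pulses produce a wider autocovariance peak, so this width serves as a characteristic pulse duration even when individual bursts overlap.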
\subsection{{Microwave}\ data from OVSA}
\label{S_radio_data}
{Microwave}\ data from OVSA are highly complementary to the {\textit{RHESSI}}\ X-ray data in that they offer a second method by which to measure flare-accelerated electrons that produce gyrosynchrotron emission. Before decommissioning in 2010, OVSA \citep{ovsa_1984, Gary_Hurford_1994} provided the total power data in the form of dynamic spectra \citep{Nita_etal_2004}, from which instantaneous spectra or frequency-specific light curves could be derived and analyzed, and interferometric data for imaging.
Recently, {microwave}\ imaging with OVSA has become more easily accessible\footnote{See detailed OVSA imaging manual at \url{http://ovsa.njit.edu/legacy/}.} with a newly-calibrated OVSA legacy data base\footnote{Currently available at \url{http://ovsa.njit.edu/data/archive/calibration_files/} from Jan 2000 to Aug 2003.} and updated OVSA imaging software \citep{2014AAS...22421845N}.
For our jet-associated microwave burst we employ both spectral and imaging OVSA capabilities, as briefly outlined in \citet[][Section~4.2]{Fl_etal_2016narrow} and described in the OVSA imaging manual in more detail.
Using the SSW \verb"ovsa_explorer" widget, we identified the burst time from a calibrated solar observation data set, subtracted the pre-burst background, and sequentially fit all instantaneous spectra using a built-in fit function. For this fit we assume that the broadband emission has a spectral shape consistent with a generic function of the form \citep{1989SoPh..120..351S,Nita_etal_2004}
\begin{figure*}\centering
\includegraphics[width=0.35\textwidth, trim=0 5.6cm 8.2cm 0, clip=true]{f6.eps}
\includegraphics[width=0.35\textwidth, trim=0 0cm 8.2cm 5.7cm, clip=true]{f6.eps}\\
\includegraphics[width=0.35\textwidth, trim=8.2cm 5.6cm 0cm 0, clip=true]{f6.eps}
\includegraphics[width=0.35\textwidth, trim=8.2cm 0cm 0 5.7cm, clip=true]{f6.eps}
\caption{\label{f_spec_evol_OVSA_KW} Evolution of microwave spectral fit parameters, including (a) peak frequency, (b) peak flux, (c) high-frequency spectral index, and (d) low-frequency spectral index. Fast temporal variability is apparent. }
\end{figure*}
\begin{figure}\centering
\includegraphics[width=0.45\textwidth]{f7_NEW.eps}
\caption{\label{f_Jet_2002_08_19_OVSA_decon} { OVSA image synthesized over frequencies 3.2--5.4~GHz and 96 seconds at the burst peak, overlaid on the {\textit{TRACE}}\, jet. Black contours show the OVSA image produced via the CLEAN$+$SELFCAL method, and the dotted white oval shows the CLEAN beam size (FWHM). Deconvolution of the CLEAN beam from the image produces the elliptical source shown in red, which is elongated roughly in the direction of the jet, suggesting that the microwave emission emanates from the jet itself.
}
}
\end{figure}
\begin{equation}
\label{Eq_mw_fit}
S=e^{A} f^{\alpha}\left[1-e^{-e^{B} f^{-\beta}}\right],
\end{equation}
where $f$ is the frequency in GHz, and $A$, $B$, $\alpha$, and $\beta$ are the free fitting parameters that yield the physical parameters of interest. For example, the low-frequency spectral index is $\alpha_{\rm lf} \equiv \alpha$, while the high-frequency spectral index is $\alpha_{\rm hf} = \alpha-\beta$. The fit results shown in Figure~\ref{f_spec_evol_OVSA_KW} reveal strong time variability in the spectral fit parameters---the spectral peak frequency, $f_{\rm peak}$, and the high- and low-frequency spectral indices, $\alpha_{\rm hf}$ and $\alpha_{\rm lf}$. This spectral parameter variability closely follows the {microwave}\ light curve variability seen in Fig.~\ref{f_light_curves_OVSA_KW}. We note that this simple spectral fitting assumes a single spectral component. Since this need not generally be the case, the 3D modeling described in Section \ref{S_modeling} will include multiple components and spatially varying magnetic fields.
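Equation~(\ref{Eq_mw_fit}) and its derived quantities are easy to evaluate; the sketch below (written in Python, whereas the OVSA fitting is done in IDL) reproduces the limiting indices $\alpha_{\rm lf}=\alpha$ and $\alpha_{\rm hf}=\alpha-\beta$ and locates the spectral peak numerically:

```python
import numpy as np

def mw_spectrum(f, A, B, alpha, beta):
    """Generic microwave-burst fit function:
    S = e^A f^alpha [1 - exp(-e^B f^-beta)], with f in GHz."""
    return np.exp(A) * f**alpha * (1.0 - np.exp(-np.exp(B) * f**(-beta)))

def spectral_indices(alpha, beta):
    """Low- and high-frequency spectral indices implied by the fit."""
    return alpha, alpha - beta   # (alpha_lf, alpha_hf)

def peak_frequency(A, B, alpha, beta, f=np.geomspace(1.0, 18.0, 2000)):
    """Peak of the fitted spectrum on a dense frequency grid (GHz)."""
    return f[np.argmax(mw_spectrum(f, A, B, alpha, beta))]
```

At low frequencies the bracketed term saturates at 1, so $S\propto f^{\alpha}$; at high frequencies it reduces to $e^{B}f^{-\beta}$, so $S\propto f^{\alpha-\beta}$, which is exactly the correspondence stated above.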
The OVSA images are produced from the \verb"uv"-files using the IDL widget-based application \verb"wimagr".
This application can generate images using CLEAN or CLEAN+SelfCal methods with various combinations of time and frequency synthesis, import other context images for comparison, and save the results in various formats.
Most recently, the capability to analytically deconvolve a gaussian source model, which was employed in \citet{Fl_etal_2015} following \citet{1970AuJPh..23..113W}, has been added. Figure~\ref{f_Jet_2002_08_19_OVSA_decon} displays
in black contours an OVSA image synthesized over time (96~s at the burst peak) and frequency (3.2--5.4~GHz) obtained using the CLEAN+SelfCal restoration method; it is overlaid on a {\textit{TRACE}}\, 195~\AA\, image. These time and frequency boundaries are also shown in the OVSA time profile in Panel (b) of Figure \ref{f_20020819_overview}. In addition, Figure~\ref{f_Jet_2002_08_19_OVSA_decon} shows a model gaussian deconvolved source (red) obtained using the built-in analytical deconvolution of a gaussian source and a gaussian beam proposed by \citet{1970AuJPh..23..113W}, where the effect of the finite and anisotropic beam of the array (dotted oval) has been removed. To produce an OVSA image, high flux and precise, accurate phase determination are required. Phase error increases with frequency, and we were not able to produce reasonable images above 5 GHz. At low frequencies (e.g. 1--2 GHz), where the jet is more dominant, there is not sufficient flux to produce OVSA images.
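For gaussian source and beam profiles, convolution adds squared widths, so deconvolution subtracts them in quadrature; the one-axis case is sketched below (the general case of misaligned elliptical gaussians requires the full formulae of \citet{1970AuJPh..23..113W}):

```python
import math

def deconvolve_fwhm(obs_fwhm, beam_fwhm):
    """Deconvolve a gaussian beam from a gaussian source along one principal
    axis: convolving gaussians adds their FWHMs in quadrature, so the
    intrinsic width is the quadrature difference."""
    if obs_fwhm <= beam_fwhm:
        raise ValueError("source unresolved along this axis")
    return math.sqrt(obs_fwhm**2 - beam_fwhm**2)
```

For example, an observed 50-arcsecond FWHM with a 30-arcsecond beam implies a 40-arcsecond intrinsic source along that axis; an observed width at or below the beam width means the source is unresolved.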
The OVSA phase calibration employs observations of cosmic sources; thus, normally, the accuracy of source location is only limited by the phase fluctuations of the cosmic source (``calibrator'') measurement---typically, a few arcsec. No additional coalignment was applied to plot the images in Figure~\ref{f_Jet_2002_08_19_OVSA_decon}.
Comparison of the magnetic topology highlighted by the {\textit{TRACE}}\, image with the deconvolved {microwave}\ image suggests that the imaged {microwave}\ emission is elongated along the jet, rather than from a closed magnetic flux tube associated with the X-ray sources.
Indirect support for this possible role of the jet comes from the relative lack of signatures of trapping, which would broaden the temporal peaks in the radio emission time profile.
\section{3D modeling}
\label{S_modeling}
To facilitate solar 3D modeling and comparison with data, the NJIT group developed a powerful tool, GX Simulator (Gyrosynchrotron/X-ray Simulator), which is now distributed in SolarSoft (Nita et al. 2015, 2018).
The tool allows the user to create a 3D model from an imported photospheric magnetic base map. In GX Simulator various aspects of the model can be displayed, manipulated, or altered, such as magnetic field components, field lines, flux tubes, thermal and nonthermal density, and nonthermal energy parameters (e.g. power-law index, pitch angle, high/low energy cutoffs). We use the tool to build 3D models using realistic magnetic configurations obtained from force-free extrapolations of photospheric magnetograms from the Michelson Doppler Imager (MDI; see Figure \ref{f_Jet_2002_08_19_images} for the magnetogram in comparison with the flare/jet).
We then compute emission in EUV, HXRs, and radio by numerically solving the corresponding radiative transfer equations. We thus synthesize, from a single 3D model, multi-wavelength images and spectra and compare them with all available observed data to validate the model.
\begin{figure*}\centering
\includegraphics[width=0.75\textwidth,clip]{f8a.eps}
\includegraphics[width=0.23\textwidth,trim=0 -1mm 0 0, clip]{f8b.eps}
\caption{\label{f_perspective_jet_model} Modeled magnetic field geometry and spatial distribution of nonthermal electrons in GX Simulator. LFFF extrapolations were performed on MDI data in two sets with different values of $\alpha$, including closed, twisted field connecting flare footpoints (loops marked I and II), and the open, low-twist field along which the jet emerges (marked J), as well as additional closed field (marked A). Red lines trace a few field lines to serve as ``center'' (reference) lines of the flux tubes. Yellow lines indicate open field, while green lines indicate closed. Panel (a) shows a composite view of all relevant flux tubes over the MDI data from which they were extrapolated. In Panel (b), a zoomed-in view shows the nonthermal electron distribution (green volume) on the closed loops I and II overplotted on a {\textit{RHESSI}}\ 25-80~keV image. In Panel (c), a different perspective view shows more clearly the escape path of the jet J. The modeled accelerated electron distributions at the jet and adjacent closed loop are illustrated with blue and green intensities, providing a visualization of the nonthermal electrons that escape along the jet. Panel (d) shows the extrapolated field lines (open and closed) superposed on a {\textit{TRACE}}\ EUV image.
}
\end{figure*}
\begin{figure*} \centering
\includegraphics[width=0.75\textwidth,clip]{f9a.eps}
\includegraphics[width=0.75\textwidth,clip]{f9b.eps}
\caption{(Top row) Observed microwave total power spectra at two time frames: 21:00:34~UT (left) and 21:01:30~UT (right) and the corresponding synthetic spectra obtained from the 3D models using GX Simulator. The dotted line in the upper right panel shows the contribution computed from the closed flux tube, while the dashed line shows the contribution from the jet itself, necessary to reconcile the observed data at low frequencies. (Bottom row) Observed and modeled X-ray emission for the two time intervals.
In the bottom right, the model shown by the solid line considers only the contribution from the jet-related part of the model, Fig.~\ref{f_perspective_jet_model}(c); its contribution to the total X-ray spectrum is small throughout the spectrum. The relative contribution of the jet electrons to the predicted X-ray spectra agrees with {\textit{RHESSI}}\ imaging data, which reveal the jet only (and barely) in the 18--30 keV range.
\label{f_20020819_model_spec}
}
\end{figure*}
\subsection{Approach to magnetic field modeling}
\label{sec:mdi}
The 195~\AA\ {\textit{TRACE}}\, EUV images illustrate a complex magnetic topology, which is unlikely to be recovered using either a potential field (PF) or a linear force-free field (LFFF) model of the coronal magnetic field; thus, a nonlinear force-free field (NLFFF) model would be the most suitable for 3D magnetic field modeling, as in \citet{2018ApJ...852...32K}. However, NLFFF modeling requires vector components of the magnetic field at the photosphere, which were unavailable at the time of our event. Instead, we create separate magnetic data cubes for distinct components of the overall magnetic structure. Modeling them individually using LFFF extrapolations allows those different components, which may be closed or open field, to have dissimilar values of the force-free parameter $\alpha$, as in \citet{Fl_etal_2016coldFl}.
In particular, different models must be developed for the (presumably highly variable) {microwave}\ emission from the jet and (more gradual) emission from any closed loops, from which most of the soft X-ray emission originates.
We next develop two different models for the two time frames indicated by dashed vertical lines in Figure~\ref{f_20020819_overview}b. At 21:00:34~UT, the {microwave}\ emission shows a valley between peaks (presumably a slowly varying pedestal due to a trapped component), while at 21:01:30~UT (close to the peak of the burst) the jet produces a highly variable component. We refer to these as ``pre-jet'' and ``jet'' intervals.
\subsection{Modeling the pre-jet interval: closed field}
For the pre-jet time (21:00:34~UT) we use the morphology implied by the observed X-ray structure as an initial guess. We find that the data near the flare site are well represented by an LFFF extrapolation (3D magnetic data cube) with force-free parameter $\alpha=(8\pm2)\times10^{-10}$~cm$^{-1}$. Our closed field lines computed from the extrapolated 3D magnetic data cube connect all three primary nonthermal HXR sources observed in the event; see Figure~\ref{f_perspective_jet_model}, panels (a) and (b).
Using one of these field lines as a reference, we
create a flux tube (flux tube I in Fig.~\ref{f_perspective_jet_model}) connecting two HXR sources, presumably footpoints, and fill this loop with hot, dense, thermal plasma ($T=29$~MK, $n_0=10^{11}$~cm$^{-3}$) and nonthermal electrons with a looptop number density $n_b=4\times10^{9}$~cm$^{-3}$ above $E_{\min}=25$~keV. A relatively dense thermal plasma is needed to suppress, via the Razin effect, otherwise strong {microwave}\ emission at frequencies below 10~GHz. The total number of nonthermal electrons integrated over the loop volume is $\int n_bdV\approx 2.8\times10^{34}$. Any larger nonthermal number density would overestimate the HXR flux compared with that observed, while a larger thermal density or lower temperature would overestimate the {\textit{TRACE}}\, 195~\AA\ emission\footnote{A routine computing the EUV {\textit{TRACE}}\ emission has recently been added to the GX Simulator SSW distribution.}. Taking the nonthermal electron spectral index to be $\delta = 4.8$, which ensures the correct high-frequency slope of the microwave emission, we find that the modeled microwave spectrum matches the observed one; see Fig.~\ref{f_20020819_model_spec} (top left). This tells us that the nonthermal electrons trapped in the loop connecting the HXR footpoints do play a role in forming the slowly-varying microwave emission, which may form the pedestal on top of which the highly variable emission is superimposed. This flux tube alone is insufficient to reproduce the thermal X-ray emission. Thus, we added one more thermal loop with number density $n_0=6\times10^{10}$~cm$^{-3}$ ($EM=5.16\times10^{47}$~cm$^{-3}$) and temperature $T=25$~MK, consistent with those obtained from the {\textit{RHESSI}}\ fit. Together, these components reproduce both the low-energy imaging data and the spectrum (Fig.~\ref{f_20020819_model_spec}, bottom left).
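The quoted loop-integrated electron number can be checked for geometric plausibility. The sketch below is a rough order-of-magnitude estimate only, since it assumes a uniform flux tube while in the model $n_b$ is the looptop value and varies along the loop:

```python
import math

# Rough consistency check: the implied emitting volume and flux-tube radius
# for loop I, assuming (as a simplification) a uniform nonthermal density.
n_b = 4e9                 # looptop nonthermal density above 25 keV, cm^-3
N_total = 2.8e34          # quoted loop-integrated nonthermal electron number
V = N_total / n_b         # implied emitting volume, cm^3 (7e24)
l = 1.875e9               # central field-line length of loop I, cm (Table 1)
r = math.sqrt(V / (math.pi * l))  # implied flux-tube radius, cm
```

The implied radius, $\sim3.4\times10^{7}$~cm (a few hundred km, i.e.\ sub-arcsecond), is consistent with a thin flare loop.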
The close agreement between the model and the data is due to the recently added ability of GX Simulator to account for collisions not only with hydrogen but also with other atoms, as well as free-free and free-bound transitions, various line emissions in the thermal part of the spectrum, and thick-target emission in the nonthermal part of the spectrum.
The data cube so far, however, does \textit{not} include an open magnetic flux tube matching the EUV jet.
\subsection{Modeling the jet: open flux tube}
Creating the jet-like open flux tube requires a less twisted magnetic structure, which we illustrate by creating a separate model for the second time frame, 21:01:30~UT. To produce the required magnetic data cube we employ an LFFF extrapolation with force-free parameter $\alpha=(1.5\pm0.8)\times10^{-10}$~cm$^{-1}$ (almost PF); see Figure~\ref{f_perspective_jet_model}, panel (c). We populate this open flux tube with moderately dense thermal plasma ($T=2.05$~MK, $n_0=10^{10}$~cm$^{-3}$; needed to make the jet visible in the 195~\AA\ channel) and nonthermal electrons ($n_b=1.5\times10^{7}$~cm$^{-3}$ above $E_{\min}=25$~keV, and spectral index $\delta = 5$). The total number of nonthermal electrons integrated over the jet volume is $\int n_bdV\approx 2.9\times10^{34}$. The spectral index of the nonthermal electrons at the jet is not well constrained by the data, but its exact value has only a minor effect on the optically thick low-frequency emission. The nonthermal number density and spatial extent of the nonthermal electrons are reliably constrained by the data, at least within a factor of a few. Microwave emission computed from this jet model yields the correct location and also offers a good match to the low-frequency (optically thick) portion of the microwave spectrum (dashed curve in Fig.~\ref{f_20020819_model_spec}, top right), but cannot account for the high-frequency part of the spectrum, because the magnetic field is too weak along the jet. To account for this high-frequency component we created a closed flux tube, adjacent to the open flux tube but located lower in the corona (green field lines in Figure~\ref{f_perspective_jet_model}, panels (c) and (d)).
We added a fraction of nonthermal electrons to this closed flux tube where the magnetic field is reasonably strong: $n_b=3\times10^{8}$~cm$^{-3}$ above $E_{\min}=25$~keV, and spectral index $\delta = 4.2$. The total number of nonthermal electrons integrated over the loop volume is $\int n_b dV\approx 1.4\times10^{34}$. This offers a good match to the high-frequency part of the microwave spectrum; see the dotted curve in the top right panel of Figure~\ref{f_20020819_model_spec}.
In all cases we computed the EUV emission in the 195~\AA\ passband and made sure that the model does not overestimate the observed EUV emission.
Although it is not possible to perform detailed spectroscopy on the faint HXRs from the jet itself given the bright thermal and nonthermal HXR emission from the flare site, the integrated {\textit{RHESSI}}\, emission provides upper limits on the jet HXRs. From the images in Figure \ref{f_Jet_2002_08_19_images}, the low-energy ($\lesssim$18 keV) and high-energy ($\gtrsim$30 keV) emission probably emanates from thermal flare plasma and chromospheric footpoints, respectively; these are the usual features observed in HXR flares. Only the ``mid-range'' emission (18--30 keV) shows a hint of HXR extension along the jet. Our modeled electron distribution for the jet is consistent with these observations. The black histogram in the lower-right panel of Figure \ref{f_20020819_model_spec} represents the synthetic jet HXR emission and lies $\sim$2 orders of magnitude below the synthetic emission from the closed loops across most energies, except for a knee around the electron low-energy cutoff at 25 keV. Near that energy, the synthetic jet emission comes within an order of magnitude of the closed-loop emission, making it barely imageable within {\textit{RHESSI}}'s imaging dynamic range. Note that the low-energy cutoff is not well constrained, but the {\textit{RHESSI}}\, emission gives us confidence that it lies between 18 and 30 keV, as it would be difficult to find another way for the jet HXRs to be barely imageable by {\textit{RHESSI}}\, only in that range. (A ``knee'' is necessary.)
\subsection{Observational constraints and the allowed parameter ranges for the modeled distributions}
Table~\ref{Table_parms_full} gives a summary of the GX Simulator electron distributions, including thermal and nonthermal parameters for all of the modeled loops, both at the initial time (21:00:34 UT), when only two twisted loops are included, and at the later time of the jet (21:01:30 UT), when the structure has changed to an open set of field lines (along which the jet emerges) and one closed loop. Here, we discuss the parameter ranges and how they are constrained by the broad set of observations. We do not claim that this model is a unique solution; it is conceivable that other flux tube geometries and other spectral assumptions (besides our assumed gyrosynchrotron model) could fit the data. However, after an extensive (though non-exhaustive) effort to seek believable alternatives, we did not find other configurations or models that well represented all the data sets.
The magnetic field lines were extrapolated from \textit{SOHO}/MDI data using LFFF extrapolation, with force-free parameters selected so as to reproduce the magnetic connectivity suggested by the data, as explained in Section \ref{sec:mdi}. The parameters given are the best-fit values based on these extrapolations. The force-free parameter has an overall ``nominal'' value and varies slightly along each flux tube due to numerical effects in the LFFF modeling; this range of variance is given in the table. In all cases, the extrapolated field lines connect the HXR footpoints and match the location of heated plasma observed by {\textit{TRACE}}. \\
\subsubsection{Time interval 1: Loops I and II.}
For Loops I and II we use thermal parameters (emission measure $EM$, temperature $T$) constrained by spectral fits to {\textit{RHESSI}}\, integrated data, so the thermal plasma parameters closely agree with the X-ray data. Nonthermal parameters in the tens-of-keV range (cutoff energy $E_0$ and spectral index $\delta_1$) are also fit from an integrated {\textit{RHESSI}}\, spectrum. As is often the case in fitting {\textit{RHESSI}}\, integrated spectra, the low-energy cutoff of the electrons is poorly constrained in the presence of bright thermal plasma. The fit value (25 keV) is best viewed as an upper limit on this parameter. To ensure that our results do not hinge on our adjusting this parameter, we use 25 keV as the low-energy cutoff for \textit{all} nonthermal electron distributions considered in this paper. If the value is truly lower then even more energy is present in accelerated electrons. (See Section 3 of \citet{holman2011} for a thorough discussion.)
The loop width is taken to match the low-frequency, optically thick part of the {microwave}\ emission. This emission is fully defined by the product of only two parameters---source area and brightness temperature. The latter is fully defined by the magnetic field (fixed for a given flux tube) and the parameters of the nonthermal electron distribution (fixed from the X-ray fit), so the loop area can be derived from the {microwave}\ data.
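The area estimate follows from the Rayleigh-Jeans relation for an optically thick source, $S = 2 k_B T_b \nu^2 A / (c^2 d^2)$, with $d$ the Sun-Earth distance. The sketch below uses illustrative numbers (10~sfu at 3~GHz with $T_b=10^8$~K are assumptions for demonstration, not the measured values for this event):

```python
# Rayleigh-Jeans area estimate for an optically thick microwave source (cgs).
K_B = 1.380649e-16   # Boltzmann constant, erg/K
C = 2.99792458e10    # speed of light, cm/s
AU = 1.495979e13     # Sun-Earth distance, cm
SFU = 1e-19          # 1 solar flux unit, erg s^-1 cm^-2 Hz^-1

def source_area(flux_sfu, t_b, freq_hz):
    """Source area (cm^2) implied by an optically thick flux density
    of flux_sfu [sfu] at brightness temperature t_b [K]."""
    return flux_sfu * SFU * C**2 * AU**2 / (2.0 * K_B * t_b * freq_hz**2)
```

For the illustrative numbers above this gives an area of order $10^{17}$--$10^{18}$~cm$^2$, i.e.\ of order 10~arcsec on a side, showing how the microwave flux pins down the loop cross-section once $T_b$ is fixed.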
The remaining parameters are: the break energy, high-energy spectral index, and the maximum energy; all of these high-energy parameters are constrained by the {microwave}\ data. The observed {microwave}\ spectrum is well reproduced with a single power-law electron distribution ($\delta=4.8$, see Table~\ref{Table_parms_full}) if the maximum electron energy $E_{\max}\sim400$~keV. If we instead permit the maximum electron energy to be larger than $E_{\max}>500$~keV, then a double power-law model is needed with a break around 300~keV. In other words, a softening of the distribution above 300 keV is necessary, whether by a break in the power law or a hard cutoff to the distribution. The details of this softening have a negligible effect on our computation of total electron energy or in the interpretation of the scenario, but a single power-law extending to the energies above 500~keV is excluded. Table \ref{Table_parms_full} lists parameters for only a single power law. \\
\subsubsection{Time interval 2: Jet plus adjacent closed loop.}
The X-ray, EUV, and microwave data work together to constrain the thermal distribution at this time. The jet density must be below $\sim1.7\times10^{10}$~cm$^{-3}$ in order that the cutoff frequency is below $f=1.2$~GHz (where the {microwave}\ spectrum starts). We approximate this density in our model as $n_0=1\times10^{10}$~cm$^{-3}$. This density could be adjusted, but should not be orders of magnitude lower or the jet would not be observed in the EUV given the {\textit{TRACE}}\ 195\AA\ response and reasonable temperatures.
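The density limit follows from requiring the plasma frequency to lie below the lowest observed microwave frequency; with the standard approximation $f_p\,[\mathrm{Hz}] \approx 8.98\times10^{3}\sqrt{n_e\,[\mathrm{cm^{-3}}]}$:

```python
def max_density(f_ghz):
    """Electron density (cm^-3) whose plasma frequency equals f_ghz,
    i.e. the maximum density transparent to emission at that frequency,
    using f_p [Hz] ~= 8.98e3 * sqrt(n_e [cm^-3])."""
    return (f_ghz * 1e9 / 8.98e3) ** 2
```

Evaluating this at the 1.2~GHz low edge of the {microwave}\ spectrum gives $n_e\approx1.8\times10^{10}$~cm$^{-3}$, consistent with the $\sim1.7\times10^{10}$~cm$^{-3}$ limit quoted above.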
For the closed loop, the thermal density is taken to be $n_0=2\times10^{11}$~cm$^{-3}$. Values larger than this would violate three observational constraints: (1) 195~\AA\ emission would be stronger than observed for all reasonable coronal (including flaring) temperatures; (2) SXR emission would be stronger than is observed; and (3) the {microwave}\ spectrum would be dominated by a free-free component, which would not match the observed {microwave}\ spectral shape. The density could be smaller but would require adjustment of the loop geometry; therefore, there is a range of thermal densities that could fit the observed data. For the likely value of $1\times10^{10}$~cm$^{-3}$ for the jet density, a temperature of 2.05 MK fits the measured {\textit{TRACE}}\ 195~\AA\ emission from the jet location. Since the {\textit{TRACE}}\ 195~\AA\ temperature response function \citep{handy1999} decreases for all higher temperatures, a higher density would be required for any jet temperatures higher than this (and would violate the density constraint from {microwave}\ data mentioned earlier).
For the closed loop, a density of $n_0=2\times10^{11}$~cm$^{-3}$ can produce the observed 195~\AA\ brightness at the jet base, where this loop is located, for a temperature between 7.3 and 10.6~MK.
Since the plasma temperature is not constrained within this range, we choose a temperature of 7.3 MK.
For the nonthermal electrons in the jet, the {microwave}\ spectrum deviates from observations if the upper cutoff to the electron energy $E_{\max}$ is below $\sim300$~keV; it is not well constrained from above. Further decrease of this parameter could be compensated by an increase of the source area, but this would be in conflict with the {microwave}\ source size. For the closed loop the {microwave}\ spectrum deviates from observations if $E_{\max}< 0.8$~MeV. In the model we use $E_{\max}=0.9$~MeV.
\begin{deluxetable*}{|ll|cc|cc|}
\tablecolumns{5}
\tablewidth{0pc}
\tabletypesize{\footnotesize}
\tablecaption{Field and electron distributions modeled in GX Simulator based on MDI, {\textit{RHESSI}}, OVSA, and {\textit{TRACE}}\, data. }
\tablehead{\colhead{Parameter}& \colhead{Symbol, units} & \colhead{Loop I} & \colhead{Loop II} & \colhead{Open jet } & \colhead{Adjacent loop} }
\startdata
\multicolumn{2}{c|}{Time} & \multicolumn{2}{c|}{21:00:34 UT} & \multicolumn{2}{c}{21:01:30 UT} \\
\hline
{\textit{Central field line}:} & & & & & \\
\quad Length & $l$, cm & $1.875\cdot10^9$ & $1.27\cdot10^9$ & $1.83\cdot10^{10}$ & $5.0\cdot10^9$ \\
\quad Force-free parameter: Nominal &
$\alpha/ (10^{-10}$cm$^{-1}$) & $8.18$ & $8.18$ & 1.36 & 1.36\\
\quad \quad \quad \quad Along the flux tube
& $\alpha/ (10^{-10}$cm$^{-1}$) & $8\pm2$ & $8\pm2$ & $0$ & $1.5\pm0.8$ \\
\quad Number of twists & $N_{twist}={\alpha l}/({4\pi})$ & $0.12$ & $0.08$ & $0$ & $0.06$\\
\quad Magnetic field, positive footpoint & $B_{f+}$,~G & 655 & 292 & 258 & 423 \\
\quad Magnetic field, negative footpoint & $B_{f-}$,~G & -397 & -585 & -3625 & -768 \\
\quad Magnetic field, looptop & $|B_{\rm ref}|$,~G & 294 & 268 & 102 & 111 \\
\textit{Flux tube:} & & & & & \\
\quad Reference cross-section radius\tablenotemark{a} & $a=b$, cm & $1\cdot10^8$ & $5.87\cdot10^8$ & $8.0\cdot10^8$ & $6.2\cdot10^8$\\
\quad Model volume; $\left[\int n_0 dV\right]^2/\int n_0^2 dV$ & $V$, cm$^3$ & $2.43\cdot10^{25}$ & $5.64\cdot10^{26}$ & $8.35\cdot10^{27}$ & $1.77\cdot10^{27}$ \\
{\textit{Thermal plasma}:} & & & & & \\
\quad Number density at central field line & $n_0$, cm$^{-3}$ & $1.0\times10^{11}$ & $0.6\times10^{11}$ & $<1.7\times10^{10}$ & $2\times10^{11}$ \\
\quad Emission Measure; $\int n_0^2 dV$ & $EM$, cm$^{-3}$ & $3.3\times10^{46}$ & $5.16\times10^{47}$ & $1.1\times10^{47}$& $4.46\times10^{48}$ \\
\quad Mean number density; $\int n_0^2 dV/\int n_0 dV$ & $\langle n_0\rangle$, cm$^{-3}$ & $3.7\times10^{10}$ & $3.0\times10^{10}$ & $3.62\times10^{9}$ & $5.0\times10^{10}$ \\
\quad Temperature & $T$, MK & 29 & 25 & $>2.05$ & 7.3--10.6 \\
\quad Parameters of transverse distribution\tablenotemark{*} & $p_0,~p_1,~p_2,~p_3$ & 2, 2, 0, 0 & 2, 2, 0, 0 & 2, 2, 0, 0 & 0.5, 0.5, 0, 0 \\
\quad Parameters of distribution along the loop\tablenotemark{**} & $q_0,~q_1,~q_2$ & barometric\tablenotemark{$\dag$} & barometric\tablenotemark{$\dag$} & barometric\tablenotemark{$\dag$} & 5, 0, -0.9 \\
\textit{Nonthermal electrons:} & & & & & \\
\quad Number density at central field line & $n_b$, cm$^{-3}$ & $4.0\times10^{9}$ & --- & $1.5\times10^{7}$ & $3.0\times10^{8}$ \\
\quad Mean number density; $\int n_b^2 dV/\int n_b dV$ & $\langle n_b\rangle$, cm$^{-3}$ & $1.37\times10^{9}$ & --- & $5.08\times10^{6}$ & $1.06\times10^{8}$ \\
\quad Parameters of transverse distribution\tablenotemark{$\ddag$} & $p_0,~p_1,~p_2,~p_3$ & 2, 2, 0, 0 & --- & 2, 2, 0, 0 & 2, 2, 0, 0 \\
\quad Parameters of distribution along the loop\tablenotemark{$\flat$} & $q_0,~q_1,~q_2$ & 1, 2.9, -0.3 & --- & 5, 0, 0.2 & 11, 0, -0.7 \\
\quad Total electron number & $N_b$ & $2.8\times10^{34}$ & --- & $2.9\times10^{34}$ & $1.4\times10^{34}$ \\
\quad Low-energy cutoff & $E_0$, keV & 25 & --- & 25 & 25 \\
\quad Maximum electron energy & $E_{\rm max}$, MeV & 0.4 & --- & $>0.3$ & $>0.8$ \\
\quad Electron spectral index & $\delta_{n1}(<E_{\rm break})$ & 4.8 & --- & 5 & 4.2 \\
\enddata
\tablenotetext{a}{In our case, the reference location is chosen to be that of minimum magnetic field value.}
\tablenotetext{*}{The distribution is described by Equation~(\ref{Eq_n0_xy}).}
\tablenotetext{**}{The distribution is described by Equation~(\ref{Eq_n0_s}).}
\tablenotetext{$\dag$}{The distribution is described by Equation~(\ref{Eq_n0_z}).}
\tablenotetext{$\ddag$}{The distribution is described by Equation~(\ref{Eq_nb_xy}).}
\tablenotetext{\flat}{The distribution is described by Equation~(\ref{Eq_nb_s}).}
\label{Table_parms_full}
\end{deluxetable*}
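As a quick internal cross-check of Table~\ref{Table_parms_full}: by the definitions in the row labels, $\langle n_0\rangle = \int n_0^2\,dV / \int n_0\,dV$ and $V=\left[\int n_0\,dV\right]^2/\int n_0^2\,dV$, so the tabulated values should satisfy $EM\approx\langle n_0\rangle^2 V$ up to rounding. A minimal sketch (the dictionary layout and helper name are ours; the values are transcribed from the table):

```python
# Consistency check of tabulated quantities: EM should equal <n0>^2 * V
# exactly by the row definitions; small deviations reflect rounding.
columns = {
    "Loop I":        {"V": 2.43e25, "EM": 3.3e46,  "mean_n0": 3.7e10},
    "Loop II":       {"V": 5.64e26, "EM": 5.16e47, "mean_n0": 3.0e10},
    "Open jet":      {"V": 8.35e27, "EM": 1.1e47,  "mean_n0": 3.62e9},
    "Adjacent loop": {"V": 1.77e27, "EM": 4.46e48, "mean_n0": 5.0e10},
}

def em_consistent(col: dict, rtol: float = 0.05) -> bool:
    """True if the tabulated EM matches <n0>^2 * V to within rtol."""
    predicted = col["mean_n0"] ** 2 * col["V"]
    return abs(predicted - col["EM"]) / col["EM"] < rtol
```

All four columns pass at the few-percent level, consistent with the quoted precision.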
\section{Discussion}
To recap the observations, we find high time variability in HXRs and microwaves, with HXR spectral hardening strongly correlated with fast bursts of intense flux. HXR images are dominated by the flaring (thermal) loops at low energies and (nonthermal) footpoints at high energies, but at intermediate energies ($\sim$18--30 keV), some {\textit{RHESSI}}\, emission is elongated along the jet, where OVSA data also reveal an elongated microwave source at the location of the jet.
We interpret the fast HXR spikes as bursts of particle acceleration, and the duration of these peaks gives the timescale over which electrons are accelerated. The very short time lag between the burst intensity and its hardening (less than 200~ms in the case of G1 and no lag in the case of G2) implies that electrons are accelerated quickly to $\sim100$~keV. Combining the time lag from Figure \ref{f_cross_correlation} with the peak width from Figure \ref{fig:autocorrelation}, we conclude that the event exhibits electron energy increases on an average timescale of 0.2 seconds, with acceleration episodes lasting, on average, 1 second. Once accelerated, the electrons that have access to open field simply escape, and so there is no lengthening of HXR pulse durations due to magnetic trapping. We surmise that the short pulses are associated with the acceleration of electrons into the jet. Since these electrons escape, many injections over a substantial time interval are needed to replenish the electrons and produce observable emission; this is consistent with the high variability lasting for $\gtrsim$1 minute in the HXR and {microwave}\ time profiles. For accelerated electrons that are injected into the flare loop, on the other hand, trapping could lengthen the duration of associated HXR and radio components; this could account for the $>10$ second timescales in the {Konus-\textit{Wind}}\, data and the pedestal in the {microwave}\ emission. We note that \citet{kiplinger1983} performed a search for fast HXR variations, finding them in $\sim$10\% of the flares studied. \citet{qiu2012} and \citet{cheng2012} also found $<$1 second HXR peaks in demodulated {\textit{RHESSI}}\, data. It is difficult to tell from these past studies (using data taken before the \textit{SDO} era, and only occasionally overlapping with {\textit{TRACE}}\ coverage) whether these quickly-varying events were systematically associated with jets. 
Future work will explore this subject using more recent observations.
The variability observed in OVSA data is unusual for a microwave burst but when observed is interpreted as pulsed or beam-like acceleration of electrons \citep[e.g.][]{Altyntsev_etal_2008, Fl_etal_2008}. This behavior is difficult to reconcile with variability of a single nonthermal electron population due to transport effects in a given magnetic flux tube, and instead either requires a number of distinct sources (loops) or a sequence of distinct acceleration / injection episodes in a single source.
However, strong variability of the spectral peak frequency, sensitive to the magnetic field value at the source, favors the scenario with multiple sources---loops with accordingly different magnetic field magnitudes. Most of the {microwave}\ emission comes from the closed loop (A), while a contribution from the open field with a lower magnetic field magnitude is needed to account for the low-frequency spectral knee. While gyrosynchrotron is not the only mechanism that can produce broadband {microwave}\ emission, we do not find reasonable alternatives for this event. We attempted fits to the {microwave}\, data using a thermal bremsstrahlung model and found that this scenario would require unphysically large densities or source sizes; in either case, the emission would violate {\textit{RHESSI}}\ observational constraints. Additionally, cooling and heating timescales in the corona would not allow the observed fast ($\lesssim$ 1 second) time variations for a thermal population.
The question of symmetry (or lack thereof) in numbers of flare electrons accelerated upward (toward interplanetary space) versus downward (toward the chromosphere) is important for flare acceleration theories, but is poorly understood to date. Some studies of in-situ electrons at 1 AU find that escaping electrons represent only a minor ($<1\%$) fraction of the electrons accelerated in flares \citep{lin1971, krucker2007}. However, more recent work found similar energies in electrons accelerated upwards and downwards \citep{james2017}. In our event, the combination of the data and 3D modeling allows us to estimate the nonthermal electron population in the open, jet-forming flux tube, finding $\sim3\times10^{34}$ electrons (above 25~keV) on the open field. This is comparable to the number of electrons in the closed flux tube. Our result is in line with the work of \citet{james2017} and also with a recent finding of \citet{Fl_etal_2016coldFl}, who analyzed a flare produced by an interaction between two loops---one small and one large. In that flare the accelerated electrons were divided roughly equally between the two loops.
\section{Conclusions}
In summary, we have used a combination of HXR, EUV, and radio data, combined with modeling of the emission from thermal and accelerated
electron populations, to form a credible spatial and energetic distribution for the accelerated electrons in a solar
jet. The direct microwave detection and the HXR upper limits are needed to construct the simulated accelerated electron distribution. As far as
we are aware, this is the first case in which microwave gyrosynchrotron emission has been detected from an open, rather
than closed, magnetic configuration, and is the most direct constraint to date on the accelerated electron population within a
solar jet. It is extremely important that the model built using actual magnetic field data yields an excellent match of the simulated and
observed radio image and spectrum, thus validating and quantifying the nonthermal electron distribution on the open field flux tube. We
stress that for the identification and analysis of such an event, the necessary approach was the careful consideration of HXR, EUV,
and radio data combined with modeling.
Jets play an important role in solar and heliospheric physics, as they provide a direct way for impulsive solar events to influence the heliosphere. We expect the study of jets to become even more prominent with the promise of in-situ measurements by \textit{Solar Probe Plus} and \textit{Solar Orbiter}, which will measure the energetic particles that reach the heliosphere directly and through their Type III radio emission. Future investigation using cutting-edge instruments will utilize direct imaging of solar HXRs -- for example using the technology from the successful \textit{FOXSI} rocket program \citep{krucker2014, glesener2016} -- as well as the microwave imaging spectroscopy offered by EOVSA \citep[e.g.][]{wang2015}. With these instruments, the escaping electrons could be imaged at their source and these measurements could be compared with those at multiple points in the heliosphere, allowing for a complete picture of electron acceleration and transport.
\acknowledgments
This work was supported in part by NSF grants
AST-1615807 and AGS-1817277, NASA grants
NNX16AL67G, 80NSSC18K0015,
80NSSC18K0667 to the New Jersey
Institute of Technology, and by an NSF Faculty Development Grant (AGS-1429512) to the University of Minnesota. The authors are grateful to Alexandra Lysenko and the {Konus-\textit{Wind}}\ team for making their data available and to Dale Gary, Gelu Nita, S\"{a}m Krucker, and Sophie Musset for insightful comments on the text.
\bibliographystyle{apj}
|
{
"timestamp": "2018-10-31T01:06:12",
"yymm": "1806",
"arxiv_id": "1806.00858",
"language": "en",
"url": "https://arxiv.org/abs/1806.00858"
}
|
\subsection{Keywords}
Riemann zeta function, zeros of zeta function, recursion relation of zeta function, functional equation of zeta function
\subsection{Mathematical Classification}
MSC: 11M26
\end{abstract}
\section{Introduction}
\subsection{Motivation}
Developing methods for studying the nature of the nontrivial zeros of the Riemann zeta function on the critical line is very important. In this paper one attempts to find a way of eliminating the zeta function from the functional equation, allowing possibly new expressions to be developed in terms of more elementary functions. In particular, we attempt to find an expression for the phase of the zeta function along the critical line.
\subsection{Preliminaries}
As is well known, the Riemann zeta function has two families of zeros: along the negative real axis at the even integer points, and along the critical line $x=\frac{1}{2}$. The latter appear at points whose exact positions are not known beforehand but require substantial numerical work, the points being of a transcendental nature. Every zero in the upper half of the complex plane has a mirror pair in the lower half plane. This work focuses on these nontrivial zeros. The following treatment relies on the validity of the Riemann hypothesis \cite{Titchmarsh1999}, which requires all nontrivial zeros to reside on the critical line $x=\frac{1}{2}$ and none outside it. We may be living in the times when the hypothesis is finally proven, if that has not already been done.
Now what will happen if a zeta function approaching a zero is divided by another zeta function approaching the corresponding paired zero in the lower half plane? As both of them go to zero, one would expect the ratio to become singular. While investigating numerically the behavior of the Riemann zeta function approaching some of its zeros on the critical line, it was noted that the phase of the function has a particular feature: as the argument $s$ approaches any zero $s_0$ of the zeta function $\zeta(s)$, the ratio appears to be
\begin{equation}
\lim_{s\rightarrow{s_0}}\frac{\zeta(s)}{\zeta(\bar{s})}=e^{i\theta} \label{eqn2}
\end{equation}
with $s=\frac{1}{2}+i{t}$. One can evaluate this ratio while approaching the zero $s_0$ from any direction, and it appears that the ratio is not singular. This is proven in the following section. It suggests that there could exist another way to study the zeros of the zeta function.
\section{Elimination of the Zeta Function}
Since the zeta function is meromorphic and real on the real axis (apart from the pole at $s=1$), one has by Schwarz's reflection principle
\begin{equation}
\zeta(\bar{s})=\overline{\zeta(s)} \label{eqn30}
\end{equation}
The function has a general form
\begin{equation}
\zeta(s)=e^{i\phi}\rho \label{eqn40}
\end{equation}
over the complex plane (with $\phi,\rho \in \mathbb{R}$). In particular, along the critical line one has
\begin{equation}
\zeta(s_0)=e^{i\phi_0}\rho_0 \label{eqn42}
\end{equation}
and then one has the following
\begin{equation}
\overline{\zeta(s_0)}={{\rho}_0}e^{-i\phi_0} \label{eqn52}
\end{equation}
But according to equation (\ref{eqn30}) this is equal to
\begin{equation}
\zeta(\bar{s_0})={{\rho}_0}e^{-i\phi_0} \label{eqn56}
\end{equation}
Therefore,
\begin{equation}
\frac{\zeta(s_0)}{\zeta(\bar{s_0})}=e^{2i\phi_0} \label{eqn60}
\end{equation}
proving the assertion and verifying the original numerical observation. This holds along the entire critical line. Since $\bar{s_0}=1-s_0$ on the critical line, the function ratio (\ref{eqn2}) can be rewritten as follows at or very near a zero $s_0$
\begin{equation}
\frac{\zeta(s_0)}{\zeta(\bar{s_0})}=\frac{\zeta(s_0)}{\zeta(1-s_0)} \label{eqn20}
\end{equation}
The zero can be approached along the critical line from either direction; then the ratio's amplitude remains at unity and only the phase angle varies. It can also be approached from other directions in the complex plane, in which case the amplitude is not unity while the phase angle varies; the correct value, with amplitude unity, is finally attained at the zero.
The functional equation of the Riemann zeta function with argument $s \in \mathbb{C}$ is well known \cite{Riemann1858}, \cite{Siegel1932}, \cite{Edwards2001}, \cite{Titchmarsh1999}. One applies it in the following form.
\begin{equation}
\frac{\zeta(s)}{\zeta(1-s)}=\frac{(2\pi)^s}{2\cos(\frac{\pi{s}}{2})\Gamma(s)} \label{eqn71}
\end{equation}
From this it follows that at any nontrivial zero along the critical line
\begin{equation}
\frac{\zeta(s_0)}{\zeta(\bar{s_0})}=\frac{(2\pi)^{s_0}}{2\cos(\frac{\pi{s_0}}{2})\Gamma(s_0)}=e^{2i\phi_0+2\pi{iN}} \label{eqn100}
\end{equation}
This is true for any point $s_0$ on the complex plane, and the right-hand equality is used in the following. The zeta function has thus been eliminated from the equation. The ambiguity modulo $2\pi i$, with $N\in\mathbb{N}$, is carried along to the final expressions. The Weierstrass formula for the Gamma function $\Gamma(s)$ is valid for $s \in \mathbb{C}$
\begin{equation}
\frac{1}{\Gamma(s)}={s}e^{\gamma{s}}\prod_{k=1}^{\infty}{\left(1+\frac{s}{k}\right)e^{-\frac{s}{k}}} \label{eqn120}
\end{equation}
and one can substitute it getting
\begin{equation}
e^{2i\phi_0+2\pi{iN}}=\frac{{(2\pi)}^{s_0}{s_0}e^{\gamma{s_0}}\prod_{k=1}^{\infty}{\left(1+\frac{s_0}{k}\right)e^{-\frac{s_0}{k}}}}{2\cos(\frac{\pi{s_0}}{2})} \label{eqn140}
\end{equation}
Taking the logarithm of the equation above one will obtain
\begin{equation}
2i\phi_0+2\pi{iN}={s_0}(\gamma+\ln(2\pi))+\ln\left(\frac{s_0}{2}\right)-\ln\left(\cos\left(\frac{\pi{s_0}}{2}\right)\right)+\sum_{k=1}^{\infty}{\left(-\frac{s_0}{k}+\ln\left(1+\frac{s_0}{k}\right)\right)} \label{eqn160}
\end{equation}
This equation with a complex variable $s_0$ is valid for all nontrivial zeros.
Writing $s_0=\frac{1}{2}+i{t}$, where $t \in \mathbb{R}$, and substituting leads to
\begin{equation}
2i\phi_0+2\pi{iN}=\left({\frac{1}{2}+i{t}}\right)(\gamma+\ln(2\pi))+\ln\left(\frac{\frac{1}{2}+i{t}}{2}\right)-\ln\left(\cos\left(\frac{\pi({\frac{1}{2}+i{t}})}{2}\right)\right) \nonumber
\end{equation}
\begin{equation}
+\sum_{k=1}^{\infty}{\left[-\frac{\frac{1}{2}+i{t}}{k}+\ln\left(1+\frac{\frac{1}{2}+i{t}}{k}\right)\right]} \label{eqn200}
\end{equation}
The complex logarithms and the cosine term can be further broken down into real and imaginary parts, giving
\begin{equation}
2i\phi_0+2\pi{iN}=\left({\frac{1}{2}+i{t}}\right)(\gamma+\ln(2\pi))+\ln\left(\frac{\sqrt{1+4{t^2}}}{4}\right)-\ln\left(\sqrt{\frac{\cosh^2(\frac{\pi{t}}{2})+\sinh^2(\frac{\pi{t}}{2})}{2}}\right) \nonumber
\end{equation}
\begin{equation}
+i\arctan(2{t})+i\pi{M}-i\arctan\left(-\tanh\left(\frac{\pi{t}}{2}\right)\right)+i\pi{L} \nonumber
\end{equation}
\begin{equation}
+\sum_{k=1}^{\infty}{\left[-\frac{1}{2k}-\frac{it}{k}+\ln\left(\sqrt{\left(1+\frac{1}{2k}\right)^2+\frac{t^2}{k^2}}\right)+i\arctan\left(\frac{t}{k+\frac{1}{2}}\right)\right]}+i\pi{K} \label{eqn220}
\end{equation}
The real and imaginary parts of each term are now in a form allowing a trivial separation.
\subsection{The Real Part}
For studying the zeros, one is interested in the real part of equation (\ref{eqn220}),
\begin{equation}
0=\frac{1}{2}(\gamma+\ln(2\pi))+\ln\left(\frac{\sqrt{1+4{t^2}}}{4}\right)+ \nonumber
\end{equation}
\begin{equation}
-\ln\sqrt{\frac{\cosh^2(\frac{\pi{t}}{2})+\sinh^2(\frac{\pi{t}}{2})}{2}}+ \nonumber
\end{equation}
\begin{equation}
+\sum_{k=1}^{\infty}{\left[-\frac{1}{2k}+\frac{1}{2}\ln\left(1+\frac{1}{k}+\frac{1}{4k^2}+\frac{t^2}{k^2}\right)\right]} \label{eqn250}
\end{equation}
Using $\cosh^2(\frac{\pi{t}}{2})+\sinh^2(\frac{\pi{t}}{2})=\cosh(\pi{t})$, this can be developed further, simplifying to
\begin{equation}
\gamma+\ln\left(\frac{\pi}{4}\right)=\ln\left(\frac{\cosh(\pi{t})}{1+4{t^2}}\right) \nonumber
\end{equation}
\begin{equation}
-\sum_{k=1}^{\infty}{\ln\left[e^{-\frac{1}{k}}\left({1+\frac{1}{k}+\frac{1}{4k^2}+\frac{t^2}{k^2}}\right)\right]} \label{eqn270}
\end{equation}
In spite of its apparent complexity, this equation is an identity valid for all real $t$.
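The identity above can be checked numerically with standard-library Python alone; the helper name `identity_residual` and the truncation length are ours. The summand behaves like $(t^2-\frac{1}{4})/k^2$ for large $k$, so a few hundred thousand terms give residuals well below $10^{-4}$:

```python
import math

EULER = 0.5772156649015329  # Euler-Mascheroni constant gamma

def identity_residual(t: float, terms: int = 500000) -> float:
    """Left side minus right side of the identity; should vanish for all real t."""
    lhs = EULER + math.log(math.pi / 4)
    series = 0.0
    for k in range(1, terms + 1):
        # ln[e^{-1/k}(1 + 1/k + 1/(4k^2) + t^2/k^2)], written as -1/k + ln(...)
        series += -1.0 / k + math.log(1 + 1/k + 1/(4*k*k) + t*t/(k*k))
    rhs = math.log(math.cosh(math.pi * t) / (1 + 4 * t * t)) - series
    return lhs - rhs
```

At $t=0$ the identity reduces to the classical evaluation $\sum_k[\frac{1}{k}-2\ln(1+\frac{1}{2k})]=\gamma+2\ln\Gamma(\frac{3}{2})$, which the residual confirms.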
\subsection{Solving the Phases}
The imaginary part of equation (\ref{eqn220}) yields a general expression for the limit phase $\phi_0$ at a point along the critical line
\begin{equation}
\phi_0+\frac{\pi{M}}{2}=\frac{t}{2}(\gamma+\ln(2\pi))+\frac{1}{2}\arctan(2{t})-\frac{1}{2}\arctan\left(-\tanh\left(\frac{\pi{t}}{2}\right)\right) \nonumber
\end{equation}
\begin{equation}
+\frac{1}{2}\sum_{k=1}^{\infty}{\left[\frac{-t}{k}+\arctan\left(\frac{t}{k+\frac{1}{2}}\right)\right]} \label{eqn320}
\end{equation}
The result is now determined modulo $\frac{\pi}{2}$, with $M\in\mathbb{N}$. The resulting function is odd with respect to $t$.
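Equation (\ref{eqn320}) can be cross-checked against the phase computed directly from the right-hand side of (\ref{eqn100}), which holds at any point of the critical line. A sketch in Python (helper names are ours; the complex Gamma function is supplied by the standard Lanczos approximation, since the standard library lacks one); the two phases should agree modulo $\pi/2$:

```python
import cmath
import math

EULER = 0.5772156649015329  # Euler-Mascheroni constant gamma

# Standard Lanczos coefficients (g = 7)
_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6,
            1.5056327351493116e-7]

def cgamma(z: complex) -> complex:
    """Complex Gamma function via the Lanczos approximation with reflection."""
    if z.real < 0.5:
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _LANCZOS[0]
    for i in range(1, 9):
        x += _LANCZOS[i] / (z + i)
    t = z + 7.5
    return cmath.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def phase_from_chi(t: float) -> float:
    """phi = arg(chi(1/2 + i t))/2 with chi(s) = (2 pi)^s / (2 cos(pi s/2) Gamma(s))."""
    s = 0.5 + 1j * t
    chi = (2 * math.pi) ** s / (2 * cmath.cos(math.pi * s / 2) * cgamma(s))
    return cmath.phase(chi) / 2

def phase_from_series(t: float, terms: int = 200000) -> float:
    """phi from the series expression; agrees with phase_from_chi mod pi/2."""
    acc = 0.0
    for k in range(1, terms + 1):
        acc += -t / k + math.atan(t / (k + 0.5))
    return (t / 2 * (EULER + math.log(2 * math.pi))
            + 0.5 * math.atan(2 * t)
            - 0.5 * math.atan(-math.tanh(math.pi * t / 2))
            + 0.5 * acc)

def phases_agree(t: float, tol: float = 1e-4) -> bool:
    """Compare the two phases modulo pi/2, allowing series truncation error."""
    d = (phase_from_series(t) - phase_from_chi(t)) % (math.pi / 2)
    return min(d, math.pi / 2 - d) < tol
```

The agreement does not require $t$ to be the ordinate of a zero, since (\ref{eqn100}) holds at any point of the line; only at a zero does the phase acquire its interpretation as the limit value $\phi_0$.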
\subsection{Numerical Evaluation of the Phase}
A crude calculation of the ratio from equation (\ref{eqn60}) gives a phase of $-80.95$ degrees (modulo $90$ degrees) for the first nontrivial zero and $77.36$ degrees for the second. One can plot the phase as a function of $t$ along the line $x=0.5$, as in Figure \ref{Fig.1}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{zeros.jpg}
\caption{Phase in degrees along $x=0.5$}
\label{Fig.1}
\end{figure}
This is the angle $\phi_0$ as such. Taking the phase modulo $\frac{\pi}{2}$ is more natural and produces Figure \ref{Fig.2}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{zerosmod90.jpg}
\caption{$Mod({\frac{\pi}{2}})$ of the phase in degrees along $x=0.5$}
\label{Fig.2}
\end{figure}
The curve for negative $t$ has the opposite sign, since the function is odd.
\section{Discussion}
A few interesting results have been obtained. The first is that, according to equation (\ref{eqn60}), the ratio between the zeta function at any zero and at its conjugate is not singular, but always has absolute value one and a particular phase. This holds along the whole critical line ($x=\frac{1}{2}$) but fails immediately outside of it; the failure is not dramatic, but it causes an error which increases as the point of focus moves farther from the critical line. The second is that equation (\ref{eqn100}) contains only elementary functions for studying the zeros of the zeta function. The third finding follows from the previous two: equation (\ref{eqn320}) presents a simplified expression for calculating the phase at a zero. These equations form the main results of this work.
|
{
"timestamp": "2018-06-05T02:18:55",
"yymm": "1806",
"arxiv_id": "1806.01148",
"language": "en",
"url": "https://arxiv.org/abs/1806.01148"
}
|
\section{Introduction}\label{intro}
A Riemannian manifold $(M^n,g)$ is called a gradient Yamabe soliton if there exist a smooth function $F$ on $M$ and a constant $\rho\in \mathbb{R}$, such that
\begin{equation}\label{YS}
(R-\rho)g=\nabla\nabla F,
\end{equation}
where $R$ is the scalar curvature on $M$ and $\nabla\nabla F$ is the Hessian of $F$.
If $\rho>0$, $\rho=0$, or $\rho<0$, then the Yamabe soliton is called shrinking, steady, or expanding, respectively.
If the potential function $F$ is constant, then the Yamabe soliton is called trivial. It is known that any compact Yamabe soliton is trivial (see for example \cite{CMM12},~\cite{Hsu12}). Yamabe solitons are special solutions of the Yamabe flow which was introduced by R. Hamilton \cite{Hamilton89}.
The Yamabe soliton equation $(\ref{YS})$ is similar to the equation of Ricci solitons.
Ricci solitons are special solutions of the Ricci flow
which was also introduced by R. Hamilton \cite{Hamilton82}.
As is well known, by using the Ricci flow, G. Perelman \cite{Perelman1},~\cite{Perelman2},~\cite{Perelman3} proved Thurston's geometrization conjecture \cite{Thurston} and Poincar\'e conjecture.
In his first paper, Perelman stated that ``any 3-dimensional complete noncompact $\kappa$-noncollapsed gradient steady Ricci soliton with positive curvature is rotationally symmetric, namely the Bryant soliton''.
In \cite{CCCMM14}, H.-D. Cao, G. Catino, Q. Chen, C. Mantegazza and L. Mazzieri gave a partial answer to the conjecture.
Finally, S. Brendle proved the conjecture \cite{Brendle}.
In this paper, we consider a similar problem. More precisely, we consider the following.
\begin{problem}\label{PCY}
Classify nontrivial non-flat complete 3-dimensional gradient Yamabe solitons.
\end{problem}
P. Daskalopoulos and N. Sesum \cite{DS13} showed that ``all locally conformally flat complete gradient Yamabe solitons with positive sectional curvature have to be rotationally symmetric". The proof was inspired by H.-D. Cao and Q. Chen's paper \cite{CC11}.
Furthermore, they constructed some examples of rotationally symmetric gradient Yamabe solitons on $\mathbb{R}^n$ with positive sectional curvature.
Recently, H.-D. Cao, X. Sun and Y. Zhang \cite{CSZ12} relaxed the assumption and showed that any nontrivial non-flat complete and locally conformally flat gradient Yamabe soliton with nonnegative scalar curvature is rotationally symmetric.
G. Catino, C. Mantegazza and L. Mazzieri's work \cite{CMM12} is also important.
They classified complete conformal gradient solitons with nonnegative Ricci tensor. As a corollary, they classified nontrivial complete gradient Yamabe solitons with nonnegative Ricci tensor. Finally, complete gradient Yamabe solitons were shown to be rotationally symmetric under either of the following assumptions: (1) the Ricci tensor is nonnegative and positive definite at some point, by Catino, Mantegazza and Mazzieri \cite{CMM12}; or (2) the Ricci curvature is positive, by Cao, Sun and Zhang \cite{CSZ12}.
Therefore, in this paper, we consider $3$-dimensional Yamabe solitons without any nonnegativity assumptions on the curvature.
Our main theorem gives an affirmative partial answer to Problem~\ref{PCY}:
\begin{theorem}\label{main}
Let $(M^3,g,F)$ be a nontrivial non-flat $3$-dimensional complete gradient Yamabe soliton with divergence-free Cotton tensor $($i.e., Bach flat$)$.
${\rm I}.$ If $M$ is steady, then $M$ is rotationally symmetric and equal to the warped product
$$([0,\infty),dr^2)\times_{|\nabla F|}(\mathbb{S}^{2},{\bar g}_{S}),$$
where $\bar g_{S}$ is the round metric on $\mathbb{S}^{2}.$
${\rm II}$. If $M$ is shrinking, then either
$(1)$ $M$ is rotationally symmetric and equal to the warped product
$$([0,\infty),dr^2)\times_{|\nabla F|}(\mathbb{S}^{2},{\bar g}_{S}),$$
where $\bar g_{S}$ is the round metric on $\mathbb{S}^{2},$ or
$(2)$ $|\nabla F|$ is constant and $M$ is isometric to the Riemannian product
$$(\mathbb{R},dr^2)\times \left(\mathbb{S}^2\left(\frac{1}{2}\rho|\nabla F|^2\right),\bar g\right),$$
where $\mathbb{S}^2(\frac{1}{2}\rho|\nabla F|^2)$ is the sphere of constant Gaussian curvature $\frac{1}{2}\rho|\nabla F|^2$.
${\rm III}$. If $M$ is expanding, then either
$(1)$ $M$ is rotationally symmetric and equal to the warped product
$$([0,\infty),dr^2)\times_{|\nabla F|}(\mathbb{S}^{2},{\bar g}_{S}),$$
where $\bar g_{S}$ is the round metric on $\mathbb{S}^{2},$ or
$(2)$ $|\nabla F|$ is constant and $M$ is isometric to the Riemannian product
$$(\mathbb{R},dr^2)\times \left(\mathbb{H}^2\left(\frac{1}{2}\rho|\nabla F|^2\right),\bar g\right),$$
where $\mathbb{H}^2(\frac{1}{2}\rho|\nabla F|^2)$ is the hyperbolic space of constant Gaussian curvature $\frac{1}{2}\rho|\nabla F|^2$.
\end{theorem}
\begin{remark}
For dimension $n\geq4$, Bach \cite{Bach} introduced the Bach tensor in the $1920$s.
\begin{align}
B_{ij}
=&\frac{1}{n-3}\nabla^k\nabla^lW_{ikjl}+\frac{1}{n-2}R_{kl}W_i{}^{k}{}_j{}^l\\
=&\frac{1}{n-2}(\nabla_kC_{kij}+R_{kl}W_i{}^{k}{}_j{}^l),\notag
\end{align}
where $\nabla$ is the Levi-Civita connection, $W$ is the Weyl tensor, $R_{ij}$ is the Ricci tensor, and $C$ is the Cotton tensor.
In \cite{CCCMM14}, the Bach tensor for $3$-dimensional manifolds was introduced as follows:
$$B_{ij}=\nabla_kC_{kij}.$$
\end{remark}
The remaining sections are organized as follows. Section~$\ref{Pre}$ contains some necessary definitions and preliminary geometric results.
Section~$\ref{Proof of main}$ is devoted to the proof of Theorem~$\ref{main}$.
In Section~\ref{NRIC}, we consider complete gradient Yamabe solitons whose Ricci curvature is nonpositive in the direction of the gradient of the potential function.
\section{Preliminary}\label{Pre}
The Riemannian curvature tensor is defined by
$$R(X,Y)Z=-\nabla_X\nabla_YZ+\nabla_Y\nabla_XZ+\nabla_{[X,Y]}Z.$$
The Ricci tensor $R_{ij}$ (also denoted by ${\rm Ric}$) is defined by
$R_{ij}=R_{ipjp}.$
The Weyl tensor $W$ and the Cotton tensor $C$ are defined by
\begin{align*}
R_{ijkl}
=&W_{ijkl}
+\frac{R}{(n-1)(n-2)}(g_{il}g_{jk}-g_{ik}g_{jl})\\
&-\frac{1}{n-2}(R_{il}g_{jk}+R_{jk}g_{il}-R_{ik}g_{jl}-R_{jl}g_{ik})\\
=&W_{ijkl}+S_{ik}g_{jl}+S_{jl}g_{ik}-S_{il}g_{jk}-S_{jk}g_{il},
\end{align*}
\begin{align*}
C_{ijk}
=&\nabla_iR_{jk}-\nabla_jR_{ik}-\frac{1}{2(n-1)}(g_{jk}\nabla_iR-g_{ik}\nabla_jR)\\
=&\nabla_iS_{jk}-\nabla_jS_{ik},\notag
\end{align*}
where $S={\rm Ric}-\frac{1}{2(n-1)}Rg$ is the Schouten tensor.
The Cotton tensor is skew-symmetric in the first two indices and totally trace free, that is,
$$C_{ijk}=-C_{jik} \quad \text{and} \quad g^{ij}C_{ijk}=g^{ik}C_{ijk}=0.$$
As is well known, a Riemannian manifold $(M^n,g)$ is locally conformally flat if and only if
(1) for $n\geq4$, the Weyl tensor vanishes; (2) for $n=3$, the Cotton tensor vanishes.
Moreover, for $n\geq4$, if the Weyl tensor vanishes, then the Cotton tensor vanishes. We also see that for $n=3$, the Weyl tensor always vanishes, but the Cotton tensor does not vanish in general.
We prove some formulas needed later.
Taking the trace of the Yamabe soliton equation \eqref{YS} gives
\begin{equation}\label{TYS}
n(R-\rho)=\Delta F,
\end{equation}
where $\Delta$ is the Laplacian on $M$.
In general, we have
\begin{equation}\label{p.1}
\Delta {\nabla}_iF={\nabla}_i\Delta F+R_{ij}{\nabla}_jF.
\end{equation}
Substituting
\begin{equation*}
\Delta {\nabla}_iF={\nabla}_k{\nabla}_k{\nabla}_iF={\nabla}_k((R-\rho)g_{ki})={\nabla}_iR,
\end{equation*}
and
\begin{equation*}
{\nabla}_i\Delta F={\nabla}_i(n(R-\rho))=n{\nabla}_iR,
\end{equation*}
into $(\ref{p.1})$, we have
\begin{equation}\label{p.2}
(n-1)\nabla_iR+R_{il}\nabla_lF=0.
\end{equation}
Thus, we have
\begin{equation}\label{p.3}
(n-1)g(\nabla R,\nabla F)=-{\rm Ric}(\nabla F,\nabla F).
\end{equation}
On the other hand, by $(\ref{p.2})$ and the contracted second Bianchi identity,
\begin{equation}\label{p.4}
(n-1)\Delta R+\frac{1}{2}g(\nabla R, \nabla F)+R(R-\rho)=0.
\end{equation}
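For completeness, $(\ref{p.4})$ is obtained by taking the divergence of $(\ref{p.2})$, using the contracted second Bianchi identity $\nabla_iR_{il}=\frac{1}{2}\nabla_lR$ together with the soliton equation $(\ref{YS})$ and its trace $(\ref{TYS})$:

```latex
\begin{align*}
0&=\nabla_i\left((n-1)\nabla_iR+R_{il}\nabla_lF\right)\\
&=(n-1)\Delta R+(\nabla_iR_{il})\nabla_lF+R_{il}\nabla_i\nabla_lF\\
&=(n-1)\Delta R+\frac{1}{2}\nabla_lR\,\nabla_lF+R_{il}(R-\rho)g_{il}\\
&=(n-1)\Delta R+\frac{1}{2}g(\nabla R,\nabla F)+R(R-\rho).
\end{align*}
```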
Combining $(\ref{p.3})$ with $(\ref{p.4})$, we obtain
\begin{equation}\label{p.5}
\Delta R=\frac{1}{2(n-1)^2}{\rm Ric}(\nabla F,\nabla F)-\frac{1}{n-1}R(R-\rho).
\end{equation}
\section{Proof of Theorem $\ref{main}$}\label{Proof of main}
In this section, we prove Theorem $\ref{main}$.
To prove Theorem \ref{main}, we use the following useful theorem by H.-D. Cao, X. Sun and Y. Zhang:
\begin{theorem}[\cite{CSZ12}]\label{Thm of CSZ12}
Let $(M^n,g,F)$ be a nontrivial complete gradient Yamabe soliton. Then, $|\nabla F|^2$ is constant on regular level surfaces of $F$, and either
$(1)$ $F$ has a unique critical point at some point $p_0\in M$, and $M$ is rotationally symmetric and equal to the warped product
$$([0,\infty),dr^2)\times_{|\nabla F|}(\mathbb{S}^{n-1},{\bar g}_{S}),$$
where $\bar g_{S}$ is the round metric on $\mathbb{S}^{n-1},$ or
$(2)$ $F$ has no critical point and $M$ is the warped product
$$(\mathbb{R},dr^2)\times_{|\nabla F|}(N^{n-1},\bar g),$$
where N is a Riemannian manifold of constant scalar curvature.
Furthermore, if the Ricci curvature of $N$ is nonnegative, then $M$ is isometric to the Riemannian product
$(\mathbb{R},dr^2)\times(N^{n-1},\bar g)$; if $R\geq0$, then either $R>0$, or $R=\overline R=0$ and $(M,g)$ is isometric to the Riemannian product $(\mathbb{R},dr^2)\times(N^{n-1},\bar g)$.
\end{theorem}
\begin{proof}[Proof of Theorem~$\ref{main}$]
We only have to consider case (2) of Theorem $\ref{Thm of CSZ12}$.
Since
\begin{align*}
\nabla_iB_{ij}
=&\nabla_i\nabla_kC_{kij}\\
=&\nabla_i\nabla_k(\nabla_kS_{ij}-\nabla_iS_{kj})\\
=&\nabla_i\nabla_k\nabla_kS_{ij}-\nabla_k\nabla_i\nabla_kS_{ij}\\
=&R_{ikkp}\nabla_pS_{ij}+R_{ikip}\nabla_kS_{pj}+R_{ikjp}\nabla_kS_{ip}\\
=&-R_{ip}\nabla_pS_{ij}+R_{kp}\nabla_kS_{pj}+(S_{ij}g_{kp}+S_{kp}g_{ij}-S_{ip}g_{kj}-S_{kj}g_{ip})\nabla_kS_{ip}\\
=&S_{ij}C_{kik}+S_{ip}C_{ijp}\\
=&-C_{jip}R_{ip},
\end{align*}
we have
\begin{align}\label{m1}
\nabla_i\nabla_jB_{ji}=-\nabla_iC_{ijk}R_{jk}-C_{ijk}\nabla_iR_{jk}.
\end{align}
By the definition and a property of the Cotton tensor,
\begin{align*}
C_{ijk}\nabla_iR_{jk}
=&C_{ijk}(C_{ijk}+\nabla_jR_{ik}+\frac{1}{4}(g_{jk}\nabla_iR-g_{ik}\nabla_jR))\\
=&|C_{ijk}|^2-C_{jik}\nabla_jR_{ik}.
\end{align*}
Thus, we have
\begin{equation}\label{m2}
C_{ijk}\nabla_iR_{jk}=\frac{1}{2}|C_{ijk}|^2.
\end{equation}
Substituting (\ref{m2}) into (\ref{m1}), we have
$$\nabla_i\nabla_jB_{ji}=-B_{jk}R_{jk}-\frac{1}{2}|C_{ijk}|^2.$$
By the assumption, the Bach tensor vanishes, so the identity above reduces to $0=-\frac{1}{2}|C_{ijk}|^2$; hence the Cotton tensor vanishes.
As in the proof of Theorem \ref{Thm of CSZ12}, it is shown that
in any open neighborhood $U$ of $N^{2}$ in which $F$ has no critical points,
$$g=dr^2+(F'(r))^2{\bar g}=dr^2+\frac{(F'(r))^2}{(F'(r_0))^2}g_{ab}(r_0,x)dx^adx^b,$$ where $(x^2, x^3)$ is any local coordinate system on $N^{2}$ and $\bar g=(F'(r_0))^{-2}\bar g_{r_0}$, where $\bar g_{r_0}$ is the induced metric on $N^{2}$.
By a direct calculation, we obtain the following curvature formulas for the warped product metric with warping function $|\nabla F|=F'(r)$.
For $a,b,c,d=2,3,$
\begin{align}\label{RT1}
R_{1a1b}&=-F'F'''{\bar g}_{ab},\quad R_{1abc}=0,\\
R_{abcd}&=(F')^2{\bar R}_{abcd}+(F'F'')^2(\bar g_{ad}\bar g_{bc}-\bar g_{ac}\bar g_{bd}),\notag
\end{align}
\begin{align}\label{RT2}
R_{11}=&-2\frac{F'''}{F'},\quad
R_{1a}=0,\\
R_{ab}=&\bar R_{ab}-((F'')^2+F'F''')\bar g_{ab},\notag
\end{align}
\begin{align}\label{RT3}
R=(F')^{-2}\bar R-2\Big(\frac{F''}{F'}\Big)^2-4\frac{F'''}{F'},
\end{align}
where the curvature tensors with bar are the curvature tensors of $(N,\bar g)$.
By (\ref{YS}),
\begin{align}\label{R-rho}
R-\rho=F''.
\end{align}
Since $(N^2,{\overline g})$ is a 2-dimensional manifold,
$${\overline R}_{abcd}=-\frac{\bar R}{2}(\bar g_{ad} \bar g_{bc}-\bar g_{ac}\bar g_{bd}),$$
$$\bar R_{ad}=\frac{\bar R}{2}\bar g_{ad}.$$
Substituting these into (\ref{RT1}) and (\ref{RT2}), we have
\begin{align}\label{RT11}
R_{1a1b}&=-F'F'''{\bar g}_{ab},\quad R_{1abc}=0,\\
R_{abcd}&=-(F')^3\Big(\frac{1}{2}F'R+2F'''\Big)(\bar g_{ad}\bar g_{bc}-\bar g_{ac}\bar g_{bd}),\notag
\end{align}
\begin{align}\label{RT22}
R_{11}=&-2\frac{F'''}{F'},\quad
R_{1a}=0,\\
R_{ab}=&\Big(\frac{R}{2}(F')^2+F'F'''\Big)\bar g_{ab}.\notag
\end{align}
Hence, the Cotton tensor $C_{ijk}$ is
\begin{equation*}
C_{ijk}=\left\{
\begin{aligned}
&\nabla_1(R_{ab}-\frac{1}{4}Rg_{ab})
&\quad (a,b=2,3),\\
&0\qquad (\text{otherwise}).
\end{aligned}
\right.
\end{equation*}
Thus, we have
\begin{equation}\label{key2}
\frac{R}{4}(F')^2+F'F'''=c~(\text{constant}).
\end{equation}
Combining \eqref{key2} with $(\ref{RT3})$, we have
$$(F'')^2=\frac{1}{2}\bar R -2c.$$
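In more detail, multiplying $(\ref{RT3})$ by $(F')^2$ and eliminating $F'F'''$ via \eqref{key2} gives

```latex
R(F')^2 = \bar R - 2(F'')^2 - 4F'F'''
        = \bar R - 2(F'')^2 - 4\Big(c - \frac{R}{4}(F')^2\Big),
```

so the $R(F')^2$ terms cancel and the identity above follows.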
Since $\bar R$ is constant, $F''$ is constant.
Thus, $R$ is constant by the Yamabe soliton equation.
Therefore, since $F''$ is constant and so $F'''=0$, equation $(\ref{key2})$ reduces to
\begin{equation}\label{key3}
\frac{1}{4}R(F')^2=c.
\end{equation}
If $c=0,$ then $R=0$. From this and $(\ref{RT11})$, $M$ is flat.
If $c\not=0$, then we have $R\not=0$ and $F'$ is constant. Thus, $R-\rho=F''=0.$
Case I. $M$ is steady: We have $R=\rho=0$, which contradicts $R\not=0$.
Case II. $M$ is shrinking: Since $R=\rho>0$, by $(\ref{RT3})$ we have
$\bar R=R(F')^2=\rho|\nabla F|^2>0$.
Case III. $M$ is expanding: Since $R=\rho<0$, by $(\ref{RT3})$ we have
$\bar R=R(F')^2=\rho|\nabla F|^2<0$.
\end{proof}
\section{Complete gradient Yamabe solitons with ${\rm Ric}(\nabla F,\nabla F)\leq0$}\label{NRIC}
As mentioned before, H.-D. Cao, X. Sun and Y. Zhang showed that any nontrivial non-flat complete and locally conformally flat gradient Yamabe soliton with $R\geq0$ is rotationally symmetric. Therefore, in this section, we consider Yamabe solitons with ${\rm Ric}(\nabla F,\nabla F)\leq0$ instead of ``locally conformally flat".
\begin{proposition}\label{prop1}
Let $(M^n,g,F)$ be an $n$-dimensional complete gradient Yamabe soliton with ${\rm Ric}(\nabla F, \nabla F)\leq0$.
Suppose that $F$ has no critical point. Then, the following holds.
$(1)$ $M$ is shrinking or steady: If $R\geq\rho$, then $R=\rho.$
$(2)$ There exists no expanding soliton with $R\geq0.$
\end{proposition}
As a corollary, by a similar argument to that in the proof of Theorem $\ref{main}$, we can classify nontrivial non-flat complete 3-dimensional gradient Yamabe solitons:
If $M$ is shrinking with $R\geq\rho$ and ${\rm Ric}(\nabla F,\nabla F)\leq0$, then either
(1) $M$ is rotationally symmetric and equal to the warped product
$$([0,\infty),dr^2)\times_{|\nabla F|}(\mathbb{S}^{2},{\bar g}_{S}),$$
where $\bar g_{S}$ is the round metric on $\mathbb{S}^{2}$, or
(2) $|\nabla F|$ is constant and $M$ is isometric to the Riemannian product
$$(\mathbb{R},dr^2)\times \left(\mathbb{S}^2\left(\frac{1}{2}\rho|\nabla F|^2\right),\bar g\right),$$
where $\mathbb{S}^2(\frac{1}{2}\rho|\nabla F|^2)$ is the sphere of constant Gaussian curvature $\frac{1}{2}\rho|\nabla F|^2$.
If $M$ is steady or expanding with $R\geq0$ and ${\rm Ric}(\nabla F,\nabla F)\leq0$, then
$M$ is rotationally symmetric and equal to the warped product
$$([0,\infty),dr^2)\times_{|\nabla F|}(\mathbb{S}^{2},{\bar g}_{S}).$$
\begin{proof}[Proof of Proposition $\ref{prop1}$]
As in the proof of Theorem $\ref{Thm of CSZ12}$, it is shown that
in any open neighborhood $U$ of $N^{n-1}$ in which $F$ has no critical points,
$$g=dr^2+(F'(r))^2{\bar g}=dr^2+\frac{(F'(r))^2}{(F'(r_0))^2}g_{ab}(r_0,x)dx^adx^b,$$ where $(x^2,\cdots, x^n)$ is any local coordinate system on $N^{n-1}$ and $\bar g=(F'(r_0))^{-2}\bar g_{r_0}$, where $\bar g_{r_0}$ is the induced metric on $N^{n-1}$.
By a direct calculation, we obtain the following curvature formulas for the warped product metric with warping function $|\nabla F|=F'(r)$.
For $a,b,c,d=2,\cdots,n,$
\begin{align}\label{RT1-2}
R_{1a1b}&=-F'F'''{\bar g}_{ab},\quad R_{1abc}=0,\\
R_{abcd}&=(F')^2{\bar R}_{abcd}+(F'F'')^2(\bar g_{ad}\bar g_{bc}-\bar g_{ac}\bar g_{bd}),\notag
\end{align}
\begin{align}\label{RT2-2}
R_{11}=&-(n-1)\frac{F'''}{F'},\quad
R_{1a}=0,\\
R_{ab}=&\bar R_{ab}-((n-2)(F'')^2+F'F''')\bar g_{ab},\notag
\end{align}
\begin{align}\label{RT3-2}
R=(F')^{-2}\bar R-(n-1)(n-2)\Big(\frac{F''}{F'}\Big)^2-2(n-1)\frac{F'''}{F'}.
\end{align}
By (\ref{YS}),
\begin{align}\label{R-rho-2}
R-\rho=F''.
\end{align}
Since $\nabla F=F'\frac{\partial}{\partial r}$,
\begin{equation}\label{Ricnf}
{\rm Ric}(\nabla F,\nabla F)=(F')^2R_{11}=-(n-1)F'F'''.
\end{equation}
By the assumption ${\rm Ric}(\nabla F,\nabla F)\leq0$, $(\ref{Ricnf})$ and $F'>0$, we have $R'=F'''\geq0$.
By the definition of the Laplacian,
\begin{align*}
\Delta R
=&g^{ij}(\partial_i\partial_jR-\Gamma_{ij}^k\partial_kR)\\
=&R''-g^{ij}\Gamma_{ij}^1R',
\end{align*}
where $\partial_1=\frac{\partial}{\partial r}$ and $\partial_i=\frac{\partial}{\partial x^i}~(i=2,\cdots,n).$
The Christoffel symbol is given by
\begin{align*}
\Gamma_{ij}^1
=&\frac{1}{2}g^{1k}(\partial_ig_{jk}+\partial_{j}g_{ik}-\partial_kg_{ij})\\
=&\frac{1}{2}(\partial_ig_{j1}+\partial_{j}g_{i1}-\partial_1g_{ij}).
\end{align*}
Here,
$$\partial_1g_{11}=0, \quad \partial_ag_{11}=0\quad\text{and}\quad \partial_1g_{ab}=2F'F''\bar g_{ab}.$$
Thus, we have
$$\Gamma_{11}^1=0,\quad \Gamma_{1a}^1=0\quad\text{and}\quad\Gamma_{ab}^1=-F'F''\bar g_{ab}.$$
Hence,
\begin{equation}\label{DR}
\Delta R=R''+(n-1)\frac{F''}{F'}R'.
\end{equation}
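As an independent sanity check (not part of the original argument), formula \eqref{DR} can be verified symbolically in the surface case $n=2$, where it reads $\Delta R=R''+\frac{F''}{F'}R'$; the sketch below takes $\bar g=dx^2$ for illustration and uses $F'>0$:

```python
import sympy as sp

r = sp.symbols('r')
F = sp.Function('F')(r)
R = sp.Function('R')(r)

# Warped-product metric g = dr^2 + (F'(r))^2 dx^2 on a surface (n = 2),
# in coordinates (r, x), taking gbar = dx^2 for illustration.
phi = sp.diff(F, r)   # warping function |nabla F| = F'(r)
sqrtg = phi           # sqrt(det g) = F'(r), using F' > 0
ginv_rr = 1           # g^{rr} = 1

# Laplace-Beltrami operator applied to the radial function R(r):
# Delta R = (1/sqrt(g)) d/dr( sqrt(g) g^{rr} dR/dr ); only the r-term survives.
lap = sp.simplify(sp.diff(sqrtg * ginv_rr * sp.diff(R, r), r) / sqrtg)

# Formula (DR) with n = 2: Delta R = R'' + (F''/F') R'.
claimed = sp.diff(R, r, 2) + sp.diff(F, r, 2) / sp.diff(F, r) * sp.diff(R, r)
assert sp.simplify(lap - claimed) == 0
```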
Combining $(\ref{DR})$ with $(\ref{p.5})$,
\begin{equation}\label{R''}
R''=-(n-1)\frac{F''}{F'}R'-\frac{1}{2(n-1)}F'R'-\frac{1}{n-1}R(R-\rho).
\end{equation}
Case $(1)$, $M$ is shrinking or steady:
By the assumption, $R\geq\rho(\geq0)$, that is, $F''\geq0$.
From this and $R'\geq0$,
$$F^{(4)}=R''\leq0.$$
Thus, $F''$ is a non-negative weakly concave function on $\mathbb{R}$, which means that $F''$ must be constant.
Hence, $R$ is constant. By $(\ref{R''})$, $R=\rho$.
Case $(2)$, $M$ is expanding: Assume that there exists an expanding Yamabe soliton with $R\geq0.$ The same argument shows $R=0,$ that is, $F''=-\rho>0$.
From this, $F'(>0)$ is a non-constant linear function on $\mathbb{R}$, which cannot happen.
\end{proof}
\begin{remark}
If we assume that ${\rm Ric} (\nabla F,\nabla F)\geq0$ instead of ${\rm Ric}(\nabla F,\nabla F)\leq0$ in Proposition $\ref{prop1}$, then we immediately obtain $R=\rho$ without the assumption $R\geq\rho$ (or $R\geq0$).
In fact, by $(\ref{Ricnf})$, $F'''\leq0.$ Thus, $F'$ is a positive weakly concave function on $\mathbb{R}$, which means that $F'$ must be constant.
Therefore, $R=\rho$.
As a result, we can get the same classification as in Theorem $\ref{main}$ for $n=3$, under ${\rm Ric}(\nabla F, \nabla F)\geq0$ instead of flatness of the Bach tensor.
\end{remark}
By a similar argument to that in the proof of Theorem $\ref{main}$,
we can get the following classification of complete gradient Yamabe solitons with ${\rm Ric}(\nabla F, \nabla F)\leq0$.
\begin{lemma}\label{lem1}
Let $(M^n,g,F)$ be an $n$-dimensional complete gradient Yamabe soliton with Ricci curvature bounded from below and ${\rm Ric}(\nabla F, \nabla F)\leq0$. Suppose that $F$ has no critical point.
Then, the following holds.
$(1)$ There exists no shrinking soliton with $R\leq0$.
$(2)$ $M$ is steady or expanding: If $R\leq \rho$, then $R=\rho.$
\end{lemma}
\begin{proposition}
Let $(M^3,g,F)$ be a nontrivial non-flat $3$-dimensional complete gradient Yamabe soliton with Ricci curvature bounded from below and ${\rm Ric}(\nabla F, \nabla F)\leq0$.
${\rm I}.$ $M$ is shrinking or steady: If $R\leq0$, then $M$ is rotationally symmetric and equal to the warped product
$$([0,\infty),dr^2)\times_{|\nabla F|}(\mathbb{S}^{2},{\bar g}_{S}),$$
where $\bar g_{S}$ is the round metric on $\mathbb{S}^{2}.$
${\rm II}$. $M$ is expanding: If $R\leq\rho$, then either
$(1)$ $M$ is rotationally symmetric and equal to the warped product
$$([0,\infty),dr^2)\times_{|\nabla F|}(\mathbb{S}^{2},{\bar g}_{S}),~\text{or}$$
$(2)$ $|\nabla F|$ is constant and $M$ is isometric to the Riemannian product
$$(\mathbb{R},dr^2)\times \left(\mathbb{H}^2\left(\frac{1}{2}\rho|\nabla F|^2\right),\bar g\right),$$
where $\mathbb{H}^2(\frac{1}{2}\rho|\nabla F|^2)$ is the hyperbolic space of constant Gaussian curvature $\frac{1}{2}\rho|\nabla F|^2$.
\end{proposition}
\begin{proof}[Proof of Lemma $\ref{lem1}$]
$(1)$ $M$ is shrinking: Assume that there exists a Yamabe soliton with $R\leq0$. Set $L=-R\geq0$.
By $(\ref{p.5})$,
\begin{align*}
\Delta L
=&-\frac{1}{2(n-1)^2}{\rm Ric} (\nabla F,\nabla F)+\frac{1}{n-1}L(L+\rho)\\
\geq&-\frac{1}{2(n-1)^2}{\rm Ric} (\nabla F,\nabla F)+\frac{1}{n-1}L^2.
\end{align*}
By the assumption ${\rm Ric}(\nabla F,\nabla F)\leq0$, we obtain
$$\Delta L\geq\frac{1}{n-1}L^2.$$
Since $L$ is nonnegative, by Omori-Yau's maximum principle, $L=0.$ Thus, $F''=-\rho$ (as in the proof of Proposition~\ref{prop1}). Hence, $F'(>0)$ is a non-constant linear function, which cannot happen.
$(2)$ $M$ is steady or expanding:
Set $u=\rho-R$. By $(\ref{p.5})$,
\begin{align*}\Delta u
=&-\frac{1}{2(n-1)^2}{\rm Ric} (\nabla F,\nabla F)+\frac{1}{n-1}R(R-\rho)\\
\geq&-\frac{1}{2(n-1)^2}{\rm Ric} (\nabla F,\nabla F)+\frac{1}{n-1}u^2.
\end{align*}
By the assumption ${\rm Ric}(\nabla F,\nabla F)\leq0$, we obtain
$$\Delta u\geq\frac{1}{n-1}u^2.$$
Since $u$ is nonnegative, by Omori-Yau's maximum principle, $u=0,$ that is, $R=\rho.$
\end{proof}
\noindent
{\bf Acknowledgments.}~
The work was done while the author was visiting the Department of Mathematics of Texas A\&M University-Commerce as a Visiting Scholar, and he is grateful to the department and the university for the hospitality he received during the visit.
\bibliographystyle{amsbook}
\section{Introduction}\label{sec:intro}
Modern communications systems usually employ adaptive modulation and coding (AMC) mechanisms to cope with the highly varying channel conditions. In an AMC scenario, the devices at the two endpoints of each communication link agree on a combination of modulation and coding through a control channel. However, in recent communications standards, the control channel can itself use one of several modulation and coding combinations. It has thus become essential for wireless devices to be able to blindly detect and decode the information on the control channel in order to successfully join the wireless network. In practice, several parameters may need to be blindly detected (e.g., modulation, coding, interleaving), but in this work we focus on the problem of blind channel code detection, which can be loosely formulated as follows. Given a set of candidate codes $\codeSet$, a set of noisy codewords, and the knowledge that all of the noisy codewords are produced by the same code $\code \in \codeSet$, what is the most ``plausible'' candidate code $C \in \codeSet$ to have generated those words?
The design of practical algorithms for the above version of blind detection of channel codes has drawn significant attention in the past years. For example, various heuristic methods have been proposed for the blind detection of Hamming and BCH codes~\cite{Yardi2014,Chabot2007}, convolutional codes~\cite{Cluzeau2009,Moosavi2011}, Turbo codes~\cite{Debessu2012,Tillich2014}, LDPC codes~\cite{Xia2014,Yu2016}, and polar codes~\cite{Condo2018, Condo2017, Giard2017, Giard2018}. In contrast, comparatively little is known about the fundamental computational complexity of the blind code detection problem.
\subsubsection*{Contributions} To the best of our knowledge, this is the first work that \emph{formally} studies the computational complexity of the blind code detection problem. To this end, in Section~\ref{sec:background} we first express the problem in a form that enables us to theoretically analyze its computational complexity. Then, in Section~\ref{sec:mdcd} we examine the practically relevant case where $\codeSet$ contains only a constant number of candidate linear codes (i.e., $|\codeSet| = \ell,~\ell > 0$) and we show that the \textsc{Minimum Distance Code Detection} problem in this case is NP-hard. In essence, our hardness result justifies the heuristic approach of a large body of existing work (cf.\ \cite{Yardi2014, Chabot2007, Cluzeau2009, Moosavi2011, Debessu2012, Tillich2014, Xia2014, Yu2016, Condo2018, Condo2017, Giard2017, Giard2018} and references therein). In the related work of~\cite{Valembois2001}, the author formulated the problem when $\codeSet$ is the set of all linear codes of dimension $k$. While this choice of $\codeSet$ is appropriate for some scenarios (cf.~\cite[Sec. I]{Carrier2018}), the case where $|\codeSet| = \ell$ is much more natural and has greater practical significance, since in most applications (cf.\ \cite{Yardi2014, Chabot2007, Cluzeau2009, Moosavi2011, Debessu2012, Tillich2014, Xia2014, Yu2016, Condo2018, Condo2017, Giard2017, Giard2018}) the set of candidate codes is usually small and pre-defined by the employed communication standard. We discuss the relation between~\cite{Valembois2001} and our work in more detail in Section~\ref{sec:valembois}. Finally, in Section~\ref{sec:open} we identify and discuss a number of interesting related open problems.
\section{Blind Code Detection Background}\label{sec:background}
In this section, we first provide some brief background on binary linear codes and we then define the \textsc{Minimum Distance Code Detection} problem.
\subsection{Binary Linear Codes}
A binary linear code $\code$ of length $n$ is a set of $n$-bit vectors, called \emph{codewords}, with the property that for any $\codeVec_1, \codeVec_2 \in \code$, we also have $\codeVec_1 + \codeVec_2 \in \code$, where additions are performed using {modulo-$2$} arithmetic. The dimension $k$ of the code $\code$ is equal to the dimension of the subspace spanned by the codewords in $\code$. The number of codewords of a binary linear code of dimension $k$ is $2^k$. A binary linear code $\code$ can be efficiently represented using a $k \times n$ binary generator matrix $\genMat$ of rank $k$, so that each codeword can be generated as $\mathbf{u}\genMat$, for some ${\mathbf{u} \in \{0,1\}^k}$, and where all operations are carried out using {modulo-$2$} arithmetic. We use $\text{span}(\genMat)$ to denote the row span of $\genMat$, i.e., $\text{span}(\genMat) = \left\{\mathbf{u}\genMat: \mathbf{u} \in \{0,1\}^k\right\}$. Note that $\code = \text{span}(\genMat)$ and, due to this equivalence, we slightly abuse the terminology for simplicity and we refer to $\genMat$ both as a \emph{generator matrix} and as a \emph{code} depending on the context.
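To make the definitions concrete, the span of a small generator matrix over GF($2$) can be enumerated directly; the $2\times4$ matrix below is an arbitrary illustrative choice, not taken from any standard:

```python
import itertools

# Hypothetical generator matrix of a k = 2, n = 4 binary linear code
# (an arbitrary example, not drawn from any standard).
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
k, n = len(G), len(G[0])

# span(G) = { uG : u in {0,1}^k }, with all arithmetic modulo 2.
code = {tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
        for u in itertools.product([0, 1], repeat=k)}

# G has rank k, so the code has exactly 2^k codewords.
assert len(code) == 2 ** k

# Linearity: the modulo-2 sum of any two codewords is again a codeword.
for c1 in code:
    for c2 in code:
        assert tuple((a + b) % 2 for a, b in zip(c1, c2)) in code
```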
\subsection{Minimum Distance \& Maximum Likelihood Code Detection}
The blind detection problem can be formally stated as follows. Let $\obsVec_1, \ldots, \obsVec_N$, denote a set of $N$ binary row vectors of length $n$ that are observed at the output of a noisy channel and let the matrix $\obsMat$ be defined as:
\begin{align}
\obsMat & = \begin{bmatrix}
\obsVec_{1}^T & \hdots & \obsVec_{N}^T
\end{bmatrix}^T.
\end{align}
We will refer to $\obsVec_1, \ldots, \obsVec_N$ as the \emph{noisy codewords} and to the matrix $\obsMat$ as the \emph{observation matrix}. The code detection problem can generally be defined as follows. Given a set of codes $\codeSet$, an observation matrix $\obsMat$, and the knowledge that all of the noisy codewords are produced by the same code in $\codeSet$, find a code $C \in \codeSet$ that optimizes an appropriately defined metric. We briefly describe two distinct code detection problems that use different metrics below.
In \textsc{Minimum Distance Code Detection} (MDCD) the goal is to minimize the sum of the minimum distances between the noisy codewords in $\obsMat$ and the code $\code$. More specifically, let $d_{\text{H}}(\mathbf{a},\mathbf{b})$ denote the Hamming distance between $\mathbf{a}$ and $\mathbf{b}$ and let:
\begin{align}
d(\obsVec_i,\code) &= \min _{\codeVec \in \code} d_{\text{H}}(\obsVec_i,\codeVec).
\end{align}
Then, the MDCD problem can be formulated as follows.
\vspace{0.1cm}
\begin{problem}{\textsc{Minimum Distance Code Detection} (MDCD)}
\textbf{Input:} Positive integers $N, n$, a binary $N \times n$ matrix $\obsMat$, and a set $\codeSet$ of binary linear codes of dimension $k\leq n$, where each $\code \in \codeSet$ is given by a generator matrix $\genMat$.
\textbf{Output:} A generator matrix $\genMat$ of a binary linear code $\code_{\text{MDCD}} \in \codeSet$ such that:
\begin{align}
\code_{\text{MDCD}} = & \arg \min _{\code \in \codeSet} \sum _{i=1}^Nd(\obsVec_i,\code), \label{eq:mdcr}
\end{align}
where potential ties are broken arbitrarily.
\end{problem}
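A direct (exponential-time) implementation of objective \eqref{eq:mdcr}, practical only for tiny instances, can serve as a reference; the two candidate codes below (a repetition code and a single-parity-check code) are arbitrary illustrative choices:

```python
import itertools

def span(G):
    """All codewords uG (mod 2) of the code generated by the rows of G."""
    k, n = len(G), len(G[0])
    return {tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
            for u in itertools.product([0, 1], repeat=k)}

def d_code(y, code):
    """d(y, C): minimum Hamming distance from y to any codeword of C."""
    return min(sum(a != b for a, b in zip(y, c)) for c in code)

def mdcd(Y, candidates):
    """Index of the candidate minimizing sum_i d(y_i, C); ties broken first."""
    return min(range(len(candidates)),
               key=lambda j: sum(d_code(y, span(candidates[j])) for y in Y))

# Hypothetical candidates: a length-3 repetition code and a
# single-parity-check code.
G1 = [[1, 1, 1]]
G2 = [[1, 1, 0], [0, 1, 1]]
# Two clean codewords of G1 plus a 1-bit corruption of a third.
Y = [(1, 1, 1), (1, 1, 1), (1, 1, 0)]
assert mdcd(Y, [G1, G2]) == 0
```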
\vspace{0.1cm}
\textsc{Maximum Likelihood Code Detection} (MLCD) is a closely related problem that is of particular interest because it minimizes the detection error rate when all codes in $\codeSet$ are equiprobable. Let us assume that transmission takes place over a binary symmetric channel (BSC) with crossover probability $p \in \left(0,\frac{1}{2}\right)$, which we denote by BSC$(p)$, and let $d_{\text{H}}(\mathbf{a},\mathbf{b})$ denote the Hamming distance between $\mathbf{a}$ and $\mathbf{b}$. The MLCD problem, which was derived in~\cite{Valembois2001}, can then be formulated as follows.
\vspace{0.1cm}
\begin{problem}{\textsc{Maximum Likelihood Code Detection} (MLCD)}
\textbf{Input:} Positive integers $N, n$, a binary $N \times n$ matrix $\obsMat$, and a set $\codeSet$ of binary linear codes of dimension $k\leq n$, where each $\code \in \codeSet$ is given by a generator matrix $\genMat$.
\textbf{Output:} A generator matrix $\genMat$ of a binary linear code $\code_{\text{MLCD}} \in \codeSet$ such that:
\begin{align}
\code_{\text{MLCD}} = & \arg \max _{\code \in \codeSet} \prod _{i=1}^N\sum _{\codeVec \in \code} \left(\frac{p}{1-p}\right)^{d_{\text{H}}(\obsVec_i,\codeVec)}, \label{eq:mlcd}
\end{align}
where potential ties are broken arbitrarily.
\end{problem}
\vspace{0.1cm}
We discuss the relation between the MDCD problem and the MLCD problem in more detail in Section~\ref{sec:open}.
\section{The MDCD Problem for $|\codeSet| = \ell$}\label{sec:mdcd}
In this section, we prove that when we are given a fixed set of $\ell$ binary linear codes, finding a code that minimizes the sum distance from the noisy codewords is NP-hard. By a fixed set, we mean here a set whose size is constant in the input parameters; this restriction can be added to the input of the formal definition of the MDCD problem. Typically, when studying the computational complexity of a problem, we refer to \emph{decision problems}, i.e., problems for which the answer is either ``yes'' or ``no''. In contrast, the MDCD problem defined above is an \emph{optimization problem}, i.e., a problem in which we are looking for a solution that optimizes an objective function, potentially under some constraints. However, the definition of NP-hardness can be extended to optimization problems using \emph{Turing reductions}, e.g., see the discussion on the complexity of search problems in~\cite[Chapter 5]{Garey1979}. We avoid talking about NP-completeness here intentionally, because the notion is only well-defined for the decision versions of the problems.
\begin{theorem}\label{thm:NPhard}
The MDCD problem for $|\codeSet|=\ell$ is NP-hard.
\end{theorem}
We construct a reduction from the \textsc{Minimum Distance Decoding} problem (MDD), proven to be NP-hard in \cite{Berlekamp1978}.\footnote{The MDD problem was referred to as the \textsc{Coset Weights} problem in \cite{Berlekamp1978}, where it was defined as a decision problem. We reduce from the optimization version of the MDD problem which is NP-hard as well, since the objective function is computable in polynomial time~\cite[Chapter 5]{Garey1979}.} The MDD problem can be formulated as follows.
\vspace{0.2cm}
\begin{problem}{\textsc{Minimum Distance Decoding} (MDD)}
\textbf{Input:} A generator matrix $\genMat$ of a binary linear code $\code$ of length $n$ and an $n$-bit binary vector $\mathbf{y}$.
\textbf{Output:} An $n$-bit binary vector $\hat{\codeVec} = \arg\min_{\codeVec \in \code} d_H(\mathbf{y},\codeVec)$.
\end{problem}
\vspace{0.2cm}
Our reduction constructs an algorithm $\mathcal{A}_{\text{MDD}}$ that solves the MDD problem when given access to any algorithm $\mathcal{A}_{\text{MDCD}}$ that solves the MDCD problem. The algorithm $\mathcal{A}_{\text{MDD}}$ only makes a polynomial number of calls to $\mathcal{A}_{\text{MDCD}}$ and only performs polynomial-time computations otherwise. Therefore, if an efficient algorithm for MDCD existed, $\mathcal{A}_{\text{MDD}}$ would solve the MDD problem in polynomial time, which is not possible (unless $\text{P}=\text{NP}$) since the MDD problem is NP-hard.
\begin{algorithm}[t]
\KwData{Full-rank $k \times n$ generator matrix $\genMat$, an $n$-bit binary vector $\mathbf{y}$.}
\KwResult{Codeword $\hat{\codeVec} = \arg\min_{\codeVec \in \code} d_H(\mathbf{y},\codeVec)$.}
$\genMat^{(k)}=\genMat$\;
$l=k$\;
\While{$l >0$}{
$\{\genMat_1,\genMat_2,\genMat_3\} = \textsc{SplitCover}(\genMat^{(l)})$\;\label{algline:MDCD4}
$\genMat^{(l-1)} = \mathcal{A}_{\text{MDCD}}(\mathbf{y},\{\genMat_1,\genMat_2,\genMat_3\})$\; \label{algline:MDCD5}
$l = l-1$\;
}
$\hat{\codeVec} = \genMat^{(0)}$\; \smallskip
\caption{Algorithm $\mathcal{A}_{\text{MDD}}$ for solving the MDD problem using $\mathcal{A}_{\text{MDCD}}$ as a subroutine.}\label{alg:2}
\end{algorithm}
More precisely, the $\mathcal{A}_{\text{MDCD}}$ algorithm has inputs $\codeSet$ (i.e., a set of $\ell$ generator matrices, here we take $\ell=3$) and the observation matrix $\obsMat$, and it outputs a generator matrix $\genMat$ for a code $\code \in \codeSet$ which is a solution to the MDCD problem. Our algorithm for solving the MDD problem using $\mathcal{A}_{\text{MDCD}}$ is given in Algorithm~\ref{alg:2}. The main idea is that, starting from the code $\genMat$ of dimension $k$ given as an input to the MDD problem, we call the \textsc{SplitCover} function described in Algorithm \ref{alg:3}. This function constructs (in polynomial time) three generator matrices $\genMat_1$, $\genMat_2$, and $\genMat_3$ of binary linear codes of dimension $(k-1)$, with the property that a codeword is generated by $\genMat$ if and only if it is generated by at least one of $\genMat_1$, $\genMat_2$, or $\genMat_3$. Then, we use the $\mathcal{A}_{\text{MDCD}}$ algorithm on $\mathbf{y}$ (i.e., the input of the MDD problem) and $\{\genMat_1,\genMat_2,\genMat_3\}$, which returns the code of dimension $(k-1)$ with the minimum distance from $\mathbf{y}$ that contains the solution to the MDD problem. We repeat this another $(k-1)$ times until the resulting code contains a single codeword, which is the solution $\hat{\codeVec}$ to the MDD problem.
In the following lemma, we prove the aforementioned properties of the \textsc{SplitCover} function.
\begin{lemma}\label{lemma:splitcover}
\textsc{SplitCover} given in Algorithm~\ref{alg:3} takes an $l \times n$ matrix $\genMat$ of rank $l$ as an input and produces (in polynomial time) a set of three $(l-1)\times n$ generator matrices $\{\genMat_1,\genMat_2,\genMat_3\}$ with the following properties:
\begin{enumerate}
\item The rank of $\genMat_1$, $\genMat_2$, and $\genMat_3$ is $(l-1)$.
\item $\text{span}(\genMat) = \text{span}(\genMat_1) \cup \text{span}(\genMat_2) \cup \text{span}(\genMat_3)$.
\end{enumerate}
\end{lemma}
\begin{IEEEproof}
The construction of $\genMat_1$, $\genMat_2$, and $\genMat_3$ is a concatenation of a subset of rows of $\genMat$, so it clearly has polynomial complexity. Moreover, by assumption, $\genMat$ has $l$ linearly independent rows. Since $\genMat_1$ and $\genMat_2$ are constructed using $(l-1)$ distinct rows of $\genMat$, they are clearly of rank $(l-1)$. Similarly, $\genMat_3$ is constructed using $(l-2)$ distinct rows of $\genMat$ and one row that is the sum of the remaining $2$ rows of $\genMat$, so it is also clearly of rank $(l-1)$ and the first property follows. Finally, recall that $\text{span}(\genMat) = \left\{\mathbf{u}\genMat: \mathbf{u} \in \{0,1\}^l\right\}$. Since $\genMat_1$ is $\genMat$ with the second row omitted, it is easy to see that
\begin{align}
\text{span}(\genMat_1) & = \left\{\mathbf{u}\genMat: \mathbf{u} \in \{0,1\}^{l}, u_2 = 0\right\}.
\end{align}
Similarly, we have:
\begin{align}
\text{span}(\genMat_2) & = \left\{\mathbf{u}\genMat: \mathbf{u} \in \{0,1\}^{l}, u_1 = 0\right\}.
\end{align}
Finally, since the first row of $\genMat_3$ is equal to $(\genVec_1 + \genVec_2)$, $\text{span}(\genMat_3)$ will contain all vectors $\mathbf{u}\genMat$ for which either $u_1=0$ and $u_2=0$, or $u_1 = 1$ and $u_2 = 1$, or equivalently:
\begin{align}
\text{span}(\genMat_3) = \left\{\mathbf{u}\genMat: \mathbf{u} \in \{0,1\}^{l}, u_1 = u_2\right\}.
\end{align}
Since the set $\text{span}(\genMat_1) \cup \text{span}(\genMat_2) \cup \text{span}(\genMat_3)$ covers all possibilities for $u_1$ and $u_2$ and the remaining elements of $\mathbf{u}$ are free variables in all three cases, the second property follows.
\end{IEEEproof}
\begin{algorithm}[t]
\SetKwProg{Fn}{Function}{:}{}
\KwData{Full-rank $l \times n$ matrix $\genMat = \begin{bmatrix} \genVec_1^T & \genVec_2^T & \hdots & \genVec_l^T \end{bmatrix}^T$.}
\KwResult{Set of three $(l-1) \times n$ matrices $\{\genMat_1,\genMat_2,\genMat_3\}$.}
\Fn{\textsc{SplitCover}{$(\genMat)$}}{
$\genMat_{1} = \begin{bmatrix}
\genVec_1^T & \genVec_3^T & \hdots & \genVec_l^T
\end{bmatrix}^T$\;
$\genMat_{2} = \begin{bmatrix}
\genVec_2^T & \genVec_3^T & \hdots & \genVec_l^T
\end{bmatrix}^T$\;
$\genMat_{3} = \begin{bmatrix}
(\genVec_1 + \genVec_2)^T & \genVec_3^T & \hdots & \genVec_l^T
\end{bmatrix}^T$\; \smallskip
\Return{$\{\genMat_1,\genMat_2,\genMat_3\}$}\;
}\medskip
\caption{Algorithm {\textsc{SplitCover}.}}\label{alg:3}
\end{algorithm}
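A direct transcription of \textsc{SplitCover} over GF($2$), together with a check of both properties of Lemma~\ref{lemma:splitcover} on a small arbitrary full-rank example:

```python
import itertools

def span(G):
    """All codewords uG (mod 2) generated by the rows of G."""
    k, n = len(G), len(G[0])
    return {tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
            for u in itertools.product([0, 1], repeat=k)}

def split_cover(G):
    """SplitCover: three (l-1)-row matrices whose spans cover span(G)."""
    g1, g2, rest = G[0], G[1], G[2:]
    g12 = [(a + b) % 2 for a, b in zip(g1, g2)]   # g1 + g2 (mod 2)
    return [[g1] + rest, [g2] + rest, [g12] + rest]

# Arbitrary full-rank l = 3 example of length n = 4.
G = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]
G1, G2, G3 = split_cover(G)

# Property 1: each G_i has rank l-1 = 2, i.e., its span has 2^(l-1) elements.
assert all(len(span(Gi)) == 2 ** 2 for Gi in (G1, G2, G3))
# Property 2: span(G) = span(G1) U span(G2) U span(G3).
assert span(G) == span(G1) | span(G2) | span(G3)
```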
\begin{IEEEproof}[Proof of Theorem \ref{thm:NPhard}]
First, note that in our reduction, the observation matrix $\obsMat$ is in fact an $n$-bit binary vector and, in particular, it is the $n$-bit binary vector $\mathbf{y}$ that is given as input to the MDD problem. In that case, the solution to the MDCD problem is a code $\code \in \codeSet$ such that:
\begin{align}
\code & = \arg\min_{\code \in \codeSet} d(\mathbf{y},\code) = \arg\min_{\code \in \codeSet} \left(\min_{\codeVec \in \code} d_H(\mathbf{y},\codeVec)\right),
\end{align}
where the last equality follows from the definition of $d(\mathbf{y},\code)$.
Let $\mathcal{G}_{\ell} = \{\genMat_1,\hdots,\genMat_{\ell}\}$ denote a set of $\ell$ generator matrices and let $\text{span}(\mathcal{G}_{\ell}) = \bigcup _{i=1}^{\ell} \text{span}(\genMat_i)$. Then, identifying a code in $\mathcal{G}_{\ell}$ that is closest to $\mathbf{y}$ in terms of the minimum Hamming distance is equivalent to identifying a code in $\mathcal{G}_{\ell}$ that contains a codeword $\hat{\codeVec}= \arg\min_{\codeVec \in \text{span}(\mathcal{G}_{\ell})} d_{H}(\mathbf{y},\codeVec)$. In Algorithm~\ref{alg:2}, at every iteration $l$ it holds that $\text{span}(\genMat^{(l)}) = \text{span}(\genMat_1) \cup \text{span}(\genMat_2) \cup \text{span}(\genMat_3)$ by Lemma~\ref{lemma:splitcover}. By the discussion above and since we started from $\genMat^{(k)} = \genMat$, at every iteration $l$ of Algorithm~\ref{alg:2}, the $\mathcal{A}_{\text{MDCD}}$ algorithm identifies the code $\genMat^{(l-1)} \in \{\genMat_1, \genMat_2, \genMat_3\}$ that contains a solution $\hat{\codeVec}$ to the MDD problem. Since $\genMat^{(0)}$ is a single $n$-bit binary vector, Algorithm~\ref{alg:2} terminates by returning $\hat{\codeVec}$.
Both $\mathcal{A}_{\text{MDCD}}$ and \textsc{SplitCover} are called $k$ times in Algorithm~\ref{alg:2}. Moreover, by Lemma~\ref{lemma:splitcover} we know that the complexity of \textsc{SplitCover} is polynomial. Finally, all remaining computations can clearly be carried out in polynomial time, meaning that the overall complexity of our reduction is polynomial.
\end{IEEEproof}
One can view our reduction as a ternary search-style procedure, where the space of all codewords is split into three sets (which pairwise intersect only in a common subcode of dimension $l-2$) and the set containing a solution is returned by the $\mathcal{A}_\text{MDCD}$ algorithm.
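The whole reduction can be prototyped with a brute-force stand-in for $\mathcal{A}_{\text{MDCD}}$ (exponential time, for illustration only); here the final dimension-one step is resolved by comparing the two remaining codewords directly:

```python
import itertools

def span(G):
    """All codewords uG (mod 2) generated by the rows of G."""
    k, n = len(G), len(G[0])
    return {tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
            for u in itertools.product([0, 1], repeat=k)}

def split_cover(G):
    g1, g2, rest = G[0], G[1], G[2:]
    g12 = [(a + b) % 2 for a, b in zip(g1, g2)]
    return [[g1] + rest, [g2] + rest, [g12] + rest]

def hamming(y, c):
    return sum(a != b for a, b in zip(y, c))

def mdcd_oracle(y, candidates):
    """Brute-force stand-in for A_MDCD, for a single noisy codeword y."""
    return min(candidates, key=lambda G: min(hamming(y, c) for c in span(G)))

def mdd(G, y):
    """A_MDD: peel off one dimension per oracle call, as in Algorithm 2."""
    while len(G) > 1:
        G = mdcd_oracle(y, split_cover(G))
    # Dimension one: compare the two remaining codewords directly.
    return min(span(G), key=lambda c: hamming(y, c))

# Arbitrary k = 2, n = 5 example with a uniquely closest codeword.
G = [[1, 1, 1, 0, 0],
     [0, 0, 1, 1, 1]]
y = (1, 1, 1, 0, 1)          # codeword (1,1,1,0,0) with its last bit flipped
assert mdd(G, y) == (1, 1, 1, 0, 0)
```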
\section{The MDCD Problem for $\codeSet=\mathcal{LC}_k$}\label{sec:valembois}
In Section~\ref{sec:mdcd}, we studied the MDCD problem when $\codeSet$ is a fixed set of $\ell$ binary linear codes. In contrast, in~\cite{Valembois2001} the author formulated the MDCD problem when $\codeSet$ is the space of all possible linear codes of a given dimension $k$, which we will denote by $\mathcal{LC}_k$. We note that the MDCD problems for $\codeSet = \mathcal{LC}_k$ and for $\codeSet = \{\code_1,\code_2,\hdots,\code_\ell\}$ are fundamentally different. When $\codeSet=\mathcal{LC}_k$, we are looking for \emph{some} code among all possible linear codes that minimizes the total distance from the noisy codewords and there might be a very large number of codes that are solutions to the problem. On the other hand, when $\codeSet = \{\code_1,\code_2,\hdots,\code_\ell\}$, we need to decide which code is closest to the observation matrix $\obsMat$ in terms of the minimum Hamming distance, which might be a much harder task to do.
In \cite{Valembois2001}, it is stated that the MDCD problem is equivalent\footnote{Such an equivalence result would indeed imply that the MDCD problem is generally NP-hard when $\codeSet=\mathcal{LC}_k$, which is the claim attributed to~\cite{Valembois2001} in certain related works~(e.g., \cite{Chabot2007,Carrier2018}). However, the equivalence statement appears without proof in~\cite{Valembois2001}.} to a \textsc{Rank Reduction} (RR) problem, which is then proven to be NP-hard via a reduction from the \textsc{Minimum Distance} problem~\cite{Vardy1997}. The term ``rank-reduction'' already hints at the fact that such an equivalence requires that the rank of the observation matrix $\obsMat$ is at least $k$, which implies that at least $k$ noisy codewords have to be observed. However, in the practical application described in Section~\ref{sec:intro}, the number of observations (and thus $\textrm{rank}(\obsMat)$) is always significantly smaller than $k$, since the decision latency and the signal processing cost have to be minimized.
In this case, it turns out that it is simple to identify the computational complexity of the MDCD problem. In particular, we describe a polynomial-time algorithm that finds a code $\code \in \mathcal{LC}_k$ that minimizes $\sum _{i=1}^{N}d(\obsVec_i,\code)$ when $\textrm{rank}(\obsMat) \leq k$. The main idea of the algorithm is that, since $\textrm{rank}(\obsMat) \leq k$, we can always construct, in polynomial time, a full-rank $k \times n$ generator matrix $\genMat$ whose row space contains every row of $\obsMat$, so that $\sum_{i=1}^N d(\obsVec_i,\code)=0$. The algorithm has two steps: the first step ensures that $\sum_{i=1}^N d(\obsVec_i,\code)$ is minimized, while the second step ensures that $\genMat$ has rank $k$ and thus generates a code of the desired dimension $k$.
\begin{algorithm}[t]
\KwData{Full-rank $r \times n$ generator matrix $\genMat$ from step 1.}
\KwResult{Full-rank $k \times n$ generator matrix $\genMat$.}
$i = 1$\;
\While{$\textrm{rank}(\genMat) < k$ and $i \leq n$}{
$\genMat' = \begin{bmatrix} \genMat \\ \mathbf{e}_i \end{bmatrix}$\;
\If{$\textrm{rank}(\genMat') > \textrm{rank}(\genMat) $}{
$\genMat = \genMat'$\;
}
$i = i + 1$\;
}
\caption{Rank augmentation of $\genMat$.}\label{alg:1}
\end{algorithm}
\textbf{Step 1:} Let $\textrm{rank}(\obsMat) = r \leq k$ and let $\mathcal{L} = \left\{l_1, l_2, \hdots, l_r \right\}$ denote a set of indices of any $r$ linearly independent rows of $\obsMat$. The set $\mathcal{L}$ can be constructed in polynomial time using Gaussian elimination. We construct the $r$ first rows of $\genMat$ as:
\begin{align}
\genMat_{r \times n} & = \begin{bmatrix}
\obsVec_{l_1}^T & \hdots & \obsVec_{l_r}^T
\end{bmatrix}^T.
\end{align}
\textbf{Step 2:} Let $\mathbf{e}_i$ denote the standard basis row vector of length $n$ with a $1$ in the $i$-th coordinate and $0$'s elsewhere. We extend $\genMat$ to have dimensions $k \times n$ and rank $k$ by following the procedure of Algorithm \ref{alg:1}. This procedure is guaranteed to construct a full-rank $k \times n$ generator matrix $\genMat$ and it requires at most $n$ steps, with each step having polynomial complexity. The final $k \times n$ generator matrix $\genMat$ has the following form:
\begin{align}
\genMat_{k \times n} & = \begin{bmatrix}
\obsVec_{l_1}^T & \hdots & \obsVec_{l_r}^T & \mathbf{e}_{i_1}^T & \hdots & \mathbf{e}_{i_{k-r}}^T
\end{bmatrix}^T,
\end{align}
for some $\{i_{1},\hdots,{i_{k-r}}\} \subset \{1,\hdots,n\}$. Since the $2^k$ codewords of the code $\code$ corresponding to $\genMat$ are generated as $\mathbf{u}\genMat$, where $\mathbf{u} \in \{0,1\}^k$, it is easy to see that $\obsVec_i \in \code, \, \forall i = 1,\hdots,N$. This means that $\sum_{i=1}^N d(\obsVec_i,\code) = 0$ and $\code$ indeed minimizes $\sum_{i=1}^N d(\obsVec_i,\code)$.
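The two steps above can be sketched in a few lines. The following is a brute-force illustration over GF(2), not the implementation used in this paper; the helper names \texttt{rank\_gf2} and \texttt{build\_generator} are ours.

```python
def rank_gf2(rows):
    """Rank over GF(2) via Gaussian elimination, one pivot per leading bit."""
    pivot = {}                                  # leading bit -> reduced row
    for row in rows:
        v = int("".join(map(str, row)), 2)      # row as a bitmask
        while v:
            b = v.bit_length() - 1
            if b not in pivot:
                pivot[b] = v
                break
            v ^= pivot[b]                       # eliminate the leading bit
    return len(pivot)

def build_generator(Y, k):
    """Step 1: keep a maximal independent subset of the rows of Y.
    Step 2 (Algorithm 1): append standard basis vectors e_i until rank k."""
    n = len(Y[0])
    G = []
    for y in Y:                                 # Step 1
        if rank_gf2(G + [list(y)]) > rank_gf2(G):
            G.append(list(y))
    i = 0
    while rank_gf2(G) < k and i < n:            # Step 2: rank augmentation
        e = [0] * n
        e[i] = 1
        if rank_gf2(G + [e]) > rank_gf2(G):
            G.append(e)
        i += 1
    return G

# Toy demo: three noisy observations of rank 2, target dimension k = 3
Y = [[1, 0, 1, 0], [1, 0, 1, 0], [0, 1, 1, 0]]
G = build_generator(Y, 3)
```

Every row of $\obsMat$ then lies in the row space of the returned $\genMat$, so the generated code contains all observations and the total distance is indeed $0$.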
\section{Open Problems}\label{sec:open}
In this section, we identify and discuss some interesting open problems related to the complexity of code detection.
\subsection{Computational Complexity of the MLCD Problem}
Unlike minimum distance decoding and maximum likelihood decoding which are equivalent over the BSC (and known to be NP-hard~\cite{Berlekamp1978}), MDCD is generally not equivalent to MLCD. This is demonstrated through the following example.
\begin{example}
Consider the case where transmission takes place over a BSC$(0.25)$, we have $|\codeSet| = 2$, and the full-rank generator matrices $\genMat_1$ and $\genMat_2$ that describe the codes $\code_1$ and $\code_2$ (both of dimension $k=3$), respectively, are:
\begin{align}
\genMat_1 = \begin{bmatrix}
0 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 1
\end{bmatrix},
\;
\genMat_2 = \begin{bmatrix}
0 & 1 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 0 \\
0 & 1 & 1 & 0 & 0
\end{bmatrix}.
\end{align}
Moreover, let us assume that we have the following observation matrix with a single noisy codeword $\obsMat = \obsVec_1 = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \end{bmatrix}$. Finally, let us define:
\begin{align}
f(\code) & = \sum _{\codeVec \in \code} \left(\frac{p}{1-p}\right)^{d_{\text{H}}(\obsVec_1,\codeVec)},
\end{align}
so that $\code_{\text{MLCD}} = \arg\max_{\code \in \{\code_1,\code_2\}} f(\code)$.
It is easy to verify that $d(\obsVec_1,\code_1) = 0$ and $d(\obsVec_1,\code_2) = 1$, but $f(\code_1) = 1.449$ and $f(\code_2) = 1.481$, meaning that $\code_{\text{MDCD}} \neq \code_{\text{MLCD}}$. So, the code that is the optimal solution of the MDCD problem is not the optimal solution of the MLCD problem, and vice versa.
\end{example}
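The numbers in the example above can be reproduced with a short brute-force script (an illustrative sketch; \texttt{codewords} and \texttt{dH} are ad-hoc helper names, not part of the paper):

```python
from itertools import product

def codewords(G):
    """All 2^k codewords of the binary code generated by the rows of G."""
    cws = []
    for u in product([0, 1], repeat=len(G)):
        c = [0] * len(G[0])
        for ui, row in zip(u, G):
            if ui:
                c = [a ^ b for a, b in zip(c, row)]
        cws.append(c)
    return cws

def dH(x, y):
    """Hamming distance."""
    return sum(a != b for a, b in zip(x, y))

G1 = [[0, 1, 0, 0, 1], [1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
G2 = [[0, 1, 0, 1, 0], [1, 0, 0, 1, 0], [0, 1, 1, 0, 0]]
y1 = [1, 1, 1, 0, 0]
alpha = 0.25 / 0.75                    # p / (1 - p) for a BSC(0.25)

for G in (G1, G2):
    cws = codewords(G)
    d = min(dH(y1, c) for c in cws)
    f = sum(alpha ** dH(y1, c) for c in cws)
    print(d, round(f, 3))              # C1: 0 1.449   C2: 1 1.481
```

The MDCD winner ($\code_1$, at distance $0$) and the MLCD winner ($\code_2$, with the larger $f$) indeed differ.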
In~\cite{Valembois2001}, it is not explained rigorously how MDCD is related to MLCD. Here, we provide the following explanation. Let $\alpha = \frac{p}{1-p}$. Then, using the well-known max-log approximation with a base-$\alpha$ logarithm and the fact that $\log _{\alpha}(x)$ is decreasing in $x$ since $\alpha \leq 1$, we can re-write \eqref{eq:mlcd} as:
\begin{align}
\code_{\text{MLCD}} = & \arg \max _{\code \in \codeSet} \sum _{i=1}^N \log _{\alpha}\left(\sum _{\codeVec \in \code} \alpha^{d_{\text{H}}(\obsVec_i,\codeVec)}\right) \\
\approx & \arg \min _{\code \in \codeSet} \sum _{i=1}^N \min _{\codeVec \in \code}d_{\text{H}}
(\obsVec_i,\codeVec) =\code_{\text{MDCD}}.
\end{align}
Note, however, that this approximation does not allow us to deduce the computational complexity of MLCD from that of MDCD, or vice versa.
Arguably, the maximum likelihood objective of the MLCD problem is a better detection criterion than the minimum distance objective of the MDCD problem, since it minimizes the probability of detection error. As such, studying the complexity of the MLCD problem is an important next step. In this direction, one could attempt to construct a reduction from the MDD problem to the MLCD problem by replacing $\mathcal{A}_{\text{MDCD}}$ in Algorithm~\ref{alg:2} with an algorithm $\mathcal{A}_{\text{MLCD}}$ that solves the MLCD problem. However, for this to work one would have to show that the code generated by $\genMat ^{(l-1)}$, as returned by $\mathcal{A}_{\text{MLCD}}$, always contains the solution to the MDD problem (as shown for $\mathcal{A}_{\text{MDCD}}$ in the proof of Theorem~\ref{thm:NPhard}), which does not necessarily hold.
\subsection{Detection Complexity for Subclasses of Linear Codes}
Similarly to the case of maximum likelihood decoding, it would be interesting to examine specific subclasses of linear codes (e.g., LDPC codes, polar codes), for which, in principle, efficient algorithms for the MDCD problem could exist. In this direction, given a subclass of linear codes, our reduction can be applied if this subclass is \emph{closed} under a \emph{split-cover} operation similar to \textsc{SplitCover} defined in Algorithm~\ref{alg:3}. Specifically, closure in this context means that a full-rank $k \times n$ generator matrix $\genMat$ that belongs to the given subclass of linear codes can be split into $\ell$ full-rank $(k-1)\times n$ generator matrices $\genMat_1,\hdots,\genMat_\ell$ that belong to the same subclass, such that $\text{span}(\genMat) = \bigcup _{i=1}^{\ell} \text{span}(\genMat_i)$. A procedure that generates $\genMat_1,\hdots,\genMat_\ell$ in polynomial time can then be used instead of the specific \textsc{SplitCover} function of Algorithm~\ref{alg:2} in order to prove hardness for specific subclasses of codes.
\subsection{Complexity of MDCD for any $\ell$ and $N$}
The proof of Theorem~\ref{thm:NPhard} establishes the NP-hardness of the MDCD problem when $\ell=3$ and $N=1$, which is sufficient to show that the problem is NP-hard in general.
A very similar reduction can be used to prove NP-hardness for any $\ell > 3$. The main idea is that Lemma~\ref{lemma:splitcover} can be extended to the case where $\genMat$ is split into $\ell$ distinct\footnote{Note that, if the codes in $\codeSet$ are not required to be distinct, the NP-hardness of the MDCD problem with $\ell > 3$ follows easily from the NP-hardness of the $\ell = 3$ case, since we can simply set, e.g., $\genMat_j = \genMat_3$ for all $3 < j \leq \ell$.} codes $\genMat_1,\hdots,\genMat_\ell$. We note that the case of $\ell = 1$ is trivial and the NP-hardness of the case when $\ell = 2$ follows easily from the NP-hardness of the case when $\ell = 3$. Specifically, a hypothetical polynomial-time algorithm for the $\ell = 3$ case could call a hypothetical polynomial-time algorithm for the $\ell = 2$ case three times (once for each of the three possible pairs of candidate codes) and combine the partial results in order to solve the MDCD problem.
The case where $N>1$ observations are available is also of practical interest. Showing NP-hardness for a given $N>1$ is an open problem, which does not seem to follow directly from the techniques we have used in this work.
\section{Conclusion}
In this work, we studied the fundamental problem of the computational complexity of code detection for binary linear codes and we proved that the MDCD problem is NP-hard through a reduction from the \textsc{Minimum Distance Decoding} problem in the practically relevant case where $\codeSet$ contains a fixed number $\ell$ of candidate codes. Moreover, we identified a number of open problems, the most
interesting being the computational complexity of the MLCD problem.
\section{Acknowledgment}
The work of Alexios Balatsoukas-Stimming is supported by the Swiss National Science Foundation project \#175813. The work of Aris Filos-Ratsikas is supported by the Swiss National Science Foundation under contract No. 200021\_165522. The authors would like to thank the anonymous reviewers for their useful suggestions.
\bibliographystyle{IEEEtran}
\section{Introduction}
In the words of Aaron Siegel, and we could not have said it better ourselves:
\hspace{6pt}\textit{Combinatorial game theory is the study of two-player games with no hidden information and no chance elements. The theory assigns algebraic values to positions in such games and seeks to quantify the algebraic and combinatorial structure of their interactions.}
\smallskip
Combinatorial Game Theory by Siegel~\cite{cgt} and Winning Ways for Your Mathematical Plays by Berlekamp, Conway and Guy~\cite{winningways} are the foremost literature on combinatorial game theory. While the classical theory only considers two-player games, in recent years, more and more work has been done to extend the theory to games with more than two players, both for specifically three and for an arbitrary finite number of players. Most of these efforts make restrictive assumptions about the behaviour of the players, as mentioned by Cincotti~\cite{cincotti}.
Cincotti presents a theoretical framework to classify partisan games with an arbitrary finite number of players. We decided to see how far we could simplify the game tree using as few assumptions as possible, using the game of Clobber as an example to apply our theory.
In Section~\ref{sec:N-player games} we define the games we study in this paper and the notation we use to represent their values. Section~\ref{sec:Clobber} introduces the game of Clobber and how to extend it to an $N$-player game. In Section~\ref{sec:simplifying} we present our simplification rules, the results of which we discuss in Section~\ref{sec:results}. Finally, we summarise our findings and conclusions in Section~\ref{sec:conclusions}, in which we also suggest several areas for further research.
This research was done by the first author as a Master research project at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, under the supervision of the second author.
\section{N-player games}
\label{sec:N-player games}
We consider a game with $N \geq 2$ players, where we are most interested in the case $N > 2$. The players are numbered $1,2,\ldots,N$. They take turns, where player $i + 1$ succeeds player $i$ (if $i \in \{1,2,\ldots,N-1\}$), and player $1$ succeeds player $N$. Player $1$ starts the game. The last player that can make a legal move, wins the game. This is called \emph{normal play}. If a player cannot make a valid move, their turn is skipped. We assume that at least one player can make a move in the initial position. Furthermore, we assume the game to be \emph{converging}: positions can be ordered in a game tree without backlinks.
We first construct the full game tree, starting from the initial position. Leaves are positions where no player can move. The value of such a node is equal to the number of the winner. These values represent unconditional wins for the corresponding player.
Now we can recursively label all nodes, in the following bottom-up way. The general value of a non-leaf position $P$ with player $i$ to play (note that $i$ is formally part of $P$, and could therefore be omitted) is the list $L$ with all unique values of the children, using some fixed ordering. The underlying intuition is that the list elements represent the choices for the player to move. A value $L$ thus represents a tree, with the leaves having the aforementioned single number values. $L$ is said to \emph{contain} a value $a$ if some node in the tree represented by $L$ has the value $a$.
Note that if all children have the same value $x$, the parent's list reduces to the singleton $[x]$. We identify a list $[x]$, with $x$ a leaf value, with its only member $x$: we use $3$ instead of $[3]$. However, note that, e.g., $[1,3]$ differs from $[[1,3]]$. Here, the first list denotes a situation where the player to move can select $1$ or $3$ as the winner, whereas the second list passes this option to the next player to move. But $[[[3]]] = 3$.
Of course, the order of the list elements does not matter and multiple occurrences of elements can be represented with single occurrences.
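The labelling and the identifications above can be made concrete with a small script. The encoding is our own choice, not part of the theory: an int for a leaf value, a \texttt{frozenset} for a list, since order and multiplicity are irrelevant.

```python
def normalize(v):
    """Canonical form of a game value.  Lists become frozensets (order and
    duplicates do not matter); a singleton list whose member is a *leaf*
    value is identified with that leaf, so [[[3]]] collapses to 3 while
    [[1,3]] remains distinct from [1,3]."""
    if isinstance(v, int):
        return v
    kids = frozenset(normalize(c) for c in v)
    if len(kids) == 1:
        (x,) = kids
        if isinstance(x, int):
            return x
    return kids

def contains(v, a):
    """v contains a if some node of the tree represented by v has value a."""
    if v == a:
        return True
    return not isinstance(v, int) and any(contains(c, a) for c in v)
```

For instance, \texttt{normalize([2, 2, 2, 3])} gives \texttt{frozenset(\{2, 3\})} and \texttt{normalize([[[3]]])} gives \texttt{3}, in line with the identifications above.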
We mention some examples:
\begin{framed}
\begin{example}
Suppose the children have values 2, 2, 2 and 3, respectively; then the parent has value $[2,3]$. The parent contains the values $2$, $3$ and $[2,3]$.
Note that, if it is player 1's turn, this value makes player 1 a so-called \emph{kingmaker}. As also noted by Propp~\cite{propp}, the player has no winning move, but their action determines which of the other players will win.
\end{example}
\begin{example}
Suppose the children have values 2, $[2,3]$, $[2,3]$ and $[1,[1,3]]$, respectively; then the parent has value $[2,[2,3],[1,[1,3]]]$. The parent contains the values $1$, $2$, $3$, $[1,3]$, $[2,3]$, $[1,[1,3]]$ and $[2,[2,3],[1,[1,3]]]$.
\end{example}
\end{framed}
\section{Clobber}
\label{sec:Clobber}
Clobber is a \emph{partisan game} consisting of an undirected graph, usually a grid graph, with the vertices containing a black or white token or being empty. A player must move one of their tokens to an adjacent vertex containing a token of the opponent. The player's token replaces, ``clobbers'', the opponent's token, which is then removed from the game. The first player unable to make such a move loses the game. Note that Clobber is \emph{dicotic}, formerly known as \emph{all-small}~\cite[pages 60--63]{cgt}, meaning that in every position either both players can move or neither can. In competitions, Clobber is usually played on a checkerboard with black tokens on the black squares and white tokens on the white ones. Human competitions usually use a $5 \times 6$ board while computer competitions generally use larger board sizes such as $10 \times 10$.
For further reading on Clobber, we recommend the 2005 paper ``An introduction to Clobber''~\cite{introductiontoclobber} and Siegel's 2013 book on combinatorial game theory~\cite[pages 146--149]{cgt}. Recent work on Clobber was done in 2016 by Griebel and Uiterwijk~\cite{griebeluiterwijk}, who combined combinatorial game theory with an $\alpha$-$\beta$-solver to solve larger and more complex Clobber boards.
To extend Clobber into an $N$-player game, a vertex now contains a number between 0 and $N$, 0 meaning the vertex is empty and a number $i \geq 1$ meaning the vertex contains a token from the corresponding player $i$. A valid move now consists of clobbering an adjacent token belonging to any other player. As defined in Section~\ref{sec:N-player games}, a player unable to make a valid move will skip their turn --- and can never move again --- and the last player to make a valid move wins the game.
We now give some examples of three-player Clobber games on $1 \times n$ boards and their values. In all examples, we assume it is player 1's turn.
\begin{framed}
\begin{example}
\begin{tabular}{| c | c | c |}
\hline
2 & 1 & 3 \\
\hline
\end{tabular}
has value 1. Player 1 can clobber either player 2 or 3 and wins in both cases.
\end{example}
\begin{example}
\begin{tabular}{| c | c | c | c | c |}
\hline
1 & 2 & 2 & 2 & 3 \\
\hline
\end{tabular}
has value [[1,3]]. Player 1 has no choice but to clobber player 2. Player 2 then must clobber either player 1 or 3, after which the other one will clobber them in return and win. Player 2 thus chooses the winner.
\end{example}
\begin{example}
\begin{tabular}{| c | c | c | c | c | c |}
\hline
1 & 2 & 3 & 2 & 1 & 3 \\
\hline
\end{tabular}
has value [[1,3],[1,[1,2]],[2,3]]. This can still easily be checked by hand, which we leave as an exercise for the reader.
\end{example}
\begin{example}
\label{example:1x10-unsimplified}
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c |}
\hline
1 & 2 & 3 & 2 & 1 & 3 & 2 & 3 & 2 & 1 \\
\hline
\end{tabular}
has value [[[1, 3, [3, [[1, 3]]], [[1, 3, [2, 3]], [2, 3, [1, 2]]], [[[1, 2]]]], [3, [2, 3], [2, [1, 3]], [[1, 2]]], [3, [3, [1, 3]], [3, [[1, 3]]], [[1, 3], [2, 3]], [[1, 3]]], [3, [[1, 2], [2, 3]]], [[1, 3], [3, [1, 2], [[1, 2]]], [[1, 2, 3], [1, 2, [2, 3]]]], [[1, 3], [3, [1, 2], [[1, 2]]], [[1, 3], [2, 3, [2, 3]], [2, 3]]]], [[2, [1, 3], [2, 3], [[1, 3]]], [2, [3, [1, 3]]], [3, [1, [2, 3]], [[1, 3, [2, 3]]], [[1, 3], [[1, 2]]], [[2, 3], [[1, 3]]]], [[1, 3], [1, [2, 3]], [3, [2, 3]], [[2, 3]]], [[1, [1, 2, 3], [2, 3, [2, 3]]], [2, 3], [2, [[1, 2]]]], [[2, 3], [2, [1, 2]], [[1, 2, [2, 3]], [1, 2]]]], [[2, [2, 3]], [2, [3, [1, 3]], [3, [2, 3]]], [2, [3, [1, 3]]], [3, [2, 3], [[1, 3]], [[2, 3], [[1, 3]]]], [[1, 2], [1, 3, [1, 3]], [2, 3]], [[1, 2], [2, 3]]], [[[1, 2], [1, [2, 3]], [2, 3]], [[1, 2], [2, 3]], [[1, 3, [1, 2]]], [[2, 3]], [[3, [1, 3]], [[1, 3]]]]]. Some spaces have been added for readability. We do not recommend checking this one without the assistance of a computer.
\end{example}
\end{framed}
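The values in the examples above can be recomputed with a short game-tree search. The sketch below is our own illustration, not the program used for this paper; winners are encoded as ints and choices as sorted tuples of distinct child values.

```python
def moves(board, p):
    """Boards reachable when player p clobbers an adjacent enemy token on a
    1 x n board (0 = empty, i >= 1 = token of player i)."""
    out = []
    for i, t in enumerate(board):
        if t != p:
            continue
        for j in (i - 1, i + 1):
            if 0 <= j < len(board) and board[j] not in (0, p):
                b = list(board)
                b[i], b[j] = 0, p
                out.append(tuple(b))
    return out

def value(board, player=1, last=None, N=3):
    """Game value: players who cannot move are skipped; if nobody can move,
    the last mover wins."""
    for k in range(N):
        q = (player - 1 + k) % N + 1        # next player, skipping stuck ones
        ms = moves(board, q)
        if ms:
            kids = []
            for b in ms:
                v = value(b, q % N + 1, q, N)
                if v not in kids:
                    kids.append(v)
            if len(kids) == 1 and isinstance(kids[0], int):
                return kids[0]              # identify [x] with a leaf x
            return tuple(sorted(kids, key=repr))
    return last                             # game over: last mover wins
```

For the first two examples, \texttt{value((2, 1, 3))} returns \texttt{1} and \texttt{value((1, 2, 2, 2, 3))} returns \texttt{((1, 3),)}, i.e.\ $[[1,3]]$.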
\section{Simplifying the game tree}
\label{sec:simplifying}
We have seen from the examples in Section~\ref{sec:Clobber} that the length and complexity of values grow rather fiercely for larger board sizes. To counter this, for games with $N = 3$, we introduce an additional notation, give several general, syntactic simplification rules, and experiment with several player preferences and the semantic simplification rules they induce. Note that these do not rely on any rules specific to Clobber and should instead be applicable to any short game, as defined in~\cite[page 54]{cgt}.
\subsection{Simple values}
\label{sec:simplevalues}
For $N = 3$, we use $\bar{1}$ (pronounced ``1 bar'') to denote the value $[2,3]$. $\bar{1}$ can be interpreted as the complement of 1, as it consists of all single number values except for 1 itself.
Similarly, we use $\bar{2}$ for $[1,3]$ and $\bar{3}$ for $[1,2]$.
Following the same intuition, we use the notation $\bar{\bar{1}}$ (pronounced ``1 bar bar'') for $[\bar{2},\bar{3}]~(\left[[1,3],[1,2]\right])$, and so forth.
Due to the large number of bars in larger and more complex values, we will often omit the actual bars and instead denote their number in subscript; for instance, $1_2 = \bar{\bar{1}}$ and $1_6 = \bar{\bar{\bar{\bar{\bar{\bar{1}}}}}}$.
We use the notation $a_i$, $a$ being the \emph{base value} and $i \geq 0$ being the \emph{exponent}, to denote a value consisting of the list containing the values $\{b_{i-1}\,|\,b \neq a\}$, with $a_0$ representing $a$, the unconditional win for player $a$. We call values that can be represented using this notation \emph{simple values}. We call all other values \emph{complex values}.
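The bar notation can be unfolded mechanically; the following is a minimal sketch (the function name \texttt{expand} is ours):

```python
def expand(a, i, N=3):
    """Explicit list form of the simple value a_i: a_0 is the leaf a, and
    a_i is the list of b_{i-1} for every player b other than a."""
    if i == 0:
        return a
    return [expand(b, i - 1, N) for b in range(1, N + 1) if b != a]
```

For instance, \texttt{expand(1, 1)} gives \texttt{[2, 3]} ($\bar{1}$) and \texttt{expand(1, 2)} gives \texttt{[[1, 3], [1, 2]]} ($\bar{\bar{1}}$), matching the definitions above.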
\subsection{Syntactic simplifications}
\label{sec:syntaxsimpl}
We give several operations, denoted by ``$\Rightarrow$'', that can be performed to simplify the syntax of the game tree without semantically changing the possibilities available to the players or the possible outcomes of the game tree.
\begin{framed}
\begin{myrule}
\label{rule:simplebrackets}
For a simple value $x$: $[x] \Rightarrow x$.
\end{myrule}
In our previous notation, there was a semantic difference between, e.g., $[1,3]$ and $[[1,3]]$, as a different player makes the choice between the values 1 and 3. However, a simple value encapsulates this as its base value is the player making the choice, e.g. $\bar{2}$ can be interpreted as ``Regardless of what else happens, at some moment player 2 will make a choice between two moves leading to positions with values 1 and 3.'' Therefore, $\bar{2}$ can be used to represent both $[1,3]$ and $[[1,3]]$.
In general, a simple value defines a complete binary game tree, which contains all possible outcomes, the choices leading to these outcomes and which player makes which choice. Of course, the value $\bar{2}$ could represent a game where a hundred moves are played without any choice being involved, or where every choice leads to positions with the same values, before player 2 makes their deciding choice, and several hundred more moves could be played after this choice, but this does not change the outcome.
\end{framed}
\begin{framed}
\begin{myrule}
\label{rule:triplebrackets}
For a (simple or complex) value $x$ with $N = 3$: $[[[x]]] \Rightarrow x$.
Or, in general, with $N$ the number of players: $x = \underbrace{[[[\ldots[[[}_N x ]]]\ldots]]]$.
\vspace{-6pt}
\end{myrule}
This rule again uses the argument that we can omit nodes of the game tree if they do not influence the possible outcomes or the choices leading to them in any way. Two values $x$ and $[[[x]]]$ are only different in that the latter has three extra moves leading up to the same choice. As these moves do not influence the choice or its outcomes, and the player who is to make these choices is the same, the values are semantically equivalent and we can omit the three sets of square brackets.
\end{framed}
\begin{framed}
\begin{myrule}
\label{rule:sameplayerchoice}
For (simple or complex) values $x_1,\ldots,x_m$ and $y_1,\ldots,y_k$ with $N = 3$: \\
$[x_1,\ldots,x_m,[ [ [y_1,\ldots,y_k]]]] \Rightarrow [x_1,\ldots,x_m,y_1,\ldots,y_k]$.
Or, in general, with $N$ the number of players: \\
$[x_1,\ldots,x_m,\underbrace{[[[\ldots[[[}_Ny_1,\ldots,y_k]]]\ldots]]]] \Rightarrow [x_1,\ldots,x_m,y_1,\ldots,y_k]$
\vspace{-6pt}
\end{myrule}
This rule can be used together with Rule~\ref{rule:triplebrackets} to merge nodes with their ancestors in the game tree if no choices by other players were involved on the path between them. This builds upon the intuition used in Rule~\ref{rule:triplebrackets} that, as long as the intermediate nodes where other players can make a move do not branch, the same player keeps making every choice, giving them complete control over the possible outcomes. The outcomes can thus be merged into the children of a single node, a single list, as this does not reduce the possibilities available to the players.
For instance, this rule can be used to simplify $[\bar{\bar{1}},\bar{2}]$ into $\bar{\bar{1}}$, assuming it is player 1's turn. As $\bar{\bar{1}} = [\bar{2},\bar{3}]$, player 1 has the choice between choosing for $\bar{2}$ immediately or taking a different path in which they will eventually choose between $\bar{2}$ and $\bar{3}$. In the end, player 1 chooses between $\bar{2}$ and $\bar{3}$ without any influence from the other players, so the original choice can be simplified to $[\bar{2},\bar{3}] = \bar{\bar{1}}$.
\end{framed}
\subsection{Player preferences}
\label{sec:preferences}
Until now, we have only considered simplification rules that do not actually change the semantics of the game tree. We now consider possible player preferences to actually discard certain values and thus prune the game tree. We define a binary relation over the values of game trees: a value $X$ is said to be \emph{weaker than or equal to} a value $Y$ from the perspective of player $p$, written as $X \leq_p Y$, if they represent the same value or if the relation can be inferred from the values of their children. Formally, with $X = [x_1,\dotsc,x_n], Y = [y_1,\dotsc,y_m]$, we define the relation $\leq_p$ as follows:
\begin{framed}
\begin{definition}
\label{def:leqp}
$X \leq_p Y \Leftrightarrow (X = Y) \text{ or } (\forall i: x_i \leq_p Y) \text{ or } (\forall i,j: x_i \leq_p y_j)$
\end{definition}
With $X = Y$, we mean that $X$ and $Y$ are exactly the same values after using syntactic simplifications as in Section~\ref{sec:syntaxsimpl}.
\end{framed}
Using this definition, we can define three more relations:
Two values $X$ and $Y$ are \emph{equal} to each other from the perspective of player $p$, written as $X =_p Y$, if they are both weaker than or equal to each other. Formally:
\begin{framed}
\begin{definition}
\label{def:eqp}
$X =_p Y \Leftrightarrow X \leq_p Y \text{ and } Y \leq_p X$
\end{definition}
\end{framed}
A value $X$ is \emph{strictly weaker} than a value $Y$ from the perspective of player $p$, written as $X <_p Y$, if $X$ is weaker than or equal to $Y$ and they are not equal. Formally:
\begin{framed}
\begin{definition}
\label{def:ltp}
$X <_p Y \Leftrightarrow X \leq_p Y \text{ and } X \neq_p Y$
\end{definition}
\end{framed}
A value $X$ is \emph{incomparable} with a value $Y$ from the perspective of player $p$, written as $X \not\gtrless_p Y$, if $X$ is not weaker than or equal to $Y$ and $Y$ is not weaker than or equal to $X$. Formally:
\begin{framed}
\begin{definition}
\label{def:incmpp}
$X \not\gtrless_p Y \Leftrightarrow X \nleq_p Y \text{ and } Y \nleq_p X$
\end{definition}
\end{framed}
\newpage
\subsection{Selfish play}
\label{sec:selfish}
A logical first player preference to introduce would be a \emph{selfish} player --- when faced with a choice between two values, they will choose the value that is in their own best interest. In particular, this implies that the player will always choose a move where they will certainly win. If no such move is present, they will choose a move where they might win. Formally, it allows us to define two additional rules. We use the term \emph{selfish game} to denote a game where all players are selfish.
\begin{framed}
\begin{myrule}
\label{rule:guaranteedwin}
Assuming a selfish player $a$, it being player $a$'s turn, and $x$ being an arbitrary value that is neither a guaranteed win nor a guaranteed loss, then: $x <_a a$.
\end{myrule}
If player $a$ can make a move leading to unconditional victory, they will choose to do so and disregard all other moves.
\end{framed}
\begin{framed}
\begin{myrule}
\label{rule:avoidloss}
Assuming a selfish player $a$, it being player $a$'s turn, $x$ being an arbitrary value that is neither a guaranteed win nor a guaranteed loss, and $y$ being an arbitrary value that is a guaranteed loss, then: $y <_a x$.
\end{myrule}
As player $a$ plays to win and the value $y$ represents a guaranteed loss, the player will prefer any value $x$ that still has some possibility, no matter how small, of leading to a victory.
\end{framed}
We can use the above rules to show that $\bar{\bar{3}} <_1 \bar{2}$. After all, $\bar{\bar{3}} = [\bar{1},\bar{2}]$. Rule~\ref{rule:avoidloss} gives us that $\bar{1} <_1 \bar{2}$. We can conclude from this that $\bar{\bar{3}} <_1 \bar{2}$. Note that using this and Definition~\ref{def:leqp}, we can also show that $[\bar{2},\bar{\bar{3}}] <_1 \bar{2}$, and then that $[\bar{2},[\bar{2},\bar{\bar{3}}]] <_1 \bar{2}$, and so on. However, $\bar{2}$ and $\bar{\bar{2}}$ for instance are incomparable. Since we have no way of comparing 2 and 3 from the perspective of player 1, we also have no way of comparing $\bar{2}$ and $\bar{3}$ and so forth. This significantly limits the gains of the simplification rules so far. Furthermore, we would actually like to be able to compare $\bar{2}$ and $\bar{\bar{2}}$; both have a single path where player 1 wins, but $\bar{\bar{2}}$ has three paths where player 1 loses compared to a single path in $\bar{2}$, and player 1 has no way to steer towards its winning path in either case. Although we assume nothing about the preferences of players 2 and 3 beyond them playing selfishly, it would seem wise to prefer $\bar{2}$ over $\bar{\bar{2}}$ as player 1. Therefore, we need something more.
\subsection{Prudently selfish play}
\label{sec:prudent}
To counter the issue we raised at the end of Section~\ref{sec:selfish}, we introduce the notion of a \emph{prudently selfish} player, or a \emph{prudent} player for short --- and \emph{prudent game} for a game with only prudent players. A prudent player will, in addition to playing selfishly, when choosing between two values $X$ and $Y$ where neither is strictly weaker than the other, avoid one if it can lead to a situation that is worse than or incomparable to every single situation in the other one. If such a choice occurs where the option $X$ is discarded in favour of $Y$, we say that, from the point of view of player $p$, $X$ is \emph{prudently weaker} than $Y$, written as $X <^P_p Y$. Formally, with $X = [x_1,\dotsc,x_n], Y = [y_1,\dotsc,y_m]$, we define the relation $<^P_p$ and its corresponding incomparability relation $\not\gtrless^P_p$ as follows:
\begin{framed}
\begin{definition}
$X <^P_p Y \Leftrightarrow X <_p Y \text{ or } (\forall i,j: (x_i <^P_p y_j \text{ or } x_i \not\gtrless^P_p y_j) \text{ and } \exists i,j: (x_i <^P_p y_j))$
\end{definition}
\end{framed}
\begin{framed}
\begin{definition}
$X \not\gtrless^P_p Y \Leftrightarrow X \nless^P_p Y \text{ and } Y \nless^P_p X$
\end{definition}
\end{framed}
We now show that this new relation $<^P_p$ allows us to compare almost every single pair of simple values and that we can use it to simplify any complex value to a single simple value. To this end, we prove with three theorems that this holds for the relation $<^P_1$, from the perspective of player $1$. The proofs for the relations $<^P_2$ and $<^P_3$ are the same.
First we prove that the values $2_i$ and $3_i$ are incomparable and that $1_i$ is comparable with $2_i$ and $3_i$, with the sign depending on the parity of $i$:
\begin{framed}
\begin{theorem}
\[
\forall i \geq 0: 2_i \not\gtrless^P_1 3_i \land
\begin{cases*}
2_i <^P_1 1_i \land 3_i <^P_1 1_i & when $i$ is even \\
1_i <^P_1 2_i \land 1_i <^P_1 3_i & when $i$ is odd \\
\end{cases*}
\]
\end{theorem}
We prove this by induction. Our base case is $i = 0$: $2 \not\gtrless^P_1 3$. It can easily be determined that $2$ and $3$ are not comparable under the rules we have defined. Furthermore, we determine that $2 <^P_1 1$ and $3 <^P_1 1$. Both of these can be easily checked.
For our induction step, we assume our hypothesis to hold for all $0 \leq i \leq k$. We now prove this to induce that the hypothesis also holds for $k+1$. Consider the game trees for the values $1_{k+1}$, $2_{k+1}$ and $3_{k+1}$:
\Tree [.$1_{k+1}$ $2_k$ $3_k$ ]
\Tree [.$2_{k+1}$ $1_k$ $3_k$ ]
\Tree [.$3_{k+1}$ $1_k$ $2_k$ ]
We now attempt to compare $2_{k+1}$ with $3_{k+1}$. There are two ways for the two values to be comparable: $2_{k+1} <^P_1 3_{k+1}$ or $3_{k+1} <^P_1 2_{k+1}$. From our induction hypothesis, it follows that $2_k \not\gtrless^P_1 3_k$. What remains is to compare $1_k$ with $2_k$ and $3_k$ with $1_k$. We first consider the option $2_{k+1} <^P_1 3_{k+1}$. Since we know from our induction hypothesis that $1_k$ is comparable both with $2_k$ and $3_k$, we know that both $1_k <^P_1 2_k$ and $3_k <^P_1 1_k$ must hold. This is impossible by our induction hypothesis. The second option, $3_{k+1} <^P_1 2_{k+1}$, can be analogously proven to be impossible. We conclude that $2_{k+1} \not\gtrless^P_1 3_{k+1}$.
We then compare $1_{k+1}$ with $2_{k+1}$. It follows from our induction hypothesis that $2_k \not\gtrless^P_1 3_k$. Furthermore, from the hypothesis we know that either $1_k <^P_1 2_k \land 1_k <^P_1 3_k$ or $2_k <^P_1 1_k \land 3_k <^P_1 1_k$. In the first case, when $k$ is odd, it follows that $2_{k+1} <^P_1 1_{k+1}$; in a similar fashion it then holds that $3_{k+1} <^P_1 1_{k+1}$. In the second case, when $k$ is even, the reverse holds: $1_{k+1} <^P_1 2_{k+1} \land 1_{k+1} <^P_1 3_{k+1}$.
Together, these comparisons show that our hypothesis also holds for $k+1$. By mathematical induction, the statement holds for all $i \geq 0$.
\end{framed}
Second, we prove that the value $1_{i+1}$ is incomparable with the values $2_i$ and $3_i$:
\begin{framed}
\begin{theorem}
$\forall i \geq 0: 1_{i+1} \not\gtrless^P_1 2_i \land 1_{i+1} \not\gtrless^P_1 3_i$
\end{theorem}
For this, again consider the tree for $1_{i+1}$:
\Tree [.$1_{i+1}$ $2_i$ $3_i$ ]
From our previous proof, it holds that $2_i \not\gtrless^P_1 3_i$. It follows that there exists no child value of $1_{i+1}$ which is prudently weaker than a child value of $2_i$ and vice versa. Thus, we conclude that $1_{i+1} \not\gtrless^P_1 2_i$. Analogously, it holds that $1_{i+1} \not\gtrless^P_1 3_i$.
\end{framed}
Finally, we prove that the following ordering holds:
\begin{framed}
\begin{theorem}
\label{theorem:prudentordering}
$\{\bar{1},2,3\} <^P_1 \{\bar{\bar{\bar{1}}},\bar{\bar{2}},\bar{\bar{3}}\} <^P_1 \{\bar{\bar{\bar{\bar{\bar{1}}}}},\bar{\bar{\bar{\bar{2}}}},\bar{\bar{\bar{\bar{3}}}}\} <^P_1 \ldots <^P_1 \{\bar{\bar{\bar{\bar{1}}}},\bar{\bar{\bar{2}}},\bar{\bar{\bar{3}}}\} <^P_1 \{\bar{\bar{1}},\bar{2},\bar{3}\} <^P_1 1$ where values within sets of brackets are incomparable with each other.
\end{theorem}
We prove this by induction. Our base case consists of the ordering of the values $1$, $\bar{1}$, $2$, $\bar{2}$, $3$ and $\bar{3}$. Since $1$ is a guaranteed victory and $\bar{1}$, $2$ and $3$ are guaranteed losses, it is easy to place them in the ordering. We know already that $\bar{1}$, $2$ and $3$ are incomparable. Since $\bar{2}$ and $\bar{3}$ are neither guaranteed victories nor guaranteed losses, they are weaker than $1$ and stronger than $\bar{1}$, $2$ and $3$. We know already that they are incomparable with each other. Combining all this, we obtain the following ordering:
\begin{center}
$\{\bar{1},2,3\} <^P_1 \{\bar{2},\bar{3}\} <^P_1 1$
\end{center}
For our induction step, we assume the following ordering to hold for some $i > 1$:
\begin{center}
$\{1_1,2,3\} <^P_1 \{1_3,2_2,3_2\} <^P_1 \ldots <^P_1 \{1_{2i-1},2_{2i-2},3_{2i-2}\} <^P_1 \{2_{2i-1},3_{2i-1}\} <^P_1 \{1_{2i-2},2_{2i-3},3_{2i-3}\} <^P_1 \ldots <^P_1 \{1_2,2_1,3_1\} <^P_1 1_0$
\end{center}
We want to prove that the next values, $1_{2i}$, $1_{2i+1}$, $2_{2i}$, $2_{2i+1}$, $3_{2i}$ and $3_{2i+1}$, are inserted in the ordering as follows:
\begin{center}
$\ldots <^P_1 \{1_{2i-1},2_{2i-2},3_{2i-2}\} <^P_1 \mathbf{\{1_{2i+1},2_{2i},3_{2i}\}} <^P_1 \mathbf{\{2_{2i+1},3_{2i+1}\}} <^P_1 \{\mathbf{1_{2i}},2_{2i-1},3_{2i-1}\} <^P_1 \{1_{2i-2},2_{2i-3},3_{2i-3}\} <^P_1 \ldots$
\end{center}
We prove this in four steps:
\begin{enumerate}[label=\textbf{\Roman*)}]
\item
\Tree [.$1_{2i}$ $2_{2i-1}$ $3_{2i-1}$ ]
We know by induction hypothesis that $\{2_{2i-1},3_{2i-1}\} <^P_1 1_{2i-2}$. Therefore, $1_{2i} <^P_1 1_{2i-2}$. We know by induction hypothesis that $\{2_{2i-1},3_{2i-1}\} <^P_1 \{2_{2i-3},3_{2i-3}\}$. Therefore, $1_{2i} <^P_1 \{2_{2i-3},3_{2i-3}\}$. We know from our induction hypothesis that $\{2_{2i-1},3_{2i-1}\} <^P_1 \{1_{2i-2},2_{2i-3},3_{2i-3}\}$ and we know from a previous proof that $1_{2i} \not\gtrless^P_1 \{2_{2i-1},3_{2i-1}\}$.
Combining the above gives us the ordering $\{\mathbf{1_{2i}},2_{2i-1},3_{2i-1}\} <^P_1 \{1_{2i-2},2_{2i-3},3_{2i-3}\}$.
\item
\Tree [.$2_{2i+1}$ $1_{2i}$ [.$3_{2i}$ $1_{2i-1}$ $2_{2i-1}$ ] ]
\Tree [.$3_{2i+1}$ $1_{2i}$ [.$2_{2i}$ $1_{2i-1}$ $3_{2i-1}$ ] ]
We know already that $\{1_{2i},2_{2i-1},3_{2i-1}\} \not\gtrless^P_1 \{2_{2i-1},3_{2i-1}\}$, and by induction hypothesis that $\{1_{2i-1}\} <^P_1 \{2_{2i-1},3_{2i-1}\}$. It follows that $\{2_{2i},3_{2i}\} <^P_1 \{2_{2i-1},3_{2i-1}\}$ and thus that $\{2_{2i+1},3_{2i+1}\} <^P_1 \{2_{2i-1},3_{2i-1}\}$. We know from a previous proof that $\{2_{2i},3_{2i}\} <^P_1 1_{2i}$. Therefore, $\{2_{2i+1},3_{2i+1}\} <^P_1 1_{2i}$. We know from a previous proof that $1_{2i} \not\gtrless^P_1 \{2_{2i-1},3_{2i-1}\}$.
Combining the above gives us the ordering $\mathbf{\{2_{2i+1},3_{2i+1}\}} <^P_1 \{\mathbf{1_{2i}},2_{2i-1},3_{2i-1}\}$.
\item
\Tree [.$2_{2i+1}$ $1_{2i}$ $3_{2i}$ ]
\Tree [.$3_{2i+1}$ $1_{2i}$ $2_{2i}$ ]
We know from a previous proof that $\{2_{2i},3_{2i}\} <^P_1 \{1_{2i}\}$. It follows that $\{2_{2i},3_{2i}\} <^P_1 \{2_{2i+1},3_{2i+1}\}$. We know from a previous proof that $1_{2i+1} <^P_1 \{2_{2i+1},3_{2i+1}\}$ and $1_{2i+1} \not\gtrless^P_1 \{2_{2i},3_{2i}\}$.
Combining the above gives us the ordering $\mathbf{\{1_{2i+1},2_{2i},3_{2i}\}} <^P_1 \mathbf{\{2_{2i+1},3_{2i+1}\}}$.
\item
\Tree [.$1_{2i+1}$ $2_{2i}$ $3_{2i}$ ]
\Tree [.$2_{2i}$ $1_{2i-1}$ $3_{2i-1}$ ]
\Tree [.$3_{2i}$ $1_{2i-1}$ $2_{2i-1}$ ]
We know from our induction hypothesis that $1_{2i-1} <^P_1 \{2_{2i-1},3_{2i-1}\}$. Therefore, $1_{2i-1} <^P_1 \{2_{2i},3_{2i}\}$, which lets us conclude that $1_{2i-1} <^P_1 1_{2i+1}$. We know from our induction hypothesis that $1_{2i-1} \not\gtrless^P_1 \{2_{2i-2}, 3_{2i-2}\}$ and that $\{2_{2i-2},3_{2i-2}\} <^P_1 \{2_{2i},3_{2i}\}$. It follows that $1_{2i-1} <^P_1 \{2_{2i},3_{2i}\}$ and that $\{2_{2i-2},3_{2i-2}\} <^P_1 \{2_{2i},3_{2i}\}$. From this, we can conclude that $\{2_{2i-2},3_{2i-2}\} <^P_1 1_{2i+1}$.
Combining the above gives us the ordering $\{1_{2i-1},2_{2i-2},3_{2i-2}\} <^P_1 \mathbf{\{1_{2i+1},2_{2i},3_{2i}\}}$.
\end{enumerate}
Together, these four steps conclude our proof by induction.
\end{framed}
It follows that a prudent player will always simplify a list of simple values to a single simple one. They cannot construct complex values. After all, almost every pair of simple values is comparable, allowing the player to discard one of them. The only values incomparable with each other, from the perspective of player 1, are $2_i$, $3_i$ and $1_{i+1}$. However, $[2_i,3_i] = 1_{i+1}$, and any combination including $1_{i+1}$ can be simplified to $1_{i+1}$. This leads us to our final theorem:
\begin{framed}
\begin{theorem}
\label{theorem:prudentsimplicity}
All prudent short games with $N = 3$ and a given starting player result in a single simple value.
\end{theorem}
As mentioned in Section~\ref{sec:N-player games}, we assume the game to be converging. We can thus construct the full game tree. As we are given a fixed starting player, we can then determine for each node in the tree which player makes the corresponding choice. If we label the nodes in a bottom-up way, we can apply Theorem~\ref{theorem:prudentordering} to obtain a simple value in every node, as any combination of simple values, from the perspective of a given player, can be merged into another simple value.
\end{framed}
It might be interesting to note that the number of different values for a given starting player thus becomes at most linear in the size of the board. As each move, and thus each level in the game tree, removes a single token from the game and possibly isolates more tokens, the depth of the game tree --- and therefore also the exponent of a simple value --- cannot exceed the number of initial tokens, which in turn cannot exceed the number of vertices $n$ on the game board. As we have three different bases for simple values and at most $n$ different exponents ($0, \ldots, n-1$, since at least two tokens are needed to make a move), this gives us at most $3n$ different values or outcome classes.
However, we have now fixed a starting player, so a position now consists of a configuration of the board and the player whose turn it is. Naturally, different values can be assigned to the same configuration depending on the starting player. For instance, the game
\begin{tabular}{|c|c|}
\hline
1 & 2 \\
\hline
\end{tabular}
is won either by player 1, if player 1 or player 3 starts, or by player 2, if player 2 starts. A different notation would be needed to construct a value that includes all possible starting players, as has been done for two-player games in classical combinatorial game theory.
\subsubsection*{Indifference}
Another possible approach, instead of playing prudently, would be to consider the values 2 and 3 to be equal from the perspective of player 1, so $2 =_1 3$. We call this an \emph{indifferent} player, as the player does not differentiate between the outcomes where they lose. They are simply losses, no matter which other player won. The assumption of an indifferent selfish player leads to the same ordering as in Theorem~\ref{theorem:prudentordering}, with two differences: the ordering uses the relation $<_1$ instead of $<^P_1$, and the values within sets of brackets are equal to each other instead of being incomparable. The proof closely parallels that of Theorem~\ref{theorem:prudentordering} and is left as an exercise to the reader.
\section{Simplification results}
\label{sec:results}
In this section, we analyse the efficiency of our simplification rules by computing the number of different possible values for games of three-player Clobber on a $1 \times n$ board using the different simplification rules.
Recall Example~\ref{example:1x10-unsimplified} of a $1 \times 10$ board, which had a rather large value of 675 characters, excluding the spaces added for readability:
\begin{framed}
\textbf{Unsimplified: } [[[1, 3, [3, [[1, 3]]], [[1, 3, [2, 3]], [2, 3, [1, 2]]], [[[1, 2]]]], [3, [2, 3], [2, [1, 3]], [[1, 2]]], [3, [3, [1, 3]], [3, [[1, 3]]], [[1, 3], [2, 3]], [[1, 3]]], [3, [[1, 2], [2, 3]]], [[1, 3], [3, [1, 2], [[1, 2]]], [[1, 2, 3], [1, 2, [2, 3]]]], [[1, 3], [3, [1, 2], [[1, 2]]], [[1, 3], [2, 3, [2, 3]], [2, 3]]]], [[2, [1, 3], [2, 3], [[1, 3]]], [2, [3, [1, 3]]], [3, [1, [2, 3]], [[1, 3, [2, 3]]], [[1, 3], [[1, 2]]], [[2, 3], [[1, 3]]]], [[1, 3], [1, [2, 3]], [3, [2, 3]], [[2, 3]]], [[1, [1, 2, 3], [2, 3, [2, 3]]], [2, 3], [2, [[1, 2]]]], [[2, 3], [2, [1, 2]], [[1, 2, [2, 3]], [1, 2]]]], [[2, [2, 3]], [2, [3, [1, 3]], [3, [2, 3]]], [2, [3, [1, 3]]], [3, [2, 3], [[1, 3]], [[2, 3], [[1, 3]]]], [[1, 2], [1, 3, [1, 3]], [2, 3]], [[1, 2], [2, 3]]], [[[1, 2], [1, [2, 3]], [2, 3]], [[1, 2], [2, 3]], [[1, 3, [1, 2]]], [[2, 3]], [[3, [1, 3]], [[1, 3]]]]]
\end{framed}
Our syntactic rules turn out to be inapplicable in this case, but the assumption of three selfish players makes a huge difference, reducing the value to a simple one:
\begin{framed}
\textbf{Selfish: } [[[1,2]]].
\end{framed}
Note that this assumes that the first turn is player~1's. Three prudent players obtain the same result, though they express it as a simple value:
\begin{framed}
\textbf{Prudent: } $3_1$ (= [[[1,2]]]), brackets as seen from the perspective of player~1.
\end{framed}
To give a different example, in which the assumption of prudent players simplifies the value further than selfish play alone, consider the following game: \begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
1 & 3 & 2 & 3 & 2 & 3 & 1 & 2 & 3 \\
\hline
\end{tabular}. This gives us the following values:
\begin{framed}
\textbf{Selfish: } [[[1,2],[[2,3],[[1,3]]]],[[1,2],[[2,3]]],[[[2,3],[[1,3]]]]]
\textbf{Prudent: } $3_2$ (= [[[[2,3],[[1,3]]]]])
\end{framed}
Table~\ref{table:results} shows the number of unique possible values for boards of size $1~\times~n$ with $2 \leq n \leq 13$. We only analysed the configurations that do not occur on earlier board sizes. This means we skipped all configurations with a 0 at either extremity of the board or with at least two consecutive 0's, as these configurations would have already occurred at some smaller board size\footnote{Recall that a 0 is an empty vertex.}. We also took mirror symmetry into account, so we only analysed configurations whose string representation is lexicographically greater than or equal to their reverse. Finally, following our assumption in Section~\ref{sec:N-player games}, we only considered configurations in which at least one move is possible for some player. Because of this, the number of games we analysed is less than the number of actual possible configurations, which is $4^n$ for a $1 \times n$ board. Note that this also means that the number of values shown in the table is the number of different possible values at the starting position. Once the players proceed to make moves, different values may occur. For instance, with selfish players, some values occur on $1 \times 10$ boards that do not occur on $1 \times 11$ boards. Furthermore, we assume all players share the same preference --- we have not analysed a game in which, for instance, only one of the players is prudent --- and we assume that it is player 1's turn in each starting position, which seems to lower the number of possible values. On a $1 \times 4$ board with three prudent players, the value $1_1$ does occur while $2_1$ does not. Were player 2 the starting player, the value $2_1$ would have occurred, for instance on the board
\begin{tabular}{|c|c|c|c|}
\hline
1 & 2 & 2 & 3 \\
\hline
\end{tabular}. However, our results are from the perspective of the starting player, and we can always renumber the players so their number matches their turn order, to obtain a value from our results.
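The configuration filters described above can be made concrete with a brute-force enumeration. The following is a minimal sketch (the move rule assumed here is the usual Clobber one, in which a move requires two adjacent tokens belonging to different players; the function name is illustrative):

```python
from itertools import product

def analysed_configs(n):
    """Count 1 x n Clobber configurations kept by the filters described
    in the text: no empty cell (0) at either end, no two consecutive
    empty cells, mirror symmetry (keep s >= reversed s), and at least
    one possible move (two adjacent tokens of different players)."""
    count = 0
    for cells in product("0123", repeat=n):
        s = "".join(cells)
        if s[0] == "0" or s[-1] == "0" or "00" in s:
            continue  # already occurs on a smaller board
        if s < s[::-1]:
            continue  # mirror image is counted instead
        if not any(a != b and a != "0" and b != "0"
                   for a, b in zip(s, s[1:])):
            continue  # no player can move
        count += 1
    return count
```

For $n = 2, \ldots, 5$ this reproduces the ``Games analysed'' column of Table~\ref{table:results}: 3, 15, 60 and 243.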
\begin{table}[ht!]
\begin{center}
\begin{tabular}{| r | r || r | r | r | r |}
\hline
Board length & Games analysed & Unsimplified & Syntactic & Selfish & Prudent \\
\hline
2 & 3 & 2 & 2 & 2 & 2 \\
3 & 15 & 3 & 3 & 3 & 3 \\
4 & 60 & 7 & 7 & 4 & 4 \\
5 & 243 & 21 & 21 & 5 & 5 \\
6 & 924 & 77 & 77 & 7 & 7 \\
7 & 3\,609 & 506 & 501 & 8 & 8 \\
8 & 13\,704 & 2\,408 & 2\,398 & 9 & 8 \\
9 & 52\,497 & 9\,777 & 9\,748 & 20 & 10 \\
10 & 199\,329 & 36\,407 & 36\,326 & 154 & 11 \\
11 & 758\,556 & 128\,345 & 128\,179 & 2\,163 & 13 \\
12 & 2\,878\,512 & 434\,571 & 434\,274 & 30\,378 & 13 \\
13 & 10\,949\,499 & 1\,441\,816 & 1\,441\,334 & 256\,975 & 14 \\
\hline
\end{tabular}
\caption{Number of Clobber games analysed and unique resulting values adding the different simplification methods. Note that the selfish and prudent columns also use the syntactic simplifications.}
\label{table:results}
\end{center}
\end{table}
As the table shows, the syntactic rules do reduce the number of unique values, but only very slightly. Selfish play reduces the numbers significantly on smaller board sizes, but the number of values still grows exponentially and the reduction factor seems to decrease as the board grows larger. This could be explained by the incomparability of the more complex values, as mentioned at the end of Section~\ref{sec:selfish}: these more complex values occur more often on larger boards, and a combination of two such values usually cannot be simplified. As argued at the end of Section~\ref{sec:prudent}, prudent play results in a linear upper bound on the number of values. This is strengthened by the results shown in the table, which also show that the upper bound of $3n$ is not sharp.
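The linear bound can be checked directly against the table; a small sanity check, with the prudent column transcribed by hand:

```python
# Prudent-value counts transcribed from the results table (lengths 2-13).
prudent = {2: 2, 3: 3, 4: 4, 5: 5, 6: 7, 7: 8, 8: 8,
           9: 10, 10: 11, 11: 13, 12: 13, 13: 14}

# The upper bound argued in the text: 3 bases times at most n exponents.
assert all(count <= 3 * n for n, count in prudent.items())

# The bound is far from sharp: the largest fraction of it that is ever
# used is 13/33, attained at n = 11.
assert max(count / (3 * n) for n, count in prudent.items()) == 13 / 33
```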
As a specific example, let us consider the single value that ``disappears'' when going from selfish to prudent play on a $1 \times 8$ board. This is [[1,3],[[1,2],[[2,3]]]], or, using the bar notation, $[\bar{2},\bar{\bar{2}}]$. This should indeed be simplified to just $\bar{2}$ by a prudent player 1. The other values occurring in selfish play are 1, 2, 3, $\bar{1}$, $\bar{2}$, $\bar{3}$, $\bar{\bar{1}}$ and $\bar{\bar{2}}$, which are all already simple values. As argued before, the value $\bar{\bar{3}}$ does not occur because we assume player 1 to be the starting player. We have also verified by hand that the reduction from 20 selfish to 10 prudent values for $1 \times 9$ is correct.
\section{Conclusions and further research}
\label{sec:conclusions}
In this paper, we have attempted to use simple player preferences to simplify the game tree in games with $N \geq 2$ players, and particularly with $N = 3$, using the game of Clobber as an example. We have presented two sets of generic player preferences which significantly reduce the number of unique values for arbitrary game positions --- our simplification rules for prudent play lead to a linear upper bound on this number for three-player games. These rules apply to both impartial and partisan games and are shown to work on Clobber. We post-pruned our game trees, which unfortunately means that the entire tree still had to be computed before any simplification could be applied. While we have managed to significantly reduce the number of outcome classes for game values, our rules did not (significantly) lower the time needed to compute these values.
As we have only considered the outcome classes and not the victory margins, it is impossible to simply determine the value of a complex position from the values of its disjoint components. For instance, consider the games
\begin{tabular}{|c|c|}
\hline
1 & 2 \\
\hline
\end{tabular}
,
\begin{tabular}{|c|c|}
\hline
1 & 3 \\
\hline
\end{tabular}
and
\begin{tabular}{|c|c|c|}
\hline
1 & 1 & 2 \\
\hline
\end{tabular}
. These three games all have the value 1 if player 1 begins. Combining the first two gives the game
\begin{tabular}{|c|c|c|c|c|}
\hline
1 & 2 & 0 & 1 & 3 \\
\hline
\end{tabular}
, which has value $\bar{1}$ if player 1 begins, while combining the second two gives the game
\begin{tabular}{|c|c|c|c|c|c|}
\hline
1 & 1 & 2 & 0 & 1 & 3 \\
\hline
\end{tabular}
, which has value $1$ if player 1 begins. While all components can be won by player 1 if they begin, player 1 cannot begin in both games at once. Additional information is thus required to allow for a simple calculation of disjunctive sums, such as the options for the other players.
A logical next step would be to see how our simplification rules perform on games with more than three players. Furthermore, as our simple values are merely a means to simplify the notation of three-player games, it could be interesting to attempt to find a similar useful notation, and a generalisation of Theorem~\ref{theorem:prudentsimplicity}, for games with $N > 3$. A natural generalisation would be to keep the notation from Section~\ref{sec:simplevalues}: $a_i = \{b_{i-1}\,|\,b \neq a\}$. Using this notation, in a four-player game, we would have $2_1 = \{1, 3, 4\}$. However, there would be no simple notation for, for instance, $\{1, 4\}$, so it remains to be seen how useful this notation would be. Our final suggestion would be to devise more player preferences --- a risky or paranoid player, for instance --- and to mix several types of players to research the effects of different combinations. We have now assumed all players to have the same preference to experience the full effects of those specific rulesets, but naturally this is not always the case.
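The generalised notation $a_i = \{b_{i-1}\,|\,b \neq a\}$ can be expanded mechanically into nested-set form; a minimal sketch (the function name is illustrative):

```python
def expand(a, i, n=3):
    """Nested-set form of the simple value a_i = {b_{i-1} | b != a}
    in an n-player game (n = 3 throughout this paper)."""
    if i == 0:
        return a  # the base case a_0 is the plain outcome a
    return [expand(b, i - 1, n) for b in range(1, n + 1) if b != a]

assert expand(1, 1) == [2, 3]            # 1_1 = [2_0, 3_0]
assert expand(1, 2) == [[1, 3], [1, 2]]  # 1_2 = [2_1, 3_1]
assert expand(2, 1, n=4) == [1, 3, 4]    # the four-player example 2_1
```

As noted above, a value such as $\{1, 4\}$ in the four-player game has no counterpart $a_i$ in this scheme, which limits how far the notation generalises.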
\section{Introduction}
The quantum spin liquid (QSL) is an exotic phase of matter characterized by a disordered yet highly entangled ground state. Geometrically frustrated magnets with, for example, a triangular arrangement of spins have been predicted to host such states. Another promising route to a QSL is the Kitaev honeycomb model, which consists of spin-1/2 particles arranged on a honeycomb lattice. \cite{kitaev2003fault, kitaev2006anyons} In this model, anisotropic Ising-like exchange interactions between nearest neighbors give rise to frustration.
The ground state is a gapless Z$_2$ spin liquid, with excitations taking the form of itinerant Majorana quasiparticles and static fluxes.
\par The Kitaev honeycomb is of recent experimental interest, as the anisotropic interactions characteristic of the model can manifest in real materials,~\cite{jackeli2009mott, chaloupka2010kitaev} in particular transition metal compounds with strong spin-orbit coupling (SOC) such as the Na and Li iridates~\cite{williams2016incommensurate, biffin2014noncoplanar,modic2014realization, takayama2015hyperhoneycomb} and $\alpha$-RuCl$_{3}$. \cite{kim2015kitaev, plumb2014alpha} Despite the presence of a Kitaev term in the effective spin Hamiltonian, these materials order magnetically at low temperatures~\cite{liu2011long, choi2012spin, chun2015direct, biffin2014noncoplanar, takayama2015hyperhoneycomb, sears2015magnetic} indicating that they host interactions beyond Kitaev exchange. Characterizing these interactions can help to navigate the rich phase diagrams of these materials, wherein one may approach a quantum-disordered state by applying external perturbations such as fields or chemical substitution~\cite{yadav2016kitaev}.
\par $\alpha$-RuCl$_{3}$ has risen to prominence as a candidate Kitaev system, driven by the availability of single crystals suitable for inelastic neutron scattering (INS)~\cite{banerjee2016proximate,ran2017spin} and optical spectroscopy \cite{little2017antiferromagnetic, ponomaryov2017unconventional, wang2017magnetic, shi2018field}, as well as the observation that magnetic order disappears in an in-plane magnetic field $H_c\sim$ 7.5 T~\cite{sears2017phase, baek2017observation, zheng2017gapless,leahy2017anomalous, banerjee2018excitations}. In this material, quasi-2D layers of Ru$^{3+}$ atoms surrounded by Cl$_6$ octahedra are arranged on a honeycomb lattice. The combination of octahedral crystal field splitting, electron correlations, and SOC gives rise to a Mott-insulating state with a localized J$_{eff}$ = 1/2 moment on each Ru$^{3+}$ site~\cite{kim2015kitaev}. The quasi-2D layers are stacked and van der Waals coupled to form bulk $\alpha$-RuCl$_{3}$. Such layered magnets are of particular interest because they can be assembled and stacked with other 2D materials, forming heterostructures with potentially topological phases~\cite{soumyanarayanan2016emergent}.
\par Spectroscopic probes such as INS~\cite{banerjee2016neutron, banerjee2016proximate,banerjee2018excitations}, Raman scattering~\cite{sandilands2015scattering}, and THz absorption~\cite{little2017antiferromagnetic, wang2017magnetic, ponomaryov2017unconventional, shi2018field} have been employed to characterize magnetic fluctuations in $\alpha$-RuCl$_{3}$ and test for the existence of, or proximity to, a QSL phase. Below T$_{N}$ = 7 K, the ground state has zigzag antiferromagnetic order~\cite{sears2015magnetic}, as shown in Fig. 1(b). In the ordered phase and in zero applied magnetic field, INS measurements observed peaks consistent with magnons together with a continuum of scattering centered at $\bf{Q}$ = 0 $(\Gamma$-point) that was seen as well by Raman spectroscopy. This continuum was found to persist at fields above $H_c$, as well as at temperatures above $T_{N}$ at zero field, and was interpreted as a possible signature of fractionalized excitations, \textit{i.e.}, Majorana fermions and Z$_2$ vortices. However, it has also been suggested that the continuum reflects the breakdown of coherent magnons originating from strong anharmonicity in the magnon Hamiltonian.~\cite{winter2017breakdown} THz absorption measurements~\cite{little2017antiferromagnetic} showed that below T$_{N}$ the majority of the $\Gamma$-point spectral weight at low energies was accounted for by spin waves, and furthermore, that the contribution from a magnetic continuum did not grow with increasing magnetic field, up to $H_c$.
Recent measurements have explored in detail the region of the phase diagram near the critical field for the loss of magnetic order. Thermodynamic and transport measurements, including specific heat~\cite{kubota2015successive, wolter2017field, sears2017phase}, nuclear magnetic resonance~\cite{baek2017observation}, and thermal transport\cite{leahy2017anomalous, hirobe2017magnetic, hentrich2018unusual, yu2018ultralow} indicate a transition to a gapped magnetically disordered state. However, varying interpretations of the nature of that state and its low-energy excitations leave the question of a transition to a QSL at or near $H_c$ unresolved. Recent experiments reporting a quantized thermal Hall effect \cite{kasahara2018majorana} for off-axis applied fields, a signature of chiral Majorana modes, suggest that a topological phase may exist in the vicinity of $H_c$.
Measurements of the magnetic excitation spectrum using time-domain THz spectroscopy (TDTS) can aid theoretical understanding of the $\alpha$-RuCl$_3$ phase diagram by constraining the effective spin Hamiltonian parameters. TDTS probes this spectrum with high sensitivity and energy resolution, yielding an absolute measurement of the imaginary part of the dynamic magnetic susceptibility at zero wavevector, $\chi^{\prime\prime}(\omega, Q = 0)$, in the energy range 0.1 to 1.7 THz, or 0.4 to 7.0 meV~\cite{little2017antiferromagnetic}. In Section II we describe THz absorption measurements that fully characterize $\chi^{\prime\prime}(\omega, Q = 0)$ associated with the antiferromagnetic state of $\alpha$-RuCl$_3$ as a function of static field $\bf{H}$ and THz probe field $\bf{B_{THz}}$. We observe four resonances whose frequency and amplitude exhibit a complex dependence on applied field that depends strongly on the relative orientation of $\bf{H}$ and $\bf{B_{THz}}$. We use the absolute determination of $\chi^{\prime\prime}(\omega, Q = 0)$ provided by THz absorption to track the dependence of the spin wave spectral weight on $H$ for $\bf{B_{THz}}\ \|\ \bf{H}$ and $\bf{B_{THz}}\perp\bf{H}$. These spectral weights are then compared with the static susceptibility, $\chi(\omega = 0, Q = 0)$, to determine the relative contributions of spin wave vs. continuum to the total weight of magnetic fluctuations at zero wave vector. In Section III we compare our experimental results with calculations based on linear spin wave theory (LSWT). Surprisingly, we find that LSWT can account for all the essential features of the spectra -- the number of modes, their spectral weight and optical selection rules, the variation of resonant frequency with $H$, and a discontinuity in mode frequency and amplitude at a low field of $\sim$ 1.5 T.
Achieving this description requires considering a $C_3$-breaking distortion of the honeycomb lattice and the resulting multi-domain structure, as well as a refinement of existing parameterizations of the effective spin Hamiltonian to account for a zero-field splitting of the lowest frequency spin waves. In addition, the contribution to the spectrum from two-magnon states is clearly identified. Finally, Section IV summarizes the conclusions of our study. \newline
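The quoted band limits follow from the photon-energy conversion $E = h\nu$, i.e. 1 THz $\approx 4.14$ meV. A quick numerical check, using the exact SI values of $h$ and $e$:

```python
h = 6.62607015e-34   # Planck constant in J s (exact in the 2019 SI)
e = 1.602176634e-19  # elementary charge in C (exact in the 2019 SI)

THz_to_meV = h * 1e12 / e * 1e3  # photon energy of a 1 THz wave, in meV
assert round(THz_to_meV, 3) == 4.136

# The measurement band quoted in the text, 0.1 to 1.7 THz:
assert round(0.1 * THz_to_meV, 2) == 0.41  # ~0.4 meV
assert round(1.7 * THz_to_meV, 2) == 7.03  # ~7.0 meV
```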
\section{Experimental Results}
\subsection{Definition of axes}
To guide the polarized TDTS measurements, the optical anisotropy of $\alpha$-RuCl$_3$ samples was first characterized by measuring their transmitted THz electric field amplitude when rotated between crossed linear polarizers. Fig. 1(a) shows a typical room temperature scan of transmission as a function of angle of rotation about the optic axis. The nearly four-fold pattern, observed in all samples studied, indicates breaking of $C_3$ symmetry. This result is consistent with X-ray diffraction measurements that indicate a $\sim 0.2 \%$ elongation of one of the Ru-Ru bonds and a monoclinic $C2/m$ space group~\cite{johnson2015monoclinic, cao2016low}. Fig. 1(b) depicts a Ru honeycomb layer that forms this structure, where $x$, $y$, and $z$ label the Ising axes of the Kitaev exchange terms on the Ru-Ru bonds. An elongation in the direction of one of the bonds (the one labeled by $z$ in the sketch) defines the $\bf{b}$ axis of the monoclinic structure. The color of the atoms illustrates the zigzag antiferromagnetic order that arises below the N\'eel temperature (T$_N$).
\begin{figure}[htp]
\centering
\includegraphics[width=1.0\columnwidth]{Fig1.pdf}
\caption{(a) Transmitted THz electric field amplitude at T = 294 K as a function of sample angle. Blue and red lines represent the minimum transmission axes at $\bf{a^\prime}$ and $\bf{b^\prime}$. (b) Schematic of honeycomb structure showing ${\bf a}$ and $\bf{b}$ monoclinic axes relative to Ru-Ru bonds. Color of atoms illustrates zigzag order. Bond labels x, y, and z denote the component of the spin interacting along a given bond in the Kitaev model. (c), (d) Magnon absorption as a function of frequency for $\bf{H\ \|\ b^\prime\ \| \ B_{THz}}$ and $\bf{H\ \|\ b^\prime \perp B_{THz}}$, respectively. The magnon contribution is extracted from the total THz absorption by subtracting a reference at T = 8 K, above T$_{N}$, from a T = 4 K spectrum at each field. Traces are offset for clarity.
}
\label{fig1}
\end{figure}
\par The absence of nodes in the polar pattern in Fig. 1(a) indicates that the local $C2/m$ symmetry is broken globally by the presence of domains of the three equivalent orientations of monoclinic distortion, which are rotated 120$^\circ$ from one another. A single domain $C2/m$ crystal would exhibit zero transmission for THz fields polarized parallel to the $\bf{a}$ or $\bf{b}$ axes, which is not seen in any of the samples we have studied. On the other hand, in a sample containing equal populations of three domains the optical anisotropy of each would be effectively canceled and the THz transmission between crossed polarizers would vanish for all angles. What we observe instead is the intermediate case, where unequal domain population gives rise to weak residual anisotropy. To confirm the presence of multiple domains, we performed scanning X-ray micro-Laue diffraction measurements~\cite{tamura2003scanning} that indeed revealed the presence of all three domains with spatially varying populations, as discussed in Appendix A, section 3. This multi-domain character, as we will show, is essential to understanding the THz absorption spectra in the zigzag state as a function of magnetic field.
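The domain-averaging argument can be illustrated with a small Jones-calculus sketch (the amplitude transmissions $t_a = 1.0$, $t_b = 0.8$ are hypothetical, chosen only for illustration): each domain acts as a linear diattenuator, an equal-weight sum of the three 120$^\circ$-rotated domains is exactly isotropic so the crossed-polarizer signal vanishes, while unequal populations leave a weak residual with no nodes.

```python
import numpy as np

def rot(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

def domain(phi, ta, tb):
    """Jones matrix of one domain: a linear diattenuator with
    (hypothetical) amplitude transmissions ta, tb, axes rotated by phi."""
    return rot(phi) @ np.diag([ta, tb]) @ rot(-phi)

def crossed_polarizer_amplitude(theta, weights, ta=1.0, tb=0.8):
    """Field through polarizer (x) -> sample -> analyzer (y), with
    domains at 0, 60 and 120 degrees and the sample rotated by theta."""
    M = sum(w * domain(np.deg2rad(d) + theta, ta, tb)
            for w, d in zip(weights, (0.0, 60.0, 120.0)))
    return abs((M @ np.array([1.0, 0.0]))[1])

# Equal domain populations: the anisotropy cancels at every angle.
assert all(crossed_polarizer_amplitude(t, (1/3, 1/3, 1/3)) < 1e-12
           for t in np.linspace(0.0, np.pi, 7))
# A single domain: exact nodes when its axes align with the polarizers.
assert crossed_polarizer_amplitude(0.0, (1.0, 0.0, 0.0)) < 1e-12
# Unequal populations: a weak residual signal survives, as observed.
assert crossed_polarizer_amplitude(0.0, (0.5, 0.3, 0.2)) > 1e-3
```

The cancellation for equal weights follows because the traceless part of each domain matrix varies as $\cos 2\phi$, $\sin 2\phi$, which sums to zero over three orientations spaced by 120$^\circ$.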
Because of the low effective symmetry, the directions of minimum transmission in Fig. 1(a) do not coincide with the monoclinic axes of a single domain, although they will be close to those of a dominant domain. In this study, we reference our THz polarization and external magnetic field $\bf{H}$ to the two directions of minimum transmission, which we label as $\bf{a^\prime}$ and $\bf{b^\prime}$ to distinguish them from the monoclinic axes of a single domain. We measure the absorption coefficient $\alpha(\omega)$ with the THz probe field in the honeycomb plane, $\bf{B_{THz}}$, oriented parallel to $\bf{a^\prime}$ and $\bf{b^\prime}$, and in both cases we compare measurements with in-plane $\bf{H}$ parallel and perpendicular to $\bf{B_{THz}}$.
\subsection{Magneto-optical THz spectroscopy} The magnetic dipole contribution to $\alpha(\omega)$ that is associated with the presence of antiferromagnetic order can be isolated by subtracting spectra measured at T = 8 K, which is sufficiently above T$_N$ that magnons are no longer present, from spectra in the ordered phase at T = 4 K (see Appendix A, section 5). The residual spectrum omits any magnetic contribution that does not change when crossing T$_N$. Figs. 1(c) and (d) show differential (4 K - 8 K) absorption spectra, $\Delta\alpha(\omega) d$, for a sample of thickness $d\sim1$ mm for $\textbf{H}$ parallel to $\bf{b^\prime}$. In the parallel ($\bf{B_{THz}}\ \|\ \textbf{H}$) channel (Fig. 1(c)), a single magnon is observed at $\Omega_1$ = 2.6 meV (0.62 THz) for $\bf{H}$ = 0, which shifts to lower energy and broadens as the field is increased. The spectra measured with $\bf{B_{THz}}\perp \textbf{H}$, shown in Fig. 1(d), are more complex in that the frequency and spectral weight appear to vary non-monotonically in field. In addition, two broader features, which we denote by $L_3$ and $L_4$, appear in the energy range 4-6 meV and become more strongly absorbing as the field is increased.
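The extraction of $\Delta\alpha(\omega)d$ can be sketched in a few lines. This is a schematic of the analysis only, assuming $|E_{4\,\mathrm{K}}/E_{8\,\mathrm{K}}| = e^{-\Delta\alpha\, d/2}$ so that Fresnel and thickness factors common to the two temperatures cancel in the spectral ratio; array names are illustrative:

```python
import numpy as np

def differential_absorption(e_t_4k, e_t_8k, dt):
    """Differential absorption Delta(alpha d) from two transmitted THz
    time-domain traces (4 K sample vs 8 K reference): Fourier transform
    both, take the spectral ratio, and convert the amplitude ratio to an
    absorption difference."""
    freqs = np.fft.rfftfreq(len(e_t_4k), d=dt)  # in THz if dt is in ps
    ratio = np.fft.rfft(e_t_4k) / np.fft.rfft(e_t_8k)
    return freqs, -2.0 * np.log(np.abs(ratio))
```

A synthetic check: uniformly attenuating a reference pulse by $e^{-1/2}$ in amplitude returns $\Delta\alpha d = 1$ at every frequency bin carrying usable signal.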
\begin{figure*}[htp]
\centering
\includegraphics[width=1.8\columnwidth]{Fig2.pdf}
\caption{ Magnon energies and absorption strengths at $\bf{Q}$ = 0 as a function of external in-plane magnetic field, H. Experimental data is in panels (a)-(d). Magnon absorption was extracted by subtracting the 8 K spectra from the 4 K spectra at each value of H. Spectra were taken in 0.2 T steps from 0 - 5 T and in 0.1 T steps from 5 - 7 T; intermediate field values are interpolated. The mode dispersion is shown for four configurations of H and the THz probe field, $\bf{B_{THz}}$ relative to $\bf a^\prime$ and $\bf b^\prime$: (a) and (c) show $\bf{H}\ \|\ \bf{B_{THz}}$ along the $\bf a^\prime$ and $\bf b^\prime$ directions respectively, while (b) and (d) show $\bf{H}\perp\bf{B_{THz}}$. Note the difference of color scales: absorption in the parallel configuration is roughly twice as strong. Panels (e) and (f) show LSWT calculations for absorption in $\bf{H}\ \|\ \bf{b}$ with the probe field parallel and perpendicular, respectively. Solid dots overlaid on (f) represent mode energies predicted by LSWT. The orange and pink dots coincide with observed $\Omega_1$ and $\Omega_2$. Two higher energy modes (white dots) are forbidden by selection rules and do not contribute to THz absorption. Intensity in the region 4 - 6 meV, consistent with observed $L_3$ and $L_4$, results from 2-magnon absorption.
}
\label{fig2}
\end{figure*}
\par The evolution of the spectra with $H$ is greatly clarified by the color scale plots in Fig. 2, which illustrate the magnitude of $\Delta\alpha d$ in the $\hbar\omega-H$ plane. Panels (a) and (b) show spectra with $\bf{H}\ \|\ \bf{a^\prime}$, for $\bf{B_{THz}}$ parallel and perpendicular to $\bf{H}$, respectively. Panels (c) and (d) are the corresponding spectra for the $\bf{H}\ \|\ \bf{b^\prime}$ configuration. Panels (e) and (f) show fits obtained by LSWT calculations discussed below.
We first note that the anisotropy with respect to rotation of the crystal by 90$^\circ$ is weak, that is, the pair of panels (a) and (b) share the same qualitative features as panels (c) and (d), with an overall amplitude difference of only a factor of $\sim 2$. As we discuss below, LSWT predicts a much larger anisotropy in the dynamic susceptibility between the two principal ($\bf{a}$ and $\bf{b}$) axes of a single zigzag domain. We interpret the observed weak anisotropy to be further evidence for the presence of multiple domains. The width of the peaks remains relatively constant until around H $\sim$ 5 T, at which point they start to broaden~\cite{little2017antiferromagnetic}. The broadening occurs more rapidly for $\bf{H\ \|\ a^{\prime}}$ as is apparent in Fig. 2 (a), where the $\Omega_1$ magnon becomes diffuse approaching 7 T. This is an indication that for $\bf{H\ \|\ a^{\prime}}$ the system is close to the critical point and corrections to the spin wave expansion become relevant (see Appendix B, section 3).
\par A far stronger contrast is seen when comparing spectra with $\bf{B_{THz}}\ \| \ \bf{H}$ (panels (a) and (c)) to $\bf{B_{THz}}\perp\bf{H}$ (panels (b) and (d)). For $\bf{B}\ \|\ \bf{H}$ we observe a single mode that shifts to lower frequency with increasing $H$, with the field-induced mode softening slightly more pronounced with $\bf{H}\ \|\ \bf{a^\prime}$. For $\bf{B_{THz}} \perp \bf{H}$ the color plots show clearly that, rather than a single mode with a non-monotonic dependence of energy on field, there are in fact two distinct low energy modes. At $H = 0$ there is a strong mode, $\Omega_1$ = 2.6 meV, and a much weaker one, $\Omega_2$ = 3.3 meV. We note in particular the 0.7 meV splitting between these modes, which informs our LSWT calculations. As ${H}$ increases the spectral weight of $\Omega_1$ decreases rapidly and then shifts to $\Omega_2$ for H $\sim 1.5$ T. Surprisingly, the total spectral weight at this crossover field is close to zero.
\par The absorption features centered at $L_3 = 5.2$ meV and $L_4 = 6.2$ meV at H = 4 T, grow with increasing $H$ and persist as $H$ approaches $H_c$. An exact diagonalization study of $\alpha$-RuCl$_3$ associated eigenstates in this energy range with a two-magnon continuum~\cite{winter2018probing}. Our results for $\chi^{\prime\prime}(\omega)$ using LSWT described in the next section [and shown in Fig. 4(f)] account for the field and polarization dependence of $L_3$ and $L_4$, and confirm their origin as two-magnon excitations in the longitudinal response, that is, $\bf{B_{THz}}$ parallel to the zigzag wavevector.
\subsection{Magnetic susceptibility}
The differential THz absorption is directly related to the imaginary part of the zero wave vector dynamic susceptibility $\chi^{\prime\prime}(\omega$), that is,
\begin{equation}
\Delta\alpha(\omega)\cong\frac{n}{2}\frac{\omega}{c}\chi^{\prime\prime}(\omega),
\end{equation}
where $c/n$ is the speed of light in $\alpha$-RuCl$_3$ in the THz regime, which is determined independently (see Appendix A, section 2). The thermodynamic sum rule, derived from the Kramers-Kronig relation, relates $\chi^{\prime\prime}(\omega)$ to the dc magnetic susceptibility, $\chi(0)$. With this sum rule, the contribution to $\chi(0)$ from $\bf{Q}$ = 0 spin waves can be determined from the spectral weight of the spin wave peaks,
\begin{equation}
\chi_{sw}(0)\equiv\frac{2}{\pi}\int_0^\infty\frac{\chi^{\prime\prime}_{sw}(\omega)}{\omega}d\omega,
\end{equation}
where the subscript $sw$ denotes the component of susceptibility originating from spin wave resonances. By comparing $\chi_{sw}(0)$ to $\chi(0)$ we can place an upper bound on the spectral weight not accounted for by spin waves, i.e., a magnetic continuum~\cite{little2017antiferromagnetic}.
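The sum rule of Eq. (2) can be checked numerically for a single damped-oscillator lineshape; the sketch below uses illustrative round-number parameters (of the order of the $\Omega_1$ magnon energy), not fitted values from our spectra.

```python
import numpy as np

# Illustrative parameters (round numbers, not fitted values from the spectra):
chi0 = 1.0     # static susceptibility contributed by the mode
w0 = 2.6       # resonance energy in meV, of order the Omega_1 magnon
gamma = 0.3    # linewidth in meV

def chi_imag(w):
    """chi''(w) for a damped harmonic oscillator with chi(0) = chi0."""
    return chi0 * w0**2 * gamma * w / ((w0**2 - w**2)**2 + (gamma * w)**2)

# Thermodynamic sum rule: (2/pi) * int_0^inf chi''(w)/w dw recovers chi(0).
w = np.linspace(1e-6, 300.0, 300_001)
y = chi_imag(w) / w
chi_sw0 = (2.0 / np.pi) * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w)))
print(f"{chi_sw0:.3f}")  # recovers chi0 up to discretization error
```

In practice the measured resonances are fit with Lorentzians (Appendix A, section 6) and the same integral is applied to the fitted lineshapes.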
We evaluate $\chi^{\prime\prime}_{sw}(\omega)$ by fitting a Lorentzian function to the THz resonances (see Appendix A, section 6). The resulting $\chi_{sw}(0)$ is plotted in Fig. 3, for each of the four configurations of $\bf{H}$ and $\bf{B_{THz}}$ shown in Fig. 2. Also shown in Fig. 3 is $\chi_{\|}(0)$ as a function of magnetic field, which is defined by the change in magnetization resulting from a field increment $\delta H$ parallel to $\bf{H}$. Note that $\chi_{sw}(0)$ in the $\bf{B_{THz}}\|\bf{H}$ channel tracks $\chi_{\|}(0)$ as they both increase with increasing field. The difference $\chi_{\|}(0)-\chi_{sw}(0)$, which is an upper bound on the spectral weight of a magnetic continuum, persists but does not increase until $H$ becomes close to $H_c$. Finally, we note a small feature near 5.5 T in the parallel configuration for both the $\bf{a^\prime}$ and $\bf{b^\prime}$ curves, roughly consistent with a proposed intermediate phase in the $5-7$ T range~\cite{banerjee2018excitations}.
\begin{figure}[htp]
\centering
\includegraphics[width=0.85\columnwidth]{Fig3.pdf}
\caption{Colored dots: Contribution of $\bf{Q}$ = 0 magnons to static magnetic susceptibility, $\chi_{sw}(0)$, as measured by fits to THz spectra for all four configurations of $\bf{H}$ and $\bf{B_{THz}}$. Orange and purple squares: Total value of $\chi_{\|}(0)$ as measured by low-frequency susceptometry for two directions.
}
\label{fig3}
\end{figure}
\par The dependence on field of the spin wave spectral weight measured with $\bf{B_{THz}}\perp\bf{H}$ is shown as well in Fig. 3, where it is seen to be strikingly different from the results for $\bf{B_{THz}}\ \| \ \bf{H}$. In this configuration the spectral weight exhibits a deep minimum at 2 T for both the $\bf a^\prime$ and $\bf b^\prime$ directions, where it nearly vanishes. The field at which this minimum occurs is the same as the field at which the crossover from the $\Omega_1$ magnon to $\Omega_2$ magnon takes place in the THz spectra. In the following section, we explain how the main features of these data can be modeled using LSWT.
\section{Theoretical Description}
\subsection{Linear spin wave theory}
The starting point for the LSWT calculations is the effective spin Hamiltonian,
\small
\begin{equation}
\begin{aligned}
H_S = &\sum_{<ij>} \left[J_1 {\bf S}_i\cdot {\bf S}_j + \Gamma (S_i^{\alpha_{ij}}S_j^{\beta_{ij}}+S_i^{\beta_{ij}}S_j^{\alpha_{ij}})+K S_i^{\gamma_{ij}}S_j^{\gamma_{ij}}\right]\\
&+\sum_{<ij>_3} J_3 {\bf S}_i\cdot {\bf S}_j-\mu_B g \sum_{i} {\bf H}\cdot {\bf S}_i
\end{aligned}
\label{eq:H}
\end{equation}
where $\left<ij\right>$ and $\left<ij\right>_3$ denote summation over nearest neighbor and third neighbor bonds, respectively~\cite{rau2014generic, winter2016challenges, yadav2016kitaev, wang2017theoretical, winter2017breakdown}. $K$ is the Kitaev interaction, $\Gamma$ is the symmetric off-diagonal term and $J_1$, $J_3$ are the nearest-neighbor and third neighbor Heisenberg couplings, respectively. The $\gamma_{ij}$ are bond labels ($x$, $y$, or $z$) as shown in Fig. 1 (a) and $\alpha_{ij}, \beta_{ij}$ are the two remaining directions for each bond. Note that the magnetic field is expressed in spin-space components, for example, $\bf{H}\ \|\ \bf{a}$ is expressed as ${\bf H}=H\,(1,1,-2)/\sqrt{6}$ and $\bf{H}\ \| \ \bf{b}$ is ${\bf H}=H\,(1,-1,0)/\sqrt{2}$.
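The spin-space conventions quoted above can be verified directly: the short check below (no free parameters beyond the stated definitions) confirms that the two in-plane directions are orthonormal and that the implied out-of-plane direction is the cubic $[111]$ axis, up to sign.

```python
import numpy as np

# In-plane axes in cubic spin-space components, as quoted in the text.
a_axis = np.array([1.0, 1.0, -2.0]) / np.sqrt(6.0)
b_axis = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)

# Orthonormality of the in-plane directions.
assert np.isclose(np.linalg.norm(a_axis), 1.0)
assert np.isclose(np.linalg.norm(b_axis), 1.0)
assert np.isclose(a_axis @ b_axis, 0.0)

# The out-of-plane direction is [111]/sqrt(3), up to sign.
out_of_plane = np.cross(a_axis, b_axis)
assert np.allclose(np.abs(out_of_plane), 1.0 / np.sqrt(3.0))
```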
The parameters in Eq.~\ref{eq:H} lead to a classical ground state with the observed zigzag antiferromagnetic order. We obtain the collective modes by expanding the Hamiltonian to quadratic order in the fluctuations about the ordered magnetic moment~\cite{holstein_primakoff_1940,toth_lake_2015,colpa_1978}. The spin wave theory is reliable when quantum (or thermal) fluctuations are small compared to the ordered moment, in which case the normal modes are non-interacting magnons. We obtain the theoretical THz absorption by computing the linear response of the magnons to an oscillating magnetic field (see Appendix B, section 1).
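Concretely, the expansion about the ordered moment uses the standard Holstein-Primakoff substitution (written here in a frame whose local $z$ axis points along the ordered moment at each site), truncated at leading order in the boson operators:

```latex
S_i^{z} = S - a_i^{\dagger} a_i, \qquad
S_i^{+} = \sqrt{2S - a_i^{\dagger} a_i}\; a_i \simeq \sqrt{2S}\, a_i, \qquad
S_i^{-} = \big(S_i^{+}\big)^{\dagger} \simeq \sqrt{2S}\, a_i^{\dagger}.
```

Substituting these into $H_S$ and keeping terms quadratic in $a_i, a_i^{\dagger}$ yields the LSWT Hamiltonian, which is then diagonalized by a Bogoliubov transformation~\cite{colpa_1978}.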
In the zigzag state, the unit cell of the honeycomb is enlarged to include four sites; as such there are four independent dispersing magnon modes. Of these, only two contribute to THz absorption, corresponding to the $\Omega_1$ and $\Omega_2$ modes discussed above. The two higher energy modes cannot be excited by the uniform in-plane THz field. This selection rule is exact, and is a result of a $Z_2$ symmetry of the zigzag state, whereby two pairs of spins within the unit cell may be exchanged (see Appendix B, section 3). Thus we do not associate the observed peaks at $L_3$ and $L_4$ with these modes.
To find appropriate values for the parameters in Eq. \ref{eq:H}, we began with the representative values chosen by Winter et al.\cite{winter2018probing, winter2017breakdown} to model INS data, and adjusted them to fit the energies of the modes seen by TDTS. We note that the parameters suggested by Ran et al.~\cite{ran2017spin}, obtained by fitting exclusively to INS spectra at the M point, yield spin wave energies at $\bf{Q}=0$ much larger than found experimentally. A linear spin wave calculation with the parameters of Winter et al. leads to an accidental degeneracy of modes $\Omega_1$ and $\Omega_2$. Refinement of these parameters is needed in order to account for our observation that these modes are split by 0.7 meV at $H$ = 0. In particular we find that fitting the spectra is accomplished by increasing the relative strength of the $\Gamma$ term, such that $\Gamma/K \sim-1$ instead of $\Gamma/K=-1/2$. A representative fit to the energies of modes $\Omega_1$ and $\Omega_2$ as a function of $H$ using the parameter set ($J$, $K$, $\Gamma$, $J_3$) = ( -0.35, -2.8, 2.4, 0.34) meV is shown as dots in Fig. 2 (f). We assume the same in-plane g-factor of 2.3 as used by Winter et al.~\cite{winter2018probing, winter2017breakdown}.
The calculated energies of the magnon modes are an excellent fit to the measured energies. Nevertheless the parameters we have chosen should not be viewed as a definitive set representing microscopic interactions. As we show below, there are sizable quantum corrections to spin wave theory, so the parameters entering the LSWT should be regarded as renormalized effective couplings. Such renormalized interactions may be dependent on magnetic field and the wave vector of the mode. In this context the main role of the LSWT analysis is to explain the origin of defining features of the spectra, such as spectral weight ratios, zero-field splittings, polarization selection rules, and trends with increasing applied magnetic field.
\subsection{Low-field crossover}
\begin{figure*}[htp]
\centering
\includegraphics[width=1.2\columnwidth]{Fig4.pdf}
\caption{Illustration of the evolution of the three possible zigzag states and active modes for the perpendicular case ${\bf H}\ \|\ \bf{b}$, ${\bf B_{THz}}\ \|\ \bf{a}$, where $\bf{a}$, $\bf{b}$ are axes of the z-stretched domain. Bottom row of honeycombs shows preferred spin orientations at H = 0 T, with ordering wave vectors defined with respect to the z-stretched domain. The ellipses show the projection of polarization of $\Omega_1$ (red) and $\Omega_2$ (blue) onto the $ac$ plane for each domain above and below $H_X$ = 1.5 T. Solid arrows indicate a mode that absorbs for $\bf{B_{THz}}\ \|\ \bf{a}$, dashed arrows indicate a mode that does not absorb. Upper row of honeycombs shows reoriented spins above $H_X$.}
\label{fig4}
\end{figure*}
\par In the following we show that the polarization selection rules predicted by LSWT account for the intricate mode-switching behavior observed at intermediate magnetic fields, shown in Fig. 2 (a-d). The crossover at H =1.5 T coincides with the disappearance of magnetic Bragg peaks corresponding to one of the three possible orientations of zigzag order on the honeycomb lattice~\cite{sears2017phase, banerjee2018excitations}. Previously, this effect was interpreted assuming that three degenerate zigzag orientations are present as domains~\cite{sears2017phase}. Within this picture, application of a magnetic field lifts the 3-fold degeneracy, driving energetically favored domains to grow at the expense of others. The possibility that the disappearance of magnetic Bragg peaks is related to a gradual reorientation of the ordered moments within domains was also discussed~\cite{banerjee2018excitations}.
We find that a picture of gradual domain growth~\cite{sears2017phase} or spin reorientation~\cite{banerjee2018excitations} is incompatible with the abrupt changes in the THz spectra that are observed when the applied magnetic field reaches 1.5 T. Instead, our explanation of the sudden changes at 1.5 T is based on the fact that in $\alpha$-RuCl$_3$ the $C_3$ symmetry of the honeycomb lattice is broken, which removes the degeneracy of the three different possible orientations of the zigzag magnetic order. The dependence of the relative energy of the three orientations on $\bf{H}$ will lead to a field-induced level crossing in which the wavevector of the zigzag order will abruptly switch. In the following, we refer to this phenomenon as a ``$\bf{Q}$-flop'' transition to distinguish it from the conventional spin-flop in which the spin direction changes but not the ordering wavevector. We believe that a $\bf{Q}$-flop transition is required to account for the abrupt changes in the THz spectra and the vanishing of certain elastic neutron peaks near 1.5 T~\cite{sears2017phase, banerjee2018excitations}. Below, we discuss in detail how the $\bf{Q}$-flop picture accounts for the unusual evolution of mode frequencies and spectral weights as a function of magnetic fields.
As mentioned previously, the breaking of $C_3$ occurs with a relatively small elongation of one of the three bond directions. We incorporate this distortion into the spin Hamiltonian by reducing the coupling constants $J$, $K$ and $\Gamma$ for the ``stretched" bond. Breaking $C_3$ symmetry in this manner lifts the degeneracy between the three possible zigzag wave vectors, $\bf{Q}$; the zigzag with $\bf{Q}$ parallel to the direction of its stretched bond (local monoclinic $\bf{b}$ axis) is energetically favored, the two other orientations of $\bf{Q}$ related by $\pm120^\circ$ rotation are degenerate and higher in energy. This zero-field splitting plays a key role in shaping the field dependence of the THz spectra.
Our scenario for the evolution of the spectra with magnetic field is illustrated in Fig. 4, which presents a table of the energetically preferred states and active modes for each domain, for values of $H$ below and above 1.5 T. We label each bond direction by $x$, $y$, or $z$, depending on the orientation of its Kitaev interaction. The hexagons with $x$, $y$, and $z$-stretched bonds shown in the bottom row of the table illustrate the spin order of the three domains at $H = 0$, where the spins are projected onto the $ab$ plane. Our calculations show that application of a magnetic field favors zigzag orientations for which $|\bf{Q}\cdot\bf{H}|$ is largest. At a crossover field, $H_X$, the $\bf{Q}\cdot\bf{H}$ energy gain exceeds the zero-field splitting. For $H>H_X$ the zigzag wave vector in all domains aligns with the direction selected by the magnetic field, while structural domains remain intact. The field-induced crossover is illustrated in Fig. 4 for the case where the applied magnetic field favors the domain shown in the left-hand column, in which the $z$ bonds are stretched. For $H>H_X$ the zigzag wave vector of the $y$ and $x$ domains will reorient to the $\bf{Q}$ of the $z$-stretched domain. This process is analogous to the usual spin-flop transition in antiferromagnets, with the distinction that here the rotation involves both the direction of the moments and wave vector of the magnetic order.
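The level-crossing logic can be illustrated with a minimal two-orientation toy model. All numbers below are assumed round values chosen only to reproduce a crossing near 1.5 T; they are not fitted energies: one zigzag orientation has $\bf{Q}$ along $\bf{H}$ but pays a zero-field splitting, while the orientation favored at $H=0$ has $\bf{Q}$ rotated by $120^\circ$ from $\bf{H}$.

```python
import numpy as np

# Minimal Q-flop toy model (illustrative numbers, not fitted to the data).
# Orientation 0: Q parallel to H, zero-field energy penalty delta[0].
# Orientation 1: favored at H = 0, Q rotated by 120 degrees from H.
delta = np.array([0.10, 0.00])   # zero-field splittings (meV, assumed)
cos2 = np.array([1.00, 0.25])    # (Qhat . Hhat)^2 for the two orientations
kappa = 0.10 / (0.75 * 1.5**2)   # field-energy scale chosen to put H_X near 1.5 T

def favored_orientation(H):
    """Index of the lowest-energy zigzag orientation at field H (in T)."""
    return int(np.argmin(delta - kappa * cos2 * H**2))

H_grid = np.linspace(0.0, 3.0, 3001)
H_X = next(H for H in H_grid if favored_orientation(H) == 0)
print(f"Q-flop crossover at H_X ~ {H_X:.2f} T")
```

The crossover is sharp in this classical picture; in the experiment it is broadened, which we attribute to structural disorder.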
The $\bf{Q}$-flop crossover described above accounts naturally for the complex evolution of the THz absorption with applied field, when we take into account the polarization and relative spectral weight of $\Omega_1$ and $\Omega_2$. As illustrated by the arrows inside the ellipses in Fig. 4, for the preferred zigzag order of a z-stretched domain (${\bf Q = Y}$), $\Omega_1$ is excited by $\bf{B_{THz}}\ \| \ \bf{b}$ and $\Omega_2$ by $\bf{B_{THz}}\ \| \ \bf{a}$. The polarization of these modes reflects an approximate symmetry with respect to exchange of \textit{x} and \textit{y} spin coordinates within the zigzag state. This symmetry is exact at zero field, and is explained in further detail in Appendix B, section 3. Furthermore, our LSWT calculations predict that the spectral weight of $\Omega_1$ is approximately a factor of six larger than that of $\Omega_2$ (as indicated by the eccentricity of the ellipses). Thus, LSWT predicts strong optical anisotropy for a single structural domain. The fact that the measured THz absorption is nearly isotropic in plane follows from the presence of the three structural domains with comparable, though unequal, population.
The state of the system for $\bf H\ <\ H_X$ is indicated by the lower row of ellipses in Fig. 4. In this regime, for all directions of $\bf{B_{THz}}$ the spectrum is dominated by the strong $\Omega_1$ mode at 2.6 meV, although $\Omega_2$ at 3.3 meV appears faintly as well. The upper row of ellipses shows the reorientation of the polarization that accompanies the $\bf{Q}$-flop crossover at $H_X$. With all the ellipses now aligned with the applied field, there is suddenly a strong dependence on the relative orientation of $\bf{B_{THz}}$ and $\bf{H}$; $\bf{B_{THz}}\ \|\ \bf{H}$ couples only to $\Omega_1$ while $\bf{B_{THz}} \perp \bf{H}$ couples only to $\Omega_2$. This results in the mode-switching from $\Omega_1$ to $\Omega_2$ that is observed only in the $\bf{B_{THz}}\perp\bf{H}$ channel. Figs. 2(e) and (f) show the evolution of the THz absorption spectra calculated with LSWT on the basis of the above model, which accurately reproduces the complex field and polarization dependent features of the experimental data.
In Fig. 5, we show that the multi-domain LSWT theory described above captures the curious deep minimum in spectral weight for $\bf{B_{THz}}\perp\bf{H}$ at 1.5 T (expressed as $\chi_\perp(0)$). The upper theoretical curve is the classical result, while the lower curve includes zero-point fluctuations of the spin 1/2 moments. The sudden reduction in spectral weight for $\bf{B_{THz}}\perp\bf{H}$ occurs when the applied field aligns the $\textbf{Q}$ of each domain, such that at $H= H_X$, $\bf{B_{THz}}$ couples only to the weaker $\Omega_2$ mode. Although the crossover predicted by the theory is sharp when compared with experiment, broadening of the $\bf{Q}$-flop crossover is expected in the presence of structural disorder. We note that our scenario is consistent with the increase of the M point spin-wave intensity at 2 T observed in INS measurements~\cite{banerjee2018excitations}.
\begin{figure}[htp]
\centering
\includegraphics[width=0.8\columnwidth]{Fig5.pdf}
\caption{Experimental and theoretical $\chi(0)$ demonstrating selection of the z-bond stretched ($\bf{Q= Y}$ wave vector) order at crossover field of 1.5 T for the $\bf{H}\ \| \ \bf{b}\perp \bf{B_{THz}}$ configuration. Blue: Susceptibility of the classical spin configuration. Green: Calculation of susceptibility with corrections. Magenta: Experimental values.}
\label{fig5}
\end{figure}
\subsection{Two-magnon contribution}
\par Finally, we discuss the features $L_3$ and $L_4$ that are observed in the $\bf{B_{THz}}\perp\bf{H}$ channel in the photon energy range $\sim 4-6$ meV (Figs. 2 (b) and (d)). These modes cannot be identified as single magnon excitations because of the exact $Z_2$ symmetry discussed above. However, LSWT predicts absorption by a continuum of two-magnon states in precisely this energy range (Fig. 2(f)). A further prediction is that the two-magnon absorption takes place selectively for $\bf{B_{THz}}$ parallel to the ordered moment. As shown in Fig. 4, for $H > H_X$ the moments have flopped to an orientation that is nearly perpendicular to $\bf{H}$. Thus the two-magnon interpretation of $L_3$ and $L_4$ is consistent with the selection rule seen in the data, as these features appear for $\bf{B_{THz}} \perp\ \bf{H}$ and are unobservable for $\bf{B_{THz}}\ \|\ \bf{H}$.
\par Although the selection rules show unambiguously that $L_3$ and $L_4$ are two-magnon excitations, the details of the calculated field dependence (Fig. 2(f)) differ from the data. This is in contrast to the excellent agreement in the case of the single-magnon modes $\Omega_1$ and $\Omega_2$. The most likely origin of this discrepancy is that while the single magnon modes are measured at $\bf{Q}$ = 0, the two-magnon absorption depends on the spin wave dispersion over the entire Brillouin zone. While our LSWT parameters reproduce the local minima at the M-points seen by INS, they do not reproduce the local minimum observed also at the $\Gamma$-point~\cite{banerjee2018excitations} (see Appendix B, section 5). Indeed, none of the theoretical models of this system studied to date reproduces this feature of the INS data~\cite{winter2018probing, suzuki2018effective}. However, we find that a $\Gamma$-point minimum appears within LSWT when further interactions are added, for example second nearest-neighbor ferromagnetic coupling. Finding a spin Hamiltonian that describes all aspects of the single-magnon, two-magnon, and INS spectra is a goal for future research.
\section{Summary and conclusion}
In summary, we used polarized time-domain THz spectroscopy to track the frequencies and spectral weights of optically accessible magnetic excitations in $\alpha$-RuCl$_3$ approaching the $\sim$7.5 T transition to a spin disordered state. The THz spectra were determined for parallel and perpendicular orientation of the static and THz magnetic fields. We observed two sharp resonances at 2.5 and 3.2 meV and broader features in the range 4-6 meV that appear only at applied fields above approximately 4 T. In the theoretical section of the paper, we showed that linear spin wave theory can account for the totality of the data, \textit{i.e.}, the field dependence of spectral weights, mode frequencies, and polarization selection rules. The two lower frequency peaks are attributed to zero-wavevector magnons and the higher energy features that appear at approximately 4 T are consistent with a continuum of two-magnon excitations.
\par In our analysis, we focused on the unusual field dependence observed with $\bf{H}$ perpendicular to $\bf{B_{THz}}$, where an apparent jump in spin wave frequency from 2.5 to 3.2 meV and a deep, narrow minimum in spectral weight occur at an applied field of 1.5 T. We showed these phenomena arise from the combination of two factors. First, the $C_3$ symmetry of a perfect honeycomb is broken in the $\alpha$-RuCl$_3$ lattice, which gives rise to the presence of three structural domains. Second, the frequencies of the two optically active spin waves are split even in zero applied magnetic field; the degeneracy of these modes seen in previous spin-wave calculations \cite{winter2018probing, winter2017breakdown} is an artifact of the parameters used in those models. Based on these factors, we conclude that the apparent jump in frequency and spectral weight minimum arise from a $\bf{Q}$-flop crossover at 1.5 T, where the external field overcomes the anisotropy of the crystal to select a preferred ordering wave vector of the zigzag state. Although the mode jump was previously attributed to the Dzyaloshinskii-Moriya (DM) interaction~\cite{ponomaryov2017unconventional}, or to a sudden splitting of modes caused by the applied magnetic field~\cite{shi2018field}, we believe that our model based on zero-field splitting and field-induced ground state energy crossing is uniquely able to account for the totality of the data. The constraints on the effective spin Hamiltonian parameters that emerge from our analysis will aid in understanding the phase diagram of $\alpha$-RuCl$_3$ and the potential for existence of spin liquid ground states in this fascinating compound.
\section{Acknowledgements} We thank N. Tamura and C. V. Stan for support at the Advanced Light Source and E. Angelino for help processing the Laue microdiffraction data. We thank T. Scaffidi for useful discussions. Terahertz spectroscopy was performed at Lawrence Berkeley National Laboratory under the Spin Physics program (KC2206) supported by the US DOE, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231. Laue microdiffraction measurements were carried out at beam line 12.3.2 at the Advanced Light Source, which is a Department of Energy User Facility under Contract No. DE-AC02-05CH11231. Device fabrication and dc conductivity measurement were done at Stanford University under the Spin Physics program supported by the US DOE, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-76SF00515. A.L. and L.W. were supported by the Gordon and Betty Moore Foundation's EPiQS Initiative through the Grant No. GBMF4537 to J.O. at U.C. Berkeley. The work at ORNL was supported by the US DOE, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division (J.Q.Y. and C.B.), and Division of Scientific User Facilities (A.B. and S.E.N.) under contract number DE-AC05-00OR22725. P. L. K. and D. M. acknowledge support from the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF44. E.A. acknowledges support from the ERC synergy grant UQUAM. D.B.'s participation in this research was facilitated in part by a National Physical Science Consortium Fellowship and by stipend support from the National Institute of Standards and Technology.
\par L.W., A.L. and E.E.A contributed equally to this work.
\section{Introduction}
Investigation of evacuation problems dates back many years~\cite{hamacher2002,mamada2002}.
The goal is to evacuate all the evacuees to some sinks to optimize a certain objective function.
The problem can be modeled by a dynamic flow network whose vertices represent the places where
the evacuees are initially located and the edges represent possible evacuation routes.
Associated with each edge is the transit time across the edge and its capacity
in terms of the number of people who can traverse it per unit time~\cite{hamacher2002}.
A {\em completion time} {\em $k$-sink}, a.k.a. {\em min-max} {\em $k$-sink},
is a set of $k$ sinks that minimizes
the time until every evacuee evacuates to a sink.
If the edge capacities are uniform,
it is straightforward to compute a completion time 1-sink in a path network in linear time,
as shown by Cheng {\em et al.}~\cite{cheng2013} and Higashikawa {\em et al.}~\cite{higashikawa2015c}.
Mamada {\em et al.}~\cite{mamada2006} solved this problem for a dynamic tree network
with non-uniform edge capacities in $O(n\log^2 n)$ time.
Higashikawa {\em et al.} proposed an $O(n\log n)$ algorithm for a tree network with
uniform edge capacities~\cite{higashikawa2014b}.
The concept of {\em regret} was introduced by Kouvelis and Yu~\cite{kouvelis1997},
to model the situations where optimization is required when the exact values (such
as the number of evacuees at the vertices) are unknown.
Their model only assumes that the upper and lower bounds on those values are known.
The objective is to find a solution which is as good as any other solution in the worst case,
where the actual values are the most unfavorable.
Motivated by the 2011 earthquake in Japan,
Cheng {\em et al.}~\cite{cheng2013} applied minmax regret optimization to
the completion time 1-sink problem to model evacuation whose objective function is the completion time,
and proposed an $O(n\log^2 n)$ time algorithm for dynamic flow path networks with uniform
edge capacities.
There has been a flurry of research activities on this problem since then.
The initial result was soon improved to $O(n\log n)$,
independently by Higashikawa {\em et al.}~\cite{higashikawa2015c} and
Wang~\cite{wang2014},
and further to $O(n)$ by Bhattacharya and Kameda \cite{bhattacharya2015b}.
Li {\em et al.}~\cite{li2016b} propose an $O(n^3\log n)$ time algorithm for
the minmax regret completion time 2-sink problem on dynamic flow path networks.
For the $k$-sink version of the problem,
Arumugam {\em et al.}~\cite{arumugam2014} give two algorithms,
which run in $O(kn^3\log n)$ and $O(kn^2(\log n)^k)$ time,
respectively.
As for dynamic flow tree networks with uniform edge capacities,
Higashikawa {\em et al.}~\cite{higashikawa2014b} propose an $O(n^2\log^2 n)$ time algorithm for finding
the minmax regret 1-sink.
An $O(n^3 \log n)$ time algorithm for dynamic flow cycle networks with uniform edge capacities
is reported by Xu and Li \cite{xu2015a}.
The objective function we adopt in this paper is the {\em aggregate evacuation time,}
i.e., the sum of the evacuation time of every evacuee,
a.k.a. {\em minsum}~\cite{higashikawa2017a}.
It is equivalent to minimizing the mean evacuation time,
and is motivated by the desire to minimize the transportation cost of evacuation
and the total amount of psychological duress suffered by the evacuees, etc.
It is more difficult than the {\em completion time} (a.k.a. {\em minmax}) variety because
the objective cost function is not unimodal.
It is shown by Benkoczi {\em et al.}~\cite{benkoczi2018a} that an aggregate time $k$-sink can be found
in $O(kn\log^3 n)$ time if the edge capacities are uniform.
Our aim in this paper is to determine an aggregate time sink that minimizes regret~\cite{averbakh1997}.
The main contribution of this paper is to improve the time complexity
from $O(n^3)$ in \cite{averbakh1997} to $O(n^2\log n)$.
We need to consider $O(n^2)$ scenarios,
which are called {\em pseudo-bipartite} scenarios~\cite{higashikawa2017a}.
We make use of two novel ideas.
One is used in Sec.~\ref{sec:sinks} to compute an aggregate time sink under
each of the $O(n^2)$ scenarios in amortized $O(\log n)$ time per sink.
The other is used in Sec.~\ref{sec:regret} to compute the upper envelope of $O(n^2)$
regret functions (with $O(n^3)$ linear segments in total) in $O(n^2\log n)$ time.
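As background for the envelope step, the classical primitive is the upper envelope of straight lines, computable in $O(k\log k)$ time by a convex-hull-trick scan after sorting by slope. The sketch below shows only that generic primitive; achieving the stated bound for $O(n^2)$ piecewise-linear regret functions requires the additional ideas of Sec.~\ref{sec:regret}.

```python
def upper_envelope(lines):
    """Upper envelope of lines y = m*x + b, given as (m, b) pairs.
    Returns the envelope lines ordered by increasing slope."""
    def dominated(l1, l2, l3):
        # True if l2 lies nowhere strictly above max(l1, l3);
        # requires slopes m1 < m2 < m3.
        (m1, b1), (m2, b2), (m3, b3) = l1, l2, l3
        return (b1 - b3) * (m2 - m1) <= (b1 - b2) * (m3 - m1)

    hull = []
    for line in sorted(set(lines)):          # by slope, then intercept
        if hull and hull[-1][0] == line[0]:
            hull.pop()                       # same slope, lower intercept
        while len(hull) >= 2 and dominated(hull[-2], hull[-1], line):
            hull.pop()
        hull.append(line)
    return hull

# Example: y = -x, y = 2, and y = x all appear on the envelope,
# while the parallel but lower line y = x - 100 is discarded.
print(upper_envelope([(-1, 0), (0, 2), (1, 0), (1, -100)]))
# -> [(-1, 0), (0, 2), (1, 0)]
```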
In the next section,
we define the terms that are used throughout this paper.
We also review some known facts which are relevant to later discussions.
Sec.~\ref{sec:clusters} introduces preprocessing which makes later operations
more efficient.
In Sec.~\ref{sec:sinks}
we show how to compute an aggregate time sink under scenarios that matter.
We then compute in Sec.~\ref{sec:regret} the optimum sink that minimizes the max regret.
\section{Preliminaries}
\subsection{Notations/definitions}\label{sec:defs}
Let $P(V,E)$ denote a given path network,
where we assume that the vertices in its vertex set $V=\{v_1, v_2, \ldots, v_{n}\}$
are arranged from left to right horizontally.
For $i=1,\ldots, n-1$, there is an edge $e_i=(v_i,v_{i+1})\in E$,
whose length is denoted by $d(e_i)$.
We write $p\in P$ for any point $p$ that is either at a vertex or on an edge of $P$.
For two points $a,b \in P$, we write $a \prec b$ or $b \succ a$ if $a$ lies to the left of $b$.
The distance between them is denoted by $d(a,b)$.
If $a$ and/or $b$ lies on an edge, the distance is prorated.
The capacity of each edge, i.e., the upper limit on its flow rate, is $c$ (persons per unit time),
and the transit time per unit distance is denoted by $\tau$.
In general, $w(v_i)\in \mathbb{Z}_+$ (the set of positive integers) refers to the weight of vertex $v_i$,
which represents the number of evacuees initially located at $v_i$.
Under {\em scenario $s$},
vertex $v_i$ has a weight $w^s(v_i)$ such that $w^s(v_i) \in [\underline{w}(v_i), \overline{w}(v_i)]$,
where $\underline{w}(v_i)$ (resp. $\overline{w}(v_i)$) is the lower (resp. upper) limit on $w(v_i)$,
satisfying $0< \underline{w}(v_i) \le \overline{w}(v_i)$.
We define the Cartesian product
\[
{\cal S}\triangleq \prod_{i=1}^{n} [\underline{w}(v_i), \overline{w}(v_i)].
\]
Our objective function is the sum of the evacuation times of all the individual evacuees to point $x$.
\medskip\noindent
More definitions:
\begin{eqnarray}
\Phi^s_L(x) &\triangleq& \mbox{the cost at~} x \mbox{~for the evacuees from the vertices on~}
P[v_1,v_i],\nonumber\\
&&\mbox{where~} v_i\prec x \preceq v_{i+1} \nonumber\\
\Phi^s_R(x) &\triangleq& \mbox{the cost at~} x \mbox{~for the evacuees from the vertices on~} P[v_{i+1},v_n],\nonumber\\
&&\mbox{where~}v_i\preceq x \prec v_{i+1} \nonumber\\
\Phi^s(x) &\triangleq& \Phi^s_L(x) + \Phi^s_R(x) \nonumber\\
\mu^s &\triangleq& \mbox{\rm argmin}_x \Phi^s(x): \mbox{minsum sink under~} s \nonumber
\end{eqnarray}
\begin{eqnarray}
R^s(x) &\triangleq& \Phi^s(x) - \Phi^s(\mu^s) \mbox{{\em :~regret} at~} x \mbox{~under~} s \nonumber\\
\text{(We say}&&\hspace{-3mm}\text{that scenario $s'$ {\em dominates} scenario $s$ at point $x$ if
$R^{s'}(x) \geq R^s(x)$ holds.)} \nonumber\\
R_{max}(x) &\triangleq& \max_{s\in \cal S} R^s(x) \mbox{:~max regret at $x$}\nonumber\\
\overline{s}_i &\triangleq& \mbox{the {\em bipartite} scenario under which~}
w(v_j) = \overline{w}(v_j) \mbox{~for all~} j\leq i
\mbox{~and~} \nonumber\\
&& w(v_j) = \underline{w}(v_j) \mbox{~for all~} j>i, \mbox{~where~} 0\le i \le n \nonumber\\
\underline{s}_i &\triangleq& \mbox{the bipartite scenario under which~} w(v_j) = \underline{w}(v_j) \mbox{~for all~} j\leq i
\mbox{~and~} \nonumber\\
&& w(v_j) = \overline{w}(v_j) \mbox{~for all~} j>i \nonumber\\
s_0 &\triangleq& \overline{s}_0 = \underline{s}_n, s_M \triangleq \overline{s}_n = \underline{s}_0\nonumber\\
W^s[v_i] &\triangleq& \sum_{k=1}^i w^s(v_k) \nonumber
\end{eqnarray}
Clearly, we can precompute $\underline{W}[v_i] \triangleq W^{s_0}[v_i]~(=\sum_{k=1}^i \underline{w}(v_k))$
and $\overline{W}[v_i] \triangleq W^{s_M}[v_i]~(= \sum_{k=1}^i \overline{w}(v_k))$
in $O(n)$ time for all $i$, $1\leq i \leq n$.
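As a minimal illustration (in Python, with the weight bounds given as 0-indexed lists; the function name is ours), these prefix weight sums are plain running sums:

```python
# Minimal sketch: precompute the prefix weight sums under s_0 and s_M
# in O(n) time; w_low and w_high are the lists of weight bounds.
from itertools import accumulate

def prefix_weight_sums(w_low, w_high):
    """Return (W_low, W_high) with W_low[i] = w_low[0] + ... + w_low[i]."""
    return list(accumulate(w_low)), list(accumulate(w_high))

W_low, W_high = prefix_weight_sums([1, 2, 1], [3, 4, 2])
# W_low == [1, 3, 4], W_high == [3, 7, 9]
```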
Our model assumes that the evacuees at all the vertices start evacuation at the same time $t=0$,
at the rate limited by the capacity ($c$ persons per unit time) of the outgoing edge.
It also assumes that all the evacuees at a non-sink vertex who were initially there
or who arrive there later evacuate in the same direction (either to the left or to the right),
i.e., the evacuee flow is {\em confluent}.
We sometimes use the term ``cost'' to refer to the aggregate evacuation time
of a group of evacuees to a certain destination point.
Our overall approach is as follows.
\begin{enumerate}
\item
Compute $\{\mu^s \mid s\in \overline{\cal S}^*\}$,
where $\overline{\cal S}^*$ is defined in Sec.~\ref{sec:backNforth} and $|\overline{\cal S}^*|=O(n^2)$.
This step takes $O(n^2\log n)$ time.
\item
Compute $R_{\it max}(x) = \max \{R^s(x) \mid s\in \overline{\cal S}^*\}$.
This step takes $O(n^2\log n)$ time.
\item
Find point $x=\mu^*$ that minimizes $R_{\it max}(x)$.
This step takes $O(n^2)$ time.
\end{enumerate}
\subsection{Clusters}
Given a point $x\in P$, which is not the sink,
the evacuee flow at $x$ toward the sink is a function of time,
in general, alternating between no flow and flow at the rate of
$c$ (persons per unit time),
which is the capacity of each edge.
A maximal group of vertices that provide uninterrupted flow without any gap forms a {\em cluster}.
Such a cluster observed on edge $e_{k-1}=(v_{k-1},v_k)$, arriving from the right
via $v_k$, is called an {\em ${\cal R}^s$-cluster} {\em with respect to} (any point on) $e_{k-1}$,
including $v_{k-1}$.
An {\em ${\cal L}^s$-cluster} {\em with respect to} $e_j=(v_j,v_{j+1})$, including $v_{j+1}$,
is similarly defined for evacuees arriving from the left if the sink lies to the right of $v_j$.
If a cluster $C$ contains a vertex $v$, the cluster is said to {\em carry} the evacuees from $v$.
The first vertex of a cluster is called its {\em front vertex}.
\begin{itemize}
\item
${\cal C}^s_{R,k}$: sequence of all ${\cal R}^s$-clusters w.r.t. $e_{k-1}$ ($k=2, \ldots, n$).
\item
$C^s_{R,k}(v_i)\triangleq$ ${\cal R}^s$-cluster w.r.t. $e_{k-1}$ that contains vertex $v_i ~(i\ge k)$.
\item
${\cal C}^s_{L,k}$: sequence of all ${\cal L}^s$-clusters w.r.t. $e_k$ ($k=1, \ldots, n-1$).
\item
$C^s_{L,k}(v_i)\triangleq$ ${\cal L}^s$-cluster w.r.t. $e_k$ that contains vertex $v_i ~(i\le k)$.
\end{itemize}
Thus $C^s_{R,k}(v_k)$ is the first cluster of ${\cal C}^s_{R,k}$.
The total weight of the vertices contained in cluster $C$ is denoted by $\lambda(C)$.
If $v_h$ and $v_i$ $(v_h\prec v_i)$ are the front vertices of two adjacent clusters in ${\cal C}^s_{R,k}$,
then we have
\begin{equation}
d(v_h,v_i)\tau > \lambda(C^s_{R,k}(v_h))/c.\label{eqn:eqn1}
\end{equation}
Intuitively,
this means that when the first evacuee from $v_i$ arrives at $v_h$,
all evacuees carried by $C^s_{R,k}(v_h)$ have already left $v_h$.
For $v_{k-1}\preceq x \prec v_k$,
let us analyze the cost of $C^s_{R,k}(v_i)$ to reach $x$ from right,
where $v_i\succeq v_k$.
For the $\lambda(C^s_{R,k}(v_i))$ evacuees to move to $x$,
let us divide the time required into two parts.
The first part, called the {\em intra cost}~\cite{benkoczi2018a},
is the weighted waiting time before departure from the front vertex of $C^s_{R,k}(v_i)$,
and can be expressed as
\begin{equation}\label{eqn:intraCost1}
I(C^s_{R,k}(v_i)) \triangleq \{\lambda(C^s_{R,k}(v_i))\}^2/2c.
\end{equation}
Intuitively, (\ref{eqn:intraCost1}) can be interpreted as follows.
As far as the waiting and travel time is concerned,
we may assume that all the $\lambda(C^s_{R,k}(v_i))$ evacuees were at the front vertex of
$C^s_{R,k}(v_i)$ to start with.
Since evacuees leave the front vertex at the rate of $c$,
the mean wait time for an evacuee is $\lambda(C^s_{R,k}(v_i))/2c$,
and the total for all the evacuees carried by $C^s_{R,k}(v_i)$
is $\lambda(C^s_{R,k}(v_i))/2c\times\lambda(C^s_{R,k}(v_i))=\{\lambda(C^s_{R,k}(v_i))\}^2/2c$.
Note that the intra cost does not depend on $x$,
as long as $v_{k-1}\preceq x \prec v_k$.
To be exact, the ceiling function must be applied to (\ref{eqn:intraCost1}),
but we omit it for simplicity,
and {\em adopt} (\ref{eqn:intraCost1}) as our intra cost~\cite{cheng2013}.
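A toy numeric check of this approximation (the function name is ours; we assume all $\lambda(C)$ evacuees queue at the front vertex and depart $c$ per unit time, as in the derivation above):

```python
# Toy check of the intra-cost approximation lambda^2/(2c): if lam
# evacuees queue at the front vertex and depart c per unit time,
# evacuee j leaves at time j // c, so the exact total wait is the sum
# below; the text adopts lam^2/(2c), ignoring the ceiling.
def exact_total_wait(lam, c):
    return sum(j // c for j in range(lam))

lam, c = 100, 4
approx = lam * lam / (2 * c)     # 1250.0
# exact_total_wait(100, 4) == 1200: the two agree up to O(lam).
```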
The second part, called the {\em extra cost}~\cite{benkoczi2018a},
is the total transit time from the front vertex of $C^s_{R,k}(v_i)$ to $x$ for all the evacuees carried by $C^s_{R,k}(v_i)$,
and can be expressed as
\begin{equation}\label{eqn:extraCost1}
E(C^s_{R,k}(v_i)) \triangleq d(x,v_j)\lambda(C^s_{R,k}(v_i))\tau,
\end{equation}
where $v_j~(\succ x)$ is the front vertex of $C^s_{R,k}(v_i)$.
For the evacuees carried by $C^s_{L,k}(v_i)$ moving to the right,
we similarly define its intra and extra costs for $k=1, \ldots, n-1$,
where $v_i\preceq v_k\prec x \preceq v_{k+1}$.
For $v_{k-1}\preceq x \prec v_k$,
we now introduce a cost function
\begin{eqnarray}
\Phi^s_{R,k}(x) & \triangleq &
\sum_{C\in {\cal C}^s_{R,k}} d(x,v_i)\lambda(C)\tau
+ \sum_{C\in {\cal C}^s_{R,k}} \lambda(C)^2/2c. \label{eqn:right1}
\end{eqnarray}
where $v_i$ denotes the front vertex of cluster $C$, here and in (\ref{eqn:left1}) below.
For $x$ ($v_k\prec x \preceq v_{k+1}$), we likewise define
\begin{eqnarray}
\Phi^s_{L,k}(x) & \triangleq& \sum_{C\in {\cal C}^s_{L,k}} d(v_i,x)\lambda(C)\tau
+ \sum_{C\in {\cal C}^s_{L,k}} \lambda(C)^2/2c. \label{eqn:left1}
\end{eqnarray}
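A direct evaluation of (\ref{eqn:right1}) can be sketched as follows, assuming the clusters are given as (front-vertex position, $\lambda(C)$) pairs and $x$ lies to the left of every front vertex; \texttt{phi\_R} is an illustrative helper name, not from the paper:

```python
# Hedged sketch of the cluster-sum cost: clusters given as
# (front-vertex position, lambda) pairs; x lies left of every front.
def phi_R(x, clusters, tau, c):
    extra = sum((pos - x) * lam * tau for pos, lam in clusters)  # transit
    intra = sum(lam * lam / (2 * c) for _, lam in clusters)      # waiting
    return extra + intra

# Two clusters with fronts at 4 and 8 and weights 2 and 12:
# phi_R(0.0, [(4.0, 2), (8.0, 12)], tau=1.0, c=1) == 178.0
```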
When $v_k$ is clear from the context, or when there is no need to refer to it,
we may write $\Phi^s_R(x)$ (resp. $\Phi^s_L(x)$) to mean $\Phi^s_{R,k}(x)$ (resp. $\Phi^s_{L,k}(x)$).
The aggregate of the evacuation times to $x$ of all evacuees is given by
\begin{equation}\label{eqn:Phisx}
\Phi^s(x)=\left\{\begin{array}{lll}
&\Phi^s_{L,k}(x) + \Phi^s_{R,k+1}(x) &\text{~for } v_k\prec x \prec v_{k+1}\\
&\Phi^s_{L,k-1}(x) + \Phi^s_{R,k+1}(x) &\text{~for } x= v_k.
\end{array}
\right.
\end{equation}
A point $x$ that minimizes $\Phi^s(x)$ is called an {\em aggregate time sink},
a.k.a. {\em minsum sink}, under $s$.
An aggregate time sink shares the following property of a {\em median}~\cite{kariv1979b}.
\begin{lemma}{\rm \cite{higashikawa2015a}}\label{lem:atVertex}
Under any scenario there is an aggregate time sink at a vertex.
\end{lemma}
\begin{example}\label{ex:ex1}
Consider an example path network in Fig.~\ref{fig:ex1},
where a circle represents a vertex whose weight under some scenario $s$ is shown in it,
and the length of each edge is shown above it.
The capacity of each edge is assumed to be $c=1$.
\begin{figure}[ht]
\centering
\includegraphics[height=8mm]{figs/ex3.pdf}
\caption{An example path network.
}
\label{fig:ex1}
\end{figure}
Let $x$ denote the distance from $v_1$, $d(v_1,x)$,
as well as its position.
Using (\ref{eqn:Phisx}),
we obtain
\begin{eqnarray}\label{eqn:costFunc1}
\Phi^s(x) = 8x + 2(4-x) + 12(8-x) + 32+ 74 = 210 - 6x,
\end{eqnarray}
for $v_1\prec x \prec v_2$,
for example.
Fig.~\ref{fig:Phiofx2} plots $\Phi^s(x)$.
\begin{figure}[ht]
\centering
\includegraphics[height=42mm]{figs/Phiofx2.pdf}
\caption{Graph for $\Phi^s(x)$.
}
\label{fig:Phiofx2}
\end{figure}
\qed
\end{example}
The above example illustrates the fact that $\Phi^s(x)$ is piecewise linear with discontinuities at the
vertices.
Observe that there is a negative spike at each vertex
because its intra and extra cost contribution is absent,
and that the minsum sink $\mu^s$ is at a vertex,
as stated in Lemma~\ref{lem:atVertex}.
\subsection{What is known}\label{sec:known}
\begin{lemma}{\rm \cite{higashikawa2017a}}\label{lem:minsumsink1}
For any given scenario $s\in {\cal S}$,
\begin{enumerate}
\item[(a)]
We can compute $\{\Phi^s_L(v_i), \Phi^s_R(v_i)\mid i =1,\ldots,n\}$ in $O(n)$ time.
\item[(b)]
We can compute $\mu^s$ and $\Phi^s(\mu^s)$ in $O(n)$ time.
\end{enumerate}
\end{lemma}
Let $v$ be a vertex and $x$ be a point such that $v \prec x$.
We define a function
\begin{equation}\label{eqn:gamma}
\Gamma^s(x,v) = \Phi^s(x) - \Phi^s(v).
\end{equation}
We typically have $v=\mu^s$ for some scenario $s$ in mind,
since the regret function can be expressed as $R^s(x) = \Phi^s(x) - \Phi^s(\mu^s)=\Gamma^s(x,\mu^s)$.
\begin{lemma}{\rm \cite{higashikawa2017a}}\label{lem:bipartite}
For a fixed pair $x,v \in P$,
consider $\Gamma^s(x,v)$ as a function of $s$.
Any local maximum of $\Gamma^s(x,v)$ occurs under scenario $s$
under which an adjacent pair of clusters touches each other,
forming a larger cluster.
\end{lemma}
A scenario $s$ under which all vertices on the left (resp. right) of a vertex have
the max (resp. min) weights is called an {\em L-pseudo-bipartite} scenario~\cite{higashikawa2017a}.
The vertex $v_b$, where $1\le b\le n$,
that may take an intermediate weight $w^s(v_b) \in [\underline{w}(v_b), \overline{w}(v_b)]$,
is called the {\em boundary vertex},
a.k.a. {\em intermediate vertex}~\cite{higashikawa2017a}.
Let $b(s)$ denote the index of the boundary vertex under scenario $s$.
We also consider the scenarios under which $w(v_b)=\underline{w}(v_b)$ and $w(v_b)=\overline{w}(v_b)$
as special pseudo-bipartite scenarios;
in the former (resp. latter) case,
we may take either $b(s)= {b-1}$ or $b(s)=b$ (resp. $b(s)=b$ or $b(s)=b+1$).
The vertices that have the maximum (resp. minimum) weights comprise
the {\em max-weighted part} (resp. {\em min-weighted part}).
We define an {\em R-pseudo-bipartite} scenario symmetrically
with the max-weighted part and the min-weighted part reversed.
As $w(v_b)$ increases from $\underline{w}(v_b)$ to $\overline{w}(v_b)$,
clusters may merge.
Weight $w^s(v_b)$ is said to be a {\em critical weight}
if two clusters with respect to {\em any vertex}
merge as $w(v_b)$ increases to $w^s(v_b)$, yielding scenario $s$.
Let ${\cal S}^*_L$ (resp. ${\cal S}^*_R$) denote the set of the L- (resp. R-)pseudo-bipartite scenarios
that correspond to the critical weights.
Thus each scenario in ${\cal S}^*_L$ (resp. ${\cal S}^*_R$) can be specified by $v_b$ and $w(v_b)$.
Let ${\cal S}^* \triangleq{\cal S}^*_L \cup {\cal S}^*_R$.
\begin{lemma}{\rm \cite{higashikawa2017a}}\label{lem:minsumsink2}
\begin{enumerate}
\item[(a)]
Each scenario in ${\cal S}$ is dominated at every point by a scenario in ${\cal S}^*$.
\item[(b)]
\label{lem:SstarSize}
$|{\cal S}^*| =O(n^2)$,
and all scenarios in ${\cal S}^*$ can be determined in $O(n^2)$ time.
\end{enumerate}
\end{lemma}
If we use Lemma~\ref{lem:minsumsink1}(b) to find a sink for every scenario in ${\cal S}^*$,
then it takes $O(n^3)$ time.
We will design an algorithm to find them in sub-cubic time in Sec.~\ref{sec:sinks},
after some preparations in Sec.~\ref{sec:clusters}.
\section{Clusters}\label{sec:clusters}
Without loss of generality,
we concentrate on ${\cal R}^s$-clusters,
where $s\in {\cal S}^*_L$.
${\cal L}^s$-clusters and $s\in {\cal S}^*_R$ can be treated analogously.
For $k=2,\ldots, n$,
let ${\cal C}^s_{R,k}$ consist of $q^s(k)$ clusters
\begin{equation} \label{eqn:Rclusters}
{\cal C}^s_{R,k} = \langle C^s_{k,1}, C^s_{k,2}, \ldots, C^s_{k,q^s(k)}\rangle,
\end{equation}
and let $u_{k,i}$ be the front vertex of $C^s_{k,i}$,\footnote{For simplicity,
we omit subscript $R$ (for $R$ight) and superscript $s$ from $u_{k,i}$.
}
where $v_{k}= u_{k,1}\prec \cdots \prec u_{k,q^s(k)}$.
By (\ref{eqn:eqn1}),
the following holds for $i=1, 2, \ldots, q^s(k)-1$.
\begin{equation}
d(u_{k,i},u_{k,i+1})\tau > \lambda(C^s_{k,i})/c.\label{eqn:eqn1a}
\end{equation}
\subsection{Preprocessing}\label{sec:precomputation}
\begin{lemma}\label{lem:clusters1}
\begin{enumerate}
\item[(a)]
For any scenario $s\in {\cal S}$,
the number of distinct clusters in $\{{\cal C}^s_{R,k}\mid k=2, \ldots, n\}$ is $O(n)$.
\item[(b)]
For any scenario $s\in {\cal S}$,
we can construct $\{{\cal C}^s_{R,k}\mid k=2, \ldots, n\}$
in $O(n)$ time.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) Consider ${\cal C}^s_{R,k}$ in the order $k=n, n-1, \ldots, 2$.
${\cal C}^s_{R,n}$ consists of a single cluster containing just $v_n$.
Let
${\cal C}^s_{R,k+1} = \langle C^s_{k+1,1}, C^s_{k+1,2}, \ldots, C^s_{k+1,q^s(k+1)}\rangle$
for some $k\le n-1$.
The first cluster $C^s_{k,1} \in {\cal C}^s_{R,k}$ contains vertex $v_{k}$
and possibly $C^s_{k+1,1}, \ldots, C^s_{k+1,h}$,
where $0\le h \le q^s(k+1)$.
$h=0$ means $C^s_{k,1}$ contains just $v_{k}$ and no other vertex.
Note that $C^s_{k,1}$ is new, but the other clusters of ${\cal C}^s_{R,k}$, i.e.,
$C^s_{k,2}, \ldots, C^s_{k,q^s(k)}$ are $C^s_{k+1,h+1}, \ldots, C^s_{k+1,q^s(k+1)}$,
which are members of ${\cal C}^s_{R,k+1}$.
This means that each $k$ introduces just one new cluster,
and thus the number of distinct clusters is $O(n)$.
(b)
Let us construct ${\cal C}^s_{R,k}$ in the order $k=n, n-1, \ldots, 2$ as in part (a).
Assume that we have computed ${\cal C}^s_{R,k+1}$, and want to compute $C^s_{k,1}$.
If $v_k$ merges with the first $h$ clusters in ${\cal C}^s_{R,k+1}$,
we spend $O(h)$ time in computing $C^s_{k,1}$.
Those $h$ clusters will never contribute to the computing time from now on.
Observe that each front vertex $u_{k,i}$ of $C^s_{k,i}$
gets absorbed into a larger cluster at most once,
and each such event incurs constant computation time.
This implies assertion (b).
\end{proof}
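The incremental construction in the proof of Lemma~\ref{lem:clusters1}(b) can be sketched as follows (0-indexed, with \texttt{pos[i]} and \texttt{w[i]} the position and weight of the $(i{+}1)$-st vertex; names and representation are ours). The amortized $O(n)$ bound comes from each cluster being absorbed at most once:

```python
from collections import deque

# Sketch of the O(n) construction: scanning k = n-1, ..., 0 (0-indexed),
# each step introduces one new cluster fronted by v_k, which absorbs a
# prefix of the previous sequence while the separation condition
# d * tau > lambda / c is violated.
def build_R_clusters(pos, w, tau, c):
    clusters = deque()           # current sequence, leftmost cluster first
    result = {}                  # result[k] = list of (front, lambda) pairs
    for k in range(len(pos) - 1, -1, -1):
        lam = w[k]               # the new cluster fronted by v_k
        while clusters and (pos[clusters[0][0]] - pos[k]) * tau <= lam / c:
            lam += clusters.popleft()[1]     # absorb the next cluster
        clusters.appendleft((k, lam))
        result[k] = list(clusters)
    return result

# Three unit-weight vertices one unit apart all coalesce:
# build_R_clusters([0, 1, 2], [1, 1, 1], 1, 1)[0] == [(0, 3)]
```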
Based on (\ref{eqn:right1}),
we define
\begin{eqnarray}
E^s_{R,k} &\triangleq& \sum_{C\in {\cal C}^s_{R,k}} d(v_k,v_i)\lambda(C)\tau \label{eqn:extraCost2}\\
I^s_{R,k} &\triangleq& \sum_{C\in {\cal C}^s_{R,k}} \lambda(C)^2/2c.\label{eqn:intraCost2}
\end{eqnarray}
Computing the extra costs in (\ref{eqn:extraCost2}) is relatively easy,
because it is linear in $\lambda(C)$.
So let us try to compute intra costs efficiently.
As part of preprocessing,
we compute the prefix sum (from left) of the intra costs for the clusters under $s_M$,
and the prefix sum (from right) of the intra costs for the clusters under $s_0$.
To this end we cite the following lemma.
\begin{lemma}{\rm \cite{higashikawa2017a}}\label{lem:EandIcosts}
Given a scenario $s\in {\cal S}$,
\begin{enumerate}
\item[(a)]
We can compute $\{E^s_{R,k}, I^s_{R,k} \mid k=1, \ldots,n-1\}$ in $O(n)$ time.
\item[(b)]
We can compute $\{E^s_{L,k}, I^s_{L,k} \mid k=2, \ldots,n\}$ in $O(n)$ time.
\end{enumerate}
\end{lemma}
The following corollary follows easily from Lemmas~\ref{lem:clusters1} and \ref{lem:EandIcosts}.
\begin{corollary}\label{cor:s0sMclusters}
\begin{enumerate}
\item[(a)]
There are $O(n)$ distinct clusters among
$\{{\cal C}^{s_0}_{L,k}, {\cal C}^{s_0}_{R,k},{\cal C}^{s_M}_{L,k}, {\cal C}^{s_M}_{R,k} \mid k =1,\ldots,n\}$,
and we can compute them in $O(n)$ time.
\item[(b)]
We can compute $\{E^{s_0}_{R,k}, I^{s_0}_{R,k}, E^{s_M}_{R,k}, I^{s_M}_{R,k} \mid k=1, \ldots,n-1\}$
and $\{E^{s_0}_{L,k}, I^{s_0}_{L,k}, E^{s_M}_{L,k}, I^{s_M}_{L,k} \mid k=1, \ldots,n-1\}$ in $O(n)$ time.
\item[(c)]
For each cluster sequence in $\{{\cal C}^{s_0}_{L,k}, {\cal C}^{s_0}_{R,k},{\cal C}^{s_M}_{L,k}, {\cal C}^{s_M}_{R,k}\}$
we can compute the prefix sum of intra costs in $O(n)$ time.
Thus we can compute the prefix sums for all $k$ in $O(n^2)$ time.
\end{enumerate}
\end{corollary}
Let $\overrightarrow{S}^{s_M}[v_j]$ denote the prefix sum of intra costs from $v_1$ to $v_j$ under $s_M$.
By Corollary~\ref{cor:s0sMclusters}(c),
we can compute them for $j=1,\ldots, n$ in $O(n^2)$ time.
Similarly, $\overleftarrow{S}^{s_0}[v_j]$, the prefix sum of intra costs from $v_n$ to $v_{j}$ under $s_0$,
can be computed for $j=1,\ldots, n$ in $O(n^2)$ time.
From now on,
we assume that we have computed
all the data mentioned in Corollary~\ref{cor:s0sMclusters},
as well as these prefix sums.
\subsubsection{Constructing ${\cal S}^*$}\label{sec:Sstar}
As observed before, each scenario $s\in {\cal S}^*_L$ can be specified by
the boundary vertex $v_b$ and its weight $w(v_b)$.
But a cluster also has another parameter $k$,
as can be seen from (\ref{eqn:Rclusters}).
Let us organize this information by index $k$,
and define\footnote{Note that subscript $L$ of ${\cal S}^*_L$ means that
the left side of $v_b$ is max-weighted,
while the subscript $R$ of $\Delta_{R,k}$ refers to ${\cal R}$-clusters.
}
\begin{equation}\label{eqn:delta2}
\Delta_{R,k} \triangleq \{(b_1,\delta_{k,1}),(b_2,\delta_{k,2}),\ldots\},
\end{equation}
where $b_i\ge k$ for each $i$ and $b_1\le b_2\le \cdots$ hold.
Here $(b_i,\delta_{k,i}) \in \Delta_{R,k}$ means that when $w(v_{b_i})=\delta_{k,i}$
two ${\cal R}^s$-clusters w.r.t. $e_{k-1}$ merge.
Fig.~\ref{fig:deMerge}(a) shows the first R-cluster under $s_M$ with respect to $v_k$,
i.e., $C^{s_M}_{R,k}(v_k)$.
Fig.~\ref{fig:deMerge}(b) shows R-clusters under $s_0$ with respect to $v_k$,
\begin{figure}[ht]
\centering
\includegraphics[height=18mm]{figs/Merge1.pdf}
\caption{(a) $C^{s_M}_{R,k}(v_k)$;
(b) ${\cal R}^{s_0}$-clusters with respect to $v_k$.
}
\label{fig:deMerge}
\end{figure}
such that the last cluster in Fig.~\ref{fig:deMerge}(b) ends in vertex $v_l$,
which is the last vertex of $C^{s_M}_{R,k}(v_k)$.
Let us start with the clusters in Fig.~\ref{fig:deMerge}(b) and $b=k$.
Suppose we increase $w(v_b)$ by $\delta$ from $\underline{w}(v_b)$
until $C^{s_0}_{R,k}(v_k)$ and the cluster on its right
merge to form a single cluster.
The value of $\delta$ can be obtained by solving
\begin{equation}\label{eqn:delta1}
d(u_{k,1},u_{k,2})\tau= \{\lambda(C^{s_0}_{k,1}) +\delta\}/c.
\end{equation}
If $\delta$ satisfies
$\underline{w}(v_k) +\delta< \overline{w}(v_k)$,
we can find it in constant time.
Note that when $w(v_k)$ is increased by $\delta$,
$C^{s_0}_{k,1}$ may merge with $C^{s_0}_{k,2}, \ldots, C^{s_0}_{k,h}$,
where $h\ge 2$,
resulting in a combined cluster $C^s_{k,1}$ under $s$,
and we obtain the first item $(k, \delta_{k,1})\in \Delta_{R,k}$.
We record $\delta_{k,1}$,
and if $\underline{w}(v_k) +\delta_{k,1}< \overline{w}(v_k)$,
then repeat this operation to find the increment $\delta_{k,2}$, if any, that causes
$C^s_{k,1}$ to merge with $C^{s_0}_{k,h+1}$, etc.
Otherwise, we increment $b$ by one.
When this process terminates,
we will end up with $C^{s_M}_{R,k}(v_k)$,
and we will have constructed $\Delta_{R,k}$ in (\ref{eqn:delta2}).
We formally present the above method to compute\ $\Delta_{R,k}$ as
Algorithm~\ref{alg:alg-1}.
\begin{algorithm}[ht]\label{alg:alg-1}
\KwData {
\begin{compactitem}[--]
\item
${\cal C}^{s_0}_{R,k}=\langle C^{s_0}_{k,1}, C^{s_0}_{k,2}, \ldots, C^{s_0}_{k,q^{s_0}(k)}\rangle$
\item
$v_l=$ the last vertex of $C^{s_M}_{R,k}(v_k)$ \tcp{Precomputed and available}
\item
$\{u_{k,1},u_{k,2}, \ldots, u_{k,q^s(k)}\}$
\end{compactitem}
}
\KwResult {
\begin{compactitem}[--]
\item
$\Delta_{R,k}$
\end{compactitem}
}
\BlankLine
Set $\Delta_{R,k}=\emptyset$, $C = C^{s_0}_{k,1}$, $b=k$, $h=2$, and $j=1$ ;
\tcp{\!\!Initialize}
\While {$b \le l$}{
\Repeat {$\underline{w}(v_b) + \delta \ge \overline{w}(v_b)$}{
Solve $d(u_{k,1},u_{k,h})\tau=\{\lambda(C) + \delta\}/c$ for $\delta$ \;
\While{$d(u_{k,1},u_{k,h+1})\tau\le \{\lambda(C\cup C^{s_0}_{k,h}) + \delta\}/c$}{
$C= C\cup C^{s_0}_{k,h}$ \;
$h=h+1$
}
Set $\delta_{k,j} = \delta$ and add $(b, {\delta}_{k,j})$ to $\Delta_{R,k}$ \;
$C= C\cup C^{s_0}_{k,h}$ \;
$j=j+1$ \;
}
$b=b+1$ \;
}
\caption{{\sc Computing} $\Delta_{R,k}$}
\end{algorithm}
Clearly, each item $(b_j,\delta_{k,j}) \in \Delta_{R,k}$ in (\ref{eqn:delta2})
corresponds to a scenario $s_j\in {\cal S}^*_L$ in the following way.
\begin{equation}\label{eqn:weightsSstarLk}
w^{s_j}(v_i)= \left\{\begin{array}{lll}
&w^{s_M}(v_i) &\text{~for~} 1\le i< k\\
&\underline{w}(v_k)+ \delta_{k,j} &\text{~for~} i= k\\
&w^{s_0}(v_i) &\text{~for~} k<i\le n
\end{array}
\right.
\end{equation}
Let ${\cal S}^*_{L,k}$ be the set of scenarios corresponding to the increments in $\Delta_{R,k}$
according to (\ref{eqn:weightsSstarLk}).
Note that under any $s\in {\cal S}^*_{L,k}$,
we have $C^s_{R,k}(v_{b(s)})= C^s_{R,k}(v_k)$.
\begin{lemma}\label{lem:clusters2}
\begin{enumerate}
\item[(a)]
${\cal S}^*_L= \cup^n_{k=1}{\cal S}^*_{L,k}$.
\item[(b)]
Algorithm~\ref{alg:alg-1} runs in $O(|C^{s_M}_{R,k}(v_k)|)$ time,
where $|C|$ denotes the number of vertices in cluster $C$.
\item[(c)]
We can construct $\{\Delta_{R,k}\mid k=2,\ldots, n\}$
in $O(n^2)$ time.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) This is obvious.
(b) We can carry out each step inside the repeat loop of Algorithm~\ref{alg:alg-1}
in constant time.
We thus spend constant time per cluster of ${\cal C}^{s_0}_{R,k}$ that is contained in $C^{s_M}_{R,k}(v_k)$.
(c) Follows immediately from part (b),
since $O(|C^{s_M}_{R,k}(v_k)|)=O(n)$.
\end{proof}
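A simplified sketch of Algorithm~\ref{alg:alg-1} for the special case $b=k$ and $\tau=1$: each critical increment solves (\ref{eqn:delta1}) for $\delta$, and clusters already touching at that increment are absorbed without generating a new item. The function name and list-based representation are illustrative only:

```python
# Simplified sketch of the Delta_{R,k} computation for b = k, tau = 1:
# clusters of C^{s_0}_{R,k} are given as (front position, lambda) pairs,
# leftmost first; w_gap = w_high(v_k) - w_low(v_k) is v_k's weight budget.
def critical_increments(clusters, c, w_gap):
    u1, lam = clusters[0]
    rest = list(clusters[1:])
    deltas = []
    while rest:
        uh = rest[0][0]
        delta = c * (uh - u1) - lam      # increment solving the merge equation
        if delta >= w_gap:
            break                        # budget of v_k exhausted
        deltas.append(delta)
        lam += rest.pop(0)[1]            # merge the right neighbour
        # absorb further clusters already touching at this increment
        while rest and c * (rest[0][0] - u1) <= lam + delta:
            lam += rest.pop(0)[1]
    return deltas

# critical_increments([(0, 1), (3, 1), (8, 1)], 1, 10) == [2, 6]
```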
\subsection{Computing $\Phi^s(v_i)$ for $s\in {\cal S}^*$}\label{sec:Phivi}
Let us now turn our attention to the computation of the extra and intra costs at vertices at the time
when a merger occurs,
namely under the scenarios in ${\cal S}^*_{L,k}$.
While computing $\Delta_{R,k}$ as in Sec.~\ref{sec:precomputation},
we can update the extra and intra costs at $v_k$ under the corresponding scenario $s\in {\cal S}^*_{L,k}$
as follows.
When the first increment $\delta_{k,1}$ causes the merger of the first two clusters
$C^{s_0}_{k,1}$ and $C^{s_0}_{k,2}$, for example,
we subtract the extra cost contributions of $C^{s_0}_{k,1}$ and $C^{s_0}_{k,2}$ from $E^{s_0}_{R,k}$,
and add the new contribution from the merged cluster
in order to compute $E^s_{R,k}$ for the new scenario $s$ that results from the incremented weight
$w^s(v_k) =\underline{w}(v_k) +\delta_{k,1}$.
We can similarly compute $I^s_{R,k}$ from $I^{s_0}_{R,k}$ in constant time.
Carrying out these operations whenever a new merged cluster is created thus takes $O(n)$ time
for a given $k$ and $O(n^2)$ time in total for all $k$'s.
Recall the definition of $\overrightarrow{S}^{s_M}[v_j]$ and $\overleftarrow{S}^{s_0}[v_j]$
after Corollary~\ref{cor:s0sMclusters}.
\begin{lemma}\label{lem:costAtv_i}
Assume that all the data mentioned in Corollary~\ref{cor:s0sMclusters} are available.
Then under any given scenario $s\in {\cal S}^*_L$,
we can compute the following in constant time.
\begin{enumerate}
\item[(a)]
$\Phi^s(v_i)= \Phi^s_L(v_i)+\Phi^s_R(v_i)$ for any given index $i$.
\item[(b)]
$\Phi^s(x)= \Phi^s_L(x)+\Phi^s_R(x)$ for any given point $x$.
\end{enumerate}
\end{lemma}
\begin{proof}
(a)
Let us compute $\Phi^s(v_i)$,
where $v_k\prec v_i \prec v_{b(s)}$.
We already have $\Phi^s_L(v_i)=\Phi^{s_M}_L(v_i)$ available,
so we need $\Phi^s_R(v_i)$.
The difference between ${\cal C}^s_{R,k}$ and ${\cal C}^s_{R,i}$ is illustrated in Fig.~\ref{fig:computePhi}.
Note that the cluster $C^s_{R,k}(v_i)$ may start before $v_i$,
while $C^s_{R,i}(v_i)$ starts at $v_i$.
See the green frames in Fig.~\ref{fig:computePhi}.
Thus more than one cluster of ${\cal C}^s_{R,i}$ may belong to the same cluster in ${\cal C}^s_{R,k}$,
as shown in the figure.
We first determine the last vertex of $C^s_{R,k}(v_i)$ and let it be $v_l$.
Note that the prefix sums were computed only for the critical weights of $v_b$,
and the critical weights for $v_b$ are not the same for ${\cal C}^s_{R,k}$ and ${\cal C}^s_{R,i}$.
When we use the prefix sum at $v_i$ for the clusters in ${\cal C}^s_{R,i}$,
we should change the weight $w(v_b)$ in Fig.~\ref{fig:computePhi}(b) to that in Fig.~\ref{fig:computePhi}(a).
This can be done by replacing the intra cost at $v_a$ in Fig.~\ref{fig:computePhi}(b) by that
in Fig.~\ref{fig:computePhi}(a).
There is another possibility that is not covered by Fig.~\ref{fig:computePhi},
namely $C^s_{R,k}(v_i)=C^s_{R,k}(v_b)$,
whether or not $C^s_{R,i}(v_i)=C^s_{R,i}(v_b)$ holds.
In this case, there is no vertex $v_a$ in Fig.~\ref{fig:computePhi}(a).
Search for $w^s(v_b)$ among the critical weights for $w(v_b)$ with respect to $v_i$
in Fig.~\ref{fig:computePhi}(b),
and let $w_1 \le w^s(v_b) <w_2$.
Let $s'$ be the scenario such that $w^{s'}(v_b)=w_1$,
and determine the cluster $C^{s'}_{R,i}(v_b)$.
Let $s''$ be the scenario that results from $s'$ by increasing $w(v_b)$ from $w^{s'}(v_b)~(=w_1)$
to $w^s(v_b)$.
This increase does not affect $C^{s'}_{R,i}(v_b)$,
except that $\Phi^{s''}_R(v_i) >\Phi^{s'}_R(v_i)$ due to the increases in the extra and intra costs.
It is straightforward to compute the increased extra cost.
It is easy to see that $I(C^{s''}_{R,i}(v_b))=\{\lambda(C^s_{R,k}(v_i))\}^2/2c$.
\begin{figure}[htb]
\centering
\subfigure[${\cal C}^s_{R,k}$]{\includegraphics[height=8mm]{figs/computePhi1.pdf}}
\subfigure[${\cal C}^s_{R,i}$]{\includegraphics[height=8mm]{figs/computePhi2.pdf}}
\caption{Illustration for the proof of Lemma~\ref{lem:costAtv_i}.
}
\label{fig:computePhi}
\end{figure}
Assume now that Fig.~\ref{fig:computePhi}(b) shows the clusters of ${\cal C}^{s''}_{R,i}$,
which are the same as those of ${\cal C}^{s'}_{R,i}$,
where $v_a=v_i$ is possible.
We can compute $I(C^{s''}_{R,i}(v_b))$ as follows.
\[
I(C^{s''}_{R,i}(v_b)) = I(C^{s'}_{R,i}(v_b)) + \{\lambda(C^{s''}_{R,i}(v_b))^2 - \lambda(C^{s'}_{R,i}(v_b))^2\}/2c,
\]
where
\[
\lambda(C^{s''}_{R,i}(v_b))= \lambda(C^{s'}_{R,i}(v_b))+ w^s(v_b) -w_1.
\]
Note that all this takes constant time under the assumption of the lemma.
(b)
Note that we have for $v_{i-1}\preceq x \prec v_i$,
\begin{eqnarray}
\Phi^s_R(x) &=& d(x,v_i)(W^s[v_n] - W^s[v_{i-1}] )\tau + E^s_{R,i} + I^s_{R,i}\nonumber\\
&=& d(x,v_i)(W^s[v_n] - W^s[v_{i-1}] )\tau + \Phi^s_R(v_i), \label{eqn:right2}
\end{eqnarray}
and we can compute $\Phi^s_R(v_i)$ in constant time by part (a).
It is clear that the first term can be computed in constant time.
We can similarly compute $\Phi^s_L(x)$ in constant time.
\end{proof}
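Part (b) amounts to a constant-time evaluation of (\ref{eqn:right2}) once the suffix weight and the vertex cost are available; a one-line sketch (the function name is ours):

```python
# Sketch of the interior-point cost: for v_{i-1} <= x < v_i, given the
# suffix weight W[v_n] - W[v_{i-1}] and the vertex cost Phi_R(v_i),
# both assumed precomputed, the cost at x follows in constant time.
def phi_R_at_point(x, pos_vi, suffix_weight, phi_R_vi, tau):
    return (pos_vi - x) * suffix_weight * tau + phi_R_vi

# phi_R_at_point(1.0, 4.0, 14, 178.0, 1.0) == 220.0
```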
\section{Computing sinks $\{\mu^s\mid s\in {\cal S}^*\}$}\label{sec:sinks}
Among the increments in $\Delta_R \triangleq \{\Delta_{R,k}\mid k=2,\ldots, n\}$,
there is a natural lexicographical order,
ordered first by $b$ and then by $w(v_b)$,
from the smallest to the largest.
We write $s\lessdot s'$ if $s$ is ordered before $s'$ in this order.
In what follows we assume the items in $\Delta_R$ are sorted by $\lessdot$.
\subsection{Tracking $\mu^s$}\label{sec:backNforth}
Observe that we have $\Phi^{s}_L(x)=\Phi^{s_M}_L(x)$ for $x\preceq v_b$
and $\Phi^{s}_R(x)=\Phi^{s_0}_R(x)$ for $x\succeq v_b$,
both of which are independent of $w(v_b)$.
We have precomputed these piecewise linear functions.
We initialize the current scenario by $s=s_0$,
the boundary vertex $v_b$ by $b=1$,
and its weight by $w(v_b)= w^{s_0}(v_1)$.
For each successive increment in $\Delta_R$,
from the smallest (according to $\lessdot$),
we want to know the leftmost (aggregate time) sink under the corresponding scenario.
It is possible that,
as we increase the weight $w(v_b)$,
the sink may jump across $v_b$ from its right side to its left side,
and vice versa, back and forth many times.
We shall see how this can happen below.
By Lemma~\ref{lem:costAtv_i},
for a given index $b$,
we can compute $\{\Phi^{\overline{s}_{b-1}}(v_i) \mid i=1,2,\ldots, n\}$ in $O(n)$ time.\footnote{Recall
the definition of $\overline{s}_j$ from Sec.~\ref{sec:defs}.
}
We first scan those costs at $v_i$ for $i=b, b-1,\ldots, 1$,
and whenever we encounter a vertex with cost smaller than those we examined so far,
we record the index of the vertex.
Let ${\cal I}^b_L$ be the recorded index set.
We then scan those costs at $v_i$ for $i=b+1,\ldots, n$,
and whenever we encounter a vertex with cost smaller than those we examined so far,
we record the index of the vertex,
and let ${\cal I}^b_R$ be the recorded index set.
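The two scans are plain prefix-minimum scans; a sketch with 0-indexed costs (\texttt{staircase\_indices} is an illustrative name):

```python
# Record a vertex index whenever its cost is strictly smaller than
# every cost examined so far, scanning away from the boundary vertex.
def staircase_indices(cost, b):
    """cost[i] = Phi(v_{i+1}), 0-indexed; b = boundary index (0-indexed)."""
    I_L, best = [], float('inf')
    for i in range(b, -1, -1):         # scan v_b, v_{b-1}, ..., v_1
        if cost[i] < best:
            I_L.append(i)
            best = cost[i]
    I_R, best = [], float('inf')
    for i in range(b + 1, len(cost)):  # scan v_{b+1}, ..., v_n
        if cost[i] < best:
            I_R.append(i)
            best = cost[i]
    return I_L, I_R

# staircase_indices([5, 3, 4, 6, 2, 7], 2) == ([2, 1], [3, 4])
```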
We now plot $p_i=(v_i,\Phi^{\overline{s}_{b-1}}(v_i))$ for $i\in {\cal I}^b_L\cup {\cal I}^b_R$
in the $x$-$y$ coordinate system,
with $v_i$ as the $x$ value and $\Phi^{\overline{s}_{b-1}}(v_i)$ as the $y$ value.
See Fig.~\ref{fig:sinkMovement}.
\begin{figure}[ht]
\centering
\includegraphics[height=32mm]{figs/sinkMovement.pdf}
\caption{2-dimensional representation of
$\Phi^{\overline{s}_{b-1}}(v_i)=\Phi_L^{\overline{s}_{b-1}}(v_i)+\Phi_R^{\overline{s}_{b-1}}(v_i)$.
}
\label{fig:sinkMovement}
\end{figure}
It is clear that for $i,j \in {\cal I}^b_L$,
we have $\Phi^{\overline{s}_{b-1}}(v_i) < \Phi^{\overline{s}_{b-1}}(v_j)$ if $i<j$,
and for $i,j \in {\cal I}^b_R$,
we have $\Phi^{\overline{s}_{b-1}}(v_i) > \Phi^{\overline{s}_{b-1}}(v_j)$ if $i<j$.
Therefore, the points plotted on the left (resp. right) side of $v_b$ get higher and higher
as we approach $v_b$ from left (resp. right),
as can be seen in Fig.~\ref{fig:sinkMovement}.
Note that for a vertex $v_i\prec v_b$,
as $w(v_b)$ is increased,
$\Phi^{s}_R(v_i)$ increases,
while $\Phi^{s}_L(v_i)$ remains fixed at $\Phi^{s_M}_L(v_i)$.
For $v_i\succ v_b$, on the other hand,
as $w(v_b)$ is increased,
$\Phi^{s}_L(v_i)$ increases,
while $\Phi^{s}_R(v_i)$ remains fixed at $\Phi^{s_0}_R(v_i)$.
A vertical arrow in Fig.~\ref{fig:sinkMovement} indicates the amount
of increase in the cost of the corresponding vertex when $w(v_b)$ is increased
by a certain amount.
Note that the farther away a vertex is from $v_b$,
the larger is the increase in its cost.
The following proposition follows from the above observations.
\begin{proposition}\label{prop:invariants}
\begin{enumerate}
\item[(a)]
$\Phi^s(v_i) \le \Phi^s(v_j)$ holds for any pair $i,j\in {\cal I}^b_L$ such that $i<j$.
\item[(b)]
$\Phi^s(v_i) \ge \Phi^s(v_j)$ holds for any pair $i,j\in {\cal I}^b_R$ such that $i<j$.
\item[(c)]
Either the vertex with the smallest index in ${\cal I}^b_L$
or the vertex with the largest index in ${\cal I}^b_R$ is a sink, i.e., it has the lowest cost.
\end{enumerate}
\end{proposition}
Note that the cost at $v_b$ is not affected by the change in $w(v_b)$ and remains the same.
We consider the three properties in Proposition~\ref{prop:invariants} as {\em invariant} properties,
and remove the vertices that do not satisfy (a) or (b).
As we increase $w(v_b)$,
in the order of the sorted increments in $\Delta_R$,
we update ${\cal I}^b_L$ and ${\cal I}^b_R$,
looking for the change of the sink.
By property (c),
the sink cannot move away from $v_b$.
We now make an obvious observation.
\begin{proposition}\label{prop:sameSink}
As $w(v_b)$ is increased,
the sink remains at the same vertex for all increments tested since the last time it moved,
until the smallest index in ${\cal I}^b_L$ or
the largest index in ${\cal I}^b_R$ changes,
at which point the sink moves.
\end{proposition}
We are thus interested in how ${\cal I}^b_L$ (resp. ${\cal I}^b_R$) changes,
in particular when its smallest (resp. largest) index changes.
To find out, let $\delta$ be the smallest increase such that $(b,\delta) \in \Delta_R$
{\em and} increasing $w(v_b)$ by $\delta$ above $\underline{w}(v_b)$
causes the cost of vertex $v_i$ to reach the cost of $v_j$,
where $i$ and $j$ are either adjacent in ${\cal I}^b_L$ and $i<j$ holds,
or adjacent in ${\cal I}^b_R$ and $i>j$ holds.
If such a $\delta$ does not exist,
we set $\delta=\infty$.
Since we can find such a $\delta$ by binary search over $\Delta_R$,
finding it for each adjacent pair of indices in ${\cal I}^b_L$ and ${\cal I}^b_R$
takes $O(\log n)$ time,
and the total time for all adjacent pairs is $O(n\log n)$.
We insert $(\delta;i,j)$ into a min-heap ${\cal H}$,
organized according to the first component $\delta$,
from which the item with the smallest first component can be found in constant time
(and deleted in $O(\log n)$ time).
Note that $v_b$ is fixed.
Once ${\cal H}$ has been constructed as above,
we pick the item $(\delta;i,j)$ with the smallest $\delta$ from ${\cal H}$ (in constant time).
If $i,j\in {\cal I}^b_L$ (resp. $i,j\in {\cal I}^b_R$) then we remove $i$ (resp. $j$) from
${\cal I}^b_L$ (resp. ${\cal I}^b_R$),
and compute $(\delta';i^-,j)$ (resp. $(\delta';i,j^+)$) where $i^-$ (resp. $j^+$) is the
index in ${\cal I}^b_L$ (resp. ${\cal I}^b_R$) that is immediately before (resp. after) $i$ (resp. $j$).
We perform binary search to find $\delta'$, taking $O(\log n)$ time,
and insert $(\delta';i^-,j)$ (resp. $(\delta';i,j^+)$) into ${\cal H}$, again taking $O(\log n)$ time.
If $i$ was the smallest index in ${\cal I}^b_L$,
the sink may have moved.
In this case no new item is inserted into ${\cal H}$.
Similarly, if $j$ was the largest index in ${\cal I}^b_R$,
the sink may have moved,
and no new item is inserted into ${\cal H}$.
We repeat this until either ${\cal H}$ becomes empty or the min value in ${\cal H}$ is $\infty$.
This is repeated $O(n)$ times, and the total time required is $O(n\log n)$.
If the sink moves when the smallest index in ${\cal I}^b_L$ or the largest index in ${\cal I}^b_R$
changes,
we have determined the sink for all scenarios with the smaller values of $w(v_b)$ tested
since the last time the sink moved.
Once $w(v_b)=\underline{w}(v_b)+\delta$ reaches $\overline{w}(v_b)$,
$b$ is incremented,
and the new boundary vertex $v_{b+1}$ now lies to the left of the old boundary vertex $v_b$
in Fig.~\ref{fig:sinkMovement}.
\subsection{Algorithm}
Algorithm~\ref{alg:alg-2}
is a formal description of our method for finding a sink for each of the increments
of $w(v_b)$ listed in $\Delta_R$.
It refers to ${\cal S}^*_b\triangleq \{s\in {\cal S}^* \mid b(s)=b\}$.
\renewcommand\footnoterule{}
\begin{floatTogether}
\begin{algorithm}[H]\label{alg:alg-2}
\KwData {
\begin{compactitem}[--]
\item
Boundary vertex $v_b$
\item
Sorted array $\Delta_R$;
\item
Index sets ${\cal I}^b_L$ and ${\cal I}^b_R$ for $w(v_b)=\underline{w}(v_b)$
\item
Arrays $\{\underline{W}[\cdot], \overline{W}[\cdot]\}$ and $\{\overrightarrow{S}^{s_M}[v_j], \overleftarrow{S}^{s_0}[v_j]\mid j=1, \ldots, n\}$
\end{compactitem}
}
\KwResult {
\begin{compactitem}[--]
\item
Sinks $\{\mu^s \mid s\in {\cal S}^*_b\cap {\cal S}^*_L\}$
\end{compactitem}
}
\BlankLine
\For {each adjacent pair $i,j\in{\cal I}^b_L~(i<j)$}{
Using binary search, find the smallest increment $\delta$ in $\Delta_R$ such that
$\Phi^{s(\delta)}(v_i) \ge \Phi^{s(\delta)}(v_j)$, and insert $(i,j;\delta)$ into a min-heap ${\cal H}_L$\;
}
\For {each adjacent pair $i,j\in{\cal I}^b_R~(i<j)$}{
Using binary search, find the smallest increment $\delta$ in $\Delta_R$ such that
$\Phi^{s(\delta)}(v_i) \le \Phi^{s(\delta)}(v_j)$, and insert $(i,j; \delta)$ into a min-heap ${\cal H}_R$\;
}
\While{${\cal H}_L \cup{\cal H}_R\not=\emptyset$}{
From ${\cal H}_L \cup{\cal H}_R$ remove item $(i,j;\delta)$ with smallest $\delta$,
and name it $\hat{\delta}_{i,j}$ \;
\If {$i$ is not the first index in ${\cal I}^b_L$ and $j$ is not the last index in ${\cal I}^b_R$}{
\If {$(i,j;\delta)\in {\cal H}_L$}
{Remove $i$ from ${\cal I}^b_L$,
compute $\hat{\delta}_{i^-,j}$, where $i^-$ is the immediate predecessor of $i$, and insert $(i^-,j; \hat{\delta}_{i^-,j})$ into ${\cal H}_L$
}
\Else {
Remove $j$ from ${\cal I}^b_R$,
compute ${\hat{\delta}}_{i,j^+}$, where $j^+$ is the immediate successor of $j$, and insert $(i,j^+; \hat{\delta}_{i,j^+})$ into ${\cal H}_R$
}
Skip the {\bf else} part
}
\Else {
\If {$i$ is the first index in ${\cal I}^b_L$}
{Remove $i$ from ${\cal I}^b_L$
}
\If {$j$ is the last index in ${\cal I}^b_R$}
{Remove $j$ from ${\cal I}^b_R$
}
If the index corresponding to the current sink has been removed,
then determine the new sink at either the vertex indexed by the first index in ${\cal I}^b_L$
or the vertex indexed by the last index in ${\cal I}^b_R$, whichever has the smaller cost \;
From now on the sink remains at this new position until it moves the next time
}
}
\caption{{\sc Computing} $\{\mu^s \mid s\in {\cal S}^*_b\cap {\cal S}^*_L\}$}
\end{algorithm}
\end{floatTogether}
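The event-driven loop of Algorithm~\ref{alg:alg-2} can be sketched as follows, under the simplifying assumption that the cost of vertex $i$ is a single linear function $a_i + s_i\delta$ of the increment $\delta$ in $w(v_b)$ (in the paper the costs are piecewise linear, but the event structure is the same); all names are illustrative.

```python
import heapq

def track_sinks(a, s, b, queries):
    """Report the sink for each increment in queries (sorted), where the
    cost of vertex i at increment d is a[i] + s[i]*d (simplified model).

    Slopes s grow with distance from the boundary vertex b, so the farther
    vertex of an adjacent candidate pair may catch up in cost and is then
    removed, exactly as in the invariant-maintenance of Algorithm 2."""
    IL = list(range(b))              # candidates left of v_b (invariant (a))
    IR = list(range(b + 1, len(a)))  # candidates right of v_b (invariant (b))
    heap = []

    def push(side, i, j):
        # the vertex farther from v_b grows faster and may catch up
        fast, slow = (i, j) if side == 'L' else (j, i)
        if s[fast] > s[slow]:
            d = (a[slow] - a[fast]) / (s[fast] - s[slow])
            if d >= 0:
                heapq.heappush(heap, (d, side, i, j))

    for i, j in zip(IL, IL[1:]):
        push('L', i, j)
    for i, j in zip(IR, IR[1:]):
        push('R', i, j)

    sinks = []
    for q in sorted(queries):
        while heap and heap[0][0] <= q:
            d, side, i, j = heapq.heappop(heap)
            idx = IL if side == 'L' else IR
            if i not in idx or j not in idx:
                continue                      # stale event
            if side == 'L':                   # v_i violates invariant (a)
                k = idx.index(i)
                idx.remove(i)
                if k > 0:
                    push('L', idx[k - 1], j)  # new adjacent pair (i^-, j)
            else:                             # v_j violates invariant (b)
                k = idx.index(j)
                idx.remove(j)
                if k < len(idx):
                    push('R', i, idx[k])      # new adjacent pair (i, j^+)
        # Proposition (c): sink is the first of IL or the last of IR
        cands = ([IL[0]] if IL else []) + ([IR[-1]] if IR else [])
        sinks.append(min(cands, key=lambda v: a[v] + s[v] * q))
    return sinks
```

The linear-time membership and removal on Python lists are for clarity only; with doubly-linked index lists the loop matches the $O(n\log n)$ bound of Lemma~\ref{lem:delta}.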
\begin{lemma}\label{lem:delta}
\begin{enumerate}
\item[(a)]
The minimum increment in $\Delta_R$ that causes the cost of $v_i$ to exceed that of
the next vertex closer to $v_b$
can be determined in $O(\log n)$ time.
\item[(b)]
Algorithm~\ref{alg:alg-2} runs in $O(n\log n)$ time for a given $v_b$.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) Use binary search on $\Delta_R$,
and compare the costs for each probe in constant time.
(b) Note that $|{\cal S}^*_b|=O(n)$.
Evaluating $\Phi^{s(\delta)}(v_i)$ and $\Phi^{s(\delta)}(v_j)$ in Lines~2 and 5 takes constant time
by Lemma~\ref{lem:costAtv_i}.
Thus the two for-loops take $O(n\log n)$ time.
Updating ${\cal H}_L$ and ${\cal H}_R$ takes $O(\log n)$ time per insertion/deletion,
and insertions/deletions occur at most $n$ times, for a total of $O(n\log n)$ time.
All other steps take constant time.
Step~8 takes $O(n)$ time.
\end{proof}
For the ${\cal R}^s$-clusters w.r.t. $e_{i-1}$ that lie to the right of $C^s_{R,i}(v_b)$
and are not merged as a result of the increase in $w(v_b)$,
the sum of their intra costs has already been precomputed.
We can similarly compute $\{\mu^s \mid s\in {\cal S}^*_b\cap {\cal S}^*_R\}$ in $O(n\log n)$ time.
Running Algorithm~\ref{alg:alg-2} and its counterpart for ${\cal S}^*_R$ for $b=1,2,\ldots,n$,
we get
\begin{lemma}\label{lem:allSinks}
The sinks $\{\mu^s \mid s\in {\cal S}^*\}$ can be computed in $O(n^2\log n)$ time.
\end{lemma}
\section{Minmax regret sink}\label{sec:regret}
Since we know the sinks $\{\mu^s\mid s\in{\cal S}^*\}$
(Lemma~\ref{lem:allSinks}),
we proceed to compute the upper envelope for the $O(n^2)$ regret functions
$\{R^s(x) = \Phi^s(x) - \Phi^s(\mu^s)\mid s \in {\cal S}^*\}$.
The minmax regret sink $\mu^*$ is at the lowest point of this upper envelope.
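For concreteness, the definition just stated can be checked against a brute-force sketch (this encodes only the definition of minmax regret over a finite set of candidate points, not the efficient $O(n^2\log n)$ computation of this paper):

```python
def minmax_regret(costs):
    """costs[s][x]: evacuation cost Phi^s(x) of candidate sink x under
    scenario s.  The regret of x under s is Phi^s(x) - Phi^s(mu^s), where
    mu^s minimizes Phi^s; the minmax regret sink minimizes, over x, the
    maximum regret over all scenarios (the upper envelope)."""
    regrets = [[row[x] - min(row) for x in range(len(row))] for row in costs]
    worst = [max(r[x] for r in regrets) for x in range(len(costs[0]))]
    best = min(range(len(worst)), key=worst.__getitem__)
    return best, worst[best]
```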
\subsection{Upper envelope for $\{R^s(x) \mid s\in {\cal S}^*\}$}
If we try to find the upper envelope $\max_{s\in {\cal S}^*}R^s(x)$ in one shot,
it would take at least $O(n^3)$ time,
since $|{\cal S}^*|= O(n^2)$,
and for each $s$, $\Phi^s(x)$ consists of $O(n)$ linear segments.
Recall the definition ${\cal S}^*_b= \{s\in {\cal S}^*\mid b(s)=b\}$
for $b=1,2,\ldots, n$.
We employ the following two-phase processing,
carrying out each phase in $O(n^2\log n)$ time.
\begin{itemize}
\item
[{\em Phase 1}:]
For each $b$, compute the upper envelope $\max_{s\in{\cal S}^*_b}R^s(x)$.
\item
[{\em Phase 2}:] Compute the upper envelope for the results from {\em Phase 1}.
\end{itemize}
In {\em Phase 1}, we successively merge regret functions,
spending amortized $O(\log n)$ time per regret function.
Thus the total time for a given $b$ is $O(n\log n)$ and the total time for all
$b$ is $O(n^2\log n)$.
In {\em Phase 2}, we then compute the upper envelope for the resulting $O(n)$ regret functions
with a total of $O(n^2)$ linear segments.
To implement {\em Phase 1},
we first prove the following lemma in the Appendix.
\begin{lemma}\label{lem:increasing}
Let $s, s' \in {\cal S}^*_b$ be two scenarios such that $s\lessdot s'$.
As $x$ moves to the right,
the difference $D(x)=\Phi^{s'}(x) -\Phi^{s}(x)$ decreases monotonically for $v_1 \preceq x\preceq v_b$
and increases monotonically for $v_b \preceq x \preceq v_n$.
\end{lemma}
We divide each regret function in $\{R^s(x) \mid s\in {\cal S}^*_b\}$ into two parts:
left of $v_b$ and right of $v_b$.
We then find the upper envelope for the left set and right set separately.
Note that each $R^s(x)$ has $O(n)$ bending points, since it bends only at vertices.
Taking the max of two such functions may add one extra bending point on an edge,
so the total number of bending points in the upper envelope is still $O(n)$.
By definition we have
\begin{eqnarray}\label{eqn:diff}
R^{s'}(x) - R^s(x) &=& \Phi^{s'}(x) -\Phi^{s'}(\mu^{s'}) - \{\Phi^s(x) -\Phi^s(\mu^s)\}\nonumber\\
&=& \Phi^{s'}(x) -\Phi^s(x) - \{\Phi^{s'}(\mu^{s'}) -\Phi^s(\mu^s)\}.
\end{eqnarray}
Note that the second term in (\ref{eqn:diff}) is independent of position $x$.
Lemma~\ref{lem:increasing} implies
\begin{lemma}\label{lem:crossing2}
Let $s, s' \in {\cal S}^*_b$ be two scenarios such that $s\lessdot s'$.
Then $R^{s'}(x)$ may cross $R^{s}(x)$ at most once in the interval $[v_1,v_b]$ from above,
and at most once in the interval $[v_b,v_n]$ from below.
\end{lemma}
See Fig.~\ref{fig:intersectPath2} for an illustration of Lemma~\ref{lem:crossing2}.
\begin{figure}[ht]
\centering
\includegraphics[height=22mm]{figs/intersectPath2a.pdf}
\caption{$R^s(x)$ and $R^{s'}(x)$ cross each other at {\tt x}'s.}
\label{fig:intersectPath2}
\end{figure}
Algorithm~\ref{alg:alg-3} computes $\max_{s\in {\cal S}^*_b}R^s(x)$.
\begin{lemma}\label{lem:upperEnv}
\begin{enumerate}
\item[(a)]
The upper envelope $\max_{s\in {\cal S}^*_b}R^s(x)$
has $O(|{\cal S}^*_b|+ n)$ line segments.
\item[(b)]
Algorithm~\ref{alg:alg-3} computes it correctly in $O(|{\cal S}^*_b|\log n)$ time.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) Without loss of generality,
let us consider the upper envelope in the interval $[v_b,v_n]$.
Since $R^s(x) =\Phi^s(x) -\Phi^s(\mu^s)$,
$R^s(x)$ is linear over the edge connecting any adjacent pair of vertices,
and $\max_{s\in {\cal S}^*_b}\Phi^s(x)$ has $O(|{\cal S}^*_b|+n)$ line segments on
all edges by Lemma~\ref{lem:crossing2}.
(b) By Lemma~\ref{lem:increasing},
the condition of Line 5 can be tested by their values at $v_b$,
and the condition of Line 8 can be tested by their values at $v_n$.
If $R^{s}(x)$ and $R^{s'}(x)$ in Lemma~\ref{lem:crossing2} intersect at a point $X$ to the right of $v_b$,
then $R^{s'}(x)\ge R^{s}(x)$ holds for $x\succ X$,
and we can ignore $R^{s}(x)$ for $x\succ X$.
After the {\bf if}$-${\bf else},
Algorithm~\ref{alg:alg-3} ignores the regret function that was processed,
which is also justified by Lemma~\ref{lem:increasing}.
\end{proof}
\begin{algorithm}[ht]\label{alg:alg-3}
\KwData {
\begin{compactitem}[--]
\item
$\{\Phi^s(x), \mu^s \mid s\in {\cal S}^*_b\}$
\end{compactitem}
}
\KwResult {
\begin{compactitem}[--]
\item
$\max_{s\in {\cal S}^*_b} R^s(x)$
\end{compactitem}
}
\BlankLine
Order $\{R^s(x)\mid s\in {\cal S}^*_b\}$ by $\lessdot$ from the ``smallest''
to the ``largest'' \;
Initialize the upper envelope $U(x)$ to the ``smallest'' function,
and the {\em crossing point} $X$ to $v_n$ \;
\While{there is an unprocessed regret function}{
Pick the next ``smallest'' unprocessed regret function, $R^s(x)$ \;
\If {$\forall x: R^s(x)\ge U(x)$ (compare their values at $v_b$)}{
set $U(x)=R^s(x)$ and $X=v_n$ and go to Line 17
}
\If {$\forall x: R^s(x)\le U(x)$ (compare their values at $v_n$)}{
Do nothing and go to Line 17
}
\If {$U(X)\ge R^s(X)$}{
Compute the intersection of $U(x)$ and $R^s(x)$ (to the right of $X$) by binary search,
and update $U(x)$ and $X$
}
\Else {Compute the intersection of $U(x)$ and $R^s(x)$ (to the left of $X$) by binary search,
and update $U(x)$ and $X$
}
Mark $R^s(x)$ as {\em ``processed.''}
}
\caption{{\sc Computing} $\max_{s\in {\cal S}^*_b} R^s(x)$}
\end{algorithm}
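One merge step of Phase 1, restricted to the interval right of $v_b$ and evaluated at the common breakpoints, can be sketched as follows. This is a minimal sketch assuming, per Lemma~\ref{lem:increasing}, that the difference of the two regret functions is monotone increasing there, so the later scenario crosses the earlier one at most once, from below.

```python
import bisect

def merge_single_crossing(f, g):
    """Upper envelope of two regret functions at common breakpoints.

    f, g: values of R^s and R^{s'} (s "smaller than" s') at the breakpoints
    right of v_b.  g - f is assumed monotone increasing (Lemma
    "increasing"), so a single bisection locates the crossing."""
    diff = [gv - fv for fv, gv in zip(f, g)]
    k = bisect.bisect_left(diff, 0)   # first breakpoint with g >= f
    return f[:k] + g[k:]              # f before the crossing, g after
```

Each merge costs one $O(\log n)$ bisection, which is the amortized cost per regret function claimed for Phase 1 (the actual envelope may bend once between two breakpoints; the sketch records it at breakpoint resolution only).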
\subsection{Main theorem}
Since $O(\sum_{b=1}^n |{\cal S}^*_b|\log n)= O(n^2\log n)$,
Lemma~\ref{lem:upperEnv} implies
\begin{lemma}\label{lem:bendingPoints}
The upper envelope $\max_{s\in {\cal S}^*}R^s(x)$ has $O(n^2)$ linear segments,
and can be computed in $O(n^2\log n)$ time.
\end{lemma}
Hershberger~\cite{hershberger1989} showed that the upper envelope of
$m$ line segments can be computed in $O(m\log m)$ time.
We can use his method to compute the global upper envelope in $O(n^2\log n)$ time.
So far we have not paid any attention to the spikes at vertices.
We divide the problem into two subproblems: the optimal sink lies on an edge, or at a vertex,
and compare the two solutions to pick the better one.
In addition to Lemma~\ref{lem:bendingPoints}, we therefore evaluate the maximum cost at each vertex.
The minmax regret sink is at the point with the minimum of these maximum costs.
Corollary~\ref{cor:s0sMclusters} and
Lemmas~\ref{lem:minsumsink2}, \ref{lem:allSinks} and \ref{lem:bendingPoints} imply our main result.
\begin{theorem}
The minmax regret sink on a dynamic path network can be computed in $O(n^2\log n)$ time.
\end{theorem}
\section{Conclusion}
We presented an $O(n^2\log n)$ time algorithm for finding the minmax regret aggregate time sink
on dynamic path networks with uniform edge capacities,
which improves upon the previously most efficient $O(n^3)$ time algorithm in
\cite{higashikawa2017a}.
This was achieved by two novel methods.
One was used to compute 1-sinks under the $O(n^2)$ pseudo-bipartite scenarios in amortized $O(\log n)$ time
per scenario,
and the other was used to compute the upper envelope of $O(n^2)$ regret functions in $O(n^2\log n)$ time.
Note that the $O(n^2)$ regret functions have $O(n^3)$ linear segments in total.
Future research topics include solving the minmax regret problem for the aggregate time sink on more general
networks, such as trees.
\input{archiv.bbl}
\section*{Appendix}
\subsection{Proof of Lemma~\ref{lem:increasing}}
\noindent
{\bf Lemma 14}
{\em Let $s, s' \in {\cal S}^*_b$ be two scenarios such that $s\lessdot s'$.
As $x$ moves to the right,
the difference $D(x)=\Phi^{s'}(x) -\Phi^{s}(x)$ increases monotonically for $v_b \preceq x \preceq v_n$,
and decreases monotonically for $v_1 \preceq x\preceq v_b$.}
\begin{proof}
Without loss of generality, we assume that $v_b \preceq x \preceq v_n$,
since essentially the same proof works if $v_1 \preceq x\preceq v_b$.
Let us first consider the extra cost.
If the sum of the vertex weights on the left side of $x$ is larger than that on the right side,
then the extra cost component of $\Phi^{s'}(x)$ grows faster than that of $\Phi^{s}(x)$.
Otherwise,
it decreases more slowly.
We now consider the intra costs.
They do not change as long as $x$ moves on the same edge,
including the vertex at its right end.
So we assume that $x$ moves across a vertex, $v_k$,
as illustrated in Fig.~\ref{fig:clusters3a},
where $s$ and $s'$ are two scenarios such that $s \lessdot s'$ and
both have the same boundary vertex $v_b$.
Let $v_b\in C^s_{L,j}(v_j)$; hence $v_b\in C^{s'}_{L,j}(v_j)$.
Let $v_i$ be the front vertex of the ${\cal L}^s_x$-cluster immediately to the left
of $C^s_{L,j}(v_j)$.
\begin{figure}[ht]
\centering
\includegraphics[width=7.2cm]{figs/clusters3a.pdf}
\caption{${\cal L}^s$-clusters and ${\cal L}^{s'}$-clusters w.r.t. $x$ are shown in dashed black ovals,
and ${\cal L}^s$-clusters and ${\cal L}^{s'}$-clusters w.r.t. $x'$ are shown in dashed red ovals.
}
\label{fig:clusters3a}
\end{figure}
We compare the increase in $\Phi^{s'}(x)$ with that in $\Phi^s(x)$,
as $x$ moves past $v_k$ to $x'$,
and show that the increase in $\Phi^s(x)$ is smaller than that in $\Phi^{s'}(x)$.
Clearly $D(x)$ is the smallest when $\Phi^s(x)$ increases as much as possible
and $\Phi^{s'}(x)$ increases as little as possible,
where we consider a decrease as a negative increase.
This situation happens,
when the move $x\rightarrow x'$ causes the merger of ${\cal L}^s$-clusters,
which implies $d(v_j, v_k)\tau< \underline{w}_k/c$,
while under $s'$ it merely merges $v_k$ into an existing ${\cal L}^{s'}$-cluster.\footnote{Note that
if $v_k$ doesn't merge to the cluster to its left under $s$ then it doesn't merge under $s'$ either.
}
Since $C^s_{L,i}(v_i)$ and $C^s_{L,j}(v_j)$ are two separate clusters,
we have
\begin{equation}\label{eqn:nomerge}
d(v_i,v_j)\tau > \lambda(C^s_{L,j}(v_j))/c.
\end{equation}
The part of $\Phi^s(x)$ that is affected by the move is
\begin{eqnarray}\label{eqn:costs1a}
\Phi^s(x)&:& \lambda(C^s_{L,j}(v_j)) d(v_j, x)\tau + \frac{\lambda(C^s_{L,j}(v_j))(\lambda(C^s_{L,j}(v_j))+1)}{2c}\nonumber\\
&+& \lambda(C^s_{L,i}(v_i)) d(v_i, x)\tau + \frac{\lambda(C^s_{L,i}(v_i))(\lambda(C^s_{L,i}(v_i))+1)}{2c}. \nonumber
\end{eqnarray}
Since $C^s_{L,i}(v_i)$ and $C^s_{L,j}(v_j)$ are merged into $C^s_{L,k}(v_k)$ by assumption,
we have
\begin{equation}
d(v_i,v_k)\tau \leq \{\lambda(C^s_{L,j}(v_j)) + \underline{w}_k\}/c,
\end{equation}
and the part of $\Phi^{s}(x')$ that is affected by the move is
\begin{eqnarray}\label{eqn:costs1b}
\Phi^s(x') &&: \lambda(C^s_{L,k}(v_k)) d(v_k, x')\tau \nonumber\\
&&+ \frac{\{\lambda(C^s_{L,i}(v_i)) + \lambda(C^s_{L,j}(v_j))+\underline{w}_k\}\{\lambda(C^s_{L,i}(v_i))+ \lambda(C^s_{L,j}(v_j))+\underline{w}_k+1\}}{2c},\nonumber
\end{eqnarray}
where
$\lambda(C^s_{L,k}(v_k)) =\lambda(C^s_{L,i}(v_i)) + \lambda(C^s_{L,j}(v_j))+\underline{w}_k$.
We now compute the increase
\begin{eqnarray}\label{eqn:costdiff1}
\Phi^s(x') - \Phi^s(x) &=& \lambda(C^s_{L,k}(v_k)) d(v_k, x')\tau - \lambda(C^s_{L,i}(v_i)) d(v_i, x)\tau \nonumber\\
&-& \lambda(C^s_{L,j}(v_j)) d(v_j, x)\tau + \lambda(C^s_{L,i}(v_i))\lambda(C^s_{L,j}(v_j))/c\nonumber\\
&+& \{\lambda(C^s_{L,i}(v_i))+\lambda(C^s_{L,j}(v_j))\}\underline{w}_k/c
+ \underline{w}_k(\underline{w}_k + 1)/2c.
\end{eqnarray}
Similarly, we have under $s'$,
\begin{eqnarray}\label{eqn:costs2}
\Phi^{s'}(x)&=& \lambda(C^{s'}_{L,j}(v_j)) d(v_j, x)\tau + \frac{\lambda(C^{s'}_{L,j}(v_j))(\lambda(C^{s'}_{L,j}(v_j))+1)}{2c}\nonumber\\
\Phi^{s'}(x')&=& \lambda(C^{s'}_{L,k}(v_k)) d(v_k, x')\tau
+ \frac{(\lambda(C^{s'}_{L,j}(v_j)) +\underline{w}_k)(\lambda(C^{s'}_{L,j}(v_j)) +\underline{w}_k+1)}{2c},\nonumber
\end{eqnarray}
where $ \lambda(C^{s'}_{L,k}(v_k)) =\lambda(C^{s'}_{L,j}(v_j)) +\underline{w}_k$,
and the increase is
\begin{eqnarray}\label{eqn:costdiff2}
\Phi^{s'}(x')&-&\Phi^{s'}(x)= \lambda(C^{s'}_{L,k}(v_k)) d(v_k, x')\tau - \lambda(C^{s'}_{L,j}(v_j)) d(v_j, x)\tau\nonumber\\
&+& \lambda(C^{s'}_{L,j}(v_j))\underline{w}_k/c + \underline{w}_k(\underline{w}_k + 1)/2c.
\end{eqnarray}
We clearly have $\lambda(C^{s'}_{L,k}(v_k)) >\lambda(C^s_{L,k}(v_k))$,
and (\ref{eqn:nomerge}) implies $d(v_i, x)\tau>\lambda(C^s_{L,j}(v_j))/c$,
since $v_j \prec x$.
The assumption that $v_k$ is merged into $C^s_{L,x}(v_j)$ and $C^{s'}_{L,x}(v_j)$
implies $d(v_j, v_k)\tau < \underline{w}_k/c$ for $v_j\prec x\prec v_k$.
We conclude that
\[
\{\Phi^{s'}(x')-\Phi^{s'}(x)\} - \{\Phi^s(x')-\Phi^s(x)\} > 0,
\]
when $x\prec v_k \prec x'$.
This is valid in particular if $x=v^-_k$ and $x'=v^+_k$,
where $v^-$ (resp. $v^+$) denote a point on the left (resp. right) of $v$ that is arbitrarily close to $v$.
It is clear that this relation also holds if $v_k$ is not merged into $C^s_{L,j}(v_j)$ and $C^{s'}_{L,j}(v_j)$.
\end{proof}
\end{document}
{
"timestamp": "2018-06-05T02:11:24",
"yymm": "1806",
"arxiv_id": "1806.00814",
"language": "en",
"url": "https://arxiv.org/abs/1806.00814"
}
\section{Introduction}
\vspace{-0.5em}
With the progress of deep learning \cite{krizhevsky2012imagenet,simonyan2014very,Szegedy2014Going}, deep embedding learning has received much attention and has been applied to a wide range of tasks and applications, including image retrieval and clustering \cite{Arandjelovic2016NetVLAD,oh2016deep,Hershey2015Deep,hoffer2015deep}, pattern verification \cite{sun2014deep,Schroff2015FaceNet,Parkhi2015Deep,Yi2014Deep} and domain adaptation \cite{Tahmoresnezhad2016Visual,Long2014Transfer}. Deep embedding learning aims to learn a feature representation of the input image that keeps the distance between similar data points small and that between dissimilar data points large in the feature space.
In the deep embedding learning community, the most remarkable works are based on contrastive loss \cite{sun2014deep,Lin2015DeepHash,Simo2015Discriminative,Yi2014Deep} and triplet loss \cite{Schroff2015FaceNet,oh2016deep,hoffer2015deep,Parkhi2015Deep}. It is common knowledge that hard example mining is crucial to ensure the quality and efficiency of these methods, since overly easy examples satisfy the constraint well and produce nearly zero loss, without contributing to the parameter update during back-propagation. Nevertheless, many hard example mining methods incur a high computational cost when measuring the embedding vectors in feature space, and they are performance-sensitive, e.g. the hard-class mining procedure in N-pair loss \cite{Sohn2016npair}.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{ste.eps}\\
\vspace{-1em}
\caption{Visualization (by t-SNE \cite{Laurens2015Accelerating}) of the deep embedding on the test splits of (a) CUB-200-2011 \cite{Wah2011The} (5924 images from class 101 to 200) and (b) MNIST \cite{Lecun2010The}. In (a), the
intra-class distance can be larger than the inter-class distance, and the distribution is heterogeneous and multimodal. While in (b), the distribution is 'uniform' and ideal.}\label{fig1}
\vspace{-1em}
\end{figure}
To alleviate the issue above, and to learn a compact intra-class distance and a separable inter-class distance, we introduce the concept of a large margin constraint into N-pair loss instead of hard-class mining. Some existing works \cite{Weinberger2006Distance,Liu2016Large} have focused on learning discriminative embeddings by injecting large margin constraints into KNN and softmax, respectively. However, they exert a non-adaptive constraint on the objective loss by introducing a fixed margin, which is not suitable for a heterogeneous and multimodal feature distribution.
Figure \ref{fig1} illustrates the comparison between the feature distributions on the fine-grained bird dataset \cite{Wah2011The} and the MNIST dataset \cite{Lecun2010The}. It can be observed that the diversity of the embedding representation on the bird dataset is prominent: the intra-class distance can be larger than the inter-class distance and the distribution is heterogeneous, unlike the 'uniform' distribution of MNIST. In real cases, the distribution in feature space is complex due to pose and appearance \cite{Huang2016Local}. A consequent problem is that
a stronger margin constraint can be used for easy patterns while it is infeasible for hard patterns\footnote{Easy/hard patterns refer to cases where the intra-class distance is smaller/larger than the inter-class distance.}. That is why coarsely imposing a fixed constraint not only hardly improves performance, but may even lead to the failure of training. Thus, introducing a prudent and local-adaptive margin constraint is of the essence.
In this paper, we propose the Adaptive Large Margin N-pair loss (ALMN) to address the aforementioned issues, producing discriminative embeddings under heterogeneous feature distributions in multimodal cases. This is mainly achieved by introducing an adaptive margin constraint in terms of the local embedding representation structure. As an extension of N-pair loss \cite{Sohn2016npair}, our method also optimizes the angular distance between samples, which is rotation-invariant and scale-invariant by nature. Furthermore, the adaptive large margin constraint is tactfully constructed by a novel technique of \textbf{\emph{Virtual Point Generating}} (VPG), which artificially maps a well-learned positive data point to a farther place. Then, by optimizing this virtually generated new point well, a large angular margin can be obtained. Moreover, the strength of the margin constraint induced by VPG for an individual pattern is adjustable, quantified by the hyper-parameter $\beta$: with a bigger $\beta$, the ideal margin between samples becomes larger. Our ALMN is a flexible learning objective that can easily be used as a drop-in loss function in end-to-end frameworks and combined with any other hard example mining strategy. To the best of our knowledge, this is the first work to introduce a margin constraint by generating virtual data points for deep embedding learning. Virtual point generating is itself an open question; in this work, we simply consider a geometrical way. Image retrieval and clustering experiments have been performed on several datasets, including CUB-200-2011 \cite{Wah2011The}, CARS196 \cite{Krause20133D}, Flowers102 \cite{Nilsback08}, Aircraft \cite{Maji2013Fine} and Stanford Online Products \cite{oh2016deep}.
\vspace{-1em}
\section{Related Work}
\vspace{-1em}
The key goal of deep embedding learning is to learn a feature representation that keeps the distance between related data points small and that between unrelated data points large in the feature space. Some works jointly optimize contrastive loss and softmax loss for the purpose of discriminative feature learning, such as DeepID2 \cite{sun2014deep} and DeepID2+ \cite{Sun2014Deeply}. FaceNet \cite{Schroff2015FaceNet} proposes triplet loss to improve deep embedding learning without joint training with softmax loss, and many remarkable works use triplet-based objective losses to optimize deep frameworks in many tasks \cite{Parkhi2015Deep,qian2015fine,hoffer2015deep,oh2016deep}. Lifted structure embedding \cite{oh2016deep} compares the distance of each positive pair against all the negative pairs in one mini-batch, aiming to make full use of the mini-batch; to avoid convergence at a bad local optimum, it optimizes a smooth upper bound of nested \emph{max} functions. Local Similarity-Aware \cite{Huang2016Local} generalizes triplet loss to a quadruplet-like loss and selects hard samples by PDDM units. N-pair loss \cite{Sohn2016npair} expands the idea of triplet or quadruplet tuples to N-pair tuples, and enforces a softmax cross-entropy loss among the pairwise similarity values in the batch. We share with N-pair loss the core idea of taking all negative samples in the current mini-batch into consideration, but as an extension, our ALMN can yield a more discriminative embedding even without hard-sample mining, as a consequence of adaptive large margin learning.
The performances of most of the aforementioned works are sensitive to the selected example pairs. Selecting genuinely hard samples to construct a training batch can significantly improve the quality of learning, but it also incurs a high computational cost. Our ALMN does not require hard-class mining (adopted in the original N-pair loss), and thus allows the training of discriminative embeddings at a lower computational cost.
There are some other works aiming at learning discriminative embedding features. Large Margin Nearest Neighbor (LMNN) \cite{Weinberger2006Distance} optimizes the Mahalanobis metric for nearest neighbor classification. Recently, Large Margin Softmax (L-Softmax) \cite{Liu2016Large} encourages an angular decision margin between classes. However, it is designed for softmax and its margin constraint is the same for all patterns, e.g. a double-angle constraint for both easy and hard patterns; it is thus unsuitable for a multimodal feature space, and the convergence of the model is slow. Our ALMN allows a local-adaptive margin constraint and can be successfully applied in multimodal cases.
Some other works have emerged in the deep embedding learning community. Clustering \cite{songCVPR17} formulates the NMI as the objective function and optimizes it in deep models. HDC \cite{Yuan_2017_ICCV} employs cascaded models and selects hard samples at different levels and from different models. Smart-mining \cite{kumar2017smart} combines a local triplet loss and a global loss to optimize the deep metric with hard-sample mining. Sampling-Matters \cite{Wu_2017_ICCV} proposes a distance-weighted sampling strategy and uses a much stronger deep model (ResNet-50) than most existing methods. Angular loss \cite{wang2017deep} optimizes a triangle-based angular function. BIER loss \cite{Opitz_2017_ICCV} adopts an ensemble learning framework of online gradient boosting, which is totally different from our method, which belongs to the single-feature-learning family. Proxy-NCA \cite{Movshovitz-Attias_2017_ICCV} explains why the popular classification loss works from a proxy-agent view, and its implementation is very similar to softmax. In summary, unlike the above methods that investigate ways of mining informative samples or feature ensembles, we mainly focus on introducing an open question, i.e. VPG, to impose a large margin constraint so as to improve the discrimination of deep embedding learning.
\vspace{-1em}
\section{Adaptive Large Margin N-pair Loss}
\vspace{-1em}
In deep embedding learning, our goal is to learn a deep feature embedding $f(X)$ from an input image $X$ into a feature vector $x\in\mathbb{R}^{d}$, such that the similarity $S(x_{i},x_{j})$ between $x_{i}$ and $x_{j}$ is higher when they belong to the same class and lower when they belong to different classes, where $x_{*}$ refers to the feature vector of image $X_{*}$. To ensure intra-class compactness and inter-class separability, we introduce a large margin constraint instead of exploring sample-mining strategies. A related work, L-Softmax \cite{Liu2016Large}, uses a preset and fixed angular margin constraint to enlarge the margin between classes. However, in practical vision tasks the embedding distribution always exhibits a multimodal character due to pose and appearance \cite{Huang2016Local}, so a fixed margin constraint is not suitable. Specifically, a relatively weak constraint contributes little to the optimization of easy patterns, while a rigorous constraint might be too strong to guide the training of hard patterns. In a multimodal situation, learning a discriminative feature embedding by injecting an applicable margin constraint is the appropriate remedy. Therefore, we propose the Adaptive Large Margin N-pair loss (ALMN), which meets the needs of a multimodal feature distribution. Below, we first review N-pair loss, then introduce our basic objective function, and finally present the mainstay of ALMN, i.e. \emph{Virtual Point Generating}.
\vspace{-1em}
\subsection{Review of N-pair Loss and Preliminaries}
N-pair loss \cite{Sohn2016npair} points out that simultaneously optimizing with multiple negative samples can be regarded as an approximation of 'global optimization' and thus improves performance. It is formulated as follows:
\vspace{-0.7em}
\begin{equation}\label{eq5}
L=-\frac{1}{N}\sum_{i}\log{\frac{e^{x_{i}^{T}x_{i^{+}}}}{e^{x_{i}^{T}x_{i^{+}}}+\sum_{y_{j}\neq{y_{i^{+}}}}e^{x_{j}^{T}x_{i^{+}}}}}+\frac{\lambda}{2N}\sum_{i=1}^{N}\|x_{i}\|_{2}^{2}
\end{equation}
where $\lambda$ is a regularization constant for the $L_{2}$ norm and $N$ is the mini-batch size. $x_{i}, x_{i^{+}}, x_{j}$ refer to the positive point, the anchor point and the negative points, respectively. Moreover, minimizing the inner-product-based softmax-like function in Eq. \ref{eq5} implicitly optimizes the angle between samples: since the similarity based on the inner product can be rewritten as $S(x_{i},x_{j})=x_{i}^{T}x_{j}=\|x_{i}\|\|x_{j}\|\cos(\theta)$, in order to correctly separate $x_{i}$ from $x_{j}$, N-pair loss forces $x_{i}^{T}x_{i^{+}}>x_{j}^{T}x_{i^{+}},~\forall{y_{j}\neq{y_{i}}}$, i.e. $\|x_{i}\|\cos\theta_{i}>\|x_{j}\|\cos\theta_{j}$, where $\theta_{i}/\theta_{j}$ is the angle between $x_{i}/x_{j}$ and $x_{i^{+}}$, and this optimization is mainly determined by $\cos(\theta_{.})$, as verified by L-Softmax \cite{Liu2016Large}.
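For illustration, the loss of Eq. \ref{eq5} can be sketched in NumPy as follows. This is a minimal sketch under the usual N-pair batch construction, assuming row $i$ of `X` is $x_i$, row $i$ of `Xp` is the anchor $x_{i^+}$, and the remaining rows of `X` act as the negatives for anchor $i$; the value of `lam` is an assumption.

```python
import numpy as np

def n_pair_loss(X, Xp, lam=0.002):
    """N-pair loss of Eq. (5): softmax cross-entropy over inner products
    against each anchor, plus L2 regularization of the embeddings."""
    N = X.shape[0]
    logits = X @ Xp.T                            # logits[j, i] = x_j^T x_{i^+}
    logits -= logits.max(axis=0, keepdims=True)  # numerical stability per anchor
    p = np.exp(logits)
    # for anchor i: matching pair on the diagonal, softmax over column i
    ce = -np.mean(np.log(np.diag(p) / p.sum(axis=0)))
    reg = lam / (2 * N) * np.sum(X ** 2)
    return ce + reg
```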
\vspace{-0.7em}
\subsection{Basic Objective Function based on Centers}
\vspace{-0.6em}
Minimizing Eq. \ref{eq5} forces $x_{i}^{T}x_{i^{+}}>x_{j}^{T}x_{i^{+}},~\forall{y_{j}\neq{y_{i}}}$ (i.e. $S(x_{i},x_{i^{+}})>S(x_{j},x_{i^{+}})$) in order to correctly separate $x_{i}$ from $x_{j}$; in other words, we intend to push $x_{i}$ close to $x_{i^{+}}$ and pull $x_{j}$ far from $x_{i^{+}}$.
Clearly, the reliability of the location of the anchor point $x_{i^{+}}$ determines the stability of model training, since the anchor point affects the gradient directions, and unstable directions impede training. To this end, we adopt the class center $c_{y_{i}}$ instead of a random positive sample as our anchor point. However, it is impractical to update the class centers with respect to the entire training set at each iteration. We share a similar idea with \cite{wen2016discriminative} and perform the update on the basis of mini-batches. At each iteration, the class centers are updated as follows:
\vspace{-1em}
\begin{equation}\label{eq6}
c_{z}^{t+1}=c_{z}^{t}-\alpha\frac{\sum_{i=1}^{N}\textbf{1}\{y_{i}=z\}\cdot(c_{z}^{t}-x_{i})}{1+\sum_{i=1}^{N}\textbf{1}\{y_{i}=z\}}
\vspace{-0.5em}
\end{equation}
where $\textbf{1}\{\emph{condition}\}=1$ if the \emph{condition} is satisfied and $0$ otherwise, and $\alpha$ is the learning rate of the centers. Our basic objective loss is then:
\vspace{-0.7em}
\begin{small}
\begin{gather}\label{eq13}
L=-\frac{1}{N}\sum_{i}\log\frac{e^{x_{i}^{T}c_{y_{i}}}}{e^{x_{i}^{T}c_{y_{i}}}+\sum_{y_{j}\neq{y_{i}}}e^{x_{j}^{T}c_{y_{i}}}}+\frac{\lambda}{2N}\sum_{i=1}^{N}\|x_{i}\|_{2}^{2}
\end{gather}
\end{small}
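The center update of Eq. \ref{eq6} and the center-anchored objective of Eq. \ref{eq13} can be sketched as follows. This is a minimal NumPy illustration with our own function names; the paper's Caffe implementation may differ in detail:

```python
import numpy as np

def update_centers(centers, x, y, alpha=0.5):
    """Mini-batch center update (Eq. 6).

    centers : (C, d) current class centers c_z
    x, y    : (N, d) batch embeddings and (N,) integer labels
    alpha   : center learning rate
    """
    new_centers = centers.copy()
    for z in range(centers.shape[0]):
        mask = (y == z)
        delta = (centers[z] - x[mask]).sum(axis=0)
        new_centers[z] = centers[z] - alpha * delta / (1 + mask.sum())
    return new_centers

def basic_loss(x, y, centers, lam=0.002):
    """Center-anchored objective (Eq. 13): softmax of x_j^T c_{y_i} plus L2 term."""
    N = x.shape[0]
    total = 0.0
    for i in range(N):
        c = centers[y[i]]
        scores = x @ c                       # x_j^T c_{y_i} for all j in the batch
        logits = np.concatenate(([scores[i]], scores[y != y[i]]))
        logits -= logits.max()               # numerical stability
        total += -(logits[0] - np.log(np.exp(logits).sum()))
    return total / N + lam / (2 * N) * (x ** 2).sum()
```

Note that a class absent from the batch leaves its center unchanged, since the indicator sum in Eq. \ref{eq6} is then zero.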
\vspace{-2.5em}
\subsection{Virtual Point Generating}\label{sec3_1}
\vspace{-0.7em}
However, without hard-class mining, the constraint $x_{i}^{T}c_{y_{i}}>x_{j}^{T}c_{y_{i}},~\forall{y_{j}\neq{y_{i}}}$ \footnote{For simplicity, we consider the binary-class problem, where the label $y\in\{1,2\}$. Multi-class classification complicates the analysis but follows the same mechanism as the binary scenario.} can hardly meet the demands of discriminative embedding learning, since it is easily satisfied and then stops contributing to the parameter update, as shown in Fig.~\ref{fig3}(a), where the decision boundaries of the two classes overlap, yielding separable but not discriminative features. Inspired by L-Softmax \cite{Liu2016Large}, where optimizing a more rigorous objective produces more rigorous decision boundaries and a larger decision margin, we propose \emph{Virtual Point Generating} (VPG), which enhances the constraint by generating a virtual, locally hard point $x_{g}$; the constraint based on this generated point suits a multimodal space better than L-Softmax, producing an adaptive decision margin. We first introduce the general idea of VPG and then explain how to make it adaptive. Since the training of Eq. \ref{eq13} is based on angular optimization, $x_{g}$ is generated in an angular manner; to keep the same amplitude as $x_{i}$, we formulate $x_{g}$ as follows:
\vspace{-0.5em}
\begin{equation}\label{eq2}
x_{g}=\frac{\beta b+x_{i}}{\|\beta b+x_{i}\|}\|x_{i}\|
\vspace{-0.5em}
\end{equation}
As shown in Fig.~\ref{fig3}, the vector $b$ has the same direction as $x_{i}-c_{y_{i}}$ and determines the location of $x_{g}$; we do not focus on its specific value here, which will be investigated later. $\beta$ is a hyper-parameter that further controls the location of $x_{g}$. From the right chart of Figure \ref{fig3} ($\beta=1$), it can be observed that the newly generated point $x_{g}$ has a larger angular distance to the anchor point $c_{y_{i}}$ than $x_{i}$ does. Therefore, to obtain a more rigorous decision boundary, we instead require
\vspace{-0.5em}
\begin{equation}\label{eq1}
x_{g}^{T}c_{y_{i}}>x_{j}^{T}c_{y_{i}},~\forall{y_{j}\neq{y_{i}}}
\vspace{-0.5em}
\end{equation}
Due to the geometric relationship in Figure \ref{fig3}, $x_{i}^{T}c_{y_{i}}>x_{g}^{T}c_{y_{i}}$ always holds; hence, if we can optimize $x_{g}^{T}c_{y_{i}}>x_{j}^{T}c_{y_{i}}$, then $x_{i}^{T}c_{y_{i}}>x_{j}^{T}c_{y_{i}}$ holds automatically. The new objective (Eq.~\ref{eq1}) is thus a stronger constraint for correctly separating $x_{i}$ from $x_{j}$, producing more rigorous decision boundaries.
\begin{figure}[t]
\centering
\includegraphics[width=0.82\linewidth]{vpg.eps}\\
\vspace{-1em}
\caption{Geometric interpretation of VPG ($\beta=1$). The embedding features learned before and after VPG are shown in the left chart; one can observe that the angular margin between the brown and green classes is enlarged by VPG, since the generated purple point is a boundary example and optimizing it benefits discriminative feature learning. The generating process is shown in the right chart.
}\label{fig3}
\vspace{-2em}
\end{figure}
As illustrated in the left chart of Figure \ref{fig3}, optimizing the objective $x_{g}^{T}c_{y_{i}}>x_{j}^{T}c_{y_{i}}$, which implicitly carries a stronger margin constraint, produces a large angular decision margin between classes and encourages both intra-class compactness and inter-class separability. Specifically, as in Fig.~\ref{fig3}(a), before VPG, once the training loss reaches a stable level the data points in feature space have no need to move further, because they already satisfy the constraint $x_{i}^{T}c_{y_{i}}>x_{j}^{T}c_{y_{i}}$. After VPG, as shown in Fig.~\ref{fig3}(b), $x_{i}$ is mapped to a boundary point, or an even harder point, in feature space, i.e. $x_{g}$; to correctly separate $x_{g}$ from $x_{j}$, a new decision boundary is produced, which further pushes $x_{g}$, and thereby $x_{i}$, towards $c_{y_{i}}$ and pulls $x_{j}$ away from $c_{y_{i}}$ in the angular sense, yielding more compact intra-class and more separable inter-class angular distributions. Moreover, as naturally inferred from Figure \ref{fig3}, increasing $\beta$ to a larger value (e.g. $2,3,\ldots$) generates a farther $x_{g}$; in other words, a more rigorous objective is optimized, and thus, in the ideal case, a more discriminative embedding can be achieved.
\textbf{Adaptive Margin}: Without loss of generality, we consider $\beta=1$. As mentioned above, our goal is an adaptive large margin constraint, and from Eq. \ref{eq2} one can observe that $x_{g}$ is mainly determined by the vector $b$. Hence, $b$ should be locally adaptive so that the margin constraint based on $x_{g}$ is applicable in each case, e.g. for both hard and easy patterns. Specifically, considering the local feature space, $b$ should satisfy $\theta^{*}=\theta_{nn}-\theta_{i}$ (as in Figure \ref{fig3}), where $\theta_{nn}$ is the angle between $c_{y_{i}}$ and its nearest negative vector $x_{nn}$, and $\theta_{i}$ is the angle between $c_{y_{i}}$ and $x_{i}$. In summary, since $x_{g}$ is based on $x_{nn}$ and thus takes the local feature structure into account, the margin constraint introduced by $x_{g}$ is adaptive: easy patterns (larger $\theta_{nn}$ and smaller $\theta_{i}$, i.e. larger $\theta^{*}$) are equipped with a relatively stronger constraint, and hard patterns (smaller $\theta_{nn}$ and larger $\theta_{i}$, i.e. smaller $\theta^{*}$) with a weaker one.
To generate $x_{g}$, we need to compute the value of $b$. However, its exact value is problematic in practice: since we adopt random sampling instead of hard negative mining and only one mini-batch is fed into the network per iteration, the $x_{nn}$ found within a mini-batch is not globally optimal and is usually much farther away, resulting in a larger $\theta_{nn}$ (larger $\theta^{*}$), i.e. a larger $\|b\|$; in other words, a farther $x_{g}$ and a non-local margin constraint are introduced. As a consequence, training becomes hard and may even fail. We address this issue by empirically constructing a lower bound vector \footnote{The lower bound vector has the same direction as the original vector, but a smaller amplitude.} of $b$, denoted $b_{L}$, as follows:
\vspace{-0.1em}
\begin{equation}\label{eq3}
b_{L}=\frac{x_{i}-c_{y_{i}}}{\|x_{i}-c_{y_{i}}\|}\|x_{i}\|\sqrt{2-2\cos{(\theta_{nn}-\theta_{i})}}
\vspace{-1.5em}
\end{equation}
\begin{proposition}
$b_{L}$ is a lower bound vector of $b$ as illustrated in Figure \ref{fig4}.
\end{proposition}
\begin{proof}
We provide an explicit geometric interpretation of the lower bound vector $b_{L}$. As shown in Figure \ref{fig4}(a), since $\|x_{g}\|=\|x_{i}\|$, the Law of Cosines in $\triangle{ox_{g}x_{i}}$ gives $\|x_{g}-x_{i}\|=\sqrt{\|x_{g}\|^{2}+\|x_{i}\|^{2}-2\|x_{g}\|\|x_{i}\|\cos{\theta^{*}}}=\|x_{i}\|\sqrt{2-2\cos{(\theta_{nn}-\theta_{i})}}$. Additionally, $x_{i}$ and $x_{g}$ lie on the same circle centered at $o$, and it is easy to show that $\theta_{2}<\theta_{4}<\frac{\pi}{2}<\theta_{3}$; hence, by the Law of Sines in $\triangle{x_{g}bx_{i}}$, $\frac{\|x_{g}-x_{i}\|}{\|b\|}=\frac{\sin{\theta_{2}}}{\sin{\theta_{3}}}<1$. Therefore $\|x_{g}-x_{i}\|<\|b\|$ always holds, and from Eq. \ref{eq3} we have $\|b_{L}\|=\|x_{g}-x_{i}\|$, so $\|b_{L}\|<\|b\|$; moreover, $b_{L}$ and $b$ share the same direction as $x_{i}-c_{y_{i}}$. In conclusion, $b_{L}$ in Eq. \ref{eq3} can be regarded as a lower bound vector of $b$.
\end{proof}
\begin{figure}[h]
\vspace{-1.5em}
\centering
\includegraphics[width=0.8\linewidth]{lower_bound2.eps}\\
\vspace{-1em}
\caption{(a) gives the geometric proof. (b) shows the stable $x_{g}$ generated by $b_{L}$ ($\beta=1$).}\label{fig4}
\vspace{-2em}
\end{figure}
Replacing $b$ in Eq. \ref{eq2} with the lower bound vector $b_{L}$, we can obtain a more stable $x_{g}$ as depicted in Figure \ref{fig4}.(b) and formulate it as follows:\vspace{-0.5em}
\begin{equation}\label{eq11}
\vspace{-0.7em}
x_{g}=\frac{\beta b_{L}+x_{i}}{\|\beta b_{L}+x_{i}\|}\|x_{i}\|=\frac{(M+1)x_{i}-Mc_{y_{i}}}{\|(M+1)x_{i}-Mc_{y_{i}}\|}\|x_{i}\|
\end{equation}
where $M=\frac{\beta\|x_{i}\|\sqrt{2-2\cos{(\theta_{nn}-\theta_{i})}}}{\|x_{i}-c_{y_{i}}\|}$. This addresses, to some extent, the less-than-ideal angular constraint caused by random sampling and mini-batch training. We experimentally find that it works well and also preserves the stability of network optimization.
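Eq. \ref{eq11} can be sketched directly. This is a minimal NumPy illustration with our own naming (`generate_virtual_point` is not from the paper), and $\theta_{nn}$ is assumed to be supplied by the caller:

```python
import numpy as np

def generate_virtual_point(x_i, c, theta_nn, beta=1.0):
    """Virtual point x_g of Eq. 11, built from the lower-bound vector b_L.

    x_i      : (d,) embedding of the sample
    c        : (d,) class center c_{y_i}
    theta_nn : angle (radians) between c and its nearest negative
    beta     : margin hyper-parameter
    """
    cos_i = x_i @ c / (np.linalg.norm(x_i) * np.linalg.norm(c))
    theta_i = np.arccos(np.clip(cos_i, -1.0, 1.0))
    norm_x = np.linalg.norm(x_i)
    # scalar M from Eq. 11; the max() guards against tiny negative round-off
    M = beta * norm_x * np.sqrt(max(2 - 2 * np.cos(theta_nn - theta_i), 0.0)) \
        / np.linalg.norm(x_i - c)
    v = (M + 1) * x_i - M * c
    return v / np.linalg.norm(v) * norm_x   # renormalize to keep ||x_g|| = ||x_i||
```

By construction $x_{g}$ keeps the amplitude of $x_{i}$, sits at a larger angle to the center when $\theta_{nn}>\theta_{i}$, and reduces to $x_{i}$ when $\beta=0$.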
\textbf{Overall Objective}: To optimize the new rigorous objective $x_{g}^{T}c_{y_{i}}>x_{j}^{T}c_{y_{i}}$, we follow N-pair loss and formulate our ALMN loss as follows:
\begin{small}
\begin{gather}\label{eq7}
L=-\frac{1}{N}\sum_{i}\log\frac{e^{x_{g}^{T}c_{y_{i}}}}{e^{x_{g}^{T}c_{y_{i}}}+\sum_{y_{j}\neq{y_{i}}}e^{x_{j}^{T}c_{y_{i}}}}+\frac{\lambda}{2N}\sum_{i=1}^{N}\|x_{i}\|_{2}^{2}
\end{gather}
\end{small}
where $x_{g}$ is given by Eq. \ref{eq11}. Obviously, when $\beta=0$, $x_{g}=x_{i}$, and we take this setting as our baseline. ALMN can be easily optimized with the commonly used SGD and back-propagation algorithms. The gradients with respect to $x_{i}$ and $x_{j}$ are as follows:
\begin{gather}\label{eq14}
\frac{\partial{L}}{\partial{x_{i}}}=\frac{1}{N}(\frac{e^{x_{g}^{T}c_{y_{i}}}}{e^{x_{g}^{T}c_{y_{i}}}+\sum_{y_{j}\neq{y_{i}}}e^{x_{j}^{T}c_{y_{i}}}}-1)\frac{\partial{(x_{g}^{T}c_{y_{i}})}}{\partial{x_{i}}}+\frac{\lambda}{N}x_{i}\\
\frac{\partial{L}}{\partial{x_{j}}}=\frac{1}{N}\frac{e^{x_{j}^{T}c_{y_{i}}}}{e^{x_{g}^{T}c_{y_{i}}}+\sum_{y_{j}\neq{y_{i}}}e^{x_{j}^{T}c_{y_{i}}}}c_{y_{i}}
\vspace{-1em}
\end{gather}
\vspace{-1em}
\begin{align}\label{eq15}
\frac{\partial{(x_{g}^{T}c_{y_{i}})}}{\partial{x_{i}}}=\frac{(M+1)x_{i}^{T}c_{y_{i}}-Mc_{y_{i}}^{T}c_{y_{i}}}{\|(M+1)x_{i}-Mc_{y_{i}}\|\|x_{i}\|}x_{i}+\nonumber
\end{align}
\vspace{-1em}
\begin{align}
\frac{(M+1+\frac{M(x_{i}-c_{y_{i}})^{T}c_{y_{i}}}{\|x_{i}-c_{y_{i}}\|^{2}})\|x_{i}\|c_{y_{i}}+M\frac{(\|x_{i}-c_{y_{i}}\|^{2}-\|x_{i}\|^{2})(x_{i}-c_{y_{i}})^{T}c_{y_{i}}}{\|x_{i}\|\|x_{i}-c_{y_{i}}\|^{2}}x_{i}}{\|(M+1)x_{i}-Mc_{y_{i}}\|}\nonumber
\end{align}
\vspace{-1em}
\begin{align}
-\frac{((M+1)x_{i}-Mc_{y_{i}})^{T}c_{y_{i}}(M(x_{i}-c_{y_{i}})^{T}x_{i}+\|x_{i}\|^{2})((M+1)x_{i}-Mc_{y_{i}})}{\|(M+1)x_{i}-Mc_{y_{i}}\|^{3}\|x_{i}\|}
\end{align}
\vspace{-1em}
\begin{algorithm}
\caption{Training deep model with our ALMN}\label{algorithm}
\begin{algorithmic}[1]
\REQUIRE training set $\{x_{i},y_{i}\}_{i=1}^{N}$ ($N$ denotes the number of images), pre-trained CNN model, hyper-parameter $\beta$.
\end{algorithmic}
\textbf{Training}
\begin{algorithmic}[1]
\FOR{$t:=1\ldots T$}
\FOR{$i:=1\ldots N$}
\STATE adopt $c_{y_{i}}$ as the anchor point, compute $\theta_{nn}$, $\theta_{i}$
\STATE generate $x_{g}$ from $x_{i}$ with Eq.\ref{eq11}.
\STATE compute $Loss$ with Eq.\ref{eq7}, compute gradients with Eq. \ref{eq14}-\ref{eq15}.
\ENDFOR
\STATE update the anchor point with Eq.\ref{eq6}.
\ENDFOR
\end{algorithmic}
\textbf{Output:} Well-trained deep model.
\end{algorithm}
\vspace{-1em}
Finally, we summarize ALMN in Algorithm \ref{algorithm}. Most worthy of mention is the novel concept of VPG, which enhances the margin constraint by generating a virtual boundary point and optimizing $x_{g}^{T}c_{y_{i}}>x_{j}^{T}c_{y_{i}}$ instead of the original $x_{i}^{T}c_{y_{i}}>x_{j}^{T}c_{y_{i}}$. VPG does not prescribe a specific formulation of $x_{g}$; we leave this as an open question, and other ways of generating $x_{g}$ are possible. For the sake of geometric interpretation, we simply adopt Eqs. \ref{eq2} and \ref{eq11}.
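A forward evaluation of the ALMN objective (Eq. \ref{eq7}) can be sketched end-to-end as follows. This is a minimal NumPy illustration with our own naming; for simplicity $\theta_{nn}$ is taken against the nearest (by angle) negative sample within the batch, and no gradients are computed:

```python
import numpy as np

def almn_loss(x, y, centers, beta=1.0, lam=0.002):
    """One evaluation of the ALMN objective (Eq. 7).

    For each sample, the virtual point x_g (Eq. 11) replaces x_i in the
    positive score; theta_nn is measured against the nearest negative
    sample in the batch (a simplification for this sketch).
    """
    N = x.shape[0]
    total = 0.0
    for i in range(N):
        c = centers[y[i]]
        neg = x[y != y[i]]
        def angle(v):
            cos = v @ c / (np.linalg.norm(v) * np.linalg.norm(c))
            return np.arccos(np.clip(cos, -1.0, 1.0))
        theta_i = angle(x[i])
        theta_nn = min(angle(v) for v in neg)
        # Eq. 11: build x_g from the lower-bound vector b_L
        M = beta * np.linalg.norm(x[i]) \
            * np.sqrt(max(2 - 2 * np.cos(theta_nn - theta_i), 0.0)) \
            / np.linalg.norm(x[i] - c)
        v = (M + 1) * x[i] - M * c
        x_g = v / np.linalg.norm(v) * np.linalg.norm(x[i])
        # Eq. 7: softmax of x_g^T c against the negative scores x_j^T c
        logits = np.concatenate(([x_g @ c], neg @ c))
        logits -= logits.max()
        total += -(logits[0] - np.log(np.exp(logits).sum()))
    return total / N + lam / (2 * N) * (x ** 2).sum()
```

For fixed embeddings with $\theta_{nn}>\theta_{i}$, the $\beta>0$ objective is strictly harder than the $\beta=0$ baseline, which is exactly the intended stronger constraint.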
\section{Discussion}
\vspace{-1em}
The ALMN loss encourages an adaptive large angular margin among classes via a novel constraint-construction method, VPG.
It has some nice properties:
\begin{itemize}
\vspace{-0.5em}
\item The core of VPG is to enhance the margin constraint by generating virtual hard points. The overall strength of the constraint is controlled by the hyper-parameter $\beta$: with a larger $\beta$, the ideal margin between classes becomes larger, yielding a more discriminative embedding.
\item For any fixed $\beta$, the angular margin constraint induced by VPG is locally adaptive and varies across instances, since the virtual point is generated on the basis of the local feature structure. Thus, easy patterns are supervised by a stronger constraint, while hard patterns are optimized under a relatively weaker one.
\item Our VPG is a generic method that can easily be combined with other hard-sample-mining methods and model architectures.
\end{itemize}
\textbf{Comparison to N-pair loss}: As an extension of N-pair loss \cite{Sohn2016npair}, our ALMN has two advantages. First, by employing the class centers $c_{y_{i}}$ as anchor points instead of random positive points, the optimization of ALMN is more stable than that of N-pair loss thanks to the correct gradient directions, which improves deep embedding learning, as verified by the comparison between ALMN ($\beta=0$) and N-pair loss in Tables \ref{tab2} and \ref{tab3}. Second, and this is our main contribution, ALMN (e.g. $\beta=3$) significantly encourages a large angular decision margin among classes, yielding a more discriminative feature embedding than N-pair loss; this is mainly achieved by the novel and generic VPG method. Furthermore, ALMN does not require the hard-class mining procedure that N-pair loss uses to construct its training batches.
\textbf{Comparison to other constraint losses}: Noisy-Softmax \cite{Chen_2017_CVPR} imposes annealed noise on Softmax to improve the generalization ability of DCNNs. Our ALMN shares the goal of \cite{Liu2016Large,liu2017sphereface} of enhancing the discriminative power of the learned features by imposing a constraint on the objective function. However, in \cite{Liu2016Large} the constraint is specifically designed for the Softmax layer, and the strength of the margin constraint behind the optimization objective $\|w_{y_{i}}\|\|x_{i}\|\cos{m\theta_{y_{i}}}>\|w_{j}\|\|x_{i}\|\cos\theta_{j}$ is the same for every sample (e.g. $m=2$); such a fixed $m$-times-angle constraint is not applicable to heterogeneous feature distributions. In contrast, our ALMN targets deep embedding learning: for a given $\beta$, the margin constraint behind $x_{g}^{T}c_{y_{i}}>x_{j}^{T}c_{y_{i}}$ is locally adaptive, since the virtual point $x_{g}$ is generated on the basis of its neighbouring feature space rather than at a fixed scale. Moreover, the margin constraint of ALMN is introduced by a \emph{generated virtual point}, which differs from directly setting $m$ as in \cite{Liu2016Large}.
\textbf{Ablation study}: To highlight the effectiveness of our locally adaptive large margin constraint, we conduct a contrast test by modifying our basic objective function (Eq. \ref{eq13}) into an L-Softmax-like loss with a fixed angular margin constraint, as follows:
\begin{footnotesize}
\begin{equation}\label{eq12}
L=-\frac{1}{N}\sum_{i}\log{\frac{e^{\|x_{i}\|\|c_{y_{i}}\|\psi{(\theta_{y_{i}})}}}{e^{\|x_{i}\|\|c_{y_{i}}\|\psi{(\theta_{y_{i}})}}+\sum_{y_{j}\neq{y_{i}}}e^{x_{j}^{T}c_{y_{i}}}}}+\frac{\lambda}{2N}\sum_{i=1}^{N}\|x_{i}\|_{2}^{2}
\end{equation}
\end{footnotesize}
where $\psi{(\theta_{y_{i}})}=(-1)^{k}\cos{(m\theta_{y_{i}})}-2k$, with $\theta_{y_{i}}\in[\frac{k\pi}{m},\frac{(k+1)\pi}{m}]$, is the same as in L-Softmax. We then train the same CNN model with Eq. \ref{eq12} ($m=2$) and Eq. \ref{eq7} ($\beta=2$), respectively. From Figure \ref{fig6}, one can observe that the training loss of L-Softmax ($m=2$) stops decreasing at a high level, implying that it does not converge; we infer that the double-angle constraint may be too strong for some examples (e.g. hard patterns), which disturbs the overall training process. In contrast, the loss of our ALMN drops quickly to a relatively low level, demonstrating that the locally adaptive angular margin constraint can be optimized well and is thus indeed crucial for discriminative embedding learning in multimodal cases.
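For reference, the L-Softmax surrogate $\psi(\theta)$ used in Eq. \ref{eq12} can be sketched as follows (a minimal illustration; $k$ is chosen so that $\theta$ lies in $[\frac{k\pi}{m},\frac{(k+1)\pi}{m}]$, making $\psi$ continuous and monotonically decreasing on $[0,\pi]$):

```python
import numpy as np

def psi(theta, m=2):
    """L-Softmax surrogate psi(theta) = (-1)^k cos(m*theta) - 2k,
    with k picked so that theta is in [k*pi/m, (k+1)*pi/m], theta in [0, pi]."""
    k = min(int(theta * m / np.pi), m - 1)
    return (-1) ** k * np.cos(m * theta) - 2 * k
```

Since $\psi(\theta)\le\cos(\theta)$ on $(0,\pi]$, replacing $\cos(\theta_{y_i})$ by $\psi(\theta_{y_i})$ shrinks the positive score and thus tightens the decision boundary by a fixed $m$-times-angle margin, regardless of the local feature structure.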
\begin{figure}[t]
\vspace{-1em}
\centering
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=1\linewidth]{comparison_to_lsoftmax.eps}\\
\vspace{-1em}
\caption{Training loss on CUB-200-2011 dataset.}\label{fig6}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=1\linewidth]{bar_ebay.eps}\\
\vspace{-1em}
\caption{Mean Recall on Stanford Online Products.}\label{fig7}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=1\linewidth]{bar_aircraft_flowers.eps}\\
\vspace{-1em}
\caption{Mean Recall on Aircraft and Flowers.}\label{fig8}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=1\linewidth]{bar_cub_cars.eps}\\
\vspace{-1em}
\caption{Mean Recall on Cub and Cars.}\label{fig9}
\end{minipage}
\vspace{-1em}
\end{figure}
\vspace{-0.7em}
\section{Experiments and Results}
\vspace{-0.7em}
To demonstrate the effectiveness of our proposed ALMN under multimodal scenarios, we evaluate it on image clustering and retrieval tasks over several benchmark datasets that exhibit a variety of variations, e.g. in pose and appearance. Notably, apart from class labels we do not use any other annotations, such as bounding boxes or part annotations.
\vspace{-1em}
\subsection{Implementation Details}\label{sec5_1}
For the network configuration, we use the ImageNet-pretrained GoogLeNet \cite{Szegedy2014Going} for initialization and finetune it on our target datasets. The last fully connected layer is initialized with random weights, and we fix the embedding size at $512$ throughout all experiments (the performance does not change much with the embedding size, according to \cite{oh2016deep}). We set the dropout ratio to $0.2$. For a fair comparison, we follow the same data preprocessing as \cite{oh2016deep}: all training and testing images are resized to $256\times256$, and mean subtraction is performed. For data augmentation, all training images are randomly cropped to $227\times227$ and randomly mirrored. All of our experiments are implemented with the Caffe library \cite{Jia2014Caffe} with our own modifications.
As mentioned in the previous section, we do not perform hard-class mining. Instead, we construct random batches in an $m\times n$ manner, where $m$ and $n$ denote the number of classes and the number of samples per class, respectively. Note that the classes and samples are all selected randomly. We investigate the effect of different combinations of $m$ and $n$ in the following subsection.
\textbf{Training}: The initial learning rate $\alpha$ is $0.00001$ and is multiplied by $0.8$ at $20k$ iterations. The total number of iterations is $50k$ for CUB, Flowers and Aircraft, and $80k$ for CARS196. We use a weight decay of $0.0002$ and a momentum of $0.9$. Moreover, the regularization constant $\lambda$ for the $L_{2}$ norm is $0.0005$, and we use a $10\times$ learning rate for the feature layer.
\textbf{Evaluation}: As in many other works \cite{oh2016deep,Sohn2016npair,Huang2016Local}, we use the $F_{1}$ and NMI metrics for the image clustering task and the Recall@K metric for the image retrieval task, and we use the simple cosine distance to evaluate the embedding features. We take ALMN ($\beta=0$), i.e. training without VPG ($x_{g}=x_{i}$), as our baseline. For comparison, we evaluate many existing methods and re-implement some of them with the same network and training configurations as ours, including triplet loss \cite{Schroff2015FaceNet}, lifted structured embedding \cite{oh2016deep} and N-pair loss \cite{Sohn2016npair}.
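The Recall@K evaluation with cosine similarity can be sketched as follows (a minimal NumPy illustration with our own naming; a query counts as a hit at $K$ if any of its $K$ nearest neighbours, itself excluded, shares its label):

```python
import numpy as np

def recall_at_k(emb, labels, ks=(1, 2, 4, 8)):
    """Recall@K under cosine similarity for a retrieval test set.

    emb    : (N, d) embedding vectors
    labels : (N,) integer class labels
    ks     : the K values to report
    """
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T
    np.fill_diagonal(sim, -np.inf)        # exclude the query itself
    order = np.argsort(-sim, axis=1)      # neighbours by decreasing similarity
    hits = labels[order] == labels[:, None]
    return {k: float(hits[:, :k].any(axis=1).mean()) for k in ks}
```

Because the embeddings are normalized first, ranking by inner product is equivalent to ranking by cosine distance.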
\vspace{-1em}
\subsection{Component Analysis}
\textbf{Mini-batch combination}:
To obtain a stable location of the anchor point, we employ the class center $c_{y_{i}}$. However, we found experimentally that the composition of the mini-batch is important for the update of $c_{y_{i}}$.
Inspired by N-pair loss, we construct an $m\times n$ mini-batch, where $m$ and $n$ denote the number of classes and the number of samples per class, respectively. Throughout our experiments the product $m\times n$ is fixed. As $n$ increases, more and more positive samples contribute to the update of $c_{y_{i}}$ at the same time, resulting in a more stable and more faithful class center. However, when $n$ is large enough that $m=1$, i.e. in the zero-negative-sample limit, there is no contribution from negative samples and the inter-class separability is no longer guaranteed.
\begin{table*}[ht]
\vspace{-1em}
\centering
\begin{tabular}{c|cccc|cccc}
\hline
& \multicolumn{4}{c|}{CUB-200-2011} & \multicolumn{4}{c}{Cars196} \\
\hline
\hline
$m\times n$ & $65\times2$ & $26\times5$ & $16\times8$ & $8\times16$ & $65\times2$ & $26\times5$ & $16\times8$ & $8\times16$ \\
\hline
Recall@K=1 & 51.1 & \textbf{52.4} & 52.1 & 51.1 & 64.2 & \textbf{71.6} & 69.7 & 68.8 \\
Recall@K=2 & 63.7 & \textbf{64.8} & 64.4 & 64.0 & 75.2 & \textbf{81.3} & 80.5 & 79.1 \\
Recall@K=4 & 74.5 & \textbf{75.4} & 75.6 & 74.6 & 83.7 & \textbf{88.2} & 88.3 & 86.4 \\
Recall@K=8 & 83.6 & \textbf{84.3} & 84.3 & 84.0 & 90.0 & \textbf{93.4} & 92.8 & 92.3 \\
\hline
F1 & 27.2 & \textbf{28.5} & 27.5 &28 & 24.6 & \textbf{29.4} & 26.9 & 25.3 \\
NMI & 59.7 & \textbf{60.7} & 59.6 & 60.3 & 57.9 & \textbf{62.0} & 60.9 & 58.6 \\
\hline
\end{tabular}
\caption{F1, NMI, and Recall@K scores (\%) on CUB-200-2011 \cite{Wah2011The} and CARS196 \cite{Krause20133D} datasets with different combinations of $m\times n$.}
\vspace{-3em}
\label{tab1}
\end{table*}
We evaluate the ALMN loss with different combinations of $m\times n$ on CUB-200-2011 \cite{Wah2011The} and CARS196 \cite{Krause20133D}; the experimental results are listed in Table \ref{tab1}. One can observe that the performance varies with the combination of $m\times n$, even though the total batch sizes are almost the same. As analyzed above, a relatively appropriate combination of $m\times n$ is required for stable training and discriminative embedding learning, and we use the combination $26\times5$ in the following subsections. Notably, although we construct the mini-batch according to this protocol, the selection is totally random and incurs no extra computational cost, since there is no need to evaluate the embedding vectors during batch construction; this differs from the hard-class mining in N-pair loss.
\textbf{Enlarging the angular margin}: We can further enhance the angular margin constraint by increasing the parameter $\beta$, so that a larger decision margin among classes is produced and a more discriminative embedding can be achieved. From Table \ref{tab2}, with $\beta=0$, i.e. in the zero-constraint limit, our baseline obtains relatively low results. With $\beta=1$, ALMN improves the R@1 accuracy by nearly $2\%$ and $4\%$ on the CUB and CARS datasets respectively, verifying the effectiveness of the adaptive large margin constraint induced by VPG. Increasing $\beta$ further, e.g. $\beta=2,3$, improves the performance on all datasets, supporting our initial intuition that a larger decision margin among classes encourages the learning of a discriminative embedding. Similar improvements can be found on the other datasets in Tables \ref{tab3} and \ref{tab4}.
\vspace{-1em}
\subsection{Comparison with State-of-the-art}
\vspace{-0.6em}
\begin{table*}[t]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c|c|c|c|c||c|c|c|c|c|c||c|c}
\hline
\multirow{2}[4]{*}{} & \multicolumn{6}{c|}{CUB-200-2011} & \multicolumn{6}{c}{Cars196} \\
\cline{2-13} & R@1 & R@2 & R@4 & R@8 & F1 & NMI & R@1 & R@2 & R@4 & R@8 & F1 & NMI \\
\hline
\footnotesize{Google\cite{Szegedy2014Going}$^{+}$} & 40.8 & 53.8 & 67.0 & 78.2 & 18.0 & 51.5 & 35.5 & 47.5 & 58.9 & 71.5 & 8.6 & 37.1 \\
\footnotesize{Triplet\cite{Schroff2015FaceNet}$^{+}$} & 36.1 & 48.6 & 59.3 & 70.0 & 15.1 & 49.8 & 39.1 & 50.4 & 63.3 & 74.5 & 16.8 & 51.4 \\
\footnotesize{Lifted\cite{oh2016deep}$^{+}$} & 47.2 & 58.9 & 70.2 & 80.2 & 21.2 & 55.6 & 49.0 & 60.3 & 72.1 & 81.5 & 21.8 & 55.0 \\
\footnotesize{Clustering\cite{songCVPR17}}&48.2 & 61.4 & 71.8 & 81.9 & - & 59.2 & 58.1 & 70.6 & 80.3 & 87.8 & - & 59.4\\
\footnotesize{S-mining\cite{kumar2017smart}}& 49.8& 62.3 & 74.1 & 83.3 & - & 59.9 & 64.7 & 76.2 & 84.2 & 90.2 & - & 59.5\\
\footnotesize{Angular\cite{wang2017deep}}& 53.6 & 65.0 &75.3 &83.7 & 30.2 & 61.0 & 71.3 &80.7 &87.0 &91.8 & 31.8 & 62.4\\
\footnotesize{ N-pair\cite{Sohn2016npair}$^{+}$} & 49.1 & 61.2 & 72.7 & 82.1 & 25.9 & 58.5 & 63.6 & 74.7 & 84.1 & 90.1 & 23.9 & 57.4 \\
\footnotesize{Proxy NCA\cite{Movshovitz-Attias_2017_ICCV}}&49.2&61.9&67.9&72.4&-&59.5&73.2&82.4&86.4&88.7&-&64.9\\
\hline\hline
ALMN ($\beta=0$) & 50.4 & 62.7 & 73.5 & 82.9 & 27.6 & 59.4 & 66.2 & 76.7 & 85.1 & 91.4 & 23.6 & 56.7 \\
ALMN ($\beta=1$) & 52.0 & 64.5 & 74.8 & 83.7 & 28.2 & 60.2 & 70.4 & 80.4 & 87.3 & 92.5 & 26.3 & 59.3 \\
ALMN ($\beta=2$) & 52.2 & 64.7 & 75.3 & 84.2 & 28.2 & 60.7 & 71.3 & 81.2 & 88.1 & 93.1 & 28.3 & 61.5 \\
ALMN ($\beta=3$) & \textbf{\emph{52.4}} & \textbf{\emph{64.8}} & \textbf{\emph{75.4}} & \textbf{\emph{84.3}} & \textbf{\emph{28.5}} & \textbf{\emph{60.7}} & \textbf{\emph{71.6}} & \textbf{\emph{81.3}} & \textbf{\emph{88.2}} & \textbf{\emph{93.4}} & \textbf{\emph{29.4}} & \textbf{\emph{62.0}} \\
\hline
\end{tabular}
}
\caption{Image clustering and retrieval results (\%) on CUB \cite{Wah2011The} and Cars196 \cite{Krause20133D}. $^{+}$ denotes our re-implementation. Our best results are bold-faced.}
\label{tab2}
\vspace{-2em}
\end{table*}
\textbf{CUB-200-2011} dataset \cite{Wah2011The} contains 11,788 bird images from 200 classes. We use the first 100 classes for training (5,864 images) and the remaining 100 classes for testing (5,924 images). We list our experimental results together with those of other state-of-the-art methods in Table \ref{tab2}. One can observe that our baseline ALMN ($\beta=0$) outperforms N-pair loss even without the large margin constraint, demonstrating that a reliable anchor-point location not only stabilizes training but also improves performance. By further introducing an adaptive large angular margin constraint among classes, our ALMN ($\beta=3$) significantly improves the performance, outperforms most existing methods and achieves results comparable to the state of the art, verifying the effectiveness of our adaptive large margin constraint.
\textbf{CARS196} dataset \cite{Krause20133D} contains 16,185 car images from 196 classes. We use the first 98 classes for training (8,054 images) and the remaining 98 classes for testing (8,131 images). The results are listed in Table \ref{tab2}. It can be observed that ALMN ($\beta=0$) performs better than N-pair loss, demonstrating the superiority of our choice of the anchor point $c_{y_{i}}$. ALMN ($\beta=3$) then improves the R@1 result by nearly $5\%$ over the baseline, outperforms most of the other existing methods and obtains results comparable to the state of the art, verifying the effectiveness of our method.
\begin{table*}[t]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c|c|c|c|c||c|c|c|c|c|c||c|c}
\hline
\multirow{2}[4]{*}{} & \multicolumn{6}{c|}{Flowers102} & \multicolumn{6}{c}{Aircraft}\\
\cline{2-13} & R@1 & R@2 & R@4 & R@8 & F1 & NMI & R@1 & R@2 & R@4 & R@8 & F1 & NMI \\
\hline
Googlenet$^{+}$\cite{Szegedy2014Going} &80.5&87.6&92.9&95.7&41.0&63.8&42.0&52.8&64.2&75.6&10.3&30.0\\
Triplet$^{+}$\cite{Schroff2015FaceNet} & 80.3 & 87.2 & 92.0 &95.7 &41.3 &64.0 & 41.8 & 53.5 & 64.4 & 75.3 & 10.7 & 31.3\\
Lifted$^{+}$\cite{oh2016deep} & 82.6 & 89.4 & 93.1 & 96.0 & 43.3 & 65.9 & 53.8 & 67.5 & 77.7 & 85.5 & 23.8 & 51.9\\
n-pair$^{+}$\cite{Sohn2016npair} & 83.3 & 89.9 & 93.9 & 96.4 & 43.2 & 66.1 & 56.1 & 69.0 & 80.2 & 87.7 & 24.7 & 52.4\\
\hline
\hline
ALMN($\beta=0$) & 85.3 &91.4 & 94.7 & 97.2 & 53.1 & 71.5 & 63.5 & 74.2 & 83.3 & 90.0 & 25.7 & 53.3\\
ALMN($\beta=1$) & 88.8 & 93.1 & 95.9 & 98.1 & 56.3 & 75.7 & 67.0 & 78.1 & 86.6 & 91.3 & 29.5 & 56.2\\
ALMN($\beta=2$) & 89.5 & 93.8 & 96.3 & 98.0 & 56.6 & 75.9 & 67.9 & 79.3 & 87.0 & 91.8 & 30.4 & 57.2\\
ALMN($\beta=3$) & \emph{\textbf{90.1}} & \emph{\textbf{94.0}} & \emph{\textbf{96.6}} & \emph{\textbf{98.2}} & \emph{\textbf{57.0}} & \emph{\textbf{76.2}} & \emph{\textbf{68.4}} & \emph{\textbf{79.9}} & \emph{\textbf{87.2}} & \emph{\textbf{92.0}} & \emph{\textbf{30.7}} & \emph{\textbf{57.9}}\\
\hline
\end{tabular}
}
\caption{Image clustering and retrieval results (\%) on Flowers102 \cite{Nilsback08} and the Aircraft dataset \cite{Maji2013Fine}. $^{+}$ denotes our re-implementation. Our best results are bold-faced.}
\label{tab3}
\vspace{-3em}
\end{table*}
\textbf{Flowers102}: The Flowers102 dataset \cite{Nilsback08} contains 8,189 flower images from 102 classes, with between 40 and 258 images per class. We use the first 51 classes for training (3,493 images) and the remaining 51 classes for testing (4,696 images). We implement triplet loss \cite{Schroff2015FaceNet}, lifted structured embedding \cite{oh2016deep} and N-pair loss \cite{Sohn2016npair} with the same network and training configurations as ours and test them with a single crop. From the results in Table \ref{tab3}, our baseline ALMN ($\beta=0$) outperforms the other methods by adopting a stable anchor point, and ALMN ($\beta=3$) further improves the clustering and retrieval performance by learning a discriminative embedding with the adaptive large margin constraint, demonstrating the superiority of our method.
\textbf{Aircraft}: The Aircraft dataset \cite{Maji2013Fine} has 100 aircraft classes with 10,000 images. We use the first 50 classes for training (5,000 images) and the other 50 classes for testing (5,000 images). We again implement triplet loss \cite{Schroff2015FaceNet}, lifted structured embedding \cite{oh2016deep} and N-pair loss \cite{Sohn2016npair} with the same network and training configurations as ours and test them with a single crop. From the results in Table \ref{tab3}, our baseline ALMN ($\beta=0$) outperforms the other methods by adopting a stable anchor point, and ALMN ($\beta=3$) further improves the retrieval and clustering ($F_{1}$) results by nearly $5\%$ and $6\%$ respectively by learning a discriminative embedding with the adaptive large margin constraint.
\begin{table}[H]
\vspace{-2em}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c||c|c|c|c|c|c||c|c}
\hline
& Lifted\cite{oh2016deep} & n-pair\cite{Sohn2016npair} & Clustering\cite{songCVPR17} & Angular\cite{wang2017deep} &HDC\cite{Yuan_2017_ICCV} &BIER\cite{Opitz_2017_ICCV}& ALMN($\beta=0$)& ALMN($\beta=3$) \\
\hline
ensemble &$\times$&$\times$&$\times$&$\times$&$\surd$&$\surd$&$\times$&$\times$\\
\hline
R@1 & 62.5 & 66.4 & 67 & 67.9& 69.5 &72.7 & 69.3&\textbf{\emph{69.9}} \\
\hline
R@10 & 80.8 & 83.2 & 83.6 & 83.2& 84.4 &86.5 & 84.5&\textbf{\emph{84.8}}\\
\hline
R@100 & 91.9 & 93 & 93.2 & 92.2 & 92.8& 94&92.7&\textbf{\emph{92.8}}\\
\hline
\end{tabular}
}
\caption{Results on the Stanford Online Products dataset \cite{oh2016deep}. Our best results are bold-faced.}\label{tab4}
\vspace{-3.5em}
\end{table}
\textbf{Stanford Online Products} The Stanford Online Products dataset \cite{oh2016deep} has $120k$ images of $22k$ online product classes, with $5.3$ images per class on average. Following the zero-shot protocol, we use the first $11318$ classes for training and the remaining $11316$ classes for testing. Final results are shown in Table~\ref{tab4}. Our method ($\beta=3$) achieves appealing results compared both to other single-feature methods and to ensemble-feature methods such as HDC \cite{Yuan_2017_ICCV} and BIER \cite{Opitz_2017_ICCV}, even though ensembles are well known to outperform single features.
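All three benchmarks use the same class-disjoint (zero-shot) protocol: the first block of class ids is used for training and the remaining classes, never seen during training, for testing. This split can be sketched as follows; `zero_shot_split` is an illustrative helper, not code from the paper:

```python
def zero_shot_split(labels, n_train_classes):
    """Class-disjoint train/test split for the zero-shot retrieval
    protocol: the first `n_train_classes` class ids go to train, the
    remaining classes to test (no class appears in both)."""
    classes = sorted(set(labels))
    train_classes = set(classes[:n_train_classes])
    train_idx = [i for i, y in enumerate(labels) if y in train_classes]
    test_idx = [i for i, y in enumerate(labels) if y not in train_classes]
    return train_idx, test_idx

labels = [0, 0, 1, 1, 2, 2, 3, 3]
tr, te = zero_shot_split(labels, 2)
print(tr, te)   # → [0, 1, 2, 3] [4, 5, 6, 7]
```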
The Mean Recall comparisons on these datasets are shown in Figures~\ref{fig9}, \ref{fig8} and \ref{fig7}.
\vspace{-1em}
\subsection{Case Studies}
\vspace{-0.6em}
To illustrate discriminative embedding learning in the multimodal scenario, we provide retrieval cases on the CUB-200-2011 \cite{Wah2011The} and Cars196 \cite{Krause20133D} datasets in Figure~\ref{fig5}. Comparing the top-1 positive and the top-1 negative retrievals shows that the query images are correctly retrieved by our algorithm. By introducing the adaptive large-margin constraint among classes, our ALMN ($\beta=3$) significantly increases the similarity score between the query and the top-1 positive image, implying that intra-class compactness is strengthened; at the same time it significantly reduces the similarity score between the query and the top-1 negative image, demonstrating that our method produces a more separable inter-class distance.
\begin{figure*}[t]
\vspace{-1em}
\centering
\includegraphics[width=1\linewidth]{final.eps}\\
\vspace{-0.5em}
\caption{Retrieval cases on the CUB-200-2011 \cite{Wah2011The} and Cars196 \cite{Krause20133D} datasets. The query images are shown at the top of the figure. Top-1 positive and top-1 negative images retrieved by our ALMN are marked with red and blue boxes, respectively. The similarity scores under ALMN ($\beta=0$) and ALMN ($\beta=3$) are shown, in that order, underneath the images.}\label{fig5}
\vspace{-2em}
\end{figure*}
\vspace{-1em}
\section{Conclusion}
\vspace{-0.6em}
In this paper, we propose ALMN to address discriminative feature learning in a multimodal feature space. It encourages intra-class compactness and inter-class separability by enlarging the angular decision margin among classes, and the resulting margin constraint is locally adaptive. Moreover, the novel concept of VPG enables discriminative embedding learning without hard-example mining, and the virtual point generating method remains an open question that may benefit the community. Extensive quantitative and qualitative results demonstrate the effectiveness of the proposed method.
\bibliographystyle{splncs}
\section{Prologue}
A common feature shared by several constructions involving $t$-structures on triangulated categories is the following. One starts with a (possibly infinite) semiorthogonal decomposition $\{\mathscr{D}_n\}_{n\in I\subseteq \mathbb{Z}}$ whose triangulated subcategories $\mathscr{D}_n$ are endowed with distinguished $t$-structures and, thanks to the vanishing of certain Ext groups, ends up with a new $t$-structure on $\mathscr{D}$ together with a Harder--Narasimhan-type filtration on its heart. In terms of Bridgeland slicings (in the slightly generalized version presented in \cite{gkr,fosco}), this corresponds to trading a $\hat{\mathbb{Z}}\times_{\mathrm{lex}}\mathbb{Z}$-slicing for a ${\mathbb{Z}}\times_{\mathrm{lex}}\hat{\mathbb{Z}}$-slicing, where $\hat{\mathbb{Z}}$ denotes the ordered set of integers (with its usual order) endowed with the trivial $\mathbb{Z}$-action, and the subscript `lex' means that the product is given the lexicographic order. The aim of this note is to characterize the condition allowing this `exchange of factors', which we call `gluability' {as it is a direct generalization of the gluing of stability conditions introduced by Collins and Polishchuk in \cite{collins}, which is in turn} reminiscent of the `recollement situations' considered in \cite{bbd}. Once the gluability condition has been made explicit, it is pretty immediate to identify several examples from the literature where this condition occurs more or less implicitly. In particular, we are able to rediscover Levine's theorem: the Beilinson--Soul\'e vanishing conjecture implies the existence of Tate motives \cite{levine}.
Other examples include Beilinson's construction of a distinguished $t$-structure from a filtered structure on a triangulated category \cite{beil}, the $t$-structure obtained by Macr\`i from exceptional collections with Ext vanishings in \cite{macri}, and the construction of a distinguished $t$-structure on the Fukaya category of the trivial $\mathbb{C}$-bundle over a symplectic manifold $M$ exhibited by Hensel in \cite{lagra}. Moreover, we show how the distinguished $t$-structure built out of a gluable slicing always comes with a whole family of `perverse' variants. For instance, the `perverse motives' considered in \cite{permot} and the `perverse coherent sheaves' considered in \cite{bezr} arise this way. A closely related example is the construction of the exotic $t$-structure on the derived category of $A$-modules, for $A$ a Koszul algebra, obtained by Koszul duality \cite{kosz}.
\vskip .5 cm
In this note we assume the reader is familiar with the language of Bridgeland slicings \cite{bridgeland}, in its generalization to an arbitrary poset $J$ endowed with a $\mathbb{Z}$-action considered in \cite{gkr} and surveyed in \cite{fosco}. We refer to \cite{fosco} for the notation and the definitions used here. In particular we use the language of stable $\infty$-categories; the reader who prefers not to use higher categories will find no difficulty in translating each of the statements and proofs presented here verbatim into the more classical language of triangulated categories.
\section{The gluing procedure}
Let $\mathscr{D}$ be a stable $\infty$-category or, if one prefers a more classical setting, a triangulated category.
If one considers Bridgeland slicings indexed by arbitrary partially ordered sets (endowed with a compatible $\mathbb{Z}$-action) as in \cite{gkr}, then one can think of associating with any $\mathbb{Z}$-poset $J$ the set of $J$-slicings on $\mathscr{D}$. This defines a `slice functor' on the category of $\mathbb{Z}$-posets,
see \cite[Remark 3.12]{fosco}. The slice functor actually hides a greater potential. Indeed, even when we only have a $\mathbb{Z}$-equivariant map rather than a morphism of $\mathbb{Z}$-posets, under suitable conditions the slice functor can still be applied. \\
\subsection{$f$-compatible slicings}
We begin by recalling the construction of the slice functor; see \cite{fosco} for details.
Let $J$ be a $\mathbb{Z}$-toset, i.e., a totally ordered set together with a monotone action of $\mathbb{Z}$, which we will denote by $(n,x)\mapsto x+n$. A $J$-slicing on a stable $\infty$-category $\mathscr{D}$ is a morphism of $\mathbb{Z}$-posets $\mathfrak{t}\colon \mathcal{O}(J)\to \mathrm{ts}(\mathscr{D})$, where $\mathcal{O}(J)$ is the $\mathbb{Z}$-poset of the slicings of $J$ (i.e., the decompositions of $J$ into a disjoint union of a lower set $L$ and of an upper set $U$), and $\mathrm{ts}(\mathscr{D})$ is the $\mathbb{Z}$-poset of $t$-structures on $\mathscr{D}$.
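To fix ideas on $\mathcal{O}(J)$: for a finite totally ordered set, a slicing $(L,U)$ is simply a cut point, so $\mathcal{O}(J)$ has $|J|+1$ elements. A small illustrative sketch, with a sorted Python list standing in for a finite toset (the function name is ours):

```python
def slicings(J):
    """All slicings (L, U) of a finite totally ordered set J, given as a
    sorted list: L is a lower set and U its complementary upper set."""
    return [(J[:k], J[k:]) for k in range(len(J) + 1)]

print(slicings([0, 1, 2]))
# → [([], [0, 1, 2]), ([0], [1, 2]), ([0, 1], [2]), ([0, 1, 2], [])]
```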
Clearly, if $f\colon J\to J'$ is a morphism of $\mathbb{Z}$-tosets, then \[
f^{-1}\colon \{\text{subsets of $J'$}\}\to \{\text{subsets of $J$}\}
\]
induces a $\mathbb{Z}$-equivariant morphism of $\mathbb{Z}$-posets $f^{-1}\colon \mathcal{O}(J')\to \mathcal{O}(J)$, and so composition with $f^{-1}$ gives a morphism
\begin{align*}
f_*\colon J\text{-slicings on $\mathscr{D}$}&\to J'\text{-slicings on $\mathscr{D}$}\\
\mathfrak{t}&\mapsto \mathfrak{t}\circ f^{-1}.
\end{align*}
The slices of $f_*\mathfrak{t}$ are given by $\mathscr{D}_{f_*\mathfrak{t};j}=\mathscr{D}_{\mathfrak{t};f^{-1}(\{j\})}$, for any $j\in J'$. Notice that, as $f$ is monotone, the subset $f^{-1}(\{j\})$ is an interval in $J$. It is immediate to see that $f_*$ restricts to a map
\begin{align*}
f_*\colon \text{Bridgeland $J$-slicings on $\mathscr{D}$}&\to \text{Bridgeland $J'$-slicings on $\mathscr{D}$}.
\end{align*}
Namely, if $\mathcal{H}^j_{f_*\mathfrak{t}}(X)=0$ for every $j\in J'$, then $\mathcal{H}^{f^{-1}(\{j\})}_{\mathfrak{t}}(X)=0$ for every $j$ in $J'$. As we are assuming the $J$-slicing $\mathfrak{t}$ is a Bridgeland slicing, this implies that $\mathcal{H}^{\phi}_{\mathfrak{t}}(X)=0$ for every $\phi$ in $f^{-1}(\{j\})$, for every $j$. Therefore $\mathcal{H}^{\phi}_{\mathfrak{t}}(X)=0$ for every $\phi$ in $J$ and so, again by definition of Bridgeland slicing, $X=0$. Also, if $\mathcal{H}^j_{f_*\mathfrak{t}}(X)\neq 0$ then $\mathcal{H}^{f^{-1}(\{j\})}_{\mathfrak{t}}(X)\neq 0$ and so (again by the Bridgeland slicing condition) there exists at least one element $\phi$ in $f^{-1}(\{j\})$ such that $\mathcal{H}^{\phi}_{\mathfrak{t}}(X)\neq 0$. As the $J$-slicing $\mathfrak{t}$ is Bridgeland, there are only finitely many such $\phi$'s in total, so such a $\phi$ can exist for only finitely many $j$. In other words, the number of indices $j$ in $J'$ such that $\mathcal{H}^j_{f_*\mathfrak{t}}(X)\neq 0$ is finite.
Notice that, if $\mathfrak{t}$ is a Bridgeland slicing of $\mathscr{D}$, and $f\colon J\to J'$ is a morphism of $\mathbb{Z}$-tosets, then for any slicing $(L,U)$ of $J'$, the lower and the upper categories
$\mathscr{D}_{f_*\mathfrak{t};L}$ and $\mathscr{D}_{f_*\mathfrak{t};U}$ can be equivalently defined as
\begin{align*}
\mathscr{D}_{f_*\mathfrak{t};L}&=\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in L}\\
\mathscr{D}_{f_*\mathfrak{t};U}&=\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U}
\end{align*}
where $\langle \mathscr{S}\rangle$ denotes the extension-closed subcategory of $\mathscr{D}$ generated by the subcategory $\mathscr{S}$. In particular the slices of $f_*\mathfrak{t}$ are given by
$\mathscr{D}_{f_*\mathfrak{t};j}=\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)=j}$.\\
\begin{rem}
The right hand sides of the above two expressions can clearly be defined for every map $f$ from $J$ to $J'$ (i.e., not necessarily monotone nor $\mathbb{Z}$-equivariant), and as soon as $f$ is $\mathbb{Z}$-equivariant, the assignment
\[
(L,U)\mapsto (\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in L},\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U})
\]
is an equivariant morphism from $\mathcal{O}(J)$ to pairs of subcategories of $\mathscr{D}$. Clearly, when $f$ is not monotone there is no reason to expect that the pair $(\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in L},\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U})$ forms a $t$-structure on $\mathscr{D}$.
\end{rem}
Yet, it is interesting to notice that the condition that $f$ be monotone is only sufficient in order to have this, and can indeed be relaxed.\\
\begin{defn}\label{compatible}
Let $J$ and $J'$ be $\mathbb{Z}$-tosets, and let $f\colon J\to J'$ be a map of $\mathbb{Z}$-sets (i.e., a $\mathbb{Z}$-equivariant map, not necessarily nondecreasing). A Bridgeland $J$-slicing $\mathfrak{t}$ of $\mathscr{D}$ is said to be \emph{$f$-compatible}
if the condition `$f(\phi)>f(\psi)$ with $\phi\leq \psi$' implies $\mathscr{D}_{\mathfrak{t}; \phi}\boxslash \mathscr{D}_{\mathfrak{t};\psi}$ and $\mathscr{D}_{\mathfrak{t}; \phi}\boxslash \mathscr{D}_{\mathfrak{t};\psi}[1]$.
\footnote{Given two subcategories $\mathscr{S}_1$ and $\mathscr{S}_2$ of $\mathscr{D}$ we write $\mathscr{S}_1\boxslash \mathscr{S}_2$ to mean that $\mathscr{D}(X_1,X_2)$ is contractible for any $X_1\in \mathscr{S}_1$ and any $X_2\in\mathscr{S}_2$. It is easy to see that $\mathscr{S}_1\boxslash \mathscr{S}_2$ implies $\langle\mathscr{S}_1\rangle\boxslash \langle\mathscr{S}_2\rangle$; see \cite[Lemma 4.21]{fosco}.}
\end{defn}
\begin{rem}\label{everything-compatible}
Clearly, if $f$ is monotone, then every $J$-slicing $\mathfrak{t}$ is $f$-compatible as the condition `$f(\phi)>f(\psi)$ with $\phi\leq \psi$' is empty.
\end{rem}
\begin{rem}\label{avanti-e-indietro}
Let $J$ and $J'$ be $\mathbb{Z}$-tosets, let $f\colon J\to J'$ be a map of $\mathbb{Z}$-sets, and let $g\colon J'\to J''$ be an isomorphism of $\mathbb{Z}$-posets. Then a Bridgeland $J$-slicing $\mathfrak{t}$ of $\mathscr{D}$ is $f$-compatible if and only if it is $(g\circ f)$-compatible. Similarly, if $h\colon J''\to J$ is an isomorphism of $\mathbb{Z}$-posets, then $\mathfrak{t}$ is $f$-compatible if and only if $(h^{-1})_*\mathfrak{t}$ is $(f\circ h)$-compatible.
\end{rem}
\begin{lem}\label{is-t-structure}
Let $J$ and $J'$ be $\mathbb{Z}$-tosets, and let $f\colon J\to J'$ be a $\mathbb{Z}$-equivariant morphism of $\mathbb{Z}$-sets (i.e., not necessarily a monotone map) and let $\mathfrak{t}$ be a Bridgeland slicing of $\mathscr{D}$ which is $f$-compatible. Then, for any slicing $(L,U)$ of $J'$, the pair of subcategories
$(\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in L},\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U})$ is a $t$-structure on $\mathscr{D}$.
\end{lem}
\begin{proof}
As $f$ is $\mathbb{Z}$-equivariant and $U+1\subseteq U$, we have
\begin{align*}
\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U}[1]&=\langle \mathscr{D}_{\mathfrak{t};\phi}[1]\rangle_{f(\phi)\in U}\\
&=\langle \mathscr{D}_{\mathfrak{t};\phi+1}\rangle_{f(\phi)\in U}\\
&=\langle \mathscr{D}_{\mathfrak{t};\phi+1}\rangle_{f(\phi+1)\in U+1}\\
&=\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U+1}\\
&\subseteq \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U}
\end{align*}
and similarly for the lower subcategory $\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in L}$. To show that
\[
\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U}\boxslash \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in L}
\]
it suffices to show that, if $f(\psi)\in L$ and $f(\phi)\in U$ then $\mathscr{D}_{\mathfrak{t};\phi}\boxslash \mathscr{D}_{\mathfrak{t};\psi}$. As $L\cap U=\emptyset$ we cannot have $\psi=\phi$, so either $\psi<\phi$ or vice versa. In the first case, $\mathscr{D}_{\mathfrak{t};\phi}\boxslash \mathscr{D}_{\mathfrak{t};\psi}$ by definition of Bridgeland $J$-slicing. In the second case, we have $\phi<\psi$ and $f(\phi)>f(\psi)$ as $f(\phi)\in U$ and $f(\psi)\in L$. Therefore, since $\mathfrak{t}$ is $f$-compatible, $\mathscr{D}_{\mathfrak{t};\phi}\boxslash\mathscr{D}_{\mathfrak{t};\psi}$. Finally, we have to show that every object $X$ in $\mathscr{D}$ fits into a fiber sequence
\[
\xymatrix{
X_U\ar[r]\ar[d] & X\ar[d]\\
0\ar[r] & X_L
}
\]
with $X_L\in \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in L}$ and $X_U\in \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U}$. As $\mathfrak{t}$ is a Bridgeland slicing, we have a factorization of the initial morphism $\mathbf{0} \to X$ of the form
\[
\mathbf{0}=X_0 \xrightarrow{\alpha_1} X_1\cdots \xrightarrow{\alpha_{{\bar\imath}}}X_{{\bar\imath}}\xrightarrow{\alpha_{{\bar\imath}+1}}X_{{\bar\imath}+1}\xrightarrow{}\cdots \xrightarrow{\alpha_n} X_n=X
\]
with $\mathbf{0} \neq \cofib(\alpha_i)=\mathcal{H}_{\mathfrak{t}}^{\phi_i}(X) \in \mathscr{D}_{\phi_i}$ for all $i = 1, \cdots, n$, with $\phi_i>\phi_{i+1}$. Let us now consider the sequence of symbols $L$ and $U$ obtained putting in the $i$-th place $L$ if $f(\phi_i)\in L$ and $U$ if $f(\phi_i)\in U$. If this sequence is of the form $(U,U,\dots,U,L,L,\dots,L)$, then there exists an index $\bar{\imath}$ such that
$f(\phi_{i})\in U$ for $i\leq \bar{\imath}$ and $f(\phi_{i})\in L$ for $i>\bar{\imath}$ (with ${\bar\imath}=-1$ or $n$ when all of the $f(\phi_i)$ are in $L$ or in $U$, respectively). Then we can
consider the pullout diagram
\[
\xymatrix{
X_{{\bar\imath}}\ar[r]\ar[d]_{f_{L}}&0\ar[d]\\
X\ar[r]&\mathrm{cofib}(f_{L})
}\]
together with the factorizations
\[
\mathbf{0}=X_0 \xrightarrow{\alpha_1} X_1\cdots \xrightarrow{\alpha_{{\bar\imath}}}X_{{\bar\imath}}
\]
and
\[
X_{{\bar\imath}}\xrightarrow{\alpha_{{\bar\imath}+1}}X_{{\bar\imath}+1}\xrightarrow{}\cdots \xrightarrow{\alpha_n} X_n=X.
\]
The first factorization shows that $X_{{\bar\imath}}\in \langle\cup_{i=0}^{{\bar\imath}}\mathscr{D}_{\phi_i}\rangle\subseteq \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U}$ while the second factorization shows that $\mathrm{cofib}(f_{L})\in \langle\cup_{i={\bar\imath}+1}^n\mathscr{D}_{\phi_i}\rangle\subseteq \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in L}$. So we are done in this case. Therefore, we are reduced to showing that we can always avoid a $(\dots,L,U,\dots)$ situation in our sequence of $L$'s and $U$'s. Assume we have such a situation. Then we have an index $i_0$ with $f(\phi_{i_0})\in L$ and $f(\phi_{{i_0}+1})\in U$. This in particular implies $f(\phi_{{i_0}+1})>f(\phi_{i_0})$ with $\phi_{{i_0}+1}<\phi_{i_0}$. As $\mathfrak{t}$ is $f$-compatible, this gives $\mathscr{D}_{\mathfrak{t}; \phi_{i_0+1}}\boxslash (\mathscr{D}_{\mathfrak{t};\phi_{i_0}}[1])$. In particular, $\mathscr{D}(\mathcal{H}^{\phi_{i_0+1}}_{\mathfrak{t}}(X),\mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)[1])$ is contractible. Now consider the pasting of pullout diagrams
\[
\xymatrix{X_{i_0-1}\ar[r]^{\alpha_{i_0}}\ar[d] & X_{i_0}\ar[r]^{\alpha_{i_0+1}}\ar[d] & X_{i_0+1}\ar[d]^{\gamma}\\
0\ar[r]&\mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)\ar[d]\ar[r]&Y\ar[d]\ar[r] &0\ar[d]\\
&0\ar[r]&\mathcal{H}^{\phi_{i_0+1}}_{\mathfrak{t}}(X)\ar[r]&\mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)[1].
}
\]
As the arrow $\mathcal{H}^{\phi_{i_0+1}}_{\mathfrak{t}}(X)\to \mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)[1]$ factors through $0$, we have $Y\cong\mathcal{H}^{\phi_{i_0+1}}_{\mathfrak{t}}(X)\oplus \mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)$ and the above diagram becomes
\[
\xymatrix{X_{i_0-1}\ar[r]^{\alpha_{i_0}}\ar[d] & X_{i_0}\ar[r]^{\alpha_{i_0+1}}\ar[d] & X_{i_0+1}\ar[d]^{\gamma}\\
0\ar[r]&\mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)\ar[d]\ar[r]^-{\iota_2}&\mathcal{H}^{\phi_{i_0+1}}_{\mathfrak{t}}(X)\oplus \mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)\ar[d]^{\pi_1}\ar[r] &0\ar[d]\\
&0\ar[r]&\mathcal{H}^{\phi_{i_0+1}}_{\mathfrak{t}}(X)\ar[r]&\mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)[1],
}
\]
where $\iota_2$ and $\pi_1$ are the canonical inclusion and projection. Let $\beta\colon X_{i_0+1}\to \mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)$ be the composition
\[
\beta\colon X_{i_0+1}\xrightarrow{\gamma} \mathcal{H}^{\phi_{i_0+1}}_{\mathfrak{t}}(X)\oplus \mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)\xrightarrow{\pi_2}
\mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X).
\]
Then we have a homotopy commutative diagram
\[
\xymatrix{
X_{i_0-1}\ar[r]^{\alpha_{i_0}}\ar[d]&X_{i_0}\ar[r]^{\alpha_{i_0+1}}\ar[d] & X_{i_0+1}\ar[d]^{\beta}\\
0\ar[r]&\mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)\ar[r]^-{\mathrm{id}}&\mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)
}
\]
(where only the left square is a pullout), and so the composition $\alpha_{i_0+1}\circ \alpha_{i_0}$ factors through the homotopy fiber of $\beta$. In other words, we have a homotopy commutative diagram
\[
\xymatrix{
X_{i_0-1}\ar[r]^{\alpha_{i_0}}\ar[d]_{\tilde{\alpha}_{i_0}}&X_{i_0}\ar[d]^{\alpha_{i_0+1}}\\
\fib(\beta)\ar[r]^{\tilde{\alpha}_{i_0+1}}&X_{i_0+1}
}
\]
Writing $\tilde{X}_{i_0}=\fib(\beta)$, we get the pasting of pullout diagrams
\[
\xymatrix{X_{i_0-1}\ar[r]^{\tilde{\alpha}_{i_0}}\ar[d] & \tilde{X}_{i_0}\ar[r]^{\tilde{\alpha}_{i_0+1}}\ar[d] & X_{i_0+1}\ar[d]^{\gamma}\\
0\ar[r]&\mathcal{H}^{\phi_{i_0+1}}_{\mathfrak{t}}(X)\ar[d]\ar[r]^-{\iota_1}&\mathcal{H}^{\phi_{i_0+1}}_{\mathfrak{t}}(X)\oplus \mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X)\ar[d]_{\pi_2}\\
&0\ar[r]&\mathcal{H}^{\phi_{i_0}}_{\mathfrak{t}}(X).
}
\]
That is, by considering the factorization $X_{i_0-1}\xrightarrow{\tilde{\alpha}_{i_0}} \tilde{X}_{i_0}\xrightarrow{\tilde{\alpha}_{i_0+1}} X_{i_0+1}$ we have switched the cofibers with respect to the original factorization $X_{i_0-1}\xrightarrow{{\alpha}_{i_0}} {X}_{i_0}\xrightarrow{{\alpha}_{i_0+1}} X_{i_0+1}$. Therefore, writing $\tilde{\phi}_{i_0}=\phi_{i_0+1}$ and $\tilde{\phi}_{i_0+1}=\phi_{i_0}$,
we now have $f(\tilde{\phi}_{i_0})\in U$ and $f(\tilde{\phi}_{i_0+1})\in L$. That is, we have removed the $(\dots,L,U,\dots)$ situation from the position $i_0$, replacing it with a $(\dots,U,L,\dots)$ situation, while keeping all the labels $L,U$ before this position unchanged. Repeating the procedure the needed number of times, we eventually get rid of all the $(\dots,L,U,\dots)$ situations.\footnote{This is somehow reminiscent of the `bubble sort' algorithm.}
\end{proof}
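The combinatorial core of the last step of the proof is precisely the swap of adjacent $(L,U)$ label pairs into $(U,L)$ noted in the footnote. The following sketch isolates this `bubble sort' of labels; the categorical exchange of cofibers, which is what the $f$-compatibility hypothesis makes possible at each swap, is of course suppressed:

```python
def sort_labels(seq):
    """Combinatorial shadow of the proof: repeatedly replace an adjacent
    ('L', 'U') pair by ('U', 'L') until the sequence has the form
    (U, ..., U, L, ..., L).  Each swap corresponds to exchanging the two
    cofibers in the tower of the factorization."""
    seq = list(seq)
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(seq) - 1):
            if seq[i] == 'L' and seq[i + 1] == 'U':
                seq[i], seq[i + 1] = 'U', 'L'
                swapped = True
    return seq

print(sort_labels(['U', 'L', 'U', 'L', 'L', 'U']))
# → ['U', 'U', 'U', 'L', 'L', 'L']
```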
\begin{rem}\label{a-closer-look}
It follows from the proof of Lemma \ref{is-t-structure} that the objects $X_L$ and $X_U$ in the fiber sequence $X_U\to X\to X_L$ associated with the $t$-structure $(\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in L},\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U})$ on $\mathscr{D}$ satisfy
\begin{align*}
X_L&\in \langle \mathscr{D}_{\mathfrak{t};\phi};\quad f(\phi)\in L \text{ and }\mathcal{H}^\phi_\mathfrak{t}(X)\neq 0\rangle\\
X_U&\in \langle \mathscr{D}_{\mathfrak{t};\phi};\quad f(\phi)\in U \text{ and }\mathcal{H}^\phi_\mathfrak{t}(X)\neq 0\rangle
\end{align*}
\end{rem}
\begin{lem}\label{slices}
For any $j\in J'$ we have
\[
\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)=j}=\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\leq j}\cap \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\geq j}.
\]
\end{lem}
\begin{proof}
Clearly, $\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)=j}\subseteq \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\leq j}\cap \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\geq j}$, therefore we only need to prove the converse inclusion. Let $X\in \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\leq j}\cap \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\geq j}$. Then in particular
$X\in \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\geq j}$
and so there exists a factorization of the initial morphism $\mathbf{0} \to X$ of the form
\[
\mathbf{0}=X_0 \xrightarrow{\alpha_1} X_1\xrightarrow{\alpha_2}\cdots \xrightarrow{\alpha_{n-1}}X_{n-1}\xrightarrow{\alpha_n} X_n=X
\]
with $\mathbf{0} \neq \cofib(\alpha_i) \in \mathscr{D}_{\phi_i}$ with $f(\phi_i)\geq j$ for all $i = 1, \cdots, n$. As $((-\infty,j],(j,+\infty))$ is a slicing of $J'$, reasoning as in the proof of Lemma \ref{is-t-structure} we can arrange this factorization in such a way that $f(\phi_i)>j$ for $i\leq \bar{\imath}$ and $f(\phi_i)= j$ for $i>\bar{\imath}$. Therefore, again by reasoning as in the proof of Lemma \ref{is-t-structure}, we get a fiber sequence of the form
\[
\xymatrix{
X_{>j}\ar[r]\ar[d] & X\ar[d]\\
0\ar[r]&X_{j}
}
\]
with $X_{>j}$ in $\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)>j}$ and $X_j$ in $\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)=j}$. This is in particular a fiber sequence of the form $X_{>j}\to X\to X_{\leq j}$, with $X_{\leq j}\in \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\leq j}$.
As $(\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\leq j},\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)> j})$ is a $t$-structure on $\mathscr{D}$ by Lemma \ref{is-t-structure}, there is (up to equivalence) only one such fiber sequence. And since $X\in \langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\leq j}$, this is the sequence $0\to X\xrightarrow{\mathrm{id}} X$. Therefore $X=X_j$.
\end{proof}
\begin{prop}
Let $J,J'$ be $\mathbb{Z}$-tosets, and let $f\colon J\to J'$ be a $\mathbb{Z}$-equivariant morphism of $\mathbb{Z}$-sets (i.e., not necessarily a monotone map) and let $\mathfrak{t}$ be a Bridgeland slicing of $\mathscr{D}$ which is $f$-compatible. The map
\begin{align*}
f_!\mathfrak{t}\colon \mathcal{O}(J')&\to\mathrm{ts}(\mathscr{D})\\
(L,U)&\mapsto (\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in L},\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)\in U})
\end{align*}
defined by Lemma \ref{is-t-structure} is a Bridgeland $J'$-slicing of $\mathscr{D}$, with slices given by
\[
\mathscr{D}_{f_!\mathfrak{t};j}=\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)=j}.
\]
\end{prop}
\begin{proof}
The map $f_!\mathfrak{t}$ is manifestly monotone and $\mathbb{Z}$-equivariant (see the first part of the proof of Lemma \ref{is-t-structure}), so it is a $J'$-slicing of $\mathscr{D}$, and its slices are given by $\mathscr{D}_{f_!\mathfrak{t};j}=
\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)=j}$ by Lemma \ref{slices}. We are therefore left with showing that it is finite and discrete. Given an object $X$ in $\mathscr{D}$, let $\{\phi_1,\dots,\phi_n\}$ be the indices in $J$ such that $\mathcal{H}^{\phi_i}_\mathfrak{t}(X)\neq 0$ and let $\{j_1,\dots,j_k\}$ be the image of the set $\{\phi_1,\dots,\phi_n\}$ via $f$. Up to renaming, we can assume $j_1>j_{2}>\cdots>j_k$. Consider now the factorization
\[
\mathbf{0}=X_0 \xrightarrow{\alpha_1} X_1\xrightarrow{\alpha_2}\cdots \xrightarrow{\alpha_{k-1}}X_{k-1}\xrightarrow{\alpha_k} X_k=X
\]
of the initial morphism of $X$ associated to the decreasing sequence $j_1>j_2>\cdots>j_k$ by the $J'$-slicing $f_!\mathfrak{t}$. The cofibers of the morphisms $\alpha_{i}$ are the cohomologies
$\mathcal{H}^{(j_{i+1},j_{i}]}_{f_!\mathfrak{t}}(X)$ and, by Remark \ref{a-closer-look}, we have
\begin{align*}
\mathcal{H}^{(j_{i+1},j_{i}]}_{f_!\mathfrak{t}}(X)\in& \langle \mathscr{D}_{\mathfrak{t};\phi};\quad f(\phi)\in (j_{i+1},j_{i}] \text{ and }\mathcal{H}^\phi_\mathfrak{t}(X)\neq 0\rangle\\
&=\langle \mathscr{D}_{\mathfrak{t};\phi};\quad \phi\in\{\phi_1,\dots,\phi_n\}\text{ and } f(\phi)\in(j_{i+1},j_{i}]\rangle\\
&=\langle \mathscr{D}_{\mathfrak{t};\phi};\quad \phi\in\{\phi_1,\dots,\phi_n\}\text{ and } f(\phi)=j_{i}\rangle\\
&\subseteq \langle \mathscr{D}_{\mathfrak{t};\phi};\quad f(\phi)=j_{i}\rangle=\mathscr{D}_{f_!\mathfrak{t};j_{i}}.
\end{align*}
Therefore
\[
\mathcal{H}^{(j_{i+1},j_{i}]}_{f_!\mathfrak{t}}(X)=\mathcal{H}^{j_i}_{f_!\mathfrak{t}}\mathcal{H}^{(j_{i+1},j_{i}]}_{f_!\mathfrak{t}}(X)=\mathcal{H}^{j_i}_{f_!\mathfrak{t}}(X).
\]
This tells us that, if all the cohomologies $\mathcal{H}^{j}_{f_!\mathfrak{t}}(X)$ vanish, then also all the $\mathcal{H}^{(j_{i+1},j_{i}]}_{f_!\mathfrak{t}}(X)$ vanish, and so $X=0$. Finally, if $\tilde{j}\notin\{j_1,\dots,j_k\}$, then there exists an index $i$ such that $j_{i+1}<\tilde{j}<j_{i}$ and so $(j_{i+1},\tilde{j}]\cap \{j_1,\dots,j_k\}=\emptyset$. The above argument then shows that
\[
\mathcal{H}^{(j_{i+1},\tilde{j}]}_{f_!\mathfrak{t}}(X)\in\langle \mathscr{D}_{\mathfrak{t};\phi};\quad \phi\in\{\phi_1,\dots,\phi_n\}\text{ and } f(\phi)\in(j_{i+1},\tilde{j}]\rangle=\{\mathbf{0}\}.
\]
It follows that
\[
\mathcal{H}^{\tilde{j}}_{f_!\mathfrak{t}}(X)=\mathcal{H}^{\tilde{j}}_{f_!\mathfrak{t}}\mathcal{H}^{(j_{i+1},\tilde{j}]}_{f_!\mathfrak{t}}(X)=\mathcal{H}^{\tilde{j}}_{f_!\mathfrak{t}}(0)=0,
\]
for every $\tilde{j}\notin \{j_1,\dots,j_k\}$. So in particular, for any $X\in \mathscr{D}$, the cohomologies $\mathcal{H}^{j}_{f_!\mathfrak{t}}(X)$ are possibly nonzero only for finitely many indices $j$.
\end{proof}
\begin{rem}
If $f\colon J\to J'$ is a morphism of $\mathbb{Z}$-tosets, then every Bridgeland slicing $\mathfrak{t}$ of $\mathscr{D}$ is $f$-compatible and one has $f_!\mathfrak{t}=f_*\mathfrak{t}$.
\end{rem}
\begin{prop}\label{functoriality}
Let $J,J',J''$ be $\mathbb{Z}$-tosets, let $f\colon J\to J'$ and $g\colon J'\to J''$ be $\mathbb{Z}$-equivariant morphisms of $\mathbb{Z}$-sets (i.e., not necessarily monotone maps), and let $\mathfrak{t}$ be a Bridgeland slicing of $\mathscr{D}$. If $\mathfrak{t}$ is $f$-compatible and $f_!\mathfrak{t}$ is $g$-compatible, then $\mathfrak{t}$ is $(g\circ f)$-compatible, and
\[
(g\circ f)_!\mathfrak{t}= g_!f_!\mathfrak{t}.
\]
\end{prop}
\begin{proof}
Let $\phi,\psi$ in $J$ be such that $\phi\leq \psi$ and $g(f(\phi))>g(f(\psi))$, and let $\xi=f(\phi)$ and $\eta=f(\psi)$. As $J'$ is a totally ordered set, either $\xi\leq \eta$ or $\xi>\eta$. In the first case we have $g(\xi)>g(\eta)$ with $\xi\leq \eta$. As $f_!\mathfrak{t}$ is $g$-compatible, this implies $\mathscr{D}_{f_!\mathfrak{t}; \xi}\boxslash \mathscr{D}_{f_!\mathfrak{t};\eta}$ and $\mathscr{D}_{f_!\mathfrak{t}; \xi}\boxslash \mathscr{D}_{f_!\mathfrak{t};\eta}[1]$. By definition of $f_!\mathfrak{t}$ we have $\mathscr{D}_{\mathfrak{t}; \phi}\subseteq \mathscr{D}_{f_!\mathfrak{t}; \xi}$ and $\mathscr{D}_{\mathfrak{t}; \psi}\subseteq \mathscr{D}_{f_!\mathfrak{t}; \eta}$, therefore we have $\mathscr{D}_{\mathfrak{t}; \phi}\boxslash \mathscr{D}_{\mathfrak{t};\psi}$ and $\mathscr{D}_{\mathfrak{t}; \phi}\boxslash \mathscr{D}_{\mathfrak{t};\psi}[1]$. In the second case we have $f(\phi)>f(\psi)$ with $\phi\leq \psi$. As $\mathfrak{t}$ is $f$-compatible, this again
implies $\mathscr{D}_{\mathfrak{t}; \phi}\boxslash \mathscr{D}_{\mathfrak{t};\psi}$ and $\mathscr{D}_{\mathfrak{t}; \phi}\boxslash \mathscr{D}_{\mathfrak{t};\psi}[1]$. Finally, for any $j\in J''$ one has
\[
\mathscr{D}_{g_!f_!\mathfrak{t};j}=\langle \mathscr{D}_{f_!\mathfrak{t};\xi}\rangle_{g(\xi)=j}=\langle\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{f(\phi)=\xi}\rangle_{g(\xi)=j}=\langle \mathscr{D}_{\mathfrak{t};\phi}\rangle_{g(f(\phi))=j}=\mathscr{D}_{(g\circ f)_!\mathfrak{t};j}.
\]
\end{proof}
\subsection{The exchange map}
Let now $J_1$ and $J_2$ be two $\mathbb{Z}$-tosets, and let $J_1 \times_{\textnormal{lex}} J_2$ and $J_2 \times_{\textnormal{lex}} J_1$ be the two $\mathbb{Z}$-tosets obtained by considering the lexicographic order on the products $J_1\times J_2$ and $J_2\times J_1$, respectively, and the diagonal $\mathbb{Z}$-action. The following lemma is immediate.
\begin{lem}\label{exchange}
The exchange map
\begin{align*}
e\colon J_1\times J_2&\to J_2\times J_1\\
(j_1,j_2)&\mapsto (j_2,j_1)
\end{align*}
is $\mathbb{Z}$-equivariant.
\end{lem}
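Concretely, $\mathbb{Z}$-equivariance of the exchange map can be checked directly, while $e$ generally fails to be monotone for the lexicographic orders — which is exactly why the compatibility condition of Definition \ref{compatible} is needed. A small sanity check (the encodings of the actions are ours, chosen as in the case $J_1=\{0,1\}$ with trivial action and $J_2=\mathbb{Z}$ with translation):

```python
# Z-tosets: J1 = {0,1} with trivial Z-action, J2 = Z with translation;
# the product carries the diagonal action and the lexicographic order.
act_J1 = lambda j, n: j            # trivial action
act_J2 = lambda j, n: j + n        # translation action

def act_prod(p, n, a1, a2):
    """Diagonal Z-action on a product of Z-sets."""
    return (a1(p[0], n), a2(p[1], n))

e = lambda p: (p[1], p[0])         # the exchange map

def lex_lt(p, q):
    return p < q                   # Python tuples compare lexicographically

# e is Z-equivariant: e(p + n) = e(p) + n ...
p, n = (1, 3), 5
assert e(act_prod(p, n, act_J1, act_J2)) == act_prod(e(p), n, act_J2, act_J1)

# ... but not monotone: (0,1) < (1,0) in J1 x_lex J2,
# while e sends them to (1,0) > (0,1) in J2 x_lex J1.
assert lex_lt((0, 1), (1, 0)) and not lex_lt(e((0, 1)), e((1, 0)))
```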
\begin{defn}\label{gluable}
Let $J_1$ and $J_2$ be $\mathbb{Z}$-tosets. A Bridgeland $J_1 \times_{\textnormal{lex}} J_2$-slicing $\mathfrak{t}$ of $\mathscr{D}$ is said to be \emph{gluable} if it is $e$-compatible, where $e \colon J_1 \times J_2 \to J_2 \times J_1$ is the exchange map.
\end{defn}
\begin{exmp}\label{example:bbd} Let $J_1=\{0,1\}$ with the trivial $\mathbb{Z}$-action and let $J_2=\mathbb{Z}$ with the standard translation $\mathbb{Z}$-action. Then a $J_1\times_{\textnormal{lex}} J_2$-slicing $\mathfrak{t}$ on $\mathscr{D}$ is the datum of a semiorthogonal decomposition $(\mathscr{D}_0,\mathscr{D}_1)$ of $\mathscr{D}$ together with $t$-structures $\mathfrak{t}_i$ on $\mathscr{D}_i$. Spelling out the definition, one sees that the $\{0,1\}\times_{\textnormal{lex}} \mathbb{Z}$-slicing $\mathfrak{t}$ is gluable if and only if
$\mathscr{D}_{0;\geq 0}\boxslash \mathscr{D}_{1;-1}$ and $\mathscr{D}_{0;\geq 0}\boxslash \mathscr{D}_{1;0}$, and so if and only if $\mathscr{D}_{0;\geq 0}\boxslash \mathscr{D}_{1;0}$. {As $\mathscr{D}_{0;\geq 0}$ is generated by the subcategories $\mathscr{D}_{0}^\heartsuit[k]$ for $k\geq 0$, this is equivalent to $\mathscr{D}_{0}^\heartsuit\boxslash\mathscr{D}_{1}^\heartsuit[n]$ for all $n\leq 0$.} When this happens, the glued slicing $e_!\mathfrak{t}$ is a $\mathbb{Z}\times_{\textnormal{lex}}\{0,1\}$ slicing on $\mathscr{D}$, i.e., is the datum of a bounded $t$-structure on $\mathscr{D}$ together with a torsion theory on its heart. More precisely, the heart of $e_!\mathfrak{t}$ is the full $\infty$-subcategory $\mathscr{D}^{\heartsuit_{e_!\mathfrak{t}}}$ of $\mathscr{D}$ on those objects $X$ that fall into fiber sequences of the form $X_1\to X\to X_0$ with $X_i\in \mathscr{D}_i^\heartsuit$, while the torsion theory on $\mathscr{D}^{\heartsuit_{e_!\mathfrak{t}}}$ is
$(\mathscr{D}_0^{\heartsuit},\mathscr{D}_1^{\heartsuit})$, i.e., precisely the pair of hearts of the two subcategories in the semiorthogonal decomposition. \\
This is exactly the setup of \cite{collins}, which on the other hand is a particular case of the classical gluing construction of $t$-structures by \cite{bbd}, explaining both our nomenclature and motivation.
\end{exmp}
\begin{lem}\label{incolla2}
Let $J_1$ and $J_2$ be $\mathbb{Z}$-tosets, and let $\mathfrak{t}$ be a Bridgeland $J_1 \times_{\textnormal{lex}} J_2$-slicing of $\mathscr{D}$. If $\mathfrak{t}$ is gluable and $\mathbb{Z}$ acts trivially on $J_1$, then $e_! \mathfrak{t}$ is a gluable $J_2 \times_{\textnormal{lex}} J_1$-slicing of $\mathscr{D}$.
\end{lem}
\begin{proof}
Let $(j_2,j_1)\leq (j_2',j_1')$ in $J_2 \times_{\textnormal{lex}} J_1$ be such that $(j_1,j_2)>(j_1',j_2')$ in $J_1 \times_{\textnormal{lex}} J_2$.
As $\mathfrak{t}$ is gluable, we have
\[
\mathscr{D}_{e_!\mathfrak{t};(j_2,j_1)}=\langle \mathscr{D}_{\mathfrak{t};(i_1,i_2)}\rangle_{e(i_1,i_2)=(j_2,j_1)}=\langle \mathscr{D}_{\mathfrak{t};(j_1,j_2)}\rangle=\mathscr{D}_{\mathfrak{t};(j_1,j_2)},
\]
since slices are extension closed. As $(j_1,j_2)>(j_1',j_2')$, we have $\mathscr{D}_{\mathfrak{t};(j_1,j_2)}\boxslash \mathscr{D}_{\mathfrak{t};(j_1',j_2')}$ by definition of slicing, and so $\mathscr{D}_{e_!\mathfrak{t};(j_2,j_1)}\boxslash\mathscr{D}_{e_!\mathfrak{t};(j_2',j_1')}$. Finally,
\[
\mathscr{D}_{e_!\mathfrak{t};(j_2,j_1)}[1]=\mathscr{D}_{\mathfrak{t};(j_1,j_2)}[1]=\mathscr{D}_{\mathfrak{t};(j_1,j_2)+1}=\mathscr{D}_{\mathfrak{t};(j_1,j_2+1)},
\]
since the $\mathbb{Z}$-action on $J_1$ is trivial. The condition $(j_2,j_1)\leq (j_2',j_1')$ with $(j_1,j_2)>(j_1',j_2')$ is actually equivalent to $j_2<j_2'$ and $j_1>j_1'$, so in particular we have $(j_1,j_2)>(j_1',j_2'+1)$. This implies $\mathscr{D}_{\mathfrak{t};(j_1,j_2)}\boxslash \mathscr{D}_{\mathfrak{t};(j_1',j_2'+1)}$ and so $\mathscr{D}_{e_!\mathfrak{t};(j_2,j_1)}\boxslash\mathscr{D}_{e_!\mathfrak{t};(j_2',j_1')}[1]$.
\end{proof}
\subsection{Upper graphs and monotone maps}
\begin{defn}
Let $U\colon J \to \mathcal{O}(J')$ be a map of sets. We denote by $\Gamma_{U}$ the subset of $J\times J'$ defined by
\[
\Gamma_U=\{ (j,j') \in J \times J'; \quad j' \in U_{j} \}.
\]
For $f\colon J\to J'$ a map of sets, we denote by $(<f,\geq f)\colon J \to \mathcal{O}(J')$ the composition of $f$ with the map
\begin{align*}
J'&\to \mathcal{O}(J')\\
j'&\mapsto ((-\infty,j'),[j',+\infty)).
\end{align*}
The upper graph of $f$ is the subset $\Gamma_{\geq f}$ of $J\times J'$.
\end{defn}
\begin{lem}\label{decreasing-gives-upper-set}
Let $U \colon J \to \mathcal{O}(J')$ be a map of sets. Then $\Gamma_U$
is an upper set of $J \times J'$ with the \emph{product order} if and only if $U$ is a map of posets $U\colon J\to \mathcal{O}(J')^\textnormal{op}$. In particular, $\Gamma_{\geq f}$ is an upper set if and only if $f\colon J\to J'$ is a nonincreasing map, i.e., a map of posets $J\to J'^\textnormal{op}$.
\end{lem}
\begin{proof}Assume $(L,U)$ is a map of posets from $J$ to $\mathcal{O}(J')^\textnormal{op}$.
Pick $(j,j') \in \Gamma_U$ and suppose $(j,j') \leq (k,k')$ in $J \times J'$. As $(L,U)$ is monotone and $k\geq j$, we have $U_j\subseteq U_k$; as $(j,j')\in \Gamma_U$, we have $j' \in U_j$. Therefore $j'\in U_k$. Since $U_k$ is an upper set and $k'\geq j'$, we get $k'\in U_k$, i.e., $(k,k')\in \Gamma_U$. Vice versa, assume $\Gamma_U$ is an upper set, and let $j\leq k$ in $J$. For any $j'\in U_j$, the element $(k,j')$ in $J\times J'$ satisfies $(k,j')\geq (j,j')$ in the product order and so $(k,j')\in \Gamma_U$. This means $j'\in U_k$ and so $U_j\subseteq U_k$. To prove the second part of the statement, notice that the map $j'\mapsto ((-\infty,j'),[j',+\infty))$ is a map of posets from $J'$ to $\mathcal{O}(J')$. Therefore, if $f\colon J\to J'^\textnormal{op}$ is a map of posets, then also $(<f,\geq f)\colon J \to \mathcal{O}(J')^\textnormal{op}$ is a map of posets. Vice versa, if $(<f,\geq f)\colon J \to \mathcal{O}(J')^\textnormal{op}$ is a map of posets then for every $j\leq k$ in $J$ we have $[f(j),+\infty)\subseteq [f(k),+\infty)$ and so $f(k)\leq f(j)$.
\end{proof}
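For a finite example, the equivalence in the lemma can be verified by exhaustive enumeration. The following Python sketch, which is only an illustration and not part of the text (all helper names are ours), takes $J=J'=\{0,1,2\}$ with the usual order and checks, over all $4^3$ assignments of upper sets $U_0,U_1,U_2$, that $\Gamma_U$ is an upper set for the product order exactly when $U_0\subseteq U_1\subseteq U_2$:

```python
from itertools import product

J = [0, 1, 2]
# the four upper sets of the chain {0, 1, 2}
uppersets = [frozenset(), frozenset({2}), frozenset({1, 2}), frozenset({0, 1, 2})]

def graph_is_upper(U):
    # Gamma_U as a set of pairs (j, j')
    G = {(j, jp) for j in J for jp in U[j]}
    # upper set for the product order: closed under increasing either coordinate
    return all((k, kp) in G
               for (j, jp) in G
               for k in J for kp in J
               if k >= j and kp >= jp)

# enumerate all 4^3 assignments j -> U_j of upper sets of J'
for U in product(uppersets, repeat=3):
    monotone = U[0] <= U[1] <= U[2]  # frozenset <= is subset inclusion
    assert graph_is_upper(U) == monotone
```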
\begin{prop}\label{graphs}
The map
\begin{align*}
\Gamma \colon \textnormal{Pos}(J,\mathcal{O}(J')^\textnormal{op})^\textnormal{op}&\to \mathcal{O}(J \times J')\\
U & \mapsto \Gamma_U
\end{align*}
is an isomorphism of posets.
\end{prop}
\begin{proof}
Let $U_1\leq U_2$ in the partial order on $\textnormal{Pos}(J,\mathcal{O}(J')^\textnormal{op})^\textnormal{op}$, and let $(j,j')\in \Gamma_{U_2}$. Then $j'\in U_{2;j}\subseteq U_{1;j}$ and so $(j,j')\in \Gamma_{U_1}$. This means $\Gamma_{U_1}\leq \Gamma_{U_2}$ in $\mathcal{O}(J \times J')$.
Pick an upper set $\tilde{U}$ of $J \times J'$ and set, for $j \in J$,
\[
U_{\tilde{U}}(j)=\{ j' \in J'; \quad (j,j') \in \tilde{U} \}.
\]
The subset $U_{\tilde{U}}(j)\subseteq J'$ is an upper set. Indeed, if $j'\in U_{\tilde{U}}(j)$ and $k'\geq j'$ in $J'$, then $(j,k')\geq (j,j')$ in the product order on $J\times J'$ and so $(j,k')\in \tilde{U}$ as $\tilde{U}$ is an upper set, i.e., $k'\in U_{\tilde{U}}(j)$. Next, if $j\leq k$ in $J$ and $j'\in U_{\tilde{U}}(j)$, then $(k,j')\in \tilde{U}$ as $\tilde{U}$ is an upper set, and so $j'\in U_{\tilde{U}}(k)$. This shows that $U_{\tilde{U}}(j)\subseteq U_{\tilde{U}}(k)$, and so $U_{\tilde{U}}$ is a map of posets from $J$ to $\mathcal{O}(J')^\textnormal{op}$. Moreover, if $\tilde{U}_1\leq \tilde{U}_2$ in $\mathcal{O}(J\times J')$ then $U_{\tilde{U}_1}\leq U_{\tilde{U}_2}$ in $\textnormal{Pos}(J,\mathcal{O}(J')^\textnormal{op})^\textnormal{op}$. Therefore we have a map
\begin{align*}
\gamma\colon \mathcal{O}(J \times J') &\to \textnormal{Pos}(J,\mathcal{O}(J')^\textnormal{op})^\textnormal{op} \\
\tilde{U} & \mapsto U_{\tilde{U}}.
\end{align*}
This map is readily seen to be the inverse of $\Gamma$.
\end{proof}
\subsection{Perversities}
Assume $J$ and $J'$ are $\mathbb{Z}$-posets. Then $\mathcal{O}(J)^\textnormal{op}$ is a $\mathbb{Z}$-poset with the generator $1$ acting as $U\mapsto U-1$. The action of $\mathbb{Z}$ by conjugation on $\textnormal{Pos}(J,\mathcal{O}(J')^\textnormal{op})$ is therefore given by $(U\dotplus 1)_j=U_{j-1}-1$. To see that $U\dotplus 1$ is still a morphism of posets from $J$ to $\mathcal{O}(J')^\textnormal{op}$, let $j\leq k$ in $J$. Then $(U\dotplus 1)_j=U_{j-1}-1\subseteq U_{k-1}-1=(U\dotplus 1)_k$.
The conjugation action, however, does not define a structure of $\mathbb{Z}$-poset on $\textnormal{Pos}(J,\mathcal{O}(J')^\textnormal{op})$, as it is generally not true that $U\leq U\dotplus1$. This condition is indeed equivalent to $U_{j}\subseteq (U\dotplus 1)_{j}$, i.e., to $U_{j}+1\subseteq U_{j-1}$ for any $j$ in $J$, and this is not necessarily satisfied by a morphism of posets $U\colon J\to \mathcal{O}(J')^\textnormal{op}$.
\begin{defn}
A \emph{$(J,J')$-perversity} is a map of posets $U\colon J\to \mathcal{O}(J')^\textnormal{op}$ such that
\[
U_{j}+1\subseteq U_{j-1}
\]
for any $j$ in $J$. The set $\mathrm{Perv}(J,J')$ of all $(J,J')$-perversities inherits a poset structure from the inclusion in $\textnormal{Pos}(J,\mathcal{O}(J')^\textnormal{op})^\textnormal{op}$. It is a $\mathbb{Z}$-poset with $1$ acting as $U\mapsto U\dotplus(-1)$.
\end{defn}
\begin{rem}
Notice that the shift action on $(J,J')$-perversities is given by $U\mapsto U\dotplus(-1)$. Namely, by construction the $\mathbb{Z}$-action $U\mapsto U\dotplus 1$ is monotone on the set of perversities with the order induced by the inclusion in $\textnormal{Pos}(J,\mathcal{O}(J')^\textnormal{op})$, and the order on $\mathrm{Perv}(J,J')$ is the opposite one. The reason for considering this order is, clearly, Proposition \ref{graphs}.
\end{rem}
We have seen in Lemma \ref{decreasing-gives-upper-set} that a function $f\colon J\to J'$ defines a morphism of posets $(\geq f)\colon J\to \mathcal{O}(J')^\textnormal{op}$ if and only if $f$ is a morphism of posets from $J$ to $J'^\textnormal{op}$. When this happens, $(\geq f)$ is a $(J,J')$-perversity if and only if $f(j-1)\leq f(j)+1$, for every $j\in J$. In the particular case $J=\mathbb{Z}$,
assuming as usual that $J'$ is a $\mathbb{Z}$-poset, we can define a new function $p_f\colon \mathbb{Z}\to J'$ as $p_f(n)=f(n)+n$. Then the condition $f(n)\leq f(n+1)+1$ translates into
$p_f(n)\leq p_f(n+1)$, while the condition that $f\colon \mathbb{Z}\to J'^\textnormal{op}$ is a morphism of posets, i.e., $f(n+1)\leq f(n)$, translates to $p_f(n+1)\leq p_f(n)+1$. As $f\mapsto p_f$ is a bijection of the set of maps from $\mathbb{Z}$ to $J'$ into itself, we see that the functions $f\colon \mathbb{Z}\to J'$ defining perversities correspond bijectively to the set of functions $p\colon \mathbb{Z} \to J'$ such that
$p(n)\leq p(n+1)\leq p(n)+1$,
for every $n\in \mathbb{Z}$. This motivates the following (see \cite{bbd}).
\begin{defn}
A $(\mathbb{Z},\mathbb{Z})$-perversity $U$ is said to be defined by a function if there exists $f\colon \mathbb{Z}\to \mathbb{Z}$ such that $U=(\geq f)$. The subset of $(\mathbb{Z},\mathbb{Z})$-perversities defined by functions is denoted by $\mathrm{Perv}^\circ(\mathbb{Z},\mathbb{Z})$.
\end{defn}
\begin{lem}\label{lemma.perv0}
The subset $\mathrm{Perv}^\circ(\mathbb{Z},\mathbb{Z})$ is a $\mathbb{Z}$-sub-poset of $\mathrm{Perv}(\mathbb{Z},\mathbb{Z})$. It is isomorphic via $f\mapsto (\geq f)$ with the $\mathbb{Z}$-poset of nonincreasing functions $f\colon \mathbb{Z}\to \mathbb{Z}$ such that $f(n-1)\leq f(n)+1$ with the $\mathbb{Z}$-action given by $(f\dotplus 1)(n)=f(n+1)+1$.
\end{lem}
\begin{proof}
As a function $f\colon \mathbb{Z}\to \mathbb{Z}$ is uniquely determined by the collection of upper sets $[f(n),+\infty)$, the set $\mathrm{Perv}^\circ(\mathbb{Z},\mathbb{Z})$ bijectively corresponds to the set of those functions $f\colon \mathbb{Z}\to \mathbb{Z}$ such that $(\geq f)$ is a $(\mathbb{Z},\mathbb{Z})$-perversity. As noticed above, these are precisely the nonincreasing functions from $\mathbb{Z}$ to itself such that $f(n-1)\leq f(n)+1$. This bijection is an isomorphism of posets, as $(\geq f_1)\leq (\geq f_2)$ if and only if $f_1\leq f_2$ (in the standard poset structure on the set of maps from $\mathbb{Z}$ to the poset $\mathbb{Z}$). Finally, $((\geq f)\dotplus (-1))_n=(\geq f)_{n+1}+1=[f(n+1)+1,+\infty)=(\geq (f\dotplus 1))_n$, for any $n\in \mathbb{Z}$.
\end{proof}
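The translation between the two systems of inequalities can also be tested exhaustively on a finite window. Here is a minimal Python sketch, purely illustrative and with function names of our own choosing, enumerating all integer-valued functions on $\{0,\dots,4\}$ with values in $\{-3,\dots,3\}$:

```python
from itertools import product

def f_conditions(f, window):
    # f nonincreasing with f(n) <= f(n+1) + 1, checked on consecutive pairs
    return all(f[n + 1] <= f[n] <= f[n + 1] + 1 for n in window)

def p_conditions(p, window):
    # p(n) <= p(n+1) <= p(n) + 1
    return all(p[n] <= p[n + 1] <= p[n] + 1 for n in window)

window = range(4)  # consecutive pairs inside {0,...,4}
for values in product(range(-3, 4), repeat=5):
    f = dict(enumerate(values))
    p = {n: f[n] + n for n in f}  # p_f(n) = f(n) + n
    assert f_conditions(f, window) == p_conditions(p, window)
```

Each inequality on $f$ matches one inequality on $p_f$ pointwise, which is why the two checks agree on every candidate.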
\begin{defn}\label{defperv}
A \emph{perversity function} (on $\mathbb{Z}$) is a function $p\colon \mathbb{Z}\to \mathbb{Z}$ such that
\[
p(n)\leq p(n+1)\leq p(n)+1,
\]
for every $n\in \mathbb{Z}$. It is called a strict perversity function if
\[
p(n+2)-1\leq p(n)\leq p(n+1)\leq p(n)+1,
\]
for every $n\in \mathbb{Z}$. The set $\mathrm{perv}_\mathbb{Z}$ of perversity functions is a poset with the partial order induced by the inclusion $\mathrm{perv}_\mathbb{Z}\subseteq \textnormal{Pos}(\mathbb{Z},\mathbb{Z})$. It is a $\mathbb{Z}$-poset with the action $(p\dotplus1)(n)=p(n+1)$.
\end{defn}
\begin{rem}
A perversity function (on $\mathbb{Z}$) can be equivalently defined as a function $p\colon \mathbb{Z}\to \mathbb{Z}$ such that
\[
0\leq p(n)-p(m)\leq n-m
\]
for any $n-m\geq 0$ and the strictness condition translates to the additional condition $p(n)-p(m)< n-m$ for $n-m\geq 2$.
Basic examples of perversity functions are the zero perversity $p(n)\equiv 0$ and the identity perversity $p(n)\equiv n$. Another classical example is the \emph{middle perversity} $p(n)\equiv \lfloor n/2 \rfloor$. In particular, both the zero perversity and the middle perversity are examples of strict perversities.
\end{rem}
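As a quick numerical illustration (the helper names below are ours, not from the text), one can confirm on a finite range that the zero and middle perversities are strict while the identity perversity is not:

```python
def is_perversity(p, ns):
    # p(n) <= p(n+1) <= p(n) + 1 on the given range
    return all(p(n) <= p(n + 1) <= p(n) + 1 for n in ns)

def is_strict(p, ns):
    # additionally p(n+2) - 1 <= p(n)
    return is_perversity(p, ns) and all(p(n + 2) - 1 <= p(n) for n in ns)

ns = range(-20, 20)
zero = lambda n: 0
middle = lambda n: n // 2   # Python // is the floor, also for negative n
identity = lambda n: n
assert is_perversity(zero, ns) and is_strict(zero, ns)
assert is_perversity(middle, ns) and is_strict(middle, ns)
assert is_perversity(identity, ns) and not is_strict(identity, ns)
```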
\begin{rem}\label{other-action}
In addition to the $\mathbb{Z}$-action $p\mapsto p\dotplus 1$, the poset $\mathrm{perv}_\mathbb{Z}$ also carries another natural $\mathbb{Z}$-action making it a $\mathbb{Z}$-poset; namely, the action $(p+1)(n)=p(n)+1$. We will come back to this towards the end of this section.
\end{rem}
\begin{rem}
Notice that the $\mathbb{Z}$-action on perversity functions is a monotone action since, by definition of perversity function, we have $p(n)\leq p(n+1)$ and this precisely means $p(n)\leq (p\dotplus1)(n)$.
\end{rem}
\begin{lem}
For every $p\colon \mathbb{Z} \to \mathbb{Z}$, let $f_p\colon \mathbb{Z}\to \mathbb{Z}$ be the map defined by $f_p(n)= p(n)-n$. Then $p\mapsto (\geq f_p)$ is an isomorphism of $\mathbb{Z}$-posets between $\mathrm{perv}_\mathbb{Z}$ and $\mathrm{Perv}^\circ(\mathbb{Z},\mathbb{Z})$.
\end{lem}
\begin{proof}
By Lemma \ref{lemma.perv0} we only need to show that $p\mapsto f_p$ is a monotone $\mathbb{Z}$-equivariant bijection between $\mathrm{perv}_\mathbb{Z}$ and the set of functions $f\colon \mathbb{Z}\to \mathbb{Z}$ such that $f(n)\leq f(n-1)\leq f(n)+1$. That it is a bijection is immediate: the inverse map is $f\mapsto p_f$, where $p_f(n)=f(n)+n$. To see that it is an isomorphism of posets, notice that $f_{p_1}\leq f_{p_2}$ if and only if $p_1(n)-n\leq p_2(n)-n$ for every $n\in \mathbb{Z}$, and so if and only if $p_1(n)\leq p_2(n)$ for every $n\in \mathbb{Z}$. Finally, $f_{p\dotplus1}(n)=(p\dotplus 1)(n)-n=p(n+1)-n=f_p(n+1)+1=(f_p\dotplus 1)(n)$.
\end{proof}
\subsubsection{Perversities as slicings of the lattice $\mathbb{Z}\times \mathbb{Z}$}
\begin{defn}
An upper set $U$ in $\mathcal{O}(J\times J')$ is called a \emph{kinky upperset} if $U\leq U+_\mathrm{nw}1$, where ${\mathrm{nw}}$ is the ``northwestern'' action of $\mathbb{Z}$ on $J\times J'$ given by
$(j,j')+_{\mathrm{nw}}1=(j-1,j'+1)$. We denote by $\mathrm{Kink}(J\times J')$ the poset of kinky uppersets of $J\times J'$, with the poset structure induced by the inclusion in $\mathcal{O}(J\times J')$. It is a $\mathbb{Z}$-poset with the ``northwestern'' action. We denote by $\mathrm{Kink}^\circ(J\times J')$ the $\mathbb{Z}$-sub-poset of nontrivial kinky uppersets of $J\times J'$, where
the trivial uppersets are $\emptyset$ and $J\times J'$.
\end{defn}
\begin{lem}
The map $\Gamma$ from Proposition \ref{graphs} induces an isomorphism of $\mathbb{Z}$-posets
\[
\Gamma \colon \mathrm{Perv}(J,J')\to \mathrm{Kink}(J\times J').
\]
\end{lem}
\begin{proof}
Let $U\colon J\to \mathcal{O}(J')^\textnormal{op}$ be a perversity, and let $(j,j')\in \Gamma_U$. Then $j'\in U_{j}$ and so, by definition of perversity, $j'+1\in U_{j-1}$. Therefore $(j-1,j'+1)\in \Gamma_U$, i.e., $\Gamma_U\leq \Gamma_U+_{\mathrm{nw}} 1$. Vice versa, if $\tilde{U}$ is a kinky upperset in $J\times J'$, let $U_{\tilde{U}}$ be the preimage in $\textnormal{Pos}(J,\mathcal{O}(J')^\textnormal{op})^\textnormal{op}$ of $\tilde{U}$ via $\Gamma$ (see the proof of Proposition \ref{graphs}). Then for any $j\in J$ and any $j'\in U_{\tilde{U};j}$ we have $j'+1\in U_{\tilde{U};j-1}$ and so $U_{\tilde{U};j}+1\subseteq U_{\tilde{U};j-1}$. So the isomorphism of posets $\textnormal{Pos}(J,\mathcal{O}(J')^\textnormal{op})^\textnormal{op}\to \mathcal{O}(J \times J')$ restricts to an isomorphism of posets $\mathrm{Perv}(J,J')\to \mathrm{Kink}(J\times J')$. To see that $\Gamma \colon \mathrm{Perv}(J,J')\to \mathrm{Kink}(J\times J')$ is also $\mathbb{Z}$-equivariant, notice that, for every perversity $U$, we have $(j,j')\in \Gamma_{U\dotplus (-1)}$ if and only if $j'\in U_{j+1}+1$, i.e., if and only if $(j+1,j'-1)\in \Gamma_U$. This latter condition is equivalent to $(j,j')\in \Gamma_U+_{\mathrm{nw}}1$, so we find $\Gamma_{U\dotplus (-1)}=\Gamma_U+_{\mathrm{nw}}1$.
\end{proof}
\begin{lem}
The map $\Gamma$ from Proposition \ref{graphs} induces an isomorphism of $\mathbb{Z}$-posets
\[
\Gamma \colon \mathrm{Perv}^\circ(\mathbb{Z},\mathbb{Z})\to \mathrm{Kink}^\circ(\mathbb{Z}\times \mathbb{Z}).
\]
\end{lem}
\begin{proof}
{Let $U$ be a kinky upper set that is not in the image of $\Gamma$. We want to show that $U=\emptyset$ or $U=\mathbb{Z}\times \mathbb{Z}$. To do this, we notice that}
a kinky upperset $U$ is in the image of $\Gamma$ if and only if $U$ is of the form $(\geq f)$ for a suitable function $f\colon \mathbb{Z}\to \mathbb{Z}$, {and that} this is possible if and only if $U_n\neq \emptyset,\mathbb{Z}$ for every $n\in \mathbb{Z}$. As $U$ is kinky, if $U_{n_0}=\emptyset$ for some $n_0$, then $U_{n_0+1}+1\subseteq U_{n_0}=\emptyset$, and so $U_{n_0+1}=\emptyset$. Inductively, this gives $U_n=\emptyset$ for every $n\geq n_0$. On the other hand, since a kinky upperset is an upperset, if there exists a nonempty $U_n$ with $n<n_0$, then there exists an element $(n,m)$ in $U$ and so, since $(n,m)\leq (n_0,m)$, also $(n_0,m)\in U$. But then $m\in U_{n_0}$, which is impossible. So also the $U_n$ with $n<n_0$ are empty and therefore $U=\emptyset$. Similarly, if $U_{n_0}=\mathbb{Z}$ for some $n_0$, then $U_n=\mathbb{Z}$ for every $n>n_0$ as $U$ is an upperset, while the kinkiness condition $U_{n}+1\subseteq U_{n-1}$ implies that also $U_{n_0-1}=\mathbb{Z}$ and so, inductively, that all $U_n$ with $n<n_0$ are the whole of $\mathbb{Z}$. That is, $U=\mathbb{Z}\times\mathbb{Z}$ in this case.
\end{proof}
\begin{lem}\label{equiv}
The isomorphism of $\mathbb{Z}$-modules $\varphi\colon \mathbb{Z}^2\to \mathbb{Z}^2$ given by $\varphi\colon (n,n')\mapsto (n+n',n')$ induces an isomorphism of $\mathbb{Z}$-posets
\[
\varphi\colon \mathrm{Kink}(\mathbb{Z}\times \mathbb{Z})\xrightarrow{\sim} \mathcal{O}(\mathbb{Z}\times \mathbb{Z}),
\]
where the $\mathbb{Z}$-action on $\mathcal{O}(\mathbb{Z}\times \mathbb{Z})$ is the one induced by the ``northern'' $\mathbb{Z}$-action on $\mathbb{Z}^2$, namely, $(n,n')+1=(n,n'+1)$. In particular $\varphi$ induces an isomorphism of $\mathbb{Z}$-posets $\mathrm{Kink}^{\circ}(\mathbb{Z}\times \mathbb{Z})\xrightarrow{\sim} \mathcal{O}(\mathbb{Z}\times \mathbb{Z})\setminus\{\emptyset,\mathbb{Z}\times\mathbb{Z}\}$.
\end{lem}
\begin{proof}
A subset $U$ of $\mathbb{Z}\times \mathbb{Z}$ is an upper set (in the product order) if and only if $U+K\subseteq U$, where $K$ is the $\mathbb{Z}$-cone spanned by $\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}0\\1\end{smallmatrix}\right)$, while $U$ is a kinky upperset if and only if $U+K^\mathrm{kink}\subseteq U$, where $K^\mathrm{kink}$ is the $\mathbb{Z}$-cone spanned by $\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}-1\\1\end{smallmatrix}\right)$. Moreover, the $\mathbb{Z}$-action on the uppersets is generated by $U\mapsto U+\left(\begin{smallmatrix}0\\1\end{smallmatrix}\right)$ and the $\mathbb{Z}$-action on the kinky uppersets is generated by $U\mapsto U+\left(\begin{smallmatrix}-1\\1\end{smallmatrix}\right)$, where in both cases the sum on the right hand side is the sum in $\mathbb{Z}^2$. As $\varphi \colon \mathbb{Z}^2\to \mathbb{Z}^2$ is an isomorphism of $\mathbb{Z}$-modules with $\varphi(K^\mathrm{kink})=K$ and $\varphi\left(\begin{smallmatrix}-1\\1\end{smallmatrix}\right)=\left(\begin{smallmatrix}0\\1\end{smallmatrix}\right)$, the statement follows ($\varphi$ is manifestly inclusion preserving).
\end{proof}
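The cone computation can be illustrated concretely. The following Python sketch, an illustration with names of our own choosing, checks that $\varphi$ sends the generators of $K^{\mathrm{kink}}$ to those of $K$, and that, within a finite window, $\varphi$ maps the upper graph of $f(n)=\lfloor n/2\rfloor-n$ (the $f_p$ of the middle perversity) to a product-order upper set:

```python
phi = lambda n, m: (n + m, m)
# generators of the kinky cone go to the generators of the product cone
assert phi(1, 0) == (1, 0) and phi(-1, 1) == (0, 1)

W = range(-8, 9)
f = lambda n: n // 2 - n                           # f_p for the middle perversity
U = {(n, m) for n in W for m in W if m >= f(n)}    # upper graph of f, windowed
V = {phi(n, m) for (n, m) in U}                    # its image under phi

# within the image of the window, V is closed under increasing either coordinate
image_box = {phi(n, m) for n in W for m in W}
for (a, b) in V:
    for q in ((a + 1, b), (a, b + 1)):
        assert q in V or q not in image_box
```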
\begin{cor}\label{corperv}
We have an isomorphism of $\mathbb{Z}$-posets
\[
\mathrm{perv}_\mathbb{Z}\xrightarrow{\sim} \mathcal{O}(\mathbb{Z}\times \mathbb{Z})\setminus\{\emptyset,\mathbb{Z}\times\mathbb{Z}\}
\]
mapping a perversity function $p$ to the image via the isomorphism $\varphi\colon (n,n')\mapsto (n+n',n')$ of the set $\{(n,n')\in \mathbb{Z}\times \mathbb{Z}\text{ such that } n'\geq p(n)-n\}$.
\end{cor}
\begin{rem}
The zero perversity function $p(n)\equiv 0$ corresponds to the upper set $\{(n,n')\in \mathbb{Z}\times \mathbb{Z}\text{ such that } n\geq 0\}$; the identity perversity function $p(n)\equiv n$ corresponds to the upper set $\{(n,n')\in \mathbb{Z}\times \mathbb{Z}\text{ such that } n'\geq 0\}$.
\end{rem}
\begin{rem}\label{missing}
The two ``missing'' upper sets from $\mathcal{O}(\mathbb{Z}\times \mathbb{Z})\setminus\{\emptyset,\mathbb{Z}\times\mathbb{Z}\}$ can be recovered by adding to the set $\mathrm{perv}_\mathbb{Z}$ of perversity functions the two ``constant infinite perversities'', i.e., the function $p_{+\infty}\colon \mathbb{Z}\to \mathbb{Z}\cup\{-\infty,+\infty\}$ defined by $p_{+\infty}(n)=+\infty$ for every $n\in\mathbb{Z}$ and the function $p_{-\infty}\colon \mathbb{Z}\to \mathbb{Z}\cup\{-\infty,+\infty\}$ defined by $p_{-\infty}(n)=-\infty$ for every $n\in\mathbb{Z}$. The extended set
\[
\widehat{\mathrm{perv}}_\mathbb{Z} =\mathrm{perv}_\mathbb{Z}\cup\{p_{-\infty},p_{+\infty}\}
\]
is naturally a $\mathbb{Z}$-poset with $p_{+\infty}$ and $p_{-\infty}$ as maximum and minimum element, respectively (so they are in particular $\mathbb{Z}$-fixed points), and the inclusion of $\mathrm{perv}_\mathbb{Z}$ into $\widehat{\mathrm{perv}}_\mathbb{Z}$ is a morphism of $\mathbb{Z}$-posets. Moreover, the isomorphism of $\mathbb{Z}$-posets $\mathrm{perv}_\mathbb{Z}\xrightarrow{\sim} \mathcal{O}(\mathbb{Z}\times \mathbb{Z})\setminus\{\emptyset,\mathbb{Z}\times\mathbb{Z}\}$ from Corollary \ref{corperv} extends to an isomorphism of $\mathbb{Z}$-posets
\[
\widehat{\mathrm{perv}}_\mathbb{Z}\xrightarrow{\sim} \mathcal{O}(\mathbb{Z}\times \mathbb{Z}).
\]
\end{rem}
As well as the $\mathbb{Z}$-action $p\mapsto p\dotplus 1$, the action $p\mapsto p+1$ from Remark \ref{other-action} also extends to a $\mathbb{Z}$-action on $\widehat{\mathrm{perv}}_\mathbb{Z}$ (again, $p_{+\infty}$ and $p_{-\infty}$ will be fixed points). The isomorphism of posets $\widehat{\mathrm{perv}}_\mathbb{Z}\xrightarrow{\sim} \mathcal{O}(\mathbb{Z}\times \mathbb{Z})$ then transfers this $\mathbb{Z}$-action to $\mathcal{O}(\mathbb{Z}\times \mathbb{Z})$, making $\widehat{\mathrm{perv}}_\mathbb{Z}\xrightarrow{\sim} \mathcal{O}(\mathbb{Z}\times \mathbb{Z})$ an isomorphism of $\mathbb{Z}$-posets for the action $p\mapsto p+1$ on the left hand side. More precisely, we have the following result.
\begin{prop}
\label{prop-perv}
We have an isomorphism of posets
\[
\widehat{\mathrm{perv}}_\mathbb{Z}\xrightarrow{\sim} \mathcal{O}(\mathbb{Z}\times \mathbb{Z})
\]
mapping a perversity function $p$ to the image via the isomorphism $\varphi\colon (n,n')\mapsto (n+n',n')$ of the set $S_p=\{(n,n')\in \mathbb{Z}\times \mathbb{Z}\text{ such that } n'\geq p(n)-n\}$, and mapping the ``infinite perversity functions'' $p_{-\infty}$ and $p_{+\infty}$ to $\mathbb{Z}\times \mathbb{Z}$ and to $\emptyset$, respectively. Moreover, this is an isomorphism of $\mathbb{Z}$-posets, where the $\mathbb{Z}$-action on the left is given by $(p+1)(n)=p(n)+1$ and the $\mathbb{Z}$-action on the right is the one induced by the ``northeastern'' $\mathbb{Z}$-action on $\mathbb{Z}^2$, namely, $(n,n')+1=(n+1,n'+1)$.
\end{prop}
\begin{proof}
After Corollary \ref{corperv} and Remark \ref{missing}, the only thing left to prove is the $\mathbb{Z}$-equivariance of the isomorphism, i.e., that we have
\[
\varphi(S_{p+1})=\varphi(S_p)+\left(\begin{smallmatrix}1\\1\end{smallmatrix}\right).
\]
As $\varphi$ is an isomorphism of $\mathbb{Z}$-modules, this is equivalent to
\[
S_{p+1}=S_p+\left(\begin{smallmatrix}0\\1\end{smallmatrix}\right),
\]
i.e., to the condition $(n,n')\in S_{p+1}$ if and only if $(n,n'-1)\in S_{p}$, which is obvious.
\end{proof}
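Even this ``obvious'' last step can be spot-checked; here is a minimal Python sketch (names ours, purely illustrative) verifying $S_{p+1}=S_p+(0,1)$ pointwise for the middle perversity:

```python
# S_p = {(n, n') : n' >= p(n) - n}; check the pointwise identity
# (n, n') in S_{p+1}  <=>  (n, n'-1) in S_p, i.e., S_{p+1} = S_p + (0, 1).
p = lambda n: n // 2                      # middle perversity, as an example
in_S = lambda q, n, m: m >= q(n) - n      # membership of (n, m) in S_q
p_plus_1 = lambda n: p(n) + 1             # the shifted perversity p + 1
for n in range(-10, 10):
    for m in range(-10, 10):
        assert in_S(p_plus_1, n, m) == in_S(p, n, m - 1)
```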
Taking the opposite of the complement (or, equivalently, the complement of the opposite) gives an isomorphism of posets
\begin{align*}
\mathcal{O}(\mathbb{Z}\times \mathbb{Z})&\xrightarrow{\sim} \mathcal{O}(\mathbb{Z}\times \mathbb{Z})^{\mathrm{op}}\\
U&\mapsto (\mathbb{Z}\times \mathbb{Z})\setminus (-U),
\end{align*}
where $-U=\{(-n,-n')\text{ with } (n,n')\in U\}$. This isomorphism changes the northeastern action into its opposite, i.e., the ``southwestern'' action $(n,n')+1=(n-1,n'-1)$, so Proposition \ref{prop-perv} immediately gives
\begin{cor}
\label{cor-perv2}
We have an isomorphism of posets
\[
\widehat{\mathrm{perv}}_\mathbb{Z}^{\mathrm{op}}\xrightarrow{\sim} \mathcal{O}(\mathbb{Z}\times \mathbb{Z})
\]
mapping a perversity function $p$ to the complement of the image via the isomorphism $\psi\colon (n,n')\mapsto (-n-n',-n')$ of the set $S_p=\{(n,n')\in \mathbb{Z}\times \mathbb{Z}\text{ such that } n'\geq p(n)-n\}$, and mapping the ``infinite perversity functions'' $p_{-\infty}$ and $p_{+\infty}$ to $\emptyset$ and to $\mathbb{Z}\times \mathbb{Z}$, respectively. Moreover, this is an isomorphism of $\mathbb{Z}$-posets, where the $\mathbb{Z}$-action on the left is given by $(p+^{\mathrm{op}}1)(n)=p(n)-1$ and the $\mathbb{Z}$-action on the right is the one induced by the ``northeastern'' $\mathbb{Z}$-action on $\mathbb{Z}^2$, namely, $(n,n')+1=(n+1,n'+1)$.
\end{cor}
\subsection{Slicing the heart}\label{sec:heart}
Let $\mathscr{D}$ be a stable $\infty$-category { and let $\mathfrak{t}$ be a bounded t-structure on $\mathscr{D}$, or, equivalently, the datum of} a Bridgeland $\mathbb{Z}$-slicing of $\mathscr{D}$. Let $\heartsuit_{\mathfrak{t}}$ denote the heart of $\mathfrak{t}$. Then an abelian $\mathbb{Z}$-slicing of $\heartsuit_{\mathfrak{t}}$ is the datum of an extension $\tilde{\mathfrak{t}}$ of $\mathfrak{t}$ to a Bridgeland $\mathbb{Z}\times_{\mathrm{lex}}\hat{\mathbb{Z}}$-slicing on $\mathscr{D}$, where $\hat{\mathbb{Z}}$ denotes the $\mathbb{Z}$-poset consisting of $\mathbb{Z}$ endowed with the trivial $\mathbb{Z}$-action, and the morphism $\mathcal{O}(\mathbb{Z})\to \mathcal{O}(\mathbb{Z}\times_{\mathrm{lex}}\hat{\mathbb{Z}})$ is induced by the projection on the first factor $\mathbb{Z}\times_{\mathrm{lex}}\hat{\mathbb{Z}}\to \mathbb{Z}$; see \cite[Section 5]{fosco}. We will denote by $\heartsuit_{\mathfrak{t};\phi}$ the $\phi$-th slice of the heart of $\mathfrak{t}$. In other words,
\[
\heartsuit_{\mathfrak{t};\phi}=\mathscr{D}_{\tilde{\mathfrak{t}};(0,\phi)}.
\]
Notice that we have
\[
\heartsuit_{\mathfrak{t};\phi}[n]=\mathscr{D}_{\tilde{\mathfrak{t}};(n,\phi)},
\]
where $[n]$ denotes the ``shift by $n$'' functor on $\mathscr{D}$.
\begin{exmp}
Via the obvious inclusion of $\mathbb{Z}$-posets $\{0,1\}\hookrightarrow\hat{\mathbb{Z}}$, any torsion pair $(\heartsuit_{\mathfrak{t};0},\heartsuit_{\mathfrak{t};1})$ on $\heartsuit_{\mathfrak{t}}$ defines an abelian $\mathbb{Z}$-slicing on $\heartsuit_{\mathfrak{t}}$.
\end{exmp}
\begin{defn}\label{defgrad}
Let $\mathfrak{t}$ be a bounded t-structure on $\mathscr{D}$. An abelian $\mathbb{Z}$-slicing $\tilde{\mathfrak{t}}$ on $\heartsuit_{\mathfrak{t}}$ is called:
\begin{itemize}
\item \emph{perverse} (or \emph{weak grading}) if $\heartsuit_{\mathfrak{t};\phi}\boxslash\heartsuit_{\mathfrak{t};\psi}[n]$ for $\phi>\psi+n$;
\item \emph{grading} (or \emph{radical}) if $\heartsuit_{\mathfrak{t};\phi}\boxslash\heartsuit_{\mathfrak{t};\psi}[n]$ for $\phi>\psi+n$ and for $\phi=\psi+n$ with $n\geq 2$;
\item \emph{gluable} if $\heartsuit_{\mathfrak{t};\phi}\boxslash \heartsuit_{\mathfrak{t};\psi}[n]$ for $\phi>\psi$ and $n>0$.
\end{itemize}
\end{defn}
\begin{rem}\label{special-gluable}
The definition of gluable abelian $\mathbb{Z}$-slicing of $\heartsuit_\mathfrak{t}$ is the specialization of Definition \ref{gluable} to $J_1=\mathbb{Z}$ and $J_2=\hat{\mathbb{Z}}$.
\end{rem}
\begin{rem}
Historically, grading filtrations first appeared in \cite{ekh} under the name of `radical filtrations'.
\end{rem}
\begin{exmp}
Let $(\heartsuit_{\mathfrak{t};0},\heartsuit_{\mathfrak{t};1})$ be a torsion pair on $\heartsuit_{\mathfrak{t}}$. Then $(\heartsuit_{\mathfrak{t};0},\heartsuit_{\mathfrak{t};1})$, seen as an abelian $\mathbb{Z}$-slicing, is grading. Namely, as $\heartsuit_{\mathfrak{t};\phi}[n]=0$ for $\phi \notin\{ 0,1\}$, the only nontrivial orthogonality conditions to be checked are:
\begin{itemize}
\item[-] $\heartsuit_{\mathfrak{t};0}\boxslash\heartsuit_{\mathfrak{t};0}[n]$ for $n<0$;
\item[-] $\heartsuit_{\mathfrak{t};0}\boxslash\heartsuit_{\mathfrak{t};1}[n]$ for $n<-1$;
\item[-] $\heartsuit_{\mathfrak{t};1}\boxslash\heartsuit_{\mathfrak{t};0}[n]$ for $n<1$;
\item[-] $\heartsuit_{\mathfrak{t};1}\boxslash\heartsuit_{\mathfrak{t};1}[n]$ for $n<0$.
\end{itemize}
These all follow from the orthogonality relation $\heartsuit_{\mathfrak{t}}\boxslash \heartsuit_\mathfrak{t}[n]$ for $n< 0$, except for $\heartsuit_{\mathfrak{t};1}\boxslash\heartsuit_{\mathfrak{t};0}$, which holds by definition of torsion pair.
\end{exmp}
\begin{prop}\label{glue-to-grad}
Let $\mathfrak{t}$ be a bounded t-structure on $\mathscr{D}$, and let $\tilde{\mathfrak{t}}$ be an abelian $\mathbb{Z}$-slicing on $\heartsuit_{\mathfrak{t}}$. If $\tilde{\mathfrak{t}}$ is gluable, then $\tilde{\mathfrak{t}}$ is grading.
\end{prop}
\begin{proof}
Let $\phi$, $\psi$, and $n$ be in $\mathbb{Z}$ with $\phi>\psi+n$.
The orthogonality condition $\heartsuit_{\mathfrak{t};\phi}\boxslash\heartsuit_{\mathfrak{t};\psi}[n]$ is trivially satisfied if $n<0$, so let us assume $n\geq 0$. If $n=0$, then $\phi>\psi$ and the orthogonality condition $\heartsuit_{\mathfrak{t};\phi}\boxslash\heartsuit_{\mathfrak{t};\psi}$ is satisfied by definition of slicing. Finally, if $n>0$, then we have $\phi>\psi$ and $n>0$, so $\heartsuit_{\mathfrak{t};\phi}\boxslash\heartsuit_{\mathfrak{t};\psi}[n]$ by definition of gluable slicing.
If $\phi=\psi+n$ with $n\geq 2$, then in particular $\phi>\psi$ and $n>0$, so again $\heartsuit_{\mathfrak{t};\phi}\boxslash\heartsuit_{\mathfrak{t};\psi}[n]$.
\end{proof}
\begin{prop}\label{grad}
Let $\mathfrak{t}$ be a bounded t-structure on $\mathscr{D}$, and let $\tilde{\mathfrak{t}}$ be a perverse abelian $\mathbb{Z}$-slicing on $\heartsuit_{\mathfrak{t}}$. Then, $\tilde{\mathfrak{t}}$ is $g_p$-compatible, where
\begin{align*}
g_p\colon \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}&\to \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}\\
(n,\phi)&\mapsto(n+p(\phi),-p(\phi)),
\end{align*}
for every strict perversity function $p\colon \mathbb{Z}\to \mathbb{Z}$. If $\tilde{\mathfrak{t}}$ is grading, then $\tilde{\mathfrak{t}}$ is $g_p$-compatible for every perversity function $p$.
\end{prop}
\begin{proof}
The map $g_p$ is $\mathbb{Z}$-equivariant, as the action on the second factor is the trivial one. Let $(n,\phi)$ and $(m,\psi)$ be in $\mathbb{Z}\times_{\mathrm{lex}} \hat{\mathbb{Z}}$ with $(n,\phi)\leq (m,\psi)$ and such that $g_p(n,\phi)>g_p(m,\psi)$ in $\mathbb{Z}\times_{\mathrm{lex}} \hat{\mathbb{Z}}$. By definition of $g_p$ this means that we have
\[
(n+p(\phi),-p(\phi))>(m+p(\psi),-p(\psi))
\]
in $\mathbb{Z}\times_{\mathrm{lex}} \mathbb{Z}$, i.e., that $n+p(\phi)>m+p(\psi)$ or that $n+p(\phi)=m+p(\psi)$ and $p(\psi)>p(\phi)$. Similarly, the condition $(n,\phi)\leq (m,\psi)$ means that either $n<m$ or $n=m$ and $\phi\leq \psi$. By considering all possibilities, and taking into account that a perversity function is nondecreasing, one sees that there is actually a single case to deal with:
$p(\phi)-p(\psi)>m-n$, with $m>n$ and $\phi>\psi$.
As $p$ is a perversity function and $\phi> \psi$, we have
\[
0\leq p(\phi)-p(\psi)\leq \phi-\psi,
\]
so we have
\[
\phi\geq \psi+ p(\phi)-p(\psi)>\psi+m-n.
\]
Since $\tilde{\mathfrak{t}}$ is perverse, this implies $\heartsuit_{\mathfrak{t};\phi}\boxslash\heartsuit_{\mathfrak{t};\psi}[m-n]$, and so $\heartsuit_{\mathfrak{t};\phi}[n]\boxslash \heartsuit_{\mathfrak{t};\psi}[m]$, i.e.,
\[
\mathscr{D}_{\tilde{\mathfrak{t}};(n,\phi)}\boxslash\mathscr{D}_{\tilde{\mathfrak{t}};(m,\psi)}.
\]
Concerning the orthogonality of $\mathscr{D}_{\tilde{\mathfrak{t}};(n,\phi)}$ and $\mathscr{D}_{\tilde{\mathfrak{t}};(m,\psi)}[1]$, notice that
from $\phi\geq \psi+ p(\phi)-p(\psi)>\psi+m-n$ it follows that either $\phi>\psi+m-n+1$ or $\phi=\psi+m-n+1\geq \psi+2$. If $\phi>\psi+m-n+1$, then reasoning as above we find $\mathscr{D}_{\tilde{\mathfrak{t}};(n,\phi)}\boxslash\mathscr{D}_{\tilde{\mathfrak{t}};(m,\psi)}[1]$. If $\phi=\psi+m-n+1\geq \psi+2$,
there are two cases to be considered. In the first case, $\tilde{\mathfrak{t}}$ is perverse and the perversity function $p$ is strict. In the second case, we allow $p$ to be any perversity, but we put a restriction on $\tilde{\mathfrak{t}}$, which we require to be grading.
In the first case, as $p$ is strict and $\phi\geq \psi+2$, we have $p(\phi)-p(\psi)< \phi-\psi$, and so again $\phi>\psi+m-n+1$.
In the second case, as $m-n+1\geq 2$ and $\tilde{\mathfrak{t}}$ is grading, we have $\heartsuit_{\mathfrak{t};\phi}\boxslash\heartsuit_{\mathfrak{t};\psi}[m-n+1]$, i.e., again
\[
\mathscr{D}_{\tilde{\mathfrak{t}};(n,\phi)}\boxslash\mathscr{D}_{\tilde{\mathfrak{t}};(m,\psi)}[1].
\]
\end{proof}
\begin{rem}
The proof of Proposition \ref{grad} makes clear the meaning of the otherwise obscure condition in the definition of a grading abelian $\mathbb{Z}$-slicing: the condition on a shift by at least $2$ in the definition of a strict perversity is traded for an orthogonality condition in the slicing.
\end{rem}
\begin{rem}\label{alpha}
By taking $p$ to be the identity perversity, $\mathrm{id}\colon \mathbb{Z}\to \mathbb{Z}$, we see that if $\tilde{\mathfrak{t}}$ is grading then $\tilde{\mathfrak{t}}$ is $\alpha$-compatible, where $\alpha\colon \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}\to \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}$ is the $\mathbb{Z}$-equivariant map given by
\[
\alpha(n,m)=(n+m,-m).
\]
\end{rem}
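Note that $\alpha$ is in fact a $\mathbb{Z}$-equivariant involution of $\mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}$; this is the elementary computation
\[
\alpha(\alpha(n,m))=\alpha(n+m,-m)=\big((n+m)+(-m),m\big)=(n,m),
\]
together with $\alpha(n+1,m)=(n+m+1,-m)=\alpha(n,m)+1$, since $\mathbb{Z}$ acts by shifting the first coordinate.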
The proof of the following lemma is straightforward.
\begin{lem}\label{from-alpha-to-e}
Let $\beta\colon \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}\to \mathbb{Z} \times_{\mathrm{lex}} {\mathbb{Z}}$ be the map defined by
\[
\beta(n,m)=(n,n+m).
\]
Then $\beta$ is an isomorphism of $\mathbb{Z}$-tosets. Moreover the diagram
\[
\xymatrix{
\mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}\ar[r]^{\alpha}\ar[d]_{\beta}& \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}\ar[d]^{\beta}\\
\mathbb{Z} \times_{\mathrm{lex}} {\mathbb{Z}}\ar[r]^{e}& \mathbb{Z} \times_{\mathrm{lex}} {\mathbb{Z}}
}
\]
commutes,
where $e\colon \mathbb{Z}\times \mathbb{Z} \to \mathbb{Z}\times \mathbb{Z}$ is the exchange map $e(n,m)=(m,n)$ from Lemma \ref{exchange} and $\alpha\colon \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}\to \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}$ is the map $\alpha(n,m)=(n+m,-m)$ from Remark \ref{alpha}.
\end{lem}
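For the reader's convenience: the commutativity of the diagram is the one-line computation
\[
\beta(\alpha(n,m))=\beta(n+m,-m)=\big(n+m,(n+m)+(-m)\big)=(n+m,n)=e(n,n+m)=e(\beta(n,m)),
\]
while the inverse of $\beta$ is explicitly given by $\beta^{-1}(n,m)=(n,m-n)$.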
\begin{cor}
Let $\mathfrak{t}$ be a bounded t-structure on $\mathscr{D}$, and let $\tilde{\mathfrak{t}}$ be an abelian $\mathbb{Z}$-slicing on $\heartsuit_{\mathfrak{t}}$. Then, in the notation of Lemma \ref{from-alpha-to-e}, $\tilde{\mathfrak{t}}$ is $\alpha$-compatible if and only if $\beta_*\mathfrak{t}$ is gluable.
\end{cor}
\begin{proof}
By Lemma \ref{from-alpha-to-e} and Remark \ref{avanti-e-indietro}, $\tilde{\mathfrak{t}}$ is $\alpha$-compatible if and only if $\tilde{\mathfrak{t}}$ is $(\beta\circ\alpha)$-compatible, which happens if and only if $\tilde{\mathfrak{t}}$ is $(e\circ\beta)$-compatible. As $\beta$ is an isomorphism of $\mathbb{Z}$-tosets, we can write $\mathfrak{t}=(\beta^{-1})_*\beta_*\mathfrak{t}$, and so, by Remark \ref{avanti-e-indietro} again, $\tilde{\mathfrak{t}}$ is $(e\circ\beta)$-compatible if and only if $\beta_*\mathfrak{t}$ is $e$-compatible. By Remark \ref{special-gluable}, this is equivalent to saying that $\beta_*\mathfrak{t}$ is gluable.
\end{proof}
From Proposition \ref{glue-to-grad} and Remark \ref{alpha} we immediately get the following
\begin{cor}
Let $\mathfrak{t}$ be a bounded t-structure on $\mathscr{D}$, and let $\tilde{\mathfrak{t}}$ be an abelian $\mathbb{Z}$-slicing on $\heartsuit_{\mathfrak{t}}$. If $\tilde{\mathfrak{t}}$ is grading, then $\beta_*\mathfrak{t}$ is gluable. In particular, if $\tilde{\mathfrak{t}}$ is grading, then also $\beta_*\mathfrak{t}$ is grading.
\end{cor}
\begin{prop}\label{grad2}
Let $\mathfrak{t}$ be a bounded t-structure on $\mathscr{D}$, and let $\tilde{\mathfrak{t}}$ be a grading abelian $\mathbb{Z}$-slicing on $\heartsuit_{\mathfrak{t}}$. For every perversity $p$,
let $\gamma_p\colon \mathbb{Z}\times_{\mathrm{lex}}\hat{\mathbb{Z}}\to \mathbb{Z}$ be the $\mathbb{Z}$-equivariant morphism given by
\[
\gamma_p\colon (n,\phi)\mapsto n+p(\phi).
\]
Then
\begin{itemize}
\item $\tilde{\mathfrak{t}}$ is $\gamma_p$-compatible;
\item $ (\gamma_p)_!\tilde{\mathfrak{t}}$ is a bounded $t$-structure on $\mathscr{D}$;
\item the map
\begin{align*}
\Psi\colon \mathrm{perv}_\mathbb{Z}^{\mathrm{op}}&\to \mathrm{ts}(\mathscr{D})\\
p&\mapsto (\gamma_p)_!\tilde{\mathfrak{t}},
\end{align*}
is a morphism of $\mathbb{Z}$-posets, where on the left we have the $\mathbb{Z}$-action $(p+1)(n)=p(n)+1$ and on the right the $\mathbb{Z}$-action given by the shift in $\mathscr{D}$;
\item
$\Psi$ uniquely extends to a morphism of $\mathbb{Z}$-posets
\[
\Psi\colon \widehat{\mathrm{perv}}_\mathbb{Z}^{\mathrm{op}}\to \mathrm{ts}(\mathscr{D})
\]
preserving maxima and minima.
\end{itemize}
\end{prop}
\begin{proof}
Let $g_p\colon \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}\to \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}$ be the map defined in Proposition \ref{grad}.
As $\tilde{\mathfrak{t}}$ is grading, by Proposition \ref{grad}, $\tilde{\mathfrak{t}}$ is $g_p$-compatible. The projection on the first factor, $\pi_1\colon \mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}\to \mathbb{Z}$, is a morphism of $\mathbb{Z}$-tosets, so by Remark \ref{everything-compatible} every $\mathbb{Z} \times_{\mathrm{lex}} \hat{\mathbb{Z}}$-slicing is $\pi_1$-compatible. In particular, $(g_p)_!\tilde{\mathfrak{t}}$ is $\pi_1$-compatible. Therefore, by Proposition \ref{functoriality}, $\tilde{\mathfrak{t}}$ is $(\pi_1\circ g_p)$-compatible. As $\pi_1\circ g_p=\gamma_p$, this precisely says that $\tilde{\mathfrak{t}}$ is $\gamma_p$-compatible. We therefore have a Bridgeland $\mathbb{Z}$-slicing, i.e., a bounded $t$-structure, $(\gamma_p)_!\tilde{\mathfrak{t}}$ on $\mathscr{D}$. The map $\Psi$ is monotone and $\mathbb{Z}$-equivariant. Indeed, the $t$-structure $\Psi_p$ is defined by the upper category
\[
\mathscr{D}_{(\gamma_{p})_!\tilde{\mathfrak{t}};\geq 0}=\langle \mathscr{D}_{\tilde{\mathfrak{t}};(n,\phi)}\rangle_{(n,\phi)\in \gamma_p^{-1}([0,+\infty))}.
\]
If $p_1\geq^{\mathrm{op}} p_2$, then $p_1\leq p_2$ and so $n+p_1(\phi)\geq 0$ implies $n+p_2(\phi)\geq 0$, and so $\gamma_{p_1}^{-1}([0,+\infty))\subseteq \gamma_{p_2}^{-1}([0,+\infty))$. This gives $\mathscr{D}_{(\gamma_{p_1})_!\tilde{\mathfrak{t}};\geq 0}\subseteq \mathscr{D}_{(\gamma_{p_2})_!\tilde{\mathfrak{t}};\geq 0}$, i.e.,
\[
\mathscr{D}_{(\gamma_{p_1})_!\tilde{\mathfrak{t}}}\geq \mathscr{D}_{(\gamma_{p_2})_!\tilde{\mathfrak{t}}}
\]
in the partial order on $\mathrm{ts}(\mathscr{D})$. Similarly, $n+(p+^{\mathrm{op}}1)(\phi)\geq 0$ if and only if $n+p(\phi)\geq 1$, and so
\[
\mathscr{D}_{(\gamma_{p+1})_!\tilde{\mathfrak{t}};\geq 0}=\mathscr{D}_{(\gamma_{p})_!\tilde{\mathfrak{t}};\geq 1}=\mathscr{D}_{(\gamma_{p})_!\tilde{\mathfrak{t}};\geq 0}[1].
\]
Finally, $\Psi$ trivially (and uniquely) extends to $\widehat{\mathrm{perv}}_\mathbb{Z}^{\mathrm{op}}$ preserving maxima and minima.
\end{proof}
Recalling Corollary \ref{cor-perv2} and Proposition \ref{glue-to-grad}, we finally get the result we were aiming for.
\begin{thm}\label{main-thm}
Let $\mathfrak{t}$ be a bounded t-structure on $\mathscr{D}$, and let $\tilde{\mathfrak{t}}$ be a gluing abelian $\mathbb{Z}$-slicing on $\heartsuit_{\mathfrak{t}}$. Then $\tilde{\mathfrak{t}}$ explicitly induces a natural morphism of $\mathbb{Z}$-posets
\[
\Psi\colon\mathcal{O}(\mathbb{Z}\times\mathbb{Z})\to \mathrm{ts}(\mathscr{D})
\]
such that for every proper upper set in $\mathbb{Z}\times \mathbb{Z}$, the corresponding $t$-structure on $\mathscr{D}$ is bounded.
\end{thm}
\begin{rem}By Proposition \ref{grad2}, one sees that
Theorem \ref{main-thm} is actually true under the weaker assumption that $\tilde{\mathfrak{t}}$ is grading. Moreover, by restricting to the $\mathbb{Z}$-sub-poset of $\mathcal{O}(\mathbb{Z}\times\mathbb{Z})$ consisting of the image in $\mathcal{O}(\mathbb{Z}\times\mathbb{Z})$ of the strict perversities, Theorem \ref{main-thm} holds under the even weaker assumption that $\tilde{\mathfrak{t}}$ is perverse. We preferred to state it under the stronger assumption of a gluable $\tilde{\mathfrak{t}}$ to make its use in the examples below more immediate. See however Subsection \ref{perverse-coherent} for a geometrically interesting example involving a perverse abelian slicing.
\end{rem}
\begin{rem}\label{main-rem}
The morphism $\Psi$ can be thought of as a $(\mathbb{Z}\times \mathbb{Z})$-slicing of $\mathscr{D}$, but one has to keep in mind that the poset $\mathbb{Z}\times\mathbb{Z}$ indexing the slices (and so the cohomologies) is now not totally ordered. This is a possibly subtle point, so let us spend a few more words on it. An abelian $\mathbb{Z}$-slicing $\tilde{\mathfrak{t}}$ of $\heartsuit_\mathfrak{t}$ is by definition a $(\mathbb{Z}\times_{\mathrm{lex}}\hat{\mathbb{Z}})$-slicing, and by Lemma \ref{from-alpha-to-e}, this is equivalently a $(\mathbb{Z}\times_{\mathrm{lex}}{\mathbb{Z}})$-slicing. So going from an abelian slicing of the heart to a $(\mathbb{Z}\times_{\mathrm{lex}}{\mathbb{Z}})$-slicing of $\mathscr{D}$ is a trivial step. What is nontrivial is going from an abelian slicing of the heart to a $(\mathbb{Z}\times{\mathbb{Z}})$-slicing of $\mathscr{D}$, where now the poset structure on $\mathbb{Z}\times \mathbb{Z}$ is given by the product order and not by the lexicographic order. And indeed this can generally not be done for an
arbitrary abelian slicing of the heart, and here is where the property of the abelian slicing to be grading comes in. Finally, to emphasize once more how going from a $(\mathbb{Z}\times_{\mathrm{lex}}{\mathbb{Z}})$-slicing to a $(\mathbb{Z}\times{\mathbb{Z}})$-slicing is a nontrivial step, consider how there are many more upper sets in $\mathbb{Z}\times{\mathbb{Z}}$ than in $\mathbb{Z}\times_{\mathrm{lex}}{\mathbb{Z}}$.
\end{rem}
\begin{rem}
Describing the bounded $t$-structure on $\mathscr{D}$ associated by Theorem \ref{main-thm} to a proper upper set $U$ of $\mathbb{Z}\times \mathbb{Z}$ is a bit involved, but it is a completely explicit procedure. To begin with, recall that a bounded $t$-structure is completely determined by its heart, so we only need to give a description of the heart $\heartsuit_U$ associated with $U$. To do this, notice that the perversity function associated with $U$ is
\[
p_U(n)=n+\min\{n'\in \mathbb{Z}\text{ such that } (n,n')\notin \psi^{-1}(U)\}
\]
where $\psi^{-1}(n,n')=(-n+n',-n')$. The heart $\heartsuit_U$ is then the extension closed subcategory of $\mathscr{D}$ generated by the slices $\mathscr{D}_{(-p_U(n),n)}$ of $\tilde{\mathfrak{t}}$.
\end{rem}
\begin{exmp}
If $U=\{(n,n')\text{ such that } n'\geq 1\}$, then $\psi(U)=\{(n,n')\text{ such that } n'\leq -1\}$ and so $p_U(n)=n$. Therefore, $\heartsuit_U=\langle \mathscr{D}_{(-n,n)}\rangle_{n\in \mathbb{Z}}$ in this case. If $U=\{(n,n')\text{ such that } n\geq 1\}$, then $\psi(U)=\{(n,n')\text{ such that } n'\leq -n-1\}$ and so $p_U(n)=0$. Therefore, $\heartsuit_U=\langle \mathscr{D}_{(0,n)}\rangle_{n\in \mathbb{Z}}$ in this case.
\end{exmp}
A more explicit description of the \emph{perverse hearts} of $\mathscr{D}$ is as follows.
\begin{thm}\label{perverse-heart}
Let $\mathfrak{t}$ be a bounded t-structure on $\mathscr{D}$, and let $\tilde{\mathfrak{t}}$ be a gluing abelian $\mathbb{Z}$-slicing on $\heartsuit_{\mathfrak{t}}$. Let $U$ be an upper set of $\mathbb{Z}\times \mathbb{Z}$ and let $p$ be the corresponding perversity. Then the perverse heart $\heartsuit_p=\heartsuit_U$ of $\mathscr{D}$ is the full subcategory of $\mathscr{D}$ on those objects $X$ such that
\[
H_\mathfrak{t}^{n}(X)[-n]\in \langle \heartsuit_{\mathfrak{t};n'}\rangle_{n'\in p^{-1}(-n)}
\]
for every $n\in \mathbb{Z}$, where $H_\mathfrak{t}^{n}(X)$ is the $n$-th cohomology object of $X$ in the $t$-structure $\mathfrak{t}$ and $\{\heartsuit_{\mathfrak{t};n'}\}_{n'\in \mathbb{Z}}$ are the slices of the heart $\heartsuit_\mathfrak{t}$ of $\mathfrak{t}$ for the abelian $\mathbb{Z}$-slicing $\tilde{\mathfrak{t}}$.
\end{thm}
\begin{proof}
Denote by $\mathfrak{t}_p$ the $t$-structure on $\mathscr{D}$ associated with the perversity function $p$. Then the lower subcategory $\mathscr{D}_{\mathfrak{t}_p;<0}$ and the upper subcategory $\mathscr{D}_{\mathfrak{t}_p;\geq 0}$ of $\mathscr{D}$ are defined, by Proposition \ref{grad}, as
\[
\mathscr{D}_{\mathfrak{t}_p;<0}=\langle \mathscr{D}_{\tilde{\mathfrak{t}};(n,n')}\rangle_{n+p(n')<0}; \qquad \mathscr{D}_{\mathfrak{t}_p;\geq 0}=\langle \mathscr{D}_{\tilde{\mathfrak{t}};(n,n')}\rangle_{n+p(n')\geq 0}.
\]
These can be equivalently described as
\begin{align*}
\mathscr{D}_{\mathfrak{t}_p;<0}&=\{X\in \mathscr{D}\text { such that } H_{\tilde{\mathfrak{t}}}^{(n,n')}(X)=\mathbf{0}\text{ for } n+p(n')\geq 0\};\\
\mathscr{D}_{\mathfrak{t}_p;\geq 0}&=\{X\in \mathscr{D}\text { such that } H_{\tilde{\mathfrak{t}}}^{(n,n')}(X)=\mathbf{0}\text{ for } n+p(n')< 0\},
\end{align*}
see, e.g., \cite[Remark 4.27]{fosco}. Therefore
\[
\heartsuit_p=\{X\in \mathscr{D}\text{ such that } H_{\tilde{\mathfrak{t}}}^{(n,n')}(X)=\mathbf{0}\text{ for } n+p(n')\neq 0\}.
\]
Equivalently, this means that
\[
\heartsuit_p=\{X\in \mathscr{D}\text{ such that } H_{\mathfrak{t}}^{n}(X)\in \langle \heartsuit_{\mathfrak{t};n'}[n]\rangle_{p(n')=-n}\}.
\]
\end{proof}
\begin{exmp}
Let $k\in \mathbb{Z}$ and let $\chi_{[k,+\infty)}\colon \mathbb{Z}\to \{0,1\}$ be the characteristic function of the interval $[k,+\infty)$. Seen as a function from $\mathbb{Z}$ to $\mathbb{Z}$, the function $\chi_{[k,+\infty)}$ is a perversity function of a very special kind: it takes exactly two values. Moreover, it is easy to see that, up to an additive constant, perversity functions taking exactly two values are precisely the characteristic functions of upper intervals in $\mathbb{Z}$. We have
\[
\chi_{[k,+\infty)}^{-1}(-n)=\begin{cases}
\emptyset &\text{if }n\neq -1,0\\
\\
[k,+\infty)&\text{if }n= -1\\
\\
(-\infty,k)&\text{if }n =0.
\end{cases}
\]
Therefore the perverse heart $\heartsuit_{\chi_{[k,+\infty)}^{-1}}$ of $\mathscr{D}$ is the full subcategory of $\mathscr{D}$ on those objects $X$ such that
\[
\begin{cases}
H_\mathfrak{t}^{n}(X[1])= \mathbf{0} \qquad \text{ if }n\neq 0,1 \\
\\
H_\mathfrak{t}^{0}(X[1])\in \langle \heartsuit_{\mathfrak{t};n'}\rangle_{n'\in [k,+\infty)}\\
\\
H_\mathfrak{t}^{1}(X[1])\in \langle \heartsuit_{\mathfrak{t};n'}[1]\rangle_{n'\in (-\infty,k)}
\end{cases}
\]
for every $n\in \mathbb{Z}$. In other words, $\heartsuit_{\chi_{[k,+\infty)}^{-1}}$ is (up to a shift by 1) the heart of the tilted $t$-structure obtained by tilting $\mathfrak{t}$ with the torsion theory on $\heartsuit_\mathfrak{t}$ given by $\mathcal{F}=\langle \heartsuit_{\mathfrak{t};n'}\rangle_{n'\in (-\infty,k)}$ and $\mathcal{T}=\langle \heartsuit_{\mathfrak{t};n'}\rangle_{n'\in [k,+\infty)}$.
\end{exmp}
\section{A zoo of examples}
The upshot of Theorem \ref{main-thm} is that out of a gluing abelian $\mathbb{Z}$-slicing on the heart of a bounded $t$-structure on a stable $\infty$-category $\mathscr{D}$
we explicitly get a natural morphism of $\mathbb{Z}$-posets
\[
\Psi\colon\mathcal{O}(\mathbb{Z}\times\mathbb{Z})\to \mathrm{ts}(\mathscr{D}),
\]
which, on the subset of perversities, acts as
\begin{align*}
\Psi\colon \mathrm{perv}_\mathbb{Z}^{\mathrm{op}}&\to \mathrm{ts}(\mathscr{D})\\
p&\mapsto (\gamma_p)_!\tilde{\mathfrak{t}},
\end{align*}
see Proposition \ref{grad2}. We can build this way whole new classes of `perverse' $t$-structures on $\mathscr{D}$. In this section we present a few examples, reinterpreting and rediscovering a few classical results from the literature on perverse $t$-structures within the unifying framework provided by the construction presented in the main section of the article.
\subsection{An example from algebra and one from geometry}
\subsubsection{The nonstandard $t$-structure from Koszul duality}
An instance of a nonstandard construction of a $t$-structure lies within the theory of Koszul duality. Namely, under suitable finiteness and semisimplicity assumptions, if $B$ and $\check{B}$ are Koszul dual algebras, then there is an equivalence of stable $\infty$-categories $\mathscr{D}^b(\mathrm{gmod}(\check{B}))\xrightarrow{\sim}\mathscr{D}^b(\mathrm{gmod}({B}))$, where $\mathrm{gmod}(\check{B})$ and $\mathrm{gmod}({B})$ are the categories of finitely generated graded modules over $\check{B}$ and $B$, respectively; see \cite{kosz}. This equivalence, however, does not preserve the standard $t$-structures, and the standard $t$-structure on $\mathscr{D}^b(\mathrm{gmod}(\check{B}))$ is mapped to a nonstandard `diagonal' $t$-structure on $\mathscr{D}^b(\mathrm{gmod}({B}))$. This diagonal $t$-structure is actually an example of a perverse $t$-structure deriving from a gluable abelian slicing on the standard heart of $\mathscr{D}^b(\mathrm{gmod}({B}))$. Namely, the standard heart $\heartsuit_\mathfrak{t}$ of $\mathscr{D}^b(\mathrm{gmod}({B}))$ is the abelian category of finitely generated $\mathbb{Z}$-graded $B$-modules. For $\phi \in \mathbb{Z}$, denote by $\heartsuit_{\mathfrak{t};\phi}$ the full subcategory of $\heartsuit_{\mathfrak{t}}$ consisting of modules concentrated in degree $\phi$. Clearly, this defines an abelian $\mathbb{Z}$-slicing $\tilde{\mathfrak{t}}$ on $\heartsuit_{\mathfrak{t}}$. Following \cite{kosz} we have
\[
\textnormal{Ext}_{B}^n(M_{\phi},M_{\psi})=0
\]
for $n>\psi - \phi$, for any modules $M_\phi\in\heartsuit_{\mathfrak{t};\phi}$ and $M_\psi\in\heartsuit_{\mathfrak{t};\psi}$. In particular, if $\phi>\psi$ and $n>0$ we have $\heartsuit_{\mathfrak{t};\phi}\boxslash \heartsuit_{\mathfrak{t};\psi}[n]$, and so
$\tilde{\mathfrak{t}}$ is a gluable slicing. Therefore, for every perversity $p$ we have a bounded perverse $t$-structure $(\gamma_p)_!\tilde{\mathfrak{t}}$ on $\mathscr{D}^b(\mathrm{gmod}({B}))$. By choosing $p$ to be the identity perversity $\mathrm{id}\colon \mathbb{Z}\to \mathbb{Z}$ we get a distinguished perverse $t$-structure $(\gamma_{\mathrm{id}})_!\tilde{\mathfrak{t}}$ on $\mathscr{D}^b(\mathrm{gmod}({B}))$. This is precisely the `diagonal' $t$-structure
considered in \cite{kosz}.
\subsubsection{Strictly perverse coherent sheaves}\label{perverse-coherent}
Let $X$ be a smooth projective variety over $\mathbb{C}$\footnote{This assumption can be greatly weakened, see \cite{bezr}.} and let $\mathscr{D} = \mathscr{D}^b(\textrm{Coh}(X))$, the (bounded) derived category of coherent sheaves on $X$, endowed with its canonical heart $\heartsuit = \textrm{Coh}(X)$. Then there is an abelian $\{0, \cdots , \dim X \}$-slicing on $\heartsuit$ given by defining $\heartsuit_i$ as the full subcategory of $\textrm{Coh}(X)$ on coherent sheaves with support of pure codimension $i$, for each $0 \leq i \leq \dim X$.
We can see $\{\heartsuit_i\}_{0\leq i\leq \dim X}$ as an abelian $\mathbb{Z}$-slicing of $\heartsuit$ via the obvious $\mathbb{Z}$-poset embedding $\{0, \cdots , \dim X \} \subseteq \hat{\mathbb{Z}}$. By Serre duality and Grothendieck vanishing, if $\mathscr{E}$ and $\mathscr{F}$ are coherent sheaves on $X$ then
\[
\mathrm{Ext}^n(\mathscr{E},\mathscr{F})=0\qquad \text{for } n<\dim \mathrm{supp}(\mathscr{F})-\dim\mathrm{supp}(\mathscr{E}).
\]
Equivalently, this can be rewritten as $\mathscr{D}(\mathscr{E},\mathscr{F}[n])=0$ for $n<i-j$, for any $\mathscr{E}$ in $\heartsuit_i$ and $\mathscr{F}$ in $\heartsuit_j$, i.e.,
$\heartsuit_{i}\boxslash\heartsuit_{j}[n]$ for $i>j+n$. In other words, $\{\heartsuit_i\}_{0\leq i\leq \dim X}$ is a perverse abelian $\mathbb{Z}$-slicing on $\textrm{Coh}(X)$. Therefore, by Theorem \ref{main-thm} and Remark \ref{main-rem}, with any strict perversity function $p$ there is associated a perverse $t$-structure on $\mathscr{D}$, i.e., one gets a lattice of $t$-structures on $\mathscr{D}$ parametrized by strict perversities. The hearts of these perverse $t$-structures are the perverse coherent sheaves constructed in \cite{bezr}, while the lattice of $t$-structures on $\mathscr{D}$ associated with the abelian $\mathbb{Z}$-slicing $\{\heartsuit_i\}_{0\leq i\leq \dim X}$ is the `distributive lattice of $t$-structures' from \cite{bondper}.
\subsection{Gluable slicings from baric structures}\label{baric}
The notion of a bounded Bridgeland $\hat{\mathbb{Z}}$-slicing is not new: it already appears in the literature under other names. Namely, it is no more than an infinite version of a semiorthogonal decomposition in the sense of \cite{semiort}, or a `\textit{baric structure}' as defined in \cite{baric}. These are a rich source of gluable abelian $\mathbb{Z}$-slicings. Namely, given a baric structure $\{\mathscr{D}_{n}\}_{n\in \mathbb{Z}}$ on a stable $\infty$-category $\mathscr{D}$ together with the datum of a bounded $t$-structure on each of the stable subcategories $\mathscr{D}_{n}$, we can look at this as the datum of a $\hat{\mathbb{Z}} \times_{\textnormal{lex}} \mathbb{Z}$-slicing $\hat{\mathfrak{t}}$ on $\mathscr{D}$. If $\hat{\mathfrak{t}}$ is gluable, then $e_!\hat{\mathfrak{t}}$ is a gluable ${\mathbb{Z}} \times_{\textnormal{lex}} \hat{\mathbb{Z}}$-slicing of $\mathscr{D}$, by Lemma \ref{incolla2}. By the results in Section \ref{sec:heart}, $e_!\hat{\mathfrak{t}}$ is equivalently an abelian slicing on the heart $\heartsuit_{{\mathfrak{t}}}$ of the bounded $t$-structure ${\mathfrak{t}}$ on $\mathscr{D}$ defined by the composition
\[
\mathcal{O}(\mathbb{Z})\to \mathcal{O}(\mathbb{Z}\times_{\mathrm{lex}}\hat{\mathbb{Z}})\xrightarrow{e_!\hat{\mathfrak{t}}} \mathrm{ts}(\mathscr{D}).
\]
and so it is a gluable abelian slicing. Moreover, and remarkably, the gluability of $\hat{\mathfrak{t}}$ can easily be made explicit. Namely, spelling out Definition \ref{gluable}, we see that $\hat{\mathfrak{t}}$ is gluable if and only if $\mathscr{D}_{i;\geq 0}\boxslash \mathscr{D}_{j;0}$ for any $i<j$. As $\mathscr{D}_{i;\geq 0}$ is generated by the subcategories $\mathscr{D}_{i}^\heartsuit[k]$ for $k\geq 0$, this is equivalent to $\mathscr{D}_{i}^\heartsuit\boxslash\mathscr{D}_{j}^\heartsuit[n]$ whenever $i<j$ and $n\leq 0$. This generalizes the gluability condition from Example \ref{example:bbd}.
\subsubsection{Gluability and the Beilinson-Soul\'e conjecture}\label{motives}
The existence of motives, which is still an open question in general, was conjectured by Grothendieck in order to build a universal Weil cohomology theory for schemes: the `motivic cohomology'. Following this input, Deligne observed that it could be easier to first construct a triangulated category (the `mixed' motives) which should play the role of the derived category of motives, and later recover the abelian category of motives as the heart of a bounded $t$-structure on mixed motives. Finally, Voevodsky succeeded in constructing a triangulated category of mixed rational motives over a characteristic zero field $\mathbbm{k}$.
This triangulated category contains, for any $n \in \mathbb{Z}$, a `Tate object' $\mathbb{Q}(n)$ that represents the $n$-th motivic cohomology functor. Let $\mathscr{D}\textnormal{TM}_\mathbbm{k}$ be the category of mixed rational Tate motives, i.e., by definition, the triangulated\footnote{As, to our knowledge, a description of the stable $\infty$-category of mixed motives is not available in the literature, in this subsection we stick to the more traditional $1$-categorical setup of triangulated categories. The same consideration applies to the examples considered in the subsequent subsections.} subcategory of the Voevodsky category of mixed motives over $\mathbbm{k}$ generated by the Tate objects. Denote by $(\mathscr{D}\textnormal{TM}_\mathbbm{k})_{m}$ the triangulated subcategory of $\mathscr{D}\textnormal{TM}_\mathbbm{k}$ generated by $\mathbb{Q}(m)$. One has an isomorphism of groups
\[
\mathscr{D}\textnormal{TM}_\mathbbm{k}(\mathbb{Q}(i),\mathbb{Q}(j)[n])=K_{2(j-i)-n}(\mathbbm{k})^{(j-i)}
\]
where $K_a(\mathbbm{k})$ is the $a$-th higher $K$-theory group of the point $\textnormal{Spec}(\mathbbm{k})$ and $K_a(\mathbbm{k})^{(b)}$ is the weight $b$ summand of $K_a(\mathbbm{k}) \otimes_{\mathbb{Z}} \mathbb{Q}$ with respect to the Adams action.
For dimensional reasons, the right hand side vanishes for $i > j$ and for $i = j$ with $n \neq 0$. In other words, the Tate objects form an infinite exceptional collection on $\mathscr{D}\textnormal{TM}_\mathbbm{k}$, which is clearly full by definition. By the general theory of semiorthogonal decompositions, this implies that the triangulated subcategories $(\mathscr{D}\textnormal{TM}_\mathbbm{k})_{m}$ with $m\in \mathbb{Z}$ are the slices of a baric structure on $\mathscr{D}\textnormal{TM}_\mathbbm{k}$ and that each of these slices is equivalent to $\mathscr{D}^b(\mathbb{Q}\text{-Vect})$,
the bounded derived category of finite-dimensional $\mathbb{Q}$-vector spaces, via an equivalence mapping $\mathbb{Q}(m)$ to $\mathbb{Q}$.
Since $\mathscr{D}^b(\mathbb{Q}\text{-Vect})$ is a bounded derived category, it comes equipped with a canonical bounded $t$-structure. The equivalences $(\mathscr{D}\textnormal{TM}_\mathbbm{k})_{m}\simeq \mathscr{D}^b(\mathbb{Q}\text{-Vect})$ then endow each slice of the baric structure with a bounded $t$-structure, whose heart $(\mathscr{D}\textnormal{TM}_\mathbbm{k})_{m}^\heartsuit$ is the abelian category generated by $\mathbb{Q}(m)$. The datum of these canonical $t$-structures on the slices of the baric structure $\{(\mathscr{D}\textnormal{TM}_\mathbbm{k})_{m}\}_{m\in \mathbb{Z}}$ defines a Bridgeland $\hat{\mathbb{Z}}\times_{\textnormal{lex}} \mathbb{Z}$-slicing on $\mathscr{D}\textnormal{TM}_\mathbbm{k}$ which, by the results in Subsection \ref{baric}, is gluable if and only if
\[
(\mathscr{D}\textnormal{TM}_\mathbbm{k})_{i}^\heartsuit\boxslash(\mathscr{D}\textnormal{TM}_\mathbbm{k})_{j}^\heartsuit[n]
\]
whenever $i<j$ and $n\leq 0$. As $(\mathscr{D}\textnormal{TM}_\mathbbm{k})_{m}^\heartsuit$ is generated by $\mathbb{Q}(m)$, the gluability condition is equivalent to
\[
\mathscr{D}\textnormal{TM}_\mathbbm{k}(\mathbb{Q}(i),\mathbb{Q}(j)[n])=\mathbf{0}
\]
whenever $i<j$ and $n\leq 0$, and therefore to
\[
K_{2(j-i)-n}(\mathbbm{k})^{(j-i)} = \mathbf{0}
\] whenever $i < j$ and $n \leq 0$. This is exactly the Beilinson-Soul\'e standard vanishing conjecture, which is known to hold, for instance, when $\mathbbm{k}$ is a number field due to Borel's computation of the ranks of K-theory groups in this case \cite{borel}. When the conjecture holds, by applying $e_!$ we get a Bridgeland $\mathbb{Z} \times_{\textnormal{lex}} \hat{\mathbb{Z}}$-slicing on $\mathscr{D}\textnormal{TM}_\mathbbm{k}$ and thus in particular a bounded $t$-structure whose heart contains the desired unmixed Tate motives over $\mathbbm{k}$. In other words, we recover a well known but
nontrivial fact (see \cite{levine}) by an abstract and very general argument: assuming the Beilinson-Soul\'e conjecture is true, (Tate) motives exist. \\
Moreover, following the reasoning recalled at the beginning of this Section, we also get a $t$-structure on $\mathscr{D}\textnormal{TM}_\mathbbm{k}$ for each perversity function on $\mathbb{Z}$. These are the `\textit{perverse motives}' appearing in \cite{permot}.
\subsubsection{Three more examples}
There are a number of other constructions in the literature which are particular cases of the one we presented here. Just to mention a few: in \cite{beil} Beilinson defines a notion of `\textit{filtered structure}' on a triangulated category. This is no more than a baric structure $\{ \mathscr{D}_{n} \}_{n \in \hat{\mathbb{Z}}}$ with some additional data and properties. Starting with a $t$-structure on $\mathscr{D}_0$, Beilinson rearranges it into a $t$-structure on $\mathscr{D}$. One can easily check that the axioms of a filtered structure guarantee that it defines a gluable $\hat{\mathbb{Z}} \times_{\textnormal{lex}} \mathbb{Z}$-slicing and that the distinguished $t$-structure obtained by gluing coincides with Beilinson's new $t$-structure on $\mathscr{D}$. \\
\par
In \cite{macri}, Macr\'i starts with a finite `Ext exceptional' collection on a certain triangulated category $\mathscr{D}$ and gets a distinguished $t$-structure on $\mathscr{D}$. This construction actually goes along the exact lines sketched in Subsection \ref{motives}. Namely, when translated into the language of this note, a finite exceptional collection is just a baric structure with finitely many nonzero slices, all equivalent to the derived category of finite-dimensional vector spaces over some fixed field, and the condition of being `Ext exceptional' is identified with the gluability condition. \\
\par
Finally, a possibly more exotic instance is in \cite{lagra}. Here, starting with a suitable $\mathbb{R}$-slicing on the Fukaya category $\mathscr{D}_0$ of a symplectic manifold $M$, Hensel builds a $t$-structure on the Fukaya category $\mathscr{D}$ of $\mathbb{C} \times M$. This is done by embedding $\mathscr{D}_0$ into a triangulated category $\mathscr{D}$ as the zeroth slice of a baric structure. Lemma 7.1 from \cite{lagra} can then be reformulated as our gluability condition, and the distinguished $t$-structure obtained by gluing is seen to be the $t$-structure on the Lagrangian cobordism category exhibited in \cite{lagra}. \\
\newpage
\afterpage{\blankpage}
\clearpage
\bibliographystyle{alpha}
\section*{Acknowledgment}
This work is mostly part of my master's thesis, written at Würzburg University.
I want to thank Stefan Waldmann, my advisor at the time, very much for the many helpful discussions and ideas he gave me. He also suggested this topic to me.
I also want to thank Martin Bordemann for his comments and for discussions.
\section{Introduction}
The aim of deformation quantization is to pass from a classical physical system, described by a Poisson or symplectic manifold $M$, to a quantum theory which has this given system as its classical limit. For this one introduces a star product on the formal power series $\CC^\infty(M)[[\lambda]]$ of smooth functions on $M$. A star product is an associative but in general noncommutative product. From it one recovers the classical Poisson bracket by taking the limit $\hbar \rightarrow 0$ of the star commutator $\frac{\mathrm {i}}{\hbar} [\cdot,\cdot]_\star$. This method was introduced in \citep{bayen}.
Another idea, which is more recent, is to deform classical field theories by replacing the commutative algebra of functions on the spacetime manifold by a noncommutative one. The idea here is to deform the commutator of the coordinate functions, which classically is $[x_i,x_j] =0$, to something non-zero. There are many different approaches to this coming from theoretical physics, which lead to noncommutative field theories, see \citep{aschieri,wess,fredenhagen, douglas,schupp}. Most of these approaches only consider the case of $\mathbb{R}^4$ with a Weyl-Moyal product; however, a more general approach is also needed. On the other hand there are quite concrete solutions to the corresponding noncommutative Einstein equations \citep{schenkel}.
This leads to what is called noncommutative geometry, which also has been studied from a more mathematical point of view, see e.g. \citep{connes}.
If one wants to deform a field theory in this way, one also needs to deform bundles over the spacetime manifold, especially principal bundles and vector bundles, because this is where the fields, the connection or the curvature (which is the field strength tensor) live.
There are several approaches to doing this. One is due to Connes and uses a so-called spectral triple, see \citep{connes}. It was shown by Hawkins in \citep{hawkins} that this approach only works in some situations.
Here we want to consider what happens in the context of deformation quantization. For this we consider the quite general situation of a fibered manifold, which can be specialized to principal bundles and other cases.
The weakest way is to deform those into a module, which has been done in \citep{weisphd,art} and always works. But for many applications this seems not to be enough. For example, to write the Leibniz rule $\d(fa) = (\d f) a + f \d a$, with $f$ in some bundle over $M$ and $a \in \CC^\infty(M)$, in this form one would already need a bimodule. For the case of vector bundles this was also considered, e.g., in \citep{waldmanne}. One can also use a Drinfeld twist, see e.g. \citep{aschieri}, to do noncommutative geometry. But obstructions exist here as well \citep{weber}. Given a Drinfeld twist one can also define a star product, so to some extent what we do is more general.
Also in the context of noncommutative geometry, Hopf-Galois extensions are often considered as a generalization of principal bundles, e.g. \cite{brezinski,MR2175995}, but there one cannot deal with symplectic base manifolds in general.
So the aim of this paper is to investigate under which conditions such bimodule structures for a fibered manifold $P \to M$ exist. It turns out that especially in the symplectic case there are strong obstructions, and it is only possible to get such a bimodule in very special cases, e.g. if the bundle is trivial or there exists a flat connection on $P$. To be precise, one gets the structure of a Poisson module on $\CC^\infty(P)$ over the Poisson algebra $\CC^\infty(M)$. This can be used to define a morphism of differential operators $\operatorname{DiffOp}(M) \to \operatorname{DiffOp}(P)$ which respects the fiber projection. This can be seen as a generalization of a flat lift of the vector fields on $M$.
Since the order by order construction of these module and bimodule structures is equivalent to solving equations in the Hochschild cohomology $\mathbf{H\!H}^\bullet(\CC^\infty(M),\operatorname{DiffOp}(P))$, the second aim of this paper is to compute some of these cohomologies, namely $\mathbf{H\!H}^\bullet(\CC^\infty(M),\allowbreak \CC^\infty(N))$ and $\mathbf{H\!H}^\bullet(\CC^\infty(M),\allowbreak \operatorname{DiffOp}(N))$ for a sufficiently nice map $\operatorname{pr}:N\rightarrow M$. To be precise, we consider here the differential or continuous Hochschild cohomology and not the purely algebraic one. This gives us, among other things, a generalization of the well known Hochschild-Kostant-Rosenberg theorem.
In fact we have
\begin{theorem}
Let $N \xrightarrow{\operatorname{pr}} M$ be such that $ \operatorname{pr}(N)$ is a closed submanifold of $M$. Then
\begin{equation*}
\mathbf{H\!H}^\bullet_\mathrm{diff}(\CC^\infty(M),\operatorname{DiffOp}(N)) \cong \factor{\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)}}{\sprod{\mathfrak{X}(\operatorname{pr}(N))}} \otimes_{\CC^\infty(M)} \operatorname{DiffOp_{ver}}(N)
\end{equation*}
as $\CC^\infty(M)$-bimodules, where $\mathfrak{X}^\bullet(M)$ denotes the multivector fields on $M$ and $\sprod{x}$ denotes the ideal generated by $x$.
\end{theorem}
The paper is structured as follows: In the first section we recall the basics of deformation quantization.
In the second section we first summarize the results from \citep{weisphd} and \citep{art} on module deformations and their relation to Hochschild cohomology. We then find the obstruction for a bimodule deformation, which in the symplectic case turns out to be the existence of a flat lift.
In the last section, we compute the Hochschild cohomologies $\mathbf{H\!H}^\bullet(\CC^\infty(M),\CC^\infty(N))$ and $\mathbf{H\!H}^\bullet(\CC^\infty(M),\operatorname{DiffOp}(N))$ for a map $\operatorname{pr}$ between two manifolds $N$ and $M$, such that $\operatorname{pr}(N)$ is a closed submanifold of $M$, with the bimodule structure given by the pullback along $\operatorname{pr}$. This is done by using the Koszul complex of a convex set in $\mathbb{R}^n$, which we also define in this section. Computing this cohomology is useful because its vanishing in certain cases proves that every fiber bundle can be deformed into a module, and it also shows that, in the case of a bimodule, problems are in general to be expected due to the fact that the Hochschild cohomology is non-trivial.
\section{Deformation of fibered manifolds}
\subsection{Star products}
We want to recall some basic definitions and facts about the deformation quantization of smooth manifolds and star products.
\begin{defn}[Star product]
A (formal) star product $\star$ on a manifold $M$ is a bilinear associative operation $\CC^\infty(M)[[\lambda]] \times \CC^\infty(M)[[\lambda]] \rightarrow \CC^\infty(M)[[\lambda]]$ satisfying the following properties for all $f,g \in \CC^\infty(M)$:
\begin{itemize}
\item $ 1 \star f = f \star 1 = f$,
\item $f \star g = f\cdot g + \O(\lambda)$,
\item $f \star g = \sum_{k=0}^\infty C_k(f,g) \lambda^k$,
\end{itemize}
with bilinear operators $C_k$. We assume that all $C_k$ are bidifferential operators.
It is called natural if every $C_k$ is a differential operator of order $k$.
\end{defn}
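The guiding example, written here as a sketch under the usual conventions for the Weyl-Moyal product (conventions for the prefactor vary in the literature), is a constant Poisson tensor $\pi^{kl}$ on $\mathbb{R}^n$:
\begin{equation*}
f \star g = \mu \circ \mathrm{e}^{\frac{\mathrm{i}\lambda}{2}\, \pi^{kl}\, \partial_k \otimes \partial_l} (f \otimes g),
\qquad \mu(f \otimes g) = f \cdot g.
\end{equation*}
Here the term of order $\lambda^k$ differentiates each argument exactly $k$ times, so this star product is natural in the above sense.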
We define the star commutator for $a,b \in \CC^\infty(M)[[\lambda]]$ by
$[a,b]_\star = a \star b - b\star a$.
As usual, the star commutator satisfies the Leibniz rule and the Jacobi identity and so gives a non-commutative Poisson algebra. Also, the adjoint action $[a,\cdot]_\star$ is a derivation of $\CC^\infty(M)[[\lambda]]$ for all $a \in \CC^\infty(M)[[\lambda]]$.
It is well known that the first order term of a star product defines a Poisson bracket as follows
\begin{equation}
\p{f,g} = \frac{\mathrm {i}}{2\lambda} [f,g]_\star \Big|_{\lambda=0} \text{ for } f,g \in \CC^\infty(M).
\end{equation}
\begin{defn}[Equivalence of star products \citep{bayen}]
Two star products $\star$, $\star'$ are called equivalent if there exists a formal power series of differential operators
$T= \operatorname{id} + \sum_{k=1}^\infty T_k \lambda^k$, with $T(1) =1$ such that
\begin{equation}
T(f) \star T(g) = T(f \star' g)
\end{equation}
\end{defn}
The operator $T$ in the above definition is always invertible and indeed, given a star product $\star$,
$f \star' g := T^{-1}(T(f) \star T(g))$ always gives a new equivalent star product. We recall:
\begin{lemma}
Two equivalent star products give rise to the same Poisson bracket.
\end{lemma}
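For completeness we sketch the argument: comparing the terms of order $\lambda$ in $T(f) \star T(g) = T(f \star' g)$ gives
\begin{equation*}
C_1(f,g) + T_1(f)\, g + f\, T_1(g) = C'_1(f,g) + T_1(fg).
\end{equation*}
Antisymmetrizing in $f$ and $g$ cancels all terms containing $T_1$, so the antisymmetric parts of $C_1$ and $C'_1$, and hence the induced Poisson brackets, coincide.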
\subsection{Module deformations}
We want to find criteria for which star products on a manifold $M$ and which fibered manifolds $P$ over $M$ it is possible to get a deformation of $\CC^\infty(P)$. We consider three different possibilities, namely deformation as a module, as a bimodule and as a subalgebra; each version is stronger than the previous one. For the module case essentially everything is known and works well \citep{art}. For the other cases this is not true: we give some obstructions showing why things cannot always work, but also some examples where they do.
For the convenience of the reader we recall some definitions and facts about module deformations from \citep{art,weisphd}, for proofs see there.
\begin{defn}
A (left) module deformation of a fibered manifold $P \xrightarrow{\operatorname{pr}} M$, where $M$ carries a star product $\star$, is a $(\CC^\infty(M)[[\lambda]],\star)$-left module structure $\bullet$ on $\CC^\infty(P)[[\lambda]]$, such that
\begin{equation}
a \bullet f = \operatorname{pr}^*a f + \sum_{k=1}^\infty \lambda^k L_k(a,f) = \sum_{k=0}^\infty \lambda^k L_k(a,f),
\end{equation}
where the $L_k \in \operatorname{DiffOp}^\bullet(\CC^\infty(M),\CC^\infty(P);\CC^\infty(P))$ are bidifferential operators.
A module deformation is called fiber preserving if $a \bullet \operatorname{pr}^* b = \operatorname{pr}^* (a \star b)$. It is called natural if all $L_k$ are differential operators of order up to $k$ on $M$ and $P$.
\end{defn}
The local form of a $L_k$ is given by
\begin{equation}
L_k(a,f) = \sum_{I,J} \operatorname{pr}^*(\partial_I a) L^{I,J}_k (\partial_J f),
\end{equation}
where $I,J$ are multiindices and $L^{I,J}_k \in \CC^\infty(P)$ are coefficient functions.
Being fiber preserving is equivalent to $a \bullet 1 = \operatorname{pr}^* a$ for all $ a \in \CC^\infty(M)$, since then $a \bullet \operatorname{pr}^* b = a \bullet (b \bullet 1) = (a \star b) \bullet 1 = \operatorname{pr}^* (a \star b)$.
Similarly one can define a right module deformation. In this case we write $f \bullet a = \operatorname{pr}^*a f + \sum_{k=1}^\infty \lambda^k R_k(f,a)$.
It is also possible to define a module deformation for an arbitrary map $\operatorname{pr} : P \rightarrow M$ in a similar way.
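As a simple illustration (a sketch, assuming the trivial situation $P = M \times F$ with $\operatorname{pr}$ the projection onto the first factor), one can let $\star$ act on the $M$-variable only, treating the fiber variable as a parameter:
\begin{equation*}
(a \bullet f)(x,y) := \big(a \star f(\,\cdot\,,y)\big)(x), \qquad a \in \CC^\infty(M),\ f \in \CC^\infty(M \times F).
\end{equation*}
Associativity of $\star$ immediately gives $(a \star b) \bullet f = a \bullet (b \bullet f)$, and $a \bullet 1 = \operatorname{pr}^* a$ shows that this module deformation is fiber preserving.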
\begin{defn}
Two module deformations $\bullet$ and $\tilde \bullet$ are called equivalent if there exits a formal series $T = \operatorname{id} +\sum_{k=1}^\infty T_k \lambda^k$ of differential operators on $P$ such that
\begin{equation}
T(a \bullet f) = a \mathbin{\tilde \bullet} T(f)
\end{equation}
for all $a \in \CC^\infty(M)$ and $f \in \CC^\infty(P)$.
\end{defn}
Since $T$ as above is always invertible, given a module deformation $\bullet$, one can define an equivalent module by $a \mathbin{\tilde \bullet} f= T^{-1}(a \bullet T(f))$. If the module is fiber preserving and $T$ satisfies $T(\operatorname{pr}^*a) = \operatorname{pr}^*a$ for all $a \in \CC^\infty(M)$, the new module $\tilde \bullet$ will also be fiber preserving.
The bidifferential operators $L_k$ can also be considered as elements of $\operatorname{DiffOp}(\CC^\infty(M),\allowbreak \operatorname{DiffOp}(P))$ by considering the operators $a \mapsto L_k(a,\cdot)$. So it is possible to find the obstruction to an order by order construction of a module structure in the differential Hochschild cohomology $\mathbf{H\!H}^2_\mathrm{diff}(\CC^\infty(M),\operatorname{DiffOp}(P))$. This goes back to \citep{gerstenhaber1}.
\begin{lemma}[{\citep[Proposition 2.4.3]{weisphd}}] \label{th:mho}
Assume that $L^{(r)} = \sum_{k=0}^r \lambda^k L_k$ is a $(\CC^\infty(M),\star)$-left module structure up to order $\lambda^r$, with $a \star b = \sum_{k=0}^\infty \lambda^k C_k(a,b)$. Then $L^{(r+1)} = L^{(r)} + \lambda^{r+1} L_{r+1}$ is a module structure up to order $r+1$ if
\begin{equation}
\delta L_{r+1} = R_r,
\end{equation}
where $\delta$ is the Hochschild differential of $\mathbf{HC}^\bullet(\CC^\infty(M),\operatorname{DiffOp}(P))$ and $R_r$ is given by
\begin{equation}
R_r(a,b) = \sum_{k=0}^r L_k(C_{r+1-k}(a,b),\cdot) - \sum_{k=1}^r L_k (b,L_{r+1-k}(a,\cdot)).
\end{equation}
Also $\delta R_r =0$, whence the obstruction for an order by order construction of a module structure is $[R_r] \in \mathbf{H\!H}^2_\mathrm{diff}(\CC^\infty(M),\operatorname{DiffOp}(P))$.
\end{lemma}
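For instance, at the lowest order $r = 0$ the recursion only involves $L_0(a,f) = \operatorname{pr}^* a\, f$ and reads
\begin{equation*}
(\delta L_1)(a,b) = R_0(a,b) = L_0(C_1(a,b), \,\cdot\,) = \operatorname{pr}^* C_1(a,b),
\end{equation*}
i.e.\ the first order term $L_1$ has to trivialize the Hochschild $2$-cocycle determined by $C_1$.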
Similarly to the above lemma also the obstruction for the construction of an equivalence order by order lies in a certain Hochschild cohomology.
\begin{lemma}[{\citep[Lemma 2.2]{art}}] \label{th:eho}
Assume that $T^{(r)} = \operatorname{id} + \lambda T_1 + \cdots + \lambda^r T_r$ is an equivalence up to order $r$ between two left module structures $\bullet$ and $\mathbin{\tilde \bullet}$, with differential operators $T_k$. Then the condition for $T^{(r+1)} = T^{(r)} + \lambda^{r+1} T_{r+1}$ to be an equivalence up to order $r+1$ is given by
\begin{equation}
\delta T_{r+1} = E_r
\end{equation}
where $E_r(a)(f) = \sum_{s=0}^r \big(L_{r+1-s}(a, T_s(f)) - T_s(L_{r+1-s}(a,f))\big)$.
Moreover $\delta E_r =0$ so the obstruction for an order by order construction lies in $\mathbf{H\!H}^1_\mathrm{diff}(\CC^\infty(M),\operatorname{DiffOp}(P))$ for any order.
\end{lemma}
In fact the proofs are completely algebraic, so they hold for any algebra and module.
Concerning existence and equivalence, it was shown in \citep[Theorem 1.5]{art} that:
\begin{theorem}
Given a fibered manifold $P \xrightarrow{\operatorname{pr}} M$ and a star product on $M$, there always exists a (fiber preserving) module deformation, which is unique up to equivalence.
\end{theorem}
This follows also from \cref{th:hkrd} using the previous statements.
\subsection{Bimodule deformations}\label{ch:bim}
We now come to the study of bimodule deformations of a fibered manifold $P \xrightarrow{\operatorname{pr}} M$.
\begin{defn}[Bimodule deformation]
A bimodule deformation of a surjective submersion is a left and right module deformation, $\bullet$ and $\mathbin{\bullet'}$ resp., such that
\begin{equation}
(a \bullet f) \mathbin{\bullet'} b = a \bullet (f \mathbin{\bullet'} b)
\end{equation}
for all $a,b\in \CC^\infty(M)$ and $f\in \CC^\infty(P)$, i.e. $\CC^\infty(P)[[\lambda]]$ becomes a $(\CC^\infty(M)[[\lambda]],\star)$-bimodule.
It is called fiber preserving if both module structures are fiber preserving.
\end{defn}
In the following we will denote both module structures by $\bullet$, since it will be clear from the context which one is meant.
Also for the case of bimodules it is possible to define a notion of equivalence:
\begin{defn}
Two bimodule deformations $\bullet$ and $\mathbin{\tilde \bullet}$ are called equivalent if there exists a
formal power series $T= \operatorname{id} + \sum_{k=1}^\infty T_k \lambda^k$ of differential operators on $P$ such that
\begin{align}
T(a \bullet f) & = a \mathbin{\tilde \bullet} T(f) \\
T(f \bullet a) & = T(f) \mathbin{\tilde \bullet} a
\end{align}
In this case $T$ is called the bimodule equivalence.
\end{defn}
Note that $T$ is a left and a right module equivalence.
A simple calculation gives the following
\begin{lemma}
Given a bimodule deformation $(\bullet,\mathbin{\bullet'})$ and $T$ as in the above definition $(\mathbin{\tilde \bullet},\mathbin{\tilde \bullet}')$ given by
\begin{align}
a \mathbin{\tilde \bullet} f = T^{-1}(a \bullet T(f)) \\
f \mathbin{\tilde \bullet}' a = T^{-1}(T(f) \bullet a)
\end{align}
is an equivalent bimodule deformation.
\end{lemma}
In the definition of a bimodule deformation one can also consider the case where the star product that acts from the left is different from the one that acts from the right. The following proposition shows that in nice situations the two cannot differ in an essential way.
\begin{prop}
Given a bimodule $(\bullet,\mathbin{\bullet'})$ over $\star$ and $\star'$, the two Poisson brackets are the same.
If the bimodule is fiber preserving we even have $\star = \star'$.
\end{prop}
\begin{proof}
Since all left and right modules are equivalent and there always exists a fiber preserving one, we can assume that $\mathbin{\bullet'}$ is fiber preserving, i.e. $R_1(\operatorname{pr}^*a,b) = \operatorname{pr}^* C'_1(a,b)$, because we can use this right module equivalence as a bimodule equivalence.
We can also find a left module equivalence $T = \operatorname{id} + T_1 \lambda + \O(\lambda^2)$ which would make the left module fiber preserving. This means there exists a $T_1$ such that the following equation holds:
$$L_1(a,\operatorname{pr}^* b) =\operatorname{pr}^* C_1(a,b) + a T_1(\operatorname{pr}^* b) - T_1(\operatorname{pr}^* ab).$$
From the bimodule condition $ a \bullet (f \bullet b) = ( a \bullet f) \bullet b$ in first order we get
$$\operatorname{pr}^* b L_1(a,f) + R_1(\operatorname{pr}^* a f,b) - \operatorname{pr}^* a R_1(f,b) - L_1(a,\operatorname{pr}^* b f) =0$$
Inserting $L_1$ and $R_1$ as above and setting $f=1$ gives:
\begin{align*}
\operatorname{pr}^* b C'_1(a,1) + \operatorname{pr}^*(ba) T_1(1) - b T_1(\operatorname{pr}^*a) -\operatorname{pr}^* C'_1(a,b) & \\
+ a T_1(\operatorname{pr}^* b ) -T_1(\operatorname{pr}^*ab) + \operatorname{pr}^* C_1(a,b) & =0
\end{align*}
Exchanging $a$ and $b$ then subtracting the two equations gives
\begin{equation*}
C_1(a,b) - C'_1(a,b) - ( C_1(b,a) - C'_1(b,a)) =0,
\end{equation*}
as we wanted.

The second statement follows from $a \bullet (1 \mathbin{\bullet'} b) = a \bullet \operatorname{pr}^* b = \operatorname{pr}^* (a \star b)$ and
$ (a \bullet 1) \mathbin{\bullet'} b = \operatorname{pr}^*a \mathbin{\bullet'} b =\operatorname{pr}^*( a \star' b)$.
\end{proof}
In the last section we showed that an order by order construction of a module is equivalent to solving equations in a certain Hochschild cohomology. The same can be done for a bimodule deformation.
To see this, one uses the well known fact that an $\mathscr{A}$-bimodule is equivalent to an $\mathscr{A}^e = \mathscr{A} \otimes \mathscr{A}^\mathrm{opp}$-module. With this one gets that the right Hochschild cohomology to consider is $\mathbf{H\!H}^\bullet(\mathscr{A}^e,\operatorname{DiffOp}(P))$.
We now want to define a semi-classical limit of a bimodule deformation, which in some sense generalizes the fact that the semiclassical limit of a star product is a Poisson bracket.
\begin{defn}
Given a surjective submersion $P\rightarrow M$ with a bimodule structure
$(\bullet ,\mathbin{\bullet'})$, with $a \bullet f = \sum_{k=0}^\infty L_k(a,f) \lambda^k$ and $f \mathbin{\bullet'} a = \sum_{k=0}^\infty R_k(f,a) \lambda^k$,
we can define the semi-Poisson bracket (sP-bracket)
$\FP{\cdot,\cdot} : \CC^\infty(M) \times \CC^\infty(P) \rightarrow \CC^\infty(P)$
by
\begin{equation}
\FP{a,f} := \frac{\mathrm {i}}{2}(L_1(a,f) - R_1(f,a)).
\end{equation}
\end{defn}
The factor $\frac{\mathrm {i}}{2}$ assures compatibility with the Poisson bracket.
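Indeed, for a fiber preserving bimodule deformation one has $L_1(a, \operatorname{pr}^* b) = \operatorname{pr}^* C_1(a,b)$ and $R_1(\operatorname{pr}^* b, a) = \operatorname{pr}^* C_1(b,a)$, so
\begin{equation*}
\FP{a, \operatorname{pr}^* b} = \frac{\mathrm{i}}{2} \big( \operatorname{pr}^* C_1(a,b) - \operatorname{pr}^* C_1(b,a) \big) = \operatorname{pr}^* \p{a,b},
\end{equation*}
using the convention $\p{a,b} = \frac{\mathrm{i}}{2}(C_1(a,b) - C_1(b,a))$ from above.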
\begin{remark}
One can make the same definition if $\mathscr{A}$ is an arbitrary commutative algebra and $\mathscr{M}$ is a symmetric $\mathscr{A}$-bimodule. Also the following proposition remains true in this context.
\end{remark}
\begin{prop}\label{fpprop}
The sP-bracket satisfies
\begin{enumerate}[i)]
\item $\FP{ab,f} = \operatorname{pr}^* a\FP{b,f} + \operatorname{pr}^* b \FP{a,f}$
\item $\FP{a, \operatorname{pr}^* b f} = \operatorname{pr}^* \p{a,b} f + \operatorname{pr}^* b \FP{a,f}$
\item $\FP{a, \FP{b,f}} - \FP{b, \FP{a,f}} -\FP{ \p{a,b},f} =0$,
\end{enumerate}
for all $a,b \in \CC^\infty(M)$ and $f \in \CC^\infty(P)$.
So especially the sP-bracket is a derivation in the first argument.\\
If the bimodule is fiber-preserving, we also have $\FP{a,\operatorname{pr}^* b} = \operatorname{pr}^* \p{a,b}$.
\end{prop}
\begin{proof}
Similarly to how one obtains the properties of a Poisson bracket from the associativity of the star product, one gets these properties of the sP-bracket from the compatibility of the left and right module structures with the star product.
\begin{enumerate}[ {ad} i)]
\item We consider the equation
\begin{equation*}
(a \star b) \bullet f - f \bullet (a \star b) = a \bullet (b \bullet f) - a \bullet ( f \bullet b) + (a \bullet f) \bullet b - (f \bullet a) \bullet b
\end{equation*}
Evaluating in order $\lambda$ gives
\begin{align*}
\operatorname{pr}^* C_1(a,b) f & + L_1(ab,f) - R_1(f,ab) - \operatorname{pr}^* C_1(a,b) f \\
& = L_1(a, \operatorname{pr}^* b f) + \operatorname{pr}^* a L_1(b,f) - \operatorname{pr}^* a R_1(f,b) - L_1(a,\operatorname{pr}^* b f) \\
& + \operatorname{pr}^* b L_1(a,f) + R_1(\operatorname{pr}^* a f,b) - \operatorname{pr}^* b R_1(f,a) - R_1(\operatorname{pr}^* a f ,b)
\end{align*}
Some of the terms cancel and the remaining ones give the desired equation.
\item $a \bullet (b \bullet f) - (b \bullet f) \bullet a = b \bullet (a \bullet f - f \bullet a) + (a \star b - b\star a) \bullet f$ in first order gives the result.
\item Using $ \FK{a, f} = a \bullet f - f \bullet a$, this equation is the second order term of
\begin{equation}
\FK{a, \FK{b,f}} = \FK{ [a,b]_\star , f} + \FK{b,\FK{a,f}}.
\end{equation}
This does not involve the second order terms of the module structures, since the zeroth order term of $\FK{\cdot,\cdot}$ is zero.
\end{enumerate}
\end{proof}
A bracket which satisfies the properties given in the previous proposition is sometimes called a Poisson module. Note that these properties are completely algebraic.
In the following we will call a bracket which satisfies these properties a semi-Poisson bracket.
\begin{prop}
The sP-bracket of a bimodule deformation is invariant under bimodule equivalence transformations. That is, let $(\bullet,\mathbin{\bullet'})$ and $(\mathbin{\tilde \bullet},\mathbin{\tilde \bullet}')$ be two equivalent bimodules and let $\FP{\cdot,\cdot}$ and $\FP{\cdot,\cdot}'$ be the corresponding sP-brackets; then we have
\begin{equation}
\FP{a,f} = \FP{a,f}' \textrm{ for all } a\in \CC^\infty(M), f \in \CC^\infty(P).
\end{equation}
\end{prop}
\begin{proof}
Let $T \in \operatorname{DiffOp}(P) [[\lambda]]$ be the bimodule equivalence, then one has
\begin{equation*}
\tilde L_1 (a,f) = L_1(a,f) + a T_1(f) - T_1(af)
\end{equation*}
and
\begin{equation*}
\tilde R_1 (f,a) = R_1(f,a) + a T_1(f) - T_1(af).
\end{equation*}
Subtracting these two one obtains
\begin{equation*}
\tilde L_1(a,f) - \tilde R_1(f,a) = L_1(a,f) -R_1(f,a).
\end{equation*}
\end{proof}
This means for example that two bimodule deformations with different sP-brackets cannot be equivalent.
\begin{defn}
We will call a sP-bracket fiber preserving if $\FP{a,\operatorname{pr}^* b }= \operatorname{pr}^* \p{a,b}$; this is equivalent to $\FP{a,1} =0$, since by ii) $\FP{a,\operatorname{pr}^* b \cdot 1} = \operatorname{pr}^* \p{a,b} + \operatorname{pr}^* b \FP{a,1}$.
We will call a sP-bracket natural if $\FP{a,fg}= \FP{a,f}g + \FP{a,g}f$. A natural sP-bracket is also fiber preserving, since then $\FP{a,1} = \FP{a,1 \cdot 1} = 2\FP{a,1}$ forces $\FP{a,1} = 0$.
\end{defn}
\begin{prop}
If a bimodule deformation is fiber preserving so is the corresponding sP-bracket.
If it is fiber preserving and natural the corresponding sP-bracket is natural.
\end{prop}
We recall the definition of a Hamiltonian vector field. Let $M$ be a Poisson manifold and $a \in \CC^\infty(M)$; then we define the Hamiltonian vector field $X_a \in \mathfrak{X}(M)$ by $X_a(b) = \p{a,b}$ for $b \in \CC^\infty(M)$. Note that sometimes a different sign is chosen.
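As a sketch on $\mathbb{R}^2$ with canonical coordinates $(q,p)$ and the sign convention $\p{q,p}=1$ (one of the two common choices), the Hamiltonian vector field reads
\begin{equation*}
X_a = \frac{\partial a}{\partial q}\frac{\partial}{\partial p} - \frac{\partial a}{\partial p}\frac{\partial}{\partial q},
\qquad\text{so that}\qquad
X_a(b) = \frac{\partial a}{\partial q}\frac{\partial b}{\partial p} - \frac{\partial a}{\partial p}\frac{\partial b}{\partial q} = \p{a,b}.
\end{equation*}
With the opposite sign convention for $\p{\cdot,\cdot}$ both signs flip.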
\begin{prop}\label{lift}
Given a natural sP-bracket on $P \xrightarrow{\operatorname{pr}} M$, where the corresponding Poisson bracket is symplectic, we get a horizontal lift, which is given on Hamiltonian vector fields by $X_a^h(f) = \FP{a,f}$ for $a\in \CC^\infty(M)$ and $f \in \CC^\infty(P)$, and thereby a connection on $P$.
\end{prop}
\begin{proof}
Since $M$ is symplectic it is enough to specify the horizontal lift on Hamiltonian vector fields $X_a \in \mathfrak{X}(M)$, since these span the tangent space at every point. For these we set $X_a^h(f) = \FP{a,f}$ for all $f \in \CC^\infty(P)$.
This is well-defined because the sP-bracket is a derivation in the first argument, so it only depends on the differential of $a$. Since the Poisson structure is symplectic, this differential is uniquely determined by the vector field $X_a$.
Because we assume $\FP{\cdot,\cdot}$ to be natural it is also a derivation in the second argument and so $X_a^h$ is really a vector field.
Finally, since $X_a^h(\operatorname{pr}^* b) = \FP{a, \operatorname{pr}^* b} =\operatorname{pr}^* \p{a,b} = \operatorname{pr}^* X_a(b)$, we get a horizontal lift.
\end{proof}
Now we come to a main result of this section:
\begin{theorem}\label{th:flat}
Given a natural sP-bracket on $P \xrightarrow{\operatorname{pr}} M$, where the corresponding Poisson bracket is symplectic, the connection defined in \cref{lift} is flat.\\
So given a fibered manifold $P$ over a symplectic manifold $M$ a bimodule deformation with natural sP-bracket can only exists if $P$ admits a flat connection.
\end{theorem}
\begin{proof}
Since the manifold is assumed to be symplectic, it suffices to compute the curvature on Hamiltonian vector fields; for these we get, using the Jacobi identity (\Cref{fpprop}),
\begin{align*}
R(X_a,X_b)(f) & = [X_a^h ,X_b^h](f) - [X_a,X_b]^h(f) \\
& = X_a^h (X_b^h(f)) - X_b^h (X_a^h(f)) - X_{\p{a,b}} (f) \\
& =\FP{a,\FP{b,f}}-\FP{b,\FP{a,f}} - \FP{ \p{a,b} ,f} =0
\end{align*}
for all $a,b \in \CC^\infty(M)$ and $f \in \CC^\infty(P)$.
\end{proof}
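As a minimal consistency check, assume the trivial bundle $P = M \times F$ with the sP-bracket acting on the $M$-variable only, i.e.
\begin{equation*}
\FP{a, f}(x,y) = \p{a, f(\,\cdot\,,y)}(x).
\end{equation*}
Then the horizontal lift of \cref{lift} is $X_a^h = X_a \oplus 0 \in \mathfrak{X}(M \times F)$; these lifted fields close under the Lie bracket, so the corresponding connection is the flat product connection, in accordance with the theorem.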
This can be generalized in some sense to the case where the sP-bracket is not natural.
\begin{prop}\label{rm:lift}
In the symplectic case the sP-bracket can be used to define a map $\operatorname{DiffOp}(M) \to \operatorname{DiffOp}(P)$.
\end{prop}
\begin{proof}
Before giving the proof, we recall very briefly the construction of the universal enveloping algebra of a Lie-Rinehart algebra as given in \citep{MR1058984}.
Let $(A,L)$ be a Lie-Rinehart algebra over $\mathbb{K}$. Then we define $A \odot U(L)= A \otimes U(L)$, where $U(L)$ denotes the universal enveloping algebra of $L$ considered as Lie algebra over $\mathbb{K}$, with the multiplication $ (a \otimes X)(b \otimes Y) = ab \otimes XY + a X(b) \otimes Y$ with $a,b \in A$ and $X,Y \in L$.
Then we denote by $I$ the ideal generated by $a \otimes X - 1 \otimes a X$, and $U(A,L) =\factor{A \odot U(L)}{I}$ is the universal enveloping algebra of $(A,L)$. We recall that for a manifold $M$ we have $U(\CC^\infty(M),\mathfrak{X}(M)) =\operatorname{DiffOp}(M)$.
On Hamiltonian vector fields we can define a Lie algebra morphism $\mathfrak{X}(M) \to \operatorname{DiffOp}(P)$, where the Lie bracket on $\operatorname{DiffOp}(P)$ is given by the commutator, by $X_a \mapsto \FP{a, \cdot}$. This can be extended to an algebra morphism $\phi: U(\mathfrak{X}(M)) \to \operatorname{DiffOp}(P)$. With this we define a map $\Phi:\CC^\infty(M) \odot U(\mathfrak{X}(M)) \to \operatorname{DiffOp}(P)$ by $a \otimes D \mapsto \operatorname{pr}^* a\, \phi(D)$. Since
\begin{align*}
\Phi(a \otimes X_c )\Phi( b \otimes X_d) (f)&= \operatorname{pr}^* a \FP{ c,\operatorname{pr}^* b \FP{d,f}} = \operatorname{pr}^*( a b)\FP{c,\FP{d,f}} + \operatorname{pr}^*(a \p{c,b}) \FP{d,f} \\
&= \Phi(ab \otimes X_c X_d + a X_c(b)\otimes X_d ) = \Phi((a\otimes X_c)(b \otimes X_d))
\end{align*}
for $a,b,c,d \in \CC^\infty(M)$, it is an algebra morphism. It is also clear that it vanishes on $I$, and we get an induced map $\operatorname{DiffOp}(M)= U(\CC^\infty(M),\mathfrak{X}(M)) \to \operatorname{DiffOp}(P)$.
\end{proof}
In the non-symplectic case we get a horizontal lift over the symplectic leaves of the Poisson manifold.
This condition is clearly not enough to get a bimodule deformation. Consider for example a symplectic star product $\star$ and formally replace $\lambda$ by $\lambda^2$ to get $\tilde \star$; then the Poisson tensor is $\tilde \pi =0$, so the condition on the symplectic leaves is empty, but we can only find a bimodule for $\tilde \star$ if there is one for $\star$.
We do not get a horizontal lift in the non-symplectic case due to two reasons:
\begin{itemize}
\item When the Poisson tensor is degenerate, the Hamiltonian vector fields do not span the tangent space, so it is not possible to lift every vector.
\item The horizontal lift would be ill defined, because it can happen that $X_a =X_b$ for $a,b \in \CC^\infty(M)$ with $\d a \neq \d b $.
\end{itemize}
The obstruction we find is only in first order in $\lambda$, and in the general Poisson case it is the existence of a sP-bracket, which is non-trivial as we have seen. In the symplectic case the existence of a sP-bracket is enough to get a deformation.
\begin{theorem}
Let $(M,\star)$ be a manifold with a symplectic star product and $P \to M$ a fibered manifold. Given a sP-bracket on $P$, there exists a bimodule deformation.
\end{theorem}
\begin{proof}
The star product consists of differential operators $C_i$ on $M$, which can be lifted using \cref{rm:lift}. So we set $L_i(a,f) := C_i(a,\cdot)^h(f)$ and $R_i(f,a) := C_i(\cdot,a)^h(f)$. One has to check that this in fact defines a bimodule. This follows since
$L_i(a,L_j(b,f)) = C_i(a, C_j(b, \cdot ))^h(f)$, because the lift is an algebra morphism; this shows that it is a left module, and the other conditions follow similarly.
\end{proof}
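To spell out the left module condition in the proof above, write $D^h \in \operatorname{DiffOp}(P)$ for the image of $D \in \operatorname{DiffOp}(M)$ under the algebra morphism of \cref{rm:lift}, so $L_i(a,f) = C_i(a,\cdot)^h(f)$. Then at each order in $\lambda$
\begin{equation*}
\sum_{i+j=k} L_i(C_j(a,b), f)
= \Big( \sum_{i+j=k} C_i(C_j(a,b), \,\cdot\,) \Big)^h(f)
= \Big( \sum_{i+j=k} C_i(a, C_j(b, \,\cdot\,)) \Big)^h(f)
= \sum_{i+j=k} L_i(a, L_j(b,f)),
\end{equation*}
where the middle equality is the associativity of $\star$ and the outer ones use linearity and multiplicativity of $D \mapsto D^h$.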
One can also consider the case where only a Poisson structure on $M$ is given and not a star product. Then the next question would be whether, given a sP-bracket, it is possible to get a star product on $M$ and a bimodule structure on $\CC^\infty(P)$. Further, it would be interesting to classify them up to equivalence, similarly to Kontsevich's formality theorem.
\subsection{Deformation of principal fiber bundles}
In this section we want to consider the deformation of principal fiber bundles. We denote the structure group by $G$. We then have an induced left action of $G$ on the functions on $P$, given by
$(g \triangleright f)(p) = f( p \cdot g)$, where $\cdot$ denotes the principal right action.
\begin{defn}
A module deformation of a principal fiber bundle with structure group $G$ is called a deformation of a principal fiber bundle if
\begin{equation}
g \triangleright (a \bullet f) = a \bullet (g \triangleright f)
\end{equation}
for all $g \in G$ and similarly for a bimodule.
\end{defn}
\begin{prop}
For a bimodule deformation of a principal bundle we have for the sP-bracket
\begin{equation}
g \triangleright \FP{a,f} = \FP{ a, g \triangleright f}.
\end{equation}
\end{prop}
\begin{proof}
Since $g \triangleright (a \bullet f) = a \bullet (g \triangleright f)$ we get $g \triangleright L_1(a,f)= L_1(a,g \triangleright f)$
and similarly for $R_1$, with this
\begin{equation*}
g \triangleright \FP{a,f} = g \triangleright (L_1(a,f) -R_1(f,a)) =
L_1(a,g \triangleright f) - R_1(g \triangleright f, a) = \FP{a, g \triangleright f}.
\end{equation*}
\end{proof}
\begin{prop}
In the case of a principal bundle bimodule deformation the connection of \cref{lift} is a principal connection.
\end{prop}
\begin{proof}
For a Hamiltonian vector field $X_a$ we have
\begin{align}
g \triangleright X^h_a(f) = g \triangleright \FP{a,f} = X^h_a(g \triangleright f),
\end{align}
which shows that the horizontal lift and so the connection is compatible with the principal fiber bundle.
\end{proof}
If one has a deformation of a principal bundle one can also define a deformation of associated vector bundles, for the module case this is done in \cite[Sec. 6]{art}. Here we proceed similarly for the bimodule and algebra case.
So let us consider an associated vector bundle $E=P[V,\rho]$ over a principal $G$ bundle $P$ with typical fiber a finite dimensional vector space $V$ and $\rho: G \rightarrow \operatorname{GL}(V)$ a representation. For details of the definition see \citep[Section 18.7]{michor}.
It is well known that $\CC^\infty(P,V)^G \cong \Gamma^\infty(E)$. We denote this isomorphism by $\hat \cdot$ and its inverse by $\cdot^\vee$.
\begin{defn}
Given an associated vector bundle $E=P[V,\rho]$ and a bimodule deformation of $P$ as a principal bundle, we define a bimodule structure on $\Gamma^\infty(E)$ by
\begin{equation}
a \bullet s = (a \bullet \hat s)^\vee
\end{equation}
and similarly for the right module structure.
\end{defn}
\begin{proof}
We need to show that this is well defined. For this we have to show that $a \bullet \hat s$ is again $G$ equivariant. We have for any $g \in G$
\begin{equation}
g \triangleright (a \bullet \hat s) = a \bullet ( g\triangleright \hat s) = a \bullet \rho(g) \hat s =
\rho(g) a \bullet \hat s.
\end{equation}
This is what we wanted.
\end{proof}
\begin{remark}
If $V$ is also an algebra and given a principal subalgebra deformation of $P$, one can also define a $\CC^\infty(M)$-algebra structure on $\Gamma^\infty(E)$.
\end{remark}
\begin{proof}
For $f,g \in \CC^\infty(P,V)^G \cong \Gamma^\infty(E)$ and $\{e_i\}$ a basis of $V$ one defines
\begin{equation}\label{sa1}
f \star g = (f^i e_i) \star (g^j e_j) = (f^i \star_P g^j) (e_i \cdot e_j)
\end{equation}
where $\cdot$ is the undeformed product of the algebra $V$.
This is obviously independent of the choice of the basis. Using the fact that $g \triangleright (f \star_P h) = (g \triangleright f) \star_P (g \triangleright h)$ for $g \in G$ and $f,h \in \CC^\infty(P)$, one gets that the product in \eqref{sa1} is again $G$-equivariant.
\end{proof}
An interesting example of this is the frame bundle of a manifold, because if we can deform this as an algebra we can also deform the associated vector bundles like the tangent or cotangent bundle and higher tensor bundles like the exterior algebra. Deforming a single bundle of this kind as an algebra is straightforward using the above remark. More care has to be taken if the relations between them, e.g. that the tangent bundle is the dual of the cotangent bundle, are to be preserved.
One should also note that if the Poisson structure on $M$ is symplectic, these deformations can only exist if the frame bundle is trivial, i.e. the manifold is parallelizable. But even in this case we do not see a straightforward way of deforming, for example, the de Rham differential. But also in other approaches to deforming the exterior algebra this only works in specific cases, for example using a Drinfeld twist, see e.g. \citep{wess}.
\subsection{Equivalence of bimodules}\label{ch:nonequi}
In this section we want to show that given a trivial fiber bundle there are in general infinitely many nonequivalent bimodule deformations. For this we first describe a way of constructing a new bimodule structure out of a given one.
We denote $\mu_\star(a \otimes b) = a \star b$, $l_\star(a \otimes f) = a \bullet f$ and $r_\star(f \otimes a) = f \bullet a$.
We can define a new left module structure by $l'_\star = l_\star \mathrm {e}^{\lambda Q}$. The exponential is well defined since in any order in $\lambda$ there are only finitely many terms.
Here
\begin{equation}
Q= \sum_i E_i \otimes D_i
\end{equation}
where
each $E_i$ is a derivation of the star product, meaning $E_i(a \star b) = E_i(a) \star b + a \star E_i(b)$ or
\begin{align}
E_i \circ \mu_\star = \mu_\star (E_i \otimes \operatorname{id} + \operatorname{id} \otimes E_i)
\end{align}
and $D_i$ is a homomorphism of the bimodule, so $D(a \bullet f \bullet b) = a \bullet D(f) \bullet b$ or
\begin{align}
D \circ l_\star = l_\star (\operatorname{id} \otimes D) & \text { or } D(a \bullet f) = a \bullet D(f) \text{ and } \\
D \circ r_\star = r_\star (D \otimes \operatorname{id}) & \text { or } D(f \bullet a) = D(f) \bullet a
\end{align}
We also have to assume that all the $D_i$ commute among each other and likewise all the $E_i$, i.e. $D_i \circ D_j - D_j \circ D_i = 0$ for all $i,j$ and the same for the $E_i$.
We also use
\begin{equation}
Q_{13} = E_i \otimes \operatorname{id} \otimes D_i,
\end{equation}
where a sum over $i$ is understood here and in the following computations; analogously we write $Q_{12} = E_i \otimes D_i \otimes \operatorname{id}$ and $Q_{23} = \operatorname{id} \otimes E_i \otimes D_i$.
We show that $l'_\star$ is again a left module and together with $r_\star$ a bimodule.
For this we first compute
\begin{align}
\begin{split}\label{ec1}
Q \circ (\operatorname{id} \otimes l_\star) & = (E_i \otimes D_i)(\operatorname{id} \otimes l_\star) = E_i \otimes (D_i \circ l_\star) \\
& = E_i \otimes l_\star \circ (\operatorname{id} \otimes D_i) = (\operatorname{id} \otimes l_\star )(E_i \otimes \operatorname{id} \otimes D_i) \\
& = (\operatorname{id} \otimes l_\star ) Q_{13}
\end{split}
\end{align}
and also
\begin{align}\label{ec2}
\begin{split}
Q \circ (\mu_\star \otimes \operatorname{id}) & = (E_i \otimes D_i) \circ (\mu_\star \otimes \operatorname{id}) \\
& =( \mu_\star \circ (E_i \otimes \operatorname{id} + \operatorname{id} \otimes E_i)) \otimes D_i = (\mu_\star \otimes \operatorname{id}) \circ (Q_{13} + Q_{23})
\end{split}
\end{align}
and
\begin{align}\label{ec3}
Q \circ (\operatorname{id} \otimes r_\star) = (E_i \otimes D_i) (\operatorname{id} \otimes r_\star) = E_i \otimes (D_i \circ r_\star) = E_i \otimes r_\star (D_i \otimes \operatorname{id}) =(\operatorname{id} \otimes r_\star )\circ Q_{12}.
\end{align}
Next we show $(a \star b) \mathbin{\bullet'} f = a \mathbin{\bullet'} ( b \mathbin{\bullet'} f)$. Using \eqref{ec2} we compute
\begin{align*}
l'_\star (\mu_\star(a \otimes b) \otimes f)
& = l_\star \mathrm {e}^{\lambda Q} (\mu_\star(a \otimes b) \otimes f) \\
& = l_\star \mathrm {e}^{\lambda Q} (\mu_\star \otimes \operatorname{id})( a\otimes b \otimes f) =
l_\star (\mu_\star \otimes \operatorname{id}) \mathrm {e}^{\lambda(Q_{13} + Q_{23})} (a \otimes b \otimes f)
\end{align*}
and using \eqref{ec1}
\begin{align*}
l'_\star(a \otimes l'_\star( b\otimes f)) & =
l_\star \mathrm {e}^{\lambda Q} ( a \otimes l_\star \mathrm {e}^{\lambda Q} (b \otimes f)) \\
& = l_\star \mathrm {e}^{\lambda Q} \circ (\operatorname{id} \otimes l_\star) \mathrm {e}^{\lambda Q_{23}} (a \otimes b \otimes f) \\
& = l_\star (\operatorname{id} \otimes l_\star ) \mathrm {e}^{\lambda (Q_{13} + Q_{23})} (a \otimes b \otimes f) .
\end{align*}
In the last step we used the fact that the $D_i$ commute to get $ \mathrm {e}^{\lambda (Q_{13} + Q_{23})}= \mathrm {e}^{\lambda Q_{13}} \mathrm {e}^{\lambda Q_{23}}$.
Comparing these two and using the fact that $l_\star$ is a left module gives $(a \star b) \mathbin{\bullet'} f = a \mathbin{\bullet'} ( b \mathbin{\bullet'} f)$ as wanted.
Next we want to show that it is also a bimodule, so we compute $ ( a \mathbin{\bullet'} f) \bullet b$ and $ a \mathbin{\bullet'} ( f \bullet b)$:
\begin{align*}
r_\star (l_\star \mathrm {e}^{\lambda Q} (a \otimes f) \otimes b ) = r_\star \circ (l_\star \otimes \operatorname{id}) \circ \mathrm {e}^{\lambda Q_{12}} (a\otimes f \otimes b)
\end{align*}
\begin{align*}
l_\star \mathrm {e}^{\lambda Q} (a\otimes r_\star (f \otimes b))
& = l_\star \mathrm {e}^{\lambda Q} \circ (\operatorname{id} \otimes r_\star) ( a \otimes f \otimes b) \\
& = l_\star \circ (\operatorname{id} \otimes r_\star) \mathrm {e}^{\lambda Q_{12}} ( a \otimes f \otimes b).
\end{align*}
The two sides agree, since $l_\star$ and $r_\star$ form a bimodule, so we get in fact a bimodule.
This shows the following proposition:
\begin{prop}
Let $\bullet$ be a bimodule deformation of $P \xrightarrow {\operatorname{pr}} M$, $n \in \mathbb{N}$, and let $E_i$, $i=1, \dots,n$, be derivations of the star product and $D_i$, $i=1,\dots, n$, bimodule homomorphisms, such that the $E_i$ commute among each other and likewise the $D_i$. Then $l_\star \mathrm {e}^{\lambda\sum_i E_i \otimes D_i }$, where $l_\star$ denotes the original left module structure, is again a bimodule structure with the same right module structure.
The modified bimodule has the sP-bracket
\begin{equation}
\FP{a,f}' = \FP{a,f} + \sum_i \operatorname{pr}^* E_i(a)\, D_i(f).
\end{equation}
\end{prop}
The statement for the sP-bracket follows directly from
\begin{equation}
l'_\star = l_\star \circ \mathrm {e}^{\lambda Q} = L_0 + \lambda\, L_0 \circ (E_i \otimes D_i) + \lambda L_1 + \O(\lambda^2), \quad \text{where } l_\star = L_0 + \lambda L_1 + \O(\lambda^2),
\end{equation}
and the definition of the sP-bracket.
With this construction it is, at least in the trivial case, possible to construct many different, i.e. nonequivalent, bimodule structures: there always exist derivations of a star product, e.g. the quasi-inner ones, and any vertical differential operator whose coefficients are independent of $M$ gives a bimodule homomorphism for the bimodule described in the first part of \cref{ch:examples}. This gives the following corollary:
\begin{corollary}
Let $M\times F \rightarrow M$ be a trivial fiber bundle and $\star$ a star product on $M$. Then there are infinitely many nonequivalent bimodule deformations.
\end{corollary}
Here it can also be seen that even two bimodule deformations having the same sP-bracket need not be equivalent. Take the trivial Poisson bracket with the trivial star product and the trivial bimodule deformation, and set $\tilde l_\star = l_\star \circ \mathrm {e}^{\lambda^2 Q}$ with $Q$ as above. This changes neither the sP-bracket nor the right module, but in general it changes the left module structure. In contrast, in this case every bimodule equivalence which changes the left module structure also changes the right one.
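To illustrate the construction in the simplest situation (with an illustrative choice of data, not taken from the literature): let $M = \mathbb{R}^2$ carry the trivial star product, let $P = M \times \mathbb{R}$ with fiber coordinate $y$, and take the trivial bimodule $a \bullet f = \operatorname{pr}^* a \cdot f$. Then $E = \frac{\partial}{\partial x^1}$ is a derivation of the trivial star product, and $D = \frac{\partial}{\partial y}$ is a bimodule homomorphism, since functions pulled back from $M$ do not depend on $y$. The resulting left module structure is
\begin{equation*}
a \mathbin{\bullet'} f = l_\star \mathrm {e}^{\lambda E \otimes D} (a \otimes f) = \operatorname{pr}^* a \cdot f + \lambda\, \operatorname{pr}^*\!\Bigl(\frac{\partial a}{\partial x^1}\Bigr) \frac{\partial f}{\partial y} + \O(\lambda^2),
\end{equation*}
which deforms the left module while leaving the right module structure untouched.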
\subsection{Examples}\label{ch:examples}
First of all there is the trivial example. Let $P= M \times G$ with manifolds $M,G$ and $\star$ a star product on $M$ with $a \star b = \sum_{r=0}^\infty \lambda^r C_r(a,b)$.
Then we can choose the trivial connection, lift the differential operators in $\star$ with it, and define $a \bullet f= \sum_{r=0}^\infty \lambda^r C^h_r(a,f)$.
This means that for $f: (x,g) \mapsto f(x,g)$ the operators only act on $x$. This clearly gives a bimodule.
Actually it is enough to have a flat connection on $P \rightarrow M$. Then lifting the differential operators $C_k(a,\cdot)$ and $C_k(\cdot,a)$ in the star product to differential operators on $P$ gives a left resp. right module structure.
\begin{prop}
Consider a fibered manifold $P \rightarrow M$, commuting vector fields
$X_1, \ldots, X_k$ on $M$ and a horizontal lift such that the $X^h_i$ commute as well.
Then we can define for any constant matrix $A=(a^{ij})$ a star product on $M$ by
\begin{equation}
a \star b= \mu(\mathrm {e}^{a^{ij} X_i \otimes X_j} (a \otimes b))
\end{equation}
for $a,b \in \CC^\infty(M)$ and analogously on $P$ by
\begin{equation}
f \star_P g= \mu(\mathrm {e}^{a^{ij} X^h_i \otimes X^h_j} (f \otimes g))
\end{equation}
for $f,g \in \CC^\infty(P)$.
Then $(\CC^\infty(M),\star)$ is a subalgebra of $(\CC^\infty(P),\star_P)$, so we have a subalgebra deformation as described in the following section, and we also get a bimodule.
\end{prop}
\begin{proof}
First we note that $\star$ and $\star_P$ indeed define star products; see e.g. \citep[Sect.6.2.4]{defquan} for a proof. So we only need to show that we get a subalgebra.
This follows from
\begin{align*}
\operatorname{pr}^* a \star_P \operatorname{pr}^*b & = \mu(\mathrm {e}^{a^{ij} X^h_i \otimes X^h_j} (\operatorname{pr}^* a \otimes \operatorname{pr}^* b)) \\
& = \mu( (\operatorname{pr}^* \otimes \operatorname{pr}^*)\, \mathrm {e}^{a^{ij} X_i \otimes X_j} (a \otimes b))
= \operatorname{pr}^* (a \star b),
\end{align*}
for all $a,b \in \CC^\infty(M)$, where we used $X^h (\operatorname{pr}^* a) = \operatorname{pr}^* X(a)$.
The bimodule structure is given by $a \bullet f \bullet b = \operatorname{pr}^* a \star_P f \star_P \operatorname{pr}^* b$.
\end{proof}
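A concrete instance of this proposition (our own illustrative choice of data, with the formal parameter absorbed into $A$): take $M = \mathbb{R}^2$ with coordinates $(x^1,x^2)$, the commuting vector fields $X_1 = \frac{\partial}{\partial x^1}$, $X_2 = \frac{\partial}{\partial x^2}$, and $a^{12} = \lambda$ with all other entries zero. This yields the standard-ordered product
\begin{equation*}
a \star b = \sum_{r=0}^\infty \frac{\lambda^r}{r!} \frac{\partial^r a}{\partial (x^1)^r} \frac{\partial^r b}{\partial (x^2)^r},
\end{equation*}
which quantizes the Poisson bracket $\frac{\partial a}{\partial x^1}\frac{\partial b}{\partial x^2} - \frac{\partial b}{\partial x^1}\frac{\partial a}{\partial x^2}$. On a trivial bundle $P = M \times F$ with the trivial lift, $\star_P$ differentiates only along the base coordinates, so $\operatorname{pr}^*$ is manifestly an algebra homomorphism.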
\subsection{Subalgebra deformation}
We briefly want to give some remarks on the deformation of $\CC^\infty(M)$ as a subalgebra of $\CC^\infty(P)$ for a fibered manifold $P \xrightarrow{\operatorname{pr}} M$. This has already been considered in \citep{math/0403334}, but we here want to relate it to bimodule deformations. We also do not assume a fixed given Poisson bracket on $P$.
\begin{defn}
Given a fibered manifold $P \xrightarrow{\operatorname{pr}} M$ and a star product $\star $ on $M$, we call an algebra $(\CC^\infty(P)[[\lambda]],\star_P)$ a subalgebra deformation of $P$ if
\begin{equation}\label{defsubalg}
(\operatorname{pr}^* a) \star_P (\operatorname{pr}^* b) = \operatorname{pr}^* (a \star b)
\end{equation}
for all $a,b \in \CC^\infty(M)$.
\end{defn}
\begin{remark}
Of course given a subalgebra deformation, one also gets a bimodule deformation by defining
$a \bullet f = \operatorname{pr}^* a \star_P f$ and $f \bullet b = f \star_P \operatorname{pr}^* b$. This bimodule deformation is always fiber preserving.
\end{remark}
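Indeed, the module axioms follow directly from \eqref{defsubalg} and the associativity of $\star_P$; for the left module structure, for instance,
\begin{equation*}
(a \star b) \bullet f = \operatorname{pr}^*(a \star b) \star_P f = (\operatorname{pr}^* a \star_P \operatorname{pr}^* b) \star_P f = \operatorname{pr}^* a \star_P (\operatorname{pr}^* b \star_P f) = a \bullet (b \bullet f),
\end{equation*}
and the compatibility of the left and right structures follows the same way.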
\begin{defn}
Given a principal $G$ bundle $P \xrightarrow{\operatorname{pr}} M$ we call a subalgebra deformation $\star_P$ of $P$ a principal subalgebra if
\begin{equation}
(g \triangleright f) \star_P (g \triangleright h) = g \triangleright (f \star_P h)
\end{equation}
for all $f,h \in \CC^\infty(P)$ and $g \in G$.
\end{defn}
\begin{prop} \label{th:sap}
Given a subalgebra deformation of a fibered manifold $P$ and the corresponding Poisson bracket $\p{\cdot,\cdot}_P$, for $a,b \in \CC^\infty(M)$ we have
\begin{equation} \label{subpois}
\operatorname{pr}^* \p{a,b} = \p{\operatorname{pr}^* a, \operatorname{pr}^* b}_P.
\end{equation}
\end{prop}
\begin{proof}
This is a simple consequence of \eqref{defsubalg} in first order in $\lambda$.
\end{proof}
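Explicitly, writing $\star = \sum_r \lambda^r C_r$ and $\star_P = \sum_r \lambda^r C^P_r$, the order-$\lambda$ part of \eqref{defsubalg} gives $\operatorname{pr}^* C_1(a,b) = C^P_1(\operatorname{pr}^* a, \operatorname{pr}^* b)$. In a convention where the Poisson bracket is the antisymmetric first-order term (possible prefactors like $\mathrm{i}$ depend on the conventions and drop out of the argument), antisymmetrizing yields
\begin{equation*}
\operatorname{pr}^* \p{a,b} = \operatorname{pr}^*\bigl(C_1(a,b) - C_1(b,a)\bigr) = C^P_1(\operatorname{pr}^* a,\operatorname{pr}^* b) - C^P_1(\operatorname{pr}^* b,\operatorname{pr}^* a) = \p{\operatorname{pr}^* a,\operatorname{pr}^* b}_P.
\end{equation*}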
\begin{theorem} \label{th:sp}
Given a fibered manifold $P \xrightarrow {\operatorname{pr}} M$, a Poisson bracket $\p{\cdot,\cdot}_P$ on $P$ and a symplectic Poisson bracket $\p{\cdot,\cdot}$ on $M$ satisfying $\operatorname{pr}^* \p{a,b} = \p{\operatorname{pr}^* a, \operatorname{pr}^* b}_P$, i.e. such that $\CC^\infty(M)$ is a Poisson subalgebra of $\CC^\infty(P)$, we get a horizontal lift which is flat.
\end{theorem}
\begin{proof}
Since $M$ is symplectic it is enough to consider Hamiltonian vector fields.
We define
\begin{equation}
X_a^h(f) = \{\operatorname{pr}^* a, f\}_P \text{ for } f \in \CC^\infty(P).
\end{equation}
Since the Poisson bracket on $P$ is a derivation in the second argument, $X_a^h$ is indeed a vector field.
The lift is well defined similarly to \cref{lift}.
From $X_a^h(\operatorname{pr}^* b) = \{\operatorname{pr}^* a,\operatorname{pr}^* b\}_P = \operatorname{pr}^* \{a,b\} = \operatorname{pr}^* X_a(b)$ we see that $X_a^h$ is a horizontal lift of $X_a$.
For the curvature one finds
\begin{align*}
R(X_a,X_b)(f) & = [X_a^h ,X_b^h](f) - [X_a,X_b]^h(f) \\
& = X_a^h(X_b^h(f)) - X_b^h(X_a^h(f)) - X^h_{\{a,b\}}(f) \\
& = \{\operatorname{pr}^*a, \{\operatorname{pr}^*b ,f\}_P\}_P - \{\operatorname{pr}^*b,\{\operatorname{pr}^*a,f\}_P\}_P - \{\operatorname{pr}^* \{a,b\},f\}_P \\
& = \{\operatorname{pr}^*a, \{\operatorname{pr}^*b ,f\}_P\}_P + \{\operatorname{pr}^*b,\{f,\operatorname{pr}^*a\}_P\}_P + \{f,\{\operatorname{pr}^* a,\operatorname{pr}^* b\}_P\}_P =0
\end{align*}
by the Jacobi identity.
\end{proof}
\begin{corollary}
A subalgebra deformation of a fibered manifold $P \xrightarrow{\operatorname{pr}} M$ can only exist if $P$ admits a flat lift.
\end{corollary}
\begin{proof}
This is obvious from \cref{th:sap} and \cref{th:sp}.
\end{proof}
Of course this also follows from \cref{th:flat} since a subalgebra gives rise to a fiber-preserving bimodule.
On the other hand, if we have a flat lift, for example if one already has a subalgebra deformation or a $\operatorname{pr}^*$-related Poisson bracket on $P$, one can use this horizontal lift to lift the differential operators $C_k$ in the star product and obtain a star product on $P$, which gives a subalgebra deformation. The Poisson bracket on $P$ in this case is the lift of the Poisson bracket on $M$.
But it turns out that this lifted Poisson bracket is in general different from the original one on $P$, since the original one can contain a term with both vector fields vertical. For example the Poisson structure on $M$ could be zero, while the one on $P$ is purely vertical but nonzero.
\section{Generalization of the HKR theorem}
In this section we want to compute the differential Hochschild cohomology of $\CC^\infty(P)$ and $\operatorname{DiffOp}(P)$ as $\CC^\infty(M)$-bimodules. For this we first need some more technical constructions, for which we follow \citep{art}.
\subsection{Hochschild cohomology}\label{ch:hc}
Let $\mathscr{M}$ be a bimodule over an algebra $\mathscr{A}$ and
\begin{equation}
\mathbf{HC}^k (\mathscr{A},\mathscr{M})= \operatorname{Hom}(\mathscr{A}^{\otimes k},\mathscr{M}) \cong \operatorname{Hom}(\underbrace{\mathscr{A},\ldots, \mathscr{A}}_{k \textrm{ times}};\mathscr{M}),
\end{equation}
where the isomorphism follows from the universal property of the tensor product.
Here $\operatorname{Hom}(\mathscr{A}, \ldots ,\mathscr{A};\mathscr{M})$ denotes the multilinear maps from $\mathscr{A}$ to $\mathscr{M}$ as vector spaces.
Then we can define a differential on the complex $\mathbf{HC}^\bullet(\mathscr{A},\mathscr{M})$ by
\begin{align}
\delta_i^{n}f(a_1\otimes \ldots \otimes a_{n+1}) =
\begin{cases}
a_1\cdot f(a_2\otimes \ldots \otimes a_{n+1}) & \text{for }i=0 \\
f(a_1\otimes \ldots \otimes a_ia_{i+1}\otimes \ldots\otimes a_{n+1}) & \text{for }0<i<n+1 \\
f(a_1\otimes \ldots \otimes a_n)\cdot a_{n+1} & \text{for }i=n+1
\end{cases}
\end{align}
\begin{equation}
\delta^n := \sum_{i=0}^{n+1} (-1)^i \delta^n_i
\end{equation}
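In low degrees this gives the familiar formulas: for $f \in \mathbf{HC}^0(\mathscr{A},\mathscr{M}) = \mathscr{M}$ and $g \in \mathbf{HC}^1(\mathscr{A},\mathscr{M})$ one finds
\begin{align*}
(\delta^0 f)(a_1) & = a_1 \cdot f - f \cdot a_1, \\
(\delta^1 g)(a_1 \otimes a_2) & = a_1 \cdot g(a_2) - g(a_1 a_2) + g(a_1) \cdot a_2,
\end{align*}
so the closed elements in degree one are precisely the derivations of $\mathscr{A}$ with values in $\mathscr{M}$, and the exact ones are the inner derivations.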
One can compute that $\delta^n \circ \delta^{n-1} =0$, so we can make the following definition:
\begin{defn}
The $n$-th cohomology group of the complex $(\mathbf{HC}^\bullet(\mathscr{A},\mathscr{M}),\delta)$ is defined by
\begin{equation}
\mathbf{H\!H}^n(\mathscr{A},\mathscr{M}) =\factor{ \ker(\delta^{n})}{\operatorname{im}(\delta^{n-1})}.
\end{equation}
This is the so-called Hochschild cohomology of $\mathscr{A}$ with values in $\mathscr{M}$.
We note that if $\mathscr{A}$ is commutative, $\mathbf{H\!H}^n(\mathscr{A},\mathscr{M})$ is again an $\mathscr{A}$-bimodule.
\end{defn}
Later we will consider the case $\mathscr{A} = \CC^\infty(M)$ for a manifold $M$ and $\mathscr{M} = \operatorname{DiffOp}(P)$ or $\mathscr{M} = \CC^\infty(P)$ where we have a map $P \xrightarrow{\operatorname{pr}} M$. The bimodule structure on $\operatorname{DiffOp}(P)$ is defined by
\begin{equation}
(a \cdot D \cdot b)(f) = \operatorname{pr}^* a D( \operatorname{pr}^* b f)
\end{equation}
and similarly for $\CC^\infty(P)$.
\begin{defn}[Differential Hochschild complex]
Let $\mathscr{A}$ be a commutative algebra, then we define the differential Hochschild complex by
\begin{equation}
\mathbf{HC}^\bullet_\mathrm{diff}(\mathscr{A},\mathscr{M}) = \bigoplus_{k=0}^\infty \mathbf{HC}^k_\mathrm{diff} (\mathscr{A},\mathscr{M}) \subset \mathbf{HC}^\bullet(\mathscr{A},\mathscr{M})
\end{equation}
with
\begin{equation}
\mathbf{HC}^k_\mathrm{diff}(\mathscr{A},\mathscr{M}) = \operatorname{DiffOp}^\bullet(\underbrace{\mathscr{A},\ldots,\mathscr{A}}_{k \text{ times}};\mathscr{M}).
\end{equation}
The corresponding Hochschild cohomology we denote by $\mathbf{H\!H}^\bullet_\mathrm{diff}(\mathscr{A},\mathscr{M})$.
\end{defn}
In the case of $\mathbf{HC}_\mathrm{diff}(\mathscr{A},\operatorname{DiffOp}(P))$ we slightly modify this and set
\begin{equation}
\mathbf{HC}_\mathrm{diff}^k(\mathscr{A},\operatorname{DiffOp}(P)) = \bigcup_{L \in \mathbb{N}^k_0} \bigcup_{l \in \mathbb{N}_0} \operatorname{DiffOp}^L(\mathscr{A},\dots,\mathscr{A};\operatorname{DiffOp}^l(P)).
\end{equation}
To see that this actually is a subcomplex one needs that $\mathscr{M}$ is a differential bimodule, to make sure that the differential restricts to the set of differential operators. Multiplication by an element of the algebra and composition of differential operators are again differential operators, so the only operation in the definition of the differential which might fail to be a differential operator is the right module multiplication.
\begin{defn} [Differential bimodule] \label{de:difbim}
Let $\mathscr{M}$ be an $\mathscr{A}$-bimodule. Then $\mathscr{M}$ is called a differential bimodule if for all $a \in \mathscr{A}$ the map $\mathscr{M} \rightarrow \mathscr{M}: f \mapsto f \cdot a$ is a differential operator.
\end{defn}
The algebra we will use later will be $\CC^\infty(M)$ for a manifold $M$. This is a Fréchet algebra, with the usual seminorms.
On $\operatorname{DiffOp}$ we use the topology given by the local presentation $D = \sum_I D^I \partial_I$ with multiindices $I$: for every $I$ we have the seminorms of $D^I$ considered as a smooth function.
Since we are not interested in arbitrary homomorphisms but only in continuous ones we also define
\begin{defn}[Continuous Hochschild complex]
Let $\mathscr{A}$ be a commutative topological algebra and $\mathscr{M}$ a topological bimodule then we define the continuous Hochschild complex by
\begin{equation}
\mathbf{HC}^k_\mathrm{cont}(\mathscr{A},\mathscr{M}) = \operatorname{Hom}_\mathrm{cont}(\mathscr{A}, \ldots, \mathscr{A};\mathscr{M}) \subset \mathbf{HC}^k(\mathscr{A},\mathscr{M})
\end{equation}
where $\operatorname{Hom}_\mathrm{cont}$ denotes the space of all continuous homomorphisms.
Since $\delta$ maps continuous homomorphisms to continuous homomorphisms, it can be restricted to this subcomplex and we get the continuous Hochschild cohomology.
\end{defn}
Since in our situation every differential operator is continuous we have $\mathbf{HC}_\mathrm{diff} \subset \mathbf{HC}_\mathrm{cont}$.
\subsection{Bar complex}
We recall the definition of the bar complex adapted to our situation following \citep{art,weisphd}.
We consider $\mathscr{A} =\CC^\infty(V)$ for a convex open subset $V$ of $\mathbb{R}^n$ and use $\mathscr{A}^e = \mathscr{A} \otimes \mathscr{A}^\mathrm{opp}$, which is an algebra for the obvious componentwise multiplication.
In our case of course $\mathscr{A}^\mathrm{opp} = \mathscr{A}$, but in general one needs $\mathscr{A}^\mathrm{opp}$. For the complexes we use, we actually need the completion of $\mathscr{A}^e$ in the projective tensor product topology, which we will denote by $\mathbin{\hat\otimes}$.
\begin{defn}
We define the bar complex $X_\bullet$ as
\begin{align}
X_0 = \mathscr{A}^e = \CC^\infty(V \times V) \\
X_k = \CC^\infty(V \times V^k \times V)
\end{align}
with differential $\partial^k_X: X_k \rightarrow X_{k-1}$ given by
\begin{equation}
\begin{split}
(\partial^k_X \phi) (v,q_1,\ldots,q_{k-1},w) = &\phi(v,v,q_1,\ldots,q_{k-1},w)
+ \sum_{i=1}^{k-1} (-1)^i \phi(v,q_1,\ldots,q_i,q_i,\ldots,q_{k-1},w) \\
&+(-1)^k \phi(v,q_1,\ldots,q_{k-1},w,w)
\end{split}
\end{equation}
\end{defn}
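In the lowest degrees the differential reads, for $\phi \in X_1$ and $\chi \in X_2$,
\begin{align*}
(\partial^1_X \phi)(v,w) & = \phi(v,v,w) - \phi(v,w,w), \\
(\partial^2_X \chi)(v,q_1,w) & = \chi(v,v,q_1,w) - \chi(v,q_1,q_1,w) + \chi(v,q_1,w,w),
\end{align*}
and inserting the second line into the first one checks directly that $\partial^1_X \circ \partial^2_X = 0$: both evaluations $(\partial^2_X \chi)(v,v,w)$ and $(\partial^2_X \chi)(v,w,w)$ collapse to $\chi(v,v,w,w)$, so their difference vanishes.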
We have $X_k \cong \mathscr{A}^e \mathbin{\hat\otimes} \mathscr{A}^{\mathbin{\hat\otimes} k}$ for the completion in the projective topology of the tensor product induced by the Fréchet topology of $\CC^\infty(V)$, because for the completed tensor product one has $\CC^\infty(V) \mathbin{\hat\otimes} \CC^\infty(V) = \CC^\infty(V\times V)$. For details see \citep{jarchow} especially Section 21.6.
The $\mathscr{A}^e$-module structure is given by
\begin{equation}
(a \chi)(v,q_1,\ldots,q_k,w) = a(v,w)\chi(v,q_1,\ldots,q_k,w)
\end{equation}
for $a \in \mathscr{A}^e, \chi \in X_k$ and $v,w,q_1,\ldots,q_k \in V$, which corresponds to the algebra multiplication in the $\mathscr{A}^e$ factor of the $X_k$.
\begin{lemma}
We get a resolution of $\mathscr{A}$ as $\mathscr{A}^e$-module by an exact sequence
\begin{equation} \label{bc}
0 \longleftarrow \mathscr{A} \overset{\epsilon }{\longleftarrow} X_0 \overset{\partial^1_X }{\longleftarrow} X_1 \leftarrow \cdots \overset{\partial^k_X }{\longleftarrow} X_k \longleftarrow \cdots,
\end{equation}
where
\begin{equation}
(\epsilon a)(v) = a(v,v).
\end{equation}
\end{lemma}
\begin{proof}
One easily sees that $\partial_X$ and $\epsilon$ are $\mathscr{A}^e$ linear and a computation shows that $\partial_X \circ \partial_X=0$ and $\epsilon \circ \partial_X=0$, so we really have a complex.
To show that it is exact one uses the homotopies $h^k_X : X_k \rightarrow X_{k+1}$
\begin{align}
(h^{-1}_X a)(v,w) & = a(v) \\
(h^k_X \chi )(v,q_1,\ldots,q_{k+1},w) & = -(-1)^k \chi(v,q_1,\ldots,q_{k+1})
\end{align}
For details see \citep[Ch.3]{art}.
\end{proof}
We note that the homotopies $h^k_X$ are not $\mathscr{A}^e$ linear, which will cause some trouble later.
\begin{remark}
This resolution is topologically free, but not free in the purely algebraic setting. This is one reason why one cannot simply apply the standard techniques of homological algebra. For the continuous case one could still use them, see \citep{pflaum,connes}, however not for the differential cohomology.
\end{remark}
\subsection{Koszul complex}
Next we need another complex, which cannot be defined for $\CC^\infty(M)$ for an arbitrary manifold $M$, but only for the special case of a convex subset of $\mathbb{R}^n$. However, we will later be able to compute the Hochschild cohomology for arbitrary manifolds by localizing to convex sets. In the definition of the Koszul complex and the related chain maps we follow \citep[Sect.5.4]{weisphd}.
Let $\mathscr{A}= \CC^\infty(\mathbb{R}^n)$ or $\mathscr{A}= \CC^\infty(V)$ where $V \subset \mathbb{R}^n$ is a convex open set. For a (finite dimensional) vector space $W$ we denote by $\Lambda^\bullet(W)$ the antisymmetric tensor algebra over $W$, and by $W^*$ its dual.
\begin{defn}
We define the Koszul complex $(K,\partial_K)$ over $\mathscr{A}$ as
\begin{align}
K_0 & = \mathscr{A}^e \\
K_k & = \mathscr{A}^e \otimes \Lambda^k(\mathbb{R}^n)^* \cong \CC^\infty(V\times V,\Lambda^k(\mathbb{R}^n)^*)
\end{align}
Also every $K_k$ has an $\mathscr{A}^e$-module structure by multiplication in the first factor.
Next we define the differential $\partial^k_K : K_k \rightarrow K_{k-1}$ by
\begin{equation}
(\partial_K \omega )(v,w)(x_1,\ldots, x_{k-1}) = (\omega (v,w))(v-w,x_1,\ldots, x_{k-1})
\end{equation}
for $\omega \in K_k$ and $v,w \in V, x_i \in \mathbb{R}^n$.
\end{defn}
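For $k=1$, an element $\omega \in K_1 \cong \CC^\infty(V \times V, (\mathbb{R}^n)^*)$ with components $\omega_j$ is mapped to the function
\begin{equation*}
(\partial^1_K \omega)(v,w) = \omega(v,w)(v-w) = \sum_{j=1}^n (v^j - w^j)\, \omega_j(v,w),
\end{equation*}
so the image of $\partial^1_K$ consists of functions vanishing on the diagonal, in accordance with $\epsilon \circ \partial^1_K = 0$.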
Note that for the definition of the differential we need to insert $v-w$ into a form, which uses the linear structure of $\mathbb{R}^n$ and is not defined on an arbitrary manifold. This is one reason why one can define the Koszul complex only for a subset of $\mathbb{R}^n$. Actually it would be enough to consider $V=\mathbb{R}^n$ since every convex open set is diffeomorphic to $\mathbb{R}^n$.
We get the following finite and free resolution
\begin{equation} \label{kc}
0 \longleftarrow \mathscr{A} \overset{\epsilon }{\longleftarrow} K_0 \overset{\partial^1_K }{\longleftarrow} K_1 \leftarrow \cdots \overset{\partial^n_K }{\longleftarrow} K_n
\end{equation}
Again one has to check that $\partial^k_K \circ \partial^{k+1}_K =0$, which is a straightforward calculation using the fact that one has to insert the same argument $v-w$ twice.
\begin{lemma}
The sequence \eqref{kc} is exact.
\end{lemma}
\begin{proof}
Use the homotopies $h^k_K : K_k \rightarrow K_{k+1}$
\begin{equation}
(h^k_K \omega)(v,w) = - \sum_{j=1}^n e^j \wedge \int_0^1 t^k \frac{\partial \omega}{\partial w^j} (v, tw +(1-t)v) \d t
\end{equation}
For details see \citep[Sect. 5.4]{weisphd}.
\end{proof}
Note that for this we needed the convexity of $V$ and the completion of the tensor product.
Using $\xi_i = x_i \otimes 1 - 1 \otimes x_i \in \mathscr{A}^e$, we can write the differential on forms $e^I = \left({e^1} \right)^{\wedge I_1} \wedge \cdots \wedge \left({e^n}\right)^{\wedge I_n}$ with a multiindex $I \in \mathbb{N}_0^n$, where all $I_j $ can be assumed to be $0$ or $1$, because otherwise the form vanishes, as
\begin{equation}
\partial_K e^I = \sum_i \xi_i \otimes \iota_{e_i} e^I.
\end{equation}
This can be easily seen using the fact that $\partial_K e^i = \xi_i$.
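For example, for $n=2$ and $I=(1,1)$ this gives
\begin{equation*}
\partial_K (e^1 \wedge e^2) = \xi_1 \otimes \iota_{e_1}(e^1 \wedge e^2) + \xi_2 \otimes \iota_{e_2}(e^1 \wedge e^2) = \xi_1 \otimes e^2 - \xi_2 \otimes e^1,
\end{equation*}
and applying $\partial_K$ once more yields $\xi_1 \xi_2 - \xi_2 \xi_1 = 0$ by $\mathscr{A}^e$-linearity and the commutativity of $\mathscr{A}^e$.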
Next we define $F^k: K_k \rightarrow X_k$
\begin{equation}
F^k(\omega)(v,q_1,\ldots,q_k,w)= \omega(v,w)(q_1-v,\ldots, q_k - v)
\end{equation}
and $G^k: X_k \rightarrow K_k$
\begin{align*}
(G^k \chi)(v,w) & = \sum_{i_1\ldots i_k=1}^n e^{i_1}\wedge \cdots \wedge e^{i_k} \int\limits_0^1 \d t_1
\cdots \int\limits_0^{t_{k-1}} \d t_k \\
& \frac{\partial^k \chi}{\partial q_1^{i_1} \cdots \partial q_k^{i_k}}
(v,t_1 v +(1- t_1)w, \ldots, t_k v +(1- t_k)w, w).
\end{align*}
The definition of $F$ appears first in \citep[Sect.III.2$\alpha$]{connes} and the $G$ originates in \citep{bordemann}.
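As a consistency check in the lowest degree: for $\omega \in K_1$ we have $(F^1 \omega)(v,q_1,w) = \omega(v,w)(q_1 - v)$, so
\begin{equation*}
(\partial^1_X F^1 \omega)(v,w) = (F^1\omega)(v,v,w) - (F^1\omega)(v,w,w) = \omega(v,w)(0) - \omega(v,w)(w-v) = (\partial^1_K \omega)(v,w),
\end{equation*}
i.e. $F$ intertwines the two differentials in this degree (with $F^0 = \operatorname{id}$ on $K_0 = X_0$).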
The maps $F$ and $G$ are chain maps and $\mathscr{A}^e$-module homomorphisms; this means we get the following commutative diagram of $\mathscr{A}^e$-linear maps:
\begin{tikzpicture}
[node distance=1.5cm]
\node(x0) {$0$};
\node(x1) [right=of x0] {$\mathscr{A}$};
\node(x2) [right=of x1]{$X_0$};
\node(x3) [right=of x2]{$\cdots$};
\node(x4) [right=of x3]{$X_k$};
\node(x5) [right=of x4]{$X_{k+1}$};
\node(x6) [right=of x5]{$\cdots$};
\node(k0) [below=of x0] {$0$};
\node(k1) [right=of k0]{$\mathscr{A}$};
\node(k2) [right=of k1]{$K_0$};
\node(k3) [right=of k2]{$\cdots$};
\node(k4) [right=of k3]{$K_k$};
\node(k5) [right=of k4]{$K_{k+1}$};
\node(k6) [right=of k5]{$\cdots$};
\path[->] (x1) edge (x0)
(x2) edge node[auto] {$\epsilon$} (x1)
(x3) edge node[auto] {$\partial^1_X$} (x2);
\path[->] (x4) edge node[auto] {$\partial^{k}_X$} (x3)
(x5) edge node[auto] {$\partial^{k+1}_X$} (x4)
(x6) edge node[auto] {$\partial^{k+2}_X$} (x5);
\path[->] (k1) edge (k0)
(k2) edge node[auto] {$\epsilon$} (k1)
(k3) edge node[auto] {$\partial^1_K$} (k2);
\path[->] (k4) edge node[auto] {$\partial^{k}_K$} (k3)
(k5) edge node[auto] {$\partial^{k+1}_K$} (k4)
(k6) edge node[auto] {$\partial^{k+2}_K$} (k5);
\draw[double] (x1) -- (k1);
\draw[double] (x2) -- (k2);
\path[->] (x4.-80) edge [left] node{$F^k$} (k4.90);
\path[->] (k4.110) edge [right] node{$G^k$} (x4.-100);
\path[->] (x5.-80) edge [left] node{$F^{k+1}$} (k5.90);
\path[->] (k5.110) edge [right] node{$G^{k+1}$} (x5.-100);
\end{tikzpicture}
One can show that $G^k \circ F^k = \operatorname{id}$, which proves that $\Theta^k = F^k \circ G^k$ is a projection.
Further one can compute explicitly
\begin{equation}
\begin{split}
(\Theta^k \chi)(v,q_1,\ldots,q_k,w) = \sum_{i_1\ldots i_k=1}^n \sum_{\sigma\in S_k} \operatorname{sign}(\sigma) (q_1-v)^{i_{\sigma(1)}} \cdots (q_k-v)^{i_{\sigma(k)}} \\
\int_0^1 \d t_1 \cdots \int_0^{t_{k-1}} \d t_k
\frac{\partial^k \chi}{\partial q_1^{i_1} \cdots \partial q_k^{i_k}} (v,t_1 v +(1- t_1)w, \ldots, t_k v +(1- t_k)w, w),
\end{split}
\end{equation}
where $S_k$ is the symmetric group on $k$ letters, and the upper indices on the brackets denote the components.
\begin{remark}
The explicit chain maps $F$, $G$ and the projection $\Theta$ would not be necessary in a purely algebraic context, because their existence can be proven in a completely abstract way, but we need them here to make sure that everything stays in the continuous or differential Hochschild cohomology.
\end{remark}
\begin{lemma} \label{th:sk}
There exists a homotopy between $\Theta^\bullet$ and $\operatorname{id}_{X_\bullet}$.
\end{lemma}
\begin{proof}
See \citep[Proposition 5.7.2]{weisphd}.
\end{proof}
We now consider the vector space $\operatorname{Hom}^\mathrm{cont}_{\mathscr{A}^e}(X_k,\mathscr{M})$ of continuous $\mathscr{A}^e$-linear maps.
With the pullback of the differentials $\delta^{k}_X = (\partial^{k+1}_X)^*$, defined by $(\partial^* \psi)(a) = \psi(\partial a)$ for $\psi\in\operatorname{Hom}^\mathrm{cont}_{\mathscr{A}^e}(X_k,\mathscr{M})$ and $a \in X_{k+1}$, we get the complex $(\operatorname{Hom}^\mathrm{cont}_{\mathscr{A}^e}(X_\bullet,\mathscr{M}), \delta_X)$.
\begin{prop}
The complexes $(\operatorname{Hom}^\mathrm{cont}_{\mathscr{A}^e}(X_\bullet,\mathscr{M}), \delta_X)$ and $(\mathbf{HC}^\bullet_\mathrm{cont}(\mathscr{A},\mathscr{M}),\delta)$ are isomorphic with isomorphism $\Xi: \operatorname{Hom}^\mathrm{cont}_{\mathscr{A}^e}(X_\bullet,\mathscr{M}) \rightarrow \mathbf{HC}^\bullet_\mathrm{cont}(\mathscr{A},\mathscr{M})$ given by
\begin{equation}
(\Xi^k \psi)(a_1,\ldots,a_k) = \psi(1 \otimes a_1 \otimes \cdots \otimes a_k \otimes 1).
\end{equation}
\end{prop}
\begin{proof}
$\Xi$ is a chain map, since one can easily see that
\begin{equation}
\Xi \circ \delta_X = \delta \circ \Xi.
\end{equation}
The map $\Xi$ is an isomorphism because of the universal property of the tensor product for continuous maps. The inverse is determined by
\begin{equation}
(\Xi^{-1}\phi)(1 \otimes a_1 \otimes \cdots \otimes a_k \otimes 1) = \phi(a_1, \ldots, a_k),
\end{equation}
extended by $\mathscr{A}^e$-linearity and continuity.
For details see \citep[Prop.5.2.1]{weisphd}.
\end{proof}
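In the lowest nontrivial degree the chain map property can also be checked by hand. For $\psi \in \operatorname{Hom}^\mathrm{cont}_{\mathscr{A}^e}(X_1,\mathscr{M})$ consider $\chi = 1 \otimes a_1 \otimes a_2 \otimes 1 \in X_2$, i.e. $\chi(v,q_1,q_2,w) = a_1(q_1) a_2(q_2)$. The bar differential gives
\begin{equation*}
(\partial^2_X \chi)(v,q_1,w) = a_1(v) a_2(q_1) - a_1(q_1) a_2(q_1) + a_1(q_1) a_2(w),
\end{equation*}
and hence, using the $\mathscr{A}^e$-linearity of $\psi$,
\begin{equation*}
(\Xi^2 \delta_X \psi)(a_1, a_2) = \psi(\partial^2_X \chi) = a_1 \cdot \psi(1 \otimes a_2 \otimes 1) - \psi(1 \otimes a_1 a_2 \otimes 1) + \psi(1 \otimes a_1 \otimes 1) \cdot a_2 = (\delta\, \Xi^1 \psi)(a_1,a_2).
\end{equation*}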
We have a well-defined differential subcomplex $\mathbf{HC}^\bullet_\mathrm{diff} (\mathscr{A}, \mathscr{M})$ in the case of a differential bimodule. But we also want to define a complex $\operatorname{Hom}^\mathrm{diff}_{\mathscr{A}^e}(X_k,\mathscr{M})$ with which we can compute this differential Hochschild cohomology.
For this we set
\begin{equation}
\operatorname{Hom}^{\mathrm{diff},L}_{\mathscr{A}^e} (X_k,\mathscr{M}) = (\Xi^k)^{-1} (\operatorname{DiffOp}^L( \mathscr{A},\mathscr{M}))
\end{equation}
Since $\Xi$ is a chain map we also get a well defined subcomplex $\operatorname{Hom}^\mathrm{diff}_{\mathscr{A}^e}(X_\bullet ,\mathscr{M})$ of $\operatorname{Hom}^\mathrm{cont}_{\mathscr{A}^e}(X_\bullet ,\allowbreak \mathscr{M})$. By construction we get an isomorphism of complexes
\begin{equation}
\Xi : (\operatorname{Hom}^\mathrm{diff}_{\mathscr{A}^e}(X_\bullet ,\mathscr{M}), \delta_X) \rightarrow (\mathbf{HC}^\bullet_\mathrm{diff}(\mathscr{A},\mathscr{M}),\delta)
\end{equation}
Since $\mathscr{M}$ is a topological bimodule, we have that the map $(a,f,b) \mapsto a \bullet f \bullet b$ is continuous.
So by continuity we get an $\mathscr{A}^e$-module structure, for the completed tensor product, given by
\begin{equation}
(a \otimes b) \cdot f = a \cdot f \cdot b.
\end{equation}
This can also be written as
\begin{equation}\label{diffmod}
\hat a \bullet f = \sum_{|I|<l} (\Delta^*_0 \partial_I \hat a) \cdot f^I
\end{equation}
for all $\hat a \in \mathscr{A}^e$. Here $\Delta^*_k$ denotes the pull-back with the total diagonal map $\Delta_k : V \rightarrow V^{k+2}$ and the differentiation acts on the second argument of $\hat a$.
Using the local form of a differential operator it is also possible to get an explicit form of the elements of $\operatorname{Hom}^{\mathrm{diff},L}_{\mathscr{A}^e}(X_k,\mathscr{M})$.
\begin{lemma}[{\citep[Lemma 5.3.6]{weisphd}}]\label{th:lochom}
An element $\psi \in \operatorname{Hom}^{\mathrm{diff},L}_{\mathscr{A}^e}(X_k,\mathscr{M})$ has the form
\begin{equation}
\psi(\chi)= \sum_{\abs{I_1}<l_1, \ldots,\abs{I_k}<l_k,\abs{J}<l} \left(\Delta^*_k \frac{\partial^{\abs{I_1}+ \cdots + \abs{I_k} + \abs{J}} \chi}{\partial q^{I_1}_1 \cdots \partial q^{I_k}_k \partial w^J} \right) \cdot \psi^{I_1 \cdots I_k J}
\end{equation}
with multiindices $I_1,\ldots, I_k, J \in \mathbb{N}_0^n$ and $\psi^{I_1 \cdots I_k J} \in \mathscr{M}$, and $l$ the order of the differential bimodule.
\end{lemma}
With this one can show that the constructed homotopies all respect the differential subcomplex in the following sense:
\begin{prop}[{\citep[Prop. 5.7.3]{weisphd}}]\label{th:kdiff}
The pullbacks $(G^k)^*: \operatorname{Hom}_{\mathscr{A}^e}(K_k,\mathscr{M}) \rightarrow \operatorname{Hom}_{\mathscr{A}^e}^{\mathrm{diff},L}(X_k,\mathscr{M})$ only take values in the differential cochains of multiorder $L=(l+1,\ldots,l+1)$ and
\begin{equation}
(\Theta^k)^* : \operatorname{Hom}_{\mathscr{A}^e}^{\mathrm{diff}}(X_k,\mathscr{M}) \rightarrow \operatorname{Hom}_{\mathscr{A}^e}^{\mathrm{diff},L}(X_k,\mathscr{M}),
\end{equation}
so elements of the differential Hochschild complex are mapped into such elements.
Also for all $L \in \mathbb{N}^{k+1}_0$ we have
\begin{equation}
(s^k)^* : \operatorname{Hom}_{\mathscr{A}^e}^{\mathrm{diff},L}(X_{k+1},\mathscr{M}) \rightarrow \operatorname{Hom}_{\mathscr{A}^e}^{\mathrm{diff},\tilde L}(X_k,\mathscr{M}),
\end{equation}
where $\tilde l_i = (k-1)! + \abs{L} +l$.
\end{prop}
We want to compute explicitly the map $\tilde G :\operatorname{Hom}(K,\mathscr{M}) \to \mathbf{HC}(\mathscr{A},\mathscr{M})$ induced by $G^*$ in the case of a symmetric bimodule.
We get
\begin{equation} \label{eq:gtilde}
\tilde G( \phi)(a_1,\dots, a_k) = \sum_{i_1,\dots, i_k} (\partial_{i_1} a_1)\dots(\partial_{i_k} a_k) \phi(e^{i_1}\wedge \dots \wedge e^{i_k}).
\end{equation}
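For a symmetric module the formula above can be checked in a small example. The following sketch (using sympy; the function names are ours, not from the text) evaluates $\tilde G(\phi)$ for $k=2$ on $\mathbb{R}^2$, where only $e^1 \wedge e^2 = -e^2\wedge e^1$ contributes, so the sum collapses to the Jacobian determinant of $(a_1,a_2)$ times $\phi(e^1\wedge e^2)$:

```python
import sympy as sp

x, y = sp.symbols("x y")

def G_tilde(phi_e12, a1, a2):
    # tilde G(phi)(a1, a2) = sum over indices of
    # (d a1 / dx_{i1}) (d a2 / dx_{i2}) phi(e^{i1} ^ e^{i2});
    # on R^2 only e^1 ^ e^2 = -e^2 ^ e^1 survives, so the sum
    # collapses to the Jacobian determinant of (a1, a2).
    return (sp.diff(a1, x) * sp.diff(a2, y)
            - sp.diff(a1, y) * sp.diff(a2, x)) * phi_e12

phi = sp.exp(x)  # the value phi(e^1 ^ e^2), an arbitrary element of M
print(sp.simplify(G_tilde(phi, x, y)))      # coordinate functions recover phi
print(sp.simplify(G_tilde(phi, x*y, x*y)))  # antisymmetry: equal arguments give 0
```

In particular one sees directly that $\tilde G(\phi)$ is totally antisymmetric and of order one in each argument.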
\begin{prop}\label{th:kohh}
We have the following isomorphisms:
\begin{equation}
\mathbf{H\!H}^\bullet_\mathrm{diff}(\mathscr{A},\mathscr{M}) \cong \H(\operatorname{Hom}^\mathrm{diff}_{\mathscr{A}^e}(X_\bullet,\mathscr{M})) \cong \H(\operatorname{Hom}_{\mathscr{A}^e}(K_\bullet,\mathscr{M}))
\end{equation}
\end{prop}
\begin{remark} \label{th:homkoz}
Since $K_k$ is free and of finite rank as an $\mathscr{A}^e$-module for any $k\in \mathbb{N}$, we have $\operatorname{Hom}_{\mathscr{A}^e}(K_k,\mathscr{M}) \cong K_k^* \otimes_{\mathscr{A}^e} \mathscr{M} \cong (\mathscr{A}^e \otimes \Lambda^k(\mathbb{R}^n)) \otimes_{\mathscr{A}^e} \mathscr{M} \cong \Lambda^k(\mathbb{R}^n) \otimes \mathscr{M}$ for any module $\mathscr{M}$, where $K_k^*$ denotes the dual of $K_k$ as an $\mathscr{A}^e$-module.
\end{remark}
The differential on $\operatorname{Hom}(K_k,\mathscr{M})$ can then be written as
\begin{equation}
\delta_K (\phi \otimes f) = \sum_i \xi_i e^i \wedge \phi \otimes f
\end{equation}
for $\phi \otimes f \in \Lambda^\bullet(\mathbb{R}^n) \otimes \mathscr{M}$.
Since the Koszul complex is finite and every $K_k$ is a finitely generated module,
it is much smaller than the bar complex. It is therefore easier to handle, but still big enough to compute the desired Hochschild cohomology.
Note that to define the Koszul complex one needs to use the completion of the tensor product in $\mathscr{A}^e$, since otherwise it is not possible to define, for example, the homotopy.
\subsection{Generalization of the HKR theorem}
The aim of this section is to prove a generalization of the HKR theorem.
We start with the simple case that the considered manifolds are $\mathbb{R}^n$.
\begin{theorem}\label{th:hkrl}
Consider an arbitrary smooth map $\mathbb{R}^n \xrightarrow{p} \mathbb{R}^m$.
Then
\begin{equation}
\mathbf{H\!H}^\bullet_\mathrm{diff}(\CC^\infty(\mathbb{R}^m),\CC^\infty(\mathbb{R}^n)) = \Lambda^\bullet(\mathbb{R}^{m}) \otimes \CC^\infty(\mathbb{R}^n)
\end{equation}
as $\CC^\infty(\mathbb{R}^m)$-bimodules.
\end{theorem}
\begin{proof}
Using \cref{th:kohh} we compute the cohomology of the corresponding Koszul complex.
We consider an element $ e^I \otimes f \in \operatorname{Hom}(K_k,\CC^\infty(\mathbb{R}^n))$, where $I$ is a multi-index. Since $K_k$ is a free $\mathscr{A}^e$-module, these elements form a generating set.
We have
\begin{equation}
\partial_K ( e^I \otimes f )= \xi_i e^i \wedge e^I \otimes f = e^i \wedge e^I \otimes (\operatorname{pr}^* x^i f - f \operatorname{pr}^* x^i) =0,
\end{equation}
using that the tensor product is $\mathscr{A}^e$-linear and that the multiplication in
$\CC^\infty(\mathbb{R}^n)$ is commutative.
So the differential is trivial and \cref{th:homkoz} gives us the desired result.
\end{proof}
\begin{remark}
Note that we only needed the fact that $\CC^\infty(\mathbb{R}^n)$ is a symmetric bimodule, so for any symmetric module $\mathscr{M}$ we get
\begin{equation}
\mathbf{H\!H}^\bullet(\CC^\infty(\mathbb{R}^m),\mathscr{M}) \cong \Lambda^\bullet(\mathbb{R}^{m}) \otimes \mathscr{M}.
\end{equation}
\end{remark}
Next we want to consider the trivial situation for the Hochschild cohomology of the differential operators.
\begin{theorem}\label{th:hkrld}
Let $\mathbb{R}^n \xrightarrow{p} \mathbb{R}^m$ be the projection onto the first $k$ coordinates (composed with the inclusion $\mathbb{R}^k \hookrightarrow \mathbb{R}^m$).
Then
\begin{equation}
\mathbf{H\!H}^\bullet_\mathrm{diff}(\CC^\infty(\mathbb{R}^m),\operatorname{DiffOp}(\mathbb{R}^n)) \cong \Lambda^\bullet(\mathbb{R}^{m-k}) \otimes \operatorname{DiffOp_{ver}}(\mathbb{R}^n)
\end{equation}
as $\CC^\infty(\mathbb{R}^m)$-modules.
\end{theorem}
\begin{proof}
This proof is similar to the proof of \cref{th:hkrl}.
Consider elements of the form $e^I \otimes f y^J \in \operatorname{Hom}(K_k,\operatorname{DiffOp}(\mathbb{R}^n))$, where $y^J$ is a symbol, which we identify with the corresponding differential operator, $f \in \CC^\infty(\mathbb{R}^n)$ and $I$, $J$ are multi-indices. We assume that $y^J$ acts on everything to its right. Again, elements of this form generate the whole of $\operatorname{Hom}(K_k,\operatorname{DiffOp}(\mathbb{R}^n))$.
We have
\begin{align}
\partial_K (e^I \otimes f y^J ) & = \sum_{i=1}^m \xi_i e^i \wedge e^I \otimes f y^J \\
& =\sum_{i=1}^m e^i \wedge e^I \otimes f ( \operatorname{pr}^* x^i y^J - y^J \operatorname{pr}^* x^i) \\
& = \sum_{i=1}^k e^i \wedge e^I \otimes f [ x^i, y^J ] \\
& = - \sum_{i=1}^k e^i \wedge e^I \otimes f \partial_{y^i} y^J
\end{align}
using $\operatorname{pr}^* x^i =0$ for $i >k$ and $[x^i,y^J] = -\partial_{y^i} y^J$.
In this case $\operatorname{DiffOp_{ver}}$ denotes the differential operators whose symbols only contain $y^i$ with $i>k$.
For $\partial_K ( e^I \otimes f y^J ) =0$ we need $\partial_K (e^{I'} \otimes y^{J'}) =0$, where $I'\in \mathbb{N}^k$ consists of the first $k$ entries of $I$, and similarly for $J'$. This differential can be regarded as the de Rham differential on $\mathbb{R}^k$ for polynomial functions. Its cohomology is known to be trivial except in degree $0$, where it is one-dimensional and spanned by the constant function $1$. Since the differential is trivial on the other part, we get the result.
\end{proof}
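The triviality of the (polynomial) de Rham cohomology used in the proof can be verified directly with the standard homotopy operator of the Poincar\'e lemma. The following sketch (sympy; names are ours) integrates a closed polynomial $1$-form on $\mathbb{R}^2$ to a potential:

```python
import sympy as sp

x, y, t = sp.symbols("x y t")

def potential(f, g):
    # Homotopy operator of the Poincare lemma for a 1-form
    # w = f dx + g dy on R^2:
    #   F(x, y) = int_0^1 ( x f(tx, ty) + y g(tx, ty) ) dt,
    # which satisfies dF = w whenever w is closed.
    integrand = x * f.subs({x: t*x, y: t*y}) + y * g.subs({x: t*x, y: t*y})
    return sp.integrate(sp.expand(integrand), (t, 0, 1))

# A closed polynomial 1-form: w = d(x^2 y + y^3)
f, g = 2*x*y, x**2 + 3*y**2
assert sp.diff(f, y) == sp.diff(g, x)  # closedness dw = 0
F = potential(f, g)
print(sp.expand(F))                     # recovers x**2*y + y**3
assert sp.diff(F, x) == f and sp.expand(sp.diff(F, y) - g) == 0
```

This makes concrete why only the constants survive in degree $0$: every closed form of positive degree is exact.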
Now we want to use this result for $\mathbb{R}^n$ and generalize it to the situation of an arbitrary smooth map $\operatorname{pr}: N \rightarrow M$ between two manifolds. To be able to localize things we need the assumption that $\operatorname{pr}(N)$ is a submanifold of $M$.
\begin{remark}
Since $\operatorname{pr}(N)$ is a submanifold of $M$, we can assume that $\operatorname{pr}$ has constant rank, since this is true on every connected component. With the constant rank theorem we get adapted charts. This means that for every point $p \in N$ there are open sets $V \subset N$ with $p \in V$ and $U \subset M$ with $\operatorname{pr}(p) \in U$ and $\operatorname{pr}(V) =U$, together with diffeomorphisms $V \rightarrow \tilde V \subset \mathbb{R}^{n}$ and $U \rightarrow \tilde U \subset \mathbb{R}^m$, such that in these charts $\operatorname{pr}$ is the projection onto the first $k = \operatorname{rank}(\operatorname{pr})$ components. Furthermore we can assume that $\tilde U$ and $\tilde V$ are convex.
\end{remark}
\begin{lemma}\label{th:emcm}
The restrictions and charts shown in the following diagram are chain maps
\begin{tikzpicture}
\node(a) {$\mathbf{HC}^\bullet_\mathrm{diff}(M,\operatorname{DiffOp}(P))$};
\node(b) [below=of a] {$\mathbf{HC}^\bullet_\mathrm{diff} (U,\operatorname{DiffOp}(\operatorname{pr}^{-1}(U)))$};
\node(c) [below=of b] {$\mathbf{HC}^\bullet_\mathrm{diff}(U,\operatorname{DiffOp}(V))$};
\node(d) [right=of c] {$\mathbf{HC}^\bullet_\mathrm{diff}(\tilde U,\operatorname{DiffOp}(\tilde V))$};
\path[->] (a) edge (b)
(b) edge (c)
(c) edge node[above] {$\cong$} (d);
\end{tikzpicture}
\end{lemma}
\begin{proof}
This follows from the fact that all involved operators are local.
\end{proof}
Now we come to the first of the two main results of this part of the paper, which gives a generalization of the HKR theorem. A similar statement, using the same concepts for the proof, is given in \citep{bordemann}.
To simplify the notation we will sometimes write $\mathbf{HC}^\bullet(M,\cdot)$ instead of $\mathbf{HC}^\bullet(\CC^\infty(M), \cdot)$.
\begin{theorem}\label{th:hkr}
Let $N \xrightarrow{\operatorname{pr}} M$ be such that $ \operatorname{pr}(N)$ is a closed submanifold of $M$. Then
\begin{equation}
\mathbf{H\!H}^\bullet(\CC^\infty(M),\CC^\infty(N)) = \mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)} \otimes_{\CC^\infty(M)} \CC^\infty(N)
\end{equation}
as $\CC^\infty(M)$-bimodule.
\end{theorem}
\begin{proof}
First we check that if we take $M,N$ and $\operatorname{pr}$ as in \cref{th:hkrl} we get the same statement as there.
Since in this case we have global charts, we have $\mathfrak{X}^\bullet(M) \cong \CC^\infty(M) \otimes \Lambda^\bullet(\mathbb{R}^m)$. So we get $\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)} \otimes_{\CC^\infty(M)} \CC^\infty(N) \cong (\CC^\infty(\operatorname{pr}(N)) \otimes \Lambda^\bullet(\mathbb{R}^m)) \otimes_{\CC^\infty(M)} \CC^\infty(N) \cong \Lambda^\bullet(\mathbb{R}^m) \otimes \CC^\infty(N)$. The last isomorphism holds since $\CC^\infty(\operatorname{pr}(N))$ can be regarded as a subalgebra of $\CC^\infty(M)$: because $\operatorname{pr}(N)$ is closed, any function on $\operatorname{pr}(N)$ can be extended to a function on $M$.
The idea is to localize things such that \cref{th:hkrl} can be applied, and then glue them together again.
Since we consider the differential Hochschild cohomology it is enough to consider an open neighborhood of $\operatorname{pr}(N)$ in $M$. So given an atlas $\{U_\alpha\}$ of submanifold charts of $\operatorname{pr}(N)$ we can assume w.l.o.g. that $\bigcup_\alpha U_\alpha =M$: since $M \setminus \operatorname{pr}(N)$ is open, we can add it as a further chart to obtain a global atlas of $M$. We also consider a locally finite partition of unity $\{\chi_\alpha\}$ subordinate to $\{U_\alpha \}$ and an atlas $\{V_\alpha \}$ of $N$ with partition of unity $\{\psi_\alpha\}$.
These are adapted in the sense that $\operatorname{pr} (V_\alpha) = U_\alpha \cap \operatorname{pr}(N)$.
Now consider a closed $\phi \in \mathbf{HC}^l_\mathrm{diff}(M,\CC^\infty(N))$.
By \cref{th:emcm} the restrictions $\phi_{V_\alpha} \in \mathbf{HC}^l_\mathrm{diff}(U_\alpha,\CC^\infty(V_\alpha))$ are closed. By the first part of the proof there exist $\sigma_\alpha \in \Lambda^\bullet(\mathbb{R}^m) \otimes \CC^\infty(V_\alpha)$ and $\theta_\alpha \in \mathbf{HC}^{l-1}(U_\alpha,\CC^\infty(V_\alpha))$ with $\phi_{V_\alpha} = \sigma_\alpha + \delta \theta_\alpha$.
The restrictions
\begin{equation}
\tilde \theta_\alpha (a_1, \ldots , a_k)|_{V_\alpha} = \psi_\alpha \theta_\alpha(a_1|_{U_\alpha}, \ldots a_k|_{U_\alpha})
\end{equation}
and $0$ elsewhere, define global elements $\tilde\theta_\alpha$, and similarly one defines global elements $\tilde\sigma_\alpha$. Clearly we have $\delta \tilde \theta_\alpha + \tilde \sigma_\alpha = \psi_\alpha (\delta \theta_\alpha + \sigma_\alpha)$,
and, since the partition of unity $\{\psi_\alpha\}$ is locally finite, $\theta = \sum_\alpha \tilde \theta_\alpha$ and
$\sigma= \sum_\alpha \tilde \sigma_\alpha$ are well-defined differential operators. We also get
\begin{equation}
\phi = \sum_\alpha \psi_\alpha \phi = \sum_\alpha ( \tilde \sigma_\alpha + \delta \tilde \theta_\alpha)
= \sigma + \delta \theta
\end{equation}
This gives the desired result.
\end{proof}
\begin{remark}
The isomorphism in the previous theorem is given by the pullback of $\Theta$, since the differential in the Koszul complex is trivial. From \cref{th:kdiff} it also follows that the image of $\Theta^*$ consists exactly of the multivector fields, because the module is symmetric and hence a differential bimodule of order $l=0$. So the pullback maps into the totally antisymmetric multidifferential operators of order one in each argument.
\end{remark}
\begin{prop}\label{th:antisym}
The map which assigns to every cocycle the canonical representative of its cohomology class is given by total antisymmetrization.
\end{prop}
\begin{proof}
The map $\tilde G$ in \cref{eq:gtilde} is an isomorphism on cohomology. So let $\phi \in \mathbf{HC}(M,\mathscr{M})$ be a cocycle. Then there exists an $\eta \in \operatorname{Hom}(K,\mathscr{M})$ such that $[\phi] = [\tilde G \eta]$, i.e.\ $\phi = \tilde G \eta + \delta \psi$ for some $\psi \in \mathbf{HC}(M,\mathscr{M})$. With this we get $\operatorname{Alt}(\phi) = \operatorname{Alt}(\tilde G \eta ) + \operatorname{Alt}(\delta \psi) = \tilde G \eta$, since $\tilde G\eta$ is antisymmetric and $\operatorname{Alt} \circ \delta =0$, the algebra being commutative.
\end{proof}
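The identity $\operatorname{Alt} \circ \delta = 0$ used above can be checked by hand in degree two. The following sketch (sympy; a hedged illustration with an arbitrarily chosen $1$-cochain, not taken from the text) verifies that the antisymmetrization of a Hochschild coboundary vanishes for a commutative algebra with symmetric module:

```python
import sympy as sp

x = sp.symbols("x")

def delta(phi, a, b):
    # Hochschild coboundary of a 1-cochain phi with values in a
    # symmetric bimodule over a commutative algebra:
    #   (delta phi)(a, b) = a phi(b) - phi(a b) + phi(a) b
    return a * phi(b) - phi(a * b) + phi(a) * b

def alt2(c, a, b):
    # total antisymmetrization of a 2-cochain c
    return sp.Rational(1, 2) * (c(a, b) - c(b, a))

# an arbitrary (non-derivation) 1-cochain on C^inf(R)
phi = lambda a: sp.diff(a, x)**2 + a

a, b = sp.sin(x), x**3
print(sp.simplify(alt2(lambda u, v: delta(phi, u, v), a, b)))  # 0
```

Indeed $\operatorname{Alt}(\delta\phi)(a,b) = \tfrac{1}{2}\big(a\phi(b)+\phi(a)b - b\phi(a)-\phi(b)a\big) = 0$ by commutativity; the $\phi(ab)$ terms cancel outright.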
From this we easily get the classical HKR theorem.
\begin{corollary}
For a manifold $M$ we have
\begin{equation}
\mathbf{H\!H}^\bullet_\mathrm{diff}(\CC^\infty(M)) \cong \mathfrak{X}^\bullet(M)
\end{equation}
\end{corollary}
\begin{proof}
Use \cref{th:hkr} with $N = M$ and $\operatorname{pr} = \operatorname{id}$.
\end{proof}
We want to compute $\mathbf{H\!H}^\bullet_\mathrm{diff}(\CC^\infty(M),\CC^\infty(N))$ explicitly in low degrees:
For $f \in \mathbf{HC}^0_\mathrm{diff}(\CC^\infty(M),\CC^\infty(N)) \cong \CC^\infty(N)$ we have
\begin{equation}
(\delta f)(a) = \operatorname{pr}^*a f -f \operatorname{pr}^*a =0
\end{equation}
for all $a \in \CC^\infty(M)$ and $f \in \CC^\infty(N)$. So every element of $\mathbf{HC}^0_\mathrm{diff}(\CC^\infty(M),\CC^\infty(N))$ is closed, but since
there are no elements of degree $-1$, we have $\mathbf{H\!H}^0_\mathrm{diff}(\CC^\infty(M),\CC^\infty(N)) = \CC^\infty(N)$.
For $\phi \in \mathbf{HC}^1_\mathrm{diff}(\CC^\infty(M),\CC^\infty(N))$, we have
\begin{equation}
(\delta \phi)(a,b) = \operatorname{pr}^*a \phi(b) - \phi(ab) + \phi(a) \operatorname{pr}^* b.
\end{equation}
So $\phi$ is closed if
\begin{equation}\label{hc1}
\phi(ab) = \operatorname{pr}^*a \phi(b) + \phi(a) \operatorname{pr}^* b,
\end{equation}
which means that $\phi$ is a derivation along $\operatorname{pr}^*$. Since $\delta^0 =0$ there are no nonzero exact elements, and the cohomology is given by the elements satisfying \eqref{hc1}.
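A concrete closed $1$-cochain can be exhibited for the projection $\operatorname{pr}: \mathbb{R}^2 \rightarrow \mathbb{R}$, $\operatorname{pr}(x,y)=x$. The sketch below (sympy; the specific cochain is our choice, not from the text) checks the derivation property \eqref{hc1} for $\phi(a) = y \cdot (\partial_u a)\circ\operatorname{pr}$:

```python
import sympy as sp

u, x, y = sp.symbols("u x y")  # u: coordinate on M = R; (x, y): on N = R^2

def pr_star(a):
    # pullback along pr: R^2 -> R, pr(x, y) = x
    return a.subs(u, x)

def phi(a):
    # candidate closed 1-cochain: phi(a) = y * (da/du) o pr
    return y * pr_star(sp.diff(a, u))

a, b = sp.exp(u), u**2
lhs = phi(a * b)
rhs = pr_star(a) * phi(b) + phi(a) * pr_star(b)
print(sp.simplify(lhs - rhs))  # 0: phi is a derivation along pr^*
```

By the Leibniz rule every such $\phi$ built from a vector field on $N$ composed with $\operatorname{pr}^*\circ\partial$ satisfies \eqref{hc1}, matching the degree-one part of \cref{th:hkr}.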
Before proving the main theorem we need a small lemma:
\begin{lemma}\label{th:factor}
Let $V$ be a finite dimensional vector space and $W \subset V$ be a vector subspace then
\begin{equation}
\factor{\Lambda^\bullet(V)}{\sprod{\Lambda^1(W)}} = \Lambda^\bullet\left(\factor{V}{W}\right).
\end{equation}
Here $\sprod{x}$ denotes the ideal generated by $x$.
\end{lemma}
\begin{proof}
We define a homomorphism $\phi: \factor{\Lambda^\bullet(V)}{\sprod{\Lambda^1(W)}} \rightarrow \Lambda^\bullet(\factor{V}{W})$ by $[v_1 \wedge \cdots \wedge v_k] \mapsto [v_1] \wedge \cdots \wedge [v_k]$, where $[\cdot]$ denotes the corresponding equivalence classes.
First of all it is easy to see that this is well defined, since every element of the ideal $\sprod{\Lambda^1(W)}$ is a sum of terms each containing a factor $w \in W$, and $[w] =0$ in $\factor{V}{W}$. Clearly $\phi$ is surjective. Using a basis $\{e_i\}_{i \in I}$ of $V$ such that $\{e_i\}_{i\in J}$, with $J \subset I$, is a basis of $W$, one sees that $\phi([e_{i_1} \wedge \cdots \wedge e_{i_k}]) =0$ if and only if one of the $i_j$ lies in $J$, i.e.\ if and only if $[e_{i_1} \wedge \cdots \wedge e_{i_k}] = 0$. This shows that $\phi$ is injective, hence an isomorphism.
\end{proof}
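The dimension count behind \cref{th:lochom} above can be illustrated combinatorially: in each degree $p$ the wedges surviving the quotient are exactly those avoiding a basis of $W$, so their number is $\binom{\dim V - \dim W}{p} = \dim \Lambda^p(V/W)$. A small sketch (plain Python; names are ours):

```python
from itertools import combinations
from math import comb

def quotient_dims(dim_V, dim_W):
    # Basis of Lambda^p(V): wedges e_{i1} ^ ... ^ e_{ip} with i1 < ... < ip.
    # The ideal <Lambda^1(W)> kills every wedge containing a basis vector
    # of W (here indices 0 .. dim_W - 1); the survivors are the wedges in
    # the complementary indices, matching a basis of Lambda^p(V/W).
    dims = []
    for p in range(dim_V + 1):
        survivors = [c for c in combinations(range(dim_V), p)
                     if all(i >= dim_W for i in c)]
        dims.append(len(survivors))
    return dims

print(quotient_dims(4, 1))               # [1, 3, 3, 1, 0]
print([comb(3, p) for p in range(5)])    # [1, 3, 3, 1, 0]
```

The two lists agree, in accordance with the lemma.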
Now we can prove the main theorem of this paper, namely the computation of the Hochschild cohomology $\mathbf{H\!H}^\bullet(\CC^\infty(M),\operatorname{DiffOp}(N))$. It is a significant generalization of the theorem given in \citep{art}, where the situation of a fibered manifold is considered. The key difference from our situation is that there the cohomology is trivial except in degree zero, while here it is in general nontrivial.
We need the assumption that $\operatorname{pr}(N)$ is a closed submanifold. This is needed to use the local situation given in \cref{th:hkrld}. The fact that $\operatorname{pr}(N)$ is closed is important, because otherwise $\CC^\infty(\operatorname{pr}(N))$ would not be a subalgebra of $\CC^\infty(M)$, which is important for our construction.
After proving this theorem, we want to give some details on the isomorphism given in it.
\begin{theorem} \label{th:hkrd}
Let $N \xrightarrow{\operatorname{pr}} M$ be such that $ \operatorname{pr}(N)$ is a closed submanifold of $M$. Then
\begin{equation}
\mathbf{H\!H}^\bullet_\mathrm{diff}(\CC^\infty(M),\operatorname{DiffOp}(N)) \cong \factor{\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)}}{\sprod{\mathfrak{X}(\operatorname{pr}(N))}} \otimes_{\CC^\infty(M)} \operatorname{DiffOp_{ver}}(N)
\end{equation}
as $\CC^\infty(M)$-bimodule, where $\sprod{x}$ denotes the ideal generated by $x$.
\end{theorem}
\begin{proof}
Again we first compare the statement of this theorem with the local situation in \cref{th:hkrld}, i.e.\ $M= \mathbb{R}^m$ and $N = \mathbb{R}^n$.
In this case we have $\operatorname{pr}(N) = \mathbb{R}^k$, so $\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)} = \CC^\infty(\mathbb{R}^k) \otimes \Lambda^\bullet (\mathbb{R}^m)$ and $\mathfrak{X}(\operatorname{pr}(N)) = \CC^\infty(\mathbb{R}^k) \otimes \Lambda^1(\mathbb{R}^k)$. These equalities follow easily from the fact that the involved bundles are trivial.
Next, with \cref{th:factor} we have $\factor{\Lambda^\bullet(\mathbb{R}^m)}{\sprod{\Lambda^1(\mathbb{R}^k)}} = \Lambda^\bullet(\mathbb{R}^{m-k})$.
So we get
\begin{align*}
\factor{\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)}}{\sprod{\mathfrak{X}(\operatorname{pr}(N))}} & =
\factor{\CC^\infty(\mathbb{R}^k) \otimes \Lambda^\bullet(\mathbb{R}^m)}{\sprod {\CC^\infty(\mathbb{R}^k) \otimes \Lambda^1(\mathbb{R}^k)}} \\
& = \CC^\infty(\mathbb{R}^k) \otimes \factor{\Lambda^\bullet(\mathbb{R}^m)}{\sprod{\Lambda^1(\mathbb{R}^k)}} \\
& = \CC^\infty(\mathbb{R}^k) \otimes \Lambda^\bullet(\mathbb{R}^{m-k}),
\end{align*}
since we consider $ \mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)}$ and $\sprod{\mathfrak{X}(\operatorname{pr}(N))}$ as $\CC^\infty(M)$-modules.
Globalizing works as in the previous theorem.
\end{proof}
\begin{remark}
The submodule $\mathbf{HC}^\bullet(\CC^\infty(M),\operatorname{DiffOp_{ver}}(N))$ is symmetric by the definition of a vertical operator, and so, similarly to above, the pullback of $\Theta$ on this submodule maps into the multivector fields.
If one chooses a connection on $N$ as a fibered manifold over $\operatorname{pr}(N)$ one gets
\begin{equation}
\operatorname{DiffOp}(N) = \operatorname{DiffOp_{ver}}(N) \oplus \operatorname{DiffOp_{hor}}(N)
\end{equation}
So one also has $$\mathbf{HC}^\bullet(\CC^\infty(M),\operatorname{DiffOp}(N)) = \mathbf{HC}^\bullet(\CC^\infty(M),\operatorname{DiffOp_{ver}}(N)) \oplus \mathbf{HC}^\bullet(\CC^\infty(M),\operatorname{DiffOp_{hor}}(N)).$$
Now, using the Koszul complex, one can see that any closed element of $\mathbf{HC}^\bullet(\CC^\infty(M),\allowbreak \operatorname{DiffOp_{hor}}(N))$ is exact, since with the notation as in the proof of \cref{th:hkrld} the symbol $y^{J'}$ is non-constant.
So in every cohomology class there is a representative which lies in $\mathbf{HC}^\bullet(\CC^\infty(M),\operatorname{DiffOp_{ver}}(N))$, and on this the isomorphism in \cref{th:hkrd} is given by antisymmetrization, which can be shown as in \cref{th:antisym}.
\end{remark}
\begin{remark}[Connection to bimodule deformation]
This cohomology group in degree two gives the obstruction for the existence of a $\CC^\infty(M)$-module deformation of $\CC^\infty(N)$, see \cref{th:mho}. What we see is that this deformation is guaranteed to be unobstructed only if $\dim M = \dim \operatorname{pr}(N)$ or $\dim M = \dim \operatorname{pr}(N) +1 $. In all other cases one has to expect obstructions.
The existence of a bimodule deformation of a fibered manifold $P \xrightarrow{p} M$ as in \cref{ch:bim} would be guaranteed if the cohomology of $\operatorname{DiffOp}(P)$ as a $\CC^\infty(M) \otimes \CC^\infty(M) \cong \CC^\infty(M \times M)$-bimodule were trivial, where $\operatorname{pr} $ is the projection $p:P \rightarrow M$ composed with the diagonal. Note however that the previous theorem cannot be applied directly, because the bimodule deformation requires the algebraic tensor product, while the isomorphisms of this section use the topological one.
\end{remark}
We want to further interpret the cohomologies, which we computed to be $\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)} \allowbreak \otimes_{\CC^\infty(M)} \CC^\infty(N)$ resp.\ $\factor{\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)}}{\sprod{\mathfrak{X}(\operatorname{pr}(N))}} \otimes_{\CC^\infty(M)} \operatorname{DiffOp_{ver}}(N)$, since at first glance they do not look very intuitive.
We want to show that in fact these two modules can be interpreted as sections of vector bundles over $N$.
First we consider the simpler case $\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)} \otimes_{\CC^\infty(M)} \CC^\infty(N)$.
We have
\begin{equation}
\mathfrak{X}^\bullet(M) \otimes_{\CC^\infty(M)} \CC^\infty(N) \cong
\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)} \otimes_{\CC^\infty(\operatorname{pr}(N))} \CC^\infty(N),
\end{equation}
since $\CC^\infty(M) = \CC^\infty(\operatorname{pr}(N)) \oplus \mathcal{B}$, where $\mathcal{B}= \{a \in \CC^\infty(M) \;|\; a|_{\operatorname{pr}(N)} = 0 \}$, because we assume $\operatorname{pr}(N)$ to be closed. The direct sum is not canonical, because one has to embed $\CC^\infty(\operatorname{pr}(N))$ into $\CC^\infty(M)$. One possibility is to define a prolongation $\operatorname{prol}: \CC^\infty(\operatorname{pr}(N)) \rightarrow \CC^\infty(M)$ satisfying $(\operatorname{prol} a)|_{\operatorname{pr}(N)} = a$, for example by choosing a tubular neighborhood. Note that $\operatorname{pr}^*a = 0$ for $ a\in \mathcal{B}$.
In the following proposition we need the concept of the pullback of a vector bundle, see e.g. \citep[Section {III,8.9}]{michor}. For a vector bundle $E$ we denote the pullback along $f$ by $f^\sharp E$.
\begin{prop}\label{th:vecpb}
In the considered situation we have
\begin{equation}
\mathfrak{X}^\bullet(M) \otimes_{\CC^\infty(M)} \CC^\infty(N) \cong \Gamma^\infty(\operatorname{pr}^\sharp \Lambda^\bullet(TM|_{\operatorname{pr}(N)})).
\end{equation}
\end{prop}
\begin{proof}
Since $\mathcal{E} = \mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)}$ consists of the sections of a vector bundle over $\operatorname{pr}(N)$, it is a projective module over $\mathscr{A}= \CC^\infty(\operatorname{pr}(N))$. So there exists a projector $P \in \mathscr{A}^{n \times n}$ such that $\mathcal{E} = P \mathscr{A}^n$, and we have
\begin{equation}
P \mathscr{A}^n \otimes_\mathscr{A} \CC^\infty(N) \cong \operatorname{pr}^*(P)\, \CC^\infty(N)^n.
\end{equation}
This shows $\mathfrak{X}^\bullet(M) \otimes_{\CC^\infty(M)} \CC^\infty(N)$ to be projective as a $\CC^\infty(N)$-module, so it is isomorphic to the sections of a vector bundle over $N$.
One can define a map $\phi: \mathfrak{X}^\bullet(M) \otimes_{\CC^\infty(M)} \CC^\infty(N) \rightarrow \Gamma^\infty(\operatorname{pr}^\sharp \Lambda^\bullet(TM|_{\operatorname{pr}(N)})) $ by
\begin{equation}
X \otimes f \mapsto f \operatorname{pr}^\sharp X.
\end{equation}
This is clearly $\CC^\infty(N)$-linear, so it is a vector bundle morphism. One can also show that $\phi$ is an isomorphism.
\end{proof}
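The projector argument in the proof can be made concrete in a toy example. The sketch below (sympy; the particular projector and pullback are our illustrative choices) exhibits an idempotent $P$ over $\CC^\infty(\mathbb{R})$ and checks that applying an algebra homomorphism $\operatorname{pr}^*$ entrywise preserves idempotency, so $\operatorname{pr}^*(P)$ again defines a projective module:

```python
import sympy as sp

x = sp.symbols("x")

# An idempotent over C^inf(R): the pointwise orthogonal projection onto
# the line spanned by (cos x, sin x); its image P * A^2 is a projective
# module (the sections of a line bundle).
P = sp.Matrix([[sp.cos(x)**2,          sp.cos(x)*sp.sin(x)],
               [sp.cos(x)*sp.sin(x),   sp.sin(x)**2]])
assert (P * P - P).applyfunc(sp.simplify) == sp.zeros(2, 2)

# pr^* (pullback along a smooth map, here illustrated by x -> x^2) is an
# algebra homomorphism, hence it preserves the relation P^2 = P entrywise.
P_pulled = P.subs(x, x**2)
print((P_pulled * P_pulled - P_pulled).applyfunc(sp.simplify))  # zero matrix
```

This is exactly the "purely algebraic reason" invoked again below for $\mathfrak{N}(M,\operatorname{pr}(N))$.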
Now we come to the case of $\factor{\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)}}{\sprod{\mathfrak{X}(\operatorname{pr}(N))}} \otimes_{\CC^\infty(M)} \operatorname{DiffOp_{ver}}(N)$.
We recall that $\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)}$ and $\mathfrak{X}^\bullet(\operatorname{pr}(N))$ can be considered as sections of vector bundles over $\operatorname{pr}(N)$, which is by assumption a manifold, and that $\mathfrak{X}^\bullet(\operatorname{pr}(N))$ is a subbundle of $\mathfrak{X}^\bullet(M)|_{\operatorname{pr}(N)}$, so the quotient is again a vector bundle over $\operatorname{pr}(N)$.
For a manifold $M$ and a submanifold $ N \subset M$ we define
\begin{equation}
\mathfrak{N}(M,N) = \factor{\mathfrak{X}^\bullet(M)|_{N}}{\sprod{\mathfrak{X}(N)}}
\end{equation}
to be the sections of the exterior algebra of the normal bundle of $N$ in $M$.
This means
\begin{equation}
\mathfrak{N}(M,N) \cong \Gamma^\infty( \Lambda^\bullet(TN^\bot))
\end{equation}
as vector bundles over $N$. Here $TN^\bot = \factor{TM|_N}{TN}$ denotes the normal bundle of $N$. This can be seen using \cref{th:factor}.
So we have
\begin{equation}
\mathbf{H\!H}^\bullet(\CC^\infty(M),\operatorname{DiffOp}(N)) \cong \mathfrak{N}(M,\operatorname{pr}(N)) \otimes_{\CC^\infty(M)} \operatorname{DiffOp_{ver}}(N).
\end{equation}
First we note that $\CC^\infty(M) = \CC^\infty(\operatorname{pr}(N)) \oplus \mathcal{B}$, and for $b \in \mathcal{B}$ we have $\operatorname{pr}^*b = 0$. So we have
\begin{equation}
\mathfrak{N}(M,\operatorname{pr}(N))\otimes_{\CC^\infty(M)} \operatorname{DiffOp_{ver}}(N) \cong
\mathfrak{N}(M,\operatorname{pr}(N)) \otimes_{\CC^\infty(\operatorname{pr}(N))} \operatorname{DiffOp_{ver}}(N)
\end{equation}
since $\mathcal{P} = \mathfrak{N}(M,\operatorname{pr}(N))$ is a $\CC^\infty(\operatorname{pr}(N))$-module.
Since $\mathcal{P}$ are the sections of a vector bundle over $\operatorname{pr}(N)$, it is a projective $\CC^\infty(\operatorname{pr}(N))$-module. This means we can write $\mathcal{P} = P\CC^\infty(\operatorname{pr}(N))^k$ for a projector $P \in \CC^\infty(\operatorname{pr}(N))^{k \times k}$.
We then have $P\CC^\infty(\operatorname{pr}(N))^k \otimes_{\CC^\infty(\operatorname{pr}(N))} \CC^\infty(N) \cong \operatorname{pr}^*(P)\, \CC^\infty(N)^k$ for purely algebraic reasons. This shows $\mathfrak{N}(M,\operatorname{pr}(N)) \otimes_{\CC^\infty(M)} \operatorname{DiffOp_{ver}}(N)$ to be a projective $\CC^\infty(N)$-module, so it is isomorphic to the sections of a vector bundle over $N$.
\begin{prop}
We have
\begin{equation}
\begin{split}
\mathfrak{N}(M,\operatorname{pr}(N)) \otimes_{\CC^\infty(M)} \operatorname{DiffOp_{ver}}(N)
&\cong \Gamma^\infty \left(\operatorname{pr}^\sharp \Lambda^\bullet \left(\factor{TM}{T \operatorname{pr}(N)} \right) \otimes \mathcal{S}(VN) \right) \\
& \cong \Gamma^\infty \left(\operatorname{pr}^\sharp \Lambda^\bullet T \operatorname{pr} N^\bot \right) \otimes \operatorname{DiffOp_{ver}}(N) .
\end{split}
\end{equation}
Here $VN$ is the vertical bundle of $N$ with respect to some connection on the fibered manifold $N \rightarrow \operatorname{pr}(N)$.
\end{prop}
\begin{proof}
First we note that $\mathfrak{N}(M,\operatorname{pr}(N)) \cong \Gamma^\infty( \Lambda^\bullet \left(\factor{TM}{T \operatorname{pr}(N)}\right))$.
Then, choosing a torsion-free connection on $N$, we get $\Gamma^\infty(\mathcal{S} TN) \cong \operatorname{DiffOp}(N)$. For the vertical operators we get with this $\Gamma^\infty(\mathcal{S} (VN)) \cong \operatorname{DiffOp_{ver}}(N)$, assuming that $\nabla_X \operatorname{pr}^* a$ is the pullback of some function on $\operatorname{pr}(N)$ for any $a\in \CC^\infty(\operatorname{pr}(N))$ and $X \in \mathfrak{X}(N)$. So we can consider the differential operators as sections of a vector bundle over $N$.
In general, for two vector bundles $E,F$ over $N$ we have $\Gamma^\infty(E) \otimes_{\CC^\infty(N)} \Gamma^\infty(F) \cong \Gamma^\infty(E \otimes F)$. Using this and \cref{th:vecpb} we get the desired result.
\end{proof}
The above proposition shows that one can regard $\mathbf{H\!H}^\bullet_\mathrm{diff}(\CC^\infty(M),\operatorname{DiffOp}(N))$ as a space of multivector fields on $N$ which take functions on $M$ as arguments and have values in the vertical differential operators on $N$.
Finally we want to embed the cohomology, as reformulated above, back into the complex. For this it is necessary to embed the normal bundle of $\operatorname{pr}(N)$ into the tangent bundle $TM|_{\operatorname{pr}(N)}$, for example by choosing a tubular neighborhood. With this, an element $X \otimes D \in \Gamma^\infty \left(\operatorname{pr}^\sharp \Lambda^k T \operatorname{pr} N^\bot \right) \otimes \operatorname{DiffOp_{ver}}(N)$ can be considered as an element of $\mathbf{HC}_\mathrm{diff}(M,\operatorname{DiffOp}(N))$ via
\begin{equation}
(X_1 \wedge \dots \wedge X_k \otimes D)(a_1, \dots , a_k)(f) = \sum_{\sigma \in S_k} \operatorname{sign}(\sigma) \sprod{\operatorname{pr}^\sharp\d a_1,X_{\sigma(1)}} \dots \sprod{\operatorname{pr}^\sharp\d a_k,X_{\sigma(k)}} D(f).
\end{equation}
Here $\sprod{\cdot,\cdot}$ denotes the natural pairing between $\operatorname{pr}^\sharp TM|_{\operatorname{pr}(N)}$ and $\operatorname{pr}^\sharp T^*M|_{\operatorname{pr}(N)}$.
\bibliographystyle{bibstyle}
\section{Introduction}
In quantum theories, we speak of electrons as having a property called ``spin.'' The reason we use this term is that electrons possess an angular momentum and a magnetic moment, just as one would expect for a rotating charged body. However, textbooks frequently warn students against thinking of the electron as actually rotating, or even being in some quantum superposition of different rotating motions. There are three serious obstacles to regarding the electron as a spinning object:
\begin{enumerate}
\item Given certain upper limits on the size of an electron, the electron's mass would have to rotate faster than the speed of light in order for the electron to have the correct angular momentum.
\item Similarly, the electron's charge would have to rotate faster than the speed of light in order to generate the correct magnetic moment.
\item A simple classical calculation of the electron's gyromagnetic ratio yields the wrong answer---off by a factor of (approximately) 2.
\end{enumerate}
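The first and third obstacles can be made quantitative with a back-of-the-envelope calculation. The sketch below (plain Python; SI constants hardcoded and rounded, the classical electron radius taken as the size scale) estimates the equatorial speed a sphere of that radius would need in order to carry spin angular momentum $\hbar/2$, and compares the classically predicted magnetic moment with the (approximately) observed one:

```python
# Rough numbers behind obstacles 1 and 3 (SI units, values rounded).
hbar = 1.054571817e-34   # J s
m_e  = 9.1093837e-31     # kg
e    = 1.602176634e-19   # C
c    = 2.99792458e8      # m/s
r_e  = 2.8179403e-15     # m, classical electron radius

# Obstacle 1: mass concentrated at radius r_e would need speed
# v = L / (m r) to carry angular momentum L = hbar / 2.
v = (hbar / 2) / (m_e * r_e)
print(f"required equatorial speed: {v / c:.1f} c")  # far above light speed

# Obstacle 3: for matched mass and charge distributions the classical
# gyromagnetic ratio is mu / L = e / (2 m), so with L = hbar / 2 the
# predicted moment is half the Bohr magneton -- off by a factor of ~2.
mu_classical = (e / (2 * m_e)) * (hbar / 2)
mu_bohr = e * hbar / (2 * m_e)   # approximately the observed moment
print(f"observed / classical moment ratio: {mu_bohr / mu_classical:.1f}")
```

The required speed comes out at tens of times the speed of light, and the moment ratio is the familiar factor of 2, which the account developed below attributes to the charge of the Dirac field rotating twice as fast as its mass.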
These obstacles can be overcome by taking the electron's classical state (the state which enters superpositions) to be a state of the Dirac field. The Dirac field possesses mass and charge. One can define velocities describing the flow of mass and the flow of charge. The first two obstacles are addressed by the fact that the electron's mass and charge are spread over sufficiently large distances that the correct angular momentum and magnetic moment can be understood as resulting from rotation without either the velocity of mass flow or the velocity of charge flow exceeding the speed of light. The electron's gyromagnetic ratio is twice the expected value because its charge rotates twice as fast as its mass.
In the next section I explain the three obstacles above in more detail. Then, I consider how the obstacles are modified by the fact that some of the electron's mass is in the electromagnetic field that surrounds it. The mass in the electromagnetic field rotates around the electron and thus contributes to its angular momentum. Because the amount of mass in the electromagnetic field ultimately turns out to be small, this is not the dominant contribution to the electron's angular momentum. But, the idea of mass rotating in a classical field appears again when we consider the Dirac field which describes the electron itself. After an initial examination of this flow of mass and charge in the Dirac field, I show that the three obstacles can be overcome (in the manner described above) if we restrict ourselves to positive frequency modes of the Dirac field. This restriction is imposed because negative frequency modes are associated with positrons in quantum field theory. After presenting this account of spin, I compare it to other proposals as to how one might understand the electron's angular momentum and magnetic moment as arising from the motion of its mass and charge.
Before jumping into all of that, let me explain the focus on classical field theory in a paper about electron spin (a supposedly quantum phenomenon). When one moves from classical field theory to a quantum description of the electron within the quantum field theory of quantum electrodynamics, the classical Dirac and electromagnetic fields are quantized. Instead of representing the electron by a definite Dirac field interacting with a definite electromagnetic field, we represent the electron by a superposition of different field states---a wave functional that assigns amplitudes to different possible classical states of the fields. The dynamics of this quantum state are determined by a wave functional version of the Schr\"{o}dinger equation and can be calculated using path integrals which sum contributions from different possible evolutions of the fields (different possible paths through the space of field configurations). Seeing that the three obstacles above can be surmounted for each classical state makes the nature of spin in a quantum theory of those fields much less mysterious. The electron simply enters superpositions of different states of rotation.
In the previous paragraph I assumed a ``field approach'' to quantum field theory where one starts from a relativistic classical theory of the Dirac field (with the Dirac equation giving the dynamics) and then quantizes this classical field theory to get a quantum theory of the Dirac field. \citet{carrollblog} advocates such an approach. He writes:
\begin{quote}
``What about the Klein-Gordon and Dirac equations? These were, indeed, originally developed as `relativistic versions of the non-relativistic Schr\"{o}dinger equation,' but that's not what they ended up being useful for. ... The Klein-Gordon and Dirac equations are actually not quantum at all---they are \emph{classical field equations}, just like Maxwell's equations are for electromagnetism and Einstein's equation is for the metric tensor of gravity. They aren't usually taught that way, in part because (unlike E\&M and gravity) there aren't any macroscopic classical fields in Nature that obey those equations.''\footnote{Carroll's point at the end of the quote is important (see also \citealp[chapter 8]{duncan}). The motivation for studying the classical Dirac field in this paper is not that classical Dirac field theory emerges as an approximate description at the macroscopic level of what is happening microscopically according to quantum field theory. The motivation is that quantum field theory describes superpositions of states of the classical Dirac field. Classical Dirac field theory plays a key role in the foundations of quantum electrodynamics.}
\end{quote}
Carroll then goes on to explain that in quantum field theory the quantum state can be represented as a wave functional obeying a version of the Schr\"{o}dinger equation.
There is an alternative ``particle approach'' to quantum field theory where one begins instead from a relativistic quantum single particle theory in which the Dirac field is interpreted as a wave function for the electron and the Dirac equation gives the dynamics for that wave function. In this theory, the electron is treated as a point particle in a superposition of different locations. From this quantum particle theory, one can move to quantum field theory by transitioning from a relativistic single particle quantum theory to a theory with a variable number of particles. Instead of having a wave function that assigns amplitudes to possible spatial locations for a single particle, one uses a wave function that assigns amplitudes to possible spatial arrangements of any number of particles (to points in the disjoint union of all $N$-particle configuration spaces).
These two approaches are often seen as different ways of formulating the very same physical theory.\footnote{For more on the field approach, see \citet{jackiw1990}; \citet[chapters 10 and 11]{hatfield}; \citet[chapter 4]{valentini1992}; \citet[section 12.4]{holland}; \citet{valentini1996}; \citet[chapter 3]{peskinschroeder}; \citet[chapter 4]{ryder}; \citet[pg.\ 241--242]{weinberg1999}; \citet{huggett2000}; \citet{wallace2001, wallace2006, wallace2017}; \citet[chapters 2 and 4]{tong}; \citet{baker2009}; \citet{struyve2010}; \citet{duncan}; \citet[section 4.3.1]{myrvold2015}. For more on the particle approach, see \citet[chapters 6--8]{schweberQFT}; \citet[sec.\ 13.2]{bjorkendrellfields}; \citet{thaller1992}; \citet[chapter 3]{teller}; \citet[section 3]{durr2005}; \citet{deckert}; \citet{wallace2017}. It is also possible to adopt a mixed approach. For example, \citet{bohmhiley} take a field approach for bosons and a particle approach for fermions.} However, in trying to understand what really exists in nature it is tempting to ask which approach better reflects reality. Put another way: Is quantum field theory fundamentally a theory of fields or particles? This is a tough question and I will not attempt to settle it here (or even to develop the alternatives in much detail). I seek only to display a single virtue of the first perspective: it allows us to understand electrons as truly spinning. Adopting the first perspective is compatible with a number of different strategies for interpreting quantum field theory as it leaves open many foundational questions, such as: Does the wave functional ever collapse? Is there any additional ontology beyond the wave functional? Are there many worlds or is there just one?
What follows is a project of interpretation, not modification. It is generally agreed that the equations of our best physical theories describe an electron that has ``spin'' but does not actually rotate. Here I present an alternative interpretation of the very same equations. There is no need to modify these equations so that they describe a rotating electron. Interpreted properly, they already do.
\section{The Obstacles}\label{obstacles}
The first obstacle to regarding the electron as truly spinning is that it must rotate superluminally in order to have the correct angular momentum. One estimate for the radius of the electron is the classical electron radius, $\frac{e^2}{m c^2}\approx10^{-13}\mbox{ cm}$ (which will be explained in the next section). If you assume that the angular momentum of the electron is due entirely to the spinning of a sphere with this radius and the mass of the electron, points on the edge of the sphere would have to be moving superluminally \citep[problem 4.25]{griffithsQM}. To get an angular momentum of $\frac{\hbar}{2}$ with subluminal rotation speeds, the electron's radius must be greater than (roughly) the Compton radius of the electron, $\frac{\hbar}{m c}\approx10^{-11}\mbox{ cm}$. The relation between velocity at the equator $v$ and angular momentum $|\vec{L}|$ for a spherical shell of mass $m$ and radius $R$ is
\begin{equation}
|\vec{L}| = \frac{2}{3}m v R\ .
\end{equation}
Setting $|\vec{L}| = \frac{\hbar}{2}$ and $v=c$ then solving for $R$ yields a radius on the order of the Compton radius,
\begin{equation}
R=\frac{3}{4}\frac{\hbar}{m c}\ .
\end{equation}
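As a quick numerical sanity check (an illustrative sketch added here, not part of the original derivation, assuming standard CGS values for the constants), one can confirm both that the minimum radius above is of Compton order and that the shell relation $|\vec{L}| = \frac{2}{3}mvR$ demands a far superluminal equatorial speed at the classical electron radius:

```python
# Illustrative check: for a spherical shell, |L| = (2/3) m v R.  Setting
# |L| = hbar/2 and v = c gives R = (3/4) hbar/(m c), of Compton order;
# at the classical electron radius the same relation forces v >> c.
hbar = 1.054571817e-27   # erg s (Gaussian/CGS units)
m = 9.1093837e-28        # g, electron mass
c = 2.99792458e10        # cm/s
e = 4.80320425e-10       # esu, elementary charge

R_min = 3 * hbar / (4 * m * c)       # minimum radius for subluminal rotation
R_compton = hbar / (m * c)           # Compton radius, ~3.9e-11 cm
R_classical = e**2 / (m * c**2)      # classical electron radius, ~2.8e-13 cm

# Speed required at the classical electron radius to get |L| = hbar/2:
v_needed = 3 * (hbar / 2) / (2 * m * R_classical)
print(R_min / R_compton)   # 0.75
print(v_needed / c)        # roughly 100: far superluminal
```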
Rejecting this picture of a spinning extended electron, one might imagine the mass of the electron to be confined to a single point.\footnote{The fact that there is mass in the electromagnetic field makes this difficult to imagine (see footnote \ref{whereisthemass}).} If this were so, the electron's angular momentum---as calculated from the usual definition of angular momentum in terms of the linear momenta of a body's parts and their displacements from the body's center---would be zero (because none of the point electron's mass is displaced from its center). One might respond that in quantum physics we are forced to revise this definition of angular momentum and allow point particles to possess angular momentum. The following sections show that there is no need to so radically revise our understanding of angular momentum.
The second obstacle is that an electron with the classical electron radius would have to spin superluminally to produce the correct magnetic moment. Assuming the magnetic moment is generated by a spinning sphere of charge imposes essentially the same minimum radius for the electron as the first obstacle---the Compton radius. The relation between velocity and magnetic moment $|\vec{m}|$ for a spherical shell of charge $-e$ is
\begin{equation}
|\vec{m}| = \frac{e R v}{3 c}\
\end{equation}
\citep[pg.\ 127]{rohrlich}. Inserting $v=c$ and $|\vec{m}|=\frac{e \hbar}{2 m c}$ (the Bohr magneton) yields a radius of
\begin{equation}
R=\frac{3}{2}\frac{\hbar}{m c}\ .
\end{equation}
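The magnetic-moment version of the arithmetic can be checked the same way (again an added illustrative sketch, assuming standard CGS constants): solving $|\vec{m}| = \frac{eRv}{3c}$ with $v=c$ and the Bohr magneton yields a radius $1.5$ times the Compton radius, matching the equation above.

```python
# Illustrative check: the shell relation |m| = e R v/(3c), with v = c and
# |m| set to the Bohr magneton, gives R = (3/2) hbar/(m c).
hbar = 1.054571817e-27   # erg s (Gaussian/CGS units)
m = 9.1093837e-28        # g, electron mass
c = 2.99792458e10        # cm/s
e = 4.80320425e-10       # esu, elementary charge

mu_B = e * hbar / (2 * m * c)    # Bohr magneton
R = 3 * mu_B / e                 # solve |m| = e R v/(3c) at v = c
print(R / (hbar / (m * c)))      # 1.5
```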
The third obstacle to regarding the electron as spinning is that its gyromagnetic ratio (the ratio of magnetic moment to angular momentum) differs from the simplest classical estimate (\citealp[problem 5.56]{griffiths}; \citealp[pg.\ 187]{jackson}):
\begin{equation}
\frac{|\vec{m}|}{|\vec{L}|}=\frac{e}{2 m c}\ .
\label{classicalGR}
\end{equation}
I stress that this is the \emph{simplest} estimate and not \emph{the} classical gyromagnetic ratio because its derivation requires two important assumptions beyond axial symmetry, each of which will be called into question later: (1) the mass $m$ and charge $-e$ are both distributed in the same way (i.e., mass density is proportional to charge density), and (2) the mass and charge rotate at the same rate. With these assumptions in place, the derived gyromagnetic ratio is independent of the rate of rotation and the distribution of mass and charge. The actual gyromagnetic ratio of the electron is twice this estimate,
\begin{equation}
\frac{|\vec{m}|}{|\vec{L}|}=\frac{e}{m c}\ ,
\label{quantumGR}
\end{equation}
as its angular momentum is $\frac{\hbar}{2}$ and its magnetic moment is the Bohr magneton, $\frac{e \hbar}{2 m c}$ (ignoring the anomalous magnetic moment).
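The factor of two can be confirmed directly from the stated values (an added numerical sketch, using standard CGS constants): dividing the Bohr magneton by $\frac{\hbar}{2}$ gives exactly twice the simple classical estimate.

```python
# Illustrative check: (Bohr magneton) / (hbar/2) = e/(m c), which is
# twice the simple classical gyromagnetic ratio e/(2 m c).
hbar = 1.054571817e-27   # erg s
m = 9.1093837e-28        # g
c = 2.99792458e10        # cm/s
e = 4.80320425e-10       # esu

g_actual = (e * hbar / (2 * m * c)) / (hbar / 2)   # mu_B divided by spin
g_classical = e / (2 * m * c)                      # simple classical estimate
print(g_actual / g_classical)   # 2.0
```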
The physicists who first proposed the idea of electron spin were aware of these obstacles. Ralph Kronig was the first to propose a spinning electron to explain the fine structure of atomic line spectra (in 1925), but he did not publish his results because there were too many problems with his idea. One of these problems was that the electron would have to rotate superluminally \citep[pg.\ 35]{tomonaga}. Independently of Kronig, George Uhlenbeck and Samuel Goudsmit had the same idea. Uhlenbeck spoke with Hendrik Lorentz about the proposal and Lorentz brought up the problem of superluminal rotation (among others). After speaking with Lorentz, Uhlenbeck no longer wanted to publish. But, it was too late. His advisor, Paul Ehrenfest, had already sent the paper off. Uhlenbeck recalls Ehrenfest attempting to reassure the pair by saying: ``You are both young enough to be able to afford a stupidity!'' (\citealp[pg.\ 47]{uhlenbeck}; see also \citealp{goudsmit}). Uhlenbeck and Goudsmit were also aware of the gyromagnetic ratio problem, but they were not so troubled by it. They understood that the classical calculation of the gyromagnetic ratio has assumptions that can be denied (e.g., the calculated gyromagnetic ratio would be different if the electron's mass were distributed evenly throughout the volume of a sphere and its charge were distributed over the surface; \citealp[pg.\ 47]{uhlenbeck}; \citealp[pg.\ 39]{pais1989}).
\section{The Electromagnetic Field}
Before going on to model the electron using the Dirac field, it is worthwhile to consider how the above obstacles are altered by taking the mass of the electromagnetic field into account. By the relativistic equivalence of mass and energy, the electromagnetic field has a relativistic mass density proportional to its energy density (see \citealp{lange}; \citealp{forcesonfields}). In Gaussian units, the density of energy is
\begin{equation}
\rho_f^{\mathcal{E}}=\frac{1}{8 \pi}\left(|\vec{E}|^2+|\vec{B}|^2\right)\ ,
\label{energydensityfield}
\end{equation}
and the density of relativistic mass is
\begin{equation}
\rho_f=\frac{\rho_f^{\mathcal{E}}}{c^2}=\frac{1}{8 \pi c^2}\left(|\vec{E}|^2+|\vec{B}|^2\right) \ .
\label{massdensityfield}
\end{equation}
The $f$ subscript indicates that these are properties of the electromagnetic field and the $\mathcal{E}$ superscript distinguishes the energy density from the relativistic mass density. The total mass of the electron is the sum of this electromagnetic mass plus any mass possessed by the electron itself.\footnote{I will use the phrase ``the electron itself'' to refer to the bare electron, distinct from the electromagnetic field that surrounds it. This is to be contrasted with the dressed electron, which includes both the electron itself and its electromagnetic field.} The mass of the electromagnetic field moves with a velocity that can be expressed in terms of the Poynting vector, $\vec{S}= \frac{c}{4\pi} \vec{E} \times \vec{B}$, which gives the energy flux density of the field. The field velocity\footnote{This field velocity appears in \citet{poincare1900}; \citet[section 12.6.2]{holland}; \citet[box 8.3]{lange}; \citet{forcesonfields}.} can be found by dividing the energy flux density $\vec{S}$ by the energy density $\rho_f^{\mathcal{E}}$ or, equivalently, by dividing the momentum density of the field,
\begin{equation}
\vec{G}_f=\frac{\vec{S}}{c^2}= \frac{1}{4\pi c} \vec{E} \times \vec{B} \ ,
\label{momentumdensityfield}
\end{equation}
by its mass density \eqref{massdensityfield},
\begin{equation}
\vec{v}_f=\frac{\vec{G}_f}{\rho_f}=\frac{\vec{S}}{\rho_f^{\mathcal{E}}}\ .
\label{fieldvelocity}
\end{equation}
Looking at the field lines around a charged magnetic dipole, like the electron, it is clear from \eqref{momentumdensityfield} and \eqref{fieldvelocity} that the field mass circles the axis picked out by the dipole, as depicted in figure \ref{chargeddipole} (\citealp[chapter 27]{feynman2}; \citealp[chapter 8]{lange}).
The fact that some (or perhaps all) of the mass of the electron is located outside the bounds of the electron itself\footnote{Sometimes you see it said that a portion of the electron's mass is electromagnetic in origin, which seems to suggest that although this portion of mass originates in the energy of the electromagnetic field it is possessed by and located at the electron itself. I have argued against such an understanding of electromagnetic mass in \citet{forcesonfields}. The electromagnetic mass is located in the electromagnetic field.\label{whereisthemass}} and rotating appears to be helpful for addressing the first obstacle---getting a large angular momentum without moving superluminally is easier if the mass is more spread out. Also, there is no danger of the mass in the electromagnetic field moving superluminally since the magnitude of the field velocity in \eqref{fieldvelocity} is maximized at $c$ when $\vec{E}$ is perpendicular to $\vec{B}$ and $|\vec{E}|=|\vec{B}|$.
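The light-speed bound on the field velocity can be verified numerically (an added sketch, not from the original text): $|\vec{v}_f| = c\,\frac{2|\vec{E}\times\vec{B}|}{|\vec{E}|^2+|\vec{B}|^2}$, and since $2|\vec{E}\times\vec{B}| \leq |\vec{E}|^2+|\vec{B}|^2$, the ratio never exceeds one.

```python
# Illustrative check: |v_f|/c = 2|E x B| / (|E|^2 + |B|^2) <= 1 for random
# fields, with equality when E is perpendicular to B and |E| = |B|.
import numpy as np

rng = np.random.default_rng(0)

def field_speed_over_c(E, B):
    # |S| divided by (c times the energy density), in units of c
    return 2 * np.linalg.norm(np.cross(E, B)) / (E @ E + B @ B)

ratios = [field_speed_over_c(rng.normal(size=3), rng.normal(size=3))
          for _ in range(10000)]
print(max(ratios) <= 1.0)                                # True
print(field_speed_over_c(np.array([1.0, 0.0, 0.0]),
                         np.array([0.0, 1.0, 0.0])))     # 1.0
```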
\begin{figure}[htb]
\center{\includegraphics[width=8 cm]{electron2.jpg}}
\caption{This figure depicts the electric and magnetic fields produced by the electron's charge and magnetic dipole moment. Also shown is the Poynting vector $\vec{S}$ which indicates that the mass of the electromagnetic field rotates about the axis picked out by the electron's magnetic moment.}
\label{chargeddipole}
\end{figure}
We are now in a position to see where the classical electron radius comes from and to see why it is a completely unreasonable estimate to use in motivating the first obstacle (from section \ref{obstacles}). Let's work up to that slowly. First, note that the smaller the electron is, the greater the mass in the electric and magnetic fields surrounding the electron. Keeping the total mass of the electron fixed, the smaller the electron is, the less mass it itself possesses. If we imagine making the electron as small as possible,\footnote{If we were willing to make the mass of the electron itself negative, its radius could be even smaller \citep[pg.\ 214]{pearle}.} putting all of its mass in the electromagnetic field, we arrive at a radius for the electron that we can call the ``electromagnetic radius,'' on the order of\footnote{The exact number depends on the way the electron's charge is distributed and how that charge flows.} $10^{-12}$ cm (\citealp[pg.\ 47]{uhlenbeck}; \citealp[pg.\ 39]{pais1989}; \citealp[chapter 8]{macgregor}). The classical electron radius was arrived at through similar reasoning applied before the electron's magnetic moment was discovered. It was assumed that the electron's mass comes entirely from its electric field. If we take the electron's charge to be distributed evenly over a spherical shell, the radius calculated in this way would be
\begin{align}
R&=\frac{e^2}{2 m c^2}\ ,
\label{electricsurfaceradius}
\end{align}
as the energy in the electric field is $\frac{e^2}{2 R}$. Ignoring the prefactor (which is dependent on the way the charge is distributed), we get the classical electron radius,\footnote{See \citet[section 38-3]{feynman2}, \citet[section 6-1]{rohrlich}.}
\begin{equation}
R=\frac{e^2}{m c^2}\approx 2.82 \times 10^{-13} \mbox{ cm}\ .
\label{classicalelectronradius}
\end{equation}
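A quick numerical evaluation (an added sketch, with standard CGS values) reproduces the stated figure and shows that the classical electron radius is the fine-structure constant ($\approx 1/137$) times the Compton radius, which is why it sits two orders of magnitude below it.

```python
# Illustrative check: e^2/(m c^2) is about 2.82e-13 cm, and equals
# alpha = e^2/(hbar c) times the Compton radius hbar/(m c).
hbar = 1.054571817e-27   # erg s
m = 9.1093837e-28        # g
c = 2.99792458e10        # cm/s
e = 4.80320425e-10       # esu

R_classical = e**2 / (m * c**2)
alpha = e**2 / (hbar * c)            # fine-structure constant, ~1/137
print(R_classical)                   # ~2.82e-13 cm
print(R_classical / (hbar / (m * c)))  # equals alpha
```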
This radius is an order of magnitude smaller than the electromagnetic radius because the amount of energy in the magnetic field is much greater than the amount in the electric field. Neither of these radii should prompt worries of superluminal mass flow. If the mass of the electron resides entirely in its electromagnetic field, then the electron itself is massless and energyless. It doesn't matter how fast it's spinning since it itself won't have any angular momentum. The angular momentum is entirely in the field and the mass of the field cannot move superluminally.
Understanding that the electromagnetic field possesses mass does little to alter the second obstacle. Although the electron's mass bleeds into the field, its charge does not.
The third obstacle is complicated by the existence of mass in the electromagnetic field. The simple calculation of the gyromagnetic ratio for a spinning charged body given above \eqref{classicalGR} was the ratio of the magnetic moment produced by a spinning body to the angular momentum of that body itself. But, once we recognize that some of the mass we associate with that body is actually in its electromagnetic field, we must take the field's angular momentum into consideration when calculating the gyromagnetic ratio. Here is one illustrative way of doing so. The electric and magnetic fields around a charged magnetic dipole located at the origin are
\begin{align}
\vec{E}&=-e\frac{\vec{x}}{|\vec{x}|^3}
\nonumber
\\
\vec{B}&=\frac{3 (\vec{m}\cdot\vec{x})\vec{x}}{|\vec{x}|^5}-\frac{\vec{m}}{|\vec{x}|^3}\ .
\end{align}
If we assume the charge $-e$ to be uniformly distributed over a spherical shell of radius $R$, so that the above electric field is only present outside this radius, and also that the only contribution to the angular momentum comes from the fields outside this radius (because the entirety of the electron's mass resides in its electromagnetic field), we arrive at a gyromagnetic ratio of
\begin{equation}
\frac{|\vec{m}|}{|\vec{L}|}=\frac{3 R c}{2 e}\ .
\label{fieldGR}
\end{equation}
Unlike the simple calculation of the gyromagnetic ratio of an axially symmetric spinning charged body mentioned above \eqref{classicalGR}, this result is radius dependent.\footnote{I have only rarely seen the angular momentum of the electromagnetic field taken into account when calculating the gyromagnetic ratio of the electron (\citealp{corben1961, giulini2008}).}
We must input a radius for the electron if we are to compare \eqref{fieldGR} to \eqref{classicalGR} and \eqref{quantumGR}. One option would be to use the classical electron radius \eqref{classicalelectronradius}. However, we must be careful because the prefactors that were ignored in \eqref{classicalelectronradius} are important in \eqref{fieldGR}. In the earlier derivation of the angular momentum of the field, it was assumed that the charge was distributed uniformly over the surface of the sphere as in \eqref{electricsurfaceradius}. Continuing with that assumption and plugging \eqref{electricsurfaceradius} into \eqref{fieldGR} yields a gyromagnetic ratio of
\begin{equation}
\frac{|\vec{m}|}{|\vec{L}|}=\frac{3 e}{4 m c}\ ,
\label{firstGR}
\end{equation}
closer to \eqref{quantumGR} than \eqref{classicalGR}, but still incorrect. The classical electron radius was calculated by ignoring the magnetic field of the electron. Taking the magnetic field into consideration and using the electromagnetic radius instead of the classical radius would yield a gyromagnetic ratio which is much too large. So, the assumption that the mass of the electron is entirely in the electromagnetic field leads to trouble. Fortunately, it's not true. In section \ref{restrictionsection} we'll see that the electron is large enough that the mass of the electromagnetic field surrounding the electron is only a small fraction of the electron's total mass.
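The comparison in the paragraph above can be checked numerically (an added sketch, assuming standard CGS constants): plugging $R = \frac{e^2}{2mc^2}$ into $\frac{3Rc}{2e}$ gives $\frac{3e}{4mc}$, which is $1.5$ times the simple classical estimate but only $0.75$ times the actual ratio.

```python
# Illustrative check: the field gyromagnetic ratio 3 R c/(2 e), evaluated at
# R = e^2/(2 m c^2), equals 3e/(4 m c) -- between the classical estimate
# e/(2 m c) and the actual ratio e/(m c).
m = 9.1093837e-28        # g
c = 2.99792458e10        # cm/s
e = 4.80320425e-10       # esu

R = e**2 / (2 * m * c**2)
g_field = 3 * R * c / (2 * e)
print(g_field / (e / (2 * m * c)))   # 1.5
print(g_field / (e / (m * c)))       # 0.75
```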
\section{The Dirac Field}\label{diracfieldsection}
In the previous section we examined the flow of mass in the electromagnetic field surrounding the electron. In this section we ignore the electromagnetic field and focus exclusively on the flow of mass and charge of the electron itself (assuming, contra the previous section, that little of the electron's mass is in the electromagnetic field). We can understand this flow of mass and charge by using the Dirac field to represent the state of the electron. In this section I make heavy use of the excellent account of spin given by \citet{ohanian}.
As was discussed in the introduction, the free Dirac equation,
\begin{equation}
i\hbar \frac{\partial \psi}{\partial t}=\left(\frac{\hbar c}{i}\gamma^0 \vec{\gamma}\cdot\vec{\nabla}+m\gamma^0 c^2\right)\psi
\ ,
\label{thediracequation}
\end{equation}
can either be viewed as part of a relativistic single particle quantum theory in which $\psi$ is a wave function (the quantum interpretation), or, as part of a relativistic field theory in which $\psi$ is a classical field (the classical interpretation).\footnote{As the Dirac field is sometimes interpreted as a wave function and sometimes as a classical field, one might naturally wonder if it is possible to interpret the electromagnetic field as a wave function instead of a classical field (see \citealp{good1957}; \citealp{mignani1974}; \citealp{bialynicki1996}; \citealp{emasqp}).} Here I adopt the second perspective and take $\psi$ to be a four-component complex-valued\footnote{Alternatively, the classical Dirac field is sometimes treated as Grassmann-valued (e.g., in textbook presentations of path integral methods for quantum field theory). I discuss the relation between complex-valued classical Dirac field theory and Grassmann-valued classical Dirac field theory in \citet{positronpaper}.} classical field. The classical Dirac field can be quantized, along with the classical electromagnetic field, to arrive at the quantum field theory of quantum electrodynamics. In the context of quantum electrodynamics, the electron is described by a superposition of different states for the classical Dirac field (a wave functional). In this paper, we will examine the classical field states that compose this superposition and see that our three obstacles can be overcome for each such classical state. We will not need to go as far as quantizing the Dirac field. 
At the level of physics under consideration here, there are just two interacting classical fields---the Dirac field and the electromagnetic field.\footnote{\citet[pg.\ 216--217]{weyl} explicitly considers and rejects the idea that the Dirac field should be treated as a classical field along the lines proposed here, comparing the idea to Schr\"{o}dinger's original pre-Born-rule interpretation of his eponymous equation where the amplitude-squared of the wave function is interpreted as a charge density. It is true that before quantization the classical Dirac field does not provide an adequate theory of the electron (though such a theory works better than you might expect; see \citealp{crisp1969,jaynes1973,barut1988, barut1990}). What matters for our purposes here is not the adequacy of classical Dirac field theory itself, but just the fact that it is this classical field theory which gets quantized to arrive at our best theory of the electron: quantum electrodynamics. (It is worth noting that \citealp{weyl} later treats the Dirac field like a classical field when quantizing it; see \citealp[pg.\ 451]{pashby2012}.)}
Much like the electromagnetic field, the Dirac field carries energy and momentum. The energy and momentum densities are given by:\footnote{These two densities are components of the symmetrized stress-energy tensor for the Dirac field (\citealp[section 20]{wentzel}; \citealp[appendix 7]{heitler}; \citealp[pg.\ 218--221]{weyl}).}$^,$\footnote{Here $\gamma^0$, $\vec{\gamma}$, and $\vec{\sigma}$ are four-dimensional matrices, related to the two-dimensional Pauli spin matrices $\vec{\sigma}_p$ by
\begin{equation}
\gamma^0=\left(\begin{matrix} I & 0 \\ 0 & -I \end{matrix}\right)
\quad\quad
\vec{\gamma}=\left(\begin{matrix}0 & \vec{\sigma}_p \\ -\vec{\sigma}_p & 0 \end{matrix}\right)
\quad\quad
\vec{\sigma}=\left(\begin{matrix}\vec{\sigma}_p & 0 \\ 0 & \vec{\sigma}_p \end{matrix}\right)\ .
\label{matrixdefs}
\end{equation}}
\begin{align}
\rho_d^{\mathcal{E}}&=\frac{i \hbar}{2}\left(\psi^\dagger\frac{\partial \psi}{\partial t}-\frac{\partial \psi^\dagger}{\partial t}\psi\right)
\nonumber
\\
&=m c^2 \psi^\dagger\gamma^0\psi + \frac{\hbar c}{2i}\left[\psi^\dagger\gamma^0\vec{\gamma}\cdot\vec{\nabla}\psi-(\vec{\nabla} \psi^\dagger)\cdot\gamma^0\vec{\gamma}\psi\right]
\label{diracenergydensity}
\\
\vec{G}_d&=\frac{\hbar}{2 i}\left[\psi^\dagger \vec{\nabla} \psi - (\vec{\nabla} \psi^\dagger)\psi \right]+\frac{\hbar}{4}\vec{\nabla}\times(\psi^\dagger \vec{\sigma} \psi)\ .
\label{diracmomentumdensity}
\end{align}
The $d$ subscript indicates that these are properties of the Dirac field. The second term in the momentum density gives the contribution from spin (\citealp[pg.\ 181--182]{wentzel}, \citealp[pg.\ 168]{pauli}, \citealp[pg.\ 503]{ohanian}). Because the spin contribution is a curl, it will not contribute to the total linear momentum of the electron. When the momentum density in \eqref{diracmomentumdensity} is used to calculate the angular momentum of an electron, the first term yields the orbital angular momentum and the second yields the spin angular momentum. The density of spin angular momentum derived from the second term in \eqref{diracmomentumdensity} is
\begin{equation}
\frac{\hbar}{2}\psi^\dagger \vec{\sigma} \psi\ .
\label{diracspinangularmomentumdensity}
\end{equation}
As we are here concerned with understanding spin, we will focus on states where the electron is at rest and the first term in \eqref{diracmomentumdensity} is everywhere zero.
Although I have not seen it done before, we can introduce a relativistic mass density and a velocity that describes the flow of mass in just the same way as was done for the electromagnetic field in the previous section,
\begin{align}
\rho_d&=\frac{\rho_d^{\mathcal{E}}}{c^2}=m \psi^\dagger\gamma^0\psi + \frac{\hbar}{2i c}\left[\psi^\dagger\gamma^0\vec{\gamma}\cdot\vec{\nabla}\psi-(\vec{\nabla} \psi^\dagger)\cdot\gamma^0\vec{\gamma}\psi\right]
\label{diracmassdensity}
\\
\vec{v}_d&=\frac{\vec{G}_d}{\rho_d}\ .
\label{diracvelocity}
\end{align}
In contrast with the electromagnetic field, the Dirac field's energy density can be negative and thus its mass density can be negative as well.
In addition to the mass density and its flow, we can examine the charge density of the Dirac field and the flow of charge. The charge density and charge current density are
\begin{align}
\rho^q_d&=-e \psi^\dagger \psi
\label{diracchargedensity}
\\
\vec{J}_d&=-e c \psi^\dagger \gamma^{0} \vec{\gamma} \psi
\label{diraccurrentdensity}\ .
\end{align}
If we were considering interaction with the electromagnetic field, these densities would act as source terms for Maxwell's equations. From the charge and current densities, we can define the velocity of charge flow as
\begin{equation}
\vec{v}_d^{\:q}=\frac{\vec{J}_d}{\rho^q_d}=\frac{c \psi^\dagger \gamma^{0} \vec{\gamma} \psi}{\psi^\dagger \psi}\ .
\label{diracchargevelocity}
\end{equation}
From this definition, it follows that the charge velocity cannot exceed the speed of light (\citealp[section 2b]{takabayasi1957}; \citealp[section 10.4]{bohmhiley}; \citealp[section 12.2]{holland}). Because of this light-speed limit, our second obstacle is automatically averted. Superluminal charge flow is impossible.
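The light-speed bound on the charge velocity can also be checked by brute force (an added illustrative sketch, not from the original text): for random Dirac spinors, $|\psi^\dagger\gamma^0\vec{\gamma}\psi|$ never exceeds $\psi^\dagger\psi$, using the standard Dirac-basis matrices from \eqref{matrixdefs}.

```python
# Illustrative check: for random spinors psi, the charge-flow speed
# |psi^dag gamma^0 gamma_vec psi| / (psi^dag psi) never exceeds 1 (units of c).
import numpy as np

rng = np.random.default_rng(1)

# Pauli matrices and the Dirac matrices gamma^0 gamma^i (Dirac basis)
sp = [np.array([[0, 1], [1, 0]], dtype=complex),
      np.array([[0, -1j], [1j, 0]]),
      np.array([[1, 0], [0, -1]], dtype=complex)]
g0g = [np.block([[np.zeros((2, 2)), s], [s, np.zeros((2, 2))]]) for s in sp]

def charge_speed(psi):
    current = np.array([np.real(psi.conj() @ a @ psi) for a in g0g])
    return np.linalg.norm(current) / np.real(psi.conj() @ psi)

speeds = [charge_speed(rng.normal(size=4) + 1j * rng.normal(size=4))
          for _ in range(10000)]
print(max(speeds) <= 1.0)   # True
```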
The reason the gyromagnetic ratio of the electron differs from the simple classical estimate \eqref{classicalGR} by a factor of two can be explained straightforwardly in the context of Dirac field theory using the mass and charge velocities introduced above.\footnote{It is generally agreed that there exists some explanation of the gyromagnetic ratio in the context of the Dirac equation. The task here is to better understand what sort of explanation is available (compare \citealp[pg.\ 504]{ohanian} and \citealp[section 1.4]{bjorkendrell}).} In the simple estimate of the gyromagnetic ratio, we assumed that the mass and charge were rotating together at the same rate. Actually, as we are about to see, the charge of the electron rotates twice as quickly as the mass.\footnote{How could charge move at a different velocity than mass? Imagine you're describing a charged fluid flowing through pipes using certain mass and charge densities. On closer inspection, the fluid turns out to be made of two kinds of particles---heavy neutral particles and light positively charged particles. Sometimes the charged particles flow faster than the neutral ones and the velocity of charge flow is greater than the velocity of mass flow. Sometimes the heavy particles flow faster than the light ones and the velocity of mass flow is greater than the velocity of charge flow.} So, the magnetic moment is twice as large as you'd expect given the angular momentum.
This factor of two between the mass and charge velocity is a general feature of wave functions that describe an electron at rest. But, to see how it arises it will be helpful to start with a particular illustrative example wave function. Here is a simple instantaneous state of the Dirac field which we can use as a first approximation towards representing a single electron which is (at this moment) at rest with $z$-spin up:\footnote{This state is discussed in \citet[equation 12]{huang1952}; \citet[equation 3.32]{bjorkendrell}; \citet[equation 14]{ohanian}.}
\begin{equation}
\psi=\left(\frac{1}{\pi d^2}\right)^{3/4}e^{-|\vec{x}|^2/2d^2}\left(\begin{matrix} 1\\0\\0\\0 \end{matrix}\right)\ .
\label{ohanianstate}
\end{equation}
The mass and charge are both localized in a Gaussian wave packet of width $d$. The reason for calling this a single electron state is that the integral of the charge density over all of space is $-e$.\footnote{See \citet[pg.\ 10]{takabayasi1957}.}
\begin{figure}[htb]
\center{\includegraphics[width=9.5 cm]{flows3.jpg}}
\caption{These plots depict the flow of mass and charge for the state of an electron at rest given in \eqref{ohanianstate}. The first two plots give the momentum density \eqref{momex} and mass velocity \eqref{massvel}. The second two plots give the magnetization current density \eqref{magcurrent} and the corresponding contribution to the charge velocity \eqref{chargevel} (for the corrected state \eqref{approxstate}, these plots give the total charge current density and charge velocity). The two velocity plots use the same scale to highlight that the charge velocity is twice the mass velocity.}
\label{flows}
\end{figure}
The momentum density for this state is
\begin{equation}
\vec{G}_d=-\frac{\hbar}{2}\left(\frac{1}{\pi d^2}\right)^{3/2}e^{-|\vec{x}|^2/d^2}\ \frac{\vec{x}\times\hat{z}}{d^2}\ ,
\label{momex}
\end{equation}
calculated via \eqref{diracmomentumdensity} where only the second term is non-zero. From this expression, it is clear that mass and energy are flowing around the $z$-axis (see figure \ref{flows}). The mass velocity for this state can be calculated by dividing this momentum density by the mass density, as in \eqref{diracvelocity},
\begin{equation}
\vec{v}_d = - \frac{\hbar}{2 m}\frac{\vec{x}\times\hat{z}}{d^2}= \frac{\hbar r}{2 m d^2}\hat{\theta}\ .
\label{massvel}
\end{equation}
The second expression gives the velocity in cylindrical coordinates. This equation shows that the mass flows everywhere about the $z$-axis at constant angular velocity. The electron's mass appears\footnote{I use the qualification ``appears'' because, as will be explained shortly, \eqref{ohanianstate} is not an entirely satisfactory approximation to the state of an electron.} to rotate like a solid object.
To calculate the velocity at which charge flows, it is useful to first expand the current density using the free Dirac equation as follows\footnote{This expansion appears in \citet{gordon1928}; \citet[pg.\ 321--322]{frenkel}; \citet[pg.\ 479]{huang1952}; \citet[pg.\ 504]{ohanian}.}
\begin{equation}
-e c \psi^\dagger \gamma^{0} \vec{\gamma} \psi = \underbrace{\strut \frac{i e\hbar}{2 m}\left\{\psi^\dagger \gamma^0 \vec{\nabla} \psi - (\vec{\nabla} \psi^\dagger) \gamma^0\psi\right\}}_{\text{\large{\ding{172}}}}\underbrace{\strut - \frac{e\hbar}{2 m} \vec{\nabla}\times(\psi^\dagger \gamma^0 \vec{\sigma}\psi)}_{\text{\large{\ding{173}}}}\underbrace{\strut +\frac{i e\hbar}{2 m c}\frac{\partial}{\partial t}(\psi^\dagger \vec{\gamma} \psi)}_{\text{\large{\ding{174}}}}\ .
\label{currentexpansion}
\end{equation}
The three terms in the expansion are the convection current density, the magnetization current density, and the polarization current density. As was the case for the momentum density \eqref{diracmomentumdensity}, the first term is zero for an electron at rest. The second two terms give the contribution to the charge current from spin. For the moment, let us focus on the magnetization current density. The magnetization current density in \eqref{currentexpansion} corresponds to a magnetic moment density of
\begin{equation}
- \frac{e\hbar}{2 m c} \psi^\dagger \gamma^0 \vec{\sigma}\psi\ ,
\label{magneticmomentdensity}
\end{equation}
where the prefactor is the Bohr magneton.\footnote{See \citet[section 5.6]{jackson}; \citet[pg.\ 504]{ohanian}.} The ratio of the magnitude of this magnetic moment density to the magnitude of the spin angular momentum density in \eqref{diracspinangularmomentumdensity} for the state in \eqref{ohanianstate} is $\frac{e}{m c}$, the correct gyromagnetic ratio for the electron \eqref{quantumGR}. The magnetization current density,
\begin{equation}
\frac{e \hbar}{m}\left(\frac{1}{\pi d^2}\right)^{3/2}e^{-|\vec{x}|^2/d^2}\ \frac{\vec{x}\times\hat{z}}{d^2}\ ,
\label{magcurrent}
\end{equation}
makes a contribution to the velocity of charge flow, calculated via \eqref{diracchargevelocity}, of
\begin{equation}
-\frac{\hbar}{m}\frac{\vec{x}\times\hat{z}}{d^2}=\frac{\hbar r}{m d^2} \hat{\theta}\ .
\label{chargevel}
\end{equation}
The contribution to the velocity of charge flow which determines the electron's magnetic moment \eqref{chargevel} is twice the velocity of mass flow which determines the electron's angular momentum \eqref{massvel}.
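This factor of two can be checked numerically. The following sketch (not part of the original derivation; it uses units where $\hbar=m=e=d=1$, a choice made purely for illustration) divides the momentum density \eqref{momex} by the non-relativistic mass density, and the magnetization current density \eqref{magcurrent} by the charge density, at sample points:

```python
import numpy as np

# Units chosen so hbar = m = e = d = 1; this is only a numerical
# sanity check of the factor of two, not a derivation.
hbar = m = e = d = 1.0

def gaussian(x):
    """Non-relativistic density profile (1/(pi d^2))^(3/2) exp(-|x|^2/d^2)."""
    return (1.0 / (np.pi * d**2))**1.5 * np.exp(-np.dot(x, x) / d**2)

def momentum_density(x):
    """Eq. (momex): G = -(hbar/2) * profile * (x cross z_hat) / d^2."""
    return -(hbar / 2) * gaussian(x) * np.cross(x, [0.0, 0.0, 1.0]) / d**2

def magnetization_current(x):
    """Eq. (magcurrent): J = (e hbar / m) * profile * (x cross z_hat) / d^2."""
    return (e * hbar / m) * gaussian(x) * np.cross(x, [0.0, 0.0, 1.0]) / d**2

rng = np.random.default_rng(0)
for x in rng.normal(size=(20, 3)):
    v_mass = momentum_density(x) / (m * gaussian(x))          # eq. (massvel)
    v_charge = magnetization_current(x) / (-e * gaussian(x))  # eq. (chargevel)
    # At every sample point the charge velocity is twice the mass velocity,
    # and |v_mass| agrees with hbar r / (2 m d^2).
    assert np.allclose(v_charge, 2 * v_mass)
    r = np.hypot(x[0], x[1])
    assert np.isclose(np.linalg.norm(v_mass), hbar * r / (2 * m * d**2))
```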
The factor of two between these velocities is not a peculiar feature of the chosen state, but will hold for any electron state in the non-relativistic limit. In general, the contribution to the momentum density $\vec{G}_d$ from spin is $\frac{\hbar}{4}\vec{\nabla}\times(\psi^\dagger \vec{\sigma} \psi)$---the second term in \eqref{diracmomentumdensity}. In the non-relativistic limit, the relativistic mass density is approximately $m \psi^\dagger\gamma^0\psi$---the first term in \eqref{diracmassdensity}. Dividing these, as in \eqref{diracvelocity}, gives a contribution to the velocity of mass flow from spin of
\begin{equation}
\frac{\hbar}{4 m} \frac{\vec{\nabla}\times(\psi^\dagger \vec{\sigma} \psi)}{\psi^\dagger\gamma^0\psi}\ .
\label{massvelgen}
\end{equation}
The velocity associated with the electron's spin magnetic moment can be derived from the magnetization current density---the second term in \eqref{currentexpansion}. Dividing the magnetization current density by the charge density \eqref{diracchargedensity}, as in \eqref{diracchargevelocity}, yields a contribution to the charge velocity of
\begin{equation}
\frac{\hbar}{2 m} \frac{\vec{\nabla}\times(\psi^\dagger \gamma^0\vec{\sigma} \psi)}{\psi^\dagger\psi}\ .
\label{chargevelgen}
\end{equation}
It is clear that \eqref{chargevelgen} is twice \eqref{massvelgen} (up to factors of $\gamma^0$ which we will return to later).
Addressing our obstacles in the context of the Dirac equation has enabled significant progress, but there remain three serious shortcomings to the account given thus far (which will be resolved in the following section). First, we have only been able to say (somewhat awkwardly) that a certain contribution to the velocity of charge flow is twice the velocity of mass flow and not that the actual velocity of charge flow is twice the velocity of mass flow. In fact, it is easy to see that the velocity of charge flow is zero for the state in \eqref{ohanianstate} as the charge current density calculated from \eqref{diraccurrentdensity} is clearly zero. The first term in the current expansion \eqref{currentexpansion} is also zero. Thus, the third term in \eqref{currentexpansion} (the polarization current density) must exactly cancel the second (the magnetization current density). Because of this cancellation, no magnetic field is being produced by an electron in this state. If we want to account for the magnetic field around an electron at rest, we need the electron's charge to actually rotate.
Second, the velocities in \eqref{massvel} and \eqref{chargevel} are unbounded, becoming superluminal as $r$ becomes very large. The fact that \eqref{chargevel} becomes infinite is not so troubling because, as was just discussed, it is cancelled by the contribution to the charge velocity from the polarization current. Also, as was mentioned earlier, it can be shown in general that the charge velocity cannot exceed $c$. The fact that \eqref{massvel} becomes superluminal is a real problem.
Third, there are problems that arise if the electron is too small, and as yet we have no reason to think it is large enough to avoid them. If the electron is too small, we face our first two obstacles concerning superluminal rotation. Also, if the electron is too small, we will not be able to ignore the mass in the electromagnetic field when calculating the gyromagnetic ratio (as was done in this section but not the last). Looking at \eqref{ohanianstate}, it appears that the size of the electron is an entirely contingent matter depending on the state of the Dirac field (by decreasing $d$, the electron can be made arbitrarily small).
\section{Restriction}\label{restrictionsection}
The three problems raised at the end of the previous section can be resolved by restricting the allowed states of the Dirac field to those formed by superposing positive frequency modes. Such a restriction can be motivated by the fact that, in quantum field theory, the positive frequency modes of the Dirac field are associated with electrons and the negative frequency modes with positrons.
In this section we will continue to focus our attention on the free Dirac equation, putting aside issues of self-interaction---the electron is treated as blind to the electromagnetic field it generates---but still confronting issues of self-energy---the energies of the Dirac and electromagnetic fields are both taken into account.
The free Dirac equation admits of plane wave solutions with definite\footnote{On a classical interpretation of the Dirac field, by saying the momentum is ``definite'' I mean that the momentum density \eqref{diracmomentumdensity} is uniform.} momentum $\vec{p}$ and time dependence given by either $e^{-i \mathcal{E}_{\vec{p}} t /\hbar}$ (positive frequency) or $e^{i \mathcal{E}_{\vec{p}} t/\hbar}$ (negative frequency), where $\mathcal{E}_{\vec{p}}$ is the energy associated with that momentum, $\mathcal{E}_{\vec{p}}=\sqrt{|\vec{p}|^2 c^2 + m^2 c^4}$ \citep[chapter 3]{bjorkendrell}. From \eqref{diracenergydensity}, it is clear that the positive frequency plane waves have uniform positive energy density and the negative frequency plane waves have uniform negative energy density.
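As a quick check of these claims, here is a small numerical sketch (in units where $\hbar=c=m=1$, with an arbitrarily chosen momentum; the Dirac-basis matrices are the standard ones) verifying that a positive-frequency plane-wave spinor of the form appearing in \eqref{truestate} is an eigenvector of the free Dirac Hamiltonian with eigenvalue $\mathcal{E}_{\vec{p}}$:

```python
import numpy as np

# Check (units with hbar = c = m = 1, chosen for convenience) that the
# positive-frequency plane-wave spinor is an eigenvector of the free
# Dirac Hamiltonian H = c alpha.p + beta m c^2 with eigenvalue E_p.
c = m = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))

def alpha(s):
    """Dirac-basis alpha matrix built from a Pauli matrix s."""
    return np.block([[Z2, s], [s, Z2]])

beta = np.block([[I2, Z2], [Z2, -I2]])

p = np.array([0.3, -0.4, 0.5])                   # an arbitrary momentum
E = np.sqrt(np.dot(p, p) * c**2 + m**2 * c**4)   # E_p from the dispersion relation

# Spinor components as in eq. (truestate):
# (1, 0, p_z c/(E + m c^2), (p_x + i p_y) c/(E + m c^2))
u = np.array([1, 0,
              p[2] * c / (E + m * c**2),
              (p[0] + 1j * p[1]) * c / (E + m * c**2)])

H = c * (p[0] * alpha(sx) + p[1] * alpha(sy) + p[2] * alpha(sz)) + beta * m * c**2
assert np.allclose(H @ u, E * u)  # H u = E_p u: a positive-energy solution
```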
In textbook presentations of the quantization of the classical Dirac field (such as \citealp[section 3.5]{peskinschroeder}), one starts by expanding the Dirac field in terms of positive and negative frequency modes. Quantizing this theory in a straightforward way, one pairs creation operators with each of these modes and sees that the operators paired with the positive frequency modes create particles with negative charge and positive energy whereas the operators paired with the negative frequency modes create particles with negative energy and negative charge. Seeking to avoid negative energies, one then redefines the operators for charge and energy so that the operators associated with negative frequency modes can be reinterpreted as annihilation operators for particles with positive energy and positive charge---positrons. Bringing the lesson that negative frequency modes will ultimately be associated with positrons back to our classical Dirac field theory,\footnote{In \citet{positronpaper}, I describe in more detail the procedure of field quantization and the lessons one ought to draw from it for classical Dirac field theory.} we ought to revise our understanding of the electron within classical Dirac field theory. In representing the electron, we will forbid any state of the Dirac field which has a Fourier decomposition that includes negative frequency modes and permit only states that are formed entirely from positive frequency modes (reserving the negative frequency modes for the representation of positrons). We will thus focus on a restricted version of classical Dirac field theory where the negative frequency modes are removed. (As we are not considering interactions, if negative frequency modes are absent at one time they will be absent always.) Before continuing on to analyze the electron within this restricted classical Dirac field theory, let us take a brief detour to examine the way negative frequency modes were originally handled by Dirac.
On a quantum interpretation of the Dirac equation where the Dirac field is viewed as a wave function, one would say that the negative frequency plane wave solutions are energy eigenstates with negative eigenvalues. The existence of such negative energy states proved both a blessing and a curse for early applications of the Dirac equation. To retain the blessing while dispelling the curse, Dirac proposed his hole theory according to which the negative energy states are filled \citep{dirac1930theory}.\footnote{See \citet{saunders1991, pashby2012}.} By Pauli exclusion, any additional electrons must sit atop this ``Dirac sea.'' The filled sea is taken to set the zero level for energy and charge. If any electron is excited out of the sea, the hole it leaves behind acts like a particle with equal mass and opposite charge to the electron---a positron. In this ``hole theory,'' the positive frequency modes (which are by default empty) are used to describe ordinary positive energy electrons and the negative frequency modes (which are by default filled) are used to describe positrons.\footnote{Some authors use the idea of Dirac sea in presenting the quantization of the Dirac field and others emphatically renounce it (compare \citealp[section 8a]{schweberQFT}; \citealp[section 13.4]{bjorkendrellfields}; \citealp{hatfield} to \citealp[chapter 2]{duncan}; \citealp[pg.\ 142]{schwartz}).}
As was first examined by \citet{weisskopf1934a, weisskopf1934b, weisskopf1939}, the electromagnetic energy divergence---which arises because the amount of energy in an electron's electromagnetic field goes rapidly to infinity as its radius is decreased---is tamed in the context of hole theory (\citealp[section 2.5.3]{schweber1994}). Weisskopf's handling of this divergence has been incorporated into the modern understanding of mass renormalization within quantum electrodynamics.\footnote{It is cited in relation to a modern understanding based on Feynman diagrams by \citet[pg.\ 513]{schweberQFT}; \citet[pg.\ 165]{bjorkendrell}; \citet[section II.D.2]{weisskopf1986}.} The crucial insight from Weisskopf's analysis for our task at hand is expressed well by \citet[pg.\ 299]{heitler}. He writes that the taming of the electromagnetic self-energy divergence for an electron at rest ``is a consequence of the hole theory and the Pauli [exclusion] principle'':
\begin{quote}
``Consider an electron represented by a very small wave packet in coordinate space. In momentum space this would be represented by a distribution including negative energy states. The latter, however, are filled with vacuum electrons. Consequently, the negative energy contributions to the wave function must be eliminated and the electron cannot be a wave packet of infinitely small size but must have a finite extension (of the order $\frac{\hbar}{mc}$ [the Compton radius], as one easily finds). Consequently the static self-energy will be diminished also.''\footnote{The fact that wave packets composed of positive energy modes have a minimum size is also discussed in \citet{newton1949}; \citet[pg.\ 39]{bjorkendrell}; \citet{chuu2007}.}
\end{quote}
Although Heitler's explanation is given from within the quantum interpretation of the Dirac field as wave function, it contains lessons that carry over to our classical interpretation. There is a limit on the minimum size wave packet that one can construct from the positive frequency modes of the Dirac field that are available in restricted Dirac field theory. The mass and charge of the electron thus cannot be confined to an arbitrarily small volume. Because the charge of the electron is spread over such a large packet, the electromagnetic contribution to the energy (and mass) of an electron at rest is small and can be ignored when calculating the gyromagnetic ratio to a first approximation (as was done in the previous section). In section \ref{obstacles} we saw that in order to avoid the superluminal rotation speeds forced upon us in our first two obstacles, the electron must be at least as large as the Compton radius. Restricting to positive frequency modes delivers that minimum size.
Let us return to the instantaneous electron state in \eqref{ohanianstate}. In restricted Dirac field theory, this state is forbidden as it includes both positive and negative frequency modes. To find a similar state that is allowed, we can simply delete the negative frequency modes from the Fourier decomposition of \eqref{ohanianstate}.\footnote{This Fourier decomposition is given in \citealp[section 3.3]{bjorkendrell}.} This yields\footnote{As the negative frequency modes have simply been deleted, this new state is not normalized. In our classical terms, this means that the integral of the charge density over all space will not be $-e$. In the non-relativistic limit, the total charge will be close to $-e$.}
\begin{equation}
\psi=\frac{1}{2}\left(\frac{d^2}{\pi \hbar^2}\right)^{3/4}\left(\frac{1}{2 \pi \hbar}\right)^{3/2}\int d^3 p \left(1+\frac{m c^2}{\mathcal{E}_{\vec{p}}}\right)e^{-\frac{|\vec{p}|^2 d^2}{2 \hbar^2}+\frac{i}{\hbar}\vec{p}\cdot\vec{x}}\left(\begin{matrix}
1\\
0\\
\frac{p_z c}{\mathcal{E}_{\vec{p}} + m c^2}\vspace*{4 pt}\\
\frac{(p_x+ i p_y) c}{\mathcal{E}_{\vec{p}} + m c^2}
\end{matrix}\right)\ .
\label{truestate}
\end{equation}
We can approximate this state in the non-relativistic limit by computing these integrals assuming that $d \gg \frac{\hbar}{m c}$ so the momentum space Gaussian in the integrand suppresses modes where $|\vec{p}|^2$ is not $\ll m^2 c^2$,\footnote{The same approximation can be arrived at by trusting only the first two components of \eqref{ohanianstate} and using the positive frequency non-relativistic limit of the Dirac equation in \citet[equation 1.31]{bjorkendrell} to calculate the other two.}
\begin{equation}
\psi=\left(\frac{1}{\pi d^2}\right)^{3/4}e^{-|\vec{x}|^2/2d^2}\left(\begin{matrix}
1\\
0\\
\frac{\hbar}{2 m c d^2}i z\vspace*{4 pt}\\
\frac{\hbar}{2 m c d^2}(i x-y)
\end{matrix}\right)\ .
\label{approxstate}
\end{equation}
The total current density for this state \eqref{approxstate}, calculated via \eqref{diraccurrentdensity}, is equal to the previous magnetization current density \eqref{magcurrent}. Dividing this by the charge density for \eqref{approxstate} gives a charge velocity of
\begin{equation}
\vec{v}_d^{\:q}=\frac{\frac{-\hbar}{m d^2}\vec{x}\times\hat{z}}{1+\frac{\hbar^2}{m^2 c^2 d^4}|\vec{x}|^2}\ .
\label{chargevel2}
\end{equation}
This limits to \eqref{chargevel} for $d \gg \frac{\hbar}{m c}$. Unlike \eqref{chargevel}, this is the actual charge velocity and not merely a contribution to it. The charge is really moving. The velocity in \eqref{chargevel2} is bounded and will not exceed the speed of light---as must be the case since the definition of the charge velocity \eqref{diracchargevelocity} ensures that it cannot be superluminal. For $d \gg \frac{\hbar}{m c}$, the mass velocity derived from \eqref{approxstate} using \eqref{diracvelocity} will be as it was before \eqref{massvel}. Thus, the charge rotates twice as fast as the mass.
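The boundedness of \eqref{chargevel2} can be confirmed numerically; for this particular state the maximum charge speed is in fact $c/2$, attained at $r = m c d^2/\hbar$. A sketch (with $\hbar=m=1$ and arbitrary illustrative values of $c$ and $d$):

```python
import numpy as np

# Check that the charge speed from eq. (chargevel2),
#   |v| = (hbar r / (m d^2)) / (1 + (hbar r / (m c d^2))^2),
# stays below c. Units with hbar = m = 1; c and d are arbitrary choices.
hbar = m = 1.0
c, d = 1.0, 5.0

def charge_speed(r):
    a = hbar / (m * d**2)
    return a * r / (1 + (a * r / c)**2)

r = np.linspace(0.0, 1000.0, 200_001)
speeds = charge_speed(r)
# The maximum speed is c/2 (reached at r = m c d^2 / hbar), safely below c.
assert speeds.max() <= c / 2 + 1e-12
# In the regime hbar r / (m c d^2) << 1 this reduces to eq. (chargevel).
r_small = 0.01 * d
assert np.isclose(charge_speed(r_small), hbar * r_small / (m * d**2), rtol=1e-3)
```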
This factor of two between mass and charge velocity is a general feature of states that describe electrons at rest in restricted Dirac field theory. What prevented us from reaching this conclusion in the previous section was that we had no reason to suppose the magnetization current density would be the dominant contribution to the total current density \eqref{currentexpansion}. The polarization current density could be significant as well. By restricting ourselves to superpositions of positive frequency modes, we have guaranteed that the polarization current density is small. To see why this is so, consider an arbitrary state of the Dirac field at $t=0$, $\psi(0)$. This state can be written as the sum of a superposition of positive frequency modes, $\psi_+$, and a superposition of negative frequency modes, $\psi_-$. In the non-relativistic limit, the time dependence of this state is given by
\begin{equation}
\psi(t)=e^{-(i m c^2 / \hbar) t}\psi_++e^{(i m c^2 / \hbar) t}\psi_-\ .
\end{equation}
The polarization current density for this state is
\begin{equation}
\frac{i e\hbar}{2 m c}\frac{\partial}{\partial t}\big(\psi_+^\dagger \vec{\gamma} \psi_++e^{2(i m c^2 / \hbar) t}\psi_+^\dagger \vec{\gamma} \psi_-+e^{-2(i m c^2 / \hbar) t}\psi_-^\dagger \vec{\gamma} \psi_++\psi_-^\dagger \vec{\gamma} \psi_-\big)\ .
\end{equation}
If we forbid negative frequency modes, the cross terms are absent and the time derivative yields zero. Thus, in the non-relativistic limit the polarization current density is negligible.
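The cancellation mechanism can be exhibited with a toy symbolic computation using scalar stand-ins for the bilinear amplitudes (the symbols below are illustrative placeholders, not the actual spinor components):

```python
import sympy as sp

# Scalar stand-ins a, b for the (position-dependent) bilinear amplitudes.
t, w = sp.symbols('t omega', real=True, positive=True)
a, b = sp.symbols('a b', complex=True)

phase = sp.exp(sp.I * w * t)
# Bilinear built from one frequency only: the phases cancel, so the time
# derivative (hence the polarization current) vanishes.
pos_only = sp.conjugate(a * phase) * (a * phase)
assert sp.simplify(sp.diff(pos_only, t)) == 0

# With both frequencies present, cross terms oscillate at 2*omega and the
# time derivative does not vanish.
mixed = sp.conjugate(a * phase + b / phase) * (a * phase + b / phase)
assert sp.simplify(sp.diff(mixed, t)) != 0
```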
In the previous section we were able to derive the factor of two between charge velocity and mass velocity only up to factors of $\gamma^0$. The reason these factors can be ignored is that $\gamma^0$ simply flips the sign of the third and fourth components of $\psi$ and these components---for a state composed of positive frequency modes in the non-relativistic limit---are much smaller than the first and second components.
At this point let us reflect on the role that the non-relativistic limit has played in the preceding analysis. This limit is not part of the general response to our first two obstacles. The fact that there is a minimum size for wave packets formed from positive frequency modes is not dependent on this limit, nor is the light-speed cap on charge velocity. The non-relativistic limit is, however, essential in explaining the electron's gyromagnetic ratio. The reason for this is that the gyromagnetic ratio we seek to account for only holds in the non-relativistic limit.\footnote{Standard explanations of the factor of two in the gyromagnetic ratio of the electron using the Dirac equation appeal to the non-relativistic limit (\citealp[section 1.4]{bjorkendrell}).} Beyond this limit, the relationship between angular momentum and magnetic moment is more complex. In quantum mechanical terms, the relationship is given by the claim that the spin magnetic moment operator, $- \frac{e\hbar}{2 m c} \gamma^0 \vec{\sigma}$, is $-\frac{e\gamma^0}{m c}$ times the spin angular momentum operator, $\frac{\hbar}{2}\vec{\sigma}$ (\citealp[pg.\ 504]{ohanian}; \citealp[pg.\ 323]{frenkel}). Expressed in terms of local expectation values, the local ratio of spin magnetic moment to spin angular momentum is the ratio of $- \frac{e\hbar}{2 m c} \psi^\dagger \gamma^0 \vec{\sigma}\psi$ to $\frac{\hbar}{2}\psi^\dagger \vec{\sigma} \psi$. In our classical field terminology, this is understood as the ratio of the spin magnetic moment density \eqref{magneticmomentdensity} to the spin angular momentum density \eqref{diracspinangularmomentumdensity}.
\section{Other Accounts of Spin}
In the introduction, I distinguished between field and particle approaches to quantum field theory. In the field approach, one starts from classical Dirac field theory and then quantizes. In the particle approach, one starts from a single electron relativistic quantum theory and then extends to multiple particles. Although I am not aware of other authors who explicitly argue that the electron really rotates within the field approach, there have been a number of attempts to somehow understand the electron's angular momentum and magnetic moment as resulting from the motion of the electron's mass and charge within the particle approach---where one sees $\psi$ as the electron's quantum wave function.\footnote{\citet{ohanian} calls $\psi$ a ``wave field,'' which is confusingly ambiguous between quantum wave function and classical field. Because of the way he uses quantum language, I have classified him as adopting a particle approach where $\psi$ is seen as a quantum wave function.} In this section, I will briefly compare the analysis presented here to a few of these accounts of spin. To organize the discussion, I will sort the accounts into two classes. First, there are those who---despite sometimes using quantum language---treat $\psi$ as broadly similar to a classical field, putting aside the probabilistic nature of this wave function and understanding the mass and charge of the electron to be spread out and executing some rotational motion. Second, there are those who emphasize the probabilistic nature of $\psi$ and think of the electron's mass and charge as located at a point. On such an account, the angular momentum and magnetic moment of the electron are not explained by a spinning motion (as the electron is point size) but instead by a rapid circulation of the electron within its wave function.
Let us begin with the first class. The most similar account of spin to the one proposed here is that of \citet{ohanian}. He describes the flow of energy and charge in the electron's wave function just as I describe the flow of energy and charge in the Dirac field in section \ref{diracfieldsection}---though he introduces neither a velocity of mass flow \eqref{diracvelocity} nor a velocity of charge flow \eqref{diracchargevelocity}. Ohanian explains that the angular momentum and magnetic moment of the electron are not ``internal'' and ``irreducible,'' but instead result from these flows of energy and charge. Ohanian does not directly address the three obstacles raised in section \ref{obstacles} and does not make the moves in section \ref{restrictionsection} that are necessary to surmount them. \citet{chuu2007} also give an account of spin where the electron's mass and charge are understood to be spread over the wave function and rotating. However, their description of this flow is somewhat different from that of section \ref{diracfieldsection} as they use the same current for the flows of both mass and charge. Chuu \textit{et al.\ }note that a wave packet formed from positive energy modes has a minimum size large enough to avoid the first obstacle (as was important in section \ref{restrictionsection}). This minimum size can also be used to address the second obstacle, though they do not mention that explicitly. They do not give a response to the third obstacle.
To understand the accounts that fall within the second class, let us start by examining the flow of probability. One can introduce densities of probability and probability current for the electron's wave function that are proportional to the densities of charge and charge current from section \ref{diracfieldsection}, \eqref{diracchargedensity} and \eqref{diraccurrentdensity}. The probability density is $\psi^\dagger \psi$ and the probability current density is $c\psi^\dagger\gamma^{0} \vec{\gamma}\psi$. This probability current density is the local expectation value of the ``velocity operator'' $c\gamma^{0} \vec{\gamma}$ (or, equivalently, $c\vec{\alpha}$).\footnote{This velocity operator is presented in \citet[sections 31 and 32]{frenkel}; \citet[section 69]{dirac}; \citet[pg. 920--922]{messiah1962}; \citet[pg.\ 11]{bjorkendrell}.} I did not introduce these densities earlier because I was treating $\psi$ as a classical field and thus all of this quantum talk about the flow of probability would have been inappropriate. The classical Dirac field has a mass density and a charge density, but no probability density. Now that we are treating $\psi$ as a quantum wave function, it is important to introduce a probability density and probability current density.
Applying a Bohmian interpretation of quantum mechanics to this single particle relativistic quantum theory, one can posit the existence of a point electron particle that is separate from the Dirac wave function and guided by it. Dividing the probability current density by the probability density yields a velocity for this particle, $\frac{c\psi^\dagger\gamma^{0} \vec{\gamma}\psi}{\psi^\dagger\psi}$, equal to the velocity of charge flow \eqref{diracchargevelocity} introduced earlier and thus also capped at $c$ (\citealp{bohm1953comments}; \citealp[section 10.4]{bohmhiley}; \citealp[equation 12.2.10]{holland}). Taking the electron to be a point particle moving with this velocity leads to the possibility of understanding the electron's angular momentum and magnetic moment as generated by the electron's motion within its wave function. \citet[pg.\ 218]{bohmhiley} argue that: ``in the Dirac theory, the magnetic moment usually attributed to the `spin' can actually be attributed to a circulating movement of a point particle, and not that of an extended spinning object.''
\citet{huang1952} also thinks of the electron as a point particle whose circulating motion gives rise to the observed angular momentum and magnetic moment, but he has different ideas about how it moves. He writes:
\begin{quote}
``... in the Dirac theory [velocity] is represented by the operator [$c\gamma^{0} \vec{\gamma}$], whose components can only have eigenvalues of $\pm c$, where $c$ is the velocity of light. This means that while the average velocity of the electron is less than $c$, its instantaneous velocity is always $\pm c$. We infer from this that the motion of the electron consists of a highly oscillatory component, superimposing on the average motion. Schr\"{o}dinger called this oscillatory motion `zitterbewegung,' and showed that the amplitude of this oscillation is of the order of the Compton wavelength of the electron.''\footnote{This conjectured zitterbewegung (``trembling motion'') of the electron is discussed in \citet[section 69]{dirac}; \citet[pg.\ 38]{bjorkendrell}. \citet{barutzanghi} present a way of using zitterbewegung to understand electron spin that is different from the proposals of Huang and Hestenes.}
\end{quote}
The existence of a Bohmian theory in which the electron's velocity can be less than the speed of light shows that we are not forced to regard the electron as always traveling at the speed of light. But, we can see where that thought leads. Unlike \citet{bohmhiley}, Huang does not give general equations for calculating the electron's motion. In the article, he considers the example state in \eqref{ohanianstate} and shows that it can be written as a superposition of states in which the expectation value of position executes circular motion (though the expectation value of position for the actual state does not move). He takes this observation and others to suggest that in this state the electron circles the $z$-axis.
Making Huang's picture precise, Hestenes has proposed equations of motion for a point electron according to which the electron always moves at the speed of light (see \citealp{gull1993}; \citealp{hestenes2008, hestenes2010} and references therein). The angular momentum and magnetic moment arise from a circulating motion of the electron (depicted in figure 1 of \citealp{hestenes2008}). According to \citet[pg.\ 2]{hestenes2010}, it may be possible to view the equations he proposes as ``formulating fundamental properties of the electron that are manifested in the Dirac equation in some kind of average form.'' Hestenes recognizes that his equations represent a departure from standard physics and has considered ways of empirically observing the predicted deviations (\citealp{hestenes2008}; \citealp[sections 9.1 and 11]{hestenes2010}). He also acknowledges that there is work to be done in reconciling his novel approach with quantum field theory as we know it \citep[pg.\ 53]{hestenes2010}. This kind of modificatory program is quite different from the account of spin provided here, where I've sought to show that our existing equations, properly interpreted, describe a spinning electron.
\section{Conclusion}
The consensus about electron spin, which emerged long ago, is that the electron somehow acts like a spinning object without actually spinning. As \citet[pg.\ 514]{rojansky} puts it in his textbook on quantum mechanics, after discussing angular momentum and magnetic moment in the context of the Dirac equation,
\begin{quote}
``In short, \emph{Dirac's equation automatically endows the electron with the properties that account for the phenomena previously ascribed to a hypothetical spinning motion of the electron.}'' (original italics)
\end{quote}
In this paper I have argued for a different interpretation. The Dirac equation does not somehow manage to account for these properties without positing a spinning electron. Instead, it explains just how the electron spins.
The obstacles to regarding the electron as spinning presented in the introduction were addressed as follows: Old estimates of the size of the electron made under the assumption that the electron's mass is primarily electromagnetic suggest that the electron would have to rotate superluminally in order to have the right angular momentum and magnetic moment. Actually, if the electron's mass is primarily electromagnetic we should focus on the rotation of the electromagnetic field's mass in calculating the electron's angular momentum and this mass cannot move superluminally. Also, the electron's mass is not primarily electromagnetic. When we move to better estimates of the electron's size---using the Dirac field to represent the state of the electron---we see that its minimum size is large enough that there is no need for superluminal rotation. Further, the definition of charge velocity for the Dirac field guarantees that the electron's charge will not move superluminally. The other obstacle was the fact that the electron's gyromagnetic ratio differs from the simplest classical estimate by a factor of two. On the account given here, this factor does not arise from some novel quantum revision to the basic physical principles defining angular momentum and magnetic moment, but is instead attributed to a false assumption in the simple classical estimate---the electron's mass and charge do not rotate at the same rate.\\\\
\textbf{Acknowledgments}
Thank you to Adam Becker, Dirk-Andr\'{e} Deckert, Maaneli Derakhshani, John McGreevy, Lukas Nickel, Hans Ohanian, Laura Ruetsche, Roderich Tumulka, David Wallace, and anonymous referees for helpful feedback and discussion. This project was supported in part by funding from the President's Research Fellowships in the Humanities, University of California (for research conducted while at the University of California, San Diego).
\section{Introduction}\label{sect1}
The variational inequalities have been the subject of many
recent articles in probability, ergodic theory and harmonic
analysis. For the linear version, the first variational inequality was proved by L\'epingle \cite{Lep76} for martingales (see \cite{PiXu88} for a simple proof). Bourgain \cite{Bou89} used L\'epingle's result to obtain corresponding variational estimates for the Birkhoff ergodic averages and then to deduce pointwise convergence results directly, without prior knowledge that pointwise convergence holds for a dense subclass of functions, which is quite difficult to establish in some ergodic models. A few years later, Jones and his collaborators systematically studied variational inequalities for ergodic averages in \cite{JKRW98}, \cite{JRW03}, \cite{CJRW2000} and \cite{CJRW2002}; see also \cite{HoMa,MTX,HL17}. Recently, several results on variational inequalities for discrete averaging operators of Radon type have also been established (cf.\ e.g.\ \cite{Kra14}, \cite{MiTr14}, \cite{MST15}, \cite{MTZ14}, \cite{Zor14}).
\par
In this paper we are concerned with variational inequalities for some classes of bilinear averages, and with their application to ergodic theory. In fact, the problem of almost everywhere convergence of multilinear ergodic averages plays an important role in ergodic theory. For instance, Demeter {\it et al.}\ \cite{DTH08}
considered the following multilinear averages and the related ergodic averages:
\begin{align}\label{RGaverage}
T_{A,\mathbb R,r}(f_1,\cdots,f_{n-1})(x)=\frac1{(2r)^m}\int_{|t_1|,\cdots,|t_{m}|\le r}\prod_{i=1}^{n-1}f_i\big(x+\sum_{j=1}^ma_{i,j}t_j\big)d\vec{t},
\end{align}
and
\begin{align}\label{VGaverage}
T_{A,X,L}(f_1,\cdots,f_{n-1})(x)=\frac1{(2L+1)^m}\sum_{|l_1|,\cdots,|l_{m}|\le L}\prod_{i=1}^{n-1}f_i\big(S^{\sum_{j=1}^ma_{i,j}l_j}x\big),
\end{align}
where $n>1$, $m\ge1$, $A=(a_{i,j})$ is an $(n-1)\times m$ integer-valued matrix and $(X,\Sigma,m,S)$ is a dynamical system. Averages of this kind are related to the Furstenberg recurrence theorem \cite{F77} and to Szemer\'{e}di's theorem \cite{S75} on arithmetic progressions, and are also connected to the result in \cite{GT08} that the primes contain arbitrarily long arithmetic progressions. To obtain the convergence, the authors established the almost everywhere convergence of $T_{A,X,L}$ for $f_1,\cdots,f_{n-1}\in L^\infty(X)$, proved that $\sup\limits_{L>0}|T_{A,X,L}|$ maps $L^{p_1}(X)\times\cdots \times L^{p_{n-1}}(X)$ to $L^p(X)$, and extended the convergence result to the case $f_i\in L^{p_i}(X)$. The boundedness of $\sup\limits_{L>0}|T_{A,X,L}|$ is a consequence of the analogous boundedness of $\sup\limits_{r>0}|T_{A,\mathbb R,r}|$, by transference arguments. But the problem of almost everywhere convergence of $T_{A,X,L}$ for $f_1,\cdots,f_{n-1}\in L^\infty(X)$ is quite difficult except in some special cases. An alternative method is to prove variational inequalities for $T_{A,X,L}$ in $L$, which yield almost everywhere convergence directly, without first considering the case $f_1,\cdots,f_{n-1}\in L^\infty(X)$. In 2008, Demeter {\it et al.}\ \cite{DLTH08} established an oscillation result (a weak variational inequality), which was used to prove convergence for the signed average analog of Bourgain's return times theorem and to provide a separate proof of Bourgain's theorem.
\par
More precisely, we primarily consider the almost everywhere convergence of the following bilinear averages:
\begin{align*}
Q_t(f,g)(x)=\frac1{t^2}\int_{|y|\le\frac t2}\int_{|z|\le\frac t2}f(x-y)g(x-z)dydz,
\end{align*}
where $t>0$ and $f,g$ are arbitrary measurable functions on $\mathbb R$. Note that the averages $Q_t(f,g)$ are special cases of the multilinear averages defined in \eqref{RGaverage} with $n=3$, $m=2$ and $A=I_{2\times2}$. We denote the family $\{Q_t(f,g)\}_{t>0}$ by $\mathcal Q(f,g)$. Before going into more detail we need some definitions.
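Since the kernel of $Q_t$ is a product of two one-dimensional box kernels, $Q_t(f,g)(x)$ factors as $M_tf(x)\cdot M_tg(x)$, where $M_th(x)=\frac1t\int_{|y|\le t/2}h(x-y)\,dy$; this factorization underlies several of the arguments below. A quick numerical sanity check (the midpoint discretization and the test functions are ours, for illustration only):

```python
# Check numerically that the bilinear average over a square factors as a
# product of two one-dimensional averages:
#   Q_t(f,g)(x) = M_t f(x) * M_t g(x),  M_t h(x) = (1/t) ∫_{|y|<=t/2} h(x-y) dy.

def M(h, x, t, n=400):
    # midpoint rule for the one-dimensional average M_t h(x)
    return sum(h(x - (-t / 2 + (i + 0.5) * t / n)) for i in range(n)) / n

def Q_direct(f, g, x, t, n=400):
    # Q_t(f,g)(x) computed directly as a double Riemann sum
    total = 0.0
    for i in range(n):
        y = -t / 2 + (i + 0.5) * t / n
        for j in range(n):
            z = -t / 2 + (j + 0.5) * t / n
            total += f(x - y) * g(x - z)
    return total / n ** 2

f = lambda u: u * u
g = lambda u: 1.0 + u
# the double sum of products is exactly the product of the two single sums
assert abs(Q_direct(f, g, 0.3, 2.0) - M(f, 0.3, 2.0) * M(g, 0.3, 2.0)) < 1e-9
```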
\par
For a sequence $\{a_n\}$ and $\rho\ge1$, define the variational norm $V_\rho$ by
$$
\|\{a_n\}\|_{V_\rho}=\sup_{\{n_i\}}\big(\sum_{i}|a_{n_i}-a_{n_{i+1}}|^\rho\big)^{1/\rho},
$$
where the supremum is taken over all increasing sequences of indices $n_1<n_2<\cdots$.
Given an interval $I\subseteq (0,\infty)$ and a family of complex numbers $\mathfrak a=\{a_t\}_{t\in I}$, the variational norm of the family $\mathfrak a$ is defined as
\begin{equation*}\label{q-ver number family}
\|\mathfrak a\|_{V_\rho(I)}=\sup\big(\sum_{i\geq1}
|a_{t_i}-a_{t_{i+1}}|^\rho\big)^{\frac{1}{\rho}},
\end{equation*}
where the supremum runs over all increasing sequences $\{t_i\in I:i\geq1\}$. It is trivial that
\begin{equation}\label{number contr ineq}
\|\mathfrak a\|_{L^\infty(I)}:=\sup_{t\in I}|a_t|
\le|a_{t_0}|+\|\mathfrak a\|_{V_\rho(I)}\quad\text{for\ any}\ t_0\in I\ \text{and}\ \rho\ge1.
\end{equation}
If $I=(0,\infty)$, we denote the variational norm $V_\rho(I)$ by $V_\rho$ for short.
\par
Given a family of Lebesgue measurable functions $\mathcal F=\{F_t(x)\}_{t>0}$ defined on $\mathbb{R}$, for fixed $x$ in $\mathbb{R}$ the value of the strong $\rho$-variation operator $V_\rho(\mathcal F)$ of the family $\mathcal F$ at $x$ is defined by
\begin{equation}\label{defini of Vq(F)}
V_\rho(\mathcal F)(x)=\|\{F_t(x)\}_{t>0}\|_{V_\rho},\quad \rho\ge1.
\end{equation}
It is easy to observe from the definition of the $\rho$-variation norm that, for fixed $x$, if $V_\rho(\mathcal F)(x)<\infty$ then $\{F_t(x)\}_{t>0}$ converges as $t\rightarrow0$ and as $t\rightarrow\infty$. In particular, if $V_\rho(\mathcal F)$ belongs to some function space such as $L^p$ or $L^{p,\infty}$, then the family $\{F_t(x)\}_{t>0}$ converges almost everywhere without any additional condition. This is why the mapping properties of the strong $\rho$-variation operator are of such interest in ergodic theory and harmonic analysis.
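For a finite sequence the supremum in the definition of $V_\rho$ can be computed by brute force over all increasing index subsets; the following sketch (ours, purely illustrative) also checks the trivial bound \eqref{number contr ineq}:

```python
from itertools import combinations

def variation_norm(a, rho):
    # brute-force V_rho norm of a finite sequence: the supremum over all
    # increasing index subsets of (sum_i |a_{n_i} - a_{n_{i+1}}|^rho)^(1/rho)
    best = 0.0
    for k in range(2, len(a) + 1):
        for idx in combinations(range(len(a)), k):
            s = sum(abs(a[idx[i]] - a[idx[i + 1]]) ** rho for i in range(k - 1))
            best = max(best, s ** (1.0 / rho))
    return best

a = [0.0, 1.0, 0.0, 1.0, 0.0]
# for rho = 1 the supremum picks up every oscillation (total variation = 4),
# while a larger rho damps the contribution of many small jumps
assert abs(variation_norm(a, 1) - 4.0) < 1e-12
assert abs(variation_norm(a, 2) - 2.0) < 1e-12
# the trivial bound sup_t |a_t| <= |a_{t_0}| + V_rho
assert max(abs(v) for v in a) <= abs(a[0]) + variation_norm(a, 2)
```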
\par
The following theorem is a variational inequality for bilinear averages over cubes.
\begin{theorem}\label{var of the Q C}
For $\rho>2$, $1<p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$, we have
\begin{align*}
\|V_\rho(\mathcal{Q}(f,g))\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align*}
\end{theorem}
In addition to the averages $\{Q_t(f,g)\}_{t>0}$, we introduce the averages $\{\mathrm{Q}_L(\phi,\psi)\}_{L\in\mathbb N}$, defined for compactly supported $\phi,\psi:\mathbb Z\rightarrow \mathbb R$ by
\begin{align*}
\mathrm{Q}_L(\phi,\psi)(i)=\frac1{(2L+1)^2}\sum_{|l|,|k|\le L}\phi(i-l)\psi(i-k).
\end{align*}
The family of discrete averages $\{\mathrm{Q}_L(\phi,\psi)\}_{L\in\mathbb N}$ is denoted by $\mathbf{Q}(\phi,\psi)$. Moreover, we obtain the discrete version of Theorem \ref{var of the Q C} as follows.
\begin{corollary}\label{var of the Q D}
For $\rho>2$, $1<p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$, we have
\begin{align*}
\|V_\rho(\mathbf{Q}(\phi,\psi))\|_{l^p(\mathbb Z)}\le C\|\phi\|_{l^{p_1}(\mathbb Z)}\|\psi\|_{l^{p_2}(\mathbb Z)}.
\end{align*}
\end{corollary}
Let $(X,\Sigma,m,S)$ denote a dynamical system with $(X,\Sigma, m)$ a complete probability space and $S$ an invertible bimeasurable transformation such that $mS^{-1}=m$.
The closely related ergodic averages are given by
$$
\mathfrak{Q}_L(f,g)(x)=\frac1{(2L+1)^2}\sum_{|l_1|,|l_{2}|\le L}f\big(S^{l_1}x\big)g\big(S^{l_2}x\big).
$$
The sequence $\{\mathfrak{Q}_L(f,g)\}_{L}$ is denoted by $\mathscr Q(f,g)$.
Appealing to Corollary \ref{var of the Q D} and standard transfer methods as in \cite{DTH08,D07}, we get
\begin{corollary}\label{var of the ergodic Q C}
For $\rho>2$, $1<p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$, we have
\begin{align*}
\|V_\rho(\mathscr{Q}(f,g))\|_{L^p(X)}\le C\|f\|_{L^{p_1}(X)}\|g\|_{L^{p_2}(X)}.
\end{align*}
Moreover, for every dynamical system $(X,\Sigma,m,S)$, the averages over squares
$$
\frac1{(2N+1)^2}\sum_{i=-N}^{N}\sum_{j=-N}^{N}f\big(S^ix\big)g\big(S^jx\big)
$$
converge a.e. for $f\in L^{p_1}(X)$ and $g\in L^{p_2}(X)$.
\end{corollary}
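As a toy instance of Corollary \ref{var of the ergodic Q C} (our illustration, with a finite cyclic system rather than a general dynamical system), take $X=\mathbb Z/m\mathbb Z$ with normalized counting measure and the shift $Sx=x+1$. Whenever $2L+1$ is a multiple of $m$, each window $\{x+l:|l|\le L\}$ covers $X$ evenly, and the bilinear average equals $\int f\,dm\cdot\int g\,dm$ exactly, the limit predicted by the almost everywhere convergence:

```python
# Toy dynamical system: X = Z/m with the shift S(x) = x+1 mod m.
# When 2L+1 is a multiple of m, the bilinear ergodic average over the square
# {|l1|,|l2| <= L} equals mean(f) * mean(g) exactly at every point.

def ergodic_Q(f, g, x, L, m):
    s = 0.0
    for l1 in range(-L, L + 1):
        for l2 in range(-L, L + 1):
            s += f[(x + l1) % m] * g[(x + l2) % m]
    return s / (2 * L + 1) ** 2

m = 5
f = [1.0, -2.0, 0.5, 3.0, 0.0]
g = [2.0, 2.0, -1.0, 0.0, 4.0]
mean = lambda v: sum(v) / len(v)
L = 7          # 2L+1 = 15 = 3*m
for x in range(m):
    assert abs(ergodic_Q(f, g, x, L, m) - mean(f) * mean(g)) < 1e-12
```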
\par
For $j,m\in\mathbb Z$, a dyadic interval in $\mathbb R$ is an interval of the form $[m2^j,(m+1)2^j)$. The set of all dyadic intervals with side length $2^j$ is denoted by $\mathcal D_j$. The conditional expectation of a locally integrable function $f$ with respect to the increasing family of $\sigma$-algebras $\sigma(\mathcal D_j)$ generated by $\mathcal D_j$ is given by
$$
\mathbb E_jf(x)=\sum_{I\in \mathcal D_j}\frac1{|I|}\int_{I}f(y)dy\cdot\chi_{I}(x)
$$
for all $j\in\mathbb Z$. In view of the Lebesgue differentiation theorem, we have
$$
\lim_{j\rightarrow \infty}\mathbb E_jf= f,\ a.e.
$$
for $f\in L^2(\mathbb R)$. The sequence $\{\mathbb E_jf\}_j$ can be viewed as a family of averages constructed from $f$ by a certain averaging process. Moreover, there is a close connection between the martingale sequence $\{\mathbb E_jf\}_j$ and averages over cubes \cite{JKRW98,JRW03,JSW08}. We therefore consider the bilinear conditional expectation of two locally integrable functions $f$ and $g$, which is given by
$$
\mathbb E_j(f,g)(x)=\sum_{I,J\in \mathcal D_j}\frac1{|I\times J|}\int_{I\times J}f(y)g(z)dydz\cdot\chi_{I\times J}(x,x).
$$
For the bilinear conditional expectation, we obtain the following variational inequality.
\begin{theorem}\label{var of Mar}
For $\rho>2$, $1<p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$, we have
\begin{align*}
\|V_\rho(\{\mathbb E_j(f,g)\}_j)\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align*}
\end{theorem}
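Since $\chi_{I\times J}(x,x)\neq0$ forces $I=J$ to be the dyadic interval containing $x$, the bilinear conditional expectation factors as $\mathbb E_j(f,g)=\mathbb E_jf\cdot\mathbb E_jg$, a fact used in the proof of Theorem \ref{var of Mar}. A sanity check in a discrete model (the grid discretization is ours, for illustration):

```python
# Discrete model: f, g sampled on a grid of n cells; dyadic intervals at a
# fixed scale become blocks of b consecutive cells. The literal double sum
# over pairs (I, J) collapses, because chi_{I x J}(x, x) != 0 forces I and J
# to both be the block containing x, giving E_j(f,g) = E_j f * E_j g.

def cond_exp(vals, b):
    # E_j: replace each block of b consecutive cells by its mean
    return [sum(vals[s:s + b]) / b for s in range(0, len(vals), b) for _ in range(b)]

def bilinear_cond_exp(f, g, b):
    n = len(f)
    blocks = [(s, s + b) for s in range(0, n, b)]
    out = [0.0] * n
    for (i0, i1) in blocks:
        for (j0, j1) in blocks:
            mf = sum(f[i0:i1]) / b
            mg = sum(g[j0:j1]) / b
            for x in range(n):
                if i0 <= x < i1 and j0 <= x < j1:   # chi_{I x J}(x, x)
                    out[x] += mf * mg
    return out

f = [1.0, 3.0, -2.0, 0.0, 4.0, 4.0, 1.0, -1.0]
g = [0.5, 0.5, 2.0, 1.0, -1.0, 3.0, 0.0, 2.0]
for b in (1, 2, 4, 8):
    lhs = bilinear_cond_exp(f, g, b)
    rhs = [u * v for u, v in zip(cond_exp(f, b), cond_exp(g, b))]
    assert all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))
```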
Another family of bilinear averages is generated by a suitable ``approximation of the identity'' as follows. Fix $\phi\in \mathscr{S}(\mathbb R^2)$ with $\int_{\mathbb R^2}\phi(x)dx =1$. For $t>0$, set $\phi_t(x,y)=t^{-2}\phi(x/t,y/t)$. The bilinear convolution operators are given by
\begin{equation*}
\phi_t(f,g)(x)=\int_{\mathbb R^2}\phi_t(x-y,x-z)f(y)g(z)dydz.
\end{equation*}
We denote $\{\phi_t(f,g)\}_{t>0}$ by $\Phi(f,g)$. In this setting we obtain the variational estimate as follows.
\begin{theorem}\label{var of the 1.1}
For $\rho>2$, $1<p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$, we have
\begin{align*}
\|V_\rho(\Phi(f,g))\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align*}
\end{theorem}
In the next section we give the proof of the variational inequality for averages over cubes, which is a consequence of a vector-valued bilinear interpolation and an endpoint estimate for a certain vector-valued operator. The discrete analogue is proved at the end of that section. The variational inequality for conditional expectations is treated in the same way in Section 3. In the final section we prove the variational estimate for approximations of the identity in a similar way. However, the $L^{p_1}\times L^{p_2}\rightarrow L^p$ bounds for all $1<p,p_1,p_2<\infty$ with $\frac1{p_1}+\frac1{p_2}=\frac1p$ and the endpoint estimate cannot be established directly, since those kernels are not multiplicatively separable. We apply bilinear vector-valued Calder\'{o}n-Zygmund theory to deal with these problems.
\section{Variational inequality for averages over cubes}
In order to prove Theorem \ref{var of the Q C}, we present a $\mathcal B$-valued bilinear interpolation result, where $\mathcal B$ is a Banach space; see \cite{JH12} and \cite{RS69}.
\begin{lemma}\label{B interpolation}
Suppose $T$ is a bilinear $\mathcal B$-valued operator. If $T$ is bounded from $L^{p_1}(\mathbb R)\times L^{p_2}(\mathbb R)$ into $L^{p,\infty}(\mathcal B)$ for all $1<p,p_1,p_2<\infty$ with $\frac1{p_1}+\frac1{p_2}=\frac1p$ and from $L^1(\mathbb R)\times L^1(\mathbb R)$ into $L^{1/2,\infty}(\mathcal B)$, then $T$ is bounded from $L^{p_1}(\mathbb R)\times L^{p_2}(\mathbb R)$ into $L^{p}(\mathcal B)$ for all $1<p_1,p_2<\infty$ with $\frac1{p_1}+\frac1{p_2}=\frac1p$.
\end{lemma}
We take the Banach space $\mathcal B=\{a(t):\|a\|_{\mathcal B}=\|a\|_{V_\rho}<\infty\}$.
Then, $V_\rho(\mathcal{Q}(f,g))(x)=\|\{Q_t(f,g)(x)\}_{t>0}\|_{\mathcal B}$.
Lemma \ref{B interpolation} implies that Theorem \ref{var of the Q C} is a consequence of the following two propositions.
\begin{proposition}\label{var of the Q P}
For $\rho>2$, $1<p,p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$, we have
\begin{align*}
\|V_\rho(\mathcal{Q}(f,g))\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align*}
\end{proposition}
\begin{proof} Observe that $Q_t(f,g)(x)=M_t(f)(x)\cdot M_t(g)(x)$, where $M_t(h)(x)=\frac1t\int_{|y|\le\frac t2}h(x-y)dy$ and $\mathcal M(h)=\{M_t(h)\}_{t>0}$. Hence
\begin{align*}
V_\rho(\mathcal{Q}(f,g))(x)&=V_\rho(\mathcal M(f)\cdot \mathcal M(g))(x)\\
&\le M(f)(x)\cdot V_\rho(\mathcal M(g))(x)+M(g)(x)\cdot V_\rho(\mathcal M(f))(x),
\end{align*}
where $M$ denotes the Hardy--Littlewood maximal operator. By H\"{o}lder's inequality and the variational inequalities for averages \cite{JKRW98,CJRW2000}, we get the desired result.
\end{proof}
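The display above is an instance of the elementary product rule for variation norms, $V_\rho(ab)\le \sup_t|b_t|\, V_\rho(a)+\sup_t|a_t|\, V_\rho(b)$, which follows from $|a_ib_i-a_jb_j|\le|a_i-a_j||b_i|+|b_i-b_j||a_j|$ and the triangle inequality in $\ell^\rho$. A brute-force check on a short sequence (helper names and data ours):

```python
from itertools import combinations

# Check: V_rho(a*b) <= sup|b| * V_rho(a) + sup|a| * V_rho(b)
# for finite sequences a, b, by brute-forcing the variation norms.

def vnorm(a, rho):
    best = 0.0
    for k in range(2, len(a) + 1):
        for idx in combinations(range(len(a)), k):
            s = sum(abs(a[idx[i]] - a[idx[i + 1]]) ** rho for i in range(k - 1))
            best = max(best, s ** (1.0 / rho))
    return best

a = [0.9, 0.4, 0.7, 0.1, 0.6, 0.3]
b = [0.2, 0.8, 0.5, 1.0, 0.4, 0.7]
ab = [u * v for u, v in zip(a, b)]
rho = 2.5
lhs = vnorm(ab, rho)
rhs = max(map(abs, b)) * vnorm(a, rho) + max(map(abs, a)) * vnorm(b, rho)
assert lhs <= rhs + 1e-12
```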
\begin{proposition}\label{weak var of bilinear Q}
For $\rho>2$, we have
\begin{align}\label{Uf w estimate}
\lambda|\{x\in\mathbb R:V_\rho(\mathcal{Q}(f_1,f_2))(x)>\lambda\}|^2\le C\|f_1\|_{L^1(\mathbb R)}\|f_2\|_{L^1(\mathbb R)}
\end{align}
uniformly in $\lambda>0$.
\end{proposition}
\begin{proof}
By scaling, we may assume that $\lambda=1$. Suppose that $f_1,f_2$ are step functions given by finite linear combinations of characteristic functions of disjoint dyadic intervals. In proving the above weak endpoint estimate, we may further assume that
$$
\|f_1\|_{L^1}=\|f_2\|_{L^1}=1;
$$
the general case follows immediately by scaling. It suffices to prove
\begin{align*}
|\{x\in\mathbb R:V_\rho(\mathcal Q(f_1,f_2))(x)>1\}|\le C.
\end{align*}
We apply the Calder\'{o}n-Zygmund decomposition to the functions $f_i$ at height $1$ to obtain functions $g_i$, $b_i$ and finite families of dyadic intervals $\{I_{i,k}\}_k$ with disjoint interiors such that
$$
f_i=g_i+b_i\ \ \text{and}\ \ b_i=\sum_kb_{i,k}.
$$
For $i=1,2$, we have
$$
\text{support}(b_{i,k})\subseteq I_{i,k}
$$
$$
\int_{I_{i,k}}b_{i,k}(x)dx=0
$$
$$
\int_{I_{i,k}}|b_{i,k}(x)|dx\le C|I_{i,k}|
$$
$$
|\cup_kI_{i,k}|\le C
$$
$$
\|g_i\|_{L^1}\le \|f_i\|_{L^1}=1
$$
$$
\|g_i\|_{L^\infty}\le 2.
$$
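The decomposition just invoked, together with the properties listed above, can be reproduced in a discrete model (samples on a dyadic grid over $[0,1)$; the implementation and data are ours, for illustration only, not part of the proof):

```python
# Dyadic Calderon-Zygmund decomposition at height 1 in a discrete model:
# vals are samples on a grid of n = 2^m cells over [0,1), cell width 1/n,
# with mean |f| <= 1. Selected blocks are the maximal dyadic blocks on which
# the mean of |f| exceeds 1.

def cz(vals):
    n = len(vals)
    selected = []
    def walk(lo, hi):
        mean = sum(abs(v) for v in vals[lo:hi]) / (hi - lo)
        if mean > 1:
            selected.append((lo, hi))        # parent mean <= 1, so mean <= 2
        elif hi - lo > 1:
            mid = (lo + hi) // 2
            walk(lo, mid)
            walk(mid, hi)
    walk(0, n)
    g = list(vals)
    bs = []
    for (lo, hi) in selected:
        m = sum(vals[lo:hi]) / (hi - lo)
        bs.append([(vals[x] - m) if lo <= x < hi else 0.0 for x in range(n)])
        for x in range(lo, hi):
            g[x] = m
    return g, bs, selected

vals = [0.2, 0.1, 6.0, 0.3, 0.0, 0.4, 0.2, 0.1]   # mean |f| < 1 on [0,1)
n = len(vals)
g, bs, sel = cz(vals)
for b_k, (lo, hi) in zip(bs, sel):
    assert abs(sum(b_k) / n) < 1e-12                           # mean zero
    assert sum(abs(v) for v in b_k) / n <= 4 * (hi - lo) / n   # int|b_k| <= C|I_k|
assert max(abs(v) for v in g) <= 2                             # ||g||_inf <= 2
assert sum(hi - lo for lo, hi in sel) / n <= sum(abs(v) for v in vals) / n
```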
For an interval $I$, let $\tilde{I}$ denote the interval concentric with $I$ of length $3|I|$. For convenience, we write $\Omega_1=\cup_k\tilde{I}_{1,k}$ and $\Omega_2=\cup_i\tilde{I}_{2,i}$. Since
\begin{align*}
|\{x\in\mathbb R:V_\rho(\mathcal Q(f_1,f_2))(x)>1\}|&\le |\{x\in\mathbb R:V_\rho(\mathcal Q(g_1,g_2))(x)>1/4\}|+|\Omega_1|\\
&+|\Omega_2|+|\{x\notin\Omega_1:V_\rho(\mathcal Q(b_1,g_2))(x)>1/4\}|\\
&+|\{x\notin\Omega_2:V_\rho(\mathcal Q(g_1,b_2))(x)>1/4\}|\\
&+|\{x\notin\Omega_1\cup\Omega_2:V_\rho(\mathcal Q(b_1,b_2))(x)>1/4\}|,
\end{align*}
it suffices to estimate each of the above six sets. Let us start with the first one. Applying Proposition \ref{var of the Q P} (whose proof also gives the case $p=1$), we observe
\begin{align*}
|\{x\in\mathbb R:V_\rho(\mathcal Q(g_1,g_2))(x)>1/4\}|&\le C\|V_\rho(\mathcal Q(g_1,g_2))\|_{L^1}\le C\|g_1\|_{L^2}\|g_2\|_{L^2}\le C.
\end{align*}
Obviously, $|\Omega_1|+|\Omega_2|\le C$.
Now we turn to the fourth term. For $x\notin\Omega_1$ and $t\in(0,\infty)$, there are at most two $k$'s for which
\begin{equation*}
\frac1{t}\int_{x-\frac t2}^{x+\frac t2}b_{1,k}(y)dy\neq0.
\end{equation*}
Indeed, this happens only if $I_{1,k}$ contains one of the endpoints of $(x-\frac t2,x+\frac t2)$. Hence,
\begin{align*}
V_\rho(\mathcal Q(b_1,g_2))(x)&= \sup_{\{t_j\}\searrow0}\bigg(\sum_j\big|\sum_k[Q_{t_j}(b_{1,k},g_2)(x)-Q_{t_{j+1}}(b_{1,k},g_2)(x)]\big|^\rho\bigg)^{1/\rho}\\
&\le C\sup_{\{t_j\}\searrow0}\bigg(\sum_j\sum_k|Q_{t_j}(b_{1,k},g_2)(x)-Q_{t_{j+1}}(b_{1,k},g_2)(x)|^\rho\bigg)^{1/\rho}\\
&\le C\bigg(\sum_kV_\rho(\mathcal Q(b_{1,k},g_2))^\rho(x)\bigg)^{1/\rho}.
\end{align*}
For $x\notin\tilde{I}_{1,k}$, we assume $x$ is on the right of $I_{1,k}$, the other case can be treated in the same way. We can choose a monotone decreasing sequence $\{t_j(x)\}_j$ approaching $0$ such that
\begin{align*}
V_\rho(\mathcal Q(b_{1,k},g_2))(x)&\le C\sum_j\big|Q_{t_j(x)}(b_{1,k},g_2)(x)-Q_{t_{j+1}(x)}(b_{1,k},g_2)(x)\big|\\
&\lesssim |Q_{t_{j_0}(x)}(b_{1,k},g_2)(x)|+\sum_{j=j_0}^{j_1-1}\big|Q_{t_j(x)}(b_{1,k},g_2)(x)-Q_{t_{j+1}(x)}(b_{1,k},g_2)(x)\big|\\
&+|Q_{t_{j_1}(x)}(b_{1,k},g_2)(x)|\\
&\lesssim \frac1{t_{j_1}(x)}\|b_{1,k}\|_{L^1}+\sum_{j=j_0}^{j_1-1}\big|M_{t_j(x)}(b_{1,k})(x)-M_{t_{j+1}(x)}(b_{1,k})(x)\big|\\
&+\sum_{j=j_0}^{j_1-1}\big|M_{t_j(x)}(g_2)(x)-M_{t_{j+1}(x)}(g_2)(x)\big||M_{t_{j+1}(x)}(b_{1,k})(x)|,
\end{align*}
where $x-t_{j_0}(x)\in I_{1,k}$ and $x-t_{j_0-1}(x)\notin I_{1,k}$, $x-t_{j_1}(x)\in I_{1,k}$ and $x-t_{j_1+1}(x)\notin I_{1,k}$, and we have used the fact that $\|M(g_2)\|_{L^\infty}\le 2$. Clearly, $t_{j_1}(x)\sim d(x,I_{1,k})$ for $x\notin I_{1,k}$. Then, the second summand is dominated by
\begin{align*}
&\sum_{j=j_0}^{j_1-1}\big|\frac1{t_j(x)}-\frac1{t_{j+1}(x)}\big|\|b_{1,k}\|_{L^1}+\sum_{j=j_0}^{j_1-1}\frac1{t_{j+1}(x)}\big|\int_{x-\frac{t_j(x)}2}^{x+\frac{t_j(x)}2}b_{1,k}(y)dy-\int_{x-\frac{t_{j+1}(x)}2}^{x+\frac{t_{j+1}(x)}2}b_{1,k}(y)dy\big|\\
&\lesssim \frac1{t_{j_1}(x)}\|b_{1,k}\|_{L^1}\le \frac {C\|b_{1,k}\|_{L^1}}{d(x,I_{1,k})}.
\end{align*}
For the third summand, it is controlled by
\begin{align*}
&\sum_{j=j_0}^{j_1-1}\big|\frac1{t_j(x)}-\frac1{t_{j+1}(x)}\big|\frac{t_{j_0}(x)}{t_{j_1}(x)}\|b_{1,k}\|_{L^1}+\sum_{j=j_0}^{j_1-1}\frac1{t_{j+1}(x)}\big|\int_{x-\frac{t_j(x)}2}^{x+\frac{t_j(x)}2}g_2(z)dz-\int_{x-\frac{t_{j+1}(x)}2}^{x+\frac{t_{j+1}(x)}2}g_2(z)dz\big|\frac{\|b_{1,k}\|_{L^1}}{t_{j_1}(x)}\\
&\lesssim \frac{t_{j_0}(x)}{t^2_{j_1}(x)}\|b_{1,k}\|_{L^1}\lesssim \frac{d(x,I_{1,k})+|I_{1,k}|}{d(x,I_{1,k})^2}\|b_{1,k}\|_{L^1},
\end{align*}
where we used the fact $\|g_2\|_{L^\infty}\le2$ and $\|g_2\|_{L^1}\le 1$.
\par
As a result, we get
\begin{align*}
\big|\{x\notin\Omega_1:V_\rho(\mathcal Q(b_1,g_2))(x)>1/4\}\big|&\le C\sum_k\int_{(\tilde{I}_{1,k})^c}V_\rho(\mathcal Q(b_{1,k},g_2))^\rho(x)dx\\
&\le C\sum_k\|b_{1,k}\|_{L^1}^\rho\int_{(\tilde{I}_{1,k})^c}\frac{(d(x,I_{1,k})+|I_{1,k}|)^\rho}{d(x,I_{1,k})^{2\rho}}dx\\
&\le C\sum_k\|b_{1,k}\|_{L^1}^\rho |I_{1,k}|^{1-\rho}\le C\sum_k|I_{1,k}|\le C.
\end{align*}
The fifth term can be treated in a similar way, and we obtain
\begin{align*}
\big|\{x\notin\Omega_2:V_\rho(\mathcal Q(g_1,b_2))(x)>1/4\}\big|\le C.
\end{align*}
For the last one, we write
\begin{align*}
b_1(y)b_2(z)&=\sum_kb_{1,k}(y)\sum_{i:|I_{2,i}|\le|I_{1,k}|}b_{2,i}(z)+\sum_ib_{2,i}(z)\sum_{k:|I_{1,k}|\le|I_{2,i}|}b_{1,k}(y)\\
&:=\sum_kb_{1,k}(y)b_2^{(k)}(z)+\sum_ib_{2,i}(z)b_1^{(i)}(y).
\end{align*}
Then, for $x\notin\Omega_1\cup\Omega_2$, we observe that
\begin{align*}
V_\rho(\mathcal Q(b_1,b_2))(x)\le \bigg(\sum_kV_\rho(\mathcal Q(b_{1,k},b_2^{(k)}))^\rho(x)\bigg)^{1/\rho}+\bigg(\sum_iV_\rho(\mathcal Q(b_1^{(i)},b_{2,i}))^\rho(x)\bigg)^{1/\rho},
\end{align*}
where we use the fact that for $x\notin\Omega_1\cup\Omega_2$ and $t\in(0,\infty)$, there are at most two $k$'s and two $i$'s for which
\begin{equation*}
\frac1{t}\int_{x-\frac t2}^{x+\frac t2}b_{1,k}(y)dy\neq0\ \ \text{and}\ \ \frac1{t}\int_{x-\frac t2}^{x+\frac t2}b_{2,i}(z)dz\neq0.
\end{equation*}
Hence, we see that
\begin{align*}
|\{x\notin\Omega_1\cup \Omega_2:V_\rho(\mathcal Q(b_1,b_2))(x)>1/4\}|&\le |\{x\notin\Omega_1\cup \Omega_2:\big(\sum_kV_\rho(\mathcal Q(b_{1,k},b_2^{(k)}))^\rho\big)^{\frac1\rho}(x)>1/8\}|\\
&+|\{x\notin\Omega_1\cup \Omega_2:\big(\sum_iV_\rho(\mathcal Q(b_1^{(i)},b_{2,i}))^\rho\big)^{\frac1\rho}(x)>1/8\}|.
\end{align*}
It suffices to consider the first term; the other one can be treated in the same way. For $x\notin\Omega_1\cup \Omega_2$ and $t>d(x,I_{1,k})$ such that $M_t(b_{1,k})(x)\neq0$, there are at most two summands $b_{2,i}$ in $b_2^{(k)}$ for which
\begin{equation*}
\int_{x-\frac t2}^{x+\frac t2}b_{2,i}(z)dz\neq0\ \ \text{and}\ \ \big|\int_{x-\frac t2}^{x+\frac t2}b_{2,i}(z)dz\big|\le |I_{2,i}|\le |I_{1,k}|.
\end{equation*}
Notice that the dyadic intervals $\{I_{2,i}\}_i$ have disjoint interiors. Moreover, for such $x$ and $t$, we obtain
\begin{align*}
|M_t(b_2^{(k)})|\le \frac{2|I_{1,k}|}{d(x,I_{1,k})}\le 2\ \ \text{and}\ \ |Q_t(b_{1,k},b_2^{(k)})|&\le \frac1t\|b_{1,k}\|_{L^1}M_t(b_2^{(k)})\le \frac2t\|b_{1,k}\|_{L^1}.
\end{align*}
For $x\notin I_{1,k}\cup \Omega_2$, we assume $x$ is on the right of $I_{1,k}$. We can choose a monotone decreasing sequence $\{t_j(x)\}_j$ approaching $0$ such that
\begin{align*}
V_\rho(\mathcal Q(b_{1,k},b_2^{(k)}))(x)&\le C\sum_j\big|Q_{t_j(x)}(b_{1,k},b_2^{(k)})(x)-Q_{t_{j+1}(x)}(b_{1,k},b_2^{(k)})(x)\big|\\
&\lesssim |Q_{t_{j_0}(x)}(b_{1,k},b_2^{(k)})(x)|+\sum_{j=j_0}^{j_1-1}\big|Q_{t_j(x)}(b_{1,k},b_2^{(k)})(x)-Q_{t_{j+1}(x)}(b_{1,k},b_2^{(k)})(x)\big|\\
&+|Q_{t_{j_1}(x)}(b_{1,k},b_2^{(k)})(x)|\\
&\lesssim\frac1{t_{j_1}(x)}\|b_{1,k}\|_{L^1}+\sum_{j=j_0}^{j_1-1}\big|M_{t_j(x)}(b_{1,k})(x)-M_{t_{j+1}(x)}(b_{1,k})(x)\big||M_{t_j(x)}(b_2^{(k)})(x)|\\
&+\sum_{j=j_0}^{j_1-1}\big|M_{t_j(x)}(b_2^{(k)})(x)-M_{t_{j+1}(x)}(b_2^{(k)})(x)\big||M_{t_{j+1}(x)}(b_{1,k})(x)|,
\end{align*}
where $x-t_{j_0}(x)\in I_{1,k}$ and $x-t_{j_0-1}(x)\notin I_{1,k}$, $x-t_{j_1}(x)\in I_{1,k}$ and $x-t_{j_1+1}(x)\notin I_{1,k}$. The second summand is dominated by
\begin{align*}
&\sum_{j=j_0}^{j_1-1}\big|\frac1{t_j(x)}-\frac1{t_{j+1}(x)}\big|\|b_{1,k}\|_{L^1}+\sum_{j=j_0}^{j_1-1}\frac1{t_{j+1}(x)}\big|\int_{x-\frac{t_j(x)}2}^{x+\frac{t_j(x)}2}b_{1,k}(y)dy-\int_{x-\frac{t_{j+1}(x)}2}^{x+\frac{t_{j+1}(x)}2}b_{1,k}(y)dy\big|\\
&\lesssim \frac1{t_{j_1}(x)}\|b_{1,k}\|_{L^1}\le \frac {C\|b_{1,k}\|_{L^1}}{d(x,I_{1,k})}.
\end{align*}
We estimate the third summand as
\begin{align*}
&\sum_{j=j_0}^{j_1-1}\big|\frac1{t_j(x)}-\frac1{t_{j+1}(x)}\big|\frac{|I_{1,k}|}{t_{j_1}(x)}\|b_{1,k}\|_{L^1}+\sum_{j=j_0}^{j_1-1}\frac{\|b_{1,k}\|_{L^1}}{t^2_{j_1}(x)}\big|\int_{x-\frac{t_j(x)}2}^{x+\frac{t_j(x)}2}b_2^{(k)}(z)dz-\int_{x-\frac{t_{j+1}(x)}2}^{x+\frac{t_{j+1}(x)}2}b_2^{(k)}(z)dz\big|\\
&\lesssim \frac{|I_{1,k}|}{t^2_{j_1}(x)}\|b_{1,k}\|_{L^1}\le \frac {C\|b_{1,k}\|_{L^1}}{d(x,I_{1,k})},
\end{align*}
where we use the fact that $|I_{1,k}|\le t_{j_1}(x)$. Finally, using Chebyshev's inequality,
\begin{align*}
\big|\{x\notin\Omega_1\cup \Omega_2:\big(\sum_kV_\rho(\mathcal Q(b_{1,k},b_2^{(k)}))^\rho\big)^{\frac1\rho}(x)>1/8\}\big|&\le C\sum_k\int_{(\tilde{I}_{1,k})^c}V_\rho(\mathcal Q(b_{1,k},b_2^{(k)}))^\rho(x)dx\\
&\le C\sum_k\|b_{1,k}\|_{L^1}^\rho\int_{(\tilde{I}_{1,k})^c}\frac1{d(x,I_{1,k})^{\rho}}dx\\
&\le C\sum_k\|b_{1,k}\|_{L^1}^\rho |I_{1,k}|^{1-\rho}\le C.
\end{align*}
This completes the proof of Proposition \ref{weak var of bilinear Q}.
\end{proof}
Now let us turn to the proof of Corollary \ref{var of the Q D}.
\begin{proof}
For $\phi,\psi:\mathbb Z\rightarrow\mathbb R$ we consider the functions $f:\mathbb R\rightarrow \mathbb R$ with
\begin{equation*}
f(x)= \begin{cases}
\phi([x]), &\mbox{$[x]+\frac14\le x\le[x]+\frac12$,}\\
0, &\mbox{otherwise,}
\end{cases}
\end{equation*}
and $g:\mathbb R\rightarrow \mathbb R$ with
\begin{equation*}
g(x)= \begin{cases}
\psi([x]), &\mbox{$[x]+\frac14\le x\le[x]+\frac12$,}\\
0, &\mbox{otherwise.}
\end{cases}
\end{equation*}
For $L\in\mathbb N$ and $i\in\mathbb Z$, we observe that
$$
\mathrm{Q}_L(\phi,\psi)(i)=16\,Q_{2L+1}(f,g)(x),\ \ x\in[i,i+\tfrac34],
$$
since for such $x$ the window $(x-\frac{2L+1}2,x+\frac{2L+1}2)$ contains exactly the support cells of $f$ and $g$ attached to the integers $i-L,\ldots,i+L$, each of which contributes a quarter of the corresponding value to the integral. Further, we get that
$$
V_\rho\big(\mathbf{Q}(\phi,\psi)\big)(i)\le16\,V_\rho\big(\mathcal Q(f,g)\big)(x),\ \ x\in[i,i+\tfrac34].
$$
By the variational inequality for averages over cubes in Theorem \ref{var of the Q C} we deduce that
\begin{align*}
\|V_\rho\big(\mathbf{Q}(\phi,\psi)\big)\|_{l^p(\mathbb Z)}&=\bigg(\sum_i\big|V_\rho\big(\mathbf{Q}(\phi,\psi)\big)(i)\big|^p\bigg)^{1/p}\\
&\le 16\big(\tfrac43\big)^{1/p}\bigg(\sum_i\int_{i}^{i+3/4}\big|V_\rho\big(\mathcal Q(f,g)\big)(x)\big|^pdx\bigg)^{1/p}\\
&\le16\big(\tfrac43\big)^{1/p}\big\|V_\rho\big(\mathcal Q(f,g)\big)\big\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}\\
&\le C\|\phi\|_{l^{p_1}(\mathbb Z)}\|\psi\|_{l^{p_2}(\mathbb Z)}.
\end{align*}
\end{proof}
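The transference step above can be checked numerically (our computation; note that under the paper's normalization $Q_t(f,g)(x)=\frac1{t^2}\int\int_{|y|,|z|\le t/2}$, the comparison constant works out to $16$ at scale $t=2L+1$). The sketch below computes the window overlaps with the support cells exactly:

```python
# Transference check: with f(y) = phi([y]) on [[y]+1/4, [y]+1/2] and 0 otherwise,
# Q_L(phi,psi)(i) = 16 * Q_{2L+1}(f,g)(x) for x in [i, i+3/4], computed exactly.

def integral(phi, a, b):
    # exact integral over [a,b]: cell n contributes phi[n] * |[a,b] ∩ [n+1/4, n+1/2]|
    s = 0.0
    for n, v in phi.items():
        lo, hi = max(a, n + 0.25), min(b, n + 0.5)
        if hi > lo:
            s += v * (hi - lo)
    return s

def Q_cont(phi, psi, x, t):
    # Q_t(f,g)(x) = (M_t f)(x) * (M_t g)(x), window half-length t/2
    return (integral(phi, x - t / 2, x + t / 2) / t) * (integral(psi, x - t / 2, x + t / 2) / t)

def Q_disc(phi, psi, i, L):
    s1 = sum(phi.get(i - l, 0.0) for l in range(-L, L + 1))
    s2 = sum(psi.get(i - k, 0.0) for k in range(-L, L + 1))
    return s1 * s2 / (2 * L + 1) ** 2

phi = {0: 2.0, 1: -1.0, 3: 5.0}   # hypothetical compactly supported data
psi = {-1: 1.5, 2: 4.0}
i, L = 1, 2
for x in (i, i + 0.3, i + 0.75):
    assert abs(Q_disc(phi, psi, i, L) - 16 * Q_cont(phi, psi, x, 2 * L + 1)) < 1e-12
```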
\section{Variational inequality for conditional expectations}
In the same way, we apply Lemma \ref{B interpolation} and take the Banach space $\mathcal B=\{a(j):\|a\|_{\mathcal B}=\|a\|_{V_\rho}<\infty\}$.
Then, $V_\rho(\{\mathbb E_j(f,g)\}_j)=\|\{\mathbb E_j(f,g)\}_j\|_{\mathcal B}$.
Lemma \ref{B interpolation} implies that Theorem \ref{var of Mar} is a consequence of the following two propositions.
\begin{proposition}\label{var of bilinear con exp}
For $\rho>2$, $1<p,p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$, we have
\begin{align*}
\|V_\rho(\{\mathbb E_j(f,g)\}_j)\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align*}
\end{proposition}
\begin{proof}
Obviously, we have
$$
\mathbb E_j(f,g)(x)=\sum_{I,J\in \mathcal D_j}\frac1{|I|}\int_{I}f(y)dy\,\chi_{I}(x)\frac1{|J|}\int_{J}g(z)dz\,\chi_{J}(x)=\mathbb E_j(f)(x)\mathbb E_j(g)(x).
$$
Then, we get
\begin{align*}
|\mathbb E_{j_{n+1}}(f,g)-\mathbb E_{j_n}(f,g)|&=|\mathbb E_{j_{n+1}}(f)\mathbb E_{j_{n+1}}(g)-\mathbb E_{j_n}(f)\mathbb E_{j_n}(g)|\\
&\le |\mathbb E_{j_{n+1}}(f)-\mathbb E_{j_n}(f)|\cdot|\mathbb E_{j_{n+1}}(g)|+|\mathbb E_{j_{n+1}}(g)-\mathbb E_{j_n}(g)|\cdot|\mathbb E_{j_n}(f)|.
\end{align*}
By applying H\"{o}lder's inequality and L\'{e}pingle's inequality \cite{Lep76}, we obtain
\begin{align*}
\|V_\rho(\{\mathbb E_j(f,g)\}_j)\|_{L^p(\mathbb R)}&\le \|M(g)\cdot V_\rho(\{\mathbb E_j(f)\}_j)\|_{L^p}+\|M(f)\cdot V_\rho(\{\mathbb E_j(g)\}_j)\|_{L^p}\\
&\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align*}
This completes the proof of Proposition \ref{var of bilinear con exp}.
\end{proof}
\begin{remark}
In fact, the above bilinear variational inequality also holds for $p=1$, $1<p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$.
\end{remark}
The second proposition is the variational weak endpoint estimate for the conditional expectation sequence.
\begin{proposition}\label{weak var of bilinear con exp}
For $\rho>2$, we have
\begin{align*}
\lambda|\{x\in\mathbb R:V_\rho(\{\mathbb E_j(f_1,f_2)\}_j)(x)>\lambda\}|^2\le C\|f_1\|_{L^1(\mathbb R)}\|f_2\|_{L^1(\mathbb R)}
\end{align*}
uniformly in $\lambda>0$.
\end{proposition}
\begin{proof}
By scaling, we may assume that $\lambda=1$ and $\|f_1\|_{L^1}=\|f_2\|_{L^1}=1$; the general case then follows immediately.
It suffices to prove
\begin{align*}
|\{x\in\mathbb R:V_\rho(\{\mathbb E_j(f_1,f_2)\}_j)(x)>1\}|\le C.
\end{align*}
Analogously, we apply the Calder\'{o}n-Zygmund decomposition to the functions $f_i$ at height $1$ to obtain functions $g_i$, $b_i$ and dyadic intervals $\{I_{i,k}\}_k$ such that
$f_i=g_i+b_i\ \ \text{and}\ \ b_i=\sum_kb_{i,k}$, and we keep the notation $\Omega_i=\cup_k\tilde{I}_{i,k}$ from the previous section. Since
\begin{align*}
|\{x\in\mathbb R:V_\rho(\{\mathbb E_j(f_1,f_2)\}_j)(x)>1\}|&\le |\{x\in\mathbb R:V_\rho(\{\mathbb E_j(g_1,g_2)\}_j)(x)>1/4\}|+|\Omega_1|\\
&+|\Omega_2|+|\{x\notin\Omega_1:V_\rho(\{\mathbb E_j(b_1,g_2)\}_j)(x)>1/4\}|\\
&+|\{x\notin\Omega_2:V_\rho(\{\mathbb E_j(g_1,b_2)\}_j)(x)>1/4\}|\\
&+|\{x\notin\Omega_1\cup\Omega_2:V_\rho(\{\mathbb E_j(b_1,b_2)\}_j)(x)>1/4\}|,
\end{align*}
it suffices to estimate each of the above six sets. Applying Proposition \ref{var of bilinear con exp} together with the remark following it, we observe
\begin{align*}
|\{x\in\mathbb R:V_\rho(\{\mathbb E_j(g_1,g_2)\}_j)(x)>1/4\}|&\le C\|V_\rho(\{\mathbb E_j(g_1,g_2)\}_j)\|_{L^1}\le C\|g_1\|_{L^2}\|g_2\|_{L^2}\le C.
\end{align*}
Clearly, $|\Omega_1|+|\Omega_2|\le C$. Note that $\mathbb E_j(b_{1,k})(x)=0$ for $x\notin\tilde{I}_{1,k}$. Hence $\mathbb E_j(b_1,g_2)(x)=\mathbb E_j(b_1)(x)\cdot\mathbb E_j(g_2)(x)=0$ for $x\notin\Omega_1$. Consequently,
\begin{align*}
|\{x\notin\Omega_1:V_\rho(\{\mathbb E_j(b_1,g_2)\}_j)(x)>1/4\}|=|\{x\notin\Omega_1\cup\Omega_2:V_\rho(\{\mathbb E_j(b_1,b_2)\}_j)(x)>1/4\}|=0.
\end{align*}
Similarly,
\begin{align*}
|\{x\notin\Omega_2:V_\rho(\{\mathbb E_j(g_1,b_2)\}_j)(x)>1/4\}|=0.
\end{align*}
This proves Proposition \ref{weak var of bilinear con exp}.
\end{proof}
\section{Variational inequality for approximations of the identity}
In order to prove Theorem \ref{var of the 1.1}, we view the kernel $\{\phi_t(y,z)\}_{t>0}$ as having values in the Banach space
\begin{align}\label{def of B}
\mathcal B=\{a(t):\|a\|_{\mathcal B}=\|a\|_{V_\rho}<\infty\}.
\end{align}
Then, $V_\rho(\Phi(f,g))(x)=\|\{\phi_t(f,g)(x)\}_{t>0}\|_{\mathcal B}$. Lemma \ref{B interpolation} implies that Theorem \ref{var of the 1.1} is a consequence of the following two propositions:
\begin{proposition}\label{var of app >1}
For $\rho>2$, $1<p,p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$, we have
\begin{align*}
\|V_\rho(\Phi(f,g))\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align*}
\end{proposition}
\begin{proposition}\label{app var weak}
For $\rho>2$, we have
\begin{align*}
\lambda\big|\big\{x\in\mathbb R:V_\rho(\Phi(f,g))(x)>\lambda\big\}\big|^2\le C\|f\|_{L^1(\mathbb R)}\|g\|_{L^1(\mathbb R)}
\end{align*}
for any $\lambda>0$.
\end{proposition}
\subsection{Variational inequality with $1<p,p_1,p_2<\infty$.}
The goal of this subsection is to prove Proposition \ref{var of app >1}. Let $\varphi\in \mathscr{S}(\mathbb R)$ with $\int_{\mathbb R}\varphi(x)dx =1$. Then we have the following pointwise estimate:
\begin{align*}
V_\rho(\Phi(f,g))\le V_\rho(\{\varphi_t(f)\cdot\varphi_t(g)\}_{t>0})+V_\rho(\{\phi_t(f,g)-\varphi_t(f)\cdot\varphi_t(g)\}_{t>0}).
\end{align*}
Hence, it suffices to estimate the $L^p$ norms of $V_\rho(\{\varphi_t(f)\cdot\varphi_t(g)\}_{t>0})$ and $V_\rho(\{\phi_t(f,g)-\varphi_t(f)\cdot\varphi_t(g)\}_{t>0})$.
\begin{lemma}\label{var of bilinear var}
For $\rho>2$, $1<p,p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$, we have
\begin{align*}
\|V_\rho(\{\varphi_t(f)\cdot\varphi_t(g)\}_{t>0})\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align*}
\end{lemma}
\begin{proof}
Note that
\begin{align*}
&|\varphi_{t_i}(f)\cdot\varphi_{t_i}(g)-\varphi_{t_{i+1}}(f)\cdot\varphi_{t_{i+1}}(g)|\\
\le& |\varphi_{t_i}(f)-\varphi_{t_{i+1}}(f)|\cdot|\varphi_{t_i}(g)|+|\varphi_{t_i}(g)-\varphi_{t_{i+1}}(g)|\cdot |\varphi_{t_{i+1}}(f)|.
\end{align*}
Then, by using H\"{o}lder's inequality and Theorem 2.6 in \cite{HL17}, we get
\begin{align*}
\|V_\rho(\{\varphi_t(f)\cdot\varphi_t(g)\}_{t>0})\|_{L^p(\mathbb R)}&\le \|M(g)\cdot V_\rho(\{\varphi_t(f)\}_{t>0})\|_{L^p}+\|M(f)\cdot V_\rho(\{\varphi_t(g)\}_{t>0})\|_{L^p}\\
&\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align*}
\end{proof}
The long variation operator $V^L_\rho(\mathcal F)$ of the family $\mathcal F$ at $x$ is defined by
\begin{equation}\label{defini of VLq(F)}
V^L_\rho(\mathcal F)(x)=\|\{F_{2^n}(x)\}_{n\in\mathbb{Z}}\|_{V_\rho},\quad \rho\ge1.
\end{equation}
Moreover, the short variation operator is defined by
$$
S_2(\mathcal F)(x)=\bigg(\sum_{j\in\mathbb Z}\|\{F_t(x)\}_{t>0}\|_{V_2[2^j,2^{j+1}]}^2\bigg)^{1/2}.
$$
Then the following pointwise comparison holds.
\begin{lemma}{\rm (see \cite[Lemma 1.3]{JSW08})}\label{lem:convert lemma}
For $\rho>2$,
\begin{equation}\label{pcls}
V_\rho(\mathcal F)(x)\lesssim V^L_\rho(\mathcal F)(x)+S_2(\mathcal F)(x).
\end{equation}
\end{lemma}
\begin{lemma}\label{var of App diff}
For $\rho>2$, $1<p,p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$, we have
\begin{align*}
\|V_\rho(\{\phi_t(f,g)-\varphi_t(f)\cdot\varphi_t(g)\}_{t>0})\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align*}
\end{lemma}
\begin{proof}
To estimate the $L^p$ norm of $V_\rho(\{\phi_t(f,g)-\varphi_t(f)\cdot\varphi_t(g)\}_{t>0})$, we denote the function $\phi(y,z)-\varphi(y)\varphi(z)$ by $\psi(y,z)$ for convenience. The pointwise comparison \eqref{pcls} reduces the desired estimate to
\begin{align}\label{var of App diff L}
\|V^L_\rho(\{\psi_t(f,g)\}_{t>0})\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}
\end{align}
and
\begin{align}\label{var of App diff S}
\|S_2(\{\psi_t(f,g)\}_{t>0})\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}.
\end{align}
\par
We show \eqref{var of App diff L} first. Clearly, for $\rho>2$ we have
\begin{align*}
V^L_\rho(\{\phi_t(f,g)-\varphi_t(f)\cdot\varphi_t(g)\}_{t>0})=V^L_\rho(\{\psi_t(f,g)\}_{t>0})\le \big(\sum_j|\psi_{2^j}(f,g)|^2\big)^{1/2}.
\end{align*}
Hence, it suffices to prove
\begin{align}\label{squ fun est}
\|\big(\sum_j|\psi_{2^j}(f,g)|^2\big)^{1/2}\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)},
\end{align}
for $1<p,p_1,p_2<\infty$ and $\frac1p=\frac1{p_1}+\frac1{p_2}$.
\par
To obtain \eqref{squ fun est}, we apply \cite[Theorem 1.1]{JH12} and verify that $\psi$ satisfies the relevant conditions. Since $\phi\in\mathscr S(\mathbb R^2)$ and $\varphi\in\mathscr S(\mathbb R)$, we have $\psi\in\mathscr S(\mathbb R^2)$. Hence, for any $N\in\mathbb N$ and any multi-index $\alpha$ we have
\begin{align*}
|\partial^\alpha\psi(y,z)|\le \frac{C_N}{(1+|y|+|z|)^{2N}}\le \frac{C_N}{(1+|y|)^N(1+|z|)^N}.
\end{align*}
Moreover, it satisfies the cancellation condition
\begin{align*}
\int_{\mathbb R^2}\psi(y,z)dydz=\int_{\mathbb R^2}\phi(y,z)dydz-\int_{\mathbb R}\varphi(y)dy\cdot\int_{\mathbb R}\varphi(z)dz=0.
\end{align*}
\par
As a result, we obtain
\begin{align*}
\|V^L_\rho(\{\phi_t(f,g)-\varphi_t(f)\cdot\varphi_t(g)\}_{t>0})\|_{L^p(\mathbb R)}&\le \|\big(\sum_j|\psi_{2^j}(f,g)|^2\big)^{1/2}\|_{L^p(\mathbb R)}\\
&\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)},
\end{align*}
and complete the proof of \eqref{var of App diff L}.
\par
Next we turn to the proof of \eqref{var of App diff S}. By Bergh and Peetre's estimate \cite{BP}
\begin{align*}
\|a\|_{V_\rho}\le \|a\|_{L^\rho}^{1/\rho'}\|a'\|_{L^\rho}^{1/\rho},
\end{align*}
we observe that
\begin{align*}
S_2^2(\{\psi_t(f,g)\}_{t>0})(x)&=\sum_k\|\{\psi_t(f,g)\}_{t>0}\|^2_{V_2[2^k,2^{k+1}]}\\
&\le \sum_k\|\psi_t(f,g)\|_{L_t^2[2^k,2^{k+1}]}\|\frac d{dt}\psi_t(f,g)\|_{L_t^2[2^k,2^{k+1}]}\\
&\le C\bigg(\int_0^\infty|\psi_t(f,g)(x)|^2\frac{dt}t\bigg)^{1/2}\bigg(\int_0^\infty|\tilde{\psi}_t(f,g)(x)|^2\frac{dt}t\bigg)^{1/2}\\
&:=C G(f,g)(x)\tilde{G}(f,g)(x),
\end{align*}
where $\tilde{\psi}(y,z)=2\psi(y,z)+y\partial_y\psi(y,z)+z\partial_z\psi(y,z)$. Since $\psi,\tilde{\psi}\in \mathscr S(\mathbb R^2)$, for any $N\in\mathbb N$ we have
\begin{align*}
|\hat{\psi}(\xi,\eta)|+|\hat{\tilde\psi}(\xi,\eta)|\le \frac C{(1+|(\xi,\eta)|)^N}\ \ \text{and}\ \ \hat{\psi}(0,0)=\hat{\tilde\psi}(0,0)=0.
\end{align*}
Using \cite[Example 2.1]{SXY17} and \cite[Theorem 1.2]{XPY15}, we get
\begin{align*}
\|G(f,g)\|_{L^p(\mathbb R)}+\|\tilde G(f,g)\|_{L^p(\mathbb R)}\le C\|f\|_{L^{p_1}(\mathbb R)}\|g\|_{L^{p_2}(\mathbb R)}
\end{align*}
for $1<p,p_1,p_2<\infty$. Furthermore, by H\"{o}lder's inequality
\begin{align*}
\|S_2(\{\psi_t(f,g)\}_{t>0})\|^p_{L^p(\mathbb R)}&=\int_{\mathbb R}S_2(\{\psi_t(f,g)\}_{t>0})^{2\cdot \frac p2}(x)dx\le \int_{\mathbb R}G(f,g)^{\frac p2}(x)\tilde{G}(f,g)^{\frac p2}(x)dx\\
&\le C\|G(f,g)\|^{\frac p2}_{L^p(\mathbb R)}\|\tilde G(f,g)\|^{\frac p2}_{L^p(\mathbb R)}\le C\|f\|^p_{L^{p_1}(\mathbb R)}\|g\|^p_{L^{p_2}(\mathbb R)}.
\end{align*}
This completes the proof of \eqref{var of App diff S}.
\end{proof}
\subsection{Variational weak endpoint type estimate}
To prove Proposition \ref{app var weak}, we use bilinear vector-valued Calder\'{o}n-Zygmund theory. Let $\mathcal B$ be the Banach space given by \eqref{def of B} and let $F$ be a bilinear function from $\mathbb C\times \mathbb C$ to $\mathcal B$. We define
$$
\|F\|_{\mathcal{BL}(\mathbb C\times \mathbb C\rightarrow \mathcal B)}=\sup_{|\xi_1|,|\xi_2|\le1}\|F(\xi_1,\xi_2)\|_{\mathcal B}.
$$
\par
Let $T$ be a bilinear operator defined on $\mathscr S(\mathbb R)\times\mathscr S(\mathbb R)$ and taking values in $\mathscr S'(\mathbb R;\mathcal B)$. Assume that the restriction of its distributional kernel away from the diagonal $x=y=z$ in $\mathbb R^3$ coincides with a $\mathcal B$-valued function $K$, satisfying the size condition
\begin{align*}
\|K(x,y,z)\|_{\mathcal{BL}(\mathbb C\times \mathbb C\rightarrow \mathcal B)}\le \frac C{(|x-y|+|x-z|)^2}\ \ \text{for}\ \ |x-y|+|x-z|\neq0,
\end{align*}
the regularity conditions
\begin{align*}
&\|K(x,y,z)-K(x+h,y,z)\|_{\mathcal{BL}(\mathbb C\times \mathbb C\rightarrow \mathcal B)}+\|K(x,y,z)-K(x,y+h,z)\|_{\mathcal{BL}(\mathbb C\times \mathbb C\rightarrow \mathcal B)}\\
&+\|K(x,y,z)-K(x,y,z+h)\|_{\mathcal{BL}(\mathbb C\times \mathbb C\rightarrow \mathcal B)}\le \frac {C|h|}{(|x-y|+|x-z|)^3}
\end{align*}
for $|h|\le\max(|x-y|,|x-z|)/2$, and such that
\begin{align*}
T(f,g)(x)=\int_{\mathbb R^2}K(x,y,z)f(y)g(z)dydz
\end{align*}
whenever $f,g\in\mathcal D(\mathbb R)$ and $x\notin \text{supp}\ f\cap \text{supp}\ g$. If, under the above assumptions, $T$ is moreover bounded from $L^{p_1}\times L^{p_2}$ to $L^p(\mathcal B)$ for some $1<p,p_1,p_2<\infty$ with $\frac1p=\frac1{p_1}+\frac1{p_2}$, we say that $T$ is a bilinear $\mathcal B$-valued Calder\'{o}n-Zygmund operator.
We state a weak endpoint result from \cite{JH12} for bilinear vector-valued Calder\'{o}n-Zygmund operators as follows.
\begin{lemma}\label{B weak lemma}
If $T$ is a bilinear $\mathcal B$-valued Calder\'{o}n-Zygmund operator, then $T$ is bounded $L^1\times L^1\rightarrow L^{1/2,\infty}(\mathcal B)$.
\end{lemma}
\begin{proof}[Proof of Proposition \ref{app var weak}] By Lemma \ref{B weak lemma}, it suffices to verify that $(f,g)\mapsto\{\phi_t(f,g)\}_{t>0}$ is a bilinear $\mathcal B$-valued Calder\'{o}n-Zygmund operator. Since we have proved in Proposition \ref{var of app >1} that $\{\phi_t(f,g)\}_{t>0}$ is bounded from $L^{p_1}\times L^{p_2}$ to $L^p(\mathcal B)$ for $1<p,p_1,p_2<\infty$, it remains to verify that the kernel $\{\phi_t(y,z)\}_{t>0}$ satisfies the size and regularity conditions.
\par
We consider the size condition first. Note that $\|a\|_{\mathcal B}=\|a\|_{V_\rho}\le \|a\|_{V_1}\le \int_0^\infty |a'(t)|dt$. Then,
\begin{align*}
\|\{\phi_t(y,z)\}_{t>0}\|_{\mathcal B}&\le \int_0^\infty\big|\frac{d}{dt}\phi_t(y,z)\big|dt\le C\int_0^\infty\big[\frac1{t^3}|\phi(\frac yt,\frac zt)|+\frac1{t^4}\big(|y||\phi_1(\frac yt,\frac zt)|+|z||\phi_2(\frac yt,\frac zt)|\big)\big]dt\\
&\le C\int_0^\infty\big[\frac1{t^3(1+\frac{|(y,z)|}t)^N}+\frac{|(y,z)|}{t^4(1+\frac{|(y,z)|}t)^N}\big]dt\le \frac{C}{|(y,z)|^2}\le \frac{C}{(|y|+|z|)^2},
\end{align*}
where $\phi_1(y,z)=\partial_y\phi(y,z)$ and $\phi_2(y,z)=\partial_z\phi(y,z)$.
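For the reader's convenience, the integral bound in the last display follows from splitting the integral at $t=r$, where $r=|(y,z)|$ and $N>3$:
\begin{align*}
\int_0^\infty\frac{dt}{t^3(1+\frac rt)^N}\le \frac1{r^N}\int_0^r t^{N-3}\,dt+\int_r^\infty\frac{dt}{t^3}\le \frac{C_N}{r^2},
\end{align*}
since $(1+\frac rt)^{-N}\le (\frac tr)^N$ for $0<t\le r$; the term carrying the extra factor $\frac{|(y,z)|}{t}$ is handled in the same way.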
\par
For the regularity condition, we have
\begin{align*}
\|\{\phi_t(y,z)-\phi_t(y',z)\}_{t>0}\|_{\mathcal B}\le C\int_0^\infty\frac{|y-y'|}{t^4(1+\frac{|(y,z)|}t)^N}dt\le \frac{C|y-y'|}{|(y,z)|^3}\le \frac{C|y-y'|}{(|y|+|z|)^3}.
\end{align*}
In the same way, we have
\begin{align*}
\|\{\phi_t(y,z)-\phi_t(y,z')\}_{t>0}\|_{\mathcal B}\le C\int_0^\infty\frac{|z-z'|}{t^4(1+\frac{|(y,z)|}t)^N}dt\le \frac{C|z-z'|}{|(y,z)|^3}\le \frac{C|z-z'|}{(|y|+|z|)^3}.
\end{align*}
This completes the proof of Proposition \ref{app var weak}.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}\label{introduction}
Nowadays, the magnetic properties of low-dimensional systems in the form of graphene-like structures have attracted a significant amount of interest. The reason is that two-dimensional graphene
\cite{novoselov2,wallace,mcClure,slonczewski,geim}, and its variants \cite{cahangirov} defined on a honeycomb lattice,
exhibit a variety of interesting electric and magnetic properties which are significantly affected by varying the system size.
After experimental realization of exactly two dimensional monocrystalline graphitic films \cite{novoselov1} which are only a few atoms thick but stable under environmental conditions,
theoretical and experimental research interests have been directed to the study of two-dimensional
layered structures. For instance, in order to reveal the finite-temperature properties of honeycomb iridates with general formula $\mathrm{A_{2}IrO_{3}}$ which exhibit strong spin-orbit coupling (SOC),
Price and Perkins \cite{price1,price2} have performed Monte Carlo (MC) simulations based on the classical Heisenberg-Kitaev (HK) model \cite{kitaev} on a honeycomb lattice where the interactions
between nearest neighbors are of $XX$, $YY$ or $ZZ$ type.
Very recently, it has been shown that transition metal trihalides ($\mathrm{MX_{3}}$) defined on a two dimensional honeycomb lattice may exhibit magnetic order below a finite critical temperature \cite{sarikurt,ersan}.
The importance of the honeycomb lattice originates not only from experimental research on graphene, but also rests on theoretical grounds. Namely, it offers reduced mathematical complexity, and there are also
some exact results regarding the magnetic properties of this structure \cite{horiguchi,urumov1,urumov2,urumov3,urumov4,zhen}.
From the experimental point of view, single layer, double layer and few (3 to 10) layer honeycomb structures are classified as three different types of 2D crystals, and thin film limit is reached for thicker systems \cite{novoselov1}.
In this regard, investigation of magnetic properties of graphene-like multilayers gained particular attention,
and a wide variety of such systems have been successfully modeled within the framework of the Ising model and its variants \cite{masrour,mhirech,jiang,santos,kaneyoshi1}. For instance, using the effective field
theory (EFT) formalism, Jiang and coworkers \cite{jiang} investigated the magnetic properties such as magnetization and the magnetic susceptibility of a nano-graphene bilayer.
For a trilayer Ising nanostructure, EFT calculations have been performed and from the thermal variations of the total magnetization, six distinct compensation types have been reported by Santos and S\'{a} Barreto \cite{santos}.
In a recent paper, Kaneyoshi \cite{kaneyoshi1} investigated the magnetic behavior of an Ising bilayer with non-magnetic inter-layers. Based on EFT method, some characteristic features of ferrimagnetism have also been reported in this study.
In that work, a realistic case has also been considered by assuming a distance-dependent indirect exchange interaction between the two magnetic layers.
On the other hand, after experimental realization of dynamic phase transitions \cite{acharyya,rikvold} in uniaxial cobalt films \cite{berger}, stochastic dynamics of
kinetic systems gained renewed interest \cite{yuksel,vatansever}. In such systems, a dynamic phase
transition between dynamically ordered and disordered phases takes place which is characterized by a dynamic symmetry breaking. Depending on the two competing time scales, namely, the period of the externally applied oscillating magnetic field
and the relaxation time of the system, the kinetic Ising model may exhibit dynamic ferromagnetic (FM) or dynamic paramagnetic (PM) character. The winner of the competition between the above-mentioned time scales is determined by another complicated interplay
between the field amplitude, field period, temperature, and exchange coupling.
The effective field theory \cite{kaneyoshi2} partially takes into account the spin fluctuations, and it is superior to conventional mean field theory \cite{strecka}
where the spin-spin correlations are completely ruled out. Despite its mathematical simplicity, mean field predictions
are only valid for systems with dimensionality $d\geq4$.
In a recent work, we have shown that EFT and MC results qualitatively agree well with each other for a particular ternary spin system \cite{yuksel2}.
In this regard, EFT method promises reasonable results with less computational cost.
The aim of the present paper is twofold: First, a direct comparison of the MC results obtained within the present work with the available EFT results of
Ref. \cite{kaneyoshi1} will be presented for the Ising bilayer system. As will be shown in the following discussions, qualitatively plausible agreement exists between EFT and MC results. Second, we will present some results regarding the
stochastic dynamics and compensation behavior of the kinetic Ising bilayer in the presence of two different forms of the oscillating magnetic field, namely a square wave form and a sinusoidal wave form.
The rest of the paper can be outlined as follows: In Section \ref{formulation}, we will present the formulation and simulation details of our model. Section \ref{results} contains numerical results and related discussions.
Finally, Section \ref{conclusion} is devoted to our conclusions.
\section{Model and Formulation}\label{formulation}
\begin{figure}[!h]
\center
\subfigure[\hspace{0cm}] {\includegraphics[width=4.5cm]{fig1a}}
\hspace{0.5cm}
\subfigure[\hspace{0cm}] {\includegraphics[width=6.0cm]{fig1b}}\\
\caption{(a) Schematic representation of the simulated magnetic bilayer. Sublattice $A$ $(B)$ is occupied by $\sigma=\pm1/2$ $(S=\pm1,0)$ spins. (b) Equivalent of honeycomb lattice in the brick lattice representation.
Each pseudo spin has three nearest neighbors, and is located on the nodes of a $L\times L$ square lattice.} \label{fig1}
\end{figure}
Our bilayer model consists of successive stacking of 2D honeycomb monolayers forming a 3D graphite structure (Fig. \ref{fig1}a).
The bottom layer, i.e. the sublattice $A$ consists of Ising spins with $\sigma_{i}=\pm\frac{1}{2}$ whereas the topmost layer (sublattice $B$)
consists of tightly packed magnetic atoms with a pseudo spin variable $S_{i}=\pm1,0$. The number of nonmagnetic layers between the sublattices $A$ and $B$ is denoted by $n$.
The intra-layer exchange couplings are respectively denoted by $J_{A}$ $(>0)$ and $J_{B}$ $(>0)$ whereas the interlayer exchange coupling is represented by $J_{R}$ $(>0)$.
This selection of interaction constants allows us to study the ferrimagnetic behavior of the model.
We consider an indirect exchange coupling between the layers $A$ and $B$.
Hence, following the same notation with Ref. \cite{kaneyoshi1}, we assume
\begin{equation}\label{eq1}
J_{R}=J\exp[-\lambda(n+1)]/(n+1)^{\delta},
\end{equation}
where the parameter $\lambda$ is related to the disorder, $\delta$ is related to the dimensionality of the system, and $n$ is the number of nonmagnetic layers between the sublattices $A$ and $B$
(please see Ref. \cite{kaneyoshi1} for details).
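To illustrate how rapidly the indirect coupling of Eq. (\ref{eq1}) decays, the following short numerical sketch (in units of $J=1$) evaluates $J_{R}$ for a few values of $\lambda$ and $n$, with $\delta=3$ as used in the simulations below.

```python
import math

def j_r(n, lam, delta, j=1.0):
    """Indirect interlayer coupling of Eq. (1): J_R = J exp(-lam (n+1)) / (n+1)^delta."""
    return j * math.exp(-lam * (n + 1)) / (n + 1) ** delta

# J_R decays both with the disorder parameter lam and with the number n
# of nonmagnetic spacer layers (delta fixed at 3 as in the simulations).
for lam in (0.0, 0.5, 1.0):
    print(lam, [round(j_r(n, lam, 3.0), 4) for n in (1, 2, 3)])
```

The rapid decay seen here is what drives the crossover from ferrimagnetic behavior to two decoupled ferromagnetic layers discussed in the results section.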
The Hamiltonian of the model represented by Fig. \ref{fig1} is given by
\begin{equation}\label{eq2}
\mathcal{H}=-J_{A}\sum_{<ij>}\sigma_{i}\sigma_{j}-J_{B}\sum_{<kl>}S_{k}S_{l}+J_{R}\sum_{<ik>}\sigma_{i}S_{k}-D_{B}\sum_{k}(S_{k})^{2},
\end{equation}
where the spin-spin coupling terms in the first three sums are taken over only the nearest-neighbor spin pairs whereas the
last summation is carried out over all lattice sites of sublattice $B$, with $D_{B}$ being the single-ion anisotropy parameter of the spin-1 sublattice.
In order to implement the MC simulation procedure for the present system, each pseudo spin variable $\sigma_{i}$ and $S_{k}$ is assigned on the lattice sites of a brick lattice \cite{kim,morita} with lateral dimension $L$
which is topologically equivalent to the honeycomb lattice (Fig. \ref{fig1}b). Periodic boundary conditions have been imposed in both lateral and vertical directions. During the simulations, we have monitored the
quantities of interest over $250000$ Monte Carlo steps per lattice site for the equilibrium system, after discarding the first $50000$ steps. On the other hand, for the calculation of kinetic properties, we have
obtained time series of magnetization over $2000$ cycles of external magnetic field, and allowed the system to relax during the first $1000$ periods.
In the equilibrium case, the thermal average of sub-lattice ($M_{A}$ and $M_{B}$) and total $(M_{T})$ magnetizations have been calculated according to
\begin{equation}\label{eq3}
M_{\alpha}=\left\langle \sum_{t}m_{\alpha}(t) \right\rangle, \quad \alpha=A,B,T
\end{equation}
where $m_{\alpha}(t)$ is the time series of corresponding sub-lattice (or total) magnetization per spin.
Then the definition of magnetic susceptibility and the alternative description of the total magnetization can also be given by
\begin{equation}\label{eq5}
\chi=\frac{N_{T}}{k_{B}T}\left[\left\langle \sum_{t}(m_{T}(t))^{2}\right\rangle-\left\langle \sum_{t}m_{T}(t) \right\rangle^{2}\right],
\end{equation}
\begin{equation}\label{eq4}
M_{T}=[M_{A}+M_{B}]/2.0,
\end{equation}
where $N_{T}$ is the total number of lattice sites.
Some of the simulation parameters have been fixed as $J_{A}=1.0J$, $J_{B}=0.5J$. For simplicity, we also set $k_{B}=1$.
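A schematic single-site Metropolis update for the Hamiltonian of Eq. (\ref{eq2}) is sketched below. It is an illustration rather than the production code: for brevity it uses a small square-lattice neighborhood instead of the three-neighbor brick lattice, and arbitrary illustrative parameter values.

```python
import math, random

random.seed(0)

L = 8                                    # small lateral size, for illustration only
JA, JB, JR, DB, T = 1.0, 0.5, 0.2, -0.85, 1.0
sigma = [[random.choice((-0.5, 0.5)) for _ in range(L)] for _ in range(L)]
S = [[random.choice((-1, 0, 1)) for _ in range(L)] for _ in range(L)]

def nbrs(i, j):
    # Square-lattice neighborhood used here for brevity; the paper uses a
    # brick lattice with three neighbors per site.
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def delta_e_sigma(i, j, new):
    """Energy change for sigma_ij -> new, from H = -JA sum ss + JR sum sS - ..."""
    old = sigma[i][j]
    h_intra = sum(sigma[a][b] for a, b in nbrs(i, j))
    # +JR sigma*S term of Eq. (2): the interlayer partner sits directly above.
    return -(new - old) * (JA * h_intra - JR * S[i][j])

def delta_e_S(i, j, new):
    old = S[i][j]
    h_intra = sum(S[a][b] for a, b in nbrs(i, j))
    return (-(new - old) * (JB * h_intra - JR * sigma[i][j])
            - DB * (new ** 2 - old ** 2))

def metropolis_sweep():
    for i in range(L):
        for j in range(L):
            new = random.choice((-0.5, 0.5))
            dE = delta_e_sigma(i, j, new)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                sigma[i][j] = new
            new = random.choice((-1, 0, 1))
            dE = delta_e_S(i, j, new)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                S[i][j] = new

for _ in range(100):
    metropolis_sweep()
mA = sum(map(sum, sigma)) / L ** 2       # sublattice magnetizations per site
mB = sum(map(sum, S)) / L ** 2
print(mA, mB)
```

The sign structure of the two `delta_e_*` functions mirrors Eq. (\ref{eq2}): the intra-layer terms are ferromagnetic, the interlayer $+J_{R}\sigma S$ term is antiferromagnetic, and the $-D_{B}S^{2}$ term only enters the spin-1 update.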
\section{Results and Discussion}\label{results}
In Section \ref{sub1}, we will present the magnetic properties of the Ising bilayer in the absence of a magnetic field, while Section \ref{sub2} is devoted to discussions regarding the nonequilibrium stochastic behavior of the
system in the presence of time dependent oscillating magnetic field.
\subsection{Equilibrium properties}\label{sub1}
\begin{figure}[!h]
\center
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig2a}}\\
\subfigure[\hspace{0cm}] {\includegraphics[width=7.0cm]{fig2b}}\\
\caption{(a) Phase diagram of the Ising bilayer with $L=128$ plotted in a $(D_{B}/J \ \mathrm{vs} \ T_{c}/J )$ plane for three different values of $\lambda$.
(b) Magnetic properties such as the total magnetization $M_{T}$ and magnetic susceptibility $\chi$ for $D_{B}=-1.5$ with $n=1$, $\lambda=0$ and $\delta=3.0$. Different symbols correspond to different lattice size $L$.}\label{fig2}
\end{figure}
We start our investigation by examining the phase diagram of the present model in a $(D_{B}/J \ \mathrm{vs} \ T_{c}/J )$ plane for three values of disorder parameter $\lambda$ where the numerical value of the transition temperature
has been estimated from the peak point of susceptibility curves. Here, we consider one monolayer of nonmagnetic sites.
According to Eq. (\ref{eq1}), the antiferromagnetic interface exchange coupling $J_{R}$ approaches zero exponentially with increasing $\lambda$. Hence, for large values of this parameter, we have $J_{R}\rightarrow0$, and in this limit,
the two sublattices $A$ and $B$ become magnetically independent of each other. For moderate values such as $\lambda\leq0.5$, a ferrimagnetic character is adopted by the system, and both sublattices undergo a phase transition at the same critical temperature.
For $\lambda=0.0$, $J_{R}$ attains its maximum value, and for positive $D_{B}/J$, the critical temperature is reduced with increasing $\lambda$. On the other hand, for large negative values of $D_{B}/J$, only the $S_{i}=0$ state is allowed
in sublattice $B$. Therefore, if we define a threshold value $D_{B}^{*}/J$ for single ion anisotropy parameter then the sublattice $B$ becomes nonmagnetic for $D_{B}/J<D_{B}^{*}/J$. In this case, the horizontal line in the phase diagram
is the sole contribution of sublattice $A$ to the transition temperature. For the spin-1 Blume-Capel model, MC calculations predict a tricritical point at $D_{t}/J=-1.446$ for the same phase diagram \cite{booth},
whereas the EFT result is $D_{t}/J=-1.41$ \cite{kaneyoshi2,tucker}. We note that the selection of the exchange coupling parameters, namely $J_{A}=1.0J$ and $J_{B}=0.5J$, helps us to avoid first order phase transitions in the present system.
This can be seen from Fig. \ref{fig2}b, where we plot the magnetization and magnetic susceptibility as a function of temperature for several lattice sizes ranging from $L=64$ to $L=256$. As shown in this figure,
the magnetization exhibits a continuous phase transition in the vicinity of critical temperature and magnetic susceptibility curves exhibit a size dependent positive divergence around $T_{c}$.
All these observations clearly demonstrate that the phase transition is always of second order for the whole range of $D_{B}/J$ values.
Besides, the ground state magnetization saturates at $M_{T}=0.25$, since the magnetization of sublattice $B$ is zero for $D_{B}=-1.5J$.
As a final note regarding this figure, we should point out that a qualitatively similar phase diagram
has been obtained in Ref. \cite{kaneyoshi1} where the author used EFT. This fact again shows that the phase diagrams obtained by the EFT method exhibit the same topology as those obtained from
Monte Carlo (MC) simulation.
\begin{figure}[!h]
\center
\includegraphics[width=6.5cm]{fig3}
\caption{Total magnetization $M_{T}$ as a function of temperature for selected values of $D_{B}/J$; the remaining system parameters are shown in the figure. The system size has been fixed as $L=128$.} \label{fig3}
\end{figure}
Next in Fig. \ref{fig3}, we present some ferrimagnetic properties of the system where the total magnetization $M_{T}$ has been plotted as a function of temperature for some selected values of $D_{B}/J$. The other system parameters have been fixed as
displayed in the figure. In a recent work \cite{jiang}, six different compensation types \cite{neel,strecka2} have been observed for an Ising trilayer system. On the other hand, Ref. \cite{kaneyoshi1} reports that the total magnetization of Ising bilayer with indirect
interlayer exchange exhibits $P$-, $N$- and $Q$- type behaviors which have also been observed in our calculations. Moreover, the unclassified curve corresponding to $d=-0.85$ of Ref. \cite{kaneyoshi1} is identical to
the curve corresponding to $D_{B}/J=-0.88$ in Fig. \ref{fig3} of the present study \cite{footnote}. This observation again supports the consistency of the results obtained by EFT and MC methods.
\begin{figure}[!h]
\center
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig4a}}
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig4b}}\\
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig4c}}\\
\caption{Influence of (a) $\lambda$, (b) $\delta$, and (c) $n$ on the compensation behavior of the total magnetization of the Ising bilayer with $L=128$.} \label{fig4}
\end{figure}
As shown in Fig. \ref{fig3}, a compensation behavior may originate in the system for a narrow range of $D_{B}/J$ values.
The compensation temperature, which is peculiar to systems exhibiting ferrimagnetism, is the temperature below the transition point at which the sublattice magnetizations cancel each other.
The influence of varying $\lambda$, $\delta$ and $n$ on the magnetization profile has been depicted in Fig. \ref{fig4}. As shown in this figure, the $N$- type magnetization curve evolves towards the $Q$- type behavior with increasing
$\lambda$, $\delta$, and $n$. This is an expected result, since $J_{R}$ rapidly decays towards zero with increasing values of these parameters. Therefore, ferrimagnetism is destroyed, and we obtain two independent ferromagnetic
layers.
\subsection{Kinetic properties}\label{sub2}
\begin{figure}[!h]
\center
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig5a}}
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig5b}}\\
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig5c}}
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig5d}}\\
\caption{Time series of magnetizations $m_{A}$, $m_{B}$ and magnetic field $h(t)$ for the system size $L=128$. The time evolution of magnetic field is either in sinusoidal form ((a),(b)) or in square wave form ((c),(d)).
The leftmost plots have been obtained for $P=20$ whereas the rightmost curves correspond to the high period case $P=200$. The magnetic field amplitude has been fixed as $h_{0}=0.4J$.} \label{fig5}
\end{figure}
Up to now, we have considered the ferrimagnetic properties of the Ising bilayer in the absence of a magnetic field.
From now on, we will discuss the variation of the magnetic properties of the system in the presence of a time dependent oscillating magnetic field for the following set of system parameters:
$J_{A}=1.0J$, $J_{B}=0.5J$, $D_{B}=-0.85J$, $n=1$, $\delta=3.0$, and $\lambda=0.0$. This set of parameters not only allows us to avoid first order phase transitions, but also provides information
about how the compensation behavior varies in the presence of oscillating magnetic field. For this aim we consider two distinct types of magnetic field: (i) sinusoidal wave, (ii) square wave.
In this case, the Hamiltonian equation can be written as
\begin{equation}\label{eq7}
\mathcal{H}=\mathcal{H}_{0}+h(t)(\sum_{i}\sigma_{i}+\sum_{k}S_{k}),
\end{equation}
where $\mathcal{H}_{0}$ is the Hamiltonian in the absence of the dynamic magnetic field, and the second and third summations correspond to dynamic Zeeman energy terms.
As we have underlined in the preceding sections, the system can exhibit a field induced dynamic phase transition between ordered and disordered
phases. Such a situation is shown in Fig. \ref{fig5} where we respectively select the field amplitude and the temperature as $h_{0}/J=0.4$, and $T=0.8T_{c}$. Here $T_{c}$ denotes the critical temperature in the absence of any magnetic field.
Oscillation period of the magnetic field is denoted by $P$. In Fig. \ref{fig5}, the top and bottom panels respectively correspond to sinusoidal and square wave forms of the oscillating magnetic field. In the high frequency regime
(i.e. the left panels) the sublattice magnetizations $m_{A}$ and $m_{B}$ oscillate around a nonzero value. This corresponds to the dynamically ordered phase. On the other hand, in the low frequency regime,
the sublattice magnetizations $m_{A}$ and $m_{B}$ can follow the external perturbation with a small phase lag, and the time average of the magnetization is very close to zero where the system is in the dynamically disordered phase.
In this process, it is possible to trigger a field induced dynamic phase transition by properly adjusting the field period $P$.
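The two driving fields used throughout this section can be sketched as follows; the square wave here is taken as the sign of a sinusoid of the same period and amplitude, which is one common convention (an assumption, since the paper does not spell out the wave form explicitly).

```python
import math

def h_field(t, h0, period, form="sin"):
    """Oscillating field h(t): sinusoidal, or a square wave of the same
    period and amplitude (sign of the sinusoid)."""
    phase = math.sin(2 * math.pi * t / period)
    if form == "sin":
        return h0 * phase
    return h0 * (1.0 if phase >= 0 else -1.0)   # square wave
```

A square wave spends the whole half-period at full amplitude $\pm h_{0}$, which is consistent with the observation below that it suppresses $T_{c}$ and $T_{comp}$ more strongly than a sinusoid of the same amplitude.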
\begin{figure}[!h]
\center
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig6a}}
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig6b}}\\
\caption{Variation of dynamic order parameters $Q_{A}$ and $Q_{B}$ as functions of temperature for $L=128$. The magnetic field $h(t)$ varies in (a) sinusoidal (b) square wave form with time.
System parameters accompany each figure. In the inset, the total dynamic order parameter $Q_{T}$ has been depicted.} \label{fig6}
\end{figure}
Compensation behavior in the presence of dynamic magnetic fields can be examined by calculating the thermal average of dynamic order parameters corresponding to sublattices, as well as the total magnetization. These magnetic properties are defined as
the time averaged magnetizations over the successive cycles of the oscillating field \cite{tome},
\begin{equation}\label{eq8}
Q_{\alpha}=\frac{1}{NP}\oint m_{\alpha}(t)dt, \ \alpha=A,B \ \mathrm{or} \ T
\end{equation}
where $P$ is the field period, and $N$ denotes the number of magnetic field cycles. In Fig. \ref{fig6}, in order to compare the stochastic behavior of the system in the presence of sinusoidal and square wave magnetic field,
we have depicted the thermal variation of the dynamic sublattice order parameters $Q_{A}$ and $Q_{B}$ as functions of the temperature. It can be seen from this figure that the transition temperature, as well as the compensation point $T_{comp}$,
is reduced with increasing magnetic field amplitude $h_{0}$. Moreover, in the presence of a square wave magnetic field, the numerical values of $T_{c}$ and $T_{comp}$ are clearly lower than those obtained for the sinusoidally oscillating
magnetic fields. The insets in Fig. \ref{fig6} show the thermal variation of the total magnetization when the field amplitude is varied. For both forms of the magnetic field, $Q_{T}$ exhibits $N$- type behavior. Therefore, we can conclude that
although the compensation temperature is reduced with increasing $h_{0}$, $Q_{T}$ maintains its N\'{e}el classification scheme.
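In practice Eq. (\ref{eq8}) amounts to averaging the recorded magnetization time series over whole field cycles. A minimal discrete version is sketched below; the two series are synthetic illustrations of the ordered and disordered regimes, not simulation data.

```python
import math

def dynamic_order_parameter(m_series, period):
    """Discrete version of Eq. (8): average m(t) over complete field cycles."""
    n_cycles = len(m_series) // period
    used = m_series[: n_cycles * period]
    return sum(used) / (n_cycles * period)

P = 100
# dynamically ordered: magnetization oscillates around a nonzero mean
ordered = [0.8 + 0.1 * math.sin(2 * math.pi * t / P) for t in range(10 * P)]
# dynamically disordered: magnetization follows the field, zero mean
disordered = [0.9 * math.sin(2 * math.pi * t / P) for t in range(10 * P)]
print(dynamic_order_parameter(ordered, P))      # close to 0.8
print(dynamic_order_parameter(disordered, P))   # close to 0
```

Averaging only over complete cycles matters: a partial final cycle would bias $Q$ towards the sign of the field during that fragment.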
\begin{figure}[!h]
\center
\subfigure[\hspace{0cm}] {\includegraphics[width=6.5cm]{fig7}}
\caption{Variation of the dynamic order parameter $Q_{T}$ as a function of temperature for $L=128$. The magnetic field $h(t)$ varies with time either in sinusoidal or in square wave form.
System parameters accompany the figure.} \label{fig7}
\end{figure}
Finally, let us conclude our investigation of the Ising bilayer system by examining the variation of the compensation phenomenon as a function of the field period $P$.
In Fig. \ref{fig7}, the thermal variation of $Q_{T}$ has been depicted for both sinusoidal and square wave forms of the magnetic field. Here, the field amplitude has been fixed as $h_{0}=0.4J$, and we consider two different values of the field period $P$.
For both square and sinusoidal wave forms of the magnetic field, the order parameter $Q_{T}$ maintains its $N$- type profile for high and low frequency perturbations.
Our simulation results also show that increasing the magnetic field period causes a decline in the critical and compensation temperature values. However, in Ref. \cite{vatansever2}, it has been reported that
the field period does not alter the compensation behavior of a mixed ferrimagnetic bulk system. In this regard, it can be concluded that the mechanism behind the variation of the compensation behavior with respect to the stochastic dynamics
in low dimensional systems such as magnetic bilayers may be rather different from that in bulk systems.
\section{Conclusion}\label{conclusion}
We have performed Monte Carlo simulations regarding the magnetic properties of an Ising bilayer system defined on a couple of stacked honeycomb lattices where the sublattices $A$ and $B$ interact via indirect exchange coupling $J_{R}$.
In the first part of our analysis, we have investigated the equilibrium ferrimagnetic properties of the system, and we obtained $P$-, $N$-, $Q$- type magnetization profiles which have been classified according to
the N\'{e}el classification scheme. The compensation phenomenon suddenly disappears with decreasing strength of the indirect ferrimagnetic interlayer exchange coupling.
We have also compared the obtained results with those reported in the literature, and found that
MC simulations qualitatively reproduce the magnetization curves obtained from EFT. In this regard, we have concluded that the EFT method yields results with the same topology as those obtained from
MC simulation, at less computational cost. In the second part of our analysis, we have focused on the evolution of the compensation behavior observed in the system in the presence of a time-dependent magnetic field.
Two different forms for the time dependence of the dynamic magnetic field have been considered, namely sinusoidal oscillations and a square wave form. For both cases, the compensation point $T_{comp}$ and the transition temperature $T_{c}$ tend to decrease with
increasing field amplitude $h_{0}$. Increasing the field period $P$ also leads to the same consequence. For fixed values of $h_{0}$ and $P$, the obtained $T_{comp}$ and $T_{c}$ values for a square wave
are clearly lower than those obtained for the sinusoidally oscillating
magnetic fields.
The investigation of the dynamical critical properties of magnetic spin systems has revealed very rich physical phenomena, and these systems promise even more interesting and novel features. For instance, whether the critical exponents of the magnetization
and magnetic susceptibility exhibit any dimensional crossover as the geometry of the kinetic Ising bilayer system evolves from graphene-like structure to a graphite-like topology seems to be an interesting problem.
However, this will be the subject of our near future work.
\section*{Acknowledgements}
The numerical
calculations reported in this paper were performed
at TUBITAK ULAKBIM High Performance and Grid
Computing Center (TR-Grid e-Infrastructure).
\section*{References}
\section{Introduction}
The rate of criminal activities by individuals and threats by terrorist groups has been on the rise in recent years. The law enforcement agencies have been motivated to use video surveillance systems to monitor and curb these threats. Many automated video surveillance systems have been developed in the past to monitor abandoned objects (bags)~\cite{li2010abandoned}, theft~\cite{chuang2009carried}, fire or smoke~\cite{seebamrungsat2014fire}, violent activities~\cite{goya2009method}, etc.
Li et al.~\cite{li2010abandoned} developed a video surveillance system to identify abandoned objects with the use of Gaussian mixture models and a Support Vector Machine. This system is robust to illumination changes and performs with an accuracy of 84.44\%. It is vital for the detection of abandoned bags in busy public areas, which may contain bombs. Chuang et al.~\cite{chuang2009carried} used a forward-backward ratio histogram and a finite state machine to recognize robberies. This system has proven to be very useful around automatic teller machines (ATMs) and has detected 96\% of theft cases. Seebamrungsat et al.~\cite{seebamrungsat2014fire} presented a fire detection system based on HSV and YCbCr color models, as this combination distinguishes bright images more efficiently than RGB models. The system has been shown to detect fire with an accuracy of more than 90.0\%. Goya et al.~\cite{goya2009method} introduced a Public Safety System (PSS) for identifying criminal actions such as purse snatching, child kidnapping, and fighting, using distance, velocity, and area to determine human behavior. This system can identify criminal actions with an accuracy of around 85\%.
These reported systems have been very successful in detecting and reporting various criminal activities. Despite their impressive performance (more than 90\% accuracy), the area these systems can monitor is limited due to the restricted field of view of the cameras. The law enforcement agencies have therefore been motivated to use aerial surveillance systems to surveil large areas. Governments have recently deployed drones in war zones to monitor hostiles, to spy on foreign drug cartels~\cite{padgett2009drones}, to conduct border control operations~\cite{walters2010ucav}, as well as to find criminal activity in urban and rural areas~\cite{lewis2010cctv}. Most of these drones are piloted by one or more soldiers for long durations, which makes these systems prone to mistakes due to human fatigue.
Surya et al.~\cite{penmetsa2014autonomous} proposed an autonomous drone surveillance system capable of detecting individuals engaged in violent activities in public areas. This first-of-its-kind system used the deformable parts model~\cite{felzenszwalb2005pictorial} to estimate human poses, which are then used to identify suspicious individuals. This is an extremely challenging task, as the images or videos recorded by the drone can suffer from illumination changes, shadows, poor resolution, and blurring. In addition, humans can appear at different locations, orientations, and scales. Despite these complications, the system detects violent activities with an accuracy of around 76\%, which is far below the greater than 90\% performance of ground surveillance systems.
This paper introduces an improved real-time autonomous drone surveillance system to identify violent individuals in public areas. The proposed method first uses the feature pyramid network (FPN)~\cite{hd} to detect humans in the aerial image. Next, the proposed ScatterNet Hybrid Deep Learning (SHDL) network is used to estimate the pose of each detected human. Finally, the orientations between the limbs of the estimated pose are used by a support vector machine (SVM) to identify individuals engaged in violent activities.
The novelties of the proposed system and the advantages over Surya et al.'s~\cite{penmetsa2014autonomous} technique are detailed below:
\begin{itemize}
\setlength\itemsep{-0.35em}
\item \textbf{\textit{Accurate Human Pose Estimation}}: The proposed system uses the SHDL network for human pose estimation. Deep networks have achieved state-of-the-art pose estimation performance using high-level features~\cite{li20143d, pfister2014deep, toshev2014deeppose}, which gives the proposed system a competitive edge.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{2}
\end{center}
\caption{\small{Illustration presents the violent activities from the introduced AVI dataset namely (clockwise from top) (i) Strangling,
(ii) Punching, (iii) Kicking, (iv) Shooting and (v) Stabbing. The image of shooting activity involves multiple people in the same frame.}}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale = 0.35]{1}
\end{center}
\caption{\small{The figure (left) illustrates the 14 body key-points annotated on the human body. The description of the human body points is as Facial Region (Purple): P1-Head, P2- Neck; Arms Region (Red): P3- Right shoulder, P4- Right Elbow, P5- Right Wrist, P6- Left Shoulder, P7- Left Elbow, P8- Left Wrist; Legs Region (Green): P9-Right Hip, P10- Right Knee, P11-Right Ankle, P12- Left Hip, P13- Left Knee, P14- Left Ankle. The figure (right) shows the Parrot AR Drone used to capture the images in the dataset and close-ups of few annotated keypoints.}}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\item \textbf{\textit{ScatterNet Hybrid Deep Network}}: The proposed SHDL network for pose estimation is composed of a hand-crafted ScatterNet front-end and a supervised learning based back-end formed of the modified coarse-to-fine deep regression network~\cite{belagiannis2015robust}, referred to from now on as the regression network (RN). The SHDL network is constructed by replacing the first convolutional, ReLU, and pooling layers of the coarse-to-fine deep regression network~\cite{belagiannis2015robust} with the hand-crafted parametric log ScatterNet~\cite{singh}. This accelerates the learning of the regression network (RN), as the ScatterNet front-end extracts invariant (translation, rotation, and scale)~\cite{sifre2013} edge features that can be used directly to learn more complex patterns from the start of training. These invariant edge features are beneficial for this application, as humans can appear with such variations in aerial images.
\item \textbf{\textit{Rapid Training with Structural Priors}}: Training the SHDL network can be slow, as it requires the optimization of several hyperparameters. Training is accelerated by initializing the CNN layer filters of the regression network with structural priors learned (unsupervised) using the PCANet~\cite{pcanet} framework (Fig. 3). The initialization with priors also reduces the need for sizeable labeled training datasets, which is especially advantageous for this task and other applications~\cite{tsshdl,jain}, as it can be expensive and time-consuming to generate keypoint annotations.
\item \textbf{\textit{Real-time Identification}}: The proposed system performs the computation- and memory-demanding SHDL network processes, along with the activity classification, on the cloud while keeping short-term navigation onboard. This allows the system to identify violent individuals in real-time, which is an improvement over the previous work of Surya et al.~\cite{penmetsa2014autonomous}.
\item \textbf{\textit{Aerial Violent Individual (AVI) Dataset}}: The paper presents the Aerial Violent Individual (AVI) dataset of 2000 annotated images containing 10863 individuals, of whom 5124 are engaged in violent activities. The AVI dataset contains images of humans recorded with variations in scale, position, illumination, blurriness, etc. This dataset may encourage researchers interested in using deep learning for aerial surveillance applications.
\end{itemize}
The proposed Drone Surveillance System (DSS) is used to identify the individuals engaged in violent activities from aerial images. The pose estimation and activity classification performance of the system is compared with the state-of-the-art techniques.
The paper is divided into the following sections. Section 2 presents the introduced AVI dataset while Section 3 introduces the proposed DSS system. Section 4 details the experimental results and Section 5 concludes this research.
\section{Aerial Violent Individual (AVI) Dataset}
This research proposes the annotated Aerial Violent Individual (AVI) dataset, which the proposed SHDL network uses to learn pose estimation. The dataset is composed of 2000 images, each containing between two and ten humans. The complete dataset consists of 10863 humans, with 5124 (48\%) engaged in one or more of the five violent activities of (1) Punching, (2) Stabbing, (3) Shooting, (4) Kicking, and (5) Strangling, as shown in Fig. 1. Each human in the aerial image frame is annotated with 14 key-points, which the proposed network uses as labels for learning pose estimation, as shown in Fig. 2. The activities are performed by 25 subjects between the ages of 18 and 25 years. The images are recorded from the Parrot drone at four heights of 2m, 4m, 6m and 8m (m: meters).
The violent individual identification task from these aerial images is an extremely challenging problem as these images can be affected by illumination changes, shadows, poor resolution, and blurring. In addition to these variations, the humans can appear at different locations, orientations, and scales. The proposed dataset includes images with the above-detailed variations as these can significantly alter the appearance of the humans and affect the performance of the surveillance systems. The SHDL network, when trained on the AVI dataset with these variations, can learn to recognize humans despite these variations.
\section{Drone Surveillance System}
This section presents the Drone Surveillance System (DSS) for the identification of individuals engaging in violent activities. The system first uses the feature pyramid network (FPN)~\cite{hd} to detect humans from the images recorded by the drone. The proposed ScatterNet Hybrid Deep Learning (SHDL) Network is then used to estimate the pose of each detected human. Finally, the orientations between the limbs of the estimated pose are used to identify the violent individuals. The system uses cloud computation to achieve the identification in real-time. Each part of the Drone Surveillance System (DSS) is explained in the following sub-sections.
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{3n}
\end{center}
\caption{\small{Illustration presents the \textit{human pose estimation pipeline} that can be used to detect violent individuals in public areas or large gatherings. The DSS framework first uses the image recorded by the drone to discover the humans within the image using the FPN network~\cite{hd}. The image regions containing the humans are given as input to the proposed SHDL network to detect 14 key-points on the body for pose estimation. The proposed SHDL network uses the ScatterNet (front-end) to extract hand-crafted features from the input region at L0, L1, and L2 using DTCWT filters at two scales and six fixed orientations. The hand-crafted features extracted from the three layers are concatenated and given as input to the 4 convolutional layers of the Regression Network (RN) (L3, L4, L5, L6) (back-end) with 32, 32, 64 and 64 filters. Each RN convolutional layer is initialized with the PCA-based structural priors with the same number of filters. PCA layers can learn undesired checkerboard filters (shown in red), which are avoided and not used as priors for the Regression Network. To detect and remove the checkerboard filters from the learned filter set, we use the method defined in~\cite{geiger2012}. The ScatterNet and structural priors have been shown to improve the training of the proposed SHDL network as compared to the original coarse-to-fine regression network~\cite{belagiannis2015robust} (which was modified to obtain the SHDL), as shown in the convergence graph. The hand-crafted filters for the ScatterNet, the learned structural PCA priors, and the learned filters of the regression network (RN) are shown.}}
\label{fig:short}
\end{figure*}
\subsection{Human Detection}
The DSS system makes use of the feature pyramid network (FPN)~\cite{hd} to detect humans quickly in the images recorded by the drone. The FPN network detects humans by leveraging the pyramidal shape of a ConvNet's feature hierarchy to build, from a single input image scale, a feature pyramid with rich semantics at all levels.
\subsection{ScatterNet Hybrid Deep Learning Network}
This section details the proposed ScatterNet Hybrid Deep Learning (SHDL) network, inspired by Singh et al.'s work in~\cite{eff,shdl2017,tsshdl, GSHDL}, composed by combining the hand-crafted (front-end) two-layer parametric log ScatterNet~\cite{singh} with the regression network (RN) (back-end) shown in Fig. 3. The ScatterNet accelerates the learning of the SHDL network by extracting invariant edge-based features, which allow the SHDL network to learn complex features from the start of training~\cite{eff}. The regression network also uses structural priors to expedite the training as well as reduce the dependence on annotated datasets. The ScatterNet (front-end) and regression network (RN) (back-end) parts of the proposed SHDL network are presented below.
\vspace{0.5mm}
\textbf{\textit{ScatterNet (front-end)}}: The parametric log based DTCWT ScatterNet~\cite{singh} is an improved version of the hand-crafted multi-layer Scattering Networks~\cite{Jbruna2013,ima,eccv} proposed over the years. The parametric log ScatterNet extracts relatively symmetric translation invariant representations using the \textit{dual-tree complex wavelet transform} (DTCWT)~\cite{Kingsbury1998} and a parametric log transformation layer. The ScatterNet features are denser over scale as they are extracted from multi-resolution images at 1.5 times and twice the size of the input image. Below we present the formulation of the parametric DTCWT ScatterNet for a single input image, which may then be applied to each of the multi-resolution images.
The parametric log ScatterNet is a hand-crafted two-layer network which extracts translation invariant feature representation from an input image or signal. The invariant features are obtained at the first layer by filtering the input signal $x$ with dual-tree complex wavelets (better than cosine transforms~\cite{wavdct}) $ \psi_{j,r }$ at different scales ($j$) and six pre-defined orientations ($r$) fixed to $15^\circ, 45^\circ, 75^\circ, 105^\circ, 135^\circ$ and $165^\circ$. To build a more translation invariant representation, a point-wise $L_{2}$ non-linearity (complex modulus) is applied to the real and imaginary part of the filtered signal:
\begin{equation}
U[\lambda_{m = 1}] = |x\star \psi_{\lambda_{1} }| = \sqrt{|x\star \psi_{\lambda_{1} }^{a}|^2 + |x\star \psi_{\lambda_{1} }^{b}|^2}
\end{equation}
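Eq. 1 can be illustrated with a small numpy sketch. The random $5\times 5$ kernels below merely stand in for the real ($a$) and imaginary ($b$) parts of an actual DTCWT wavelet, which are assumed unavailable here; the point is only the point-wise $L_2$ modulus.

```python
import numpy as np

# Sketch of Eq. 1 under stated assumptions: random kernels stand in for the
# real (a) and imaginary (b) parts of the wavelet psi_{lambda_1}.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))      # input image patch
psi_a = rng.standard_normal((5, 5))    # real part of the wavelet (stand-in)
psi_b = rng.standard_normal((5, 5))    # imaginary part (stand-in)

def conv2d_valid(img, k):
    """Plain 2-D 'valid' correlation via sliding windows."""
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+h, j:j+w] * k)
    return out

# Point-wise L2 non-linearity (complex modulus) over both responses.
U = np.sqrt(conv2d_valid(x, psi_a) ** 2 + conv2d_valid(x, psi_b) ** 2)
```

The modulus discards the phase of the filtered signal, which is what makes the representation more translation invariant.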
The parametric log transformation layer is then applied to all the oriented representations extracted at the first scale $j=1$ with a parameter $k_{j=1}$, to reduce the effect of outliers by introducing relative symmetry of pdf~\cite{singh}, as shown below:
\begin{equation}
U1[j] = \log(U[j] + k_{j}), \quad U[j] = |x\star \psi_{j}|,
\end{equation}
Next, a local average is computed on the envelope $|U1[\lambda_{m = 1}]|$ that aggregates the coefficients to build the desired translation-invariant representation:
\begin{equation}
S_{1}[\lambda_{m = 1}] = |U1[\lambda_{m = 1}]| \star \phi_{2^J}
\end{equation}
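Eqs. 2 and 3 can be sketched together, assuming a stand-in oriented response map $U$, an assumed offset $k_j = 1$, and a simple box blur in place of the true low-pass scaling function $\phi_{2^J}$.

```python
import numpy as np

# Sketch of Eqs. 2-3 under stated assumptions (stand-in data and filters).
rng = np.random.default_rng(1)
U = np.abs(rng.standard_normal((16, 16)))  # |x * psi_j| (stand-in response)
k_j = 1.0                                  # parametric log offset (assumed)

U1 = np.log(U + k_j)                       # Eq. 2: parametric log transformation

def box_average(a, size=4):
    """Local average standing in for the low-pass filter phi_{2^J}."""
    out = np.zeros((a.shape[0] // size, a.shape[1] // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = a[i*size:(i+1)*size, j*size:(j+1)*size].mean()
    return out

S1 = box_average(np.abs(U1))               # Eq. 3: translation-invariant output
```

The averaging trades spatial resolution for invariance, which is why the lost high frequencies are recovered at the second layer.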
The high frequency components lost due to smoothing are retrieved by cascaded wavelet filtering performed at the second layer. Translation invariance is introduced into these features by applying the $L_2$ non-linearity followed by local averaging, as explained above for the first layer~\cite{singh}.
The scattering coefficients at L0, L1, and L2 are:
\vspace{-0.3em}
\begin{equation}
S = \begin{pmatrix}
x \star \phi_{2^J},
S_{1}[\lambda_{m = 1}],S_{2}[\lambda_{m = 1},\lambda_{m = 2}] \star \phi_{2^J}
\end{pmatrix}
\end{equation}
Rotation and scale invariance are next obtained by filtering jointly across the position ($u$), rotation ($\theta$), and scale ($j$) variables, as detailed in~\cite{sifre2013}.
The features extracted from each multi-resolution image at L0, L1, and L2 are concatenated and given as input to the regression network (RN), which learns high-level features for human pose estimation. The ScatterNet features help the proposed SHDL network converge faster: since the ScatterNet already extracts invariant edges, the convolutional layers of the regression network can learn complex patterns from the start of training instead of waiting for a first layer to learn those edges.
\textbf{\textit{Pose Estimation with Structural Priors (back-end):}}
The invariant ScatterNet features are used by the regression network (RN) of the SHDL network to learn pose estimation from the AVI dataset. The regression network was constructed by removing the first convolutional, ReLU, pooling, and normalization layers of the coarse-to-fine deep regression network~\cite{belagiannis2015robust}. The regression network (RN) of the SHDL is composed of four convolutional layers (L3 to L6), two fully connected layers, and normalization and max-pooling layers, as shown in Fig. 3.
The training objective is to estimate the optimal weights of the filters in the convolutional layers using the AVI training dataset $D = (S; Y)$ by minimizing Tukey's biweight loss function~\cite{belagiannis2015robust}. Here $S$ are the ScatterNet features extracted from the input image ($X$), while $Y$ is a 28-element vector of $(x, y)$ coordinates corresponding to the 14 key-points annotated on the human body, as shown in Fig. 2. The network is optimized using backpropagation with stochastic gradient descent, and dropout is used to avoid overfitting. Tukey's biweight loss function is very efficient as it suppresses the influence of outliers during backpropagation by reducing the magnitude of the gradient close to zero~\cite{belagiannis2015robust}.
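The outlier-suppressing behavior of Tukey's biweight loss can be sketched as follows. This is a minimal numpy illustration using the standard tuning constant $c \approx 4.6851$, not the authors' implementation; residuals beyond $c$ contribute a constant loss, so their gradient vanishes.

```python
import numpy as np

def tukey_biweight(residual, c=4.6851):
    """Tukey's biweight loss; c = 4.6851 is the standard tuning constant.
    Residuals with |r| > c get a constant (saturated) loss, so their
    gradient is zero and outliers stop influencing backpropagation."""
    r = np.asarray(residual, dtype=float)
    inside = np.abs(r) <= c
    rho = np.full_like(r, c**2 / 6.0)  # saturated value for outliers
    rho[inside] = (c**2 / 6.0) * (1.0 - (1.0 - (r[inside] / c) ** 2) ** 3)
    return rho

# Small residuals behave like a scaled squared error; large ones saturate.
losses = tukey_biweight(np.array([0.0, 1.0, 10.0, -10.0]))
```
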
\textbf{\textit{Structural Priors}}: Each convolutional layer (L3 to L6) of the regression network (RN) of the SHDL network is initialized with structural priors to accelerate the training. The Structural priors are obtained for each layer using the PCANet~\cite{pcanet} framework that learns a family of orthonormal filters by minimizing the following reconstruction error:
\begin{figure}[t]
\begin{center}
\includegraphics[scale = 0.5]{4}
\end{center}
\caption{
The Illustration shows the skeleton corresponding to the humans in an image. The angles (shown in green for few limbs) between the various limbs in this structure are used by the SVM to recognize the humans engaged in violent activities.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\begin{equation}
\min_{V \in R^{z_{1}z_{2}\times K} } \left \|X-VV^TX \right \|_{F}^2,\ s.t.\ V^TV = I_{K}
\end{equation}
where $X$ is the matrix of patches sampled from $N$ training features and $I_K$ is the identity matrix of size $K \times K$. The solution of Eq. 5, in its simplified form, is the $K$ leading principal eigenvectors of $XX^T$, obtained via eigendecomposition.
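The prior computation can be illustrated with a small numpy sketch. The patch matrix below is random stand-in data (not features from the AVI dataset), and the dimensions are assumed values for illustration.

```python
import numpy as np

# Sketch of structural-prior learning (Eq. 5) on stand-in data: the priors
# are the K leading eigenvectors of X X^T, with X the matrix of patches.
rng = np.random.default_rng(2)
patch_dim, n_patches, K = 25, 500, 8        # z1*z2 = 25, K filters (assumed)
X = rng.standard_normal((patch_dim, n_patches))
X = X - X.mean(axis=1, keepdims=True)       # remove the patch mean

eigvals, eigvecs = np.linalg.eigh(X @ X.T)  # eigendecomposition of X X^T
idx = np.argsort(eigvals)[::-1][:K]         # indices of the K largest
V = eigvecs[:, idx]                         # columns = PCA filter priors
```

Each column of $V$, reshaped to $z_1 \times z_2$, serves as one initial convolutional filter.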
The structural priors for layer 3 (L3) are learned on the ScatterNet features, those for layer 4 (L4) on the layer 3 outputs, those for layer 5 (L5) on the layer 4 outputs, and so on. The structural priors for layers L3 to L6 respond to a hierarchy of features, similar to the features learned by CNNs. These learned priors are used to initialize each convolutional layer, resulting in accelerated training, as shown in Fig. 3 (graph). Since the structural priors are swift to determine, the whole process is much quicker than training CNNs with random weight initialization. The PCA framework may learn undesired checkerboard filters; to detect them in the learned filter set, we use the method defined in~\cite{geiger2012}, and these checkerboard filters are not used as priors.
\subsection{\textbf{\textit{Violent Individual Classification}}}
The 14 key-points identified by the SHDL network are connected to form a skeleton structure, as shown in Fig. 3. The orientations between the limbs of the skeleton structure are derived as shown in Fig. 4. A support vector machine (SVM) is trained on a vector of these orientations for six classes (five violent activities and one neutral activity) to perform multi-class classification. At test time, the orientations between the limbs of the skeleton are given as input to the SVM, which classifies the human as either neutral or assigns the most likely violent activity label.
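A minimal sketch of how limb orientations might be derived from estimated key-points. The coordinates below are illustrative, and taking a limb's orientation as the angle of the vector joining its two joints is an assumption about the feature definition, not the paper's exact formulation.

```python
import numpy as np

# Toy pose: hypothetical (x, y) coordinates for three of the 14 key-points
# (following the P3/P4/P5 right-arm scheme of Fig. 2).
keypoints = {
    "right_shoulder": (2.0, 5.0),
    "right_elbow": (3.0, 4.0),
    "right_wrist": (4.5, 4.0),
}

def limb_angle(p, q):
    """Orientation (radians) of the limb from joint p to joint q."""
    return np.arctan2(q[1] - p[1], q[0] - p[0])

# The relative angle between upper and lower arm is one entry of the
# orientation vector fed to the SVM.
upper = limb_angle(keypoints["right_shoulder"], keypoints["right_elbow"])
lower = limb_angle(keypoints["right_elbow"], keypoints["right_wrist"])
elbow_bend = np.degrees(abs(upper - lower))
```
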
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{5}
\end{center}
\caption{
Illustration shows the pose estimation performance via the detection of key-points for the (a) arms region, which constitutes the wrist, shoulder and elbow, (b) legs region, which includes ankle, knee, and hip, and, (c) facial regions with the head and neck.}
\label{fig:short}
\end{figure*}
\subsection{Drone Image Acquisition and Cloud Processing}
The images that form the AVI dataset, presented in Section 2 are recorded using a Parrot AR Drone. The AR Drone 2.0 consists of two cameras, an Inertial Measurement Unit (IMU) including a 3-axis accelerometer, 3-axis gyroscope and 3-axis magnetometer, and ultrasound and pressure-based altitude sensors. It features a 1 GHz ARM Cortex-A8 as the CPU and runs a Linux operating system. The front-facing camera has a resolution of 1280$\times$720 at 30fps with a diagonal field of view of $92^\circ$ while the downward facing camera is of the lower resolution of 320$\times$240 at 60fps with a diagonal field of view of $64^\circ$. We use the front-facing camera to record the images due to its higher resolution. The downward facing camera estimates the parameters determining the state of the drone such as roll, pitch, yaw, and altitude using the sensors onboard to measure the horizontal velocity. The horizontal velocity calculation is based on an optical flow-based feature as detailed in \cite{bristeau2011navigation}. All the sensor measurements are updated at the 200Hz rate.
The images recorded by the drone are transferred to the Amazon cloud to achieve real-time identification. The slow and memory-intensive computations of the SHDL network are processed on the Amazon cloud while short-term navigation is kept onboard. Cloud computing provides the flexibility of using nearly unlimited computational resources (including GPUs), which gives an edge for applications requiring vast amounts of computational power periodically \cite{goldberg2013cloud}.
\section{Experimental Results}
This section presents the training details and the performance of the Drone Surveillance System (DSS) for the identification of violent individuals on the AVI dataset. The DSS system uses the FPN network~\cite{hd} first to detect the humans, the SHDL network for human pose estimation, and then the orientations of the limbs of the estimated pose are used to identify the violent individuals. The next sections detail the performance of each part of the DSS system. The classification performance is also compared with the state-of-the-art technique proposed by Surya et al.~\cite{penmetsa2014autonomous}, used to identify persons of interest from aerial images.
\subsection{\textbf{\textit{Human Detector}}}
The FPN network~\cite{hd} pre-trained on the 80 category COCO detection dataset is used to detect the humans recorded by the drone in the AVI dataset. The FPN network was able to detect 10558 humans out of the 10863 humans, with an accuracy of 97.2\%.
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{6}
\end{center}
\caption{\small{The figure shows the performance of the Drone Surveillance System (DSS) on aerial images with only one violent individual, recorded using the AR parrot drone at four different heights of 2m (Row 1), 4m (Row 2), 6m (Row 3), and 8m (Row 4) (m: meters). The illustration also shows the individual engaged in different violent activities, namely: Shooting (Column 1), Stabbing (Column 2), Kicking (Column 3), Strangling (Column 4) and Punching (Column 5). The violent individual detected by the DSS framework is shown in red while the neutral human is shown in cyan. The estimated pose is also shown on top of each detected human.}}
\label{fig:short}
\end{figure*}
\subsection{\textbf{\textit{SHDL Parameters and Training}}}
The image regions detected by the FPN network are resized to 120 $\times$ 80 and normalized by subtracting each image region's mean and dividing by its standard deviation.
\textbf{\textit{ScatterNet}}: The resultant image region is given as input to the ScatterNet (SHDL front-end) which extracts invariant edge representations at L0, L1, and L2 using DTCWT filters at 2 scales, and 6 fixed orientations.
\textbf{\textit{Regression Network with Structural Priors}}: The regression network (SHDL back-end) with four convolutional layers (L3-L6) is trained on the concatenated ScatterNet features (L0, L1, and L2) extracted from the 10558 image regions (detected by FPN network, Section 4.1). The network was trained on randomly selected 6334 image regions (60\%), validated against 2111 image regions (20\%) and tested on the remaining 2113 image regions (20\%). The network parameters are as follows: The base learning rate is $10^{-5}$, which we decrease to $10^{-6}$ after 20 iterations, the dropout is 0.5, the batch size is 20, and the total number of iterations (epochs) is 90. The filters of the convolutional layers are initialized with structural priors which are shown to accelerate the training as compared to the DeepPose network~\cite{toshev2014deeppose} as detailed in Section 3.2 and illustrated from the convergence graph in Fig. 3.
\subsection{\textbf{\textit{Key-Point Detection Performance}}}
The pose estimation performance of the SHDL network is evaluated by comparing the coordinates of the detected 14 key-points with their ground truth values on the annotated dataset. The key-point is deemed correctly located if it is within a set distance of $d$ pixels from a marked key-point in the ground truth, as shown in Fig. 5 via the accuracy vs. distance graphs, for different regions of the body.
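The evaluation criterion can be sketched as follows, assuming Euclidean pixel distance between predicted and annotated key-points; the coordinates below are made up for illustration and are not taken from the AVI dataset.

```python
import numpy as np

# Hypothetical predicted and ground-truth key-point coordinates (pixels).
pred = np.array([[10.0, 12.0], [40.0, 41.0], [70.0, 90.0]])
gt   = np.array([[11.0, 12.0], [46.0, 41.0], [70.0, 88.0]])

def keypoint_accuracy(pred, gt, d=5.0):
    """Fraction of key-points lying within d pixels of the ground truth."""
    dist = np.linalg.norm(pred - gt, axis=1)  # Euclidean distance per point
    return float((dist <= d).mean())

# Distances here are 1, 6 and 2 pixels, so 2 of 3 points count as correct.
acc = keypoint_accuracy(pred, gt, d=5.0)
```

Sweeping $d$ over a range of pixel distances produces the accuracy-vs-distance curves of Fig. 5.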
The key-point detection analysis for the arms, legs, and facial regions is presented below.
\textbf{\textit{Arms Region}}: The arm region constitutes six key-points, namely: wrist key-points (P5 and P8), shoulder key-points (P3 and P6), and elbow key-points (P4 and P7), as shown in Fig. 2. Fig. 5(a) indicates that the SHDL network can detect the wrist key-points with an accuracy of around 60\%, for a pixel distance of d=5. The detection accuracy is much higher for the elbow and shoulder regions, at roughly 85\% and 95\% respectively, for the same pixel distance (d=5).
\textbf{\textit{Legs Region}}: The leg region constitutes six key-points, namely: hip key-points (P9, P12), knee key-points (P10, P13), and ankle key-points (P11, P14), as shown in Fig. 2. Fig. 5(b) indicates that the SHDL network detects hip key-points with almost 100\% accuracy for a pixel distance of d=5. The detection accuracy is between 85\% and 90\% for the knee key-points, while the detection rate falls to around 85\% for the ankle key-points.
\textbf{\textit{Facial Region}}: The facial region constitutes two key-points, one on the head (P1) and the other on the neck (P2), as shown in Fig. 2. The algorithm detects the neck key-point (P2) more accurately than the head key-point (P1), with an accuracy of around 95\% as opposed to roughly 77\%, for a pixel distance of d=5, as shown in Fig. 5(c).
The human pose estimation performance of the SHDL network on the Aerial Violent Individual (AVI) dataset is presented in Table 1. As observed from the Table, the SHDL network estimates the human pose based on the 14 key-points at d = 5 pixel distance from the ground-truth, with 87.6\% accuracy.
\begin{table}[!h]%
\centering
\begin{tabular}{c|cccc}
\hline
\multicolumn{1}{c}{Dataset} & \multicolumn{4}{c}{Deep Learning Networks} \\
\hline
& SHDL & CN & CNE & SpatialNet \\
\cline{2-5} \hline
\small{AVI} & \textbf{87.6} & 79.6 & 80.1 & 83.4 \\
\end{tabular}
\newline
\caption{{Comparison of the human pose estimation performance of the SHDL network with the Coordinate network (CN)~\cite{cne}, Coordinate extended network (CNE)~\cite{cne}, and Spatial network~\cite{pfister2015flowing}, based on the detection of the 14 key-points. The evaluation is presented on the AVI dataset for a maximum allowed distance of 5 pixels (d=5) from the annotated ground truth.}}
\end{table}
The human pose estimation performance of the SHDL network is also compared with several state-of-the-art pose estimation methods, namely CoordinateNet (CN)~\cite{cne}, CoordinateNet extended (CNE)~\cite{cne}, and SpatialNet~\cite{pfister2015flowing}. The proposed SHDL network outperforms all of them by a decent margin.
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{7}
\end{center}
\caption{The figure shows the performance of the Drone Surveillance System (DSS) on aerial images with multiple humans engaging together in different violent activities. The violent individuals are highlighted in red color and neutral human in cyan color. }
\label{fig:short}
\end{figure*}
\subsection{\textbf{\textit{Violent Individuals Identification}}}
The detected key-points are connected to form a skeleton structure, as shown in Fig. 3. The orientations between the limbs are concatenated into a vector. A support vector machine (SVM) with a Gaussian kernel is trained on the orientation vectors of 6334 randomly selected human poses (60\%), covering the five violent activity classes and one neutral class, to perform multi-class classification. The SVM parameter $c$ is set to 14 and the gamma parameter to 0.00002 using 5-fold cross-validation on the training set. The classification accuracy of each violent activity on the AVI dataset, measured on 4224 (40\%) held-out human poses, is shown in Table 2.
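The Gaussian (RBF) kernel underlying this SVM can be sketched as follows. The orientation vectors below are hypothetical, and the gamma value matches the one reported in the text; this illustrates only the similarity measure, not the trained classifier.

```python
import numpy as np

gamma = 0.00002  # gamma value reported in the text

def rbf_kernel(x, y, gamma=gamma):
    """Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-gamma * np.dot(diff, diff)))

# Identical orientation vectors have maximal similarity 1; dissimilar
# poses yield a value between 0 and 1. Angles here are hypothetical.
k_same = rbf_kernel([45.0, 90.0, 30.0], [45.0, 90.0, 30.0])
k_diff = rbf_kernel([45.0, 90.0, 30.0], [120.0, 10.0, 80.0])
```

The very small gamma reflects that limb angles span hundreds of degrees squared, so the kernel must decay slowly with squared distance.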
\begin{table}[!h]%
\centering
\scalebox{0.9}{
\begin{tabular}{c|ccccc}
\hline
\multicolumn{1}{c}{\small{Dataset}} & \multicolumn{5}{c}{Violent Activities} \\
\hline
& \small{Punching} & \small{Kicking} & \small{Strangling} & \small{Shooting} & \small{Stabbing} \\
\cline{2-6} \hline
\small{DSS} & 89 & 94 & 85 & 82 & 92 \\
\small{\small{Surya}~\cite{penmetsa2014autonomous}} & 80 & 84 & 73 & 73 & 79\\
\end{tabular}
}
\newline
\caption{Classification accuracies (\%) for the violent activities on the Aerial Violent Individual (AVI) dataset.}
\end{table}
The accuracy of the strangling and shooting activities are relatively lower due to their similarity as shown in Fig. 6.
Next, the classification accuracy for a varying number of human subjects engaged in violent activities per image is shown in Table 3.
\begin{table}[!h]%
\centering
\scalebox{0.9}{
\begin{tabular}{c|ccccc}
\hline
\multicolumn{1}{c}{\small{Dataset}} & \multicolumn{5}{c}{No. of Violent Individuals (Per Image)} \\
\hline
& \small{1} & \small{2} & \small{3} & \small{4} & \small{5} \\
\cline{2-6} \hline
\small{DSS} & 94.1 & 90.6 & 88.3 & 87.8 & 84.0
\end{tabular}
}
\newline
\caption{The table presents the classification accuracies (\%) as the number of individuals engaged in violent activities increases, for aerial images taken from the Aerial Violent Individual (AVI) dataset.}
\end{table}
The accuracy of the DSS system decreases as the number of humans in the aerial image increases. This can be due to the inability of the FPN network~\cite{hd} to locate all the humans, or the inability of the SHDL network to estimate the poses accurately. An incorrect pose can produce a wrong orientation vector, which can lead the SVM to classify the activity incorrectly.
The results presented in the above table are encouraging, as the system is more likely to encounter multiple people in an image frame. The DSS framework applied to images with different numbers of people engaged in violent activities is shown in Fig. 7.
The classification performance is also compared with the state-of-the-art technique developed to recognize persons of interest from aerial images~\cite{penmetsa2014autonomous}, as shown in Table 4. The proposed Drone Surveillance System (DSS) outperforms this method by more than 10\% on the AVI dataset.
\begin{table}[!h]%
\centering
\begin{tabular}{c|c c }
\hline
\multicolumn{1}{c}{Dataset} & \multicolumn{2}{c}{Comparison} \\
\hline
& DSS & state-of-the-art~\cite{penmetsa2014autonomous} \\
\cline{2-3} \hline
\small{AVI} & \textbf{88.8} & 77.8 \\
\end{tabular}
\newline
\caption{
The table shows the comparison of the violent individual identification performance of the proposed system against the state-of-the-art technique~\cite{penmetsa2014autonomous}.}
\end{table}
\subsection{\textbf{\textit{Runtime Performance}}}
The runtime performance of the DSS framework is computed on the cloud and consists of three parts: (i) detecting humans using the FPN network, (ii) human pose estimation using the SHDL network, and (iii) classification of the estimated pose. The deep learning framework was accelerated using the cuDNN framework and NVIDIA Tesla GPUs. The system detected violent individuals at 5 fps to 16 fps for a maximum of ten and a minimum of two people, respectively, in the aerial image frame. The processing rate varies depending on the number of individuals within the image frame.
\section{Conclusions}
The paper proposed the real-time Drone Surveillance System (DSS) framework that can detect one or more individuals engaged in violent activities from aerial images. The framework first uses the FPN network to detect humans after which the proposed SHDL network is used to estimate the pose of the humans. The estimated poses are used by the SVM to identify violent individuals.
The proposed SHDL network uses ScatterNet features with structural priors to achieve accelerated training with relatively few labeled examples. The use of fewer labeled examples for pose estimation is beneficial for this application, as it is expensive to collect annotated examples. The paper also introduced the Aerial Violent Individual (AVI) dataset, which can benefit other researchers aiming to use deep learning for aerial surveillance applications. The proposed DSS framework outperforms the state-of-the-art technique on the AVI dataset. This framework will be instrumental in detecting individuals engaged in violent activities in public areas or large gatherings.
{\small
\bibliographystyle{ieee}
\section{Introduction}
The ability to efficiently analyse human genomes is a key component of the emerging vision for preventive, predictive, and personalised medicine \cite{Hood2011}. Genome analysis aims to discover genetic variants that help diagnose genetic diseases in clinical practice, or predict risk factors, e.g., for certain types of cancer \cite{Tian2012}. A single exome contains about 10-15GB of data (encoded as a compressed FastQ file), while a whole genome totals up to 1TB. Depending on the specific kind of analysis, state-of-the-art variant discovery and interpretation processes may take up to 10 hours to process a single exome. As whole-genome sequencing at population scale becomes economically affordable, personalised medicine will therefore increasingly require scalable variant analysis solutions.
With some variations, variant discovery consists of a pipeline where data flows through a number of well-understood steps, from the raw reads off the sequencing machine, to a list of functionally annotated variants that can be interpreted by a clinician. A number of algorithms, often implemented as open source and publicly available programs, are normally employed to implement each of the steps. A notable example is the GATK suite of programs from the Broad Institute \cite{VanderAuwera2013}, which forms the basis for the study presented in this paper, and is described more in detail below.
The most promising approach for improving the efficiency of the pipeline is to try to exploit the latent parallelism that may be available in some of the data as well as in the algorithms. In particular, there is increasing evidence that Hadoop-based implementations of deep genomic pipelines deployed on a cloud-based cluster can outperform equivalent pipelines that require HPC resources~\cite{Siretskiy2015}. In our own work~\cite{Cala2015} we have shown that a workflow-based implementation that runs on a public cloud infrastructure (Azure) scales better than a script-based HPC version, while providing better cost control. The prevalent approach to achieve parallelism at the level of the single program (see Sec. 1.2) involves partitioning the input to the program in such a way that multiple instances can be executed in parallel, one on each partition, with a merge step at the end. Clearly, this \textit{split-and-merge} pattern only works when the data chunks can be processed independently of one another. In such a case, existing tools can be \textit{wrapped} as part of the pattern, without modification. Recently, however, a new generation of GATK programs has been released (4.0, in beta version at the time of writing), which re-implements a number of the algorithms as Spark programs. In this approach, the task of achieving parallelism is essentially delegated to the Spark infrastructure in combination with HDFS for dataset partitioning.
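The \textit{split-and-merge} wrapping pattern just described can be sketched in a few lines of Python. All names here are hypothetical stand-ins: the motif-counting ``tool'' plays the role of an unmodified wrapped program such as BWA.

```python
def split_and_merge(records, n_chunks, run_tool, merge):
    # Partition the input, run an unmodified tool instance on each chunk
    # independently, then merge the per-chunk results at the end.
    chunks = [records[i::n_chunks] for i in range(n_chunks)]
    return merge(run_tool(chunk) for chunk in chunks)

# Toy stand-in for a wrapped tool: count reads containing a motif.
count_motif = lambda chunk: sum("ACGT" in read for read in chunk)

total = split_and_merge(["ACGTA", "TTTT", "GACGT"], 2, count_motif, sum)
print(total)  # -> 2
```

The pattern is only sound because each chunk is processed independently; a tool whose output depends on seeing the whole input cannot be wrapped this way.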
In this paper we present an initial analysis of the new GATK facilities. We have implemented the reference GATK pipeline in Spark, using the new 4.0 programs when possible, and by wrapping the programs that have not been ported to Spark.
In the rest of the paper we describe this hybrid approach, report on the effort involved in deploying the pipeline both on a single-node Spark configuration and on a cluster, and present an initial performance evaluation on the Azure cloud for a variety of Spark settings, VM configurations, and cluster sizes.
When variant discovery pipelines are used for research purposes, transparency and control over pipeline composition are important factors to consider, especially in view of the rapid advances in the tools. An example of an open-source platform is the Genome Variant Investigation Platform (GenomeVIP)~\cite{Mashl01082017}, which employs GATK in addition to a number of other third party tools. On the other end of the spectrum, ``black box'' variant discovery services are now being offered, notably the new Microsoft Azure Genomics Services. Thanks to a grant from Microsoft, we were able to compare the GATK Spark approach with the new Microsoft Azure Genomics Services. We conclude that the Genomics Services are currently both faster and more cost-effective, when the Spark pipeline is deployed on the Azure cloud and the Spark processing times are translated into commercial rates. These results are preliminary, however, as the GATK Spark tools are still in beta at the time of writing.
\subsection{The Variant analysis pipeline}
We begin by describing the target pipeline in some detail. The pipeline is roughly aligned with the GATK Best Practices guidelines\footnote{\scriptsize \url{https://software.broadinstitute.org/gatk/best-practices/}} and incorporates the latest GATK 4.0 Spark tools. Broadly speaking, it consists of three main phases, as indicated in Fig.~\ref{fig:NGS_Pipeline_GATK}, namely \textit{Pre-processing}, \textit{Variant Discovery}, and \textit{Call Set Refinement}. The pre-processing phase takes the input raw exome dataset, in the FASTQ format, aligns its content (unmapped reads of gene base pairs) against a reference genome such as hg19 or hg38 using the well-known BWA aligner~\cite{Li2010a}, and marks any duplicates, i.e., flags multiple paired reads that are mapped to the same start and end positions. Such reads often originate erroneously from DNA preparation methods and cause biases that skew variant calling; they are therefore flagged for removal from downstream analysis. The BQSR (Base Quality Score Recalibration) step then assigns confidence values to each of the aligned reads, taking into account possible sequencing errors.\footnote{\scriptsize \url{https://software.broadinstitute.org/gatk/documentation/article.php?id=11081}} Finally, Variant Calling, performed using the GATK Haplotype Caller, identifies both single-nucleotide polymorphisms (SNPs) and insertion/deletion mutations (Indels).
Multiple variant files (gVCF), one for each sample, are then bundled together for the next phase, \textit{Variant Discovery}. The specific steps include producing raw SNP and Indel VCF files, building recalibration models for those SNPs and Indels\footnote{\scriptsize \url{https://software.broadinstitute.org/gatk/documentation/article.php?id=2805}} and refining the genotypes, that is, filtering out genotypes with low estimated accuracy. The final phase, \textit{Variant Annotation}, is not part of the Best Practices and thus may be implemented using a variety of third party tools. We used Annovar, a well-known tool for functionally annotating genetic variants detected from diverse genomes~\cite{doi:10.1093/nar/gkq603}. As mentioned later, pre-processing time dominates the entire processing time and thus our performance analysis ignores phases two and three. However, in the following we highlight some of the implementation challenges for these steps.
\subsection{Related work} \label{sec:related}
SparkSeq~\cite{Wiewiorka2014} is a general-purpose library for genomic cloud computing built on top of Spark. Its strengths are its generality and extensibility, as it can be used to build customised analysis pipelines (in Scala). It appears that the library is built from the ground up, i.e., without leveraging existing implementations such as GATK.
In contrast, a general big data platform for genome data analysis, called Gesall, that uses a wrapper approach to reuse existing tools without change is presented in \cite{Mushtaq2015}. Gesall leverages the potential parallelism that is available from some of the existing tools, for instance BWA, by partitioning its input (SAM and BAM files) and then managing the parallel execution of multiple BWA instances. Making this work, however, requires a heavy stack of new MapReduce-based software to be injected between the data layer (HDFS) and the native tools.
A similar approach, namely to segment input data sets and then feed them to multiple instances of the tools, is presented in \cite{Mushtaq2015}. The distinctive element of the resulting framework is to perform load balancing by dividing chromosomal regions according to the number of reads mapped to each chromosome, as opposed to natural chromosome boundaries.
This equalizes the size of each data chunk and, in addition to in-memory data management, achieves substantial speedup over a functionally equivalent but naively implemented Hadoop MapReduce based solution. The advantages of in-memory processing for efficient genome analysis have also been demonstrated recently in other ad hoc frameworks~\cite{Li:2018:HGA:3200691.3178511}. Yet another parallel version of a genomics pipeline that operates by partitioning the input data files is described in~\cite{Roy2017}. In this instance, however, some of the tools have been re-implemented (as opposed to simply wrapped) to explicitly leverage the embarrassingly parallel steps of the pipeline.
In contrast to these efforts, in our experiments we aim to show the potential of the tool re-implementation approach offered by the GATK 4.x tool suite, which are being incrementally ported to the Spark architecture.
\section{Spark hybrid pipeline implementation} \label{sec:implementation}
As mentioned, the main motivation for undertaking this work has been to experiment with a Spark implementation of the GATK Best Practices pipeline, based on the recent release of GATK 4.0. Not only are these tools natively built for Spark, but, compared to the previous version (GATK 3.8), they are also better integrated with each other, for instance avoiding writing intermediate files to disk and thereby increasing efficiency.
\sloppy At the time of writing, however, these new versions of the tools are limited to the pre-processing phase: \texttt{BwaAndMarkDuplicatesPipelineSpark}, \texttt{BQSRPipelineSpark} and \texttt{HaplotypeCallerSpark} (Fig. \ref{fig:NGS_Pipeline_GATK}). Thus, the implementation necessarily required a hybrid approach, whereby pre-processing used the new Spark tools, while for the rest of the pipeline we used a wrapper method. For this, Spark offers a transformation called \texttt{Pipe}, which ``pipes each partition of the RDD through a shell command, e.g. a Perl or bash script. RDD elements are written to the process's stdin and lines output to its stdout are returned as an RDD of strings''. Thus, \texttt{Pipe} allows Bash scripts to execute from within Spark, but not efficiently, as pipelining across the steps requires the content of intermediate RDDs to be written out to files and then be read back in. Looking at Fig.~\ref{fig:NGS_Pipeline_GATK}, it should be clear that the variant discovery phase is a potential bottleneck, as it must process the entire batch of samples, with no parallelism available. However, as it turns out its processing time is negligible compared to that of pre-processing.
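What \texttt{Pipe} does per partition can be illustrated with a small, self-contained Python sketch. This is an emulation for exposition only, not Spark's actual implementation, and the upper-casing ``tool'' is a hypothetical stand-in for a wrapped script:

```python
import subprocess
import sys

def pipe_partition(lines, command):
    # Emulate Spark's Pipe transformation on a single partition: write the
    # partition's elements to the child's stdin, one per line, and return
    # the child's stdout lines as the new partition.
    proc = subprocess.run(command, input="\n".join(lines) + "\n",
                          capture_output=True, text=True, check=True)
    return proc.stdout.splitlines()

# Toy stand-in for a wrapped command-line tool: upper-cases its input.
tool = [sys.executable, "-c",
        "import sys\nfor line in sys.stdin: print(line.strip().upper())"]
print(pipe_partition(["acgt", "ttga"], tool))  # -> ['ACGT', 'TTGA']
```

The inefficiency noted in the text is visible here: every element crosses a process boundary as text, so intermediate results cannot stay in Spark memory.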
\subsection{Single node deployment}
\sloppy The hybrid native Spark/wrapper approach works well for a single-node deployment, as the entire pipeline can be launched using a single bash script that encapsulates the communication with the Spark driver. For a batch of $N$ samples, the \texttt{spark-submit} command spawns one iteration per sample for the pre-processing (\texttt{BwaAndMarkDuplicatesPipelineSpark}, \texttt{BQSRPipelineSpark}, and \texttt{HaplotypeCallerSpark}), followed by a single \texttt{VariantDiscovery} and \texttt{CallsetRefinement} call for the entire batch. The results produced by the execution have been validated against those obtained from our more established, workflow-based pipeline as described in~\cite{Cala2015}.
\begin{figure}
\begin{center}
\centering
\includegraphics[width=.7\textwidth]{NGS_Pipeline_GATK-2.pdf}
\caption{Multi-sample Variant processing pipeline~\label{fig:NGS_Pipeline_GATK}}
\end{center}
\end{figure}
\subsection{Cluster deployment} \label{sec:cluster}
In theory, Spark is designed to facilitate the seamless scaling out of applications over a cluster, with virtually no change to the code. The pre-processing phase of our pipeline would benefit the most from distribution, as it consists of native Spark applications as explained earlier. In reality, the deployment of a complex multi-tool pipeline like the one described requires substantial additional effort, mainly due to the requirement for Spark tools to read input and reference datasets from an HDFS data layer.
Commercial solutions such as Microsoft Azure \textit{HDInsight} provide a preconfigured environment ready to execute Spark in cluster mode. This comes at a substantial cost, however (about twice the cost of an un-configured set of VMs). We therefore undertook the challenge of a manual Spark cluster configuration.
In this section we report on our experience realising a distributed version of the pipeline using a virtualisation approach, based on \textit{Docker Swarm} technology.\footnote{\scriptsize \url{https://docs.docker.com/engine/swarm/}} Our conclusion is that while Swarm greatly simplifies deployment, manual effort is still required especially to satisfy the data access requirements of the various components, and limitations are incurred for the fragments of the pipeline that are implemented using the wrapper method as explained earlier.
Also, a distributed deployment is not always beneficial due to the additional communication overhead associated with a distributed execution, as we show in Sec.~\ref{sec:experiments}.
Swarm extends Docker by providing seamless and automated distribution of Docker containers over a cluster of VMs. A \textit{swarm} is a group of machines, called \textit{nodes}, that run Docker containers and are joined into a cluster. The usual Docker commands are executed on a cluster by a Swarm Manager.
Swarm managers may employ several strategies to run containers, such as ``emptiest node'', which fills the least utilized machines with containers, or ``global'', which ensures that each machine gets exactly one instance of the specified container. Swarm managers are the only machines in a swarm that can execute user commands, or authorize other machines to join the swarm as workers. Workers only provide capacity and do not have the authority to tell any other machine what it can or cannot do. In this context, a \textit{service} is an image for an application that resides in a container and that is deployed over a swarm.
We have used Docker Swarm to deploy both Spark and HDFS over a cluster of nodes, using Docker Hub and Docker Images provided by Big Data Europe\footnote{\scriptsize \url{https://www.big-data-europe.eu/}}, as follows. The first step is to create a swarm, which in our test cluster consists of three nodes: a Swarm Manager and two Swarm Workers, as shown in Fig.~\ref{fig:stacksparkhdfs}. As both Spark and HDFS adopt a master-slave architecture, the masters (Spark Master and HDFS Namenode) are deployed on the Swarm Manager. The slaves (Spark Workers and HDFS Data nodes) are deployed globally, that is, one replica is allocated to each node in the swarm, including the Swarm Manager node. The Docker containers that host these images are connected through a dedicated overlay network.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{stack_spark_hdfs}
\caption{Virtualised Spark and HDFS cluster deployment using Docker Swarm}
\label{fig:stacksparkhdfs}
\end{figure}
Shared data, including all input samples, reference databases, GATK libraries, etc., resides on HDFS and is therefore naturally distributed and replicated over the Data nodes across the cluster. For the most part, this achieves location transparency, as tools need only access the data through Spark HDFS drivers (readers and writers). There are two exceptions, however. Firstly, non-Spark tools expect data to be accessible on a local file system. This is achieved by mounting HDFS Data nodes as virtual Docker volumes so they are accessible from within a Docker container. Secondly, the reference genome had to be replicated to each local Worker file system (see \textit{reference image} in Fig.~\ref{fig:stacksparkhdfs}). This is achieved by encapsulating the dataset itself as a Docker Image container, which is then automatically deployed by Swarm using the ``global'' Swarm mode, as indicated above. One advantage of this encapsulation approach is that it makes it easy to upgrade the reference genome, e.g., from hg19 to the most recent hg38 patch release.
\subsection{Cluster mode pipeline execution} \label{sec:cluster-exec}
A key observation, already made earlier, is that none of the non-Spark programs that make up the pipeline can be distributed. This is the case for the initial step, \texttt{FastqToSam}, as well as for all the steps after pre-processing, which are necessarily executed on the Spark Master container. As the processing time is linear in the number of samples, this justifies allocating a larger VM to the Spark Master.
With this in mind, execution on a cluster consists of four main steps, controlled by a master bash script and summarised in Fig.~\ref{fig:distributedpipeline}. The first step, \texttt{FastqToSam}, is non-Spark and produces local uBAM files, which then need to be distributed across the HDFS nodes (step 2) to be made available to the Spark pre-processing tools (step 3). As explained, these tools communicate through HDFS files and, at the time of writing, are not easy to integrate more deeply, i.e., by sharing intermediate datasets using Spark process memory. Finally, step 4 consists of the execution of non-Spark tools, again on the Spark Master. This requires that outputs that reside on HDFS be moved back to the local file system.
In summary, the deployment may benefit from a partial porting of GATK tools to Spark, however non-GATK tools that escape this porting effort represent bottlenecks. Firstly, because they run in centralised mode, and secondly because of the different file infrastructure they require. Also, Spark tools appear to be designed in isolation, without attempting to eliminate intermediate data passing through HDFS reads and writes.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{distributed_pipeline}
\caption{Pipeline execution flow in cluster mode}
\label{fig:distributedpipeline}
\end{figure}
\section{Experimental evaluation} \label{sec:experiments}
In this section we report preliminary results on the performance of the pipeline. For these experiments we used 6 exomes from anonymised patients, obtained from the Institute of Genetic Medicine at Newcastle University. The sample sizes naturally vary slightly, ranging from 10.8GB to 15.9GB with an average of 13.5GB (compressed). Using these samples, we analysed the runtime of the pipeline implementation described in Sec.~\ref{sec:implementation}, comparing the deployment modes described in the previous section, namely the single-node Spark mode, known as ``pseudo-cluster'' mode, and a cluster-mode configuration with up to four nodes. In both cases, all nodes are identical virtual machines on the Azure cloud with 8 cores and 55GB RAM. Our experiments aim to compare the effect of various Spark settings for each of these configurations.
We focused exclusively on the pre-processing phase, where the bulk of the processing occurs. Specifically, BWA alignment and duplicate marking (denoted BWA/MD in the following) accounts for 38\% of the processing time, Base Quality Score Recalibration Processing (BQSRP) for 11\%, and variant calling using the Haplotype Caller (HC) for 39\%. The rest of the pipeline, which only accounts for 12\% of the processing, was not considered further in these experiments.
Four settings were used to tune the Spark configuration, indicated in the charts as X/Y/W/Z, where X is the driver process memory, Y the number of executors, W the number of cores allocated to each executor, and Z the memory allocated to each executor.
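As an illustration, the X/Y/W/Z shorthand corresponds to the following standard \texttt{spark-submit} options; the helper function and its name are ours:

```python
def spark_submit_flags(setting):
    # Translate the shorthand X/Y/W/Z used in the charts into the
    # corresponding spark-submit options: driver memory, number of
    # executors, cores per executor, and memory per executor.
    driver_mem, num_exec, cores, exec_mem = setting.split("/")
    return ["--driver-memory", driver_mem + "g",
            "--num-executors", num_exec,
            "--executor-cores", cores,
            "--executor-memory", exec_mem + "g"]

print(spark_submit_flags("20/2/4/16"))
```

For example, configuration 20/2/4/16 allocates 20GB to the driver and two 4-core executors with 16GB each.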
\begin{figure}
\centerline{
\subfigure[Configuration 20/2/4/16]{
\includegraphics[width=.5 \linewidth]{pre-proc-1.pdf}
\label{fig:pre-proc-1}}
\quad
\subfigure[Configuration 20/4/2/8]{
\includegraphics[width=.5 \linewidth]{pre-proc2.pdf}
\label{fig:pre-proc2}}
}
\caption{Pre-processing steps for single node deployment configurations}
\label{fig:pre-proc}
\end{figure}
Charts \ref{fig:pre-proc-1} and \ref{fig:pre-proc2} show the processing for two configurations, 20/2/4/16 and 20/4/2/8 respectively, for each of the six samples (ordered by size) and with a breakdown for each pre-processing tool. Both charts show a slight increase in processing time as the sample size increases (with an unexplained anomaly on the 13GB sample in both cases). These times are not significantly affected by the differences in configuration. Indeed, if we normalise the processing time by the input size, we observe very similar figures across the two configurations and for each tool, as shown in Fig.~\ref{fig:avg-time-2-config}. Specifically, for the two configurations BWA/MD, BQSRP, and HC report averages of 19.3 vs 18.4, 5.6 vs 5.3, and 20.2 vs 19.14 minutes/GB, respectively.
\begin{figure}
\centerline{
\subfigure[Average time/ GB for two configurations]{
\includegraphics[width=0.25\linewidth]{avg-time-2-config.pdf}
\label{fig:avg-time-2-config}}
\quad
\subfigure[Pre-processing time/GB (all three steps) across four configurations for the same sample]{
\includegraphics[width=0.5\linewidth]{pre-proc-across-config.pdf}
\label{fig:pre-proc-across-config}}
}
\caption{Normalised pre-processing processing time/GB }
\label{fig:pre-across-config}
\end{figure}
For a deeper analysis of the effect of Spark settings, we then ran the pipeline on one single representative sample (PFC 0028, 14.2GB) with two additional settings, 10/4/2/8 and 10/8/1/6. Fig.~\ref{fig:pre-proc-across-config} shows the results, with processing times normalised by sample size for ease of comparison with the previous chart. Again, there is no indication that these four settings are critical in affecting the processing times.
More significant is the difference in processing time achieved by adding resources to the VMs. Fig.~\ref{fig:scale-up} shows a nearly ideal speedup as we double the number of cores (with a constant 55GB RAM per 8 cores, i.e., 110GB for 16 cores, etc.). It seems, however, that the Spark tools will not benefit from a larger VM beyond 16 cores. Note that the chart in Fig.~\ref{fig:scale-up} does not include the processing time for HC, as this took an unusually long time to run on a 16-core configuration. This was due to an issue with a low-level library in the HC implementation, which was not resolved at the time of writing.
\begin{figure}
\centerline{
\subfigure[single node (55GB RAM per 8 cores)]{
\includegraphics[width=0.5\linewidth]{scale-up.pdf}
\label{fig:scale-up}}
\quad
\subfigure[8 cores/55GB RAM cluster mode]{
\includegraphics[width=0.5\linewidth]{scale-out.pdf}
\label{fig:scale-out}}
}
\caption{BWA/MD + BQSRP speedup}
\label{fig:speedup}
\end{figure}
As expected, running Spark in cluster mode shows a speedup as we increase the number of nodes, as shown in Fig.~\ref{fig:scale-out}. However, we also note that scaling out, i.e., adding nodes, incurs an overhead that makes it less efficient than scaling up (i.e., adding cores to a single-node configuration). For instance, two nodes with 8 cores each take 229 minutes, while a single node with 16 cores takes 165 minutes. The overhead effect is also noticeable at 32 cores, which, as noted earlier, do not improve processing time on a single host (175 minutes, Fig.~\ref{fig:scale-up}), whereas a cluster of four 8-core nodes takes 168 minutes, a further improvement over the 2x8 configuration.
\subsection{Comparing with Microsoft Genomics Services}
Thanks to a grant from Microsoft Azure Research, we were able to process our patient samples using the new Microsoft Genomics Services. These services execute precisely the pre-processing steps of the pipeline, making it easy to compare with our results. The processing time for our reference PFC 0028 sample is an impressive 77 minutes (compare with the best single-node time of 446 minutes, obtained from the figures in Fig.~\ref{fig:scale-up}, to which the average HC processing time has been added). However, at the time of writing these services were only offered as a \textit{black box} that runs on a single, high-end virtual machine of undisclosed specifications. In terms of pricing, the current charge for using Genomics Services is \pounds0.217/GB, which translates to about \pounds18.61 for processing our six samples. For comparison, the cost of processing the same samples using our pipeline with an 8-core, 55GB configuration is estimated at \pounds28.
\section{Conclusions}
We have presented an experimental evaluation of the design effort involved in implementing a genomics variant discovery pipeline using the recently released GATK Spark tools from the Broad Institute, together with a performance analysis based on single-node and small-cluster configurations. Our analysis is preliminary, as the GATK 4.x tools are still very recent. Non-GATK tools, and those not yet ported to Spark, represent bottlenecks: firstly because they run in centralised mode, and secondly because of the different file infrastructure they require. Also, the Spark tools appear to be designed in isolation, without attempting to eliminate intermediate data passing through HDFS reads and writes.
Compared with the processing times reported for the Microsoft Azure Genomics Services, it appears that using Spark with the current beta version of GATK tools is currently not economically competitive and thus is not recommended for operational use in clinical settings. This may change, however, as the GATK Spark tools mature. On the plus side, our implementation offers complete control over the evolution of the pipeline over time, a key requirement especially in a genetic research setting.
\section*{Acknowledgments}
The authors are grateful to Microsoft for the Azure for Research grant that made it possible to experiment with Azure Genomics Services.
\input{SEBD-18-CR-static.bbl}
\end{document}
\subsection{Training data from FIFA video games}
State-of-the-art datasets for human shape modeling mostly focus on general representation of human bodies and aim at diversity of body shape and clothing \cite{Loper2015SMPLAS,varol17b}. Instead, to optimize for accuracy and performance in our problem, we want a training dataset that focuses solely on soccer, where clothing, players' poses, camera views, and positions on the field are very constrained. Since our goal is to estimate a depth map given a single photo of a soccer player, the ideal training data would be image and depth map pairs of soccer players in various body poses and clothing, viewed from a typical soccer game camera.
The question is: how do we acquire such ideal data? It turns out that, by playing Electronic Arts FIFA games and intercepting the calls between the game engine and the GPU~\cite{Richter_2016_ECCV, Richter_2017}, it is possible to extract depth maps from video game frames.
In particular, we use RenderDoc \cite{renderdoc} to intercept the calls between the game engine and the GPU. FIFA, like most games, uses deferred shading during game play. Having access to the GPU calls enables capture of the depth and color buffers per frame\footnote{RenderDoc causes the game to freeze, essentially capturing 1 fps.}. Once the depth and color buffers are captured for a given frame, we process them to extract the players.
The extracted color buffer is an RGB screenshot of the game, without the score and time counter overlays and the in-game indicators.
The extracted depth buffer is in Normalized Device Coordinates (NDC), with values between 0 and 1. To get the world coordinates of the underlying scene we require the OpenGL camera matrices that were used for rendering. In our case,
these matrices were not directly accessible in RenderDoc, so we estimated them (see Appendix A in supplementary material).
Given the game camera parameters, we can convert the z-buffer from the NDC to 3D points in world coordinates. The result is a point cloud that includes the players, the ground, and portions of the stadium when it is visible. The field lies in the plane $y=0$. To keep only the players, we remove everything that is outside of the soccer field boundaries and all points on the field (i.e., points with $y=0$). To separate the players from each other we use DBSCAN clustering~\cite{Ester1996} on their 3D locations. Finally, we project each player's 3D cluster to the image and recalculate the depth buffer with metric depth. Cropping the image and the depth buffer around the projected points gives us the image-depth pairs -- we extracted $12000$ of them -- for training a depth estimation network (\refFig{fifa_data_small}). Note that we use a player-centric depth estimation because we get more training data by breaking down each frame into 10-20 players, and it is easier for the network to learn individual player's configuration rather than whole-scene arrangements.
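As a sketch of the z-buffer conversion, the following inverts the standard OpenGL perspective depth mapping for a single buffer value. This is an illustration under the assumption of a standard projection with known near/far planes; the actual pipeline uses the full camera matrices recovered from RenderDoc.

```python
def ndc_to_eye_depth(d, near, far):
    # Invert the OpenGL depth-buffer mapping: a buffer value d in [0, 1]
    # is mapped back to metric eye-space depth for a standard perspective
    # projection with the given near/far clipping planes.
    z_ndc = 2.0 * d - 1.0  # depth buffer [0, 1] -> NDC [-1, 1]
    return 2.0 * near * far / (far + near - z_ndc * (far - near))

def eye_depth_to_ndc(z, near, far):
    # Forward mapping: metric eye-space depth -> [0, 1] buffer value.
    z_ndc = (far + near - 2.0 * near * far / z) / (far - near)
    return 0.5 * (z_ndc + 1.0)
```

Note the mapping is strongly non-linear: most buffer precision is concentrated near the camera, which is why the conversion must precede any metric reasoning about player geometry.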
\subsection{Depth Estimation Neural Network}
\label{sec:depth_estimation}
Given the depth-image pairs extracted from the video game, we train a neural network to estimate depth for any new image of a soccer player. Our approach follows the hourglass network model~\cite{Newell16,varol17b}: the input is processed by a sequence of hourglass modules -- a series of residual blocks that lower the input resolution and then upscale it -- and the output is depth estimates.
Specifically, the input of the network is a $256\times256$ RGB image cropped around a player together with a segmentation mask for the player, resulting in a 4-channel input. We experimented with training on no masks, ground truth masks, and estimated masks. Using masks noticeably improved results. In addition, we found that using estimated masks yielded better results than ground truth masks. With estimated masks, the network learns the noise that occurs in player segmentation during testing, where no ground truth masks are available. To calculate the player's mask, we apply the person segmentation network of~\cite{Yu2015MultiScaleCA}, refined with a CRF~\cite{Krhenbhl2011EfficientII}. Note that our network is single-player-centric: if there are overlapping players in the input image, it will try to estimate the depth of the center one (that originally generated the cropped image) and assign the other players' pixels to the background.
The input is processed by a series of 8 hourglass modules and the output of the network is a $64\times64\times50$ volume, representing 49 quantized depths (as discrete classes) and 1~background class. The network was trained with cross entropy loss with batch size of 6 for 300 epochs with learning rate 0.0001 using the Adam~\cite{Kingma2014AdamAM} solver (see details of the architecture in supplementary material).
The depth parameterization is performed as follows: first, we estimate a virtual vertical plane passing through the middle of the player and calculate its depth \wrt the camera. Then, we find the distance in depth values between a player's point and the plane. The distance is quantized into 49 bins (1 bin at the plane, 24 bins in front, 24 bins behind) at a spacing of 0.02 meters, roughly covering 0.5 meters in front and in back of the plane (1 meter depth span). In this way, all of our training images have a common reference point. Later, during testing, we can apply these distance offsets to a player's bounding box after lifting it into 3D (see \refSec{mesh-generation}).
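The bin layout described above (1 bin at the plane, 24 in front, 24 behind, at 0.02m spacing) can be sketched as follows; the function names are ours:

```python
PLANE_BIN = 24   # class index of the virtual plane through the player
N_BINS = 49      # 1 bin at the plane + 24 in front + 24 behind
SPACING = 0.02   # metres per bin

def quantize_offset(dz):
    # Signed metric offset from the reference plane -> depth class,
    # clamped to the covered range of roughly +/- 0.48 m.
    b = PLANE_BIN + int(round(dz / SPACING))
    return min(max(b, 0), N_BINS - 1)

def dequantize(b):
    # Depth class index back to a metric offset from the plane.
    return (b - PLANE_BIN) * SPACING
```

Because every training crop is quantized relative to its own player-centred plane, the classes are comparable across players regardless of where each player stands on the field.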
\subsection{Camera Pose Estimation}
\label{sec:localization}
The first step is to estimate the per-frame parameters of the real game camera.
Because soccer fields have specific dimensions and structure according to the rules of FIFA, we can estimate the camera parameters by aligning the image with a synthetic planar field template. We set the world origin to coincide with the center of the synthetic soccer field which lies in the $y=0$ plane.
The most consistent features on the field of play are the field lines (\eg, sidelines, penalty box around the goal).
Thus, we extract edge points $\mathcal{E}$ for each frame to localize those features. We can solve for the camera parameters $\mathbf{w}$ (focal length, rotation and translation) that align rendered synthetic field lines with the extracted edge points. In particular, we first construct a distance map $\mathcal{D}$ that, for each pixel in the original frame, stores the squared distance to the nearest point in $\mathcal{E}$. Then,
for projection $\mathcal{T}(p; \mathbf{w})$ that maps the visible 3D line points $p$ to the image, we minimize:
\begin{equation}
\min_{\mathbf{w}} \sum_p \mathcal{D}\left(\mathcal{T}(p; \mathbf{w})\right),
\end{equation}
\ie, the sum of squared distances between the projected synthetic field points and the nearest edge points in $\mathcal{E}$.
This process is highly dependent on the quality of the edge points $\mathcal{E}$ and on the camera initialization. We use structured forests~\cite{DollarICCV13edges} for line detection. We additionally remove edges that belong to people by applying a person segmentation network~\cite{Yu2015MultiScaleCA}. To initialize the camera fitting, we provide 4 manual correspondences in the first frame, and then solve for the camera pose in each successive frame using the previous frame as initialization.
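The objective above can be sketched as follows, with SciPy's Euclidean distance transform standing in for $\mathcal{D}$ and a toy 2D similarity transform standing in for the full camera model $\mathcal{T}$ (all names are ours; the real optimization is over focal length, rotation, and translation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_distance_map(edge_mask):
    """Squared distance to the nearest edge pixel, for every pixel."""
    # distance_transform_edt measures distance to the nearest zero entry,
    # so we invert the mask to measure distance to the edge pixels.
    return distance_transform_edt(~edge_mask) ** 2

def alignment_cost(w, line_points, dist_map):
    """Sum of squared distances between projected points and edges."""
    s, tr, tc = w                                # toy projection parameters
    proj = line_points * s + np.array([tr, tc])  # scale + 2D translation
    ij = np.clip(np.round(proj).astype(int), 0,
                 np.array(dist_map.shape) - 1)
    return dist_map[ij[:, 0], ij[:, 1]].sum()
```

This cost can then be handed to a generic minimizer (e.g.\ `scipy.optimize.minimize`) starting from the previous frame's parameters.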
\subsection{Player Detection and Tracking}
The next step of the video analysis is to detect the players in every frame. While detecting soccer players may seem straightforward due to the relatively uniform background, most state-of-the-art person detectors still have difficulty when, \eg, players from the same team occlude each other or the players are too small.
We start with a set of bounding boxes obtained with~\cite{ren2015faster}. Next, we refine the initial bounding boxes based on pose information using the detected keypoints/skeletons from~\cite{Wei2016ConvolutionalPM}. We observed that the estimated poses can better separate the players than just the bounding boxes, and the pose keypoints can be effectively used for tracking the players across frames.
Finally, we generate tracks over the sequence based on the refined bounding boxes. Every track has a starting and ending location in the video sequence. The distance between two tracks A and B is defined as the 2D Euclidean distance between the ending location of track A and starting location of track B, assuming track B starts at a later frame than track A and their frame difference is smaller than a threshold (detailed parameters are described in supplementary material).
We follow a greedy merging strategy. We start by considering all detected neck keypoints (we found this keypoint to be the most reliable to associate with a particular player) from all frames as separate tracks and we calculate their pairwise distances. Two tracks are merged if their distance is below a threshold, and we continue until there are no tracks to merge. This step associates
every player with a set of bounding boxes and poses across frames. This information is essential for the later processing of the players, namely the temporal segmentation, depth estimation and better placement in 3D. \refFig{overview_smallsize} shows the steps of detection, pose estimation, and tracking.
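A minimal sketch of this greedy merging, assuming each track is a time-ordered list of `(frame, x, y)` neck detections and using the thresholds given in the supplementary material (50 pixels, 10 frames); all names are ours:

```python
import math

MAX_DIST, MAX_GAP = 50.0, 10

def track_distance(a, b):
    """Distance from the end of track a to the start of track b."""
    (fa, xa, ya), (fb, xb, yb) = a[-1], b[0]
    if not (0 < fb - fa <= MAX_GAP):     # b must start after a, within the gap
        return math.inf
    return math.hypot(xb - xa, yb - ya)

def merge_tracks(tracks):
    tracks = [list(t) for t in tracks]
    merged = True
    while merged:                        # repeat until no pair can be merged
        merged = False
        for i in range(len(tracks)):
            for j in range(len(tracks)):
                if i != j and track_distance(tracks[i], tracks[j]) < MAX_DIST:
                    tracks[i] += tracks.pop(j)   # append track j onto track i
                    merged = True
                    break
            if merged:
                break
    return tracks
```

Each surviving track then collects the bounding boxes and poses of a single player across frames.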
\subsection{Temporal Instance Segmentation}
For every tracked player we need to estimate its segmentation mask to be used in the depth estimation network. A straightforward approach is to apply at each frame a person segmentation method~\cite{Yu2015MultiScaleCA}, refined with a dense CRF~\cite{Krhenbhl2011EfficientII} as we did for training. This can work well for the unoccluded players, but in the case of overlap, the network estimates are confused. Although there are training samples with occlusion, their number is not sufficient for the network to estimate the depth of one player (\eg the one closer to the center) and assign the rest to the background. For this reason, we ``help'' the depth estimation network by providing a segmentation mask where the tracked player is the foreground and the field, stadium and other players are background (this is similar to the instance segmentation problem~\cite{he2017maskrcnn, li2016fully}, but in a 1-vs-all scenario).
To estimate the pixels that belong to a particular player $T$, we rely both on the semantic segmentation and on the pose estimation from the previous step. First, for every pixel $p$, we aim to find the continuous variable $o_p$ that indicates its association to the player $T$, background or other players by minimizing the energy~\cite{Levin2004, Krishnan2013,KunduCVPR16}:
\begin{equation}
E = \sum_{p} \Big(o_p - \sum_{q\in N(p)} w_{pq}\,o_q\Big)^2,
\end{equation}
where $N(p)$ is the spatial neighborhood of pixel $p$, and $w_{pq}$ is the affinity between pixels $p$ and $q$ based on the color image $I$ and edge image $G$: $\exp(-||I_p-I_q||^2)*\exp(-G_p^2)$. Several pixels can be used as anchors for the optimization: a) the pixels $s$ that belong to the tracked player skeleton will have $o_s=0$, b) other players' skeleton pixels $r$ have $o_r=2$ and c) pixels $b$ with high background probability have $o_b=1$. By thresholding the optimized $o_p$ values (we use 0.5 for our experiments) we generate the player's mask $M_o$. This mask performs well in separating the main player from other players, but tends to include some background as well.
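A toy 1D instance of this anchored propagation (uniform neighbour weights and all names are ours): the anchored pixels are pinned to their values, and every free pixel is solved to equal the weighted mean of its neighbours, which drives the corresponding energy terms to zero:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def propagate(n, anchors):
    """anchors: dict index -> fixed value (0 player, 1 bg, 2 other players)."""
    A = lil_matrix((n, n))
    b = np.zeros(n)
    for p in range(n):
        if p in anchors:                 # pinned pixel: o_p = anchor value
            A[p, p] = 1.0
            b[p] = anchors[p]
        else:                            # free pixel: o_p = mean of neighbours
            nbrs = [q for q in (p - 1, p + 1) if 0 <= q < n]
            A[p, p] = 1.0
            for q in nbrs:
                A[p, q] = -1.0 / len(nbrs)
    return spsolve(A.tocsr(), b)
```

In 2D the same construction applies with $N(p)$ the spatial neighbourhood and $w_{pq}$ the image-driven affinities, yielding one sparse linear solve per player.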
To better segment out the background, we estimate an additional mask $M_{\mathit{CNN}}$ as follows. We construct a video volume containing the player in a block of 15 frames, with the player's per-frame locations translated to align the neck keypoint across frames. We solve a dense CRF~\cite{Krhenbhl2011EfficientII} over the volume to obtain $M_{\mathit{CNN}}$ for every frame in the block. The unary potentials come from the person segmentation network of~\cite{Yu2015MultiScaleCA}.
The pairwise potentials are modeled as Gaussian kernels in a $D$-dimensional feature space, with the features $f \in \mathbb{R}^D$ consisting of the $rgb$ colors, the $xy$ locations, and the time stamp $t$. $M_{\mathit{CNN}}$ better segments out the background, but tends to include other players. Thus, our final segmentation mask is the product $M_{\mathit{final}} = M_o M_{\mathit{CNN}}$. In the inset image, we show an occluded player, the optimized variables $o_p$, our masks, and the instance segmentation from another state-of-the-art method~\cite{li2016fully} (see supplementary for additional results). For the $o_p$ visualization, $o_s$ is yellow, $o_r$ is blue, and $o_b$ is magenta.
\begin{figure}[h]
\vspace{-5pt}
\centering
\includegraphics[width=0.95\columnwidth]{images/InstanceSegm_smallsize}
\vspace{-10pt}
\end{figure}
\subsection{Mesh Generation}
\label{sec:mesh-generation}
The foreground mask from the previous step, together with the original cropped image, is fed to the network described in \refSec{depth_estimation}. The output of the network is per-pixel, quantized signed distances between the player's surface and a virtual plane \wrt the camera. To obtain a metric depth map we first lift the bounding box of the player into 3D, creating a billboard (we assume that the bottom pixel of the player lies on the ground). We then apply the distance offsets output by the network to the 3D billboard to obtain the desired depth map.
The depth map is then unprojected to world coordinates using the camera parameters, generating the player's pointcloud in 3D. Each pixel corresponds to a 3D point and we use pixel connectivity to establish faces. We texture-map the mesh with the input image. Depending on the application, the mesh can be further simplified with mesh decimation to reduce the file size for deployment in an AR device.
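The unprojection and meshing can be sketched with a simple pinhole model; the `fx, fy, cx, cy` parameterization and all names here are ours (in the pipeline the parameters come from the camera pose estimation step):

```python
import numpy as np

def unproject(depth, fx, fy, cx, cy):
    """Lift a depth map to camera-space 3D points, one per pixel."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)    # (h, w, 3) points

def faces_from_grid(h, w):
    """Two triangles per pixel quad, indexing the flattened point grid."""
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    return np.concatenate([np.stack([a, b, c], 1),
                           np.stack([b, d, c], 1)])
```

Texture coordinates follow directly from the pixel grid; faces straddling background pixels would be discarded in practice.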
\subsection{Trajectories in 3D}
Due to imprecise camera calibration and bounding box localization, the 3D placement of players can ``jitter'' from frame to frame. To address this problem, we smooth the 3D trajectories of the players. In particular, once we estimate the player's position in the 3D field, we calculate the center of the mesh (mean of the player's vertices) and solve for its optimized 3D trajectory $\mathbf{X}\in \mathbb{R}^{N \times 3}$~\cite{Milan2014} by minimizing:
\begin{equation}
E = \sum_{t\in M}||\mathbf{X}_t-D_t||^2 + \sum_{t=1}^{N-1} || \mathbf{X}_{t-1} - 2\mathbf{X}_{t}+\mathbf{X}_{t+1}||^2
\end{equation}
where $N$ is the number of frames and $M$ is the set of timestamps when a detection occurs. $D_t$ corresponds to the center of the lifted bounding box in 3D at time $t$. The first term of the objective ensures that the estimated trajectory will be close to the original detections, and the second term encourages second order temporal smoothness.
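Since the objective is quadratic, it reduces to one linear least-squares problem solved independently per coordinate; a sketch (names are ours, and as in the equation we use no relative weighting between the two terms):

```python
import numpy as np

def smooth_trajectory(n, detections):
    """detections: dict frame -> (x, y, z). Returns the (n, 3) trajectory."""
    rows, rhs = [], []
    for t, d in detections.items():            # data term ||X_t - D_t||^2
        r = np.zeros(n); r[t] = 1.0
        rows.append(r); rhs.append(d)
    for t in range(1, n - 1):                  # second-difference smoothness
        r = np.zeros(n)
        r[t - 1], r[t], r[t + 1] = 1.0, -2.0, 1.0
        rows.append(r); rhs.append((0.0, 0.0, 0.0))
    A, b = np.array(rows), np.array(rhs)
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X
```

The second-difference rows fill in frames with missed detections while keeping the trajectory close to the detections that do exist.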
\section{Implementation Details}
\paragraph{Training Data} Our training data comes from the video game Electronic Arts FIFA 2016. The GPU calls between the game engine and the graphics card were obtained using RenderDoc v0.34. The playing teams were randomly selected and the camera was set to broadcast.
\paragraph{Player Analysis} Detection and pose estimation were performed using the code of \cite{Ren2015} and \cite{Wei2016ConvolutionalPM} respectively. Since the boxes can be lifted into 3D, very small or very large detections were removed. For tracking, two tracks were merged if the distance between them is less than 50 pixels and they are inside a frame window of 10 frames.
We observed that a multi-person pose estimator better separates the players than bounding box overlap, and it enables a simple algorithm for tracking. The pose estimation skeleton is also used for pixel-wise instance segmentation. In experiments with heavily occluded players, we found the pose estimator was significantly better at separating the occluded players than \cite{li2016fully} ($94\%$ vs $65\%$ in finding the correct number of players).
\begin{wrapfigure}{r}{3cm}
\hspace{-20px}
\includegraphics[width=0.2\textwidth]{./images/bbox_ambiguity}
\end{wrapfigure}
Tracking is used for temporal smoothing in 3D, removing jitter, and improving player segmentation.
The player mesh is generated from the estimated depth map and the player's 3D bounding box.
Small errors in calibration or in the 2D bounding box (the green box shown here is a few pixels off) result in jittering of the 3D box (the assumption is that the bottom of the box lies on the ground) which is corrected using 3D temporal smoothing.
\paragraph{Ball Reconstruction}
\begin{wrapfigure}{r}{3cm}
\hspace{-20px}
\includegraphics[width=0.2\textwidth]{./images/ball_ambiguity}
\end{wrapfigure}
Our method does not reconstruct the 3D position of the ball (in some videos the ball was added manually). Our input is a monocular video and even with perfect 2D tracking of the ball, there is still ambiguity in the 3D ball trajectory that generated the 2D track. For example, we cannot disambiguate whether the ball is airborne moving straight away from the camera (red) or just moving in a straight line on the ground (blue), without incorporating ball physics (an area of future work).
\paragraph{Depth Estimation Network}
We used the PyTorch implementation of the Stacked Hourglass network~\cite{Newell2016StackedHN} with 8 stacks of 1 module. We performed optimization with the Adam solver with 0.0001 learning rate, 0.003 weight decay, and betas set to 0.9 and 0.999. Batch size was set to 6. The network was trained for 300 epochs with cross entropy loss.
We experimented with a number of network architectures: encoder-decoder with skip connections, fully convolutional with upsampling, and others, and we found that the hourglass model had superior performance.
\paragraph{Scene Reconstruction}
The Soccer Hologram results were obtained using the Microsoft HoloLens Unity SDK, and the capture of the video and images was performed using the Mixed Reality Capture from the HoloLens device. The varying-viewpoint results were obtained using Blender, where the reconstructed players were placed in a synthetic stadium with predefined camera paths. No user intervention was required for the players' animation.
\section{Introduction}
Sulphur is the tenth most abundant element in our Galaxy and has been the subject of considerable controversy over the past 20 years. Most observations of diffuse media (probing different conditions) find no elemental depletion of sulphur \citep[e.g.][]{howk2006}. With a detailed analysis, \citet{jenkins2009} appears to show a depletion increasing with density (as for the other elements), but discussed the possible observational bias of this result. On the contrary, only a small fraction of the cosmic sulphur abundance is seen in cold dense cores (through observable molecules such as SO and CS) \citep[e.g.][]{tieftrunk1994,palumbo1997}. To reproduce these small abundances, chemical models need to deplete the elemental abundance of sulphur to ``hide'' the overflow of sulphur \citep[see for example][]{wakelam2004}. The explanation commonly proposed is that the missing sulphur is locked in the icy mantles of dust grains \citep[e.g.][]{millar1990,ruffle1999}. The controversy resides in the fact that, until now, only OCS \citep{geballe1985,palumbo1995,palumbo1997} and possibly SO$_2$ \citep{boogert1997,zasowski2009} have been detected in the solid state towards high-mass protostars, with abundances ($\sim$ 10$^{-7}$) amounting to less than 4\% of the cosmic sulphur abundance. H$_2$S, the most likely natural product of the hydrogenation of sulphur on grains, has not been detected, and the upper limits on its abundance are $3 \times 10^{-7}$ and $3\times 10^{-6}$ (assuming an abundance of H$_2$O in ices of $10^{-4}$) \citep{smith1991}. The long-standing question is then: where is the sulphur that appears to be depleted from the gas phase in the dense regions of the interstellar medium?\\
Experimental measurements have been performed in order to understand which species are good candidates for sulphur in its solid state. OCS, for example, is formed by cosmic-ray irradiation, although it is easily destroyed over long timescales \citep{garozzo2010}. \citet{ferrante2008} discussed the alternative of carbon disulphide (CS$_2$), which has been detected in the coma of comet 67P/Churyumov-Gerasimenko \citep{calmonte2016}, although not yet detected in ices. Hydrated sulphuric acid (H$_2$SO$_4$) was also suggested by \citet{scappini2003} as the main reservoir. Photoproducts of H$_2$S ice processing were proposed as a plausible explanation for the absence of H$_2$S in the ices and for the sulphur depletion towards dense clouds and protostars \citep{jimenez-escobar2011}. A large fraction of the missing sulphur in dense clouds could thus be polymeric sulphur residing in dust grains, as proposed by \citet{wakelam2004}. Finally, \citet{druard2012} have proposed polysulphanes (H$_2$S$_n$) as possible carriers for sulphur on grains.\\
So far, only a few studies of sulphur chemistry have been performed in the cold regions of the interstellar medium. A sulphur depletion factor of $\sim$ 100 has been adopted to explain the chemistry in starless cores \citep{tafalla2006,agundez2013}. A higher gas-phase sulphur abundance approaching the cosmic value of 1.5 $\times$ 10$^{-5}$ has been found in bipolar outflows \citep{bachiller1997,anderson2013}, photodissociation regions \citep{goicoechea2006}, and hot cores \citep{esplugues2014}. In these cases, this abundance was tentatively attributed to the release of sulphur-bearing species from the icy grain mantles through thermal and non-thermal desorption and sputtering. Very recently, such a study has been carried out by \citet{fuente2016} towards the Barnard B1b globule, which hosts two candidates for the first hydrostatic core (FHSC). Their pointed position lies in between the two cores (B1b-N and B1b-S), which complicates the analysis. Their observational data are fitted using chemical modelling with an elemental depletion of $\sim$ 25 for sulphur. \citet{fuente2016} proposed that the low sulphur depletion and high abundances of complex molecules could be the result of two factors, both related to the star formation activity: the enhanced UV fields and the surrounding outflows. The star formation activity could also have induced a rapid collapse of the B1b core that preserves the high abundances of the sulphur-bearing species. In addition, the outflow associated with B1b-S may heat the surroundings and help to halt the depletion of S-bearing molecules. More recently, \citet{vidal2017} compared the observations of sulphur-bearing species towards the TMC-1 (CP) dark cloud with the results from an updated chemical model and concluded that sulphur depletion is not required, although a factor of three depletion also fitted the observations.
As part of the IRAM-30m Large Program ASAI\footnote{Astrochemical Surveys At Iram: http://www.oan.es/asai/} \citep{lefloch2018}, we carried out a highly sensitive, unbiased spectral survey of the molecular emission of the L1544 pre-stellar core with a high spectral resolution. In the present study we report on the detection of twenty-one sulphur bearing species in this core, use radiative transfer modelling to determine the observed column densities, and compare them with the most up-to-date chemical modelling for sulphur chemistry.\\
We present in Section 2 the observations from the ASAI spectral survey and the line identification for the sulphur bearing species. Based on the detections and tentative detection, we compute in Section 3 the column densities of these species. We present in Section 4 the deuterium fraction, which is expected to be high in a pre-stellar core, and compare with the corresponding non sulphur bearing molecules. In Section 5, we use a detailed chemical modelling and confront the results with the observations.
\section{Observations}
The observations for all transitions quoted in Table \ref{spectro} were performed at the IRAM-30m toward the dust peak emission of the L1544 pre-stellar core (${\rm \alpha_{2000} = 05^h04^m17.21^s, \delta_{2000} = 25\degr10\arcmin42.8\arcsec}$) in the framework of the ASAI Large Program, except for H$_2$S which was observed as a follow-up project. Observations at frequencies lower than 80 GHz have been performed in December 2015. All the details of these observations can be found in \citet{vastel2014} and \citet{quenard2017a} and line intensities are expressed in units of main-beam brightness temperature. \\
The ortho--H$_2$S (1$_{1,0}$ -- 1$_{0,1}$) at 168762.75 MHz has been observed with the IRAM-30m towards the dust peak emission on March 19 and 20, 2017, with the use of the spectral line Eight MIxer Receivers (EMIR) in band E150 combined with the narrow mode of the Fast Fourier Transform Spectrometers (FTS), allowing a spectral resolution of 50 kHz. We used the frequency switching observing mode with a frequency throw of 7.14 MHz, which allows a good removal of the ripples, including standing waves between the secondary and the receivers \citep{fuente2016}. Pointing was checked every 1.5 hours on the nearby continuum sources 0430+352, 0439+360 and 0316+413 with errors always within 3$^{\prime\prime}$. The system temperature was stable, at $\sim$ 230 K (2 mm of precipitable water vapour), resulting in an average rms of 9.1 mK for a resolution of 50 kHz. The IRAM beam varies from 33.5$^{\prime\prime}$ at 75 GHz to 23.9$^{\prime\prime}$ at 105 GHz and 15$^{\prime\prime}$ at 168 GHz.\\
Twenty-one sulphur bearing species have been detected in total and are shown in Appendix A: CS, $^{13}$CS and C$^{34}$S (Fig. \ref{cs}), CCS and CC$^{34}$S (Fig. \ref{ccs}), C$_3$S (Fig. \ref{c3s}), H$_2$S (Fig. \ref{h2s}), H$_2$CS, H$_2$C$^{34}$S, HDCS and D$_2$CS (Fig. \ref{h2cs}), HSCN (Fig. \ref{hscn}), OCS (Fig. \ref{ocs}), SO, S$^{18}$O and $^{34}$SO (Fig. \ref{so}), SO$_2$ (Fig. \ref{so2}), NS (Fig. \ref{ns}), NS$^+$ \citep{cernicharo2018}, HCS$^+$ and HC$^{34}$S$^+$ (Fig. \ref{hcsp}). We present, in Table \ref{spectro}, the spectroscopic parameters of the transitions detected using the CDMS\footnote{http://www.astro.uni-koeln.de/} \citep{muller2005} for most species, except CC$^{34}$S for which we used JPL\footnote{https://spec.jpl.nasa.gov/} \citep{pickett1998}. The line identification and analysis have been performed using the {\sc cassis}\footnote{http://cassis.irap.omp.eu} software \citep{vastel2015a}. The results from the line fitting take into account the statistical uncertainties accounting for the rms (estimated over a range of 15 km~s$^{-1}$ for a spectral resolution of 50 kHz). We also report a tentative detection (4 $\sigma$ level) of methyl mercaptan (CH$_3$SH) in Fig. \ref{ch3sh}.
\section{Determination of the column densities}
In this section we determine the column densities (or some upper limits) for the detected species. Different methods are used, depending on the number of detected transitions for each species, and also depending on the availability of collisional coefficients. These methods take into account the uncertainties based on the line fitting and also the absolute calibration accuracy, around 10$\%$ or better depending on the band considered.
For species where multiple transitions have been detected, covering a wide range in energy, the rotational diagram method (see Fig. \ref{RD} in the case of CCS) is a useful tool to derive parameters such as the column density and excitation temperature. We performed this analysis for CCS, C$_3$S, H$_2$CS, H$_2$C$^{34}$S, HDCS, D$_2$CS, OCS and SO (see Table \ref{lte}). When multiple transitions are detected we can also use a MCMC (Markov Chain Monte Carlo) method implemented within {\sc cassis}. The MCMC method is an iterative process that explores the parameter space with a random walk; the final solution is given by a $\chi^2$ minimization. All parameters, such as column density, excitation temperature (or kinetic temperature and H$_2$ density in the case of a non-LTE\footnote{LTE: Local Thermodynamic Equilibrium} analysis), source size, linewidth and V$_{lsr}$, can be varied. The partition functions have been computed in {\sc cassis} for temperatures lower than 9.375 K, which is the lowest temperature given by the CDMS database for some species. These are computed as:\\
\begin{equation}
Q(T) = \sum_{i} g_{i} \, \exp(-E_{i}/kT)
\end{equation}
where g$_i$ and E$_i$ are the statistical weight and energy, respectively, of level $i$. Table \ref{lte} shows the results from the MCMC method for an LTE analysis. Note that the emission is compatible with a source size larger than the IRAM beam of $\sim$ 30$^{\prime\prime}$.
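As an illustration, the partition function above can be evaluated directly from tabulated level data. The sketch below assumes level energies expressed in kelvin (E$_i$/k), as tabulated by the CDMS and JPL catalogues; the function name and the two-level example are hypothetical, not taken from {\sc cassis}.

```python
import math

def partition_function(levels, T):
    """Q(T) = sum_i g_i * exp(-E_i / T).

    `levels` is a list of (g_i, E_i) pairs, with energies already
    expressed in kelvin (E_i / k), so Boltzmann's constant cancels.
    """
    return sum(g * math.exp(-E / T) for g, E in levels)

# Hypothetical two-level system: a non-degenerate ground state and
# one level 5 K above it, evaluated at the CDMS floor of 9.375 K.
Q = partition_function([(1, 0.0), (1, 5.0)], T=9.375)
```

At low temperatures only the lowest levels contribute, which is why the extrapolation below 9.375 K matters for cold cores such as L1544.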
\begin{figure}
\centering
\includegraphics[width=1.06\hsize]{rotationnal_diagram_CCS.pdf}
\caption{Rotational diagram analysis for the nine detected transitions of CCS. The CCS column density and rotational temperature are quoted in the upper right corner.}
\label{RD}
\end{figure}
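The rotational diagram analysis amounts to a straight-line fit of $\ln(N_u/g_u)$ against $E_u$, whose slope gives $-1/T_{rot}$ and whose intercept gives $\ln(N_{tot}/Q(T_{rot}))$. A minimal sketch, assuming upper-level energies in kelvin and a caller-supplied partition function (the names are illustrative, not the {\sc cassis} implementation):

```python
import math

def rotational_diagram(E_u, ln_Nu_gu, Q):
    """Least-squares fit of ln(N_u/g_u) = ln(N_tot/Q(T_rot)) - E_u/T_rot.

    E_u: upper-level energies in kelvin; ln_Nu_gu: measured ln(N_u/g_u);
    Q: callable returning the partition function at a temperature.
    Returns (T_rot, N_tot)."""
    n = len(E_u)
    mx = sum(E_u) / n
    my = sum(ln_Nu_gu) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(E_u, ln_Nu_gu))
             / sum((x - mx) ** 2 for x in E_u))
    intercept = my - slope * mx
    T_rot = -1.0 / slope
    N_tot = math.exp(intercept) * Q(T_rot)
    return T_rot, N_tot
```

With nine CCS transitions the fit is well constrained; with only three closely spaced energies (as for HDCS) the slope, and hence the column density, is much more uncertain, which is the behaviour discussed below.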
\begin{table*}
\caption{Temperature and column density of sulphur species in L1544, under the LTE condition, using MCMC and rotational diagram analysis. \label{lte}}
\begin{tabular}{ccccccc}
\hline\hline
& \multicolumn{4}{c}{MCMC} & \multicolumn{2}{c}{Rotational diagram} \\
\hline
Species & $\rm T_{ex}$ & N & FWHM & $\rm V_{lsr}$ & $\rm T_{rot}$ & N \\
& (K) & (cm$^{-2}$) & (km~s$^{-1}$) & (km~s$^{-1}$) & (K) & (cm$^{-2})$ \\
\hline
CCS & $5.6 \pm 0.2$ & $6.5(\pm 0.9) \times 10^{12}$ & $0.40 \pm 0.01$ & $7.18 \pm 0.01$ & $4.9 \pm 0.2$ & $7.5(\pm 1.3) \times 10^{12}$ \\
C$_{3}$S & $6.2 \pm 0.5$ & $3.1(\pm 0.1) \times 10^{12}$ & $0.42 \pm 0.01$ & $7.22 \pm 0.01$ & $7.9 \pm 0.2$ & $8.8(\pm 0.7) \times 10^{11}$ \\
H$_{2}$C$^{34}$S & $14.1 \pm 2.0$ & $3.5(\pm 0.7) \times 10^{11}$ & $0.43 \pm 0.03$ & $7.24 \pm 0.02$ & $14.6 \pm 0.8$ & $3.1(\pm 0.2) \times 10^{11}$ \\
D$_{2}$CS & $9.6 \pm 1.5$ & $1.1(\pm 0.7) \times 10^{12}$ & $0.47 \pm 0.02$ & $7.22 \pm 0.01$ & $10.8 \pm 0.8$ & $6.5(\pm 0.6) \times 10^{11}$ \\
OCS & $7.5 \pm 0.6$ & $6.3(\pm 1.6) \times 10^{12}$ & $0.36 \pm 0.01$ & $7.18 \pm 0.01$ & $8.8 \pm 0.2$ & $4.1(\pm 0.2) \times 10^{12}$ \\
HDCS & $6.8 \pm 0.6$ & $1.6(\pm 0.8) \times 10^{12}$ & $0.42 \pm 0.01$ & $7.23 \pm 0.01$ & $7.5 \pm 0.5$ & $8.4(\pm 1.1) \times 10^{11}$ \\
H$_{2}$CS & $12.3 \pm 0.7$ & $7.3(\pm 1.0) \times 10^{12}$ & $0.41 \pm 0.01$ & $7.20 \pm 0.01$ & $13.1 \pm 0.9$ & $5.8(\pm 0.6) \times 10^{12}$ \\
SO & $9.7 \pm 1.8$ & $5.2(\pm 0.8) \times 10^{12}$ & $0.34 \pm 0.01$ & $7.22 \pm 0.01$ & $7.9 \pm 1.6$ & $7.2(\pm 3.2) \times 10^{12}$ \\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=1.06\hsize]{struct_L1544.pdf}
\caption{Gas and dust temperature, density, and velocity profiles of the L1544 pre-stellar core as a function of the radius in arcseconds and au from \citet{keto2014}.}
\label{struct}
\end{figure}
The computed column densities and temperatures shown in Table \ref{lte} show similar results, especially in the case of CCS for which nine transitions are detected. Some differences occur when the number of transitions decreases. For example, there is a factor of two difference in the computed column density of HDCS (and a factor of three for C$_3$S) between the two methods, reflecting the low number of detected transitions covering a small range in energy, E$_{u}$ = 8.9, 17.7, 18.1 K (E$_{u}$ = 25.3, 29.1, 33.3, 33.7 K for C$_3$S). The rotational diagram analysis cannot be performed on species for which only a few transitions are detected. For these species, we fixed the excitation temperature at 10 K, which is the average kinetic temperature of the L1544 core in the IRAM beam (see Fig. \ref{struct}), or slightly varied the temperature to be compatible with the non-detected transitions in our spectral survey, and constrained their column density by adjusting the spectra. The resulting column densities are given in Table \ref{lteat10K} as N$_{species}$. We present in Fig. \ref{ch3sh} the tentative detection of methyl mercaptan (CH$_3$SH) in our spectral survey, reinforced by an LTE modelling (in red) with an upper limit on the column density of $2.5 \times 10^{11}$ cm$^{-2}$, using a fixed temperature of 10 K, a full width at half maximum of 0.3 km~s$^{-1}$ and an LSR velocity of 7.1 km~s$^{-1}$. Note that varying the excitation temperature from 8 to 12 K for these species does not significantly change the results.
\begin{table}
\caption{Derived column density (N) with a fixed excitation temperature of 10 K for species where one or two transitions have been detected, or with a varied excitation temperature so that the LTE modelling is compatible with the non-detected transitions in our spectral survey. \label{lteat10K}}
\begin{tabular}{|c|c|c|c|}
\hline\hline
Species & $\rm T_{ex}$ & $\rm N_{species} $ & $\rm N_{main\,isotopologue}$\\
& (K) & (cm$^{-2}$) & (cm$^{-2}$)\\
\hline
HCS$^{+}$& $10$ & $(6.2-6.5) \times 10^{11}$ &\\
HC$^{34}$S$^{+}$& $10$ & $(4.8-5.2) \times 10^{10}$ & $(1.1-1.2) \times 10^{12}$\\
CS & $10$ & $(4.0-4.4) \times 10^{12}$ &\\
$^{13}$CS & $10$ & $(2.6-3.0) \times 10^{11}$ & $(1.8-2.0) \times 10^{13}$\\
C$^{34}$S & 10 & $(7.8-8.3) \times 10^{11}$ & $(1.8-2.0) \times 10^{13}$\\
CC$^{34}$S & 6--7 & $(4-6) \times 10^{11}$ & $(0.9-1.4) \times 10^{13}$\\
HSCN & $10$ & $(5.8-6.2) \times 10^{10}$ & \\
NS & $10$ & $(1.4-1.6) \times 10^{12}$ &\\
$^{34}$SO & 5--6 & $ (1.3-1.6) \times 10^{12}$ & $(3.0-3.6) \times 10^{13}$\\
S$^{18}$O & 6--8 & $ (3-3.2) \times 10^{11}$ & $(1.7-1.8) \times 10^{14}$\\
CH$_3$SH & 10 & $\le 2.5 \times 10^{11}$ & \\
\hline
\end{tabular}
\end{table}
For OCS, SO and SO$_2$ we carried out a non-LTE analysis with the LVG (Large Velocity Gradient) code of \citet{ceccarelli2003}, using the collision rates of \citet{green1978}, \citet{lique2007b} and \citet{cernicharo2011}, respectively. Table \ref{lvg} lists the results (H$_2$ density, kinetic temperature and column density) from the LVG analysis, taking into account the observed integrated fluxes and the corresponding rms (see Table \ref{spectro}). The emission from these species likely arises in the external layers, where the density is lower and the temperature higher than in the center, where $\rm T_{gas} \sim T_{dust}$ $\sim$ 7 K and n(H$_2$) $\sim$ 10$^7$ cm$^{-3}$ (see Fig. \ref{struct}). Note also that the emission is compatible with a source size larger than the maximum IRAM beam of $\sim$ 30$^{\prime\prime}$. For NS$^+$, we used the column density derived by \citet{cernicharo2018}: 2.3 $\times$ 10$^{10}$ cm$^{-2}$.\\
The H$_2$S transition is the only transition among our sulphur bearing species that presents a double-peaked profile that cannot simply be analysed in LTE (see Fig. \ref{h2s}). We estimated a lower limit on the H$_2$S column density of 1.6 $\times$ 10$^{12}$ cm$^{-2}$, from a simple LTE modelling using an excitation temperature of 10 K and we use this limit in section 5 (see Fig. \ref{Ncol_density_1e2}) as a comparison for the chemical modelling. We also use, in Appendix B, the variation of H$_2$S abundance as a function of radius (from section 5) combined with a 3D radiative transfer treatment to try to reproduce the line profile, taking into account the density and temperature profiles as well as the velocity profile of the L1544 pre-stellar core.\\
\begin{table}
\caption{Results from the non-LTE analysis for the OCS, SO and SO$_2$ species. \label{lvg}}
\begin{tabular}{|c|c|c|c|}
\hline\hline
Species & $\rm n_{H_2}$ & $\rm T_{K}$ & $\rm N$ \\
& (cm$^{-3}$) & (K) & (cm$^{-2}$) \\
\hline
OCS& (7 $\pm$ 3.0) $\times$ $10^{3}$ & 13 $\pm$ 2 & 4 ($\pm$ 1) $\times 10^{12}$ \\
SO & (2 $\pm$ 1) $\times$ $10^{4}$ & $\ge$ 12 & $\ge$ 8 $\times$ $10^{12}$ \\
SO$_2$ & (2 $\pm$ 1) $\times$ $10^{4}$ & 12 $\pm$ 1 & (2.0--3.5) $\times$ $10^{12}$ \\
\hline
\end{tabular}
\end{table}
Isotopologues may be used when transitions are optically thick. We decided to use the isotopologues of the more abundant species presented in Section 2 to refine some of the column densities determined earlier. For example, one transition each of $^{12}$CS, $^{13}$CS and C$^{34}$S has been detected in our spectral survey. A simple LTE analysis gives a $^{12}$CS/$^{13}$CS ratio between 13.3 and 16.9, much below the value of 68 determined for $^{12}$C/$^{13}$C in the local interstellar medium \citep{milam2005,asplund2009,manfroid2009}. The $^{12}$CS (2--1) line is likely optically thick, which hinders the determination of the true column density of $^{12}$CS. Using $^{13}$CS and a $^{12}$C/$^{13}$C ratio of 68, we obtain a value of N($^{12}$CS) = (1.77--2.04) $\times$ 10$^{13}$ cm$^{-2}$. The same analysis can also be applied for $^{34}$S/$^{32}$S. The LTE analysis for C$^{34}$S gives a C$^{34}$S/C$^{32}$S ratio of $\sim$ 0.2, much higher than the value (0.044) in the vicinity of the Sun \citep{chin1996}. The CS column density derived from C$^{34}$S is (1.77--1.89) $\times$ 10$^{13}$ cm$^{-2}$, similar to the value derived from $^{13}$CS. Previous observations of the CS (2--1) double peaked line profile, as well as a map over the whole core, have shown that CS is depleted in the central positions \citep{tafalla2002,hirota1998}. \citet{aikawa2003} used these observations to compute the column density from their best-fit model assuming a spherical core with a radius of 15000 au. Using the collision coefficients for para-H$_2$ from \citet{green1978}, the resulting column density for the CS molecule is 4.6 $\times$ 10$^{13}$ cm$^{-2}$ (with an uncertainty factor of 2--3), a factor of 10 higher than their LTE value, but compatible with our LTE computation using $^{13}$CS and $^{12}$CS/$^{13}$CS = 68. The $^{34}$S/$^{32}$S ratios for both HCS$^+$ and CCS are compatible with the value found in the vicinity of the Sun.
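The isotopologue correction is a simple rescaling, shown here as a sketch (the helper name is ours; the calculation assumes optically thin rare-isotopologue lines):

```python
def main_isotopologue_column(n_rare_lo, n_rare_hi, isotopic_ratio):
    """Scale a rare-isotopologue column-density range to the main
    isotopologue, assuming the rare-isotopologue transition is
    optically thin and the local isotopic ratio applies."""
    return n_rare_lo * isotopic_ratio, n_rare_hi * isotopic_ratio

# N(13CS) = (2.6-3.0) x 10^11 cm^-2 with 12C/13C = 68 recovers the
# quoted N(CS) = (1.77-2.04) x 10^13 cm^-2.
n_cs = main_isotopologue_column(2.6e11, 3.0e11, 68)
```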
For the SO molecule, two isotopologues have been detected: $^{34}$SO and S$^{18}$O. The respective SO column densities based on the local values of the isotopic ratios are $\sim$ 3.2 $\times$ 10$^{13}$ and $\sim$ 1.8 $\times$ 10$^{14}$ cm$^{-2}$ respectively, compatible with the lower limit (8 $\times$ 10$^{12}$ cm$^{-2}$) found using a non-LTE formalism (see Table \ref{lvg}). \\
We present in the fourth column of Table \ref{lteat10K}, the column density (as $\rm N_{main\,isotopologue}$) of the main species based on the rarer isotopologue column density and using $^{12}$C/$^{13}$C, $^{32}$S/$^{34}$S and $^{16}$O/$^{18}$O ratios of 68, 23, and 557 \citep{wilson1999} respectively. CS and HCS$^+$ are the only species where optical depth affects the determination of the column densities and we will use their values, determined from the rare isotopologues ($^{13}$CS, C$^{34}$S and HC$^{34}$S$^+$), when comparing with the outcome of the chemical model described in Section 5.\\
Very recently, the thioformyl radical (HCS) and its metastable isomer HSC have been detected toward the molecular cloud L483 \citep{agundez2018}. These species have not been detected in our spectral survey, and we can estimate an upper limit on the column density for both species, using an excitation temperature of 10 K: N(HCS) $\le$ 3 $\times$ 10$^{12}$ cm$^{-2}$ and N(HSC) $\le$ 6 $\times$ 10$^{10}$ cm$^{-2}$, a factor of two to three lower than the values derived towards L483.\\
Table \ref{final-N} summarizes the column densities computed in this section, which will be used in Section 5.
\section{Deuterium Fraction of the sulphur bearing species}
An extreme molecular deuteration is a major characteristic of pre-stellar cores. Although the deuterium abundance is only about 1.5 $\times$ 10$^{-5}$ relative to hydrogen (Linsky 2003), singly, doubly and even triply deuterated molecules have been detected with D/H ratios reaching 100$\%$ \citep[see][for a review]{ceccarelli2014}. Deuterium fractionation occurs in the cold and dense regions of the interstellar medium where CO is depleted from the gas phase, which favours the reaction of the H$_3$$^+$ ion with HD, forming H$_2$D$^+$ and distributing deuterium in the gas phase and on the grain surfaces. In those regions where CO is highly depleted and H$_2$ is mostly in para form, the abundance of D$_2$H$^+$ should be similar to that of H$_2$D$^+$ \citep{roberts2003}. This was confirmed with the detection of D$_2$H$^+$ toward the pre-stellar core 16293E \citep{vastel2004}. A high deuterium fractionation has already been measured in L1544 \citep[e.g.][]{caselli1999,crapsi2005,bizzocchi2014}, in which H$_2$D$^+$ has been detected \citep{caselli2003}. The central 7000 au region is called the {\it deuteration zone} (see Fig. \ref{struct}), where the freeze-out of abundant neutrals such as CO and O, the main destruction partners of the H$_3$$^+$ isotopologues, favours the formation of deuterated molecules. The outer ring (7000--30000 au) is called the {\it dark-cloud zone} \citep{caselli2012,ceccarelli2014}, where carbon is mostly locked in CO, gas-phase chemistry is regulated by ion-molecule reactions and deuterium fractionation is reduced. \\
From Table \ref{lte} we can compute the following fractionation ratios: H$_2$C$^{34}$S/H$_2$CS = 0.048 $\pm$ 0.016, HDCS/H$_2$CS = 0.219 $\pm$ 0.140 and D$_2$CS/H$_2$CS = 0.151 $\pm$ 0.117. The first one is compatible with the $^{34}$S/$^{32}$S ratio of 0.044 in the vicinity of the Sun \citep{chin1996}, which means that the detected H$_2$CS transitions are optically thin and give a good estimate of the total column density of H$_2$CS. The deuterium fractionation ratios for singly and doubly deuterated thioformaldehyde are comparable and present an extremely high D enhancement. As a comparison, for the B1 cloud, the derived HDCS/H$_2$CS and D$_2$CS/H$_2$CS abundance ratios are 0.33 and 0.11 respectively \citep{marcelino2005}.\\
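The quoted ratio uncertainties can be reproduced, to within rounding, by propagating the Table \ref{lte} error bars through the ratio. The sketch below uses interval arithmetic over the quoted bounds; the helper name is ours and the published uncertainties may have been derived with a different recipe:

```python
def ratio_with_bounds(num, dnum, den, dden):
    """Column-density ratio and a half-range uncertainty from
    interval arithmetic: the extreme values (num+dnum)/(den-dden)
    and (num-dnum)/(den+dden). One plausible error recipe only."""
    hi = (num + dnum) / (den - dden)
    lo = (num - dnum) / (den + dden)
    return num / den, 0.5 * (hi - lo)

# N(HDCS) = (1.6 +/- 0.8) x 10^12 cm^-2 over
# N(H2CS) = (7.3 +/- 1.0) x 10^12 cm^-2 gives ~ 0.22 +/- 0.14,
# close to the quoted HDCS/H2CS = 0.219 +/- 0.140.
r, dr = ratio_with_bounds(1.6e12, 0.8e12, 7.3e12, 1.0e12)
```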
At the high densities and very low temperatures found in pre-stellar cores, D$_2$CO forms efficiently \citep{tielens1983} because of its lower zero-point energy compared to that of H$_2$CO, through gas-phase chemistry where deuterium is passed from the deuterated forms of H$_3^+$ and CH$_3^+$ \citep{roberts2003,roberts2007}. Considering the high densities in the central regions of L1544 where deuteration is the highest (see Fig. \ref{struct}), the LTE assumption should be correct for the determination of the column densities of the deuterated species, as found in the case of B1 \citep{marcelino2005}. The collision coefficients are unknown for the D$_2$CS and HDCS species, but assuming a typical range of 10$^{-11}$--10$^{-10}$ cm$^3$~s$^{-1}$ and using Einstein coefficients of $\sim$ 10$^{-5}$ s$^{-1}$ (see Table \ref{spectro}), we can estimate critical densities between 10$^5$ and 10$^6$ cm$^{-3}$. These values correspond to the densities found at 4 $\times$ 10$^3$ and 10$^3$ au respectively from the L1544 center (see Fig. \ref{struct}) and are higher than densities found at larger distances. Therefore, it is reasonable to assume that LTE conditions are valid for the gas in the {\it deuteration zone}, whereas the emission from the outer gas is likely sub-thermal.\\
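The critical-density estimate above is the simple two-level ratio of the Einstein coefficient to the (assumed) collision rate, as sketched here:

```python
def critical_density(A_ul, gamma_ul):
    """Two-level critical density n_crit = A_ul / gamma_ul.

    A_ul: Einstein coefficient (s^-1); gamma_ul: collision rate
    coefficient (cm^3 s^-1), here an assumed typical value since
    the D2CS and HDCS rates are unknown."""
    return A_ul / gamma_ul

# A_ul ~ 1e-5 s^-1 with gamma between 1e-11 and 1e-10 cm^3 s^-1
# brackets n_crit between 1e5 and 1e6 cm^-3, as quoted in the text.
n_crit_lo = critical_density(1e-5, 1e-10)
n_crit_hi = critical_density(1e-5, 1e-11)
```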
The formation of the deuterated forms of thioformaldehyde and formaldehyde should be similar \citep[CO and CS being strongly depleted:][]{tafalla2002} and their deuterated ratios comparable. A map of formaldehyde and its deuterated counterparts has recently been obtained by Chac\'on-Tanarro et al. (submitted to A\&A) in L1544, and they measured the following deuterated fractions at the dust peak where CO is heavily depleted \citep{caselli1999}: D$_2$CO/H$_2$CO = 0.04 $\pm$ 0.03, HDCO/H$_2$CO = 0.03 $\pm$ 0.02. The derivation was done assuming optically thin emission and LTE, using a constant excitation temperature of 7 K (from the modelling of H$_2$CO). It is difficult to compare the deuterated fractions of formaldehyde and thioformaldehyde because of the large error bars found for the latter. Overall, it seems that deuteration is somewhat more efficient for thioformaldehyde than for formaldehyde.
\section{Evidence for sulphur depletion: a comparison between the observations and the chemical modelling}
We now confront the results from the radiative transfer modelling presented in Section 3 with the output of a detailed chemical modelling. The sulphur chemical network has recently been enhanced by \citet{vidal2017}, using experimental and theoretical rates and branching ratios from the literature. In practice, they added 46 sulphur bearing species along with 478 reactions in the gas phase, 305 reactions on the grain surface and 147 reactions in the grain bulk. They tested the effect of this updated network on the output of a gas-grain chemical model for dark cloud conditions, with different elemental sulphur abundances. Their results show that, depending on the age of the observed cloud, the sulphur reservoir could be either atomic sulphur in the gas phase or HS/H$_2$S in the icy grain bulk. From the chemical modelling, they conclude that depletion of sulphur is not required to explain the observations of the TMC-1 dark cloud. This cloud is at an earlier evolutionary stage than L1544, and presents a constant density ($\sim$ 10$^{4}$ cm$^{-3}$) and temperature ($\sim$ 10 K). \\
We used the same chemical network for our study, combined with a three-phase modelling, which allows us to follow the evolution of chemical abundances for a given set of chemical and physical parameters. Gas-phase, grain-surface and grain-bulk chemistries are taken into account, along with exchanges between these phases: adsorption of gas-phase species onto the grain surfaces, thermal and non-thermal desorption of species from the grain surface into the gas phase, and the exchange of species between the bulk and the surface of the grains. More details on the three-phase model can be found in \citet{ruaud2016} and \citet{vidal2017}.
We present in Fig. \ref{network} the most critical reactions linking the sulphur bearing species that we detected (blue ellipses) and the intermediate undetected species (purple boxes): blue arrows stand for surface reactions, green arrows for electronic recombinations and red arrows for bimolecular reactions. We used the {\sc nautilus} outputs for the many models considered and extracted the main reactions leading to the production of the detected sulphur bearing species in L1544. These reactions are based on the KInetic Database for Astrochemistry (KIDA, http://kida.obs.u-bordeaux1.fr/).
\begin{figure*}
\centering
\includegraphics[width=0.95\hsize]{Revised-Sulf-network-FINAL.pdf}
\caption{Simplified sulphur network. Blue ellipses mark the species detected in L1544 and purple boxes the intermediate undetected species; blue arrows stand for surface reactions, green arrows for electronic recombinations and red arrows for bimolecular reactions.}
\label{network}
\end{figure*}
We have used the gas-grain chemical code {\sc nautilus} in its three-phase mode (gas phase, grain surface and grain mantle) to predict the abundances of all sulphur species in the cold core. We have already used {\sc nautilus} in previous studies of the chemistry of this cold core \citep{quenard2017a,vastel2018} and we follow here a similar method to stay consistent with these works. \citet{vidal2017} considered a one-step chemical modelling with a constant density and temperature to model the physical conditions in TMC-1. Indeed, TMC-1 is a younger core than L1544, which presents a density, temperature and velocity structure with evidence of gravitational contraction \citep{caselli2012,keto2014}. To take this structure into account, we therefore used a two-step model: the first phase represents the evolution of the chemistry in a diffuse or molecular cloud, with T = 20 K and several densities, ranging from 10$^2$ to 2 $\times$ 10$^4$ cm$^{-3}$, depending on the model considered \citep[see][for more details]{quenard2017a,vastel2018}. The initial abundances considered are those labelled "EA1" in \citet{quenard2017a} and we only vary the sulphur atomic abundance (see below and Table \ref{init_dens}). We follow the chemistry in this phase during 10$^6$ years. We have checked that changing this age to 10$^5$, 5 $\times$ 10$^6$ or 10$^7$ years does not change the resulting column densities by more than a factor of 1.5.
In order to compare the results from the chemical modelling (abundances with respect to H) with the observations (column density) of sulphur bearing molecules in L1544, we took into account the density profile across the core \citep[see][]{quenard2017a} to determine the column density from the chemical modelling instead of the abundance:\\
\begin{equation}
N(X) = 2 \times \sum_{i=2}^{n}(r_{i-1}-r_i) \times \frac{n(H)_{i-1}[X]_{i-1}+n(H)_i[X]_i}{2}
\end{equation}
where $r_i$ is the radius of layer $i$, n(H)$_i$ the gas density and [X]$_i$ the abundance at radius $r_i$. The different N(X) are then weighted using a Gaussian function with a FWHM depending on the beam of the IRAM 30m telescope (between 15$^{\prime\prime}$ and 30$^{\prime\prime}$, depending on the frequency) to compare with the observations. This procedure was not adopted in the case of TMC-1, for which a constant density profile has been used. In the case of L1544, we cannot simply divide the observed column density by the total H$_2$ column density since the emission is not radially constant. \\
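Equation (2) is a trapezoidal integration of the density-weighted abundance along the line of sight through the core centre. A minimal sketch (the function name and the unit conventions in the docstring are ours):

```python
def model_column_density(r, n_H, x):
    """Trapezoidal evaluation of the column density
    N(X) = 2 * sum_i (r_{i-1} - r_i) * (n_{i-1} x_{i-1} + n_i x_i) / 2,
    doubled for the two half-lines of sight through the core centre.

    r: layer radii ordered from the outer edge inwards (cm);
    n_H: gas densities (cm^-3); x: abundances [X] relative to H."""
    N = 0.0
    for i in range(1, len(r)):
        N += (r[i - 1] - r[i]) * 0.5 * (n_H[i - 1] * x[i - 1] + n_H[i] * x[i])
    return 2.0 * N
```

The resulting N(X) per layer would then be beam-weighted with the Gaussian described above before comparison with the observations.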
\begin{table}
\caption{Column densities used for the comparison with the chemical modelling. \label{final-N}}
\begin{tabular}{|c|c|c|}
\hline
Species & N & method \\
& (cm$^{-2}$) &\\
\hline
CCS & (0.9--1.4) $\times$ 10$^{13}$ & CC$^{34}$S \\
C$_{3}$S & 3.1 ($\pm$ 0.1) $\times$ 10$^{12}$ & MCMC \\
SO$_2$ & (2.0--3.5) $\times$ 10$^{12}$ & LVG \\
CS & (1.8--2.0) $\times$ 10$^{13}$ & $^{13}$CS, C$^{34}$S \\
OCS & 4 ($\pm$ 1) $\times$ 10$^{12}$ & LVG \\
H$_{2}$CS & 7.3 ($\pm$ 1.0) $\times$ 10$^{12}$ & MCMC\\
HSCN & (5.8--6.2) $\times$ 10$^{10}$ & LTE \\
NS & (1.4--1.6) $\times$ 10$^{12}$ & LTE \\
NS$^+$ & 2.3 $\times$ 10$^{10}$ & \citet{cernicharo2018}\\
HCS$^+$ & (1.1--1.2) $\times$ 10$^{12}$ & HC$^{34}$S$^+$ \\
SO & $\ge$ 8 $\times$ 10$^{12}$ & LVG\\
H$_2$S & $\ge$ 1.6 $\times$ 10$^{12}$ & LTE \\
CH$_3$SH & $\le$ 2.5 $\times$ 10$^{11}$ & LTE \\
\hline
\end{tabular}
\end{table}
We present in Fig. \ref{Ncol_density_1e2} the variation of the modelled column density as a function of time from \textsc{nautilus}, compared with the observed column densities (black horizontal lines) from Table \ref{final-N}, computed in Section 3 for the 13 sulphur bearing species. The width of each line represents the errors quoted in Table \ref{final-N}. The blue lines correspond to model 1 (sulphur depletion: S/H = 8.0 $\times$ 10$^{-8}$) and the red ones to model 4 (sulphur non depletion: S/H = 1.5 $\times$ 10$^{-5}$), for an initial H density of 10$^2$ cm$^{-3}$. Table \ref{init_dens} lists the different models, which vary in the first-phase density and in the elemental sulphur gas-phase abundance. From Fig. \ref{Ncol_density_1e2} we can clearly identify which model better reproduces the observations. A sulphur depletion seems necessary overall, with the exception of SO$_2$, which is nonetheless still within the error bars. Additional models are presented in Appendix C: Fig. \ref{Ncol_density_3e3} (initial H density of 3 $\times$ 10$^3$ cm$^{-3}$, see Table \ref{init_dens}) and Fig. \ref{Ncol_density_2e4} (initial H density of 2 $\times$ 10$^4$ cm$^{-3}$, see Table \ref{init_dens}).
\begin{table}
\centering
\caption{Chemical modelling parameters used for the first phase model. \label{init_dens}}
\begin{tabular}{llc}
\hline\hline
Model & Density & Sulphur elemental\\
number & (cm$^{-3}$) & abundance\\
\hline
1 & $1\times10^2$ & $8.0\times10^{-8}$\\
2 & $3\times10^3$ & $8.0\times10^{-8}$\\
3 & $2\times10^4$ & $8.0\times10^{-8}$\\
4 & $1\times10^2$ & $1.5\times10^{-5}$\\
5 & $3\times10^3$ & $1.5\times10^{-5}$\\
6 & $2\times10^4$ & $1.5\times10^{-5}$\\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=\hsize]{all_molecules_density_1e2.pdf}\\
\caption{Variation of the modelled column density as a function of time, compared with the observed column density (black horizontal line). The blue lines correspond to model 1 (sulphur depletion: S/H = $8.0 \times 10^{-8}$) and the red ones to model 4 (sulphur non depletion: S/H = $1.5 \times 10^{-5}$), as listed in Table \ref{init_dens}. The thickness of the black line corresponds to the error bar of the observed column densities. A variation by a factor of three of the modelled column densities is shown by the corresponding coloured areas. The dashed black horizontal lines for SO and H$_2$S correspond to the lower limits on the computation of the total column density. The gray vertical area highlights an age between [1--3] $\times$ 10$^6$ years (see text).}
\label{Ncol_density_1e2}
\end{figure*}
Then, in order to find the "best-fit" model, we used the distance of disagreement \citep[see for example][]{wakelam2006}, applied to the column densities, which is computed as follows:
\begin{equation}
D(t) = \frac{1}{n_{obs}} \sum_{i}\left|\log N(X)_{obs,i}-\log N(X)_{i}(t)\right|
\end{equation}
where N(X)$_{obs,i}$ is the observed column density, N(X)$_{i}$(t) is the modelled column density at a specific age and n$_{obs}$ is the total number of observed species considered in this computation (10 in the case of the non deuterated sulphur bearing species detected in L1544). Note that we did not take into account species for which only a lower limit (SO and H$_2$S) or an upper limit (CH$_3$SH) has been derived from the observations. Fig. \ref{disagreement} shows the distance of disagreement for all 6 models described in Table \ref{init_dens}.\\
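The mean-log-distance criterion above can be sketched in a few lines; the function name and the two-species example are illustrative, not the actual implementation:

```python
import math

def distance_of_disagreement(n_obs, n_mod):
    """D = (1/n_obs) * sum_i |log10 N_obs,i - log10 N_mod,i|,
    summed over the species with well-determined observed column
    densities (limits are excluded by the caller)."""
    species = [s for s in n_obs if s in n_mod]
    return sum(abs(math.log10(n_obs[s]) - math.log10(n_mod[s]))
               for s in species) / len(species)

# Hypothetical two-species case: a perfect match for CS and a
# factor-of-ten mismatch for SO2 give D = (0 + 1)/2 = 0.5.
D = distance_of_disagreement({"CS": 2e13, "SO2": 3e12},
                             {"CS": 2e13, "SO2": 3e11})
```

Evaluating D(t) over the model time grid and taking its minimum yields the "best-fit" age discussed next.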
The minimum of the D(t) function is then obtained for the "best-fit" age. Fig. \ref{disagreement} shows that 1) the "best-fit" age is 10$^6$--10$^7$ years for most models, compatible with previous estimates of the cloud age \citep{quenard2017a}, and 2) the models where sulphur is depleted (models 1--3, respectively in red, yellow and green) are favoured over the models where sulphur is not depleted (models 4--6, respectively in light blue, dark blue and magenta). Moreover, the best solution for models 4 and 5 is found at a very early age ($\sim$ 10$^3$ years), which is not compatible with the age of the object. We report the best-fit age in Fig. \ref{Ncol_density_1e2} as a gray vertical area which highlights an age between [1--3] $\times$ 10$^6$ years for a direct comparison between observations and modelling. The ten sulphur bearing species are reproduced by the chemical model, within the observed error bars and considering a conservative factor of three on the modelled column densities. We are also in good agreement with the lower limits found for SO and H$_2$S.
We present in Fig. \ref{radial} the radial distribution of all the detected sulphur-bearing species in the network from \citet{vidal2017}, for an age between 10$^6$ and 3 $\times$ 10$^6$ years, compatible with the results from the distance of disagreement. The abundances clearly peak at a radius between [1--2] $\times$ 10$^4$ au. Carbon, oxygen and sulphur depletion affect the cold and dense regions within L1544 \citep[e.g.][]{vasyunin2017}. Species like CO and CS disappear rapidly from the gas phase in the {\it deuteration zone} (see section 4), while species like N$_2$H$^+$ and NH$_3$ survive much longer at high densities. As a result, the pre-stellar core gradually develops a differentiated interior characterised by a centre rich in depletion-resistant species (such as the deuterium-bearing species) surrounded by layers richer in depletion-sensitive molecules (such as the sulphur-bearing species). This molecular differentiation has been identified in many starless cores \citep[e.g.][]{tafalla2006} and non-thermal desorption processes have been invoked to explain it \citep[e.g.][]{vastel2014,balucani2015,vastel2016,vasyunin2017}: FUV photo-desorption, cosmic rays and chemical desorption. As a consequence, we cannot simply assume a constant abundance, as was done for dark clouds such as TMC-1 \citep{vidal2017}, for the comparison between the observations and the chemical modelling. Note that in the current model, FUV photo-desorption plays a minor role compared to chemical desorption.\\
\begin{figure}
\centering
\includegraphics[width=0.9\hsize]{distance_disagreement.pdf}
\caption{Distance of disagreement for models 1--3 (depletion) and 4--6 (non depletion). See Table \ref{init_dens}.}
\label{disagreement}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.7\hsize]{all_molecules_all_models_radius_v2_nsp.pdf}
\caption{Radial distribution of the detected sulphur bearing species abundances in L1544 for an age between 10$^6$ and 3 $\times$ 10$^6$ years, for models 1--6 (see Table \ref{init_dens}).}
\label{radial}
\end{figure*}
Methyl mercaptan (CH$_3$SH) is tentatively detected in the L1544 pre-stellar core. It is likely emitted in the external layer of L1544 at $\sim$ 10$^4$ au (see Fig. \ref{ch3sh_model}), where the density drops to about 10$^4$ cm$^{-3}$ and the temperature increases to 10--12 K. We used the same chemical modelling as for the other sulphur-bearing species and present in Fig. \ref{ch3sh_model} the results from the modelling for models 1--6 (see Table \ref{init_dens}) compared to the upper limit on the column density (dashed line). It is clear from this figure that a non-depletion regime where S/H=$1.5\times10^{-5}$ overproduces this species (red line domain) and that a depletion regime where S/H=$8\times10^{-8}$ (blue line domain) is more compatible with our observations. In the current network, CH$_3$SH is mainly formed in the gas phase (>80\%, depending on the model) through $\rm CH_3SH_2^+$ electron recombination ($\rm CH_3SH_2^+ + e^- \rightarrow CH_3SH + H$), assuming that the $\rm CH_3^+~+~H_2S$ reaction leads to $\rm CH_3SH_2^+$.\\
\begin{figure}
\centering
\includegraphics[width=0.8\hsize,clip=true,trim=0 0 0 1.29cm]{CH3SH_all_models_radius.pdf}
\includegraphics[width=0.8\hsize,clip=true,trim=0 0 0 0.9cm]{CH3SH_density_1e2.pdf}\\
\includegraphics[width=0.8\hsize,clip=true,trim=0 0 0 0.9cm]{CH3SH_density_2e4.pdf}
\includegraphics[width=0.8\hsize,clip=true,trim=0 0 0 0.9cm]{CH3SH_density_3e3.pdf}\\
\caption{Top panel: Radial distribution of CH$_3$SH for an age between 10$^6$ and 3 $\times$ 10$^6$ years, for models 1--6 (see Table \ref{init_dens}). Upper to lower panel: Variation of the modelled column density as a function of time and the comparison with the observed upper limit on the column density (dashed black horizontal line). The blue lines correspond to models 1--3 (sulphur depletion: S/H=$8\times10^{-8}$) and the red ones correspond to models 4--6 (sulphur non-depletion: S/H=$1.5\times10^{-5}$) as listed in Table \ref{init_dens}. A variation by a factor of three of the modelled column densities is shown by the corresponding coloured areas. The gray vertical area highlights an age between [1--3] $\times$ 10$^6$ years (see text).}
\label{ch3sh_model}
\end{figure}
To summarize, we used the ten detected species (CCS, C$_3$S, SO$_2$, CS, OCS, H$_2$CS, HSCN, NS, NS$^+$, HCS$^+$) and some of their isotopologues, as well as the lower limits on both SO and H$_2$S and the upper limit on CH$_3$SH, to constrain the sulphur depletion in the L1544 pre-stellar core. The true degree of sulphur depletion is difficult to constrain, but the results from our chemical modelling are consistent with a core where the initial sulphur elemental abundance is depleted with respect to the cosmic value (depletion $\sim$ 190). This contradicts what was found by \citet{vidal2017} towards the starless core TMC-1 (CP). However, the method used by \citet{vidal2017} is very different from ours, since we take into account the density profile of the L1544 pre-stellar core. Their results might differ substantially if the density profile of TMC-1 (CP) is not flat. It is also difficult to compare our results with the study of the Barnard B1b globule, which represents a more advanced stage than L1544 and shows many structures in one beam. Higher spatial resolution is needed for Barnard B1b to determine the possible influence of the B1b-S outflow on the sulphur chemistry.
\section{Conclusions}
We presented all the sulphur-bearing species that have been detected in a proto-typical pre-stellar core, L1544, using a spectral survey performed at the IRAM-30m. We computed the column densities for each species and compared them with the results of a chemical modelling taking into account a recently released sulphur chemical network. All species from this network are emitted in the external layer of the pre-stellar core, at about 10000 au, and the observations are best reproduced using an initial gas-phase sulphur abundance of 8 $\times$ 10$^{-8}$, i.e. 0.5$\%$ of the cosmic value of 1.5~$\times$~10$^{-5}$. Sulphur is thus likely depleted in the cold and dense phases of the interstellar medium, although there is no strong evidence of sulphur in observations of the ice mantles so far.
\section*{Acknowledgements}
C.V. is grateful for the help of the IRAM staff at Granada during the data acquisitions, and also for their dedication to the telescope.
\bibliographystyle{mnras}
\section{Introduction} \label{sec:intro}
The $\delta$\,Scuti type stars are pulsating stars situated in the classical Cepheid instability strip \citep{Breger2000}. Most of the $\delta$\,Scuti pulsators are stars of spectral types A0$-$F5\,III$-$V. They are main-sequence or immediate
post-main-sequence variable stars moving to the giant branch \citep{Breger2000}.
Pulsation periods of $\delta$\,Scuti type stars range between $\sim$0.008 and 0.42~days \citep{Sanchez2017}. Excited modes have amplitudes from 0.001~mag \citep{Breger2000} up to almost one magnitude in blue bands \citep{Sanchez2017}.
The majority of $\delta$\,Scuti type stars have multiple non-radial pulsation modes, while some of them are pure radial pulsators \citep{Breger2000}. The nature of the excited modes can be complicated: pure $p$-modes, pure $g$-modes, or mixed $p$- and $g$-modes. Oscillations of $\delta$\,Scuti stars are not fully understood: many modes are theoretically expected to be excited in a given frequency range, but not all modes in this range are detected \citep{Goupil2005}.
$\gamma$\,Doradus stars are another group of variable stars, located in the close neighbourhood of $\delta$\,Scuti stars. Some stars are hybrid $\delta$\,Scuti--$\gamma$\,Doradus pulsators, showing high-frequency $p$-mode pulsations typical of $\delta$\,Scuti stars and low-frequency $g$-mode oscillations characteristic of $\gamma$\,Doradus stars.
Only high-amplitude, low-frequency variations of $\gamma$\,Doradus and hybrid stars can be observed from the ground; low-amplitude frequencies can be detected only with space telescopes. Observations from space missions such as MOST, CoRoT, and $Kepler$ have revealed a large number of hybrid $\delta$\,Scuti--$\gamma$\,Doradus pulsators, situated where the instability strips of $\delta$\,Scuti and $\gamma$\,Doradus stars partially overlap in the Hertzsprung--Russell (HR) diagram. These stars show behavior typical of hybrid $\delta$\,Scuti--$\gamma$\,Doradus pulsators (\citealt{Grigahcene2010}, \citealt{Uytterhoeven2011}, \citealt{Bradley2015}, \citealt{Xiong2016}, \citealt{Sanchez2017}).
Mode identification is a difficult task. The method of mode equidistances does not always work, as the frequencies of some modes do not follow the expected patterns. A direct fit of theoretical models to the observed frequencies is difficult, as several choices of stellar models are often possible within the uncertainties. Pulsation modes are very sensitive to the treatment of convection, so a reliable description of time-dependent convection is necessary. The fast rotation of this type of stars causes additional problems for mode identification \citep{Goupil2005}.
Though the group of $\delta$\,Scuti type stars is among the most numerous groups of pulsators, it is also one of the least understood. Much more observational information about this type of stars is necessary in order to improve their models and uncover details of the processes happening beneath their surfaces.
\section{Selected targets}
From the $\delta$\,Scuti candidates selected from the Hipparcos catalog by \cite{Handler2002}, we chose 13 stars suitable for observations with the telescopes of the Mol\.etai Astronomical Observatory (MAO) in Lithuania.
We list the selected $\delta$\,Scuti candidates in Table~\ref{table:1} and show their positions in the HR diagram (Figure~\ref{Fig1_strip}). $T_{\rm eff}$ and $L/L_{\rm Sun}$ were taken from \cite{McDonald2012}. Positions of theoretical instability strips for $\delta$ Scuti and $\gamma$\,Doradus were taken from \cite{Dupret2005} and \cite{Xiong2016}.
\begin{table*}
\caption{Information about observed stars. }
\label{table:1}
\begin{tabular}{l c c c c c c c c c}
\hline
Star & $\alpha$(2000) & $\delta$(2000) & $V$ &Sp.type& Images & Runs &Point error& STD of LC &Comparison \\
& h m s &$^\circ$ $^\prime$ $^{\prime\prime}$ & mag &(Simbad) & & & (mean) & &star \\
\hline
HIP 2923 & 00 37 03.56 & +31 29 11.31 & 7.65 & F0III &2965 &17 &13.50 &22.76 &TYC 2275-1038-1 \\
HIP 5526 & 01 10 43.31 & +27 52 04.61 & 8.10& F0 &1684 &12 &13.24 &33.20 &TYC 1753-1926-1\\
HIP 5659 & 01 12 41.26 & +65 00 32.83 & 7.62& F0 &1556 &12 &8.30 &23.04 &TYC 4038-716-1\\
HIP 11090 & 02 22 50.30 & +41 23 46.67 & 5.80& F0III-IV &1682 &12 &6.16 &21.70 &TYC 2835-85-1\\
HIP 17585 & 03 46 00.94 & +67 12 05.78 & 5.79& F0IV &3475 &14 &14.82 &29.91 &TYC 4075-932-1\\
HIP 74155 & 15 09 06.24 & +69 39 11.11 & 7.12& F2 &175 &6 &10.88 &19.33 &TYC 4411-835-1\\
HIP 101473 & 20 33 53.70 & +10 03 35.05 & 6.54& A2Vnn &572 &7 &6.30 &15.75 &TYC 1092-1447-1\\
HIP 106219 & 21 30 53.28 & +24 46 51.90 & 8.25& A5 &1312 &14 &5.41 &12.12 &TYC 2192-608-1\\
HIP 106223 & 21 30 57.05 & +16 34 15.57 & 8.64& A5 &3531 &28 &15.62 &43.59 &TYC 1664-433-1\\
HIP 107786 & 21 50 08.23 & +19 25 26.38 & 7.21& A5 &1215 &14 &10.38 &22.39 &TYC 1674-299-1\\
HIP 113487 & 22 58 58.96 & +34 04 00.67 & 7.54& A0 &916 &11 &8.06 &29.04 &TYC 2758-490-1\\
HIP 115093 & 23 18 42.26 & +36 05 24.81 & 7.37& F0 &1390 &10 &8.10 &16.69 &TYC 2764-1570-1\\
HIP 115856 & 23 28 23.54 & +19 53 08.09 & 6.67& F0 &3742 &17 &10.22 &26.84 &TYC 1726-1519-1\\
\hline
\end{tabular}
\end{table*}
\cite{Dupret2005} calculated the instability strips of $\delta$\,Scuti and $\gamma$\,Doradus stars. These instability strips fit ground-based observations well. \cite{Dupret2005} also predicted hybrid stars in the overlapping region of the $\delta$\,Scuti and $\gamma$\,Doradus instability strips.
\begin{figure}[!ht]
\centering
\includegraphics[width=\hsize]{EPak1_instability_strip.png}
\caption{Positions of analyzed $\delta$\,Scuti candidates and instability strips. See text for more explanations.}
\label{Fig1_strip}
\end{figure}
Figure~\ref{Fig1_strip} shows also the more recently calculated instability strips of $\delta$\,Scuti and $\gamma$\,Doradus stars by \cite{Xiong2016}, which fit well the data collected by space telescopes.
The Padova evolutionary tracks\footnote{http://pleiadi.pd.astro.it/} of six stars with different masses (thin dotted lines) and the Zero Age Main Sequence (ZAMS, thick gray line) are also shown in Figure~\ref{Fig1_strip}.
Twelve of our selected $\delta$\,Scuti candidate stars lie inside the instability strip of $\delta$\,Scuti or $\gamma$\,Doradus stars, and only one of the candidates is located close to the blue edge of the $\delta$\,Scuti instability strip. Thus, we may expect to observe both types of oscillations in at least some targets.
In Table~\ref{table:1} we present the list of targeted $\delta$\,Scuti candidates, their coordinates, $V$ magnitudes, spectral types, the number of images taken, the number of runs, parameters of data quality (mean errors of the observed points and standard deviations of the light curves), and the names of the comparison stars used for the data reduction. All the stars belong to the preliminary PLATO fields (\citealt{Miglio2017}): HIP\,5659, HIP\,17585, and HIP\,74155 belong to STEP02; HIP\,2923, HIP\,5526, and HIP\,74155 belong to STEP07; and the remaining stars belong to the field STEP05.
\section{Observations}
Observations were performed with the 51~cm Maksutov-type MAO telescope (35~cm working diameter of the primary mirror) and an Apogee Alta U47 CCD camera.
This instrumentation allows us to observe bright stars without saturation of CCD pixels.
For the observations we used the $Y$ filter of the medium-band Vilnius photometric system.
Its effective wavelength is 466~nm and its width is 26~nm \citep{Straizys&Sviderskiene1972}; it is close to the Johnson $B$ filter, but is transparent over a narrower range of wavelengths.
The observations were carried out in a semi-robotic mode, i.e. the telescope changed the pointing and took exposures of different fields of the sky according to a script prepared beforehand. This mode allowed us to obtain light curves (LC) of stars in $5-7$ different fields of the sky during the same night with a cadence of $15-30$\,minutes.
Observations were carried out using blind tracking without autoguiding; thus we calibrated the CCD images very carefully in order to avoid artificial signals. For the CCD image calibration we took more than 10 images of the bias, dark and sky flat fields during each night.
A layout of obtained LCs of the $\delta$\,Scuti candidates is presented in Figure~\ref{Fig2_all_LCs}.
\begin{figure}
\centering
\includegraphics[width=\hsize]{EPak2_all_LCs.png}
\caption{A layout of light curves for the investigated $\delta$\,Scuti candidate stars. }
\label{Fig2_all_LCs}
\end{figure}
\section{Data reduction and analysis}
The observed images were first processed with the Muniwin program of the software package C-Munipack\footnote{http://c-munipack.sourceforge.net/} \citep{Muniwin14}, which is built on the basis of a software package DAOPHOT for doing stellar photometry in crowded stellar fields \citep{Daophot87}.
The Muniwin program is designed for time-series differential aperture photometry and searching for variable stars.
We used the {\it Advanced} image calibration procedure to perform the bias and dark frame subtraction and the flat-field correction.
We performed photometry with different apertures in order to select the best one, i.e. the one giving the smallest standard deviation of the obtained light curves. The selected aperture was 4 or 5 pixels, depending on the field. We used it to determine the instrumental magnitudes of all detected stars in the field.
For the further analysis, we calculated differential magnitudes of our targets using comparison stars. We used one comparison star per field, chosen to have a magnitude most similar to the target magnitude and a light curve with no signs of variability. The names of the used comparison stars are listed in the last column of Table~\ref{table:1}. We obtained amplitude spectra of the selected comparison stars (Figure~\ref{Fig3_FT_compar}) and checked whether they show signals at the frequencies observed in the $\delta$\,Scuti candidates.
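The aperture selection and differential-photometry steps above can be sketched as follows; the photometry itself was done with Muniwin, so this is only an illustrative stand-in, and the magnitudes below are invented:

```python
import statistics

def differential_lc(target_mags, comp_mags):
    """Differential light curve: target minus comparison-star
    instrumental magnitudes, frame by frame."""
    return [t - c for t, c in zip(target_mags, comp_mags)]

def best_aperture(lc_by_aperture):
    """Return the aperture (in pixels) whose differential light
    curve has the smallest standard deviation."""
    return min(lc_by_aperture,
               key=lambda ap: statistics.stdev(lc_by_aperture[ap]))

# Illustrative: the 4-pixel aperture gives the quieter light curve.
lcs = {4: [0.00, 0.01, -0.01, 0.00], 5: [0.00, 0.05, -0.04, 0.02]}
print(best_aperture(lcs))  # → 4
```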
\begin{figure}[!ht]
\centering
\includegraphics[width=\hsize]{EPak3_FT_of_comparisons.png}
\caption{{Amplitude spectra of the comparison stars used for differential photometry of $\delta$\,Scuti candidates. }
}
\label{Fig3_FT_compar}
\end{figure}
The LCs were analyzed by Fourier decomposition into sinusoidal components \citep{Fourier1822}.
We used the software Period04\footnote{https://www.univie.ac.at/tops/Period04/} \citep{Lenz05} to decompose the LCs, obtain amplitude spectra, spectral windows (SW) and noise levels, and to perform the prewhitening procedures needed to find all frequencies, amplitudes and phases of the pulsations in the light curves. As single-site observations were used for the analysis, the SWs have high side-lobes at the 1\,$c/d$ aliases. The highest 1\,$c/d$ alias side-lobes were obtained for HIP\,74155 and reach 95.7$\%$ of the central peak, while the lowest were obtained for HIP\,115856 and equal 83.1$\%$ of the central peak. The lengths of the LCs also differ; thus the FWHM of the central lobe of the SWs varies between 0.0374\,$c/d$ (HIP\,106223) and 0.1558\,$c/d$ (HIP\,11090). The worst SW was obtained for HIP\,74155, since it had the smallest set of data points and large ($2-5$ day) gaps between runs.
Some stars had signals at low frequencies ($\approx$\,1\,$c/d$) in their amplitude spectra, which required special treatment. Signals at such frequencies may be caused by daily variations of weather conditions (especially when a target and a comparison star are of different colors), by instrumental instabilities, or by the varying position of a star on the CCD chip during blind tracking, if the star periodically crosses the same defective pixel or dust spot. In cases of significant signals at low frequencies, we checked the resulting amplitude spectra using different comparison stars of different colors.
If the analysis with all comparison stars gave the same signal at low frequencies, we attributed it to stellar pulsations.
We observed the stars in blind-tracking mode, which might cause artifacts in the amplitude spectra at any frequency. Though the positions of the stars were not stable from one night to another or during the same night, these variations were not periodic and could not produce periodic signals. Moreover, we performed a careful reduction of the CCD images using calibration images (bias, dark and flat-field) taken for every night separately, trying to eliminate any newly appeared dust grains or other defects in the field.
For the analysis of every star we used a so-called prewhitening procedure. First, we calculated an amplitude spectrum with Period04 and identified the highest-amplitude peak at frequencies higher than 2\,$c/d$, assuming that signals at lower frequencies may be caused by instrumental or weather instabilities.
If there were such low-frequency signals, they were analyzed last. An exception was made only for two stars: HIP\,2923, as the signal at 0.9651\,$c/d$ was dominant and its side-lobes could affect other signals in the amplitude spectrum; and HIP\,106223, as this star showed signals only at low frequencies.
We then calculated a sinusoid with the identified frequency and improved its amplitude and phase simultaneously with a least-squares fit.
Then we checked the significance of the extracted frequency by comparing the amplitude of the signal with the mean amplitude of the residuals in a box of $\pm$10\,$c/d$ around the extracted frequency, i.e. we calculated a signal-to-noise ratio (S/N) at the extracted frequency. The noise level was calculated using the same software, Period04. According to \cite{Breger1993}, a signal may be assumed significant if its ${\rm S/N} \gtrapprox 4$. \citet{Alvarez1998} have shown that, in a box of 10~$c/d$, ${\rm S/N}=3.7$ can be taken as an indication of a 99$\%$ significance level, while ${\rm S/N}=3.2$ indicates a 90$\%$ significance level.
Some authors use a S/N cut as high as 6 in order to be fully confident in the significance of signals. Almost half of our analyzed stars have at least one frequency reaching the ${\rm S/N} \geq 6$ level, while another five analyzed stars did not exceed the ${\rm S/N} = 5$ level. We interpreted signals as significant when their ${\rm S/N} \geq 4$ in a box of $\pm$10~$c/d$ around the extracted frequency, and verified that changing the box size by 10\,$c/d$ up or down does not alter the S/N level significantly. We list the extracted frequencies in the order of their extraction, so that experienced readers may decide for themselves whether to accept a given frequency or not.
We extracted only one signal at frequencies lower than 2~$c/d$, even if the peaks left after the extraction gave ${\rm S/N} >4$, because such low frequencies are always difficult to deal with in ground-based observations. In that case we only state the fact of a low-frequency detection, without a detailed analysis.
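The S/N criterion above can be sketched as follows. This is a simplified stand-in for Period04's noise calculation, and the frequency grid and amplitudes are illustrative only:

```python
def signal_to_noise(freqs, resid_amps, f0, a0, half_width=10.0):
    """S/N of an extracted peak: its amplitude a0 divided by the mean
    residual amplitude within +/- half_width c/d of its frequency f0."""
    box = [a for f, a in zip(freqs, resid_amps) if abs(f - f0) <= half_width]
    noise = sum(box) / len(box)
    return a0 / noise

# Illustrative residual spectrum with a flat ~2 mmag noise level:
freqs = [0.1 * k for k in range(1, 301)]   # 0.1 .. 30 c/d
resid = [2.0 for _ in freqs]               # residual amplitudes, mmag
print(signal_to_noise(freqs, resid, f0=15.0, a0=9.0))  # → 4.5
```

With the ${\rm S/N} \geq 4$ criterion, a 9 mmag peak over 2 mmag mean residuals would be accepted as significant.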
\begin{table}
\caption{Observed signals in amplitude spectra.}
\label{table:2}
\resizebox{\columnwidth}{!}{
\centering
\begin{tabular}{c c c c c}
\hline
Frequency $\pm \sigma$ & Amplitude $\pm \sigma$ & Phase $\pm \sigma$ & Noise & S/N \\
$c/d$ & mmag & & mmag & \\
\hline
\multicolumn{5}{c}{HIP\,2923} \\
0.9651$\pm$0.0022 & 8.73$\pm$0.62 & 0.227$\pm$0.015 & 1.66& 5.25 \\
15.0271$\pm$0.0019 & 8.41$\pm$0.65 & 0.764$\pm$0.010 & 1.45& 5.81 \\
16.0973$\pm$0.0022 & 7.28$\pm$0.66 & 0.277$\pm$0.012 & 1.25& 5.84 \\
11.7260$\pm$0.0021 & 6.24$\pm$0.62 & 0.987$\pm$0.013 & 1.26& 4.94 \\
6.7078$\pm$0.0025 & 6.02$\pm$0.61 & 0.845$\pm$0.015 & 1.40& 4.30 \\
11.4043$\pm$0.0032 & 4.71$\pm$0.61 & 0.262$\pm$0.021 & 1.09 & 4.32 \\
\multicolumn{5}{c}{HIP\,5526} \\
9.3431$\pm$0.0007 & 24.19$\pm$0.77 & 0.806$\pm$0.005 & 4.77& 5.07 \\
5.1385$\pm$0.0009 & 20.46$\pm$0.77 & 0.557$\pm$0.007 & 4.23& 4.84 \\
8.5147$\pm$0.0007 & 17.92$\pm$0.83 & 0.788$\pm$0.005 & 3.26& 5.50 \\
9.7283$\pm$0.0012 & 13.21$\pm$0.78 & 0.291$\pm$0.008 & 2.70& 4.89 \\
5.5947$\pm$0.0014 & 12.46$\pm$0.84 & 0.849$\pm$0.010 & 2.58& 4.82 \\
12.5515$\pm$0.0021 & 8.22$\pm$0.80 & 0.255$\pm$0.014 & 1.86& 4.41 \\
0.5420$\pm$0.0019 & 8.05$\pm$0.76 & 0.799$\pm$0.013 & 1.75& 4.61 \\
\multicolumn{5}{c}{HIP\,5659} \\
9.4932$\pm$0.0164 & 16.45$\pm$0.94 & 0.049$\pm$0.011& 2.90& 5.67 \\
10.2439$\pm$0.0115 & 11.49$\pm$0.85 & 0.629$\pm$0.014& 2.45& 4.69 \\
9.8508$\pm$0.0125 & 12.46$\pm$0.93 & 0.674$\pm$0.012& 1.98& 6.30 \\
7.0028$\pm$0.0089 & 8.95$\pm$0.83 & 0.814$\pm$0.015& 1.82& 4.92 \\
\multicolumn{5}{c}{HIP\,11090} \\
15.8617$\pm$0.0024 & 11.30$\pm$1.13 & 0.387$\pm$0.016 & 1.63& 6.94 \\
28.7418$\pm$0.0049 & 5.34$\pm$0.59 & 0.453$\pm$0.034 & 1.19& 4.47 \\
0.9662$\pm$0.0027 & 8.89$\pm$0.80 & 0.811$\pm$0.019 & 1.93& 4.61 \\
\multicolumn{5}{c}{HIP\,17585} \\
13.1631$\pm$0.0078 & 13.99$\pm$2.01 & 0.887$\pm$0.023 & 3.33& 4.20 \\
\multicolumn{5}{c}{HIP\,74155} \\
11.7619$\pm$0.0039 & 14.15$\pm$2.31 & 0.847$\pm$0.026 & 2.99& 4.73 \\
\multicolumn{5}{c}{HIP\,101473} \\
6.0374$\pm$0.0063 & 7.24$\pm$1.44 & 0.997$\pm$0.033 & 1.79& 4.04 \\
4.3081$\pm$0.0080 & 5.64$\pm$1.57 & 0.038$\pm$0.042 & 1.34& 4.20 \\
\multicolumn{5}{c}{HIP\,106219} \\
11.3007$\pm$0.0011 & 6.66$\pm$0.44 & 0.113$\pm$0.010 & 0.74& 8.95 \\
10.8018$\pm$0.0023 & 2.91$\pm$0.43 & 0.042$\pm$0.023 & 0.69& 4.19 \\
14.2773$\pm$0.0027 & 2.45$\pm$0.42 & 0.210$\pm$0.027 & 0.52 & 4.74 \\
\multicolumn{5}{c}{HIP\,106223} \\
1.1429$\pm$0.0004 & 31.09$\pm$0.71 & 0.194$\pm$0.004 & 2.84& 10.95 \\
\multicolumn{5}{c}{HIP\,107786} \\
15.4817$\pm$0.0025 & 9.87$\pm$1.53 & 0.932$\pm$0.025 & 2.08& 4.75 \\
1.3778$\pm$0.0018 & 13.11$\pm$1.48 & 0.476$\pm$0.018 & 2.78 & 4.72 \\
\multicolumn{5}{c}{HIP\,113487} \\
21.9102$\pm$0.0013 & 23.01$\pm$1.80 & 0.327$\pm$0.013 & 3.73& 6.16 \\
17.2064$\pm$0.0025 & 11.40$\pm$1.89 & 0.012$\pm$0.024 & 2.82& 4.04 \\
\multicolumn{5}{c}{HIP\,115093} \\
11.4318$\pm$0.0040 & 10.61$\pm$1.39 & 0.784$\pm$0.021 & 2.18& 4.87 \\
\multicolumn{5}{c}{HIP\,115856} \\
9.1109$\pm$0.0007 & 14.08$\pm$0.65 & 0.917$\pm$0.008 & 1.75& 8.06 \\
16.9660$\pm$0.0016 & 7.48$\pm$0.69 & 0.068$\pm$0.016 & 1.28& 5.86 \\
18.6810$\pm$0.0018 & 5.81$\pm$0.68 & 0.676$\pm$0.019 & 1.11& 5.24 \\
0.9669$\pm$0.0011 & 9.55$\pm$0.00 & 0.240$\pm$0.011 & 1.29& 7.43 \\
\hline
\end{tabular}
}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=15cm]{EPak4_LC_all_fragment_.png}
\caption{Light curves of the $\delta$\,Scuti candidate stars obtained during four best quality observing runs. The black dots with error-bars correspond to observations, the red curves represent the LC calculated according to the determined frequencies and amplitudes of pulsations.
}
\label{Fig4_LC_fragment}
\end{figure*}
The upper bound of the frequency range is defined by the Nyquist frequency, which depends on the sampling of the light curves. In our case of uneven sampling, the effective Nyquist frequency is close to 50\,$c/d$, i.e. twice the highest frequency of pulsations observed in our sample of stars.
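For an evenly sampled light curve the Nyquist frequency is half the sampling rate; a rough check of the $\sim$50\,$c/d$ figure, using the fastest cadence of 15 minutes quoted above (uneven sampling pushes the effective limit somewhat higher):

```python
def nyquist_cd(cadence_minutes):
    """Nyquist frequency in cycles/day for an even sampling cadence."""
    samples_per_day = 24.0 * 60.0 / cadence_minutes
    return samples_per_day / 2.0

print(nyquist_cd(15.0))  # → 48.0 c/d
```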
\section{Characterization of $\delta$\,Scuti candidates}
We have collected enough data for the variability analysis of the 13 $\delta$\,Scuti candidates (Table~\ref{table:1}).
As expected, the amplitude spectra of the LCs revealed signals mostly between 5\,$c/d$ and 22\,$c/d$, a range of frequencies intrinsic to $\delta$\,Scuti stars. Some of the stars also had pulsations at frequencies below 2\,$c/d$, which may be an indication of a $\gamma$\,Doradus or a $\delta$\,Scuti--$\gamma$\,Doradus hybrid star. However, we have to consider that a frequency at around 1\,$c/d$ may be caused by daily atmospheric transmittance variations or instrumental instabilities.
A list of all detected signals ordered according to their prewhitening in every star is presented in Table~\ref{table:2}.
Figure~\ref{Fig4_LC_fragment} shows how well the theoretical and observed LCs fit each other. The observed LCs correspond to the four best-quality runs for every star. The theoretical LCs were calculated using Period04 \citep{Lenz05} with the parameters of brightness variations given in Table~\ref{table:2}.
Below we discuss every star separately in more detail.
\subsection{HIP\,2923}
\begin{figure}[!ht]
\centering
\includegraphics[width=\hsize]{EPak5_FT_HIP2923.png}
\caption{Amplitude spectrum of the combined LC of HIP\,2923 and its prewhitening. In each panel, from top to bottom, one signal with S/N$>$4 is prewhitened from the time series and a new spectrum of the residuals is obtained.
The black solid curves show the amplitude spectra. The signal selected for prewhitening has the shape of the spectral window, which is drawn on top of the amplitude spectrum.
The dashed horizontal line corresponds to four times the noise level at the selected frequency; its length shows the size of the box used for the noise-level calculation. The vertical dotted lines show the positions and amplitudes of already prewhitened frequencies.
}
\label{Fig5_FT_2923}
\end{figure}
According to fundamental parameters derived by \cite{McDonald2012}, HIP\,2923 is located on a blue edge of $\gamma$\,Doradus stars defined by \cite{Dupret2005} (Figure~\ref{Fig1_strip}). An automated classification of Hipparcos unsolved variables by \cite{Rimoldini2012} classified it as a low amplitude $\delta$\,Scuti variable: both {\it Random forest} (RF) and {\it Multistage Bayesian Network} (MB) methods gave the same result with probabilities of 0.86 and 0.89, respectively. \cite{Rimoldini2012} also derived a frequency of the dominant signal to be 9.3455~$c/d$.
The prewhitening process of our data on HIP\,2923 is presented in Figure~\ref{Fig5_FT_2923}.
An amplitude spectrum of HIP\,2923 is rich in signals.
We found two dominant signals at 0.9651\,$c/d$ and 15.0271\,$c/d$, with amplitudes of 8.73\,mmag and 8.41\,mmag, respectively. That may be an indication of $\delta$\,Scuti--$\gamma$\,Doradus hybrid pulsations. In total, we found five signals at frequencies common to $\delta$\,Scuti type stars and one signal at low frequencies typical of $\gamma$\,Doradus type stars.
As the signal at 0.9651\,$c/d$ required special caution, we checked the resulting amplitude spectra using different comparison stars of different colors, but none of them reduced its amplitude.
Taking into account the position of HIP\,2923 in the HR diagram and the detected pulsation signals, HIP\,2923 could be considered a $\delta$\,Scuti--$\gamma$\,Doradus hybrid star.
\subsection{HIP\,5526}
\begin{figure}[!ht]
\centering
\includegraphics[width=\hsize]{EPak6_FT_HIP5526.png}
\caption{Amplitude spectrum of the combined LC of HIP\,5526, and its prewhitening. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}.
}
\label{Fig6_FT_5526}
\end{figure}
According to fundamental parameters derived by \cite{McDonald2012}, HIP\,5526 is close to the red edge of $\gamma$\,Doradus instability strip derived by \cite{Dupret2005}. An automated classification of Hipparcos unsolved variables by \cite{Rimoldini2012} classified HIP\,5526 as a low amplitude $\delta$\,Scuti variable with probabilities equal to 0.28 (RF method) and 0.46 (MB method).
\cite{Rimoldini2012} used Hipparcos observations and found for this star a dominant signal at 0.67842\,$c/d$. In our set of observations we found more pulsation frequencies for this star.
We prewhitened seven frequencies from the LC of HIP\,5526 (Table~\ref{table:2} and Figure~\ref{Fig6_FT_5526}). These signals could not come from the used comparison stars, as the amplitude spectra of the comparison stars show no signs of variability (Figure~\ref{Fig3_FT_compar}). The dominant signal in the amplitude spectrum of HIP\,5526 is at 9.3431\,$c/d$. Like \cite{Rimoldini2012}, we found that HIP\,5526 has a signal at low frequency: it peaks at 0.5420\,$c/d$ with $\rm{S/N}=4.61$.
According to its position in the HR diagram and the signal at low frequency, HIP\,5526 could be considered a $\delta$\,Scuti--$\gamma$\,Doradus hybrid star. This presumption should be taken into account in further analyses of HIP\,5526.
\subsection{HIP\,5659}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak7_FT_HIP5659.png}
\caption{Amplitude spectrum of the combined LC of HIP\,5659, and its prewhitening. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}.
}
\label{Fig7_FT_5659}
\end{figure}
According to fundamental parameters derived by \cite{McDonald2012}, HIP\,5659 is close to the red edge of $\gamma$\,Doradus instability strip derived by \cite{Dupret2005}. In Simbad database, HIP\,5659 is classified as a double or multiple star.
\cite{Rimoldini2012} classified HIP\,5659 as low amplitude $\delta$\,Scuti variable with probabilities equal 0.57 (RF method) and 0.69 (MB method).
We prewhitened four frequencies from the LC of HIP\,5659 (Table~\ref{table:2}, Figure~\ref{Fig7_FT_5659}). We found the dominant signal at 9.4932\,$c/d$, close to the dominant signal at 9.85145\,$c/d$ found by \cite{Rimoldini2012} from the Hipparcos data. The amplitude spectrum of HIP\,5659 looks typical of a $\delta$\,Scuti type star and does not show any sign of variability at frequencies below 2\,$c/d$.
\subsection{HIP\,11090}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak8_FT_HIP11090.png}
\caption{Amplitude spectrum of the combined LC of HIP\,11090 and its prewhitening. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}.
}
\label{Fig8_FT_11090}
\end{figure}
According to the fundamental parameters derived by \cite{McDonald2012}, HIP\,11090 is located in the center of the instability strip of $\gamma$\,Doradus stars defined by \cite{Dupret2005} (Figure~\ref{Fig1_strip}). \cite{Rimoldini2012} classified this star as a low amplitude $\delta$\,Scuti variable with probabilities of 0.34 (RF method) and 0.52 (MB method), and derived the dominant signal at 15.86354\,$c/d$.
We observed HIP\,11090 during eleven runs and obtained a clean amplitude spectrum with an obvious peak at 15.8617\,$c/d$ with an amplitude of 11.30~mmag (Figure~\ref{Fig8_FT_11090}, Table~\ref{table:2}). This frequency is very close to the one derived by \cite{Rimoldini2012}. In addition, we found two more signals with ${\rm S/N}>4$, at 28.7418\,$c/d$ and 0.9662\,$c/d$. We tend to trust the low frequency signal in HIP\,11090 because its position in the HR diagram suggests that this star may be a $\gamma$\,Doradus star or a hybrid star.
As HIP\,11090 shows pulsations at frequencies typical of both $\delta$\,Scuti and $\gamma$\,Doradus stars, it may be considered a hybrid star.
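The ${\rm S/N}>4$ significance criterion used for accepting signals can be illustrated with a minimal sketch; this is our own toy implementation on synthetic data, with the noise estimated as the mean amplitude of the residual (prewhitened) spectrum around the peak, and it is not the actual code used for our analysis.

```python
import numpy as np

def amp(t, y, f):
    """Discrete Fourier amplitude estimate of y(t) at frequency f (in c/d)."""
    return 2.0 * np.abs(np.mean(y * np.exp(-2j * np.pi * f * t)))

def snr(t, resid, f_peak, a_peak, box=2.0, step=0.02):
    """Amplitude signal-to-noise ratio: peak amplitude divided by the mean
    amplitude of the residual spectrum within +-box c/d of the peak."""
    grid = np.arange(max(step, f_peak - box), f_peak + box, step)
    noise = np.mean([amp(t, resid, f) for f in grid])
    return a_peak / noise

# synthetic example: one coherent 12 mmag signal in 4 mmag white noise
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 12.0, 3000))
signal = 12.0 * np.sin(2 * np.pi * 15.86 * t)
y = signal + 4.0 * rng.normal(size=t.size)

a_peak = amp(t, y, 15.86)
value = snr(t, y - signal, 15.86, a_peak)   # residual after ideal prewhitening
```

A clear detection such as the toy one above gives an S/N far beyond the threshold of 4; real peaks near the threshold are the ones flagged in the text as needing confirmation.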
\subsection{HIP\,17585}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak9_FT_HIP17585.png}
\caption{Amplitude spectrum of the combined LC of HIP\,17585 and its prewhitening. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}.
}
\label{Fig9_FT_17585}
\end{figure}
According to the fundamental parameters derived by \cite{McDonald2012}, HIP\,17585 is located at the red edge of the instability strip of $\gamma$\,Doradus stars defined by \cite{Dupret2005} (Figure~\ref{Fig1_strip}).
\cite{Kahraman2016} used FIES spectra obtained with the Nordic Optical Telescope for a spectroscopic analysis of the atmospheric parameters and abundances of $\gamma$\,Doradus stars, and classified HIP\,17585 (also known as HD\,23005) as a $\gamma$\,Doradus candidate of F1\,IV spectral type.
\cite{Rimoldini2012} found pulsations at a frequency of 1.64172\,$c/d$ and classified HIP\,17585 as a low amplitude $\delta$\,Scuti variable with a probability of 0.46 using the RF method, or as a $\gamma$\,Doradus star with a probability of 0.42 using the MB method.
The amplitude spectrum of our combined LC shows only one peak with ${\rm S/N}>4$, at 13.1631\,$c/d$ with an amplitude of 13.99\,mmag (Figure~\ref{Fig9_FT_17585} and Table~\ref{table:2}). HIP\,17585 has a higher noise level in its amplitude spectrum than the stars discussed above, which is probably why we could not detect any signal typical of $\gamma$\,Doradus stars. Even the low frequency noise observed in the amplitude spectrum of the comparison star (Figure~\ref{Fig3_FT_compar}) did not show up in the amplitude spectrum of HIP\,17585. Nevertheless, we can conclude that HIP\,17585 pulsates at frequencies typical of $\delta$\,Scuti stars.
\subsection{HIP\,74155}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak10_FT_HIP74155.png}
\caption{Amplitude spectrum of the combined LC of HIP\,74155 and its prewhitening. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}. }
\label{Fig10_FT_74155}
\end{figure}
According to the fundamental parameters derived by \cite{McDonald2012}, HIP\,74155 is located outside the $\gamma$\,Doradus instability strip defined by \cite{Dupret2005}, but still inside the $\gamma$\,Doradus instability strip defined by \cite{Xiong2016} (Figure~\ref{Fig1_strip}).
\cite{Rimoldini2012} classified HIP\,74155 as a low amplitude $\delta$\,Scuti variable with probabilities of 0.55 (RF method) and 0.60 (MB method), and derived the frequency of the dominant signal as 10.20234\,$c/d$.
Our observations of HIP\,74155 had rather large gaps of varying length ($2-5$~days). These gaps degraded the spectral window and complicated the prewhitening of signals. We detected only one signal, at 11.7619\,$c/d$ with an amplitude of 14.15~mmag and ${\rm S/N}>4$ (Figure~\ref{Fig10_FT_74155} and Table~\ref{table:2}).
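The effect of gaps between runs on the spectral window can be illustrated with a short sketch using a toy sampling pattern of our own (not the actual observing log): the spectral window is the amplitude spectrum that a constant signal would produce when sampled at the observation times, and nightly gaps create strong alias side lobes near multiples of 1\,$c/d$.

```python
import numpy as np

def spectral_window(times, freqs):
    """Normalized spectral window |W(f)| of a set of observation times
    (the amplitude spectrum of a constant signal sampled at those times)."""
    times = np.asarray(times)
    phasor = np.exp(2j * np.pi * np.outer(freqs, times))
    return np.abs(phasor.mean(axis=1))

# toy pattern: ten nightly runs of ~7 hours each, repeating every day
t = np.concatenate([night + np.arange(0.0, 0.3, 0.01) for night in range(10)])
w_zero, w_half, w_one = spectral_window(t, [0.0, 0.5, 1.0])
```

The window equals 1 at zero frequency by construction, is strongly suppressed at 0.5\,$c/d$, and shows a high alias lobe at 1\,$c/d$ caused by the daily repetition; longer irregular gaps between runs add further structure, which is what complicates prewhitening.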
In order to collect more data on the variability of this star, we observed HIP\,74155 not only during five runs on the Maksutov-type telescope in 2016, but also during one run with the larger 1.65~m MAO telescope in 2017. The additional observations showed clear variability in the range of $\delta$\,Scuti pulsation frequencies, and also that HIP\,74155 pulsates with more than one frequency (see Figure~\ref{Fig4_LC_fragment} between JD\,2458006.23 and JD\,2458006.62). As we obtained only one run with the 1.65~m MAO telescope, the S/N of the signals in the amplitude spectrum was smaller than 3.3 (i.e. below the limit we set for our detailed analysis); thus the two signals determined there, at 11.059~$c/d$ with an amplitude of 9.4~mmag and at 14.515~$c/d$ with an amplitude of 6.7~mmag, are presented here only as indications for further analyses.
Nevertheless, our observations allow us to conclude that HIP\,74155 pulsates with more than one frequency, at values typical of $\delta$\,Scuti type variable stars.
\subsection{HIP\,101473}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak11_FT_HIP101473.png}
\caption{Amplitude spectrum of the combined LC of HIP\,101473 and its prewhitening. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}. }
\label{Fig11_FT_101473}
\end{figure}
HIP\,101473 is the hottest star in our sample. According to the fundamental parameters derived by \cite{McDonald2012}, it appears at the blue edge of the instability strip of $\delta$\,Scuti stars derived by \cite{Dupret2005} and \cite{Xiong2016} (Figure~\ref{Fig1_strip}).
\cite{Rimoldini2012} classified HIP\,101473 as a low amplitude $\delta$\,Scuti variable with probabilities of 0.39 (RF method) and 0.43 (MB method), and derived the frequency of the dominant signal as 7.50067\,$c/d$.
We observed this star during seven runs. Some of the runs showed a wavy shape of the LC inherent to variable stars (Figure~\ref{Fig4_LC_fragment}); however, there were also runs without obvious brightness variations. Analysis of the combined LC of all seven runs gave two signals, at frequencies of 6.0374\,$c/d$ and 4.3081\,$c/d$, with S/N just slightly larger than 4 (Figure~\ref{Fig11_FT_101473} and Table~\ref{table:2}).
The value of the higher frequency is close to the one derived by \cite{Rimoldini2012}.
We confirm that HIP\,101473 pulsates at frequencies typical of $\delta$\,Scuti type stars; however, further observations are needed to determine its pulsation parameters more precisely.
\subsection{HIP\,106219}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak12_FT_HIP106219.png}
\caption{Amplitude spectrum of the combined LC of HIP\,106219 and its prewhitening. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}. }
\label{Fig12_FT_106219}
\end{figure}
According to the fundamental parameters derived by \cite{McDonald2012}, HIP\,106219 appears in the HR diagram in the middle of the $\delta$\,Scuti instability strip, which overlaps with the instability strip of $\gamma$\,Doradus stars (\citealt{Xiong2016}). This star is one of the most evolved among our targets (Figure~\ref{Fig1_strip}).
\cite{Rimoldini2012} classified HIP\,106219 as a low amplitude $\delta$\,Scuti variable with probabilities of 0.53 (RF method) and 0.88 (MB method), and derived the frequency of the dominant signal as 10.80438\,$c/d$.
We observed HIP\,106219 during 14 runs under very good weather conditions, so the noise level in the amplitude spectrum of this star was the smallest among the 13 observed targets. The combined LC gave a clean amplitude spectrum with an obvious signal at 11.3007\,$c/d$ with an amplitude of 6.66~mmag and two lower amplitude signals at 10.8018\,$c/d$ and 14.2773\,$c/d$ (Figure~\ref{Fig12_FT_106219} and Table~\ref{table:2}). One of our frequencies fits quite well the frequency derived by \cite{Rimoldini2012}.
The amplitude spectrum of HIP\,106219 also shows enhanced amplitudes at low frequencies, with the highest peak at 1.0468\,$c/d$ with an amplitude of 2.70\,mmag and ${\rm S/N}=4.17$. As the instability strip of $\gamma$\,Doradus stars is quite wide (Figure~\ref{Fig1_strip}), it is possible that we observed low amplitude pulsations at low frequency, typical of $\gamma$\,Doradus stars. At present we do not accept this signal as significant enough; however, it should be considered in further analyses of HIP\,106219.
We conclude that HIP\,106219 pulsates at frequencies typical of a $\delta$\,Scuti star; however, further analyses may reveal it to be a hybrid star.
\subsection{HIP\,106223}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak13_FT_HIP106223.png}
\caption{Amplitude spectrum of the combined LC of HIP\,106223 and prewhitening process. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}.
}
\label{Fig13_FT_106223}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak14_LC_HIP106223.png}
\caption{Light curves of HIP\,106223 during eight observing runs. The gray dots with error bars show the observed LC, the continuous and dashed curves show the calculated LCs using 10 and 3 frequencies of pulsations, respectively.
}
\label{Fig14_LC_106223}
\end{figure}
According to the fundamental parameters derived by \cite{McDonald2012}, HIP\,106223 is located close to the ZAMS, at the red edge of the instability strip of $\gamma$\,Doradus stars defined by \cite{Dupret2005} (Figure~\ref{Fig1_strip}).
\cite{Rimoldini2012} classified HIP\,106223 as a $\gamma$\,Doradus variable with probabilities of 0.27 (RF method) and 0.63 (MB method). However, \cite{Rimoldini2012} did not find low frequency pulsations in this star; the dominant signal was derived at 12.79676\,$c/d$.
We observed this star during 28 runs and found that HIP\,106223 differs from the other $\delta$\,Scuti candidates: it has obvious low frequency pulsations and no pulsations at higher frequencies (Figure~\ref{Fig13_FT_106223} and Table~\ref{table:2}).
Most of the time, only slow increases or decreases of magnitude were observed during a night (Figure~\ref{Fig14_LC_106223}). The brightness of the star increased more significantly on JD\,2457625.4; it reached a maximum and started to decrease the same night.
The amplitude spectrum of the combined LC gave a dominant frequency at 1.1429\,$c/d$ with an amplitude of 31.09\,mmag. After prewhitening of this frequency, it was possible to prewhiten two more frequencies (1.1165\,$c/d$ and 1.3572\,$c/d$, with amplitudes of 16.13\,mmag and 11.03\,mmag, respectively). As this region of low frequencies may be affected by instrumental or weather instabilities, the two signals of lower amplitude still have to be confirmed. Apart from these three low frequencies, there were no other significant peaks in the full range of the amplitude spectrum; there was only a small peak at 14.0649\,$c/d$ with ${\rm S/N}=3.05$, which is close to the frequency derived by \cite{Rimoldini2012}.
However, the LC calculated using only these three low frequencies cannot reach the maximal brightness of HIP\,106223 observed on JD\,2457625.4. There could be more than three frequencies of pulsations, which sometimes interfere and create a strongly increased pulsation amplitude. We found that the interference of the 10 frequencies determined by prewhitening of the amplitude spectrum gives a satisfactory match between the observed and calculated LCs.
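The interference argument can be illustrated with a minimal multi-sine model built from the three frequencies and amplitudes quoted above. The zero-point phases below are ours, chosen so that all maxima align at $t=0$; in the real light curve the phases must be fitted to the data.

```python
import numpy as np

# the three prewhitened low frequencies (c/d) and amplitudes (mmag) of HIP 106223
freqs = np.array([1.1429, 1.1165, 1.3572])
amps = np.array([31.09, 16.13, 11.03])

def model_lc(t, freqs, amps, phases):
    """Multi-sine light-curve model: sum of sinusoids evaluated at times t."""
    arg = 2.0 * np.pi * np.outer(freqs, np.atleast_1d(t)) + phases[:, None]
    return (amps[:, None] * np.sin(arg)).sum(axis=0)

t = np.arange(0.0, 30.0, 0.002)                 # 30 days, ~3-minute sampling
phases = np.full(3, np.pi / 2.0)                # align all maxima at t = 0
combined = model_lc(t, freqs, amps, phases)
dominant = model_lc(t, freqs[:1], amps[:1], phases[:1])
```

When the maxima of the three close frequencies coincide, the combined amplitude is nearly twice that of the dominant mode alone, which is the kind of constructive interference needed to reproduce the exceptional maximum observed on JD\,2457625.4.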
Summarizing all the information collected about HIP\,106223, it is most probably a $\gamma$\,Doradus star or a $\delta$\,Scuti--$\gamma$\,Doradus hybrid star with very low amplitude pulsations at higher frequencies.
\subsection{HIP\,107786}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak15_FT_HIP107786.png}
\caption{Amplitude spectrum of the combined LC of HIP\,107786 and its prewhitening. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}. }
\label{Fig15_FT_107786}
\end{figure}
HIP\,107786 is the most luminous star among our targets; moreover, it is a triple system containing a $\delta$\,Scuti type star (\citealt{Henry2004}, \citealt{Fekel2015}). The short orbital period of the Aa and Ab pair is 1.4707386 days, and the long period of the A and B components is 724.09 days. According to the fundamental parameters derived by \cite{McDonald2012}, this system is located inside the overlapping instability strips of $\delta$\,Scuti and $\gamma$\,Doradus stars (\citealt{Xiong2016}), but outside the $\gamma$\,Doradus instability strip derived by \cite{Dupret2005}.
The components of the triple system occupy different places in the HR diagram. According to \citet{Fekel2015}, the system consists of a broad-lined A8\,V star, which is the $\delta$\,Scuti variable, and an unseen mid-M dwarf companion (the Aa and Ab components). One more component (B) is an F7\,V star at a larger distance from the close Aa--Ab pair. The total estimated mass of Aa plus Ab is 2.1~${{M}_{\rm{Sun}}}$, while component B has 1.2~${{M}_{\rm{Sun}}}$. The masses of the $\delta$\,Scuti star (Aa) and its companion Ab may be 1.9 and 0.2~${{M}_{\rm{Sun}}}$, respectively \citep{Fekel2015}. This means that the $\delta$\,Scuti component of the HIP\,107786 triple system should be located at slightly lower luminosity and temperature than shown in the HR diagram (Figure~\ref{Fig1_strip}).
\cite{Henry2004} found three frequencies of photometric brightness variations in HIP\,107786 (1.35980\,$c/d$, 15.43448\,$c/d$ and 15.78034\,$c/d$) with amplitudes between 30\,mmag and 11\,mmag. Two of the derived frequencies are typical of $\delta$\,Scuti stars, while the lowest one was explained by \cite{Henry2004} as resulting from the ellipticity effect of the Aa component.
We had 14 runs of observations of the field with HIP\,107786. The amplitude spectrum of these observations gave clear signals at 15.4817\,$c/d$ with an amplitude of 9.87~mmag and at 1.3778\,$c/d$ with an amplitude of 13.11\,mmag (Figure~\ref{Fig15_FT_107786} and Table~\ref{table:2}). These two frequencies agree well with the frequencies derived by \cite{Henry2004}; however, we did not find the second signal between 15\,$c/d$ and 16\,$c/d$.
\subsection{HIP\,113487}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak16_FT_HIP113487.png}
\caption{Amplitude spectrum of the combined LC of HIP\,113487 and its prewhitening. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}. }
\label{Fig16_FT_113487}
\end{figure}
According to the fundamental parameters derived by \cite{McDonald2012}, HIP\,113487 appears on the blue edge of the instability strip of $\gamma$\,Doradus stars derived by \cite{Xiong2016} (Figure~\ref{Fig1_strip}).
\cite{Rimoldini2012} classified HIP\,113487 as a low amplitude $\delta$\,Scuti variable with probabilities of 0.41 (RF method) and 0.58 (MB method), and derived the frequency of the dominant signal as 20.91221\,$c/d$.
HIP\,113487 has the highest pulsation frequency among the observed stars, and the amplitude of its dominant signal is high and was easily determined. The amplitude spectrum analysis provided a strong signal at 21.9102\,$c/d$ with an amplitude of 23.01\,mmag and ${\rm S/N}=6.16$, and one more signal at 17.2064\,$c/d$ with an amplitude of 11.40\,mmag and ${\rm S/N}=4.04$. Both frequencies are typical of $\delta$\,Scuti stars (Figure~\ref{Fig16_FT_113487} and Table~\ref{table:2}).
\subsection{HIP\,115093}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak17_FT_HIP115093.png}
\caption{Amplitude spectrum of the combined LC of HIP\,115093 and its prewhitening. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}. }
\label{Fig17_FT_115093}
\end{figure}
According to the fundamental parameters derived by \cite{McDonald2012}, HIP\,115093 appears in the HR diagram inside the overlapping instability strips of $\delta$\,Scuti and $\gamma$\,Doradus stars (\citealt{Xiong2016}) (Figure~\ref{Fig1_strip}).
\cite{Handler1999} used Hipparcos observations and classified HIP\,115093 as an object which may be a $\gamma$\,Doradus star with a frequency of 0.5587\,$c/d$, but whose nature remained uncertain. Later, \cite{Handler2002} classified it as a $\delta$\,Scuti candidate.
\cite{Rimoldini2012} classified HIP\,115093 as a low amplitude $\delta$\,Scuti variable with probabilities of 0.53 (RF method) and 0.49 (MB method), and derived the frequency of the dominant signal as 10.27306\,$c/d$.
We had 10 runs of observations of the field of this star. The amplitude spectrum of the observed LC gave a single signal with ${\rm S/N}=4.87$, at 11.4318\,$c/d$ with an amplitude of 10.61~mmag (Figure~\ref{Fig17_FT_115093} and Table~\ref{table:2}), which is typical of $\delta$\,Scuti type stars. This star might have more pulsation modes hidden in the rather high noise level of its amplitude spectrum. If there is any pulsation typical of $\gamma$\,Doradus stars, it must have a small amplitude, and we were not able to detect it.
\subsection{HIP\,115856}
\begin{figure}[!h]
\centering
\includegraphics[width=\hsize]{EPak18_FT_HIP115856.png}
\caption{Amplitude spectrum of the combined LC of HIP\,115856 and its prewhitening process. Meanings of the lines are the same as in Figure~\ref{Fig5_FT_2923}.
}
\label{Fig18_FT_115856}
\end{figure}
According to the fundamental parameters derived by \cite{McDonald2012}, HIP\,115856 appears in the HR diagram inside the overlapping instability strips of $\delta$\,Scuti and $\gamma$\,Doradus stars (\citealt{Xiong2016}) (Figure~\ref{Fig1_strip}). The positions of HIP\,115856 and HIP\,115093 are very close; however, the amplitude spectrum of HIP\,115856 is much richer.
\cite{Rimoldini2012} classified HIP\,115856 as a low amplitude $\delta$\,Scuti variable with probabilities of 0.69 (RF method) and 0.74 (MB method), and derived the frequency of the dominant signal as 9.10961\,$c/d$.
Seventeen runs of HIP\,115856 observations gave an amplitude spectrum with at least four signals, the strongest at 9.1109\,$c/d$ with an amplitude of 14.08~mmag (Figure~\ref{Fig18_FT_115856} and Table~\ref{table:2}). We also found at least one low frequency, at 0.9669\,$c/d$ with an amplitude of 9.55\,mmag and ${\rm S/N}=7.43$.
Thus we conclude that HIP\,115856 exhibits brightness variations typical of $\delta$\,Scuti stars; however, it may also be a $\delta$\,Scuti--$\gamma$\,Doradus hybrid star.
\section{Conclusions}
We obtained 24\,215 CCD images and analyzed stellar light curves of thirteen $\delta$\,Scuti candidates selected from the Hipparcos catalog.
We confirm that twelve of them are variables pulsating with frequencies typical of $\delta$\,Scuti type stars.
Moreover, five of them (HIP\,2923, HIP\,5526, HIP\,11090, HIP\,115856, and HIP\,106219) may be hybrid $\delta$\,Scuti--$\gamma$\,Doradus pulsators, as they simultaneously show high-frequency pulsations typical of $\delta$\,Scuti stars and significant low-frequency oscillations (between 0.5422\,$c/d$ and 1.3778\,$c/d$) characteristic of $\gamma$\,Doradus stars.
One more star, HIP\,106223, pulsates only at low frequencies, typical of $\gamma$\,Doradus type variables.
\section*{Acknowledgements}
This research has made use of the SIMBAD database (operated at CDS, Strasbourg, France) and NASA's Astrophysics Data System, and was funded by a grant from the Research Council of Lithuania (LAT-08/2016). We are grateful to an anonymous reviewer for insightful comments and for suggesting additional analyses of the observations, which have improved this paper.
\section{Introduction and statement of the results}
\bigskip
Let $L$ be a real strictly $\alpha$-stable L\'evy process with characteristic exponent
$$\Psi(\lambda) \;=\;\log(\mathbb{E}[e^{\i \lambda L_1}])\; =\;-\; (\i \lambda)^\alpha e^{-\i\pi\alpha\rho\, {\rm sgn}(\lambda)},\qquad \lambda \in \mathbb{R},$$
where $\alpha \in (0,2]$ is the scaling parameter and $\rho=\mathbb{P}[L_1>0]$ is the positivity parameter. Recall e.g. from Lemma 14.11 and Theorem 14.19 in \cite{Sato} that $\rho\in[1-1/\a,1/\a]$ for $\a\in[1,2]$ and $\rho\in[0,1]$ for $\a\in(0,1),$ and that with this normalization, for $\a\in(0,2)$ the L\'evy measure of $L$ has density
$$\nu(x)\; =\; \frac{\Gamma(1+\alpha)}{\pi}\left(\frac{\sin(\pi\alpha (1-\rho))}{|x|^{1+\alpha}}{\bf 1}_{\{x<0\}}\; +\; \frac{\sin(\pi\alpha \rho) }{x^{1+\alpha}}{\bf 1}_{\{x>0\}}\right).$$
Throughout, we assume that $L$ takes positive values, that is $\rho\neq 0,$ and we exclude the degenerate case $\a = \rho =1$ where $L$ is a unit drift. With these restrictions, $L$ has positive jumps if and only if either $\a \le 1,$ or $\a > 1$ and $\rho <1/\a.$\\
Consider the positive random variable
$$\M_{\alpha,\rho,\gamma}\; =\; \sup_{t\ge 0} \{L_t - t^\gamma\}.$$
It is well-known from e.g. Proposition 48.10 in \cite{Sato} that
$$\mathbb{P}[\M_{\alpha,\rho,\gamma} < \infty]\; = \; \left\{\begin{array}{ll} 1 & \mbox{if $\gamma\a > 1$}\\
0 & \mbox{if $\gamma\a \le 1.$}\end{array}\right.$$
In this paper, we are concerned with the asymptotic behaviour of
$$\mathbb{P}[\M_{\alpha,\rho,\gamma} \geq x], \qquad x\rightarrow \infty,$$
in the relevant case $\gamma\a > 1.$ In the literature, the evaluation of such asymptotics, which have various applications in insurance, is known as Cram\'er's estimate. In the case of a linear drift $\gamma =1,$ we refer to (XI.6.16) and (XII.5.10) in \cite{Fel} for random walks and to \cite{BD} for L\'evy processes having one-sided exponential moments. Applied to stable L\'evy processes, the main result of \cite{BD} shows
\begin{equation}
\label{One}
\mathbb{P}[{\bf M}_{\a,1/\a,1} \geq x]\;\sim \; e^{-x}
\end{equation}
for $\a > 1,$ and it is well-known that the asymptotic relation is in fact an equality - see \cite{Zo} or Corollary VII.2 in \cite{Ber}. For more general power drifts and a class of Gaussian processes fulfilling a certain scaling property, we refer to \cite{HP} which, applied to the important case of Brownian motion with a parabolic drift, yields
\begin{equation}
\label{Two}
\mathbb{P}[{\bf M}_{2,1/2,2} \geq x]\;\sim \; \frac{1}{\sqrt{3}} \exp\left\{ - \frac{4}{3\sqrt{3}}\, x^{3/2}\right\}.
\end{equation}
Let us mention that this estimate has been recently refined in Theorem 2.1 of \cite{GT}, where a complete asymptotic expansion at infinity is obtained - see also Lemma 2.1 and the references therein for closed expressions of the density of ${\bf M}_{2,1/2,2}$ in terms of the Airy function. The first result of the present paper is the following general estimate, extending (\ref{One}) and (\ref{Two}).
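As an illustration, the parabolic-drift estimate (\ref{Two}) can be explored by a small Monte Carlo experiment. The sketch below is not from the paper: the grid, time horizon and sample size are arbitrary choices, the supremum is truncated to a bounded interval (harmless since the drift dominates), and the asymptotics is only accurate for large $x$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized Brownian paths; sup_{t>=0} {B_t - t^2} is essentially
# attained on a bounded interval since the parabolic drift dominates.
T, n, npaths = 4.0, 2000, 4000
dt = T / n
t = np.arange(1, n + 1) * dt
increments = rng.normal(0.0, np.sqrt(dt), size=(npaths, n))
B = np.cumsum(increments, axis=1)
M = np.max(B - t**2, axis=1)   # sample of M_{2,1/2,2}, up to discretization

for x in (0.5, 1.0, 1.5):
    mc = float(np.mean(M >= x))
    asym = np.exp(-4.0 * x**1.5 / (3.0 * np.sqrt(3.0))) / np.sqrt(3.0)
    print(f"x={x}: MC={mc:.3f}, asymptotic={asym:.3f}")
```

The two columns should be of the same order already for moderate $x$, although the right-hand side of (\ref{Two}) is only an $x\to\infty$ statement.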
\begin{THEA}\label{theo:1} Assume $\gamma \alpha>1$.
\medskip
{\em (a)} If $L$ has positive jumps, one has
$$\mathbb{P}[\M_{\alpha,\rho,\gamma} \geq x]\; \sim\; \frac{\sin(\pi\alpha \rho)}{\pi}\,\Gamma(\alpha - 1/\gamma)\Gamma(1+1/\gamma) \;x^{\frac{1}{\gamma}-\a}.$$
{\em (b)} If $L$ does not have positive jumps, one has
$$\mathbb{P}[{\bf M}_{\a,1/\a,\gamma} \geq x]\; \sim\;\sqrt{\frac{\a -1}{\gamma\a -1}}\, \exp\left\{-(\alpha-1)\,\gamma^{\frac{\alpha}{\alpha-1}}\,(\gamma\a -1)^{\frac{1-\gamma\a}{\gamma(\alpha-1)}}\,x^{\frac{\gamma\a-1}{\gamma(\alpha-1)}} \right\}.$$
\end{THEA}
\bigskip
In the specific case $\a\in (1/2,2]$ and $\gamma =2,$ these estimates are somewhat reminiscent of those previously obtained in \cite{BerBur} in the framework of Burgers turbulence with stable noise initial data. More precisely, if we set
$$\underline{\bf M}_{\alpha,\rho,\gamma}^{[x]}\; =\; \sup_{t\in[0,x]} \{L_t - t^\gamma\}\qquad\mbox{and}\qquad \overline{\bf M}_{\alpha,\rho,\gamma}^{[x]}\; =\; \sup_{t\ge x} \{L_t - t^\gamma\},$$
then the main result of \cite{BerBur} states that
$$\mathbb{P}[\overline{\bf M}_{\alpha,\rho,2}^{[x]}\ge \underline{\bf M}_{\alpha,\rho,2}^{[x]}]\; \asymp\; x^{1-2\a}$$
if $L$ has positive jumps\footnote{Here and throughout, we use the standard notation $f(x)\asymp g(x)$ to express the fact that there exist two positive finite constants $\kappa_1, \kappa_2$ such that $\kappa_1 f(x)\leq g(x) \leq \kappa_2 f(x)$ as $x\to\infty$ or as $x\to 0$, the nature of the limit being clear from the context.}, and that
$$\log \mathbb{P}[\overline{\bf M}_{\alpha,1/\a,2}^{[x]}\ge \underline{\bf M}_{\alpha,1/\a,2}^{[x]}]\; \sim\; -\kappa_\a\, x^{\frac{2\a-1}{\alpha-1}}$$
for some explicit $\kappa_\a\in(0,\infty)$ if $L$ does not have positive jumps.
Roughly speaking, when $x$ is large the event
$$\left\{ \overline{\bf M}_{\alpha,\rho,2}^{[x]}\ge \underline{\bf M}_{\alpha,\rho,2}^{[x]}\right\}$$
amounts to the fact that the translated process $L_t - L_x - (t-x)^2$ crosses a level of size $x^2$ for some $t\ge x,$ which explains heuristically why the asymptotics of
$$\mathbb{P}[\overline{\bf M}_{\alpha,\rho,2}^{[x]}\ge \underline{\bf M}_{\alpha,\rho,2}^{[x]}]\qquad\mbox{and}\qquad \mathbb{P}[{\bf M}_{\a,\rho,2} \ge x^2]$$
are comparable. Our arguments are quite different from those of \cite{BerBur}. They rely on the compensation formula for the case with positive jumps, and on some ad hoc and rather involved estimates combined with Laplace's method in the spectrally negative case. One might wonder whether such arguments could help refine the results of \cite{BerBur}, but we have not investigated this question.\\
In the second part of the paper we consider the Riemann-Liouville (or fractionally integrated) stable process with parameter $\beta > 0$, defined as
$$L^{(\beta)}_t\;=\; \int_0^t (t-s)^\beta dL_s\; =\; \beta\int_0^t (t-s)^{\beta-1} L_s\, ds, \qquad t\ge 0.$$
The process $\{L^{(\beta)}_t, \, t\ge 0\}$ is stable in the broad sense of \cite{SamTaq}, and by Proposition 3.4.1 therein we have
\begin{equation}
\label{Variance} L^{(\beta)}_1\;\stackrel{d}{=}\; (1+\alpha\beta)^{-1/\alpha} L_1.
\end{equation}
Recall also that $\{L^{(\beta)}_t, \, t\ge 0\}$ is self-similar with index $\beta +1/\a,$ is non-Markovian, and has a.s. continuous sample paths. Consider the positive random variable
$$\M_{\alpha,\rho,\gamma}^{(\beta)}\; =\; \sup_{t\ge 0} \{L^{(\beta)}_t - t^{\beta+\gamma}\},$$
and observe from e.g. Theorem 10.5.1 in \cite{SamTaq} and self-similarity that
$$\mathbb{P}[\M_{\alpha,\rho,\gamma}^{(\beta)} < \infty]\; =\; \mathbb{P}[\M_{\alpha,\rho,\gamma} < \infty]$$
for every $\beta > 0.$ However, as a rule, the non-Markovian character of a given process makes its passage times across a level more difficult to investigate. Our second main result has a less precise character.
\begin{THEB}\label{theo:2} Assume $\gamma\alpha>1$.
\medskip
{\em (a)} If $L$ has positive jumps, one has
$$\mathbb{P}[\M_{\alpha,\rho,\gamma}^{(\beta)} \geq x]\; \asymp\; x^{\frac{1-\gamma\a}{\beta +\gamma}}.$$
{\em (b)} If $L$ does not have positive jumps, one has
$$\log\mathbb{P}[{\bf M}_{\a,1/\a,\gamma}^{(\beta)} \geq x] \;\sim\; -c_{\a,\beta,\gamma}\, x^{\frac{\gamma\alpha -1}{(\alpha-1)(\gamma+\beta)}}$$
with $c_{\a,\beta,\gamma} = (\alpha-1)\,(\gamma+\beta)^{\frac{\alpha}{\alpha-1}}\, (\a\beta+1)^{\frac{\gamma+\beta-1-\alpha\beta}{(\alpha-1)(\gamma +\beta)}}\,(\gamma\a -1)^{\frac{1-\gamma\a}{(\alpha-1)(\gamma +\beta)}}\, >\, 0.$
\end{THEB}
The methods for these estimates differ between the lower and the upper bound. The former uses a simple scaling argument, inspired by that of \cite{HP}, and amounts to a comparison with the upper tails of $L_1.$ The latter relies on telescoping sums for the case with positive jumps, and on a simple yet powerful association lemma in the spectrally negative case - see Lemma \ref{lem:ST}.\\
In the last part of the paper, we study the lower tail problem for the integrated stable process with a power positive drift. In a Gaussian framework, lower tail probabilities have many applications described in \cite{LS}. In a self-similar framework they are connected to the persistence probabilities, whose applications are also manifold - see the recent survey \cite{AS}. We show the following.
\begin{THEC}\label{theo:3}
Assume $\gamma\alpha > 1$ and $\rho\in (0,1).$ For every $\mu\ge 0,$ one has
$$ \mathbb{P}\left[\sup_{0\leq t\leq 1} \left\{ L_t^{(1)} + \mu t^{1+\gamma}\right\}\leq\, \varepsilon\right]\; \asymp\;\varepsilon^{\frac{\alpha\rho}{(\alpha+1)(\a(1-\rho)+1)}}.
$$
\end{THEC}
Above, we have excluded the case $\rho =1,$ where the estimate amounts by monotonicity to the one-dimensional estimate $\mathbb{P}[L_1 +\mu \leq\varepsilon],$ which is exponentially small for $\mu = 0$ - see e.g. (14.35) in \cite{Sato} - and zero for $\mu > 0.$ Theorem C is an extension of Theorem A in \cite{PSInt} which dealt with the case $\mu = 0.$ In this respect, we should mention that the condition $\gamma\a >1$ on the drift power is optimal: in the Cauchy case $\a=\gamma =1,$ the same Theorem A in \cite{PSInt} shows that the lower tail probability exponent depends on $\mu.$ Our argument relies in an essential way on the strong Markov property of the bidimensional process $\{(L^{(1)}_t, L_t), \, t\ge 0\}$ and is hence specific to the case $\beta = 1.$ The other cases are believed to be challenging. To give but one example, for $\a=\beta =2$ and $\mu = 0$, finding the right asymptotics of
$$ \mathbb{P}\left[\sup_{0\leq t\leq 1} \left\{ L_t^{(2)}\right\}\leq\, \varepsilon\right]$$
as $\varepsilon\to 0$ is still an open problem for Brownian motion - see Section 3.3 in \cite{AS}. In our proof, the aforementioned association Lemma \ref{lem:ST} also plays a significant role. Unfortunately, its one-sided character prevents us from dealing with the case of a negative power drift. We leave this question, whose connection to Burgers turbulence with stable L\'evy process initial data in the case $\a > 1$ and $\gamma =1$ is described in Section 4.1 of \cite{AS}, to future research.
\section{Proof of Theorem A}
\subsection{The case with positive jumps} We will use the standard notation
$$c_+\; =\;\frac{\Gamma(1+\alpha)}{\pi} \sin(\pi\rho\alpha)\; > \; 0$$
for simplicity. Defining for every $x>0$ the stopping time
$$T_x\;=\; \inf\{t\geq 0;\, L_t> t^{\gamma} +x\},$$
we have $\mathbb{P}[{\bf M}_{\a,\rho,\gamma} \ge x]\; =\; \mathbb{P}[T_x<\infty].$ We also set $K_x=L_{T_x} - T^\gamma_x -x$ for the overshoot at $T_x.$ For every $f : \mathbb{R}^+\to\mathbb{R}^+$ measurable and such that $f(0)=0$, the compensation formula - see \cite{Ber} p. 7 or Theorem 19.2 in \cite{Sato} - implies
\begin{eqnarray*}
\mathbb{E}\left[f(K_x)\, {\bf 1}_{\{T_x<\infty\}}\right]&= &\mathbb{E}\left[ \sum_{t\geq 0}f\left(L_{t^-} + \Delta L_t -t^\gamma -x\right) {\bf 1}_{\{L_u < u^\gamma +x \;\forall u < t, \, t^\gamma +x < L_{t^-}+\Delta L_t\}}\right]\\
&= & c_+\,\mathbb{E}\left[ \int_0^{\infty} \!\! dt \int_0^{\infty}\!\! f\left(L_t + s -t^\gamma-x\right) {\bf 1}_{\{L_u < u^\gamma +x \;\forall u < t, \, t^\gamma +x < L_{t}+s\}}\,s^{-1-\alpha} ds \right] \\
&= & c_+\, \int_0^{\infty}\!\! dt \int_0^{\infty}\!\!\mathbb{E}\left[ f(z-t^\gamma-x) {\bf 1}_{\{L_u < u^\gamma +x \;\forall u < t, \, t^\gamma +x < z\}} (z-L_t)^{-1-\alpha} \right] dz.
\end{eqnarray*}
Taking $f(u)={\bf 1}_{\{u>0\}}$ and integrating in $z$, we obtain
\begin{eqnarray*}
\mathbb{P}[K_x >0,\, T_x<\infty]\!\! &=&\!\! \frac{c_+}{\alpha} \int_0^{\infty} \mathbb{E}\left[(t^\gamma+x- L_t)^{-\alpha} \,{\bf 1}_{\{L_u < u^\gamma +x\;\forall u < t\}}\right]\, dt \\
&=&\!\!\frac{c_+}{\alpha}\left( \int_0^{\infty} \mathbb{E}\left[(s^\gamma+1- x^{\frac{1}{\gamma\alpha}-1}L_s)^{-\alpha} \,{\bf 1}_{\{ x^{\frac{1}{\gamma\alpha}-1} L_v < v^\gamma +1\;\forall v <s\}}\right] ds\right) x^{\frac{1}{\gamma}-\alpha}\\
& \sim & \!\!\frac{c_+}{\alpha}\left( \int_0^{\infty} (s^\gamma+1)^{-\alpha} \,ds\right) x^{\frac{1}{\gamma}-\alpha}\\
& \sim & \frac{\sin(\pi\alpha \rho)}{\pi}\,\Gamma(\alpha - 1/\gamma)\Gamma(1+1/\gamma) \;x^{\frac{1}{\gamma}-\a}
\end{eqnarray*}
where the second equality follows by scaling, the convergence on the third line is obtained by bounded and monotone convergence (decomposing into $\{L_s < 0\}$ and $\{L_s \ge 0\}$ inside the expectation), and the evaluation of the integral on the fourth line is standard. To conclude the proof, it remains to show that $L$ does not creep at $T_x$, in other words that
\begin{equation}
\label{Creep}
\mathbb{P}[K_x=0,\,T_x<\infty]\; =\; 0.
\end{equation}
The latter is in accordance with the well-known fact that $L$ does not creep at a fixed level $x >0$ - see Theorem VI.19 and Lemma VIII.1 in \cite{Ber}. However, this result does not apply here since we consider the first passage time above a moving boundary. To show (\ref{Creep}), fix $x > 0$ and decompose
$$\mathbb{P}[L_s \geq s^\gamma +x]\; =\; P_1(s)\, +\, P_2(s)$$
for every $s \ge 0,$ with
$$\left\{\begin{array}{lll}
P_1(s)& =& \mathbb{P}[{\widetilde L}_{s-T_x} + T_x^\gamma \geq s^\gamma, \,K_x = 0,\, T_x<s]\\
P_2(s)& =&\mathbb{P}[{\widetilde L}_{s-T_x} + L_{T_x}\geq s^\gamma +x,\, K_x > 0,\, T_x<s],\end{array}\right.$$
where ${\widetilde L}$ is a copy of $L$ which is independent of $(T_x, L_{T_x})$, by the strong Markov property. On the one hand, we see by scaling and e.g. Property 1.2.15 in \cite{SamTaq} that
$$ \mathbb{P}[L_s \geq s^\gamma +x] \;\sim\; \frac{c_+}{\alpha}\, s^{1-\gamma\alpha}.$$
On the other hand, we have
$$P_1(s) \; \geq \; \mathbb{P}[{\widetilde L}_{s-T_x} \geq s^\gamma, \,K_x = 0,\, T_x<s/2]\, \geq \, \mathbb{P}[{\widetilde L}_1 \geq 2^{\frac{1}{\alpha}} s^{\gamma-\frac{1}{\alpha}}]\; \mathbb{P}[K_x = 0,\, T_x< s/2]$$
and passing to the limit, we obtain
$$\liminf_{s\rightarrow \infty} s^{\gamma\alpha -1}\,P_1(s) \; \geq\; \frac{c_+}{2\alpha}\;\mathbb{P}[K_x = 0,\, T_x<\infty].$$
Hence, we see that (\ref{Creep}) is a consequence of
\begin{equation}
\label{Creepy}
P_2(s) \;\sim\; \frac{c_+}{\alpha}\, s^{1-\gamma\alpha}.
\end{equation}
Applying the compensation formula as above, we obtain
$$P_2(s) \;=\; c_+\int_0^s dt \int_0^{\infty}\mathbb{P}[{\widetilde L}_{s-t} +L_t+z\geq s^\gamma +x, \,L_t + z> t^\gamma +x, \, L_u < u^\gamma + x\;\forall u < t]\, z^{-1-\alpha}\, dz.$$
Changing the variables $z=s^{\gamma} y$ and $t=su,$ we see that $c_+^{-1} s^{\gamma\alpha -1}P_2(s)$ equals
$$\int_0^1\!\! du \int_0^{\infty}\mathbb{P}[s^{\frac{1}{\alpha}} {\widetilde L}_{1-u} + s^{\frac{1}{\alpha}}L_u+s^{\gamma} y \geq s^\gamma +x, \, s^{\frac{1}{\alpha}}L_u > s^\gamma (u^\gamma-y) +x, \, s^{\frac{1}{\alpha}} L_u < s^\gamma\,u^\gamma + x\;\forall u < 1]\,y^{-1-\alpha}dy,$$
which converges as $s\to\infty$ to
$$\int_0^1 du \int_1^{\infty} y^{-1-\alpha} dy \; =\;\frac{1}{\alpha}\cdot$$
This shows (\ref{Creepy}), and completes the proof.
\qed
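As a sanity check on the Beta-integral evaluation used on the fourth line above, namely $\int_0^{\infty}(1+s^\gamma)^{-\alpha}\,ds = \Gamma(\alpha-1/\gamma)\Gamma(1+1/\gamma)/\Gamma(\alpha),$ here is a minimal numerical verification. The parameters, truncation level and grid size are illustrative choices, not from the paper.

```python
import numpy as np
from math import gamma

def beta_integral(alpha, g, S=60.0, n=600_000):
    # Midpoint rule for int_0^S (1 + s^g)^(-alpha) ds; the neglected tail
    # is O(S^(1 - g*alpha)), negligible for the parameters below.
    dx = S / n
    s = (np.arange(n) + 0.5) * dx
    return float(np.sum((1.0 + s**g) ** (-alpha)) * dx)

for alpha, g in ((2.0, 2.0), (1.5, 2.0)):
    closed = gamma(alpha - 1.0/g) * gamma(1.0 + 1.0/g) / gamma(alpha)
    print(alpha, g, beta_integral(alpha, g), closed)
```

For $(\alpha,\gamma)=(2,2)$ both values are $\pi/4$; for $(\alpha,\gamma)=(3/2,2)$ both are $1.$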
\begin{remark} (a) Setting, here and throughout, $L_t^* = \sup\{L_s, \, s\in[0,t]\}$ for every $t > 0,$ we have
$$\lim_{\gamma\rightarrow \infty} \mathbb{P}[\M_{\alpha,\rho,\gamma} \geq x]\; =\; \mathbb{P}[L_1^* \geq x]$$
for every $x\ge 0.$ Passing formally to the limit $\gamma\to\infty$ in Theorem A (a), we can infer
\begin{equation}
\label{Bing}
\mathbb{P}[L_1^* \geq x] \;\sim\; \frac{\Gamma(\alpha) \sin(\pi\alpha \rho)}{\pi}\, x^{-\a}
\end{equation}
which is a standard and rigorous estimate - see Theorem 10.5.1 in \cite{SamTaq} and Proposition VIII.4 in \cite{Ber}.\\
(b) Taking $f(u) = {\bf 1}_{\{u\geq r x\}}$ for some $r>0$ and applying as above the compensation formula leads to the estimate
$$\mathbb{P}[K_x\geq r x ,\, T_x<\infty] \; \sim\; \frac{c_+}{\alpha} \left( \int_0^{\infty}(r + u^\gamma+1)^{-\alpha}\, du\right) x^{\frac{1}{\gamma}-\a}\;\sim\; (r+1)^{\frac{1}{\gamma} -\a}\mathbb{P}[T_x <\infty].$$
This implies the following limit theorem for the law of the renormalized overshoot:
$$\L\left( x^{-1} K_x\,\bigg| \,T_x<+\infty\right)\; \to\; {\rm Pareto}\, (\alpha-1/\gamma) \qquad \mbox{as $x\to\infty$.}$$
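The passage from the first asymptotics to the Pareto limit above rests on the substitution $u = (1+r)^{1/\gamma}v,$ which gives $\int_0^{\infty}(r+1+u^\gamma)^{-\alpha}\,du = (1+r)^{\frac{1}{\gamma}-\alpha}\int_0^{\infty}(1+v^\gamma)^{-\alpha}\,dv.$ A quick numerical check of this identity, with illustrative parameters:

```python
import numpy as np

def tail_integral(r, alpha, g, S=80.0, n=800_000):
    # Midpoint rule for int_0^S (r + 1 + u^g)^(-alpha) du.
    du = S / n
    u = (np.arange(n) + 0.5) * du
    return float(np.sum((r + 1.0 + u**g) ** (-alpha)) * du)

alpha, g, r = 2.0, 2.0, 3.0
lhs = tail_integral(r, alpha, g)
rhs = (1.0 + r) ** (1.0/g - alpha) * tail_integral(0.0, alpha, g)
print(lhs, rhs)   # both ~ pi/32 ~ 0.0982
```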
This observation seems new even in the classical case of a linear drift $\gamma = 1$ with $\a > 1.$ Notice that still in the case of a linear drift, the limit behaviour of the overshoot is very different for L\'evy processes having finite exponential moments. If we consider for example the tempered stable subordinator with negative unit drift and L\'evy measure having density
$$\nu(x)\; =\; \frac{\a\,e^{-cx}}{\Gamma(1-\a) x^{\a+1}}\,{\bf 1}_{\{x > 0\}}$$
for some $c\in (0,1),$ then we are in the framework of \cite{BD} with $\omega \in (0,1)$ and $\mu^* < \infty,$ so that $C > 0$ in (5) therein. By Remark 2 of \cite{BD}, this implies that $K_x$ converges in distribution, as $x\to\infty,$ to some proper random variable. \\
(c) In the case $\a > 1, \rho = 1-1/\a$ and $\gamma =1,$ the Laplace transform of ${{\bf M}}_{\a,1-1/\a,1}$ can be computed with the help of Zolotarev's well-known general formula in \cite{Zo}: one finds
$$\mathbb{E}[e^{-\lambda{{\bf M}}_{\a,1-1/\a,1}}]\; =\; \frac{1}{1 +\lambda^{\a-1}}\cdot$$
This Laplace transform can be easily inverted and yields the identity in law
$${{\bf M}}_{\a,1-1/\a,1}\;\stackrel{d}{=}\; {\bf L}^{\frac{1}{\a-1}}\,\times\, {\bf Z}_{\a-1}$$
where ${\bf L}\sim{\rm Exp}(1)$ and ${\bf Z}_{\a-1}$ has a standard positive $(\a-1)-$stable law with Laplace transform
$$\mathbb{E}[e^{-\lambda {\bf Z}_{\a-1}}]\; =\; e^{-\lambda^{\a-1}},$$
both random variables being independent. This shows that the law of ${{\bf M}}_{\a,1-1/\a,1}$ is the so-called Mittag-Leffler distribution of parameter $\a-1$ which is studied e.g. in Exercise 34.4 of \cite{Sato} - see also the references therein. In particular, there exists a closed expression for the survival function of ${\bf M}_{\a,1-1/\a,1}$ in terms of the classical Mittag-Leffler function, which leads to a complete and simple asymptotic expansion at infinity: one has
$$\mathbb{P}[{{\bf M}}_{\a,1-1/\a,1}\, > \,x] \; =\; E_{\a -1}(-x^{\a-1})\; \sim\;\sum_{n\ge 1} \frac{(-1)^{n-1} x^{-(\a-1) n}}{\Gamma (1- (\a-1) n)}$$
where we have used the standard expansion 18.1(7) in \cite{EMOT}. Observe from the complement formula for the Gamma function that the first term matches the one that can be derived from Theorem A (a), in this specific case. Notice also the following closed formula for the distribution function, as a convergent series:
$$\mathbb{P}[{{\bf M}}_{\a,1-1/\a,1}\, \le \,x] \; =\; 1\, -\, E_{\a -1}(-x^{\a-1})\; = \;\sum_{n\ge 1} \frac{(-1)^{n-1} x^{(\a-1)n}}{\Gamma (1+ (\a-1) n)}\cdot$$
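These series are straightforward to evaluate numerically. A minimal sketch, not from the paper; the truncation at $100$ terms is an arbitrary choice that keeps $\Gamma$ within floating-point range:

```python
from math import exp, gamma

def mittag_leffler(a, z, nmax=100):
    # E_a(z) = sum_{n>=0} z^n / Gamma(a*n + 1), truncated at nmax terms.
    return sum(z**n / gamma(a*n + 1.0) for n in range(nmax))

def survival(alpha, x):
    # P[M_{alpha,1-1/alpha,1} > x] = E_{alpha-1}(-x^(alpha-1)).
    return mittag_leffler(alpha - 1.0, -(x ** (alpha - 1.0)))

# For alpha = 2 the Mittag-Leffler parameter is 1 and E_1(-x) = exp(-x),
# so the law is standard exponential.
print(survival(2.0, 1.5), exp(-1.5))
print(survival(1.8, 1.0))
```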
Let us finally refer to \cite{Fur} for related results in the presence of a compound Poisson process.\\
\end{remark}
\subsection{The case with no positive jumps} Applying the strong Markov property at $T_x$ and using the absence of positive jumps, we get
\begin{eqnarray*}
\int_0^{\infty} (1-e^{-\lambda t})\, \mathbb{P}[L_t> t^\gamma +x]\,dt & = & \mathbb{E}\left[{\bf 1}_{\{T_x<\infty\}} \int_0^{\infty} (1-e^{-\lambda (T_x+t)})\,{\bf 1}_{\{L_t+T_x^\gamma> (t+T_x)^\gamma\}}\, dt\right]\\
& = & \mathbb{E}\left[{\bf 1}_{\{T_x<\infty\}} \int_0^{\infty} (1-e^{-\lambda (T_x+t)})\,{\bf 1}_{\{t^{1/\a}L_1^+ > (t+T_x)^\gamma - T_x^\gamma\}}\, dt\right]
\end{eqnarray*}
where we have set $a^+ = \max(a,0)$ and, on the right-hand side, $L$ and $T_x$ are independent. Integrating both sides on $(0,\infty)$ with respect to $\lambda^{-\nu-1} d\lambda$ with $\nu\in (0,1)$, we deduce
\begin{eqnarray*}
\int_0^{\infty} t^{\nu}\,\mathbb{P}[L_t> t^\gamma +x]\,dt &= &\mathbb{E}\left[{\bf 1}_{\{T_x<\infty\}} \int_0^{\infty} (T_x+t)^{\nu}\, {\bf 1}_{\{t^{1/\alpha} L_1^+ > (t+T_x)^\gamma - T_x^\gamma\}}\, dt\right]\\
&= &\mathbb{E}\left[{\bf 1}_{\{T_x<\infty\}} \, T_x^{1+\nu}\, \int_0^{\infty} (1+t)^{\nu}\, {\bf 1}_{\{L_1^+ > T_x^{\gamma-1/\alpha}\,\varphi_{\a,\gamma} (t)\}}\, dt\right],
\end{eqnarray*}
where
$$\varphi_{\a,\gamma}(t)\; =\; \frac{(1+t)^\gamma -1}{t^{1/\alpha}}$$
is a homeomorphism from $(0,\infty)$ onto $(0,\infty),$ because $\alpha\gamma>1$ and $\alpha>1$. This implies the identity
\begin{eqnarray}
\nonumber \int_0^{\infty}\!\! t^{\nu}\,\mathbb{P}[L_1^+> t^{-1/\a}(t^\gamma +x)]\,dt\!\! &= &\!\!\mathbb{E}\left[ {\bf 1}_{\{T_x<\infty\}} \,T_x^{1+\nu} \int_0^{\infty} (1+t)^{\nu}\, {\bf 1}_{\{\varphi_{\a,\gamma}^{-1}(T_x^{1/\a - \gamma} L_1^+)> t\}}\, dt\right]\\
\label{eq:1}\!\!& = &\!\!\frac{1}{1+\nu}\, \mathbb{E}\left[{\bf 1}_{\{T_x<\infty\}} \,T_x^{1+\nu} \left((1+\varphi_{\a,\gamma}^{-1}( T_x^{1/\a-\gamma}L_1^+))^{1+\nu} -1\right)\right]
\end{eqnarray}
which extends to all $\nu>-1$ by analyticity, since $L_1^+$ has moments of every order. We will now study the asymptotic behaviour of both sides of (\ref{eq:1}), introducing the crucial parameter
$$\nu_0\; = \; \frac{\alpha(\gamma-1)}{\alpha-1}\; >\; -1.$$
We begin with the left-hand side, which is easy.
\begin{lemma}
\label{lem:Lap}
One has
$$\int_0^{\infty}t^{\nu}\,\mathbb{P}[L_1^+> t^{-1/\a}(t^\gamma +x)]\,dt \,\sim\, \frac{\gamma^{\frac{\a}{1-\a}} ((\gamma\a -1)x)^{\frac{\nu -\nu_0}{\gamma}}}{\sqrt{(\a -1)(\gamma\a-1)}}\,\exp\left\{ -(\alpha-1)\,\gamma^{\frac{\alpha}{\alpha-1}}\,(\gamma\a -1)^{\frac{1-\gamma\a}{\gamma(\alpha-1)}}\, x^{\frac{\alpha\gamma-1}{\gamma(\alpha-1)}}\right\}.$$
\end{lemma}
\proof
By (14.35) in \cite{Sato}, we have the asymptotic behaviour
$$p_1(x)\; \sim\;\frac{\alpha^{-\frac{1}{2(\alpha-1)}}}{\sqrt{2\pi(\alpha-1)}}\,x^{\frac{2-\alpha}{2(\alpha-1)}} \exp\left\{-(\alpha-1)\, \a^{\frac{\a}{1-\a}}\, x^{\frac{\alpha}{\alpha-1}}\right\}$$
at infinity, where $p_1$ stands for the density of the random variable $L_1$. Making a change of variable and applying Watson's lemma - see also Theorem 2.5.3 in \cite{Z} - we easily deduce
\begin{equation}
\label{eq:iGamma}\mathbb{P}[L^+_1> x] \;\sim\;\frac{\alpha^{\frac{1}{2(\alpha-1)}}}{\sqrt{2\pi(\alpha-1)}}\, x^{-\frac{\alpha}{2(\alpha-1)}}\, \exp \left\{-(\alpha-1)\, \a^{\frac{\a}{1-\a}}\, x^{\frac{\alpha}{\alpha-1}}\right\}.
\end{equation}
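For the reader's convenience, here is a sketch of this step. Writing $a = \frac{2-\alpha}{2(\alpha-1)},$ $c = \frac{\alpha}{\alpha-1}$ and $K = (\alpha-1)\, \a^{\frac{\a}{1-\a}},$ the density asymptotics and the standard Laplace-type estimate $\int_x^{\infty} y^{a}\, e^{-K y^{c}}\, dy \sim (Kc)^{-1}\, x^{a+1-c}\, e^{-K x^{c}}$ yield
$$\mathbb{P}[L^+_1> x] \;\sim\;\frac{\alpha^{-\frac{1}{2(\alpha-1)}}}{\sqrt{2\pi(\alpha-1)}}\cdot\frac{x^{a+1-c}}{Kc}\; e^{-K x^{c}},$$
and one concludes by observing that $a+1-c = -\frac{\alpha}{2(\alpha-1)}$ and $Kc = \alpha^{-\frac{1}{\alpha-1}}.$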
On the other hand, we can rewrite
\begin{equation}
\label{eq:iii}
\int_0^{\infty}t^{\nu}\,\mathbb{P}[L_1^+> t^{-1/\a}(t^\gamma +x)]\,dt\; =\; x^{\frac{\nu+1}{\gamma}} \int_0^{\infty} s^\nu \,\mathbb{P}[L^+_1> x^{\frac{\alpha\gamma-1}{\alpha\gamma}}\eta(s)]\,ds\end{equation}
where $\eta(s)= s^{-1/\a}(s^\gamma+1)$ reaches its global minimum on $(0,\infty)$ at $s_\ast= (\alpha\gamma -1)^{-1/\gamma},$ with
$$\eta(s_\ast)\; =\; \gamma\a(\gamma\a-1)^{\frac{1-\gamma\a}{\gamma\a}}\qquad\mbox{and}\qquad \eta''(s_\ast)\; =\;\frac{\gamma(\gamma\a-1)^{2+1/\a}}{\a}\cdot$$
Plugging (\ref{eq:iGamma}) into the right-hand side of (\ref{eq:iii}), we obtain
\begin{multline*}\int_0^{\infty}t^{\nu}\,\mathbb{P}[L_1^+> t^{-1/\a}(t^\gamma +x)]\,dt \\\sim\;\frac{\alpha^{\frac{1}{2(\alpha-1)}}}{\sqrt{2\pi(\alpha-1)}} \, x^{\frac{\nu+(1-\nu_0)/2}{\gamma}} \int_0^{\infty} s^\nu\,\eta(s)^{\frac{\alpha}{2(1-\alpha)}}\, \exp\left\{-(\alpha-1)\,\a^{\frac{\alpha}{1-\alpha}} \,\eta(s)^{\frac{\alpha}{\alpha-1}} x^{1+\nu_0}\right\}\,ds,
\end{multline*}
which yields the required asymptotic behaviour, by Laplace's method.
\endproof
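The closed-form values of $\eta(s_\ast)$ and $\eta''(s_\ast)$ used in the proof can be sanity-checked numerically. A minimal sketch with illustrative parameters $\alpha=3/2$ and $\gamma=1$ (so that $\gamma\alpha>1$); the step $h$ is an arbitrary choice:

```python
alpha, gam = 1.5, 1.0

def eta(s):
    return s ** (-1.0/alpha) * (s**gam + 1.0)

s_star = (alpha*gam - 1.0) ** (-1.0/gam)                      # here s* = 2
min_closed = gam*alpha * (gam*alpha - 1.0) ** ((1.0 - gam*alpha)/(gam*alpha))
second_closed = gam * (gam*alpha - 1.0) ** (2.0 + 1.0/alpha) / alpha

h = 1e-4
first_num = (eta(s_star + h) - eta(s_star - h)) / (2.0*h)     # ~ 0
second_num = (eta(s_star + h) - 2.0*eta(s_star) + eta(s_star - h)) / h**2

print(eta(s_star), min_closed)      # both ~ 1.8899
print(first_num, second_num, second_closed)
```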
We will now analyze the right-hand side of (\ref{eq:1}), which is more involved. Introducing the function
$$\Phi_{\a,\gamma,\nu}(x)\; =\; x^{-\frac{(1+\nu)\alpha}{\gamma\a-1}} \left((1+\varphi_{\a,\gamma}^{-1}(x))^{1+\nu}-1\right)$$
on $(0,\infty),$ we can rewrite (\ref{eq:1}) as
\begin{equation}
\label{eq:1bis} \int_0^{\infty} t^{\nu}\,\mathbb{P}[L_1^+> t^{-1/\a}(t^\gamma +x)]\,dt\; =\;\frac{1}{1+\nu}\, \mathbb{E}\left[{\bf 1}_{\{T_x<\infty\}}\left(L_1^+\right)^{\frac{(1+\nu)\alpha}{\gamma\a-1}} \Phi_{\a,\gamma,\nu} (T_x^{1/\a-\gamma}\,L_1^+)\right].
\end{equation}
Taking $\nu =\nu_0$ and observing that $\varphi_{\a,\gamma}^{-1}(t)\sim (t/\gamma)^{\frac{\alpha}{\alpha-1}}$ as $t\to 0$ and $\varphi_{\a,\gamma}^{-1}(t)\sim t^{\frac{\alpha}{\gamma\a-1}}$ as $t\to\infty,$ we get
$$\lim_{x\rightarrow 0}\Phi_{\a,\gamma, \nu_0}(x)\; =\; (1+\nu_0)\gamma^{\frac{\alpha}{1-\alpha}}\; >\; 0
\qquad \text{ and }\qquad \lim_{x\rightarrow \infty}\Phi_{\a,\gamma, \nu_0}(x)\; =\; 1.$$
Therefore, since $\Phi_{\a,\gamma,\nu_0}$ is continuous and positive on $(0,\infty),$ we have
$$0\;<\; \inf_{x > 0}\left\{ \Phi_{\a,\gamma,\nu_0}(x)\right\} \;<\; \sup_{x > 0}\left\{\Phi_{\a,\gamma,\nu_0}(x)\right\}\; <\;\infty.$$
Going back to (\ref{eq:1bis}) and using Lemma \ref{lem:Lap}, we finally get the crude asymptotics
\begin{equation}
\label{eq:leqleq}
\mathbb{P}[T_x<\infty]\;\asymp\;\exp\left\{ -(\alpha-1)\,\gamma^{\frac{\alpha}{\alpha-1}}\,(\gamma\a -1)^{\frac{1-\gamma\a}{\gamma(\alpha-1)}}\, x^{\frac{\alpha\gamma-1}{\gamma(\alpha-1)}}\right\}.
\end{equation}
In order to obtain an exact asymptotics and finish the proof, we will need the following technical lemma.
\begin{lemma}\label{lem:Phi}
For every $\nu \in (-1, \gamma-1/\a -1],$ the function $\Phi_{\a,\gamma,\nu}$ is a homeomorphism from $(0,\infty)$ onto $(0,1).$
\end{lemma}
\proof
First, it is easy to see from the aforementioned asymptotics of $\varphi_{\a,\gamma}$ at zero and infinity that
$$\lim_{x\rightarrow 0}\Phi_{\a,\gamma, \nu}(x)\; =\; 0
\qquad \text{ and }\qquad \lim_{x\rightarrow \infty}\Phi_{\a,\gamma, \nu}(x)\; =\; 1$$
for $\nu \in (-1, \gamma-1/\a -1],$ and it is plain that $\Phi_{\a,\gamma, \nu}$ is continuous. Since $\varphi_{\a,\gamma}$ increases on $(0,\infty),$ we are reduced to showing that
$$z\;\mapsto\;\Phi_{\a,\gamma,\nu}\left(\varphi_{\a,\gamma}(z)\right)\; = \;\frac{\left((1+z)^{1+\nu}-1\right)\,z^{\frac{(1+\nu)}{\gamma\a-1}} }{\left((1+z)^\gamma-1\right)^{\frac{(1+\nu)\alpha}{\gamma\a-1}}}$$
increases on $(0,\infty).$ Setting $y=(1+z)^{\gamma}-1$ and $f_c(x) = (1+x)^c - x^c,$ we obtain
$$\left(\Phi_{\a,\gamma,\nu}\left(\varphi_{\a,\gamma}(z)\right) \right)^{\frac{\gamma\a-1}{1+\nu}}\; =\;\left(f_{\frac{1+\nu}{\gamma}}(y^{-1}) \right)^{\frac{\gamma\a-1}{1+\nu}} f_{\frac{1}{\gamma}}(y^{-1})$$
which, since $f_c$ decreases for $c\in(0,1],$ shows that $\Phi_{\a,\gamma, \nu}$ increases for $\gamma\geq 1$ and $\nu \in (-1, \gamma-1]$. Assuming finally that $\gamma<1,$ we need to prove that
$$x\;\mapsto\; g_{\a,\gamma,\nu}(x)\; =\; \left(f_{\frac{1+\nu}{\gamma}}\left(x\right) \right)^{\alpha-\frac{1}{\gamma}} \left(f_{\frac{1}{\gamma}}\left(x\right)\right)^{\frac{1+\nu}{\gamma}}$$
decreases on $(0,\infty)$. Setting $c= \frac{1+\nu}{\gamma} \in (0,1),$ we compute
$$g_{\a,\gamma,\nu}^\prime(x)\; =\; \frac{c\, g_{\a,\gamma,\nu}(x)}{\gamma(1+x)}\left(\alpha \gamma - (\alpha \gamma-1) \frac{x^{c-1}}{f_{c}(x)} - \frac{x^{\frac{1}{\gamma}-1}}{f_{\frac{1}{\gamma}}(x)}\right)\; < \; \frac{c\, g_{\a,\gamma,\nu}(x)}{\gamma(1+x)}\left(\alpha \gamma - (\alpha \gamma-1) \frac{x^{c-1}}{f_{c}(x)}\right).$$
It is easy to see that $x\mapsto x^{1-c}f_{c}(x)$ increases from $0$ to $c$ on $(0,+\infty),$ and we finally obtain
$$g^\prime_{\a,\gamma,\nu}(x) \; <\; \frac{((c-1)\gamma\a +1)g_{\a,\gamma,\nu}(x)}{\gamma(1+x)}\; \le \; 0$$
as soon as $\nu\le \gamma-1/\alpha-1$.
\endproof
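The monotonicity claim of Lemma \ref{lem:Phi} in the delicate regime $\gamma<1$ can also be checked numerically on a grid. A minimal sketch with illustrative parameters $\alpha=3/2,$ $\gamma=9/10$ and $\nu=-4/5,$ so that $\gamma\alpha>1$ and $\nu\le\gamma-1/\alpha-1$:

```python
import numpy as np

alpha, gam, nu = 1.5, 0.9, -0.8
c = (1.0 + nu) / gam                 # c in (0,1)

def f(cc, x):
    return (1.0 + x)**cc - x**cc

def g(x):
    # The function g_{alpha,gamma,nu} from the proof above.
    return f(c, x)**(alpha - 1.0/gam) * f(1.0/gam, x)**c

x = np.linspace(0.01, 50.0, 5000)
vals = g(x)
print(bool(np.all(np.diff(vals) < 0)))   # True: g decreases on the grid
```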
\begin{corollary}
\label{eq:A}
For every $A\ge 0,$ one has
$$\frac{\mathbb{P}[T_x\leq A]}{\mathbb{P}[T_x<+\infty]} \; \to \; 0$$
as $x\to\infty.$
\end{corollary}
\proof
Set $\nu=\varepsilon - 1$ with $\varepsilon>0$ small enough for $\Phi_{\a,\gamma,\varepsilon -1}$ to increase on $(0,\infty).$ By (\ref{eq:1bis}) and the fact that $L_1^+$ and $T_x$ are independent, we have
\begin{eqnarray*}
\varepsilon \int_0^{\infty} t^{\varepsilon - 1}\,\mathbb{P}[L_1^+> t^{-1/\a}(t^\gamma +x)]\,dt & = & \mathbb{E}\left[{\bf 1}_{\{T_x<\infty\}}\left(L_1^+\right)^{\frac{\alpha}{\gamma\a-1}} \Phi_{\a,\gamma,\varepsilon-1} (T_x^{1/\a-\gamma}\,L_1^+)\right] \\
&\geq & \mathbb{E}\left[\left(L_1^+\right)^{\frac{\alpha}{\gamma\a-1}} \Phi_{\a,\gamma,\varepsilon-1} (A^{1/\a-\gamma}\,L_1^+) \right]\, \mathbb{P}[ T_x\leq A].
\end{eqnarray*}
Combining now the crude asymptotics (\ref{eq:leqleq}) and Lemma \ref{lem:Lap}, we deduce that there exists $K>0$ such that
$$ \frac{\mathbb{P}[T_x\leq A]}{\mathbb{P}[T_x<+\infty]}\; \leq \; K\, x^{\frac{\varepsilon- 1 -\nu_0}{\gamma}}\;\to\; 0$$
as $x\to\infty,$ taking $\varepsilon > 0$ small enough.
\endproof
We can now finish the proof. Taking $\nu =\nu_0$ in (\ref{eq:1bis}), we first decompose the quantity
$$\gamma^{\frac{\alpha}{\a-1}}\int_0^{\infty}t^{\nu_0}\,\mathbb{P}[L_1^+> t^{-1/\a}(t^\gamma +x)]\,dt$$
into
$$\mathbb{E}[(L_1^+)^{\frac{\alpha}{\alpha-1}}]\,\mathbb{P}[T_x<\infty]\; +\; \frac{1}{\Phi_{\a,\gamma,\nu_0}(0+)}\,\mathbb{E}\left[ {\bf 1}_{\{T_x<\infty\}} (L_1^+)^{\frac{\alpha}{\alpha-1}} \left(\Phi_{\a,\gamma,\nu_0}( T_x^{1/\a -\gamma}L_1^+) -\Phi_{\a,\gamma,\nu_0}(0+)\right) \right].$$
Applying Lemma \ref{lem:Lap} and the moment evaluation
$$\mathbb{E}[(L_1^+)^{\frac{\alpha}{\alpha-1}}]\; =\; \frac{1}{\a-1}$$
which is e.g. a consequence of (2.6.20) in \cite{Z}, we see that the proof will be complete as soon as
\begin{equation}
\label{llinf}
\mathbb{E}\left[ {\bf 1}_{\{T_x<\infty\}} (L_1^+)^{\frac{\alpha}{\alpha-1}} \left(\Phi_{\a,\gamma,\nu_0}( T_x^{1/\a -\gamma}L_1^+) -\Phi_{\a,\gamma,\nu_0}(0+)\right) \right]\; \ll\; \mathbb{P}[T_x <\infty], \quad x\to\infty.
\end{equation}
But, decomposing according to $\{T_x\leq A\}$ or $\{T_x> A\}$, the left-hand side is bounded by
\begin{multline*}
\frac{2}{\a-1}\, \sup_{z > 0}\left\{\Phi_{\a,\gamma,\nu_0}(z)\right\}\,\mathbb{P}[T_x\leq A]\\
+\; \mathbb{E}\left[(L_1^+)^{\frac{\alpha}{\alpha-1}} \sup_{z\geq A}\left\{\left|\Phi_{\a,\gamma,\nu_0}(z^{1/\alpha-\gamma}L_1^+) -\Phi_{\a,\gamma,\nu_0}(0+)\right|\right\}\right]\, \mathbb{P}[T_x<\infty]
\end{multline*}
and (\ref{llinf}) follows by Corollary \ref{eq:A}, the continuity of $\Phi_{\a,\gamma,\nu_0}$ at zero, and dominated convergence.
\qed
\section{Proof of Theorem B}
\subsection{The lower bound} This part is easy and relies essentially on the identity (\ref{Variance}). Introducing
$$T^{(\beta)}_x \;=\; \inf\{ t\ge 0, \, L^{(\beta)}_t = t^{\gamma+\beta} +x\}\qquad \text{and}\qquad {\widehat T}^{(\beta)}_x\;=\;\inf\{t\geq 0,\, L^{(\beta)}_t = (t^{\gamma +\beta} +1)\, x^{\frac{\gamma\alpha -1}{\alpha(\gamma +\beta)}} \},$$
we see by scaling that
\begin{equation}
\label{Ska}
\mathbb{P}[\M_{\alpha,\rho,\gamma}^{(\beta)}\ge x]\; =\; \mathbb{P}[T^{(\beta)}_x < \infty]\; =\; \mathbb{P}[{\widehat T}^{(\beta)}_x < \infty].
\end{equation}
Setting
$$s_\ast\; =\; \arg\min_{s>0}\{ s^{-\beta-1/\a}(s^{\gamma+\beta}+1)\}\; =\; \left( \frac{1+\alpha \beta}{\gamma\a-1}\right)^{\frac{1}{\gamma+\beta}}$$
and
$$m_\ast\; =\; \min_{s >0}\{ s^{-\beta-1/\a}(s^{\gamma+\beta}+1)\}\; =\;\a(\gamma+\beta) (\alpha \beta +1)^{-\frac{1+\alpha\beta}{\a(\gamma+\beta)}}(\gamma\a -1)^{\frac{1-\gamma\a}{\a(\gamma +\beta)}},$$
a further scaling argument implies
$$ \mathbb{P}[{\widehat T}^{(\beta)}_x < \infty]\;\ge\; \mathbb{P}\left[ L^{(\beta)}_{s_\ast}\geq (s_\ast^{\gamma+\beta}+1)\, x^{\frac{\gamma\alpha -1}{\alpha(\gamma+\beta)}} \right]\; =\;
\mathbb{P}[L_1\geq (1+\alpha\beta)^{1/\alpha} m_\ast\, x^{\frac{\gamma\alpha -1}{\alpha(\gamma+\beta)}}].$$
When $L$ has positive jumps, applying e.g. Property 1.2.15 in \cite{SamTaq} yields the required lower bound
$$ \mathbb{P}[\M_{\alpha,\rho,\gamma}^{(\beta)}\ge x] \; \geq \; \kappa\, x^{\frac{1-\gamma\a}{\gamma+\beta}}, \qquad x\rightarrow \infty,$$
for some $\kappa > 0.$ When $L$ has no positive jumps, we obtain from (\ref{eq:iGamma}) and some simplifications the required lower bound
$$\liminf_{x\to\infty} x^{\frac{1-\gamma\a}{(\alpha-1)(\gamma+\beta)}} \log \mathbb{P}[{\bf M}^{(\beta)}_{\a,1/\a,\gamma} \geq x]\; \ge\; - c_{\a,\beta,\gamma}.$$
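The closed expressions for $s_\ast$ and $m_\ast$ above are easy to confirm numerically. A minimal sketch with illustrative parameters $\alpha=2$ and $\beta=\gamma=1$:

```python
import numpy as np

alpha, beta, gam = 2.0, 1.0, 1.0

def h(s):
    # The function s -> s^(-beta-1/alpha) (s^(gamma+beta) + 1) minimized above.
    return s ** (-beta - 1.0/alpha) * (s**(gam + beta) + 1.0)

s_star = ((1.0 + alpha*beta) / (gam*alpha - 1.0)) ** (1.0/(gam + beta))
m_star = (alpha*(gam + beta)
          * (alpha*beta + 1.0) ** (-(1.0 + alpha*beta)/(alpha*(gam + beta)))
          * (gam*alpha - 1.0) ** ((1.0 - gam*alpha)/(alpha*(gam + beta))))

grid = np.linspace(0.05, 20.0, 200_000)
print(h(s_star), m_star, float(h(grid).min()))   # all ~ 1.7548
```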
\subsection{The upper bound in the case with positive jumps} Introducing the parameter
$$\delta\; =\; \frac{\gamma\alpha -1}{\alpha(\gamma +\beta)}\;\in\; (0,1)$$
and fixing $\varepsilon>0$ small enough such that $\eta = 2^{\delta}(1+\varepsilon)^{\delta-1} > 1,$ define the stochastically increasing family of stopping times
$$ {\widehat T}^{(\beta,k)}_x\;=\;\inf\{t\geq 0,\, L^{(\beta)}_t - (1+\varepsilon)^{-k} t^{\gamma +\beta} x^\delta\, =\, 2^k x^{\delta} \},\qquad k\ge 0.$$ By (\ref{Ska}), we have the telescoping decomposition
$$\mathbb{P}[\M_{\alpha,\rho,\gamma}^{(\beta)}\ge x]\; =\; \mathbb{P}[{\widehat T}^{(\beta,0)}_x < \infty]\; =\; \sum_{k\ge 0}\left( \mathbb{P}[{\widehat T}^{(\beta, k)}_x < \infty]\, -\,\mathbb{P}[{\widehat T}^{(\beta, k+1)}_x < \infty]\right).$$
We first consider the case $\gamma+\beta \ge 1.$ Setting $r_k= (3 \times 2^k (1+\varepsilon)^k)^{\frac{1}{\gamma+\beta}},$ we can bound
\begin{eqnarray*}
\mathbb{P}[{\widehat T}^{(\beta, k)}_x < \infty] & \le &\mathbb{P}\left[\sup_{t\in [0,r_k]} \{L_t^{(\beta)}\}\geq 2^k x^{\delta} \right]\; +\; \mathbb{P}\left[ \sup_{t\geq r_k}\{ L_t^{(\beta)} - (1+\varepsilon)^{-k}t^{\gamma+\beta} x^\delta\}\geq 2^k x^{\delta}\right]\\
& \le & \mathbb{P}\left[ L_1^\ast \ge \eta^k 3^{\delta -1} x^\delta \right] \; +\; \mathbb{P}\left[\sup_{t\geq 0}\{ L_{t+r_k}^{(\beta)} -(1+\varepsilon)^{-k} t^{\gamma+\beta}x^\delta\} \geq 2^{k+2}x^{\delta}\right],
\end{eqnarray*}
where in the second line we have used the a.s. inequality $\sup_{t\in[0,1]}\{L_{t}^{(\beta)}\}\le L_1^\ast,$ which is obvious, and the equally obvious deterministic inequality
\begin{equation}
\label{Dieter}
(t+r_k)^{\gamma+\beta}\; \geq\; t^{\gamma+\beta}\,+\, r_k^{\gamma+\beta}
\end{equation}
for all $t\ge 0,$ which follows from $\gamma +\beta \ge 1$: the map $t\mapsto (t+r_k)^{\gamma+\beta} - t^{\gamma+\beta}$ is then non-decreasing on $[0,\infty)$ and equals $r_k^{\gamma+\beta}$ at $t=0.$ The next step is to write down the process decomposition
\begin{eqnarray}
\label{eq:decomp}L^{(\beta)}_{t+r_k} & = & \left( \beta\int_0^{r_k}(t+r_k-u)^{\beta-1}\,L_u\, du \; +\; t^\beta\, L_{r_k}\right)\;+\; \beta\int_0^{t}(t-s)^{\beta-1}\,(L_{s+r_k}-L_{r_k})\, ds\\
\nonumber& \stackrel{d}{=} & \left( \beta\int_0^{r_k}(t+r_k-u)^{\beta-1}\,L_u\, du \; +\; t^\beta\, L_{r_k}\right)\; +\; {\widehat L}^{(\beta)}_t\; \le \; (t+r_k)^\beta L^\ast_{r_k}\; +\; {\widehat L}^{(\beta)}_t
\end{eqnarray}
with $\{{\widehat L}^{(\beta)}_t, \, t\ge 0\}$ an independent copy of $\{L^{(\beta)}_t, \, t\ge 0\},$ which implies
\begin{multline*}
\mathbb{P}\left[\sup_{t\geq 0}\{ L_{t+r_k}^{(\beta)} -(1+\varepsilon)^{-k} t^{\gamma+\beta}x^\delta\} \geq 2^{k+2}x^{\delta}\right]
\\\!\!\!\leq\;\; \mathbb{P}[{\widehat T}^{(\beta, k+1)}_x < \infty]\; +\; \mathbb{P}\left[\sup_{t\geq 0} \{ L_{r_k}^\ast (t+r_k)^\beta - \varepsilon \,(1+\varepsilon)^{-k-1} t^{\gamma+\beta} x^\delta\}\geq 2^{k+1} x^{\delta}\right]
\\\;\;\;\leq\;\; \mathbb{P}[{\widehat T}^{(\beta, k+1)}_x < \infty]\; +\; \mathbb{P}\left[ c_\beta\, r_k^\beta L_{r_k}^\ast \,+\, \sup_{t\geq 0} \{ c_\beta\,L_{r_k}^\ast t^\beta - \varepsilon \,(1+\varepsilon)^{-k-1} t^{\gamma+\beta} x^\delta\}\geq 2^{k+1} x^{\delta}\right],
\end{multline*}
where $c_\beta = 2^{\vert \beta -1\vert}$ and we have used $(t+s)^\beta \le c_\beta (t^\beta + s^\beta)$ for all $t,s\ge 0.$ The second term on the right-hand side is bounded by
\begin{multline*}
\mathbb{P}\left[ L_1^\ast \ge \eta^k 3^{\delta -1}c_\beta^{-1} x^\delta \right] \; +\; \mathbb{P}\left[ \sup_{t\geq 0} \{ c_\beta\,L_{r_k}^\ast t^\beta - \varepsilon \,(1+\varepsilon)^{-k-1} t^{\gamma+\beta} x^\delta\}\geq 2^k x^{\delta}\right]
\\ =\; \mathbb{P}\left[ L_1^\ast \ge \eta^k 3^{\delta -1}c_\beta^{-1} x^\delta \right]\; +\; \mathbb{P}\left[ L_1^\ast \ge \eta^k \kappa\, x^\delta \right]
\end{multline*}
for some positive constant $\kappa$ not depending on $k,x.$ Setting ${\hat \kappa} = \min\{\kappa, 3^{\delta -1}c_\beta^{-1}\} > 0,$ and putting everything together, we finally obtain
$$\mathbb{P}[\M_{\alpha,\rho,\gamma}^{(\beta)}\ge x]\; \le\; 3\,\sum_{k\ge 0}\mathbb{P}\left[ L_1^\ast \ge \eta^k{\hat \kappa} \, x^\delta \right]\; \sim\; \frac{3\,{\hat \kappa}^{-\a} \Gamma(\alpha) \sin(\pi\alpha \rho)}{\pi (1-\eta^{-\a})}\, x^{\frac{1-\gamma\a}{\gamma +\beta}},$$
where the estimate follows at once from (\ref{Bing}) and direct summation. This completes the proof for $\gamma +\beta \ge 1.$ The case $\gamma +\beta < 1$ follows along the same lines, except that (\ref{Dieter}) is not true anymore. We hence set
$$\lambda\; =\;\frac{\varepsilon}{2(1+\varepsilon)}\,\in\, (0,1)\qquad\mbox{and}\qquad r_k\; =\; (3\lambda^{-1} \times 2^k (1+\varepsilon)^k)^{\frac{1}{\gamma+\beta}},\quad k\ge 0.$$
Using the obvious inequality $(t+r_k)^{\gamma+\beta}\geq (1-\lambda)t^{\gamma+\beta}+ \lambda r_k^{\gamma+\beta}$
leads first to
$$\mathbb{P}[{\widehat T}^{(\beta, k)}_x < \infty]\; \le\; \mathbb{P}\left[ L_1^\ast \ge \eta^k 3^{\delta -1} x^\delta \right] \; +\; \mathbb{P}\left[\sup_{t\geq 0}\{ L_{t+r_k}^{(\beta)} -(1-\lambda)(1+\varepsilon)^{-k} t^{\gamma+\beta}x^\delta\} \geq 2^{k+2}x^{\delta}\right].$$
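For completeness, the inequality $(t+r_k)^{\gamma+\beta}\geq (1-\lambda)t^{\gamma+\beta}+ \lambda r_k^{\gamma+\beta}$ used above is valid for any power $\gamma+\beta>0$ and any $\lambda\in(0,1),$ since a maximum always dominates a convex combination:
$$(t+r_k)^{\gamma+\beta}\;\geq\;\max\{t,r_k\}^{\gamma+\beta}\;=\;\max\{t^{\gamma+\beta},\, r_k^{\gamma+\beta}\}\;\geq\;(1-\lambda)\,t^{\gamma+\beta}\,+\,\lambda\, r_k^{\gamma+\beta}.$$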
Then, we can bound
\begin{multline*}
\mathbb{P}\left[\sup_{t\geq 0}\{ L_{t+r_k}^{(\beta)} -(1-\lambda)(1+\varepsilon)^{-k} t^{\gamma+\beta}x^\delta\} \geq 2^{k+2}x^{\delta}\right]
\\\!\!\!\leq\;\; \mathbb{P}[{\widehat T}^{(\beta, k+1)}_x < \infty]\; +\; \mathbb{P}\left[\sup_{t\geq 0} \{ 2L_{r_k}^\ast (t+r_k)^\beta - \varepsilon \,(1+\varepsilon)^{-k-1} t^{\gamma+\beta} x^\delta\}\geq 2^{k+2} x^{\delta}\right],
\end{multline*}
and the proof is finished similarly.
\qed
\subsection{The upper bound in the case without positive jumps} The argument relies on the following well-known association lemma, which will also be used during the proof of Theorem C.
\begin{lemma}
\label{lem:ST}
Let $F, G$ be two bounded functionals on the Skorokhod space $\mathcal{D}(\mathbb{R}^+,\mathbb{R})$ which are both non-increasing or both non-decreasing. Then, one has
$$\mathbb{E}\left[ F(L_u,\, u\geq 0)\, G(L_u,\, u\geq 0)\right]\; \geq\; \mathbb{E}\left[ F (L_u,\, u\geq 0)\right] \mathbb{E}\left[ G(L_u,\, u\geq 0)\right].$$
\end{lemma}
\proof
By c\`adl\`ag approximation, it is enough to consider the case when $F,G$ depend only on a finite number of points. With the notation of Chapter 4.6 in \cite{SamTaq}, we are hence reduced to showing that the random vector $(L_{t_1}, L_{t_2}, \ldots, L_{t_n})$ is associated for every $n\ge 2$ and $0 < t_1 < \ldots < t_n.$ By independence of the increments we have $(L_{t_1}, L_{t_2}, \ldots, L_{t_n}) = (X_1, X_1 +X_2, \ldots, X_1+\cdots +X_n),$ where the $X_i$'s are mutually independent real random variables, making the vector $X=(X_1, \ldots, X_n)$ trivially associated. We can then apply e.g. Exercise 4.25 p. 220 in \cite{SamTaq}.
\endproof
Let us now finish the proof. For simplicity, we will write $T_x$ for $T_x^{(\beta)}\!.$ Let $\varepsilon>0$ and fix $\delta$ small enough such that $\eta = 1- (1-\varepsilon)(\delta+1)^{\beta+\gamma} > 0.$ Using the absence of positive jumps, we obtain
\begin{align}
\notag\int_0^{\infty} \mathbb{P}[L_t^{(\beta)} \geq (1-\varepsilon)t^{\beta+\gamma} +x]\,dt & = \int_0^{\infty} \mathbb{P}\left[L_t^{(\beta)} -L_{T_x}^{(\beta)} \geq (1-\varepsilon)t^{\beta+\gamma} - T_x^{\beta+\gamma}, \; T_x<+\infty \right]dt \\
\label{eq:nojumps1}& \ge \int_0^{\delta} \mathbb{P}\left[L_{T_x(t+1)}^{(\beta)}-L_{T_x}^{(\beta)} \geq -\eta\,T_x^{\beta+\gamma}, \; T_x <\infty\right] dt.
\end{align}
By (\ref{Variance}) and a change of variable, the left-hand side equals
$$\int_0^{\infty} \mathbb{P}[L_t^{(\beta)} \geq (1-\varepsilon)t^{\beta+\gamma} +x]\,dt\; = \; \kappa_\varepsilon\, \int_0^{\infty} t^{-\frac{\a\beta}{1+\a\beta}}\, \mathbb{P}[L_1 \geq (1+\alpha\beta)^{1/\alpha}t^{-1/\a}(t^{\frac{\gamma+\beta}{1+\a\beta}} + c_\varepsilon x)]\,dt$$
for some positive constants $\kappa_\varepsilon, c_\varepsilon$ such that $c_\varepsilon \to 1$ as $\varepsilon\to 0$ and, by Lemma \ref{lem:Lap}, we first deduce
$$\log\int_0^{\infty} \mathbb{P}[L_t^{(\beta)} \geq (1-\varepsilon)t^{\beta+\gamma} +x]\,dt \;\sim\; - c_{\a,\beta,\gamma} (c_\varepsilon x)^{\frac{\gamma\a -1}{(\a-1)(\gamma +\beta)}}.$$
We shall now separate the proof according as $\beta\geq1$ or $\beta<1$.\\
Assume first $\beta\geq 1$. Bounding the right-hand side of (\ref{eq:nojumps1}) leads to
$$
\int_0^{\infty} \mathbb{P}[L_t^{(\beta)} \geq (1-\varepsilon)t^{\beta+\gamma} +x]\,dt \ge\delta\, \mathbb{P}\left[ \inf_{u\ge 1,t\le \delta}\{ u^{-\beta-\gamma} (L_{u(t+1)}^{(\beta)}-L_u^{(\beta)} )\}\geq -\eta, \;1< T_x <\infty\right],
$$
whence
\begin{multline}
\label{eq:Z}
\mathbb{P}\left[ \inf_{u\ge 1,t\le \delta}\{ u^{-\beta-\gamma} (L_{u(t+1)}^{(\beta)}-L_u^{(\beta)} )\} \geq -\eta,\; T_x <\infty\right]\\ \le \; \delta^{-1}\int_0^{\infty} \mathbb{P}[L_t^{(\beta)} \geq (1-\varepsilon)t^{\beta+\gamma} +x]\,dt\, +\, \mathbb{P}[T_x \le 1].\qquad
\end{multline}
We next observe that the contribution of $\mathbb{P}[T_x \le 1]$ in the right-hand side of (\ref{eq:Z}) is negligible, using the obvious bound
$$\mathbb{P}[T_x\leq 1]\;\leq\; \mathbb{P}[\tau_x\leq 1]$$
with $\tau_x = \inf\{t\geq0,\; L_t^{(\beta)}=x\},$ the crude estimates
$$\log\mathbb{P} [\tau_x \le 1]\;\asymp\; \log\mathbb{P} [ L_1 > x] \;\asymp\; -x^{\frac{\alpha}{\alpha-1}}$$
and the strict inequality
$$\frac{\alpha \gamma-1}{(\alpha-1)(\beta+\gamma)} \; < \;\frac{\alpha}{\alpha-1}\cdot$$
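The latter strict inequality is checked by cross-multiplication:
$$\frac{\alpha \gamma-1}{(\alpha-1)(\beta+\gamma)} \; < \;\frac{\alpha}{\alpha-1}\quad\Longleftrightarrow\quad \alpha\gamma \,-\, 1\; <\; \alpha(\beta+\gamma)\quad\Longleftrightarrow\quad \alpha\beta\; >\; -1,$$
and the last condition plainly holds.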
Above, the crude estimates are a consequence of (\ref{Variance}), (\ref{eq:iGamma}) and
$$\mathbb{P}[L_1^{(\beta)} >x ]\; \le\; \mathbb{P}[\tau_x \le 1]\; \le\; \mathbb{P}\left[\sup_{t\le 1}\left\{ L_t\right\} > x\right]\; =\; \a\,\mathbb{P}[L_1 > x],$$the last equality being well-known as the reflection principle for spectrally negative stable L\'evy processes - see e.g. Exercises 29.7 and 29.18 in \cite{Sato}. Finally, we notice that
$$ u^{-\beta-\gamma} (L_{u(t+1)}^{(\beta)}-L_u^{(\beta)})\; =\;\beta\int_0^{1+t} \left( (1+t -s)^{\beta -1} - (1-s)^{\beta -1}{\bf 1}_{\{s\le 1\}}\right) \, \frac{L_{us}}{u^\gamma}\,ds$$
is an increasing functional of $\{L_s, \, s\ge 0\},$ because $\beta \ge 1.$ Applying Lemma \ref{lem:ST}, we obtain
\begin{multline*}
\mathbb{P}\left[ \inf_{u\ge 1,t\le \delta}\{ u^{-\beta-\gamma} (L_{u(t+1)}^{(\beta)}-L_u^{(\beta)} )\} \geq -\eta,\; T_x <\infty\right]\\
\ge\; \mathbb{P}\left[ \inf_{u\ge 1,t\le \delta}\{ u^{-\beta-\gamma} (L_{u(t+1)}^{(\beta)}-L_u^{(\beta)} )\}\geq -\eta\right]\;\mathbb{P}\left[ T_x <\infty\right]\; = \; \kappa\,\mathbb{P}\left[ T_x <\infty\right]
\end{multline*}
for some $\kappa > 0$ not depending on $x.$ Putting everything together, we get
$$\limsup_{x\to\infty} x^{\frac{1-\gamma\a}{(\a-1)(\gamma +\beta)}}\,\log \mathbb{P}[T_x <\infty]\; \le\; - c_{\a,\beta,\gamma}\, c_\varepsilon^{\frac{\gamma\a -1}{(\a-1)(\gamma +\beta)}},$$
which, letting $\varepsilon\to 0$, completes the proof in the case $\beta\geq1$.\\
Assume second $\beta< 1$. We set
$$\sigma_t= \beta \int_0^{1} \left\{ (1-s)^{\beta-1} -(1+t -s)^{\beta-1}\right\} s^{\gamma} ds$$
which is a positive increasing function on $(0,\infty)$ such that $\sigma_t\rightarrow 0$ as $t\rightarrow 0$.
Writing $T_x^{\beta+\gamma}$ as
$$T_x^{\beta+\gamma} = \frac{ \beta}{\sigma_t} \int_0^{T_x} \left\{ (T_x-s)^{\beta-1} -(T_x(1+t) -s)^{\beta-1}\right\} s^{\gamma} ds$$
we deduce using a change of variable that
\begin{multline*}
L_{T_x(t+1)}^{(\beta)}-L_{T_x}^{(\beta)} + \frac{\eta}{2}T_x^{\beta+\gamma}\geq \beta T_x^\beta\int_{1}^{1+t} (1+t -s)^{\beta -1}L_{sT_x} ds
- T_x^\beta h_\beta(t) \sup_{u\geq0}\left\{L_u- \frac{\eta}{2\sigma_t}u^\gamma\right\},
\end{multline*}
where $h_\beta(t)=1+t^\beta -(1+t)^\beta$ is increasing in $t$.
Going back to (\ref{eq:nojumps1}), and taking $a<\delta$, the right-hand side is then greater than
\begin{equation}\label{eq:a}
a \mathbb{P}\left[ F_\delta(L)- \frac{ h_\beta(\delta)}{ \delta^{\gamma}x^{\frac{\gamma}{\beta+\gamma}}} \sup_{s\geq0} \left\{L_{s} - \frac{\eta}{2 \sigma_a} s^\gamma\right\} \geq -\eta/2, \;\delta x^{\frac{1}{\beta+\gamma}} <T_x <\infty\right]
\end{equation}
where
$$
F_\delta(L)= \beta \inf_{t\leq \delta} \int_1^{1+t} (1+t -s)^{\beta -1} \inf_{u\geq 1}\frac{L_{su}}{u^\gamma}\,ds
$$
is an increasing functional of $L$. We next observe that, cutting (\ref{eq:a}) in two as in (\ref{eq:Z}), the second term will be negligible by taking $\delta$ small enough since
$$\mathbb{P}[T_x\leq \delta x^{\frac{1}{\gamma+\beta}}]\;\leq\; \mathbb{P}[\tau_x\leq \delta x^{\frac{1}{\gamma+\beta}}]$$
and
$$\log\mathbb{P} [\tau_x \le \delta x^{\frac{1}{\gamma+\beta}}]\;\asymp\; \log\mathbb{P} [ L_1 > \delta^{-\beta - \frac{1}{\alpha}} x^{\frac{\alpha\gamma-1}{\alpha(\gamma+\beta)}}] \;\asymp\; - \delta^{- \frac{\alpha\beta+1}{\alpha-1}} x^{\frac{\alpha\gamma-1}{(\alpha-1)(\gamma+\beta)}}.$$
Thus, it remains to deal with the term
$$ \mathbb{P}\left[ F_\delta(L) \geq -\eta/4,\, T_x <\infty\right] - \mathbb{P}\left[ h_\beta(\delta) \sup_{s\geq0} \left\{L_{s} - \frac{\eta}{2 \sigma_a} s^\gamma\right\} \geq \frac{\eta}{4} \delta^{\gamma}x^{\frac{\gamma}{\beta+\gamma}} \right].
$$
From Theorem A and using the scaling of $L$, the second term behaves as
$$ \log \mathbb{P}\left[ h_\beta(\delta) \left(\sigma_a\right)^{\frac{1}{\alpha\gamma-1}} \sup_{s\geq0} \left\{L_{s} -\frac{\eta}{2} s^\gamma\right\} \geq\frac{\eta}{4} \delta^\gamma x^{\frac{\gamma}{\beta+\gamma}} \right]\asymp - \left(\sigma_a\right)^{-\frac{1}{\gamma(\alpha-1)}} x^{\frac{\gamma \alpha-1}{(\alpha-1)(\gamma+\beta)}}$$
which is negligible by taking $a$ small enough. The proof is then concluded as in the case $\beta\geq1$ by applying Lemma \ref{lem:ST} to the term $
\mathbb{P}\left[ F_\delta(L) \geq -\eta/4, \; T_x <\infty\right]$.\\
\qed
\begin{remark} In the particular case $\beta =1$ of the integrated stable process, we may proceed as in the proof of Theorem A, and obtain a more precise upper bound. The strong Markov property at $T_x$ for the two-dimensional process
$$\{(L^{(1)}_t, L_t), \, t\ge 0\},$$
a scaling argument and (\ref{Variance}) imply firstly
\begin{multline*}
\int_0^\infty t^{\nu_0}\, \mathbb{P}[L^{(1)}_t> t^{1+\gamma} +x]\, dt \\
= \;\mathbb{E}\left[{\bf 1}_{\{T_x<\infty\}} \, T_x^{1+\nu_0}\, \int_0^{\infty} (1+t)^{\nu_0}\, {\bf 1}_{\{{\widetilde L}_1^+ + (tT_x)^{-1/\a} (L_{T_x} -(1+\gamma) T_x^{\gamma}) > T_x^{\gamma-1/\alpha}\,\psi_{\a,\gamma} (t)\}}\, dt\right]
\end{multline*}
where $\psi_{\a,\gamma}(t)= t^{-1-1/\a}((t+1)^{1+\gamma} -1 - (1+\gamma) t)$ is again a homeomorphism from $(0,\infty)$ to $(0,\infty),$ and $T_x$ and ${\widetilde L}_1^+$ are independent. We can then bound
$$
\int_0^\infty t^{\nu_0}\, \mathbb{P}[L^{(1)}_t> t^{1+\gamma} +x]\, dt\;\geq \;\mathbb{E}\left[{\bf 1}_{\{T_x<\infty\}} \, T_x^{1+\nu_0}\, \int_0^{\infty} (1+t)^{\nu_0}\, {\bf 1}_{\{{\widetilde L}_1^+ > T_x^{\gamma-1/\alpha}\,\psi_{\a,\gamma} (t)\}}\, dt\right],$$
using the crucial fact that the derivative of $t\mapsto L^{(1)}_t - t^{1+\gamma}$ at $T_x,$ which equals $L_{T_x} - (1+\gamma) T_x^{\gamma},$ is a.s. non-negative. This leads to
$$ \int_0^{\infty} t^{\nu_0}\,\mathbb{P}[L_1^+> t^{-1-1/\a}(t^{1+\gamma} +x)]\,dt\; \ge\;\frac{1}{1+\nu_0}\; \mathbb{E}\left[{\bf 1}_{\{T_x<\infty\}}\left({\widetilde L}_1^+\right)^{\frac{(1+\nu_0)\alpha}{\gamma\a-1}} \Psi_{\a,\gamma} (T_x^{1/\a-\gamma}\,{\widetilde L}_1^+)\right]$$
where
$$\Psi_{\a,\gamma}(x)\; =\; x^{-\frac{(1+\nu_0)\alpha}{\gamma\a-1}} \left((1+\psi_{\a,\gamma}^{-1}(x))^{1+\nu_0}-1\right)$$
is again bounded away from zero and $\infty,$ by the fateful choice of $\nu_0.$ We finally obtain
$$\mathbb{P}[T_x <\infty]\; \le \; \kappa_+ \, \int_0^{\infty} t^{\nu_0}\,\mathbb{P}[L_1^+> t^{-1-1/\a}(t^{1+\gamma} +x)]\,dt$$
for some $\kappa_+ \in(0,\infty),$ and an appropriate modification of Lemma 1 yields
$$\mathbb{P}[{\bf M}^{(1)}_{\a,1/\a,\gamma} \geq x]\; \le\; {\hat \kappa}_+\,\exp\left\{- c_{\a,1,\gamma}\,x^{\frac{\gamma\a-1}{(\alpha-1)(1+\gamma)}}\right\}$$
at infinity, for some other ${\hat \kappa}_+ \in(0,\infty).$ Unfortunately, the precise lower bound which can be derived from (\ref{eq:iGamma}) is different: one gets
$$\mathbb{P}[{\bf M}^{(1)}_{\a,1/\a,\gamma} \geq x]\; \ge\; {\hat \kappa}_-\,x^{\frac{1-\gamma\a}{2(\alpha-1)(1+\gamma)}}\,\exp\left\{- c_{\a,1,\gamma}\,x^{\frac{\gamma\a-1}{(\alpha-1)(1+\gamma)}}\right\}$$
for some ${\hat \kappa}_-\in(0,\infty),$ and the exact polynomial speed before the exponential term remains unknown. We believe that this speed is given by the lower bound, and we refer to Remark \ref{Precise} (c) below for a general conjecture.
\end{remark}
\subsection{A more precise estimate in the Brownian case} In this paragraph we specialize the general results of \cite{HP} to the process $L^{(\beta)}$ in the case $\alpha =2,$ and we get a refinement of Theorem B (b). Observe that in this framework we can also consider the wider range $\beta > -1/2.$ It turns out that a transition phenomenon occurs around $\beta =1/2.$
\begin{proposition}
\label{ApplyHP}
Assume $\gamma>1/2.$
\medskip
{\em (a)} If $\beta \in(-1/2, 1/2),$ there exists $\kappa_{\beta,\gamma} > 0$ such that
$$\mathbb{P}[{\bf M}_{2,1/2,\gamma}^{(\beta)} \geq x] \;\sim\; \kappa_{\beta,\gamma}\, x^{\frac{2\beta(1-2\gamma)}{(2\beta +1)(\gamma+\beta)}}\, \exp\left\{ -c_{2,\beta,\gamma}\, x^{\frac{2\gamma -1}{\gamma+\beta}}\right\}.$$
{\em (b)} If $\beta > 1/2,$ there exists ${\tilde \kappa_{\beta,\gamma}} > 0$ such that
$$\mathbb{P}[{\bf M}_{2,1/2,\gamma}^{(\beta)} \geq x] \;\sim\; {\tilde \kappa_{\beta,\gamma}}\, x^{\frac{1-2\gamma}{2(\gamma+\beta)}}\, \exp\left\{ -c_{2,\beta,\gamma}\, x^{\frac{2\gamma -1}{\gamma+\beta}}\right\}.$$
\end{proposition}
\proof
With our normalization, one has $L_1 \sim \mathcal{N}(0, 2)$ and a scaling argument implies
$${\bf M}_{2,1/2,\gamma}^{(\beta)}\;\stackrel{d}{=}\; \left(\frac{2}{2\beta+1}\right)^{\frac{\gamma +\beta}{2\gamma-1}}\, {\widetilde {\bf M}_{\beta,\gamma}}$$
where
$${\widetilde {\bf M}_{\beta,\gamma}}\; =\; \sup_{t>0}\left\{ \sqrt{2\beta+1}\int_0^t (t-s)^\beta \, dB_s\, - \, t^{\gamma +\beta}\right\}$$
and $\{B_t,\, t\ge 0\}$ is a standard linear Brownian motion. Setting $H = \beta +1/2,$ the process
$$X_t\;=\; \sqrt{2\beta +1}\int_0^t (t-s)^\beta \, dB_s, \qquad t \ge 0,$$
is Gaussian with mean $0$ and variance $t^{2H},$ and self-similar with index $H.$ With the notation of Section 1 in \cite{HP}, we have
\begin{equation}
\label{Een}
s_0\; =\; \left(\frac{2\beta +1}{2\gamma -1}\right)^{\frac{1}{\gamma +\beta}}\qquad\mbox{and}\qquad A\; =\; 2 \sqrt{\frac{c_{2,\beta,\gamma}}{2\beta+1}}.
\end{equation}
We now wish to apply Theorem 1 in \cite{HP}, whose statement deals with the case $H\in (0,1),$ but we can actually consider any $H >0$ by Remark 1 therein. Following (7) in \cite{HP}, the next step is to evaluate the behaviour of $\mathbb{E}[(Y_t - Y_s)^2]$ as $t,s\to s_0,$ having set $Y_t = t^{-H}X_t$ for all $t > 0.$ Because of the time normalization, we have not found any precise reference for this behaviour in the literature and so we give the details. For $0 < s <t,$ we compute
$$\mathbb{E}[(Y_t - Y_s)^2]\; =\; 2 \; -\; I_\beta (x)$$
with $x=st^{-1}\in (0,1)$ and
$$I_\beta(x)\; =\; 2(2\beta +1)\sqrt{x}\int_0^1 (1-u)^\beta (1-xu)^\beta\, du.$$
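For completeness, this expression follows from $\mathbb{E}[Y_s^2]=\mathbb{E}[Y_t^2]=1,$ the It\^o isometry and the change of variable $u\mapsto su$ in the covariance:
$$I_\beta(x)\;=\;2\,\mathbb{E}[Y_sY_t]\;=\;2(2\beta+1)\,(st)^{-H}\int_0^s (s-u)^\beta(t-u)^\beta\, du\;=\;2(2\beta +1)\sqrt{x}\int_0^1 (1-u)^\beta (1-xu)^\beta\, du,$$
where we have used $H = \beta + 1/2.$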
We need to study the asymptotic behaviour of $I_\beta(x)$ as $y = 1-x\to 0.$ If $\beta > 1/2,$ rewriting
$$I_\beta(x)\; =\; 2(2\beta + 1)\sqrt{1-y}\int_0^1 (1-u)^{2\beta} (1+yu(1-u)^{-1})^\beta\, du,$$
making a Taylor expansion of order 2 of both quantities in $y$ and evaluating the two underlying Beta integrals leads to
$$I_\beta(x)\; =\; 2\; -\; \frac{(2\beta +1) y^2}{4(2\beta -1)}\; +\; o(y^2).$$
This shows that (7) in \cite{HP} holds with
\begin{equation}
\label{Twee}
\alpha\; = \;2 \qquad \mbox{and}\qquad D\; =\; \frac{(2\beta +1)}{4(2\beta -1)s_0^2}\cdot
\end{equation}
If $\beta < 1/2,$ the argument does not apply because the second Beta integral diverges. We first rewrite
$$I_\beta(x)\; =\; \frac{2(2\beta+1)\sqrt{x}}{\beta +1}\;\pFq{2}{1}{-\beta,1}{\beta +2}{x}\; =\;\frac{2(2\beta+1)\sqrt{x} y^\beta}{\beta +1}\;\pFq{2}{1}{-\beta,\beta + 1}{\beta +2}{-xy^{-1}},$$
where the first equality follows from Euler's formula and the second one from Pfaff's transformation for the hypergeometric function - see respectively 2.1.3(10) and 2.1.4(22) in \cite{EMOT}. Applying next the residue transformation 2.1.4(17) in \cite{EMOT}, we obtain
\begin{eqnarray*}
I_\beta(x) & = & 2 x^{\beta +1/2}\pFq{2}{1}{-\beta,-1-2\beta}{-2\beta}{-yx^{-1}}\; -\; \frac{2\Gamma(\beta +1)\Gamma(-2\beta)}{\Gamma(-\beta)} \, x^{-\beta -1/2} y^{2\beta +1}\\
& = & 2 \; -\; \frac{\Gamma^2(\beta +1)}{\Gamma(2\beta +1)\,\cos(\pi\beta)} \, y^{2\beta +1} \; +\; O(y^2).
\end{eqnarray*}
This shows that (7) in \cite{HP} holds with
\begin{equation}
\label{Drii}
\alpha\; = \;2\beta +1 \qquad \mbox{and}\qquad D\; =\; \frac{\Gamma^2(\beta +1)}{\Gamma(2\beta +1)\,\cos(\pi\beta)\, s_0^{2\beta +1}}\cdot
\end{equation}
Putting (\ref{Een}) and (\ref{Twee}) resp. (\ref{Drii}) together with (10) resp. (9) in \cite{HP}, using the standard estimate $\sqrt{2\pi}\Psi (u)\sim u^{-1} e^{-u^2/2}$ for the tail of the unit normal distribution, and proceeding to the necessary simplifications, we obtain our required asymptotics with the two different regimes.
\endproof
\begin{remark}
\label{Precise}
(a) For $\beta= 1/2,$ the transformation 2.1.4(18) in \cite{EMOT} with $m=2$ exhibits a logarithmic term: one has the non-trivial closed formula
\begin{eqnarray*}
I_{1/2}(x) & =& 2\; +\;\frac{y^2}{2(1-y)} (\psi(3/2) \,-\,\psi(3)\, +\,\log(y)\,-\,\log(1-y))
\end{eqnarray*}
where $\psi$ is the digamma function. This implies
$$\mathbb{E}[(Y_t-Y_s)^2]\; \sim\; -\frac{(t-s)^2\log\vert t-s\vert}{2 s_0^2}$$
as $t,s\to s_0,$ and we cannot apply the results of \cite{HP}. We believe that
$$\mathbb{P}[{\bf M}_{2,1/2,\gamma}^{(1/2)} \geq x] \;\sim\;\kappa\, (\log x)^\delta \,x^{\frac{1-2\gamma}{2\gamma+1}}\, \exp\left\{ -c_{2,1/2,\gamma}\, x^{\frac{2\gamma -1}{(\gamma+1/2)}}\right\}$$
for some $\kappa > 0$ and $\delta\neq 0$ to be determined, the logarithmic correction being heuristically due to the 1-self-similarity of
$$t\;\mapsto\;\int_0^t \sqrt{t-s}\; dB_s.$$
(b) The constants $\kappa_{\beta,\gamma}$ and ${\tilde \kappa}_{\beta,\gamma}$ can also be evaluated from Theorem 1 in \cite{HP}, but they have a complicated form in general. For $\beta > 1/2,$ one gets
$${\tilde \kappa}_{\beta,\gamma}\; =\; \sqrt{\frac{2(2\beta\gamma -\beta-\gamma +1)}{\pi\,c_{2,\beta,\gamma}\,(2\beta-1)(2\gamma -1)}}\cdot$$
For $\beta\in(-1/2,1/2),$ one obtains
$$\frac{H_{2\beta +1}}{\sqrt{(2\beta +1)(2\gamma-1)}} (\gamma+\beta)^{-\frac{4\beta}{2\beta +1}} \left(\frac{2\gamma-1}{2\beta+1}\right)^{\frac{2\beta(2\gamma-1)}{(2\beta+1)(\gamma+\beta)}} \left( \frac{\Gamma^2(\beta +1)}{\Gamma(2\beta +1)\,\cos(\pi\beta)}\right)^{\frac{1}{2\beta +1}}$$
where $\{B_H(t),\, t\ge 0\}$ is a standard fractional Brownian motion with Hurst parameter $H=\beta +1/2,$ and
$$H_{2\beta +1}\; =\; \lim_{T\to \infty} \frac{1}{T}\,\mathbb{E}\left[ \exp\left\{ \max_{0\le t\le T} (\sqrt{2} B_H(t) - t^{2H})\right\}\right]$$
is, in the words of \cite{HP}, a ``well-known constant''. It does not seem to the authors that the latter constant is explicit, save for $\beta =0$ where the reflection principle and Laplace's method yield
$$\mathbb{E}\left[ \exp\left\{ \max_{0\le t\le T} (\sqrt{2} B_t - t)\right\}\right]\; =\; 1\, +\,\frac{T^{3/2}}{2\sqrt{\pi}}\int_0^1 \sqrt{s}\left(\int_0^\infty x\, e^{-\frac{sT (x-1)^2}{4}}\, dx\right) ds\;\sim\; T,$$
so that $H_1 = 1$ and
$$\kappa_{0,\gamma}\; =\; \frac{1}{\sqrt{2\gamma -1}}$$
in accordance with Theorem A (b).\\
(c) From Proposition \ref{ApplyHP}, it is plausible to conjecture that for $\a\in(1,2)$ one has
$$\mathbb{P}[{\bf M}^{(\beta)}_{\a,1/\a,\gamma} \geq x]\; \sim\; \kappa_{\a,\beta,\gamma}\,x^{\frac{\a\beta(1-\gamma\a)}{(\a-1+\a\beta)(\alpha-1)(\gamma +\beta)}}\,\exp\left\{- c_{\a,\beta,\gamma}\,x^{\frac{\gamma\a-1}{(\alpha-1)(\gamma +\beta)}}\right\}$$
if $\beta \in (0,1-1/\a)$ and
$$\mathbb{P}[{\bf M}^{(\beta)}_{\a,1/\a,\gamma} \geq x]\; \sim\; {\tilde \kappa_{\a,\beta,\gamma}}\,x^{\frac{1-\gamma\a}{2(\alpha-1)(\gamma +\beta)}}\,\exp\left\{- c_{\a,\beta,\gamma}\,x^{\frac{\gamma\a-1}{(\alpha-1)(\gamma +\beta)}}\right\}$$
if $\beta > 1 -1/\a,$ where $\kappa_{\a,\beta,\gamma}$ and ${\tilde \kappa_{\a,\beta,\gamma}}$ are some positive and finite constants.
\end{remark}
\section{Proof of Theorem C}
Following the notation of \cite{PSInt}, we will set
$$\theta\; =\; \frac{\rho}{\a(1-\rho) +1}$$
once and for all. The upper bound follows easily from
$$\mathbb{P}\left[\sup_{0\leq t\leq 1}\left\{ L_t^{(1)} + \mu t^{1+\gamma}\right\}\leq\, \varepsilon\right]\;\leq \; \mathbb{P}\left[\sup_{0\leq t\leq 1}\left\{ L_t^{(1)}\right\} \leq \varepsilon\right]\; \leq\; \kappa\, \varepsilon^{\frac{\alpha\theta}{\alpha+1}}$$
for some $\kappa\in (0,\infty),$ where the first inequality follows from $\mu\ge 0$ and the second one from Theorem A in \cite{PSInt} and scaling. \\
The lower bound is more involved and we will need the strong Markovian character of the two-dimensional process $\{(L^{(1)}_t\!,L_t), \, t\ge 0\},$ writing $\mathbb{P}_{(x,y)}$ for its law starting from $(x,y)\in \mathbb{R}^2.$ Define the stopping time
$$R_{\varepsilon}\; =\; \inf\left\{t\geq0,\, L_t^{(1)} + \mu \varepsilon^{\frac{\gamma\alpha -1}{\alpha+1}} t^{1+\gamma} =0\right\}$$
and observe first that, by scaling and translation,
$$\mathbb{P}\left[\sup_{0\leq t\leq 1}\left\{ L_t^{(1)} + \mu t^{1+\gamma}\right\}\le\,\varepsilon\right]\; =\; \mathbb{P}_{(-1,0)}\left[ R_\varepsilon\,\geq\,\varepsilon^{-\frac{\alpha}{\alpha+1}}\right].$$
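To check this identity, note that $L^{(1)}$ is self-similar with index $1+1/\alpha,$ so that the time-change $t\mapsto \varepsilon^{\frac{\alpha}{\alpha+1}}\,t$ maps the level $\varepsilon$ to $1,$ the horizon $1$ to $\varepsilon^{-\frac{\alpha}{\alpha+1}},$ and the drift coefficient $\mu$ to
$$\mu\,\varepsilon^{(1+\gamma)\frac{\alpha}{\alpha+1}\,-\,1}\;=\;\mu\,\varepsilon^{\frac{\gamma\alpha -1}{\alpha+1}},$$
in accordance with the definition of $R_\varepsilon.$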
Notice also that $\mathbb{P}_{(x,y)}\left[ R_\varepsilon \leq R_0\right]=1$ for every $x < 0$ and $y\in\mathbb{R},$ because $\mu \ge 0.$ Applying the strong Markov property at $R_\varepsilon,$ we obtain
$$\mathbb{P}_{(-1,0)}\left[ R_0 \geq 2\varepsilon^{-\frac{\alpha}{\alpha+1}}\right] \; = \;
\mathbb{E}_{(-1,0)}\left[\mathbb{P}_{(-\mu\varepsilon^{\frac{\gamma\alpha -1}{\alpha+1}} R_\varepsilon^{1+\gamma}, L_{R_\varepsilon})}\left[ R_0+ x\geq 2\varepsilon^{-\frac{\alpha}{\alpha+1}}\right]_{\{x=R_\varepsilon\}}\right]
$$
whose right-hand side is, by comparison, smaller than
$$\mathbb{P}_{(-1,0)}\left[ R_\varepsilon \geq \varepsilon^{-\frac{\alpha}{\alpha+1}}\right] \; +\;
\mathbb{E}_{(-1,0)}\left[{\bf 1}_{\{ R_\varepsilon \leq \varepsilon^{-\frac{\alpha}{\alpha+1}}\}}\,\mathbb{P}_{(-\mu\varepsilon^{\frac{\gamma\alpha -1}{\alpha+1}} R_\varepsilon^{1+\gamma}, - \mu(1+\gamma)\varepsilon^{\frac{\gamma\alpha -1}{\alpha+1} }R_{\varepsilon}^{\gamma})}\left[ R_0\geq \varepsilon^{-\frac{\alpha}{\alpha+1}}\right]\right].$$
Indeed, the derivative of $t\mapsto L^{(1)}_t+ \mu \varepsilon^{\frac{\gamma\alpha -1}{\alpha+1}} t^{1+\gamma}$ at $R_\varepsilon$ equals $L_{R_\varepsilon} + \mu(1+\gamma)\varepsilon^{\frac{\gamma\alpha -1}{\alpha+1} }R_{\varepsilon}^{\gamma}$ and is a.s. non-negative under $\mathbb{P}_{(-1,0)}$. On the other hand, a further scaling argument shows that
$$\mathbb{P}_{(- x,-y)}[R_0\geq t] \;=\; \mathbb{P}_{(- 1, -yx^{-\frac{1}{\alpha+1}})}\left[ x^{\frac{\alpha}{\alpha+1}} R_0\geq t \right]$$
for every $x,y,t \ge 0.$ If we now assume $\mu\le 1$ this implies, again by comparison,
\begin{multline*}
\mathbb{E}_{(-1,0)}\left[{\bf 1}_{\{ R_\varepsilon \leq \varepsilon^{-\frac{\alpha}{\alpha+1}}\}}\,\mathbb{P}_{(-\mu\varepsilon^{\frac{\gamma\alpha -1}{\alpha+1}} R_\varepsilon^{1+\gamma}, - \mu(1+\gamma)\varepsilon^{\frac{\gamma\alpha -1}{\alpha+1} }R_{\varepsilon}^{\gamma})}\left[ R_0\geq \varepsilon^{-\frac{\alpha}{\alpha+1}}\right]\right]\\
=\;\mathbb{E}_{(-1,0)}\left[{\bf 1}_{\{ R_\varepsilon \leq \varepsilon^{-\frac{\alpha}{\alpha+1}}\}}\,\mathbb{P}_{(-1, - \mu^{\frac{\alpha}{\alpha+1}} (1+\gamma)(\varepsilon^{\frac{\alpha}{\alpha+1}} R_\varepsilon) ^{\frac{\gamma\alpha -1}{\alpha+1}})}\left[ \left(\mu x^{1+\gamma}\,\varepsilon^{\frac{\gamma\alpha -1}{\alpha+1}}\right)^{\frac{\alpha}{\alpha+1}} R_0\geq \varepsilon^{-\frac{\alpha}{\alpha+1}}\right]_{\{x=R_\varepsilon\}}\right]\\
\leq \; \mathbb{E}_{(-1,0)}\left[\mathbb{P}_{(-1, -1-\gamma)}\left[ x^{\frac{\alpha(\gamma+1)}{\alpha+1}} R_0\,\geq \mu^{-\frac{\alpha}{\alpha+1}} \varepsilon^{-\frac{\a^2(\gamma+1)}{(\alpha+1)^2}}\right]_{\{x=R_\varepsilon\}}\right]\\
\le\; \mathbb{P}_{(-1, -1-\gamma)}\left[ {\widehat R_0}^{\frac{\alpha(\gamma+1)}{\alpha+1}} R_0\,\geq \mu^{-\frac{\alpha}{\alpha+1}} \varepsilon^{-\frac{\alpha^2(\gamma +1)}{\alpha+1}}\right]
\end{multline*}
where ${\widehat R_0}$ is an independent copy of $R_0.$ Putting everything together, we obtain
$$\mathbb{P}_{(-1,0)}\left[ R_\varepsilon \geq \varepsilon^{-\frac{\alpha}{\alpha+1}}\right]\; \ge\;\mathbb{P}_{(-1,0)}\left[ R_0 \geq 2\varepsilon^{-\frac{\alpha}{\alpha+1}}\right] \; -\; \mathbb{P}_{(-1, -1-\gamma)}\left[ {\widehat R_0}^{\frac{\alpha(\gamma+1)}{\alpha+1}} R_0\,\geq \mu^{-\frac{\alpha}{\alpha+1}} \varepsilon^{-\frac{\alpha^2(\gamma +1)}{\alpha+1}}\right].$$
Now by Theorem A in \cite{PSInt} we have
$$\mathbb{P}_{(-1, x)}\left[ R_0 > t\right] \;\asymp\; t^{-\theta}$$
for every $x\in\mathbb{R}$ and since $\alpha(\gamma+1) > \alpha+1,$ we can also infer from Lemma 2 in \cite{PSWind} that
$$\mathbb{P}_{(-1, -1-\gamma)}\left[ {\widehat R_0}^{\frac{\alpha(\gamma+1)}{\alpha+1}} R_0\,\geq \, t \right]\;\asymp\; t^{\frac{\theta(\alpha+1)}{\alpha(\gamma+1)}}.$$
This implies that there exist two finite constants $\kappa_2 \ge \kappa_1 > 0$ independent of $\mu,\varepsilon$ such that
$$\mathbb{P}_{(-1,0)}\left[ R_\varepsilon \geq \varepsilon^{-\frac{\alpha}{\alpha+1}}\right]\; \ge\;\kappa_1\,\varepsilon^{\frac{\alpha \theta}{\alpha+1}}\, -\, \kappa_2\, \mu^{\frac{\theta}{\gamma+1}} \varepsilon^{\frac{\alpha \theta}{\alpha+1}},$$
which completes the proof of the lower bound for $\mu \leq\mu_0$ with $\mu_0 = (\kappa_1/2\kappa_2)^{(\gamma+1)/\theta}> 0.$
Assuming finally $\mu > \mu_0$ and setting ${\bar \mu}=\mu^{\frac{\alpha}{\alpha\gamma-1}}$ and ${\bar \mu_0}=\mu_0^{\frac{\alpha}{\alpha\gamma-1}}$ for simplicity, we have
\begin{eqnarray*}
\mathbb{P}\left[\sup_{0\leq t\leq 1}\left\{ L_t^{(1)} + \mu t^{1+\gamma}\right\}\le\,\varepsilon\right] &= &\mathbb{P}\left[\sup_{0\leq t\leq {\bar \mu}}\left\{ L_t^{(1)} + t^{1+\gamma}\right\}\le\,\varepsilon {\bar \mu}^{\frac{\alpha+1}{\alpha}} \right] \\
&\geq &\mathbb{P}\left[\sup_{0\leq t\leq {\bar \mu}}\left\{ L_t^{(1)} + t^{1+\gamma}\right\}\le\,\varepsilon {\bar \mu_0}^{\frac{\alpha+1}{\alpha}} \right]\\
& \geq & \mathbb{P}\left[ \sup_{{\bar \mu_0}\leq t\leq {\bar \mu}}\left\{ L_t^{(1)} + t^{1+\gamma}\right\}\le\,0, \;\sup_{0\leq t\leq {\bar \mu_0}}\left\{ L_t^{(1)} + t^{1+\gamma}\right\}\le\,\varepsilon {\bar \mu_0}^{\frac{\alpha+1}{\alpha}}\right]\\
& \geq &\mathbb{P} \left[ \sup_{{\bar \mu_0}\leq t\leq {\bar \mu}}\left\{ L_t^{(1)} + t^{1+\gamma}\right\}\le\,0 \right] \mathbb{P}\left[\sup_{0\leq t\leq {\bar \mu_0}}\left\{ L_t^{(1)} + t^{1+\gamma}\right\}\le\,\varepsilon {\bar \mu_0}^{\frac{\alpha+1}{\alpha}}\right]\\
& = & \mathbb{P} \left[ \sup_{{\bar \mu_0}\leq t\leq {\bar \mu}}\left\{ L_t^{(1)} + t^{1+\gamma}\right\}\le\,0 \right] \mathbb{P}\left[\sup_{0\leq t\leq 1}\left\{ L_t^{(1)} + \mu_0 t^{1+\gamma}\right\}\le\,\varepsilon\right]\\
& \ge & \kappa\, \varepsilon^{\frac{\alpha \theta}{\alpha+1}}
\end{eqnarray*}
for some $\kappa > 0,$ where the first and fifth equalities are obtained by scaling, the fourth inequality follows from Lemma \ref{lem:ST}, and the last inequality is a consequence of the strict positivity of ${\bar \mu_0}.$ This completes the proof.
\qed
\begin{remark} Using the same argument and Lemma VIII.4 in \cite{Ber}, one can show the following lower tail probability estimate for the L\'evy stable process with a positive power drift. If $\a\gamma > 1$ and $\rho\in (0,1),$ then for every $\mu \ge 0$ one has
\begin{equation}
\label{SmallV}
\mathbb{P}\left[\sup_{0\leq t\leq 1}\left\{ L_t + \mu t^{\gamma}\right\}\leq\, \varepsilon\right]\;\asymp\; \varepsilon^{\alpha\rho}.
\end{equation}
We leave the details, which are simpler than the above, to the interested reader. This estimate for small values echoes the persistence result for large time obtained in Theorem 1 of \cite{AK}, which reads
\begin{equation}
\label{Persi}
\mathbb{P}\left[\sup_{0\leq t\leq T}\left\{ L_t + \mu t^{\gamma}\right\}\leq\, 1\right]\;=\; T^{-\rho +o(1)}
\end{equation}
for every $\mu \ge 0,$ with $\a\gamma < 1, \rho\in (0,1),$ and under the additional assumption $\a\in (0,1).$ Observe that in the absence of self-similarity, the estimates (\ref{SmallV}) and (\ref{Persi}) are different and cannot be deduced from one another, save for $\mu =0.$
\end{remark}
\bigskip
\noindent
{\bf Acknowledgement.} This work was initiated during a stay at Academia Sinica, Taipei, of the second author who would like to thank Chii-Ruey Hwang and Ju-Yi Yen for their hospitality, as well as Chien-Hao Huang and Zakhar Kabluchko for some useful discussions.
\section{Introduction} \label{sec:introduction}
Marginal structural models (MSMs) offer a successful way to estimate the causal effect of a time-varying treatment on an outcome of interest from longitudinal data in observational studies \citep{robins2000marginal,robins2000marginalb}. For example, they have been used to estimate the optimal timing of HIV treatment initiation \citep{hiv2011initiate}, to evaluate the effect of hormone therapy on cardiovascular outcomes \citep{hernan2008observational}, and to evaluate the impact of negative advertising on election outcomes \citep{blackwell2013framework}.
The increasing popularity of MSMs among applied researchers derives from their ability to control for time-dependent confounders, which are confounders that are affected by past treatments and that in turn affect future treatments and the outcome. In particular, as shown by \cite{robins2000marginalb} and \cite{blackwell2013framework}, standard methods, such as regression or matching, fail to control for time-dependent confounding, introducing post-treatment bias. In contrast, MSMs consistently estimate the causal effect of a time-varying treatment via inverse probability of treatment weighting (IPTW), which controls for time-dependent confounding by weighting each subject under study by the inverse of their probability of being treated given covariates, {\textit{i.e.}, the propensity score \citep{rosenbaum1983central},} mimicking a sequential randomized trial. In other words, IPTW creates a hypothetical pseudo-population where time-dependent confounders are balanced over time.
\textcolor{black}{Despite their wide range of applications, the use of these methods in observational studies may be jeopardized by their considerable dependence on}
positivity. This assumption requires that, at each time period, the probability of being assigned to the treatment, conditional on the history of treatment and confounders, is neither 0 nor 1 \citep{robins2000marginal}. \textcolor{black}{Even if positivity holds theoretically, it can be \textit{practically} violated when propensities are close to 0 or 1}.
Practical positivity violations lead to extreme and unstable weights, which in turn yield very low precision and misleading inferences \citep{kang2007demystifying,robins1995analysis,scharfstein1999adjusting}. In addition, MSMs using IPTW are highly sensitive to \textit{misspecification} of the treatment assignment model, which can
lead to biased estimates
\citep{kang2007demystifying,lefebvre2008impact,cole2008constructing}.
Various statistical methods have been proposed in an attempt to overcome these challenges. To deal with extreme weights, several authors \citep{cole2008constructing,xiao2013comparison} have suggested truncation, whereby outlying weights are replaced with less extreme ones. \cite{santacatterina2019optimal} proposed to use shrinkage instead of truncation as a more direct way to control the bias-variance trade-off.
\cite{robins2000marginalb} recommended the use of stabilized-IPTW (sIPTW) where inverse probability weights are normalized by the marginal probability of treatment. To control for misspecification of the treatment assignment model, \cite{imai2015robust} proposed to use the covariate balance propensity score (CBPS), which instead of plugging in a logistic regression estimate of propensity into IPTW finds the logistic model that balances covariates via the generalized method of moments. The method tries to balance the first moment of each covariate even if a logistic model is misspecified \citep{imai2014covariate}.
\textcolor{black}{In this paper, we present and apply Kernel Optimal Weighting (KOW), which provides weights for fitting an MSM that optimally balance time-dependent confounders while controlling for precision. Specifically,
by solving a quadratic optimization problem over weights, the proposed method directly minimizes \textit{imbalance}, defined as the sum of discrepancies between the weighted observed data and the counterfactual of interest over all treatment regimes, while penalizing extreme weights.}
This extends the kernel optimal matching method of \citet{kallus2016generalized} and \citet{kallus2018more} to the longitudinal setting and to the handling of time-dependent confounders, to which, like regression and matching, it cannot be applied directly without introducing post-treatment bias.
{The proposed method has several attractive characteristics. First, by optimally balancing time-dependent confounders while penalizing extreme weights, it leads to better accuracy, precision, and total error.
In particular, in the simulation study presented in Section \ref{simu}, we show that the mean squared error (MSE) of the estimated effect of a time-varying treatment obtained by using KOW is lower than that obtained by using IPTW, sIPTW, and CBPS in all considered simulated scenarios. Second, in contrast with \cite{imai2015robust}, where the number of covariate balancing conditions grows exponentially in the number of time periods, KOW only needs to minimize a number of {discrepancies} that grows linearly in the number of time periods.
This feature leads to a lower computational time of KOW compared with CBPS when the total number of time periods increases, as shown in our simulation study in Section \ref{simu_comp_time} and in our study on the effect of negative advertising on election outcomes in Section \ref{caseblack}. Third, by optimally balancing covariates, KOW mitigates the effects of possible misspecification of the treatment model. In Section \ref{simu}, we show that KOW is more robust to model misspecification compared with the other methods. Fourth, KOW can balance non-additive covariate relationships by using kernels, which generalize the structure of conditional expectation functions, and does not restrict weights to follow a fixed logistic (or other parametric) form.
In Section \ref{simu}, we show how KOW compares favorably with the aforementioned methods in all nonlinear scenarios, and in Section \ref{caseblack} we use KOW to balance non-additive covariate relationships estimating the effect of negative advertising on election outcomes.
Fifth, KOW can be easily generalized to other settings, such as informative censoring. We do just that in Section \ref{censor} and, in Section \ref{casehiv}, we use this extension to study the effect of human immunodeficiency virus (HIV) treatment on time to death among people living with HIV. Finally, KOW can be solved by using off-the-shelf solvers for quadratic optimization. }
In the next section, we briefly introduce the literature of MSMs (Section \ref{sec:rew_msm}). In Section \ref{kow} we develop and define KOW. We then discuss some practical guidelines on the use of KOW (Section \ref{guidelines}). In Section \ref{simu} we report the results of a simulation study aimed at comparing KOW with IPTW, sIPTW, and CBPS. In Section \ref{censor}, we extend KOW to control for informative censoring. We then present two empirical applications of KOW in medicine and political science (Section \ref{empirics}). We offer some concluding remarks in Section \ref{conclusions}.
\section{Marginal structural models for longitudinal data} \label{sec:rew_msm}
In this section, we briefly review MSMs \citep{robins2000marginal,robins2000marginalb}. Suppose we have a simple random sample with replacement of size $n$ from a population.
For each unit $i=1, \ldots, n$ and time period $t=1, \ldots, T$, we denote the binary time-varying treatment variable by $A_{it}$, with $A_{it}=0$ meaning not being treated at time $t$ and $A_{it}=1$ being treated at time $t$, and the time-dependent confounders by $X_{it}$. We denote by $\overline{A}_{it}=\lbrace A_{i1}, \ldots, A_{it} \rbrace$ the treatment history up to time $t$ and by $\overline{X}_{it}=\lbrace X_{i1}, \ldots, X_{it} \rbrace$ the history of confounders up to time $t$. $X_{i1}$ represents the time-invariant confounders, \textit{i.e.}, confounders that do not depend on past treatments.
{We denote by $\overline{a}_t$ and $ \overline{x}_t$ possible realizations of the treatment history $\overline{A}_{it}$ and the confounder history $\overline{X}_{it}$, respectively.
We use $\mathbbm{1}[\cdot]$ to denote the indicator so that $\mathbbm1\left[\overline A_{it}=\overline a_t\right]$ is the variable that is 1 if $\overline A_{it}=\overline a_t$ and 0 otherwise.
To streamline notation, we will refer to $\overline{A}_{iT}$ as $\overline{A}_{i}$, $\overline{a}_{T}$ as $\overline{a}$, $\overline{X}_{iT}$ as $\overline{X}_{i}$, and to $\overline{x}_{T}$ as $\overline{x}$.
}
For each unit $i=1, \ldots, n$, we denote by $Y_i$ the outcome variable observed at the end of the study. Using the potential outcome framework \citep{imbens2015causal}, we denote by $Y_i(\overline{a})$ the potential outcome we would see if we were to apply the treatment regime $\overline{a}\in\mathcal A$ to the $i^\text{th}$ unit,
where $\mathcal A=\{0,1\}^T$ is the space of treatment regimes.
Throughout, we drop the subscripts $i$ on these variables to refer to a generic unit.
We impose the assumptions of consistency, non-interference, positivity and sequential ignorability \citep{imbens2015causal,hernan2010causal}. Consistency and non-interference \citep[also known as SUTVA;][]{rubin1980randomization} can be encapsulated in that the potential outcomes are well-defined and the observed outcome corresponds to the potential outcome of the treatment regime applied to that unit, \textit{i.e.,}
$Y=Y(\overline{A})$.
As previously introduced, positivity states that, for each time $t=1, \ldots, T$, the probability of being treated at time $t$, conditioned on the treatment history up to time $t-1$ and the confounder history up to time $t$, is neither 0 nor 1, \textit{i.e.},
\begin{equation}
\label{positivity}
0 < \mathbb P(A_{t} = 1 \mid \overline{A}_{t-1}, \overline{X}_{t} ) < 1 \quad \forall \ t \in \lbrace 1, \ldots, T \rbrace.
\end{equation}
Sequential ignorability states that the potential outcome $Y(\overline{a})$ is independent of treatment assignment at time $t$, given the treatment history up to time $t-1$ and the confounder history up to time $t$. Formally, sequential ignorability is defined as
\begin{equation}
\label{igno}
Y(\overline{a}) \independent A_{t} \mid \overline{A}_{t-1}, \overline{X}_{t} \quad \forall \ t \in \lbrace 1, \ldots, T \rbrace.
\end{equation}
An MSM is a model for the marginal causal effect of a time-varying treatment regime on the mean of $Y$, that is,
\begin{equation}
\label{MSM}
\mathbbm{E} \left[ Y(\overline{a}) \right] = g(\overline{a},\bm{\beta}),
\end{equation}
where $g(\overline{a},\bm{\beta})$ is a known function parametrized by $\bm\beta$. For example, a commonly used MSM is based on additive effects with a common coefficient: $g(\overline{a},\bm{\beta})=\beta_1 + \beta_2 \sum_{t=1}^Ta_{t}$, where $\beta_2$ is the causal parameter of interest.
{Usually, $\bm\beta$ is estimated by a weighted regression of the outcome on the treatment regime alone using weighted least squares (WLS), \textit{i.e.,} $\min_{\bm\beta}\sum_{i=1}^nW_i(Y_i-
g(\overline A_{i},\bm\beta)
)^2$, and Wald confidence intervals are constructed using robust (sandwich) standard errors \citep{freedman2006so,robins2000marginal,hernan2001marginal}.
In order to consistently estimate $\bm\beta$, the weights $W_{1:n}=(W_1, \dots, W_n)$ must account for the non-randomness of the treatment assignment mechanism, \textit{i.e.,}
the confounding.
\citet{robins2000marginal} showed that the set of inverse probability weights and stabilized inverse probability weights achieve this objective. These weights are defined as follows,}
\begin{equation}
\label{sipweights}
\begin{aligned}
W_i^{\text{IPTW}}=w(\overline{A}_{i}, \overline{X}_{i}),\quad w(\overline{a}, \overline{x}) &= \prod_{t=1}^T \frac{h_t(\overline a_t)}{\mathbb P(A_{t}=a_t \mid \overline{A}_{t-1}=\overline a_{t-1}, \overline{X}_{t}=\overline x_t )},
\end{aligned}
\end{equation}
\noindent
where $h_t(\overline{a}_{t})$ is a known function of treatment history. The set of inverse probability weights is obtained by setting $h_t(\overline{a}_{t})=1$, while the set of stabilized inverse probability weights is obtained by setting $h_t(\overline{a}_{t})=\mathbb P(A_{t} = a_{t} \mid \overline{A}_{t-1} = \overline{a}_{t-1})$.
To estimate weights of the form of eq.~\eqref{sipweights},
one first estimates the conditional probability models, using
either parametric methods such as logistic regression or other machine learning methods \citep{karim2017application,gruber2015ensemble,karim2017estimating}, and then plugs these estimates directly into eq.~\eqref{sipweights} to obtain weights, which are in turn plugged into the WLS. Stabilized weights seek to attenuate the variability of inverse probability weights by normalizing them by the marginal probability of treatment. Since the additional factor is a function of the treatment regime alone, it does not affect the consistency of the WLS if the MSM is well specified. Both sets of weights, however, rely on plugging an estimated probability into the denominator, meaning that when the true probability is even modestly close to 0, any small error in estimating it can translate into very large errors in the estimated weights and into weights that are extremely variable.
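As a concrete illustration, the plug-in construction above can be sketched in a few lines of numpy. This is a hypothetical toy example, not the authors' implementation: the propensities are taken as known rather than estimated, each period's treatment depends only on that period's confounder, and the simulated outcome is unconfounded, so the point is only the mechanics of eq.~\eqref{sipweights} and the WLS step.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 2000, 2

# Toy longitudinal data; purely illustrative.
X = rng.normal(size=(n, T))                     # confounder at each period
prop = 1.0 / (1.0 + np.exp(-X))                 # P(A_t = 1 | history), taken as known
A = rng.binomial(1, prop)                       # treatments A_1, ..., A_T
Y = 1.0 + 0.5 * A.sum(axis=1) + rng.normal(size=n)

# Unstabilized IPTW: h_t = 1, so W_i = prod_t 1 / P(A_t = a_t | history).
p_obs = np.where(A == 1, prop, 1.0 - prop)
w_iptw = 1.0 / p_obs.prod(axis=1)

# Stabilized sIPTW: numerator h_t = marginal P(A_t = a_t), estimated empirically.
p_marg = A.mean(axis=0)
num = np.where(A == 1, p_marg, 1.0 - p_marg)
w_siptw = num.prod(axis=1) / p_obs.prod(axis=1)

# WLS fit of the MSM g(abar, beta) = beta_1 + beta_2 * sum_t a_t.
D = np.column_stack([np.ones(n), A.sum(axis=1)])
WD = w_siptw[:, None] * D                       # diag(W) D
beta = np.linalg.solve(D.T @ WD, WD.T @ Y)
```

With stabilized weights the sample mean of the $W_i$ should be close to 1, and the WLS slope should be close to the simulated effect of $0.5$.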
{Furthermore, both sets of weights rely on the correct specification of the conditional probability models used to estimate the weights in eq.~\eqref{sipweights}.
}
To overcome this issue,
{\citet{imai2015robust} proposed to estimate weights of the form of eq.~\eqref{sipweights} that improve balance of confounders by generalizing the covariate balancing propensity score (CBPS) methodology.} Instead of plugging in probability estimates based on logistic regression, CBPS uses the generalized method of moments to find the logistic regression model that if plugged in would lead to weights, $W_i^{\text{CBPS}}$, that approximately solve a subset of the moment conditions that the true inverse probability weights, eq.~\eqref{sipweights}, satisfy.
{
Unlike IPTW, sIPTW, and CBPS, in the next Section we characterize imbalance as the discrepancies in observed average outcomes due to confounding, consider their worst case values, and use quadratic optimization to obtain weights that directly optimize the balance of time-invariant and time-dependent confounders over all possible weights while controlling precision.}
\section{{Kernel Optimal Weighting}}
\label{kow}
In this Section we present a convex-optimization-based approach that obtains weights that minimize the imbalance due to time-dependent confounding (\textit{i.e.,} maximize balance thereof) while controlling precision. Toward that end, in Section \ref{imbalance}, we provide a definition of imbalance. Specifically, we define imbalance as the sum of discrepancies between the weighted \textit{observed} data and the \textit{unobserved} counterfactual of interest over all treatment regimes. Since this imbalance depends on unknown functions, in Section \ref{nswci} we consider the worst case imbalance, which guards against all possible realizations of the unknown functions. We also show that the worst case imbalance has the attractive characteristic that the number of discrepancies considered grows \textit{linearly} in the number of time periods and not \textit{exponentially} like the number of treatment regimes. We finally show how to minimize this quantity while controlling precision using kernels, reproducing kernel Hilbert space (RKHS) and off-the-shelf solvers for quadratic optimization (Sections \ref{min}-\ref{kernelqp}).
\subsection{Defining imbalance}
\label{imbalance}
Consider any population weights $W=w(\overline{A}, \overline{X})$, where $w(\cdot)$ is a function that depends on the treatment and confounder histories.
In this Section, we will
show that, under
consistency and assumptions \eqref{positivity}--\eqref{igno}, we can decompose the difference between the weighted average outcome among the $\overline a$-treated units, $\mathbbm{E} [ W \mathbbm{1}[\overline A=\overline a] Y ]$, and the average potential outcome of $\overline a$, $\mathbbm{E} [ Y(\overline a) ]$, into a sum over time points $t$ of discrepancies
involving the values of treatment and confounder histories up to time $t$.
To build intuition we start by explaining this decomposition in the case of two time periods $T=2$.
Assuming consistency and assumptions \eqref{positivity}--\eqref{igno}, for each $\overline a=(a_1,a_2)\in\mathcal A$, we can decompose the weighted average outcome among the $\overline a$-treated units as follows:
\begin{align}
\label{deco}
\mathbb E[W\mathbbm 1[\overline A=\overline a]Y]&=
\mathbb E[W\mathbbm 1[A_1=a_1]\mathbbm 1[A_2=a_2]\mathbb E[Y(\overline a)\mid A_1,A_2,X_1,X_2]]
\\\notag&=
\mathbb E[W\mathbbm 1[A_1=a_1]\mathbbm 1[A_2=a_2]\mathbb E[Y(\overline a)\mid A_1,X_1,X_2]]
\\\notag&=
\mathbb E[W\mathbbm 1[A_1=a_1]\mathbb E[Y(\overline a)\mid A_1,X_1,X_2]]
+\delta^{(2)}_{a_2}(W,g_{\overline a}^{(2)})
\\\notag&=
\mathbb E[W\mathbbm 1[A_1=a_1]\mathbb E[Y(\overline a)\mid X_1]]
+\delta^{(2)}_{a_2}(W,g_{\overline a}^{(2)})
\\\notag&=
\mathbb E[Y(\overline a)]
+\delta^{(1)}_{a_1}(W,g_{\overline a}^{(1)})+\delta^{(2)}_{a_2}(W,g_{\overline a}^{(2)})
\\\notag&= \mathbb E[Y(\overline a)] + \sum_{t=1}^{2}\delta^{(t)}_{a_t}(W,g_{\overline a}^{(t)}),
\end{align}
where the first equality follows from iterated expectation, the second from sequential ignorability, the fourth from iterated expectation and sequential ignorability, and the third and fifth from the following definitions, which exactly capture the difference between the two sides of the third and fifth equalities,
\begin{align}
\delta^{(2)}_{a_2}(W,h^{(2)}) &= \mathbbm{E} \left[ W \mathbbm{1}[A_2=a_2] h^{(2)}(A_1,X_1,X_2) \right] - \mathbbm{E} \left[W h^{(2)}(A_1,X_1,X_2) \right] \\\notag
g_{\overline a}^{(2)}(A_1,X_1,X_2) &= \mathbbm{1}[A_1=a_1]\mathbbm{E} \left[ Y(\overline a) \mid A_1,X_1,X_2 \right]\\\notag
\delta^{(1)}_{a_1}(W,h^{(1)}) &= \mathbbm{E} \left[ W \mathbbm{1}[A_1=a_1] h^{(1)}(X_1) \right] - \mathbbm{E} \left[ h^{(1)}(X_1) \right] \\\notag
g_{\overline a}^{(1)}(X_1) &= \mathbbm{E} \left[ Y(\overline a) \mid X_1 \right].
\end{align}
Note our use of $h^{(t)}$ as a
generic dummy function and
$g_{\overline a}^{(t)}$ as a \textit{specific} function that depends on the particular (unknown) distribution of $\overline X_t,\overline A_{t-1},Y(\overline a)$.
This gives a definition of discrepancy, $\delta^{(t)}_{a_t}(W,h^{(t)})$, where the subscript $a_t\in\{0,1\}$ refers to the treatment assigned at time $t$, $W=w(\overline{A}, \overline{X})$ is a population weight, and $h^{(t)}$ is a given function of interest of the treatment and confounder history up to $t$, $\overline A_{t-1},\overline X_t$. The function $g_{\overline a}^{(t)}$ is one such function.
In particular, for every $a_1\in\{0,1\}$,
the quantity $\delta^{(1)}_{a_1}(W,h^{(1)})$ is the discrepancy between the $h^{(1)}$-moments of the baseline confounder distribution in the weighted $a_1$-treated population and of the distribution in the whole population. Similarly, for every $a_2\in\{0,1\}$, $\delta^{(2)}_{a_2}(W,h^{(2)})$ is a discrepancy in the $h^{(2)}$-moment of treatment and confounder histories at the start of time step $2$.
What we have shown above is how these discrepancies directly relate to the difference between weighted averages of observed outcomes and true averages of unknown counterfactuals of interest.
Specifically, we have shown that when we measure these discrepancies with respect to the specific function $g_{\overline a}^{(t)}$, then their sum gives that difference.
We can extend this decomposition to general horizons $T\geq1$.
Let us define the same discrepancies for any time $t\geq3$ as
\begin{equation*}
\begin{aligned}
\delta^{(t)}_{a_t}(W,h^{(t)}) &= \mathbbm{E} \left[ W \mathbbm{1}[A_t=a_t] h^{(t)}(\overline A_{t-1},\overline X_t) \right] - \mathbbm{E} \left[
W
h^{(t)}(\overline A_{t-1},\overline X_t) \right],\\
g_{\overline a}^{(t)}(\overline A_{t-1},\overline X_t) &= \mathbbm{1}[\overline A_{t-1}=\overline a_{t-1}]\mathbbm{E} \left[ Y(\overline a) \mid \overline A_{t-1},\overline X_t \right].
\end{aligned}
\end{equation*}
The following result gives the general decomposition of the difference between weighted average of observed outcomes and true average of counterfactuals as the sum of $T$ discrepancies, one for every time step:
\begin{theorem}
\label{thm1}
Under assumptions \eqref{positivity}--\eqref{igno}, for each $\overline a\in\mathcal A=\{0,1\}^T$,
\begin{equation*}
\mathbbm{E} \left[ W \mathbbm{1}[\overline A=\overline a] Y \right]
- \mathbbm{E} \left[ Y(\overline a) \right]
=
\sum_{t=1}^T\delta^{(t)}_{a_t}(W,g_{\overline a}^{(t)}).
\end{equation*}
\end{theorem}
Based on the results of Theorem \ref{thm1}, it is clear that if we want the difference between average counterfactual outcomes and average weighted factual outcomes to be small for all treatment regimes $\overline a$ then we should seek weights $W$ that make
$$\overline\delta_{\overline a}(W,\overline g_{\overline a})=\sum_{t=1}^T\delta^{(t)}_{a_t}(W,g_{\overline a}^{(t)})
$$
small for all $\overline a$,
where we write $\overline h=(h^{(1)},\dots,h^{(T)})$ for any set of $T$ functions.
The empirical counterparts to
$\delta^{(t)}_{a_t}(W,h^{(t)})$
are the sample moment discrepancies for a given set of sample weights $W_{1:n}$:
\begin{equation}
\label{deltase}
\begin{aligned}
\hat\delta^{(t)}_{a_t}(W_{1:n},h^{(t)}) &= \frac1n\sum_{i=1}^n (W_i\mathbbm{1}[A_{it}=a_t]-
W_i
) h^{(t)}(\overline A_{i,t-1},\overline X_{it}),\quad \forall t\geq2,
\\
\hat\delta^{(1)}_{a_1}(W_{1:n},h^{(1)}) &= \frac1n\sum_{i=1}^n W_i \mathbbm{1}[A_{i1}=a_1] h^{(1)}(X_{i1}) - \frac1n\sum_{i=1}^n h^{(1)}(X_{i1}), \\
\hat{\overline\delta}_{\overline a}(W_{1:n},\overline h)&=\sum_{t=1}^T\hat\delta^{(t)}_{a_t}(W_{1:n},h^{(t)}).
\end{aligned}
\end{equation}
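For concreteness, the sample discrepancies in eq.~\eqref{deltase} for $T=2$ can be computed directly; the following numpy sketch uses simulated data and uniform weights purely for illustration (the variable names are ours, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 2))                     # X_{i1}, X_{i2}
A = rng.binomial(1, 0.5, size=(n, 2))           # A_{i1}, A_{i2}

def delta_hat_1(W, a1, h1):
    """First-period sample discrepancy for a test function h1(X_1)."""
    return np.mean(W * (A[:, 0] == a1) * h1(X[:, 0])) - np.mean(h1(X[:, 0]))

def delta_hat_2(W, a2, h2):
    """Second-period sample discrepancy for h2(A_1, X_1, X_2)."""
    vals = h2(A[:, 0], X[:, 0], X[:, 1])
    return np.mean((W * (A[:, 1] == a2) - W) * vals)

W = np.ones(n)                                  # uniform weights, for illustration
h2 = lambda a1, x1, x2: x2                      # an arbitrary test function
d_base = delta_hat_1(W, 1, lambda x: x)
d0 = delta_hat_2(W, 0, h2)
d1 = delta_hat_2(W, 1, h2)
```

Note the identity $\hat\delta^{(t)}_{0}+\hat\delta^{(t)}_{1}=-\frac1n\sum_iW_ih^{(t)}(\cdot)$ for $t\geq2$, since $\mathbbm1[A_t=0]+\mathbbm1[A_t=1]=1$; with true inverse probability weights, each discrepancy would concentrate around zero.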
Thus, we will seek samples weights $W_{1:n}$ that make $\hat{\overline\delta}_{\overline a}(W_{1:n},\overline g_{\overline a})$ small for all treatment regimes $\overline a$.
Toward that end,
for \textit{any} set of given functions
$(\overline h_{\overline a})_{\overline a\in\mathcal A}$,
we define \textit{imbalance} of a set of weights $W_{1:n}$ as the average squared discrepancy over treatment regimes:
\begin{equation}
\label{biasb2}
\begin{aligned}
\text{IMB}(W_{1:n};(\overline h_{\overline a})_{\overline a\in \mathcal A}) &= \frac1{\left|\mathcal A\right|}\sum_{\overline a\in\mathcal A}\hat{\overline\delta}^2_{\overline a}(W_{1:n},\overline h_{\overline a}).
\end{aligned}
\end{equation}
The particular imbalance of interest is given when we consider $\overline h_{\overline a}=\overline g_{\overline a}$.
One way to control this imbalance, $\text{IMB}(W_{1:n};(\overline g_{\overline a})_{\overline a\in \mathcal A})$, and consequently control the empirical discrepancies of interest, $\hat{\overline\delta}_{\overline a}(W_{1:n},\overline g_{\overline a})$, is by using inverse probability weights. If known, these weights make this quantity a sample average of mean-zero variables and thus close to zero for large $n$. However, the difficulties are that (a) even mild practical violations of positivity can lead to large variance of each of these terms and (b) we need to correctly estimate the sequential propensities.
Differently, we will seek to find weights that directly \emph{minimize} imbalance. There are two main challenges in this task. The first challenge is that the imbalance of interest depends on some unknown functions $\overline g_{\overline a}$. The second is that the number of treatment regimes grows exponentially in the number of time periods. In the next Section we show how the proposed methodology overcomes these two challenges.
\subsection{Worst case imbalance}
\label{nswci}
To overcome the fact that we do not actually know the functions $\overline g_{\overline a}$ on which imbalance $\text{IMB}(W_{1:n};(\overline g_{\overline a})_{\overline a\in \mathcal A})$ depends, we will guard against all possible realizations of the unknown functions. Specifically, since $\hat{\overline\delta}_{\overline a}(W_{1:n},\overline g_{\overline a})$ scales linearly with $\overline g_{\overline a}$, we will consider its magnitude relative to that of $\overline g_{\overline a}$. We therefore need to define a magnitude.
In particular, let us define
$$
\|\overline h\|=
\sqrt{\|h^{(1)}\|^2_{(1)}+\cdots+\|h^{(T)}\|^2_{(T)}},
$$
where
$\|\cdot\|_{(t)}$ are some given extended seminorms on functions from the space of time-dependent confounders and treatment histories up to time $t$ to the space of outcomes. Compared to a norm, an extended seminorm may also assign the values $0$ and $\infty$ to nonzero elements but must still satisfy the triangle inequality and absolute homogeneity. We will discuss specific choices of these seminorms $\|\cdot\|_{(t)}$ in Section \ref{kernelqp}.
Given these, we can define the \textit{worst case discrepancies}, $$\Delta^{(t)}_{a_t}(W_{1:n})=\sup_{h^{(t)}} \frac{\hat\delta^{(t)}_{a_t}(W_{1:n},h^{(t)})}{\|h^{(t)}\|_{(t)}} =
\sup_{\|h^{(t)}\|_{(t)}\leq1}\hat\delta^{(t)}_{a_t}(W_{1:n},h^{(t)}).$$
Note that $\Delta^{(t)}_{a_t}(W_{1:n})$ depends \textit{only} on the treatment at time $t$, $a_t$, and \textit{not} the whole treatment regime, $\overline a$.
Then the \textit{worst case imbalance} is given by
\begin{equation}
\label{nswcimb}
\begin{aligned}
\mathcal{B}^2(W_{1:n})
&=
\sup_{
\|\overline h_{\overline a}\|\leq1\;\forall\overline a\in \mathcal A
}
\text{IMB}(W_{1:n};(\overline h_{\overline a})_{\overline a\in \mathcal A})
\\&=
\sup_{\overline h_{\overline a},\;\overline a\in \mathcal A}
\frac1{\left|\mathcal A\right|}\sum_{\overline a\in\mathcal A}\frac{\hat{\overline\delta}^2_{\overline a}(W_{1:n},\overline h_{\overline a})}{\|\overline h_{\overline a}\|^2}
\\&
=\frac12\sum_{t=1}^T(\Delta^{(t)}_0(W_{1:n})^2+\Delta^{(t)}_1(W_{1:n})^2).
\end{aligned}
\end{equation}
Importantly, this shows that the discrepancies of interest are essentially the same regardless of the particular treatment regime trajectory $\overline a$.
That is, to control the discrepancies for \textit{all} trajectories $\overline a$ for \textit{all} possible realizations of $\overline g_{\overline a}$, at any time point $t$, we are only concerned with the discrepancies of histories $\overline A_{t-1},\overline X_t$ for those units treated at time $t$, $A_t=1$, and for those not, $A_t=0$.
So, while
the number of treatment regimes grows \textit{exponentially} in the number of periods, we need only to keep track of and minimize a number of discrepancies growing \textit{linearly} in the number of periods $T$. By eliminating each of these linearly-many imbalances, any time-dependent confounding would necessarily be removed, as shown by
Theorem~\ref{thm1}.
In Section \ref{simu_comp_time}, we show how this feature also translates to favorable computational time when dealing with many time periods.
\subsection{Minimizing imbalance while controlling precision}
\label{min}
We can obtain minimal imbalance by minimizing $\mathcal{B}^2(W_{1:n})$.
However, to control for extreme weights we propose to regularize the weight variables $W_{1:n}$. We therefore wish to find weights that minimize $\mathcal{B}^2(W_{1:n})$ plus a penalty for deviations from uniform weighting. Formally, we want to solve
\begin{equation}\label{cmse}
\begin{aligned}
\underset{W_{1:n} \in \mathcal{W}}{\min} \quad
\mathcal{B}^2(W_{1:n}) + \lambda \|W_{1:n}-e\|_2^2,
\end{aligned}
\end{equation}
\noindent
where $e$ is the vector of ones and $\mathcal{W}= \lbrace W_{1:n}:W_i \geq 0\;\forall i \rbrace$ is the space of nonnegative weights $W_{1:n}$.
The squared distance of the weights from uniform weights here serves as a convex surrogate for the variance of the resulting MSM estimate (assuming homoskedasticity or bounded residual variances), and $\lambda$ in eq.~\eqref{cmse} can be interpreted as a penalization parameter that controls the trade-off between imbalance and precision.
When $\lambda$ is equal to zero, the obtained weights provide minimal imbalance. As $\lambda \rightarrow \infty$, the weights approach uniform weights, leading to an ordinary least squares estimator for the MSM.
In the next section, we discuss a specific choice of the norm that specifies the worst case discrepancies $\Delta_{a_t}^{(t)}(W_{1:n})$ presented in Section \ref{nswci}. Specifically, we show that by choosing an RKHS to specify the norm, we can express the optimization problem in eq.~\eqref{cmse} as a convex-quadratic function in $W_{1:n}$, which can be easily solved by using off-the-shelf solvers for quadratic optimization.
\subsection{RKHS and quadratic optimization to optimally balance time-dependent confounders}
\label{kernelqp}
An RKHS is a Hilbert space of functions with which is associated a kernel (the reproducing kernel). Specifically, any positive semi-definite kernel $\mathcal K:\mathcal Z\times\mathcal Z\to\mathbb R$ on a ground space $\mathcal Z$ defines a Hilbert space given by (the unique completion of) the span of all functions $\mathcal K(z,\cdot)$ for
$z\in\mathcal Z$, endowed with the inner product $\left<\mathcal K(z,\cdot),\mathcal K(z',\cdot)\right>=\mathcal K(z,z')$.
Kernels are widely used in machine learning to generalize the structure of conditional expectation functions with many applications in statistics \citep{scholkopf2002learning,berlinet2011reproducing,kallus2016generalized,kallus2018optimal}. Commonly used kernels are the polynomial, Gaussian, and Mat\'ern kernels \citep{scholkopf2002learning}.
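As a small self-contained sketch (in numpy, with our own helper names), the Gaussian kernel illustrates the two properties used below: the Gram matrix $K_{ij}=\mathcal K(z_i,z_j)$ is symmetric positive semidefinite, and its entries are the inner products of the feature functions $\mathcal K(z_i,\cdot)$.

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(6, 3))                     # six points z in R^3

def gaussian_gram(Z1, Z2, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||z1_i - z2_j||^2 / (2 sigma^2))."""
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

K = gaussian_gram(Z, Z)
eigs = np.linalg.eigvalsh(K)                    # PSD: all eigenvalues >= 0
```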
The following theorem shows that if $\|\cdot\|_{(t)}$, the norm that specifies the worst case discrepancies $\Delta_{a_t}^{(t)}(W_{1:n})$, is an RKHS norm given by a kernel $\mathcal K_t$, then each squared worst case discrepancy can be expressed as a convex-quadratic function in
$W_{1:n}$.
\begin{theorem}
\label{thm2}
Define the matrix $K_t\in\mathbb R^{n\times n}$ as $$K_{tij}=\mathcal K_t((\overline A_{i,t-1},\overline X_{it}),(\overline A_{j,t-1},\overline X_{jt}))$$ and note that it is positive semidefinite by definition. Then,
if the norm $\|\cdot\|_{(t)}$ is the RKHS norm given by the kernel $\mathcal K_t$,
the squared worst case discrepancies are
\begin{equation*}
\begin{aligned}
\Delta^{(1)}_{a_1}(W_{1:n})^2 &= \frac1{n^2}\left(W_{1:n}^TI^{(1)}_{a_1}K_1I^{(1)}_{a_1}W_{1:n}-2e^TK_1I^{(1)}_{a_1}W_{1:n}+e^TK_1e\right),\\
\Delta^{(t)}_{a_t}(W_{1:n})^2 &=
\frac1{n^2}W_{1:n}^T(I-I^{(t)}_{a_t}) K_t (I-I^{(t)}_{a_t}) W_{1:n},
\end{aligned}
\end{equation*}
\noindent
where $I$ is the identity matrix and $I^{(t)}_{a_t}$ is the diagonal matrix with $\mathbb I[A_{it}=a_t]$ in its $i^\text{th}$ diagonal entry.
\end{theorem}
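The quadratic form in Theorem \ref{thm2} for $t\geq2$ can be checked numerically: for any $h$ in the span of the kernel features, the normalized sample discrepancy is bounded by $\Delta^{(t)}_{a_t}(W_{1:n})$, by Cauchy-Schwarz in the RKHS. The following sketch (our own toy data and a Gaussian kernel, purely illustrative) verifies this bound.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
H = rng.normal(size=(n, 3))                     # stand-in for (Abar_{t-1}, Xbar_t)
A_t = rng.binomial(1, 0.5, size=n)              # treatment at time t
W = rng.uniform(0.5, 2.0, size=n)               # candidate weights

def gram(Z, sigma=1.0):
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

K = gram(H)
a_t = 1
M = np.eye(n) - np.diag((A_t == a_t).astype(float))   # I - I^{(t)}_{a_t}

# Theorem 2, t >= 2: squared worst case discrepancy as a quadratic form.
Delta2 = W @ M @ K @ M @ W / n ** 2

# For h = sum_j alpha_j K(z_j, .), with ||h||^2 = alpha' K alpha, the sample
# discrepancy delta-hat(W, h) is at most sqrt(Delta2) * ||h||.
alpha = rng.normal(size=n)
h_vals = K @ alpha                              # h evaluated at the data points
h_norm = np.sqrt(alpha @ K @ alpha)
delta = np.mean((W * (A_t == a_t) - W) * h_vals)
```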
Based on Theorem \ref{thm2}, we can now express the worst case imbalance, $\mathcal{B}^2(W_{1:n})$, defined in eq.~\eqref{nswcimb}, as a convex-quadratic function. Specifically, let $K_t^\circ={I^{(t)}_{0}K_tI^{(t)}_{0}+I^{(t)}_{1}K_tI^{(t)}_{1}}$, which is given by setting every entry $i,j$ of $K_t$ to 0 whenever $A_{it}\neq A_{jt}$, and $K^\circ = \sum_{t=1}^TK_t^\circ$.
We then get that
\begin{equation}
\begin{aligned}
\mathcal{B}^2(W_{1:n})&=\frac1{2}\sum_{t=1}^T(\Delta^{(t)}_0(W_{1:n})^2+\Delta^{(t)}_1(W_{1:n})^2)
\\&
=
\frac1{n^2}\biggl(\frac12W_{1:n}^TK^\circ W_{1:n}-e^TK_1W_{1:n}+e^TK_1e\biggr).
\end{aligned}
\end{equation}
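The algebra behind this expression, namely that $I-I^{(t)}_0=I^{(t)}_1$ and $I-I^{(t)}_1=I^{(t)}_0$, can be verified numerically. The sketch below (toy data, Gaussian kernels, our own variable names) computes $\mathcal{B}^2(W_{1:n})$ both as the sum of squared worst case discrepancies from Theorem \ref{thm2} and via the closed form with $K^\circ$, and checks that the two agree.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 40
X1 = rng.normal(size=(n, 1))                    # Xbar_1
X2 = rng.normal(size=(n, 1))
A = rng.binomial(1, 0.5, size=(n, 2))           # A_1, A_2
W = rng.uniform(0.5, 2.0, size=n)
e = np.ones(n)

def gram(Z, sigma=1.0):
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

K1 = gram(X1)                                   # kernel on Xbar_1
K2 = gram(np.column_stack([A[:, :1], X1, X2]))  # kernel on (A_1, Xbar_2)

def I_at(t, a):                                 # diagonal indicator matrix I^{(t)}_a
    return np.diag((A[:, t] == a).astype(float))

# Left-hand side: (1/2) sum_t (Delta_0^2 + Delta_1^2) from Theorem 2.
lhs = 0.0
for a in (0, 1):
    Ia = I_at(0, a)
    lhs += 0.5 * (W @ Ia @ K1 @ Ia @ W - 2 * e @ K1 @ Ia @ W + e @ K1 @ e) / n ** 2
    M = np.eye(n) - I_at(1, a)
    lhs += 0.5 * (W @ M @ K2 @ M @ W) / n ** 2

# Right-hand side: closed form with K^o = sum_t K_t^o.
def k_circ(K, t):                               # zero entries where A_it != A_jt
    same = (A[:, t][:, None] == A[:, t][None, :]).astype(float)
    return K * same

K_circ = k_circ(K1, 0) + k_circ(K2, 1)
rhs = (0.5 * W @ K_circ @ W - e @ K1 @ W + e @ K1 @ e) / n ** 2
```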
Finally, to obtain weights that optimally balance covariates to control for time-dependent confounding while controlling precision we solve the quadratic optimization problem,
\begin{equation}
\label{QP}
\begin{aligned}
\underset{W_{1:n} \in \mathcal{W}}{\min} \quad &
\frac12W_{1:n}^TK_{\lambda}^\circ W_{1:n}-e^TK_{\lambda}W_{1:n}
\end{aligned}
\end{equation}
where $K_{\lambda}^\circ = K^\circ+2\lambda I$, $K_{\lambda}=K_1+2\lambda I$. We call this proposed methodology and the result of eq.~\eqref{QP}, Kernel Optimal Weighting (KOW).
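Since $K^\circ$ is positive semidefinite and $\lambda>0$ makes $K_\lambda^\circ$ positive definite, eq.~\eqref{QP} is a strongly convex quadratic program over the nonnegative orthant. Any off-the-shelf QP solver applies; as an illustration only (not the authors' implementation), projected gradient descent in plain numpy suffices:

```python
import numpy as np

rng = np.random.default_rng(4)
n, lam = 80, 1.0
X = rng.normal(size=(n, 2))                     # toy confounders X_1, X_2
A = rng.binomial(1, 0.5, size=(n, 2))           # toy treatments A_1, A_2

def gram(Z, sigma=1.0):
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

K1 = gram(X[:, :1])                             # kernel on Xbar_1
K2 = gram(np.column_stack([A[:, 0], X]))        # kernel on (A_1, Xbar_2)

def k_circ(K, a):                               # zero entries where a_i != a_j
    return K * (a[:, None] == a[None, :]).astype(float)

K_circ = k_circ(K1, A[:, 0]) + k_circ(K2, A[:, 1])
Q = K_circ + 2 * lam * np.eye(n)                # K_lambda^o
b = (K1 + 2 * lam * np.eye(n)) @ np.ones(n)     # K_lambda e

# Projected gradient descent for min_{W >= 0} 0.5 W'QW - b'W.
step = 1.0 / np.linalg.eigvalsh(Q).max()
W = np.ones(n)
for _ in range(2000):
    W = np.maximum(0.0, W - step * (Q @ W - b))
```

At convergence the KKT conditions hold: the gradient $QW-b$ is nonnegative wherever $W_i=0$ and (numerically) zero wherever $W_i>0$.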
\section{Practical guidelines}
\label{guidelines}
Solutions to the quadratic optimization problem (\ref{QP}) depend on several factors. First, they depend on the choice of the kernel and its hyperparameters. There are some existing practical guidelines on these
choices \citep{scholkopf2002learning,rasmussen2006gaussian}, on which we
rely as explained below.
Second, they depend on the penalization parameter $\lambda$. Finally, solutions to eq.~\eqref{QP} depend on the chosen set of lagged covariates to include in each kernel. In this section, we introduce some practical guidelines on how to apply KOW in consideration of these factors.
For each $t$, the unknown function $g_{\overline{a}}^{(t)}(\overline{A}_{t-1},\overline{X}_t)$ has two distinct inputs: the treatment history and the confounder history. To reflect this structure, we suggest specifying the kernel $\mathcal K_t$ as a \emph{product kernel}, \textit{i.e.},\break $\mathcal K_t((\overline a_{t-1},\overline x_t),(\overline a'_{t-1},\overline x'_t))=\mathcal K_t^{(1)}(\overline a_{t-1},\overline a'_{t-1})\mathcal K_t^{(2)}(\overline x_{t},\overline x'_{t})$, given a treatment history kernel $\mathcal K_t^{(1)}$ and a confounder history kernel $\mathcal K_t^{(2)}$. This simplifies the process of specifying the kernels. We further suggest using, for the treatment history, a linear kernel involving $\ell$ lagged treatments, $\mathcal K_t^{(1)}(\overline a_{t-1},\overline a'_{t-1})=\sum_{s=\max(1,t-\ell)}^{t-1}a_sa'_s$, and, for the confounder history, a polynomial kernel involving the time-invariant confounders and $\ell$ lagged time-dependent confounders, $\mathcal K_t^{(2)}(\overline x_{t},\overline x'_{t})=(1+\theta x_1^Tx'_1+\theta\sum_{s=\max(2,t-\ell+1)}^{t}x_s^Tx'_s)^d$, where $\theta>0$ and $d\in\mathbb N$ are hyperparameters. We discuss the choice of the number of lags and the hyperparameters below.
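The suggested product kernel can be assembled in a few lines; the sketch below (numpy, our own variable names, arbitrary toy values of $\ell$, $\theta$, $d$) builds the linear treatment history kernel, the polynomial confounder kernel, and their elementwise product, which is positive semidefinite by the Schur product theorem.

```python
import numpy as np

rng = np.random.default_rng(5)
n, t, ell, theta, d = 30, 3, 2, 0.5, 2
A_hist = rng.binomial(1, 0.5, size=(n, t - 1))  # Abar_{t-1}
X_hist = rng.normal(size=(n, t))                # Xbar_t; column 0 is X_1

# K_t^{(1)}: linear kernel on the last ell lagged treatments
# (0-based column index of s = max(1, t - ell) is max(0, t - 1 - ell)).
lo_a = max(0, t - 1 - ell)
K_treat = A_hist[:, lo_a:] @ A_hist[:, lo_a:].T

# K_t^{(2)}: polynomial kernel on X_1 and the last ell time-dependent lags
# (0-based column index of s = max(2, t - ell + 1) is max(1, t - ell)).
lo_x = max(1, t - ell)
lin = X_hist[:, :1] @ X_hist[:, :1].T + X_hist[:, lo_x:] @ X_hist[:, lo_x:].T
K_conf = (1.0 + theta * lin) ** d

# Product kernel: elementwise (Schur) product of two PSD Gram matrices.
K_t = K_treat * K_conf
```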
In our simulation study in Section \ref{simu}, we show that the MSM-estimated effect using KOW with a product of a linear kernel and a quadratic kernel ($d=2$) attains a lower MSE than estimates using weights obtained by IPTW, sIPTW and CBPS in all considered simulated scenarios.
We again use this choice of kernels in our empirical applications of KOW to real datasets in Section \ref{empirics}.
Many other choices of kernel are also possible and may be more appropriate in a particular application, but we suggest the above combination as a generic and successful recipe.
When using kernels, preprocessing the data is an important step. In particular, normalization is employed to avoid unit dependence and to prevent covariates with high variance from dominating those with smaller variance. Consequently, we suggest scaling, beforehand, the covariates related to the treatment and confounder histories to have mean 0 and variance 1.
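The suggested preprocessing is standard; a minimal sketch:

```python
import numpy as np

def standardize(X):
    # scale each covariate column to mean 0 and variance 1
    mu = X.mean(axis=0)
    sd = X.std(axis=0)
    sd[sd == 0] = 1.0  # guard against constant columns
    return (X - mu) / sd
```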
To tune the kernels' hyperparameters and the penalization parameter $\lambda$, we follow \cite{kallus2016generalized} and use the empirical Bayes approach of marginal likelihood \citep{rasmussen2006gaussian}.
We postulate a Gaussian process prior $g^{(t)} \sim \mathcal{GP}(c_t\mathbf 1,\mathcal{K}_{t}(\theta_t))$, where $c_t\mathbf 1$ is a constant function and $\mathcal{K}_{t}(\theta_t)$ is a kernel that depends on some set of hyperparameters $\theta_t$. For each $t$, we then maximize the marginal likelihood of seeing the data $Y \sim \mathcal{N}(g^{(t)}(\overline A_{t-1},\overline X_t), \lambda_t)$ over $\theta_t,\lambda_t,c_t$ and let $\lambda=\sum_{t=1}^T\lambda_t$. It would be more correct to consider the marginal likelihood of observing the partial means of outcomes, but we find that this much simpler approach suffices for learning the right representation of the data ($\theta_t$) and the right penalization parameter ($\lambda$), and it enables the use of existing packages such as \textsf{GPML} \citep{rasmussen2010gaussian}. We demonstrate this
in the simulations presented in Section \ref{simu}, and in particular in Figures \ref{fig1b} and \ref{fig2b}
we see that this approach leads to a value of the penalization parameter that is near that which minimizes the resulting MSE of the MSM over possible parameters.
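The empirical-Bayes computation can be sketched as follows. This is a simplification of what \textsf{GPML} does: outcomes are assumed centered (so $c_t$ is dropped), a single noise level is tuned by grid search rather than by gradient-based maximization, and the kernel hyperparameters are held fixed.

```python
import numpy as np

def log_marginal_likelihood(K, y, lam):
    # GP evidence for centered outcomes: y ~ N(0, K + lam I)
    n = len(y)
    L = np.linalg.cholesky(K + lam * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2.0 * np.pi))

def tune_lambda(K, y, grid):
    # empirical-Bayes choice: the grid value maximizing the evidence
    return max(grid, key=lambda lam: log_marginal_likelihood(K, y, lam))
```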
Another practical concern is how many lagged covariates to include in each of the kernels $\mathcal{K}_{t}$. When deriving inverse probability weights, it is common to model the denominator in eq.~\eqref{sipweights} by fitting a pooled logistic model \citep{d1990relation} including only the time-invariant confounders, $X_{1}$, the time-dependent confounders at time $t$, $X_{t}$, and the one-time-lagged treatment, $A_{t-1}$, rather than the entire histories, \textit{i.e.}, $\operatorname{logit} \mathbb P(A_{t}=a_t \mid \overline{A}_{t-1}=\overline a_{t-1}, \overline{X}_{t}=\overline x_t )=\alpha_t + \beta_1 A_{t-1} + \beta_2 X_{1} + \beta_3 X_{t}$ \citep{hernan2001marginal,hernan2002estimating}. This can be understood as a certain Markovian assumption about the data generating process, which simplifies the modeling when $T$ is large. The same can be done in the case of KOW, where we may assume that $g_{\overline{a}}^{(t)}$ is only a function of the one-time-lagged treatment, the time-dependent confounders at time $t$, and the time-invariant confounders, \textit{i.e.}, $g_{\overline{a}}^{(t)}(\overline A_{t-1},\overline X_t)=g_{\overline{a}}^{(t)}(A_{t-1},X_1,X_t)$, and correspondingly let the kernel $\mathcal K_t$ only depend on $A_{t-1}$, $X_1$, and $X_t$.
More generally, we can consider including any number of lagged variables, as represented by the parameter $\ell$ in the above specification of the linear and polynomial kernels. In Section \ref{caseblack}, we consider an empirical setting where $T$ is small and specify the kernels using the whole treatment and confounder histories ($\ell=T$). However, in Section \ref{casehiv} we consider a setting where $T$ is large and, following previous approaches studying the same dataset using IPTW with a logistic model of only the one-time lags \citep{hernan2000marginal,hernan2001marginal,hernan2002estimating}, we keep only the baseline and one-time-lagged data in each kernel specification ($\ell=1$).
Certain datasets, such as the one we study in Section \ref{casehiv}, have repeated observations of outcomes at each time $t=1, \ldots, T$. Thus, for each subject, we have $T$ observations to be used to fit the MSM. Correspondingly, each observation should be weighted appropriately. This can be seen as $T$ instances of the weighting problem. For sIPTW, this boils down to restricting the products in the numerator and denominator of eq.~\eqref{sipweights} to be only up to $t$ for each $t=1, \ldots, T$. Similarly, in the case of KOW, we propose to solve eq.~\eqref{QP} for each value of $t=1, \ldots, T$, producing $n \times T$ weights, one for each of the outcome observations, to be used in fitting the MSM. This is demonstrated in Section \ref{casehiv}.
In the case of a single, final observation of outcome, normalizing the weights, whether IPTW or KOW, does not affect the fitted MSM as it amounts to multiplying the least-squares loss by a constant factor.
But in the repeated observation setting described above, normalizing each set of weights for each time period separately can help.
Correspondingly, we can add a constraint to eq.~\eqref{QP} that the mean of the weights must equal one for each time period separately, which we
demonstrate in
Section \ref{casehiv}.
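As a simple post-hoc approximation of this constraint, the $n \times T$ array of weights can be rescaled column by column; the array layout below is an assumption for illustration, and imposing the mean-one restriction directly as a constraint in the quadratic program (as we do) is the preferred route.

```python
import numpy as np

def normalize_per_period(W):
    # W is an n x T array of weights, one column per outcome period;
    # rescale each column so that its mean is exactly one
    return W / W.mean(axis=0, keepdims=True)
```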
\section{Simulations}
\label{simu}
In this section, we show the results of a simulation study aimed at comparing the bias and MSE of estimating the cumulative effect of a time-varying treatment on a continuous outcome by using an MSM with weights obtained by each of KOW, IPTW, sIPTW, and CBPS.
\subsection{Setup}
\label{simu_setup}
We considered two simulated scenarios with $T=3$ time periods: (1) a linear scenario, in which the treatment was modeled linearly, and (2) a nonlinear scenario, in which it was modeled quadratically. In both scenarios, we modeled the outcomes nonlinearly so as not to favor our method unfairly.
We tuned the kernel's hyperparameters and the penalization parameter as presented in Section \ref{guidelines} and computed bias and MSE over 1{,}000 replications for each of varying sample sizes, $n=100, \ldots, 1{,}000$. In addition, to study the impact of the penalization parameter $\lambda$ on bias and MSE, in both scenarios, we fixed the sample size to $n=500$ and considered a grid of 25 values for $\lambda$.
{For the linear scenario, we drew the data from the following model: }
\begin{equation*}
\begin{aligned}
\label{simu_l}
\notag Y_i &= -1.91 + 0.8 \textstyle \sum_{t=1}^T A_{i,t} + 0.5 \textstyle \sum_{k=1}^3 Z_{i,k} + 0.05 \textstyle \sum_{{k\neq m}} Z_{i,k}Z_{i,m} + N(0, 5),
\end{aligned}
\end{equation*}
where $Z_{i,k} = \sum_{t=1}^T X_{i,t,k}$, $A_{i,t} \sim \text{binom}(\pi_{i,t}^\text{(L)})$, $X_{i,t,k} \sim N(X_{i,t-1,k} + 0.1,1)$, $k=1,2,3$, and
\begin{equation*}
\begin{aligned}
\pi_{i,t}^\text{(L)} &= (1 + \exp(-(0.5 + 0.5 A_{i,t-1} + 0.05X_{i,t,1}+ 0.08X_{i,t,2} -0.03X_{i,t,3} \\\notag &+ 0.2 A_{i,t-1}\textstyle \sum_{k=1}^3 X_{i,t,k})))^{-1} .
\end{aligned}
\end{equation*}
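The linear data generating process above can be sketched as follows; the initial values $X_{i,0,k}=0$ and $A_{i,0}=0$, and reading $N(0,5)$ as a variance of 5, are assumptions made to complete the specification.

```python
import numpy as np

def simulate_linear(n, T=3, seed=0):
    rng = np.random.default_rng(seed)
    X = np.zeros((n, T + 1, 3))   # X[:, 0, :] = 0 as a starting value (assumption)
    A = np.zeros((n, T + 1))      # A[:, 0] = 0 (assumption)
    for t in range(1, T + 1):
        X[:, t, :] = rng.normal(X[:, t - 1, :] + 0.1, 1.0)
        lin = (0.5 + 0.5 * A[:, t - 1]
               + 0.05 * X[:, t, 0] + 0.08 * X[:, t, 1] - 0.03 * X[:, t, 2]
               + 0.2 * A[:, t - 1] * X[:, t, :].sum(axis=1))
        A[:, t] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))
    Z = X[:, 1:, :].sum(axis=1)   # Z_k = sum_t X_{t,k}
    cross = sum(Z[:, k] * Z[:, m]
                for k in range(3) for m in range(3) if k != m)
    Y = (-1.91 + 0.8 * A[:, 1:].sum(axis=1) + 0.5 * Z.sum(axis=1)
         + 0.05 * cross + rng.normal(0.0, np.sqrt(5.0), n))  # N(0,5) as variance 5
    return A[:, 1:], X[:, 1:, :], Y
```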
For the nonlinear scenario, we drew the data from the following model:
\begin{equation*}
\begin{aligned}
\label{simu_nl}
\notag
Y_i &= -21.46 + 0.8 \textstyle \sum_{t=1}^T A_{i,t} + 0.5 \textstyle \sum_{k=1}^3 Z_{i,k} + 0.1 (\textstyle \sum_{{k\neq m}} Z_{i,k}Z_{i,m}) + N(0, 5),
\end{aligned}
\end{equation*}
\noindent
where $Z_{i,k} = \sum_{t=1}^T X_{i,t,k}^2$, $A_{i,t} \sim \text{binom}(\pi_{i,t}^\text{(NL)})$, $X_{i,t,k} \sim N(X_{i,t-1,k} + 0.1,1)$, $k=1,2,3$ and
\begin{equation*}
\begin{aligned}
\pi_{i,t}^\text{(NL)} &= (1 + \exp(- (0.5 + 0.5 A_{i,t-1} +
0.05X_{i,t,1} + 0.08X_{i,t,2} -0.03X_{i,t,3} \\\notag &+
0.025X^2_{i,t,1} + 0.04X^2_{i,t,2} -0.015X^2_{i,t,3}
+ 0.3 \textstyle \sum_{k \neq m} X_{i,t,k} X_{i,t,m} \\\notag &+ 0.2 A_{i,t-1} \textstyle \sum_{k=1}^3 X_{i,t,k} + 0.1 A_{i,t-1} \textstyle \sum_{k=1}^3 X^2_{i,t,k} \\\notag &+ 0.05 A_{i,t-1} \textstyle \sum_{k \neq m} X_{i,t,k} X_{i,t,m})))^{-1}.
\end{aligned}
\end{equation*}
The intercepts $-1.91$ and $-21.46$ were chosen so that the marginal mean of $Y_i$ is 0.
In each scenario and for each replication, we computed two sets of KOW weights. We obtained the first by using the product of two linear kernels ($\mathcal{K}_1$), one for the treatment history and one for the confounder history, and the second by using the product of a linear kernel for the treatment history and a quadratic kernel for the confounder history ($\mathcal{K}_2$). As presented in Section \ref{guidelines}, we rescaled the variables before inputting them to the kernel and, for each replication, we tuned $\lambda$ and the kernels' hyperparameters by using Gaussian-process marginal likelihood.
We also computed two sets of IPTW and sIPTW weights. We obtained the first by fitting, for each $t=1,2,3$, a logistic regression model for the treatment $A_{i,t}$ conditioned on $\overline A_{i,t-1},\overline X_{i,t}$ and their interactions, which is well specified for $\pi_{i,t}^\text{(L)}$ (we term this the linear specification), and the second by adding all quadratic confounder terms and their interactions with $\overline A_{i,t-1}$, which is well specified for $\pi_{i,t}^\text{(NL)}$ (we term this the nonlinear specification).
The numerator of sIPTW in either case was obtained by fitting a logistic regression on the treatment history alone. We obtained the final set of IPTW and sIPTW weights by multiplying the weights over $t$ as shown in eq.~\eqref{sipweights}.
Finally, we computed two sets of weights using CBPS:
one using the covariates as they are (linear CBPS)
and one by augmenting the
covariates with all quadratic monomials (non-linear CBPS).
We used the full (non-approximate) version of CBPS.
We computed the causal parameter of interest by using weighted least squares (WLS), regressing the outcome on the cumulative treatment with weights computed by each of the methods. Specifically, in the linear scenario, we computed weights using (1) $\mathcal{K}_1$ for KOW, the linear specification for IPTW and sIPTW, and linear CBPS, which we refer to as the \textit{correct} case, and (2) $\mathcal{K}_2$ for KOW, the nonlinear specification for IPTW and sIPTW, and nonlinear CBPS, which we refer to as the \textit{overspecified} case. In the nonlinear scenario, we again used each of the above, but refer to the first as the \textit{misspecified} case and the second as the \textit{correct} case.
We highlight that these terms may only reflect the model specification for IPTW and sIPTW, as CBPS does not require a particular specification and the function $g_{\overline a}^{(t)}$ need not be in the RKHS that either kernel specifies.
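The WLS step can be sketched in a few lines; the intercept-plus-slope specification of the MSM is an assumption for illustration.

```python
import numpy as np

def wls_cumulative_effect(Y, A, w):
    # weighted least squares of Y on [1, sum_t A_t]; returns the slope,
    # i.e., the estimated per-period effect of the cumulative treatment
    cumA = A.sum(axis=1)
    Xd = np.column_stack([np.ones_like(cumA), cumA])
    beta = np.linalg.solve((Xd.T * w) @ Xd, Xd.T @ (w * Y))
    return beta[1]
```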
We used \textsf{Gurobi} and its \textsf{R} interface \citep{gurobi} to solve eq.~\eqref{cmse} and optimize the KOW weights, the \textsf{MatLab} package \textsf{GPML} \citep{rasmussen2010gaussian} to perform the marginal likelihood estimation of hyperparameters, {the \textsf{R} package \textsf{R.matlab} to call \textsf{MatLab} from within \textsf{R}}, the \textsf{R} command \textsf{glm} to fit treatment models for IPTW and sIPTW, the \textsf{R} package \textsf{CBMSM} for CBPS, and the \textsf{R} command \textsf{lm} to fit the MSM.
\subsection{Results}
In this section we discuss the results obtained in the simulation study across sample sizes and across values of the penalization parameter, $\lambda$. In summary, the proposed KOW outperformed IPTW, sIPTW and CBPS with respect to MSE across all sample sizes and simulation scenarios. An important result is that, in the misspecified case, KOW showed lower bias and MSE than IPTW, sIPTW and CBPS across all considered sample sizes.
\subsubsection{Across sample sizes}
\label{simu_ss}
Figure \ref{fig1} shows bias and MSE of the estimated time-varying treatment effect using KOW (solid),
IPTW (dashed), sIPTW (dotted), and CBPS (dashed-dotted) when increasing the sample size from $n=100$
to $n=1{,}000$. In the linear-correct scenario, IPTW had a lower bias compared with sIPTW, CBPS and KOW in small samples (top-left panel of Figure \ref{fig1}).
However, for larger samples, KOW had a smaller bias compared with IPTW, sIPTW and CBPS. KOW outperformed IPTW, sIPTW and CBPS in terms of MSE across sample sizes (top-right panel of Figure \ref{fig1}). In the linear-overspecified scenario, KOW outperformed the other methods with respect to MSE across all sample sizes (bottom-right panel of Figure \ref{fig1}). KOW and sIPTW performed similarly with respect to bias in the nonlinear-misspecified scenario (top-left panel of Figure \ref{fig2}), while KOW outperformed IPTW, sIPTW and CBPS with respect to MSE at all sample sizes (top-right panel of Figure \ref{fig2}). KOW, IPTW and sIPTW had similar bias in the nonlinear-correct scenario (bottom-left panel of Figure \ref{fig2}), with KOW outperforming the other methods with respect to MSE across all sample sizes (bottom-right panel of Figure \ref{fig2}). In summary, the MSE obtained by using KOW was lower than that of IPTW, sIPTW and CBPS across all considered sample sizes.
As the next section shows, the larger biases in some of the cases are driven by the choice of the penalization parameter $\lambda$. Here we chose $\lambda$ with an eye toward minimizing MSE. As shown next, a smaller $\lambda$ can lead to KOW having \emph{both} smaller bias and smaller MSE than the other methods, but the total benefit in MSE is then smaller.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=.6]{Rplot_1.eps}
\end{center}
\caption{\footnotesize Bias and MSE of the estimated time-varying treatment effect using KOW (solid), IPTW (dashed), sIPTW (dotted) and CBPS (dashed-dotted) when increasing the sample size from $n=100$ to $n=1{,}000$ in the linear-correct scenario (top panels) and in the linear-overspecified scenario (bottom panels).
\label{fig1} }
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=.6]{Rplot_2.eps}
\end{center}
\caption{\footnotesize Bias and MSE of the estimated time-varying treatment effect using KOW (solid), IPTW (dashed), sIPTW (dotted) and CBPS (dashed-dotted) when increasing the sample size from $n=100$ to $n=1{,}000$, in the nonlinear-misspecified scenario (top panels) and in the nonlinear-correct scenario (bottom panels).
\label{fig2} }
\end{figure}
\subsubsection{Across values of the penalization parameter, $\lambda$}
Figures \ref{fig1b} and \ref{fig2b} show the ratios of squared biases (left panels) and of MSEs (right panels) when comparing KOW with IPTW (solid), sIPTW (dashed) and CBPS (dotted) across different values of $\lambda$
and $n=500$ in the linear and nonlinear scenarios, respectively.
Values above 1 mean that KOW had a lower bias or MSE.
For zero or small $\lambda$, KOW significantly outperformed IPTW, sIPTW and CBPS with respect to bias. In many cases, the MSE was also smaller for zero $\lambda$. However, the biggest benefit in MSE was seen for larger $\lambda$.
The peaks in the right panels represent the points at which $\lambda$ is optimal, \textit{i.e.}, at which the MSE of KOW is minimized. The solid vertical lines on the right panels show the mean values, across replications, of the $\lambda$ obtained by the procedure described in Sections \ref{guidelines} and \ref{simu_setup}, as done in the previous section. It can be seen that these are very near the points at which the MSE is minimized. The benefit in MSE both at and around this point was significant across all scenarios.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=.6]{Rplot_LL1_final.eps}
\end{center}
\caption{\footnotesize Ratios of squared biases and MSEs comparing KOW with IPTW (solid), sIPTW (dashed) and CBPS (dotted) across values of $\lambda=0, \ldots, 1500$ in the linear-correct scenario (top panels) and in the linear-overspecified scenario (bottom panels). Ratios above 1 mean that KOW had a lower bias or MSE. Vertical bars show the mean value of $\lambda$, across simulations, obtained as described in Section \ref{simu_ss}.
\label{fig1b} }
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=.6]{Rplot_LL2_final.eps}
\end{center}
\caption{\footnotesize Ratios of squared biases and MSEs comparing KOW with IPTW (solid), sIPTW (dashed) and CBPS (dotted) across values of $\lambda=0, \ldots, 3{,}000$ in the nonlinear-misspecified scenario (top panels) and in the nonlinear-correct scenario (bottom panels). Ratios above 1 mean that KOW had a lower bias or MSE. Vertical bars show the mean value of $\lambda$, across simulations, obtained as described in Section \ref{simu_ss}.
\label{fig2b} }
\end{figure}
\subsubsection{Computational time of KOW}
\label{simu_comp_time}
In this section we present the results of a simulation study aimed at comparing the mean computational time of KOW and CBPS. Compared to sIPTW based on pooled logistic regression, which is generally very fast, both KOW and CBPS have a nontrivial computational time that can grow with both the total number of time periods $T$ and the number of covariates (which, for KOW, manifests as the complexity of the kernel functions). For KOW, the most time-consuming tasks are tuning $\lambda$ by marginal likelihood and computing the matrices that define problem \eqref{QP}, both of which are affected by these two factors, while solving problem \eqref{QP} is fast and does not depend on them. The computational time of CBPS is dominated by inverting a covariance matrix whose dimensions increase exponentially in $T$ and linearly in the number of covariates. \cite{imai2015robust} also propose using an approximate low-rank matrix that ignores certain covariance terms to make the matrix inversion faster.
Here we compare KOW, CBPS with full covariance matrix (CBPS-full), and CBPS with its low-rank approximation (CBPS-approx) when increasing the number of time periods and the number of covariates. Specifically, following the linear-correct scenario presented in Section \ref{simu_setup}, we fixed the sample size equal to $n=100$ and randomly generated 100 samples considering $T=3, \ldots, 10$, and $p=3,\ldots,8$, where $p$ is the total number of covariates $X_{t}$ for each $t$. We fixed the number of covariates to be equal to $p=3$ when evaluating the mean computational times over time periods, while we fixed the number of time periods to be equal to $T=5$ when analyzing over the number of covariates.
For each sample, we computed the KOW weights by solving eq.~\eqref{QP} using kernel $\mathcal K_1$. We used Gaussian process marginal likelihood to tune the kernels' hyperparameters and penalization parameter. We computed CBPS weights using the linear CBPS as in Section \ref{simu_setup}. We used the \textsf{R} package \textsf{rbenchmark} to compute the computational time on a PC with an i7-3770 processor, 3.4 GHz, 8GB RAM and a Linux Ubuntu 16.04 operating system.
Solid lines of Figure \ref{fig3} represent mean computational times for KOW, dashed for CBPS-full, and dotted for CBPS-approx. When the number of time periods was relatively small, the mean computational time of KOW was higher compared with both CBPS methods (left panel of Figure \ref{fig3}). However, the mean computational time of KOW over time periods increased linearly while that of both CBPS methods increased exponentially. This is because, as presented in Section \ref{imbalance}, the number of imbalances that we need to minimize grows linearly in the number of time periods. The mean computational time required by KOW when increasing the number of covariates remained constant, while it increased for both CBPS-full and CBPS-approx, with CBPS-full increasing more rapidly. In summary, KOW was less affected by the total number of time periods and covariates than CBPS with either the full or the low-rank approximate matrix.
Computing KOW required three steps: tuning the parameters, constructing the matrices for problem \eqref{QP}, and solving problem \eqref{QP}. On average, for $T=3$, the first step required 21\% of the total computational time, the second 78.8\%, and the last 0.2\%.
Thus, solving the optimization problem itself is very fast and is not the bottleneck.
\begin{figure}[h!]
\begin{center}
\includegraphics[trim={0 13cm 0 0},clip,scale=.6]{Rplot_comp_time6.eps}
\end{center}
\caption{{\footnotesize Mean computational time in seconds of KOW (solid), CBPS with full covariate matrix (dashed), and CBPS with the low-rank approximation of the full matrix (dotted) over time periods when $n=100$, $p=3$ and $T=2,\ldots,10$ (left panel) and over number of covariates, when $n=100$, $T=5$ and $p=3,\ldots,8$ (right panel).}
\label{fig3} }
\end{figure}
\section{KOW with informative censoring}
\label{censor}
In longitudinal studies, participants may drop out of the study before the end of the follow-up time, in which case their outcomes are missing. When this missingness is due to reasons related to the study (\textit{i.e.}, related to the potential outcomes), selection bias is introduced. This phenomenon is referred to as informative censoring, and it is common in survival analysis, where the interest is in analyzing time-to-event outcomes. Under the assumptions of consistency, positivity, and sequential ignorability of both treatment and censoring, \cite{robins1999estimation} showed that a consistent estimate of the causal effect of a time-varying treatment can be obtained by weighting each subject $i=1, \ldots, n$ at each time period by the product of inverse probability of treatment and censoring weights. Inverse probability of treatment weights are obtained as presented in Section \ref{sec:rew_msm}, while inverse probability of censoring weights are usually obtained by inverting the probability of being uncensored at time $t$, given the treatment and confounder history up to time $t$ \citep{hernan2001marginal}.
In this section we extend KOW to similarly handle informative censoring.
We demonstrate that under sequentially ignorable censoring, minimizing the very same discrepancies as before at each time period, restricted to the units for which data is available, actually controls for both time-dependent confounding as well as informative censoring. Thus, KOW naturally extends to the setting with informative censoring.
Let $C_{it}\in\{0,1\}$ for $t=1,\dots,T$ indicate whether unit $i$ is censored in time period $t$ and let $C_{i0}=0$.
Note that $C_{it}=1$ implies that $C_{i,t+1}=1$ and that $C_{it}=0$ implies that $C_{i,t-1}=0$.
All we require is that we (at least) observe the outcome $Y_i$ whenever $C_{iT}=0$, $X_{it}$ whenever $C_{i,t-1}=0$, and $A_{it}$ whenever $C_{it}=0$. Note that we might observe more, such as the treatment at time $t$ for a unit with $C_{i,t-1}=0$, or it may be that only some of the data after censoring is corrupted, but this is not required.
We summarize the assumption of sequentially ignorable censoring as
\begin{equation}\label{ignocens}
Y(\overline a)\independent \overline C_{t} \mid \overline A_{t},\overline X_{t}.
\end{equation}
Let us redefine
\begin{align}
\label{deltasc}
\delta^{(1)}_{a_1}(W,h^{(1)}) &= \mathbbm{E} \left[ W \mathbbm{1}[A_1=a_1]\mathbbm 1[C_1=0] h^{(1)}(X_1) \right] - \mathbbm{E} \left[ h^{(1)}(X_1) \right] \\\notag
g_{\overline a}^{(1)}(X_1) &= \mathbbm{E} \left[ Y(\overline a) \mid X_1 \right],\\\notag
\delta^{(t)}_{a_t}(W,h^{(t)}) &= \mathbbm{E} \left[ W \mathbbm{1}[A_t=a_t]\mathbbm 1[C_t=0] h^{(t)}(\overline A_{t-1},\overline X_t) \right] \\ \notag &\phantom{=}- \mathbbm{E} \left[ W\mathbbm 1[C_{t-1}=0] h^{(t)}(\overline A_{t-1},\overline X_t) \right],&&\forall t\geq2, \\\notag
g_{\overline a}^{(t)}(\overline A_{t-1},\overline X_t) &= \mathbbm{1}[\overline A_{t-1}=\overline a_{t-1}]\mathbbm{E} \left[ Y(\overline a) \mid \overline A_{t-1},\overline X_t \right],&&\forall t\geq2.
\end{align}
Similarly to Theorem \ref{thm1}, the following theorem shows that we can write the difference between the weighted average outcome
among the \emph{uncensored} $\overline a$-treated units, $\mathbbm{E} \left[ W \mathbbm{1}[\overline A=\overline a]\mathbbm{1}[C_T=0] Y \right]$, and the true average potential outcome of $\overline a$, $\mathbbm{E} \left[ Y(\overline a) \right]$, as the sum over time points $t$ of discrepancies involving the values of treatment and confounder histories up to time $t$.
\begin{theorem}
\label{thm3}
Under
assumptions \eqref{positivity}--\eqref{igno} and \eqref{ignocens},
\begin{equation}\label{decomp_cens}
\mathbbm{E} \left[ W \mathbbm{1}[\overline A=\overline a]\mathbbm{1}[C_T=0] Y \right]
- \mathbbm{E} \left[ Y(\overline a) \right]
=
\sum_{t=1}^T\delta^{(t)}_{a_t}(W,g_{\overline a}^{(t)}).
\end{equation}
\end{theorem}
We then define
the empirical counterparts to $\delta^{(t)}_{a_t}(W,h^{(t)})$ as before in eq.~\eqref{deltase} but limit ourselves to \textit{uncensored} units, as in eq.~\eqref{deltasc}.
We similarly define imbalance,
$\text{IMB}(W_{1:n};(\overline g_{\overline a}^{(t)})_{\overline a\in \mathcal A})$,
and the worst case imbalance $\mathcal{B}^2(W_{1:n})$,
as before in eqs.~\eqref{biasb2} and \eqref{nswcimb}.
Finally, again using kernels to specify norms, we obtain weights that optimally balance covariates to control for time-dependent confounding and account for informative censoring while controlling precision by solving the quadratic optimization problem,
\begin{equation}
\label{QP_C}
\begin{aligned}
\underset{W_{1:n} \in \mathcal{W}}{\min} \quad &
\frac12W_{1:n}^TK_{\lambda}^\circ W_{1:n}-e^TK_{\lambda}W_{1:n}
,
\end{aligned}
\end{equation}
where
$K_{\lambda}^\circ = K^\circ+2\lambda I$,
$K_{\lambda}=K_1+2\lambda I$,
$K^\circ = \sum_{t=1}^TK_t^\circ$,
\break $K_t^\circ=\sum_{a_t \in \lbrace 0,1 \rbrace} I^{(t)}_{a_t}K_tI^{(t)}_{a_t}$,
$K_t\in\mathbb R^{n\times n}$ is a positive semidefinite matrix defined as $K_{tij}=\mathcal K_t((\overline A_{i,t-1},\overline X_{it}),(\overline A_{j,t-1},\overline X_{jt}))$,
$I^{(t)}_{a_t}$ is the diagonal matrix with $\mathbbm 1[A_{it}=a_t]\mathbbm 1[C_{it}=0]-\mathbbm 1[C_{i,t-1}=0]$ in its $i^\text{th}$ diagonal entry (recall $C_{i,0}=0$ for all $i$),
and $e$ is the vector of all ones.
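The assembly of $K^\circ$ from the censoring-adjusted diagonal matrices can be sketched as follows; the encoding of treatments and censoring indicators as $n \times T$ arrays is an assumption for illustration.

```python
import numpy as np

def censored_K_circ(K_t_list, A, C):
    """Assemble K^circ = sum_t sum_{a_t in {0,1}} I_{a_t}^(t) K_t I_{a_t}^(t),
    where I_{a_t}^(t) is diagonal with entries
    1[A_it = a_t] 1[C_it = 0] - 1[C_{i,t-1} = 0]  (and C_{i,0} = 0)."""
    n, T = A.shape
    Cfull = np.hstack([np.zeros((n, 1)), C])   # prepend C_{i,0} = 0
    K_circ = np.zeros((n, n))
    for t in range(1, T + 1):
        for a in (0, 1):
            d = (((A[:, t - 1] == a) & (Cfull[:, t] == 0)).astype(float)
                 - (Cfull[:, t - 1] == 0).astype(float))
            # I K I scales row i by d_i and column j by d_j
            K_circ += (d[:, None] * K_t_list[t - 1]) * d[None, :]
    return K_circ
```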
\section{Applications}\label{empirics}
In this section, we present two empirical applications of KOW. In the first, we estimate the effect of treatment initiation on time to death among people living with HIV (PLWH). In the second, we evaluate the impact of negative advertising on election outcomes.
\subsection{The effect of HIV treatment on time to death}
\label{casehiv}
In this section, we analyze data from the Multicenter AIDS Cohort Study (MACS) to study the effect of the initiation time of treatment on time to death among PLWH. Indeed, due to the longitudinal nature of HIV treatment and the presence of time-dependent confounding, MSMs have been widely used to study causal effects in this domain
\citep[among others]{hernan2000marginal, hernan2001marginal, hiv2010effect, hiv2011initiate, lodi2017effect}.
As an example of time-dependent confounding, CD4 cell count, a measurement used to monitor immune defenses in PLWH and to make clinical decisions, is a predictor of both treatment initiation and survival, as well as being itself influenced by prior treatments.
Recognizing the censoring in the MACS data,
\cite{hernan2000marginal} showed how to estimate the parameters of the MSM by inverse probability of treatment and censoring weighting (IPTCW).
Here, we apply KOW as proposed in Section \ref{censor} to handle both time-dependent confounding and informative censoring while controlling precision. We considered the following potential time-dependent confounders associated with the effect of treatment initiation and the risk of death: CD4 cell count, white blood cell count, red blood cell count, and platelets. We also identified the age at baseline as a potential time-invariant confounding factor.
We considered only recently developed HIV treatments, thus including in the analysis only PLWH who started treatment after 2001. The final sample comprised a total of $n=344$ people and 760 visits, with a maximum of $T=8$ visits per person. We considered two sets of KOW weights, obtained by using a product of either (1) two linear kernels, one for the treatment history and one for the confounder history ($\mathcal{K}_1$), or (2) a linear kernel for the treatment history and a polynomial kernel of degree 2 for the confounder history ($\mathcal{K}_2$). We scaled the covariates related to the treatment and confounder histories, and tuned the kernels' hyperparameters and the penalization parameter by using Gaussian-process marginal likelihood as presented in Section \ref{guidelines}. Following previous approaches studying the HIV treatment using IPTCW that modeled treatment and censoring using single time lags \citep{hernan2000marginal,hernan2001marginal,hernan2002estimating}, we included in each kernel the time-invariant confounders, the previous treatment, $A_{t-1}$, and the time-dependent confounders at time $t$, $X_t$, instead of the entire histories.
As described in Section \ref{guidelines}, since we have repeated observations of outcomes, we computed a set of KOW weights by solving the optimization problem in eq.~\eqref{QP_C} for each horizon up to $T$.
In addition, as described in Section \ref{guidelines}, we constrained the mean of the weights to be equal to one.
We compared the results obtained by KOW with those from IPTCW and stabilized IPTCW (sIPTCW). The latter sets of weights were obtained by using a logistic regression on the treatment history and the aforementioned time-invariant and time-dependent confounders, using only one time lag for each of the treatment and time-dependent confounders, as done in previous approaches studying the HIV treatment using IPTCW \citep{hernan2000marginal,hernan2001marginal,hernan2002estimating}. The numerator of sIPTCW was computed by modeling $h(\overline{A}_t)$ in eq.~\eqref{sipweights} with a logistic regression on the treatment history using only one time lag. We modeled the inverse probability of censoring weights similarly. The final sets of IPTCW and sIPTCW weights were obtained by multiplying the inverse probability of treatment and censoring weights.
We did not compare the results with those of CBPS
because it does not handle informative censoring. In particular, CBPS requires a complete $n \times T$ matrix of observed time-dependent confounders, while in the MACS dataset many entries are missing.
We estimated the hazard ratio of the risk of death by using a weighted Cox regression model \citep{hernan2000marginal} weighted by KOW, IPTCW, or sIPTCW and using robust standard errors \citep{freedman2006so}. We used \textsf{Gurobi} and its \textsf{R} interface to solve eq.~\eqref{QP_C} and obtain the KOW weights, the \textsf{MatLab} package \textsf{GPML} to perform the marginal likelihood estimation of hyperparameters, {the \textsf{R} package \textsf{R.matlab} to call \textsf{MatLab} from within \textsf{R},} the \textsf{R} package \textsf{ipw} \citep{van2011ipw} to fit the treatment models for IPTCW and sIPTCW, and the \textsf{R} command \textsf{coxph} (with robust variance estimation) to fit the outcome model. It took 13.5 seconds to obtain a solution for KOW. Table \ref{table_hiv} summarizes the results of our analysis. Both KOW ($\mathcal{K}_1$) and KOW ($\mathcal{K}_2$) showed a significant protective effect of HIV treatment on time to death among PLWH. IPTCW showed a similar effect but with lower precision, resulting in a non-significant effect. sIPTCW attained precision similar to that of KOW but showed a non-significant effect of HIV treatment on time to death.
{Whereas analyses based on IPTCW and sIPTCW led to non-significant and inconsistent conclusions, the results we obtained by using KOW show that PLWH can benefit from HIV treatment, as shown in independent randomized placebo-controlled trials \citep{cameron1998randomised,hammer1997controlled}. }
\begin{table}[H]
\centering
\caption{Effect of HIV treatment on time to death.}
\label{table_hiv}
\begin{threeparttable}
\begin{tabular}{ccccc}
\hline
& \multicolumn{2}{c}{KOW} & \multicolumn{2}{c}{Logistic} \\
& \multicolumn{1}{c}{$\mathcal{K}_1$} & \multicolumn{1}{c}{$\mathcal{K}_2$} & \multicolumn{1}{c}{IPTCW} & \multicolumn{1}{c}{sIPTCW} \\
\hline
\textit{$\hat{HR}$ } & 0.40* & 0.48* & 0.14 & 1.25 \\
\textit{ SE } & (0.30) & (0.28) & (1.15) & (0.30)
\end{tabular}
\begin{tablenotes}
\footnotesize
\item Note: $\hat{HR}$ is the estimated hazard ratio of the effect of HIV treatment initiation on time to death. $SE$ is the estimated robust standard error. Weights were obtained by using KOW ($\mathcal{K}_1$): a product of two linear kernels, one for the treatment history and one for the confounder history; KOW ($\mathcal{K}_2$): a product between a linear kernel for the treatment history and a polynomial kernel of degree 2 for the confounder history; IPTCW: a logistic regression on the treatment history and the time-invariant and time-dependent confounders (using only one time lag for each of the treatment and time-dependent confounders); sIPTCW: stabilized IPTCW. * indicates statistical significance at the 0.05 level.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{The impact of negative advertising on election outcomes}
\label{caseblack}
In this section, we analyze a subset of the dataset from \cite{blackwell2013framework} to estimate the impact of negative advertising on election outcomes.
Because of the dynamic and longitudinal nature of the problem and the presence of time-dependent confounders, MSMs have previously been used to study this question \citep{blackwell2013framework}.
Specifically, poll numbers are time-dependent confounders, as they might both be affected by past negative advertising and affect future negative advertising.
We constructed the subset
of the data from \citet{blackwell2013framework} by considering
the five weeks leading up to each of 114 elections held 2000--2006 (58 US Senate, 56 US gubernatorial).
Unlike in Section \ref{casehiv}, in which the outcome was observed at each time period, in this analysis the binary election outcome was observed only at the end of each five-week trajectory.
In addition, all units were uncensored.
We estimated the parameters of two MSMs, the first having separate coefficients for negative advertising in each time period and the second having one coefficient for the cumulative effect of negative advertising.
Each MSM was fit using weights given by each of KOW, IPTW, sIPTW, and CBPS (both full and approximate).
We used the following time-dependent confounders: Democratic share of the polls, proportion of undecided voters, and campaign length. We also used the following time-invariant confounders: baseline Democratic vote share, proportion of undecided voters, status of incumbency, election year and type of office.
We obtained two sets of KOW weights by using a product of (1) two linear kernels, one for the history of negative advertising and one for the confounder history ($\mathcal{K}_1$) and (2) a linear kernel for the history of negative advertising and a polynomial kernel of degree 2 for the confounder history ($\mathcal{K}_2$).
The kernels were over the complete confounder history up to time $t$,
$\overline X_t$, and two time-lags of treatment history, $A_{t-1},A_{t-2}$.
We scaled the covariates and tuned the kernels' hyperparameters and the penalization parameter by using Gaussian processes marginal likelihood. We obtained the final set of KOW weights by solving eq.~(\ref{QP}).
We compared the results obtained by KOW with those from IPTW, sIPTW, CBPS-full, and CBPS-approx. To obtain the sets of IPTW, sIPTW, and CBPS weights, we used logistic models conditioned on the confounder history and two time-lags from the treatment history. To compute the numerator of sIPTW weights, we used a logistic regression conditioned only on two time-lags from the treatment history.
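To illustrate how such stabilized weights are assembled once the treatment models have been fitted, the following is a minimal pure-Python sketch, not the \textsf{ipw}-based implementation used here: each unit's weight is the running product, over time periods, of the ratio between the numerator model's probability of the observed treatment (treatment history only) and the denominator model's probability (treatment and confounder history). The per-period propensities below are hypothetical inputs assumed to have been estimated elsewhere.

```python
# Sketch of stabilized inverse probability of treatment weights (sIPTW),
# assuming the per-period propensities have already been fitted:
#   numerator  model: P(observed A_t | treatment history)
#   denominator model: P(observed A_t | treatment and confounder history)
# A unit's weight is the product over time of numerator/denominator.

def stabilized_iptw(num_probs, den_probs):
    """num_probs, den_probs: one list per unit of per-period probabilities
    of the treatment actually received under the two models."""
    weights = []
    for num, den in zip(num_probs, den_probs):
        w = 1.0
        for p_num, p_den in zip(num, den):
            w *= p_num / p_den  # setting p_num = 1 gives unstabilized IPTW
        weights.append(w)
    return weights

# toy example: two units followed over three time periods
num = [[0.6, 0.5, 0.7], [0.4, 0.5, 0.3]]
den = [[0.5, 0.4, 0.9], [0.5, 0.6, 0.2]]
print(stabilized_iptw(num, den))  # approximately [1.167, 1.0]
```

Units whose observed treatment sequence is poorly explained by the confounder-conditional model receive larger weights; stabilization keeps the weight distribution closer to one than unstabilized IPTW.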
We used \textsf{Gurobi} and its \textsf{R} interface to solve eq.~\eqref{QP_C} and obtain the KOW weights, the \textsf{Matlab} package \textsf{GPML} to perform the marginal likelihood estimation of hyperparameters, {the \textsf{R} package \textsf{R.matlab} to call \textsf{MatLab} from within \textsf{R},} the \textsf{R} command \textsf{glm} to fit the treatment models for IPTW and sIPTW, the \textsf{R} package \textsf{CBMSM} for CBPS, the \textsf{R} command \textsf{lm} to fit the outcome model, and the \textsf{R} package \textsf{sandwich} to estimate robust standard errors. The computational time to obtain a solution was equal to 12.6 seconds for KOW, while it was equal to 104 seconds for CBPS-full and 3.8 seconds for CBPS-approx.
Table \ref{table_black} summarizes the results of our analysis, reporting robust standard errors \citep{freedman2006so}.
The first six rows of Table \ref{table_black} show the effects of time-specific negative advertising. The last two rows present the cumulative effect of negative advertising. KOW ($\mathcal{K}_1$ and $\mathcal{K}_2$) and IPTW showed similar effects, with increased precision when using KOW except for time 4, in which both methods showed a significant negative effect but with greater precision when using IPTW. sIPTW, CBPS-full and CBPS-approx showed a significant negative effect at time 3 with similar precision. No significant results were obtained when considering the cumulative effect of negative advertising. All methods except sIPTW showed a negative cumulative effect. KOW ($\mathcal{K}_1$) was the most precise.
{We conclude that the impact of negative advertising in the majority of the time periods, as well as its cumulative effect on election outcomes, is not statistically significant.}
\begin{table}[H]
\centering
\caption{Impact of negative advertising on election outcomes.}
\label{table_black}
\begin{threeparttable}
\begin{tabular}{ccccccc}
\hline
\multicolumn{1}{c}{$\hat \beta$} & \multicolumn{2}{c}{KOW} & \multicolumn{2}{c}{Logistic} & \multicolumn{2}{c}{CBPS} \\
\multicolumn{1}{c}{$SE$} & \multicolumn{1}{c}{$\mathcal{K}_1$} & \multicolumn{1}{c}{$\mathcal{K}_2$} & \multicolumn{1}{c}{IPTW} & \multicolumn{1}{c}{sIPTW} & \multicolumn{1}{c}{Full} & \multicolumn{1}{c}{Approx} \\
\hline
Intercept & 54.54* & 53.84* & 53.05* & 47.46* & 51.25* & 52.17* \\
& (2.15) & (2.38) & (2.88) & (2.98) & (2.70) & (2.39) \\
Negative$_1$ & 2.43 & 3.27 & 4.41 & 7.62* & 5.95* & 4.81* \\
& (1.86) & (1.86) & (2.56) & (3.26) & (2.49) & (2.22) \\
Negative$_2$ & 3.73 & 3.24 & 5.51* & 3.17 & 3.55 & 2.65 \\
& (2.18) & (2.22) & (2.38) & (3.19) & (2.73) & (2.42) \\
Negative$_3$ & -2.51 & -2.39 & -4.37 & -8.32* & -6.50* & -6.31* \\
& (2.34) & (2.45) & (2.54) & (3.84) & (3.20) & (3.24) \\
Negative$_4$ & -7.16* & -7.22* & -8.77* & 2.34 & -3.55 & -3.12 \\
& (2.57) & (2.75) & (1.54) & (3.11) & (3.71) & (3.59) \\
Negative$_5$ & -2.75* & -1.79 & -3.19 & -3.62 & -1.92 & -1.96 \\
& (1.42) & (1.59) & (2.19) & (2.59) & (1.62) & (1.54) \\
& & & & & & \\
\hline
Intercept & 51.40* & 50.56* & 58.29* & 42.63* & 49.38* & 50.28* \\
& (2.45) & (2.63) & (4.29) & (4.15) & (2.68) & (2.49)\\
Cumulative & -0.59 & -0.37 & -0.93 & 1.91 & -0.28 & -0.41 \\
& (0.58) & (0.64) & (1.57) & (1.15) & (0.65) & (0.77) \\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item Note: $\hat{\beta}$ is the estimated effect of negative advertising. $SE$ is the estimated robust standard error. Weights were obtained by using KOW ($\mathcal{K}_1$): a product of two linear kernels, one for the history of negative advertising and one for the confounder history; KOW ($\mathcal{K}_2$): a product between a linear kernel for the history of negative advertising and a polynomial kernel of degree 2 for the confounder history; IPTW: a logistic model conditioned on the confounder history and two time-lags from the treatment history; sIPTW: stabilized IPTW; CBPS-full: CBPS with full covariance matrix; CBPS-approx: CBPS with low-rank approximation. * indicates statistical significance at the 0.05 level.
\end{tablenotes}
\end{threeparttable}
\end{table}
\section{Conclusions}\label{conclusions}
In this paper we presented KOW, which optimally finds weights for fitting an MSM with the aim of balancing time-dependent confounders while controlling for precision. That KOW uses mathematical optimization to directly and fully balance covariates
as well as optimize precision
explains the better performance of KOW over IPTW, sIPTW and CBPS observed in our simulation study.
{In addition, as shown in Sections \ref{nswci}, \ref{simu} and \ref{censor}, the proposed methodology only needs to minimize a number of discrepancies that grows linearly in the number of time periods, mitigates the possible misspecification of the treatment assignment model, allows balancing non-additive covariate relationships, and can be extended to control for informative censoring, which is a common feature of longitudinal studies.}
Alternative formulations of our imbalance-precision optimization problem, eq.~\eqref{cmse}, may be investigated. For example, additional linear constraints can be added to the optimization problem, as shown in the empirical application of Section \ref{casehiv}, and different penalties can be considered to control for extreme weights. For instance, in eq.~\eqref{cmse}, at the cost of no longer being able to use convex-quadratic optimization, one may directly penalize the covariance matrix of the weighted least-square estimator rather than use a convex-quadratic surrogate as we do.
One may also change the nature of precision control. Here, we suggested penalization in an attempt to target total error.
Alternatively, similar to \cite{santacatterina2017optimal}, we may reformulate eq.~\eqref{cmse} as a constrained optimization problem where the precision of the resulting estimator is constrained by an upper bound $\xi$, thus seeking to minimize imbalances subject to having a bounded precision. In our convex formulation, the two are equivalent by Lagrangian duality in that for every precision penalization $\lambda$ there is an equivalent precision bound $\xi$. However, depending on the application, the constrained formulation may make the parameters easier to specify, since a practitioner may find it more natural to conceive of a desirable bound on precision. There may also be other ways to choose the penalization parameter. Here we suggested using maximum marginal likelihood, but cross-validation based on predicting outcomes and their partial means may also be possible.
The flexibility of our approach lies in the fact that any of these changes amounts simply to modifying the optimization problem that is fed to an off-the-shelf solver. Indeed, we were able to extend KOW from the standard longitudinal setting to also handle both repeated observations of outcomes and informative censoring. In addition to offering flexibility, the optimization approach we took, which directly and fully minimized our error objective phrased in terms of covariate imbalances, was able to offer improvements on the state of the art.
\bibliographystyle{chicago}
\section{Introduction}
For an $n$-dimensional nilpotent non-abelian Lie algebra $ L,$ it is well-known that the dimension of its Schur multiplier is equal to $ \dfrac{1}{2}(n-1)(n-2)+1-s(L) $ for some $ s(L)\geq 0,$ by a result of \cite[Theorem 3.1]{ni}. Several papers are devoted to investigating the structure of an $n$-dimensional nilpotent non-abelian Lie algebra $ L$ in terms of $s(L).$ The structure of all nilpotent non-abelian Lie algebras $ L $ was obtained for $ s(L)=0,1,2,3$ in \cite{ni,ni2,sa}. These results not only characterize a nilpotent Lie algebra in terms of $ s(L) $ but can also help to shorten the process of finding the structure of a nilpotent Lie algebra $ L $ in terms of
$ t(L)=\dfrac{1}{2}n(n-1)-\dim \mathcal{M}(L) $ (see \cite{ba1,ni2}).
Let $ L$ be a Lie algebra presented as the quotient of a free Lie algebra $ F$ by an ideal $ R.$ Then the $2$-nilpotent multiplier of $ L,$ $\mathcal{M}^{(2)}(L), $ is isomorphic to $ \dfrac{R\cap F^3}{[R,F,F]};$ it is the $c$-nilpotent multiplier $ \mathcal{M}^{(c)}(L) $ in the case $ c=2 $ (see \cite{ni20}).\\ The study of the $2$-nilpotent multiplier of Lie algebras can lead to a classification of Lie algebras into equivalence classes, as in the group theory case (see \cite{el}).
It also gives a criterion for detecting the $2$-capability of Lie algebras.
Recall that a Lie algebra $L$ is said to be $2$-capable provided that $L\cong H/Z_2(H)$ for some Lie algebra $H$.
In \cite{ni20}, the second author showed that the dimension of the $2$-nilpotent multiplier of an $n$-dimensional non-abelian nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $ m $ is bounded by $ \frac{1}{3} (n-m)
\big{(}(n+2m-2)(n-m-1)+3(m-1)
\big{)}+3.$ Then $\dim \mathcal{M}^{(2)}(L)\leq \frac{1}{3} n(n-2)(n-1)+3 $ and so we have $ \dim \mathcal{M}^{(2)}(L) = \frac{1}{3} n(n-2)(n-1)+3-s_2(L)$ for some $s_2(L)\geq 0.$ The structure of all non-abelian nilpotent Lie algebras was obtained for $s_2(L)=0$ in \cite{ni20}. The current paper is devoted to obtaining the structure of all nilpotent non-abelian Lie algebras $ L $ with $1\leq s_2(L)\leq 6.$ Moreover, we specify which of them are capable.
\section{Preliminaries}
Following Shirshov \cite{shi}, consider a free Lie algebra on the set $X=\{x_1,x_2,\ldots \}.$
The basic commutators on the set $X$ are defined inductively as follows.
\begin{itemize}
\item[$(i)$] The generators $x_1,x_2,\ldots, x_n$ are basic commutators of length one and ordered by setting $x_i < x_j$ if $i < j.$
\item[$(ii)$] If all the basic commutators $d_i$ of length less than $t$ have been defined and ordered, then we may define the basic commutators of length $t$ to be all commutators of the form $[d_i, d_j]$ such that the sum of lengths of $d_i$ and $d_j$ is $t,$ $d_i > d_j,$ and if $d_i =[d_s, d_t],$ then $d_j\geq d_t.$ The basic commutators of length $t$ follow those of lengths less than $t.$ The basic commutators of the same length can be ordered in any way, but usually the lexicographical order is used.
\end{itemize}
The number of all basic commutators on a set $X=\{x_1,x_2,\ldots, x_d\}$ of length $n$ is denoted by $l_d(n)$. Thanks to \cite{2}, we have
\[l_d(n)=\frac{1}{n}\sum_{m|n}\mu (m)d^{\frac{n}{m}},\]
where $\mu (m)$ is the M\"{o}bius function, defined by $\mu (1) = 1, \mu (k) = 0$ if $k$ is divisible by a square, and
$\mu (p_1 \ldots p_s) = (-1)^s $ if $p_1,\ldots , p_s$ are distinct prime numbers.
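As a quick sanity check, the Witt formula above can be evaluated directly; the following minimal Python sketch (with a trial-division M\"{o}bius function) reproduces the values of $l_d(n)$ used in the proofs below.

```python
# Minimal sketch: evaluate the Witt formula
#   l_d(n) = (1/n) * sum_{m | n} mu(m) * d^(n/m)
# for the number of basic commutators of length n on d generators.

def mobius(m):
    """Moebius function computed by trial factorization."""
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:  # a square factor forces mu(m) = 0
                return 0
            result = -result
        p += 1
    if m > 1:               # one remaining prime factor
        result = -result
    return result

def l(d, n):
    """Number of basic commutators of length n on d generators."""
    return sum(mobius(m) * d ** (n // m) for m in range(1, n + 1) if n % m == 0) // n

# values quoted in the proofs: l_2(4), l_2(5), l_3(3), l_3(4), l_3(5)
print([l(2, 4), l(2, 5), l(3, 3), l(3, 4), l(3, 5)])  # -> [3, 6, 8, 18, 48]
```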
Combining the statements above with \cite[Lemma 1.1]{sal} and \cite{shi}, we have the following.
\begin{thm}\label{13}
Let $ F $ be a free Lie algebra on a set $ X $ with $|X|=d.$ Then $ F^c/ F^{c+i}$ is an abelian Lie algebra with a basis consisting of all basic commutators on $ X $ of lengths $ c,c+1,\ldots,c+i-1 $ for all $0 \leq i \leq c$. In particular, $ F^c/ F^{c+1}$ is an abelian Lie algebra of dimension $l_d(c),$ where $ F^{c} $ is the $c$-th term of the lower central series of $F$.
\end{thm}
The following theorem improves the result of \cite[Theorem 2.5]{ara} for $ c=2 $ when $ L $ is a non-abelian nilpotent Lie algebra.
\begin{thm}\cite[Theorem 2.14]{ni20}\label{1}
Let $ L$ be an $n$-dimensional nilpotent Lie algebra with the derived subalgebra of dimension $m~ (m \geq 1).$ Then
$\dim \mathcal{M}^{(2)}(L) \leq \frac{1}{3} (n-m)
\big{(}(n+2m-2)(n-m-1)+3(m-1)
\big{)}+3.$ If $ m=1, $ then $ \dim \mathcal{M}^{(2)}(L)= \frac{1}{3}n(n-1)(n-2)+3 $ if and only if $ L\cong H(1)\oplus A(n-3). $
\end{thm}
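Since the arguments below repeatedly compare this bound with $\frac{1}{3}n(n-1)(n-2)+3$, a quick numerical check may be reassuring; the sketch below verifies that the bound of the theorem above, evaluated at $m=1$, agrees with the equality case $\frac{1}{3}n(n-1)(n-2)+3$ over a range of $n$.

```python
# Sketch: the bound of the theorem above, as a function of n and m.
def bound(n, m):
    return (n - m) * ((n + 2 * m - 2) * (n - m - 1) + 3 * (m - 1)) / 3 + 3

# at m = 1 the bound reduces to n(n-1)(n-2)/3 + 3, matching the
# equality case L = H(1) + A(n-3) of the theorem
for n in range(3, 30):
    assert bound(n, 1) == n * (n - 1) * (n - 2) / 3 + 3
print(bound(4, 1))  # -> 11.0
```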
\section{Main results}
This section is devoted to obtaining new results on the dimension of the $ 2$-nilpotent multiplier of a non-abelian nilpotent Lie algebra. We are going to obtain the structure of all Lie algebras $ L $ such that $1\leq s_2(L)\leq 6. $\\
We need the following two easy lemmas for the next investigation.
\begin{lem}\label{2} Let $ L$ be an $n$-dimensional nilpotent Lie algebra with the derived subalgebra of dimension $m~ (m \geq 3).$ Then
$\dim \mathcal{M}^{(2)}(L) \leq \frac{1}{3} n
(n-2)(n-1)-2.$
\end{lem}
\begin{proof}
By using Theorem \ref{1} and our assumption, we have
\begin{align*}
&\dim \mathcal{M}^{(2)}(L) \leq \frac{1}{3} (n-m)
\big{(}(n+2m-2)(n-m-1)+3(m-1) \big{)}+3 \leq \\& \frac{1}{3} (n-3)
\big{(}(n+4)(n-4)+3(3-1) \big{)}+3\\& =\frac{1}{3} (n-3)
\big{(}(n+4)(n-4)+6 \big{)}+3= \frac{1}{3} (n^3-3n^2)-\frac{10n}{3}+10+3-2+2\\&=\frac{1}{3} (n^3-3n^2)-5(\frac{2n}{3}-3)-2\leq \frac{1}{3} n(n-2)(n-1)-2.
\end{align*}
The last inequality holds since $n\geq m+2\geq 5.$ The result is obtained.
\end{proof}
\begin{lem}\label{3} Let $ L$ be an $n$-dimensional nilpotent Lie algebra with the derived subalgebra of dimension $2.$ Then
$\dim \mathcal{M}^{(2)}(L) \leq \frac{1}{3} n
(n-2)(n-1)+1.$
\end{lem}
\begin{proof}
By invoking Theorem \ref{1}, we have
\begin{align*}
&\dim \mathcal{M}^{(2)}(L) \leq \frac{1}{3} (n-2)
\big{(}(n+2)(n-3)+3 \big{)}+3 = \frac{1}{3} (n-2)(n^2-n-3)+3\\& = \frac{1}{3} (n^3-3n^2)-(\frac{n}{3}-5)\leq \frac{1}{3} (n^3-3n^2)+\frac{2n}{3}+1= \frac{1}{3} n(n-2)(n-1)+1,
\end{align*}
as required (the last inequality holds since $n\geq m+2=4$).
\end{proof}
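Both lemmas can also be spot-checked numerically. Using the fact that a nilpotent Lie algebra with $\dim L^2=m\geq 2$ satisfies $n\geq m+2$, the following sketch confirms that the general bound of Theorem \ref{1} is dominated by the bounds claimed in the two lemmas on that range.

```python
# Sketch: numerically confirm that the general bound of the earlier
# theorem implies the two lemmas above on the range n >= m + 2.
def general_bound(n, m):
    return (n - m) * ((n + 2 * m - 2) * (n - m - 1) + 3 * (m - 1)) / 3 + 3

for n in range(4, 50):
    for m in range(2, n - 1):  # nilpotency forces n >= m + 2
        # m = 2: claimed bound n(n-1)(n-2)/3 + 1;  m >= 3: ... - 2
        claimed = n * (n - 1) * (n - 2) / 3 + (1 if m == 2 else -2)
        assert general_bound(n, m) <= claimed
print("checked")
```

Note that equality is attained at $n=4$, $m=2$, so the constant in the second lemma cannot be improved by this argument.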
\begin{thm}\cite[Theorem 2.13]{ni20}\label{4}
Let $ L$ be an $n$-dimensional nilpotent Lie algebra and $\dim L^2=1.$ Then $ L\cong H(k)\oplus A(n-2k-1) $ and
\begin{itemize}
\item[$ (i)$] $ \mathcal{M}^{(2)}(L)\cong A(\frac{1}{3}n(n-1)(n-2)+3),$ if $ k=1. $
\item[$ (ii)$] $ \mathcal{M}^{(2)}(L)\cong A(\frac{1}{3}n(n-1)(n-2)),$ for all $ k\geq 2 $.
\end{itemize}
\end{thm}
\begin{cor}\label{25}
There is no $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $ m\geq 1 $ such that
$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)+2$ or, equivalently, $s_2(L)=1.$
\end{cor}
\begin{proof}
The result follows from Lemmas \ref{2}, \ref{3} and Theorem \ref{4}.
\end{proof}
Using the notation and terminology of \cite{cic,Gr}, we have the following.
\begin{prop}\label{m2}
The $ 2$-nilpotent multipliers of the Lie algebras
\[L_{4,3}=\langle x_1, x_2, x_3, x_4\big{|}[x_1, x_2] = x_3, [x_1, x_3] = x_4\rangle,\]
\[L_{5,8}=\langle x_1, x_2, x_3, x_4,x_5\big{|}[x_1, x_2] = x_4, [x_1, x_3] = x_5\rangle\] and
\[L_{5,5}=\langle x_1, x_2, x_3, x_4,x_5\big{|}[x_1, x_2] = x_3, [x_1, x_3] = x_5=[x_2,x_4]\rangle\]
are abelian of dimensions $6,$ $18$ and $17,$ respectively.
\end{prop}
\begin{proof}
Let $ L\cong L_{4,3}$ and $F$ be a free Lie algebra on the set
$ \lbrace x_1, x_2\rbrace $ and $ R=\langle [x_1,x_2,x_2]\rangle+F^4.$
Since $L_{4,3}$ is of class $ 3, $ $ F^4\subseteq R $ and so \[
\mathcal{M}^{(2)}(L_{4,3}) \cong \dfrac{ \langle [x_1,x_2,x_2]\rangle+F^4/F^6}{[\langle [x_1,x_2,x_2]\rangle+F^4,F,F]/F^6}.\]
Theorem \ref{13} implies $\dim F^4/F^6=l_2(4)+l_2(5)=3+6=9, $ so the numerator $ \langle [x_1,x_2,x_2]\rangle+F^4/F^6 $ has dimension $ 1+9=10. $
It is easy to see that $ [R,F,F]/F^6=[\langle [x_1, x_2,x_2] \rangle,F,F]+F^6/F^6= \langle [x_1,x_2,x_2,x_1,x_1]+F^6,[x_1,x_2,x_2,x_2,x_1]$ $+F^6,[x_1,x_2,x_2,x_2,x_2]+F^6,[x_1,x_2,x_2,[x_1,x_2]]+F^6\rangle$
and so
$\dim [R,F,F]/F^6=4.$
It follows that $\dim \mathcal{M}^{(2)}(L_{4,3})=10-4=6.$\\ Now, let $ L\cong L_{5,8}.$
Clearly, $ L= \langle x_1, x_2, x_3\big{|}[x_2, x_3]=[x_i,x_j,x_k]=0, 1\leq i,j,k\leq 3\rangle.$
Now assume that $F$ is a free Lie algebra on the set
$ \lbrace x_1, x_2, x_3\rbrace $ and $ R=\langle [x_2, x_3] \rangle+F^3.$
Since $L_{5,8}$ is of class two, $ F^3\subseteq R $ and so $
\mathcal{M}^{(2)}(L_{5,8}) \cong \dfrac{ F^3/F^5}{[R,F,F]/F^5}.$ \newline
Theorem \ref{13} implies $\dim F^3/F^5=l_3(3)+l_3(4)=8+18=26.$
It is easy to see that $ [R,F,F]/F^5= \langle [x_2,x_3,x_1,x_1]+F^5,[x_2,x_3,x_1,x_2]+F^5,[x_2,x_3,x_1,x_3]+F^5, [x_2,x_3,x_2,x_1]+F^5, [x_2,x_3,x_2,x_2]+F^5,[x_2,x_3,x_2,x_3]+F^5,[x_2,x_3,x_3,x_1]+F^5,
[x_2,x_3,x_3,x_2]+F^5,[x_2,x_3,x_3,x_3]+F^5\rangle.$
Applying the Jacobi identity to all triples and making some calculations, we obtain
\begin{align*}
&[x_2,x_3,x_1,x_2]=-[x_1,x_2,[x_2,x_3]]+[x_2,x_3,x_2,x_1],\\
&[x_2,x_3,x_1,x_3]=-[x_1,x_3,[x_2,x_3]]+[x_2,x_3,x_3,x_1],\\
&[x_2,x_3,x_2,x_3]=-[x_2,x_3,[x_2,x_3]]+[x_2,x_3,x_3,x_2].
\end{align*}
Therefore
$[R,F,F]/F^5= \langle [x_2,x_3,x_1,x_1]+F^5,[x_2,x_3,x_2,x_1]+F^5,[x_2,x_3,x_2,x_2]+F^5,[x_2,x_3,x_3,x_1]+F^5, [x_2,x_3,x_3,x_2]+F^5,[x_2,x_3,x_3,x_3]+F^5,[x_1,x_2,[x_1,x_3]]+F^5,[x_1,x_2,[x_2,x_3]]+F^5 \rangle$
and so
$\dim [R,F,F]/F^5$ $=8.$
It follows $\dim \mathcal{M}^{(2)}(L_{5,8})=26-8=18.$\\
Let $ L\cong L_{5,5}$ and $F$ be a free Lie algebra on the set
$ \lbrace x_1, x_2,x_4\rbrace $ and
$ R=\langle [x_1,x_2,x_2],$ $[x_2,x_4,x_1],[x_2,x_4,x_2],[x_2,x_4,x_4],[x_1,x_4,x_1],[x_1,x_4,x_2],[x_2,x_4,x_4],[x_1,x_4]\rangle+F^4$ so $ R/F^6\cong F^3+\langle [x_1,x_4]\rangle/\langle [x_1,x_2,x_1]\rangle+F^6.$
Since $L_{5,5}$ is of class $ 3, $ $ F^4\subseteq R $ and so \[
\mathcal{M}^{(2)}(L_{5,5}) \cong \dfrac{ F^3/\langle [x_1,x_2,x_1]\rangle+F^6}{[\langle [x_1,x_4]\rangle,F,F]+\langle [x_1,x_2,x_1]\rangle+F^5/\langle [x_1,x_2,x_1]\rangle+F^6}.\]
Theorem \ref{13} implies $\dim F^3/F^6=l_3(3)+l_3(4)+l_3(5)=8+18+l_3(5). $
It is easy to see that $ [R,F,F]/F^6= \langle [x_1,x_4,x_1,x_1],[x_1,x_4,x_2,x_1],[x_1,x_4,x_2,x_2],$ \newline $[x_1,x_4,x_4,x_1],[x_1,x_4,x_4,x_2],[x_1,x_4,x_4,x_4],[x_1,x_2,[x_1,x_4]],[x_1,x_4,[x_2,x_4]]\rangle+F^5/F^6$
and so
$\dim [R,F,F]/F^6=8+l_3(5).$
Therefore $\dim \mathcal{M}^{(2)}(L_{5,5})=l_3(3)+l_3(4)+l_3(5)-1-l_3(5)-8=17,$ as required.
\end{proof}
A Lie algebra $L$ is called capable if $L\cong H/Z(H)$ for a Lie algebra $H$. See \cite{nin} for more information on this topic.
\begin{prop}\label{68}
Let $ L $ be a non-capable $n$-dimensional nilpotent Lie algebra of class $3$ with the derived subalgebra of dimension $2$ and $n\geq 6.$ Then $\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} (n-1)
(n-2)(n-3)+2.$
\end{prop}
\begin{proof}
By \cite[Lemma 4.5, Corollary 4.11 and Theorem 5.1]{ni60}, $ Z^{*}(L)=L^3\cong A(1) $ and so $ L/L^3\cong H(1)\oplus A(n-4).$ Since
$ L $ is not $2$-capable, we have $\dim \mathcal{M}^{(2)}(L)=\dim \mathcal{M}^{(2)}(L/L^3)-1=\frac{1}{3} (n-1)
(n-2)(n-3)+2,$ by using Theorem \ref{4} and \cite[Lemma 2.2 and Theorem 3.2]{ni20}.
\end{proof}
\begin{lem}\label{5} There is no $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $2$ such that
$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)+1$ or, equivalently, $ s_2(L)=2. $
\end{lem}
\begin{proof}
Suppose, on the contrary, that there is an $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $2$ such that
$\dim \mathcal{M}^{(2)}(L) =\frac{1}{3} n(n-2)(n-1)+1.$ Let $ B $ be a one-dimensional central ideal of $L$ contained in $L^2.$ Since $ \dim (L/B)^2=1, $ we have $ \dim \mathcal{M}^{(2)}(L/B)\leq \frac{1}{3}(n-1)(n-2)(n-3)+3$ by using Theorem \ref{4}. Now \cite[Theorem 2.4]{ara} implies that
\begin{align*}
&\frac{1}{3}(n-2)(n^2-n)+1=\frac{1}{3}n(n-1)(n-2)+1=\dim \mathcal{M}^{(2)}(L)\leq \dim \mathcal{M}^{(2)}(L) +\\&\dim L^3\cap B\leq \dim \mathcal{M}^{(2)}(L/B)+\dim (L/L^2\otimes L/L^2 \otimes B)\leq \\&\frac{1}{3}(n-1)(n-2)(n-3)+3+(n-2)^2=\frac{1}{3}(n-2)(n^2-n-3)+3.
\end{align*}
If $ n\geq 5, $ then we have a contradiction. Hence, we should have $ n=4, $ and so, by looking at all nilpotent Lie algebras $ L $ listed in \cite{Gr} and using our assumption, we obtain that $ L\cong L_{4,3}. $ Since $ s_2(L)=2,$ our assumption gives $ \dim( \mathcal{M}^{(2)}(L_{4,3}) )=\frac{1}{3}\cdot 4 (4-2)(4-1)+1=9.$ On the other hand, by Proposition \ref{m2}, we have $\dim( \mathcal{M}^{(2)}(L_{4,3}) )=6,$ a contradiction again. Hence, the result follows.
\end{proof}
Let $cl(L)$ be used to denote the nilpotency class of a Lie algebra $L.$
\begin{thm}\label{51}
There is no $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $2$ such that
$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)$ or, equivalently, $ s_2(L)=3.$
\end{thm}
\begin{proof}
Suppose, on the contrary, that there is an $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $2$ such that
$\dim \mathcal{M}^{(2)}(L) =\frac{1}{3} n(n-2)(n-1).$ Let $ B $ be a one-dimensional central ideal of $L$ contained in $L^2.$ Since $ \dim (L/B)^2=1, $ we have $ \dim \mathcal{M}^{(2)}(L/B)\leq \frac{1}{3}(n-1)(n-2)(n-3)+3,$ by using Theorem \ref{4}. Now \cite[Theorem 2.4]{ara} implies
\begin{align*}
&\frac{1}{3}(n-2)(n^2-n)=\frac{1}{3}n(n-1)(n-2)=\dim \mathcal{M}^{(2)}(L)\leq \\&\dim \mathcal{M}^{(2)}(L/B)+\dim (L/L^2\otimes L/L^2 \otimes B)-\dim L^3\cap B\leq \\&\frac{1}{3}(n-1)(n-2)(n-3)+3+(n-2)^2-\dim L^3\cap B\\&=\frac{1}{3}(n-2)(n^2-n-3)+3-\dim L^3\cap B.
\end{align*}
If $ cl(L)=2, $ then $ L^3=0 $ and so $ n\leq 5.$ If $ cl(L)=3, $ then, since $B= L^2\cap Z(L)=L^3\cong A(1), $ we have $ n\leq 4.$
Let $ cl(L)=2. $ Then our assumption and the classification of all nilpotent Lie algebras listed in \cite{Gr} show that $ L\cong L_{5,8}.$
By Proposition \ref{m2}, we have $\dim( \mathcal{M}^{(2)}(L_{5,8} ))= 18.$ It contradicts our assumption that
$\dim (\mathcal{M}^{(2)}(L_{5,8}))=20.$
Now, let $ cl(L)=3. $
By a similar way, we have $ L\cong L_{4,3}.$ Using Proposition \ref{m2}, we have $\dim( \mathcal{M}^{(2)}(L_{4,3} ))=6.$ It contradicts our assumption that
$\dim (\mathcal{M}^{(2)}(L_{4,3} ))=8.$ Hence, the supposition is false and the statement is true.
\end{proof}
\begin{thm}\label{519f}
There is no $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $m=2$ such that
$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)-1$ or, equivalently, $ s_2(L)=4.$
\end{thm}
\begin{proof}
Suppose, on the contrary, that there is an $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $2$ such that
$\dim \mathcal{M}^{(2)}(L) =\frac{1}{3} n(n-2)(n-1)-1,$ and let $ B $ be a one-dimensional central ideal of $L$ contained in $L^2.$ Since $ \dim (L/B)^2=1, $ we have $ \dim \mathcal{M}^{(2)}(L/B)\leq \frac{1}{3}(n-1)(n-2)(n-3)+3,$ by using Theorem \ref{4}. Now \cite[Theorem 2.4]{ara} implies
\begin{align*}
&\frac{1}{3}(n-2)(n^2-n)-1=\frac{1}{3}n(n-1)(n-2)-1=\dim \mathcal{M}^{(2)}(L)\leq \\& \dim \mathcal{M}^{(2)}(L/B)+\dim (L/L^2\otimes L/L^2 \otimes B)-\dim L^3\cap B\leq \\&\frac{1}{3}(n-1)(n-2)(n-3)+3+(n-2)^2-\dim L^3\cap B\\&=\frac{1}{3}(n-2)(n^2-n-3)+3-\dim L^3\cap B.
\end{align*}
If $ cl(L)=2, $ then $ L^3=0 $ and so $ n\leq 6.$ If $ cl(L)=3, $ then, since $B= L^2\cap Z(L)=L^3\cong A(1), $ we have $ n\leq 5.$
Let $ cl(L)=2. $ Then, by our assumption and the classification of all nilpotent Lie algebras listed in \cite{cic,Gr}, we obtain $ L\cong L_{5,8}, L\cong L_{5,8}\oplus A(1), L\cong L_{6,22}(\epsilon)$ or $ L\cong L_{6,7}^{(2)}(\eta). $
By Proposition \ref{m2} and \cite[Theorem 2.5]{ni20}, we have $\dim( \mathcal{M}^{(2)}(L_{5,8} ))= 18$ and $\dim( \mathcal{M}^{(2)}(L_{5,8} \oplus A(1)))=30,$ both of which contradict our assumption. Now, let $ L\cong L_{6,22}(\epsilon) $
and let
$ B $ be a one-dimensional central ideal of $L_{6,22}(\epsilon)$ contained in $L_{6,22}(\epsilon)^2.$ Since $ \dim (L_{6,22}(\epsilon)/B)^2=1 $ and $ L_{6,22}(\epsilon)/B\cong H(2), $ we have $ \dim \mathcal{M}^{(2)}(H(2))= 20,$ by using Theorem \ref{4}. Now \cite[Theorem 2.4]{ara} implies
$\dim \mathcal{M}^{(2)}(L_{6,22}(\epsilon))\leq \dim \mathcal{M}^{(2)}(H(2))+\dim (H(2)/H(2)^2\otimes H(2)/H(2)^2 \otimes B)= 20+16=36.
$ Similarly, we have $\dim \mathcal{M}^{(2)}(L_{6,7}^{(2)}(\eta))\leq 36. $ They contradict our assumption that $ \dim \mathcal{M}^{(2)}(L_{6,22}(\epsilon))=39= \dim \mathcal{M}^{(2)}(L_{6,7}^{(2)}(\eta)).$
Now let $ cl(L)=3. $
Then, by the classification of all nilpotent Lie algebras listed in \cite{Gr}, we obtain $ L\cong L_{4,3}, $ $ L\cong L_{4,3} \oplus A(1) $ or $ L\cong L_{5,5}. $ By Proposition \ref{m2} and \cite[Theorem 2.5]{ni20}, $\dim( \mathcal{M}^{(2)}(L_{4,3} ))=6,~ \dim( \mathcal{M}^{(2)}(L_{5,5} ))= 17$ and $\dim( \mathcal{M}^{(2)}(L_{4,3} \oplus A(1)))=12.$
They contradict our assumption that $ s_2(L)=4.$ Hence the result is obtained.
\end{proof}
\begin{thm}\label{kl89}
Let $ L$ be an $n$-dimensional nilpotent Lie algebra with the derived subalgebra of dimension $m\geq 1.$ Then
\begin{itemize}
\item[$ (i)$]$ \dim \mathcal{M}^{(2)}(L)= \frac{1}{3}n(n-2)(n-1)+3 $ or, equivalently, $ s_2(L)=0$ if and only if $ L\cong H(1)\oplus A(n-3). $
\item[$ (ii)$]There is no $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $m\geq 1$ such that
$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)+2$ or, equivalently, $ s_2(L)=1.$
\item[$ (iii)$]There is no $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $m\geq 1$ such that
$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)+1$ or, equivalently, $ s_2(L)=2.$
\item[$ (iv)$]There is no $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $m\geq 2$ such that
$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)$ or, equivalently, $ s_2(L)=3.$
\item[$ (v)$]There is no $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $m\geq 1$ such that
$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)-1$ or, equivalently, $ s_2(L)=4.$
\end{itemize}
\end{thm}
\begin{proof}
The result follows from Theorem \ref{1}, Lemma \ref{2}, Theorem \ref{4}, Corollary \ref{25}, Lemma \ref{5}, Theorems \ref{51} and \ref{519f}.
\end{proof}
\begin{cor}\label{845}
Let $ L$ be an $n$-dimensional nilpotent Lie algebra with the derived subalgebra of dimension $m\geq 2.$ Then $\dim \mathcal{M}^{(2)}(L)\leq \frac{1}{3} n
(n-2)(n-1)-2.$
\end{cor}
\begin{proof}
The result follows from Theorem \ref{kl89}.
\end{proof}
\begin{thm}\label{5191}
There is no $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $m\geq 3$ such that
$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n
(n-2)(n-1)-2$ or $\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n
(n-2)(n-1)-3.$
\end{thm}
\begin{proof}
Suppose, on the contrary, that there is an $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $m\geq 3$ such that
$\dim \mathcal{M}^{(2)}(L) =\frac{1}{3} n(n-2)(n-1)-2.$ Let $ B $ be a one-dimensional central ideal of $L$ contained in $L^2.$ Since $ \dim (L/B)^2\geq 2, $ we have $ \dim \mathcal{M}^{(2)}(L/B)\leq \frac{1}{3}(n-1)(n-2)(n-3)-2,$ by using Corollary \ref{845}. Now \cite[Theorem 2.4]{ara} implies
\begin{align*}
&\frac{1}{3}(n-2)(n^2-n)-2=\frac{1}{3}n(n-1)(n-2)-2=\dim \mathcal{M}^{(2)}(L)\leq \dim \mathcal{M}^{(2)}(L) +\\&\dim L^3\cap B\leq \dim \mathcal{M}^{(2)}(L/B)+\dim (L/L^2\otimes L/L^2 \otimes B)\leq \\&\frac{1}{3}(n-1)(n-2)(n-3)-2+(n-3)^2,
\end{align*}
and so $ n\leq 3,$ which contradicts $ n\geq m+2\geq 5.$ Similarly, one can see that there is no $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $m\geq 3$ such that
$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)-3.$ The result follows.
\end{proof}
\begin{thm}\label{519}
Let $ L $ be an $n$-dimensional nilpotent Lie algebra with the derived subalgebra of dimension $2.$ Then
\begin{itemize}
\item[$(i)$]$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)-2$ or, equivalently, $s_2(L)=5$ if and only if $ L\cong L_{5,8}$ or $ L\cong L_{4,3}.$
\item[$(ii)$]$\dim \mathcal{M}^{(2)}(L)=\frac{1}{3} n(n-2)(n-1)-3$ or, equivalently, $s_2(L)=6$ if and only if $ L\cong L_{5,5}.$
\end{itemize}
\end{thm}
\begin{proof}
\begin{itemize}
\item[$(i)$]
Suppose that there is an $n$-dimensional nilpotent Lie algebra $ L $ with the derived subalgebra of dimension $2$ such that
$\dim \mathcal{M}^{(2)}(L) =\frac{1}{3} n(n-2)(n-1)-2,$ and let $ B $ be a one-dimensional central ideal of $L$ contained in $L^2.$ Since $ \dim (L/B)^2=1, $ we have $ \dim \mathcal{M}^{(2)}(L/B)\leq \frac{1}{3}(n-1)(n-2)(n-3)+3$ by using Theorem \ref{4}. Now \cite[Theorem 2.4]{ara} implies
\begin{align*}
&\frac{1}{3}(n-2)(n^2-n)-2=\frac{1}{3}n(n-1)(n-2)-2=\dim \mathcal{M}^{(2)}(L)\leq \\& \dim \mathcal{M}^{(2)}(L/B)+\dim (L/L^2\otimes L/L^2 \otimes B)-\dim L^3\cap B\leq \\&\frac{1}{3}(n-1)(n-2)(n-3)+3+(n-2)^2-\dim L^3\cap B\\&=\frac{1}{3}(n-2)(n^2-n-3)+3-\dim L^3\cap B.
\end{align*}
If $ cl(L)=2, $ then $ L^3=0 $ so $ n\leq 7.$ If $ cl(L)=3, $ then since $B= L^2\cap Z(L)=L^3\cong A(1), $ $ n\leq 6.$
First assume $ cl(L)=2. $ Then, by looking at the classification of all nilpotent Lie algebras listed in \cite{cic,Gr,ni60}, we obtain $ L\cong L_{5,8}, L\cong L_{5,8}\oplus A(1), L\cong L_{5,8}\oplus A(2),L\cong L_{6,22}(\epsilon),L\cong L_{6,22}(\epsilon)\oplus A(1),$ $ L\cong L_{6,7}^{(2)}(\eta),$ $L\cong L_{6,7}^{(2)}(\eta)\oplus A(1),~L\cong L_1$ or $ L\cong L_2.$
By Proposition \ref{m2} and \cite[Theorem 2.5]{ni20}, $\dim \mathcal{M}^{(2)}(L_{5,8})= 18$, and so $\dim \mathcal{M}^{(2)}(L_{5,8} \oplus A(1))=30$ and $\dim \mathcal{M}^{(2)}(L_{5,8} \oplus A(2))=50.$ The latter two contradict our assumption that $ s_2(L)=5.$ Now, let $L\cong L_{6,22}(\epsilon)$ and let
$ B $ be a one-dimensional central ideal of $L_{6,22}(\epsilon)$ contained in $L_{6,22}(\epsilon)^2.$ Since $ \dim (L_{6,22}(\epsilon)/B)^2=1 $ and $ L_{6,22}(\epsilon)/B\cong H(2), $ we have $ \dim \mathcal{M}^{(2)}(H(2))= 20,$ by Theorem \ref{4}. Now \cite[Theorem 2.4]{ara} implies
$\dim \mathcal{M}^{(2)}(L_{6,22}(\epsilon))\leq \dim \mathcal{M}^{(2)}(H(2))+\dim (H(2)/H(2)^2\otimes H(2)/H(2)^2 \otimes B)= 20+16=36
$ and hence
$\dim \mathcal{M}^{(2)}(L_{6,22}(\epsilon)\oplus A(1)) \leq 66. $ Similarly, we have
$\dim \mathcal{M}^{(2)}(L_{6,7}^{(2)}(\eta))\leq 36 $ and hence
$\dim \mathcal{M}^{(2)}(L_{6,7}^{(2)}(\eta)\oplus A(1)) \leq 66.$ Neither case can occur, by our assumption that $ s_2(L)=5$.
Also, if $ L\cong L_1 $ or $ L\cong L_2, $ then let
$ B $ be a one-dimensional central ideal of $L$ contained in $L^2.$ Since $ \dim (L/B)^2=1 $ and $ L/B\cong H(2)\oplus A(1), $ we have $ \dim \mathcal{M}^{(2)}(H(2)\oplus A(1))= 40,$ by Theorem \ref{4}. Now \cite[Theorem 2.4]{ara} implies
$\dim \mathcal{M}^{(2)}(L)\leq \dim \mathcal{M}^{(2)}(H(2)\oplus A(1))+\dim (L/L^2\otimes L/L^2 \otimes B)= 40+25=65,$
which contradicts our assumption that $ s_2(L)=5.$ Hence we should have $ L\cong L_{5,8}.$
Now assume $ cl(L)=3. $ Then, by looking at the classification of all nilpotent Lie algebras of dimension $ 4 $ listed in \cite{Gr}, we obtain $ L\cong L_{4,3}.$ Proposition \ref{m2} implies $\dim \mathcal{M}^{(2)}(L_{4,3})=6$, so $ s_2(L_{4,3})=5. $ In a similar way, one sees that no such Lie algebra of dimension $ \dim L\geq 5 $ exists in this case.
\item[$(ii)$]By a technique similar to that used in the proof of part $(i)$,
we conclude that $L\cong L_{5,5}.$ The converse holds by Proposition \ref{m2}.
\end{itemize}
\end{proof}
\begin{thm}\label{man}
Let $ L $ be an $n$-dimensional nilpotent Lie algebra with the derived subalgebra of dimension $m\geq 1.$ Then
\begin{itemize}
\item[$(a)$] $s_2(L)=0 $ if and only if $ L\cong H(1)\oplus A(n-3).$
\item[$(b)$] There is no $n$-dimensional nilpotent Lie algebra $ L $ such that $ s_2(L)=1,2,4.$
\item[$(c)$] $s_2(L)=3 $ if and only if $ L\cong H(k)\oplus A(n-2k-1) $ for some $ k\geq 2.$
\item[$(d)$] $s_2(L)=5 $ if and only if $ L\cong L_{4,3} $ or $ L\cong L_{5,8}.$
\item[$(e)$] $s_2(L)=6 $ if and only if $ L\cong L_{5,5}.$
\end{itemize}
\end{thm}
\begin{proof}
The result follows from Theorem \ref{4}, Corollary \ref{845}, and Theorems \ref{5191} and \ref{519}.
\end{proof}
Recall from \cite{ni20} that a Lie algebra $L$ is said to be $2$-capable if $L\cong H/Z_2(H)$ for some Lie algebra $H$.
In the following corollary, we specify which of the Lie algebras with $0\leq s_2(L)\leq 6 $ are $2$-capable.
\begin{cor}
Let $ L $ be an $n$-dimensional nilpotent Lie algebra with the derived subalgebra of dimension $m\geq 1$ such that $0\leq s_2(L)\leq 6. $ Then $ L $ is $2$-capable if and only if $ L\cong H(1)\oplus A(n-3),$ $ L\cong L_{4,3}, $ $ L\cong L_{5,5} $ or $ L\cong L_{5,8}.$
\end{cor}
\begin{proof}
By Theorem \ref{man}, $L$ is isomorphic to one of the Lie algebras $ H(k)\oplus A(n-2k-1)$ for some $k\geq 1, $ $ L_{4,3}, $ $ L_{5,5} $ or $L_{5,8}.$
By invoking \cite[Theorem 3.3]{ni20}, $ H(1)\oplus A(n-3)$ is $2$-capable. Let $ L\cong L_{4,3}$ and let $ B $ be a one-dimensional central ideal of $L$ contained in $L^2.$ Since $ \dim (L/B)^2=1, $ we have $ \dim \mathcal{M}^{(2)}(L/B)\leq 3,$ by Theorem \ref{4}. Since $\dim \mathcal{M}^{(2)}(L/B) < \dim \mathcal{M}^{(2)}(L)=5,$ \cite[Theorem 3.2]{ni20} implies that $ L_{4,3} $ is $2$-capable. In a similar way, $ L_{5,5}$ and $ L_{5,8}$ are $2$-capable. Hence the result follows.
\end{proof}
\section*{Introduction}
In this paper we study Dynkin gradings on simple Lie algebras arising from nilpotent elements. Specifically, we investigate abelian subalgebras which are degree 1 homogeneous with respect to these gradings.
The study of gradings associated to nilpotent elements of simple Lie algebras is important since the finite and affine classical and quantum W-algebras are defined using these gradings. In order to study integrable systems associated to these W-algebras, it is useful to have their free field realizations. One of the ways to construct them is to use the generalized Miura map \cite{dSKV, KW}. This construction can be further improved by choosing an abelian subalgebra in the term $\g_1$ of the grading. That is why the description of such subalgebras, especially those of dimension equal to half the dimension of $\g_1$ (the maximal possible), is important.
We show that for each odd nilpotent orbit there always exists a canonically associated ``strictly odd'' nilpotent orbit, which allows us to reduce our investigations to the latter. (Strictly odd means that all Dynkin labels are either 0 or 1.) The rest of the paper is devoted to the investigation of maximal abelian subalgebras in $\g_1$ for strictly odd nilpotents in simple Lie algebras. For algebras of exceptional type we provide tables with the largest possible dimensions of such subalgebras in each case. For algebras of classical type, we find expressions for all possible maximal dimensions of abelian subalgebras in $\g_1$ and, based on that, characterize those nilpotents for which there exists such a subalgebra of half the dimension of $\g_1$.
\section{Recollections}\label{recoll}
Let us recall the nomenclature for nilpotents in a semisimple Lie algebra $\g$.
Given such a nilpotent $e$, one chooses an $\sla_2$-triple $(e,h,f)$ for it, that is, another nilpotent $f$ such that $[e,f]=h$ is semisimple and the identities $[h,e]=2e$, $[h,f]=-2f$ hold (Jacobson-Morozov theorem; see e.~g. \cite{CM}). The Dynkin grading is the eigenspace decomposition for $\ad h$:
$$
\g=\bigoplus_{j\in\mathbb Z}\g_j.
$$
Then, to $e$ one assigns a combinatorial object which determines it up to isomorphism. It is the \emph{weighted Dynkin diagram} corresponding to $e$, which is the Dynkin diagram of $\g$ with numbers assigned to each node. These numbers are the degrees $\alpha_i(h)$ of simple root vectors $e_i$ with respect to the choice of a Cartan and a Borel subalgebra in such a way that $h$ (resp. $e$) becomes an element of the corresponding Cartan (resp. Borel) subalgebra. The weighted Dynkin diagrams satisfy certain restrictions --- for example, the weights can only be equal to $0$, $1$ or $2$; moreover if $\g$ is simple of type A, then the weights are symmetric with respect to the center of the diagram, while for types B, C or D there is no weight $1$ occurring to the left of $2$.
The nilpotent is called \emph{even} if there are no $1$'s in its weighted Dynkin diagram, \emph{odd} if it is not even, and \emph{strictly odd} if there are no $2$'s.
It is clear that for even nilpotents the question about abelian subspaces in $\g_1$ is trivial since $\g_1$ is zero.
We will also need the following fact from \cite{Elashvili}:
\begin{proposition}\label{g1g0}
The degree $1$ part $\g_1$ of $\g$ with respect to the grading induced by a nilpotent $e\in\g$ is generated as a $\g_0$-module by those simple root vectors of $\g$ which have weight $1$ in the weighted Dynkin diagram corresponding to $e$.
\end{proposition}\qed
If $\g$ is a simple Lie algebra of classical type, one can assign to $e$ another combinatorial object --- a partition $\lambda_n\geqslant\lambda_{n-1}\geqslant\cdots$ which records dimensions of irreducible representations of $\sla_2$ into which the standard representation of $\g$ decomposes as a module over its subalgebra $(e,h,f)$. Alternatively, the partition consists of sizes of Jordan blocks in the Jordan decomposition of $e$ as an operator acting on the standard representation of $\g$. The partitions are restricted in a certain way, according to the type of $\g$. For type A one may have arbitrary partitions. For types B and D, all even parts must have even multiplicity, while for type C all odd parts must have even multiplicity. These conditions are sufficient as well as necessary, that is, any partition satisfying these conditions corresponds to a nilpotent orbit in a simple Lie algebra of the respective classical type.
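The multiplicity conditions just stated are easy to test mechanically. A sketch in Python (the function name is ours; it checks only the parity/multiplicity conditions and ignores the requirement that the partition sum match the dimension of the standard representation):

```python
from collections import Counter

def is_valid_orbit_partition(partition, lie_type):
    """Type A: any partition.  Types B, D: every even part must have
    even multiplicity.  Type C: every odd part must have even
    multiplicity."""
    mult = Counter(partition)
    if lie_type == "A":
        return True
    if lie_type in ("B", "D"):
        return all(m % 2 == 0 for p, m in mult.items() if p % 2 == 0)
    if lie_type == "C":
        return all(m % 2 == 0 for p, m in mult.items() if p % 2 == 1)
    raise ValueError("expected one of the classical types A, B, C, D")

# the partition 8,6,3,3,2,1,1 used below is fine for type C (its odd
# parts 3 and 1 both have multiplicity 2) but not for types B or D
print(is_valid_orbit_partition([8, 6, 3, 3, 2, 1, 1], "C"))
```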
Let us recall how one switches from a partition representing a nilpotent to its weighted Dynkin diagram (cf. \cite{SS}).
Each $\lambda_k$ in the partition represents a copy of the $\lambda_k$-dimensional irreducible representation of $\sla_2$, with eigenvalues of $h$ equal to
$$
1-\lambda_k,3-\lambda_k,...,\lambda_k-3,\lambda_k-1.
$$
To obtain the weighted Dynkin diagram one collects from each $\lambda_k$ those eigenvalues, arranges them in decreasing order, and takes consecutive differences.
For example, take the partition $8,6,3,3,2,1,1$. This gives the following eigenvalues of $h$:
\
\
$$
\begin{tabular}{rrrrrrrrrrrrrrr}
-7& &-5& &-3& &-1& &1& &3& &5& &7\\
& &-5& &-3& &-1& &1& &3& &5\\
& & & & &-2& &0& &2\\
& & & & &-2& &0& &2\\
& & & & & &-1& &1\\
& & & & & & &0\\
& & & & & & &0
\end{tabular}
$$
\
\
Arranging all numbers from this table in the decreasing order gives
$$
\begin{tabular}{cccccccccccccccccccccccc}
\phantom-7&\phantom-5&\phantom-5&\phantom-3&\phantom-3&\phantom-2&\phantom-2&\phantom-1&\phantom-1&\phantom-1&\phantom-0&\phantom-0&\phantom-0&\phantom-0&-1&-1&-1&-2&-2&-3&-3&-5&-5&-7.
\end{tabular}
$$
Taking the consecutive differences then gives
$$
\begin{tabular}{cccccccccccccccccccccccc}
\phantom-2&\phantom-0&\phantom-2&\phantom-0&\phantom-1&\phantom-0&\phantom-1&\phantom-0&\phantom-0&\phantom-1&\phantom-0&\phantom-0&\phantom-0&\phantom-1&\phantom-0&\phantom-0&\phantom-1&\phantom-0&\phantom-1&\phantom-0&\phantom-2&\phantom-0&\phantom-2
\end{tabular}
$$
which is already the weighted Dynkin diagram of the nilpotent in case of type A.
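The procedure just illustrated is a direct computation; the following sketch (function name ours) carries it out for the type A case and reproduces the sequence displayed above for the partition $8,6,3,3,2,1,1$:

```python
def dynkin_weights_type_A(partition):
    """Collect the h-eigenvalues 1-k, 3-k, ..., k-1 contributed by
    each part k, sort them in decreasing order, and take consecutive
    differences: the result is the weighted Dynkin diagram (type A)."""
    eigenvalues = []
    for part in partition:
        eigenvalues.extend(range(1 - part, part, 2))
    eigenvalues.sort(reverse=True)
    return [a - b for a, b in zip(eigenvalues, eigenvalues[1:])]

# matches the sequence 2 0 2 0 1 0 1 ... displayed above
print(dynkin_weights_type_A([8, 6, 3, 3, 2, 1, 1]))
```

For types B, C, D one would then keep only the left half of this output and treat the last node separately, as explained next.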
For types B, C, D one keeps only the left half of the obtained sequence (which obviously is centrally symmetric); more precisely, for an algebra of rank $r$, the first $r-1$ nodes of the weighted Dynkin diagram are as stated, while the rightmost node is defined in a specific way, depending on the type. We skip this part, as it will not play any r\^ole for us; details can be found in e.~g. \cite[Section 5.3]{CM}.
For example, the same partition $8,6,3,3,2,1,1$ also encodes a nilpotent orbit in a simple Lie algebra of type C, since all of its odd parts come with even multiplicities. Then, the weighted Dynkin diagram of this nilpotent is
$$
\begin{tabular}{cccccccccccccccccccccccc}
2&\,0&\,2&\,0&\,1&\,0&\,1&\,0&\,0&\,1&\,0&\,0.
\end{tabular}
$$
It is easy to see from the above procedure that the resulting weighted Dynkin diagram begins with certain sequence of $0$'s and $2$'s; if the largest part of the partition is $\lambda_n$ with multiplicity $m_n$, and the parts of the same parity following it are $\lambda_{n-1}$ with multiplicity $m_{n-1}$, $\lambda_{n-2}$ with multiplicity $m_{n-2}$, ..., $\lambda_{n-k+1}$ with multiplicity $m_{n-k+1}$, while the next part $\lambda_{n-k}$ has the opposite parity, then
the first $1$ appears at the $(km_n+(k-1)m_{n-1}+...+2m_{n-k+2}+m_{n-k+1})$-st place. For type A this pattern is mirrored symmetrically, so the diagram has weights $2$ and $0$ at both ends and weights $1$ and $0$ in the middle, while for types B, C or D it starts with a sequence of weights $0$ and $2$ followed by a sequence of weights $0$ and $1$, without any further $2$'s.
According to the above procedure for assigning to a partition a weighted Dynkin diagram, it is easy to see the following
\begin{proposition}
A nilpotent in a simple Lie algebra of classical type is even iff all parts of the corresponding partition have the same parity, odd iff parts of both parities occur, and strictly odd iff the largest part and the next largest part differ by $1$.
\end{proposition}\qed
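In code the criterion reads as follows (a sketch, with the function name ours; we read ``the next largest part'' as the next largest \emph{distinct} part, which matches the procedure above):

```python
def nilpotent_parity_class(partition):
    """Even: all parts share one parity.  Strictly odd: the two largest
    distinct parts differ by 1.  Otherwise: odd, but not strictly."""
    distinct = sorted(set(partition), reverse=True)
    if len({p % 2 for p in distinct}) == 1:
        return "even"
    if distinct[0] - distinct[1] == 1:
        return "strictly odd"
    return "odd"

print(nilpotent_parity_class([8, 6, 3, 3, 2, 1, 1]))
```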
\section{Important reduction}
Let $V$ and $U$ be finite-dimensional modules over a reductive Lie algebra $\g$ and let $V\otimes V\to U$ be a $\g$-module homomorphism. It is thus a $\g$-equivariant algebra structure on $V$ with values in $U$.
\begin{proposition}\label{basisprop}
Suppose that there exists an abelian subalgebra of dimension $d$ of the algebra $V$. Then there exists an abelian subalgebra of the algebra $V$ of dimension $d$, spanned by weight vectors of $V$.
\end{proposition}
\begin{proof}[Proof \emph{(proposed by the referee)}]
It follows from Borel's fixed point theorem. Indeed, the Cartan subgroup acts on the complete variety of $d$-dimensional abelian subalgebras of $V$, hence has a fixed point.
\end{proof}
Using this, in what follows we will assume throughout that for a simple Lie algebra of classical type we are given a basis in the standard representation consisting of weight vectors corresponding to the weights $\pm\eps_i$, $i=1,...,n$ and moreover, for the type B, to the zero weight. In the adjoint representation, accordingly, we will have a basis corresponding to ${}\pm\eps_i\pm\eps_j$, $i\ne j$ (accounting for tensor products of basis vectors of the standard representation corresponding to $\pm\eps_i$ and to $\pm\eps_j$) and moreover, for the type B only, those corresponding to $\pm\eps_i$ (accounting for tensor product of a basis vector corresponding to $\pm\eps_i$ and that corresponding to the zero weight) and, for C only, corresponding to $\pm2\eps_i$ (accounting for the tensor product of a basis vector of the standard representation corresponding to $\pm\eps_i$ with itself), $i=1,...,n$.
\begin{proposition}\label{reduction}
For any weighted Dynkin diagram corresponding to a nilpotent $e$ in a simple Lie algebra $\g$, consider a subdiagram obtained as a result of erasing all nodes with weight $2$. Consider the resulting subdiagram together with the remaining weights. Then all connected components of this subdiagram, except possibly one of them, have all weights equal to zero. Moreover this one component (if it exists) is a weighted Dynkin diagram of some strictly odd nilpotent orbit in the diagram subalgebra $\tilde\g\subseteq\g$ of the type determined by the shape of the component.
\end{proposition}
\begin{proof}
For algebras of classical type, this is proved in \ref{mainlemma} below. For algebras of type G$_2$ this is clear as all nilpotents in them are either even or strictly odd. As for exceptional Lie algebras of types E or F, the assertion can be seen to be true directly from looking at the tables {\bf F4o, E6o, E7o, E8o} given in the last section.
\end{proof}
\begin{corollary}\label{coreduction}
For any odd nilpotent $e$ in a simple Lie algebra $\g$ there exists a simple diagram subalgebra $\tilde\g\subseteq\g$ and a strictly odd nilpotent $\tilde e\in\tilde\g$ such that
$$
\g_1(e)=\tilde\g_1(\tilde e),
$$
i.~e. the degree 1 homogeneous parts for the grading on $\g$ induced by $e$ and for the grading on $\tilde\g$ induced by $\tilde e$ coincide. In particular, these degree 1 homogeneous parts have the same abelian subspaces.
\end{corollary}
\begin{proof}
Take for $\tilde\g$ the subalgebra corresponding to the connected component of the weighted Dynkin diagram of $e$ as described in \ref{reduction} above. Moreover let $\tilde e$ be any representative from the orbit corresponding to the weights on this connected component --- it exists by \ref{reduction}.
By construction this subalgebra contains all simple root vectors of degree 1, and moreover they will be precisely the root vectors of those simple roots of $\tilde\g$ which contribute to degree $1$ part in the grading induced by $\tilde e$. From \ref{g1g0} we know that $\g_1(e)$ is the $\g_0(e)$-module generated by these root vectors, while $\tilde\g_1(\tilde e)$ is the $\tilde\g_0(\tilde e)$-module generated by them.
Now observe that the only removed nodes which connect with an edge to some node in the remaining connected component have weight $2$, so that all simple root vectors corresponding to removed nodes with weight $0$ commute with every simple root vector in this component.
It follows that the $\g_0(e)$-module generated by the root vectors corresponding to weight 1 nodes is no larger than the $\tilde\g_0(\tilde e)$-module generated by them, i.~e. $\g_1(e)$ coincides with $\tilde\g_1(\tilde e)$.
\end{proof}
\begin{definition}\label{strodef}
For the orbit of an odd nilpotent in a simple Lie algebra $\g$, call its \emph{strictly odd reduction} the nilpotent orbit in the simple Lie algebra $\tilde\g$ obtained as in \ref{coreduction}.
\end{definition}
Given a nilpotent $e\in\g$ as in \ref{reduction}, one can explicitly produce a nilpotent $\tilde e\in\tilde\g$ from the orbit corresponding to its strictly odd reduction in the sense of \ref{strodef} as follows. The nilpotent $e$ clearly lies in the degree 2 subspace $\g_2$ for the corresponding grading. This subspace is a $\g_0$-module and decomposes canonically into the direct sum of its submodule $[\g_1,\g_1]$ and the submodule $\g_2(2)$ generated by the root vectors of $\g$ corresponding to simple roots with weight $2$.
\begin{proposition}\label{e1}
Given a nilpotent $e$, represent it (in a unique way) as a sum $e_1+e_2$ with $e_1\in[\g_1,\g_1]$ and $e_2\in\g_2(2)$. Then the weighted Dynkin diagram of $e_1$ in the subalgebra corresponding to the subdiagram described in \ref{reduction} is given by weights on that subdiagram.
\end{proposition}
\begin{proof}
We have a reductive group $G_0$ corresponding to $\g_0$ acting on $\g_2=[\g_1,\g_1]+\g_2(2)$, with the element $e=e_1+e_2$ having an open orbit in $\g_2$.
This means that $[\g_0,e_1+e_2]=\g_2$. But this implies that $[\g_0,e_1]=[\g_1,\g_1]$ (and similarly for $e_2$).
Hence $G_0e_1$ is an open orbit in $[\g_1,\g_1]$.
Let us consider an intermediate diagram subalgebra $\tilde\g\subseteq\g'\subseteq\g$ corresponding to the (in general disconnected) diagram, obtained by erasing the nodes with weight $2$ but leaving all other nodes together with their weights intact. It is clear from \ref{reduction} that $\g'$ is a direct sum of $\tilde\g$ and some simple algebras of type A. Hence $e_1$, viewed as an element of this direct sum, obviously has zero summands in all these components of type A.
On the other hand from \ref{reduction} we know that there exists a (strictly odd) nilpotent element $\tilde e$ in $[\g_1,\g_1]$, which has the needed Dynkin diagram.
Then just as $e_1$, we can view $\tilde e$ as a nilpotent in $\g'$, having zero summands in all remaining type A components of $\g'$. It is then clear that this nilpotent
will have the weighted Dynkin diagram obtained as in \ref{reduction}. Moreover it will have an open $G_0$-orbit in $[\g_1,\g_1]$, hence it coincides with the $G_0$-orbit of $e_1$, so $\tilde e$ and $e_1$ have the same weighted Dynkin diagram when viewed as nilpotents in $\g'$. Then obviously they will also have the same weighted Dynkin diagram with respect to $\tilde\g$ since the latter is obtained just by throwing out type A components with zero weights only.
\end{proof}
\begin{remark}
It would be convenient to supplement \ref{coreduction} with the explicit construction, from an $\sla_2$-triple $(e,f,h)$ corresponding to a given nilpotent orbit in $\g$, of an $\sla_2$-triple $(\tilde e,\tilde f,\tilde h)$ for its strictly odd reduction as in \ref{strodef}. Since $\tilde\g$ comes with a grading (determined by the weights on the corresponding subdiagram), the semisimple element $\tilde h$ of $\tilde\g$ is determined by this grading, while $\tilde f$, which we know to exist by \ref{coreduction}, is uniquely determined by $\tilde e$ and $\tilde h$. Thus having an explicit construction of $\tilde f$ would provide an alternative general proof for \ref{coreduction} that would not require separate calculations for the exceptional types. One possibility that comes to mind is to produce $\tilde f$ from $f$ in the same way as we produced $\tilde e$ from $e$ in \ref{e1} --- that is, take $\tilde f=f_1$ where $f=f_1+f_2$ is the unique decomposition of $f\in\g_{-2}$ into a sum of $f_1\in[\g_{-1},\g_{-1}]$ and $f_2\in\g_{-2}(2)$, the latter being the $\g_0$-submodule of $\g_{-2}$ generated by the root vectors corresponding to negatives of the simple roots with weights 2 on the initial weighted Dynkin diagram. However as the following example shows, in general this does not give the correct $\tilde f$.
\end{remark}
\begin{example}
For $\g$ of type D$_6$, consider the nilpotent orbit corresponding to the weighted Dynkin diagram $\fordsix{.2}{1}201011$ (and to the partition 5,3,2,2). One of the nilpotents in this orbit is the following sum of positive root vectors
$$
e:=e_{\fordsix{.1}{.5}110000}+e_{\fordsix{.1}{.5}011110}+e_{\fordsix{.1}{.5}001110}+e_{\fordsix{.1}{.5}001101}+e_{\fordsix{.1}{.5}000111}
$$
where the subscripts denote the linear combinations of simple roots that give the corresponding positive roots. The corresponding $f$ in the $\sla_2$-triple for $e$ is the following combination of negative root vectors:
$$
f:=2f_{\fordsix{.1}{.5}100000} + 4f_{\fordsix{.1}{.5}110000} + 2f_{\fordsix{.1}{.5}011110} - 2f_{\fordsix{.1}{.5}011101} + 2f_{\fordsix{.1}{.5}001110} + 4f_{\fordsix{.1}{.5}001101} + f_{\fordsix{.1}{.5}000111},
$$
with subscripts now designating linear combinations of negatives of simple roots. Thus $h=[e,f]$ determines the grading corresponding to the above weighted Dynkin diagram. It is straightforward to check that in the degree 2 subspace $\g_2$, root vectors corresponding to the combinations $\fordsix{.15}{.75}100000$ and $\fordsix{.15}{.75}110000$ of simple roots span the $\g_0$-submodule $\g_2(2)\subseteq\g_2$ generated by the root vector of $\fordsix{.15}{.75}100000$, i.~e. of the simple root with weight 2, while the remaining positive root vectors from $\g_2$ lie in $[\g_1,\g_1]$. Thus according to \ref{e1}, a strictly odd nilpotent $\tilde e=e_1$ in the diagram subalgebra $\tilde\g$ of type D$_5$ corresponding to the subdiagram obtained by omitting the node with weight 2, is obtained by omitting in the sum for $e$ the leftmost summand (the one that lies in $\g_2(2)$). Thus
$$
\tilde e=e_{\fordsix{.1}{.5}011110}+e_{\fordsix{.1}{.5}001110}+e_{\fordsix{.1}{.5}001101}+e_{\fordsix{.1}{.5}000111}.
$$
Now if we attempt to choose for the companion of $\tilde e$ in the $\sla_2$-triple the element $f_1$ obtained in the same way from $f$, i.~e. by omitting in the sum for $f$ the summands that lie in $\g_{-2}(2)$, we obtain
$$
f_1=2f_{\fordsix{.1}{.5}011110} - 2f_{\fordsix{.1}{.5}011101} + 2f_{\fordsix{.1}{.5}001110} + 4f_{\fordsix{.1}{.5}001101} + f_{\fordsix{.1}{.5}000111}.
$$
However it turns out that $[e_1,f_1]$ is not the semisimple element determining the needed grading of $\tilde\g$. In fact this element is not even semisimple; rather, it has the form
$$
[e_1,f_1]=h'-e_{\fordsix{.1}{.5}010000}
$$
with $h'$ in the Cartan subalgebra of $\tilde\g$. A correct $\tilde f$ (the one with $[\tilde e,\tilde f]=\tilde h$ an element in the Cartan subalgebra of $\tilde\g$ which gives the correct grading of $\tilde\g$) is
$$
\tilde f=2f_{\fordsix{.1}{.5}011110} - 2f_{\fordsix{.1}{.5}011101} + 2f_{\fordsix{.1}{.5}001101} + f_{\fordsix{.1}{.5}000111}
$$
and is thus not obtained from $f$ by projecting it to $[\g_{-1},\g_{-1}]$ or in any other readily apparent way.
\end{example}
Let us add that there are also many examples (even for algebras of type A) when the bracket of the projections $[e_1,f_1]$ of $e$ and $f$ is semisimple but does not induce the required grading on $\tilde\g$.
\section{Maximizing abelian subspaces}
We are interested in abelian subspaces of $\g_1$. First of all, one has the following well-known fact.
\begin{proposition}
The dimension of $\g_1$ is even, and the largest possible dimension of an abelian subspace of $\g_1$ is at most $\frac12\dim\g_1$.
\end{proposition}
\begin{proof}
Let $e$ be an element of the orbit, and choose an $\sla_2$-triple $(e,h,f)$ with $e\in\g_2$, and $h$ inducing the grading. Then one may define a bilinear form on $\g_1$ via
$$
(x,y)_f:=\langle f,[x,y]\rangle,
$$
where $\langle-,-\rangle$ is the Killing form. It is well known that the skew-symmetric form $(-,-)_f$ is nondegenerate (since $\ad f:\g_1\to\g_{-1}$ is an isomorphism), so that the dimension of $\g_1$ is indeed even. Moreover any commuting elements of $\g_1$ are orthogonal with respect to this form. Since such a form has no isotropic subspaces of dimension greater than half that of the ambient space, there are no abelian subspaces of $\g_1$ of dimension greater than $\frac12\dim\g_1$.
\end{proof}
\begin{remark}
More generally it is known that a nondegenerate skew-symmetric form exists on the homogeneous part $\g_{2i-1}$ of each odd degree --- see \cite[Proposition 1.2]{Pan}. Thus each $\dim\g_{2i-1}$ is even, too.
\end{remark}
We now consider the abelian subalgebras in $\g_1$, separately for simple algebras of classical types (right now) and for algebras of exceptional types (in Section \ref{secomp}).
Let us thus turn to the simple algebras of classical types. For the type A, it has been proved in \cite{Shoji} that a half-dimensional abelian subspace in $\g_1$ exists for any nilpotent orbit.
The central result of this section is the following characterization, in terms of the associated partitions, of those strictly odd nilpotent orbits in types B, C or D which admit an abelian subspace of half the dimension of $\g_1$. We will then deduce the general (not necessarily strictly odd) case using strictly odd reductions as in \ref{strodef}.
\begin{theorem}\label{strictheorem}
Given a strictly odd nilpotent in a simple Lie algebra $\g$ of type $\mathrm B$, $\mathrm C$ or $\mathrm D$, there is an abelian subspace of half dimension in $\g_1$ if and only if the partition corresponding to the nilpotent satisfies one of the following conditions:
\begin{itemize}
\item the largest part $\mu$ of the partition is even and there are no other even parts; moreover if $\g$ is of type $\mathrm B$ then $\mu$ has multiplicity $2$.
\item the largest part $\mu$ of the partition is odd, and either there are no other odd parts, or $\g$ is not of type $\mathrm C$, and the only other parts are $\mu-1$ with multiplicity $2$ and $1$ (with any multiplicity).
\end{itemize}
In other words, abelian subspaces of half dimension in $\g_1$ occur precisely for those strictly odd nilpotents which correspond to partitions of the following kind:
\
\begin{tabular}{rllll}
{\bf type C:}&$\left[1^{2\nu_1}3^{2\nu_3}\cdots(2k-1)^{2\nu_{2k-1}}(2k)^\nu\right]$&$(\nu_{2k-1}\nu\ne0)$,& $\left[2^{\nu_2}4^{\nu_4}\cdots(2k)^{\nu_{2k}}(2k+1)^{2\nu}\right]$&$(\nu_{2k}\nu\ne0)$;\\
{\bf type B or D:}&$\left[2^{2\nu_2}4^{2\nu_4}\cdots(2k)^{2\nu_{2k}}(2k+1)^\nu\right]$&$(\nu_{2k}\nu\ne0)$,&$\left[1^{\nu_1}(2k)^2(2k+1)^\nu\right]$&$(\nu_{2k}\nu\ne0)$;\\
{\bf type B:}&$\left[1^{\nu_1}3^{\nu_3}\cdots(2k-1)^{\nu_{2k-1}}(2k)^2\right]$&$(\nu_{2k-1}\ne0)$,\\
{\bf type D:}&$\left[1^{\nu_1}3^{\nu_3}\cdots(2k-1)^{\nu_{2k-1}}(2k)^{2\nu}\right]$&$(\nu_{2k-1}\nu\ne0)$.
\end{tabular}
\end{theorem}
\begin{proof}
It will be convenient to introduce the following notation: for a partition as above, let $m_k$ be the multiplicity of the part $k$ in it. Moreover let $S_k$ be the $h$-eigenspace with eigenvalue $k$ in the standard representation, and let $s_k$ denote the dimension of this subspace, i.~e. the multiplicity of the eigenvalue $k$ for $h$.
As recalled in Section 1 above, the adjoint representation can be identified with the symmetric square of the standard one for type C, and with its exterior square for types B and D.
Because of this, clearly the degree 1 part of the adjoint representation is the direct sum of spaces of the form $S_k^*\otimes S_l$ with $l-k=1$, $k\ge0$, and
$$
\dim\g_1=s_0s_1+s_1s_2+...
$$
Now, from the correspondence described in Section \ref{recoll}, one has
\begin{equation}\label{ses}
\begin{aligned}
s_0&=m_1+m_3+m_5+...\\
s_1&=m_2+m_4+m_6+...\\
s_2&=m_3+m_5+m_7+...\\
s_3&=m_4+m_6+m_8+...\\
...\\
s_{\mu-4}&=m_{\mu-3}+m_{\mu-1}\\
s_{\mu-3}&=m_{\mu-2}+m_\mu\\
s_{\mu-2}&=m_{\mu-1}\\
s_{\mu-1}&=m_\mu
\end{aligned}
\end{equation}
The dimension of the degree $1$ subspace $\g_1$ with respect to the corresponding grading is thus given by
$$
s_0s_1+s_1s_2+s_2s_3+s_3s_4+...=\sum_{i,j>0}im_im_{i+2j-1}=m_1m_2+2m_2m_3+m_1m_4+3m_3m_4+2m_2m_5+...
$$
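Both the multiplicities $s_k$ of \eqref{ses} and this dimension are immediate to compute from a partition; a sketch (function names ours):

```python
def eigenvalue_multiplicities(partition):
    """s_k = m_{k+1} + m_{k+3} + ... for k = 0, ..., mu-1, where
    m_j is the multiplicity of the part j and mu the largest part."""
    mu = max(partition)
    m = [partition.count(k) for k in range(mu + 1)]
    return [sum(m[k + 1::2]) for k in range(mu)]

def dim_g1(partition):
    """dim g_1 = s_0 s_1 + s_1 s_2 + ..."""
    s = eigenvalue_multiplicities(partition)
    return sum(s[k] * s[k + 1] for k in range(len(s) - 1))

# the partition 8,6,3,3,2,1,1 from Section 1 gives s = [4,3,2,2,0,2,0,1]
print(dim_g1([8, 6, 3, 3, 2, 1, 1]))
```

In every example the result is even, as the proposition above requires.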
Given an abelian subspace in $\g_1$, using \ref{basisprop} we may assume it has a basis consisting of root vectors. In particular, each of our basis vectors is situated in one of the direct summands $S_k^*\otimes S_{k+1}$.
Note that any elements in $S_{k-1}^*\otimes S_k$ and $S_l^*\otimes S_{l+1}$ commute for $l>k$; whereas when $l=k$, we will obtain a non-commuting pair as soon as our basis contains any elements of the form $x\otimes y\in S_{k-1}^*\otimes S_k$ and $y'\otimes z\in S_k^*\otimes S_{k+1}$ with $y$ and $y'$ mutually dual basis elements. We are thus forced to choose non-intersecting subsets $X_k$, $Y_k$ in the weight vector bases of $S_k$ and include in the basis of the abelian subspace only those $x\otimes y$ which satisfy $x\in X_{k-1}$ and $y\in Y_k$. This does not concern $k=\mu-1$, where $\mu-1$ is the maximal occurring eigenvalue of $h$ ($\mu$, as above, is the largest part of the corresponding partition): in $S_{\mu-1}$ we may choose arbitrary subset of the basis without affecting abelianness; and since we are interested in maximal abelian subspaces, we choose the whole basis of $S_{\mu-1}$.
Moreover any such choice of non-intersecting subsets $X_k$, $Y_k$ of bases of $S_k$ gives indeed an abelian subspace, and we may further assume that $X_k\cup Y_k$ is the whole basis, since otherwise our abelian subspace would not be maximal.
The case $k=0$ is special, and depends on the type considered.
Namely, it may happen that two basis vectors, both from $S_0^*\otimes S_1$, do not commute. Two basis elements of this subspace, being the tensor products of basis vectors corresponding to $\pm\eps_i^{(0)}+\eps_j^{(1)}$ and $\pm\eps_k^{(0)}+\eps_l^{(1)}$ respectively, will commute if and only if the sum $\pm\eps_i^{(0)}+\eps_j^{(1)}\pm\eps_k^{(0)}+\eps_l^{(1)}$ is not a root. This implies that the root vector basis of an abelian subspace in $\g_1$ cannot contain root vectors corresponding to both $\pm\eps_i^{(0)}+\eps_j^{(1)}$ and $\mp\eps_i^{(0)}+\eps_k^{(1)}$ for $j\ne k$ (since the sum of these is the root $\eps_j^{(1)}+\eps_k^{(1)}$).
This is the only restriction on $S_0^*\otimes S_1$ for type D. For type C, there is an additional restriction that an abelian subspace of $\g_1$ cannot contain root vectors corresponding to both $\pm\eps_i^{(0)}+\eps_j^{(1)}$ and $\mp\eps_i^{(0)}+\eps_j^{(1)}$ (since the sum of these is the root $2\eps_j^{(1)}$). For type B, an additional restriction is that an abelian subspace of $\g_1$ cannot contain root vectors corresponding to both $(0+)\eps_j^{(1)}$ and $(0+)\eps_k^{(1)}$ for $j\ne k$ (since the sum of these is the root $\eps_j^{(1)}+\eps_k^{(1)}$).
It follows that to obtain a maximal abelian subspace of $\g_1$, in addition to splitting the weight vector basis of $S_1$ into nonintersecting subsets ($X_1$ and its complement $Y_1$), for any weights $\eps^{(1)}_j$ and $\eps^{(1)}_k$ corresponding to a weight basis vector in $X_1$ we have to pick in $S_0^*\otimes S_1$ the root basis elements corresponding either only to $\eps_i^{(0)}+\eps^{(1)}_j$ and $\eps_i^{(0)}+\eps^{(1)}_k$ or only to $-\eps_i^{(0)}+\eps^{(1)}_j$ and $-\eps_i^{(0)}+\eps^{(1)}_k$ for all possible $i$, but not both. Thus the maximal possible number of basis vectors from $S_0^*\otimes S_1$ which we may include in an abelian subspace of $\g_1$ is either $\left[\frac{s_0}2\right]x_1$ (if we choose either only $\eps_i^{(0)}+\eps^{(1)}_j$ or only $-\eps_i^{(0)}+\eps^{(1)}_j$ for all possible $i$ and $j$) or $s_0$, provided we are not in type C and moreover $X_1$ consists of a single element (corresponding to some $\eps^{(1)}_j$, and we choose root basis vectors corresponding to $\pm\eps_i^{(0)}+\eps^{(1)}_j$ for all possible $i$). In addition, if we are in type B, we may add one more root basis vector $v_0\otimes v_1$ with $v_0$ a weight basis vector with zero weight and $v_1$ some weight basis vector from $X_1$.
Thus for the maximal dimension of the piece of an abelian subspace corresponding to $S_0^*\otimes S_1$ we have the following possibilities:
\begin{center}
\begin{tabular}{c|ccc}
&B&C&D\\
\hline
$x_1=0$&0&0&0\\
$x_1=1$&$s_0$&$\frac{s_0}2$&$s_0$\\
$x_1>1$&$\frac{s_0-1}2x_1+1$&$\frac{s_0}2x_1$&$\frac{s_0}2x_1$
\end{tabular}
\end{center}
This results in the following possibilities for the maximal dimension of an abelian subspace in $\g_1$:
\begin{equation}\label{aposs}
\begin{array}{rl}
\frac{s_0-1}2x_1+1+(s_1-x_1)x_2+(s_2-x_2)x_3+...+(s_{\mu-3}-x_{\mu-3})x_{\mu-2}+(s_{\mu-2}-x_{\mu-2})s_{\mu-1}&\text{(for type B);}\\
&\\
\frac{s_0}2x_1+(s_1-x_1)x_2+(s_2-x_2)x_3+...+(s_{\mu-3}-x_{\mu-3})x_{\mu-2}+(s_{\mu-2}-x_{\mu-2})s_{\mu-1}&\text{(for type C or D);}\\
\\
s_0+(s_1-1)x_2+(s_2-x_2)x_3+...+(s_{\mu-3}-x_{\mu-3})x_{\mu-2}+(s_{\mu-2}-x_{\mu-2})s_{\mu-1}&\text{(for type B or D).}
\end{array}
\end{equation}
Here $\mu$ denotes the largest part of the partition.
We thus want to maximize each of these quantities for $0\le x_k\le s_k$, $k=1,...,\mu-2$.
Note that each of them is linear in all of the $x_k$ separately, hence any possible maxima are attained when every $x_k$ is either $0$ or $s_k$. In fact, more is true:
\begin{lemma}
An abelian subspace of maximal possible dimension in $\g_1$ can be obtained either with $x_{2j-1}=0$, $x_{2j}=s_{2j}$ or with $x_{2j-1}=s_{2j-1}$, $x_{2j}=0$ for all $j$.
\end{lemma}
\begin{proof}
Looking at the subsum
$$
...+(s_{k-2}-x_{k-2})x_{k-1}+(s_{k-1}-x_{k-1})x_k+(s_k-x_k)x_{k+1}+...
$$
determining the dimension of the abelian subspace, it is easy to see that each of the following changes:
$$
\begin{array}{llcll}
x_{k-1}=0,&x_k=0&\mapsto&x_{k-1}=0,&x_k=s_k,\\
x_{k-1}=s_{k-1},&x_k=s_k&\mapsto&x_{k-1}=s_{k-1},&x_k=0
\end{array}
$$
does not decrease the dimension of the abelian subspace.
Indeed, these changes do not affect any other summands except those in the above subsum. The first change transforms
$$
...+(s_{k-2}-x_{k-2})0+(s_{k-1}-0)0+(s_k-0)x_{k+1}+...\mapsto...+(s_{k-2}-x_{k-2})0+(s_{k-1}-0)s_k+0x_{k+1}+...,
$$
i.~e. changes the sum by the amount equal to the change from $s_kx_{k+1}$ to $s_{k-1}s_k$. But $x_{k+1}\le s_{k+1}$, and $s_{k+1}\le s_{k-1}$ by \eqref{ses}, so that indeed the sum does not decrease.
Similarly, the second change transforms
$$
...+(s_{k-2}-x_{k-2})s_{k-1}+(s_{k-1}-s_{k-1})s_k+(s_k-s_k)x_{k+1}+...\mapsto...+(s_{k-2}-x_{k-2})s_{k-1}+(s_{k-1}-s_{k-1})0+(s_k-0)x_{k+1}+...,
$$
i.~e. changes the sum by the amount equal to the change from $0$ to $s_kx_{k+1}$, which is obviously a nondecreasing change.
Now using the above changes we may arrive at one of the needed choices. For simplicity, let us encode a given choice of $x$'s by a sequence of zeroes and ones (at the $k$th place of the sequence stands zero if $x_k=0$ and one if $x_k=s_k$). We are allowed to perform ``local transformations'' of the kind $\cdots00\cdots\mapsto\cdots01\cdots$ and $\cdots11\cdots\mapsto\cdots10\cdots$. Using one of these transformations we can always shift the leftmost occurrence of two consecutive identical symbols to the right: if this leftmost occurrence is $\cdots11\cdots$ we change it to $\cdots10\cdots$, and if it is $\cdots00\cdots$ we change it to $\cdots01\cdots$; in either case the leftmost occurrence of consecutive identical symbols either disappears or shifts to the right by at least one position. Thus, repeatedly applying the appropriate transformation to the leftmost occurrence of consecutive identical symbols, we inevitably arrive either at $10101\cdots$ or at $01010\cdots$.
\end{proof}
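As a sanity check, the lemma and the endpoint reduction can be verified numerically for small multiplicity vectors. The following Python sketch (function names are ours; it implements the type C/D expression from \eqref{aposs} and assumes $s_0$ even) brute-forces all choices $0\le x_k\le s_k$ and compares the result with the two alternating patterns:

```python
from itertools import product

def dim_CD(s, x):
    # Type C/D count from (aposs):
    # s_0/2*x_1 + (s_1-x_1)x_2 + ... + (s_{mu-2}-x_{mu-2})*s_{mu-1},
    # with s = (s_0,...,s_{mu-1}) and x = (x_1,...,x_{mu-2}); s_0 assumed even.
    mu = len(s)
    val = s[0] // 2 * x[0]
    for k in range(1, mu - 2):
        val += (s[k] - x[k - 1]) * x[k]
    return val + (s[mu - 2] - x[mu - 3]) * s[mu - 1]

def max_dim(s):
    # Brute force over all choices 0 <= x_k <= s_k.
    return max(dim_CD(s, x) for x in product(*[range(v + 1) for v in s[1:-1]]))

def alternating(s, start):
    # start = 1 encodes the pattern 1010... (x_1 = s_1, x_2 = 0, ...),
    # start = 0 encodes 0101... (x_1 = 0, x_2 = s_2, ...).
    x = tuple(s[k] if k % 2 == start % 2 else 0 for k in range(1, len(s) - 1))
    return dim_CD(s, x)
```

The same check can be run for the other two expressions in \eqref{aposs} after adjusting the first and last terms.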
Applying this in \eqref{aposs} we obtain that the maximal possible dimension of an abelian subspace in $\g_1$ can only be equal to one of the following six sums:
$$
\begin{array}{r|rl}
\frac{s_0-1}2s_1+1+s_2s_3+s_4s_5+...&s_1s_2+s_3s_4+s_5s_6+...&\text{(for type B)}\\
\frac{s_0}2s_1+s_2s_3+s_4s_5+...&s_1s_2+s_3s_4+s_5s_6+...&\text{(for types C, D)}\\
s_0+s_2s_3+s_4s_5+...&s_0+(s_1-1)s_2+s_3s_4+s_5s_6+...&\text{(for types B, D)}
\end{array}
$$
Finding out whether there is an abelian subspace of half the dimension in $\g_1$ thus amounts to checking whether subtracting one of these sums, doubled, from the dimension of $\g_1$, i.~e. from $s_0s_1+s_1s_2+...$, gives zero, i.~e. whether one of the sums
$$
\begin{array}{lr|lrl}
s_0s_1+s_1s_2+...&-2(\frac{s_0-1}2s_1+1+s_2s_3+s_4s_5+...)
&s_0s_1+s_1s_2+...&-2(s_1s_2+s_3s_4+s_5s_6+...)
&\text{(B)}\\
s_0s_1+s_1s_2+...&-2(\frac{s_0}2s_1+s_2s_3+s_4s_5+...)
&s_0s_1+s_1s_2+...&-2(s_1s_2+s_3s_4+s_5s_6+...)
&\text{(C, D)}\\
s_0s_1+s_1s_2+...&-2(s_0+s_2s_3+s_4s_5+...)
&s_0s_1+s_1s_2+...&-2(s_0+(s_1-1)s_2+s_3s_4+s_5s_6+...)
&\text{(B, D)}
\end{array}
$$
is zero.
Simplifying, we obtain respectively
$$
\begin{array}{rl|rll}
s_1-2+&s_1s_2-s_2s_3+s_3s_4-s_4s_5+s_5s_6-...
&&s_0s_1-s_1s_2+s_2s_3-s_3s_4+s_4s_5-...
&\text{(B)}\\
&s_1s_2-s_2s_3+s_3s_4-s_4s_5+...
&&s_0s_1-s_1s_2+s_2s_3-s_3s_4+...
&\text{(C, D)}\\
-2s_0+s_0s_1+&s_1s_2-s_2s_3+s_3s_4-...
&-2s_0+2s_2+&s_0s_1-s_1s_2+s_2s_3-s_3s_4+...
&\text{(B, D)}
\end{array}
$$
Rewriting this further as
$$
\begin{array}{rl|rll}
s_1-2+&(s_1-s_3)s_2+(s_3-s_5)s_4+(s_5-s_7)s_6+...
&(s_0-s_2)s_1+&(s_2-s_4)s_3+(s_4-s_6)s_5+...
&\text{(B)}\\
&(s_1-s_3)s_2+(s_3-s_5)s_4+(s_5-s_7)s_6+...
&(s_0-s_2)s_1+&(s_2-s_4)s_3+(s_4-s_6)s_5+...
&\text{(C, D)}\\
s_0(s_1-2)+&(s_1-s_3)s_2+(s_3-s_5)s_4+...
&(s_0-s_2)(s_1-2)+&(s_2-s_4)s_3+(s_4-s_6)s_5+...
&\text{(B, D)}
\end{array}
$$
and taking \eqref{ses} into account this can be rewritten as
$$
\begin{array}{rl|rll}
s_1-2+&m_2s_2+m_4s_4+m_6s_6+...
&m_1s_1+&m_3s_3+m_5s_5+...
&\text{(B)}\\
&m_2s_2+m_4s_4+m_6s_6+...
&m_1s_1+&m_3s_3+m_5s_5+...
&\text{(C, D)}\\
s_0(s_1-2)+&m_2s_2+m_4s_4+...
&m_1(s_1-2)+&m_3s_3+m_5s_5+...
&\text{(B, D)}
\end{array}
$$
Let us now assume that our nilpotent is strictly odd, which in terms of the corresponding partition means that $m_{\mu-1}>0$ (here, as before, $\mu$ is the largest part of the partition). This then implies that all the numbers $s_i$ are nonzero. Thus to obtain an abelian subspace of half the dimension in $\g_1$ we have the following possibilities:
$$
\begin{array}{rl|rll}
\text{$s_1=2$ and }&\text{$m_{2k}=0$ for $2k<\mu$}
&&\text{$m_{2k-1}=0$ for $2k-1<\mu$}
&\text{(B)}\\
&\text{$m_{2k}=0$ for $2k<\mu$}
&&\text{$m_{2k-1}=0$ for $2k-1<\mu$}
&\text{(C, D)}\\
\text{$s_1=2$ and }&\text{$m_{2k}=0$ for $2k<\mu$}
&\text{$m_1=0$ or $s_1=2$, and }&\text{$m_{2k-1}=0$ for $1<2k-1<\mu$}
&\text{(B, D)}
\end{array}
$$
We now make the following observations, according to the parity of $\mu$:
\begin{itemize}
\item if $\mu$ is odd, then the cases in the first column are not realizable, since they require that the partition has no even parts, while by strict oddity both $m_{\mu-1}$ and $m_\mu$ must be nonzero;
\item if $\mu$ is even, the cases in the second column are not realizable by exactly the same reason.
\end{itemize}
Taking this into account, we are left with the following cases: for $\mu$ even,
$$
\begin{array}{l|ll}
\text{$m_2=m_4=...=m_{\mu-2}=0$, $m_{\mu-1}>0$, $m_\mu=2$}
&\text{---}
&\text{(B)}\\
\text{$m_2=m_4=...=m_{\mu-2}=0$, $m_{\mu-1}>0$, $m_\mu>0$}
&\text{---}
&\text{(C, D)}\\
\text{$m_2=m_4=...=m_{\mu-2}=0$, $m_{\mu-1}>0$, $m_\mu=2$}
&\text{---}
&\text{(B, D)}
\end{array}
$$
and for $\mu$ odd,
$$
\begin{array}{l|ll}
\text{---}
&\text{$m_1=m_3=...=m_{\mu-2}=0$, $m_{\mu-1}>0$, $m_\mu>0$}
&\text{(B)}\\
\text{---}
&\text{$m_1=m_3=...=m_{\mu-2}=0$, $m_{\mu-1}>0$, $m_\mu>0$}
&\text{(C, D)}\\
\text{---}
&\text{$m_3=m_5=...=m_{\mu-2}=0$, $m_\mu>0$ and either $m_1=0$ or $m_2=m_4=...=m_{\mu-3}=0$ and $m_{\mu-1}=2$}
&\text{(B, D)}
\end{array}
$$
Let us also observe the following:
\begin{itemize}
\item for $\mu$ even, the first case is subsumed by the third one;
\item for $\mu$ even, the third case is subsumed by the second one for type D;
\item for $\mu$ odd, the subcase $m_1=0$ of the third case is subsumed by the first one for type B, and by the second one for type D.
\end{itemize}
Taking all of the above into account gives the partitions as described.
\end{proof}
\begin{remark}
One can formulate the theorem more transparently as follows: in the case of type C there is exactly one parity change along the partition, while in the cases B and D there may be either one or two parity changes; if there are two, then the only parts are $1$, $\mu-1$ and $\mu$, and $\mu-1$ must have multiplicity $2$. For type B there is one more restriction when there is only one parity change: if the largest part is even, its multiplicity must be $2$.
\end{remark}
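In this formulation, counting parity changes along a partition is elementary; a small Python helper (our naming, for illustration only) makes the notion precise:

```python
def parity_changes(partition):
    """Number of parity changes along the distinct parts of a partition,
    e.g. [3, 2, 2, 1] has two (at 1 -> 2 and at 2 -> 3)."""
    parts = sorted(set(partition))
    # (a + b) % 2 is 1 exactly when consecutive distinct parts have opposite parity
    return sum((a + b) % 2 for a, b in zip(parts, parts[1:]))
```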
We now turn to the not necessarily strictly odd nilpotent orbits, using the strictly odd reduction from \ref{strodef}. For classical types, its reformulation in terms of partitions is as follows.
\begin{lemma}\label{mainlemma}
Let $\g$ be a simple Lie algebra of classical type, and let $e$ be a nilpotent element of $\g$ corresponding to the partition $[...k^{m_k}\ell^{m_\ell}...n^{m_n}]$, with $...<k<\ell<...<n$ such that $k$ and $\ell$ are of opposite parity while all the larger parts $j$ (those with $\ell\le j\le n$) are of the same parity.
Then the partition $[...k^{m_k}(k+1)^{m_\ell+...+m_n}]$ defines a strictly odd nilpotent in a Lie algebra of the same type, and corresponds to the strictly odd reduction of $e$, as defined in \ref{strodef}.
\end{lemma}
\begin{proof}
Let us begin by noting that the modified partition is indeed suitable for the same type: if this requires that all parts of the same parity as $k$ have even multiplicity, then we have not touched them; while if this requires that all parts of the same parity as $k+1$ have even multiplicity, then $\ell$ and all larger parts are of the same parity as $k+1$, so each of the multiplicities $m_\ell$, ..., $m_n$ was even, hence their sum is even too, and we indeed stay with the same type. Moreover the corresponding nilpotent is strictly odd since its largest parts are $k$ and $k+1$.
Now, following the correspondence between partitions and weighted Dynkin diagrams described above, it is easy to see that passing from the original partition to the modified one corresponds to the following modification of the weighted Dynkin diagram: one removes nodes (and weights) from the left until no more $2$'s are left; for types B, C, D that is all, while for type A one has to perform the removals symmetrically from the right end too.
But this precisely means to leave the connected component of the weighted Dynkin diagram that contains nonzero weights, as described in \ref{reduction} above, so that we indeed obtain the strictly odd reduction of $e$.
\end{proof}
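On the level of partitions, the reduction described in the lemma admits a short computational sketch (Python, our naming; it relies on the lemma's hypothesis that all parts above the topmost parity change share one parity):

```python
def strictly_odd_reduction(p):
    """Replace the maximal run of largest parts sharing one parity by the
    same number of copies of k+1, where k is the next (opposite-parity) part."""
    p = sorted(p, reverse=True)
    par = p[0] % 2
    i = 0
    while i < len(p) and p[i] % 2 == par:
        i += 1
    if i == len(p):
        return p  # no parity change at all; the lemma's hypothesis is not met
    k = p[i]
    return [k + 1] * i + p[i:]
```

For instance, $[5,5,3,2,2,1]$ (with $k=2$, $\ell=3$) reduces to $[3,3,3,2,2,1]$, whose largest parts $3$ and $2$ indeed define a strictly odd nilpotent.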
\begin{corollary}\label{maincorollary}
Given a nilpotent in a simple Lie algebra $\g$ of classical type $\mathrm B$, $\mathrm C$ or $\mathrm D$, there is an abelian subspace of half dimension in $\g_1$ if and only if the partition corresponding to the nilpotent satisfies the following conditions:
\begin{tabular}{rl}
type $\mathrm C$:&there is no more than one parity change along the partition;\\
types $\mathrm B$ and $\mathrm D$:&there are no more than two parity changes and, if there is at least one parity change then
\end{tabular}
\begin{itemize}
\item if the largest part of the partition is even, then there is only one parity change, and in the $\mathrm B$ case moreover it must be the unique even part and must have multiplicity $2$;
\item if there are two parity changes, then the largest part of the partition is odd, there is a unique even part, it has multiplicity $2$, and all smaller parts are equal to $1$.
\end{itemize}
Thus, abelian subspaces of half dimension in $\g_1$ occur precisely for nilpotents corresponding to partitions of one of the following kind (with $k\le\ell$ throughout):
\begin{tabular}{rl}
{\bf any type:}&$\left[\cdots(2k-2)^{\nu_{2k-2}}(2k)^{\nu_{2k}}(2\ell+1)^{\nu_{2\ell+1}}(2\ell+3)^{\nu_{2\ell+3}}\cdots\right]$;\\
{\bf type C or D:}&$\left[\cdots(2k-3)^{\nu_{2k-3}}(2k-1)^{\nu_{2k-1}}(2\ell)^{\nu_{2\ell}}(2\ell+2)^{\nu_{2\ell+2}}\cdots\right]$;\\
{\bf type B or D:}&$\left[1^{\nu_1}(2k)^2(2\ell+1)^{\nu_{2\ell+1}}(2\ell+3)^{\nu_{2\ell+3}}\cdots\right]$;\\
{\bf type B:}&$\left[\cdots(2k-3)^{\nu_{2k-3}}(2k-1)^{\nu_{2k-1}}(2\ell)^2\right]$.
\end{tabular}
\end{corollary}
\begin{proof}
This follows from \ref{mainlemma}. Indeed, the latter shows that $\g_1(e)$ for a nilpotent $e$ corresponding to some partition has an abelian subspace of half dimension if and only if $\tilde\g_1(\tilde e)$, as described in \ref{coreduction}, has such a subspace; and this happens if and only if the partition modified as in \ref{mainlemma} satisfies the conditions of \ref{strictheorem}.
It remains to note that a partition is of the kind indicated if and only if the partition obtained from it as in \ref{mainlemma} satisfies the conditions of \ref{strictheorem}.
\end{proof}
\section{Computations}\label{secomp}
It thus remains to determine which of the strictly odd nilpotent orbits in simple Lie algebras of exceptional type possess an abelian subspace of half dimension in degree $1$.
For that, we used the computer algebra system {\tt GAP}. The package {\tt SLA} by Willem A. de Graaf, included in this system, allows one to compute with nilpotent orbits of arbitrary semisimple Lie algebras. In particular, one obtains canonical bases consisting of root vectors for the homogeneous subspaces of all degrees in the grading of the Lie algebra induced by a nilpotent element.
Using \ref{basisprop}, we can determine abelian subspaces in $\g_1$ as follows. Let $B$ be the basis of $\g_1$ made from positive root vectors.
Let us construct a graph with the set of vertices $B$, where two vertices $e_\alpha$ and $e_\beta$ are connected with an edge if and only if they do not commute,
that is, if and only if $\alpha+\beta$ is a root. Then by \ref{basisprop}, $\g_1$ possesses an abelian subspace of dimension $d$ if and only if the basis
consisting of root vectors has a subset of cardinality $d$ consisting of pairwise commuting root vectors.
Clearly this is equivalent to the corresponding graph having an \emph{independent set} of cardinality $d$ --- that is, a subset consisting of $d$ vertices such that no two of these vertices are connected by an edge. Hence describing all possible dimensions of abelian subspaces in $\g_1$ reduces to listing all possible cardinalities of
independent subsets in the corresponding graph.
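For small graphs, this reduction can be illustrated directly by exhaustive search; the following Python sketch (a brute-force stand-in for the {\tt GRAPE} computations used below) returns a largest independent set:

```python
from itertools import combinations

def max_independent_set(n, edges):
    """Brute-force search for a largest independent set in a graph on
    vertices 0..n-1; exponential, so suitable only for small graphs."""
    edge_set = {frozenset(e) for e in edges}
    # try subset sizes from n downwards; the first independent subset found is maximal
    for size in range(n, 0, -1):
        for sub in combinations(range(n), size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(sub, 2)):
                return set(sub)
    return set()
```

E.~g. for a $5$-cycle it returns an independent set of cardinality $2$, in accordance with the independence number of $C_5$.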
There is another package {\tt GRAPE} by Leonard H. Soicher in {\tt GAP} which can be used to list all independent sets in a finite graph. Using this package we determine independent sets of maximal possible cardinality in the graph corresponding to the nilpotent orbit.
The results are given in the tables below. {\tt GAP} code for computing maximal dimensions of abelian subspaces in $\g_1$ for arbitrary semisimple Lie algebras is available at \cite{progaddress}. In fact, the program can list all subsets of any given cardinality of pairwise commuting elements in the root vector basis.
\
As an illustration, here are two cases for E$_6$.
\begin{examples}
The nilpotent orbit with the weighted Dynkin diagram \EDynkin{1, 1, 0, 0, 0, 1}{1} has $\g_1$ of dimension $14$.
The corresponding graph with 14 vertices and edges connecting vertices corresponding to non-commuting root vectors in $\g_1$ looks as follows:
\vfill
\begin{center}
\includegraphics[scale=.4]{E6a.pdf}
\end{center}
\vfill
This graph has independent sets with $6$ vertices, e.~g. $\{2,5,8,9,12,14\}$, but any subset of more than $6$ vertices contains a pair of vertices connected by an edge; thus for this nilpotent orbit the maximal dimension of an abelian subspace equals $6$.
\
Another orbit in E$_6$, with the diagram \EDynkin{1, 0, 1, 0, 1, 0}{1}, has $\g_1$ of dimension $10$ corresponding to the graph
\vfill
\begin{center}
\includegraphics[scale=.4]{E6b.pdf}
\end{center}
\vfill
\noindent with 10 vertices. It is easy to find in this graph an independent subset with five elements -- e.~g. $\{1,2,3,4,8\}$.
Thus the orbit in the first example does not possess an abelian subspace of half dimension in $\g_1$, while that in the second one does.
\end{examples}
\section{Tables}
\begin{center}
\def2.2{2.2}
\newlength{\mypar}\setlength{\mypar}{7.67em}
\
\vfill
\begin{tabular}[t]{l|c|c}
\multicolumn{3}{c}{\bf Table G2s}\\
\multicolumn{3}{c}{\parbox[c]{16em}{Strictly odd nilpotent orbits in G$_2$, all with half-abelian $\g_1$:}}\\[2ex]
\hline
Name&Diagram&$\dim\g_1$\\
\hline
&&\\[-5.5ex]
A$_1$&\GDynkin{0, 1}&4\\
$\widetilde{\mathrm A_1}$&\GDynkin{1, 0}&2\\
\end{tabular}
\vfill
\begin{tabular}[t]{l|c|c||l|c|c}
\multicolumn{6}{c}{\bf Table F4s}\\
\multicolumn{6}{c}{Strictly odd nilpotent orbits in F$_4$}\\
\multicolumn{3}{c||}{with half-abelian $\g_1$:}&\multicolumn{3}{c}{without half-abelian $\g_1$:}\\
\hline
Name&Diagram&$\dim\g_1$&Name&Diagram&\parbox[c][4em]{\mypar}{\raggedright$\dim\g_1$ (largest dimension of an abelian subspace)}\\
\hline
&&&&\\[-5.5ex]
A$_1$&\FDynkins{0, 1, 0, 0}&14&$\widetilde{\mathrm A_1}$&\FDynkins{1, 0, 0, 0}&8 (2)\\
A$_1$ $+$ $\widetilde{\mathrm A_1}$&\FDynkins{0, 0, 0, 1}&12&A$_2$ $+$ $\widetilde{\mathrm A_1}$&\FDynkins{0, 0, 1, 0}&6 (2)\\
C$_3(a_1)$&\FDynkins{0, 1, 1, 0}&6&$\widetilde{\mathrm A_2}$ $+$ A$_1$&\FDynkins{1, 0, 0, 1}&8 (3)
\end{tabular}
\vfill
\begin{tabular}{l|c|c||l|c|c}
\multicolumn{6}{c}{\bf Table E6s}\\
\multicolumn{6}{c}{Strictly odd nilpotent orbits in E$_6$}\\
\multicolumn{3}{c||}{with half-abelian $\g_1$:}&\multicolumn{3}{c}{without half-abelian $\g_1$:}\\
\hline
Name&Diagram&$\dim\g_1$&Name&Diagram&\parbox[c][4em]{\mypar}{\raggedright$\dim\g_1$ (largest dimension of an abelian subspace)}\\
\hline
&&&&\\[-5.5ex]
A$_1$&\EDynkin{1, 0, 0, 0, 0, 0}{1}&20&A$_2$ $+$ A$_1$&\EDynkin{1, 1, 0, 0, 0, 1}{1}&14 (6)\\
2A$_1$&\EDynkin{0, 1, 0, 0, 0, 1}{1}&16&2A$_2$ $+$ A$_1$&\EDynkin{0, 1, 0, 1, 0, 1}{1}&12 (5)\\
3A$_1$&\EDynkin{0, 0, 0, 1, 0, 0}{1}&18\\
A$_2$ $+$ 2A$_1$&\EDynkin{0, 0, 1, 0, 1, 0}{1}&12\\
A$_3$ $+$ A$_1$&\EDynkin{1, 0, 1, 0, 1, 0}{1}&10\\
A$_4$ $+$ A$_1$&\EDynkin{1, 1, 1, 0, 1, 1}{1}&8
\end{tabular}
\vfill
\
\pagebreak
\
\vfill
\begin{tabular}{l|c|c||l|c|c}
\multicolumn{6}{c}{\bf Table E7s}\\
\multicolumn{6}{c}{Strictly odd nilpotent orbits in E$_7$}\\
\multicolumn{3}{c||}{with half-abelian $\g_1$:}&\multicolumn{3}{c}{without half-abelian $\g_1$:}\\
\hline
Name&Diagram&$\dim\g_1$&Name&Diagram&\parbox[c][4em]{\mypar}{\raggedright$\dim\g_1$ (largest dimension of an abelian subspace)}\\
\hline
&&&&&\\[-5.5ex]
A$_1$&\EDynkin{0, 1, 0, 0, 0, 0, 0}{1}&32&4A$_1$&\EDynkin{1, 0, 0, 0, 0, 0, 1}{1}&26 (11)\\
2A$_1$&\EDynkin{0, 0, 0, 0, 0, 1, 0}{1}&32&A$_2$ $+$ A$_1$&\EDynkin{0, 1, 0, 0, 0, 1, 0}{1}&24 (9)\\
3A$_1'$&\EDynkin{0, 0, 1, 0, 0, 0, 0}{1}&30&2A$_2$ $+$ A$_1$&\EDynkin{0, 0, 1, 0, 0, 1, 0}{1}&20 (8)\\
A$_2$ $+$ 2A$_1$&\EDynkin{0, 0, 0, 1, 0, 0, 0}{1}&24&A$_3$ $+$ 2A$_1$&\EDynkin{0, 1, 0, 0, 1, 0, 1}{1}&18 (7)\\
(A$_3$ $+$ A$_1$)$'$&\EDynkin{0, 1, 0, 1, 0, 0, 0}{1}&18&A$_4$ $+$ A$_1$&\EDynkin{0, 1, 0, 1, 0, 1, 0}{1}&14 (6)\\
D$_4(a_1)$ $+$ A$_1$&\EDynkin{1, 0, 1, 0, 0, 0, 1}{1}&16\\
A$_3$ $+$ A$_2$&\EDynkin{0, 0, 0, 1, 0, 1, 0}{1}&16
\end{tabular}
\vfill
\
\pagebreak
\
\vfill
\begin{tabular}{l|c|c||l|c|c}
\multicolumn{6}{c}{\bf Table E8s}\\
\multicolumn{6}{c}{Strictly odd nilpotent orbits in E$_8$}\\
\multicolumn{3}{c||}{with half-abelian $\g_1$:}&\multicolumn{3}{c}{without half-abelian $\g_1$:}\\
\hline
Name&Diagram&$\dim\g_1$&Name&Diagram&\parbox[c][4em]{\mypar}{\raggedright$\dim\g_1$ (largest dimension of an abelian subspace)}\\
\hline
&&&&&\\[-5.5ex]
A$_1$&\EDynkin{0, 0, 0, 0, 0, 0, 0, 1}{1}&56&2A$_1$&\EDynkin{0, 1, 0, 0, 0, 0, 0, 0}{1}&64 (22)\\
3A$_1$&\EDynkin{0, 0, 0, 0, 0, 0, 1, 0}{1}&54&4A$_1$&\EDynkin{1, 0, 0, 0, 0, 0, 0, 0}{1}&56 (21)\\
A$_2$ $+$ 3A$_1$&\EDynkin{0, 0, 1, 0, 0, 0, 0, 0}{1}&42&A$_2$ $+$ 2A$_1$&\EDynkin{0, 0, 0, 0, 0, 1, 0, 0}{1}&48 (16)\\
A$_3$ $+$ A$_1$&\EDynkin{0, 0, 0, 0, 0, 1, 0, 1}{1}&34&A$_2$ $+$ A$_1$&\EDynkin{0, 1, 0, 0, 0, 0, 0, 1}{1}&44 (17)\\
A$_3$ $+$ A$_2$ $+$ A$_1$&\EDynkin{0, 0, 0, 1, 0, 0, 0, 0}{1}&30&2A$_2$ $+$ 2A$_1$&\EDynkin{0, 0, 0, 0, 1, 0, 0, 0}{1}&40 (16)\\
A$_4$ $+$ A$_2$ $+$ A$_1$&\EDynkin{0, 0, 1, 0, 0, 1, 0, 0}{1}&24&2A$_2$ $+$ A$_1$&\EDynkin{0, 1, 0, 0, 0, 0, 1, 0}{1}&36 (16)\\
E$_7(a_5)$&\EDynkin{0, 0, 0, 1, 0, 1, 0, 0}{1}&18&A$_3$ $+$ 2A$_1$&\EDynkin{0, 0, 1, 0, 0, 0, 0, 1}{1}&36 (15)\\
A$_6$ $+$ A$_1$&\EDynkin{0, 1, 0, 1, 0, 1, 0, 0}{1}&16&A$_3$ $+$ A$_2$&\EDynkin{0, 1, 0, 0, 0, 1, 0, 0}{1}&32 (13)\\
A$_7$&\EDynkin{0, 1, 0, 1, 0, 1, 1, 0}{1}&14&D$_4(a_1)$ $+$ A$_1$&\EDynkin{1, 0, 0, 0, 0, 0, 1, 0}{1}&32 (12)\\
\multicolumn{3}{c||}{}&2A$_3$&\EDynkin{0, 1, 0, 0, 1, 0, 0, 0}{1}&28 (13)\\
\multicolumn{3}{c||}{}&A$_4$ $+$ 2A$_1$&\EDynkin{0, 0, 0, 1, 0, 0, 0, 1}{1}&28 (12)\\
\multicolumn{3}{c||}{}&A$_4$ $+$ A$_1$&\EDynkin{0, 1, 0, 0, 0, 1, 0, 1}{1}&26 (10)\\
\multicolumn{3}{c||}{}&A$_4$ $+$ A$_3$&\EDynkin{0, 0, 0, 1, 0, 0, 1, 0}{1}&24 (10)\\
\multicolumn{3}{c||}{}&A$_5$ $+$ A$_1$&\EDynkin{0, 1, 0, 1, 0, 0, 0, 1}{1}&22 (9)\\
\multicolumn{3}{c||}{}&D$_5(a_1)$ $+$ A$_2$&\EDynkin{0, 0, 1, 0, 0, 1, 0, 1}{1}&22 (8)\\
\multicolumn{3}{c||}{}&D$_6(a_2)$&\EDynkin{1, 0, 1, 0, 0, 0, 1, 0}{1}&20 (9)\\
\multicolumn{3}{c||}{}&E$_6(a_3)$ $+$ A$_1$&\EDynkin{0, 1, 0, 0, 1, 0, 1, 0}{1}&20 (8)\\
\multicolumn{3}{c||}{}&D$_7(a_2)$&\EDynkin{0, 1, 0, 1, 0, 1, 0, 1}{1}&16 (7)
\end{tabular}
\vfill
\
\pagebreak
\
\vfill
\begin{tabular}[t]{l|c|l}
\multicolumn{3}{c}{\bf Table F4o}\\
\multicolumn{3}{c}{\parbox[c]{18em}{(Non-strictly) odd nilpotent orbits in F$_4$, all with half-abelian $\g_1$:}}\\[2ex]
\hline
Name&Diagram&Strictly odd piece\\
\hline&&\\[-5.5ex]
B$_2$&\FDynkino{1, 2, 0, 0}{3,1}&C$_3$ ($2,1^4$)\\
C$_3$&\FDynkino{2, 1, 1, 0}{4,2}&B$_3$ ($3,2^2$)
\end{tabular}
\
\vfill
\
\begin{tabular}[t]{l|c|l}
\multicolumn{3}{c}{\bf Table E6o}\\
\multicolumn{3}{c}{\parbox[c]{18em}{(Non-strictly) odd nilpotent orbits in E$_6$, all with half-abelian $\g_1$:}}\\[2ex]
\hline
Name&Diagram&Strictly odd piece\\
\hline&&\\[-5.5ex]
A$_3$&\EDynkin{2,1,0,0,0,1}{6,2}&A$_5$\\
A$_5$&\EDynkin{1,2,1,0,1,2}{5,1,3}&D$_4$ ($3,2^2,1$)\\
D$_5(a_1)$&\EDynkin{2,1,1,0,1,1}{6,2}&A$_5$
\end{tabular}
\
\vfill
\
\vfill
\
\begin{tabular}{l|c|l||l|c|l}
\multicolumn{6}{c}{\bf Table E7o}\\
\multicolumn{6}{c}{(Non-strictly) odd nilpotent orbits in E$_7$}\\
\multicolumn{3}{c||}{with half-abelian $\g_1$:}&\multicolumn{3}{c}{without half-abelian $\g_1$:}\\
\hline
Name&Diagram&Strictly odd piece&Name&Diagram&Strictly odd piece\\
\hline&&&&\\[-5.5ex]
A$_3$&\EDynkin{0,2,0,0,0,1,0}{7,1,3}&D$_6$ ($2^2,1^8$)&D$_4$ $+$ A$_1$&\EDynkin{1, 2, 1, 0, 0, 0, 1}{7,1,3}&D$_6$ ($3,2^4,1$)\\
D$_5(a_1)$&\EDynkin{0,2,0,1,0,1,0}{7,1,3}&D$_6$ ($3^2,2^2,1^2$)&A$_5$ $+$ A$_1$&\EDynkin{0, 1, 0, 1, 0, 1, 2}{6,1,2}&E$_6$ (2A$_2$ $+$ A$_1$)\\
A$_5'$&\EDynkin{0,1,0,1,0,2,0}{5,1,2}&D$_5$ ($3,2^2,1^3$)\\
D$_6(a_2)$&\EDynkin{1,0,1,0,1,0,2}{6,1,2}&E$_6$ (A$_3$ $+$ A$_1$)\\
D$_5$ $+$ A$_1$&\EDynkin{1,2,1,0,1,1,0}{7,1,3}&D$_6$ ($4^2,3,1$)\\
D$_6(a_1)$&\EDynkin{1,2,1,0,1,0,2}{6,1,3}&D$_5$ ($3^2,2^2$)\\
D$_6$&\EDynkin{1,2,1,0,1,2,2}{5,1,3}&D$_4$ ($3,2^2,1$)
\end{tabular}
\
\vfill
\
\vfill
\
\begin{tabular}{l|c|l||l|c|l}
\multicolumn{6}{c}{\bf Table E8o}\\
\multicolumn{6}{c}{(Non-strictly) odd nilpotent orbits in E$_8$}\\
\multicolumn{3}{c||}{with half-abelian $\g_1$:}&\multicolumn{3}{c}{without half-abelian $\g_1$:}\\
\hline
Name&Diagram&Strictly odd piece&Name&Diagram&Strictly odd piece\\
\hline&&&&&\\[-5.5ex]
A$_3$&\EDynkin{0,1,0,0,0,0,0,2}{7,1,2}&E$_7$ (A$_1$)&D$_4$ $+$ A$_1$&\EDynkin{1, 0, 0, 0, 0, 0, 1, 2}{7,1,2}&E$_7$ (4A$_1$)\\
D$_5(a_1)$ $+$ A$_1$&\EDynkin{0,0,0,1,0,0,0,2}{7,1,2}&E$_7$ (A$_2$ $+$ 2A$_1$)&D$_5(a_1)$&\EDynkin{0, 1, 0, 0, 0, 1, 0, 2}{7,1,2}&E$_7$ (A$_2$ $+$ A$_1$)\\
A$_5$&\EDynkin{0,2,0,0,0,1,0,1}{8,1,3}&D$_7$ ($3,2^2,1^7$)&D$_5$ $+$ A$_1$&\EDynkin{0, 1, 0, 0, 1, 0, 1, 2}{7,1,2}&E$_7$ (A$_3$ $+$ 2A$_1$)\\
D$_6(a_1)$&\EDynkin{1,0,1,0,0,0,1,2}{7,1,2}&E$_7$ (D$_4(a_1)$ $+$ A$_1$)&E$_6(a_1)$ $+$ A$_1$&\EDynkin{0, 1, 0, 1, 0, 1, 0, 2}{7,1,2}&E$_7$ (A$_4$ $+$ A$_1$)\\
E$_7(a_4)$&\EDynkin{0,0,0,1,0,1,0,2}{7,1,2}&E$_7$ (A$_3$ $+$ A$_2$)&D$_6$&\EDynkin{1, 2, 1, 0, 0, 0, 1, 2}{7,1,3}&D$_6$ ($3,2^4,1$)\\
E$_7(a_3)$&\EDynkin{0,2,0,1,0,1,0,2}{7,1,3}&D$_6$ ($3^2,2^2,1^2$)&E$_6$ $+$ A$_1$&\EDynkin{0, 1, 0, 1, 0, 1, 2, 2}{6,1,2}&E$_6$ (2A$_2$ $+$ A$_1$)\\
D$_7$&\EDynkin{1,2,1,0,1,1,0,1}{8,1,3}&D$_7$ ($5,4^2,1$)\\
E$_7(a_2)$&\EDynkin{1,0,1,0,1,0,2,2}{6,1,2}&E$_6$ (A$_3$ $+$ A$_1$)\\
E$_7(a_1)$&\EDynkin{1,2,1,0,1,0,2,2}{6,1,3}&D$_5$ ($3^2,2^2$)\\
E$_7$&\EDynkin{1,2,1,0,1,2,2,2}{5,1,3}&D$_4$ ($3,2^2,1$)
\end{tabular}
\end{center}
\section*{Acknowledgements}
The authors are grateful to the referee for highly professional work, including simplifications of proofs and improvements of exposition.
The third named author wishes to thank Daniele Valeri for discussions on generalization of Miura maps, constructed in \cite{dSKV}.
The second named author gratefully acknowledges help of the user {\tt marmot} from {\tt tex.stackexchange.com} in producing the \TeX\ code that was used to highlight the strictly odd pieces of weighted Dynkin diagrams in the last four tables.
\bibliographystyle{amsplain}
\section{Conclusion}
\label{sec:conclusion}
We have proposed a very low cost edge sensing strategy, termed LED, for color image demosaicking. It guides the green channel interpolation and the color difference plane interpolation by a logistic functional of the difference between directional variations. Among the 29 demosaicking methods we tested by running their code, ours is one of the fastest. Without using any refinement or post-processing technique, LED achieves accuracy higher than many recently proposed methods on low resolution images, and comparable to the top performers on images of currently popular resolutions. Our extensive experiments suggest that \emph{accurate non-local edge detection for demosaicking is generally difficult and time consuming; instead, leveraging the originally captured values of the nearest neighbours is much more efficient.}
Our algorithm is highly parallelizable, and hence its GPU or FPGA implementation can easily restore very high resolution images in real time. This is desirable in the digital camera industry, as camera resolutions are increasing rapidly. Furthermore, in demosaicking applications where speed may be traded for accuracy, the proposed method provides a quick and high quality initialization, which is generally needed in sophisticated iterative demosaicking algorithms.
\section{Experimental Results}
\label{sec:experiment}
We experimentally evaluate the proposed algorithm, which we name Logistic Edge-Sensing Demosaicking (LED). To examine the pure effectiveness of steering demosaicking by a logistic functional of the difference between directional variations, we \emph{do not} enhance the image restoration quality by any post-processing or refinement technique.
\textbf{Datasets} \quad Following the literature convention, we first test LED on the traditional benchmarks Kodak \cite{Kodak} and McM \cite{LDINAT11}. The Kodak dataset contains 24 images of size $768\times 512$ and the McM dataset contains 18 images of size $500\times 500$. As each of these test images has fewer than $0.4$ megapixels, whereas current consumer cameras typically offer several megapixels, experiments on the traditional McM and Kodak datasets may not fully reflect real applications. To examine the potential performance of LED in real practice, we further test it on the $3072\times 2048$ (about $6.2$ megapixels) version of the Kodak dataset \cite{HRKodak}, whose resolution is comparable to the $8$-megapixel resolution of the iPhone~6. We refer to this modern-resolution Kodak dataset as MR Kodak.
\textbf{Comparison and Metrics} \quad Besides comparing to the HA algorithm, we extensively compare LED with 28 existing demosaicking methods by running their publicly available source codes. The performance comparison is conducted in terms of demosaicking accuracy and efficiency. We measure the accuracy of the demosaicked images by Peak Signal to Noise Ratio (PSNR), Structural SIMilarity (SSIM) and S-CIELAB \cite{SCIELAB}. When a competing method has source code in MATLAB, we measure the demosaicking efficiency by timing the particular demosaicking process, which outputs the final RGB image from the Bayer mosaicked input, on the McM $500\times 500$ images. More specifically, if the demosaicking process is implemented by a single function in the source code, we use the MATLAB timing function \textbf{timeit} to record its running time; whereas if the demosaicking process consists of multiple functions, we use the MATLAB timing functions \textbf{tic} and \textbf{toc}. When a competing method has source code in C, we use the time library function \textbf{clock}.
Due to the ``warm up'' factor, demosaicking generally takes longer on the first test image than on the others; hence we report the median running time over the 2nd to the 18th McM images as the final time measurement. All experiments are conducted on a 2.8GHz Intel i7-4900MQ CPU with 8GB RAM, unless otherwise specified.
\textbf{Parameter Settings} \quad The only hyper-parameter of our method is the logistic function steepness coefficient $k$ in Eq.~\ref{eq:LogisticFunction}, which is fixed to $0.05$ (see Section~\ref{subsec:greenReco}) in all experiments. Many existing works shave off image boundaries of various widths before measuring demosaicking accuracy (for example, $11$ pixels in Ref.~\cite{LSLCD13} and $20$ pixels in Ref.~\cite{LDINAT11}), and we also shave off $4$-pixel-wide image boundaries. If the source code of a competing method does not specify the shave-off boundary width, we also set it to $4$. For methods that simultaneously address demosaicking and denoising, we set the additional noise level to zero in their source codes. We leave all other parameters (including the shave-off boundary width, when specified) at their default values in the original code, since these presumably lead to the optimal performance. Nevertheless, we suggest that future research take image boundaries into account, as the demosaicked image should not shrink in real applications.
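To illustrate the role of $k$, here is a minimal Python sketch of the kind of logistic edge sensing described above. The exact functional of Eq.\ref{eq:LogisticFunction} may differ in detail; the standard logistic form and all names below are illustrative assumptions, not our actual implementation:

```python
import numpy as np

def logistic_weight(delta_h, delta_v, k=0.05):
    # Assumed standard logistic form; k = 0.05 is the steepness used in our
    # experiments. A larger vertical variation pushes the weight towards 1,
    # favouring the horizontally interpolated estimate, and vice versa.
    return 1.0 / (1.0 + np.exp(-k * (delta_v - delta_h)))

def fuse(est_h, est_v, delta_h, delta_v, k=0.05):
    # Convex combination of the two directional estimates, steered by the weight.
    w = logistic_weight(delta_h, delta_v, k)
    return w * est_h + (1.0 - w) * est_v
```

With $k=0.05$, a directional-variation gap of $100$ already pushes the weight above $0.99$, so interpolation proceeds almost entirely along the smoother direction, while equal variations give an even blend.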
\subsection{Numerical Evaluation on Low Resolution Images}
Table \ref{tab:IndividualKodakLR} and Table \ref{tab:IndividualMcM} present the demosaicking accuracy, quantitatively measured by PSNR for each channel, PSNR for the whole image (a.k.a.\ cPSNR), SSIM and S-CIELAB, of the proposed LED algorithm on each individual image from Kodak and McM respectively. Table \ref{HAvsLED} compares the accuracy and computation time of LED against the traditional HA method under the same implementation settings. Significantly, LED improves on HA by $2.51$dB and $1.74$dB in PSNR, $0.01$ and $0.01$ in SSIM, and $0.27$ and $0.31$ in S-CIELAB on Kodak and McM respectively, at an extra cost of merely $0.038$ seconds for a $500 \times 500$ image.
\begin{table*}[tp]
\centering
\includegraphics[height=0.9\textheight]{LEDLRKodak.eps}
\caption{Demosaicking accuracy of the proposed method LED, measured by PSNR of each channel, cPSNR, SSIM and S-CIELAB, on each individual image of the traditional low resolution Kodak dataset.}
\label{tab:IndividualKodakLR}
\end{table*}
\begin{table*}[tp]
\centering
\includegraphics[height=0.9\textheight]{LEDMcM.eps}
\caption{Demosaicking accuracy of the proposed method LED, measured by PSNR of each channel, cPSNR, SSIM and S-CIELAB, on each individual image of the McM dataset.}
\label{tab:IndividualMcM}
\end{table*}
\input{HAvsLED.tex}%
We compare LED with previous demosaicking methods by running their available source codes, mostly found via \cite{codeList1} and \cite{CodeList2}\footnote{The Deep Convolutional Neural Network based method proposed in \cite{DJ16} has MATLAB code available online. However, as deep learning methods trade memory for computation time and accuracy, they are in a very different vein from our method, and hence Ref.\cite{DJ16} is not included in this experiment.}. They are: Alternating Projection (AP) \cite{AP02} (using the implementation by Y. M. Lu in \cite{OSAP10}); High Quality Linear Interpolation (HQLI) \cite{HQLI04} (using the MATLAB built-in function \textbf{demosaic}); Primary Consistent Soft Decision (PCSD) \cite{PCSD04}; Adaptive Homogeneity-Directed (AHD) \cite{AHD05}; Directional Linear Minimum Mean Square-Error Estimation (DLMMSE) \cite{DLMMSE05}; Weighted Edge and Color Difference (WECD) \cite{WECD06}; Total Least Square (TLS) \cite{TLS06}; Directional Filtering and A Posteriori Decision (DFAPD) \cite{DFAPD07}; Wavelet Analysis of Luminance Component (WALC) \cite{WALC07}; Heterogeneity-Projection Hard-Decision (HPHD) \cite{HPHD07}; Regularization Approach (RA) \cite{RA08}; Self-Similarity Driven (SSD) \cite{SSD09}; Contour Stencils (CS) \cite{CS12}; One Step Alternating Projections (OSAP) \cite{OSAP10}; Local Directional Interpolation and Non-local Adaptive Thresholding (LDINAT) \cite{LDINAT11}; Directional Filtering and Weighting (DFW) \cite{DFW12}; Residual Interpolation (RI) \cite{RI13}; Multiscale Gradient (MSG) \cite{MSG13}; Least Square Luma-Chroma Demultiplexing and Noise Estimation (LSLCD-NE) \cite{LSLCD13}; Flexible Image Processing Framework (FlexISP) \cite{FlexISP14} (using the implementation by Tan et al. in \cite{ADMM17}\footnote{The original implementation of FlexISP in \cite{ADMM17} computes PSNR for each channel first, then averages them as cPSNR. We modified this computation by using the mean squared error over all pixels and all channels to compute cPSNR.}); Inter-color Correlation (ICC) \cite{ICC14}; Adaptive Residual Interpolation (ARI) \cite{ARI15}; Multidirectional Weighted Interpolation (MDWI) \cite{MDWI15} (using the implementation found on GitHub); Directional Difference Regression (DDR) \cite{DDR16}; Minimized-Laplacian Residual Interpolation (MLRI) \cite{MLRI16}; Sequential Energy Minimization (SEM) \cite{SEM16}; and Alternating Direction Method of Multipliers (ADMM) \cite{ADMM17}\footnote{Same modification as we did for FlexISP.}. Web addresses of these source codes are provided along with the bibliography of this work.
For clarity of comparison, Table \ref{tab:LEDvsOthers} only shows the accuracy measured by cPSNR and the efficiency measured by running time of each competing method on the McM dataset, which is more challenging than the Kodak dataset \cite{FDRI16}. From the table, we observe that:
\begin{itemize}
\item \emph{None} of the competing methods outperforms the proposed method in both cPSNR and computation cost, whereas the proposed LED clearly outperforms $18$ out of $28$ methods in both cPSNR and running time.
\item OSAP and LSLCD-NE have running times similar to the proposed method, but their cPSNRs are about $2$dB and $1.15$dB lower respectively. SSD and FlexISP have cPSNRs similar to (slightly higher than) that of the proposed method, but they are about $69$ and $2633$ times slower.
\item The proposed LED has lower cPSNR than RI, ICC, MLRI, CS, DDR, MDWI, ARI and LDINAT, but is about $12$, $17$, $27$, $28$, $122$, $280$, $420$ and $4400$ times faster than them respectively.
\end{itemize}
\input{LEDvsOthers.tex}
\subsection{Visual Performance on Low Resolution Images}
Fig.\ref{fig:Visual1}-Fig.\ref{fig:Visual18} show examples where LED performs visually favorably compared to state-of-the-art methods. Fig.\ref{fig:Visual1} shows a local region taken from the 1st image of McM. Demosaicking by ICC and MLRI in this region suffers from noticeable ``false color'' artifacts, whereas the DDR and LED recoveries look more natural. Fig.\ref{fig:Visual9} shows another example taken from the 9th McM image. MLRI incompletely recovers the black lines in the example region; LED and ICC both slightly blur the black lines into the red background, but their recoveries are visually more acceptable. In the example shown by Fig.\ref{fig:Visual18}, DDR produces obvious ``smearing'' artifacts. In contrast, the demosaicking results by ICC, MLRI and LED are all visually close to the original image.
\input{FigLocalMcM1}
\input{FigLocalMcM9}
\input{FigLocalMcM18}
\subsection{Evaluation on Modern Resolution Images}
The resolution of the MR Kodak dataset is similar to that of today's popular daily-use cameras, which demand fast demosaicking. Table \ref{tab:LEDvsOthersMRKodak} compares the proposed LED with the faster algorithms HA and HQLI, as well as with the more sophisticated algorithms RI, ICC, MLRI, CS and DDR, which achieve higher cPSNR than LED on the low resolution dataset McM. In this experiment, we exclude methods that are more than $200$ times slower than LED, since they would have different application scenarios. On test images of modern resolution, LED and DDR have the highest average SSIM values. LED outperforms CS in cPSNR, SSIM and running time. The cPSNR of LED is still notably (more than $1$dB) higher than those of HA and HQLI, while it is comparable to the top-performing state-of-the-art methods that are tens of times slower. LED takes only $2.86$ seconds, whereas RI, ICC and MLRI take tens of seconds and DDR hundreds of seconds.
\input{LEDvsOthersMRKodak2.tex}
\section{Hamilton-Adams Demosaicking}
\label{sec:HARevisit}
Assuming the mosaicked image $\mathbf{M}$ has $m$ rows and $n$ columns, let
\[\mathcal{L}=\left\{(i,j)\in \mathbb{N}^2 \,|\, 1\leq i\leq m,\ 1\leq j\leq n\right\}\]
be the set of all pixel positions. According to the Bayer mosaicking pattern (Fig.\ref{fig:BayerCFA}), we define
\begin{eqnarray*}
\mathcal{G}&=&\left\{(i,j)\in \mathcal{L} \,|\,(i+j) \medspace \textrm{is even}\right\}\nonumber \\
\mathcal{R}&=&\left\{(i,j)\in \mathcal{L} \,|\, i \medspace \textrm{is even}, j \medspace \textrm{is odd} \right\}\nonumber\\
\mathcal{B}&=&\left\{(i,j)\in \mathcal{L} \,|\, i \medspace \textrm{is odd}, j \medspace \textrm{is even} \right\}
\end{eqnarray*}
to be the sets of positions where the green, red and blue values are originally available, respectively. Hence their complementary sets $\mathcal{G}^{c}=\mathcal{L}\setminus \mathcal{G}$, $\mathcal{R}^{c}=\mathcal{L}\setminus \mathcal{R}$ and $\mathcal{B}^{c}=\mathcal{L}\setminus \mathcal{B}$ are the sets of positions where the green, red and blue values are to be recovered, respectively. We use $r$, $g$ and $b$ to denote the original RGB components of a pixel $(i,j)$, and denote its estimated color components by adding a ``hat'' symbol to the corresponding notation. For example, if a pixel $(i,j)\in \mathcal{G}$, we write its true RGB values as $\left(r,g,b\right)$ and its estimated RGB values as $\left(\hat{r},g,\hat{b}\right)$.
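These index sets are straightforward to materialize as boolean masks. Below is a minimal NumPy sketch (the function name \texttt{bayer\_masks} is ours, not from any reference implementation); it keeps the paper's 1-based parity convention by shifting the 0-based array indices:

```python
import numpy as np

def bayer_masks(m, n):
    """Boolean masks for the Bayer position sets G, R, B.

    Follows the paper's 1-based convention: green where (i + j) is even,
    red where i is even and j is odd, blue where i is odd and j is even.
    """
    i, j = np.indices((m, n)) + 1  # shift to 1-based row/column indices
    g = (i + j) % 2 == 0
    r = (i % 2 == 0) & (j % 2 == 1)
    b = (i % 2 == 1) & (j % 2 == 0)
    return g, r, b
```

For an $m\times n$ mosaic the three masks are pairwise disjoint and, together, cover every pixel, matching the definitions of $\mathcal{G}$, $\mathcal{R}$ and $\mathcal{B}$ above.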
\subsection{HA Green Channel Demosaicking}
Let $(i,j)\in \mathcal{G}^{c}$. The HA algorithm first computes the horizontal and vertical intensity variations at this pixel, then interpolates along the direction with less variation. In particular, at pixel $(i,j)$, the horizontal first- and second-order partial derivatives (denoted by $\partial_{h}$ and $\partial^{2}_{h}$), as well as the vertical first- and second-order partial derivatives (denoted by $\partial_{v}$ and $\partial^{2}_{v}$), of $\mathbf{M}$ are computed by
\begin{eqnarray}
\partial_{h}\mathbf{M}(i,j) & = & \frac{\mathbf{M}\left(i,j+1\right)-\mathbf{M}\left(i,j-1\right)}{2} \nonumber \\
\partial^{2}_{h}\mathbf{M}(i,j) & = & \frac{\mathbf{M}\left(i,j+2\right)+\mathbf{M}\left(i,j-2\right)-2\mathbf{M}\left(i,j\right)}{4}\nonumber\\
\partial_{v}\mathbf{M}(i,j)& = & \frac{\mathbf{M}\left(i+1,j\right)-\mathbf{M}\left(i-1,j\right)}{2} \nonumber \\
\partial^{2}_{v}\mathbf{M}(i,j) & = & \frac{\mathbf{M}\left(i+2,j\right)+\mathbf{M}\left(i-2,j\right)-2\mathbf{M}\left(i,j\right)}{4}.
\label{eq:horiDiff}
\end{eqnarray}
Note that in Eq.\ref{eq:horiDiff}, pixels that are one unit away from $(i,j)$ are in $\mathcal{G}$, and pixels that are two units away are in the same set as $(i,j)$.
The HA algorithm defines the horizontal variation $v_{h}$ and vertical variation $v_{v}$ as
\begin{eqnarray}
v_{h}& = & \left|\partial_{h}\mathbf{M}(i,j)\right|+\left|2\partial^{2}_{h}\mathbf{M}(i,j)\right|\nonumber\\
v_{v} & = & \left|\partial_{v}\mathbf{M}(i,j)\right|+\left|2\partial^{2}_{v}\mathbf{M}(i,j)\right|.
\label{eq:HA gradients}
\end{eqnarray}
Let $\bar{g}_{h}$ and $\bar{g}_{v}$ be the average of the neighbouring green values in the horizontal and vertical directions respectively, i.e.,
\begin{eqnarray}
\bar{g}_h & = & \frac{1}{2}\left(\mathbf{M}(i,j+1)+\mathbf{M}(i,j-1)\right)\nonumber\\
\bar{g}_v & = & \frac{1}{2}\left(\mathbf{M}(i+1,j)+\mathbf{M}(i-1,j)\right).
\label{eq:directional summation}
\end{eqnarray}
Finally, $\hat{g}(i,j)$ is estimated by
\begin{equation}
\hat{g}(i,j) = \left\{\begin{array}{ll}
\bar{g}_h-\partial^{2}_{h}\mathbf{M}(i,j)& \textrm{if $v_{h} < v_{v} $}\\
\bar{g}_v-\partial^{2}_{v}\mathbf{M}(i,j)& \textrm{if $v_{h} > v_{v} $}\\
\frac{1}{2}\left[\left(\bar{g}_h-\partial^{2}_{h}\mathbf{M}(i,j)\right)+\left(\bar{g}_v-\partial^{2}_{v}\mathbf{M}(i,j)\right)\right]
& \textrm{if $v_{v} = v_{h}$}
\end{array}\right.
\label{eq:HA G}
\end{equation}
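As a reading aid, the green-channel rule of Eq.\ref{eq:horiDiff}--Eq.\ref{eq:HA G} can be condensed into a few lines. The sketch below (function name ours) assumes the mosaic is indexable as \texttt{M[i][j]} and that $(i,j)$ is an interior non-green pixel, at least two samples from the border:

```python
def ha_green(M, i, j):
    """HA green estimate at a non-green pixel (i, j) of the mosaic M
    (Eqs. horiDiff, HA gradients, directional summation and HA G)."""
    # second-order central differences (step 2 stays within the same channel)
    d2h = (M[i][j+2] + M[i][j-2] - 2 * M[i][j]) / 4.0
    d2v = (M[i+2][j] + M[i-2][j] - 2 * M[i][j]) / 4.0
    # directional variations v_h and v_v
    vh = abs((M[i][j+1] - M[i][j-1]) / 2.0) + abs(2 * d2h)
    vv = abs((M[i+1][j] - M[i-1][j]) / 2.0) + abs(2 * d2v)
    # directional estimates: neighbour average corrected by curvature
    gh = (M[i][j+1] + M[i][j-1]) / 2.0 - d2h
    gv = (M[i+1][j] + M[i-1][j]) / 2.0 - d2v
    if vh < vv:
        return gh
    if vh > vv:
        return gv
    return (gh + gv) / 2.0
```

On a horizontally varying region the vertical direction is selected, and vice versa; ties fall back to the average of both estimates.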
\subsection{HA Red and Blue Channel Demosaicking}
The HA demosaicking method utilizes the recovered green plane to regulate the blue and red recovery. For the Bayer CFA pattern, this is a typical $2\times$ super-resolution problem in each dimension, with available subsamples evenly spaced at every other row and column. Instead of directly enlarging the red plane $r(\mathcal{R})$ and blue plane $b(\mathcal{B})$, the HA algorithm enlarges the color difference planes $\hat{g}(\mathcal{R})-r(\mathcal{R})$ and $\hat{g}(\mathcal{B})-b(\mathcal{B})$, based on the observation that $g(\mathcal{L})-r(\mathcal{L})$ and $g(\mathcal{L})-b(\mathcal{L})$ are generally smoother than $r(\mathcal{L})$ and $b(\mathcal{L})$ respectively. The magnification is performed simply by bilinear interpolation
\begin{subequations}
\begin{eqnarray}
\widehat{(g-r)}(i,j)= \sum_{(k,l)\in\mathcal{R}\cap \mathcal{N}_{i,j}} \omega_{k,l}\left(\hat{g}(k,l)-r(k,l)\right) \medspace \textrm {for $(i,j)\in \mathcal{R}^c$}
\label{eq: HA dgr}\\
\widehat{(g-b)}(i,j)= \sum_{(k,l)\in\mathcal{B}\cap \mathcal{N}_{i,j}} \varpi_{k,l}\left(\hat{g}(k,l)-b(k,l)\right) \medspace \textrm {for $(i,j)\in \mathcal{B}^c$},
\label{eq: HA dgb}
\end{eqnarray}
\end{subequations}
\normalsize
where $(k,l)$ index the pixels that are in the local neighbourhood $\mathcal{N}_{i,j}$ with $\hat{g}-r$ (in Eq.\ref{eq: HA dgr}) or $\hat{g}-b$ (in Eq.\ref{eq: HA dgb}) values available for bilinear interpolation; $\omega$ and $\varpi$ are the corresponding bilinear interpolation coefficients, determined by the spatial distance between $(i,j)$ and $(k,l)$.
Finally the missing red and blue values are recovered by
\begin{equation}
\hat{r}(i,j)=\left\{\begin{array}{ll}
g(i,j) - \widehat{(g-r)}(i,j) & \textrm {for $(i,j) \in \mathcal{G}$} \\
\hat{g}(i,j) - \widehat{(g-r)}(i,j) & \textrm {for $(i,j) \in \mathcal{B}$}
\end{array}
\right.
\label{eq:HAR}
\end{equation}
and
\begin{equation}
\hat{b}(i,j)=\left\{\begin{array}{ll}
g(i,j) - \widehat{(g-b)}(i,j) & \textrm {for $(i,j) \in \mathcal{G}$}\\
\hat{g}(i,j) - \widehat{(g-b)}(i,j) & \textrm {for $(i,j) \in \mathcal{R}$}
\end{array}
\right. .
\label{eq:HAB}
\end{equation}
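The red/blue step can be sketched analogously. In the toy version below (names ours), equal weights stand in for the bilinear coefficients $\omega_{k,l}$ of Eq.\ref{eq: HA dgr}, and the final subtraction is Eq.\ref{eq:HAR}:

```python
def ha_red_at(g_hat, r, i, j, red_nbrs):
    """Recover red at (i, j): interpolate the difference plane g_hat - r
    from the red sample positions in red_nbrs, then subtract it from the
    green estimate. Equal weights stand in for the bilinear coefficients."""
    diffs = [g_hat[k][l] - r[k][l] for (k, l) in red_nbrs]
    d_hat = sum(diffs) / len(diffs)  # interpolated (g - r) at (i, j)
    return g_hat[i][j] - d_hat       # r_hat = g_hat - (g - r)_hat
```

Whenever the color difference $g-r$ is locally constant, the recovery is exact, which is precisely the smoothness assumption HA relies on.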
\subsection{Advantages and Limitations of the HA Algorithm}
The high effectiveness of the HA method comes from taking full advantage of the green channel, which is sampled more densely than the other two channels. The green channel is recovered first based on the available samples. It is then used to regulate the recovery of the red and blue channels. In other words, HA trusts the sampling frequency more than edge detection. Such a strategy is suitable for today's digital CFA cameras, whose resolution is generally several mega-pixels. This high sampling frequency means that the intensity at each pixel is highly correlated with its local neighbours; by contrast, performing edge detection in a non-local neighbourhood, especially when $\frac{2}{3}$ of the information at each pixel is lost, can be time consuming.
Nevertheless, HA's smoothness assumption is oversimplified. In real applications, due to the existence of noise, the HA scheme restores the green component at a pixel exclusively from either its horizontal or its vertical neighbours, as the variations in the two directions, $v_{v}$ and $v_{h}$, are hardly ever equal (see Eq.\ref{eq:HA gradients} and Eq.\ref{eq:HA G}). This is disadvantageous in smooth regions, where more neighbours should be used to smooth out random noise. Moreover, its red and blue channel recovery assumes that the color difference plane is locally bilinear, which is seriously violated at edge or texture areas and results in ``false color'' artifacts \cite{NMPM03}. In the next section, we propose a more adaptive and flexible edge-sensing demosaicking scheme, which lifts the HA demosaicking accuracy to a level comparable to state-of-the-art methods, while still being fast.
\section{Introduction}
The vast majority of current consumer digital cameras have Color Filter Arrays (CFA) placed on the light sensing units, to capture only one of the three primary color components at each pixel \cite{szeliski2010}. Fig.\ref{fig:BayerCFA} shows the most frequently used CFA layout, named the Bayer pattern: in each $2\times 2$ subblock, the diagonal sensing units respond to the green wavelength component and the anti-diagonal ones respond to the red and blue wavelength components of light rays. Recovering the missing primary color values to form standard RGB color images is called demosaicking.
\begin{figure}[h]
\centering
\includegraphics[width=0.25\linewidth]{BayerCFA.eps}
\caption{The most widely used Bayer pattern for CFA arrangement.}
\label{fig:BayerCFA}
\end{figure}
A faithful demosaicking algorithm not only enables obtaining good quality images at low hardware cost, but also provides a potential solution to image compression. Therefore, demosaicking has been of intense interest in both academic research and industry. As with many ill-posed image recovery problems, the real difficulty of demosaicking lies in non-regular regions such as edges and textures. Thus the numerous existing demosaicking methods commonly focus on how to accurately detect the direction of least variation from the CFA-sampled image data. They differ in the aspects of: 1) the domain in which to conduct finite differencing; 2) the measurement of directional variation by differencing; 3) the strategy of steering interpolation along the local dominant direction; 4) the exploitation of the inter- and intra-channel correlation for higher accuracy; 5) the procedure to enhance the quality of a fully interpolated RGB image.
\textbf{Differencing Domain}\quad As a natural indicator of variation, magnitude of the first- or second-order finite differencing can be computed in each CFA sampled channel (e.g., \cite{HA96}\cite{Universal16}\cite{CCCA14} by Shao-Rehman); or across different channels (e.g. \cite{MSG13}\cite{PID16}). Alternatively, differencing can be conducted in each tentatively estimated color channel (e.g., \cite{HQLI04}), or their color-difference planes (e.g., \cite{LDINAT11}\cite{GBTF2010}). Many recent works perform differencing in the \emph{residual planes}, which are the difference between the CFA samples and intermediately interpolated channels, and have shown promising results (e.g., \cite{RI13}\cite{ARI15}\cite{ICC14}\cite{DDR16}).
\textbf{Measuring Variation}\quad To measure directional variation, the Hamilton-Adams (HA) method combines the first- and second-order differencing magnitude in two channels at the current single pixel \cite{HA96}. This method is also adopted in subsequent works such as \cite{PID16} and \cite{VCD06}. A more robust approach is to accumulate the directional differencing magnitude in a local neighbourhood (e.g., \cite{SSD09}\cite{RI13}\cite{MSG13}\cite{DDR16}), or further at a mixture of scales (e.g., \cite{ASMS14}).
\textbf{Edge Directed Interpolation}\quad HA compares the horizontal and vertical variations and selects the smoother direction to perform interpolation. In case of ties, interpolations in both directions are averaged \cite{HA96}. Su extended this idea by fusing the horizontal and vertical interpolations using machine-learned weights \cite{WECD06}; Chung-Chan selected the local dominant direction based on the variance of directional color differences \cite{VCD06}; in \cite{Vote14}, the local dominant direction is determined by voting among the horizontal, vertical and no-edge hypotheses; Wu et al. relaxed the strict prerequisite for the no-edge judgment to approximate equality \cite{PID16}. More methods estimate missing values by weighted summation of the estimates from the north, south, west and east directions, where the weights are obtained from the directional variation (e.g.\cite{SSD09}\cite{RI13}\cite{MSG13}\cite{DDR16}\cite{FDRI16}) and spatial distance (e.g.\cite{Universal16} \cite{BFDD17}). Ref.\cite{PCSD04} tests multiple direction hypotheses and chooses the one that shows the highest consistency among all channels. Deciding edge direction by maximizing a posteriori probability is also adopted in \cite{DFAPD07} and \cite{SSSC14}.
\textbf{Exploitation of Cross-Channel Correlation}\quad The color channels of a natural image generally show strong correlation, meaning that the color or edge information of one channel is also implied by the other channels. Hence cross-channel priors are extensively explored for demosaicking. For example, HA assumes that the color difference planes are locally bilinear. As this assumption fails at edges, Ref.\cite{ICC14} proposes compensating inter-channel interpolation with intra-channel interpolation if evidence of non-linearity is present; Ref.\cite{PCSD04} assumes that the three channels have consistent edge directions; regularization has been investigated to formulate the inter- and intra-channel correlation (e.g., \cite{RA08} \cite{LDINAT11}). Ref.\cite{MLRI16} assumes that a local region in one channel is a linear transform of that in another channel in the intermediate estimation step.
\textbf{Quality Enhancement}\quad Due to the existence of cross-channel correlation, many works refine each channel using the other channels' reconstructions, alternately and iteratively (e.g., \cite{AP02}\cite{SP05}\cite{AHD05}\cite{WECD06}\cite{OSAP10}\cite{IRI15}). Postprocessing techniques such as non-local regularization and median filtering have also been widely employed to suppress spurious high frequency components \cite{NMPM03}\cite{SSD09}\cite{LDINAT11}\cite{SSSC14}. It is known that non-local regularization and median filtering are essentially series of iterative linear filtering operations, robust to outliers but computationally heavy \cite{niu12}. For efficiency, Wu et al. proposed a postprocessing technique based on machine-learned regression priors.
\textbf{Deep Learning Demosaicking}\quad Convolutional Neural Networks (CNN) have recently attracted the attention of demosaicking research (\cite{MNN14}\cite{DJ16}\cite{DRL17}\cite{MDFCN18}). Some of these works have achieved the top demosaicking accuracy on benchmarks while being faster than many classical methods (e.g., \cite{DJ16}). An implicit cost of CNNs is the non-trivial memory required to store the trained model (e.g., \cite{DJ16}\cite{MDFCN18}).
The accuracy of recent demosaicking methods keeps increasing, and so does the associated computational cost (see Sec.\ref{sec:experiment}). For real time visualization, sophisticated demosaicking may entail expensive processing hardware and a high power supply, which goes against the intention of using CFAs. In the vast literature on demosaicking, the HA method is extremely simple. Seemingly, such simplicity should only yield baseline accuracy. However, it performs surprisingly well. Buades et al. tested the HA algorithm and $8$ well-known methods on $10$ images selected from the McM dataset \cite{PCSD04}, and the HA algorithm was shown to achieve the least Mean Square Error (MSE) \cite{SSD09}. Gharbi et al. compared the HA algorithm with $16$ high-impact demosaicking methods from the literature (up to the year 2016) \cite{DJ16}. While HA is the second fastest (only slower than Bilinear Interpolation), its Peak Signal to Noise Ratio (PSNR) accuracy on the benchmark datasets Kodak \cite{Kodak} and McM is higher than that of $10$ methods.
Although the HA algorithm does not put much effort into edge detection, it is highly effective. Based on a close look at the HA algorithm, we propose a high-quality fast edge-sensing image demosaicking scheme that adopts the HA pipeline. In particular, we recover the green channel first, and then the green-red and green-blue color difference planes. For adaptive edge sensing, we replace HA's selective directional interpolation in the green channel by a blend of the directional estimates, weighted by a logistic functional of the difference between directional variations. We extend this edge-sensing strategy to the green-red and green-blue color difference planes. This extension is not straightforward, since the Bayer CFA samples the green channel twice as densely as the red or blue channels. Our approach is to derive a logistic functional to blend the diagonal and anti-diagonal estimates, leveraging the diagonal symmetry of the Bayer pattern. The green channel interpolation scheme is then applicable to computing the remaining missing values in the green-red and green-blue difference planes. The proposed demosaicking process is highly parallelizable: although the red and blue channels have to be estimated after the green channel, the restoration at each pixel in each step is independent of the restoration at other pixels. This feature makes our method very suitable for Graphics Processing Units (GPU) and Field Programmable Gate Arrays (FPGA) implementations, achieving instant image visualization in real applications.
The rest of the paper is organized as follows. Section \ref{sec:HARevisit} analyzes the strengths and weaknesses of the HA algorithm. Subsequently, Section \ref{sec:LHA} formulates a new fast edge-sensing demosaicking technique. Section \ref{sec:experiment} compares the efficiency and accuracy of our proposed method with state-of-the-art methods through extensive experiments. Section \ref{sec:conclusion} concludes our work.
\section*{References}
\bibliographystyle{IEEEtran}
\section{Edge-Sensing Demosaicking by Logistic Functional of Difference between Directional Variations}
\label{sec:LHA}
\subsection {Green Channel Demosaicking}
\label{subsec:greenReco}
The green channel demosaicking process of the HA algorithm, as shown in Eq.\ref{eq:HA G}, can be rewritten as
\begin{equation}
\hat{g}(i,j) = \omega_{h}(\bar{g}_{h}-\partial^{2}_{h}\mathbf{M}(i,j))+(1-\omega_{h})(\bar{g}_{v}-\partial^{2}_{v}\mathbf{M}(i,j)),
\label{eq:Unified}
\end{equation}
where
\begin{equation}
\omega_{h}=\left\{\begin{array}{ll}
0 & \textrm{if $v_{h} > v_{v}$} \\
1 & \textrm{if $v_{h} < v_{v}$} \\
\frac{1}{2} & \textrm{if $v_{h} = v_{v} $}.
\end{array}\right.
\label{eq:HAGWeight}
\end{equation}
In practice, even in very flat regions, $v_{h}$ and $v_{v}$ are rarely equal because of noise. A more practical solution is to relax the strict equality requirement $v_{h}=v_{v}$ to the approximate equality $v_{h}\approx v_{v}$, which can be expressed by the inequality $\left|v_{h} - v_{v}\right|\leq T$, where $T$ is the allowed noise level. That is,
\begin{equation}
\omega_{h}=\left\{\begin{array}{ll}
0 & \textrm{if $v_{h} - v_{v} > T $} \\
1 & \textrm{if $v_{h} - v_{v} < -T$} \\
\frac{1}{2} & \textrm{if $\left|v_{h} - v_{v}\right|\leq T$}.
\end{array}\right.
\label{eq:HAGWeightT}
\end{equation}
Although $\omega_{h}$ defined by Eq.\ref{eq:HAGWeightT} is more flexible than that by Eq.\ref{eq:HAGWeight}, the value of $T$ has to be carefully tuned for each image, as a small bias in $T$ may lead to an opposite interpolation decision. Desirably, $\omega_{h}$ should be a continuous function that smoothly blends the estimates from both directions, so that a small bias does not cause the demosaicking result to vary abruptly. In particular, rather than using the step function defined by Eq.\ref{eq:HAGWeightT}, we seek a smooth function $\omega_{h}$ that meets the following criteria:
\begin{enumerate}
\item $\omega_{h}\to 0$ as $v_{h} - v_{v} \to +\infty$;
\item $\omega_{h}\to 1$ as $v_{h} - v_{v} \to -\infty$;
\item $\omega_{h}\approx \frac{1}{2}$ if $\left|v_{h} - v_{v}\right| \leq T$.
\end{enumerate}
Note that $1-\omega_{h}$ and $\omega_{h}$ should have the same form. That is, if there is a function $f$ such that $\omega_{h}=f(v_{h} - v_{v})$, then $1-\omega_{h}=f(v_{v} - v_{h})$ should hold. In other words,
\begin{equation}
f(v_{h} - v_{v}) + f(v_{v} - v_{h}) = 1.
\end{equation}
It can be shown that the logistic function
\begin{equation}
f_{k}(x) = \frac{1}{1+e^{kx}},
\end{equation}
where $k$ is a positive real number controlling how fast $f_{k}(x)$ approaches its asymptotes, fulfills all the requirements on $\omega_{h}$. Thus we define
\begin{equation}
\omega_h = \frac{1}{1+e^{k\left(v_{h} - v_{v}\right)}} .
\label{eq:LogisticFunction}
\end{equation}
It can be verified that
\begin{equation}
1-\omega_h = \frac{1}{1+e^{k\left(v_{v} - v_{h}\right)}}.
\end{equation}
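In code, the logistic weighting and the blended estimate of Eq.\ref{eq:Unified} amount to the following sketch (function names ours):

```python
import math

def logistic_weight(vh, vv, k=0.05):
    """Eq. (LogisticFunction): weight of the horizontal estimate,
    a smooth function of the variation difference vh - vv."""
    return 1.0 / (1.0 + math.exp(k * (vh - vv)))

def blended_green(gh, gv, vh, vv, k=0.05):
    """Eq. (Unified): blend the horizontal and vertical estimates."""
    w = logistic_weight(vh, vv, k)
    return w * gh + (1 - w) * gv
```

By construction $\texttt{logistic\_weight}(a,b)+\texttt{logistic\_weight}(b,a)=1$, which is exactly the symmetry requirement $f(v_{h}-v_{v})+f(v_{v}-v_{h})=1$.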
Algorithm \ref{alg:green interpolation} summarizes the green channel demosaicking pipeline. To examine the influence of the hyper-parameter $k$ on demosaicking performance, we run the algorithm on $100$ high quality natural images from the Waterloo Exploration Database \cite{Waterloo17}\cite{Waterloo}, with $k$ varying from $0.01$ to $1.0$ at a step size of $0.01$. We observe that $k=0.05$ yields the highest PSNR (averaged over the $100$ training images), hence we fix $k$ to $0.05$ in this work. Fig.\ref{fig:logisticFuncCurv} plots the curve of $f_{0.05}(x)$. Note that the high-pass filtering involved in the interpolation scheme does not preserve energy. Consequently, $\hat{g}$ might fall outside the range $[\min(g(\mathcal{G})),\max(g(\mathcal{G}))]$; hence, at the final step of the algorithm, we clip such $\hat{g}$ values to either $\min(g(\mathcal{G}))$ or $\max(g(\mathcal{G}))$, whichever is closer.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.75\textwidth,height=0.25\textheight]{logisticFuncCurv.eps}
\caption{The curve of logistic function $f_{k}(x)=(1+e^{kx})^{-1}$, with $k=0.05$, which we use to balance the contribution from the horizontal and vertical neighbours.}
\label{fig:logisticFuncCurv}
\end{figure}
\begin{algorithm}
\KwIn{Mosaicked image $\mathbf{M}$; hyper-parameter $k$}
\KwOut{ $\hat{g}(i,j)$ for each $(i,j) \in \mathcal{G}^c$}
$m_{max} := \max(g(\mathcal{G}))$;\\
$m_{min} := \min(g(\mathcal{G}))$;\\
\For{each $(i,j)\in \mathcal{G}^c$}
{
Compute $\partial_{h}$,$\partial_{v}$,$\partial^2_{h}$,$\partial^2_{v}$ of $\mathbf{M}$ at pixel $(i,j)$ by Eq.\ref{eq:horiDiff};\\
Compute $v_{h}$ and $v_{v}$ by Eq.\ref{eq:HA gradients}, $\bar{g}_{h}$ and $\bar{g}_{v}$ by Eq.\ref{eq:directional summation};\\
Compute $\omega_{h}$ by Eq.\ref{eq:LogisticFunction};\\
Compute $\hat{g}(i,j)$ by Eq.\ref{eq:Unified};\\
$\hat{g}(i,j) = \max(\min(\hat{g}(i,j),m_{max}),m_{min})$;
}
\Return{$\hat{g}(\mathcal{G}^c)$}
\caption{Green Channel Demosaicking}
\label{alg:green interpolation}
\end{algorithm}
\subsection {Red and Blue Channels Demosaicking}
We transform $r(\mathcal{R}^c)$ and $b(\mathcal{B}^c)$ estimation to $(\hat{g}-r)(\mathcal{R}^c)$ and $(\hat{g}-b)(\mathcal{B}^c)$ interpolation. We treat the two channels in the same fashion, hence this section only articulates the red channel demosaicking. Its blue channel counterpart can be derived by simply exchanging the positions of red and blue in the algorithm.
To respect edges and textures, we apply our edge-sensing strategy to the red channel as well. This cannot be done by a straightforward extension from the green channel to the red channel. Due to the Bayer CFA sensor arrangement, in the green channel, at a pixel $(i,j) \in \mathcal{G}^c$, all of its horizontal and vertical neighbours have original green values available. In contrast, if $(i,j) \in \mathcal{R}^c$, at most two of its horizontal and vertical neighbours have green-red difference values available. Our approach is to leverage the diagonal symmetry of the red sensors' positions. We first derive the edge-sensing interpolation scheme for $(g-r)(i,j)$, where $(i,j)\in \mathcal{B}$, using its diagonal and anti-diagonal neighbours. This makes the green-red difference values available at the horizontal and vertical neighbours of each of the remaining pixels. We then infer $r(\mathcal{G})$ from $(\hat{g}-r)(\mathcal{R})$ and the estimated $(g-r)(\mathcal{B})$.
\subsubsection {Estimating red values at $\mathcal{B}$}
As shown in Fig.\ref{fig:grAtB}, the nearest available red values around a pixel $(i,j) \in \mathcal{B}$ are $r(i-1,j-1)$, $r(i+1,j+1)$, $r(i-1,j+1)$, and $r(i+1,j-1)$, located in the diagonal and anti-diagonal directions. To obtain edge information, we compute the difference between the diagonal and anti-diagonal intensity variation (in the mosaicked image plane $\mathbf{M}$). We then use the logistic function value of this difference to weight the contribution of $(\hat{g}-r)$ at $(i-1,j-1)$, $(i+1,j+1)$, $(i-1,j+1)$, and $(i+1,j-1)$ to restore $(\hat{g}-r)(i,j)$.
\begin{figure*}[h!]
\centering
\subfigure[]{
\label{fig:grAtB}
\includegraphics[width=0.45\linewidth]{grAtB.eps}
}
~
\subfigure[]{
\label{fig:grAtG}
\includegraphics[width=0.45\linewidth]{grAtG.eps}
}
\caption{Illustration of recovering $(g-r)(i,j)$ for $(i,j)\in \mathcal{R}^c$. Pixels used for computing the second order partial derivatives are surrounded by a circle; Pixels used for computing the first order partial derivatives are surrounded by both a circle and a curved square. (a) First, for each pixel $(i,j) \in\mathcal{B}$, $\widehat{(g-r)}(i,j)$ is obtained from its diagonally and anti-diagonally neighbouring $(\hat{g}-r)$ and $(\hat{g}-b)$ values; (b) Subsequently, at each pixel $(i,j) \in\mathcal{G}$, $\widehat{(g-r)}(i,j)$ is obtained from its vertically neighbouring $(\hat{g}-r)$ values and horizontally neighbouring $\widehat{(g-r)}$ values.}
\end{figure*}
In particular, we compute the first and second order diagonal and anti-diagonal partial derivatives of $\mathbf{M}$ at $(i,j)$ by
\begin{eqnarray}
\partial_{d}\mathbf{M}(i,j) & = & \frac{\mathbf{M}\left(i+1,j+1\right)-\mathbf{M}\left(i-1,j-1\right)}{2\sqrt{2}} \nonumber \\
\partial^{2}_{d}\mathbf{M}(i,j) & = & \frac{\mathbf{M}\left(i+2,j+2\right)+\mathbf{M}\left(i-2,j-2\right)-2\mathbf{M}\left(i,j\right)}{8}\nonumber\\
\partial_{a}\mathbf{M}(i,j) & = & \frac{\mathbf{M}\left(i-1,j+1\right)-\mathbf{M}\left(i+1,j-1\right)}{2\sqrt{2}} \nonumber \\
\partial^{2}_{a}\mathbf{M}(i,j) & = & \frac{\mathbf{M}\left(i-2,j+2\right)+\mathbf{M}\left(i+2,j-2\right)-2\mathbf{M}\left(i,j\right)}{8}.
\label{eq:diagDiff}
\end{eqnarray}
\normalsize
The local intensity variations in the diagonal and anti-diagonal directions are computed as
\begin{eqnarray}
v_{d}& = & \left|\partial_{d}\mathbf{M}(i,j)\right|+\left|2\sqrt{2}\partial^{2}_{d}\mathbf{M}(i,j)\right|\nonumber\\
v_{a} & = & \left|\partial_{a}\mathbf{M}(i,j)\right|+\left|2\sqrt{2}\partial^{2}_{a}\mathbf{M}(i,j)\right|.
\label{eq:diagVari}
\end{eqnarray}
Let $\omega_{d}$ be the logistic function value of $v_{d}-v_{a}$, i.e.,
\begin{equation}
\omega_d = \frac{1}{1+e^{k\left(v_{d} - v_{a}\right)}} ,
\label{eq:diagWeight}
\end{equation}
where hyper-parameter $k$ is fixed to $0.05$, as described in Section \ref{subsec:greenReco} for green channel recovery.
Define
\scriptsize
\begin{eqnarray}
\overline{(g-r)}_{d} & = & \frac{(\hat{g}-r)\left(i+1,j+1\right)+(\hat{g}-r)\left(i-1,j-1\right)}{2} \nonumber \\
\overline{(g-r)}_{a} & = & \frac{(\hat{g}-r)\left(i-1,j+1\right)+(\hat{g}-r)\left(i+1,j-1\right)}{2},
\label{eq:diagMean}
\end{eqnarray}
\normalsize
which compute the diagonal mean and anti-diagonal mean of $(\hat{g}-r)$ at $(i,j)$.
Furthermore, the second order partial derivatives in the color-difference plane $\hat{g}-b$ at $(i,j)$ are approximated by the central differencing scheme,
\begin{eqnarray}
\partial^2_{d}(\hat{g}-b) & = & \frac{(\hat{g}-b)\left(i+2,j+2\right)+(\hat{g}-b)\left(i-2,j-2\right)-2(\hat{g}-b)\left(i,j\right)}{8} \nonumber \\
\partial^2_{a}(\hat{g}-b) & = & \frac{(\hat{g}-b)\left(i+2,j-2\right)+(\hat{g}-b)\left(i-2,j+2\right)-2(\hat{g}-b)\left(i,j\right)}{8}.
\label{eq:diagCDCurv}
\end{eqnarray}
\normalsize
We infer $(g-r)(i,j)$ by fusing the two directional estimates using $\omega_{d}$, that is,
\begin{equation}
\widehat{(g-r)}(i,j) = \omega_{d}\left(\overline{(g-r)}_{d}-\partial^2_{d}(\hat{g}-b)\right)+(1-\omega_{d})\left(\overline{(g-r)}_{a}-\partial^2_{a}(\hat{g}-b)\right),
\label{eq:grAtB}
\end{equation}
\normalsize
from which $r(i,j)$ is recovered by
\begin{equation}
\hat{r}(i,j) = \hat{g}(i,j)-\widehat{(g-r)}(i,j),
\label{eq:rAtB}
\end{equation}
for $(i,j)\in\mathcal{B}$.
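Collecting Eq.\ref{eq:diagDiff}--Eq.\ref{eq:rAtB}, the red estimate at a blue pixel can be sketched as follows (names ours; the arrays \texttt{g\_minus\_r} and \texttt{g\_minus\_b} are assumed to hold $\hat{g}-r$ at red positions and $\hat{g}-b$ at blue positions, and $(i,j)$ must be at least two samples from the border):

```python
import math

def red_at_blue(M, g_minus_r, g_minus_b, g_hat, i, j, k=0.05):
    """Estimate red at a blue pixel (i, j) by logistic blending of the
    diagonal and anti-diagonal estimates (Eqs. diagDiff--rAtB)."""
    s = 2.0 * math.sqrt(2.0)
    # first- and second-order diagonal / anti-diagonal derivatives of M
    dd  = (M[i+1][j+1] - M[i-1][j-1]) / s
    d2d = (M[i+2][j+2] + M[i-2][j-2] - 2 * M[i][j]) / 8.0
    da  = (M[i-1][j+1] - M[i+1][j-1]) / s
    d2a = (M[i-2][j+2] + M[i+2][j-2] - 2 * M[i][j]) / 8.0
    # directional variations and logistic weight
    vd = abs(dd) + abs(s * d2d)
    va = abs(da) + abs(s * d2a)
    wd = 1.0 / (1.0 + math.exp(k * (vd - va)))
    # diagonal / anti-diagonal means of (g_hat - r)
    mean_d = (g_minus_r[i+1][j+1] + g_minus_r[i-1][j-1]) / 2.0
    mean_a = (g_minus_r[i-1][j+1] + g_minus_r[i+1][j-1]) / 2.0
    # curvature correction from the (g_hat - b) plane
    c2d = (g_minus_b[i+2][j+2] + g_minus_b[i-2][j-2] - 2 * g_minus_b[i][j]) / 8.0
    c2a = (g_minus_b[i+2][j-2] + g_minus_b[i-2][j+2] - 2 * g_minus_b[i][j]) / 8.0
    gr = wd * (mean_d - c2d) + (1 - wd) * (mean_a - c2a)  # (g - r) estimate
    return g_hat[i][j] - gr                               # r_hat at (i, j)
```

In a locally flat region both directional variations vanish, the weight reduces to $\frac{1}{2}$, and the recovery is exact whenever $g-r$ is locally constant.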
\subsubsection {Estimating red values at $\mathcal{G}$}
Once $\widehat{(g-r)}(\mathcal{B})$ is available, $\widehat{(g-r)}(\mathcal{G})$ can be estimated from the horizontal and vertical neighbours, as shown in Fig.\ref{fig:grAtG}. Note that in this step, for each $(i,j) \in \mathcal{G}$, either $(\hat{g}-r)$ or $\widehat{(g-r)}$ values have already been computed at the four nearest neighbours $(i-1,j)$, $(i+1,j)$, $(i,j-1)$ and $(i,j+1)$. For notational simplicity, we denote them uniformly by $\widetilde{(g-r)}$. In the green-red difference plane at pixel $(i,j)$, we compute the horizontal and vertical average values $\overline{(g-r)}_{h}$ and $\overline{(g-r)}_{v}$ by
\begin{eqnarray}
\overline{(g-r)}_{h} & = & \frac{\widetilde{(g-r)}\left(i,j+1\right)+\widetilde{(g-r)}\left(i,j-1\right)}{2} \nonumber \\
\overline{(g-r)}_{v} & = & \frac{\widetilde{(g-r)}\left(i+1,j\right)+\widetilde{(g-r)}\left(i-1,j\right)}{2}.
\label{eq:colorDiffHoriMean}
\end{eqnarray}
\normalsize
Moreover, we approximate the second-order partial derivatives of $\widetilde{(g-r)}$ at $(i,j)$ by the central differencing scheme as
\scriptsize
\begin{eqnarray}
\partial^2_{h}\widetilde{(g-r)} & = & \frac{\partial_{h}\widetilde{(g-r)}\left(i,j+1\right)-\partial_{h}\widetilde{(g-r)}\left(i,j-1\right)}{2} \nonumber \\
\partial^2_{v}\widetilde{(g-r)} & = & \frac{\partial_{v}\widetilde{(g-r)}\left(i+1,j\right)-\partial_{v}\widetilde{(g-r)}\left(i-1,j\right)}{2},
\label{eq:colorDiffCurv}
\end{eqnarray}
\normalsize
where $\partial_{h}\widetilde{(g-r)}(i,j-1)$, $\partial_{h}\widetilde{(g-r)}(i,j+1)$, $\partial_{v}\widetilde{(g-r)}(i-1,j)$ and $\partial_{v}\widetilde{(g-r)}(i+1,j)$
are further approximated by central differencing
\begin{eqnarray}
\partial_{h}\widetilde{(g-r)}(i,j-1) & = & \frac{\widetilde{(g-r)}(i,j+1)-\widetilde{(g-r)}(i,j-3)}{4} \nonumber \\
\partial_{h}\widetilde{(g-r)}(i,j+1) & = & \frac{\widetilde{(g-r)}(i,j+3)-\widetilde{(g-r)}(i,j-1)}{4} \nonumber \\
\partial_{v}\widetilde{(g-r)}(i-1,j) & = & \frac{\widetilde{(g-r)}(i+1,j)-\widetilde{(g-r)}(i-3,j)}{4} \nonumber \\
\partial_{v}\widetilde{(g-r)}(i+1,j) & = & \frac{\widetilde{(g-r)}(i+3,j)-\widetilde{(g-r)}(i-1,j)}{4}.
\label{eq:colorDiffGrad}
\end{eqnarray}
\normalsize
Subsequently, $\widehat{(g-r)}(i,j)$ is given by
\begin{equation}
\widehat{(g-r)}(i,j) = \omega_{h}\left(\overline{(g-r)}_{h}-\partial^2_{h}\widetilde{(g-r)}\right)+(1-\omega_{h})\left(\overline{(g-r)}_{v}-\partial^2_{v}\widetilde{(g-r)}\right),
\label{eq:grAtG}
\end{equation}
\normalsize
where $\omega_{h}$ is computed by the same formulas as in Eqs.\ \ref{eq:horiDiff}, \ref{eq:HA gradients} and \ref{eq:LogisticFunction}.
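A Python sketch of this second fusion step follows the same pattern (hypothetical container and function names; $\omega_h$ is assumed given, and the neighbour derivatives are taken as central differences of the colour-difference values with spacing 2):

```python
def estimate_gr_at_green(gr, i, j, w_h):
    """Estimate (g - r) at a green pixel from its four nearest neighbours,
    where the (g - r) values are already known.

    gr  : dict mapping (i, j) -> (g-r) values (hypothetical container).
    w_h : horizontal fusion weight in [0, 1], taken as given here.
    """
    # Horizontal / vertical means of the neighbouring (g - r) values
    m_h = (gr[(i, j + 1)] + gr[(i, j - 1)]) / 2
    m_v = (gr[(i + 1, j)] + gr[(i - 1, j)]) / 2
    # First derivatives at the neighbours, central differences with spacing 2
    dh_m = (gr[(i, j + 1)] - gr[(i, j - 3)]) / 4   # at (i, j-1)
    dh_p = (gr[(i, j + 3)] - gr[(i, j - 1)]) / 4   # at (i, j+1)
    dv_m = (gr[(i + 1, j)] - gr[(i - 3, j)]) / 4   # at (i-1, j)
    dv_p = (gr[(i + 3, j)] - gr[(i - 1, j)]) / 4   # at (i+1, j)
    # Second derivatives at (i, j)
    d2_h = (dh_p - dh_m) / 2
    d2_v = (dv_p - dv_m) / 2
    # Weighted fusion of the two directional estimates
    return w_h * (m_h - d2_h) + (1 - w_h) * (m_v - d2_v)
```

On a colour-difference plane that varies linearly with $j$, the curvature corrections vanish and the estimate is exact, which is the regularity assumption behind the scheme.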
Finally, $r(i,j)$ is restored by Eq.\ref{eq:rAtB} for $(i,j)\in\mathcal{G}$. Algorithm \ref{alg:red interpolation} summarizes the estimation process of the missing red components.
\begin{algorithm}
\KwIn{Mosaicked image $\mathbf{M}$; estimated green channel $\hat{g}(\mathcal{L})$; hyper-parameter $k$}
\KwOut{ $\hat{r}(i,j)$ for each $(i,j) \in \mathcal{R}^c$}
$r_{max} := \max(r(\mathcal{R}))$;\\
$r_{min} := \min(r(\mathcal{R}))$;\\
\For{each $(i,j)\in \mathcal{B}$}
{
Compute $\hat{r}(i,j)$ by applying Eq.\ref{eq:diagDiff} through Eq.\ref{eq:rAtB} in sequence;\\
$\hat{r}(i,j) = \max(\min(\hat{r}(i,j),r_{max}),r_{min})$;
}
\For{each $(i,j)\in \mathcal{G}$}
{
Compute $\widehat{(g-r)}(i,j)$ by applying Eq.\ref{eq:colorDiffHoriMean} through Eq.\ref{eq:grAtG} in sequence;\\
Compute $\hat{r}(i,j)$ by Eq.\ref{eq:rAtB};\\
$\hat{r}(i,j) = \max(\min(\hat{r}(i,j),r_{max}),r_{min})$;
}
\Return{$\hat{r}(\mathcal{R}^c)$}
\caption{Red Channel Demosaicking}
\label{alg:red interpolation}
\end{algorithm}
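The clamping step shared by both loops of the algorithm is simply a clip of each estimate into the observed red range; as a one-line sketch (hypothetical function name):

```python
def clip_to_observed(r_hat, r_min, r_max):
    # Clamp an estimate into [r_min, r_max], the range of observed red
    # samples, as in the last step of each loop of the algorithm.
    return max(min(r_hat, r_max), r_min)
```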
At image boundaries where pixels required for central differencing or averaging are unavailable, we simply restore the missing colour components by linear interpolation or nearest-neighbour interpolation.
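One common alternative to such fallback interpolation (not the method used here, just an illustration) is to replicate-pad the planes so that the interior formulas apply everywhere; a pure-Python sketch with a hypothetical helper:

```python
def pad_nearest(img, p):
    """Replicate-pad a 2-D list-of-lists image by p pixels on every side,
    so stencils of radius p can be evaluated at every original pixel."""
    H, W = len(img), len(img[0])
    return [[img[min(max(i - p, 0), H - 1)][min(max(j - p, 0), W - 1)]
             for j in range(W + 2 * p)]
            for i in range(H + 2 * p)]
```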
\section{Introduction}
A moving mirror can produce quantum radiation from a vacuum \cite{Fu73, De75, FD76, BD82}; two mirrors at rest can form a cavity with a negative Casimir energy density inside \cite{Ca48, De75, BD82, BMM01, KMM09}, and one or two such cavity mirrors moving in specific ways can create particles in the cavity
\cite{Mo70, JJ09, JJ10, WJ11}. All of this interesting physics can be obtained by simply modeling a perfect mirror as a Dirichlet boundary
condition for the field at the position of the mirror. Nevertheless,
such simple models may not always be satisfactory, in either theoretical or experimental respects.
Theoretically, a detector or atom inside a cavity of perfect mirrors would experience endless echoes without
relaxation if the atom and the field do not start in a steady state of the combined system \cite{LCH16};
the equilibrium approach never applies unless one introduces an {\it ad hoc} dissipation for the cavity.
Experimentally, while incident field waves at all frequencies are totally reflected by a perfect mirror under the Dirichlet boundary
condition, a physical mirror is never perfect: the charges of a realistic mirror responding to the incident electromagnetic waves
have a finite relaxation time, and the reflectivity approaches $100\%$ only in a finite working range of frequencies.
To describe more realistic situations, and to see how well the results obtained by simply imposing Dirichlet boundary conditions hold up,
several mirror models more sophisticated than a hard boundary condition for the field have been proposed.
For example, Barton and Calogeracos introduced a mass term for the field at the mirror's position which acts
like a delta-function potential \cite{BC95, CB95},
Golestanian and Kardar applied an auxiliary field to constrain the field amplitude around the mirror's position \cite{GK97, GK98},
and Sopova and Ford replaced perfect conducting mirrors by dispersive dielectrics \cite{SF02}.
Recently Galley, Behunin, and Hu constructed a mirror-oscillator-field (MOF) model with a new internal degree of freedom of the mirror minimally coupled to the field at the mirror's position
to mimic the microscopic interaction between the field and the surface charges of the mirror \cite{GBH13}.
Such a microscopic treatment captures the mirror-field interaction in a more physically consistent way.
The authors of \cite{GBH13} also showed that their MOF model can be connected to the earlier models in Refs.
\cite{BC95, CB95, GK97, GK98, La95} with different choices of parameters and limits.
A similar model with the derivative coupling was considered by Wang and Unruh to study the force exerted on the mirror by vacuum
fluctuations \cite{WU14}. Wang and Unruh further considered a model with the internal oscillator minimally coupled to a massive scalar
field \cite{WU15} to get rid of the divergent effective mass in \cite{WU14}.
In Ref. \cite{SLH15} Sinha, Hu and the author of the present paper realized that the mirrors in the MOF models with the minimal and the
derivative couplings behave like metal and dielectric mirrors, respectively. They introduced a new coupling
to a harmonic-oscillator bath to describe the interaction between the mirror's internal degree of freedom and the mechanical degrees
of freedom such as the vibration of the mirror substrate and the environment connected by the suspension of the mirror.
They also verified that in the strong coupling regime their results are close to those with the Dirichlet boundary conditions
\cite{Ja05, GJ02, GJ03, GJ04}.
Since the MOF models are nonlinear due to the mirror motion, one usually needs to make some linear approximations in practical calculations.
Among these approximations, restricting the mirror to move along a prescribed worldline may be the simplest.
The motion of the mirror is then not dynamical, and the derivative-coupling MOF model in Ref. \cite{SLH15} reduces to a
derivative-coupling Unruh-DeWitt (UD$'$) harmonic-oscillator (HO) detector theory \cite{Unr76, DeW79, UZ89, RSG91} with additional HO baths,
which is the model we are considering in this paper.
The late-time reflectivity of our ``detector mirror" in the weak oscillator-field (OF) coupling regime is similar to that of the atom mirrors in
cavity and waveguide QED \cite{SF05, ZD08, AZ10, HSHB11, WJ11, CGS11, CJGK12, SMK14}, whose reflectivity is peaked in a narrow band around
a single resonance frequency. A cavity made of such atom mirrors can sustain only a few cavity modes inside, since the detector and atom mirrors are almost transparent to the other harmonics \cite{ZD08}.
In the field-theoretical derivation for the Casimir effect \cite{Ca48}, however, one needs to sum over all the cavity modes to get the
Casimir energy density in a perfect cavity.
Thus in this paper we extend our attention to the detector mirrors in the strong OF coupling regime, where the reflectivity of
the detector mirror is close to 100\% in a very wide frequency range of the field.
Later we will see that, while the transient behaviors of the combined system can be significantly different for different coupling strengths,
the late-time renormalized energy density of the field inside a cavity of our detector mirrors is always negative even in the weak
OF coupling regime where the cavity modes are few.
The paper is organized as follows. The classical theory for our single ``detector mirror" is given in Sec. \ref{CT1DM}, where we examine
the relaxation time and the late-time behavior of the system and then derive the late-time reflectivity determined by the
interplay between the HO and the field.
In Sec. \ref{QT1DM} we develop the quantum theory of the detector mirror and show that the energy density of the field outside the
detector mirror is zero at late times while the equal-time correlations of the field amplitudes at different positions are reduced by the
mirror. In Sec. \ref{detCav}, we consider a cavity of the detector mirrors, show that there are indeed many cavity modes inside our
cavity at late times in the strong OF coupling regime, and then calculate the late-time renormalized energy density inside the cavity, which
turns out to be negative for all nonzero coupling strengths. After addressing the HO-HO entanglement at late times,
a summary of our findings is given in Sec. \ref{Summa}.
\section{Detector mirror: Classical theory}
\label{CT1DM}
A mirror moving along the worldline $z^\mu$ with its internal degree of freedom $Q$ coupled with a quantum field $\Phi$
in (1+1)D Minkowski space may be described by the action given in Eq. (1) of Ref. \cite{SLH15}, with $Z$ there replaced by $z^1$ and
with the mechanical damping $\Gamma$ and noise $\xi$ in Eq. (35) there introduced.
Since the position of the mirror $z^1$ is not considered as a dynamical variable in this paper, we can write down the reduced action as
\begin{eqnarray}
S &=& -\int dt dx \frac{1}{2}\partial_\mu\Phi_x(t) \partial^\mu\Phi_x(t)
+\frac{1}{2}\int d\tau \left[ \left(\partial_\tau Q(\tau)\right)^2 -\Omega_0^2 Q^2(\tau)\right]
\nonumber\\
& & -\int d\tau \int dt dx \lambda(\tau) Q(\tau) \partial_\tau\Phi_x(t) \delta(t-z^0(\tau))\delta(x-z^1(\tau)) \nonumber\\
& & -\int d\tau dy \frac{1}{2}\partial_\nu {\cal Z}_y(\tau) \partial^\nu {\cal Z}_y(\tau)
-\int d\tau dy \tilde{\lambda}(\tau) Q(\tau) \partial_\tau {\cal Z}_y(\tau) \delta(y-\vartheta),
\label{Stot1}
\end{eqnarray}
which is a derivative-coupling UD$'$ detector theory \cite{Unr76, DeW79, UZ89, RSG91}.
Here natural units with the speed of light $c=1$ are adopted, $(t,x)$ are the Minkowski coordinates, $\tau$ is the proper time of the
detector mirror, $Q$ is a HO of mass $m=1$ living in an internal space of the detector, and $\lambda(\tau)$ is the
switching function of the OF coupling, assumed to vanish before the initial time $\tau^{}_I$. The derivative
coupling is chosen for its well-behaved radiation-reaction term, which is proportional to the first proper-time derivative of $Q$ [e.g. Eq.
(\ref{eomQRR})]. The function $\tilde{\lambda}(\tau)$ corresponds to the coupling between the internal HO and the environmental
oscillator bath responsible for the mechanical damping and noise. It can be switched on at a different initial moment $\tau'_I\not=\tau^{}_I$.
In the strong OF coupling regime, the absolute value of the OF coupling $|\lambda|$ is much greater than that of the oscillator-environment (OE)
coupling $|\tilde{\lambda}|$ so that the former interaction dominates and the detail of the environment would not be important.
Thus for simplicity and consistency, we model the complicated environmental degrees of freedom such as the vibration of the mirror substrate
and those connected by the suspension of the mirror by a single massless scalar field ${\cal Z}_y(\tau)$ in another internal space $y\in
{\bf R}^1$, and assume that the internal HO of the mirror also acts as an UD$'$ detector located at $y=\vartheta$ in that internal space
\footnote{The internal HO $Q$ of a detector in a massless scalar field $\Phi$ in (1+1)D with the $F(Q,\dot{Q})\dot{\Phi}$ coupling
($F(Q,\dot{Q})\Phi$ coupling) behaves like a HO in an Ohmic (sub-Ohmic) bath in the quantum Brownian motion models with a particular set of
couplings to the bath oscillators \cite{HM94, LH07}. Here $F$ is an arbitrary function.}. In this way the dissipation and fluctuations will be related consistently.
Then the action (\ref{Stot1}) is quadratic and the combined system is linear and solvable.
When considering two or more mirrors, the internal space $y$, the phase parameter $\vartheta$, and the coupling $\tilde{\lambda}$ of
each mirror will be considered independent of those of the other detector mirrors.
From (\ref{Stot1}) the conjugate momenta of the detector, the field, and the mechanical environment read
\begin{eqnarray}
P(\tau) &=& \frac{\delta S}{\delta \partial_\tau Q(\tau)} = \partial_\tau Q(\tau), \\
\Pi_x(t) &=& \frac{\delta S}{\delta\partial_t\Phi_x(t)}
=\partial_t \Phi_x(t) - \lambda(\tau^{}_t) Q(\tau^{}_t) \delta (x - z^1(\tau^{}_t)), \\
\Upsilon_y(\tau) &=& \frac{\delta S}{\delta\partial_\tau {\cal Z}_y(\tau)} = \partial_\tau {\cal Z}_y(\tau) -
\tilde{\lambda}(\tau) Q(\tau) \delta(y-\vartheta),
\end{eqnarray}
respectively, with which the Hamiltonian on a $t$-slice is given by
\begin{eqnarray}
H(t) &=& \frac{1}{2 v^0(\tau^{}_t)}\left[ P^2(\tau^{}_t) + \Omega_0^2 Q^2(\tau^{}_t) \right] \nonumber\\
& &+\,\frac{1}{2}\int dx \left\{ \left[ \Pi_x(t)+\lambda(\tau^{}_t) Q(\tau^{}_t) \delta(x-z^1(\tau^{}_t))\right]^2 +
\left[ \partial_x \Phi_x(t)\right]^2 \right\} \nonumber\\
& &+\,\frac{1}{2v^0(\tau^{}_t)}\int dy \left\{ \left[ \Upsilon_y(\tau^{}_t)+ \tilde{\lambda}(\tau^{}_t) Q(\tau^{}_t) \delta(y-\vartheta)
\right]^2 + \left[ \partial_y {\cal Z}_y(\tau^{}_t)\right]^2 \right\}\nonumber\\
& &+\,\lambda(\tau^{}_t) Q(\tau^{}_t) \frac{v^1(\tau^{}_t)}{v^0(\tau^{}_t)}\partial_x\Phi_{z^1(\tau^{}_t)}(t),
\label{Hamil}
\end{eqnarray}
where $\tau^{}_t$ is obtained by solving $t=z^0(\tau^{}_t)$ and $v^\mu(\tau)\equiv \partial_\tau z^\mu(\tau)$ is the
two-velocity \footnote{The last term of the Hamiltonian (\ref{Hamil}) is overlooked in Eq.(2.6) of Ref.\cite{LCH16}.
This does not affect the results in \cite{LCH16} since
$(v^0(\tau),v^1(\tau))=(1,0)$ in the cases considered there.}.
Suppose the detector is at rest at $x=0$ in the external Minkowski space, so that $z^1(\tau)=0$, $z^0(\tau)=\tau=t$, $v^1(\tau)=0$ and
$v^0(\tau)=1$. Then the value of the Hamiltonian (\ref{Hamil}) equals
\begin{eqnarray}
E(t) &=& \frac{1}{2}\left[ \left(\partial_t Q(t)\right)^2 + \Omega_0^2 Q^2(t) \right] +
\frac{1}{2}\int dx \left[ \left(\partial_t\Phi_x(t)\right)^2 + \left(\partial_x \Phi_x(t)\right)^2 \right]\nonumber\\
& &+\,\frac{1}{2}\int dy \left[ \left(\partial_\tau {\cal Z}_y(t)\right)^2 + \left(\partial_y {\cal Z}_y(t)\right)^2\right] , \label{Eden}
\end{eqnarray}
in which no cross terms between the different kinds of degrees of freedom appear.
Nevertheless, the Euler-Lagrange equations in this case,
\begin{eqnarray}
\left( \partial_t^2 - \partial_x^2\right)\Phi_x(t) &=&
\partial_t \left(\lambda(t) Q(t)\right)\delta(x), \label{eomPhi} \\
\left( \partial_t^2 - \partial_y^2\right){\cal Z}_y(t) &=& \partial_t \left(\tilde{\lambda}(t) Q(t)\right)\delta(y-\vartheta),\label{eomX}\\
\left(\partial_t^2+ \Omega_0^2\right) Q(t) &=& -\lambda(t) \partial_t \Phi_0(t) -\tilde{\lambda}(t) \partial_t {\cal Z}_\vartheta(t),\label{eomQ}
\end{eqnarray}
from (\ref{Stot1}), are still coupled.
The general solution of the field $\Phi$ in (\ref{eomPhi}) can be expressed as
\begin{equation}
\Phi_x(t) = \Phi_x^{^{[0]}}(t) + \Phi_x^{^{[1]}}(t) ,
\end{equation}
where $\Phi_x^{^{[0]}}(t)$ is the homogeneous solution satisfying $\Box\Phi^{^{[0]}}=0$ and $\Phi_x^{^{[1]}}(t)$ is the inhomogeneous
solution given by
\begin{eqnarray}
\Phi_x^{^{[1]}}(t) &=& \int_{-\infty}^\infty d\tau\partial_{\tau}\left( \lambda(\tau)Q(\tau)\right) G_{\rm ret}(t,x;z^0(\tau),z^1(\tau))
\nonumber\\ &=& \frac{1}{2} \lambda(t-|x|)Q(t-|x|) , \label{Phisol}
\end{eqnarray}
after an integration by parts. Here the retarded Green's function for a massless scalar field in (1+1)-dimensional Minkowski space
${\bf R}_1^1$ reads $G_{\rm ret}(t,x;t',x') = \theta[(t+x)-(t'+x')]\theta[(t-x)-(t'-x')]/2$, where $\theta(x)$ is the Heaviside step
function with the convention $\theta(0)=1/2$. The surface terms in (\ref{Phisol}) have been dropped since $\lim_{\tau\to\infty}$ $G_{\rm ret}
(t,x;$ $z^0(\tau),z^1(\tau))=0$ for all finite $t$ and $x$, and we assume $\lim_{\tau\to-\infty} \lambda(\tau)=0$ long before the coupling
is switched on. Similarly, the general solution of ${\cal Z}$ in (\ref{eomX}) is
\begin{equation}
{\cal Z}_y(t) ={\cal Z}_y^{^{[0]}}(t)
+\frac{1}{2}\tilde{\lambda}(t-|y-\vartheta|)Q(t-|y-\vartheta|), \label{Xsol}
\end{equation}
where ${\cal Z}_y^{^{[0]}}(t)$ is the homogeneous solution.
Inserting the solutions of the field $\Phi$ and the mechanical environment ${\cal Z}$ into (\ref{eomQ}), one obtains
\begin{eqnarray}
&&\ddot{Q}(t) + \left(\frac{\lambda^2(t)}{2} +\frac{\tilde{\lambda}^2(t)}{2}\right)\dot{Q}(t) +
\left(\Omega_0^2 + \frac{\lambda(t)\dot{\lambda}(t)}{2}+\frac{\tilde{\lambda}(t)\dot{\tilde{\lambda}}(t)}{2}\right) Q(t)\nonumber\\
&=& -\lambda(t) \dot{\Phi}_0^{^{[0]}}(t) - \tilde{\lambda}(t)\dot{\cal Z}_\vartheta^{^{[0]}}(t),
\label{eomQRR}
\end{eqnarray}
which shows that $Q$ behaves like a driven, damped HO with a time-dependent frequency.
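As a numerical sanity check (with illustrative parameter values, not taken from the paper), one can integrate Eq. (\ref{eomQRR}) for constant couplings, a monochromatic driving $\Phi_0^{^{[0]}}(t)=\cos\omega t$, and ${\cal Z}^{^{[0]}}=0$, and verify that $Q(t)$ relaxes to the driven steady state $Q(t)\to{\rm Re}[-\lambda\chi^{}_\omega e^{-i\omega t}]$ with the susceptibility of Eq. (\ref{chiw1}):

```python
import cmath, math

lam, lam_t, W0, w = 1.0, 0.4, 1.0, 0.7           # illustrative values
g, gt = lam**2 / 4, lam_t**2 / 4                 # gamma, gamma-tilde
chi = -1j * w / (W0**2 - w**2 - 2j * w * (g + gt))

def accel(t, Q, P):
    # Q-double-dot of the damped driven oscillator, constant couplings,
    # real driving lam * w * sin(w t) = Re[-lam * d/dt e^{-i w t}]
    return -2 * (g + gt) * P - W0**2 * Q + lam * w * math.sin(w * t)

# Classic RK4 for the system (Q' = P, P' = accel), starting from rest
Q, P, t, dt = 0.0, 0.0, 0.0, 0.01
for _ in range(40000):                           # integrate to t = 400 >> t_rlx
    k1q, k1p = P, accel(t, Q, P)
    k2q, k2p = P + dt/2*k1p, accel(t + dt/2, Q + dt/2*k1q, P + dt/2*k1p)
    k3q, k3p = P + dt/2*k2p, accel(t + dt/2, Q + dt/2*k2q, P + dt/2*k2p)
    k4q, k4p = P + dt*k3p, accel(t + dt, Q + dt*k3q, P + dt*k3p)
    Q += dt/6 * (k1q + 2*k2q + 2*k3q + k4q)
    P += dt/6 * (k1p + 2*k2p + 2*k3p + k4p)
    t += dt

Q_late = (-lam * chi * cmath.exp(-1j * w * t)).real
assert abs(Q - Q_late) < 1e-3                    # transient has died out
```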
\subsection{Relaxation}
\label{relax}
Suppose the OF coupling is switched on at $t=t_0$, namely, $\lambda(t)=\lambda \theta(t-t_0)$ with $\theta(0)=1/2$, while $\tilde{\lambda}$ has become a positive constant long before $t_0$, and initially $Q(t_0)=\dot{Q}(t_0)=0$. Integrating (\ref{eomQRR}) from
$t=t_0-\epsilon$ to $t_0+\epsilon$ for $\epsilon \to 0+$, one has $0 = \dot{Q}(t_0+\epsilon)-\dot{Q}(t_0-\epsilon)+ (\lambda^2/4) Q(t_0)$
for continuous $Q(t)$. This implies that $\dot{Q}$ is continuous around $t=t_0$ since $Q(t_0)=0$, and
so the solution for (\ref{eomQRR}) reads
\begin{equation}
Q(t) = \int_{t_0}^t d\tilde{\tau} K(t-\tilde{\tau}) \left[ -\lambda \dot{\Phi}_0^{^{[0]}}(\tilde{\tau}) -
\tilde{\lambda}\dot{\cal Z}_\vartheta^{^{[0]}}(\tilde{\tau})\right], \label{Qsol}
\end{equation}
for $t \ge t_0$, where the propagator $K$ is defined by
\begin{equation}
K(s) \equiv \frac{1}{2\Gamma} e^{-(\gamma+\tilde{\gamma})s} \left(e^{\Gamma s}-e^{-\Gamma s}\right)
= e^{-(\gamma+\tilde{\gamma})s} \Gamma^{-1} \sinh \Gamma s , \label{Kpropa}
\end{equation}
with the coupling strengths $\gamma \equiv\lambda^2/4 >0$ and $\tilde{\gamma}\equiv\tilde{\lambda}^2/4 >0$, the parameter
$\Gamma\equiv\sqrt{(\gamma+\tilde{\gamma})^2-\Omega_0^2}$ in the over-damping cases, and $\Gamma = i\Omega \equiv i \sqrt{\Omega_0^2 -
(\gamma+\tilde{\gamma})^2}$ in the under-damping cases.
In the cases of critical damping, $\Gamma^{-1}\sinh\Gamma s$ in (\ref{Kpropa}) reduces to $s$ as $\Gamma \to 0$.
In the integrand of (\ref{Qsol}) one can see that there are two channels of relaxation proportional to $e^{-(\gamma+\tilde{\gamma}-\Gamma)
(t-\tau)}$ and $e^{-(\gamma+\tilde{\gamma}+\Gamma)(t-\tau)}$ after (\ref{Kpropa}) is inserted.
In the cases of under- and critical damping, one has the relaxation time-scale $1/(\gamma+\tilde{\gamma})$, which gets shorter for larger
$\gamma$ and $\tilde{\gamma}$. In the over-damping cases, however, $\Gamma$ is a positive real number and so $e^{-(\gamma+\tilde{\gamma}-
\Gamma)(t-\tau)}$ sets a time-scale of relaxation,
\begin{equation}
t_{\rm rlx}=(\gamma+\tilde{\gamma}-\Gamma)^{-1}, \label{rlxtime}
\end{equation}
which will be longer for a stronger coupling strength $\gamma$ and/or $\tilde{\gamma}$ if $\Omega_0$ is fixed. For $\gamma\gg \Omega_0$,
one has $t_{\rm rlx} \approx 2(\gamma+\tilde{\gamma})/\Omega_0^2$.
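Both regimes can be checked numerically (the parameter values below are examples, not taken from the paper): the propagator (\ref{Kpropa}) stays real when $\Gamma$ is imaginary, and in the strongly over-damped regime the relaxation time (\ref{rlxtime}) approaches $2(\gamma+\tilde{\gamma})/\Omega_0^2$:

```python
import cmath, math

Omega0 = 1.0

# Under-damping: Gamma = i*Omega is imaginary, yet K(s) is real,
# since sinh(i Omega s)/(i Omega) = sin(Omega s)/Omega.
g, gt, s = 0.2, 0.05, 3.0
G = cmath.sqrt((g + gt) ** 2 - Omega0 ** 2)
K = cmath.exp(-(g + gt) * s) * cmath.sinh(G * s) / G
assert abs(K.imag) < 1e-12

# Over-damping: the slow channel sets t_rlx = 1/(g + gt - Gamma),
# which approaches 2*(g + gt)/Omega0**2 for g >> Omega0.
g, gt = 10.0, 0.05
G = math.sqrt((g + gt) ** 2 - Omega0 ** 2)
t_rlx = 1.0 / (g + gt - G)
t_approx = 2 * (g + gt) / Omega0 ** 2
assert abs(t_rlx - t_approx) / t_rlx < 0.01
```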
\subsection{Late-time solutions}
Introducing a right-moving wave $\Phi_x^{^{[0]}}(t)= e^{-i \omega t + i k x}$, $\omega = k >0$, as the driving force in (\ref{Qsol}) and
assuming ${\cal Z}_y^{^{[0]}}(t) = 0$ for simplicity,
once the OF coupling $\lambda$ has become a positive constant for a sufficiently long time for relaxation ($t-t_0 \gg t_{\rm rlx}$), one has
$Q(t)\propto e^{-i\omega t}$ at late times according to (\ref{Qsol}). Suppose the time scale of switching-on the OF coupling is much shorter
than $t_{\rm rlx}$. Inserting the late-time ansatz $Q(t)\to\tilde{Q}_\omega e^{-i\omega t}$ into (\ref{eomQRR}), one can solve
$\tilde{Q}_\omega$ and find the late-time solution
\begin{equation}
Q(t) \to \chi^{}_\omega \left[ -\lambda \Phi_{0}^{^{[0]}}(t)-\tilde{\lambda} {\cal Z}_{\vartheta}^{^{[0]}}(t)\right]
= - \lambda e^{-i\omega t}\chi^{}_\omega ,
\end{equation}
with the susceptibility function
\begin{equation}
\chi^{}_\omega \equiv \frac{-i\omega}{\Omega_0^2-\omega^2 -2i\omega(\gamma + \tilde{\gamma})}, \label{chiw1}
\end{equation}
which implies that
\begin{eqnarray}
\Phi_x(t) &\to&
e^{-i \omega (t-x)} - 2\gamma e^{-i\omega (t-|x|)}\chi^{}_\omega \label{Philate}\\
&\equiv& \theta(-x)\left[\Phi_x^{^{[0]}}(t) + \Phi_x^{^{[R]}}(t)\right] + \theta(x) \Phi_x^{^{[T]}}(t) \label{PhiIRT}
\end{eqnarray}
from (\ref{Phisol}).
\subsection{Reflectivity}
\label{secReflect}
In (\ref{Philate}), the first term and the second term in the $x<0$ region can be interpreted as the incident and
reflected waves $\Phi_x^{^{[0]}}$ and $\Phi_x^{^{[R]}}$, respectively, while the superposition of $\Phi_x^{^{[0]}}$ and $\Phi_x^{^{[1]}}$
in the $x>0$ region can be interpreted as the transmitted wave $\Phi_x^{^{[T]}}$, as in (\ref{PhiIRT}).
Thus at late times we can define the reflectivity as
\begin{equation}
|{\cal R}(k)|^2 \equiv \frac{|\Phi_{x}^{^{[R]}}(t)|^2}{|\Phi_{x}^{^{[0]}}(t)|^2} \to
\left| 2\gamma \chi^{}_\omega \right|^2\label{RR}
\end{equation}
and the transmittivity as
\begin{equation}
|{\cal T}(k)|^2 \equiv \frac{|\Phi_{x}^{^{[T]}}(t)|^2}{|\Phi_{x}^{^{[0]}}(t)|^2} \to
\left| 1- 2\gamma\chi^{}_\omega\right|^2. \label{TT}
\end{equation}
An example of the above late-time reflectivity and transmittivity is shown in Figure \ref{reflec}. One can see that the reflectivity is
peaked around $\omega =\Omega_0$ where the internal HO of the detector mirror and the incident wave of the field are resonant.
Since $|{\cal R}(k)|^2 = [\gamma/(\gamma+\tilde{\gamma})]^2$ and $|{\cal T}(k)|^2 = [\tilde{\gamma}/(\gamma+\tilde{\gamma})]^{2}$
at $\omega=\Omega_0$, the UD$'$ detector becomes a perfect mirror ($|{\cal R}(k)|^2 =1$) for the incident monochromatic wave
if the internal HO is decoupled from the mechanical environment ($\tilde{\gamma}=0$).
When the OE coupling $\tilde{\gamma}$ is not negligible, however, the energy of the field $\Phi$ around the resonant frequency
will be significantly absorbed by the environment ${\cal Z}$, so that $|{\cal R}(k)|^2 +|{\cal T}(k)|^2$ becomes lower than $1$ around
$\omega\approx \Omega_0$.
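These resonance values follow directly from (\ref{chiw1})-(\ref{TT}); a quick numerical check with example parameters:

```python
Omega0 = 1.0
gamma, gamma_t = 0.2, 0.05                        # example couplings
w = Omega0                                        # on resonance
chi = -1j * w / (Omega0**2 - w**2 - 2j * w * (gamma + gamma_t))
R2 = abs(2 * gamma * chi) ** 2                    # reflectivity |R|^2
T2 = abs(1 - 2 * gamma * chi) ** 2                # transmittivity |T|^2
assert abs(R2 - (gamma / (gamma + gamma_t)) ** 2) < 1e-12
assert abs(T2 - (gamma_t / (gamma + gamma_t)) ** 2) < 1e-12
assert R2 + T2 < 1   # energy absorbed by the environment near resonance
```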
\begin{figure}
\includegraphics[width=8cm]{fig1_R2_W01r1p05.pdf}
\caption{The late-time reflectivity $|{\cal R}|^2$ (black lines) and the sum of the reflectivity and transmittivity $|{\cal R}|^2+|{\cal T}|^2$ (red lines) against $\omega=|k|$, given in (\ref{RR}) and (\ref{TT}). Here $\Omega_0=1$, $\tilde{\gamma}=0.05$, and $\gamma = 0.2$ (dashed lines) and $10$ (solid or dotted lines). We plot the reflectivity of the minimal-coupling model [Eq.(17) in Ref.\cite{SLH15}] with the same parameters (green lines) for
comparison. One can see that the derivative-coupled and the minimally-coupled detectors act like a dielectric mirror and a metal mirror, respectively, in the regime $\omega\to 0$.}
\label{reflec}
\end{figure}
In Figure \ref{reflec} one can also see that when the frequency of the incident wave is far off resonance, the reflectivity is small and so
the mirror is almost transparent to that incident wave. With weak couplings ($\gamma, \tilde{\gamma} < \Omega_0$), large reflectivity
occurs only in a narrow frequency range of width $O(\gamma+\tilde{\gamma})$ around the resonance (dashed curves). This feature is
similar to that of the usual dielectric mirrors and atom mirrors \cite{SF05, ZD08, AZ10, HSHB11, WJ11}.
Cavities made of this kind of mirror can sustain only one or a few pairs ($k=\pm \omega$) of resonant modes inside \cite{ZD08}, since the
detector mirrors are nearly transparent to the other harmonics.
In constructing a cavity model for comparing with the conventional approach to the Casimir effect, one may need a detector mirror with a
very wide working range of frequency to form an effective Dirichlet boundary condition $\Phi_{x_0}(t)\approx 0$ at the mirror's position
$x=x_0$. This could be done by carefully arranging a collection of detectors or atoms to form the mirror \cite{CGS11, CJGK12}.
Alternatively, one can simply raise the OF coupling of a single detector all the way to the over-damping regime for the internal HO
($\gamma \gg \Omega_0, \tilde{\gamma}$) to achieve it.
As shown by the solid curve in Figure \ref{reflec}, the reflectivity of a detector mirror in this
over-damping regime will go to approximately $1$ in a wide frequency range at late times,
though it may take a very long relaxation time to reach this stage, as discussed in Sec. \ref{relax}.
Later we will see explicitly in the quantum theory that a cavity of the detector mirrors in the over-damping regime can indeed generate many
cavity modes and the field spectrum inside the cavity is quasi-discrete at late times.
Note that the definition of the reflectivity (\ref{RR}) makes sense only at late times: $|\Phi_{x}^{^{[R]}}(t)|^2/|\Phi_{x}^{^{[0]}}(t)|^2$
can be greater than 1 during the transient, when the initial zero-point fluctuations of the detector burst out right after the OF coupling is
switched on (see, e.g., the left plots of Figure \ref{FxkEvo}).
\section{Detector mirror: Quantum theory}
\label{QT1DM}
The Heisenberg equations of motion in the quantum theory of our model (\ref{Stot1}), which is a linear system, have the same form as the
Euler-Lagrange equations (\ref{eomPhi})-(\ref{eomQ}):
\begin{eqnarray}
\left(\partial_t^2 -\partial_x^2\right)\hat{\Phi}_x(t) &=& \partial_t \left(\lambda(t) \hat{Q}(t)\right)\delta(x), \label{HeomPhi} \\
\left(\partial_t^2 -\partial_y^2\right)\hat{\cal Z}_y(t) &=& \partial_t \left(\tilde{\lambda}(t) \hat{Q}(t)\right)\delta(y-\vartheta),
\label{HeomX}\\
\left(\partial_t^2+\Omega_0^2\right)\hat{Q}(t) &=& -\lambda(t) \partial_t \hat{\Phi}_0(t)
-\tilde{\lambda}(t)\partial_t \hat{\cal Z}_\vartheta(t). \label{HeomQ}
\end{eqnarray}
One can see that each operator will gradually evolve into a combination of the other operators whenever the couplings are on.
To deal with this, we write the operators of the dynamical variables at finite $t$ as linear combinations of the free operators
defined before the couplings are switched on, each multiplied by a time-dependent c-number coefficient called the ``mode function," namely,
\begin{eqnarray}
\hat{Q}^{}_A (t) &=& \sqrt{\frac{\hbar}{2\Omega^{}_{0}}}
\left[ q_{A}^{A}(t) \hat{a}^{}_A + q_{A}^{{A}*}(t) \hat{a}^\dagger_{A}\right] +
\int \frac{d{\rm k}}{2\pi}\sqrt{\frac{\hbar}{2{\rm w}}}
\left[ q_{A}^{\rm k}(t) \hat{a}^{}_{\rm k} + q_{A}^{{\rm k}*}(t) \hat{a}^\dagger_{\rm k}\right]\nonumber\\
& &+ \int \frac{d\tilde{\rm k}}{2\pi}\sqrt{\frac{\hbar}{2\tilde{\rm w}}} \left[ q_{A}^{\tilde{\rm k}}(t)
\hat{a}^{}_{\tilde{\rm k}} + q_{A}^{\tilde{\rm k}*}(t) \hat{a}^\dagger_{\tilde{\rm k}}\right]
\equiv \sum_{\kappa}\sqrt{\frac{\hbar}{2\Omega_{\kappa}}}
\left[ q_{A}^{\kappa}(t) \hat{a}^{}_{\kappa} + q_{A}^{\kappa *}(t) \hat{a}^\dagger_{\kappa}\right],\label{QAexpan}\\
\hat{\Phi}^{}_x(t) &=& \sum_{\kappa}\sqrt{\frac{\hbar}{2\Omega_{\kappa}}}
\left[ \varphi_{x}^{\kappa}(t) \hat{a}^{}_{\kappa} + \varphi_{x}^{\kappa *}(t) \hat{a}^\dagger_{\kappa}\right],\label{Phiexpan}\\
\hat{\cal Z}^{}_{y}(t) &=& \sum_{\kappa}\sqrt{\frac{\hbar}{2\Omega_{\kappa}}}
\left[ \zeta_{y}^{\kappa}(t) \hat{a}^{}_{\kappa} + \zeta_{y}^{\kappa *}(t) \hat{a}^\dagger_{\kappa}\right],\label{Xexpan}
\end{eqnarray}
where $\kappa$ runs over $A$, $\{{\rm k}\}$, and $\{\tilde{\rm k}\}$, which are the indices for the free HO labeled $A$, the free field mode
of wave-number ${\rm k}$, and the free mechanical environment mode of wave-number $\tilde{\rm k}$, respectively.
Here we have renamed $\hat{Q}$ to $\hat{Q}^{}_A$ to be consistent with the multi-detector cases later in this paper, and we denote
$\sum_{\rm k} \equiv \int d{\rm k}/(2\pi)$, $\sum_{\tilde{\rm k}}\equiv \int d\tilde{\rm k}/(2\pi)$, $\Omega^{}_{A}\equiv \Omega_0$,
$\Omega_{\rm k}\equiv {\rm w} \equiv |{\rm k}|$, and $\Omega_{\tilde{\rm k}}\equiv \tilde{\rm w}\equiv |\tilde{\rm k}|$.
The raising and lowering operators of the free internal HO have the commutation relation $[\hat{a}^{}_{A}, \hat{a}^\dagger_{A}]= 1$,
while the creation and annihilation operators for the free massless scalar field and the free environment satisfy $[\hat{a}^{}_{\rm k},
\hat{a}^\dagger_{\rm k'}]=2\pi\delta ({\rm k}-{\rm k'})$ and $[\hat{a}^{}_{\tilde{\rm k}}, \hat{a}^\dagger_{\tilde{\rm k}'}]=
2\pi\delta(\tilde{\rm k}-\tilde{\rm k}')$, respectively.
Applying these commutation relations of $\hat{a}$ and $\hat{a}^\dagger$ to the Heisenberg equations (\ref{HeomPhi})-(\ref{HeomQ}),
one obtains the equations of motion for the mode functions,
\begin{eqnarray}
\left(\partial_t^2-\partial_x^2\right)\varphi^\kappa_x(t)&=&\partial_t\left(\lambda(t) q_A^\kappa(t)\right)\delta(x), \label{eomvarphi} \\
\left(\partial_t^2-\partial_y^2\right)\zeta^\kappa_y(t) &=& \partial_t \left(\tilde{\lambda}(t)q_A^\kappa(t)\right)\delta(y-\vartheta),
\label{eomchi}\\
\left(\partial_t^2+\Omega_0^2\right)q_A^\kappa(t)&=&-\lambda(t)\partial_t \varphi_0^\kappa(t)-
\tilde{\lambda}(t)\partial_t\zeta^\kappa_\vartheta(t). \label{eomq}
\end{eqnarray}
Again they have the same form as the Euler-Lagrange equations,
while the initial conditions will be different from those in the classical theory.
The solutions for $\varphi$ and $\zeta$ are similar to (\ref{Phisol}) and (\ref{Xsol}):
\begin{eqnarray}
\varphi_x^\kappa(t) &=& \varphi_x^{\kappa^{[0]}}(t) +\frac{1}{2} \lambda(t-|x|)q_A^\kappa (t-|x|), \label{varphisol}\\
\zeta_y^\kappa(t) &=& \zeta_y^{\kappa^{[0]}}(t) +\frac{1}{2} \tilde{\lambda}(t-|y-\vartheta|)q_A^\kappa (t-|y-\vartheta|), \label{chisol}
\end{eqnarray}
where $\varphi_x^{{\rm k}^{[0]}}(t) = e^{-i{\rm w} t + i{\rm k}x}$, $\zeta_y^{\tilde{\rm k}^{[0]}}(t) = e^{-i \tilde{\rm w} t +
i \tilde{\rm k} y}$, and $\varphi_x^{A^{[0]}}(t) = \varphi_x^{\tilde{\rm k}^{[0]}}(t) = \zeta_y^{A^{[0]}}(t) = \zeta_y^{{\rm k}^{[0]}}(t) =0$.
Thus, similar to (\ref{eomQRR}), Eq.(\ref{eomq}) becomes
\begin{eqnarray}
&&\left[ \partial_t^2+ \left( \frac{\tilde{\lambda}^2(t)}{2} + \frac{\lambda^2(t)}{2} \right)\partial_t +
\left( \Omega_0^2 + \frac{\tilde{\lambda}(t)\partial_t \tilde{\lambda}(t)}{2} +
\frac{\lambda(t)\partial_t \lambda(t)}{2}\right)\right]q_A^\kappa(t)\nonumber\\ &=&
-\lambda(t) \partial_t \varphi_0^{\kappa^{[0]}}(t) -\tilde{\lambda}(t)\partial_t \zeta^{\kappa^{[0]}}_\vartheta(t), \label{eomq2}
\end{eqnarray}
after including the back-reactions of the field and the environment.
\subsection{Mode functions for internal HO}
\label{QTUD}
Since the environmental influence on the system is inevitable even during the preparation of an experiment,
while the details of the environment are uncontrollable in laboratories, we
assume the OE coupling $\tilde{\lambda}(t)$ was switched on in the far past $t = \tilde{t}_0 \ll -\tilde{\gamma}^{-1} < 0$ and
then settled to a constant $\tilde{\lambda}$, and the OF coupling $\lambda(t)$ is not switched on until $t=0$
\footnote{For theoretical interests, one may first turn on the OF coupling $\lambda(t)$ in the far past then switch on the OE coupling $\tilde{\lambda}(t)$ at $t=0$, though this is hard to be realized in laboratories. Since the model is linear, the switching functions are regular, and the cutoffs for the OE and the OF couplings are the same, the late-time results in this alternative scenario with the same initial state (\ref{IS1mir}) are expected to be the same as those we obtained in this paper. In transients, anyway, the evolutions of the system will be different in different scenarios.}.
Suppose the combined system started with a factorized state:
\begin{equation}
|\psi(t\le \tilde{t}_0)\rangle = |0\rangle^{}_Q \otimes |0\rangle^{}_{\cal Z} \otimes |0\rangle^{}_\Phi, \label{IS1mir}
\end{equation}
which is a product of the ground state of the free internal HO $|0\rangle^{}_Q$, the vacuum state of the free environment
$|0\rangle^{}_{\cal Z}$, and the vacuum state of the free field $|0\rangle^{}_\Phi$.
Then, right before $t=0$, the quantum state of the combined system has become
\begin{equation}
\rho(t\to 0-) = \rho^{}_{Q{\cal Z}} \otimes \rho^{}_{\Phi}, \label{rho0m}
\end{equation}
where $\rho^{}_{Q{\cal Z}}$ is the late-time state of the HO-environment subsystem and $\rho^{}_\Phi = |0\rangle^{}_\Phi\langle 0|$ is still
the vacuum state of the field. Between $t=\tilde{t}_0$ and $t=0$, $q_A^A(t)$ follows the equation of motion
\begin{equation}
\left[ \partial_t^2+ 2\tilde{\gamma}\partial_t + \Omega_0^2 \right]q_A^A(t) = 0 \label{EOMqAA0m}
\end{equation}
and behaves like a damped harmonic oscillator, while $q_A^{\tilde{\rm k}}(t)$ follows the equation
\begin{equation}
\left[ \partial_t^2+ 2\tilde{\gamma}\partial_t + \Omega_0^2\right]q_A^{\tilde{\rm k}}(t) =
-\tilde{\lambda}\partial_t \zeta^{\tilde{\rm k}^{[0]}}_\vartheta(t). \label{EOMqAp0m}
\end{equation}
Thus, as $\tilde{t}_0\to -\infty$, $\rho^{}_{Q{\cal Z}}$ in (\ref{rho0m}) is characterized by the two-point correlators with
the late-time solutions, $q_A^A(0)= \partial_t q_A^A(0)=0$ for (\ref{EOMqAA0m}), and
\begin{equation}
\left. q_A^{\tilde{\rm k}}(t)\right|_{t \to 0-}=\left.\frac{ \tilde{\lambda} i\tilde{\rm w} e^{-i\tilde{\rm w}t
+i\tilde{\rm k}\vartheta}}{\Omega_0^2 - \tilde{\rm w}^2 -2 i \tilde{\rm w}\tilde{\gamma}}\right|_{t \to 0-} \label{qaptm}
\end{equation}
for (\ref{EOMqAp0m}), which implies $\partial_t q_A^{\tilde{\rm k}}(0) = -i\tilde{\rm w} q_A^{\tilde{\rm k}}(0)$.
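As a quick consistency check, one can verify numerically that the quoted steady state solves (\ref{EOMqAp0m}). The parameter values below are illustrative, and we take $\tilde{\gamma}=\tilde{\lambda}^2/4$ for the OE damping rate, an identification inferred from the friction coefficient in (\ref{eomq2}):

```python
import cmath

# Illustrative parameters; gamma_t = lam_t**2 / 4 is the assumed OE damping rate
Omega0, lam_t = 0.1, 2.0
gamma_t = lam_t**2 / 4
wt, kt, theta = 0.7, 0.7, 0.3  # environment mode frequency/wavenumber, coupling point

# Quoted steady-state amplitude: q(t) = A * exp(-i*wt*t), cf. (qaptm)
A = lam_t * 1j * wt * cmath.exp(1j * kt * theta) / (Omega0**2 - wt**2 - 2j * wt * gamma_t)

t = 1.3
q = A * cmath.exp(-1j * wt * t)
# LHS of the damped EOM: (d/dt)^2 -> -wt^2 and d/dt -> -i*wt for this single mode
lhs = (-wt**2 + 2 * gamma_t * (-1j * wt) + Omega0**2) * q
# RHS: -lam_t * d/dt exp(-i*wt*t + i*kt*theta)
rhs = 1j * lam_t * wt * cmath.exp(-1j * wt * t + 1j * kt * theta)
residual = abs(lhs - rhs)
```

The residual vanishes to machine precision, confirming that (\ref{qaptm}) is the driven steady state of (\ref{EOMqAp0m}).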
Suppose the OF coupling is suddenly switched on at $t=0$ like $\lambda(t) = \lambda \theta(t)$.
Integrating (\ref{eomq2}) from $t=-\epsilon$ to $\epsilon$ for $\epsilon \to 0+$, one has $\dot{q}^\kappa_A(\epsilon)-
\dot{q}^\kappa_A(-\epsilon)+(\lambda^2/4)q^\kappa_A(0)= 0$ provided that $q^\kappa_A(t)$ and $\varphi^{^{[0]}\kappa}_0(t)$ are continuous.
Then introducing the conditions $q_A^A(0)= \partial_t q_A^A(0)=0$, $q_A^{\rm k}(0)= \partial_t q_A^{\rm k}(0)=0$, and those from
(\ref{qaptm}) for $q_A^{\tilde{\rm k}}$ and $\partial_t q_A^{\tilde{\rm k}}$ around $t=0$,
the solutions of (\ref{eomq2}) for $t>0$ are found to be
\begin{eqnarray}
q_A^{\rm k}(t) &=& -\lambda\int_{t^{}_0}^t d\tilde{\tau} K(t-\tilde{\tau}) \dot{\varphi}_0^{{\rm k}^{[0]}}(\tilde{\tau}) \nonumber\\
&=& \frac{\lambda i{\rm w}}{2\Gamma}\left[
\frac{e^{-i{\rm w} t}-e^{-(\gamma+\tilde{\gamma}-\Gamma)\eta-i{\rm w}t^{}_0}}{\gamma+\tilde{\gamma}-\Gamma-i{\rm w}}
-\frac{e^{-i{\rm w} t}-e^{-(\gamma+\tilde{\gamma}+\Gamma)\eta-i{\rm w}t^{}_0}}{\gamma+\tilde{\gamma}+\Gamma-i{\rm w}}
\right], \label{qAksol}\\
q_A^{\tilde{\rm k}}(t) &=& -\tilde{\lambda}\int_{t^{}_0}^t d\tilde{\tau} K(t-\tilde{\tau})\dot{\zeta}_\vartheta^{\tilde{\rm k}^{[0]}}
(\tilde{\tau})\nonumber\\& & +\,\frac{\tilde{\lambda} i\tilde{\rm w} e^{-(\gamma+\tilde{\gamma})\eta+i\tilde{\rm k}\vartheta}}
{2\Gamma(\Omega_0^2-\tilde{\rm w}^2-2i\tilde{\gamma}\tilde{\rm w})}\left[
\left(\tilde{\gamma}-i\tilde{\rm w}+\Gamma\right)e^{\Gamma \eta}-
\left(\tilde{\gamma}-i\tilde{\rm w}-\Gamma\right)e^{-\Gamma \eta}\right], \label{qApsol}
\end{eqnarray}
and $q_A^A(t)=0$. Here $\eta\equiv t-t^{}_0 \ge 0$ with $t_0=0$, and the propagator $K(s)$ has been given in (\ref{Kpropa}).
The integral in the first line of (\ref{qApsol}) can be worked out to get an expression similar to the second line of (\ref{qAksol}).
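The first line of (\ref{qAksol}) can also be checked against the second line by direct quadrature. The kernel below is our assumed form of the propagator in (\ref{Kpropa}), the retarded Green's function of the damped oscillator with $\Gamma=\sqrt{(\gamma+\tilde{\gamma})^2-\Omega_0^2}$, and the parameter values are illustrative:

```python
import cmath, math

g2, Gamma, lam, w = 2.0, 1.0, 1.5, 0.7  # gamma+gamma_tilde, Gamma (real: over-damping), OF coupling, mode frequency
t = 3.0  # with t0 = 0, so eta = t

def K(s):
    # assumed retarded kernel for q'' + 2*g2*q' + Omega0^2*q, with Omega0^2 = g2^2 - Gamma^2
    return math.exp(-g2 * s) * math.sinh(Gamma * s) / Gamma if s > 0 else 0.0

# midpoint-rule quadrature of q = -lam * int_0^t K(t - tau) * d/dtau e^{-i w tau} dtau
N = 20000
h = t / N
acc = 0j
for i in range(N):
    tau = (i + 0.5) * h
    acc += K(t - tau) * (-1j * w) * cmath.exp(-1j * w * tau)
q_num = -lam * h * acc

# closed form, second line of (qAksol), with eta = t and t0 = 0
gm, gp = g2 - Gamma, g2 + Gamma
q_closed = (lam * 1j * w / (2 * Gamma)) * (
    (cmath.exp(-1j * w * t) - cmath.exp(-gm * t)) / (gm - 1j * w)
    - (cmath.exp(-1j * w * t) - cmath.exp(-gp * t)) / (gp - 1j * w))
```

The two evaluations agree to the accuracy of the quadrature.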
In our numerical calculation, we replace $\theta(t)$ in $\lambda(t)$ by a $C^1$ function
\begin{equation}
\theta^{}_T(t) = \left\{ \begin{array}{lcc}
0 & & t \le 0 \\
\left[ 1 -\cos (\pi t/T)\right]/2 & \;{\rm for}\; & 0 < t < T\\
1 & & t \ge T
\end{array}\right. \label{thetaT}
\end{equation}
to regularize the delta function $\delta(t) = \partial_t \theta(t)$. Then we find that $q_A^\kappa(t)$ is always continuous, and our numerical results do approach (\ref{qAksol}) and (\ref{qApsol}) in the small-$T$ limit.
Note that our $\theta^{}_T(t)$ is neither smooth nor normalizable ($\int_{-\infty}^\infty dt\, \theta^{}_T(t)$ diverges), and thus our results are not
restricted by the quantum inequalities for smooth and normalizable switching functions \cite{Fo91, FR95, Fl97}.
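The jump condition can be reproduced with the regularized switching function: integrating the homogeneous part of (\ref{eomq2}) (drive and OE coupling set to zero for this check) across the short switching interval should give $\dot{q}(T)-\dot{q}(0-)\approx -(\lambda^2/4)\,q(0)$ as $T\to 0$. A minimal sketch:

```python
import math

Omega0, lam0, T = 0.1, 2.0, 1e-3

def theta_T(t):
    # the C^1 regularization of the step function, (thetaT)
    if t <= 0: return 0.0
    if t >= T: return 1.0
    return 0.5 * (1 - math.cos(math.pi * t / T))

def dtheta_T(t):
    return 0.5 * (math.pi / T) * math.sin(math.pi * t / T) if 0 < t < T else 0.0

def rhs(t, q, p):
    lam, dlam = lam0 * theta_T(t), lam0 * dtheta_T(t)
    # q'' + (lam^2/2) q' + (Omega0^2 + lam*lam'/2) q = 0  (drive and OE coupling omitted)
    return p, -(lam**2 / 2) * p - (Omega0**2 + lam * dlam / 2) * q

# RK4 across the switching interval, with q continuous: q(0) = 1, q'(0-) = 0
q, p, t = 1.0, 0.0, 0.0
n = 20000
dt = T / n
for _ in range(n):
    k1 = rhs(t, q, p)
    k2 = rhs(t + dt/2, q + dt/2 * k1[0], p + dt/2 * k1[1])
    k3 = rhs(t + dt/2, q + dt/2 * k2[0], p + dt/2 * k2[1])
    k4 = rhs(t + dt, q + dt * k3[0], p + dt * k3[1])
    q += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    p += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += dt
jump = p  # q'(T) - q'(0-); expected -(lam0**2/4)*q(0) = -1 up to O(T) corrections
```

The computed jump approaches $-\lambda^2 q(0)/4$ as $T$ shrinks, as stated below (\ref{eomq2}).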
\subsection{Detector energy and HO-field entanglement}
\label{OFEnt}
With the operator expansion (\ref{QAexpan}) and the initial state (\ref{IS1mir}), the symmetric two-point correlators of the internal
oscillator of the detector read
\begin{eqnarray}
\langle \hat{Q}_A^2(t) \rangle &=&\lim_{(t',t'_0)\to (t,t_0)}{\rm Re}\left[ \frac{\hbar}{2\Omega_0}q_A^A(t)q_A^{A*}(t') \right.\nonumber\\
&&\left. + \int \frac{d{\rm k}}{2\pi} \frac{\hbar}{2{\rm w}} q_A^{\rm k}(t)q_A^{\rm k*}(t')+
\int \frac{d\tilde{\rm k}}{2\pi} \frac{\hbar}{2\tilde{\rm w}} q_A^{\tilde{\rm k}}(t) q_A^{\tilde{\rm k}*}(t') \right],\label{QA2formal}\\
\langle \hat{P}_A^2(t) \rangle &=&\lim_{(t',t'_0)\to (t,t_0)}{\rm Re}\left[\frac{\hbar}{2\Omega_0} \dot{q}_A^A(t)\dot{q}_A^{A*}(t')\right.
\nonumber\\ && \left.+ \int \frac{d{\rm k}}{2\pi} \frac{\hbar}{2{\rm w}}\dot{q}_A^{\rm k}(t)\dot{q}_A^{\rm k*}(t')+
\int \frac{d\tilde{\rm k}}{2\pi} \frac{\hbar}{2\tilde{\rm w}} \dot{q}_A^{\tilde{\rm k}}(t) \dot{q}_A^{\tilde{\rm k}*}(t') \right],
\label{PA2formal}
\end{eqnarray}
and $\langle \hat{Q}_A (t), \hat{P}_A(t)\rangle \equiv \langle (\hat{Q}_A(t)\hat{P}_A(t)+\hat{P}_A(t)\hat{Q}_A(t))\rangle/2 =
\partial_t \langle \hat{Q}_A^2(t)\rangle/2$.
For $t>t_0$, $q_A^A = 0$ and so only the integrals in the above expressions contribute. The closed form of these integrals can be obtained
straightforwardly after the mode functions are inserted.
For example, by inserting (\ref{qAksol}) we get
\begin{eqnarray}
&& \lim_{(t',t'_0)\to (t,t_0)} \int \frac{d{\rm k}}{2\pi} \frac{\hbar}{2\rm w} q_A^{\rm k}(t)q_A^{\rm k*}(t')
= \nonumber\\
&& \frac{\hbar\gamma}{2\pi\Gamma^2} \left\{
\frac{\Gamma}{\gamma^{}_2}\left[\left(1+e^{-2\gamma^{}_2 \eta}\right) \ln \frac{\gamma^{}_2+\Gamma}{\gamma^{}_2-\Gamma}
+{\rm Ei}[-(\gamma^{}_2-\Gamma)\eta] - {\rm Ei}[-(\gamma^{}_2+\Gamma)\eta]\right] \right. \nonumber\\
&&\hspace{.5cm} + e^{-2 \gamma^{}_2 \eta}\left[
\left(e^{2\Gamma \eta} -1+\frac{\Gamma}{\gamma^{}_2} \right){\rm Ei} [(\gamma^{}_2-\Gamma)\eta]
+\left(e^{-2\Gamma \eta} -1-\frac{\Gamma}{\gamma^{}_2} \right){\rm Ei} [(\gamma^{}_2+\Gamma)\eta]\right.\nonumber\\
&& \hspace{1.5cm}\left.\left. + 4 \Lambda_0 \sinh^2\Gamma \eta -e^{2\Gamma \eta}\ln\frac{\gamma^{}_2-\Gamma}{\Omega_0}
-e^{-2\Gamma \eta}\ln\frac{\gamma^{}_2+\Gamma}{\Omega_0} \right] \right\} \label{IntqAk2}
\end{eqnarray}
for real $\Gamma$ in the over-damping cases. Here Ei$(s)$ is the exponential integral function,
$\gamma^{}_2\equiv\gamma +\tilde{\gamma}$, and $\Lambda_0\equiv -\gamma^{}_e
-\ln\Omega_0|t'_0-t_0|$, where $\gamma^{}_e$ is Euler's constant. At late times ($\eta=t-t_0 \gg 1/(\gamma+\tilde{\gamma}-\Gamma)$),
(\ref{IntqAk2}) becomes
\begin{equation}
\lim_{(t',t'_0)\to (t,t_0)} \int \frac{d\rm k}{2\pi} \frac{\hbar}{2\rm w} q_A^{\rm k}(t)q_A^{\rm k*}(t')
\to \frac{\hbar\gamma}{2\pi\Gamma\gamma^{}_2} \ln \frac{\gamma^{}_2+\Gamma}{\gamma^{}_2-\Gamma} . \label{IntqAk2LT}
\end{equation}
If the environment is excluded from consideration,
(\ref{IntqAk2}) is identical to the v-part of the detector correlator
$\langle \hat{Q}^2(t)\rangle_{\rm v}$ defined in Refs. \cite{LH07, LCH16}, where its closed form in the under-damping regime has been
given. Indeed, (\ref{IntqAk2}) with $\tilde{\gamma}=0$ can be obtained from Eq.(A9) in Ref. \cite{LH07} with Re $f$ there written
as Re $[f+f^*]/2$, then replacing the renormalized natural frequency $\Omega_r$ there for the minimal-coupling Unruh-DeWitt HO detector theory in (3+1)D Minkowski space by $\Omega_0$ here for the derivative-coupling detector model in (1+1)D
(also see the Appendix of Ref. \cite{LCH16}),
and finally replacing every $i\Omega$ there by $\Gamma$ here while noticing that ${\rm Re}\{\Gamma(0,s)\} = -{\rm Re}\{{\rm Ei}(-s)\}$ with the incomplete gamma function $\Gamma(0,s)$.
Note that in this paper we have changed the definitions of $\Lambda_0$ and $\Lambda_1$, corresponding to the UV cutoffs, from
$-\gamma^{}_e-\ln\Omega |\Delta\tau|$ with $\Delta\tau\to 0$ in our earlier works to $-\gamma^{}_e-\ln\Omega_0|\Delta\tau|$
here since the latter is more convenient in the over-damping regime (one cannot simply replace $\Omega$ in the former by $-i\Gamma$,
which would lead to complex values of $\Lambda_0$ and $\Lambda_1$). From now on we will use these new definitions for $\Lambda_0$
and $\Lambda_1$ even in the under- and critical-damping cases. Associated with this change, the $\ln [(\gamma/\Omega)\pm i] =
\ln[(\gamma\pm i\Omega)/\Omega]$ terms in (A9)-(A12) of Ref. \cite{LH07} should be replaced by $\ln[(\gamma\pm i\Omega)/\Omega_0]$ here.
The closed form of the integral $\int \frac{d\rm k}{2\pi}\frac{\hbar}{2\tilde{\rm w}} q_A^{\tilde{\rm k}}(t)q_A^{\tilde{\rm k}*}(t')$ is
much lengthier than (\ref{IntqAk2}) due to the second line of (\ref{qApsol}). Fortunately, all these extra terms decay out at late times,
and the late-time result of the integral with $q_A^{\tilde{\rm k}}$ in the over-damping regime is simply (\ref{IntqAk2LT}) with the overall
factor $\hbar\gamma$ replaced by $\hbar\tilde{\gamma}$. Summing these two integrals together we find
\begin{equation}
\langle \hat{Q}_A^2 \rangle \to {\rm Re}\,\frac{\hbar}{2\pi\Gamma} \ln \frac{\gamma^{}_2+\Gamma}{\gamma^{}_2-\Gamma} \label{Q2LT}
\end{equation}
at late times, which also applies to the under- and critical-damping cases for $\Gamma = i\Omega$ and $\Gamma\to 0$, respectively.
In the latter case, $\langle \hat{Q}_A^2 \rangle \to \hbar/(\pi\gamma_2)$ at late times.
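The continuity of (\ref{Q2LT}) across the three damping regimes can be checked numerically, taking $\Gamma=\sqrt{\gamma_2^2-\Omega_0^2}$ (our reading of the definition below (\ref{Qsol})) and $\hbar=1$:

```python
import cmath, math

def QA2_late(gamma2, Omega0, hbar=1.0):
    # late-time <Q_A^2> = Re[(hbar/(2*pi*Gamma)) * ln((gamma2+Gamma)/(gamma2-Gamma))], (Q2LT)
    Gamma = cmath.sqrt(complex(gamma2**2 - Omega0**2))  # = i*Omega in the under-damping case
    return (hbar / (2 * math.pi) * cmath.log((gamma2 + Gamma) / (gamma2 - Gamma)) / Gamma).real

g2 = 11.0
over = QA2_late(g2, 0.1)              # over-damping: Gamma real
under = QA2_late(g2, 30.0)            # under-damping: expect arctan(Omega/g2)/(pi*Omega)
crit = QA2_late(g2, g2 * (1 - 1e-9))  # near-critical: Gamma -> 0, expect 1/(pi*g2)
```

For $\Gamma=i\Omega$ the expression reduces to $\hbar\arctan(\Omega/\gamma_2)/(\pi\Omega)$, which smoothly approaches $\hbar/(\pi\gamma_2)$ as $\Omega\to 0$.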
The late-time result for the correlators can also be obtained by inserting the late-time mode functions
\begin{equation}
q_A^{\rm k}(t) \to - \lambda \chi^{}_{\rm w} e^{-i{\rm w}t}, \hspace{1cm}
q_A^{\tilde{\rm k}}(t) \to - \tilde{\lambda} \chi^{}_{\tilde{\rm w}} e^{-i\tilde{\rm w}t+i \tilde{\rm k} \vartheta}, \label{qAkpLT}
\end{equation}
with the susceptibility function $\chi^{}_\omega$ given in (\ref{chiw1}), into the integrals in (\ref{QA2formal}) and (\ref{PA2formal}).
Then we get $\langle \hat{Q}_A^2 \rangle$ in (\ref{Q2LT}), $\langle\hat{Q}_A, \hat{P}_A\rangle =\partial_t \langle \hat{Q}^2_A\rangle/2
\to 0$, and
\begin{equation}
\langle \hat{P}_A^2 \rangle \to \frac{\hbar}{2\pi} {\rm Re} \left[ 4 \gamma^{}_2 \Lambda_1 -
\left(2\Gamma+\frac{\Omega_0^2}{\Gamma}\right)\ln \frac{\gamma^{}_2+\Gamma}{\gamma^{}_2-\Gamma}\right] \label{P2LT}
\end{equation}
at late times.
Note that $\Lambda_1$ has to be large enough to make $\langle \hat{P}_A^2 \rangle$ positive and the uncertainty relation ${\cal U} \ge \hbar/2$ valid, where ${\cal U}\equiv [\langle \hat{Q}_A^2 \rangle\langle \hat{P}_A^2 \rangle -\langle \hat{Q}_A,\hat{P}_A\rangle^2]^{1/2}$ is the uncertainty function \cite{LH07}. This is not pathological, however.
Recall that $\Lambda_1 \equiv -\ln\Omega_0 |\Delta\tau| - \gamma_e$ is defined in the coincidence limit $\Delta\tau \to 0$. For a lower UV cutoff $\Lambda_1$, the time resolution for the internal oscillator of the detector is poorer. If $\Lambda_1$ or $\omega_M\equiv \Omega_0 e^{\Lambda_1}$ is too small, the correlators of the oscillators will actually represent the nonlocal correlations of dynamical variables at different proper times (e.g. $\langle \hat{Q}_A(\tau),\hat{Q}_A(\tau + \Delta \tau)\rangle$) with a large time difference $\Delta\tau \sim 2\pi/\omega_M$. In this case the quantum anti-correlation of vacuum fluctuations will enter and reduce the values of the correlators and the uncertainty function. This leads to a violation of the uncertainty relation, but there the uncertainty function ${\cal U}$ has lost its equal-time sense.
From the detector sector of (\ref{Eden}), the expectation value of the energy of the internal oscillator of the UD$'$ detector is
\begin{equation}
E^{}_A =\frac{1}{2} \left(\langle \hat{P}_A^2 \rangle + \Omega_0^2 \langle \hat{Q}_A^2 \rangle \right)
\to \frac{\hbar}{2\pi} \left[ 4 \gamma^{}_2 \Lambda_1 - 2\Gamma\ln \frac{\gamma^{}_2+\Gamma}{\gamma^{}_2-\Gamma}\right]
\end{equation}
at late times from (\ref{Q2LT}) and (\ref{P2LT}).
It also depends on $\Lambda_1$ and will be positive if $\Lambda_1$ is sufficiently large.
The HO-field entanglement will be strong if the direct coupling $\gamma$ between them is strong. In this case the linear entropy
$S_L=1-\hbar/(2{\cal U})$, with the uncertainty function ${\cal U}$ defined above, would be
very close to 1, since $\langle\hat{P}_A^2\rangle$ can be very large in the strong OF coupling limit with a sufficiently large $\Lambda_1$.
\subsection{Reduction of late-time field correlations}
A perfect mirror placed at $x=0$ forces a Dirichlet boundary condition $\Phi^{}_{x=0}(t)=0$ at its position. This would cut the equal-time correlations of the field amplitudes on different sides of the mirror, namely, $\langle \hat{\Phi}^{}_{x}(t)\hat{\Phi}^{}_{x'}(t)\rangle =0$
for $x x' <0$. Our detector mirror is not perfect, but it can still reduce the correlations of the field on different sides.
From (\ref{Phiexpan}) and (\ref{IS1mir}), the two-point correlators of the field are given by
\begin{equation}
\langle \hat{\Phi}_x(t)\hat{\Phi}_{x'}(t')\rangle =
\frac{\hbar}{2\Omega_0} \varphi_x^A(t)\varphi_{x'}^{A*}(t') + \int \frac{d\rm k}{2\pi} \frac{\hbar}{2\rm w}
\varphi_x^{\rm k}(t)\varphi_{x'}^{{\rm k}*}(t') + \int \frac{d\tilde{\rm k}}{2\pi} \frac{\hbar}{2\tilde{\rm w}}
\varphi_x^{\tilde{\rm k}}(t)\varphi_{x'}^{\tilde{\rm k}*}(t') .
\label{FFformal}
\end{equation}
At late times, in the presence of the detector mirror at $x=0$, one has $\varphi_x^A = 0$ and
\begin{eqnarray}
\varphi_x^{\rm k} (t) &\to& e^{-i{\rm w} t +i {\rm k} x}-
2\gamma e^{-i {\rm w} (t-|x|)} \chi^{}_{\rm w}, \\
\varphi_x^{\tilde{\rm k}}(t) &\to& -2\sqrt{\gamma\tilde{\gamma}}e^{i\tilde{\rm k}\vartheta -i\tilde{\rm w}(t-|x|)}
\chi^{}_{\tilde{\rm w}},
\end{eqnarray}
from (\ref{varphisol}), (\ref{chisol}), and (\ref{qAkpLT}).
Inserting these mode functions into (\ref{FFformal}), one obtains a sum of two integrals of dummy variables ${\rm k}$ and $\tilde{\rm k}$.
One can rename both ${\rm k}$ and $\tilde{\rm k}$ to $k$ to get
\begin{eqnarray}
& &\langle\hat{\Phi}_x(t) \hat{\Phi}_{x'}(t')\rangle \to \int_{-\infty}^\infty \frac{dk}{2\pi}\frac{\hbar}{2\omega}
\left\{e^{ik(x-x')-i\omega(t-t')} - \gamma e^{-i\omega(t-t')} \times \right.\nonumber\\
& &\left. \left[
\left(2 e^{i(\omega|x|-kx')}-e^{i\omega(|x|-|x'|)}\right)\chi^{}_{\omega} +
\left(2 e^{-i(\omega|x'|-kx)}-e^{i\omega(|x|-|x'|)}\right)\chi^*_{\omega} \right] \right\} \label{FFkint}
\end{eqnarray}
with $\omega = |k| >0$, after applying the identity, which follows straightforwardly from (\ref{chiw1}),
\begin{equation}
\chi^{}_\omega + \chi^*_\omega = 4(\tilde{\gamma}+\gamma)|\chi^{}_{\omega}|^2 , \label{FDR}
\end{equation}
which has the form of the fluctuation-dissipation relation.
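The identity (\ref{FDR}) can be verified directly. Below we assume the damped-oscillator form $\chi^{}_\omega = -i\omega/(\Omega_0^2-\omega^2-2i(\gamma+\tilde{\gamma})\omega)$, which is our reading of (\ref{chiw1}) and is consistent with both (\ref{qaptm}) and (\ref{FDR}):

```python
Omega0, gamma, gamma_t = 0.1, 10.0, 1.0
g2 = gamma + gamma_t

def chi(w):
    # assumed susceptibility of the damped internal oscillator
    return -1j * w / (Omega0**2 - w**2 - 2j * g2 * w)

max_err = 0.0
for w in (0.01, 0.1, 1.0, 10.0, 100.0):
    c = chi(w)
    lhs = c + c.conjugate()        # chi + chi*
    rhs = 4 * g2 * abs(c)**2       # 4*(gamma + gamma_tilde)*|chi|^2
    max_err = max(max_err, abs(lhs - rhs))
```

The identity holds exactly for this form of $\chi_\omega$, for every frequency tested.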
The first term in the integrand of (\ref{FFkint}) gives the correlator of
the free field; thus, the late-time renormalized two-point correlator of the field reads
\begin{eqnarray}
& & \langle\hat{\Phi}_x(t) \hat{\Phi}_{x'}(t')\rangle_{\rm ren}\equiv \langle\hat{\Phi}_x(t) \hat{\Phi}_{x'}(t')\rangle -
\langle\hat{\Phi}^{[0]}_x(t) \hat{\Phi}^{[0]}_{x'}(t')\rangle \nonumber\\
&\to& -\frac{\hbar \gamma}{2\pi} \int_0^\infty \frac{d\omega}{\omega} e^{-i\omega(t-t')} \left[ e^{i\omega(|x|+|x'|)}\chi^{}_{\omega} +
e^{-i\omega(|x|+|x'|)}\chi^*_{\omega} \right] \label{FFrenInt}
\end{eqnarray}
after we split $\int_{-\infty}^\infty dk(\cdots)$ into $\int_{-\infty}^0 dk (\cdots) + \int_0^\infty dk(\cdots)$ and then
express both terms in $\int_0^\infty d\omega (\cdots)$. The above integral can be done analytically, which yields
\begin{eqnarray}
& &\langle\hat{\Phi}_x(t) \hat{\Phi}_{x'}(t')\rangle_{\rm ren} \nonumber\\
&\to & \frac{\hbar\gamma}{4\pi\Gamma} \left\{
e^{\gamma^{}_-\Delta^{}_-}{\rm Ei}\left(-\gamma^{}_-\Delta^{}_-\right) -
e^{\gamma^{}_+\Delta^{}_-}{\rm Ei}\left(-\gamma^{}_+\Delta^{}_-\right) \right. \nonumber\\
& & \hspace{.7cm} +\, e^{\gamma^{}_-\Delta^{}_+}{\rm Ei}\left(-\gamma^{}_-\Delta^{}_+\right) -
e^{\gamma^{}_+\Delta^{}_+}{\rm Ei}\left(-\gamma^{}_+\Delta^{}_+\right) \nonumber\\
& & \hspace{.7cm}+\left. i\pi \left[ \theta(-\Delta^{}_-)\left(e^{\gamma^{}_-\Delta^{}_-} - e^{\gamma^{}_+\Delta^{}_-}\right) -
\theta(-\Delta^{}_+)\left(e^{\gamma^{}_-\Delta^{}_+} -e^{\gamma^{}_+\Delta^{}_+}\right) \right]\right\} \label{FFrenEi}
\end{eqnarray}
with $\Delta^{}_\pm \equiv (|x|+|x'|)\pm (t-t')$, $\Gamma$ defined below (\ref{Qsol}), and $\gamma^{}_\pm\equiv \gamma +
\tilde{\gamma}\pm \Gamma >0$.
In the over-damping regime with strong OF coupling, $\gamma \gg \Omega_0$, $\tilde{\gamma}$, one has $\Gamma\approx\gamma$, and
$\gamma^{}_+ \approx 2\gamma \gg 1\gg \gamma^{}_- \approx \Omega_0^2/(2\gamma)$. For $0 < \frac{\Omega_0^2}{2\gamma}
|\Delta^{}_\pm| \ll 1 \ll 2\gamma |\Delta^{}_\pm|$, the above late-time renormalized field correlator approximately reads
\begin{eqnarray}
\langle\hat{\Phi}_x(t) \hat{\Phi}_{x'}(t')\rangle_{\rm ren}
&\to& \frac{\hbar}{4\pi} \left( \ln \left|(|x|+|x'|)^2-(t-t')^2\right| + 2\ln \frac{\Omega_0^2}{2\gamma}+2\gamma^{}_e \right)
\nonumber\\ & & + \frac{i\hbar}{4} \left[ \theta(-\Delta^{}_-) -\theta(-\Delta^{}_+)
\right] + O\left( s\ln s, 1/s'\right)\label{FFrenEiStrong}
\end{eqnarray}
with $s \sim \Omega_0^2|\Delta_\pm|/\gamma$ and $s' \sim \gamma|\Delta_\pm|$ (given $e^s {\rm Ei}(-s) \to \ln s + \gamma^{}_e + O(s \ln s)$
as $s\to 0$ and $e^{s'}{\rm Ei}(-s') \to -1/s' + O(s'^{-2})$ as $s'\to \infty$).
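The two Ei asymptotics used above are easy to confirm numerically, with pure-Python implementations (a convergent series for small argument, and a quadrature for the large-argument tail):

```python
import math

GAMMA_E = 0.5772156649015329  # Euler's constant

def Ei_neg(s):
    # Ei(-s) for s > 0 via the convergent series gamma_e + ln s + sum_n (-s)^n/(n*n!)
    total, term = 0.0, 1.0
    for n in range(1, 200):
        term *= -s / n
        total += term / n
    return GAMMA_E + math.log(s) + total

def expEi_neg(s):
    # e^s * Ei(-s) = -int_0^inf e^{-u}/(s+u) du, by Simpson's rule on [0, 40]
    N, b = 4000, 40.0
    h = b / N
    f = lambda u: math.exp(-u) / (s + u)
    acc = f(0.0) + f(b)
    for i in range(1, N):
        acc += (4 if i % 2 else 2) * f(i * h)
    return -acc * h / 3

s = 1e-4
small = math.exp(s) * Ei_neg(s)  # ~ ln s + gamma_e + O(s ln s)
sp = 50.0
large = expEi_neg(sp)            # ~ -1/sp + O(1/sp^2)
```

Both limits match the quoted expansions within the stated error orders.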
On the other hand, the two-point correlator of the free massless scalar field in (1+1)D Minkowski space is given by
\begin{eqnarray}
\langle \hat{\Phi}^{[0]}_x(t) \hat{\Phi}^{[0]}_{x'}(t')\rangle &=& - \frac{\hbar}{4\pi} \ln |\sigma| + \hbar C \nonumber\\
& & - \frac{i\hbar}{4} \left[ \theta(t-t'-(x-x')) + \theta(t-t'+(x-x'))\right], \label{F0F0}
\end{eqnarray}
up to a complex constant $C$. Here $\sigma = -(x_\mu - x'_\mu)(x^\mu - x'^\mu)/2$ is Synge's world function.
Comparing (\ref{F0F0}) with (\ref{FFrenEiStrong}), one can see that the constant $C$ should be chosen as $(2\ln [\Omega_0^2/(2\gamma)] +
2 \gamma^{}_e )/(4\pi) + (i/4)$ to cancel similar constants in (\ref{FFrenEiStrong}) when $x x'<0$ in the strong OF coupling limit.
With this choice, adding (\ref{FFrenEi}) to (\ref{F0F0}), one finds that the real part of the full equal-time correlation of the
field amplitudes on different sides of the mirror, Re $\langle \hat{\Phi}_{x}(t)\hat{\Phi}_{x'}(t) \rangle$ with $x x'<0$, will indeed
be suppressed for small $|x|$ and $|x'|$ at late times (Figure \ref{FFrenRI} (upper-right)). However, when $|x|$ or $|x'|$ grows larger,
the correlation is not significantly corrected, since $\langle\hat{\Phi}_x(t)\hat{\Phi}_{x'}(t)\rangle_{\rm ren}$ goes to zero as $|x|+|x'|
\to \infty$ while $\langle \hat{\Phi}^{[0]}_x(t) \hat{\Phi}^{[0]}_{x'}(t')\rangle$ does not (Figure \ref{FFrenRI} (upper-left) and
(upper-middle)).
Actually the real part of the equal-time correlator of the field amplitudes on the same side of the mirror ($x x' >0$) is also reduced
since the real part of (\ref{FFrenEi}) for $t=t'$ is a negative function of $|x|+|x'|$ only. This may be interpreted as a consequence of the
image ``point charge" in the Green's function of the field in the presence of the detector mirror.
As for the imaginary part of the field correlator, the renormalized correlator simply adds the effect of the mirror to the retarded
and advanced Green's functions of the field in free space. In Figure \ref{FFrenRI} (lower right) one can see the reflected and transmitted
fields generated by the detector mirror at $x=0$. In the presence of the detector mirror, the translational symmetry of the system is broken.
In any case, comparing (\ref{FFrenEiStrong}) and (\ref{F0F0}), one can see that for $x$ and $x'$ fixed at finite values with $x x' <0$, which
implies $(|x|+|x'|)^2= (x-x')^2$, one has the full correlator $\langle \hat{\Phi}^{}_{x}(t) \hat{\Phi}^{}_{x'}(t)\rangle\to 0$ as
$\gamma\to\infty$ (such that $s\to 0$ and $s'\to \infty$ in (\ref{FFrenEiStrong})). This is exactly the property we mentioned: A perfect
mirror will suppress the correlations of the field amplitudes on different sides of the mirror.
\begin{figure}
\includegraphics[width=5.5cm]{fig2a_FFrenR.pdf}
\includegraphics[width=5.5cm]{fig2b_FF0R.pdf}
\includegraphics[width=5.5cm]{fig2c_FFtotalR.pdf}\\
\includegraphics[width=5.5cm]{fig2d_FFrenI.png}
\includegraphics[width=5.5cm]{fig2e_FF0I.png}
\includegraphics[width=5.5cm]{fig2f_FFtotalIF.png}
\caption{The real parts (upper row) and imaginary parts (lower row) of the late-time renormalized correlator of the field in the presence of
the detector mirror (left plots, Eq. (\ref{FFrenEi})), the correlator of the free field (middle, (\ref{F0F0})), and the full correlator
(right, the sum of (\ref{FFrenEi}) and (\ref{F0F0})). Since (\ref{FFrenEi}) and (\ref{F0F0}) are stationary, we have shifted $t$ and $t'$
from large ($t,t'\gg t_{\rm rlx}$ at late times) to small values for presentation. We choose $t=t'=0$ (equal time) in the upper row, and
$(t',x')=(0,2000)$ in the lower row, where the gray scale from black to white represents the values from $-1/4$ to $1/4$, and the values
outside the past and future light cones of $(t',x')$ are exactly zero. Here $\gamma=10$, $\tilde{\gamma}=1$, $\Omega_0=0.1$, and $c=\hbar=1$.}
\label{FFrenRI}
\end{figure}
\subsection{Field spectrum}
From (\ref{FFformal}) we define the field spectrum $F_x^k$ by looking at the full correlators of the field in the coincidence limit,
\begin{equation}
\langle \hat{\Phi}_{x}(t)^2\rangle = \lim_{t'\to t, x'\to x}\langle \hat{\Phi}_{x}(t),\hat{\Phi}_{x'}(t')\rangle
\equiv \int_{-\infty}^{\infty} \frac{dk}{2\pi} \frac{\hbar}{2\omega} F_x^k, \label{Flatecoin}
\end{equation}
with $\omega\equiv |k|$ such that
\begin{equation}
F_x^k(t) = \left|\varphi_x^{\rm k}(t)\right|^2_{{\rm k}=k} + \left|\varphi_x^{\tilde{\rm k}}(t)\right|_{\tilde{\rm k}=k}^2 +
\int_{-\infty}^{\infty} d\tilde{x} e^{-ik(\tilde{x}-x)} \frac{\omega}{\Omega_0} \varphi_{\tilde{x}}^A(t) \varphi_{x}^{A*}(t)\label{FxKdef}
\end{equation}
in the presence of our single detector mirror. Note that $k$ is simply a dummy variable in the integral of (\ref{Flatecoin}), and $F_x^k$
receives contributions not only from the vacuum fluctuations of the field $\Phi$.
At late times, the last term in (\ref{FxKdef}) decays out and the field spectrum becomes
\begin{equation}
F_x^k \to 1 -\gamma \left[ \left(2 e^{i(\omega|x|-kx)}-1\right)\chi^{}_{\omega} +
\left(2 e^{-i(\omega|x|-kx)}-1\right)\chi^*_{\omega} \right], \label{FSpec1}
\end{equation}
which is independent of $t$, from (\ref{FFkint}). An example in the over-damping regime is shown in Figure \ref{Phi2xk1}. For $kx<0$, the
factor $e^{i(\omega|x|-kx)} =e^{-2ikx}$ produces the ripple structure. For $kx>0$, $F_x^k=1-\gamma(\chi_\omega +\chi^*_\omega)$ is
independent of $x$ (Figure \ref{Phi2xk1} (right), in particular).
In this case, for $\tilde{\gamma}\ll\gamma$, one has $F_x^k\approx |{\cal T}(k)|^2$, which is the transmittivity defined in (\ref{TT}) with
${\cal Z}^{^{[0]}}_y(t)=0$. Thus one may interpret the small values of $F_x^k$ for $kx>0$ in our example as a consequence of the low-$|k|$
modes being almost totally reflected in the over-damping regime, while the ripple structure of $F_x^k$ for $kx<0$ is due to the interference
of the incident and the reflected waves. The minimum values in the valleys of the ripple in the low-$|k|$ regime can be very close to zero,
which deviates significantly from the value $1$ for the field vacuum in free space. In contrast, the field spectrum at fixed $x$ goes to $1$
as $|k|\to\infty$, so the detector mirror is almost transparent to the short-wavelength fluctuations (Figure \ref{Phi2xk1} (middle)).
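The qualitative features described above can be read off directly from (\ref{FSpec1}). The sketch below again assumes the damped-oscillator form of $\chi_\omega$ (our reading of (\ref{chiw1})), with illustrative parameters:

```python
import cmath, math

Omega0, gamma, gamma_t = 0.1, 10.0, 1.0
g2 = gamma + gamma_t

def chi(w):
    # assumed susceptibility (our reading of (chiw1))
    return -1j * w / (Omega0**2 - w**2 - 2j * g2 * w)

def F(x, k):
    # late-time field spectrum (FSpec1)
    w = abs(k)
    ph = cmath.exp(1j * (w * abs(x) - k * x))
    c = chi(w)
    return (1 - gamma * ((2 * ph - 1) * c + (2 * ph.conjugate() - 1) * c.conjugate())).real

same_side_a = F(3.0, 0.5)                # kx > 0: independent of x
same_side_b = F(7.0, 0.5)
ripple_a = F(3.0, -0.5)                  # kx < 0: ripple with period pi/w in x
ripple_b = F(3.0 + math.pi / 0.5, -0.5)
transparent = F(5.0, 1e6)                # |k| -> infinity: F -> 1
```

One sees the $x$-independence on the transmission side, the interference ripple of period $\pi/\omega$ on the incident side, and the transparency of the mirror at large $|k|$.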
\begin{figure}
\includegraphics[width=5.8cm]{fig3a_Fkx1_W0p1r10rt1L40.pdf}
\includegraphics[width=5.5cm]{fig3b_Fkx1_W0p1r10rt1L40xmL.pdf}
\includegraphics[width=5.5cm]{fig3c_Fkx1_W0p1r10rt1L40k800PibyL.pdf}
\caption{The late-time field spectrum $F_x^k$ of a single mirror in Eq. (\ref{FSpec1}) against $k$ and $x$, where $\gamma=10$,
$\tilde{\gamma}=1$, $\Omega_0 =0.1$ (over-damping), and $c=\hbar=1$. $L=40$ is simply a scaling parameter here for convenience
of comparison with Figure \ref{Phi2xkOvr}.}
\label{Phi2xk1}
\end{figure}
\subsection{Renormalized energy density of the field}
The expectation value of the energy density of the field is given by
\begin{eqnarray}
\langle \hat{T}_{00}(t,x)\rangle &=& \frac{1}{2} \left\{ \langle [\partial_t\hat{\Phi}_{x}(t)]^2\rangle +
\langle [\partial_x \hat{\Phi}_{x}(t)]^2\rangle \right\} \nonumber\\
&=& \lim_{(t',x')\to(t,x)} \frac{1}{2} \left( \partial_t \partial_{t'} + \partial_x \partial_{x'}\right)
\langle \hat{\Phi}_{x}(t),\hat{\Phi}_{x'}(t')\rangle . \label{T00def}
\end{eqnarray}
While the above expression formally diverges in the coincidence limit $(t',x')\to (t,x)$, we are only interested in the renormalized energy
density of the field with the contribution of the free field subtracted,
\begin{equation}
\langle \hat{T}_{00}(t,x)\rangle_{\rm ren} = \langle \hat{T}_{00}(t,x)\rangle -\langle \hat{T}_{00}^{^{[0]}}(t,x)\rangle, \label{T00renDef}
\end{equation}
which can be obtained from (\ref{T00def}) with the full correlator of the field $\langle \hat{\Phi}_{x}(t),\hat{\Phi}_{x'}(t')\rangle$
replaced by the renormalized one, $\langle\hat{\Phi}_{x}(t),\hat{\Phi}_{x'}(t')\rangle_{\rm ren}$.
When substituted into (\ref{T00def}) and (\ref{T00renDef}), the late-time correlator $\langle\hat{\Phi}_{x}(t),\hat{\Phi}_{x'}
(t')\rangle_{\rm ren}$ in (\ref{FFrenInt}) is always a function of $t-t'$ and $x+x'$ since $x$ and $x'$ must have the same sign in the
coincidence limit for $x\not= 0$. This implies $\partial_t \partial_{t'}\langle\hat{\Phi}_{x}(t), \hat{\Phi}_{x'}(t')\rangle_{\rm ren} =
-\partial_x \partial_{x'} \langle\hat{\Phi}_{x}(t),\hat{\Phi}_{x'}(t')\rangle_{\rm ren}$ at late times, and thus $\langle \hat{T}_{00}(t,x)
\rangle_{\rm ren} \to 0$ for $x\not=0$, namely, the late-time energy density of the field outside the detector is the same as the vacuum energy density, though the field spectra are quite different. This is not surprising: It is well known that the late-time stress energy tensor of the field for a uniformly accelerated UD$'$ detector (without coupling to ${\cal Z}$) is exactly zero \cite{HR00}.
Right at the position of the detector $x=0$, if we choose the regularization $|x|= \sqrt{x^2+\epsilon^2}$, $\epsilon= 0+$, then
$\partial_x |x|$ will vanish at $x=0$ for any finite regulator $\epsilon$ and we will end up with $\langle \hat{T}_{00}(t,x=0)
\rangle_{\rm ren}\to\frac{1}{2}\lim_{t'\to t}\partial_t\partial_{t'}\langle \hat{\Phi}_{0}(t),\hat{\Phi}_{0}(t')\rangle_{ren} = -(\gamma/2)
\langle \hat{P}_A^2\rangle$ at late times, with the late-time result of $\langle \hat{P}_A^2\rangle$ given in (\ref{P2LT}).
\section{Cavity of detector mirrors}
\label{detCav}
With this understanding of a single detector mirror, we are ready to model a cavity with two detector mirrors coupled to a common scalar
field in (1+1)D Minkowski space, while each detector mirror couples to its own mechanical environment. Our model is described by the action
\begin{eqnarray}
S &=& -\int dt dx \frac{1}{2}\partial_\mu\Phi_x(t) \partial^\mu\Phi_x(t) \nonumber\\ & &
+ \sum_{{\bf d}=A,B}\left\{ \frac{1}{2}\int d\tau^{}_{\bf d} \left[ \dot{Q}_{\bf d}^2(\tau^{}_{\bf d})-
\Omega_{\bf d}^2 Q_{\bf d}^2(\tau_{\bf d})\right]
-\int d\tau^{}_{\bf d} dy^{}_{\bf d} \frac{1}{2}\partial^{}_{\nu^{}_{\bf d}}
{\cal Z}_{y^{}_{\bf d}}(\tau^{}_{\bf d}) \partial_{}^{\nu^{}_{\bf d}} {\cal Z}_{y^{}_{\bf d}}(\tau^{}_{\bf d}) \right.\nonumber\\
& & \hspace{1cm}-\int d\tau^{}_{\bf d} \int dt dx \lambda^{}_{\bf d}(\tau^{}_{\bf d}) Q^{}_{\bf d}(\tau^{}_{\bf d})
\frac{d}{d\tau^{}_{\bf d}}\Phi_x(t) \delta(t-z_{\bf d}^0(\tau^{}_{\bf d}))\delta(x-z_{\bf d}^1(\tau^{}_{\bf d}))\nonumber\\ & &
\hspace{1cm}\left. -\int d\tau^{}_{\bf d} dy^{}_{\bf d} \tilde{\lambda}^{}_{\bf d}(\tau^{}_{\bf d})Q^{}_{\bf d}(\tau^{}_{\bf d})
\frac{d}{d\tau^{}_{\bf d}} {\cal Z}_{y^{}_{\bf d}}(\tau^{}_{\bf d}) \delta(y^{}_{\bf d}-\vartheta^{}_{\bf d}) \right\}.
\label{Stot2}
\end{eqnarray}
Suppose the two detector mirrors with internal oscillators $Q^{}_A$ and $Q^{}_B$ are at rest in space, and located at $x=0$ and
$x=L >0$, respectively. In other words, $\tau^{}_A=\tau^{}_B=t$, $z_A^\mu(\tau^{}_A) = (t, 0)$ and $z_B^\mu(\tau^{}_B) = (t, L)$. Let the
two detector mirrors be identical, $\Omega_A = \Omega_B = \Omega_0$, $\lambda^{}_A(t) = \lambda^{}_B(t) = \lambda(t)$, and
$\tilde{\lambda}^{}_A(t)=\tilde{\lambda}^{}_B(t)=\tilde{\lambda}(t)$. Generalizing the operator expansions (\ref{QAexpan})-(\ref{Xexpan}) to
$\kappa = A, B, \{{\rm k}\}, \{\tilde{\rm k}_A\}, \{\tilde{\rm k}_B\}$,
one can write down the equations of motion for the mode functions
\begin{eqnarray}
\left( \partial_t^2 -\partial_x^2\right) \varphi_x^\kappa(t) &=& \partial_t\left[ \lambda(t) q_A^\kappa(t) \delta(x)+
\lambda(t) q_B^\kappa(t) \delta(x-L) \right], \label{EOMfxK}\\
\left( \partial_t^2 -\partial_{y^{}_{\bf d}}^2\right) \zeta_{{\bf d},y^{}_{\bf d}}^\kappa(t) &=&\partial_t\left[\tilde{\lambda}_{\bf d}(t)
q_{\bf d}^\kappa(t) \delta(y^{}_{\bf d} - \vartheta^{}_{\bf d}) \right], \label{EOMXyK}\\
\left( \partial_t^2 +\Omega_0^2\right) q_{\bf d}^\kappa(t) &=& -\lambda(t)\partial_t \varphi_{z^1_{\bf d}}^\kappa(t)
- \tilde{\lambda}(t)\partial_t \zeta_{{\bf d},\vartheta^{}_{\bf d}}^\kappa(t) . \label{EOMqdK}
\end{eqnarray}
As in the single-detector case, inserting the solutions for (\ref{EOMfxK}) and (\ref{EOMXyK}),
\begin{eqnarray}
\varphi_x^\kappa(t) &=& \varphi_x^{\kappa^{[0]}}(t) +\frac{1}{2} \lambda(t-|x|)q_A^\kappa (t-|x|) +
\frac{1}{2} \lambda(t-|x-L|)q_B^\kappa (t-|x-L|), \label{fxKsol}\\
\zeta_{{\bf d},y_{\bf d}}^\kappa(t) &=& \zeta_{{\bf d},y_{\bf d}}^{\kappa^{[0]}}(t) +\frac{1}{2}
\tilde{\lambda}(t-|y^{}_{\bf d}-\vartheta^{}_{\bf d}|)q_{\bf d}^\kappa (t-|y^{}_{\bf d}-\vartheta^{}_{\bf d}|), \label{XyKsol}
\end{eqnarray}
into (\ref{EOMqdK}), one obtains
\begin{eqnarray}
& & \ddot{q}_{\bf d}^\kappa(t) + 2\left[\gamma(t)+\tilde{\gamma}(t)\right]\dot{q}_{\bf d}^\kappa(t) +
\left[\Omega_0^2 +\dot{\gamma}(t)+\dot{\tilde{\gamma}}(t)\right]q_{\bf d}^\kappa(t) \nonumber\\ &=& -\frac{\lambda(t)}{2}\partial_t
\left[ \lambda(t-L) q_{\bf \bar{d}}^\kappa(t-L) \right] - \lambda(t)\dot{\varphi}_{z^1_{\bf d}}^{[0]\kappa}(t)
- \tilde{\lambda}(t)\dot{\zeta}_{{\bf d},\vartheta^{}_{\bf d}}^{[0]\kappa}(t), \label{EOMqdKBR}
\end{eqnarray}
where $\bar{A}\equiv B$ and $\bar{B}\equiv A$.
\subsection{Relaxation and resonance}
\label{relax2}
Suppose the combined system goes through a process similar to the one in Sec. \ref{QTUD}: It starts with the product of the
ground states of the free internal HOs and the vacuum states of the free field and of the free mechanical environments, and the OE couplings
of both detector mirrors have been switched on for a long time ($\tilde{t}_0 \to -\infty$) before their OF couplings are switched on at
$t=t_0=0$. In (\ref{fxKsol}) and (\ref{EOMqdKBR}), one can see that only half of the retarded field emitted by one detector mirror of the
cavity in (1+1)D Minkowski space will reach the other detector mirror of the cavity. The other half will go all the way to the null infinity
and never return. Carried by the retarded field, it seems that all the initial information in the internal HO and the switching function of
the OF coupling would eventually dissipate into the deep Minkowski space, so that there would be no initial information around $t=0$ kept
in our cavity at late times. Nevertheless, as we will see below, in the absence of the OE coupling ($\tilde{\gamma}=0$), there
can exist late-time non-steady states of the combined system which may depend on the initial conditions around $t=0$, if the internal HOs
of the detector mirrors are resonant with their mutual influences via the field.
Let $q_\pm^\kappa = (q_A^\kappa \pm q_B^\kappa)/\sqrt{2}$. Then (\ref{EOMqdKBR}) can be rewritten as
\begin{eqnarray}
\ddot{q}_{\pm}^\kappa(t) + 2\left[\gamma(t)+\tilde{\gamma}(t)\right]\dot{q}_{\pm}^\kappa(t) +
\left[\Omega_0^2 +2\dot{\gamma}(t)+2\dot{\tilde{\gamma}}(t)\right]q_{\pm}^\kappa(t) & & \nonumber\\
=\, \mp \frac{\lambda(t)}{2}\partial_t \left[ \lambda(t-L) q_{\pm}^\kappa(t-L) \right] +f_\pm^\kappa(t), & &
\label{EOMqpm}
\end{eqnarray}
where the driving force is defined as $f_\pm^\kappa(t)\equiv-\lambda(t)\dot{\varphi}_{\pm}^{[0]\kappa}(t)-\tilde{\lambda}(t)
\dot{\zeta}_{\pm}^{[0]\kappa}(t)$ with $\varphi_\pm^{[0]\kappa} \equiv ( \varphi_{0}^{[0]\kappa} \pm \varphi_{L}^{[0]\kappa} )/\sqrt{2}$ and
$\zeta_\pm^{[0]\kappa}\equiv ( \zeta_{A, \vartheta^{}_A}^{[0]\kappa} \pm \zeta_{B, \vartheta^{}_B}^{[0]\kappa} )/\sqrt{2}$.
Now $q_+^\kappa$ and $q_-^\kappa$ decouple; each obeys a delay equation with a nonlocal (time-delayed) self-force in addition to the driving force $f_\pm^\kappa$.
Suppose $q_\pm^\kappa (t) = \sum_\Omega \alpha_\pm^\kappa (\Omega) e^{-i\Omega t}$ for $t \gg L > 0 \gg \tilde{t}_0$ and $T\ll L$ in
(\ref{thetaT}), so that $\gamma$ and $\tilde{\gamma}$ have become constant in time. Since $f_\pm^\kappa(t)$ vanish for $\kappa =A,B$
and are simple harmonic for $\kappa= \{\rm k\}, \{\tilde{\rm k}_A\}, \{\tilde{\rm k}_B\}$ (cf. the expressions below Eq. (\ref{chisol})), for
those $\Omega\not={\rm w}(\equiv |{\rm k}|)$ for $\kappa={\rm k}$, or $\Omega\not= \tilde{\rm w}_{\bf d}(\equiv|\tilde{\rm k}_{\bf d}|)$
for $\kappa = \tilde{\rm k}_{\bf d}$, Eq. (\ref{EOMqpm}) requires
\begin{equation}
\Omega^2 + 2 i\Omega \left[\tilde{\gamma} + \gamma \left(1\pm e^{i\Omega L}\right) \right]- \Omega_0^2 = 0 \label{EqOm}
\end{equation}
for nonzero $\alpha_\pm^\kappa(\Omega)$.
Let $\Omega = R + i I$ with $R, I \in {\bf R}$. Then the real and imaginary parts of (\ref{EqOm}) read
\begin{eqnarray}
R^2-I^2-\Omega_0^2-2 I\left[\tilde{\gamma} +\gamma\left( 1\pm e^{-I L}\cos RL\right)\right]\mp 2\gamma R\, e^{-I L}\sin R L &=&
0\label{ReW}\\
2RI+2R\left[\tilde{\gamma} +\gamma\left( 1\pm e^{-I L}\cos R L\right)\right]\mp 2\gamma I\, e^{-I L}\sin R L &=& 0. \label{ImW}
\end{eqnarray}
The real solutions for $\Omega$, if they exist, will have $I=0$ and so (\ref{ImW}) implies
\begin{equation}
\mp\cos RL = 1 + \frac{\tilde{\gamma}}{\gamma},
\end{equation}
which cannot hold unless $\tilde{\gamma}=0$, since $|\cos RL| \le 1$ and $\gamma, \tilde{\gamma} \ge 0$. For $\tilde{\gamma}=0$,
the real solution of (\ref{EqOm}) is $\Omega = \Omega_0$: for $q_-^\kappa$ when $\Omega_0 = 2n\pi/L$ for some positive integer $n$, or
for $q_+^\kappa$ when $\Omega_0=(2n-1)\pi/L$. When one of these conditions is met, the internal HOs in the detector mirrors are resonant with their mutual influences, and $q_\pm^\kappa (t)$ will never both settle down to steady states of constant amplitude.
This makes the late-time field spectrum ($\sim |\varphi_x^k(t)|^2$; see Sec. \ref{cavmodLT}) inside the cavity restless forever over a
range of driving-force frequencies $|k|$ ($k={\rm k}, \tilde{\rm k}_A, \tilde{\rm k}_B$), due to the mixing of the driving
and resonant frequencies. Outside the cavity, the late-time field spectrum at the same frequencies never settles down either,
though its variations in time are less significant in magnitude than those inside the cavity.
These time-varying patterns of the field spectrum at late times may depend on the initial conditions, such as the time scale
and the functional form of the switching function $\gamma(t)$ for the OF coupling.
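As a quick numerical sanity check (a sketch with illustrative parameter values, not code from this work), one can verify that for $\tilde{\gamma}=0$ the characteristic equation (\ref{EqOm}) is indeed satisfied by the real frequency $\Omega=\Omega_0$ on the $q_-$ branch when $\Omega_0=2n\pi/L$, and on the $q_+$ branch when $\Omega_0=(2n-1)\pi/L$:

```python
import cmath
import math

def char_eq(Omega, Omega0, gamma, gamma_t, L, sign):
    """Left-hand side of Eq. (EqOm):
    Omega^2 + 2i*Omega*[gamma_t + gamma*(1 + sign*exp(i*Omega*L))] - Omega0^2,
    with sign = +1 for the q_+ branch and sign = -1 for the q_- branch."""
    return (Omega**2
            + 2j * Omega * (gamma_t + gamma * (1 + sign * cmath.exp(1j * Omega * L)))
            - Omega0**2)

L, gamma, gamma_t = 40.0, 10.0, 0.0   # illustrative values; gamma_t = 0 means no OE coupling

for n in (1, 2, 3):
    # q_- branch: resonance at Omega0 = 2*n*pi/L, where exp(i*Omega0*L) = 1
    w_minus = 2 * n * math.pi / L
    assert abs(char_eq(w_minus, w_minus, gamma, gamma_t, L, -1)) < 1e-9
    # q_+ branch: resonance at Omega0 = (2n-1)*pi/L, where exp(i*Omega0*L) = -1
    w_plus = (2 * n - 1) * math.pi / L
    assert abs(char_eq(w_plus, w_plus, gamma, gamma_t, L, +1)) < 1e-9
```

With any $\tilde{\gamma}>0$ the residual at these real frequencies is nonzero, consistent with the statement that real solutions require $\tilde{\gamma}=0$.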
If there exist purely imaginary solutions, which have $R=0$, then (\ref{ImW}) will be trivial ($0=0$) and (\ref{ReW}) will become
\begin{equation}
I^2 + \Omega_0^2 = -2 I [\tilde{\gamma} + \gamma(1\pm e^{-I L})], \label{condR0}
\end{equation}
which implies that $I\not= 0$ and $ I [\tilde{\gamma} + \gamma(1\pm e^{-I L})]$ must be negative.
If $I>0$, then $1\pm e^{-I L} >0$ and so $I [\tilde{\gamma} + \gamma(1\pm e^{-I L})] > 0$, which contradicts (\ref{condR0}).
Thus $I$ must be negative here. Similarly, when both $R$ and $I$ are nonzero, (\ref{ReW}) and (\ref{ImW}) yield
\begin{equation}
(R^2+I^2)\{ 1+ 2 I^{-1}[\tilde{\gamma} +\gamma(1\pm e^{-I L}\cos RL)]\} = -\Omega_0^2, \label{condRn0}
\end{equation}
which implies that the expression in the curly brackets must be negative. If $I>0$, then $1+ (2/I)[\tilde{\gamma} +
\gamma(1\pm e^{-I L}\cos RL)]> 0$ and (\ref{condRn0}) cannot hold. So $I$ must be negative here, too.
Therefore, the imaginary parts of the complex solutions for $\Omega\not={\rm w}$, $\tilde{\rm w}_{\bf d}$, or $\Omega_0$ if $\Omega_0=
n\pi/L$ for some positive integer $n$, must be negative, and the corresponding modes $e^{-i\Omega t} = e^{-|I| t}e^{-i R t}$ will decay out
as $t\to \infty$. At late times, only the oscillations of $\Omega={\rm w}$ and $\tilde{\rm w}_{\bf d}$ for all values of $\Omega_0$, and
additionally $\Omega=\Omega_0$ when $\Omega_0$ happens to be $n\pi/L$ for some positive integer $n$, will survive.
Longer relaxation times would occur in the cases with $\Omega\approx \Omega_0 \approx n\pi/L$ for some positive integer $n$.
In these near-resonance cases, one may write
\begin{equation}
\Omega = \frac{n\pi}{L} + \epsilon_{n} + i I_{n} ,
\end{equation}
where $|\epsilon_n|, |I_n| \ll n\pi/L$ and $I_{n}<0$.
Assuming that $|\epsilon_n|$ and $|I_n|$ are roughly of the same order and that $|\epsilon_n L|, |I_n L| \ll 1$,
they can be approximated by
\begin{eqnarray}
\epsilon^{}_{n} \approx \frac{\Omega_0^2 - (n\pi/L)^2}{2(n\pi/L)(1+\gamma L)}, \hspace{.5cm} & &
I_n \approx \frac{1}{\gamma L^2}\left\{ J_n-\sqrt{ J_n^2+ 2\gamma L^2\left(\tilde{\gamma}+\frac{\gamma\epsilon_n^2 L^2}{2}\right)}\right\},
\nonumber\\ & & J_n\equiv 1+\gamma L+(-1)^n\frac{\gamma\epsilon_n L^2}{n\pi},
\end{eqnarray}
from (\ref{ReW}) and (\ref{ImW}). To keep the above approximate expression for $\epsilon^{}_n$ small, one should take a large value of
$\gamma L$, and/or $\Omega_0$ should be very close to $n\pi/L$ for some positive integer $n$. This is achieved more easily when the
separation $L$ of the mirrors is large, since $|\Omega_0 - n\pi/L| \le \pi/(2L)$ will then be small for a generic $\Omega_0$ and the integer $n$
closest to the value of $\Omega_0 L/\pi$. For a very large $L$ the approximation can be good even when $|\Omega_0 -n' \pi/L|$ is a few
times $\pi/L$ for some $n'\not= n$. Note that $I_n|_{\epsilon_n=0}$ vanishes for $\tilde{\gamma}=0$, which brings us back to the resonant cases.
As seen from (\ref{condR0}), besides $\Omega\approx n\pi/L$ with positive integer $n$, there may exist purely imaginary solutions
$\Omega = i I_0$ for $q_+^\kappa$ (in general) and for $q_-^\kappa$ (in some particular parameter ranges).
Indeed, according to (\ref{condR0}), when $|I_0 L| \ll 1$ one has
\begin{equation}
I_0 \approx {\rm Re}\left[ \frac{-(\tilde{\gamma}+2\gamma)+\sqrt{ (\tilde{\gamma}+2\gamma)^2-\Omega_0^2(1-2\gamma L)}}{1-2\gamma L}
\right] \label{Image0}
\end{equation}
for $q_+$, which is always closer to zero than its counterpart for $q_-$ (if any).
This becomes clear by rearranging (\ref{condR0})
into $I^2 +2I(\tilde{\gamma} +\gamma) +\Omega_0^2 = \mp 2 I e^{-I L}$ for $q_\pm$, and then observing that the left-hand side is a concave-up
parabola with its minimum at some negative $I$, while the right-hand side is zero at $I=0$ and monotonically decreasing (increasing) for
$q_+^\kappa$ ($q_-^\kappa$) as $I$ approaches $0$ from a negative value.
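The small-$|I_0 L|$ approximation (\ref{Image0}) can be checked against a direct root of (\ref{condR0}) for the $q_+$ branch. The sketch below (not from this work's code) uses the parameter values of Figure \ref{FxkEvo}, for which the caption quotes $1/|I_0|\approx 4219$:

```python
import math

gamma, gamma_t, Omega0, L = 10.0, 1.0, 0.1, 40.0   # parameter values of Figure FxkEvo

def g_plus(I):
    """Eq. (condR0) for the q_+ branch, written as g(I) = 0:
    I^2 + Omega0^2 + 2*I*[gamma_t + gamma*(1 + exp(-I*L))]."""
    return I*I + Omega0**2 + 2*I*(gamma_t + gamma*(1 + math.exp(-I*L)))

# bracket the negative root and bisect
a, b = -1e-3, -1e-5
assert g_plus(a) < 0 < g_plus(b)
for _ in range(200):
    m = 0.5*(a + b)
    if g_plus(m) < 0:
        a = m
    else:
        b = m
I_root = 0.5*(a + b)

# small-|I*L| approximation, Eq. (Image0)
I_approx = (-(gamma_t + 2*gamma)
            + math.sqrt((gamma_t + 2*gamma)**2 - Omega0**2*(1 - 2*gamma*L))) / (1 - 2*gamma*L)

assert I_root < 0
assert abs(I_root - I_approx) / abs(I_approx) < 0.02
assert 4100 < 1/abs(I_root) < 4350   # consistent with 1/|I_0| ~ 4219
```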
The relaxation time of our cavity with a not-too-small separation of the mirrors
can be estimated as the inverse of the minimal $|I_{n'}|$ ($n'=0,1,2,3,\cdots$) among the above solutions.
In the cases with the minimal $|I_n|\not= |I_0|$ (namely, $n>0$), when the separation is sufficiently large
so that $\gamma L \gg \tilde{\gamma}$, and $\Omega_0$ is close enough to $n\pi/L$, one has
\begin{equation}
t_{\rm rlx}\approx (1+\gamma L)/\tilde{\gamma}
\end{equation}
for the HO pair in the weak-OE-coupling, strong-OF-coupling, over-damping regime. Compared with (\ref{rlxtime}) for the HO in a
single detector mirror in the same regime, we see that a stronger OF coupling still makes the relaxation time longer, with
$t_{\rm rlx}\sim \gamma$ for very large $\gamma$ in both cases, but a stronger HO environment here plays the opposite role to that in the
single-mirror cases and shortens the relaxation time of the cavity near resonance.
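A small numerical check of the limit behind this estimate (an illustrative sketch, with parameter values chosen only to satisfy $\gamma L \gg \tilde{\gamma}$): setting $\epsilon_n=0$ in the approximate expression for $I_n$ above, one indeed recovers $1/|I_n|\approx(1+\gamma L)/\tilde{\gamma}$:

```python
import math

gamma, L, gamma_t = 10.0, 40.0, 0.01   # illustrative: gamma*L = 400 >> gamma_t
eps_n = 0.0                            # right at resonance, Omega_0 = n*pi/L

# approximate I_n from the near-resonance formulas; the (-1)^n term drops for eps_n = 0
J_n = 1 + gamma*L
I_n = (J_n - math.sqrt(J_n**2 + 2*gamma*L**2*(gamma_t + gamma*eps_n**2*L**2/2))) / (gamma*L**2)

t_num = 1/abs(I_n)                     # relaxation time from the root
t_est = (1 + gamma*L)/gamma_t          # the estimate t_rlx = (1 + gamma*L)/gamma_t
assert abs(t_num - t_est)/t_est < 0.01
```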
Note that, unlike the (3+1)D case in Ref. \cite{LH09}, there is no instability in the small $L$ limit here since the retarded field is
independent of the distance $L$ from the source in (1+1)D, while it is proportional to $1/L$ in \cite{LH09}. As $L\to 0$, the equations
of motion in (\ref{EOMqpm}) simply become regular, ordinary differential equations without delay.
\subsection{Cavity modes at late times}
\label{cavmodLT}
With a non-vanishing coupling $\tilde{\gamma}$ to the environment, one can get rid of the late-time non-steady states
described in Sec. \ref{relax2}. After the OF coupling is switched on, if we look at the field amplitudes only inside the cavity,
the field spectrum appears to evolve from continuous to nearly discrete in the neighborhood of the resonant frequencies.
For $t>0$, the field spectrum defined in (\ref{Flatecoin}) can be read off from the coincidence limit of the symmetrized two-point
correlator of the field,
\begin{eqnarray}
\langle \hat{\Phi}_{x}(t),\hat{\Phi}_{x'}(t')\rangle &=& {\rm Re}\left\{
\int \frac{d\rm k}{2\pi} \frac{\hbar}{2\rm w} \varphi_x^{\rm k}(t)\varphi_{x'}^{\rm k*}(t') \right.\nonumber\\ & & + \left.
\int \frac{d\tilde{\rm k}^{}_A}{2\pi} \frac{\hbar}{2\tilde{\rm w}^{}_A}\varphi_x^{\tilde{\rm k}^{}_A}(t)
\varphi_{x'}^{\tilde{\rm k}^{}_A*}(t')+
\int \frac{d\tilde{\rm k}^{}_B}{2\pi} \frac{\hbar}{2\tilde{\rm w}^{}_B}\varphi_x^{\tilde{\rm k}^{}_B}(t)
\varphi_{x'}^{\tilde{\rm k}^{}_B*}(t')
\right\} \label{FxtFyt}
\end{eqnarray}
in the presence of the cavity.
An example of the time evolution of the field modes is given in Figure \ref{FxkEvo}, where we consider a case with a larger value of
$\tilde{\gamma}$, namely $\Omega_0, L^{-1} <\tilde{\gamma} \ll \gamma$, to reach the late-time steady states sooner while a wide range of
cavity modes can still be generated. In this example, the evolution of each single field mode from the initial moment to late times can
roughly be divided into four stages:
(i) At very early times, the shock waves produced by the switching-on of the OF coupling propagate freely in space;
(ii) after the waves produced by two different mirrors collide, violent changes of the field amplitude squared occur;
(iii) after a timescale comparable with the relaxation time of the cavity, the interference pattern of the cavity mode is basically built up,
but the field amplitude squared keeps ringing down with small oscillations in time;
(iv) after a longer timescale the shape of the field spectrum against $x$ gets into the late-time steady state.
The resonant modes ($\omega \approx n \pi/L$, $n=1,2,3,\cdots$) will survive, while the off-resonant modes will be suppressed in the cavity.
\begin{figure}
\includegraphics[width=4.2cm]{fig4a_Fkx_W0p1r10rt1L40k2t15.pdf}
\includegraphics[width=4.2cm]{fig4b_Fkx_W0p1r10rt1L40k2t118.pdf}
\includegraphics[width=4.2cm]{fig4c_Fkx_W0p1r10rt1L40k2t2201.pdf}
\includegraphics[width=4.2cm]{fig4d_Fkx_W0p1r10rt1L40k2t4231.pdf}\\
\includegraphics[width=4.2cm]{fig4e_Fkx_W0p1r10rt1L40km1p5t15Ra.pdf}
\includegraphics[width=4.2cm]{fig4f_Fkx_W0p1r10rt1L40km1p5t118Ra.pdf}
\includegraphics[width=4.2cm]{fig4g_Fkx_W0p1r10rt1L40km1p5t2201Ra.pdf}
\includegraphics[width=4.2cm]{fig4h_Fkx_W0p1r10rt1L40km1p5t4231Ra.pdf}
\caption{Time evolution of the field spectrum $F_x^k$ defined in (\ref{Flatecoin}) and read off from (\ref{FxtFyt}) for $k=2.01\pi/L$
(right mover, upper row) and $k=-1.5\pi/L$ (left mover, lower row) against $x$. Here $\gamma=10$, $\tilde{\gamma}=1$, $\Omega_0=1/10$,
$L=40$, and $c=\hbar=1$. The green dashed lines mark the locations of the detector mirrors at $x=0$ and $x=L=40$.
Here the relaxation time for each single mirror is $t^{(1)}_{\rm rlx} \approx 2200$ according to (\ref{rlxtime}),
while the relaxation time for the cavity is $t^{(2)}_{\rm rlx} = 1/|I_0| \approx 4219 \approx 2t^{(1)}_{\rm rlx}$ from (\ref{Image0}).
The third and the fourth plots from the left in each row are $F_x^k$ at $t\approx t^{(1)}_{\rm rlx}$ and
$t^{(2)}_{\rm rlx}$, respectively.}
\label{FxkEvo}
\end{figure}
\begin{figure}
\includegraphics[width=6.3cm]{fig5a_Fkx2_W0p1r10rt1L40.png}
\includegraphics[width=5.5cm]{fig5b_Fkx_W0p1r10rt1L40xc.pdf}
\includegraphics[width=5.5cm]{fig5c_Fkx_W0p1r10rt1L40k.png}
\caption{(Left) The late-time field spectrum $F_x^k(t)$ against $k$ and $x$ in the over-damping regime, with the same parameter values as
those in Figure \ref{FxkEvo}. (Middle) The late-time results of $F_x^k$ in Figure \ref{FxkEvo} for $k=2.01\pi/L$ (black line) and $k=-1.5\pi/L$ (red line).
(Right) $F_x^k$ at the cavity center $x=L/2=20$ shows that the field spectrum in the cavity is nearly discrete in the low-$|k|$ regime
(inset), while the sharpness and the contrast of the comb teeth around $|k|=(2n-1)\pi/L$, $n=1,2,3,\cdots$, decrease as $|k|$ increases.}
\label{Phi2xkOvr}
\end{figure}
\begin{figure}
\includegraphics[width=6.3cm]{fig6a_Fkx2_W0p4rp01rtp003L40.png}
\includegraphics[width=5.5cm]{fig6b_Fkx_W0p4rp01rtp003L40x.pdf}
\includegraphics[width=5.5cm]{fig6c_Fkx_W0p4rp01rtp003L40k.pdf}
\caption{(Left) The late-time field spectrum $F_x^k(t)$ against $k$ and $x$ in the under-damping regime, where $\gamma=0.01$,
$\tilde{\gamma}=0.003$, $\Omega_0=0.4$, $L=40$, and $c=\hbar=1$. Here we only show the domain of $k>0$ (right movers).
(Middle) The field spectrum against $x$ for the cavity mode of $k =5.0628\pi/L \approx 0.3976 \approx \Omega_0-\tilde{\gamma}$ (black)
and the field mode of $k=-6.5345\pi/L \approx -(7-(1/2))\pi/L$ (red line, resonant transmission from right to left).
(Right) $F_x^k$ at the cavity center $x=L/2=20$ (blue line) and $x=2L=80$ (red dashed line) for $k>0$. The blue curve shows that the only significant cavity mode for $k>0$ is peaked at $k\approx 5.0628\pi/L$.
The red dashed curve indicates that the transmittivity through the cavity is suppressed around the cavity mode, and close to $1$ around the resonant transmissions at $k\approx (n-1/2)\pi/L$, $n=1,2,3,\cdots$.}
\label{Phi2xkUnd}
\end{figure}
At late times, the mode functions in (\ref{FxtFyt}) become
\begin{eqnarray}
\varphi_x^{\rm k}(t) &\to& e^{-i{\rm w} t}\left\{ e^{i{\rm k}x} - \gamma\left[ (1+e^{i{\rm k}L}){\cal E}^+_{\rm w}(x)\chi^{+}_{\rm w}
+(1-e^{i{\rm k}L}){\cal E}^-_{\rm w}(x) \chi^{-}_{\rm w} \right] \right\}, \\
\varphi_x^{\tilde{\rm k}^{}_A}(t) &\to& -\sqrt{\gamma\tilde{\gamma}} e^{i\tilde{\rm k}^{}_A\vartheta^{}_A-i\tilde{\rm w}^{}_A t}
\left[{\cal E}^+_{\tilde{\rm w}^{}_A}(x) \chi^{+}_{\tilde{\rm w}^{}_A} +
{\cal E}^-_{\tilde{\rm w}^{}_A}(x)\chi^{-}_{\tilde{\rm w}^{}_A} \right], \\
\varphi_x^{\tilde{\rm k}^{}_B}(t) &\to& -\sqrt{\gamma\tilde{\gamma}} e^{i\tilde{\rm k}^{}_B\vartheta^{}_B-i\tilde{\rm w}^{}_B t}
\left[{\cal E}^+_{\tilde{\rm w}^{}_B}(x) \chi^{+}_{\tilde{\rm w}^{}_B} -
{\cal E}^-_{\tilde{\rm w}^{}_B}(x)\chi^{-}_{\tilde{\rm w}^{}_B} \right],
\end{eqnarray}
with
\begin{eqnarray}
{\cal E}^\pm_\omega (x) &\equiv& e^{i\omega |x|}\pm e^{i\omega |x-L|}, \\
\chi^{\pm}_\omega &\equiv& \frac{-i\omega}{\Omega_0^2 - \omega^2 -2i\omega\left[ \tilde{\gamma}+\gamma(1\pm e^{i\omega L})\right]},
\label{chipm}
\end{eqnarray}
such that $q_\pm^k =\chi_{\omega}^\pm (-\lambda\varphi_\pm^{[0]k}-\tilde{\lambda}\zeta_\pm^{[0]k})$, $k=\{{\rm k}\}, \{\tilde{\rm k}_A\},
\{\tilde{\rm k}_B\}$, $\omega=|k|$, from (\ref{EOMqpm}). Then the coincidence limit $(t',x')\to (t,x)$ gives the late-time field spectrum:
\begin{eqnarray}
F^k_x = 1 +\gamma &{\rm Re}& \left\{
\chi^+_\omega {\cal E}_\omega^+(x)\left[{\cal E}_\omega^{+*}(x) -2\left(e^{-ikx}+e^{-ik(x-L)}\right)\right] \right. \nonumber\\
& & + \left.\chi^-_\omega {\cal E}_\omega^-(x)\left[{\cal E}_\omega^{-*}(x) -2\left(e^{-ikx}-e^{-ik(x-L)}\right)\right] \right\} ,
\label{Flate}
\end{eqnarray}
as defined in (\ref{Flatecoin}). Here we have used the identity
\begin{equation}
\chi^\pm_\omega +\chi^{\pm*}_\omega = 4\left[\tilde{\gamma}+\gamma\left( 1\pm\cos\omega L\right)\right]|\chi^\pm_\omega|^2 \label{FDR2}
\end{equation}
similar to (\ref{FDR}).
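The identity (\ref{FDR2}) can be confirmed numerically for arbitrary parameter values; a quick sketch (the values below are purely illustrative):

```python
import cmath
import math

def chi(omega, sign, Omega0, gamma, gamma_t, L):
    """Susceptibility chi^pm_omega of Eq. (chipm); sign = +1 (-1) for chi^+ (chi^-)."""
    return -1j*omega / (Omega0**2 - omega**2
                        - 2j*omega*(gamma_t + gamma*(1 + sign*cmath.exp(1j*omega*L))))

Omega0, gamma, gamma_t, L = 0.1, 10.0, 1.0, 40.0
for omega in (0.05, 0.3, 1.7, 9.9):
    for sign in (+1, -1):
        c = chi(omega, sign, Omega0, gamma, gamma_t, L)
        lhs = (c + c.conjugate()).real                                   # chi + chi*
        rhs = 4*(gamma_t + gamma*(1 + sign*math.cos(omega*L)))*abs(c)**2 # Eq. (FDR2)
        assert abs(lhs - rhs) < 1e-12 * max(1.0, abs(rhs))
```

The check is exact up to rounding: writing $\chi^\pm_\omega=-i\omega/D$, one has $2\,{\rm Re}\,\chi^\pm_\omega = 2\,{\rm Re}(-i\omega D^*)/|D|^2 = 4\omega^2[\tilde{\gamma}+\gamma(1\pm\cos\omega L)]/|D|^2$, while $|\chi^\pm_\omega|^2=\omega^2/|D|^2$.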
Note that the odd functions of $k$ in the integrand for the late-time $\langle \hat{\Phi}_{x}(t),\hat{\Phi}_{x'}(t')\rangle$
do not contribute to the $k$ integral and so they are not included in the above $F^k_x$.
Examples of the late-time field spectra in the over- and under-damping regimes are shown in Figures \ref{Phi2xkOvr} and \ref{Phi2xkUnd},
respectively. Figure \ref{Phi2xkOvr} is the late-time result of the case considered in Figure \ref{FxkEvo}. One can see that there are
indeed many cavity modes inside the cavity ($0<x<L$) in the strong OF coupling, over-damping regime. The standing waves due to the
interference of the incident and reflected waves outside the cavity, similar to those in the single mirror case in Figure
\ref{Phi2xk1}, can also be seen. Sampling at the center of the cavity $x=L/2$, the field spectrum $F_{L/2}^k$ looks discrete in the low-$|k|$
regime. In this example, $\Omega_0^2 \ll 2 \tilde{\gamma} \pi/L$ and so the peak values of the comb teeth of $F_{L/2}^k$ with small $n$ are
about $2\gamma/\tilde{\gamma}$,
while in the high-$|k|$ regime $F_{L/2}^k \approx 1+4 (\gamma/\omega)\sin\omega L$ looks continuous and goes to
the free-space value $1$ as $\omega=|k|\to \infty$.
The working range of this detector mirror is about $0< k < 150\pi/L$ from Figure \ref{Phi2xkOvr} (right).
When attention is restricted to the cavity, it may appear that all the two-point correlators of an off-resonant mode in the
cavity, $\langle \Phi_k, \Phi_{-k}\rangle$, $\langle \Pi_k, \Pi_{-k}\rangle$, and $\langle \Phi_k, \Pi_{-k}\rangle$, are suppressed in the
strong OF coupling regime, so that the uncertainty relation of that mode would be violated. This is not the case: when evaluating those
correlators in $k$ space one has to include the field spectrum outside the cavity as well.
As we discussed in Sec. \ref{secReflect} and illustrate in Figure \ref{Phi2xkUnd}, there are only one or a few pairs of significant cavity
modes at late times in the weak OF coupling, under-damping regime. In Figure \ref{Phi2xkUnd} the only significant cavity modes are peaked
around $|k| \approx 5 \pi/L$, which is nearly resonant with the natural frequency $\Omega_0$ of the internal HO in this example.
The reflectivity in the vicinity of the resonant frequency is high enough to suppress the transmitted wave on the other side of the cavity,
while the detector mirrors become almost transparent for the field modes away from this narrow resonance.
Outside the cavity, one can see the interference pattern of the incident wave and the reflected waves by the two detector mirrors if the
reflectivity of the mirror for that field mode is not too small or too large. The interferences of the waves reflected by the two detector
mirrors are destructive for $k\approx \pm (n-(1/2))\pi/L$, $n=1,2,3,\cdots$, where the resonant transmission occurs, and constructive for
$k \approx \pm n\pi/L$, which is the basis of Bragg reflection \cite{CGS11, CJGK12, CG16, SB16}.
The result in the over-damping regime in Figure \ref{Phi2xkOvr} (left) does not show this feature because the reflectivity of the
detector mirrors in the plot is so close to $1$ that the waves (say, from $x<0$) transmitted through the first mirror (at $x=0$) and
reflected by the second mirror (at $x=L$), and then transmitted through the first mirror again to the incident region ($x<0$), are negligible. Under the same conditions as in Figure \ref{Phi2xkOvr} but in the high-$|k|$ regime, where the reflectivity
is lower, similar destructive and constructive interferences of the incident and reflected waves outside the cavity can also be observed.
\subsection{Casimir effect}
\label{CasimirEff}
Inserting the results (\ref{FxtFyt})-(\ref{chipm}) into (\ref{T00def}) and (\ref{T00renDef}), one obtains the late-time renormalized field
energy density in the presence of the cavity mirrors:
\begin{equation}
\langle \hat{T}_{00}(t,x)\rangle_{\rm ren}
\to \lim_{(t',x')\to (t,x)}\frac{1}{2}\left( \partial_t\partial_{t'}+\partial_x\partial_{x'} \right)
\int_{0}^{\omega^{}_M} \frac{d\omega}{2\pi}\frac{\hbar}{2\omega}F^\omega(t,x;t',x'), \label{T00int}
\end{equation}
where
\begin{eqnarray}
& & F^\omega(t,x;t',x') = - 2\gamma\cos\omega(t-t'){\rm Re}\left[ \chi^+_\omega {\cal E}_+(x){\cal E}_+(x') +
\chi^-_{\omega} {\cal E}_-(x){\cal E}_-(x') \right] \label{Fomegalate}
\end{eqnarray}
and $\omega^{}_M$ is the UV cutoff, which should be identical to the one for the internal HOs of our detector mirrors
(to be introduced in Sec. \ref{EntMirOsc}), since (\ref{T00int}) includes the back-reaction of the detector mirrors on the field.
A straightforward calculation shows that at late times $\langle \hat{T}_{00}(t,x)\rangle_{\rm ren} =0$ outside the cavity ($x<0$ or $x>L$),
and inside the cavity
\begin{eqnarray}
& & \left. \langle \hat{T}_{00}(t,x)\rangle_{\rm ren}\right|_{0<x<L} \to \nonumber\\
& & -\hbar {\rm Re} \int_0^{\omega^{}_M}
\frac{d\omega}{2\pi}\frac{8 \gamma^2 \omega^3 e^{2i\omega L}}{\left[ \omega^2 + 2i\omega(\gamma+\tilde{\gamma})-\Omega_0^2\right]^2+
4\gamma^2\omega^2 e^{2i\omega L}} \stackrel{\omega^{}_M\to\infty}{\longrightarrow} \rho^{}_\Phi, \label{E0inCav}
\end{eqnarray}
which is a finite constant independent of $x$.
For $\Omega_0=0.1$, $\gamma=10$, $\tilde{\gamma}=1$, and $L=40$ in Figures \ref{FxkEvo} and \ref{Phi2xkOvr}, we have $\rho^{}_\Phi\approx
-0.0000483163 <0$ ($c=\hbar=1$). This is the Casimir effect in our cavity of imperfect mirrors.
\begin{figure}
\includegraphics[width=8cm]{fig7a_E0x_W0p1r10rt1L40wM480KPibyL.png} \hspace{.5cm}
\includegraphics[width=5cm]{fig7b_T00_Contour.pdf}
\caption{(Left) Late-time energy density of the field $\langle \hat{T}_{00}\rangle_{\rm ren}$ in (\ref{E0inCav}) inside the cavity against
the UV cutoff $\omega^{}_M$ scaled by $\pi/L$ (red). Here $\gamma=10$, $\tilde{\gamma}=1$, $\Omega_0=0.1$, $L=40$, and $c=\hbar=1$.
The value of $\langle \hat{T}_{00}\rangle_{\rm ren}$ oscillates between negative and positive values for $\omega^{}_M$ less than about
$4.2\times 10^5 \pi/L$, and then converges to $-0.0000483163$ (black dashed line) as $\omega^{}_M$ increases further.
The blue curve in the inset is the field spectrum in Figure \ref{Phi2xkOvr} (right). The largest amplitude of the oscillating $\langle
\hat{T}_{00}\rangle_{\rm ren}$ occurs around $(\omega^{}_M L/\pi) \approx 250$, namely, $\omega^{}_M \approx 2\gamma = 20$, where the peak
values of the field spectrum have dropped significantly from the maximum at low $\omega^{}_M$.
(Right) The poles (represented in ``$\times$") in the integrand of (\ref{E0inCav}) are all in the lower half of the complex $\omega$ plane.
Thus the integral along the closed contour (dashed and dotted lines) must vanish.}
\label{E0Lam}
\end{figure}
The integral in (\ref{E0inCav}) for small UV cutoff $\omega^{}_M$ oscillates between negative
and positive values as $\omega^{}_M$ increases (Figure \ref{E0Lam} (left)).
The amplitude of this oscillation remains large until $\omega^{}_M$ gets much greater than $\gamma$, $\tilde{\gamma}$, and $\Omega_0$, at which point
the $\omega^4$ term dominates the denominator of the integrand in (\ref{E0inCav}) for $\omega$ close to $\omega^{}_M$ and makes the integral
evolve like $-\hbar{\rm Re}[\int^{\omega^{}_M} d\omega\, 8 \gamma^2 e^{2i\omega L}/(2\pi\omega)] = -4\hbar\gamma^2 {\rm Ci}(2L\omega^{}_M)/
\pi\approx -2\hbar\gamma^2 (\pi L\omega^{}_M)^{-1}\sin(2L\omega^{}_M)$ on top of the lower-UV-cutoff result, so that $\langle \hat{T}_{00}
(t,x)\rangle_{\rm ren}$ in the cavity oscillates roughly about the constant $\rho_\Phi$ with an amplitude decreasing as $\omega_M^{-1}$.
One cannot tell whether the value of the renormalized energy density is negative or positive if the UV cutoff is not large enough.
If $\rho^{}_\Phi <0$, one should take a value of $\omega^{}_M$ much greater than $2\hbar\gamma^2/(\pi L|\rho^{}_\Phi|)$ to resolve the
negativity of $\rho^{}_\Phi$. This reminds us that the Casimir effect is a finite-size effect of constraints on quantum fluctuations \cite{HO87}, which is not a purely IR or UV phenomenon: it depends not only on the field modes of long wavelengths comparable with the scale of the background geometry; one has to sum over all the cavity modes in a perfect cavity to obtain the conventional result for the Casimir energy density \cite{Ca48}.
If one introduces a normalizable, smooth switching function, such as a Gaussian or Lorentzian function of time, for the coupling of an
apparatus to the cavity field, it will suppress the contribution from the short-wavelength modes \cite{LCH16} and make the ``observed''
energy density less negative \cite{Fo91, FR95, Fl97}.
In our model the spectrum of the short-wavelength modes is closer to the ones in free space than those in a perfect cavity.
One may wonder whether there exists some choice of parameter values which leads to a non-negative late-time energy density in our cavity
for sufficiently large $\omega^{}_M$. To answer this question, one needs to know the exact sign of $\rho^{}_\Phi$,
which is very hard to determine by computing (\ref{E0inCav}) numerically when $\rho^{}_\Phi$ is extremely close to zero.
Fortunately, the poles of the integrand of (\ref{E0inCav}) are all located in the lower half of the complex plane, so the integral along a
closed contour from $\omega = 0\to \infty\to i\infty\to 0$ in the upper complex plane (Figure \ref{E0Lam} (right)) gives zero.
Since $L > 0$ in the factor $e^{2iL\omega}$ in the numerator of the integrand of (\ref{E0inCav}), which suppresses the contribution
around $\omega \sim i\infty$ (the dotted part of the contour in Figure \ref{E0Lam} (right)), we have
\begin{equation}
\rho^{}_\Phi = -\hbar \int_0^\infty \frac{d\beta}{2\pi}\frac{8 \gamma^2 \beta^3 e^{-2L\beta}}
{\left[ \beta^2 + 2\beta(\gamma+\tilde{\gamma})+\Omega_0^2\right]^2 -4\gamma^2 \beta^2 e^{-2L\beta}}, \label{E0inCavWR}
\end{equation}
which is Wick-rotated from (\ref{E0inCav}) by letting $\omega =i\beta$ \cite{GJ02, GJ03, GJ04, Ja05}.
Eq. (\ref{E0inCavWR}) converges much faster than (\ref{E0inCav}) in numerical calculations.
Further, the integrand in (\ref{E0inCavWR}) is positive definite for $\beta\ge 0$, so $\rho^{}_\Phi$ must be negative for all regular, non-resonant choices of the parameter values in our model (in the resonant case with $\tilde{\gamma}=0$ and $\Omega_0=n\pi/L$ for some positive integer $n$, the system will never settle down to the late-time steady state with (\ref{E0inCav}); see Sec. \ref{cavmodLT}).
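As a cross-check (a numerical sketch, not code from this work), evaluating (\ref{E0inCavWR}) by simple quadrature with the parameter values of Figure \ref{FxkEvo} reproduces the quoted value $\rho^{}_\Phi\approx -0.0000483$ and confirms its sign:

```python
import math

gamma, gamma_t, Omega0, L, hbar = 10.0, 1.0, 0.1, 40.0, 1.0   # values of Figure FxkEvo

def integrand(beta):
    """Integrand of the Wick-rotated energy density (E0inCavWR), without -hbar."""
    num = 8 * gamma**2 * beta**3 * math.exp(-2*L*beta)
    den = (beta**2 + 2*beta*(gamma + gamma_t) + Omega0**2)**2 \
          - 4 * gamma**2 * beta**2 * math.exp(-2*L*beta)
    return num / den / (2*math.pi)

# composite Simpson rule on [0, 1]; the factor e^{-2 L beta} makes the tail negligible
N = 200000                     # even number of subintervals
h = 1.0 / N
s = integrand(0.0) + integrand(1.0)
for i in range(1, N):
    s += (4 if i % 2 else 2) * integrand(i*h)
rho = -hbar * s * h / 3

assert rho < 0                         # the integrand is positive definite
assert -5.0e-5 < rho < -4.6e-5         # consistent with rho_Phi ~ -0.0000483163
```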
Note that we did not take the strong OF coupling limit in obtaining (\ref{E0inCav}) and (\ref{E0inCavWR}).
Even in the weak OF coupling regime where the working range of our detector mirrors is narrow (recall Figures \ref{reflec} and
\ref{Phi2xkUnd}), the Casimir energy density in our cavity with sufficiently large $\omega^{}_M$ is still negative, though it may be very
close to zero. Indeed, in the example in Figure \ref{Phi2xkUnd} one has $\rho^{}_\Phi \approx -6.9096\times 10^{-10} <0$ in the cavity,
even though only one pair of cavity modes is significant in the under-damping regime there.
It is obvious in (\ref{E0inCav}) and (\ref{E0inCavWR}) that the Casimir energy density goes to zero as the OF coupling $\gamma\to 0$.
Going to the other extreme, if one takes the limit $\gamma\to \infty$ before doing integration \cite{GJ02,GJ03,GJ04,Ja05}, then
\begin{eqnarray}
\rho^{}_\Phi &\to& -\hbar {\rm Re} \int_0^\infty
\frac{d\omega}{2\pi}\frac{8 \gamma^2 \omega^3 e^{2i\omega L}}{- 4\omega^2\gamma^2+
4\gamma^2\omega^2 e^{2i\omega L}} =-\hbar {\rm Re} \int_0^\infty
\frac{d\omega}{\pi}\frac{\omega e^{2i\omega L}}{-1 + e^{2i\omega L}} \nonumber\\
&=& \hbar {\rm Re} \int_0^\infty\frac{d\omega}{\pi} \omega \sum_{n=1}^\infty e^{2i\omega Ln}
=\frac{\hbar}{\pi} {\rm Re} \sum_{n=1}^\infty \int_0^\infty d\omega \omega e^{2i\omega Ln}
\nonumber\\ &=& \frac{\hbar}{\pi}\sum_{n=1}^\infty \frac{-1}{4 L^2 n^2}
= - \frac{\hbar \pi}{24L^2}, \label{ECconven}
\end{eqnarray}
and one recovers the conventional result for a perfect cavity in (1+1)D \cite{BD82}. In the above calculation a regularization
$L\to L+i\epsilon$ with $\epsilon\to 0+$ is understood. For $L=40$, $\rho^{}_\Phi \approx -0.0000818123$ in (\ref{ECconven}), which is
the same order of magnitude as the Casimir energy density in Figure \ref{E0Lam}.
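The last sum above can be checked directly (a minimal numerical sketch): since $\sum_{n\ge 1}n^{-2}=\pi^2/6$, the series gives $\rho^{}_\Phi\to-\hbar\pi/(24L^2)$, which for $L=40$ evaluates to the quoted $-0.0000818123$:

```python
import math

hbar, L = 1.0, 40.0
# partial sum of (hbar/pi) * sum_n [-1/(4 L^2 n^2)] from Eq. (ECconven)
s = (hbar/math.pi) * sum(-1.0/(4*L*L*n*n) for n in range(1, 1000001))
closed = -hbar*math.pi/(24*L*L)        # the closed form -hbar*pi/(24 L^2)

assert abs(s - closed) < 1e-9          # partial sum agrees with the closed form
assert abs(closed + 8.18123e-5) < 1e-9 # matches the value quoted for L = 40
```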
Right at the position of a detector mirror ($z^1_A=0$ or $z^1_B=L$), one has the late-time renormalized energy density of the field
\begin{equation}
\langle \hat{T}_{00}(z^\mu_{\bf d})\rangle_{\rm ren} \to -\frac{\gamma}{2} \langle \hat{P}^2_{\bf d}(t) \rangle +
\left.\langle \hat{T}_{00}(t,x)\rangle_{\rm ren}\right|_{0<x<L}
\end{equation}
which would have a logarithmic divergence in its first term had we not introduced a UV cutoff $\omega^{}_M$
for $\langle \hat{P}^2_{\bf d}(t) \rangle$ (see (\ref{PA2CavLT}) and below). With a finite $\omega^{}_M$, while the above energy density of
the field has a large negative value, its contribution to the field energy is about $\{-(\gamma/2) \langle \hat{P}^2_{\bf d}(t)\rangle
+\langle \hat{T}_{00}\rangle_{\rm ren}|_{0<x<L}\} dx$, which is small compared with the detector energy
$E_{\bf d} =(\langle \hat{P}^2_{\bf d}(t) \rangle + \Omega_0^2\langle \hat{Q}^2_{\bf d}(t) \rangle)/2$.
Thus the total energy around each mirror is still positive. Also, the total Casimir energy of the field is still
\begin{equation}
E^{}_\Phi =\int_{-\infty}^\infty dx\langle\hat{T}_{00}(t,x)\rangle_{\rm ren}=L \left.\langle \hat{T}_{00}\rangle_{\rm ren}\right|_{0<x<L}
\label{CasEn}
\end{equation}
since the contributions by the finite $\langle \hat{T}_{00}(z^\mu_{\bf d})\rangle_{\rm ren}$ at $x=0$ and $x=L$ are infinitesimal in the
integral.
When $L\to 0$, the conventional result for the Casimir energy diverges like $L\times (-L^{-2}) = -L^{-1}$ from (\ref{ECconven}) and
(\ref{CasEn}). In contrast, $\rho^{}_\Phi$ in (\ref{E0inCavWR}) behaves like $\ln L$ when $L$ is small, so the total Casimir energy $E_\Phi
\sim L\times \ln L$ goes to zero as the separation $L\to 0$ in our model.
The total energy of our HO-field system (with the field energy radiated during the transient ignored) is thus finite and cutoff dependent, and would be positive when the UV cutoff is sufficiently large.
\subsection{Late-time entanglement between mirror oscillators}
\label{EntMirOsc}
For our cavity of two detector mirrors, the symmetric two-point correlators of the internal HOs of the detectors can be formally represented
as
\begin{eqnarray}
\langle \hat{Q}^{}_{\bf d}(t), \hat{Q}^{}_{\bf d'}(t') \rangle &=& \frac{1}{2} {\rm Re} \left[
\sum_{\tilde{\bf d},\tilde{\bf d}'=A,B}\frac{\hbar}{2\Omega_0} q_{\bf d}^{\tilde{\bf d}}(t)q_{\bf d'}^{\tilde{\bf d}'*}(t') +
\int \frac{d\rm k}{2\pi} \frac{\hbar}{2\rm w} q_{\bf d}^{\rm k}(t)q_{\bf d'}^{\rm k*}(t') \right. \nonumber\\ & & +\left.
\int \frac{d\tilde{\rm k}_A}{2\pi} \frac{\hbar}{2\tilde{\rm w}^{}_A} q_{\bf d}^{\tilde{\rm k}_A}(t)q_{\bf d'}^{\tilde{\rm k}_A*}(t')+
\int \frac{d\tilde{\rm k}_B}{2\pi} \frac{\hbar}{2\tilde{\rm w}^{}_B} q_{\bf d}^{\tilde{\rm k}_B}(t)q_{\bf d'}^{\tilde{\rm k}_B*}(t')\right],
\end{eqnarray}
and so on. After some algebra, the late-time correlators of the oscillators are found to be
\begin{eqnarray}
&&\langle \hat{Q}^{2}_A(t)\rangle = \langle \hat{Q}^{2}_B(t)\rangle = 2{\rm Re} \left( {\cal F}^{}_{0+} + {\cal F}^{}_{0-}\right),
\label{QAQACavLT}\\
&&\langle \hat{Q}^{}_A(t), \hat{Q}^{}_B(t)\rangle = 2{\rm Re} \left( {\cal F}^{}_{0+} - {\cal F}^{}_{0-}\right), \\
&&\langle \hat{P}^{2}_A(t)\rangle = \langle \hat{P}^{2}_B(t)\rangle = 2{\rm Re} \left( {\cal F}^{}_{2+} + {\cal F}^{}_{2-}\right),
\label{PA2CavLT}\\
&&\langle \hat{P}^{}_A(t), \hat{P}^{}_B(t)\rangle = 2{\rm Re} \left( {\cal F}^{}_{2+} - {\cal F}^{}_{2-}\right), \label{PAPBCavLT}
\end{eqnarray}
and $\langle\hat{Q}_{\bf d}(t), \hat{P}_{\bf d'}(t)\rangle = 0$. Here
\begin{equation}
{\cal F}^{}_{c\pm} \equiv \frac{\hbar}{4\pi} \int_0^{\omega^{}_{M}} d\omega \, \omega^{c-1} \chi_\omega^\pm , \label{calFdef}
\end{equation}
with the UV cutoff $\omega^{}_M$ and the susceptibility functions $\chi_\omega^\pm$ defined in (\ref{chipm}).
The above late-time results are constant in $t$ and very similar to Eqs. (48)-(52) in Ref. \cite{LH09}, up to the oscillating term
($\propto \gamma e^{i\omega L}$ in the denominator of $\chi_\omega^\pm$) arising from the differences in the coupling and in the number of
spatial dimensions. Unlike its counterpart in \cite{LH09}, the oscillating term here keeps the denominator of the integrand of
${\cal F}^{}_{c\pm}$ regular as $L\to 0$ for every finite $\omega$.
For $L=0$, the integrals of ${\cal F}^{}_{c\pm}$ can be done analytically to get
\begin{eqnarray}
\left.{\cal F}^{}_{0\pm}\right|_{L=0} &=& \left.\frac{\hbar i}{4\pi\Gamma^{}_\pm}\tan^{-1}\frac{\omega+i\gamma^{}_\pm}{\Gamma^{}_\pm}
\right|^{\omega^{}_{M}}_{\omega=0} \nonumber\\ &&\stackrel{\omega^{}_M \gg\gamma_\pm,\Omega_0}{\longrightarrow}
\frac{\hbar i}{4\pi\Gamma^{}_\pm}\left( \frac{\pi}{2} -
\tan^{-1}\frac{i\gamma^{}_\pm}{\Gamma^{}_\pm} \right), \label{F0pmL0}\\
{\rm Re} \left.{\cal F}^{}_{2\pm}\right|_{L=0} &=& (\Omega_0^2 -2\gamma_\pm^2){\rm Re}\left.{\cal F}^{}_{0\pm}\right|_{L=0}
+\frac{\hbar\gamma_\pm}{8\pi} \ln \frac{(\omega_M^2 - \Omega_0^2)^2 + 4 \gamma_\pm^2 \omega_M^2}{\Omega_0^4} \nonumber\\
&& \stackrel{\omega^{}_M\gg\gamma_\pm,\Omega_0}{\longrightarrow}
(\Omega_0^2 -2\gamma_\pm^2){\rm Re}\left.{\cal F}^{}_{0\pm}\right|_{L=0} +
\frac{\hbar\gamma^{}_\pm}{2\pi} \Lambda_1, \label{F2pmL0}
\end{eqnarray}
with $\gamma^{}_\pm \equiv \tilde{\gamma} + \gamma (1\pm 1)$ and $\Gamma^{}_\pm \equiv \sqrt{\gamma_\pm^2 - \Omega_0^2}$, which can be
real (over-damping) or imaginary (under-damping).
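As a cross-check of (\ref{F0pmL0}) and (\ref{F2pmL0}), the following sketch integrates ${\cal F}^{}_{c\pm}$ numerically at $L=0$ (with $\hbar=1$), assuming the susceptibility reduces to $\chi_\omega^\pm = i\omega/(\omega^2-\Omega_0^2+2i\gamma^{}_\pm\omega)$ at $L=0$ — a reconstruction from the denominator structure quoted in the text, since (\ref{chipm}) is not reproduced here — and compares with the closed forms. The comparison is restricted to the under-damped case ($\gamma_\pm<\Omega_0$), where the principal-branch $\tan^{-1}$ stays off its branch cut; all parameter values are illustrative.

```python
import cmath
from scipy.integrate import quad

hbar = 1.0

def F_numeric(c, gamma, Omega0, wM):
    # F_{c,pm} = (hbar/4pi) int_0^wM w^{c-1} chi_w dw with the assumed L=0
    # susceptibility, so that w^{c-1} chi_w = i w^c / (w^2 - Omega0^2 + 2i*gamma*w)
    integrand = lambda w: 1j * w**c / (w**2 - Omega0**2 + 2j * gamma * w)
    re = quad(lambda w: integrand(w).real, 0, wM, limit=200)[0]
    im = quad(lambda w: integrand(w).imag, 0, wM, limit=200)[0]
    return hbar / (4 * cmath.pi) * (re + 1j * im)

def F0_closed(gamma, Omega0, wM):
    # Closed form (F0pmL0); Gamma_pm may come out real or imaginary
    G = cmath.sqrt(gamma**2 - Omega0**2)
    atan = lambda w: cmath.atan((w + 1j * gamma) / G)
    return hbar * 1j / (4 * cmath.pi * G) * (atan(wM) - atan(0.0))

def ReF2_closed(gamma, Omega0, wM):
    # First line of (F2pmL0), before the large-omega_M limit is taken
    log_term = (hbar * gamma / (8 * cmath.pi)) * cmath.log(
        ((wM**2 - Omega0**2)**2 + 4 * gamma**2 * wM**2) / Omega0**4)
    return ((Omega0**2 - 2 * gamma**2) * F0_closed(gamma, Omega0, wM).real
            + log_term.real)

Omega0, wM = 1.0, 50.0
for g in (0.05, 0.45):   # playing the roles of gamma_- and gamma_+, under-damped
    assert abs(F_numeric(0, g, Omega0, wM) - F0_closed(g, Omega0, wM)) < 1e-5
    assert abs(F_numeric(2, g, Omega0, wM).real - ReF2_closed(g, Omega0, wM)) < 1e-5
```

The agreement confirms that the assumed $L=0$ susceptibility is term-by-term consistent with both closed forms.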
Here we set $\omega^{}_M = \Omega_0 e^{\Lambda_1}$ to recover Eq. (A12) in Ref. \cite{LH07} after the $\Lambda_1$ there is redefined
as $\Lambda_1 = -\gamma_e - \ln \Omega_r|\tau-\tau'|$, as we discussed in Sec. \ref{OFEnt}
\footnote{Below Eq. (A9) in Appendix A of \cite{LCH16}, $\omega^{}_M$ is put as $2\pi\Omega e^{\Lambda_0+\gamma_e}$ or $2\pi\Omega
e^{\Lambda_1+\gamma_e}$. To exactly recover Eqs. (A9)-(A12) in \cite{LH07}, they should be corrected to $\omega^{}_M =\Omega e^{\Lambda_0}$ or
$\Omega e^{\Lambda_1}$.}.
While ${\rm Re}\,{\cal F}^{}_{2\pm}|_{L=0}$ is UV divergent as $\omega^{}_{M}\to\infty$, when the UV cutoff $\omega^{}_M$ and so $\Lambda_1$ are set to be finite and not too large, the internal HOs of the two UD$'$ detectors can be entangled.
For example, when $\Lambda_1=100$,
$\gamma = 10$, $\tilde{\gamma}=0.01$, $\Omega_0 = 0.1$, and $c=\hbar=1$, we find $c_-^2 -(\hbar^2/4)\approx -0.18$
with $c_-^2 \equiv 16{\rm Re} {\cal F}^{}_{0+} {\rm Re} {\cal F}^{}_{2-}$
\footnote{The condition $16{\rm Re}{\cal F}^{}_{0+}{\rm Re}{\cal F}^{}_{2-} -(\hbar^2/4) <0$ in the context above
Eq.(58) in Ref. \cite{LH09} obviously should be corrected to $16{\rm Re} {\cal F}^{}_{0-} {\rm Re}{\cal F}^{}_{2+} - (\hbar^2/4) < 0$.
Here in the UD$'$ detector theory with the derivative coupling in (1+1)D, however, we have $(c_-^2, c_+^2) = (16{\rm Re} {\cal F}^{}_{0+}
{\rm Re} {\cal F}^{}_{2-}, 16{\rm Re} {\cal F}^{}_{0-} {\rm Re} {\cal F}^{}_{2+})$, in contrast to those expressions in
\cite{LH09} with the minimal coupling in (3+1)D. So $16{\rm Re} {\cal F}^{}_{0+} {\rm Re}{\cal F}^{}_{2-} - (\hbar^2/4) < 0$ implies
entanglement here.}, and
the separability function $\Sigma\equiv\left(16{\rm Re} {\cal F}^{}_{0+} {\rm Re} {\cal F}^{}_{2-} - (\hbar^2/4)\right) \times
\left(16{\rm Re}{\cal F}^{}_{0-} {\rm Re} {\cal F}^{}_{2+} - (\hbar^2/4)\right) \approx -1076$
is negative \cite{LH09}, while the uncertainty function $\Upsilon \equiv \left(16{\rm Re} {\cal F}^{}_{0+} {\rm Re} {\cal F}^{}_{2+} -
(\hbar^2/4)\right) \left(16{\rm Re} {\cal F}^{}_{0-} {\rm Re} {\cal F}^{}_{2-} -(\hbar^2/4)\right) \approx 370$
is positive. This implies that the reduced state of the oscillator pair, which is a Gaussian state, is well behaved and the
oscillators are entangled (with the logarithmic negativity $E_{\cal N} = \max \{0, -\log_2 (2c_-/\hbar) \} \approx 0.94$)
\cite{Si00, DG00, VW02, Pl05, LH09}.
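The separability, uncertainty, and logarithmic-negativity criteria quoted above can be packaged into a small helper. The numerical correlator values below are hypothetical placeholders (not the ones computed in the text), chosen only to illustrate the entangled regime $c_-^2<\hbar^2/4<c_+^2$:

```python
import math

hbar = 1.0

def criteria(ReF0p, ReF0m, ReF2p, ReF2m):
    # Pairings for the derivative-coupling (1+1)D case described in the text
    cm2 = 16 * ReF0p * ReF2m                                 # c_-^2
    cp2 = 16 * ReF0m * ReF2p                                 # c_+^2
    Sigma = (cm2 - hbar**2 / 4) * (cp2 - hbar**2 / 4)        # separability
    Upsilon = ((16 * ReF0p * ReF2p - hbar**2 / 4)
               * (16 * ReF0m * ReF2m - hbar**2 / 4))         # uncertainty
    EN = max(0.0, -math.log2(2 * math.sqrt(cm2) / hbar))     # log-negativity
    return Sigma, Upsilon, EN

# Hypothetical Re F values giving a physical, entangled two-mode Gaussian state
Sigma, Upsilon, EN = criteria(ReF0p=0.02, ReF0m=0.3, ReF2p=30.0, ReF2m=0.084)
assert Sigma < 0 and Upsilon > 0 and EN > 0
```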
If we increase the value of $\Lambda_1$ while keeping all other parameters unchanged, the oscillators will be entangled until $\Lambda_1$
exceeds about $400$.
\begin{figure}
\includegraphics[width=7cm]{fig8a_UpSig_W0p1r10rtp01Lp0001.pdf}\hspace{.4cm}
\includegraphics[width=7cm]{fig8b_UpSig_W0p1_r5Piby2L_rtp01L1.pdf}
\caption{The uncertainty function $\Upsilon$ (black lines) and the separability function $\Sigma$ (red lines) with $L=0.0001$ (left) and
$L=1$ (right) against the UV cutoff $\omega^{}_M$ (scaled by $L/\pi$ in the plot) at late times.
The gray dashed and pink dashed curves are $\Upsilon$ and $\Sigma$, respectively, for $L=0$ with the same $\omega^{}_M$ (obtained from
Eqs. (\ref{F0pmL0}) and (\ref{F2pmL0})).}
\label{UpSig}
\end{figure}
\begin{figure}
\includegraphics[width=5.5cm]{fig9a_Ent_W0p1rtp01Lp0001_r10xwM10x.png}
\includegraphics[width=5.5cm]{fig9b_Ent_W0p1rtp01Lp0001_rS.png}
\includegraphics[width=5.5cm]{fig9c_Ent_W0p1rtp01L40_r10xwM10x.png}\\ \vspace{.5cm}
\includegraphics[width=5.5cm]{fig9d_Ent_W0p1L1rtp01n0p0PiL_r10xwM10x.png}
\includegraphics[width=5.5cm]{fig9e_Ent_W0p1L1rtp01n1p5PiL_r10xwM10x.png}
\includegraphics[width=5.5cm]{fig9f_Ent_W0p1L1rtp01n3p0PiL_r10xwM10x.png}
\caption{The HO pair with ($\omega^{}_M$, $\gamma$) in the dark regions is entangled at late times ($\Sigma<0$ and $\Upsilon\ge 0$),
while in the gray regions the uncertainty relation of the reduced state of the HOs is violated ($\Upsilon <0$) and so unphysical.
The upper-middle plot is an enlargement of the lower-left corner of the upper-left plot.
The result along the horizontal lines $\gamma L/\pi =0.001$ in the upper-left and upper-middle plots and the line $\gamma L/\pi =2.5$ in the
lower-left plot can be compared with Figure \ref{UpSig} (left) and (right), respectively.}
\label{UpSig2}
\end{figure}
For $L>0$, the integrals of ${\cal F}^{}_{c\pm}$ deviate significantly from those with $L=0$ for $\omega^{}_M>O(\pi/L)$ (Figure \ref{UpSig}).
When we fix $\tilde{\gamma}$, $\Omega_0$, and $L$, the unphysical negative-$\Upsilon$ region in which the uncertainty relation
$\Upsilon\ge 0$ is violated looks like a wedge in the $\omega^{}_M\gamma$-plane in our examples with either $\omega^{}_M$ or $\gamma$ not
too large (gray regions in Figure \ref{UpSig2}). The angle and the slopes of the two boundaries of the wedge decrease as $L$ increases
(compare the upper-left, upper-right, and lower-left plots in Figure \ref{UpSig2}). Around the boundary of the negative-$\Upsilon$ region
there are islands of parameter values in which one has $\Sigma<0$ while the uncertainty relation $\Upsilon\ge 0$ holds
(dark regions). The late-time quantum entanglement between the oscillators of the two mirrors only occurs when the point $(\omega^{}_M,
\gamma)$ with the fixed values of $\tilde{\gamma}$, $\Omega_0$, and $L$ is located in one of these islands in the parameter space.
The islands look disconnected in the $\omega^{}_M\gamma$-plane because $\Upsilon(\omega^{}_M)$ and $\Sigma(\omega^{}_M)$ alternate
when $\omega^{}_M \sim O(\gamma)$; namely, if $\Upsilon(\omega^{}_M)>\Sigma(\omega^{}_M)$ for some $\omega^{}_M = \Lambda$, then $\Upsilon(\omega^{}_M)<\Sigma(\omega^{}_M)$ for $\omega^{}_M \approx \Lambda\pm \pi/L$, as shown in Figure \ref{UpSig} (right). This is due to the
alternating nature of the $\gamma(1\pm e^{i\omega L})$ term in the denominators of $\chi^\pm_\omega$ in ${\cal F}_{c\pm}$.
As $\tilde{\gamma}$ increases, the projections of the islands on the $\omega^{}_M$-axis are roughly invariant, while the whole wedge of the
$\Upsilon<0$ region shifts along the $+\omega^{}_M$ direction (from left to right in the lower row of Figure \ref{UpSig2}).
The width of those islands in $\omega^{}_M$ is about $O(\pi/L)$; thus a larger $L$ gives islands on a smaller scale in the
$\omega^{}_M\gamma\tilde{\gamma}$-space.
For any UV cutoff $\omega^{}_M$, no matter how large it is, the above result suggests that one still has a chance to find an
OF coupling strength $\gamma \sim O(\omega^{}_M)$ while adjusting the UV cutoff around $\omega^{}_M \pm \pi/L$
(with $\tilde{\gamma}$, $\Omega_0$, and $L$ fixed) to make the two internal HOs entangled at late times.
However, this is extremely fine-tuned and the result cannot be trusted in this regime since the interaction energy could easily exceed the validity range of this model.
Moreover, when $\gamma$ and $\omega^{}_M$ have the same order of magnitude while $\tilde{\gamma}$, $\Omega_0$, and $1/L$ are relatively small, the denominator of the integrand in (\ref{E0inCav}) is approximately $\omega^4 +4i\gamma\omega^3 +4\gamma^2\omega^2(e^{2i\omega L}-1)$, whose three terms are roughly of the same order of magnitude, namely, $O(\gamma^4)$. As a result, the energy density of the field in the cavity in this parameter range oscillates strongly between positive and negative values as $\omega^{}_M$ increases.
Indeed, in Figure \ref{E0Lam} (left) one can see that the maximum amplitude of the oscillating value of the field energy density occurs
around $\omega^{}_M \approx 2\gamma$, and the oscillation will not be suppressed until $\omega^{}_M$ is much larger.
Such a large UV cutoff ($\omega^{}_M \gg O(\gamma)$) is also desirable to avoid the violation of the uncertainty relation,
since the small dark islands always neighbor the gray regions in Figure \ref{UpSig2}.
Thus the late-time entanglement between the HOs of the cavity mirrors is very unlikely to exist for physically reasonable values of the UV cutoff in our model.
\section{Summary}
\label{Summa}
We employed the derivative-coupling Unruh-DeWitt (UD$'$) HO detector theory in (1+1) dimensions to model the atom mirror interacting with a massless quantum field (OF coupling) and an environment of mechanical degrees of freedom (OE coupling). The reflectivity of our
atom or detector mirror is dynamically determined by the interplay of the detector's internal oscillator and the field. In the strong OF coupling regime, the effect of the mechanical environment is negligible and the detector acts like a perfect mirror at late times, when the energy density of the field outside the detector vanishes while the field spectrum is nontrivial. Compared with the field correlators in free space, in the presence of a detector mirror the late-time correlators are reduced for both the field amplitudes on the same side and those on two different sides of the mirror.
A pair of such UD$'$ detector mirrors can form a cavity. If both oscillators are decoupled from the environment, the system will not settle to a steady state at late times if the two internal HOs of the cavity mirrors are on resonance, namely, when the natural frequency of the oscillators is an integer multiple of the frequency associated with the massless scalar field in the cavity traveling from one detector mirror to the other.
If the OE coupling is nonvanishing, the field in this cavity will evolve into a steady, quasi-discrete spectrum at late times. Then there will be many cavity modes in the strong OF coupling, over-damping regime but only one or a few pairs of significant cavity modes in the weak OF coupling, under-damping regime. With the UV cutoff sufficiently large, the late-time renormalized field energy density in the cavity converges to a negative value for all positive OF coupling strengths. In the infinite OF coupling limit, the negative field energy density goes to the conventional result in the Casimir effect. In contrast to the conventional result with the perfect mirrors, however, the total energy density in our cavity does not diverge as the separation of the detector mirrors goes to zero. Outside the cavity the renormalized field energy density is again vanishing while the field spectrum is nontrivial.
Our result shows that the internal oscillators of the two mirrors of our cavity can have late-time entanglement when the OF coupling strength is roughly of the same order of the UV cutoff for the two identical HOs. In this regime, however, the model is nearly broken down, and the field energy density in the cavity does not converge but is very sensitive to the choice of the UV cutoff. When the UV cutoff is large enough to obtain a convergent value of the Casimir energy density and far from inconsistencies, the HOs in the parameter range of our results are always separable.
\begin{acknowledgments}
I thank Bei-Lok Hu, Larry Ford, and Jen-Tsung Hsiang for illuminating discussions.
This work is supported by the Ministry of Science and Technology of Taiwan under Grant No. MOST 106-2112-M-018-002-MY3 and in part by the
National Center for Theoretical Sciences, Taiwan.
\end{acknowledgments}
\section{Introduction} \label{sec:intro}
The unprecedented data traffic growth over the last decade has radically transformed the wireless ecosystem. Two major trends related to traffic consumption could be identified. First, the largest amount of data traffic over the network requires high bandwidth and contains rich multimedia services, including video/audio streaming, cell broadcasting, and mobile television. Second, the same digital content is often requested simultaneously by or is of interest to a group of users, e.g., broadcasting of sporting events, popular videos, live shows, headline news, satellite broadcast, etc.
Several standards, such as 3GPP eMBMS (evolved Multimedia Broadcast Multicast Services) \cite{eMBMS} and DVB-H (Digital Video Broadcasting - Handheld) \cite{DVB}, have been introduced as a means to support efficient massive content delivery and multicast applications. Among the various transmission techniques that serve those objectives, physical layer multicasting (PLM) stands as a key enabler. The simplest scenario of PLM consists of a transmitter conveying a common message to a group of receivers, while more complex scenarios involve simultaneous transmissions of distinct messages to multiple multicast groups.
Fifth generation (5G), the next generation mobile communication system, aims to support a broader spectrum of use cases than just mobile broadband. 5G envisions to provide wireless connectivity for massive machine-type communications (mMTC) and to support ultra-reliable, low latency communication (URLLC) for mission-critical services. Physical layer multicasting is envisaged to play a significant role in providing quality of service (QoS) in emerging 5G networks, especially with the anticipated integration of satellite communications in 5G terrestrial networks. Many mission-critical Internet of Things (IoT) applications and content-centric services can benefit from multicasting and its content diversity capabilities. PLM can also be used in edge caching, bringing content closer to the user in order to achieve the 5G low latency requirement. Prior work on PLM has mainly focused on its capacity limits \cite{Jindal,Park} and on beamforming techniques \cite{SidiropoulosPMC,KaripidisMC,CE_PLM}.
In this paper, we investigate the delay performance of physical layer multicasting in multiuser multiple-input single-output (MISO) downlink channels. We consider a low-complexity technique that does not require channel state information (CSI) at the transmitter and transmits using a spatially white covariance. We provide a statistical characterization of the service process in terms of its Mellin transform and derive bounds on the delay violation probability using tools from stochastic network calculus \cite{Chang00,Multihop13_Infocom,Fidler15_Guide}. Furthermore, using extreme value theory, we characterize the service process for increasing number of users and provide scaling laws as the number of antennas and/or users is taken to infinity. The analytical expressions based on the exact and the asymptotic distribution of the instantaneous channel gain quantify the effect of transmit power, number of transmit antennas and users on the delay distribution of physical layer multicasting.
\section{System Model} \label{sec:syst}
We consider multicast data transmissions, i.e., a point-to-multipoint communication channel where the base station (BS) broadcasts common messages to all active users. The BS is equipped with $M$ antennas and serves $K$ single-antenna users.
\subsection{Signal model}
We consider a flat-fading channel and assume that time is divided into equally sized time slots. The discrete-time complex baseband signal received by user $k$ at slot $i$ is given by
\begin{eqnarray}
y_{k,i} = \mathbf{h}_{k,i}\mathbf{x}_{i} + z_{k,i}, \ \ k=1, \ldots, K
\end{eqnarray}
where $\mathbf{h}_{k,i} \in \mathbb{C}^{1\times M}$ is the channel between the transmitter and $k$-th user at slot $i$, $\mathbf{x}_i \in \mathbb{C}^{M\times1}$ is the transmitted signal with $\mathbb{E}[\mathbf{x}^H\mathbf{x}] \leq 1$, and $z_{k,i}$ is zero-mean circularly symmetric complex Gaussian additive noise with variance of $1/P$.
We assume a Rayleigh block fading model, thus $\mathbf{h}_{k,i} \sim \mathcal{CN}(\mathbf{0},\mathbf{I}_M)$.
We focus on low-complexity transmission techniques with no CSI at the transmitter and perfect CSI at the receiver. For that, a spatially white transmit covariance $\mathbf{Q}_i \triangleq \mathbb{E}[\mathbf{x}_i\mathbf{x}_i^H] = \frac{1}{M}\mathbf{I}_M$ is employed, fixed over all channel realizations and slots. Therefore, the instantaneous signal-to-noise ratio (SNR) for user $k$ in the $i$-th slot is given by $\gamma_{k,i} = \rho\|\mathbf{h}_{k,i}\|^2$, where $\rho = P/M$.
\subsection{Traffic Model}
The analysis follows a system-theoretic stochastic network calculus approach as in \cite{Multihop13_Infocom}, which involves a queueing system with stochastic arrival and departure processes described by bivariate stochastic processes $A(\tau, t)$ and $D(\tau, t)$, respectively.
We consider a fluid-flow traffic model and the system starts with empty queues at $t=0$.
The cumulative arrival and departure processes for any $0 \leq \tau \leq t$ during time interval $[\tau,t)$ are defined respectively as
\begin{eqnarray}
A(\tau, t) = \displaystyle \sum_{i = \tau}^{t-1}a_i, \ \ D(\tau, t) = \displaystyle \sum_{i = \tau}^{t-1}d_i
\end{eqnarray}
where $a_i$ models the number of bits that arrive at the queue at time instant $i$ and $d_i$ the number of bits that arrive successfully at the destination. For a successful transmission, the service process $C_i$ should be less than or equal to the instantaneous achievable rate. In case of transmission errors, the service is considered to be zero, as no data is removed from the queue.
For lossless first-in first-out (FIFO) queueing systems, the delay $\Delta(t)$ at time $t$, i.e., the number of slots it takes for an information bit arriving at time $t$ to be received at the destination, is defined as
\begin{eqnarray}
\Delta(t) = \inf\{u \geq 0 : A(0,t)/D(0,t+u) \leq 1 \}.
\end{eqnarray}
The delay violation probability is given by
\begin{eqnarray}
\Lambda(w) = \displaystyle \sup_{t\geq 0}\mathbb{P}\left[\Delta(t) > w \right].
\end{eqnarray}
\subsection{Service Process}
Assuming Gaussian codebooks and ideal link adaptation, the instantaneous transmission rate $C_i$ at time instant $i$ is equal to $C_i = N\log(1+\gamma_i)$ nats per slot, where $N$ is the number of transmitted symbols per time slot (bandwidth) and $\gamma_i$ is the instantaneous SNR using multicasting. For exposition convenience, the rate is expressed with the natural logarithm.
The service process (or cumulative capacity) through period $[\tau,t)$ is defined as
\begin{eqnarray}
S(\tau, t) \triangleq \displaystyle \sum_{i = \tau}^{t-1}C_i = \sum_{i = \tau}^{t-1}N\log(1+\gamma_i).
\end{eqnarray}
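A minimal Monte Carlo sketch of this cumulative service process, anticipating the multicast SNR $\gamma_i = \rho\min_k\|\mathbf{h}_k\|^2$ used later in the paper (all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, P, N, T = 4, 8, 10.0, 1, 1000
rho = P / M

# ||h_k||^2 for h_k ~ CN(0, I_M) is a sum of M unit-mean exponentials, i.e. Gamma(M, 1)
H2 = rng.gamma(shape=M, scale=1.0, size=(T, K))   # shape (slots, users)
gamma = rho * H2.min(axis=1)                      # multicast SNR: weakest user per slot
C = N * np.log1p(gamma)                           # per-slot rate C_i in nats
S = np.cumsum(C)                                  # cumulative service S(0, t)

assert np.all(C > 0) and np.isclose(S[-1], C.sum())
```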
\section{Delay Performance: Exact Analysis} \label{sec:perf_exact}
In this section, we provide a statistical characterization of the arrival and service processes in terms of their Mellin transforms as a means to obtain bounds on the delay violation probability. For fading channels, it is more convenient to map and analyze these processes into a transfer domain, referred to as exponential or SNR domain \cite{Multihop13_Infocom}.
First, we convert the cumulative processes from the bit domain to the SNR domain through the exponential function. The corresponding processes, denoted by calligraphic letters, are
\begin{equation*}
\mathcal{A}(\tau, t) = e^{A(\tau, t)}, \quad \mathcal{D}(\tau, t) = e^{D(\tau, t)}, \quad \mathcal{S}(\tau, t) = e^{S(\tau, t)}.
\end{equation*}
An upper bound on the delay violation probability can be computed as \cite{Multihop13_Infocom}
\begin{equation}\label{eq:delay_bound_1}
p_\mathrm{v}(w) = \inf_{s>0}\left\lbrace \mathcal{K}(s,-w)\right\rbrace \geq \Lambda(w)
\end{equation}
where $\mathcal{K}(s,-w)$ is the so-called steady-state kernel, defined as
\begin{equation}\label{eq:ker_limits}
\mathcal{K}(s,-w) = \lim_{t\to\infty} \sum_{u=0}^{t}\mathcal{M}_{\mathcal{A}}(1+s,u,t)\mathcal{M}_{\mathcal{S}}(1-s,u,t+w)
\end{equation}
where $\mathcal{M}_{\mathcal{X}}(s,\tau,t) = \mathcal{M}_{\mathcal{X}_{(\tau,t)}}(s) = \mathbb{E}\left[\mathcal{X}^{s-1}(\tau,t)\right]$ denotes the Mellin transform of a nonnegative random variable for any $s \in \mathbb{C}$ for which the expectation exists.
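A quick numerical sanity check of this definition, using the textbook case $X\sim\mathrm{Exp}(1)$, for which $\mathbb{E}[X^{s-1}]=\Gamma(s)$:

```python
import math
from scipy.integrate import quad

def mellin(s, pdf):
    # M_X(s) = E[X^{s-1}] for a nonnegative random variable with density pdf
    return quad(lambda x: x**(s - 1) * pdf(x), 0, math.inf)[0]

# For X ~ Exp(1) the Mellin transform is the gamma function
for s in (1.5, 2.0, 3.0):
    assert abs(mellin(s, lambda x: math.exp(-x)) - math.gamma(s)) < 1e-7
```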
\subsection{Mellin transform of arrival and service processes}
Assuming that $\mathcal{A}(\tau, t)$ has stationary and independent increments, the Mellin transform becomes independent of the time instance, as follows
\begin{eqnarray}
\mathcal{M}_\mathcal{A} (s,\tau,t) & = & \mathbb{E}\left[\left(\prod_{i=\tau}^{t-1} e^{a_i}\right)^{s-1}\right] \nonumber \\ & = & \mathbb{E}\left[e^{a(s-1)}\right]^{t-\tau} = \mathcal{M}_\alpha(s)^{t-\tau}
\end{eqnarray}
where we have defined $\alpha = e^a$.
We consider the traffic class of $(z(s), \lambda(s))$-bounded arrivals, whose moment generating function in the bit domain is bounded by \cite{Chang00}
\begin{eqnarray}
\frac{1}{s}\log\mathbb{E}[e^{sA(\tau,t)}] \leq \lambda(s)\cdot(t-\tau) + z(s)
\end{eqnarray}
for some $s>0$. Here we consider the case where $\lambda$ is independent of $s$ and $z(s) = 0$, thus we have
\begin{equation}\label{eq:defmellin_alpha}
\mathcal{M}_\alpha(s) = e^{\lambda(s-1)}.
\end{equation}
Since $C_i$ is i.i.d., the Mellin transform of the cumulative service process with $g(\gamma_i) = 1+\gamma_i$ is
\begin{eqnarray}\label{eq:defmellin_s}
\mathcal{M}_\mathcal{S}(s,\tau,t) &=& \mathbb{E}\left[\left(\prod_{i=\tau}^{t-1}g(\gamma_i)^N\right)^{s-1}\right] \nonumber \\ &=& \mathbb{E}\left[g(\gamma)^{N(s-1)}\right]^{t-\tau} \nonumber \\
&=& \mathcal{M}_{g(\gamma)}\left(1+N(s-1)\right)^{t-\tau}.
\end{eqnarray}
\subsection{Delay Bound}\label{sec:delay_viol}
Plugging (\ref{eq:defmellin_alpha}) and (\ref{eq:defmellin_s}) into (\ref{eq:ker_limits}) and following \cite{Multihop13_Infocom}, the steady-state kernel can be finally rewritten as
\begin{eqnarray}
\mathcal{K}(s,-w) = \frac{\left(\mathcal{M}_{g(\gamma)}(1-Ns)\right)^{w}}{1 - \mathcal{M}_{\alpha}(1+s)\mathcal{M}_{g(\gamma)}(1-Ns)},
\end{eqnarray}
for any $s > 0$ under the stability condition $\mathcal{M}_{\alpha}(1+s)\mathcal{M}_{g(\gamma)}(1-Ns) < 1$. The delay bound (\ref{eq:delay_bound_1}) thus reduces to
\begin{equation}
p_\mathrm{v}(w) = \inf_{s>0}\left\lbrace\frac{\left(\mathcal{M}_{g(\gamma)}(1-Ns)\right)^{w}}{1 - \mathcal{M}_{\alpha}(1+s)\mathcal{M}_{g(\gamma)}(1-Ns)}\right\rbrace.
\end{equation}
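The bound above can be evaluated numerically. The sketch below does so for a hypothetical single-antenna Rayleigh example ($\gamma\sim\mathrm{Exp}(1)$, not the multicast setting analyzed next), which keeps the Mellin transform of $g(\gamma)=1+\rho\gamma$ easy to compute by quadrature; the arrival model is the $(0,\lambda)$-bounded class of (\ref{eq:defmellin_alpha}).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

rho, lam, N = 10.0, 0.5, 1.0   # SNR, arrival rate (nats/slot), symbols/slot

def mellin_g(s):
    # M_{g(gamma)}(s) = E[(1 + rho*gamma)^{s-1}] for gamma ~ Exp(1)
    return quad(lambda x: (1 + rho * x)**(s - 1) * np.exp(-x), 0, np.inf)[0]

def kernel(s, w):
    Mg = mellin_g(1 - N * s)
    Ma = np.exp(lam * s)            # (0, lambda)-bounded arrivals
    if Ma * Mg >= 1:                # stability condition violated
        return 1e12                 # large penalty instead of infinity
    return Mg**w / (1 - Ma * Mg)

def p_v(w):
    res = minimize_scalar(lambda s: kernel(s, w), bounds=(1e-3, 20.0),
                          method='bounded')
    return min(res.fun, 1.0)        # a probability bound is capped at one

# The bound decays with the delay target w
assert p_v(10) < p_v(5) < p_v(2) <= 1.0
```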
\subsection{Service for Physical Layer Multicasting}
In this section, we derive exact closed-form expressions for the steady-state kernel $\mathcal{K}(s,-w)$ of multi-antenna multicasting. For that, we need to derive the Mellin transform of $g(\gamma)$, which is a function of the instantaneous SNR. For exposition convenience, we set $N=1$ and we drop the time subindex since SNRs are independent and ergodic.
Since the common message should be decoded by all $K$ users, the instantaneous rate should not exceed the rate achievable by the weakest user. Therefore, the instantaneous SNR of the system is given by $\gamma_i = \rho\displaystyle \min_{1\leq k \leq K}\|\mathbf{h}_k\|^2$, where $X_k \triangleq \|\mathbf{h}_k\|^2 \sim \mathrm{Gamma}(M,1)$, i.e., $2X_k$ follows a chi-squared distribution with $2M$ degrees of freedom.
The CDF of $X_{(1)} \triangleq \displaystyle \min_{1\leq k \leq K}X_k$ is
\begin{eqnarray}
\label{CDFmin}
F_{X_{(1)}}(x) = 1 - (1-F_{X}(x))^K = 1 - \left(\frac{\Gamma(M,x)}{\Gamma(M)}\right)^K
\end{eqnarray}
where $\Gamma(a,x) = \int_{x}^{\infty}t^{a-1}e^{-t}\,\mathrm{d}t$ and $\Gamma(a) = \Gamma(a,0)$ are the upper incomplete and the complete gamma function, respectively.
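A Monte Carlo check of (\ref{CDFmin}), using the fact that $\Gamma(M,x)/\Gamma(M)$ is the regularized upper incomplete gamma function (parameter values are illustrative):

```python
import numpy as np
from scipy.special import gammaincc   # regularized upper incomplete gamma

rng = np.random.default_rng(1)
M, K, n = 3, 5, 200_000

X = rng.gamma(shape=M, scale=1.0, size=(n, K))   # ||h_k||^2 ~ Gamma(M, 1)
X1 = X.min(axis=1)                               # weakest-user channel gain

for x in (1.0, 2.0, 3.0):
    analytic = 1 - gammaincc(M, x)**K            # CDF from (CDFmin)
    empirical = np.mean(X1 <= x)
    assert abs(analytic - empirical) < 0.01
```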
\smallskip
\noindent The Mellin transform of $g(\gamma)$ is given by
\begin{eqnarray}
\mathcal{M}_{g(\gamma)}(s) = \mathbb{E}\left[g(\gamma)^{s-1}\right] = \displaystyle \int_{0}^{\infty}(1+\rho x)^{s-1}\d F_{X_{(1)}}(x).
\end{eqnarray}
Using (\ref{CDFmin}) and the multinomial theorem, and after some algebraic manipulations, we obtain the following result.
\begin{theorem}\label{Th1:main}
For physical layer MISO multicasting, we have
\begin{eqnarray}\label{eq:mellin}
\mathcal{M}_{g(\gamma)}(s) &=& 1 + (s-1)\displaystyle\sum_{k_1+\ldots+k_M = K}\frac{\varphi\Gamma(1+\vartheta)}{\rho^{\vartheta}} \nonumber \\
&& \hspace{-7mm} \times \ \Psi(\vartheta+1;\vartheta+s;K/\rho), \ \ \textrm{for} \ s < 1
\end{eqnarray}
where $\Psi(a;b;z)$ is the confluent hypergeometric function of the second kind (also called Tricomi's confluent hypergeometric function \cite[Eq. 13.2.5]{Abramowitz1964} and denoted by $U(a,b,z)$),
\begin{equation*}
\varphi = \frac{\binom {K} {k_1,k_2,\ldots,k_M}}{\prod_{n=0}^{M-1}(n!)^{k_{n+1}}} \ \ \textrm{and} \ \ \vartheta = \textstyle \sum_{\ell = 0}^{M-1}\ell \cdot k_{\ell+1}.
\end{equation*}
\end{theorem}
The above expression is quite complex and cumbersome to evaluate. The following lemma provides easily computable bounds using Alzer's inequality \cite{Alz97}.
\begin{lemma}\label{lem1}
The Mellin transform of $g(\gamma)$ can be bounded as
\begin{eqnarray}\label{eq:mellin_bound}
1+(s-1)\mathcal{B}(s,b) \leq \mathcal{M}_{g(\gamma)}(s) \leq 1+(s-1)\mathcal{B}(s,1)
\end{eqnarray}
where $b = [\Gamma(1+M)]^{-1/M}$ and
\begin{eqnarray*}
\mathcal{B}(s,\beta) & = & \textstyle \sum_{k=0}^{K}\sum_{j=0}^{kM}\binom{K}{k}\binom{kM}{j}(-1)^{k+j}e^{\frac{j\beta}{\rho}}\left(\frac{j\beta}{\rho}\right)^{1-s} \nonumber \\
&&\times \ \Gamma\left(s-1,\frac{j\beta}{\rho}\right).
\end{eqnarray*}
\end{lemma}
The above expressions and bounds provide a relatively accurate characterization of the service process and can easily be evaluated numerically. However, the quasi closed-form expressions are rather involved; they do not provide any insight on how the number of antennas and users affects the delay violation probability and its scaling. For that, we take a different approach and investigate the asymptotic behavior of the service process (and of its Mellin transform).
\begin{remark}
The above analysis allows us to directly obtain the effective capacity $\mathcal{R}(\theta)$ \cite{WuNegi03_EC}, i.e., the maximum constant arrival rate supported by the service process while satisfying statistical QoS requirements specified by the QoS exponent $\theta$, by noticing that
\begin{equation}
\mathcal{R}(\theta) = -\frac{1}{\theta}\log\mathcal{M}_{g(\gamma)}(1-\theta), \quad \theta>0.
\end{equation}
\end{remark}
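A numerical sketch of the effective capacity for the multicast SNR (assumed parameter values, $N=1$), checking that $\mathcal{R}(\theta)$ decreases in $\theta$ and approaches the ergodic rate $\mathbb{E}[\log(1+\gamma)]$ as $\theta\to 0$, as Jensen's inequality requires:

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaincc

M, K, rho = 2, 4, 5.0

def pdf_min(x):
    # density of X_(1) = min_k ||h_k||^2: K (Gamma(M,x)/Gamma(M))^{K-1} f_X(x)
    fX = x**(M - 1) * np.exp(-x) / math.gamma(M)
    return K * gammaincc(M, x)**(K - 1) * fX

def mellin_g(s):
    return quad(lambda x: (1 + rho * x)**(s - 1) * pdf_min(x), 0, np.inf)[0]

def R(theta):
    # effective capacity in nats per slot (N = 1)
    return -math.log(mellin_g(1 - theta)) / theta

ergodic = quad(lambda x: np.log1p(rho * x) * pdf_min(x), 0, np.inf)[0]
assert R(1.0) < R(0.1) < R(0.001) <= ergodic + 1e-6   # decreasing, below ergodic rate
assert abs(R(0.001) - ergodic) < 0.01                 # theta -> 0 limit
```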
\section{Delay Performance: Asymptotic Analysis} \label{sec:perf_asympt}
In this section, we characterize the service process when $M$ is fixed, and $K$ is going to infinity. The first step is to find the asymptotic (limiting) distribution of the minimum SNR, which can be used to approximate its exact distribution.
\vspace{-0.5mm}
\subsection{Asymptotic Distribution}
We recall that the CDF of $X_{(1)}$ is
\begin{eqnarray}
L_K(x) = \mathbb{P}[X_{(1)} \leq x] = 1 - (1-F_{X}(x))^K.
\end{eqnarray}
From the Fisher-Tippett-Gnedenko theorem \cite{David_EVT}, $F_{X}(x)$ belongs to the minimal domain of attraction of $L(x)$ if for at least one pair of sequences of real numbers $\{c_K\}$ and $\{d_K > 0\}$ it holds that
\begin{eqnarray}
\lim_{K\to\infty}L_K(c_K+d_Kx) & = & \lim_{K\to\infty} 1 - (1-F_{X}(c_K+d_Kx))^K \nonumber \\
& = & L(x), \ \forall x.
\end{eqnarray}
Evaluating the necessary and sufficient condition
\begin{eqnarray}
\lim_{\epsilon \to 0}\frac{F_X^{-1}(\epsilon) - F_X^{-1}(2\epsilon)}{F_X^{-1}(2\epsilon) - F_X^{-1}(4\epsilon)} = 2^{-\kappa},
\end{eqnarray}
we find that the shape parameter of the associated limiting distribution satisfies $\kappa > 0$, which implies that $F_X(x)$ belongs to the Weibull minimal domain of attraction.
In other words, the CDF of $X_{(1)}$ converges to the scaled and translated Weibull CDF, denoted by $W(x)$, for sequences $\{c_K\}$ and $\{d_K > 0\}$, i.e.,
\begin{eqnarray*}
F_{X_{(1)}}(u) & = & W\left(\frac{u-c_K}{d_K}\right) \nonumber \\
& \to & 1 - \exp\left(-\left(\frac{u-c_K}{d_K}\right)^{\kappa}\right), \ u \geq c_K.
\end{eqnarray*}
The location constant $c_K$ is related to the lower end of the CDF $F_X(x)$ and is given as $c_K = v(F) = \inf\{x|F_X(x)>0\} = 0$, $\forall K$ since the chi-squared distribution is supported on $[0,\infty)$.
\smallskip
The shape parameter $\kappa$ can be alternatively calculated as \cite{Leadbetter}
\begin{eqnarray}
\displaystyle \lim_{t\to-\infty} \frac{F_X(v(F)-1/tx)}{F_X(v(F)-1/t)}=x^{-\kappa}
\end{eqnarray}
where evaluating the limit with $v(F) = 0$ gives $\kappa = M$.
\smallskip
The scale parameter is given by $d_K = F_X^{-1}(1/K) - v(F) = F_X^{-1}(1/K)$. Otherwise stated, we need to find $z$ such that $F_X(z) = 1/K$. Since $1/K$ approaches a very small value as $K \to \infty$, $z$ should be very small as well. So, approximating $F_X(x)$ with its Taylor expansion and keeping only the first term of the series, we have
\begin{eqnarray}
d_K = \left[\frac{M!}{K}\right]^{1/M}.
\end{eqnarray}
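As a numerical sanity check on this scale parameter, the exact quantile $F_X^{-1}(1/K)$ of the $\mathrm{Gamma}(M,1)$ distribution (via SciPy's inverse regularized lower incomplete gamma function) can be compared with the leading-order Taylor inversion $F_X(x)\approx x^M/M!$, which gives $(M!/K)^{1/M}$:

```python
import math
from scipy.special import gammaincinv   # inverse regularized lower gamma

M = 3
for K in (10**3, 10**5, 10**7):
    exact = gammaincinv(M, 1.0 / K)               # F_X^{-1}(1/K) for Gamma(M, 1)
    approx = (math.factorial(M) / K)**(1.0 / M)   # leading-order inversion
    assert abs(exact / approx - 1) < approx       # relative error shrinks with d_K
```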
Therefore, the limiting distribution of $X_{(1)}$ is
\begin{eqnarray}
F_{X_{(1)}}(c_K + d_K x) = \mathbb{P}(X_{(1)} < d_K x) \stackrel{K \to \infty}{\longrightarrow} 1 - e^{-x^M}.
\end{eqnarray}
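The convergence to the Weibull limit can be verified directly from the exact finite-$K$ CDF, without sampling, taking $c_K=0$ and the scale $d_K=(M!/K)^{1/M}$ from the leading-order quantile inversion:

```python
import math
from scipy.special import gammainc   # regularized lower incomplete gamma

def cdf_min_scaled(x, M, K):
    # Exact CDF of X_(1)/d_K: 1 - (1 - F_X(d_K x))^K with F_X = gammainc(M, .)
    d_K = (math.factorial(M) / K)**(1.0 / M)
    return 1 - (1 - gammainc(M, d_K * x))**K

M = 2
for x in (0.5, 1.0, 2.0):
    weibull = 1 - math.exp(-x**M)
    err_small = abs(cdf_min_scaled(x, M, 10**3) - weibull)
    err_large = abs(cdf_min_scaled(x, M, 10**6) - weibull)
    assert err_large < err_small < 0.05   # approaches the Weibull limit as K grows
```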
The support of the asymptotic distribution is
\begin{eqnarray*}
S_L = \Big\{u\in[0,1]: \left(1-d_K(\log\frac{1}{\epsilon})^{\frac{1}{M}}\right) \leq u \nonumber \\
\leq \left(1-d_K(\log\frac{1}{1-\epsilon})^{\frac{1}{M}}\right)\Big\}
\end{eqnarray*}
where $\epsilon > 0$ is a very small number.
To quantify the accuracy of using the limit distribution for moderate number of users, we need to find a bound on the approximation/replacement error. We can show that $\mathbb{P}(X_{(1)} < d_K x) < 1 - e^{-x^M}$ and that the speed of convergence is faster than $\Theta(K^{-1/M})$. Using elementary results from \cite{Galambos} and after some algebraic manipulations, we have that
\begin{eqnarray*}
\left|\mathbb{P}(X_{(1)} < d_K x) - (1 - e^{-x^M}) \right| < e^{-x^M}x^{M+1}\left(\frac{M!}{K}\right)^{\frac{1}{M}}.
\end{eqnarray*}
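Both the one-sidedness of the approximation and the stated error bound can be probed by Monte Carlo. The sketch below again models each per-user SNR as $\mathrm{Gamma}(M,1)/M$ and uses the illustrative values $M=2$, $K=50$, $x=1$:

```python
import math, random

random.seed(7)

M, K, x = 2, 50, 1.0                      # illustrative parameters
d = (1.0 / M) * (math.factorial(M) / K) ** (1.0 / M)   # scale d_K

trials = 20000
hits = 0
for _ in range(trials):
    # minimum over K users, each SNR modeled as Gamma(M, 1)/M (mean 1)
    xmin = min(random.gammavariate(M, 1.0) / M for _ in range(K))
    hits += xmin < d * x
emp = hits / trials

limit = 1.0 - math.exp(-x ** M)           # Weibull limit
bound = math.exp(-x ** M) * x ** (M + 1) * (math.factorial(M) / K) ** (1.0 / M)
print(emp, limit, bound)   # emp sits below the limit, within the bound
```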
Replacing the exact distribution by its asymptotic distribution, the Mellin transform of the service process for increasing $K$ is given by
\begin{eqnarray}\label{eq:mellin_as}
\mathcal{M}_{g(\gamma)}^{\rm as}(s) = 1+(s-1)\int_{0}^{\infty}(1+\rho x)^{s-2}e^{-(x/d_K)^M} \d x.
\end{eqnarray}
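Numerically, the asymptotic Mellin transform is the expectation $\mathbb{E}[(1+\rho X)^{s-1}]$, where $X$ is Weibull with shape $M$ and scale $d_K$. The sketch below (all parameter values are illustrative assumptions) evaluates it both by quadrature over the Weibull density and by Monte Carlo via the inverse transform $X = d_K(-\ln U)^{1/M}$:

```python
import math, random

random.seed(1)

M, K, rho, s = 2, 100, 10.0, 1.5                       # illustrative parameters
d = (1.0 / M) * (math.factorial(M) / K) ** (1.0 / M)   # scale d_K

def weibull_pdf(x):
    # Weibull density with shape M and scale d
    return (M / d) * (x / d) ** (M - 1) * math.exp(-((x / d) ** M))

def integrand(x):
    return (1 + rho * x) ** (s - 1) * weibull_pdf(x)

# trapezoidal quadrature on [0, 10*d]; the tail beyond is ~exp(-10^M)
n = 20000
h = 10 * d / n
quad = h * (sum(integrand(i * h) for i in range(1, n))
            + 0.5 * (integrand(0.0) + integrand(10 * d)))

# Monte Carlo with inverse-transform Weibull samples, U in (0, 1]
trials = 100000
mc = sum((1 + rho * d * (-math.log(1.0 - random.random())) ** (1.0 / M)) ** (s - 1)
         for _ in range(trials)) / trials
print(quad, mc)   # the two estimates agree
```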
\subsection{Scaling results}
We present here results on the order growth of the service process when $M$ and/or $K$ is taken to infinity.
The easiest way is to derive an upper bound on the Mellin transform (using Jensen's inequality) and show that it is asymptotically tight.
\begin{theorem}
Let $\{X_k\}$ be positive random variables with finite mean and variance, and $\frac{|c_K|}{d_K} \to \infty$, then as $K\to\infty$
\begin{eqnarray}
f(\mathbb{E}[X_{(1)}]) - \mathbb{E}[f(X_{(1)})] \to 0
\end{eqnarray}
where $f(x) = g(x)^{s-1}$.
\end{theorem}
The above convergence result implies that, in order to calculate the Mellin transform of the service process, it is sufficient to evaluate the asymptotic mean of the minimum SNR. Note that the mean and variance of the Weibull distribution are given by $\mathbb{E}[W] = d_K\Gamma(1+1/M)$ and $\textrm{Var}[W] = d_K^2(\Gamma(1+2/M) - \Gamma^2(1+1/M))$, respectively.
\smallskip
\subsubsection{Finite $M$, Increasing $K$}
For MISO multicasting, as the number of users grows large, we have
\begin{eqnarray}
\mathcal{M}_{g(\gamma)}^{\textnormal{\tiny{as}}}(s) \to \left(1+\rho\left(\frac{M!}{K}\right)^{\frac{1}{M}}\right)^{s-1} = 1 + O\left(K^{-\frac{1}{M}}\right).
\end{eqnarray}
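The $O(K^{-1/M})$ rate can be checked numerically: multiplying $K$ by $2^M$ halves $(M!/K)^{1/M}$ and should therefore roughly halve the gap of the limiting Mellin transform to 1 (illustrative parameter values assumed):

```python
import math

M, rho, s = 4, 10.0, 1.5          # illustrative parameters

def gap(K):
    # distance of the limiting Mellin transform from 1
    return (1 + rho * (math.factorial(M) / K) ** (1.0 / M)) ** (s - 1) - 1

# multiplying K by 2^M halves (M!/K)^(1/M), so the gap should roughly halve
r = gap(10 ** 10) / gap(2 ** M * 10 ** 10)
print(r)   # close to 2
```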
\subsubsection{Finite $K$, Increasing $M$}
For MISO multicasting, as the number of antennas grows large, we have
\begin{eqnarray}
\mathcal{M}_{g(\gamma)}^{\textnormal{\tiny{as}}}(s) \to (1+P)^{s-1} = O(1).
\end{eqnarray}
\subsubsection{Increasing $M$ and $K$}
We now consider the case where both the number of users and the number of transmit antennas increase while maintaining a constant ratio $\delta = K/M > 0$. For $\ell \in (0,1)$, using Chebyshev's inequality and the fact that $\|\mathbf{h}_{k,i}\|^2/M$ has mean 1 and variance $1/(2M)$, we have
\begin{eqnarray}
\mathbb{P}(X_{(1)} \geq M\ell) \geq \left(1-\frac{1}{2M(1-\ell)^2}\right)^K \to e^{-\frac{\delta}{2(1-\ell)^2}}.
\end{eqnarray}
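The convergence of this Chebyshev-based expression to its exponential limit is quick to verify numerically (illustrative values of $\delta$ and $\ell$ assumed):

```python
import math

delta, ell = 2.0, 0.5             # illustrative K/M ratio and threshold

def lower_bound(M):
    # (1 - 1/(2M(1-ell)^2))^K with K = delta*M
    K = int(delta * M)
    return (1 - 1 / (2 * M * (1 - ell) ** 2)) ** K

limit = math.exp(-delta / (2 * (1 - ell) ** 2))
print(lower_bound(100), lower_bound(10000), limit)
```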
The Mellin transform can be lower bounded as follows
\begin{eqnarray}
\mathcal{M}_{g(\gamma)}(s) & \geq & \mathbb{P}(X_{(1)} \geq M\ell) (1+P\ell)^{s-1} \\
& \rightarrow & e^{-\frac{\delta}{2(1-\ell)^2}}(1+P\ell)^{s-1} > 0.
\end{eqnarray}
Note that the service process is upper bounded by the multicast capacity (with perfect CSI), in which case the following upper bound on the Mellin transform can be found
\begin{eqnarray}
\mathcal{M}_{g(\gamma)}(s) \leq (1+P(1+\sqrt{\delta})^2)^{s-1} \approx O(1).
\end{eqnarray}
\section{Simulation Results} \label{sec:num}
In this section, we validate our delay performance analysis using simulations. The duration of a slot is set to 2 ms and the blocklength is $N=100$ symbols per slot.
In Figure~\ref{fig1}, we compare the analytical expression for the delay violation probability and its lower bound with the simulated delay performance. We observe that the analytical curve follows the trend of the simulated one quite well, with a difference of about two slots. Furthermore, the proposed bound on $\mathcal{M}_{g(\gamma)}$ given in (\ref{eq:mellin_bound}) has a smaller gap to the simulated performance.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\columnwidth]{./img/fig1}%
\caption{Delay violation probability and associated bounds as a function of the target delay for different arrival rates, $M = 5$, $K=10$, and $P = 10$ dB.}
\label{fig1}
\end{figure}
In Figure~\ref{fig2}, we study the effect of the number of transmit antennas on the delay performance. Interestingly, for the scenario considered here, adding one antenna leads to a drastic drop of the delay violation probability.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\columnwidth]{./img/fig2}%
\caption{Delay violation probability vs. number of antennas for $P = 10$ dB, arrival rate $\lambda = 100$ kbps, and $\omega = 3$ slots.}
\label{fig2}
\end{figure}
Finally, in Figure~\ref{fig3}, we assess the effectiveness of our asymptotic analysis for characterizing the delay violation probability. It can be seen that the asymptotic expression for $\mathcal{M}_{g(\gamma)}$ provides satisfactory results even for a moderate number of users. Moreover, the horizontal offset between the curve corresponding to the asymptotic delay violation probability and that of the non-asymptotic expression is of the order of one slot.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\columnwidth]{./img/fig3}%
\caption{Delay violation probability as a function of the target delay using the asymptotic distribution for $M = 10$, $P = 1$ dB, and $\lambda = 7.2$ kbps.}
\label{fig3}
\end{figure}
\section{Conclusions} \label{sec:conc}
In this work, we investigated the queueing performance of physical layer MISO multicasting under statistical delay constraints. Using stochastic networks calculus, we derived a statistical characterization of the multicast service process and provided tight bounds on the delay violation probability. Furthermore, using extreme value theory, we characterized the service process for increasing number of users and provided scaling laws as the number of antennas and/or users is taken to infinity. Our analytical results indicate how the number of antennas, the number of users, and the transmit SNR may affect the delay violation probability in different system operating regimes.
\addcontentsline{toc}{chapter}{References}
\bibliographystyle{IEEEtran}
{\em Introduction.} It has long been recognised that collective spin dynamics of quantum mechanical origin can exist in a dilute gas at temperature $T\gtrsim T_d$, where $T_d$ is the degeneracy temperature. It arises due to indistinguishability of identical atoms in binary scattering and is known as the identical spin rotation effect (ISRE)~\cite{jphys.43.197, PhysRevLett.52.1508, PhysRevLett.52.1512}. This effect is operative for both bosons \cite{PhysRevLett.88.230403, PhysRevLett.88.230404, PhysRevLett.88.230405} and fermions~\cite{PhysRevA.79.051601, PhysRevLett.102.215301} and has led to the observations of spin waves and anomalous spin segregation for weakly interacting bosons~\cite{PhysRevLett.88.070403} and fermions~\cite{PhysRevLett.101.150401}. A similar effect also occurs in a degenerate Fermi liquid such as $^3$He, where it leads to anomalous spin diffusion known as the Leggett-Rice effect~\cite{PhysRevLett.20.586, 0022-3719-3-2-027}. Recently, the Leggett-Rice effect has also been observed in unitary Fermi gases in both two~\cite{nphys2637} and three dimensions~\cite{Bardon722, PhysRevLett.114.015301}.
The ISRE effects explored so far are limited to systems with spin-SU$(2)$ symmetry, where the total spin is a good quantum number and its dynamics decouples from that of the density~\cite{PhysRevLett.88.230403, PhysRevLett.88.230404, PhysRevLett.88.230405,PhysRevA.79.051601, PhysRevLett.102.215301}. In this Letter, we investigate the spin dynamics of a normal Bose gas with spin-orbit coupling (SOC) that was recently realized in cold atom experiments \cite{nature09887, PhysRevLett.109.095301, PhysRevLett.109.095302, PhysRevLett.109.115301, PhysRevLett.111.095301, PhysRevA.88.021604, PhysRevA.90.013616, ncomms5023, PhysRevX.6.031022, PhysRevA.94.061604, nphys3672, PhysRevLett.117.185301, PhysRevLett.117.235304, science.aaf6689}. The coupling between spin and orbital degrees of freedom breaks the SU$(2)$ symmetry and leads to more intricate dynamics that has no analog in the usual dilute gases discussed above. In particular, we show how the long-wavelength and low-frequency hydrodynamic equations are modified in the presence of SOC, and how it leads to the appearance of a persistent spin helical (PSH) structure. The decay of the spin helical structure is discussed in both the adiabatic and diabatic limits. The general equations we obtain should serve as the starting point for investigating spin dynamics in a spin-orbit coupled Bose gas, such as spin waves and their attenuation. Spin dynamics for a Fermi gas with spin-orbit coupling has been discussed in Refs.\cite{PhysRevLett.99.110403, PhysRevA.87.041602, PhysRevB.87.125416, PhysRevA.88.033613, PhysRevA.88.043634, PhysRevLett.112.095302, PhysRevA.92.013607, PhysRevA.93.063635}.
{\em General Setup.} For definiteness, let us consider a gas of bosonic atoms $^{87}\mathrm{Rb}$ of mass $m$ with two hyperfine-Zeeman sub-levels $\ket{F,m_F} \equiv \ket{1,0} \equiv \ket{\uparrow}$ and $\ket{1,-1}\equiv \ket{\downarrow}$ that are coupled by a pair of Raman lasers with momentum transfer $\mathbf{q}=q\hat{\mathbf{x}}$ along the $\hat{\mathbf{x}}$-direction. We set the two-photon detuning to be zero for simplicity in the following discussion. The harmonic trapping potential $V(\mathbf{r})$, independent of spin, is assumed to be strong in the $\hat{\mathbf{y}}$- and $\hat{\mathbf{z}}$-directions but weak in the $\hat{\mathbf{x}}$-direction and the system can be considered quasi-one-dimensional. The $s$-wave interaction is almost SU$(2)$ invariant for $^{87}${Rb} and is given by a single coupling constant $g$. The Hamiltonian can be written as $\hat{\mathscr{H}}=\int d^3 \mathbf{r} \sum_{\mu,\nu=\uparrow,\downarrow}\psi^\dagger_{\mu}(\mathbf{r})H_{\mu\nu}\psi_{\nu}(\mathbf{r})+\frac{1}{2}g\int d^3\mathbf{r} \colon\hat{n}(\mathbf{r})\hat{n}(\mathbf{r})\colon$ where $H_{\mu\nu}$ is given by
\begin{equation}
H_{\mu\nu}=\left[\dfrac{-\hbar^2 \nabla^2}{2m} + V(\mathbf{r})\right]\delta_{\mu\nu} -\frac{i\hbar q}{m}\sigma^z_{\mu\nu} \partial_x + \frac{\hbar\Omega_R}{2}\sigma^x_{\mu\nu}~.
\end{equation}
$\hat{\psi}_{\mu}(\mathbf{r})$ ($\hat{\psi}^\dagger_{\mu}(\mathbf{r})$) is the annihilation (creation) operator for a boson with spin $\mu$ at position $\mathbf{r}$. $\Omega_R$ is the two-photon Rabi coupling. The number and spin densities are then given by $\hat{n}(\mathbf{r})=\sum_{\mu}\hat{\psi}^\dagger_{\mu}(\mathbf{r})\hat{\psi}_{\mu}(\mathbf{r})$ and ${\hat{s}}_i(\mathbf{r}) = \frac{1}{2}\sum_{\mu,\nu}\hat{\psi}^\dagger_\mu(\mathbf{r})\sigma^i_{\mu\nu}\hat{\psi}_\nu(\mathbf{r})$, respectively. $\sigma^i$ are the Pauli matrices. In what follows, we use an arrow on top of an operator to indicate that it is a vector in spin space, while {boldface $\hat{\mathbf{x}}, \hat{\mathbf{y}}, \hat{\mathbf{z}}$} describes the spatial direction. Properties of condensation described by $\hat{\mathscr{H}}$ have been discussed extensively in the literature, including its phase diagram and collective excitations~\cite{PhysRevLett.105.160403, PhysRevLett.107.150403, doi:10.1142/S0217979212300010, PhysRevLett.108.225301, PhysRevLett.110.235302, nphys2905, PhysRevLett.114.105301, 0034-4885-78-2-026001, doi:10.1142/9789814667746_0005, srep15307, Zhang2016, PhysRevLett.118.145302, PhysRevLett.120.120401} as well as spin dynamics~\cite{PhysRevA.93.063420}.
{\em Transport Equations.} We first derive the continuity equations for number and spin densities and also identify the modifications to the associated number and spin currents due to spin-orbit coupling. We restrict ourselves to transport along the $\hat{\mathbf{x}}$-direction. Using Heisenberg's equation of motion $i\hbar \braket{\partial_t \hat{A}} = \braket{[\hat{A}, \hat{\mathscr{H}}]}$ with $\hat{A}$ being the number $\hat{n}(\mathbf{r})$ and spin $\hat{\vec{s}}(\mathbf{r})$ densities, we find immediately
\begin{equation}
\partial_t n + \partial_x j_0 = 0~\label{eq:DEOM}
\end{equation}
where the number current along the $\hat{\mathbf{x}}$-direction is $j_0=\langle\hat{j}_0\rangle$ with $\hat{j}_0=-{i\hbar}/({2m})\sum_{\mu}(\hat{\psi}^\dagger_{\mu}\partial_x \hat{\psi}_{\mu}-\partial_x\hat{\psi}^\dagger_{\mu}\hat{\psi}_{\mu})+ (2q/m)\hat{s}_z$. We note that, due to SOC, the number current is coupled to the $\hat{z}$-component of the spin density. This redefinition was recently found to cause a violation of the irrotationality of the velocity field in a spin-orbit coupled condensate and a reduction of the quantum of circulation \cite{PhysRevLett.118.145302}.
In the presence of spin-orbit coupling, the total spin is no longer conserved and the definition of the spin current operator $\hat{\vec{j}}$ is not entirely obvious. In our case, we identify the spin current by grouping all the gradient terms in the continuity equation for the spin density
\begin{equation}
\partial_t \vec{s} + \partial_x \vec{j} = \Omega_R \hat{x}\times\vec{s} + (2q/\hbar)\hat{z}\times\vec{j}~\label{eq:SEOM}
\end{equation}
where the spin current operator is given by $\hat{\vec{j}}=-{i\hbar}/({4m})\sum_{\mu,\nu}\hat{\psi}^\dagger_\mu\vec{\sigma}_{\mu\nu}\,\partial_x \hat{\psi}_\nu + \mathrm{H.c.}+\hat{n}q/(2m)\hat{z}$. We note two important modifications due to SOC. Firstly, the spin current is now coupled to the total density of the system. Secondly, apart from the usual spin precession term due to the Rabi coupling, there is an additional precession term in Eq.(\ref{eq:SEOM}), proportional to the strength of the SOC, which rotates the spin current about the $\hat{z}$-direction. We note that the modified definition of the spin current operator $\hat{\vec{j}}$ can also be motivated from semiclassical considerations. Let the distribution function (a matrix in spin space) be given by $\hat{f}(\mathbf{r},\mathbf{p},t)$; then one can define the spin current as
\begin{equation}
\vec{j}(\mathbf{r},t)=\frac{1}{2}{\rm Tr}\int \frac{d^3 \mathbf{p}}{(2\pi \hbar)^3}\hat{f}(\mathbf{r},\mathbf{p},t)\frac{1}{2}\left(\vec{\sigma}\frac{\partial H}{\partial p_x}+\frac{\partial H}{\partial p_x}\vec{\sigma}\right)~.
\end{equation}
The symmetrization is necessary because of the non-commutativity of $\vec{\sigma}$ and ${\partial H}/{\partial p_x}$. Since ${\partial H}/{\partial p_x}=p_x/m+(q/m)\hat{\sigma}^z$, the first term $p_x/m$ corresponds to the standard spin-current operator, while the second term $(q/m)\hat{\sigma}^z$ only modifies the $\hat{z}$-component of the spin current by an additional term $\hat{n}q/(2m)$.
Using the operator forms of the number and spin currents, it is now straightforward to obtain their equations of motion, which are considerably more complicated because they involve the momentum flux tensors. However, in the normal state above the degeneracy temperature, the momentum flux tensors can be simplified using the Boltzmann distribution (recall $T\gtrsim T_d$) and a gradient expansion (for a detailed derivation, see the Supplemental Material~\cite{supp}). As a result, we obtain
\begin{align}
\partial_t j_0+\dfrac{k_BT}{m} \partial_x n &=\frac{2q}{m}\Omega_R s_y-\frac{g}{2m}\partial_x\left(\frac{3}{4}n^2+\vec{s}^{\,2}\right)\label{eq: current density EOM}\\
\partial_t \vec{j} + \alpha \partial_x \vec{s} &= \left(\Omega_R \hat{x} + \frac{g}{\hbar} \vec{s}\right)\times\vec{j} +\frac{2q\alpha}{\hbar}\hat{z}\times\vec{s}\nonumber \\& +\frac{qn\Omega_R}{2m}\hat{y} - \frac{3g}{4m} (\partial_x n)\vec{s} - \gamma \vec{j}~,\label{eq: spin current density EOM}
\end{align}
where $\alpha = {k_B T}/{m} + {ng}/(4m) $. A phenomenological spin current relaxation term $-\gamma\vec{j}$ is added to Eq.(\ref{eq: spin current density EOM}). In the absence of the spin-orbit coupling ($\Omega_R=0$ and $q=0$), Eqs.(\ref{eq:SEOM},\ref{eq: spin current density EOM}) reduce to the standard Leggett-Rice form for a degenerate Fermi liquid \cite{PhysRevLett.20.586, 0022-3719-3-2-027}. It is noteworthy that the spin gradient term ${ng}/({4m})\partial_x \vec{s}$ in Eq.(\ref{eq: spin current density EOM}) is usually omitted in comparison to the Leggett-Rice term $(g/\hbar)\vec{s}\times\vec{j}$ when the spatial variation of $\vec{s}$ is small. In the presence of SOC, however, it has to be retained because the natural scale of variation for $\vec{s}$ is set by the spin-orbit scale $q$ which can be quite large. In addition, due to the fast temporal variation of spin density, it is necessary to go beyond the adiabatic approximation $|\partial_t \vec{s}/\vec{s}| \lesssim \gamma$ usually assumed in literature and discuss the dynamics in the diabatic regime as well.
Equations (\ref{eq:DEOM},\ref{eq:SEOM},\ref{eq: current density EOM},\ref{eq: spin current density EOM}) form the basic equations for the spin dynamics of a SOC Bose gas above the degeneracy temperature. In the following, we first discuss the limit in which the effect of the Rabi coupling $\Omega_R$ is small or, equivalently, times $t\ll 1/\Omega_R$, and discuss the existence of a persistent spin helix (PSH) at wave vector $k=2q$ (hereafter $\hbar = 1$) and its decay when $k$ deviates from $2q$. The effect of the Rabi term on the PSH is discussed at the end of the Letter.
{\em {Persistent spin helical structure.}} The full set of equations allows an exact solution corresponding to a persistent spin helix with uniform density $n=n_0$, spin density $s_z=s_{z,0}$ and $\vec{s}^{\,2}\equiv\vec{s}\cdot\vec{s}$ that are independent of time. We write the transverse spin $\vec{s}_\perp = s_x\hat{x} + s_y\hat{y}$ in terms of $s^\pm = s_x \pm i s_y$, and likewise for the spin currents $\vec{j}(x,t) = \vec{j}_\perp(x,t) + j_{z}(t)\hat{z}$. For a spin helical structure with definite wave number $k$, we can write $s^{\pm}(x,t) = e^{\pm ikx}\tilde{s}^{\pm}(t)$ and similarly $j^{\pm}(x,t)=e^{\pm ikx}\tilde{j}^{\pm}(t)$, and obtain the following set of equations
\begin{align}
\partial_t \tilde{s}^+ &= - i (k-2q)\tilde{j}^+~,\label{ds tilde}\\
\partial_t \tilde{j}^+ &= (i \lambda s_{z,0} - \gamma) \tilde{j}^+ -i[\alpha(k-2q) + \lambda j_z]\tilde{s}^+,\label{dj tilde}\\
\partial_t j_z &= \lambda {\rm Im}[\tilde{s}^- \tilde{j}^+] -\gamma j_z,\label{djz}
\end{align}
where $\lambda = g/\hbar$ and ${\rm Im}$ denotes the imaginary part. When $k=2q$, the transverse spin $\tilde{s}^+$ is independent of time and corresponds to a static spin helical structure in which the spin density rotates about the $\hat{z}$-axis with wave vector $2q$ along the $\hat{\mathbf{x}}$-direction,
\begin{equation}
\vec{s}_{\rm psh} = s_{\perp,0} \cos(2qx)\hat{x} + s_{\perp,0} \sin(2qx)\hat{y} + s_{z,0}\hat{z}~.
\end{equation}
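Equations (\ref{ds tilde},\ref{dj tilde},\ref{djz}) are easily integrated numerically. The sketch below (with illustrative dimensionless parameters) confirms that $\tilde{s}^+$ is stationary at $k=2q$ and decays once $k$ deviates from $2q$:

```python
# illustrative dimensionless parameters (alpha, lam = g, gamma_, s_z0 assumed)
alpha, lam, gamma_, s_z0, s0 = 1.0, 0.5, 0.2, 0.3, 1.0

def rhs(state, delta):
    # right-hand sides for (s~+, j~+, j_z); delta = k - 2q
    s, j, jz = state
    ds = -1j * delta * j
    dj = (1j * lam * s_z0 - gamma_) * j - 1j * (alpha * delta + lam * jz) * s
    djz = lam * (s.conjugate() * j).imag - gamma_ * jz
    return (ds, dj, djz)

def evolve(delta, T=20.0, n=20000):
    state = (complex(s0), 0j, 0.0)       # initial condition: pure spin helix
    h = T / n
    for _ in range(n):                   # classical RK4 steps
        k1 = rhs(state, delta)
        k2 = rhs(tuple(y + 0.5 * h * k for y, k in zip(state, k1)), delta)
        k3 = rhs(tuple(y + 0.5 * h * k for y, k in zip(state, k2)), delta)
        k4 = rhs(tuple(y + h * k for y, k in zip(state, k3)), delta)
        state = tuple(y + h * (a + 2 * b + 2 * c + e) / 6
                      for y, a, b, c, e in zip(state, k1, k2, k3, k4))
    return state

s_psh = evolve(delta=0.0)[0]   # k = 2q: persistent spin helix, no decay
s_dev = evolve(delta=0.5)[0]   # k != 2q: transverse spin decays
print(abs(s_psh), abs(s_dev))
```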
In semiconductor heterostructures, it is understood that the persistent spin helix is due to an emergent SU$(2)$ symmetry in the presence of spin-orbit coupling~\cite{PhysRevLett.97.236601, nature07871, RevModPhys.89.011001}. In the long time limit $t\gg 1/\gamma$, it is easy to see from Eqs.(\ref{dj tilde},\ref{djz}) that both $j_z$ and $\tilde{j}^+$ decay to zero.
{\em {Vicinity of PSH.}} In the following, we investigate the dynamics of the spin helical structure when its wave vector deviates from $2q$, as described by the parameter $\varepsilon\equiv k/(2q)-1$. Here it is important to distinguish two regimes. In the adiabatic regime, where the spin currents relax much faster than the spin densities and can thus adiabatically follow the time evolution of the spin density, $|\partial_t \vec{s}/\vec{s}\,| \lesssim \gamma$, we can set $\partial_t \tilde{j}^+ = \partial_t j_z = 0$ in Eqs.(\ref{dj tilde},\ref{djz}) in the steady state. Writing $\tilde{s}^+(t)\equiv s_\perp(t)\exp[i\theta(t)]$, we obtain the following set of equations
\begin{align}
\begin{split}
(\gamma^2 + \lambda^2 s_{z,0}^2)\ln\dfrac{s_\perp(t)}{s_{\perp,0}} + \frac{\lambda^2}{2}\left[s_{\perp}^2(t) - s_{\perp,0}^2\right] \\= -\alpha\gamma (k - 2q)^2 t~\label{eq: sperp},
\end{split}\\
\theta(t) &= \dfrac{\lambda s_{z,0}}{\gamma}\ln\left[\dfrac{s_\perp(t)}{s_{\perp,0}}\right]~,\\
j_z &= -\dfrac{\lambda s_\perp^2 \alpha(k-2q)}{\gamma^2 + \lambda^2 (s_\perp^2 + s_{z,0}^2)}~,\\
\tilde{j}^+ &= \tilde{s}^+ \dfrac{\alpha(k-2q)(\lambda s_{z,0} - i \gamma)}{\gamma^2 + \lambda^2 (s_\perp^2 + s_{z,0}^2)}~,\label{adia jtilde}
\end{align}
where $s_{\perp,0}=s_\perp(t=0)$. Substitution of $k=2q$ recovers the previous solution of PSH. When $k \neq 2q$, the transverse spin magnitude decays according to Eq.(\ref{eq: sperp}). Depending on the relative magnitude of $s_\perp$ and $s_{z,0}$, one can distinguish two qualitatively different behaviours. \\
(i) When $|s_{\perp,0}| \geq |s_\perp(t)| \gg |s_{z,0}|$, namely, when spins are polarized close to the $xy$-plane, the first term on the left of Eq.(\ref{eq: sperp}) is negligible, hence the transverse spin magnitude decays parabolically in the short time limit $t\ll \tau_{\rm para}$,
\begin{equation}
s_\perp (t) \approx s_{\perp,0}\sqrt{1 - \frac{t}{\tau_{\rm para}}}, ~~\tau_{\rm para} = \frac{\lambda^2 s_{\perp,0}^2}{2\alpha\gamma(k-2q)^2 }, \label{asym eq para}
\end{equation}
where the time constant $\tau_{\rm para}$ depends quadratically on the interaction parameter $\lambda$ and inversely on the spin current relaxation rate $\gamma$. As expected, it diverges when $k=2q$. \\
(ii) In the long time limit $t\gg \tau_{\rm para}$ when $|s_\perp(t)| \ll |s_{z,0}|$, the decay becomes exponential in the adiabatic regime with a different time constant $\tau_{\rm exp}$
\begin{equation}
s_\perp (t) \approx s_{\perp,0} e^{-t/\tau_{\rm exp}},~~~\tau_{\rm exp} = \frac{\gamma^2 + \lambda^2 s_{z,0}^2}{\alpha\gamma(k-2q)^2} \label{asym eq exp}.
\end{equation}
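The implicit relation Eq.(\ref{eq: sperp}) can be solved for $s_\perp(t)$ by bisection, since its left-hand side is monotonically increasing in $s_\perp$. The sketch below (illustrative parameters, polarization in the $xy$-plane) checks the short-time parabolic law Eq.(\ref{asym eq para}):

```python
import math

# illustrative parameters; polarization in the xy-plane (s_z0 = 0)
alpha, lam, gamma_, dk = 1.0, 2.0, 0.5, 0.1     # dk = k - 2q
s0, s_z0 = 1.0, 0.0

def implicit(s, t):
    # Eq. for s_perp rearranged as implicit(s, t) = 0; increasing in s
    return ((gamma_ ** 2 + lam ** 2 * s_z0 ** 2) * math.log(s / s0)
            + 0.5 * lam ** 2 * (s ** 2 - s0 ** 2)
            + alpha * gamma_ * dk ** 2 * t)

def s_perp(t):
    lo, hi = 1e-12, s0
    for _ in range(100):                         # bisection
        mid = 0.5 * (lo + hi)
        if implicit(mid, t) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

tau_para = lam ** 2 * s0 ** 2 / (2 * alpha * gamma_ * dk ** 2)
t = 0.05 * tau_para
parabolic = s0 * math.sqrt(1 - t / tau_para)     # short-time asymptote
print(s_perp(t), parabolic)                      # nearly equal at short times
```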
Fig.\ref{fig:AdiaPt} shows the excellent agreement between the above analytic formulae and the numerical results. We note that the dynamical equation (\ref{eq: sperp}) is similar in form to the Leggett-Rice equation derived for a degenerate Fermi liquid~\cite{0022-3719-3-2-027}, except for the explicit appearance of the spin-orbit coupling term on the right-hand side of Eq.(\ref{eq: sperp}). We emphasize that Eqs.(\ref{asym eq para}, \ref{asym eq exp}) apply, even in the presence of a small Rabi coupling, so long as $1/\Omega_R\gg \tau_{\rm para}, \tau_{\rm exp}$.
\begin{figure}[h]
\includegraphics[width=85mm,scale=1]{Fig1.eps}
\caption{\label{fig:AdiaPt}(Colour online) Short- and long-time behaviours of the transverse spin in the adiabatic regime. Short-time decay for initial spin density polarised close to the $xy$-plane; the decay is parabolic (left panel). Long-time decay for initial spin density polarised close to the $z$-axis; the decay is exponential (right panel). Numerical simulations of the full set of Eqs. (\ref{eq:DEOM},\ref{eq:SEOM},\ref{eq: current density EOM},\ref{eq: spin current density EOM}) (black solid) agree very well with the analytical results (red dashed) given by Eq.(\ref{eq: sperp}). Blue dashed lines show the asymptotic results, Eqs.(\ref{asym eq para}, \ref{asym eq exp}). The deviation at the tail of the left panel between the simulation and the analytic equation (\ref{eq: sperp}) indicates the failure of the adiabatic approximation.}
\end{figure}
{\em {Diabatic Regime.}} When the wave vector $k$ of the spin helix deviates significantly from the PSH wave vector $2q$, the adiabatic condition fails. In the diabatic regime, $|\partial_t \vec{s}/\vec{s}| \gg \gamma$, we can neglect $\lambda |j_z|$ in Eq.(\ref{dj tilde}), and as a result $\tilde{s}^+(t)$ and $\tilde{j}^+(t)$ form a closed dynamical system through Eqs.(\ref{ds tilde},\ref{dj tilde}). With the initial condition $(\tilde{s}^+, \tilde{j}^+, j_z)_{t=0} = (s_{\perp,0},0,j_{z,0})$, one obtains~\cite{supp}
\begin{align}
\tilde{s}^+(t)=& s_{\perp,0} e^{-\frac{\gamma t}{2} + i \frac{\lambda s_{z,0}}{2} t } \left[\cos \Gamma t + \dfrac{\gamma - i \lambda s_{z,0}}{2\Gamma}\sin \Gamma t \right] ,\label{general stilde}\\
\tilde{j}^+(t)=& i\sqrt{\alpha} s_{\perp,0} e^{-\frac{\gamma t}{2} + i \frac{\lambda s_{z,0}}{2} t}\sin(\Gamma t)\,\mbox{sign}(2q - k),\label{general jtilde}\\
j_z(t)=& j_{z,0} e^{-\gamma t} \nonumber - e^{-\gamma t} \dfrac{\lambda s_{\perp,0}^2}{4 (k-2q)}\\&\times \left[ 1 + \gamma t - \cos(2\Gamma t) - \dfrac{\gamma}{2}\dfrac{\sin(2\Gamma t)}{\Gamma} \right]~,
\end{align}
where $\Gamma=\sqrt{\alpha}|k-2q|$. In obtaining the above simplified expressions, we have assumed that the spin polarization is close to the $xy$-plane, so that $\Gamma \gg \gamma, \lambda s_{z,0}$.
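The closed form Eq.(\ref{general stilde}) can be checked against a direct numerical integration of the linearized system Eqs.(\ref{ds tilde},\ref{dj tilde}) with the $\lambda j_z$ term dropped; below is a sketch with illustrative parameters in the regime $\Gamma \gg \gamma, \lambda s_{z,0}$:

```python
import cmath, math

# illustrative values with Gamma >> gamma, lam*s_z0
alpha, lam, gamma_, s_z0, s0 = 1.0, 0.1, 0.02, 0.1, 1.0
dk = 1.0                                    # dk = k - 2q
Gamma = math.sqrt(alpha) * abs(dk)

def analytic(t):
    # closed-form transverse spin in the diabatic regime
    env = cmath.exp((-gamma_ / 2 + 1j * lam * s_z0 / 2) * t)
    return s0 * env * (math.cos(Gamma * t)
                       + (gamma_ - 1j * lam * s_z0) / (2 * Gamma) * math.sin(Gamma * t))

def f(s_, j_):
    # linearized equations with the lam*j_z term dropped
    return (-1j * dk * j_,
            (1j * lam * s_z0 - gamma_) * j_ - 1j * alpha * dk * s_)

s, j = complex(s0), 0j
T, n = 10.0, 100000
h = T / n
for _ in range(n):                          # classical RK4 step
    k1 = f(s, j)
    k2 = f(s + 0.5 * h * k1[0], j + 0.5 * h * k1[1])
    k3 = f(s + 0.5 * h * k2[0], j + 0.5 * h * k2[1])
    k4 = f(s + h * k3[0], j + h * k3[1])
    s += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    j += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6

err = abs(s - analytic(T))
print(err)   # small: the closed form tracks the numerics
```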
\begin{figure}[h]
\includegraphics[width=85mm,scale=1]{Fig2.eps}
\caption{\label{fig:DiaSX}(Colour online) Left panel shows time dependence of $y$-component of spin density for spin density polarized along $\hat{x}$-direction at $t=0$. Numerical result (black dashed) agrees very well with the analytical result (red solid), Eq.(\ref{general stilde}). Right panel is the trajectory of transverse spin component in the $xy$-plane. Also indicated in the graph are the fast oscillations of the magnitude of transverse spin ($\Gamma$) and its slow rotations with rate $\lambda s_{z,0}/2$.}
\end{figure}
The dynamics of the transverse components consists of three parts: fast oscillation in magnitude with frequency $\Gamma$, slow precession of the axis of oscillation with frequency $\lambda s_{z,0}/2$, and the damping of the oscillation amplitude at the rate $\gamma/2$, as shown in Fig.\ref{fig:DiaSX}. If one neglects the small sine-function correction in Eq.(\ref{general stilde}), there is an exact $\pi/2$ phase difference between the oscillations of $\tilde{s}^+(t)$ and $\tilde{j}^+(t)$, similar to an undamped LC circuit, in contrast to the over-damped case where $\tilde{j}^+$ adiabatically follows the dynamics of $\tilde{s}^+$.
The region of adiabaticity for various initial polarizations $s_{z,0}$ and $s_{\perp,0}$ is determined (approximately) by the condition $|\partial_t \vec{s}/\vec{s}| \sim \gamma$. Fig.\ref{fig:AdiaN} shows that close to the PSH, the adiabatic region prevails for most of the parameter regime except when the spin polarisation is small. On the other hand, as one moves away from the PSH, the region of non-adiabatic evolution grows much larger. Starting from arbitrary initial conditions, the spin dynamics may traverse both the adiabatic and diabatic regimes and become much richer. In particular, close to the boundaries, it is necessary to deal with the full set of hydrodynamic equations (\ref{eq:DEOM},\ref{eq:SEOM},\ref{eq: current density EOM},\ref{eq: spin current density EOM}) derived above.
\begin{figure}[h]
\includegraphics[width=85mm,scale=1]{Fig3.eps}
\caption{\label{fig:AdiaN}(Colour online) The approximate demarcation of the adiabatic from the diabatic regions based on the condition $|\partial_t \vec{s}/\vec{s}| \sim \gamma$ (red shaded lines). The region of adiabaticity becomes smaller as the wave number of the spin helix deviates from $2q$.}
\end{figure}
{\em {Quenching of Rabi coupling on PSH.}} In the above analysis, we have assumed that the Rabi coupling is weak and can be neglected. Inclusion of the Rabi term results in more complex dynamics, for example an extra precession of the spin density and the spin current density in Eqs.(\ref{eq:SEOM},\ref{eq: spin current density EOM}). In the equilibrium state, due to the breaking of the emergent SU(2) symmetry, the PSH is no longer stable and decays at a rate determined by $\Omega_R$.
Consider the situation in which the Rabi coupling is turned on suddenly at $t=0$ and remains fixed thereafter; all spin helices except the PSH with $k=2q$ have vanished long before $t=0$. In the following, the short-time effect of the Rabi coupling on the PSH is studied. We separate the densities and current densities into two parts, one from the PSH and the other from the leading correction due to the Rabi coupling, which vanishes for $t \leq 0$,
\begin{align}
n(x,t) =&~n_0 + \delta n(x,t)~,\\
\vec{s}(x,t) =&~\vec{s}_{\rm psh} + \delta \vec{s}(x,t)~,\\
j_0(x,t) =&~0 + \delta j_0(x,t)~,\\
\vec{j}(x,t) =&~0 + \delta \vec{j}(x,t)~.
\end{align}
Substituting the above expressions into the transport equations Eqs.(\ref{eq:DEOM},\ref{eq:SEOM},\ref{eq: current density EOM},\ref{eq: spin current density EOM}) and keeping only terms linear in the small deviations and the Rabi coupling, we are led to an inhomogeneous diffusion equation of the form
\begin{equation}
\partial_t \vec{\delta V}(x,t) = \hat{\mathbf{H}}(x,\partial_x)\vec{\delta V}(x,t) + \vec{g}(x)~,
\end{equation}
with $\vec{\delta V} = (\delta n, \delta j_0, \delta s_z, \delta j_z, \delta s_x, \delta j_x, \delta s_y, \delta j_y)^T$ and $\vec{g}(x) = \Omega_R (0, {2q}s_{{\rm psh},y}/{m}, s_{{\rm psh},y},0,0,0, -s_{z,0},qn_0/(2m))^T$. The explicit form of $\hat{\mathbf{H}}(x,\partial_x)$ is given in Supplemental Material~\cite{supp} and the solution is given by
\begin{align} \label{deltaVSeries}
\vec{\delta V}(x,t)= \hat{\mathbf{H}}^{-1}\left[\exp({\hat{\mathbf{H}} t}) - \hat{\mathbf{I}}\right] \vec{g}(x).
\end{align}
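Numerically, the solution Eq.(\ref{deltaVSeries}) is conveniently evaluated as the series $\sum_{n\geq 1}(t^n/n!)\,\hat{\mathbf{H}}^{n-1}\vec{g}$, which avoids inverting $\hat{\mathbf{H}}$. A minimal pure-Python sketch, validated on a scalar example:

```python
import math

def matvec(A, v):
    # matrix-vector product for a list-of-lists matrix
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def delta_V(H, g, t, terms=60):
    # sum_{n>=1} t^n/n! * H^{n-1} g  ==  H^{-1} (e^{Ht} - I) g
    out = [0.0] * len(g)
    w = list(g)                  # w = H^{n-1} g, starting at n = 1
    coef = t                     # coef = t^n / n!
    for n in range(1, terms + 1):
        out = [o + coef * x for o, x in zip(out, w)]
        w = matvec(H, w)
        coef *= t / (n + 1)
    return out

# scalar sanity check: h^{-1} (e^{h t} - 1) g
h, g0, t = 0.5, 2.0, 1.0
approx = delta_V([[h]], [g0], t)[0]
exact = (math.exp(h * t) - 1.0) / h * g0
print(approx, exact)   # agree to machine precision
```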
To characterize the decay of transverse component of spin density due to Rabi coupling, we define a quantity that measures the amplitude of the spin helical structure
\begin{equation}
R(t) = \dfrac{1}{|\vec{s}(x,0)|(\pi/q)}\int_{0}^{\frac{\pi}{q}}{\rm Re}[s^+(x,t) e^{-i2qx} ] \mathrm{d}x~,
\end{equation}
where $|\vec{s}(x,0)| = \sqrt{s_{\perp,0}^2 + s_{z,0}^2}$ is the initial spin magnitude. For $t>0$, the Rabi coupling destroys the helical structure and results in the decay of $R(t)$. The short-time behaviour is described by the leading terms of the series in Eq.(\ref{deltaVSeries}). When $\Omega_R t \ll 1$,
\begin{equation} \label{LeadingR}
R(t) \approx \dfrac{s_{\perp,0}}{\sqrt{s_{\perp,0}^2 + s_{z,0}^2}}\left[ 1- \dfrac{(\Omega_R t)^2}{4} \right] + \mathcal{O}(t^4).
\end{equation}
{\em Experimental Considerations.} For the $^{87}\mathrm{Rb}$ used in the spin-orbit coupling experiment~\cite{nature09887}, the typical density is about ${n_0} =2\times10^{13}\,\mbox{cm}^{-3}$. For our calculation, we assume $T= 700\,\mbox{nK}$, well above the typical condensation temperature. The Raman laser defines a scale of wave number $k_L = (\sqrt{2}\pi/804.1\,\mbox{nm})/10$, chosen to be 10 times smaller than that in Ref.\cite{nature09887} to make the spin helical structure more visible. In the numerical calculations presented, we chose $q = 0.5 \hbar k_L$ and Rabi coupling $\hbar \Omega_R = 0.5 \hbar^2 k_L^2/(2m)$, appropriate to experimental situations. We assume the system is initially polarized with $|\vec{s}_0|=s_{\rm max}= {n_0}/2$. The intrinsic spin current relaxation rate $\gamma$ is chosen to be approximately $20\,$Hz, appropriate to $^{87}\mathrm{Rb}$~\cite{PhysRevLett.88.070403, PhysRevLett.88.230403}. Numerical simulations show that the time scale for spin dynamics in the adiabatic regime is of the order of seconds, while it is of the order of milliseconds in the diabatic regime, and can be observed experimentally. To initialize the system in a particular spin helical state, one can start with Rb atoms in the $\ket{\downarrow}$ state with no Raman lasers and apply a radio-frequency pulse to achieve a desired $s_z$ polarization. Afterwards, a small magnetic field with a linear gradient $\Delta B$ can be applied to create the spin helical structure with wave vector $k$. To investigate the stability of the spin helical structure, one can then turn on the Raman fields, which create the spin-orbit coupling with strength $q$, and measure the evolution of the transverse spin component.
\begin{acknowledgments}
This work is supported by Hong Kong Research Grants Council, GRF HKU 17305217, CRF C6026-16W, and the Croucher Foundation under the Croucher Innovation Award.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Derivation of Equations of Motion}
\subsection{Dynamics of Densities}
Given the Hamiltonian of system $\hat{\mathscr{H}}=\int d^3\mathbf{r}\sum_{\mu,\nu}\psi^\dagger_{\mu}(\mathbf{r})H_{\mu\nu}\psi_{\nu}(\mathbf{r})+\frac{1}{2}g\int d^3 \mathbf{r} \colon\hat{n}(\mathbf{r})\hat{n}(\mathbf{r})\colon$ with $\mu,\nu=\uparrow,\downarrow$ and
\begin{equation}
H_{\mu\nu}=\left[\dfrac{-\hbar^2 \nabla^2}{2m} + V(\mathbf{r})\right]\delta_{\mu\nu} -\frac{i\hbar q}{m}\sigma^z_{\mu\nu} \partial_x + \frac{\hbar\Omega_R}{2}\sigma^x_{\mu\nu}~.
\end{equation}
$\hat{\psi}_{\mu}(\mathbf{r})$ ($\hat{\psi}^\dagger_{\mu}(\mathbf{r})$) is the annihilation (creation) operator of a boson with spin component $\mu$ at position $\mathbf{r}$. $\sigma^i$ are the Pauli matrices. The dynamics of number density $\hat{n}(\mathbf{r})=\sum_{\mu}\hat{\psi}^\dagger_{\mu}(\mathbf{r})\hat{\psi}_{\mu}(\mathbf{r})$ and spin densities ${\hat{s}}_i(\mathbf{r}) = \frac{1}{2}\sum_{\mu,\nu}\hat{\psi}^\dagger_\mu(\mathbf{r})\sigma^i_{\mu\nu}\hat{\psi}_\nu(\mathbf{r})$ is governed by the equations of motion derived from Heisenberg's equation $i\hbar\braket{\partial_t \hat{A}} = \braket{[ \hat{A} , \hat{\mathscr{H}}]}$, explicitly
\begin{align}
\partial_t n + \nabla \cdot \mathbf{j}_0 &= -\dfrac{2}{m}\mathbf{q} \cdot \nabla s_z~,\\
\partial_t s_z + \nabla \cdot \mathbf{j}_z &= \Omega_R s_y - \dfrac{1}{2m}\mathbf{q} \cdot \nabla n~,\\
\partial_t s_x + \nabla \cdot \mathbf{j}_x &= - \dfrac{2}{\hbar}\mathbf{q} \cdot \mathbf{j}_y~,\\
\partial_t s_y + \nabla \cdot \mathbf{j}_y &= - \Omega_R s_z + \dfrac{2}{\hbar}\mathbf{q} \cdot \mathbf{j}_x~,
\end{align}
where $\mathbf{q} = q \hat{\mathbf{x}}$ is the spin-orbit coupling and Wick's theorem has been used:
\begin{equation}
\braket{\hat{\psi}^\dagger_\alpha\,\hat{\psi}^\dagger_\beta\,\hat{\psi}_m\,\hat{\psi}_n} = \braket{\hat{\psi}^\dagger_\alpha\,\hat{\psi}_m}\braket{\hat{\psi}^\dagger_\beta\,\hat{\psi}_n} + \braket{\hat{\psi}^\dagger_\alpha\,\hat{\psi}_n}\braket{\hat{\psi}^\dagger_\beta\,\hat{\psi}_m} + \braket{\hat{\psi}^\dagger_\alpha\,\hat{\psi}^\dagger_\beta}\braket{\hat{\psi}_m\,\hat{\psi}_n}~,
\end{equation}
The last term, called the anomalous term, is omitted in the present case of a normal phase. The number current $\mathbf{j}_0=\langle\hat{\mathbf{j}}_0\rangle$ with $\hat{\mathbf{j}}_0=-{i\hbar}/({2m})\sum_{\mu}(\hat{\psi}^\dagger_{\mu}\nabla \hat{\psi}_{\mu}-\nabla\hat{\psi}^\dagger_{\mu}\,\hat{\psi}_{\mu})$ and the spin currents $\mathbf{j}_i =\langle\hat{\mathbf{j}}_i\rangle$ with $\hat{\mathbf{j}}_i=-{i\hbar}/({4m})\sum_{\mu,\nu}\hat{\psi}^\dagger_\mu \sigma^i_{\mu\nu}\,\nabla \hat{\psi}_\nu + \mathrm{H.c.}$ are defined in the \textit{conventional} manner, i.e.\ taking only the kinetic term of the Hamiltonian into account. The equations of motion of these conventional currents are needed to close the system's dynamics.
\subsection{Dynamics of Current Densities}
After a straightforward but tedious calculation, the equations of motion of the conventional number and spin current densities are obtained:
\begin{align}
\partial_t j_{0,\alpha} + \partial_\beta \braket{\hat{\Pi}^0_{\alpha\beta}} &= -\dfrac{n}{m} \partial_\alpha V(\mathbf{r}) -\dfrac{3g}{4m}\partial_\alpha(n^2) - \dfrac{g}{m}\partial_\alpha(s_x^2 + s_y^2 + s_z^2 \,) -\dfrac{2(\mathbf{q}\cdot\nabla)}{m} j_{z,\alpha}~,\\
\partial_t j_{z,\alpha} + \partial_\beta \braket{\hat{\Pi}^z_{\alpha\beta}} &= -\dfrac{s_z}{m}\partial_\alpha V(\mathbf{r}) + \dfrac{2g}{\hbar}(s_x j_{y,\alpha} - s_y j_{x,\alpha}) - \dfrac{3g}{2m} s_z \partial_\alpha n - \dfrac{g}{2m} n \partial_\alpha s_z -\dfrac{(\mathbf{q}\cdot\nabla)}{2m} j_{0,\alpha} + \Omega_R j_{y,\alpha}~,\\
\partial_t j_{x,\alpha} + \partial_\beta \braket{\hat{\Pi}^x_{\alpha\beta}} &= -\dfrac{s_x}{m}\partial_\alpha V(\mathbf{r}) + \dfrac{2g}{\hbar}(s_y j_{z,\alpha} - s_z j_{y,\alpha}) - \dfrac{3g}{2m} s_x \partial_\alpha n - \dfrac{g}{2m} n \partial_\alpha s_x -\dfrac{2q_\beta \braket{\hat{\Pi}^y_{\alpha\beta}}}{\hbar}~,\\
\partial_t j_{y,\alpha} + \partial_\beta \braket{\hat{\Pi}^y_{\alpha\beta}} &= -\dfrac{s_y}{m}\partial_\alpha V(\mathbf{r}) + \dfrac{2g}{\hbar}(s_z j_{x,\alpha} - s_x j_{z,\alpha}) - \dfrac{3g}{2m} s_y \partial_\alpha n - \dfrac{g}{2m} n \partial_\alpha s_y + \dfrac{2q_\beta \braket{\hat{\Pi}^x_{\alpha\beta}}}{\hbar} -\Omega_R j_{z,\alpha}~.
\end{align}
A repeated index $\beta$ implies summation over $\beta = x,y,z$. The generalized momentum flux tensors $\hat{\Pi}^i_{\alpha\beta}$ are defined as
\begin{align}
\hat{\Pi}^0_{\alpha\beta} &= \left(\dfrac{\hbar}{2m}\right)^2 (\partial_\alpha \hat{\psi}_\uparrow^\dagger\,\partial_\beta\hat{\psi}_\uparrow - \hat{\psi}_\uparrow^\dagger\,\partial_\alpha\partial_\beta\hat{\psi}_\uparrow + \partial_\alpha \hat{\psi}_\downarrow^\dagger\,\partial_\beta\hat{\psi}_\downarrow - \hat{\psi}_\downarrow^\dagger\,\partial_\alpha\partial_\beta\hat{\psi}_\downarrow) + \mathrm{H.c.}~,\\
\hat{\Pi}^z_{\alpha\beta} &= \dfrac{1}{2}\left(\dfrac{\hbar}{2m}\right)^2 (\partial_\alpha \hat{\psi}_\uparrow^\dagger\,\partial_\beta\hat{\psi}_\uparrow - \hat{\psi}_\uparrow^\dagger\,\partial_\alpha\partial_\beta\hat{\psi}_\uparrow - \partial_\alpha \hat{\psi}_\downarrow^\dagger\,\partial_\beta\hat{\psi}_\downarrow + \hat{\psi}_\downarrow^\dagger\,\partial_\alpha\partial_\beta\hat{\psi}_\downarrow) + \mathrm{H.c.}~,\\
\hat{\Pi}^x_{\alpha\beta} &= \dfrac{1}{2}\left(\dfrac{\hbar}{2m}\right)^2 (\partial_\alpha \hat{\psi}_\uparrow^\dagger\,\partial_\beta\hat{\psi}_\downarrow - \hat{\psi}_\uparrow^\dagger\,\partial_\alpha\partial_\beta\hat{\psi}_\downarrow + \partial_\beta \hat{\psi}_\uparrow^\dagger\,\partial_\alpha\hat{\psi}_\downarrow - \partial_\alpha\partial_\beta\hat{\psi}_\uparrow^\dagger\,\hat{\psi}_\downarrow) + \mathrm{H.c.}~,\\
\hat{\Pi}^y_{\alpha\beta} &= \dfrac{1}{2i}\left(\dfrac{\hbar}{2m}\right)^2 (\partial_\alpha \hat{\psi}_\uparrow^\dagger\,\partial_\beta\hat{\psi}_\downarrow - \hat{\psi}_\uparrow^\dagger\,\partial_\alpha\partial_\beta\hat{\psi}_\downarrow + \partial_\beta \hat{\psi}_\uparrow^\dagger\,\partial_\alpha\hat{\psi}_\downarrow - \partial_\alpha\partial_\beta\hat{\psi}_\uparrow^\dagger\,\hat{\psi}_\downarrow) + \mathrm{H.c.}~.
\end{align}
In contrast to Oktel's work, where the densities are nearly uniform over the time duration studied, a spin helix with large wave number $k \sim 2q/\hbar$ implies that the gradient terms associated with the exchange interaction are comparable with the Leggett-Rice terms; hence these gradient terms are retained in the present study.
\subsection{Quasi-one-dimensional limit}
The calculation of the ensemble average of the momentum flux tensors is not trivial in the presence of spin-orbit coupling. Only the $(\alpha\beta)=(xx)$ component of the tensors is needed, since the quasi-one-dimensional limit is taken below. In the case without Rabi coupling, the number and longitudinal spin densities are conserved, so their corresponding current densities are well defined. These currents are modified by spin-orbit coupling, as shown in the main text. Their modifications can also be obtained by performing a gauge transformation, which eliminates the spin-orbit coupling at the expense of introducing a space-dependent spin quantisation axis,
\begin{equation}
\tilde{\psi}=\exp(i q x \sigma_z)\psi~.
\end{equation}
Therefore the gradient expansions of the momentum flux tensors $\braket{\Pi^0_{xx}}$ and $\braket{\Pi^z_{xx}}$ can be obtained by first carrying out the standard gradient expansion on the momentum flux without SOC ($\tilde{\Pi}^0_{xx}$, $\tilde{\Pi}^z_{xx}$) and then applying the gauge transformation. As a result, we obtain
\begin{align}
\braket{\Pi^0_{xx}} &= \left(\dfrac{k_B T}{m} - \dfrac{q^2}{m^2} \right)n - \dfrac{4q}{m} j_z~,\\
\braket{\Pi^z_{xx}} &= \left(\dfrac{k_B T}{m} - \dfrac{q^2}{m^2} \right)s_z - \dfrac{q}{m} j_0~.
\end{align}
However, the transverse spin components in general do not satisfy equations of continuity owing to the spin-orbit coupling; as a result, the definition of the corresponding momentum flux tensors is not uniquely fixed. In our study, we note that the ``streaming'' term (group velocity) is not modified in the $\hat{x}$- and $\hat{y}$-directions, and we therefore apply the standard gradient expansion to the corresponding momentum flux tensors,
\begin{align}
\braket{\Pi^x_{xx}} &= \dfrac{k_B T}{m} s_x~,\\
\braket{\Pi^y_{xx}} &= \dfrac{k_B T}{m} s_y~.
\end{align}
Since the external potential $V(\mathbf{r})$ is uniform along the $\hat{\mathbf{x}}$-direction but strongly confining in the $\hat{\mathbf{y}}$-$\hat{\mathbf{z}}$ plane, the system is in the quasi-one-dimensional limit, in which $n(x,y,z)$ is replaced by its central peak value $n(x,0,0)$ (and similarly for the other quantities); averaging over the Gaussian profile in the $\hat{\mathbf{y}}$-$\hat{\mathbf{z}}$ plane results in the replacement of the interaction strength $g \to g/2$. After adopting the redefinitions of the currents given in the main text, one has the equations of motion in the quasi-one-dimensional limit,
\begin{align}
\partial_t n + \partial_x j_0 &= 0~,\label{eq n} \\
\partial_t \vec{s} + \partial_x \vec{j} &= \Omega_R \hat{x}\times\vec{s} + (2q/\hbar)\hat{z}\times\vec{j}~,\label{eq s} \\
\partial_t j_0 + \dfrac{k_B T}{m} \partial_x n &=\frac{2q}{m}\Omega_R s_y -\frac{g}{2m}\partial_x\left(\frac{3}{4}n^2+\vec{s}^{\,2}\right)~,\label{eq j0} \\
\partial_t \vec{j} + \alpha \partial_x \vec{s} &= \left(\Omega_R \hat{x} + \frac{g}{\hbar} \vec{s}\right)\times\vec{j} +\frac{2q\alpha}{\hbar}\hat{z}\times\vec{s}+\frac{qn\Omega_R}{2m}\hat{y} - \frac{3g}{4m} (\partial_x n)\vec{s} - \gamma \vec{j}~,\label{eq spin j}
\end{align}
where $\alpha = k_B T/m + ng/(4m)$ and the spin-current relaxation term $-\gamma \vec{j}$ is added phenomenologically. Vectorial quantities live in spin space.
\section{Adiabatic Evolution of Spin helix}
Hereafter $\hbar = 1$. The spin dynamics of the spin helix simplifies and is given by Eqs.~(7-9) of the main text, explicitly
\begin{align}
\partial_t \tilde{s}^+ &= - i (k-2q)\tilde{j}^+~,\label{EOM1}\\
\partial_t \tilde{j}^+ &= (i \lambda s_{z,0} - \gamma) \tilde{j}^+ -i [\alpha(k-2q) + \lambda j_z]\tilde{s}^+,\label{EOM2}\\
\partial_t j_z &= \lambda {\rm Im}[\tilde{s}^- \tilde{j}^+] -\gamma j_z.\label{EOM3}
\end{align}
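These coupled equations are straightforward to integrate numerically; the minimal sketch below (with illustrative toy parameters in units $\hbar=1$, not the experimental values) contrasts the persistent spin helix $k=2q$ with a detuned helix:

```python
def evolve(k, q=0.5, alpha=1.0, lam=0.2, gamma=0.05, sz0=0.3,
           s0=1.0, t_max=50.0, dt=1e-3):
    """Euler integration of the coupled equations for s~+, j~+ and j_z."""
    s, j, jz = complex(s0), 0j, 0.0          # s = s~+, j = j~+, jz real
    for _ in range(round(t_max / dt)):
        ds = -1j * (k - 2*q) * j
        dj = (1j*lam*sz0 - gamma)*j - 1j*(alpha*(k - 2*q) + lam*jz)*s
        djz = lam * (s.conjugate() * j).imag - gamma * jz
        s, j, jz = s + dt*ds, j + dt*dj, jz + dt*djz
    return s, j, jz

s_psh, _, _ = evolve(k=1.0)   # persistent spin helix: k = 2q
s_off, _, _ = evolve(k=1.5)   # detuned helix: transverse spin decays
print(abs(s_psh), abs(s_off))
```

For $k=2q$ the transverse spin magnitude stays constant (the right-hand sides vanish identically for vanishing initial current), while for $k\neq 2q$ it decays at rate $\gamma/2$, in line with the two regimes discussed below.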
For $k \neq 2q$ there is no persistent spin helix, and the spin dynamics can be classified into two regimes. In this section we discuss the adiabatic regime, in which the spin density varies slowly. The diabatic regime, in which the spin density changes fast, is the subject of the next section.
The dynamical equation of the spin current is inhomogeneous, so the spin current consists of a homogeneous solution and an inhomogeneous solution. The former dies out on a time scale of about $\gamma^{-1}$ and is unimportant, while the latter is the one of interest. The spin current acquires the inhomogeneous value within a duration $\Delta t \sim \gamma^{-1}$; this value is obtained by setting $\partial_t \vec{j} = 0$. The solution is valid only when every term in the equation varies slowly over the time interval $\Delta t \sim \gamma^{-1}$, i.e.\ $|\partial_t \vec{s}/\vec{s}\,| \lesssim \gamma$; this condition will be justified at the end.
The adiabatic assumption $\partial_t \tilde{j}^+ = \partial_t j_z = 0$ implies
\begin{align}
j_z &= \dfrac{\lambda}{\gamma} {\rm Im}[\tilde{s}^- \tilde{j}^+]~,\label{adia jz}\\
\tilde{j}^+ &= \dfrac{-i\tilde{s}^+}{\gamma - i \lambda s_{z,0}}\left[\alpha \left(k-2q\right) + \lambda j_z \right]~.\label{adia j+}
\end{align}
By writing $\tilde{s}^{\pm} (t) \equiv s_\perp(t) e^{\pm i\theta(t)}$, one can solve for $j_z$ by substituting Eq.~(\ref{adia j+}) into Eq.~(\ref{adia jz}), and hence for $\tilde{j}^+$:
\begin{align}
j_z &= -\dfrac{\lambda s_\perp^2 \alpha(k-2q)}{\gamma^2 + \lambda^2 (s_\perp^2 + s_{z,0}^2)}~,\\
\tilde{j}^+ &= \tilde{s}^+ \dfrac{\alpha(k-2q)(\lambda s_{z,0} - i \gamma)}{\gamma^2 + \lambda^2 (s_\perp^2 + s_{z,0}^2)}~.
\end{align}
As $\partial_t \tilde{s}^+ = e^{i\theta}[\partial_t s_\perp + i(s_\perp \partial_t \theta)]$, separating real and imaginary parts yields the rates of change of the transverse spin magnitude $s_\perp (t)$ and the phase $\theta(t)$,
\begin{align}
\partial_t s_\perp &= -s_\perp \dfrac{\gamma \alpha \left(k-2q\right)^2}{\gamma^2 + \lambda^2 (s_\perp^2 + s_{z,0}^2)}~,\\
\partial_t \theta &= \dfrac{\lambda s_{z,0}}{\gamma}\dfrac{\partial_t s_\perp}{s_\perp}~.
\end{align}
Direct integration yields algebraic equations for $s_\perp (t)$ and $\theta(t)$,
\begin{align}
(\gamma^2 + \lambda^2 s_{z,0}^2)\ln\dfrac{s_\perp(t)}{s_{\perp,0}} + \frac{\lambda^2}{2}\left[s_{\perp}^2(t) - s_{\perp,0}^2\right] = -\alpha (k - 2q)^2 \gamma t~,\\
\theta(t) = \dfrac{\lambda s_{z,0}}{\gamma}\ln \dfrac{s_\perp(t)}{s_{\perp,0}}~.
\end{align}
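The implicit solution can be checked numerically; the sketch below (again with illustrative toy parameters) integrates the rate equation for $s_\perp$ directly and evaluates both sides of the algebraic relation:

```python
import numpy as np

# Toy parameters (illustrative only); k2q stands for k - 2q.
gamma, lam, alpha, k2q, sz0 = 0.05, 0.2, 1.0, 0.05, 0.3
s0 = 1.0

# Euler integration of  d s_perp/dt = -s_perp * gamma*alpha*(k-2q)^2
#                                      / (gamma^2 + lam^2 (s_perp^2 + sz0^2))
dt, n_steps = 1e-3, 200_000            # integrate up to t = 200
s = s0
for _ in range(n_steps):
    s += dt * (-s * gamma * alpha * k2q**2
               / (gamma**2 + lam**2 * (s**2 + sz0**2)))
t = dt * n_steps

# Implicit algebraic solution: both sides should match to stepper accuracy.
lhs = (gamma**2 + lam**2 * sz0**2) * np.log(s / s0) \
      + 0.5 * lam**2 * (s**2 - s0**2)
rhs = -alpha * k2q**2 * gamma * t
print(lhs, rhs)
```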
The above solution is valid only when it is consistent with the adiabatic assumption,
\begin{align}
|\partial_t \vec{s}/\vec{s}\,|\, = \sqrt{\dfrac{(\partial_t s_\perp)^2 + (s_\perp \partial_t \theta)^2}{s_\perp^2 + s_{z,0}^2}} \lesssim \gamma~.
\end{align}
For the PSH, $k = 2q$, this condition is always satisfied as the left-hand side vanishes. For $k = (1+\varepsilon)2q \neq 2q$ the condition becomes
\begin{equation}
\left( k- 2q \right)^2 = \varepsilon^2 (2q)^2 \lesssim \dfrac{\gamma}{\sqrt{\gamma^2 + \lambda^2 s_{z,0}^2}} \dfrac{\sqrt{s_\perp^2 + s_{z,0}^2}}{s_\perp} \dfrac{\gamma^2 + \lambda^2(s_\perp^2 + s_{z,0}^2)}{\alpha}~.
\end{equation}
In conclusion, the adiabatic regime is valid only in the vicinity of the PSH, i.e.\ for small $\varepsilon$.
\section{Diabatic Evolution of Spin helix}
We have seen that the adiabatic condition is only valid in the vicinity of the PSH (say $|\varepsilon| \lesssim 0.05$). Such precision is hard to achieve in experiments, so we must study the general regime as well. As before, we are only interested in the dynamics of the spin helix, which is governed by Eqs.~(\ref{EOM1}-\ref{EOM3}). Attention should be paid to Eq.~(\ref{EOM2}): when $k$ is far from $2q$, the time-dependent term $\lambda j_z(t)$ is dominated by the term with $\alpha$ and can be neglected (this neglect is justified below); the benefit is that the coefficients become time independent, so the dynamics of $\tilde{s}^+$ and $\tilde{j}^+$ close among themselves. Once the transverse spin and current are known, the longitudinal current $j_z(t)$ can be obtained by direct integration of Eq.~(\ref{EOM3}).
The equations of motion of spin helix can be rewritten as below,
\begin{align}
\partial_t \left[ \begin{array}{cc}
\tilde{s}^+ \\ \tilde{j}^+ \end{array} \right]
&= \left[ \begin{array}{cc}
0 & -i(k-2q) \\
-i[\alpha(k-2q) + \lambda j_z] & i \lambda s_{z,0} - \gamma
\end{array} \right]
\left[ \begin{array}{cc}
\tilde{s}^+ \\ \tilde{j}^+ \end{array} \right]
= \Lambda \left[ \begin{array}{cc}
\tilde{s}^+ \\ \tilde{j}^+ \end{array} \right]~.\label{eigen1}
\end{align}
The time dependence of $j_z$ induces time dependence of the eigenvalues and eigenvectors. We approximate $j_z$ by its equilibrium value $j_z(t\to \infty) = 0$, so that the eigenvalues and eigenvectors become time independent and the general solution can be decomposed into the form
\begin{equation}
\left[ \begin{array}{cc}
\tilde{s}^+(t) \\ \tilde{j}^+(t) \end{array} \right]
=
C_1\,e^{\Lambda_1 t} \left[ \begin{array}{cc}
\tilde{s}^+ \\ \tilde{j}^+ \end{array} \right]_1
+
C_2\,e^{\Lambda_2 t} \left[ \begin{array}{cc}
\tilde{s}^+ \\ \tilde{j}^+ \end{array} \right]_2~,
\end{equation}
where the constants $C_{1,2}$ are fixed by the initial condition $(\tilde{s}^+, \tilde{j}^+)_{t=0} = (s_{\perp,0},0)$. A vanishing initial current is assumed for convenience but captures the essential features. The transverse components of the spin density and spin current are therefore
\begin{align}
s^+(x,t) &= \dfrac{s_{\perp,0}}{2(\Gamma_1 + i \Gamma_2)} \exp\left( -\dfrac{\gamma t}{2} + i \dfrac{\lambda s_{z,0}}{2} t + ikx \right) \times \left\{ (\gamma - i \lambda s_{z,0})\sin[(\Gamma_1 + i \Gamma_2)t] + 2(\Gamma_1 + i \Gamma_2)\cos[(\Gamma_1 + i \Gamma_2)t]\right\}~,\\
j^+(x,t) &= \dfrac{s_{\perp,0}}{2(\Gamma_1 + i \Gamma_2)} \exp\left( -\dfrac{\gamma t}{2} + i \dfrac{\lambda s_{z,0}}{2} t + ikx \right) \times \left\{-i2\alpha(k-2q) \sin[(\Gamma_1 + i \Gamma_2)t]\right\}~,
\end{align}
where
\begin{equation}
\Gamma_1 + i \Gamma_2 = \sqrt{\alpha \left(k-2q\right)^2 - \left(\dfrac{\gamma}{2} - i\dfrac{\lambda s_{z,0}}{2}\right)^2}~.
\end{equation}
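As a consistency check, the eigenvalues of the $2\times 2$ matrix in Eq.~(\ref{eigen1}) with $j_z=0$ should equal $-\gamma/2 + i\lambda s_{z,0}/2 \pm i(\Gamma_1+i\Gamma_2)$, which is what the decaying oscillatory solutions above encode; a numerical sketch with toy parameter values:

```python
import numpy as np

# Toy parameters (illustrative, hbar = 1); k2q stands for k - 2q.
gamma, lam, alpha, sz0, k2q = 0.05, 0.2, 1.0, 0.3, 0.5

# 2x2 evolution matrix of Eq. (eigen1) with j_z = 0.
M = np.array([[0.0, -1j * k2q],
              [-1j * alpha * k2q, 1j * lam * sz0 - gamma]])
eig = np.sort_complex(np.linalg.eigvals(M))

# Gamma_1 + i Gamma_2 as defined in the text.
Gamma = np.sqrt(alpha * k2q**2 - (gamma/2 - 1j * lam * sz0 / 2)**2)
pred = np.sort_complex(np.array(
    [-gamma/2 + 1j*lam*sz0/2 + 1j*Gamma,
     -gamma/2 + 1j*lam*sz0/2 - 1j*Gamma]))
print(np.max(np.abs(eig - pred)))   # agreement to machine precision
```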
The longitudinal component of the spin current, obeying Eq.~(\ref{EOM3}) with initial condition $j_z(0) = j_{z,0}$, is given by
\begin{equation}
j_z(t) = j_{z,0} e^{-\gamma t} - e^{-\gamma t} \dfrac{\lambda s_{\perp,0}^2 \alpha (k-2q)}{4(\Gamma_1^2 + \Gamma_2^2)} \left[ \cosh(2\Gamma_2 t) + \dfrac{\gamma}{2}\dfrac{\sinh(2\Gamma_2 t)}{\Gamma_2} - \cos(2\Gamma_1 t) - \dfrac{\gamma}{2}\dfrac{\sin(2\Gamma_1 t)}{\Gamma_1} \right]~.
\end{equation}
If the time dependence of $j_z(t)$ is retained, the eigenvalues of Eq.(\ref{eigen1}) become
\begin{equation}
\Lambda_{1,2} = -\dfrac{\gamma}{2} + i \dfrac{\lambda s_{z,0}}{2} \pm i\sqrt{\alpha \left(k-2q\right)^2 - \left(\dfrac{\gamma}{2} - i\dfrac{\lambda s_{z,0}}{2}\right)^2 + \lambda j_z \left(k - 2q\right)}~.
\end{equation}
The first two terms remain unchanged, but the square-root term $\Gamma_1 + i \Gamma_2$ becomes time dependent. An order-of-magnitude estimate shows that
\begin{equation}
\alpha \left(k-2q\right)^2 ~\gg~ \lambda |j_z| |k - 2q| ~\sim~ \left|\dfrac{\gamma}{2} - i\dfrac{\lambda s_{z,0}}{2}\right|^2~,\label{magnitude}
\end{equation}
hence the approximation $\Gamma_1 + i \Gamma_2 \approx \sqrt{\alpha} |k-2q| = \Gamma$ simplifies the expressions,
\begin{align}
s^+(x,t) &= s_{\perp,0} \exp\left(-\dfrac{\gamma t}{2} + i \dfrac{\lambda s_{z,0}}{2} t + ikx \right) \left( \cos \Gamma t + \dfrac{\gamma - i \lambda s_{z,0}}{2\Gamma}\sin \Gamma t \right)~,\\
j^+(x,t) &= i \sqrt{\alpha} s_{\perp,0} \exp\left(-\dfrac{\gamma t}{2} + i \dfrac{\lambda s_{z,0}}{2} t + ikx \right) \sin(\Gamma t)\,\mbox{sign}(2q - k)~,\\
j_z(t) &= j_{z,0} e^{-\gamma t} - e^{-\gamma t} \dfrac{\lambda s_{\perp,0}^2}{4 (k-2q)} \left[ 1 + \gamma t - \cos(2\Gamma t) - \dfrac{\gamma}{2}\dfrac{\sin(2\Gamma t)}{\Gamma} \right]~.
\end{align}
Recall that $\lambda |j_z| \ll \Gamma$ was assumed when decoupling the equations of motion; for self-consistency we substitute the amplitude of $j_z$ and find
\begin{equation}
\lambda |j_z|/\Gamma \sim (\lambda s_{\perp,0}/\Gamma)^2 \ll 1~,
\end{equation}
the last inequality is identical to the limit taken in Eq.(\ref{magnitude}).
\section{Quenching of Rabi coupling on PSH}
After substituting Eqs.~(20-23) of the main text into Eqs.~(\ref{eq n}-\ref{eq spin j}) and dropping terms quadratic in the fluctuations, one obtains an inhomogeneous diffusion equation for the fluctuation vector $\vec{\delta V} = (\delta n, \delta j_0, \delta s_z, \delta j_z, \delta s_x, \delta j_x, \delta s_y, \delta j_y)^T$,
\begin{equation}
\partial_t \vec{\delta V}(x,t) = \hat{\mathbf{H}}(x,\partial_x)\vec{\delta V}(x,t) + \vec{g}(x)
\end{equation}
with source term $\vec{g}(x) = \Omega_R (0, {2q}s_{{\rm psh},y}/{m}, s_{{\rm psh},y},0,0,0, -s_{z,0},qn_0/(2m))^T$. The evolution matrix and its blocks are given by
\begin{align}
\hat{\mathbf{H}} &= \left[
\begin{array}{cc}
H_{11} & H_{12} \\
H_{21} & H_{22} \\
\end{array}
\right]~,\\
H_{11} &= \left[
\begin{array}{cccc}
0 & -\partial_x & 0 & 0 \\
-\left(\frac{k_B T}{m} + \frac{3g n_0}{4m}\right) \partial_x & 0 & -\frac{g}{m}(\partial_x s_{z,0} + s_{z,0} \partial_x) & 0 \\
0 & 0 & 0 & -\partial_x \\
-\frac{3g s_{z,0}}{4m}\partial_x & 0 & -\alpha \partial_x & -\gamma
\end{array}
\right]~,\\
H_{12} &= \left[
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
-\frac{g}{m}(\partial_x s_{{\rm psh},x} + s_{{\rm psh},x} \partial_x) & 0 & -\frac{g}{m}(\partial_x s_{{\rm psh},y} + s_{{\rm psh},y} \partial_x) + \frac{2q\Omega_R}{m} & 0 \\
0 & 0 & +\Omega_R & 0 \\
0 & -\lambda s_{{\rm psh},y} & 0 & + \lambda s_{{\rm psh},x} + \Omega_R
\end{array}
\right]~,\\
H_{21} &= \left[
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
-\frac{3g s_{{\rm psh},x}}{4m}\partial_x & 0 & 0 & +\lambda s_{{\rm psh},y} \\
0 & 0 & -\Omega_R & 0 \\
-\frac{3g s_{{\rm psh},y}}{4m}\partial_x + \frac{q\Omega_R}{2m} & 0 & 0 & -\lambda s_{{\rm psh},x} - \Omega_R \\
\end{array}
\right]~,\\
H_{22} &= \left[
\begin{array}{cccc}
0 & -\partial_x & 0 & -2q \\
-\alpha\partial_x & -\gamma & - 2q\alpha & -\lambda s_{z,0} \\
0 & 2q & 0 & -\partial_x \\
2q\alpha & \lambda s_{z,0} & -\alpha \partial_x & -\gamma
\end{array}
\right]~.
\end{align}
The inhomogeneous diffusion equation can be solved by iterated integration,
\begin{align}
\vec{\delta V}(x,t) &= \int_{0}^{t} \vec{g}(x)\,\mathrm{d}t^\prime + \int_{0}^{t} \hat{\mathbf{H}}(x,\partial_x) \vec{\delta V}(x,t^\prime)\,\mathrm{d}t^\prime \nonumber \\
&= \int_{0}^{t} \vec{g}(x)\,\mathrm{d}t^\prime + \int_{0}^{t} \int_{0}^{t^\prime} \hat{\mathbf{H}}(x,\partial_x) \vec{g}(x)\,\mathrm{d}t^{\prime \prime} \mathrm{d}t^\prime + \int_{0}^{t} \int_{0}^{t^\prime} \hat{\mathbf{H}}^2 \vec{\delta V}(x, t^{\prime\prime})\,\mathrm{d}t^{\prime \prime} \mathrm{d}t^\prime \nonumber \\
&= \cdots = \left(\sum_{n=1}^{\infty} \dfrac{t^n}{n!} \hat{\mathbf{H}}^{n-1} \right)\vec{g}(x) = \hat{\mathbf{H}}^{-1}\left[\exp(\hat{\mathbf{H}} t) - \hat{\mathbf{I}} \right] \vec{g}(x)~,
\end{align}
where the initial condition $\vec{\delta V}(x,0) = 0$ has been imposed, i.e.\ only the inhomogeneous solution is considered. The identical solution can alternatively be obtained by writing $\vec{\delta V}(x,t)$ in the form
\begin{equation}
\vec{\delta V}(x,t) = \hat{\mathbf{S}}(t)\vec{g}(x)~~\mbox{with}~~\hat{\mathbf{S}}(0)=0~,
\end{equation}
therefore the diffusion equation becomes
\begin{align}
\partial_t [\hat{\mathbf{S}}(t)\vec{g}(x)] &= \hat{\mathbf{H}}(x,\partial_x) [\hat{\mathbf{S}}(t)\vec{g}(x)] + \vec{g}(x)\\
\partial_t \hat{\mathbf{S}}(t) &= \hat{\mathbf{H}}(x,\partial_x) \hat{\mathbf{S}}(t) + \hat{\mathbf{I}}~,
\end{align}
and implies
\begin{equation}
\hat{\mathbf{S}}(t) = \sum_{n=1}^{\infty} \dfrac{t^n}{n!} \hat{\mathbf{H}}^{n-1} = \hat{\mathbf{H}}^{-1}\left[\exp(\hat{\mathbf{H}} t) - \hat{\mathbf{I}} \right]~.
\end{equation}
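The closed form for $\hat{\mathbf{S}}(t)$ can be checked against its defining series numerically; the sketch below uses a generic well-conditioned random matrix standing in for $\hat{\mathbf{H}}$ (purely illustrative):

```python
import numpy as np

# Check the identity  S(t) = sum_{n>=1} t^n/n! H^{n-1} = H^{-1}[exp(Ht) - I]
# on a generic 8x8 matrix standing in for the evolution operator.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 8)) / 4
t = 0.7

# Truncated series: the n-th term is t^n/n! H^{n-1}.
S_series = np.zeros_like(H)
term = t * np.eye(8)                   # n = 1 term: t * H^0
for n in range(1, 40):
    S_series += term
    term = term @ H * (t / (n + 1))    # recurrence a_{n+1} = a_n H t/(n+1)

# Closed form via eigendecomposition of H (avoids an expm dependency).
w, V = np.linalg.eig(H)
expHt = (V * np.exp(w * t)) @ np.linalg.inv(V)
S_closed = np.linalg.inv(H) @ (expHt - np.eye(8))
print(np.max(np.abs(S_series - S_closed)))   # agreement to round-off
```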
The above solution is valid only for $\vec{\delta V} \approx \vec{g}(x)t \ll \vec{V}_0$, i.e.\ $\Omega_R t \ll 1$.
\end{document}
\titlespacing\section{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\subsection{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\subsubsection{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\newtheorem{theorem}{Theorem}
\newtheorem{proposition}{Proposition}
\newtheorem{axiom}{Axiom}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{definition}{Definition}
\newtheorem{assumption}{Assumption}
\newtheorem{corollary}{Corollary}[proposition]
\title{Causal Inference with Noisy and Missing Covariates via Matrix Factorization}
\author{
Nathan Kallus\footnote{Alphabetical order}, Xiaojie Mao\footnotemark[\value{footnote}], Madeleine Udell\footnotemark[\value{footnote}]
\\
Cornell University\\
\texttt{\{kallus, xm77, udell\}@cornell.edu} \\
}
\date{\vspace{-5ex}}
\begin{document}
\maketitle
\begin{abstract}
Valid causal inference in observational studies often requires controlling for confounders.
However, in practice measurements of confounders may be noisy,
and can lead to biased estimates of causal effects.
We show that we can reduce the bias caused by measurement noise
using a large number of noisy measurements of the underlying confounders.
We propose the use of matrix factorization to infer the confounders from noisy covariates,
a flexible and principled framework that adapts to missing values,
accommodates a wide variety of data types,
and can augment a wide variety of causal inference methods.
We bound the error for the induced average treatment effect estimator
and show it is consistent in a linear regression setting,
using Exponential Family Matrix Completion preprocessing.
We demonstrate the effectiveness of the proposed procedure in
numerical experiments with
both synthetic data and real clinical data.
\end{abstract}
\section{Introduction}
Estimating the causal effect of an intervention is a fundamental goal across many domains.
Examples include evaluating the effectiveness of recommender systems \citep{schnabel2016recommendations}, identifying the effect of therapies on patients' health \citep{connors1996effectiveness} and understanding the impact of compulsory schooling on earnings \citep{angrist1991does}.
However, this task is notoriously difficult in observational studies due to the presence of confounders: variables that affect both the intervention and the outcomes. For example, intelligence level can influence both students' decisions regarding whether to go to college, and their earnings later on. Students who choose to go to college may have higher intelligence than those who do not. As a result, the observed increase in earnings associated with attending college is confounded with the effect of intelligence and thus cannot faithfully represent the causal effect of college education.
One standard way to avoid such confounding effect is to control for all confounders \citep{imbens2015causal}.
However, this solution poses practical difficulties.
On the one hand, an exhaustive list of confounders is not known a priori, so investigators usually adjust for a large number of covariates for fear of missing important confounders.
On the other hand, \textit{measurement noise} may abound in the collected data: some confounder measurements may be contaminated with noise (e.g., data recording error),
while other confounders may not be amenable to direct measurements and instead admit only proxy measurements. For example, we may use an IQ test score as a proxy for intelligence.
It is well known that using proxies in place of the true confounders
leads to biased causal effect estimates \citep{frost1979proxy, wickens1972note, wooldridge2015introductory}.
However, we show in a linear regression setting that the bias due to measurement noise can be effectively alleviated by using many proxies for the underlying confounders (Section 2.2).
For example, in addition to IQ test score, we may also use coursework grades and other academic achievements to characterize the intelligence.
Intuitively, using more proxies may allow for a more accurate reconstruction of
the confounder and thus may facilitate more accurate causal inference.
Therefore, collecting a large number of covariates is beneficial for causal inference not only
to avoid confounding effects but also to alleviate bias caused by measurement noise.
Although in the big-data era, collecting myriad covariates is easier than ever before,
it is still challenging to use the collected noisy covariates in causal inference.
On the one hand, data is inevitably contaminated with missing values,
especially when we collect many covariates.
Inaccurate imputation of these missing values may aggravate measurement noise.
Moreover,
missing value imputation can at most gauge the values of noisy
covariates but inferring the latent confounders is the most critical
for accurate causal inference.
On the other hand, the large number of covariates may include heterogeneous data types (e.g., continuous, ordinal, categorical, etc.) that must be handled appropriately
to exploit covariate information.
To address the aforementioned problems, we propose to use low rank matrix factorization as a principled approach to preprocess covariate matrices for causal inference.
This preprocessing step infers the confounders for subsequent causal inference from partially observed noisy covariates. Investigators can thus collect more covariates to control for potential confounders and use more proxy variables to characterize the unmeasured traits of the subjects without being hindered by missing values.
Moreover, matrix factorization preprocessing is a very general framework. It can adapt to a wide variety of data types and it can be seamlessly integrated with many causal inference techniques, e.g., regression adjustment, propensity score reweighting, matching \citep{imbens2015causal}.
Using matrix factorization as a preprocessing step makes the whole procedure modular and enables investigators to take advantage of existing packages for matrix factorization and causal inference.
We rigorously investigate the theoretical implication of the matrix factorization preprocessing with respect to causal effect estimation.
We establish a convergence rate for the induced average treatment effect (ATE) estimator and show its consistency in a linear regression setting with Exponential Family Matrix Completion preprocessing \citep{gunasekar2014exponential}.
In contrast to traditional applications of matrix factorization methods with matrix reconstruction as the end goal, our theoretical analysis validates matrix factorization as a preprocessing step for causal inference.
We further evaluate the effectiveness of our proposed procedure on both synthetic datasets and a clinical dataset involving the mortality of twins born in the USA introduced by Louizos et al. \citep{louizos2017causal}.
We empirically illustrate that matrix factorization can accurately estimate causal effects by effectively inferring the latent confounders from a large number of noisy covariates.
Moreover, matrix factorization preprocessing achieves superior performance with loss functions adapting to the data types. It also works well with many causal inference methods and is robust to the presence of missing values.
{\bf Related work.} Our paper builds upon low rank matrix completion methods that have been successfully applied in many domains to recover data matrices from incomplete and noisy observations \citep{bennett2007netflix,cao2015image,schuler2016discovering}.
These methods are not only computationally efficient but also theoretically sound with provable guarantees \citep{gunasekar2014exponential,candes2009exact,candes2010power,candes2010matrix,recht2011simpler,keshavan2010matrix}.
Moreover, matrix completion methods have been developed to accommodate heterogeneous data types prevalent in empirical studies by using a rich library of loss functions and penalties \citep{udell2016generalized}. Recently, Athey et al. \citep{athey2017matrix} use matrix completion methods to impute the unobservable counterfactual outcomes and estimate the ATE for panel data. In contrast, our paper focuses on measurement noise in the covariate matrix.
Measurement noise has been considered in literature for a long time \citep{frost1979proxy,wickens1972note}. Louizos et al. recently \citep{louizos2017causal} propose to use Variational Autoencoder as a heuristic way to recover the latent confounders. Similarly, they also suggest that multiple proxies are important for the confounder recovery. In contrast, matrix factorization methods, despite stronger parametric assumptions, address the problem of missing values simultaneously, require considerably less parameter tuning, and have theoretical justifications.
{\bf Notation.} For two scalars $a, b \in \mathbb{R}$, denote $a \vee b = \max \{a, b\}$ and $a \wedge b = \min\{a, b\}$.
For a positive integer $N$, we use $[N]$ to represent the set $\{1, 2, \dots, N\}$. For a set $\Omega$, $|\Omega|$ is the total number of elements in $\Omega$.
For matrix $X \in \mathbb{R}^{N \times p}$, denote its singular values as $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_{N \wedge p} \ge 0$. The spectral norm, nuclear norm, Frobenius norm and max norm of $X$ are defined as $\| X \| = \sigma_1$, $\|X\|_{\star} = \sum_{i = 1}^{N \wedge p}\sigma_i$, $\| X\|_F = \sqrt{\sigma_1^2 + \dots + \sigma_{N \wedge p}^2}$ and $\| X\|_{\max} = \underset{ij}{\max}\ |X_{ij}|$ respectively.
The projection matrix for $X$ is defined as $P_X = X(X^\top X)^{-1}X^\top$.
We use $\operatorname{col}(X)$ to denote the column space of $X$ and $\sigma(z)$ to denote the sigmoid function $1/(1 + \exp(-z))$.
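For concreteness, the norms and the projection matrix defined above can be computed as follows (a purely illustrative sketch on a random matrix):

```python
import numpy as np

# Unpack the notation above on a small random matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
sv = np.linalg.svd(X, compute_uv=False)    # singular values, descending

spectral = sv[0]                # ||X||      (spectral norm)
nuclear = sv.sum()              # ||X||_*    (nuclear norm)
frob = np.sqrt((sv**2).sum())   # ||X||_F    (equals entrywise 2-norm)
maxnorm = np.abs(X).max()       # ||X||_max

assert np.isclose(frob, np.linalg.norm(X, 'fro'))

# P_X = X (X^T X)^{-1} X^T projects onto col(X): it is idempotent and
# fixes every column of X.
P = X @ np.linalg.inv(X.T @ X) @ X.T
assert np.allclose(P @ P, P)
```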
\section{Causal inference with low rank matrix factorization}
In this section, we first introduce the problem of causal inference under measurement noise and missing values formally and define notation.
We then show that the bias caused by measurement noise in linear regression is alleviated when more covariates are used.
Finally we review low rank matrix factorization methods and describe the proposed procedure for causal inference.
\subsection{Problem formulation}
We consider an observational study with $N$ subjects.
For subject $i$, $T_i$ is the treatment variable and we assume $T_i \in \{0, 1\}$ for simplicity.
We use $Y_i(0), Y_i(1)$ to denote the potential outcomes for subject $i$ under treatment and control respectively \citep{imbens2015causal}.
We can only observe the potential outcome corresponding to the treatment level that subject $i$ received, i.e., $Y_i = Y_i(T_i)$.
Assume that $\{Y_i(0), Y_i(1), T_i\}_{i=1}^N$ are independently and identically distributed (i.i.d.). We denote $T = [T_1, ..., T_N]^\top$ and $Y = [Y_1, ..., Y_N]^\top$.
For the ease of exposition, we focus on estimating the average treatment effect (ATE):
\begin{equation*}
\tau = \mathbb{E}(Y_i(1) - Y_i(0)).
\end{equation*}
One standard way to estimate ATE is to adjust for the confounders. Suppose we have access to the confounders $U_i \in \mathbb{R}^r$ for subject $i$, $\forall i \in [N]$. Then we can employ many standard causal inference techniques (e.g., regression adjustment, propensity score reweighting, matching, etc.) to estimate ATE under the following unconfoundedness assumption:
\begin{assumption}[Unconfoundedness] \label{assumption: unconf}
For each $t = 0, 1$ and $i = 1, ..., N$, $Y_i(t)$ is independent of $T_i$ conditionally on $U_i$: $\mathbb{P}(Y_i(t) \mid T_i, U_i) = \mathbb{P}(Y_i(t) \mid U_i)$.
\end{assumption}
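As a minimal illustrative sketch of regression adjustment under this assumption (the function name, the linear outcome model and all parameter values below are our own assumptions, not from the paper):

```python
import numpy as np

def ate_regression_adjustment(Y, T, U):
    """OLS of Y on an intercept, the treatment T and the confounders U;
    the coefficient on T is the ATE estimate (valid under a linear model)."""
    Z = np.column_stack([np.ones(len(Y)), T, U])
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return beta[1]

# toy check: confounded treatment assignment, true effect tau = 2
rng = np.random.default_rng(0)
N, r = 5000, 3
U = rng.normal(size=(N, r))
T = rng.binomial(1, 1 / (1 + np.exp(-U.sum(axis=1))))
Y = U @ np.array([1.0, -1.0, 0.5]) + 2.0 * T + 0.1 * rng.normal(size=N)
tau_hat = ate_regression_adjustment(Y, T, U)  # close to 2
```

Because the confounders enter the regression, the estimate recovers the true effect despite $T$ depending on $U$.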
However, in practice we may not observe $\{U_i\}_{i=1}^N$ directly.
Instead suppose we can only partially observe covariates $X_i \in \mathbb{R}^p$, which is a collection of noisy measurements for the confounders.
The covariates $X_i$ can represent various data types by canonical encoding schemes.
For example, Boolean data is encoded using $1$ for true and $-1$ for false.
Many other encoding examples, e.g., categorical data or ordinal data, can be found in Udell et al. \citep{udell2016generalized}. We concatenate these covariates into $X \in \mathbb{R}^{N \times p}$.
We assume that only entries of $X$ over a subset of indices $\Omega \subset [N] \times [p]$ are observed and denote $\mathcal{P}_{\Omega}(X) = \sum_{(i, j) \in \Omega}X_{ij}e_ie_j^\top$ as the observed covariate matrix.
We further specify the generative model for individual entries $X_{ij}$, $(i, j) \in [N] \times [p]$.
We assume that $X_{ij}$ are drawn independently from distributions $\mathbb{P}(X_{ij} \mid U_i^\top V_j)$, where $V_j \in \mathbb{R}^r$ represents the loadings of the $j^{\text{th}}$ covariate on the confounders.
The distribution $\mathbb{P}(X_{ij} \mid U_i^\top V_j)$ models the \textit{measurement noise} mechanism for $X_{ij}$.
For example, if $X_{i1}$ is a measurement for $U_{i1}$ contaminated with standard Gaussian noise, then $\mathbb{P}(X_{i1} \mid U_i^\top V_1) \sim \mathcal{N}(U_i^\top V_1, 1)$ where $V_1 = [1, 0, ..., 0]^\top $.
This generative model also accommodates \textit{proxy variables}.
Consider a simplified version of Spearman's theory of measurable intelligence \citep{spearman1904general}, where multiple kinds of test scores are used to characterize two kinds of (unobservable) intelligence: quantitative and verbal.
Suppose that there are $p$ tests (e.g., Classics, Math, Music, etc.) which are recorded in $X_{i1}, ..., X_{ip}$ and the two kinds of intelligence are represented by $U_{i1}$ and $U_{i2}$.
We assume that these proxy variables are noisy realizations of \textit{linear combinations} of the two kinds of intelligence.
This can be modelled using the generative model $X_{ij} \sim \mathbb{P}(X_{ij} \mid U_i^\top V_j)$ with $V_{j} = [V_{j1}, V_{j2}, 0, ..., 0]^\top $ for $j \in [p]$.
While this linear assumption seems restrictive, it's approximately true for a large class of nonlinear latent variable models when many proxies are used for a small number of latent variables \citep{udell2017nice}.
We aim to estimate ATE based on $\mathcal{P}_{\Omega}(X)$, $Y$ and $T$.
This is, however, very challenging in the presence of measurement noise and missing values.
On the one hand, most causal inference techniques cannot accommodate missing values directly and appropriate preprocessing is needed.
On the other hand, it is well known that measurement noise can dramatically undermine the unconfoundedness assumption and lead to biased causal effect estimation \citep{frost1979proxy, wickens1972note}, i.e., $\mathbb{P}(Y_i(t) | T_i, X_{i}) \ne \mathbb{P}(Y_i(t) | X_{i})$ for $t = 0, 1$.
\subsection{Measurement noise and bias}
In this subsection, we show that using a large number of noisy covariates can effectively alleviate the ATE estimation bias resulting from measurement noise in the linear regression setting.
Suppose there are no missing values, i.e., $\mathcal{P}_{\Omega}(X) = X$.
We consider the linear regression model: $\forall i \in [N]$, $Y_i = U_i^\top\alpha + \tau T_i + \epsilon_i$ , where $\alpha \in \mathbb{R}^r$ is the coefficient for confounders $U_i$, $\tau$ is the ATE, and $\epsilon_i$ are i.i.d sub-Gaussian error terms with mean $0$ and variance $\sigma^2$.
For each $i \in [N]$, $T_i$ is independently and probabilistically assigned according to the confounders $U_i$. Unconfoundedness (Assumption \ref{assumption: unconf}) implies that $T_i$ is independent of $\epsilon_i$ conditionally on $U_i$.
\begin{proposition} \label{prop: bias}
Consider the additive noise model: $X = UV^\top + W$ where $\{U_i\}_{i=1}^N$ are i.i.d samples from a common distribution, $W \in \mathbb{R}^{N \times p}$ contains independent noise entries with mean $0$ and variance $\sigma_w^2$, and the entries of $W$ are independent of $\{U_i\}_{i=1}^N$. Suppose that $r$, $p$ are fixed and $p < N$.
As $N \to \infty$, the asymptotic bias of least squares estimator in linear regression of $Y_i$ on $X_i$ and $T_i$ has the following form:
\begin{align} \label{formula: bias}
\frac{\mathbb{E}(T_iU_i)\mathbb{E}(U^\top_iU_i)^{-1}[\frac{1}{\sigma_w^2} V^\top V + \mathbb{E}(U^\top_iU_i)^{-1}]^{-1}\alpha}{\mathbb{E}(T_i^2) - \mathbb{E}(T_iU_i)[(\frac{1}{\sigma_w^2}V^\top V)^{-1} + \mathbb{E}(U^\top_iU_i)]^{-1}\mathbb{E}(U^\top_iT_i)}
\end{align}
\end{proposition}
\begin{corollary} \label{corollary: bias}
The asymptotic bias (\ref{formula: bias}) diminishes to $0$ when $\|V\| \to \infty$.
\end{corollary}
Corollary \ref{corollary: bias} suggests an important fact: collecting a large number of noisy covariates is an effective remedy for the bias induced by measurement noise, as long as the loadings of the covariates on the latent confounders do not vanish too fast.
Surprisingly, in this independent additive noise case, the asymptotic bias (\ref{formula: bias}) is even nearly optimal: it is identical to the optimal asymptotic bias we would have if we knew the unobservable $V$ (Proposition 2, Appendix A).
In the rest of the paper, we further exploit this fact by using matrix factorization preprocessing, which adapts to missing values, heterogeneous data types and more general noise models.
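A quick Monte Carlo sketch of Corollary \ref{corollary: bias} (our own simulation, with illustrative parameter values, not from the paper): regressing $Y$ on $(T, X)$ is badly biased with few noisy covariates, and the bias shrinks as $p$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def ate_bias(p, N=4000, r=2, tau=1.0, sigma_w=2.0):
    """Absolute error of the OLS coefficient on T when regressing Y on
    (intercept, T, X), with X = U V^T + W and Gaussian measurement noise W."""
    U = rng.normal(size=(N, r))
    T = rng.binomial(1, 1 / (1 + np.exp(-U.sum(axis=1))))
    Y = U.sum(axis=1) + tau * T + 0.1 * rng.normal(size=N)
    V = rng.normal(size=(p, r))
    X = U @ V.T + sigma_w * rng.normal(size=(N, p))
    Z = np.column_stack([np.ones(N), T, X])
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return abs(beta[1] - tau)

print(ate_bias(5), ate_bias(500))  # the second is markedly smaller
```

With $p = 5$ the measurement noise attenuates the adjustment for $U$ and a substantial bias remains; with $p = 500$ the covariates collectively pin down $U$ and the bias nearly vanishes, matching the corollary.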
\subsection{Low rank matrix factorization preprocessing}
In this paper, we propose to recover the latent confounders $\{U_i \}_{i=1}^{N}$ from noisy and incomplete observations $\mathcal{P}_{\Omega}(X)$ by using low rank matrix factorization methods, which rely on the assumption:
\begin{assumption}[Low Rank Matrix] \label{assumption:low_rank}
The fully observed matrix $X$ is a noisy realization of a low rank matrix $\Phi \in \mathbb{R}^{N \times p}$ with rank $r \ll \min \{N, p\}$.
\end{assumption}
In the context of causal inference, Assumption \ref{assumption:low_rank} corresponds to the surrogate-rich setting where many proxies are used for a small number of latent confounders.
Under the generative model in Section 2.1, Assumption \ref{assumption:low_rank} implies that $\Phi = UV^\top$ where $U = [U_1, ..., U_N]^\top $ is the confounder matrix and $V = [V_1, ..., V_p]^\top$ is the covariate loading matrix.
Although this assumption is unverifiable, low rank structure has been shown to pervade many domains such as images \citep{cao2015image}, customer preferences \citep{bennett2007netflix}, healthcare \citep{schuler2016discovering}, etc.
The recent work by Udell and Townsend \citep{udell2017nice} provides theoretical justifications that low rank structure arises naturally from a large class of latent variable models.
Moreover, low rank matrix factorization methods usually assume the \textit{Missing Completely at Random} (MCAR) setting where the observed entries are sampled uniformly at random \citep{gunasekar2014exponential,little2014statistical}.
\begin{assumption}[MCAR] \label{assumption: missing}
$\forall (i, j) \in \Omega$, $i \sim \operatorname{uniform}([N])$ and $j \sim \operatorname{uniform}([p])$ independently, and the sampling is independent of the measurement noise.
\end{assumption}
Our paper takes the Exponential Family Matrix Completion (EFMC) as a concrete example, which further assumes an exponential family noise mechanism \citep{gunasekar2014exponential}.
\begin{assumption} [Natural Exponential Family]
Suppose that each entry $X_{ij}$ is drawn independently from the corresponding \textit{natural exponential family} with $\Phi_{ij}$ as the natural parameter:
\begin{equation*}
\mathbb{P}(X_{ij} | \Phi_{ij}) = h(X_{ij})\exp(X_{ij}\Phi_{ij} - G(\Phi_{ij}))
\end{equation*}
where $G: \mathbb{R} \to \mathbb{R}$ is a strictly convex and analytic function called the log-partition function. Furthermore, for some $\eta > 0$ and $\forall\ u \in \mathbb{R}$, $\nabla^2 G(u) \ge \operatorname{e}^{-\eta |u|}$.
\end{assumption}
Exponential family distributions encompass a wide variety of distributions like Gaussian, Poisson, Bernoulli that have been extensively used for modelling different data types \citep{mccullagh1984generalized}.
For example, if $X_{ij}$ takes binary values $\pm 1$, then we can model it using Bernoulli distribution: $\mathbb{P}(X_{ij} \mid \Phi_{ij}) = \sigma(X_{ij}\Phi_{ij})$.
Moreover, it can be verified that the assumption on $\nabla^2 G(u)$ is satisfied by commonly used members of natural exponential family \citep{gunasekar2014exponential}.
EFMC estimates $\Phi$ by the following regularized \textit{M}-estimator:
\begin{equation} \label{formula: exp_family}\textstyle
\hat{\Phi} = \operatorname{argmin}_{\|\Phi\|_{\max} \le \frac{\alpha^*}{\sqrt{Np}}} \ \frac{Np}{|\Omega|}[\sum_{(i, j)\in \Omega} - \log \mathbb{P}(X_{ij}| \Phi_{ij})] + \lambda \|\Phi\|_{\star}
\end{equation}
The estimator in (\ref{formula: exp_family}) involves solving a convex optimization problem, whose solution can be found efficiently by many off-the-shelf algorithms \citep{boyd2004convex}.
The nuclear norm regularization encourages a low-rank solution: the larger the tuning parameter $\lambda$, the smaller the rank of the solution $\hat{\Phi}$. In practice, $\lambda$ is usually selected by cross-validation.
Moreover, the constraint $\|\Phi\|_{\max} \le \frac{\alpha^*}{\sqrt{Np}}$ appears merely as an artifact of the proof and it is recommended to drop this constraint in practice \citep{kallus2016dynamic}.
It can be proved that, under Assumptions 2--4 and some regularity assumptions, the relative reconstruction error of $\hat{\Phi}$ converges to $0$ with high probability (Lemma 4, Appendix A).
Furthermore, EFMC can be extended by using a rich library of loss functions and regularization functions \citep{udell2016generalized,singh2008unified}.
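In the Gaussian case, the loss in (\ref{formula: exp_family}) is quadratic and the problem can be solved by proximal gradient descent, where the proximal operator of $\lambda\|\cdot\|_{\star}$ soft-thresholds the singular values. A minimal sketch (our own implementation, dropping the max-norm constraint as recommended above):

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_completion(X, mask, lam, n_iter=200):
    """Minimize 0.5*||mask*(Phi - X)||_F^2 + lam*||Phi||_* (the Gaussian-loss
    special case of the EFMC objective) by proximal gradient with step 1."""
    Phi = np.zeros_like(X)
    for _ in range(n_iter):
        Phi = svt(Phi - mask * (Phi - X), lam)  # gradient step, then prox
    return Phi

# toy run: 70% of a rank-1 matrix observed with small noise
rng = np.random.default_rng(0)
Phi0 = rng.normal(size=(50, 1)) @ rng.normal(size=(1, 40))
mask = rng.random(Phi0.shape) < 0.7
X = mask * (Phi0 + 0.01 * rng.normal(size=Phi0.shape))
Phi_hat = nuclear_norm_completion(X, mask, lam=0.5)
rel_err = np.linalg.norm(Phi_hat - Phi0) / np.linalg.norm(Phi0)
```

The gradient of the quadratic loss is Lipschitz with constant $1$, so a unit step size suffices; larger $\lambda$ yields lower-rank iterates, as noted above.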
Suppose the solution $\hat{\Phi}$ from (\ref{formula: exp_family}) is of rank $\hat{r}$.
Then we can use its top $\hat{r}$ left singular vectors, collected in $\hat{U}$, to estimate the confounder matrix $U$.
The estimated confounder matrix $\hat{U}$ is used in place of the covariate matrix for subsequent causal inference methods (e.g., regression adjustment, propensity reweighting, matching, etc.). Admittedly, the confounder matrix $U$ can be identified only up to nonsingular linear transformation.
However, this suffices for many causal inference techniques: regression adjustment based on linear regression \citep{wooldridge2015introductory}, polynomial regression or neural networks trained by backpropagation \citep{ng2004feature}, propensity reweighting or propensity matching with propensity scores estimated by logistic regression, and Mahalanobis matching are all invariant to nonsingular linear transformations. Moreover, this invariance is important since the latent confounders may be abstract quantities without a commonly acknowledged scale (e.g., intelligence).
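As a self-contained sketch of this pipeline with complete covariates (our own function and parameter choices; a plain truncated SVD stands in for the full EFMC solver):

```python
import numpy as np

def ate_after_factorization(X, Y, T, r_hat):
    """Estimate confounders by the top r_hat left singular vectors of X,
    then regress Y on (intercept, T, U_hat); the coefficient on T is tau_hat.
    Any nonsingular linear transformation of U_hat gives the same estimate."""
    U_hat = np.linalg.svd(X, full_matrices=False)[0][:, :r_hat]
    Z = np.column_stack([np.ones(len(Y)), T, U_hat])
    return np.linalg.lstsq(Z, Y, rcond=None)[0][1]

# toy check: tau = 2, many noisy covariates for two latent confounders
rng = np.random.default_rng(2)
N, p, r = 2000, 400, 2
U = rng.normal(size=(N, r))
T = rng.binomial(1, 1 / (1 + np.exp(-U.sum(axis=1))))
Y = U.sum(axis=1) + 2.0 * T + 0.5 * rng.normal(size=N)
X = U @ rng.normal(size=(p, r)).T + rng.normal(size=(N, p))
tau_hat = ate_after_factorization(X, Y, T, r)  # close to 2
```

The left singular vectors only identify $\operatorname{col}(U)$, but as noted above, that is all OLS adjustment needs.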
\section{Theoretical guarantee}
In this section, we theoretically justify matrix factorization preprocessing for estimating causal effects in the linear regression setting.
We first identify sufficient conditions on the estimated confounder matrix $\hat{U}$ for consistently estimating ATE in linear regression.
We then derive an error bound for the induced ATE estimator with EFMC (\ref{formula: exp_family}) as the preprocessing step. Proofs are deferred to Appendix A.
Consider the linear regression model in Section 2.2. Suppose we use EFMC preprocessing and linear regression for causal inference, which leads to the ATE estimator $\hat{\tau}$.
It is well known that the accuracy of $\hat{\tau}$ relies on how well the estimated column space $\operatorname{col}(\hat{U})$ approximates the column space of true confounder matrix $\operatorname{col}(U)$. Ideally, if $\operatorname{col}(\hat{U})$ aligns with $\operatorname{col}(U)$ perfectly, then $\hat{\tau}$ is identical to the least squares estimator based on true confounders and is thus consistent.
We introduce the following distance metric between two column spaces \citep{cai2018rate}:
\begin{definition}
Consider two matrices $\hat{M} \in \mathbb{R}^{N \times k}$ and $M \in \mathbb{R}^{N \times r}$ with orthonormal columns. The principal angle between their column spaces is defined as
\begin{equation*}
\angle(M, \hat{M}) = \sqrt{1 - \sigma^2_{r \wedge k}(\hat{M}^\top M)}
\end{equation*}
\end{definition}
This metric measures the magnitude of the ``angle'' between the two column spaces. For example, $\angle(M, \hat{M}) = 0$ if $\operatorname{col}(M) = \operatorname{col}(\hat{M})$, while $\angle(M, \hat{M}) = 1$ if they are orthogonal.
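This metric is easy to compute from a small SVD; a short helper (our own naming) illustrating the two extreme cases:

```python
import numpy as np

def principal_angle(M, M_hat):
    """sqrt(1 - sigma_min^2(M_hat^T M)) for matrices with orthonormal
    columns, where sigma_min is the smallest of the min(r, k) singular
    values of M_hat^T M."""
    s = np.linalg.svd(M_hat.T @ M, compute_uv=False)
    return float(np.sqrt(max(0.0, 1.0 - s[-1] ** 2)))

e1 = np.array([[1.0], [0.0], [0.0]])   # span{e1}
e2 = np.array([[0.0], [1.0], [0.0]])   # span{e2}, orthogonal to span{e1}
print(principal_angle(e1, e1), principal_angle(e1, e2))  # 0.0 1.0
```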
\begin{theorem} \label{theorem: linear-regression}
We assume the following assumptions hold:
(1) $\|\alpha\|_{\max} \le A$ for a positive constant $A$;
(2) $\frac{1}{\sqrt{Nr}}\|U\|$ is bounded above for any $N$;
(3) $\frac{1}{N}T^\top(I - P_U)T$ is bounded away from 0 for any $N$;
(4) $r\angle(\hat{U}, U) \to 0$ as $N \to \infty$;
(5) Unconfoundedness (Assumption \ref{assumption: unconf}). Then
$\exists$ constant $c > 0$ such that with probability at least $1 - 2\exp(-cN^{1/2})$,
\begin{equation}
|\hat{\tau} - \tau^* | \le \frac{(\frac{2A}{\sqrt{N}}\|T\|)(\frac{1}{\sqrt{Nr}}\|U\|)({r}\angle(U, \hat{U})) - \frac{\sigma}{N^{1/4}}}{\frac{1}{N}T^\top(I - P_U)T - \frac{2}{N}\|T\|^2\angle(U, \hat{U})} \overset{N \to \infty}{\longrightarrow} 0
\end{equation}
\end{theorem}
In the above theorem, assumption (3) is satisfied as long as the treatment variable is almost surely not a linear combination of the confounders (Lemma 7, Appendix A). Otherwise it is impossible to estimate ATE accurately due to multicollinearity.
Assumption (4) states that the column space of the estimated confounder matrix should converge to the true column space at a rate faster than $1/r$ to guarantee consistency of the resulting ATE estimator. This suggests that when the true rank $r$ grows with the dimensions, estimating ATE consistently requires stronger column space convergence than merely estimating the true column space consistently, i.e., $\angle(U, \hat{U}) \to 0$.
Now we prove that EFMC leads to accurate ATE estimator with high probability under some generative assumptions on confounder matrix $U$ as well as covariate loading matrix $V$.
\begin{assumption} [Latent Confounders and Covariate Loadings] \label{assump: confound}
$U$ and $V$ satisfy the following for some positive constants $\underline{v}$, $\overline{v}$, $c_V$ and $c_L$:
(1) for $i \in [N]$, $U_i$ are i.i.d Gaussian samples with covariance matrix $\Sigma_{r \times r} = LL^\top$ for some full rank matrix $L \in \mathbb{R}^{r \times r}$ such that $\frac{1}{\sqrt{r}}\|L\| < c_L$;
(2) $\underline{v} p \le \sigma_r^2(VL^\top) \le \sigma_1^2(VL^\top) \le \overline{v} p$ and $\frac{\max_j\|V_j\|}{\|V\|_F} \le \frac{c_V}{\sqrt{p}}$, $j = 1, ..., p$.
\end{assumption}
Assumption \ref{assump: confound} specifies a Gaussian random design for the latent confounders, which implies assumption (2) in Theorem \ref{theorem: linear-regression} with high probability (Lemma 8, Appendix A).
It also assumes without loss of generality that the latent confounders are not perfectly linearly correlated.
Moreover, Assumption \ref{assump: confound} excludes the degenerate case where almost all covariates have vanishing loadings on the latent confounders, i.e., $\frac{\max_j\|V_j\|}{\|V\|_F} \approx \frac{\max_j\|V_j\|}{\sqrt{n_V} \max_j\|V_j\|} = \frac{1}{\sqrt{n_V}}$ where $n_V$ is the number of covariates with nonvanishing loadings and $n_V$ scales much slower than $p$. In this case, the collected covariates are not informative enough for recovering the latent confounders.
\begin{theorem} \label{theorem: exp}
Let $X_{ij}$ be sub-Exponential conditionally on $U_i$ with parameter $\sigma'$ for all $(i, j)$, and suppose $T_i$ is almost surely not a linear combination of $U_i$. Suppose EFMC is used as the preprocessing step with $\lambda = 2c_0\sigma'\sqrt{Np}\sqrt{\frac{r\overline{N}\log\overline{N}}{|\Omega|}}$, where $\overline{N} = N \vee p$ and $|\Omega| > c_1r\overline{N}\log\overline{N}$ for positive constants $c_0$ and $c_1$. Assume $r/N \to 0$ and $\exists \delta > 0$ such that $p^{1 + \delta}/N \to 0$. Under Assumptions 1--5, assumptions (2)-(4) in Theorem \ref{theorem: linear-regression} hold with high probability. Furthermore, $\exists$ positive constants $c_2$, $c_3$, $c_{\sigma', \eta}$ such that the following holds with probability at least $1 - c_2\operatorname{exp}(-c_3N^{1/2}) - c_2N^{-1/2} - 2\operatorname{exp}(-c_3p^{\delta})$,
\begin{equation}\label{formula: error_bound}
|\hat{\tau} - \tau| \le \frac{Ac_Lc_{\sigma', \eta}c_V\sqrt{\frac{r^5\overline{r}\overline{N}\log\overline{N}}{|\Omega|}} - \frac{\sigma}{N^{1/4}}[\sqrt{\frac{\underline{v}}{\underline{v} + 2\overline{v}}} - \Lambda(r, \overline{N}, |\Omega|)]}{[\sqrt{\frac{\underline{v}}{\underline{v} + 2\overline{v}}} - \Lambda(r, \overline{N}, |\Omega|)][\frac{1}{N}T^\top(I - P_U)T - 2\Lambda(r, \overline{N}, |\Omega|)]}
\end{equation}
where $\Lambda(r, \overline{N}, |\Omega|) = c_{\sigma', \eta}c_V\sqrt{\frac{\bar{r}r^3\overline{N}\log\overline{N}}{|\Omega|}}$ and $\overline{r} = \max\{r, \log\overline{N}\}$.
\end{theorem}
The assumption that $X_{ij}$ is sub-Exponential encompasses common exponential family distributions like Gaussian, Bernoulli, Poisson, Binomial, etc.
The assumption that $p^{1 + \delta}/N \to 0$ appears as an artifact of proof and our simulation shows that the consistency also holds when $N < p$ (Figure $3$, Appendix B).
Theorem \ref{theorem: exp} guarantees that the ATE estimator induced by EFMC is consistent as long as $r^5\overline{r}\overline{N}\log\overline{N}/|\Omega| \to 0$ when $N, p \to \infty$. This seems much more restrictive than consistent matrix reconstruction, which merely requires $r\overline{N}\log\overline{N}/|\Omega| \to 0$ (Lemma 4, Appendix A). However, this is due to the pessimistic nature of the error bound. Our simulations in Section 4.1 show that matrix factorization works very well for $r = 5$, $N = 1500$ and $p = 1450$, even though $r^6 \gg N$.
\section{Numerical results}
In this part, we illustrate the effectiveness of low rank matrix factorization in alleviating the ATE estimation error caused by measurement noise using synthetic datasets with both continuous and binary covariates and the twins dataset introduced by Louizos et al. \citep{louizos2017causal}. For the implementation of matrix factorization, we use the following nonconvex formulation:
\begin{equation} \label{formula: glrm}\textstyle
\hat{U}, \hat{V} = \underset{{U \in \mathbb{R}^{N \times k}, V \in \mathbb{R}^{p \times k}}}{\operatorname{argmin}} \sum_{(i, j) \in \Omega} L_{i, j}(X_{ij}, U_{i}^\top V_j) + \frac{\lambda}{2}(\| U\|_F^2 + \| V \|_F^2)
\end{equation}
where $L_{ij}$ is a loss function assessing how well $U_{i}^\top V_j$ fits the observation $X_{ij}$ for $(i, j) \in \Omega$. The solution $\hat{U}$ can be viewed as the estimated confounder matrix.
This nonconvex formulation (\ref{formula: glrm}) provably recovers the solution of the convex formulation (\ref{formula: exp_family}) when log-likelihood loss functions and sufficiently large $k$ are used \citep{udell2016generalized,kallus2016dynamic}.
Solving the nonconvex formulation (\ref{formula: glrm}) approximately is usually much faster than solving the convex counterpart.
In our experiments, we use the R package softImpute \citep{hastie2015matrix} for continuous covariates with quadratic loss, the R package logisticPCA \citep{collins2002generalization} for binary covariates with logistic loss, and the Julia package LowRankModels \citep{udell2016generalized} for categorical variables with multinomial loss.
All tuning parameters are chosen via $5$-fold cross-validation.
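With quadratic loss, (\ref{formula: glrm}) can be solved by alternating minimization: fixing $V$, each row of $U$ solves a small ridge regression over the observed entries of its row, and symmetrically for $V$. A self-contained sketch (our own implementation, not one of the packages above; for readability it regularizes with $\lambda(\|U\|_F^2 + \|V\|_F^2)$):

```python
import numpy as np

def als_factorize(X, mask, k, lam=1e-2, n_iter=100, seed=0):
    """Alternating ridge regressions for
    min sum_{(i,j) observed} (X_ij - U_i^T V_j)^2 + lam*(||U||_F^2 + ||V||_F^2)."""
    N, p = X.shape
    rng = np.random.default_rng(seed)
    U, V = rng.normal(size=(N, k)), rng.normal(size=(p, k))
    ridge = lam * np.eye(k)
    for _ in range(n_iter):
        for i in range(N):                      # update row U_i given V
            Vo, xo = V[mask[i]], X[i, mask[i]]
            U[i] = np.linalg.solve(Vo.T @ Vo + ridge, Vo.T @ xo)
        for j in range(p):                      # update row V_j given U
            Uo, xo = U[mask[:, j]], X[mask[:, j], j]
            V[j] = np.linalg.solve(Uo.T @ Uo + ridge, Uo.T @ xo)
    return U, V

# toy run: exact rank-2 matrix, 80% of entries observed
rng = np.random.default_rng(1)
X0 = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))
mask = rng.random(X0.shape) < 0.8
U, V = als_factorize(X0, mask, k=2)
rel_err = np.linalg.norm(mask * (U @ V.T - X0)) / np.linalg.norm(mask * X0)
```

Each inner update is a $k \times k$ linear solve, which is why the nonconvex route scales better than repeated full SVDs of the convex formulation.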
\subsection{Synthetic experiment}
We generate synthetic samples according to the following linear regression process:
$Y_i \mid U_i, T_i \sim \mathcal{N}(\alpha^\top U_i + \tau T_i, 1)$ where confounder $U_{ij} \sim \mathcal{N}(0, 1)$ and treatment variable $T_i \mid U_i \sim \operatorname{Bernoulli}(\sigma(\beta^\top U_i))$ for $i \in [N]$, $j \in [r]$.
We consider covariates generated with both independent Gaussian noise and independent Bernoulli noise: $X_{ij} \sim \mathcal{N}(U_i^\top V_j, 5)$ and $X_{ij} \sim \operatorname{Bernoulli}(\sigma(U_i^\top V_j))$ for $V_j \in \mathbb{R}^r$.
We set the dimension of the latent confounders to $r = 5$, use $\alpha = [-2, 3, -2, -3, -2]$ and $\beta = [1, 2, 2, 2, 2]$, and choose $\tau = 2$ in our example. (Our conclusions are robust to different values of these parameters.)
We consider a low dimensional case where the number of covariates $p$ varies from $100$ to $1000$ with sample size $N = 2p$, and a high dimensional case where $p$ varies from $150$ to $1500$ with $N = p + 50$.
For each dimensional setting, we compute the error metrics based on $50$ replications of the experiments and we generate entries of $V$ independently from standard normal distribution with $V$ fixed across the replications.
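The data-generating process above can be sketched as follows (the function name is ours; the defaults follow the text, with $\alpha$ and $\beta$ hard-coded for $r = 5$):

```python
import numpy as np

def simulate(N, p, r=5, tau=2.0, binary=False, seed=0):
    """One synthetic dataset: Gaussian confounders, logistic treatment
    assignment, linear outcome, and Gaussian or Bernoulli covariate noise.
    alpha and beta below assume r = 5."""
    rng = np.random.default_rng(seed)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    alpha = np.array([-2.0, 3.0, -2.0, -3.0, -2.0])
    beta = np.array([1.0, 2.0, 2.0, 2.0, 2.0])
    U = rng.normal(size=(N, r))
    T = rng.binomial(1, sigmoid(U @ beta))
    Y = rng.normal(U @ alpha + tau * T, 1.0)
    V = rng.normal(size=(p, r))
    M = U @ V.T
    if binary:
        X = 2 * rng.binomial(1, sigmoid(M)) - 1   # +/-1 encoding
    else:
        X = rng.normal(M, np.sqrt(5.0))           # N(U_i^T V_j, variance 5)
    return X, Y, T, U

X, Y, T, U = simulate(N=200, p=100)
```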
\begin{figure} \label{figure: synthetic}
\centering
\includegraphics[width = \linewidth]{synthetic.pdf}
\caption{Results from experiments on synthetic data. }
\end{figure}
We compare the root mean squared error (RMSE) scaled by the true ATE in Figure \ref{figure: synthetic} for the following five ATE estimators in linear regression:
the Lasso, Ridge and OLS estimators from regressing $Y_i$ on $T_i$ and noisy covariates $X_i$, the OLS estimator from regressing $Y_i$ on $T_i$ and the estimated confounders $\hat{U}_i$ from matrix factorization (MF), and the OLS estimator from regressing $Y_i$ on $T_i$ and the true confounders $U_i$ (Oracle). The shaded area corresponds to the $2$-standard-deviation error band for the estimated relative RMSE across $50$ replications.
Figure $1$ shows that OLS leads to accurate ATE estimation for Gaussian additive noise when the number of covariates is sufficiently large, which is consistent with Corollary \ref{corollary: bias}.
However, for high dimensional data, matrix factorization preprocessing dominates all other feasible methods and its RMSE is very close to the oracle regression for sufficiently large number of covariates.
While all feasible methods tend to have better performance when more covariates are available, matrix factorization preprocessing is the most effective in exploiting the noisy covariates for accurate causal inference.
Sufficiently many noisy covariates are very important for accurate ATE estimation in the presence of measurement noise. We can show that the error does not converge when only $N$ grows but $p$ is fixed (Figure $5$, Appendix B).
With only a few covariates, matrix factorization preprocessing may have high error because the cross-validation chooses rank smaller than the ground truth.
Furthermore, the gain from using matrix factorization is more dramatic for binary covariates, which demonstrates the advantage of matrix factorization preprocessing with loss functions adapting to the data types.
More numerical results on different dimensional settings and missing data can be found in Appendix.
\subsection{Twin mortality}
We further examine the effectiveness of matrix factorization preprocessing using the twins dataset introduced by Louizos et al. \citep{louizos2017causal}.
This dataset includes information on $N = 11984$ pairs of twins of the same sex who were born in the USA between 1989 and 1991 and weighed less than $2$kg.
For the $i^{\text{th}}$ twin-pair, the treatment variable $T_i$ corresponds to being the heavier twin and the outcomes $Y_i(0), Y_i(1)$ are the mortality in the first year after they were born.
We have outcome records for both twins and view them as the two potential outcomes for the treatment variable. Therefore, the $-2.5\%$ difference between the average mortality rate of the heavier twins and that of the lighter twins can be viewed as the ``true'' ATE.
This dataset also includes $46$ other covariates relating to the parents, the pregnancy and the birth for each pair of twins. More details about the dataset can be found in Louizos et al. \citep{louizos2017causal}.
\begin{figure}\label{figure: twins}
\centering
\includegraphics[width = \linewidth]{twins.pdf}
\caption{Results on the twins dataset.}
\end{figure}
To simulate confounders in observational studies, we follow the practice in Louizos et al. \citep{louizos2017causal} and selectively hide one of the two twins based on one variable highly correlated with the outcome: GESTAT10, the number of gestation weeks prior to the birth.
This is an ordinal variable with values from $0$ to $9$ indicating less than $20$ gestation weeks, $20 - 27$ gestation weeks and so on. We simulate $T_i \mid U_i \sim \operatorname{Bernoulli}(\sigma(5(U_i/10 - 0.1)))$, where $U_i$ is the confounder GESTAT10.
Then for each twin-pair, we only observe the lighter twin if $T_i = 0$ and the heavier twin otherwise.
We create noisy proxies for the confounder as follows: we replicate GESTAT10 $p$ times and independently perturb the entries of these $p$ copies with probability $0.5$.
Each perturbed entry is assigned with a new value sampled from $0$ to $9$ uniformly at random.
We denote these proxy variables as $\{X_i\}_{i=1}^N$.
We also consider the presence of missing values: we set each entry as missing value independently with probability $0.3$.
We vary $p$ from $5$ to $50$ and for each $p$ we repeat the experiments $20$ times for computing error metrics.
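The proxy construction above can be sketched as follows (our own function and parameter names; `flip_prob` and `missing_prob` follow the text):

```python
import numpy as np

def make_noisy_proxies(gestat10, p, flip_prob=0.5, missing_prob=0.3, seed=0):
    """Replicate GESTAT10 into p columns; each entry is independently
    replaced with probability flip_prob by a uniform draw from {0,...,9},
    and marked missing with probability missing_prob (True = observed)."""
    rng = np.random.default_rng(seed)
    N = len(gestat10)
    X = np.tile(np.asarray(gestat10, dtype=float)[:, None], (1, p))
    flip = rng.random((N, p)) < flip_prob
    X[flip] = rng.integers(0, 10, size=int(flip.sum()))
    observed = rng.random((N, p)) >= missing_prob
    return X, observed

gestat10 = np.random.default_rng(1).integers(0, 10, size=1000)
X, observed = make_noisy_proxies(gestat10, p=20)
```

Each proxy column keeps roughly half of its entries equal to the confounder, so it remains informative but individually noisy.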
We compare the performance of different methods for both complete data and missing data in Figure \ref{figure: twins}.
For complete data, we consider logistic regression (LR), doubly robust estimator (DR), Mahalanobis matching (Match) and propensity score matching (PS Match) using $\{X_i\}_{i=1}^N$, and their counterparts using the estimated confounders $\{\hat{U}_i\}_{i=1}^N$ from matrix factorization. All propensity scores are estimated by logistic regression using $\{X_i\}_{i=1}^N$ or $\{\hat{U}_i\}_{i=1}^N$ accordingly. The matching methods are implemented via the full match algorithm in the R package optmatch \citep{fullmatch}.
For missing data, we consider logistic regression using data output from different preprocessing method: imputing missing values by column-wise mode, multiple imputation using the R package MICE with $5$ repeated imputations \citep{mice}, and the estimated confounders $\{\hat{U}_i\}_{i=1}^N$ from matrix factorization.
We can observe that all methods that use matrix factorization clearly outperform their counterparts that do not, even though the noise mechanism does not obey common noise assumptions in the matrix factorization literature. This also demonstrates that matrix factorization preprocessing can augment popular causal inference methods beyond linear regression. Furthermore, matrix factorization preprocessing is robust to a considerable amount of missing values and it dominates both the ad-hoc mode imputation method and the state-of-the-art multiple imputation method. This suggests that inferring the latent confounders is more important for causal inference than imputing the noisy covariates.
\section{Conclusion}
In this paper, we address the problem of measurement noise prevalent in causal inference.
We show that with a large number of noisy proxies, we can reduce the bias
resulting from measurement noise by using matrix factorization preprocessing to infer latent confounders.
We guarantee the effectiveness of this approach in a linear regression setting, and show its effectiveness numerically on both synthetic and real clinical datasets.
These results demonstrate that preprocessing by matrix factorization to infer latent confounders
has a number of advantages: it can accommodate a wide variety of data types,
ensures robustness to missing values, and
can improve causal effect estimation when used in conjunction
with a wide variety of causal inference methods.
As such, matrix factorization allows more principled and accurate
estimation of causal effects from observational data.
\bibliographystyle{unsrt}
\medskip
\section{Introduction}
Since the 1970s it has been well known that a moving mirror can radiate particles \cite{MOO70, FUL76}. The radiation flux is thermal for an appropriately chosen accelerated trajectory, and hence an analogy \cite{DAV77,CAR87} can be drawn
with Hawking radiation from a collapsing star \cite{HAW75} that forms a black hole. Interest in this problem has been maintained over the years due to this connection with gravitational physics, but also due to difficulties in obtaining and interpreting results for such systems \cite{OBA01,OBA03,MAR15}. Recently a circuit model approach has been introduced which allows semi-analytical solutions to be obtained for this and related problems \cite{Daiqin2016, Daiqin2017}, allowing clearer exploration of the physics.
\\
\\
Interactions with an accelerated mirror inevitably mix left and right going modes (in a 1+1 dimensional approximation). Hence it has been assumed that the particle production and mixing seen by an inertial observer looking at (say) left moving modes coming from an accelerated mirror are due to loss of information via entanglement with the right going modes. That is, in quantum mechanics it is expected that initial pure states will evolve into pure final states; hence, if a mixed state is observed, it is assumed there is some coupling to unobserved parts of the system. By tracing out parts of the system, the observed sub-system may appear mixed due to the information that has been lost.
\\
\\
However, using the circuit model, it has recently been shown that a single mode squeezed signal sent by a uniformly accelerated observer would, from an operational point of view, be observed to be decohered by an inertial observer \cite{Daiqin2017}. This is in spite of there being no coupling between left and right going modes or to other unobserved degrees of freedom. The key restriction on the observer in this scenario is that they do not possess global information about the modal decomposition of the interaction, but rather are provided with a mode reference from the accelerated source. It is interesting to consider whether similar effects might be present for passive accelerated objects.
\\
\\
In this paper we analyse the effect of the Minkowski vacuum interacting with an accelerated time-delay. The natural modes in the reference frame of an object uniformly accelerating in the right going direction are the right Rindler modes. We model an interaction that delays the right Rindler modes with respect to the left Rindler modes. The delay is passive and does not couple left and right going modes. The global effect of such a unitary delay can be analysed straightforwardly in the Schr\"odinger picture and predicts particle production in the Minkowski frame. However, the analysis of the statistics expected from particular detection models for inertial Minkowski observers is more complicated.
\begin{figure} [h!]
\centering
\includegraphics[width=0.4\textwidth]{SelfHomodyneSetUp.PNG}
\caption{The time-delay source moves along the red trajectory. The detector remains stationary along the blue line and holds a broad-band detector.}
\label{fig: Unitary}
\end{figure}
\\
\\
We adopt the circuit model (input-output) formalism and the self-homodyne detection method for our analysis \cite{Daiqin2017}. In this scheme, the observer's detector is a broad-band ``bucket" detector, looking at all field modes. The mode reference is sent from the accelerated reference frame. The conceptual set-up of such a scenario can be found in Fig. 1. This method does not rely on perturbation theory and, given particular conditions, simple, accurate, semi-analytic expressions are obtained. It also has the virtue of analysing the effects of the interaction from an operational point of view, which has a strong connection with experimental methodology.
\\
\\
Our paper is organised as follows. In Section II we introduce our model for the accelerated time delay and derive a global solution in terms of Unruh modes initially in the Minkowski vacuum state. In Section III we introduce the self-homodyne detection model and derive approximate solutions, valid for particular parameter choices. In Section IV we analyse the results, and in Section V we make a connection with the accelerated mirror under similar conditions and find parameters for which the measurement statistics of the time-delay and mirror coincide. The self-homodyne detection model assumes the mode reference is sent from the source of the interaction in the right Rindler wedge. In Section VI we show that if an additional mode reference is sent from the left Rindler wedge then pure state statistics can be observed for the ``mirror-like" case. Two interesting cases are presented. We discuss and conclude in Section VII.
\section{Accelerated Unitary Time Delay}
\subsubsection{Introducing the Operators}
In this paper we consider a massless scalar bosonic field $\hat{\Phi}$ in (1+1)-dimensional Minkowski space-time. Details on the quantisation method and the definition of the single frequency annihilation/creation operators can be found in \cite{Unruh1976, Takagi1986, Crispino2008, Fulling1973}. For simplicity, we only consider the left moving modes in this paper. The single frequency Minkowski annihilation operator is denoted $\hat{e}_k$. It is useful to introduce what are known as the single frequency Unruh operators, $\hat{c}_\omega$ and $\hat{d}_\omega$. The Unruh operators are related to the Minkowski operator in the following way \cite{Daiqin2016, Crispino2008, Birrell1982}:
\begin{align}
&\hat{e}_k=\int d\omega \; \left(A_{k\omega}\hat{c}_{\omega}+B_{k\omega}\hat{d}_{\omega}\right)
\end{align}
where,
\begin{equation}
\begin{aligned}
A_{k\omega}&=\frac{i\sqrt{2\sinh[\pi\omega/a]}}{2\pi\sqrt{\omega k}}\Gamma[1-i\omega/a]\left(\frac{k}{a}\right)^{i\omega /a} =B_{k\omega}^*
\end{aligned}
\end{equation}
where $\Gamma(x)$ is the gamma function. The Unruh modes are related to the right and left Rindler modes, $\hat{a}_{\omega}$ and $\hat{b}_{\omega}$ respectively, by a two-mode squeezing operation:
\begin{equation}
\begin{aligned}
&\hat{c}_{\omega}= \cosh(r_{\omega}) \hat{a}_{\omega}- \sinh(r_{\omega}) \hat{b}_{\omega}^{\dag} \\
&\hat{d}_{\omega}
= \cosh(r_{\omega}) \hat{b}_{\omega}- \sinh(r_{\omega}) \hat{a}_{\omega}^{\dag}
\end{aligned}
\end{equation}
where $r_{\omega}\equiv \tanh^{-1}[\exp(-\pi \omega /a)]$ and $a$ is the acceleration of the observer. By inverting equation (3), we obtain the following relations:
\begin{equation}
\begin{aligned}
&\hat{a}_{\omega}= \cosh(r_{\omega}) \hat{c}_{\omega} + \sinh(r_{\omega}) \hat{d}_{\omega}^{\dag} \\
&\hat{b}_{\omega}
= \cosh(r_{\omega}) \hat{d}_{\omega} + \sinh(r_{\omega}) \hat{c}_{\omega}^{\dag}
\end{aligned}
\end{equation}
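As a quick consistency check, substituting equation (4) back into equation (3) must return the original Unruh operators; in matrix form this is the statement that the two Bogoliubov matrices are mutually inverse, which follows from $\cosh^2 r_\omega - \sinh^2 r_\omega = 1$. A minimal numerical sketch (pure Python; the function names and sample parameters are our own illustrative choices):

```python
import math

def bogoliubov_pair(r):
    """2x2 matrices acting on the column vector (c, d^dagger).

    M encodes Eq. (4):  a      = cosh(r) c + sinh(r) d^dag
                        b^dag  = sinh(r) c + cosh(r) d^dag
    (the second line is the conjugate of Eq. (4) for b).
    N encodes the inverse map, Eq. (3).
    """
    ch, sh = math.cosh(r), math.sinh(r)
    M = [[ch, sh], [sh, ch]]
    N = [[ch, -sh], [-sh, ch]]
    return M, N

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Illustrative values: acceleration a = 1, frequency w = 0.1,
# so r = atanh(exp(-pi w / a)).
r = math.atanh(math.exp(-math.pi * 0.1 / 1.0))
M, N = bogoliubov_pair(r)
P = matmul2(N, M)   # should be the identity, since cosh^2 - sinh^2 = 1
```

The off-diagonal entries of `P` cancel exactly, and the diagonal entries equal $\cosh^2 r_\omega - \sinh^2 r_\omega = 1$, confirming that (3) and (4) are inverse transformations.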
These definitions will form the basis of the quantum circuit model (or input-output formalism) developed by Su et al. \cite{Daiqin2017, Daiqin2016}. We note that our notation for the Rindler and Minkowski operators differs from that of other authors.
\subsubsection{Introducing the Unitary}
Interactions between uniformly accelerated objects and quantum fields have been studied for many years; however, to the best of our knowledge, a time-delay in the Rindler frame has not been previously studied. We first introduce the unitary time-evolution operator in the Rindler frame as follows:
\begin{equation}
\hat{U}_t=e^{-i\hat{H}_R\tau+i\hat{H}_L\overline{\tau}}
\end{equation}
where $\hat{H}_R$ is the Hamiltonian in the right Rindler wedge and $\hat{H}_L$ is the Hamiltonian in the left Rindler wedge. In the right and left Rindler wedges, the Hamiltonians are defined as follows:
\begin{equation}
\begin{aligned}
\hat{H}_{R}=\int \mathrm{d}\omega \; \omega(\hat{a}_{\omega}^{\dag}\hat{a}_{\omega}), \; \hat{H}_{L}=\int \mathrm{d}\omega \; \omega(\hat{b}_{\omega}^{\dag}\hat{b}_{\omega})
\end{aligned}
\end{equation}
The unitary time delay in the right Rindler wedge can be modelled through the following unitary:
\begin{figure} [h!]
\centering
\includegraphics[width=0.4\textwidth]{TimeDelaySource.png}
\caption{The unitary time delay can be modelled through the use of mirrors. It can be seen that the incoming light beam must travel an extra distance of $\Delta$ due to the mirrors. If this mirror arrangement is accelerated, the delay will occur to the Rindler modes in one Rindler wedge.}
\label{fig: TimeDelay}
\end{figure}
\begin{equation}
\hat{U}=e^{i\hat{H}_{R}\Delta}
\end{equation}
This unitary can be compared with equation (5). It is easy to see that we have set $\tau=-\Delta$ and $\overline{\tau}=0$. We can induce a time delay by accelerating an object which delays the incoming signal by a set time $|\Delta|$. In Fig. 2, a physical example of an accelerated object that could cause such a delay is shown.
\subsubsection{Schr\"odinger Picture}
To understand what physically occurs to the field, we seek how the Minkowski vacuum evolves under the operator defined in equation (7). The Minkowski vacuum is defined as $\hat{e}_k\ket{0_M}=0, \; \forall k$, while the Rindler vacuum is defined as $\hat{a}_{\omega} \ket{0_R}=\hat{b}_{\omega} \ket{0_R}=0, \; \forall \omega$.
First, we look at the output state in terms of the Rindler vacuum (as seen by the accelerated observers) \cite{Crispino2008}:
\begin{widetext}
\begin{equation}
\begin{aligned}
\hat{U} \ket{0_M} =\prod\limits_{\omega}\sqrt{1-\exp[-2\pi \omega/a]} \sum\limits_{n_{\omega}=0}^{\infty} \frac{\exp[-n_{\omega} \pi \omega /a]}{n_{\omega}!}
(\hat{a}_{\omega}^{\dag} e^{i\Delta\omega} \hat{b}_{\omega}^{\dag})^{n_{\omega}}
\ket{0_R}
\end{aligned}
\end{equation}
\end{widetext}
We find that we can explicitly write the final state in terms of the Rindler single frequency creation operators acting on the Rindler vacuum. It is clear that the right and left Rindler observers can measure a pure state by comparing the correlations between the right and left single frequency Rindler particles.
\\
\\
We now look at the output state as seen by the inertial observers. To do this, we decompose the unitary into a form which can be understood in the Minkowski frame. After a lengthy calculation, one can show the following:
\begin{equation}
\hat{U} = \hat{U}_{\hat{d},p}(\omega \Delta)\hat{S}(r,(\theta_2+\theta_{1}))\hat{U}_{\hat{c},p}(\theta_{1})\hat{U}_{\hat{d},p}(\theta_{1})
\end{equation}
where we have defined the following:
\begin{equation}
\begin{aligned}
r(\omega)&=\cosh ^{-1}(|\cosh(r_\omega)^2 e^{-i \omega \Delta}-\sinh(r_\omega)^2|)
\\
e^{i \theta_1}& \equiv\frac{\cosh(r_\omega)^2 e^{-i \omega \Delta}-\sinh(r_\omega)^2}{|\cosh(r_\omega)^2 e^{-i \omega \Delta}-\sinh(r_\omega)^2|}
\\
e^{i \theta_2}& \equiv\frac{(e^{-i \omega \Delta}-1)}{|e^{-i \omega \Delta}-1|}
\end{aligned}
\end{equation}
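The decomposition is internally consistent only if $r(\omega)$ is a genuine two-mode squeezing strength, i.e.\ only if the modulus in equation (10) is at least one and $\sinh(r(\omega))$ matches the accompanying off-diagonal Bogoliubov coefficient $\cosh(r_\omega)\sinh(r_\omega)|e^{-i\omega\Delta}-1|$. A short numerical check (an illustrative sketch; the function name and sampled values are our own):

```python
import cmath, math

def squeeze_params(r_w, phase):
    """Squeezing strength r(omega) of Eq. (10), with phase = omega * Delta.

    Also returns cosh(r_w) sinh(r_w) |e^{-i phase} - 1|, the modulus of the
    off-diagonal coefficient, which must equal sinh(r(omega)) for the
    decomposition into a two-mode squeezer to close.
    """
    ch, sh = math.cosh(r_w), math.sinh(r_w)
    mod = abs(ch**2 * cmath.exp(-1j * phase) - sh**2)
    # mod >= 1 analytically; max() guards against float rounding at phase = 0
    r_out = math.acosh(max(mod, 1.0))
    off = ch * sh * abs(cmath.exp(-1j * phase) - 1)
    return r_out, off
```

Since $|{\cosh^2 r_\omega\, e^{-i\omega\Delta} - \sinh^2 r_\omega}|^2 - |{\cosh r_\omega \sinh r_\omega (e^{-i\omega\Delta}-1)}|^2 = 1$, we have $\sinh(r(\omega)) = \sqrt{\cosh^2(r(\omega)) - 1}$ equal to the off-diagonal modulus, and the $\cosh^{-1}$ in equation (10) is always well-defined.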
The unitary transformation is decomposed into a combination of phase shifters and a two mode squeezer. The two mode squeezer is defined in the following way:
\begin{equation}
\begin{aligned}
&\hat{S}(r,\theta) \equiv \exp [\int \mathrm{d}\omega \; {r(\omega)}(\hat{d}_{\omega}^{\dag} \hat{c}_{\omega}^{\dag}e^{i \theta}-e^{-i \theta}\hat{c}_{\omega} \hat{d}_{\omega})]
\\
&\hat{S}(r,\theta)^{\dag}\hat{c}_{\omega}\hat{S}(r,\theta)=\cosh(r(\omega))\hat{c}_{\omega}+e^{i\theta}\sinh(r(\omega))\hat{d}_{\omega}^{\dag}
\\
&\hat{S}(r,\theta)^{\dag}\hat{d}_{\omega}\hat{S}(r,\theta)=\cosh(r(\omega))\hat{d}_{\omega}+e^{i\theta}\sinh(r(\omega))\hat{c}_{\omega}^{\dag}
\end{aligned}
\end{equation}
Phase shifters are defined in the following way:
\begin{equation}
\begin{aligned}
&U_{\hat{o},p}(\theta) \equiv \exp\left(i\int \mathrm{d}\omega \;\theta(\omega) \hat{o}^{\dag}_{\omega}\hat{o}_{\omega}\right)
\\
&U_{\hat{o},p}(\theta)^{\dag}\hat{o}_{\omega}U_{\hat{o},p}(\theta)=e^{i \theta(\omega) }\hat{o}_{\omega}
\end{aligned}
\end{equation}
By acting this unitary onto the Minkowski vacuum, we find the following:
\begin{equation}
\begin{aligned}
\hat{U} \ket{0_M} &=
\hat{U}_{\hat{d},p}(\omega \Delta)\hat{S}(r,\theta_2+\theta_{1}) \ket{0_M}
\\
&=\hat{S}(r,(\theta_2+\theta_{1}+\omega\Delta)) \ket{0_M}
\end{aligned}
\end{equation}
As a result, the final state is a pure two-mode squeezed state. Equation (13) makes it clear that different Unruh frequencies are completely uncorrelated with each other; the correlations exist only between the single frequency Unruh modes $\hat{c}_\omega$ and $\hat{d}_\omega$. For a single frequency, the squeezing strength is $r(\omega)$ and the squeezing angle is $\theta_1(\omega)+\theta_2(\omega)+\omega\Delta$.
\\
\\
From a practical point of view, this information can only be extracted when the observer knows the modal structure of the Unruh modes (which is dependent on the trajectory of the accelerated observer). We impose a key restriction on the inertial observer: they do not possess global information about the modal decomposition of the interaction, i.e.\ the observer has no information on the structure of the incoming signal. That is to say that the signal must be accompanied by information telling the inertial observer where the signal is.
To address this issue, we implement the self-homodyne detection method. The information about the modal structure will be encoded within the strong coherent signal sent by the source. The inertial observer then only requires a broad-band detector, and the information about the incoming mode can be extracted via statistical analysis of the particle count.
\section{Self Homodyne Detection on Accelerated Unitary Time Evolution}
\subsection{Self-Homodyne Detection}
We utilize homodyne tomography \cite{Lvovsky2009} to characterise the state of a particular field mode. For Gaussian states, the analysis of the first and second order moments \cite{Weedbrook2012} is sufficient to characterise the Wigner function of a particular output mode \cite{Scully1997}.
\\
\\
We will utilize the self-homodyne detection method to characterise the Wigner function of a particular output mode. This section introduces the method. Self-homodyne detection is conducted by displacing the mode of interest with a large displacement operator $\hat{D}_i(\alpha=|\alpha| e^{i \phi})=\exp[\alpha \hat{o}_i^{\dag}-\alpha^* \hat{o}_i]$. The particle count of such a state is compared to the particle count of an output without the presence of the signal for various $\phi$.
\\
\\
The state with the signal can be created by acting the unitary operator onto the initial state. In the Heisenberg picture, we interpret this as the following:
\begin{equation}
\begin{aligned}
\hat{o}_{i}' & \equiv \hat{U}^{\dag} \hat{o}_{i}\hat{U}
\end{aligned}
\end{equation}
The signal that is created is then coupled with a strong coherent signal. In the Heisenberg picture, the operator evolves in the following way:
\begin{equation}
\begin{aligned}
\hat{o}_{i}'' & \equiv \hat{U}^{\dag} \hat{D}^{\dag}_i(\alpha)\hat{o}_{i}\hat{D}_i(\alpha)\hat{U}
\end{aligned}
\end{equation}
The photon number operator can be written in the following way:
\begin{equation}
\begin{aligned}
\hat{N}_{i} & \equiv \hat{o}_{i}''^{\dag}\hat{o}_{i}''
\\ & = |\alpha|^2+|\alpha|\hat{X}_{i}(\phi)+(\hat{o}_i^{\dag}{}'\hat{o}_i')
\\ & \approx |\alpha|^2+|\alpha|\hat{X}_i(\phi)
\\ \hat{N}_{0,i} & \equiv \hat{D}_i^{\dag}(\alpha)\hat{o}_{i}^{\dag}\hat{o}_{i}\hat{D}_i(\alpha)
\\
&=|\alpha|^2+|\alpha|\hat{X}_{0,i}(\phi)+(\hat{o}_{i}^{\dag}\hat{o}_{i})
\\ & \approx |\alpha|^2
\end{aligned}
\end{equation}
where we have defined the following:
\begin{equation}
\begin{aligned}
\hat{X}_{i}& \equiv \hat{o}_i'e^{-i\phi}+\hat{o}_i^{\dag}{}'
e^{i\phi}
\\
\hat{X}_{0,i}& \equiv \hat{o}_ie^{-i\phi}+\hat{o}_i^{\dag}
e^{i\phi}
\end{aligned}
\end{equation}
\\
The approximation in equation (16) is valid when we set $|\alpha|^2 \gg \braket{\hat{o}_i^{\dag}{}'\hat{o}_i'}$ and assume $\braket{\hat{X}_{0,i}(\phi)} \ll |\alpha|$. Utilizing equation (16), we find the following:
\begin{equation}
\begin{aligned}
{X}_i(\phi) & \equiv \frac{\braket{\hat{N}_{i}- \hat{N}_{0,i}}}{\sqrt{\braket{\hat{N}_{0,i}}}} \approx \braket{\hat{X}_{i}(\phi)}
\end{aligned}
\end{equation}
In future calculations, we will differentiate between $\braket{\hat{X}_i(\phi)}$ and $X_i(\phi)$: the former is the explicit expectation value of $\hat{X}_i(\phi)$, while the latter is the approximate value we find via the self-homodyne detection method. Taking the variance of the expression above, we find the following:
\begin{equation}
\begin{aligned}
{V}_i(\phi)& \equiv \frac{\braket{(\Delta \hat{N}_{i})^2}}{{\braket{\hat{N}_{0,i}}}} \approx\braket{\hat{V}_{i}(\phi)}
\end{aligned}
\end{equation}
In cases where the observer does not know the mode in which the signal is sent, the observer is restricted to conducting a measurement over all modes. The mode of interest is amplified by the large coherent signal, thus we can utilize the following assumption:
\begin{equation}
\begin{aligned}
\frac{\hat{N}_{j \neq i}-\hat{N}_{0,j\neq i}}{\braket{\hat{N}_{0,i}}} \approx 0
\end{aligned}
\end{equation}
Utilizing this result and the fact that the total number operator does not change with the change of basis, we conclude the following:
\begin{equation}
\begin{aligned}
(\hat{N}_{i} - \hat{N}_{0,i}) & \approx \sum_j \hat{N}_{j} - \hat{N}_{0,j}
\\ & = \sum_n \hat{N}_{n} - \hat{N}_{0,n}
\end{aligned}
\end{equation}
In the equation above, $n$ denotes the complete orthonormal basis in which the particle number is counted. By utilizing this approximation, we find the following:
\begin{equation}
\begin{aligned}
X(\phi) &\equiv \left(\braket{\hat{N}}-\braket{\hat{N_0}}\right)/{\sqrt{\braket{\hat{N}_0}}} \approx X_i(\phi)
\\
V(\phi) &\equiv \left(\braket{\hat{N}^2}-\braket{\hat{N}}^2\right)/\braket{\hat{N}_0}\approx V_i(\phi)
\end{aligned}
\end{equation}
Equations (18) and (19) constitute homodyne detection measured over a single basis. Equation (22) is the self-homodyne detection method, which can be implemented when the basis is not well-defined.
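To illustrate equations (16)-(19), the following toy Monte-Carlo sketch applies the self-homodyne estimator to a single-mode squeezed vacuum, for which the quadrature $X(\phi)$ is Gaussian with mean zero and variance $e^{-2r}\cos^2\phi + e^{2r}\sin^2\phi$. This is a simplified sampling model with our own illustrative parameter values, not the relativistic calculation:

```python
import math, random

random.seed(0)

def simulate_self_homodyne(r, phi, alpha=1000.0, n=200_000):
    """Toy self-homodyne run on a single-mode squeezed vacuum.

    For a Gaussian state the measured quadrature X(phi) is normal with
    mean 0 and variance V(phi) = exp(-2r) cos^2(phi) + exp(2r) sin^2(phi).
    Per Eqs. (16)-(19), photon counts behave as N ~ |alpha|^2 + |alpha| X
    with reference counts N0 ~ |alpha|^2, so (<N> - <N0>)/sqrt(<N0>)
    estimates X(phi) and Var(N)/<N0> estimates V(phi).
    """
    v_true = math.exp(-2 * r) * math.cos(phi) ** 2 + \
             math.exp(2 * r) * math.sin(phi) ** 2
    counts = [alpha**2 + alpha * random.gauss(0.0, math.sqrt(v_true))
              for _ in range(n)]
    n0 = alpha**2
    mean_n = sum(counts) / n
    var_n = sum((c - mean_n) ** 2 for c in counts) / n
    x_est = (mean_n - n0) / math.sqrt(n0)   # Eq. (18)
    v_est = var_n / n0                      # Eq. (19)
    return x_est, v_est, v_true
```

Running this at several values of $\phi$ recovers the squeezed-vacuum quadrature variance from photon counts alone, which is the operational content of the scheme.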
\\
\\
Our observer is placed in Minkowski space-time, and hence we define the following:
\begin{equation}
\begin{aligned}
\hat{N}& \equiv \int \mathrm{d}k \; \hat{e}_k^{\dag}{}''\hat{e}_k''
\\ &=\int \mathrm{d}\omega \; \hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega}''+\hat{d}_{\omega}^{\dag}{}''\hat{d}_{\omega}''
\\
\hat{N}_0 & =\; \hat{D}_i (\alpha)^{\dag} \left(\int \mathrm{d}\omega \; \hat{c}_{\omega}^{\dag}{}\hat{c}_{\omega}+\hat{d}_{\omega}^{\dag}\hat{d}_{\omega}\right) \hat{D}_i (\alpha)
\end{aligned}
\end{equation}
Equations (22) and (23) will form the foundation of the self-homodyne detection method. We now analyse how these equations can be put into a more useful form. To do this, we define the following operators:
\begin{equation}
\begin{aligned}
\hat{c}_{\omega }'' \equiv& \hat{U}^{\dag} \hat{D}_i (\alpha)^{\dag} \hat{c}_{\omega} \hat{D}_i (\alpha)\hat{U}
\\
\hat{d}_{\omega }'' \equiv& \hat{U}^{\dag}\hat{D}_i (\alpha)^{\dag} \hat{d}_{\omega} \hat{D}_i (\alpha)\hat{U}
\\
\hat{c}_{\omega }' \equiv& \hat{U}^{\dag} \hat{c}_{\omega} \hat{U}
\\
\hat{d}_{\omega }' \equiv& \hat{U}^{\dag} \hat{d}_{\omega} \hat{U}
\end{aligned}
\end{equation}
We also define the following quantities:
\begin{equation}
\begin{aligned}
g_c(\omega) \equiv & \; \hat{c}_\omega{}''-\hat{c}_\omega{}'
\\
g_d(\omega) \equiv & \; \hat{d}_\omega{}''-\hat{d}_\omega{}'
\end{aligned}
\end{equation}
$g_c(\omega)$ and $g_d(\omega)$ are generally of the order of $\alpha$. Utilizing these expressions, and the fact that $\hat{c}_{\omega}\ket{0_M}=\hat{d}_{\omega}\ket{0_M}=0$, we find that the quadrature amplitude can be calculated as follows:
\begin{equation}
\begin{aligned}
X(\phi) \approx & \frac{2 Re[ \int\mathrm{d}\omega\; g_c(\omega)^*\braket{\hat{c}_{\omega }'}+g_d(\omega)^*\braket{\hat{d}_{\omega } '}]}{[\int\mathrm{d}\omega\; |g_c(\omega)|^2+|g_d(\omega)|^2]^{1/2}}
\end{aligned}
\end{equation}
We have neglected all terms of order $1/{|\alpha|}$, as we can set $|\alpha|$ to be arbitrarily large. We now proceed to calculating the quadrature variance. To do this, we first simplify the following expectation values:
\begin{widetext}
\begin{equation}
\begin{aligned}
\braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega}{}'' \hat{c}_{\omega'}^{\dag}{}''\hat{c}_{\omega'}{}''} - \braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega}{}''}\braket{ \hat{c}_{\omega'}^{\dag}{}''\hat{c}_{\omega'}{}''}\approx \;
& g_c(\omega)g_c(\omega')^*\braket{ \delta(\omega-\omega')+ 2\hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'}+2 Re[g_c(\omega)g_c(\omega')\braket{ \hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}^{\dag}{}'}]
\\
\braket{\hat{d}_{\omega}^{\dag}{}''\hat{d}_{\omega}{}'' \hat{d}_{\omega'}^{\dag}{}''\hat{d}_{\omega'}{}''} - \braket{\hat{d}_{\omega}^{\dag}{}''\hat{d}_{\omega}{}''}\braket{ \hat{d}_{\omega'}^{\dag}{}''\hat{d}_{\omega'}{}''} \approx \;
& g_d(\omega)g_d(\omega')^*\braket{ \delta(\omega-\omega')+2 \hat{d}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'}+2 Re[g_d(\omega)^*g_d(\omega')^*\braket{ \hat{d}_{\omega}{}'\hat{d}_{\omega'}{}'}]
\\
\braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega}{}'' \hat{d}_{\omega'}^{\dag}{}''\hat{d}_{\omega'}{}''}- \braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega}{}''}\braket{ \hat{d}_{\omega'}^{\dag}{}''\hat{d}_{\omega'}{}''}
\approx \; & 2 Re[ g_c(\omega)^*g_d(\omega')^*\braket{\hat{c}_{\omega}{}'\hat{d}_{\omega'}{}'}+g_c(\omega)g_d(\omega')^*\braket{ \hat{c}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'}]
\end{aligned}
\end{equation}
\end{widetext}
We have neglected the terms which are of 0th and 1st order in $|\alpha|$; the remaining terms are of 2nd order in $|\alpha|$.
We now simplify this expression further by assuming that the displacement operator, $\hat{D}_g(\alpha=|\alpha| e^{i\phi})$, is applied to a right Rindler mode. The explicit expression of the displacement operator is defined in equation (32), and the explicit expressions for equation (25) are calculated in equation (34).
Utilizing the expressions found in equation (34), it is easy to show that $g_c(\omega)g_c(\omega')^*$, $g_d(\omega)g_d(\omega')^*$ and $g_c(\omega)^*g_d(\omega')^*$ are not dependent on $\phi$. We can also show that $g_c(\omega)g_c(\omega')$, $g_d(\omega)^*g_d(\omega')^*$ and $g_c(\omega)g_d(\omega')^*$ are proportional to $e^{2i \phi}$. Thus, we can split the variance into a part that is phase insensitive and one that is phase sensitive:
\begin{equation}
\begin{aligned}
V(\phi)\approx 1+ V_1+V_2(\phi)
\end{aligned}
\end{equation}
where we have defined the following:
\begin{widetext}
\begin{equation}
\begin{aligned}
V_1 &\equiv\frac{\int \mathrm{d}\omega \mathrm{d}\omega' \;
2\{g_c(\omega)g_c(\omega')^*\braket{\hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'}
+g_d(\omega)g_d(\omega')^* \braket{\hat{d}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'}
+2Re[g_c(\omega)^*g_d(\omega')^* \braket{\hat{c}_{\omega}{}'\hat{d}_{\omega'}{}'}]\}}{\int \mathrm{d}\omega \; |g_c(\omega)|^2+|g_d(\omega)|^2}
\\
V_2(\phi) &\equiv \frac{\int \mathrm{d}\omega \mathrm{d}\omega' \; 2\{Re[g_c(\omega)g_c(\omega')\braket{ \hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}^{\dag}{}'}+g_d(\omega)^*g_d(\omega')^*\braket{ \hat{d}_{\omega}{}'\hat{d}_{\omega'}{}'} +2g_c(\omega)g_d(\omega')^*\braket{ \hat{c}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'}]\}}{\int \mathrm{d}\omega \; |g_c(\omega)|^2+|g_d(\omega)|^2}
\end{aligned}
\end{equation}
Alternatively, we can write $V_2(\phi)$ in the following way:
\begin{equation}
\begin{aligned}
V_2(\phi) &= - \overline{V}_{2} \times \cos(2\phi-\theta)
\\
\overline{V}_2 & = \left|
\frac{\int \mathrm{d}\omega \mathrm{d}\omega' \; 2[g_c(\omega)g_c(\omega')\braket{ \hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}^{\dag}{}'}+g_d(\omega)^*g_d(\omega')^*\braket{ \hat{d}_{\omega}{}'\hat{d}_{\omega'}{}'} +2g_c(\omega)g_d(\omega')^*\braket{ \hat{c}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'}]}{\int \mathrm{d}\omega \; |g_c(\omega)|^2+|g_d(\omega)|^2}
\right|
\\
\theta &= -\arccos\left(-\frac{V_2(0)}{\overline{V}_2}\right)
\end{aligned}
\end{equation}
\end{widetext}
Equation (30) decomposes $V_2(\phi)$ into two parts: $\overline{V}_2$ is a measure of how large the squeezing effect is, and $\theta$ is the angle along which the state is squeezed. We now have all the necessary tools to calculate the statistics of a given mode via the self-homodyne detection method. In the next section, we introduce the circuit model to calculate the correlation functions that are of interest.
\subsection{Circuit Model}
In this section, we implement the circuit model to calculate the first and second order mode moments. The unitary only interacts with the right Rindler modes, thus we expect no radiation in the left Rindler wedge from the accelerated time-delay. We can analyse the Wigner function of the radiation from the accelerated time-delay source by analysing the right Rindler statistics. Hence this section will focus on analysing the statistics of an arbitrary right Rindler mode $\hat{a}_g$, defined in the following way:
\begin{equation}
\begin{aligned}
\hat{a}_g &\equiv \int \mathrm{d} \omega \; \hat{a}_{\omega}g(\omega)
\end{aligned}
\end{equation}
where $g(\omega)$ is an arbitrary normalised positive frequency mode. We then introduce the displacement operator in the following way:
\begin{equation}
\hat{D}_g(\alpha=|\alpha|e^{i\phi})\equiv \exp(\alpha\hat{a}_g^{\dag}-\alpha^*\hat{a}_g)
\end{equation}
Any arbitrary bosonic operator can be written as a superposition of a part that overlaps with $\hat{a}_g$ and a part which is orthogonal to $\hat{a}_g$ \cite{Daiqin2017, Rohde2007}:
\begin{equation}
\begin{aligned}
\hat{o}=(\hat{o}-([\hat{o},\hat{a}_g^{\dag}]\hat{a}_g+[\hat{a}_g,\hat{o}]\hat{a}_g^{\dag}))+([\hat{o},\hat{a}_g^{\dag}]\hat{a}_g+[\hat{a}_g,\hat{o}]\hat{a}_g^{\dag}{})
\end{aligned}
\end{equation}
We have decomposed an arbitrary bosonic operator $\hat{o}$ into two terms; the second term in the bracket is affected by a unitary that acts on the particular mode $\hat{a}_g$, while the first term remains unaffected. We now have the necessary tools to introduce the input-output relations \cite{Daiqin2017, Obadia2001, Daiqin2016}. We expand equation (25) utilizing this decomposition to find:
\begin{equation}
\begin{aligned}
g_c(\omega)&= g(\omega)^*\cosh(r_\omega)\alpha
\\
g_d(\omega)&=- g(\omega)\sinh(r_\omega)\alpha^*
\end{aligned}
\end{equation}
Likewise, we can calculate how the Unruh operators evolve by utilizing equations (3) and (4):
\\
\\
\begin{widetext}
\begin{equation}
\begin{aligned}
\hat{a}_{\omega}' &=\hat{a}_{\omega}e^{-i\omega \Delta}
\\
\hat{c}_{\omega}' &=\hat{c}_{\omega}+\cosh(r_\omega)(e^{-i \omega \Delta}-1)\hat{a}_{\omega}=\hat{c}_{\omega}\left(\cosh(r_{\omega})^2 e^{-i \omega \Delta}-\sinh(r_{\omega})^2\right)+\hat{d}_{\omega}^{\dag} \cosh(r_{\omega})\sinh(r_{\omega})(e^{-i\omega \Delta}-1)
\\
\hat{d}_{\omega}' &=\hat{d}_{\omega}+\sinh(r_\omega)(e^{i \omega \Delta}-1)\hat{a}_{\omega}^{\dag}=\hat{d}_{\omega} (\cosh(r_{\omega})^2-\sinh(r_{\omega})^2 e^{i \omega \Delta})+ \hat{c}_{\omega}^{\dag} \cosh(r_{\omega})\sinh(r_{\omega})(e^{i \omega \Delta}-1)
\end{aligned}
\end{equation}
\end{widetext}
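The two forms of $\hat{c}_\omega'$ quoted in equation (35) can be cross-checked numerically: substituting $\hat{a}_\omega = \cosh(r_\omega)\hat{c}_\omega + \sinh(r_\omega)\hat{d}_\omega^\dag$ (equation (4)) into the first form must reproduce the Bogoliubov coefficients of the second, and those coefficients must satisfy $|A|^2 - |B|^2 = 1$. A sketch (our own function names and sampled values):

```python
import cmath, math

def cprime_coeffs(r, phase):
    """Coefficients (A, B) of c_w' = A c_w + B d_w^dag, read off from the
    second form of Eq. (35); phase = omega * Delta."""
    ch, sh = math.cosh(r), math.sinh(r)
    A = ch**2 * cmath.exp(-1j * phase) - sh**2
    B = ch * sh * (cmath.exp(-1j * phase) - 1)
    return A, B

def cprime_coeffs_first_form(r, phase):
    """Same coefficients from the first form of Eq. (35),
    c_w' = c_w + cosh(r)(e^{-i phase} - 1) a_w,
    after substituting a_w = cosh(r) c_w + sinh(r) d_w^dag (Eq. (4))."""
    ch, sh = math.cosh(r), math.sinh(r)
    f = ch * (cmath.exp(-1j * phase) - 1)
    return 1 + f * ch, f * sh
```

Both forms agree term by term, and the unitarity condition $|A|^2-|B|^2=1$ confirms the delayed Unruh operators remain proper bosonic modes.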
We can calculate the quadrature variance and amplitude by utilizing equations (22), (34) and (35).
\subsection{Quadrature Amplitude and Variance}
By utilizing the fact that $\braket{\hat{c}_{\omega}'}=0$ and $\braket{\hat{d}_{\omega}'}=0$, we find the following:
\begin{equation}
\begin{aligned}
X(\phi)=0
\end{aligned}
\end{equation}
We note there are some complications to equations (36) and (37), which will be addressed in the appendix, Sec. \ref{sec: Reference}. We utilize the formalism introduced in equations (28) to (30) to calculate the quadrature variance. Utilizing the correlation functions calculated in the appendix, Sec. VIII, we find the following:
\begin{widetext}
\begin{equation}
\begin{aligned}
V(\phi) & \approx 1+ \frac{8 \int \mathrm{d}\omega \; (1+2 \sinh(r_{\omega})^2) \cosh(r_{\omega})^2 \sinh(r_{\omega})^2 |g(\omega)|^2(1-\cos (\omega \Delta))}{(1+2 I_s)}
\end{aligned}
\end{equation}
\end{widetext}
where we have defined:
\begin{equation}
\begin{aligned}
I_c &\equiv \int \mathrm{d}\omega \; \cosh(r_{\omega})^2 |g(\omega)|^2
\\
I_s &\equiv \int \mathrm{d}\omega \; \sinh(r_{\omega})^2 |g(\omega)|^2
\end{aligned}
\end{equation}
As a result, $V_1=V(\phi)-1$ and $V_2=0$. We have obtained a simple, completely general, semi-analytic expression for the variance through the use of the input-output formalism. Furthermore, this expression can be obtained in an actual experiment by analysing the photon count statistics. This expression will be analysed numerically in order to gain further understanding of the statistics of the signal.
\section{Statistical Analysis of Accelerated Unitary Time Evolution via Self Homodyne Detection}
\subsection{Self Homodyne Measurement in Rindler Vacuum}
Before examining the statistics of the signal created by a unitary time-delay, we analyse the statistics of the Minkowski vacuum in the right Rindler frame, which are thermal.
We examine these statistics via self-homodyne detection. As usual, we introduce a large reference signal via the displacement operator $\hat{D}_g(\alpha)$. We are interested in the following scenario:
\begin{equation}
\begin{aligned}
\hat{U}&=1
\\
\hat{N}_{R}&=\int \mathrm{d}\omega \; \hat{a}_\omega^{\dag}{}''\hat{a}_\omega''
\\
\hat{N}_{0,R}&=\hat{D}_g(\alpha)^{\dag}\left(\int \mathrm{d}\omega \; \hat{a}_\omega^{\dag}\hat{a}_\omega\right)\hat{D}_g(\alpha)
\end{aligned}
\end{equation}
Following similar steps to the previous sections, we calculate the following:
\begin{equation}
\begin{aligned}
\braket{\hat{a}_{\omega}'}&=0
\\
\braket{\hat{a}_{\omega}'\hat{a}_{\omega}'}&=0
\\
\braket{\hat{a}_{\omega}'^{\dag}\hat{a}_{\omega}'}&=\sinh(r_\omega)^2
\\
g_a(\omega)&=g^*(\omega)\alpha
\end{aligned}
\end{equation}
As a result, the quadrature amplitude and variance are calculated to be as follows:
\begin{equation}
\begin{aligned}
X_{vac}(\phi)=0
\\
V_{vac}(\phi)=1+2I_s
\end{aligned}
\end{equation}
This describes the statistics of a thermal bath, and it will be compared with the result obtained in equation (37).
\subsection{Numerical Analysis of Variance}
The analysis in the Schr\"odinger picture showed that there are no correlations between different Rindler/Unruh frequency modes. As a result, we are interested in the single frequency right Rindler statistics due to the unitary. As it is difficult to consider a normalised single frequency mode, we consider a localised Gaussian wave-packet mode in the right Rindler frame:
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{EffectofDeltaLinear.PNG}
\caption{The graph above demonstrates how $\Delta$ affects the variance of the signal detected by the Minkowski detector. We have utilized the following settings: $a=1$, $\omega_0=0.1$ and $\delta=0.005$.}
\end{figure}
\begin{equation}
g(\omega;\omega_0, \delta, v_c) \equiv B\sqrt{\omega}(\frac{1}{2\pi \delta^2})^{1/4} \exp\left[-\frac{(\omega-\omega_0)^2}{4\delta^2}-i\omega v_c\right]
\end{equation}
where $B$ is the normalization constant, $\omega_0$ is the central frequency, $\delta$ is the bandwidth of the wavepacket mode and $v_c$ is the central position of the Gaussian wavepacket mode. By restricting ourselves to $\delta<0.4 \omega_0$, the approximation $B \approx 1/\sqrt{\omega_0}$ is valid. In this section we compute $V_1$ which characterises the deviation of the variance from the shot noise. We first analyse how $\Delta$ affects the variance.
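The variance excess $V_1$ of equation (37) with this wavepacket can be evaluated with a simple Riemann sum. The sketch below is our own illustrative implementation (a plain mid-point rule with hypothetical grid parameters, using $B \approx 1/\sqrt{\omega_0}$): in the constant regime $\delta \ll \omega_0$, $\Delta \gg 1/\delta$, the result approaches the single-frequency value $2\,\mathrm{csch}(\pi\omega_0/a)^2$.

```python
import math

def variance_excess(a, w0, delta, Delta, n=4000):
    """V_1 = V(phi) - 1 from Eq. (37) with the Gaussian wavepacket
    |g(w)|^2 = (w/w0) (2 pi delta^2)^{-1/2} exp(-(w-w0)^2 / (2 delta^2)),
    valid for delta << w0. Plain mid-point Riemann sum over +/- 6 delta."""
    lo = max(1e-6, w0 - 6 * delta)
    hi = w0 + 6 * delta
    dw = (hi - lo) / n
    num = 0.0
    i_s = 0.0
    for i in range(n):
        w = lo + (i + 0.5) * dw
        r = math.atanh(math.exp(-math.pi * w / a))
        ch2, sh2 = math.cosh(r) ** 2, math.sinh(r) ** 2
        g2 = (w / w0) / math.sqrt(2 * math.pi * delta**2) * \
             math.exp(-((w - w0) ** 2) / (2 * delta**2))
        num += 8 * (1 + 2 * sh2) * ch2 * sh2 * g2 * \
               (1 - math.cos(w * Delta)) * dw
        i_s += sh2 * g2 * dw
    return num / (1 + 2 * i_s)
```

For the Fig. 3 parameters ($a=1$, $\omega_0=0.1$, $\delta=0.005$) and large $\Delta$, the oscillatory $\cos(\omega\Delta)$ term averages out and the sum sits close to $2\,\mathrm{csch}(\pi\omega_0/a)^2 \approx 19.6$.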
\\
\\
\noindent From Fig. 3, we find that the variance remains roughly constant for $\Delta > \delta^{-1}$. This is the regime where the overlap between the delayed and original modes is roughly zero; $[\hat{a}_g{}',\hat{a}_g^{\dag}] \approx 0$. While the original and delayed modes overlap, we observe sinusoidal oscillations. This can be understood as a result of the wave-packet modes becoming correlated/anti-correlated. When the two modes are out of phase by $\pi$, we observe a local maximum in the variance; the local minima correspond to when the two modes are in phase with each other. The amplitude of the oscillations decreases due to the decreasing overlap between the two modes.
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{EffectofDelta2.PNG}
\caption{The graph above demonstrates how $\Delta$ affects the variance of the signal detected by the Minkowski detector. We have utilized the following settings: $a=1$ and $\delta=0.2\omega_0$.}
\end{figure}
\\
\\
Fig. 4 shows how the variance increases for various $\omega_0$. We find that the variance increases quadratically for small $\Delta$. This regime corresponds to when the two modes still overlap with each other. As Fig. 4 has a significantly larger value of $\delta$ than Fig. 3, we only see one oscillation before the variance becomes constant. The regime where the variance is constant can be interpreted as the regime where the two modes no longer overlap with each other.
\\
\\
The variance starts to increase again for sufficiently large $\Delta$. With further analysis, it can be shown that the variance increases due to low frequency contributions around $\omega=0$. This regime can therefore be interpreted as an artifact of assuming the delay also applies to ultra low frequencies. As this is physically unlikely, we are not interested in this regime. The low frequency contributions can be suppressed by setting $\delta \ll \omega_0$.
\\
\\
We can obtain numerical results close to the single frequency statistics by considering the regime where the variance is roughly constant. This can be done by setting $\delta \ll \omega_{0}$ and $1/\delta \ll \Delta$. Fig. 5 demonstrates the statistics of the signal, namely how $\omega_0$ affects the variance.
\begin{figure} [h!]
\centering
\includegraphics[width=0.45\textwidth]{Comparison.PNG}
\caption{Comparison of the variances $V_{vac}$ and $V(\phi)$ as functions of $\omega_0$. We have utilized the following settings: $\Delta=10^8, \delta=0.05 \omega_0$.}
\label{fig: Comparison}
\end{figure}
\\
\\
Fig. 5 compares the variance $V_{vac}$ (Eq. 41) with $V(\phi)$.
For low $\omega_0$, it is found that $V_{vac}-1$ is inversely proportional to $\omega_0$, while $V(\phi)-1$ is inversely proportional to $\omega_0 ^2$. This demonstrates that the statistics of a thermal state and of the state created by an accelerated time delay on the Minkowski vacuum are quite different in general. On the other hand, it is interesting to note that for high $\omega_0$ the characteristics of the variances are quite similar. This figure can be summarised with the following equations:
\begin{equation}
\begin{aligned}
V(\phi,\omega) &= 1+ 2\csch(\pi \omega/a)^2
\\
V_{vac}(\phi,\omega) &= 1+ 2\sinh(r_{\omega})^2
\end{aligned}
\end{equation}
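The closed form in equation (42) follows from the identity $8\cosh^2(r_\omega)\sinh^2(r_\omega) = 2\,\mathrm{csch}^2(\pi\omega/a)$, i.e.\ the single-frequency limit of equation (37) with $\langle 1-\cos(\omega\Delta)\rangle \to 1$. A short numerical check (our own helper names); it also shows that for the high-frequency values sampled, the time-delay excess noise sits at four times the thermal excess noise, consistent with the two variances sharing the same exponential decay:

```python
import math

def r_of(w, a=1.0):
    # Rindler squeezing parameter, r = atanh(exp(-pi w / a))
    return math.atanh(math.exp(-math.pi * w / a))

def v_delay_excess(w, a=1.0):
    # single-frequency limit of Eq. (37): V - 1 = 8 cosh^2(r) sinh^2(r)
    r = r_of(w, a)
    return 8 * math.cosh(r) ** 2 * math.sinh(r) ** 2

def v_csch(w, a=1.0):
    # closed form quoted in Eq. (42): V - 1 = 2 csch^2(pi w / a)
    return 2 / math.sinh(math.pi * w / a) ** 2

def v_thermal_excess(w, a=1.0):
    # thermal result of Eq. (41) at a single frequency: 2 sinh^2(r)
    return 2 * math.sinh(r_of(w, a)) ** 2
```

The exact agreement of `v_delay_excess` and `v_csch` follows from $\sinh^2 r_\omega = 1/(e^{2\pi\omega/a}-1)$ and $\cosh^2 r_\omega = e^{2\pi\omega/a}/(e^{2\pi\omega/a}-1)$.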
The output state has fluctuations above the shot noise, and hence seems mixed. Through the analysis in the Schr\"odinger picture, we know that the output state is a pure two-mode squeezed state. However, this does not mean that the local observer can easily observe a two-mode squeezed state. The analysis via self-homodyne detection demonstrates that when the local observer only looks at the statistics of the radiation from the accelerated time-delay source, the radiation seems noisy. As a result, the output state exhibits apparent decoherence.
\\
\\
This apparent decoherence can be traced back to the underlying vacuum correlations that existed between the right and left Rindler modes. The unitary distorts these correlations. We conjecture that the inertial observer can observe a final pure state when these correlations are extracted. For technical reasons, extracting them for a time delay is difficult. We therefore consider another passive unitary, for which some of the correlations can be extracted more easily.
\section{Mirror}
In this section we briefly consider another passive unitary: an accelerated mirror. This unitary has been considered extensively in the literature \cite{Crispino2008}. The Minkowski frequency statistics of the outcome have been considered by Su et al \cite{Daiqin2016}. In this paper we consider the statistics with respect to Rindler frequencies.
\\
\\
By introducing the right and left moving modes, $\hat{a}_{\omega,1}$ and $\hat{a}_{\omega,2}$ respectively, we can introduce the mirror operator as follows:
\begin{equation}
\hat{U}_M \equiv \exp [\int \mathrm{d}\omega\; \theta_\omega(\hat{a}_{\omega,1}^{\dag}\hat{a}_{\omega,2}-\hat{a}_{\omega,2}^{\dag}\hat{a}_{\omega,1})]
\end{equation}
The operators evolve under this unitary in the following way:
\begin{equation}
\begin{aligned}
\hat{a}_{\omega,1}' &= \hat{a}_{\omega,1}\cos (\theta_\omega)+\hat{a}_{\omega,2}\sin (\theta_\omega)
\\
\hat{a}_{\omega,2}' &= \hat{a}_{\omega,2}\cos (\theta_\omega)-\hat{a}_{\omega,1}\sin (\theta_\omega)
\end{aligned}
\end{equation}
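The transformation above is simply an SO(2) rotation of the two modes, so it preserves the bosonic commutation relations at each frequency. A minimal check (the helper names are ours):

```python
import math

def mirror_matrix(theta):
    # a1' =  cos(t) a1 + sin(t) a2
    # a2' = -sin(t) a1 + cos(t) a2
    return [[math.cos(theta), math.sin(theta)],
            [-math.sin(theta), math.cos(theta)]]

def commutator_matrix(R):
    # [a_i', a_j'^dag] = sum_k R[i][k] * R[j][k]  (real coefficients),
    # which must equal delta_ij for the evolution to be unitary
    return [[sum(R[i][k] * R[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

For any $\theta_\omega$ the commutator matrix is the identity, confirming that $\hat{U}_M$ acts as a frequency-wise beam splitter.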
Following the same method taken before, we find that the Unruh operators evolve in the following way:
\begin{equation}
\begin{aligned}
\hat{c}_{\omega}' &= \hat{c}_{\omega}+\cosh(r_\omega)(\hat{a}_{\omega,1}(\cos (\theta_\omega)-1)+\hat{a}_{\omega,2}\sin (\theta_\omega))
\\
\hat{d}_{\omega}' &= \hat{d}_{\omega}-\sinh(r_\omega)(\hat{a}_{\omega,1}^{\dag}(\cos (\theta_\omega)-1)+\hat{a}_{\omega,2}^{\dag}\sin (\theta_\omega))
\end{aligned}
\end{equation}
Utilizing this result, we find the following:
\begin{equation}
\begin{aligned}
\braket{\hat{c}_{\omega}'} &=0
\\
\braket{\hat{d}_{\omega}'} &=0
\end{aligned}
\end{equation}
Hence, the quadrature amplitude is $X_M(\phi)=0$. We now proceed to calculating the variance. To do this, we first calculate the following correlation functions:
\begin{equation}
\begin{aligned}
\braket{0_M|\hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'|0_M}&= F_{M,1}(\omega) \delta(\omega-\omega')
\\
\braket{0_M|\hat{d}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'|0_M}&=F_{M,1}(\omega) \delta(\omega-\omega')
\\
\braket{0_M|\hat{c}_{\omega}{}'\hat{c}_{\omega'}{}'|0_M}&=0\\
\braket{0_M|\hat{d}_{\omega}{}'\hat{d}_{\omega'}{}'|0_M}&=0
\\
\braket{0_M|\hat{c}_{\omega}{}'\hat{d}_{\omega'}{}'|0_M}&=F_{M,2}(\omega) \delta(\omega-\omega')
\\
\braket{0_M|\hat{c}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'|0_M}&=0
\end{aligned}
\end{equation}
where we have defined the following:
\begin{equation}
\begin{aligned}
F_{M,1}&\equiv 2\cosh(r_\omega)^2\sinh(r_\omega)^2(1-\cos(\theta_\omega))
\\
F_{M,2}&\equiv -\cosh(r_\omega)\sinh(r_\omega)(1+2\sinh(r_\omega)^2)(1-\cos(\theta_\omega))
\end{aligned}
\end{equation}
It is interesting to compare this result to the result found in equation (58). We utilize the same procedure as before to calculate the variance. The values for $g_c(\omega)$ and $g_d(\omega)$ are the same as the ones calculated in equation (34).
\begin{widetext}
\begin{equation}
\begin{aligned}
V_M(\phi) & \approx 1+ \frac{8 \int \mathrm{d}\omega \; (1+2 \sinh(r_{\omega})^2) \cosh(r_{\omega})^2 \sinh(r_{\omega})^2 |g(\omega)|^2(1-\cos (\theta))}{(1+2 I_s)}
\end{aligned}
\end{equation}
\end{widetext}
We obtain the single frequency statistics by replacing $|g(\omega)|^2$ with a delta function:
\begin{equation}
V_M(\phi,\omega)=1+2\csch(\pi\omega/a)^2(1-\cos(\theta))
\end{equation}
By comparing this equation to equation (41), we find that the two results coincide when $\theta=\pi/2$. The result in equation (43) was valid under the condition $\Delta \gg 1/\delta$. Both conditions correspond to cases where $[\hat{a}_{\omega}',\hat{a}_{\omega}^{\dag}]=0$, i.e.\ where the overlap of the displaced mode and the original mode is 0. This can be understood as a measure of how much the vacuum correlation between the left and right Rindler modes has been distorted. It is interesting to note that the variance is highest when we set $\theta=\pi$. An analogous effect appears as the local maxima observed in Fig. 3.
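The coincidence at $\theta=\pi/2$ and the maximum at $\theta=\pi$ can be read off directly from the single-frequency expression. A short numerical sketch (function names are ours; $V_{delay}$ restates equation (41) for comparison):

```python
import math

def V_mirror(omega, theta, a=1.0):
    # V_M(phi, omega) = 1 + 2 csch(pi*omega/a)^2 (1 - cos(theta))
    return 1.0 + 2.0 / math.sinh(math.pi * omega / a) ** 2 * (1.0 - math.cos(theta))

def V_delay(omega, a=1.0):
    # equation (41): V(phi, omega) = 1 + 2 csch(pi*omega/a)^2
    return 1.0 + 2.0 / math.sinh(math.pi * omega / a) ** 2
```

At $\theta=\pi/2$ the factor $(1-\cos\theta)=1$ and the mirror variance reduces to the time-delay result; over $\theta$ it is maximised at $\theta=\pi$, where the factor equals 2.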
\section{Purification of the Output State}
In this section, we introduce several special cases where purification of the state can be observed. We extract the correlations for the mirror case, as $F_{M,2}(\omega)$ is a real valued function. This makes the extraction of correlations much simpler than for $F_2(\omega)$, which is a complex valued function.
\\
\\
The strategy is now not only to place the displacement for self-homodyne detection on the right Rindler mode, but also to displace the left Rindler mode. We displace the right and left Rindler wedges by the following displacement operators:
\begin{equation}
\begin{aligned}
\hat{D}_{g,R}(\alpha=|\alpha|e^{i \phi})\equiv \exp(\alpha\hat{a}_g^{\dag}-\alpha^{*}\hat{a}_g)
\\
\hat{D}_{g,L}(\beta=|\beta|e^{-i \phi})\equiv \exp(\beta\hat{b}_g^{\dag}-\beta^{*}\hat{b}_g)
\end{aligned}
\end{equation}
where we have defined the following:
\begin{equation}
\begin{aligned}
\hat{b}_g=\int \mathrm{d}\omega \; g(\omega)^*\hat{b}_\omega
\end{aligned}
\end{equation}
In this case, we calculate $g_c(\omega)$ and $g_d(\omega)$ to be:
\begin{equation}
\begin{aligned}
g_c(\omega)& = g(\omega)e^{i \phi} (\cosh(r_\omega) |\alpha| - \sinh(r_\omega) |\beta|)
\\
g_d(\omega)& = g(\omega)^*e^{-i \phi}(\cosh(r_\omega) |\beta| - \sinh(r_\omega) |\alpha|)
\end{aligned}
\end{equation}
As a result, $g_c(\omega)$ and $g_d(\omega)^*$ are proportional to $e^{i \phi}$. Thus, we can use equations (28), (29), (48) and (54) to calculate the variance of the output state. We find that $V_2=0$. For simplicity, we consider the single frequency limit. When we set either of the following:
\begin{equation}
\begin{aligned}
\frac{|\alpha|}{|\beta|}=\frac{\cosh(r_\omega)^2+\sinh(r_\omega)^2}{2\cosh(r_\omega)\sinh(r_\omega)}
\\
\frac{|\beta|}{|\alpha|}=\frac{\cosh(r_\omega)^2+\sinh(r_\omega)^2}{2\cosh(r_\omega)\sinh(r_\omega)}
\end{aligned}
\end{equation}
We find that the single frequency variance is:
\begin{equation}
V(\omega,\phi)=1
\end{equation}
As a result, we observe vacuum. This means that the left Rindler mode is perfectly anti-correlated with the noisy particles we observed coming from the right Rindler wedge. Furthermore, if we set $|\alpha|=|\beta|$ we find that the single frequency variance is:
\begin{equation}
V(\omega,\phi)= 1+2\cosh(r_{\omega})\sinh(r_{\omega})(\sinh(2r_\omega)-\cosh(2r_\omega))
\end{equation}
The noise is less than vacuum. As a result, the output state has the characteristics of a two-mode squeezed state. This suggests there is entanglement between the particles that are coupled to the right and left Rindler frequencies, as observed by the Minkowski observer. By fully characterising this correlation, we would be able to purify the output state.
\\
\\
This section highlighted some simple scenarios where purification of the state could be found, implying the presence of entanglement. Full characterisation of this entanglement is not a simple task, and exceeds the scope of this paper.
\section{Conclusion}
In this paper we looked into the effect of an accelerated unitary time delay. Through the Schr\"odinger picture, we showed that the output state is a two-mode squeezed state. We then analysed what an inertial observer would observe due to this unitary via self-homodyne detection. We showed that the radiation from the accelerated time-delay source would be observed to be noisy by an inertial observer. As a result, an accelerated time delay causes an apparent decoherence. We propose that the information is hidden in the vacuum noise that existed in the left Rindler wedge. We conducted further analysis of the mirror case, and showed that correlations do indeed exist in the left Rindler wedge.
\\
\\
We believe that, from an operational point of view, the extraction of this information is not practical. The stationary observer wants to extract information from the signal that is sent, and would not in general know the source of the signal. The stationary observer only has access to the physical signal (radiation) that is created by the right Rindler observer. The sender is causally disconnected from the left Rindler wedge, and cannot tell the observer in which mode the vacuum correlations are hidden. As the stationary observer has no clue where the vacuum correlations are hidden (i.e.\ only has access to the decohered physical signal), according to the stationary observer the system has apparently decohered.
\\
\\
The only way a pure state can be observed by the stationary observer is in a scenario where two parties agree in advance on which signal is sent by the right Rindler observer. They calculate where the vacuum correlations will be hidden due to that particular signal. The two parties then follow the left and right Rindler trajectories. The sender in the right Rindler wedge sends the signal, and the other party in the left Rindler wedge tells the stationary observer where the vacuum correlations are hidden. There are numerous technical difficulties with this method, but it is nevertheless in principle possible to conduct such an experiment. Future research could examine whether more effective protocols exist.
\\
\\
Our paper looked into the statistics of the signal that is created. Our results show that the statistics are indeed mixed, but do not follow thermal statistics. One noteworthy difference between the statistics of a thermal bath and the results obtained in our paper is the low frequency behaviour. It can be shown that the $1/\omega_0^2$ dependence at low frequency leads to energy divergences. This is due to ultra-low frequency delays, which cannot be achieved in practice.
\\
\\
The issue regarding the infinite energy was also encountered in the case of a uniformly accelerated mirror \cite{Daiqin2016}. It is noted that the energy flux and particle flux of a uniformly accelerating mirror away from the horizon are actually zero \cite{PCWDavies1975, Birrell1982, Fulling1976, Grove1986}. Particles and energy are only created when there is a change in acceleration. For an eternally accelerated mirror, the radiation source can be traced back to the horizon, where there is a divergence in energy flux \cite{Frolov1979, Frolov1980, Frolov1999}. In our case, we make a similar argument and attribute the divergence to accelerating the time-delay source for an infinite time.
\\
\\
Another intriguing motivation for studying the time delay is a possible connection to the results presented in recent experiments by Riek et al \cite{Subcycle}. These authors measured the effect of a rapidly varying time delay produced by transmission through a crystal with a changing refractive index. Due to the similarity between a time varying refractive index and acceleration \cite{refractiveindex}, we believe that our analysis of the accelerated time delay may give further insight into such experiments and lead to new experimental proposals. In the next section, we note some possible implications of our results for the black-hole information paradox.
\subsection{A comment on black hole information paradox}
The black hole information paradox \cite{Hawking1976, Susskind1993,Stephens1994, Mathur2005, Hawking2016, VBaccetti2016,Braunstein2013, Almheiri2013} points out the apparent contradiction between quantum mechanics and Hawking radiation. To restore the purity of the final state of an evaporated black hole, there must be hidden correlations in the final state. Previous proposals include correlations between the early and late time thermal bath \cite{Susskind1993,Stephens1994,Almheiri2013} and correlations between the thermal bath and the curvature of space-time \cite{Nomura2016, Nomura2015a, Nomura2015b}. Our study raises the possibility that the correlations may exist between the distorted vacuum fluctuations. The equivalence principle establishes a strong connection between gravity and acceleration \cite{Gravitation1973, Davies1977, Walker1985, Carlitz1987, Hawking1975}. Thus, we conjecture that the notion of apparent decoherence in the Rindler case can also be applied to the case of a black hole.
\clearpage
\begin{widetext}
\section*{Appendix}
\vspace{0.5cm}
\section{Correlation between Unruh operators due to Accelerated time delay/evolution}
The vacuum expectation values of products of two output Unruh operators are calculated by utilizing equation (35) and the fact that the Unruh and Minkowski vacua coincide:
\begin{equation}
\begin{aligned}
\braket{0_M|\hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'|0_M}&= F_1(\omega) \delta(\omega-\omega')
\\
\braket{0_M|\hat{d}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'|0_M}&=F_1(\omega) \delta(\omega-\omega')
\\
\braket{0_M|\hat{c}_{\omega}{}'\hat{c}_{\omega'}{}'|0_M}&=0\\
\braket{0_M|\hat{d}_{\omega}{}'\hat{d}_{\omega'}{}'|0_M}&=0
\\
\braket{0_M|\hat{c}_{\omega}{}'\hat{d}_{\omega'}{}'|0_M}&=F_2(\omega) \delta(\omega-\omega')
\\
\braket{0_M|\hat{c}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'|0_M}&=0
\end{aligned}
\end{equation}
where we have defined the following:
\begin{equation}
\begin{aligned}
F_1(\omega) & \equiv \cosh(r_{\omega})^2 \sinh(r_{\omega})^2(2-2 \cos(\omega \Delta))
\\
F_2(\omega)& \equiv \cosh(r_{\omega}) \sinh(r_{\omega}) (1- e^{i \omega \Delta})(\cosh(r_{\omega})^2 e^{-i \omega \Delta}-\sinh(r_{\omega})^2)
\end{aligned}
\end{equation}
All other combinations can be found by utilizing the commutation relations for the Unruh operators or by applying the complex conjugate.
\end{widetext}
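As a consistency check of the expressions for $F_1$ and $F_2$ above, one can verify the identity $|F_2(\omega)|^2 = F_1(\omega)\big(1+F_1(\omega)\big)$, which follows from $\cosh^2 r_\omega - \sinh^2 r_\omega = 1$ (our observation, not stated in the text). A numerical sketch, writing $x=\omega\Delta$:

```python
import cmath
import math

def F1(r, x):
    # F_1 = cosh(r)^2 sinh(r)^2 (2 - 2 cos(x)),  with x = omega * Delta
    return math.cosh(r) ** 2 * math.sinh(r) ** 2 * (2.0 - 2.0 * math.cos(x))

def F2(r, x):
    # F_2 = cosh(r) sinh(r) (1 - e^{ix}) (cosh(r)^2 e^{-ix} - sinh(r)^2)
    return (math.cosh(r) * math.sinh(r) * (1.0 - cmath.exp(1j * x))
            * (math.cosh(r) ** 2 * cmath.exp(-1j * x) - math.sinh(r) ** 2))
```

The identity holds for every squeezing $r$ and phase $x$, since $\cosh^4 r + \sinh^4 r - 2\cosh^2 r\sinh^2 r\cos x = 1 + 2\cosh^2 r\sinh^2 r(1-\cos x)$.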
\section{Practical and Ideal Self Homodyne Detection}\label{sec: Reference}
\subsection{Practical Measurements}
We note that if we explicitly calculate the quadrature amplitude of equation (36) without ignoring the terms of order $1/\sqrt{\alpha}$, we obtain the following expression:
\begin{equation}
\begin{aligned}
X(\phi)&=\frac{1}{|\alpha|} \frac{\int \mathrm{d}\omega \; \delta(0) F_1(\omega) }{\sqrt{1+2I_s}}
\end{aligned}
\end{equation}
In this case, the approximation that the above expression is 0 is not valid. As there are infinitely many particles, we would require $|\alpha|$ to be infinite, which cannot be achieved in an experiment. This practicality issue is addressed in this section. The results obtained in the paper are idealized results which neglect these terms. The same issue also appears in the calculation of the variance.
\\
\\
In practice, we cannot measure infinitely small and large wavelength particles, due to the limitations of our experimental apparatus and setup. We assume that our detector can only measure frequencies between $k_{min}$ and $k_{max}$. We introduce the subscript $Pr$ to denote practical measurements with low and high frequency cut-offs. This can be compared with the subscript $Id$, which denotes the ideal measurements that were obtained in the paper.
\\
\\
The particle count measured by a practical detector is modelled by the operator defined in equation (61).
\begin{widetext}
\begin{equation}
\begin{aligned}
\hat{N}_{Pr}& \equiv \int_{k_{min}}^{k_{max}} \mathrm{d}k \; \hat{e}_k^{\dag}{}''\hat{e}_k''
\\ &=\int \mathrm{d}\omega \mathrm{d}\omega' \; A_{\omega\omega',1}[\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''+\hat{d}_{\omega'}^{\dag}{}''\hat{d}_{\omega}{}'']+[A_{\omega\omega',2}\hat{c}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''+A_{\omega\omega',2}^*\hat{d}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}'']
\\A_{\omega\omega',1} & \equiv \int\mathrm{d}k A_{k\omega}^*A_{k\omega'}, \;
A_{\omega\omega',2} \equiv \int\mathrm{d}k A_{k\omega}^*A_{k\omega'}^*
\end{aligned}
\end{equation}
The corresponding expectation values are calculated as follows:
\begin{equation}
\begin{aligned}
\braket{\hat{N}_{Pr}} = & \int \mathrm{d}\omega \mathrm{d}\omega' \; A_{\omega\omega',1}[\braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''}+\braket{\hat{d}_{\omega'}^{\dag}{}''\hat{d}_{\omega}{}''}]+2 Re[A_{\omega\omega',2}\braket{\hat{c}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''}]
\\ = & \;|\alpha|^2 \int \mathrm{d}\omega \mathrm{d}\omega' \; A_{\omega\omega',1}[(\cosh(r_{\omega})\cosh(r_{\omega'})+\sinh(r_{\omega})\sinh(r_{\omega'}))g(\omega)g(\omega')^*]
\\& \; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; - 2 Re[ A_{\omega\omega',2} \cosh(r_{\omega}) \sinh(r_{\omega'})g(\omega) g(\omega')\alpha^2]
\\ & + |\alpha|^0 \int \mathrm{d}\omega \; F_1(\omega) A_{\omega\omega,1}
\\
\braket{\hat{N}_{Pr,0}} = & \;|\alpha|^2 \int \mathrm{d}\omega \mathrm{d}\omega' \; A_{\omega\omega',1}[(\cosh(r_{\omega})\cosh(r_{\omega'})+\sinh(r_{\omega})\sinh(r_{\omega'}))g(\omega)g(\omega')^*]
\\& \; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; - 2 Re[ A_{\omega\omega',2} \cosh(r_{\omega}) \sinh(r_{\omega'})g(\omega) g(\omega')\alpha^2]
\end{aligned}
\end{equation}
\end{widetext}
Thus, the practical quadrature amplitude is:
\begin{equation}
\begin{aligned}
X_{Pr}(\phi) &= \frac{\frac{1}{2\pi}\log (\frac{k_{max}}{k_{min}})\int \mathrm{d}\omega \; F_1(\omega)}{\sqrt{\braket{\hat{N}_0}}}
\\ &\approx 0
\end{aligned}
\end{equation}
The approximation is valid because $\int \mathrm{d}\omega \; F_1(\omega)$ is finite. How large $|\alpha|$ must be for the approximation to be valid will be analysed later in the appendix. We are now interested in calculating the variance. To do this, we must calculate the expectation value of $\hat{N}_{Pr}^2$, which expands as follows:
\begin{widetext}
\begin{equation}
\begin{aligned}
\hat{N}_{Pr}{}^2 = \int \mathrm{d}\omega \mathrm{d}\omega'\mathrm{d}\omega'' \mathrm{d}\omega''' &\; [A_{\omega\omega',1}A_{\omega''\omega''',1}[\hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'\hat{c}_{\omega''}^{\dag}{}'\hat{c}_{\omega'''}{}'+\hat{d}_{\omega'}^{\dag}{}'\hat{d}_{\omega}{}'\hat{d}_{\omega'''}^{\dag}{}'\hat{d}_{\omega''}{}']+\{\hat{d}_{\omega'}^{\dag}{}'\hat{d}_{\omega}{}'\hat{c}_{\omega''}^{\dag}{}'\hat{c}_{\omega'''}{}'+\hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'\hat{d}_{\omega'''}^{\dag}{}'\hat{d}_{\omega''}{}'\}]
\\
&+ A_{\omega\omega',2}A_{\omega''\omega''',2}^*\hat{c}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'\hat{d}_{\omega''}^{\dag}{}'\hat{c}_{\omega'''}{}'+A_{\omega\omega',2}^*A_{\omega''\omega''',2}\hat{d}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'\hat{c}_{\omega''}^{\dag}{}'\hat{d}_{\omega'''}{}'
\\
&+\{A_{\omega\omega',2}A_{\omega''\omega''',2}\hat{c}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'\hat{c}_{\omega''}^{\dag}{}'\hat{d}_{\omega'''}{}'+A_{\omega\omega',2}^*A_{\omega''\omega''',2}^*\hat{d}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'\hat{d}_{\omega''}^{\dag}{}'\hat{c}_{\omega'''}{}'\}
\\
&+\{A_{\omega\omega',1}A_{\omega''\omega''',2}\hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'\hat{c}_{\omega''}^{\dag}{}'\hat{d}_{\omega'''}{}'+A_{\omega\omega',2}^*A_{\omega''\omega''',1}\hat{d}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'\hat{c}_{\omega''}^{\dag}{}'\hat{c}_{\omega'''}{}'\}
\\
&+\{A_{\omega\omega',2}A_{\omega''\omega''',1}\hat{c}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'\hat{c}_{\omega''}^{\dag}{}'\hat{c}_{\omega'''}{}'+A_{\omega\omega',1}A_{\omega''\omega''',2}^*\hat{c}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'\hat{d}_{\omega''}^{\dag}{}'\hat{c}_{\omega'''}{}'\}
\\
&+ \{A_{\omega\omega',1}A_{\omega''\omega''',2}^*\hat{d}_{\omega'}^{\dag}{}'\hat{d}_{\omega}{}'\hat{d}_{\omega''}^{\dag}{}'\hat{c}_{\omega'''}{}'+A_{\omega\omega',2}A_{\omega''\omega''',1}\hat{c}_{\omega}^{\dag}{}'\hat{d}_{\omega'}{}'\hat{d}_{\omega'''}^{\dag}{}'\hat{d}_{\omega''}{}'\}
\\
&+\{A_{\omega\omega',2}^*A_{\omega''\omega''',1}\hat{d}_{\omega}^{\dag}{}'\hat{c}_{\omega'}{}'\hat{d}_{\omega'''}^{\dag}{}'\hat{d}_{\omega''}{}'+A_{\omega\omega',1}A_{\omega''\omega''',2}\hat{d}_{\omega'}^{\dag}{}'\hat{d}_{\omega}{}'\hat{c}_{\omega''}^{\dag}{}'\hat{d}_{\omega'''}{}'\}
\end{aligned}
\end{equation}
In the equation above, we have grouped the operators which are related by relabelling and a complex conjugate. The expectation value of this operator can be calculated by first computing the following correlation functions:
\begin{equation}
\begin{aligned}
\braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''\hat{c}_{\omega''}^{\dag}{}''\hat{c}_{\omega'''}{}''}=\braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''}\braket{\hat{c}_{\omega''}^{\dag}{}''\hat{c}_{\omega'''}{}''}&+g_c(\omega')g_c(\omega'')^*\delta(\omega-\omega''')F_1(\omega)
\\&+ g_c(\omega)^*g_c(\omega''')\delta(\omega'-\omega'')(1+F_1(\omega'))
\\&+|\alpha|^0\delta(\omega-\omega''')\delta(\omega'-\omega'')F_1(\omega)(1+F_1(\omega'))
\\
\braket{\hat{d}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''\hat{d}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}=\braket{\hat{d}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''}\braket{\hat{d}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}
&+g_d(\omega')g_d(\omega'')^*\delta(\omega-\omega''')F_1(\omega)
\\&+g_d(\omega)^*g_d(\omega''')\delta(\omega'-\omega'')(1+F_1(\omega'))
\\&+|\alpha|^0\delta(\omega-\omega''')\delta(\omega'-\omega'')F_1(\omega)(1+F_1(\omega'))
\\
\braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''\hat{d}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}=\braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''}\braket{\hat{d}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}
&+g_c(\omega)^*g_d(\omega'')^*\delta(\omega'-\omega''')F_2(\omega')
\\&+g_c(\omega')g_d(\omega''')\delta(\omega-\omega'')F_2(\omega)^*
\\&+|\alpha|^0 \delta(\omega-\omega'')\delta(\omega'-\omega''')F_2(\omega)^* F_2(\omega')
\\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\braket{\hat{c}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''\hat{d}_{\omega''}^{\dag}{}''\hat{c}_{\omega'''}{}''}=\braket{\hat{c}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''}\braket{\hat{d}_{\omega''}^{\dag}{}''\hat{c}_{\omega'''}{}''}
&+g_c(\omega)^*g_c(\omega''')\delta(\omega'-\omega'')(1+F_1(\omega'))
\\&+g_d(\omega')g_d(\omega'')^*\delta(\omega-\omega''')F_1(\omega)
\\&+g_c(\omega)^*g_d(\omega'')^*\delta(\omega'-\omega''')F_2(\omega')
\\&+g_d(\omega')g_c(\omega''')\delta(\omega-\omega'')F_2(\omega)^*
\\&+|\alpha|^0 \delta(\omega-\omega'')\delta(\omega'-\omega''')F_2(\omega') F_2(\omega)^*
\\&+|\alpha|^0\delta(\omega-\omega''')\delta(\omega'-\omega'')F_1(\omega)(1+F_1(\omega'))
\\
\braket{\hat{d}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''\hat{c}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}=\braket{\hat{d}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''}\braket{\hat{c}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}
&+g_c(\omega')g_c(\omega'')^* \delta(\omega-\omega''')F_1(\omega)
\\&+g_d(\omega)^*g_d(\omega''')\delta(\omega'-\omega'')(1+F_1(\omega'))
\\&+g_d(\omega)^*g_c(\omega'')^*\delta(\omega'-\omega''')F_2(\omega')
\\&+g_c(\omega')g_d(\omega''')\delta(\omega-\omega'')F_2(\omega)^*
\\&+|\alpha|^0 \delta(\omega-\omega'')\delta(\omega'-\omega''')F_2(\omega')F_2(\omega)^*
\\&+|\alpha|^0 \delta(\omega'-\omega'')\delta(\omega-\omega''')F_1(\omega)(1+F_1(\omega'))
\\
\braket{\hat{c}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''\hat{c}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}=\braket{\hat{c}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''}\braket{\hat{c}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}&
\\
\braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''\hat{c}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}=\braket{\hat{c}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''}\braket{\hat{c}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}
&+g_c(\omega)^*g_c(\omega'')^* \delta(\omega'-\omega''') F_2(\omega')
\\
&+g_c(\omega)^*g_d(\omega''')\delta(\omega'-\omega'')(F_1(\omega')+1)
\\
\braket{\hat{c}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''\hat{c}_{\omega''}^{\dag}{}''\hat{c}_{\omega'''}{}''}=\braket{\hat{c}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''}\braket{\hat{c}_{\omega''}^{\dag}{}''\hat{c}_{\omega'''}{}''}&+g_c(\omega)^*g_c(\omega'')^* \delta(\omega'-\omega''') F_2(\omega')
\\
&+g_d(\omega')g_c(\omega'')^*\delta(\omega-\omega''')F_1(\omega)
\\
\braket{\hat{d}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''\hat{d}_{\omega''}^{\dag}{}''\hat{c}_{\omega'''}{}''}
=\braket{\hat{d}_{\omega}^{\dag}{}''\hat{d}_{\omega'}{}''}\braket{\hat{d}_{\omega''}^{\dag}{}''\hat{c}_{\omega'''}{}''}
&+g_d(\omega)^*g_d(\omega'')^* \delta(\omega'-\omega''') F_2(\omega')
\\
&+g_d(\omega)^*g_c(\omega''')\delta(\omega'-\omega'')(F_1(\omega')+1)
\\
\braket{\hat{d}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''\hat{d}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}=\braket{\hat{d}_{\omega}^{\dag}{}''\hat{c}_{\omega'}{}''}\braket{\hat{d}_{\omega''}^{\dag}{}''\hat{d}_{\omega'''}{}''}
&+g_d(\omega)^*g_d(\omega'')^* \delta(\omega'-\omega''') F_2(\omega')
\\
\\&+g_c(\omega')g_d(\omega'')^*\delta(\omega-\omega''')F_1(\omega)
\end{aligned}
\end{equation}
All other expressions can be found by applying a complex conjugate to the expressions above or by utilizing the fact that $\hat{c}_{\omega}$ commutes with $\hat{d}_{\omega}$. We introduce the $G$ functions to simplify further calculations:
\begin{equation}
\begin{aligned}
G_{\alpha \beta \gamma \delta}(\omega,\omega',\omega'',\omega''') &\equiv \braket{\hat{\alpha}_{\omega}^{\dag}{}''\hat{\beta}_{\omega'}{}''\hat{\gamma}_{\omega''}^{\dag}{}''\hat{\delta}_{\omega'''}{}''}-\braket{\hat{\alpha}_{\omega}^{\dag}{}''\hat{\beta}_{\omega'}{}''}\braket{\hat{\gamma}_{\omega''}^{\dag}{}''\hat{\delta}_{\omega'''}{}''}
\end{aligned}
\end{equation}
where $\alpha,\beta,\gamma,\delta \in \{c,d\}$. The explicit expressions for these terms can be found by plugging in the expressions written in equations (65) and (66). We introduce a subscript to these $G$-functions, $G_{\alpha\beta\gamma\delta,n}$ with $n \in \{0,2\}$, which denotes the part of order $|\alpha|^0$ or $|\alpha|^2$. Utilizing equations (64) to (67), we find that the particle number fluctuation can be written in the following way:
\begin{equation}
\begin{aligned}
(\Delta\braket{\hat{N}_{Pr}})^2 = \int \mathrm{d}\omega \mathrm{d}\omega'\mathrm{d}\omega'' \mathrm{d}\omega''' &\; A_{\omega\omega',1}A_{\omega''\omega''',1}G_{cccc}(\omega,\omega',\omega'',\omega''')+A_{\omega\omega',1}^*A_{\omega''\omega''',1}^*G_{dddd}(\omega,\omega',\omega'',\omega''')
\\ & + 2A_{\omega,\omega',1}A_{\omega'',\omega''',1}^*G_{ccdd}(\omega,\omega',\omega'',\omega''')
\\ &+A_{\omega\omega',2}A_{\omega''\omega''',2}^*G_{cddc}(\omega,\omega',\omega'',\omega''') +A_{\omega\omega',2}^*A_{\omega''\omega''',2}G_{dccd}(\omega,\omega',\omega'',\omega''')
\\
&+2Re [A_{\omega\omega',1}^*A_{\omega''\omega''',2}^*G_{cccd}(\omega,\omega',\omega'',\omega''')^*+A_{\omega\omega',2}^* A_{\omega''\omega''',1}^* G_{cdcc}(\omega,\omega',\omega'',\omega''')^*
\\
&\;\;\;\;\;\;\;\;\;\;+ A_{\omega\omega',1}A_{\omega''\omega''',2}^* G_{dddc}(\omega,\omega',\omega'',\omega''')+A_{\omega\omega',2}^*A_{\omega''\omega''',1}G_{dcdd}(\omega,\omega',\omega'',\omega''')]
\end{aligned}
\end{equation}
We notice that every term in the last two lines is proportional to $\alpha^2$. Following similar steps to the paper, we write the variance of the signal in a compact way:
\begin{equation}
\begin{aligned}
V_{Pr}(\phi) & = \frac{(\Delta\braket{\hat{N}_{Pr}})^2}{\braket{\hat{N}_{Pr,0}}}
\\ &=V_{1,2}' + V_{2}' \cos(\theta-2\phi)+ V_{1,0}'
\\ & \approx V_{1,2}' + V_{2}' \cos(\theta-2\phi)
\end{aligned}
\end{equation}
where we have defined the following:
\begin{equation}
\begin{aligned}
V_{1,n}'\equiv \frac{1}{\braket{\hat{N}_0}}\int \mathrm{d}\omega \mathrm{d}\omega'\mathrm{d}\omega'' \mathrm{d}\omega''' &\; A_{\omega\omega',1}A_{\omega''\omega''',1}G_{cccc,n}(\omega,\omega',\omega'',\omega''')+A_{\omega\omega',1}^*A_{\omega''\omega''',1}^*G_{dddd,n}(\omega,\omega',\omega'',\omega''')
\\ & + 2A_{\omega,\omega',1}A_{\omega'',\omega''',1}^*G_{ccdd,n}(\omega,\omega',\omega'',\omega''')
\\ &+A_{\omega\omega',2}A_{\omega''\omega''',2}^*G_{cddc,n}(\omega,\omega',\omega'',\omega''') +A_{\omega\omega',2}^*A_{\omega''\omega''',2}G_{dccd,n}(\omega,\omega',\omega'',\omega''')
\\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
V_{2,\phi}'\equiv \frac{2}{\braket{\hat{N}_0}}\int \mathrm{d}\omega \mathrm{d}\omega'\mathrm{d}\omega'' \mathrm{d}\omega''' &A_{\omega\omega',1}^*A_{\omega''\omega''',2}^*G_{cccd}(\omega,\omega',\omega'',\omega''')^*+A_{\omega\omega',2}^* A_{\omega''\omega''',1}^* G_{cdcc}(\omega,\omega',\omega'',\omega''')^*
\\
&\;\;\;\;\;\;\;\;\;\;+ A_{\omega\omega',1}A_{\omega''\omega''',2}^* G_{dddc}(\omega,\omega',\omega'',\omega''')+A_{\omega\omega',2}^*A_{\omega''\omega''',1}G_{dcdd}(\omega,\omega',\omega'',\omega''')
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\overline{V}_2'&=|V_{2,\phi}'|
\\
e^{i \theta} & \equiv \frac{V_{2,\phi=0}'}{|V_{2,\phi=0}'|}
\end{aligned}
\end{equation}
\end{widetext}
$V_{1,2}'$ can be interpreted as the average noise of the signal. $V_{2}'$ can be interpreted as the amount of squeezing in the signal. $V_{1,0}'$ can be interpreted as the error that arises due to the construction of self-homodyne detection.
In the next section we will conduct numerical analysis of the quadrature amplitude and variance.
\subsection{Numerical Analysis of Practical Measurements}
\subsubsection{Particle Number}
In this section, we look into how various parameters affect the particle count of a coherent Rindler signal with amplitude $|\alpha|=1$. We analyse the conditions necessary for $\braket{\hat{N}_{Pr}} \approx \braket{\hat{N}_{Id}}$. This is important, as if this condition is not satisfied, a significant amount of the signal has been traced out.
\\
\\
\begin{figure}[h!]
\includegraphics[width=0.45\textwidth]{NEffectofKminKmax2.PNG}
\hfill
\caption{The graph above demonstrates how $k_{wid}$ affects the particle count detected by the Minkowski detector. We have utilized the following settings: $\omega_0=0.6, \; k_{med}=1, \; \delta = 0.4\omega_0, \; v_c=0.577, \; a=1$.}
\end{figure}
Fig. 6 demonstrates how $\braket{\hat{N}_{Pr}}$ converges towards $\braket{\hat{N}_{Id}}$ as we increase $k_{wid}$. We have defined the following:
\begin{equation}
\begin{aligned}
k_{med} &\equiv \sqrt{k_{max}k_{min}}
\\
k_{wid} &\equiv \sqrt{k_{max}/k_{min}}
\end{aligned}
\end{equation}
\noindent We find that the particle count converges to $\braket{\hat{N}_{0,Id}}$ as $k_{wid}$ increases.
To analyse the convergence rate, we now consider how $\delta$ affects the particle count. The Rindler coordinate $v$ is related to the Minkowski coordinate $V$ in the following way:
\begin{equation}
\begin{aligned}
V=a^{-1}e^{-av}
\end{aligned}
\end{equation}
From this equation, we conclude that a constant oscillation in the Rindler coordinate results in an exponentially decaying frequency in the Minkowski coordinate. This establishes a strong relationship between the Rindler position and the Minkowski frequency.
Utilizing this notion, and the fact that the field of the operator $\hat{a}_g$ is,
\begin{equation}
\begin{aligned}
f_{\hat{a}_g}(v)&= \left(\frac{1}{2\pi}\right)^{1/4} \sqrt{\frac{\delta}{\omega _0}} e^{-\delta^2 (v-v_c)^2}e^{-i\omega_0(v-v_c)}
\end{aligned}
\end{equation}
we conclude that the following condition should be satisfied in order to measure two standard deviations of the signal:
\begin{equation}
k_{wid}> a \;e^{\sqrt{2} a \delta^{-1}}
\end{equation}
Two standard deviations of a Gaussian cover $97.7 \%$ of the signal. As a result, we expect $\braket{\hat{N}_{Pr}}/\braket{\hat{N}_{Id}} \approx 0.977$ when $k_{wid} = a \;e^{\sqrt{2} a \delta^{-1}}$. We verify this conjecture through Fig. 7.
\\
\\
\begin{figure}[h!]
\subfigure[$\omega_0=0.5$]{\includegraphics[width=0.45\textwidth]{NEffectofBandWidth3.PNG}}
\hfill
\caption{The graph above demonstrates how $\delta$ affects the particle count detected by the Minkowski detector, for various $\omega_0$. We have utilized the following settings: $k_{med}=1, k_{wid}=10^8, v_c=0.577, a=1$.}
\end{figure}
From equation (76), it is found that when $k_{wid}=1 \times 10^8$, two standard deviations of the signal are covered when $\delta \approx 0.077$. Further analysis shows that when $\delta=0.077$, $\braket{\hat{N}_{Pr}}/\braket{\hat{N}_{Id}} \approx 0.977$, regardless of $\omega_0$. This validates the conjecture made in equation (67). It is concluded that the spatial width of a Rindler signal has a very strong correlation with the frequency width of the signal in the Minkowski frame.
\\
\\
Since we have established a strong relation between the Rindler position and the Minkowski frequency, there must also be a connection between the central Rindler position $v_c$ and the central Minkowski frequency $k_{med}$. We analyse the oscillatory behaviour within the integrand of $\braket{\hat{N}_{Pr}}$, found in equation (62). By utilizing the low-frequency limit of the gamma function, $\Gamma(1+i x)\approx e^{-i \gamma x}$, we find that all of the oscillatory behaviour within the integrand cancels when $v_c$ is set as follows:
\begin{equation}
v_c= \frac{1}{a}\left[\gamma + \log (k_{med}/a)\right]
\end{equation}
where $\gamma \approx 0.577$ is the Euler--Mascheroni constant. This expression explicitly demonstrates the connection between the Rindler position of the wave-packet mode and its frequency in the Minkowski frame.
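As a quick consistency check (assuming the logarithm shares the $1/a$ prefactor and identifying the numerical constant $0.577$ with $\gamma$), this expression reproduces the setting $v_c = 0.577$ used throughout for $k_{med}=1$ and $a=1$:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def v_c(k_med, a):
    """Assumed form: v_c = (gamma + log(k_med / a)) / a."""
    return (EULER_GAMMA + math.log(k_med / a)) / a

assert abs(v_c(1.0, 1.0) - 0.577) < 1e-3  # matches the quoted v_c
```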
\\
\\
How the particle count changes with $\omega_0$ is demonstrated in Fig. 8. It is found that the ideal and practical particle counts coincide for smaller $\omega_0$ and larger $k_{wid}$.
\begin{figure}[h!]
\subfigure[$k_{wid}=10^6$]{\includegraphics[width=0.45\textwidth]{NEffectofomega04.PNG}}
\hfill
\caption{The graph above demonstrates how $\omega_0$ affects the particle count detected by the Minkowski detector. We have utilized the following settings: $k_{med}=1, \delta=0.4 \omega_0, v_c'=0.577, a=1$.}
\end{figure}
\subsubsection{Quadrature Amplitude}
In this section we examine $\braket{\hat{N}_{Pr}}-\braket{\hat{N}_{Pr,0}}$ and assess the validity of equation (63). The approximation made in this equation is valid when $\braket{\hat{N}_{Pr}}-\braket{\hat{N}_{Pr,0}} \ll (I_c+I_s)|\alpha|$. Thus, we analyse how large $|\alpha|$ must be for the approximation in equation (63) to hold. $\braket{\hat{N}_{Pr}}-\braket{\hat{N}_{Pr,0}}$ can be simplified as follows:
\begin{equation}
\begin{aligned}
\braket{\hat{N}_{Pr}} - \braket{\hat{N}_{Pr,0}} =\frac{1}{ \pi} \log(k_{wid}) \int \mathrm{d} \omega \; F_1(\omega ; \Delta)
\end{aligned}
\end{equation}
Looking at this equation, it is clear that the particle count is proportional to $\log(k_{wid})$. We analyse how $\Delta$ affects the particle count in Fig. 9.
\\
\\
\begin{figure} [h!]
\centering
\includegraphics[width=0.45\textwidth]{XEffectofDelta.PNG}
\caption{The graph above is a plot of how the particle count, $\braket{\hat{N}_0} X(\phi)$, is affected by $\Delta$. We have utilized the following settings: $k_{wid}=10^8$}
\label{fig: Unitary}
\end{figure}
\noindent
By analysing this graph, we find that a sufficient condition for $X(\phi) \approx 0$ is that the amplitude of the local oscillator satisfies the following:
\begin{equation}
|\alpha|\gg \Delta \frac{\log_{10} (k_{med})}{16}
\end{equation}
\subsubsection{Variance}
We now examine whether there are practical settings in which the ideal and practical variances coincide. This can be done by examining the validity of the following equations:
\begin{align}
&V_{Pr}\approx V_{1,2}' + V_{2}'\cos(\theta-2\phi)
\\
&V_{1,2}' + V_{2}' \cos(\theta-2\phi)\approx V_{Id}
\end{align}
In this section we will explore the validity of the latter equation. In the previous section we established the condition under which most of the coherent signal is observed. Here we explore whether there are any further constraints for equation (81) to be valid.
\\
\\
We first examine how $\Delta$ affects the convergence between the practical and ideal variance. From Fig. 10 it is found that the practical variance coincides with the ideal case for $\omega_0=0.6$ regardless of $\Delta$. This is because we have set $k_{wid}=10^6$, which is large enough for more than two standard deviations of the signal to be measured. As a result, we conclude that $\Delta$ is not responsible for the relative deviation between the ideal and practical results. This makes sense, as $\Delta$ does not change which part of the signal is traced out. The effect of $\Delta$ on the variance will be discussed further in the following chapter.
\\
\\
\begin{figure}[h!]
\hfill
\subfigure[$\omega_0=0.6$]{\includegraphics[width=0.45\textwidth]{VarianceEffectofDelta2.PNG}}
\caption{The graph above demonstrates how $\Delta$ affects the variance of the particle count detected by the Minkowski detector, for various $\omega_0$. We have utilized the following settings: $k_{med}=1, k_{wid}=10^6, \delta=0.4 \omega_0, v_c=0.577, a=1$.}
\end{figure}
\begin{figure} [h!]
\centering
\includegraphics[width=0.45\textwidth]{VarianceEffectofOmega0.PNG}
\caption{The plots show how $\omega_0$ affects the variance. We have utilized the following settings: $k_{med}=1, k_{wid}=10^6, \delta=0.4 \omega_0, v_c'=0.577, a=1, \Delta=10$.}
\label{fig: Unitary}
\end{figure}
\\
\noindent We now examine how $\omega_0$ affects the variance. From Fig.~11, we find that the variance follows a similar trend to what was observed for the particle count. The practical and ideal variances deviate from each other when $\delta$ is too small relative to $k_{wid}$. It is interesting to note the squeezing effect that appears for smaller $\delta$. The squeezing effect appears when we introduce low- and high-frequency cut-offs. From this graph we can conclude that squeezing arises from tracing out important parts of the signal. Tracing out information not only causes mixing, but can also cause squeezing. Previously, it was shown that squeezing is observed from an accelerated mirror when the signal is analysed with reference to Minkowski frequencies \cite{Daiqin2016}. In this paper we showed that the squeezing effect observed in their paper is removed if we conduct self-homodyne detection with respect to Rindler frequencies. The squeezing observed in their paper was a result of tracing out correlations that existed between Unruh/Rindler modes.
\\
\\
In this section, we examined the convergence rate of the variance and found that $\Delta$ does not play a significant role in the amount of deviation from the ideal case. We found that the deviation arises when important parts of the signal are traced out, and that it is suppressed when equation (76) is satisfied. In the following section we will examine how large $|\alpha|$ must be in order to neglect the 0th order term.
\subsubsection{0th Order Variance Term}
In this section we examine the particle fluctuation, $\braket{\hat{N}_{Pr,0}} \times V_{1,0}$. Equation (80) is satisfied by setting $|\alpha|^2 \gg \braket{\hat{N}_0}V_{1,0}$.
\\
\\
We first examine how $k_{wid}$ affects the particle fluctuations. Fig. 12 is a log-linear plot of the particle fluctuation versus $k_{wid}$. This graph shows that the particle fluctuation is approximately proportional to $\log(k_{wid})$ for $k_{wid}$ larger than $10$.
\begin{figure} [h!]
\centering
\includegraphics[width=0.45\textwidth]{VEffectofkmed.PNG}
\caption{The graph above is a plot of how the 0th order particle fluctuation, $\braket{\hat{N}_0}V_{1,0}'$, is affected by $k_{wid}$. We have utilized the following settings: $\Delta=10$}
\label{fig: Unitary}
\end{figure}
\\
\noindent We now examine how $\Delta$ affects the particle fluctuation. Fig. 13 is a log-log plot of the particle fluctuation versus $\Delta$. By a linear regression, we find that the particle fluctuation is quadratically proportional to $\Delta$, and that the proportionality constant increases between $\Delta<10$ and $\Delta >100$. As we are interested in a sufficient condition for neglecting the 0th order term, we consider the case $\Delta > 100$, where we find $\braket{\hat{N}_0}V_{1,0} \approx \Delta^2$. Combining this result with the result from Fig. 12, the sufficient condition to neglect the 0th order term is as follows:
\begin{equation}
|\alpha|\gg \Delta \sqrt{\log(k_{wid})/6}
\end{equation}
This condition puts a larger lower bound on $|\alpha|$ than equation (79) for $k_{wid}< 4 \times 10^{42}$. As such a $k_{wid}$ is impossible to reach in a practical experiment, we conclude that equation (82) must be satisfied for our experiment to neglect the 0th order term.
\\
\\
We have now demonstrated that there is a regime in which the practical measurement converges to the ideal measurement. We showed that when equations (76) and (82) are satisfied, and with the correct $k_{med}$, the practical and ideal measurements coincide. This demonstrates the validity of the results found in equations (36) and (37).
\begin{figure} [h!]
\centering
\includegraphics[width=0.45\textwidth]{VEffectofDelta.PNG}
\caption{The graph above is a plot of how the 0th order particle fluctuation, $\braket{\hat{N}_0}V_{1,0}'$, is affected by $\Delta$. We have utilized the following settings: $k_{wid}=10^6$}
\label{fig: Unitary}
\end{figure}
\section*{COPYRIGHT NOTICE}
Andr\'e Xuereb, Matteo Aquilina, and Shabir Barzanjeh, ``Routing thermal noise through quantum networks,'' eds.\ David L. Andrews, Angus J. Bain, Jean-Michel Nunzi, and Andreas Ostendorf, Proc. SPIE \textbf{10672} Nanophotonics VII, 10672N (2018).
Copyright 2018 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
\href{https://doi.org/10.1117/12.2309928}{https://doi.org/10.1117/12.2309928}
\section{INTRODUCTION}
Non-reciprocal devices for electromagnetic radiation are of significant practical use; much like the humble and ubiquitous diode in electronic circuits, the possibility to direct optical or microwave radiation in one direction, but not in reverse, is of importance in everything from telecommunications\cite{Kobayashi1980} to preventing feedback-induced instabilities in lasers\cite{Ohtsubo2013}. The paradigmatic example of a non-reciprocal device for light is a Faraday optical isolator (FOI), whose operation depends on the Faraday effect. The action of a magnetic field on specific materials causes the polarisation vector of light passing through the isolator to rotate in a specific direction. Due to the symmetry-breaking caused by the magnetic field, this rotation is not undone when light travels in the reverse direction. Upon interchanging the inputs and outputs of a FOI, one obtains qualitatively different behaviour (see Fig.~\ref{fig:FOI}).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.4]{fig1}
\end{center}
\caption{\label{fig:FOI}Schematic of a Faraday optical isolator (FOI). A static magnetic field $B$ acts on a material to rotate the polarisation of light passing through it. We consider rotation by $45^\circ$ as a simple case. Two polarising beam-splitters, one at each end of the Faraday material and orientated at $45^\circ$ with respect to each other, complete the setup. Light entering the setup from the input port passes straight through, albeit suffering a change in polarisation; any back-reflection is diverted to a beam dump. Seen as a two-port device, as indicated by the dashed box, the FOI allows light to pass through in one direction but not in reverse.}
\end{figure}
This picture provides us with the foundation for a definition of non-reciprocal behaviour that is more general than symmetry breaking under time reversal. In dissipative systems, time-reversal symmetry is a weaker concept\cite{Lax1980}. Consider a dissipative system with multiple inputs and outputs; its dissipative nature implies that any transient input signal will cause a decaying output. Even if such a system were to be reciprocal, in the sense that the interchange of inputs and outputs yields identical physics, it could not be time-symmetric, since dissipation implies that any output signal will decay. Our investigation concentrates on dissipative systems, and we will correspondingly be concerning ourselves with systems whose behaviour is not identical when inputs and outputs are interchanged.
Whereas it is not our intention to review the significant progress that has taken place in proposing and demonstrating non-reciprocal devices in recent years, it is useful to highlight a few specific approaches. Optomechanical systems\cite{Aspelmeyer2014} provide a veritable playground for investigating the interaction between light and motion at the nano- and micro-scales. This interaction was suggested as the basis for a non-reciprocal device by Hafezi and Rabl in 2012\cite{Hafezi2012}. Consider a toroidal optical micro-resonator supporting degenerate clockwise and counter-clockwise modes, coupled to a waveguide held close to it. Transmission along either direction in the waveguide is identical, due to the degeneracy of the modes in the resonator. One can break the symmetry by pumping one of these modes, e.g., the clockwise mode, thereby enhancing the optomechanical interaction between the mechanical breathing modes of the resonator and its optical modes. Under these conditions, transmission in one direction is preferred to that in the opposite direction; external pumping renders the system non-reciprocal. An alternative approach\cite{Peano2015} makes use of the optomechanical interaction in a specially-designed crystal structure. Tailored input optical fields, carefully chosen to pump each cell in the crystal with a specific phase, induce phases when photons or phonons hop from one cell to the next. This can be rephrased as an effective pseudo-magnetic field acting on the photons or phonons, thereby inducing non-reciprocal behaviour in their motion along the crystal. Further studies predicted topologically-protected edge states in mechanical systems that can even be used to build directional acoustic amplifiers\cite{Peano2016}.
The mechanisms described so far do not require dissipation to work, and can be described to an extent in a fully unitary picture. In this paper we will be concerned with non-unitary devices connected to heat baths. Under the guise of reservoir engineering\cite{Metelmann2015,Malz2018} parts of our work have been discussed previously. Such techniques have recently been exploited in optomechanical experiments to build circulators and non-reciprocal devices for microwave\cite{Bernier2017,Barzanjeh2017} and optical\cite{Ruesink2018} signals.
In this paper, however, we will take a different point of view in two essential ways\cite{Barzanjeh2018}. First, we will describe the system using the cascaded-systems formalism; despite this being textbook material\cite{Gardiner2004} we will briefly review its key points. Second, we will concentrate not on input and output \emph{signals}, but on the flow of thermal noise through a non-reciprocal network of quantum devices. Throughout, our focus will be on optomechanical devices as a platform on which to realise our proposal.
\section{THE CASCADED QUANTUM SYSTEMS FORMALISM}
Our basic building block is a network composed of two open quantum systems, which we label 1 and 2. We demand, by construction, that the output of system 1 forms the input of system 2, but not vice versa. Suppose that the two systems are single-mode bosonic fields, with which we associate annihilation operators $\hat{c}_1$ and $\hat{c}_2$. With each system $i=1,2$ we also associate an input field $\hat{b}_{\text{in},i}$, an output field $\hat{b}_{\text{out},i}$, and a decay rate $\gamma_i>0$ which sets the coupling rate between system $i$ and its input and output fields. As is well-known\cite{Gardiner2004}, the input--output formalism yields
\begin{equation}
\hat{b}_{\text{out},i}=\hat{b}_{\text{in},i}+\sqrt{\gamma_i}\hat{c}_i,
\end{equation}
which is to be interpreted in the following as a Heisenberg-picture equation with all the operators evaluated at some time $t$. To induce cascaded dynamics, we require further that $\hat{b}_{\text{in},2}(t)=\hat{b}_{\text{out},1}(t-\tau)$. The quantity $\tau\geq0$ denotes the time delay incurred between the two systems, but we can formally set it to zero by shifting the time coordinate for system 2. The Langevin equation governing the evolution of any operator $\hat{a}$ of the compound system is\cite{Gardiner2004}
\begin{multline}
\label{eq:LE}
\tfrac{\mathrm{d}}{\mathrm{d} t}\hat{a}=-\tfrac{\imath}{\hbar}\bigl[\hat{a},\hat{H}_\text{sys}\bigr]-\bigl[\hat{a},\hat{c}_1^\dagger\bigr]\bigl(\tfrac{\gamma_1}{2}\hat{c}_1+\sqrt{\gamma_1}\hat{b}_{\text{in},1}\bigr)+\bigl(\tfrac{\gamma_1}{2}\hat{c}_1^\dagger+\sqrt{\gamma_1}\hat{b}_{\text{in},1}^\dagger\bigr)\bigl[\hat{a},\hat{c}_1\bigr]\\
-\bigl[\hat{a},\hat{c}_2^\dagger\bigr]\bigl(\tfrac{\gamma_2}{2}\hat{c}_2+\sqrt{\gamma_2}\hat{b}_{\text{in},1}\bigr)+\bigl(\tfrac{\gamma_2}{2}\hat{c}_2^\dagger+\sqrt{\gamma_2}\hat{b}_{\text{in},1}^\dagger\bigr)\bigl[\hat{a},\hat{c}_2\bigr]\\
-\bigl[\hat{a},\hat{c}_2^\dagger\bigr]\sqrt{\gamma_1\gamma_2}\hat{c}_1+\sqrt{\gamma_1\gamma_2}\hat{c}_1^\dagger\bigl[\hat{a},\hat{c}_2\bigr],
\end{multline}
where $\hat{H}_\text{sys}$ is the Hamiltonian that governs the evolution of systems $1$ and $2$; $\bigl[\hat{a},\hat{b}\bigr]=\hat{a}\hat{b}-\hat{b}\hat{a}$ is the commutator. The identification of the output of system 1 with the input of system 2 has effectively reduced the number of input and output ports of the network, which now has one global input, $\hat{b}_{\text{in},1}$, and one global output, $\hat{b}_{\text{out},2}$.
The Langevin equation~(\ref{eq:LE}) is equivalent to a master equation for the density matrix $\rho$ of the system in standard (Lindblad) form:
\begin{equation}
\label{eq:StdME}
\tfrac{\mathrm{d}}{\mathrm{d} t}\rho=-\tfrac{\imath}{\hbar}\bigl[\hat{H}_\text{eff},\rho\bigr]+\mathcal{D}_{\bar{N}_3,\kappa_3,\hat{c}_3}[\rho],
\end{equation}
where $\mathcal{D}_{\bar{N},\kappa,\hat{a}}[\rho]$ is the Liouvillian corresponding to a bosonic heat bath with average occupancy $\bar{N}$ that is coupled through system operator $\hat{c}$ with a rate $\kappa$:
\begin{equation}
\mathcal{D}_{\bar{N},\kappa,\hat{a}}[\rho]:=(\bar{N}+1)\kappa\bigl(\hat{a}\rho\hat{a}^\dagger-\tfrac{1}{2}\bigl\{\rho,\hat{a}^\dagger\hat{a}\bigr\}\bigr)+\bar{N}\kappa\bigl(\hat{a}^\dagger\rho\hat{a}-\tfrac{1}{2}\bigl\{\rho,\hat{a}\hat{a}^\dagger\bigr\}\bigr),
\end{equation}
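The dissipator can be checked numerically on a truncated Fock space; in particular, it is trace-preserving for any jump operator. A minimal sketch (dimension and rates are illustrative):

```python
import numpy as np

def destroy(dim):
    """Truncated bosonic annihilation operator on a dim-level Fock space."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

def dissipator(rho, a, kappa, nbar):
    """Thermal Lindblad dissipator D_{nbar,kappa,a}[rho] as defined above."""
    def lind(L):
        Ld = L.conj().T
        return L @ rho @ Ld - 0.5 * (rho @ Ld @ L + Ld @ L @ rho)
    return (nbar + 1) * kappa * lind(a) + nbar * kappa * lind(a.conj().T)

dim = 8
a = destroy(dim)
rho = np.eye(dim) / dim                  # a valid (maximally mixed) state
drho = dissipator(rho, a, kappa=0.3, nbar=2.0)
assert abs(np.trace(drho)) < 1e-12       # the dissipator preserves trace
assert np.allclose(drho, drho.conj().T)  # and Hermiticity
```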
with $\bigl\{\hat{a},\hat{b}\bigr\}=\hat{a}\hat{b}+\hat{b}\hat{a}$ being the anticommutator. In Eq.~(\ref{eq:StdME}) we introduced an effective Hamiltonian
\begin{equation}
\hat{H}_\text{eff}:=\hat{H}_\text{sys}+\tfrac{\imath\hbar}{2}\sqrt{\gamma_1\gamma_2}\bigl(\hat{c}_1^\dagger\hat{c}_2-\hat{c}_1\hat{c}_2^\dagger\bigr),
\end{equation}
a collective coupling rate $\kappa_3:=\gamma_1+\gamma_2$, and a collective bosonic annihilation operator
\begin{equation}
\hat{c}_3:=\tfrac{1}{\sqrt{\kappa_3}}\bigl(\sqrt{\gamma_1}\hat{c}_1+\sqrt{\gamma_2}\hat{c}_2\bigr),
\end{equation}
which satisfies $\bigl[\hat{c}_3,\hat{c}_3^\dagger\bigr]=1$. The physical content of master equation~(\ref{eq:StdME}) is that our cascaded quantum system is fully equivalent to two bosonic modes that are coupled to each other by means of the second (``hopping'') term in $\hat{H}_\text{eff}$, and which are coupled to a common bath by means of the collective damping operator $\hat{c}_3$. The non-reciprocal behaviour arises from the interference that is set up between these two channels, since excitations can hop between the two systems either through the former (unitary dynamics), or through the latter (non-unitary dynamics). The sign change between the hopping term and the operator $\hat{c}_3$ is the mathematical basis for the constructive (destructive) interference in the direction $1\to2$ ($2\to1$).
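The bosonic character of $\hat{c}_3$ can be verified numerically on a truncated two-mode Fock space (dimension and rates below are illustrative); away from the truncation edge the commutator is the identity, reflecting $(\gamma_1+\gamma_2)/\kappa_3=1$:

```python
import numpy as np

def destroy(dim):
    """Truncated bosonic annihilation operator."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 6
I = np.eye(dim)
c1 = np.kron(destroy(dim), I)   # mode 1 acting on the two-mode space
c2 = np.kron(I, destroy(dim))   # mode 2
g1, g2 = 0.8, 1.7               # illustrative rates gamma_1, gamma_2
kappa3 = g1 + g2
c3 = (np.sqrt(g1) * c1 + np.sqrt(g2) * c2) / np.sqrt(kappa3)

comm = c3 @ c3.T - c3.T @ c3    # all matrices are real here
# Restrict to states below the truncation edge (n1, n2 < dim - 1)
keep = [n1 * dim + n2 for n1 in range(dim - 1) for n2 in range(dim - 1)]
assert np.allclose(comm[np.ix_(keep, keep)], np.eye(len(keep)))
```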
We will investigate a more complete situation, where each system is coupled independently to a heat bath as well as to the common heat bath described above. In order to make our comparison with the optomechanical situation in the next section more straightforward we introduce a phase $\phi$ to $\hat{c}_2$, and to account for imperfect non-reciprocity, we introduce a new coupling term to our Hamiltonian. Thus,
\begin{equation}
\hat{H}_\text{eff}:=\hat{H}_\text{sys}+\tfrac{\imath\hbar}{2}\sqrt{\gamma_1\gamma_2}\bigl(e^{\imath\phi}\hat{c}_1^\dagger\hat{c}_2-e^{-\imath\phi}\hat{c}_1\hat{c}_2^\dagger\bigr)+\hbar\bigl(F\hat{c}_1^\dagger\hat{c}_2+F^\ast\hat{c}_1\hat{c}_2^\dagger\bigr),
\end{equation}
where $F$ is an arbitrary complex number, and
\begin{equation}
\hat{c}_3:=\tfrac{1}{\sqrt{\kappa_3}}\bigl(\sqrt{\gamma_1}\hat{c}_1+\sqrt{\gamma_2}e^{\imath\phi}\hat{c}_2\bigr).
\end{equation}
Perfect non-reciprocity is restored when $F=0$. Finally, our full master equation reads
\begin{equation}
\tfrac{\mathrm{d}}{\mathrm{d} t}\rho=-\tfrac{\imath}{\hbar}\bigl[\hat{H}_\text{eff},\rho\bigr]+\sum_{i=1,2,3}\mathcal{D}_{\bar{N}_i,\kappa_i,\hat{c}_i}[\rho],
\end{equation}
where the Liouvillian term with $i=1,2$ corresponds to the heat bath for system $i$, with average occupancy $\bar{N}_i$, that is coupled to the network through the damping operator $\hat{c}_i$ with a coupling rate $\kappa_i$.
At this stage we need to specify $\hat{H}_\text{sys}$. We assume that the two systems are uncoupled bosonic modes, with free oscillation frequencies $\omega_i$ ($i=1,2$):
\begin{equation}
\hat{H}_\text{sys}=\hbar\omega_1\hat{c}_1^\dagger\hat{c}_1+\hbar\omega_2\hat{c}_2^\dagger\hat{c}_2.
\end{equation}
Starting from the Langevin equation~(\ref{eq:LE}) it is straightforward to show that
\begin{equation}
\tfrac{\mathrm{d}}{\mathrm{d} t}\hat{c}_1=-(\imath\omega_1+\tfrac{\gamma_1+\kappa_1}{2})\hat{c}_1-\imath F\hat{c}_2+\sqrt{\kappa_1}\hat{c}_{\text{in},1}+\sqrt{\gamma_1}\hat{c}_{\text{in},3},
\end{equation}
and
\begin{equation}
\tfrac{\mathrm{d}}{\mathrm{d} t}\hat{c}_2=-(\imath F^\ast+\sqrt{\gamma_1\gamma_2}e^{\imath\phi})\hat{c}_1+\sqrt{\kappa_2}\hat{c}_{\text{in},2}-(\imath\omega_2+\tfrac{\gamma_2+\kappa_2}{2})\hat{c}_2+\sqrt{\gamma_2}e^{\imath\phi}\hat{c}_{\text{in},3}.
\end{equation}
It is now easy to see that when $F=0$, system 2 is affected by system 1, but system 1 is entirely uncoupled from system 2. In these equations of motion, each input bosonic operator $\hat{c}_{\text{in},i}$ is associated with bath $i$ and has the following properties:
\begin{align}
\langle\hat{c}_{\text{in},i}(t)\rangle&=0,\\
\langle\hat{c}_{\text{in},i}^\dagger(t)\hat{c}_{\text{in},j}(t^\prime)\rangle&=\bar{N}_i\delta_{i,j}\delta(t-t^\prime),\ \text{and}\\
\langle\hat{c}_{\text{in},i}(t)\hat{c}_{\text{in},j}^\dagger(t^\prime)\rangle&=\bigl(\bar{N}_i+1\bigr)\delta_{i,j}\delta(t-t^\prime),
\end{align}
where $\delta_{i,j}$ is the Kronecker delta, $\delta(t-t^\prime)$ the Dirac delta function, and $i,j=1,2,3$.
In steady state, these equations can be Fourier-transformed from the time domain to the frequency domain. We can express the resulting equations compactly in matrix form:
\begin{equation}
-\imath\omega\begin{pmatrix}
\hat{c}_1 \\
\hat{c}_2
\end{pmatrix} = \begin{bmatrix}
-\imath\omega_1-\tfrac{\gamma_1+\kappa_1}{2} & -\imath F \\
-\imath F^\ast-\sqrt{\gamma_1\gamma_2}e^{\imath\phi} & -\imath\omega_2-\tfrac{\gamma_2+\kappa_2}{2}
\end{bmatrix}\begin{pmatrix}
\hat{c}_1 \\
\hat{c}_2
\end{pmatrix}
+\begin{pmatrix}
\sqrt{\kappa_1}\hat{c}_{\text{in},1} \\
\sqrt{\kappa_2}\hat{c}_{\text{in},2}
\end{pmatrix}+\begin{pmatrix}
\sqrt{\gamma_1} \\
\sqrt{\gamma_2}e^{\imath\phi} \\
\end{pmatrix}\hat{c}_\text{in,3}.
\end{equation}
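The structure of the drift matrix makes the non-reciprocity explicit: for $F=0$ it is lower triangular, so mode 1 is entirely unaffected by mode 2. A minimal numerical sketch (all parameter values are illustrative):

```python
import numpy as np

# Illustrative parameters
w1, w2 = 1.0, 1.0            # mode frequencies omega_1, omega_2
g1, g2 = 0.5, 0.5            # cascaded couplings gamma_1, gamma_2
k1, k2 = 0.2, 0.2            # local bath couplings kappa_1, kappa_2
phi = 0.3

def drift(F):
    """Drift matrix of the frequency-domain equations for (c1, c2)."""
    return np.array([
        [-1j * w1 - (g1 + k1) / 2, -1j * F],
        [-1j * np.conj(F) - np.sqrt(g1 * g2) * np.exp(1j * phi),
         -1j * w2 - (g2 + k2) / 2]])

M = drift(0.0)               # F = 0: perfect non-reciprocity
# Lower triangular: mode 1 evolves independently, mode 2 is driven by mode 1
assert M[0, 1] == 0 and M[1, 0] != 0
```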
In the next section we will develop an optomechanical system that realises this model.
\section{AN OPTOMECHANICAL SCENARIO}
We consider a network composed of two electromagnetic cavity modes that mutually interact with a mechanical oscillator via the standard optomechanical interaction. The Hamiltonian that generates the dynamics of this system is
\begin{equation}
\hat{H}=\hbar\omega_\text{m}\hat{b}^\dagger\hat{b}+\sum_{i=1,2}\hbar\bigl[\omega_i\hat{a}^\dagger_i\hat{a}_i+g_i\hat{a}^\dagger_i\hat{a}_i\bigl(\hat{b}+\hat{b}^\dagger\bigr)\bigr]
+\hbar J\bigl(\hat{a}_1\hat{a}_2^\dagger+\hat{a}_1^\dagger\hat{a}_2\bigr)+\sum_{i=1,2}\hbar\mathcal{E}_i\big(e^{-\imath\omega_{\text{d},i}t}\hat{a}_i+e^{\imath\omega_{\text{d},i}t}\hat{a}_i^\dagger\bigr),
\end{equation}
where $\hat{a}_i$ ($i=1,2$) is the bosonic annihilation operator that corresponds to electromagnetic mode $i$ whose frequency is $\omega_i$, and $\hat{b}$ is the annihilation operator corresponding to the mechanical oscillator with frequency $\omega_\text{m}$. Photons are allowed to hop directly between the electromagnetic modes; this process is governed by the coupling constant $J$, and each electromagnetic field mode is driven by means of a classical source with strength $\mathcal{E}_i$ and frequency $\omega_{\text{d},i}$. Finally, we describe the optomechanical interaction by means of the constant $g_i$, which shifts the position of the mechanical oscillator by an amount proportional to the photon number of mode $i$. We can rewrite this Hamiltonian in a frame rotating at the driving frequencies. Assuming that $\omega_{\text{d},1}=\omega_{\text{d},2}$, and defining $\Delta_i:=\omega_i-\omega_{\text{d},i}$, we obtain the time-independent Hamiltonian
\begin{equation}
\hat{H}=\hbar\omega_\text{m}\hat{b}^\dagger\hat{b}+\sum_{i=1,2}\hbar\bigl[\Delta_i\hat{a}^\dagger_i\hat{a}_i+g_i\hat{a}^\dagger_i\hat{a}_i\bigl(\hat{b}+\hat{b}^\dagger\bigr)\bigr]
+\hbar J\bigl(\hat{a}_1\hat{a}_2^\dagger+\hat{a}_1^\dagger\hat{a}_2\bigr)+\sum_{i=1,2}\hbar\mathcal{E}_i\big(\hat{a}_i+\hat{a}_i^\dagger\bigr).
\end{equation}
Current realisations of such optomechanical systems have a coupling strength $g_i$ that is rather small\cite{Aspelmeyer2014}. This is overcome by means of strong classical driving, which allows us to approximate $\hat{H}$ by means of a Hamiltonian that is quadratic in the operators, and which therefore leads to linear equations of motion. This process is detailed elsewhere in the literature\cite{Aspelmeyer2014}, so we will only list the key steps to linearisation. First, we start from the master equation governing the full system
\begin{equation}
\tfrac{\mathrm{d}}{\mathrm{d} t}\rho=-\tfrac{\imath}{\hbar}\bigl[\hat{H},\rho\bigr]+\sum_{i=1,2}\mathcal{D}_{\bar{N}_i,\kappa_i,\hat{a}_i}[\rho]+\mathcal{D}_{\bar{N}_\text{m},\gamma_\text{m},\hat{b}}[\rho],
\end{equation}
defining $\bar{N}_\text{m}$ as the average occupancy of the mechanical bath. Next, rewrite
\begin{align}
\hat{a}_i&\to\hat{a}_i+\alpha_i,\ \text{and}\\
\hat{b}&\to\hat{b}+\beta,
\end{align}
where the $\alpha_i$ and $\beta$ are complex numbers whose values will be determined self-consistently. The terms in the resulting master equation can be sorted by their order, i.e., constants, or linear, quadratic, or cubic in the field operators. Constants can be ignored, since they do not affect the dynamics. The linear terms can be eliminated by solving a set of equations that define the $\alpha_i$ and $\beta$ in terms of each other and of the $\mathcal{E}_i$. Operating under the assumption that $\lvert\mathcal{E}_i\rvert$ is large enough so that $\lvert\alpha_i\rvert\gg1$, we can ignore the cubic terms. Finally, defining
\begin{equation}
G_i:=g_i\alpha_i,
\end{equation}
we obtain the so-called linearised optomechanical Hamiltonian
\begin{equation}
\hat{H}_\text{lin}=\hbar\omega_\text{m}\hat{b}^\dagger\hat{b}+\sum_{i=1,2}\hbar\bigl[\Delta_i\hat{a}^\dagger_i\hat{a}_i+\bigl(G_i^\ast\hat{a}_i+G_i\hat{a}_i^\dagger\bigr)\bigl(\hat{b}+\hat{b}^\dagger\bigr)\bigr]
+\hbar J\bigl(\hat{a}_1\hat{a}_2^\dagger+\hat{a}_1^\dagger\hat{a}_2\bigr),
\end{equation}
with the rest of the master equation unchanged:
\begin{equation}
\tfrac{\mathrm{d}}{\mathrm{d} t}\rho=-\tfrac{\imath}{\hbar}\bigl[\hat{H}_\text{lin},\rho\bigr]+\sum_{i=1,2}\mathcal{D}_{\bar{N}_i,\kappa_i,\hat{a}_i}[\rho]+\mathcal{D}_{\bar{N}_\text{m},\gamma_\text{m},\hat{b}}[\rho].
\end{equation}
The equations of motion for these new operators read
\begin{subequations}
\begin{align}
\tfrac{\mathrm{d}}{\mathrm{d} t}\hat{a}_1&=-\bigl(\imath\Delta_1+\tfrac{\kappa_1}{2}\bigr)\hat{a}_1-\imath J\hat{a}_2-\imath G_1\bigl(\hat{b}+\hat{b}^\dagger)+\sqrt{\kappa_1}\hat{a}_{\text{in},1},\\
\tfrac{\mathrm{d}}{\mathrm{d} t}\hat{a}_2&=-\bigl(\imath\Delta_2+\tfrac{\kappa_2}{2}\bigr)\hat{a}_2-\imath J\hat{a}_1-\imath G_2\bigl(\hat{b}+\hat{b}^\dagger)+\sqrt{\kappa_2}\hat{a}_{\text{in},2},\ \text{and}\\
\tfrac{\mathrm{d}}{\mathrm{d} t}\hat{b}&=-\bigl(\imath\omega_\text{m}+\tfrac{\gamma_\text{m}}{2}\bigr)\hat{b}-\imath\bigl(G_1^\ast\hat{a}_1+G_2^\ast\hat{a}_2+G_1\hat{a}_1^\dagger+G_2\hat{a}_2^\dagger\bigr)+\sqrt{\gamma_\text{m}}\hat{b}_{\text{in},\text{m}},
\end{align}
\label{eq:OMEoM}
\end{subequations}
where we have defined the input field operators similarly to the previous section. Specifically, for $i,j=1,2$, we have
\begin{align}
\langle\hat{a}_{\text{in},i}(t)\rangle&=0,\\
\langle\hat{a}_{\text{in},i}^\dagger(t)\hat{a}_{\text{in},j}(t^\prime)\rangle&=\bar{N}_i\delta_{i,j}\delta(t-t^\prime),\ \text{and}\\
\langle\hat{a}_{\text{in},i}(t)\hat{a}_{\text{in},j}^\dagger(t^\prime)\rangle&=\bigl(\bar{N}_i+1\bigr)\delta_{i,j}\delta(t-t^\prime),
\end{align}
as well as
\begin{align}
\langle\hat{b}_{\text{in},\text{m}}(t)\rangle&=0,\\
\langle\hat{b}_{\text{in},\text{m}}^\dagger(t)\hat{b}_{\text{in},\text{m}}(t^\prime)\rangle&=\bar{N}_\text{m}\delta(t-t^\prime),\\
\langle\hat{b}_{\text{in},\text{m}}(t)\hat{b}_{\text{in},\text{m}}^\dagger(t^\prime)\rangle&=\bigl(\bar{N}_\text{m}+1\bigr)\delta(t-t^\prime),\ \text{and}\\
\langle\hat{b}_{\text{in},\text{m}}(t)\hat{a}_{\text{in},i}^\dagger(t^\prime)\rangle&=\langle\hat{b}_{\text{in},\text{m}}^\dagger(t)\hat{a}_{\text{in},i}(t^\prime)\rangle=0.
\end{align}
Since we have a concrete model in mind of three bosonic modes (two electromagnetic and one mechanical) we can relate the average occupancy of the baths to their temperatures, by means of the formulae ($i=1,2,\text{m}$)
\begin{equation}
\bar{N}_i=\frac{1}{\exp\bigl[\hbar\omega_i/(k_\text{B}T_i)\bigr]-1},
\end{equation}
with $T_i$ being the absolute temperature of the bath.
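For reference, a direct numerical evaluation of this occupancy (SI constants; the mode frequency below is an illustrative choice):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
KB = 1.380649e-23        # Boltzmann constant, J / K

def nbar(omega, T):
    """Mean occupancy of a bosonic bath at angular frequency omega, temperature T."""
    # expm1 avoids catastrophic cancellation when hbar*omega << kB*T
    return 1.0 / math.expm1(HBAR * omega / (KB * T))

# High-temperature (Rayleigh-Jeans) limit: nbar ~ kB T / (hbar omega)
omega, T = 2 * math.pi * 1e6, 300.0   # a 1 MHz mechanical mode at room temperature
assert abs(nbar(omega, T) * HBAR * omega / (KB * T) - 1) < 1e-3
```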
To make the connection with our previous formalism, we first rewrite equations~(\ref{eq:OMEoM}) in the frequency domain:
\begin{align}
-\imath\omega\hat{a}_1&=-\bigl(\imath\Delta_1+\tfrac{\kappa_1}{2}\bigr)\hat{a}_1-\imath J\hat{a}_2-\imath G_1\bigl(\hat{b}+\hat{b}^\dagger)+\sqrt{\kappa_1}\hat{a}_{\text{in},1},\\
-\imath\omega\hat{a}_2&=-\bigl(\imath\Delta_2+\tfrac{\kappa_2}{2}\bigr)\hat{a}_2-\imath J\hat{a}_1-\imath G_2\bigl(\hat{b}+\hat{b}^\dagger)+\sqrt{\kappa_2}\hat{a}_{\text{in},2},\ \text{and}\\
-\imath\omega\hat{b}&=-\bigl(\imath\omega_\text{m}+\tfrac{\gamma_\text{m}}{2}\bigr)\hat{b}-\imath\bigl(G_1^\ast\hat{a}_1+G_2^\ast\hat{a}_2+G_1\hat{a}_1^\dagger+G_2\hat{a}_2^\dagger\bigr)+\sqrt{\gamma_\text{m}}\hat{b}_{\text{in},\text{m}}.
\end{align}
Next, we solve the last of these equations for $\hat{b}$. We take $\Delta_i\approx\omega_\text{m}$ and assume operation in the sideband-resolved regime ($\omega_\text{m}\gg\kappa_i$), which together allow us to eliminate contributions from creation operators in the equation for $\hat{b}$. Finally, we substitute this solution into the equations for $\hat{a}_i$ ($i=1,2$). In vector form, we find
\begin{multline}
-\imath\omega\begin{pmatrix}
\hat{a}_1 \\
\hat{a}_2
\end{pmatrix}=\begin{bmatrix}
-\imath\Delta_1-\frac{\kappa_1}{2}-\lvert G_1\rvert^2\chi_\text{m}(\omega) & -\imath J-\chi_\text{m}(\omega)G_1G_2^\ast \\
-\imath J-\chi_\text{m}(\omega)G_1^\ast G_2&-\imath\Delta_2-\frac{\kappa_2}{2}-\lvert G_2\rvert^2\chi_\text{m}(\omega)
\end{bmatrix}\begin{pmatrix}
\hat{a}_1 \\
\hat{a}_2
\end{pmatrix}\\
+\begin{pmatrix}
\sqrt{\kappa_1}\hat{a}_{\text{in},1} \\
\sqrt{\kappa_2}\hat{a}_{\text{in},2}
\end{pmatrix}+\begin{pmatrix}
-\imath G_1\sqrt{\gamma_\text{m}}\chi_\text{m}(\omega) \\
-\imath G_2\sqrt{\gamma_\text{m}}\chi_\text{m}(\omega)
\end{pmatrix}\hat{b}_{\text{in},\text{m}},
\end{multline}
where we have defined the mechanical susceptibility
\begin{equation}
\chi_\text{m}(\omega):=\frac{1}{\gamma_\text{m}/2-\imath(\omega-\omega_\text{m})}.
\end{equation}
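For readers who wish to experiment numerically, the following is a minimal Python sketch of $\chi_\text{m}(\omega)$ and of the phase $\nu$ used below; all parameter values are hypothetical and in arbitrary angular-frequency units.

```python
import cmath

def chi_m(omega, omega_m, gamma_m):
    """Mechanical susceptibility chi_m(omega) = 1 / (gamma_m/2 - i(omega - omega_m))."""
    return 1.0 / (gamma_m / 2 - 1j * (omega - omega_m))

# Illustrative (hypothetical) parameters.
omega_m, gamma_m = 1.0, 0.1

# On resonance the susceptibility is purely real, with magnitude 2/gamma_m.
on_res = chi_m(omega_m, omega_m, gamma_m)
print(on_res)  # (20+0j)

# The phase nu = arg(chi_m(Omega)) entering the gauge transformation,
# evaluated here at a slightly detuned frequency of interest.
nu = cmath.phase(chi_m(omega_m + 0.02, omega_m, gamma_m))
print(nu)
```

Away from resonance the susceptibility acquires a phase, which is precisely what the gauge transformation below absorbs.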
In preparation for the forthcoming equivalence, we perform a gauge transformation $\hat{b}_{\text{in},\text{m}}\to\imath e^{-\imath\nu}\hat{b}_{\text{in},\text{m}}$, where
\begin{equation}
\nu:=\arg\{\chi_\text{m}(\Omega)\},
\end{equation}
with $\Omega$ some frequency of interest. We assume that $G_1$ is real, which can always be arranged by an appropriate choice of phase, and set $G_2\to G_2e^{\imath\phi}$, where the transformed $G_2$ is also real. For convenience, we also define $\tilde{\chi}_\text{m}(\omega):=e^{-\imath\nu}\chi_\text{m}(\omega)$. We thus obtain, quite simply,
\begin{multline}
-\imath\omega\begin{pmatrix}
\hat{a}_1\\
\hat{a}_2
\end{pmatrix}=\begin{bmatrix}
-\imath\Delta_1-\frac{\kappa_1}{2}-G_1^2\chi_\text{m}(\omega) & -\imath J-\chi_\text{m}(\omega)G_1G_2e^{-\imath\phi}\\
-\imath J-\chi_\text{m}(\omega)G_1G_2e^{\imath\phi}&-\imath\Delta_2-\frac{\kappa_2}{2}-G_2^2\chi_\text{m}(\omega)
\end{bmatrix}\begin{pmatrix}
\hat{a}_1\\
\hat{a}_2
\end{pmatrix}\\
+\begin{pmatrix}
\sqrt{\kappa_1}\hat{a}_{\text{in},1}\\
\sqrt{\kappa_2}\hat{a}_{\text{in},2}
\end{pmatrix}+\begin{pmatrix}
G_1\sqrt{\gamma_\text{m}}\tilde{\chi}_\text{m}(\omega)\\
G_2\sqrt{\gamma_\text{m}}\tilde{\chi}_\text{m}(\omega)
\end{pmatrix}\hat{b}_{\text{in},\text{m}}.
\end{multline}
Before continuing, we note that the off-diagonal elements of the matrix in the first term on the right-hand side of this equation are neither complex conjugates of, nor equal to, each other. It is for this reason that this optomechanical platform gives rise to non-reciprocal behaviour. In the next section we will explicitly show how this platform realises the cascaded system described earlier.
\section{EQUIVALENCE BETWEEN THE TWO SCENARIOS}
Suppose that we are concerned with noise in a bandwidth that is small compared to $\gamma_\text{m}$ but large compared to $\kappa_i$. This situation can be realised by having the mechanical oscillator interact with a third electromagnetic mode to increase its damping rate\cite{Bernier2017}. Under these circumstances, the mechanical susceptibility can be treated as constant over the bandwidth of interest. By formally taking $\gamma_\text{m}\gg\lvert\Omega-\omega_\text{m}\rvert$, the equivalence between the optomechanical platform and the cascaded system is exact. This can be seen by comparing the two:
\begin{center}
\begin{tabular}{|c||c|}
\hline
\textbf{Cascaded system} & \textbf{Optomechanical platform}\\
\hline
\hline
$\hat{c}_i$ & $\hat{a}_i$\\
$\hat{c}_{\text{in},i}$ & $\hat{a}_{\text{in},i}$\\
$\hat{c}_{\text{in},3}$ & $\hat{b}_\text{in,m}$\\
$\omega_i$ & $\Delta_i$\\
$\gamma_i$ & $\frac{4G_i^2}{\gamma_\text{m}}$\\
$F$ & $J-\frac{2\imath G_1G_2e^{-\imath\phi}}{\gamma_\text{m}}$\\
\hline
\end{tabular}
\end{center}
Any results derived from the cascaded system formalism, therefore, apply identically to the optomechanical platform. For example, if we set $\phi=\pi/2$, such that $e^{\imath\phi}=\imath$, and $J=2G_1G_2/\gamma_\text{m}$, then we recover $F=0$ and perfect non-reciprocity.
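As a quick numerical sanity check of this dictionary, the following Python sketch (with hypothetical values for $G_1$, $G_2$, and $\gamma_\text{m}$) verifies that $\phi=\pi/2$ together with $J=2G_1G_2/\gamma_\text{m}$ yields $F=0$.

```python
import cmath
import math

# Hypothetical coupling values; only the combination J = 2 G1 G2 / gamma_m matters.
G1, G2, gamma_m = 0.3, 0.5, 10.0
phi = math.pi / 2

J = 2 * G1 * G2 / gamma_m

# Effective coupling F = J - 2i G1 G2 exp(-i phi) / gamma_m from the table above.
F = J - 2j * G1 * G2 * cmath.exp(-1j * phi) / gamma_m
print(abs(F))  # ~0 (up to floating-point rounding): perfect non-reciprocity
```

With $\phi=\pi/2$, $e^{-\imath\phi}=-\imath$, so the second term reduces to $2G_1G_2/\gamma_\text{m}$ and cancels $J$ exactly.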
\section{RESULTS}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.7]{fig2a}\quad
\includegraphics[scale=0.7]{fig2b}
\end{center}
\caption{\label{fig:Deltan2}Change in occupation number of system 2, $\Delta n_2$, as a function of the detuning $\Delta$ between the two modes and the occupancy of the common bath, $\bar{m}_3$. Regions where the thermal noise in system 2 increases are shown in red, whereas regions where it decreases are shown in blue. We used $\phi=0$, $\bar{m}_1=100$, and $\bar{m}_2=50$. (Left) $F=0$, which means that the thermal noise in system 1 is unaffected. (Right) $F=\kappa$, which means that the system is not fully non-reciprocal; in this case, the control over the thermal noise in system 2 is still present.}
\end{figure}
We are finally in a position to demonstrate how this system leads to a modified flow of thermal noise. To avoid defining heat and flow of heat in the quantum regime, we will base our discussion on the average thermal occupancy of the two field modes, which we write as $\bar{n}_i:=\langle\hat{c}_i^\dagger\hat{c}_i\rangle$ ($i=1,2$) in the notation of the first section. Because of the form of the equations of motion, we know that the state of the field modes will be a thermal state, and so fully characterised by $\bar{n}_i$.
Our intention is to compare two situations, with and without the non-reciprocal link. If we simply removed the coupling between the two systems and the common bath, we would obtain two disconnected field modes, but we would have modified the physics: In the cascaded scenario, each field mode is coupled to two baths, but in this hypothetical case each would be coupled to only a single bath. The way forward is to compare $\bar{n}_i$ with the equivalent quantity, which we denote $\bar{m}_i$, obtained in the formal limit $\lvert\Delta\rvert\to\infty$, where $\Delta:=\omega_1-\omega_2$, whilst keeping $F$, $\kappa_i$, and $\bar{N}_i$ fixed. We use this to define
\begin{equation}
\Delta n_i:=\bar{n}_i-\bar{m}_i.
\end{equation}
The physical interpretation of this quantity is straightforward. A positive (negative) $\Delta n_i$ means an increase (decrease) in thermal noise, brought about by the non-reciprocal link. In the simplest case when $\kappa_1=\kappa_2=\gamma_1=\gamma_2=:\kappa$ and $F=0$, for example,
\begin{align}
\Delta n_1&=0,\ \text{and}\\
\Delta n_2&=\frac{2\kappa^2}{4\kappa^2+\Delta^2}\bigl(\bar{m}_1-\bar{m}_3\bigr).
\end{align}
Under these circumstances, moreover, we have
\begin{align}
\bar{m}_i&=\tfrac{1}{2}\bigl(\bar{N}_i+\bar{N}_3\bigr),\ \text{for}\ i=1,2\ \text{and}\\
\bar{m}_3&=\bar{N}_3.
\end{align}
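To make these expressions concrete, here is a minimal numerical sketch of $\Delta n_2$ in the simplest case ($\kappa_1=\kappa_2=\gamma_1=\gamma_2=\kappa$ and $F=0$); the bath occupancies are hypothetical.

```python
def delta_n2(kappa, Delta, m1, m3):
    """Delta n_2 = 2 kappa^2 / (4 kappa^2 + Delta^2) * (m1 - m3), for F = 0 and equal rates."""
    return 2 * kappa**2 / (4 * kappa**2 + Delta**2) * (m1 - m3)

kappa = 1.0
N1, N3 = 100.0, 50.0           # hypothetical local and common bath occupancies
m1, m3 = 0.5 * (N1 + N3), N3   # occupancies in the decoupled limit |Delta| -> infinity

# On resonance (Delta = 0), half of the occupancy imbalance m1 - m3 is transferred.
print(delta_n2(kappa, 0.0, m1, m3))   # 12.5

# Detuning suppresses the transfer.
print(delta_n2(kappa, 10.0, m1, m3))  # ~0.48
```

A hot common bath ($\bar{m}_3>\bar{m}_1$) flips the sign of $\Delta n_2$, routing noise into rather than out of mode 2, as seen in Fig.~\ref{fig:Deltan2}.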
By way of example, we show in Fig.~\ref{fig:Deltan2} the change in occupation number $\Delta n_2$ of the second mode as a function of the detuning $\Delta$ and the occupancy $\bar{m}_3$ of the common bath. We note that the temperature of the common bath acts as a knob through which the thermal noise of the second mode can be increased or decreased. For the same parameters, when $F=0$ we find that $\Delta n_1=0$ throughout, demonstrating the power of our system to route thermal noise to or away from the second mode without affecting the first.
\section{CONCLUSIONS}
We have presented an optomechanical platform on which we can demonstrate the ability to controllably route thermal noise into or out of an electromagnetic field mode. To analyse this system we constructed a simplified model based on the cascaded systems formalism. Our results show that it is possible to use a heat bath as a knob with which to route thermal noise towards or away from particular systems in a network of quantum devices.
\acknowledgments
We acknowledge funding from the European Union's Horizon 2020 research and innovation program under grant agreement no.\ 732894 (FETPRO HOT). S.B.\ acknowledges support under the Marie Sk\l{}odowska-Curie Actions programme, grant agreement no.\ 707438 (MSCA-IF-EF-ST SUPEREOM).
\section{Introduction}
\label{intro}
Many statistical learning algorithms require as input a numerical
feature matrix. When categorical variables are present in the data, feature
engineering is needed to encode the different categories into a
suitable feature vector\footnote{Some methods, e.g.,
tree-based, do not require
vectorial encoding of categories \cite{coppersmith1999partitioning}.}.
One-hot encoding is a simple and widely-used encoding method
\cite{alkharusi2012categorical,berry1998factorial,cohen2013applied,davis2010contrast,pedhazur1973multiple,myers2010research,ogrady1988categorical}.
For example, a categorical variable having as categories
\{\textit{female, male, other}\} can be encoded respectively with 3-dimensional
feature vectors: \{[1, 0, 0], [0, 1, 0], [0, 0, 1]\}.
In the resulting vector space, each category
is orthogonal and equidistant to the others, which agrees with
classical intuitions about nominal categorical variables.
Non-curated categorical data often lead to larger
cardinality of the categorical variable and give rise to several
problems when using one-hot encoding.
A first challenge is that the dataset may contain
different morphological representations of the
same category.
For instance, for a categorical variable named \textit{company}, it is not
clear if \textit{`Pfizer International
LLC'}, \textit{`Pfizer Limited'}, and \textit{`Pfizer Korea'}
are different names for the same entity, but they are probably related.
Here we build upon the intuition that
these entities should be closer in the feature space than unrelated
categories, e.g., \textit{`Sanofi Inc.'}.
In \textit{dirty} data, errors such as typos can cause morphological variations
of the categories\footnote{A detailed taxonomy of
dirty data can be found in Kim \cite{kim2003taxonomy}, and a formal description of
data quality problems is proposed by Oliveira \cite{oliveira2005formal}.}.
Without data
cleaning, different string representations of the same category
will lead to completely different encoded vectors.
Another related challenge is that of encoding
categories that do not appear in the training set.
Finally, with high-cardinality categorical variables, one-hot
encoding can become impractical due to the high-dimensional feature matrix
it creates.
Beyond one-hot encoding, the statistical-learning literature has
considered other categorical encoding methods
\cite{duch2000symbolic,grkabczewski2003transformations,micci2001preprocessing,shyu2005handling,weinberger2009feature},
but, in general, they do not
consider the problem of encoding in the presence of errors, nor how
to encode categories absent from the training set.
From a data-integration standpoint, dirty categories may be seen as a
data cleaning problem, addressed, for instance, with entity resolution.
Indeed, database-cleaning research
has developed many approaches to curate
categories \cite{pyle1999data,rahm2000data}. Tasks such as
deduplication or record linkage strive to recognize different variants of
the same entity. A classic approach to learning with dirty categories would
be to apply such cleaning techniques as a preprocessing step and then proceed with standard
categorical encoding. Yet, for the specific case of supervised learning,
such an approach is suboptimal for two reasons. First, the uncertainty on the
entity merging is not exposed to the statistical model. Second, the
statistical objective function used during learning is not used to guide the entity resolution.
Merging entities is a difficult problem. We build from the assumption that
it may not be necessary to solve it, and that simply exposing similarities
is enough.
In this paper, we study prediction with
high-cardinality categorical variables. We seek a simple
feature-engineering approach to replace the widely used one-hot encoding method.
The problem of dirty categories has not received much attention in the
statistical-learning literature---though it is related to database cleaning
research \cite{krishnan2016activeclean,krishnan2017boostclean}. To ground
it in supervised-learning settings,
we introduce benchmarks on seven real-world datasets
that contain at least one textual categorical variable with a high
cardinality. The goal of this paper is to stress the importance
of adapting encoding schemes to dirty categories by showing that a simple
scheme based on string similarities brings important practical gains.
In \autoref{sec:problem_setting} we describe
the problem of dirty categorical data and its impact on encoding
approaches. In \autoref{sec:related_work}, we describe in detail
common encoding approaches for categorical variables,
as well as related techniques in database cleaning---record linkage,
deduplication---and in natural language processing (NLP).
Then, we propose in \autoref{sec:similarity_encoding} a softer version
of one-hot encoding, based on string similarity measures.
We call this generalization \emph{similarity encoding}, as it
encodes the morphological resemblance between categories. We also present
dimensionality reduction approaches that decrease the run time of
the statistical learning task.
Finally, we show in \autoref{sec:empirical_study} the results of a
thorough empirical
study to evaluate encoding methods on dirty categories. On average,
similarity encoding with 3-gram distance is the method that has the best
results in terms of prediction score, outperforming one-hot encoding even
when applying strong dimensionality reduction.
\section{Problem setting: non-standardized categorical variables}
\label{sec:problem_setting}
In a classical statistical data analysis problem, a categorical variable is
typically defined as a variable with values---categories---of either a nominal
or ordinal nature. For example,
\textit{place of birth} is a nominal categorical variable. Conversely, answers
in the Likert scale to the question: `\textit{Do you agree with this
statement: A child's education is the responsibility of parents, not the
school system.}', compose an ordinal categorical variable in which
the level of \textit{agreement} is associated with a numerical value. In
addition, given a prediction problem, variables can be either the target
variable (also known as the dependent or response variable) or an
explanatory variable (a feature or independent variable). In this work, we
focus on the general problem of nominal categorical variables that are part
of the feature set.
In controlled data-collection settings, categorical variables are
standardized: the set of categories is finite and known a
priori---independently from the data---and categories are mutually exclusive.
Typical machine-learning benchmark datasets, as
in UCI Machine Learning Repository, use
standardized categories. For instance, in the Adult
dataset\footnote{\url{https://archive.ics.uci.edu/ml/datasets/adult}.} the
\textit{occupation} of individuals is described with 14 predefined categories
in both the training and testing set.
\paragraph{\textbf{A dirty data problem.}} With
non-standardized categorical variables
the set of possible categories is unknown before the
data collection process. One example of such non-standardized categories
can be found in the Open Payments
dataset\footnote{\url{https://openpaymentsdata.cms.gov/}.}, which
describes financial relationships between healthcare companies
and physicians or teaching hospitals. One possible task is to predict the
value of the binary variable \textit{status} (whether or not the payment was
made under a research protocol) given the
following variables: \textit{corporation name}, \textit{amount}, and
\textit{dispute} (whether the physician refused the payment in
a second instance). A challenge with this dataset is that some categories
are not standardized. For instance, \autoref{tab:pfizer_freq} shows
all categories of the variable \textit{company name} with the word
\textit{Pfizer} in it for the year 2013.
\begin{table}[tb]
\caption{Entities containing the word \textit{Pfizer} in the variable
\textit{company name} of the Open Payments
dataset (year 2013).}
\label{tab:pfizer_freq}
\centering
\scriptsize
\begin{tabular}{lr}
\hline\noalign{\smallskip}
\textbf{Company name} & \llap{\textbf{Frequency}} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Pfizer Inc. & 79,073 \\
Pfizer Pharmaceuticals LLC & 486 \\
Pfizer International LLC & 425 \\
Pfizer Limited & 13 \\
Pfizer Corporation Hong Kong Limited & 4 \\
Pfizer Pharmaceuticals Korea Limited & 3 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
This type of data poses a problem from the point of view of the statistical
analysis because we do not know a priori, without external expert information,
which of these categories refer to the exact same company or whether all
of them have slight differences and hence should be considered as different
entities. Also, we can observe that the frequency of the different categories
varies by several orders of magnitude, which could imply that errors
in the data collection process have been made, unintentionally or not.
Often, the cardinality of a dirty categorical variable
grows with the number of samples in the dataset.
\autoref{fig:cardinality_vs_nsamples} shows the cardinality of the
corresponding categorical variable as a function of the number of samples for
each of the seven datasets that we analyze in this paper.
\begin{figure}[tb]
\begin{minipage}{.3\linewidth}
\caption{Evolution of the number of categories as a function of the number
of samples. In six of our seven datasets, a higher number of samples
implies a higher cardinality of the respective categorical variable.
The dataset \textit{medical charges} is the only one of this list that
reaches its highest cardinality (100 categories) at around 1,000
samples.}
\label{fig:cardinality_vs_nsamples}
\end{minipage}%
\hfill%
\begin{minipage}{.68\linewidth}
\includegraphics[trim={0 0.7cm 0 0cm},clip,width=.95\textwidth]{datasets_unique_categories.pdf}
\end{minipage}
\end{figure}
Dirty categorical data can arise from a variety of mechanisms
\cite{kim2003taxonomy}:
\begin{itemize}
\item Typographical errors (e.g., \textit{proffesor} instead of
\textit{professor})
\item Extraneous data (e.g., name and title, instead of just the name)
\item Abbreviations (e.g., \textit{Dr.} for \textit{doctor})
\item Aliases (e.g., \textit{Ringo Starr} instead of \textit{Richard Starkey})
\item Encoding formats (e.g., ASCII, EBCDIC, etc.)
\item Uses of special characters (space, colon, dash, parenthesis, etc.)
\item Concatenated hierarchical data (e.g., state-county-city
vs. state-city)
\end{itemize}
\paragraph{\textbf{A knowledge-engineering problem.}}
The presence of a large number of categories calls for representing the relationships between them.
In knowledge engineering this is done via an ontology or a taxonomy.
When the taxonomy is unknown, the problem is
challenging. For example, in the \textit{medical charges} dataset,
`cervical spinal fusion' and `spinal fusion except cervical' are different
categories, but both share the fact that they are a spinal fusion,
hence they are not completely independent.
\section{Related work and common practice}
\label{sec:related_work}
Most of the literature on encoding categorical variables relies
on the idea that the set of categories is finite, known a priori, and composed
of mutually exclusive elements \cite{cohen2013applied}. Some studies have
considered encoding high-cardinality categorical variables
\cite{micci2001preprocessing,guo2016entity}, but not the problem of dirty data.
Nevertheless, efforts on this issue have been made in other areas such as
Natural Language Processing and Record Linkage, although they have not been
applied
to encode categorical variables. Below we summarize the main
relevant approaches.
\paragraph{Notation:} we write sets of elements with capital curly fonts,
as $\mathcal{X}$. Elements of a vector space are written in bold
$\mathbf{x}$, and matrices in capital and bold $\mathbf{X}$. For
a matrix $\mathbf{X}$, we denote by $x^i_j$ the entry on
the $i$-th row and $j$-th column.
\subsection{Formalism: concepts in relational databases and statistical
learning}
We first link our formulations to a
database formalism, which relies on sets.
A table is specified by its \emph{relational scheme} $\mathcal{R}$: the set of
$m$ attribute names $\{A_j, j =1...m\}$, i.e., the column names
\cite{maier1983theory}.
Each attribute name has a domain
$\text{dom}(A_j) = \mathcal{D}_j$.
A table is defined as a \emph{relation} $r$ on the scheme $\mathcal{R}$:
a set of
mappings (tuples) $\{t^i: \mathcal{R} \rightarrow \bigcup_{j=1}^{m}
\mathcal{D}_j, \; i=1...n\}$,
where for each \emph{record} (sample)
$t^i \in r$, $t^i(A_j) \in \mathcal{D}_j, \; j = 1...m$.
If $A_j$ is a numerical attribute, then
$\text{dom}(A_j) = \mathcal{D}_j \subseteq \mathbb{R}$.
If $A_j$ is a categorical attribute represented by strings,
then $\mathcal{D}_j \subseteq \mathbb{S}$, where $\mathbb{S}$ is the set of
finite-length strings\footnote{Note that the domain of the categorical
variable depends on the training set.}. As a shorthand, we call
$k_j = \text{card}(\mathcal{D}_j)$ the cardinality of the
variable.
As categorical entities are not numerical, they require
an operation to define a feature matrix $\mathbf{X}$
from the relation $r$. Statistical or machine learning models that need
vector data are applied after a \textbf{categorical variable encoding},
a feature map that consists of
replacing the tuple elements $t^i(A_j), i=1...n$
by feature vectors:
\begin{equation}
\mathbf{x}_j^i \in \mathbb{R}^{p_j}, p_j \geq 1.
\end{equation}
Using the same notation in the case of numerical attributes, we can define
$\mathbf{x}_j^i = t^i(A_j) \in \mathbb{R}^{p_j}, p_j = 1$ and write the
feature matrix $\mathbf{X}$ as:
\begin{equation}
\mathbf{X} = \left[
\begin{array}{*5{c}}
\mathbf{x}_1^1 & \ldots & \mathbf{x}_m^1 \\
\vdots & \ddots & \vdots \\
\mathbf{x}_1^n & \ldots & \mathbf{x}_m^n
\end{array}\right]
\in \mathbb{R}^{n\times p}, p = \sum_{j = 1}^{m} p_j
\end{equation}
In standard supervised-learning settings, the observations, represented
by the feature matrix $\mathbf{X}$, are associated with a target vector
$\mathbf{y} \in \mathbb{R}^n$ to predict.
We now review classical encoding methods. For simplicity of exposition,
in the rest of the section we will consider only a single categorical
variable $A$, omitting the column index $j$ from the previous definitions.
\paragraph{\textbf{One-hot encoding.}}
Let $A$ be a categorical variable with cardinality $k \geq 2$ such that
$\text{dom}(A) = \{d_\ell, 1 \leq \ell \leq k\}$ and $t^i(A) = d^i$.
The one-hot encoding method sets each feature vector as:
\begin{equation}
\mathbf{x}^i = \left[\mathbf{1}_{\{d_1\}}(d^i),\;\;
\mathbf{1}_{\{d_2\}}(d^i),\;\; ...\;, \;\;
\mathbf{1}_{\{d_{k}\}}(d^i)
\right] \; \in \mathbb{R}^{k}
\label{eq:onehot_encoding}
\end{equation}
where $\mathbf{1}_{\{d_\ell\}}(\cdot)$ is the indicator function over the
singleton $\{d_\ell\}$.
Several variants of the one-hot encoding have been
proposed\footnote{Variants of one-hot encoding include dummy coding, choosing the
zero vector for a \textit{reference} category, effects coding,
contrast coding, and nonsense coding \cite{cohen2013applied}.},
but in a linear regression, all perform equally in terms of $R^2$
score\footnote{The difference between methods is the interpretability
of the values for each variable.} (see Cohen \cite{cohen2013applied} for details).
The one-hot encoding method is intended
to be used when categories are mutually exclusive \cite{cohen2013applied},
which is not necessarily
true of dirty data (e.g., misspelled variables should be
interpreted as overlapping categories).
Another drawback of this method is that it provides no heuristics
to assign a code vector to new categories that appear in the testing set
but have not been encoded on the training set. Given the previous definition,
the zero vector will be assigned to any new category in the testing set, which
creates collisions if more than one new category is introduced.
Finally, high-cardinality categorical variables greatly increase the
dimensionality of the feature matrix, which increases its
computational cost. Dimensionality reduction on the
one-hot encoding vector tackles this problem
(see \autoref{subsec:dimensionality_reduction}), with the risk of losing
information.
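A minimal Python sketch of this behaviour, using made-up company names in the spirit of \autoref{tab:pfizer_freq}: every category unseen at train time is mapped to the zero vector, so distinct new categories collide.

```python
def one_hot(train_categories, value):
    """One-hot encode `value` against the categories seen at train time.

    Any category absent from the training set maps to the zero vector,
    illustrating the collision problem discussed above.
    """
    return [1.0 if value == d else 0.0 for d in train_categories]

# Hypothetical training-set categories.
train = ['Pfizer Inc.', 'Pfizer Limited', 'Sanofi Inc.']

print(one_hot(train, 'Pfizer Limited'))  # [0.0, 1.0, 0.0]

# Two distinct unseen categories collide on the zero vector:
print(one_hot(train, 'Pfizer Korea'))    # [0.0, 0.0, 0.0]
print(one_hot(train, 'Novartis AG'))     # [0.0, 0.0, 0.0]
```

Note also that the closely related strings `Pfizer Limited' and `Pfizer Korea' end up maximally far apart, motivating the similarity encoding of \autoref{sec:similarity_encoding}.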
\paragraph{\textbf{Hash encoding.}}
A solution to reduce the dimensionality of the data is
to use the hashing trick \cite{weinberger2009feature}. Instead of
assigning a different unit vector to each category, as one-hot encoding does,
one could define a hash function to designate a feature
vector on a reduced vector space. This method does not consider the
problem of dirty data either, because it assigns hash values that are
independent of the morphological similarity between categories.
\paragraph{\textbf{Encoding using target statistics.}}
\label{subsec:target-encoding}
The target encoding method \cite{micci2001preprocessing}, is a variation of the
\textit{VDM (value difference metric) continuousification scheme}
\cite{duch2000symbolic}, in which each category is
encoded given the effect it has on the target variable $\mathbf{y}$. The
method considers that categorical variables can contain rare categories.
Hence it represents each category by the probability of $\mathbf{y}$ conditional
on this category. In addition, it takes an empirical Bayes approach to shrink the
estimate. Thus, for a binary classification task:
\begin{equation}
\mathbf{x}^i = \lambda(n^i) \, \mathbb{E}_\ell
\bigl[\mathbf{y}^\ell|d^\ell = d^i \bigr]
+ \bigl(1 - \lambda(n^i) \bigr) \, \mathbb{E}_\ell \bigl[\mathbf{y}^\ell \bigr]
\;\; \in \mathbb{R}
\label{eq:target_encoding}
\end{equation}
where $n^i$ is the frequency of the category $d^i$
and $\lambda(n^i) \in [0, 1]$ is a weight such that its derivative with respect
to $n^i$ is positive, e.g.,
$\lambda(n^i) = \frac{n^i}{n^i + m}$, with $m > 0$ \cite{micci2001preprocessing}.
Note that the obtained feature vector is in this case one-dimensional.
Another related approach is the MDV continuousification scheme
\cite{grkabczewski2003transformations}, which encodes a category $d^i$ by
its expected value on each target $c_k$,
$\mathbb{E}_\ell \bigl[d^\ell = d^i | \mathbf{y}^\ell = c_k\bigr]$ instead
of $\mathbb{E}_\ell \bigl[\mathbf{y}^\ell|d^\ell = d^i \bigr]$ used in the
VDM. In the case of a classification problem, $c_k$ belongs to the set of
possible classes for the target variable.
However, in a dirty dataset, as with spelling mistakes, some categories can
appear only once, undermining the meaning of their
marginal link to $\mathbf{y}$.
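A minimal sketch of \autoref{eq:target_encoding} for a binary target, with the shrinkage weight $\lambda(n)=n/(n+m)$; the smoothing constant $m$ and the toy data are our own illustrative choices.

```python
from collections import Counter

def target_encode(categories, y, m=10.0):
    """Empirical-Bayes target encoding: per-category mean of y,
    shrunk towards the global mean with weight lambda(n) = n / (n + m)."""
    prior = sum(y) / len(y)
    counts = Counter(categories)
    sums = Counter()
    for d, yi in zip(categories, y):
        sums[d] += yi
    codes = {}
    for d, n in counts.items():
        lam = n / (n + m)
        codes[d] = lam * (sums[d] / n) + (1 - lam) * prior
    return codes

# Toy data: a frequent category 'a' and a rare category 'b'.
cats = ['a'] * 8 + ['b'] * 2
y    = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
codes = target_encode(cats, y, m=2.0)
print(codes['a'])  # close to its own mean 0.5
print(codes['b'])  # pulled from its mean 1.0 towards the prior 0.6
```

The rare category is shrunk more strongly towards the prior, which mitigates but does not eliminate the problem of categories that occur only once.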
\paragraph{\textbf{Clustering.}}
To tackle the problem of high dimensionality for high-cardinality categorical
variables, one approach is to
perform a clustering of the categories and generate indicator
variables with respect to the clusters. If $A$ is a categorical variable
with domain $\mathcal{D}$ and cardinality $k$, we can partition the set
$\mathcal{D}$ into $c \ll k$ clusters
$\mathcal{D}_{1}...\mathcal{D}_{c}$; hence the feature vector
associated to this variable is:
\begin{equation}
\mathbf{x}_j^i = \left[\mathbf{1}_{\mathcal{D}_{1}}(d^i),
\mathbf{1}_{\mathcal{D}_{2}}(d^i), ...,
\mathbf{1}_{\mathcal{D}_{c}}(d^i)\right]
\end{equation}
To build clusters, Micci-Barreca \cite{micci2001preprocessing} proposes grouping categories
with similar target statistics, typically using
hierarchical clustering.
\paragraph{\textbf{Embedding with neural networks.}}
Guo \cite{guo2016entity} proposes an encoding
method based on neural networks. It is inspired by NLP methods that perform
word embedding based on textual context \cite{mikolov2013efficient}
(see \autoref{subsec:nlp}). In tabular data, the equivalent to this
context is given by the values of the other columns, categorical or not.
The approach is simply a standard neural network, trained to link the
table $\mathcal{R}$ to the target $\mathbf{y}$ with standard
supervised-learning architectures and loss and as inputs the table with
categorical columns one-hot encoded. Yet, Guo \cite{guo2016entity} uses as a
first hidden layer a bottleneck for each categorical variable.
The corresponding intermediate
representation, learned by the network, gives a vector embedding of the
categories in a reduced dimensionality. This approach is
interesting as it guides the encoding in a supervised way. Yet, it entails
the computational and architecture-selection
costs of deep learning. Additionally, it is still based on an initial
one-hot encoding which is susceptible to dirty categories.
\paragraph{\textbf{Bag of n-grams.}}
One way to represent morphological variation of strings is
to build a vector containing the count of all possible n-grams of consecutive characters (or words).
This method is straightforward and naturally creates vectorial
representations where similar strings are close to each other. In this work we
consider n-grams of characters to capture the morphology of short strings.
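A minimal sketch of a character 3-gram representation; the Jaccard-style similarity used here is one common choice among several, and the company names are illustrative.

```python
from collections import Counter

def ngrams(s, n=3):
    """Multiset (bag) of the character n-grams of the string s."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def ngram_similarity(s1, s2, n=3):
    """Jaccard-like similarity between two n-gram multisets (one common choice)."""
    g1, g2 = ngrams(s1, n), ngrams(s2, n)
    inter = sum((g1 & g2).values())  # multiset intersection
    union = sum((g1 | g2).values())  # multiset union
    return inter / union if union else 0.0

# Related company names share many 3-grams ('Pfi', 'fiz', 'ize', ...):
print(ngram_similarity('Pfizer Limited', 'Pfizer International LLC'))
# Unrelated names share almost none:
print(ngram_similarity('Pfizer Limited', 'Sanofi Inc.'))
```

Similar strings thus land close to each other without any explicit deduplication step.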
\subsection{Related approaches in natural language processing}
\label{subsec:nlp}
\paragraph{\textbf{Stemming or lemmatizing.}} Stemming and lemmatizing
are text preprocessing techniques that strive to extract a common root
from different variants of a word \cite{lovins1968development,hull1996stemming}.
For instance,
`standardization', `standards', and `standard' could all be reduced
to `standard'. These techniques are based on a set of rules, crafted to the
specificities of a language. Their drawbacks are that they may not be
suited to a specific domain, such as medical practice, and are costly to
develop. Some recent developments in NLP avoid stemming by working
directly at the character level \cite{bojanowski2016enriching}.
\paragraph{\textbf{Word embeddings.}}
Capturing the idea that some categories are closer than others, such as
`cervical spinal fusion' being closer to `spinal fusion except cervical' than
to `renal failure' in the \emph{medical charges} dataset can be seen as
a problem of learning semantics. Statistical approaches to semantics stem from
low-rank data reductions of word occurrences: the original LSA (latent
semantic analysis) \cite{landauer1998introduction} is a PCA of the
word occurrence matrix in documents; {\tt word2vec} \cite{mikolov2013efficient}
can be seen as a matrix factorization on a matrix of word occurrence in local
windows; and {\tt fastText} \cite{bojanowski2016enriching},
a state-of-the-art approach for
supervised learning on text, is based on a low-rank representation of text.
However, these semantics-capturing embeddings for words cannot
readily be used for categorical columns of a table. Indeed, tabular data
seldom contain enough samples and enough context to train modern
semantic approaches. Pretrained embeddings would not work for
entries drawn from a given specialized domain, such as company names or
medical vocabulary. Business or application-specific tables require
domain-specific semantics.
\subsection{Related approaches in database cleaning}
\paragraph{\textbf{Similarity queries.}}
To cater for different ways information might appear, databases use queries
with inexact matching. Queries using textual similarity
help integration of heterogeneous databases without common domains
\cite{cohen1998integration}.
\paragraph{\textbf{Deduplication, record linkage, or fuzzy matching.}}
In databases, deduplication or record linkage strives to find different
variants that denote the same entity and match them
\cite{elmagarmid2007duplicate}. Classic record
linkage theory
deals with merging multiple tables that have entities in
common. It seeks a combination of similarities across columns and a
threshold to match rows \cite{fellegi1969theory}. If known matching pairs
of entities are available, this problem can be cast as a supervised
or semi-supervised learning problem \cite{elmagarmid2007duplicate}.
If there are no known matching pairs, the simplest
solution boils down to a clustering approach, often on a similarity
graph, or a related expectation
maximization approach \cite{winkler2002methods}.
Supervising the deduplication task is challenging and often calls for
human intervention. Sarawagi \cite{sarawagi2002interactive} uses active learning to
minimize human effort.
Much of the recent progress in database research strives for faster
algorithms to tackle huge databases \cite{christen2012survey}.
\section{Similarity encoding: robust feature engineering}
\label{sec:similarity_encoding}
\subsection{Working principle of similarity encoding}
One-hot encoding can be interpreted as a
feature vector in which each dimension corresponds to the zero-one
similarity between the category we want to encode and all the known
categories (see \autoref{eq:onehot_encoding}).
Instead of using this particular similarity,
one can extend the encoding to use one of the many string similarities,
e.g., as used for entity resolution. A survey of the most
commonly used text similarity measures can be found in
\cite{cohen2003comparison,gomaa2013survey}.
Most of these similarities are based on a morphological comparison between
two strings. Identical strings will have a similarity equal to 1 and
very different strings will have a similarity closer to 0.
We first describe three of the most commonly used similarity measures:
\paragraph{Levenshtein-ratio.}
It is based on the Levenshtein distance \cite{levenshtein1966binary}
(or edit distance) $d_\text{lev}$ between two strings $s_1$ and $s_2$,
which is calculated as a function of the minimum number
of edit operations that are necessary to transform one string into another.
In this paper we used a Levenshtein distance in which all edit operations have a
weight of 1, except for the \emph{replace} operation,
which has a weight of 2. We obtain a similarity measure using:
\begin{equation}
\text{sim}_{\text{lev-ratio}}(s_1, s_2) = 1 -
\frac{d_\text{lev}(s_1, s_2)}{|s_1|+|s_2|}
\end{equation}
where $|s|$ is the character length of the string $s$.
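As a hedged sketch (not the authors' implementation), the weighted edit distance described above, with insertions and deletions of cost 1 and replacements of cost 2, and the derived ratio can be written as:

```python
def levenshtein(s1, s2):
    """Edit distance with replace weight 2 (so it never exceeds |s1| + |s2|)."""
    prev = list(range(len(s2) + 1))  # cost of building prefixes of s2 from ""
    for i, c1 in enumerate(s1, 1):
        curr = [i]
        for j, c2 in enumerate(s2, 1):
            cost = 0 if c1 == c2 else 2
            curr.append(min(prev[j] + 1,          # deletion (weight 1)
                            curr[j - 1] + 1,      # insertion (weight 1)
                            prev[j - 1] + cost))  # replacement (weight 2)
        prev = curr
    return prev[-1]

def sim_lev_ratio(s1, s2):
    """Levenshtein-ratio similarity in [0, 1]."""
    if not s1 and not s2:
        return 1.0
    return 1 - levenshtein(s1, s2) / (len(s1) + len(s2))
```

With the replace weight set to 2, the distance stays bounded by $|s_1|+|s_2|$, which is what makes the ratio land in $[0, 1]$.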
\paragraph{Jaro-Winkler.}
\cite{winkler1999state}
This similarity is a variation of the Jaro distance $d_\text{jaro}$
\cite{jaro1989advances}:
\begin{equation}
d_\text{jaro}(s_1, s_2) = \frac{m}{3|s_1|} + \frac{m}{3|s_2|}
+ \frac{m-t}{3m}
\end{equation}
where $m$ is the number of matching characters between $s_1$ and
$s_2$\footnote{Two characters belonging to
$s_1$ and $s_2$ are considered to be a match if they are identical and the
difference in their respective positions does not exceed
$\left\lfloor \max(|s_1|,|s_2|)/2 \right\rfloor - 1$.
For $m=0$, the Jaro distance is set to 0.},
and $t$ is the number of character transpositions between the strings
$s_1$ and $s_2$ without considering the unmatched characters.
The Jaro-Winkler similarity $\text{sim}_\text{j-w}(\cdot, \cdot)$ emphasizes
prefix similarity between the two strings. It is defined as:
\begin{equation}
\text{sim}_\text{j-w}(s_1, s_2) =
d_\text{jaro}(s_1, s_2) + l \, p \left[1 - d_\text{jaro}(s_1, s_2)\right]
\end{equation}
where $l$ is the length of the longest common prefix
of $s_1$ and $s_2$, and $p$ is a constant scaling factor.
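A sketch of these two quantities, under the standard conventions not spelled out above (matching window $\lfloor\max(|s_1|,|s_2|)/2\rfloor - 1$, common prefix $l$ capped at 4 characters, $p=0.1$); note that $d_\text{jaro}$ as defined equals 1 for identical strings, so the Winkler boost is applied as $d_\text{jaro} + l\,p\,(1-d_\text{jaro})$:

```python
def jaro(s1, s2):
    if s1 == s2:
        return 1.0
    window = max(len(s1), len(s2)) // 2 - 1
    match1, match2 = [False] * len(s1), [False] * len(s2)
    m = 0
    # Count matching characters within the sliding window
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                m += 1
                break
    if m == 0:
        return 0.0
    # Count transpositions among the matched characters
    t, k = 0, 0
    for i, c in enumerate(s1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if c != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (m / len(s1) + m / len(s2) + (m - t) / m) / 3

def jaro_winkler(s1, s2, p=0.1):
    d = jaro(s1, s2)
    l = 0  # length of the common prefix, capped at 4
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        l += 1
    return d + l * p * (1 - d)
```

On the classic example pair "MARTHA"/"MARHTA", this gives $d_\text{jaro} \approx 0.944$ and, with the prefix boost, $\text{sim}_\text{j-w} \approx 0.961$.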
\paragraph{N-gram similarity.}
It is based on splitting both strings into n-grams and then
computing the Jaccard coefficient between the two sets of n-grams
\cite{angell1983automatic}:
\begin{equation}
\text{sim}_{\text{n-gram}}(s_1, s_2) =
\frac{|\text{n-grams}(s_1) \cap \text{n-grams}(s_2)|}
{|\text{n-grams}(s_1) \cup \text{n-grams}(s_2)|}
\end{equation}
where $\text{n-grams}(s), s \in \mathbb{S},$
is the set of consecutive n-grams for the
string $s$. The notion behind this is that categories sharing a large number of
n-grams are probably very similar.
For instance, $\text{3-grams}(\text{Paris}) =
\{\text{Par}, \text{ari}, \text{ris}\}$ and
$\text{3-grams}(\text{Parisian}) =
\{\text{Par}, \text{ari}, \text{ris}, \text{isi}, \text{sia},
\text{ian}\}$ have three 3-grams in common, and their similarity is
$\text{sim}_{\text{3-gram}}(\text{Paris}, \text{Parisian}) = \frac{3}{6}$.
There exist
more efficient versions of the 3-gram similarity
\cite{kondrak2005n}, but we do not explore them in this work.
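The n-gram similarity above can be sketched directly; it reproduces the Paris/Parisian value of $\frac{3}{6}$:

```python
def ngrams(s, n=3):
    """Set of consecutive n-grams of a string."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def sim_ngram(s1, s2, n=3):
    """Ratio of shared n-grams over all distinct n-grams of the two strings."""
    g1, g2 = ngrams(s1, n), ngrams(s2, n)
    return len(g1 & g2) / len(g1 | g2)
```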
\paragraph{\textbf{Similarity encoding.}}
Given a similarity measure, one-hot encoding can be generalized to
account for similarities in categories.
Let $A$ be a categorical variable of cardinality $k$, and let
$\text{sim}: (\mathbb{S} \times \mathbb{S}) \rightarrow [0, 1]$ be an arbitrary
string-based similarity measure so that:
\begin{equation}
\text{sim}(s_1, s_2) = \text{sim}(s_2, s_1), \quad\forall s_1, s_2 \in \mathbb{S}.
\end{equation}
The similarity encoding we propose replaces each instance
$d^i$, $i=1,\ldots,n$, of $A$ (with distinct categories
$d_1,\ldots,d_k$) by a feature vector $\mathbf{x}^i \in
\mathbb{R}^k$ such that:
\begin{equation}
\mathbf{x}^i = \left[\text{sim}(d^i, d_1), \; \text{sim}(d^i, d_2), \;...,
\;\text{sim}(d^i, d_k)\right] \in \mathbb{R}^k.
\end{equation}
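A minimal sketch of the encoder (the function names are ours, not those of the paper's implementation), using the 3-gram similarity:

```python
def ngrams3(s):
    """Set of consecutive 3-grams of a string."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

def sim3(s1, s2):
    """3-gram similarity; 1.0 for two strings too short to have any 3-gram."""
    g1, g2 = ngrams3(s1), ngrams3(s2)
    return len(g1 & g2) / len(g1 | g2) if g1 | g2 else 1.0

def similarity_encode(values, categories):
    """Each entry becomes its vector of similarities to the k known categories."""
    return [[sim3(v, c) for c in categories] for v in values]
```

With the zero-one exact-match similarity this reduces to one-hot encoding; with a string similarity, a category unseen at train time such as "Parisian" still receives a meaningful, non-zero encoding.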
\subsection{Dimensionality reduction: approaches and experiments}
\label{subsec:dimensionality_reduction}
With one-hot or similarity encoding,
high-cardinality categorical variables lead to high-dimensional feature
vectors. This may lead to
computational and statistical challenges.
Dimensionality reduction may be used on the resulting feature matrix.
A natural approach is to use Principal Component Analysis, as it captures
the maximum-variance subspace. Yet, it entails a high computational
cost\footnote{Precisely, the cost of PCA is $\mathcal{O}(n\,p\,\min(n, p))$.}
and is cumbersome to run in an online setting. Hence, we explored
using \textbf{random projections}: based on the Johnson-Lindenstrauss lemma,
these give a reduced representation that accurately approximates distances of
the vector space \cite{rahimi2008random}.
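As a sketch of this idea (a plain Gaussian projection, not necessarily the exact variant used in the experiments):

```python
import random

def random_projection(X, d, seed=0):
    """Project n rows of k features onto d Gaussian directions (JL-style sketch)."""
    rng = random.Random(seed)
    k = len(X[0])
    # Entries scaled by 1/sqrt(d) so squared norms are preserved in expectation
    R = [[rng.gauss(0, 1) / d ** 0.5 for _ in range(d)] for _ in range(k)]
    return [[sum(x[i] * R[i][j] for i in range(k)) for j in range(d)] for x in X]
```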
A drawback of such a projection approach is that it requires first
computing the similarity to all categories. Also, it mixes the
contribution of all categories in non-trivial ways and hence
may make interpreting the encodings difficult. For this reason, we also explored
prototype-based methods: choosing a small number $d$ of categories and
encoding by computing the similarity to these prototypes.
These prototypes should be representative of the full category set in order to have a meaningful reduced space.
One simple approach is to choose the $d \ll k$ \textbf{most frequent
categories} of the dataset.
Another way of choosing prototype elements in the category set is to use
clustering methods like \textbf{k-means}, which choose cluster centers
that minimize a distortion measure. We use as prototypes the
element closest to the center of each cluster. Note that we can
apply the clustering on an initial version of the similarity-encoding matrix
computed on a subset of the data.
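The most-frequent-categories reduction can be sketched in a few lines (the helper name is illustrative); categories are then encoded by their similarity to these $d$ prototypes only:

```python
from collections import Counter

def most_frequent_prototypes(column, d):
    """Keep the d most frequent categories of a column as encoding prototypes."""
    return [cat for cat, _ in Counter(column).most_common(d)]
```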
Clustering of dirty categories based on a string similarity is strongly
related to deduplication or record-linkage strategies used in database
cleaning. One notable difference with using a cleaning strategy before
statistical learning is that we are not converting the various forms of
the categories to the corresponding cluster centers, but rather
encoding their similarities to these.
\section{Empirical study of similarity encoding}
\label{sec:empirical_study}
To evaluate the performance of our encoding methodology in a prediction task
containing high-cardinality categorical variables, we present an
empirical study on seven real-world datasets. If a
dataset has more than one categorical variable,
only the most relevant one (in terms of predictive
power\footnote{Variables'
predictive power was evaluated with the
feature importances of a Random Forest as implemented in {\tt
scikit-learn} \cite{pedregosa2011scikit}. The feature importance is
calculated as the average (normalized) total reduction of the Gini impurity criterion brought by each feature.})
was encoded with our approach,
while the rest were one-hot encoded.
\begin{table}[tb]
\caption{Dataset description. The columns \emph{Number of categories},
\emph{Most frequent category} and \emph{Least frequent category} contain
information about the particular categorical variable selected for each
dataset (see \autoref{subsec:datasets} for details)}
\label{tab:datasets_description}
\setlength\tabcolsep{5pt}
\scriptsize
\begin{tabular}
{L{.19\linewidth} R{.11\linewidth} R{.14\linewidth} R{.12\linewidth}
R{.11\linewidth} L{.15\linewidth}}
\hline\noalign{\smallskip}
\textbf{Dataset} & \textbf{Number of rows} &
\textbf{Number of categories} & \textbf{Most frequent category} &
\textbf{Least frequent category} & \textbf{Prediction type} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
medical charges & 1.6E+05 & 100 & 3023 & 613 & regression \\
employee salaries & 9.2E+03 & 385 & 883 & 1 & regression \\
open payments & 1.0E+05 & 973 & 4016 & 1 & binary-clf \\
midwest survey & 2.8E+03 & 1009 & 487 & 1 & multiclass-clf \\
traffic violations & 1.0E+05 & 3043 & 7817 & 1 & multiclass-clf \\
road safety & 1.0E+04 & 4617 & 589 & 1 & binary-clf \\
beer reviews & 1.0E+04 & 4634 & 25 & 1 & multiclass-clf \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\autoref{tab:datasets_description} summarizes the characteristics of
the datasets and the respective categorical variable
(for more information about the data, see \autoref{subsec:datasets}). The
sample size of the datasets varies from 3,000 to 160,000 and the cardinality of
the selected categorical variable ranges from 100 to more than 4,600 categories.
Most datasets have at least one category that appears only once,
hence when the data is split into a train and test set,
some categories will likely be present only in the testing set.
To measure prediction performance, we use the following metrics: $R^2$ score
for regression, average precision score for binary classification, and
accuracy for multiclass classification. All these scores are upper bounded
by $1$ and higher values mean better predictions.
For the prediction pipeline we used standard data processing and
classification/regression methods implemented in the Python module scikit-learn
\cite{pedregosa2011scikit}. As we focus on evaluating general categorical
encoding methods, all datasets use the same pipeline: no specific parameter
tuning was performed for a particular dataset
(for technical details see \autoref{subsec:prediction_pipeline}).
\begin{figure*}[tb]
\centering
\includegraphics[trim={0.5cm 0.5cm 0.5cm 0.7cm},clip,width=.985\textwidth]
{datasets_encoders_GradientBoosting.pdf}%
\llap{\rlap{\raisebox{2em}{\sffamily\bfseries
Gradient}}\hspace*{.98\textwidth}}%
\llap{\rlap{\raisebox{1em}{\sffamily\bfseries
boosted trees}}\hspace*{.98\textwidth}}
\includegraphics[trim={0.5cm 0.5cm 0.5cm 0cm},clip,width=.985\textwidth]
{datasets_encoders_Ridge.pdf}%
\llap{\rlap{\raisebox{2em}{\sffamily\bfseries
Ridge}}\hspace*{.98\textwidth}}%
\llap{\rlap{\raisebox{1em}{\sffamily\bfseries
regression}}\hspace*{.98\textwidth}}
\caption{\textbf{Performance of different encoding methods.}
Upper figure: gradient boosting; Lower figure: ridge regression.
Each box-plot summarizes the prediction scores of 100 random splits
(with 80\% of the samples for training and 20\% for testing).
For all datasets, the prediction score is upper bounded by $1$
(a higher score means a better prediction).
The right side of the figure indicates the average ranking
across datasets for each method.
The vertical dashed line indicates the median value of the one-hot
encoding method.}
\label{fig:datasets_distances}
\end{figure*}
First, we benchmarked the similarity encoding with one-hot encoding and
other commonly used methods. Each box-plot in \autoref{fig:datasets_distances}
contains the prediction scores of 100 random
splits of the data (80\% of the samples for training and 20\% for testing)
using gradient boosted trees and ridge regression.
The right side of each plot shows the average ranking of each method
across datasets in terms of the median value of the respective box-plots.
In general, similarity encoding methods have the best results in terms of the
average ranking across datasets, with 3-gram being the one that performs
the best for both classifiers (for Ridge, 3-gram similarity is the best
method on every dataset).
On the contrary, the hashing encoder\footnote{We used the MD5 hash function with 256 components.} has the worst performance.
Target and MDV encodings perform well
(in particular with gradient boosting),
considering that the dimension of the feature vector is equal to $1$ for
regression and binary classification, and to the number of classes for
multiclass classification (which goes up to 104 classes for the
\emph{beer reviews}
dataset).
\begin{figure*}[tb]
\centering
\includegraphics[trim={0.5cm 0.5cm 0.5cm 0.5cm},clip,width=1\textwidth]
{datasets_3gram_classifiers_scorediff.pdf}
\caption{\textbf{Scores with different classifiers.} Comparison between
one-hot and 3-gram similarity encoding. Each box-plot corresponds to 100
random splits with 20\% of the samples for the testing set.
The right side of the figure indicates
the average ranking across datasets for each method in terms of the
median value of the 3-gram similarity.}
\label{fig:datasets_3gram_classifiers}
\end{figure*}
\autoref{fig:datasets_3gram_classifiers} shows
the difference in score between one-hot and similarity encoding for
different
regressors/classifiers: standard linear methods, ridge
and logistic regression with internal cross-validation of the regularization
parameter, and also the tree-based
methods, random forest and gradient boosting.
The average ranking is computed with respect to the 3-gram similarity scores.
The \emph{medical charges} and \emph{employee salaries} datasets
do not have scores for the logistic model because their prediction
task is a regression problem.
\begin{figure*}[tb]
\centering
\includegraphics[trim={0.5cm 0cm 0.5cm 0.5cm},clip,width=1\textwidth]
{datasets_dimension-reduction_GradientBoosting.pdf}%
\llap{\rlap{\raisebox{2.9em}{\sffamily\bfseries
Gradient}}\hspace*{\textwidth}}%
\llap{\rlap{\raisebox{1.9em}{\sffamily\bfseries
boosted trees}}\hspace*{\textwidth}}
\includegraphics[trim={0.5cm 0.5cm 0.5cm 0.5cm},clip,width=1\textwidth]
{datasets_dimension-reduction_Ridge.pdf}%
\llap{\rlap{\raisebox{2.5em}{\sffamily\bfseries
Ridge}}\hspace*{1\textwidth}}%
\llap{\rlap{\raisebox{1.5em}{\sffamily\bfseries
regression}}\hspace*{1\textwidth}}
\caption{\textbf{Performance with different dimensionality
reduction methods}. $Full$ denotes
the encoding without dimensionality reduction and $d$ the
dimension of the reduction. Each box-plot corresponds to 100 random
splits with 80\% of the samples for the training set and 20\% for the
testing set. The right side of the plot indicates the average
ranking across datasets for each method ($^*$ denotes the best average ranking).
}
\label{fig:datasets_dimension-reduction}
\end{figure*}
\autoref{fig:datasets_dimension-reduction} shows
prediction results of different dimensionality reduction methods
applied to six of our seven datasets (\emph{medical charges} was
excluded from the figure because of its smaller cardinality in comparison with
the other datasets).
For dimension reduction, we investigated \emph{i)}
random projections, \emph{ii)} encoding with similarities to the
most frequent categories, \emph{iii)} encoding with similarities to
categories closest to the centers of a k-means clustering,
and \emph{iv)} one-hot encoding after merging categories with a k-means clustering,
which is a simple form of deduplication.
The latter method enables bridging the gap with the deduplication
literature: we can compare merging entities before statistical learning
to expressing their similarity using the same similarity measure.
\section{Discussion}
Encoding categorical textual variables in dirty tables has not been
studied much in the statistical-learning literature. Yet it is a common hurdle
in many application settings. This paper shows that there is room for
improvement upon the standard practice of one-hot encoding by accounting
for similarities across the categories. We studied similarity
encoding, which is a very simple generalization of the one-hot
encoding method\footnote{A Python implementation is available at
\url{https://dirty-cat.github.io/}.}.
An important contribution of this paper is the empirical benchmarks on
dirty tables. We selected seven real-world datasets containing
at least one dirty categorical variable with high-cardinality
(see \autoref{tab:datasets_description}). These datasets are openly
available, and we hope that they will foster more research on dirty
categorical variables. By their diversity, they enable exploring the
trade-offs of encoding approaches and concluding on generally-useful
defaults.
The 3-gram similarity appears to be a good choice,
outperforming similarities typically used for entity
resolution such as Jaro-Winkler and Levenshtein-ratio
(\autoref{fig:datasets_distances}).
A possible reason for the success of 3-gram is visible in the
histogram of the similarities across classes
(\autoref{fig:histogram_distances}).
For all datasets, 3-gram has the smallest median values, and assigns 0
similarity to many pairs of categories. This
allows better separation of similar and dissimilar categories,
e.g., \emph{`midwest'} and \emph{`mid west'} as opposed to \emph{`southern'}.
3-gram similarity also outperforms the bag of 3-grams.
Indeed, similarity encoding implicitly defines the following kernel between two observations:
\begin{equation}
\inner{d^i}{d^j}_{\text{sim}} =
\sum_{l=1}^k \text{sim}(d^i, d_l) \, \text{sim}(d^j, d_l)
\end{equation}
Hence, it projects on a dictionary of reference n-grams and
gives more importance to the n-grams that best capture
the similarity between categories.
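Concretely, this kernel is just the inner product of two similarity-encoded feature vectors; a one-line sketch:

```python
def sim_kernel(x_i, x_j):
    """<d_i, d_j>_sim: inner product of similarity-encoded feature vectors."""
    return sum(a * b for a, b in zip(x_i, x_j))
```

On a whole encoded matrix $X$, the Gram matrix of this kernel is simply $X X^\top$.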
\begin{figure*}[tb]
\centerline{
\includegraphics[trim={0 0.5cm 0 0.5cm},clip, width=1\textwidth]{histogram_distances.pdf}}
\caption{\textbf{Histogram of pairwise similarity between categories for
different string similarity metrics.} 10,000 pairs of categories
were randomly generated for each dataset (y-axis in logarithmic scale).
The red bar denotes the median value for each distribution. Note that
\textit{medical charges}, \textit{employee salaries} and
\textit{traffic violations} present bimodal distributions.}
\label{fig:histogram_distances}
\end{figure*}%
\autoref{fig:histogram_distances} also reveals that
three of
the seven datasets (\textit{medical charges}, \textit{employee salaries} and
\textit{traffic violations}) display a
bimodal distribution in similarities.
On these datasets, similarity encoding brings the largest
gains over one-hot encoding (\autoref{fig:datasets_distances}). In these
situations, similarity encoding is particularly useful as it gives a vector
representation in which a non-negligible number of category pairs are close to each other.
Performance comparisons with different classifiers (linear models and
tree-based models in \autoref{fig:datasets_3gram_classifiers}) suggest
that 3-gram similarity reduces the gap between models by giving a better
vector representation of the categories.
Note that in these experiments linear models slightly outperformed
tree-based models; however, we did not tune the hyperparameters of the
tree learners.
While one-hot encoding can be expressed as a sparse matrix,
a drawback of similarity encoding is
that it creates a dense feature matrix, leading to increased memory
and computational costs.
Dimensionality reduction of the resulting matrix maintains most of
the benefits of similarity encoding
(\autoref{fig:datasets_dimension-reduction})
even with a strong reduction ($d$$=$$100$)\footnote{With
Gradient Boosting, similarity encoding reduced to $d$$=$$30$ still
outperforms one-hot encoding. Indeed, tree models are good at capturing
non-linear decisions in low dimensions.}.
It greatly reduces the computational cost: fitting the models on our benchmark
datasets takes on the order of seconds or minutes on commodity hardware
(see \autoref{tab:prediction_times} in the appendix). Note that on some
datasets, a random projection of one-hot encoded vectors improves
prediction for gradient boosting. We interpret this as a regularization
that captures some semantic links across the categories, as with LSA.
When more than one categorical variable is present, a related approach
would be to use Correspondence Analysis \cite{shyu2005handling}, which
also seeks a low-rank representation as it can be
interpreted as a weighted form of PCA for categorical data. Here we focus
on methods that encode a single categorical variable.
The dimension
reduction approaches that we have studied can be applied in an online
learning setting: they either select a small number of prototype categories,
or perform a random projection.
Hence, the approach can be applied on datasets that do not fit in
memory.
Classic encoding methods are hard to apply
in incremental machine-learning settings. Indeed, new samples with new
categories require recomputation of the encoding representation, and hence
retraining the model from scratch.
This is not the case of similarity encoding because new categories are
naturally encoded without creating collisions.
We have shown the power of a straightforward strategy
based on selecting 100 prototypes on subsampled data,
for instance with k-means clustering.
Most importantly, no data cleaning on categorical variables is required to apply
our methodology. Scraped data for commercial or marketing applications are good
candidates to benefit from this approach.
\section{Conclusion}
Similarity encoding, a generalization of the one-hot encoding method,
allows a better representation of categorical variables, especially in the
presence of dirty or high-cardinality categorical data.
Empirical results on seven real-world datasets show that 3-gram similarity
is a good choice to capture morphological resemblance between categories and
to encode new categories that do not appear in the testing set.
It improves prediction of the associated supervised learning task without
any prior data-cleaning step. Similarity encoding also outperforms
representing categories via ``bags of n-grams'' of the associated strings.
Its benefits carry over even with strong
dimensionality reduction based on cheap operations such as random
projections. This methodology can be used in online-learning settings,
and hence can lead to tractable analysis on
very large datasets without data cleaning.
This paper only scratches the surface of statistical
learning on non-curated tables, a topic that has not been studied much.
We hope that the benchmark datasets will foster more work on this
subject.
\begin{acknowledgements}
We would like to acknowledge the excellent feedback from the reviewers.
\end{acknowledgements}
\bibliographystyle{spmpsci}
\section*{Appendices}
\subsection{GCLBList Pseudo Code}
\begin{breakablealgorithm}
\caption{The find method}\label{GCLBList_find}
\begin{algorithmic}[1]
\Function{Window find}{$Node\ head,\ int\ key$}\Comment{Traverse from head and find node with key-value 'key'}
\If{$head.infoNext.getReference()\ ==\ tail$}\Comment{\parbox[t]{.4\linewidth}{head \& tail are the only nodes in the list}}
\State \Return {$Window(head,\ tail,\ head.infoNext.getStamp(),\ tail.infoNext.getStamp())$}
\EndIf
\While{$true$}\label{retry}
\State $pred\gets head$ \Comment{Start from the head}
\State $curr \gets pred.infoNext.get(predSt)$ \Comment{Read pred’s infoNext’s reference \& stamp atomically}
\While{$true$}
\State $breakTest \gets key\le curr.key$ \Comment{\parbox[t]{.5\linewidth}{Break when key-value greater than or equal to required key is found}}
\State $succ \gets curr.infoNext.get(currSt)$
\Comment{\parbox[t]{.45\linewidth}{Read curr’s infoNext’s reference \& stamp atomically. succ may be null if curr has been deleted}}
\State $nPredSt \gets pred.infoNext.getStamp()$
\Statex \Comment{\parbox[t]{.45\linewidth}{Read pred’s stamp again before advancing forward. This is the safety check to ensure we are traversing the list correctly, in increasing order of keys}}
\If{$predSt \neq nPredSt$}
\State{\textbf{go to 5}}
\Comment{\parbox[t]{.6\linewidth}{If pred’s new stamp is different from the one read previously, a synchronization conflict is detected. curr may have been deleted by another thread from the list. The thread restarts its traversal to ensure correctness. If pred’s stamp is still the same, then everything is fine.}}
\EndIf
\If{$breakTest$}
\State{\textbf{go to 22}}
\Comment{\parbox[t]{.67\linewidth}{If pred’s stamp has not changed, everything is fine. Check if required pair of nodes has been found. If yes, break. Else, continue.}}
\EndIf
\State $pred \gets curr$ \Comment{Keep advancing pred and curr in the list}
\State $curr \gets succ$
\State $predSt \gets currSt$ \Comment{\parbox[t]{.65\linewidth}{Keep track of new pred’s old stamp to be used later, to detect synchronization conflicts}}
\EndWhile
\State \Return {$Window(pred,\ curr,\ predSt,\ currSt)$} \label{return}\Comment{\parbox[t]{.4\linewidth}{Return pred and curr, along with their stamps, encapsulated in a window object.}}
\EndWhile
\EndFunction
\end{algorithmic}
\end{breakablealgorithm}
\begin{algorithm}
\caption{The validate method}\label{GCLBList_validate}
\begin{algorithmic}[1]
\Function{bool validate}{$Node\ pred,\ int\ predSt,\ Node\ curr,\ int\ currSt$}
\Statex \Comment{\parbox[t]{.56\linewidth}{Checks consistency of locked nodes 'pred' \& 'curr', using their stamps, predSt \& currSt}}
\State $nCurr \gets pred.infoNext.get(nPredSt)$ \Comment{Re-read pred’s infoNext’s reference and stamp atomically}
\State $nCurrSt \gets curr.infoNext.getStamp()$ \Comment{Re-read curr’s infoNext’s stamp atomically}
\State \Return {$predSt\ ==\ nPredSt\ \&\&\ currSt\ ==\ nCurrSt\ \&\&\ curr\ ==\ nCurr$}
\Statex \Comment{\parbox[t]{.83\linewidth}{Checks if pred is still pointing to curr. And if any of their stamps have changed from their old values. If yes, a conflict is detected. Returns true or false to calling method.}}
\EndFunction
\end{algorithmic}
\end{algorithm}
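The pseudocode manipulates \texttt{infoNext} as an atomic (reference, stamp) pair, in the style of Java's \texttt{AtomicStampedReference}. A hedged Python sketch of that primitive (lock-based, for illustration only; the actual lists rely on hardware compare-and-swap):

```python
import threading

class StampedRef:
    """Illustrative stand-in for infoNext: an atomic (reference, stamp) pair."""

    def __init__(self, ref, stamp=0):
        self._lock = threading.Lock()
        self._ref, self._stamp = ref, stamp

    def get(self):
        # Read reference and stamp together, atomically
        with self._lock:
            return self._ref, self._stamp

    def get_stamp(self):
        return self.get()[1]

    def set(self, ref, stamp):
        with self._lock:
            self._ref, self._stamp = ref, stamp

    def compare_and_set(self, expect_ref, new_ref, expect_stamp, new_stamp):
        # Succeed only if BOTH the reference and the stamp match expectations;
        # a changed stamp signals the synchronization conflicts checked above
        with self._lock:
            if self._ref is expect_ref and self._stamp == expect_stamp:
                self._ref, self._stamp = new_ref, new_stamp
                return True
            return False
```

The stamp check is what lets \texttt{find} and \texttt{validate} detect that a neighbor was modified between two reads, even if the reference itself was restored (the ABA problem).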
\begin{algorithm}[H]
\caption{The remove method}\label{GCLBList_remove}
\begin{algorithmic}[1]
\Function{bool remove}{$Node\ head,\ int\ key$}\Comment{\parbox[t]{.5\linewidth}{Remove a node with key-value 'key' from the list}}
\While{$true$} \label{retry}
\State $window \gets find(head, key)$
\State $pred \gets window.pred, curr \gets window.curr$
\State $predSt \gets window.predSt, currSt \gets window.currSt$
\Statex \Comment{Retrieve pred and curr, and their stamps, from the window object}
\State{$pred.lock()$}
\If{$!curr.tryLock()$}
\State $pred.unlock()$
\State{\textbf{go to 6}}
\Comment{\parbox[t]{.7\linewidth}{Lock both the nodes. tryLock() is to prevent deadlocks, since there is no guarantee, that keys are being locked in increasing order}}
\EndIf
\If{$validate(pred,\ predSt,\ curr,\ currSt)$} \Comment{\parbox[t]{.45\linewidth}{Use validate to ensure the consistency of pred and curr}}
\If{$curr.key \neq key$}
\State $curr.unlock()$
\State $pred.unlock()$
\State \Return $false$
\Comment{If key is not present, unlock both nodes. And return false}
\Else
\State $stamp \gets pred.infoNext.getStamp()$
\State $temp \gets curr.infoNext.getReference()$
\State $pred.infoNext.set(temp,++stamp)$
\Comment{\parbox[t]{.4\linewidth}{Deletion Step 1: atomically swing pred’s infoNext’s reference to curr’s infoNext’s reference and increment pred’s infoNext’s stamp by 1}}
\State $temp \gets curr.infoNext.get(stamp)$
\State $curr.infoNext.set(temp,++stamp)$
\Comment{\parbox[t]{.4\linewidth}{Deletion Step 2: atomically increment curr’s infoNext’s stamp by 1}}
\State $Pool.set(curr)$
\Comment{\parbox[t]{.6\linewidth}{Add deleted node 'curr' to the Pool. curr can be reused for later add operations}}
\State $curr.unlock()$
\State $pred.unlock()$
\State \Return $true$
\Comment{\parbox[t]{.6\linewidth}{Unlock pred and curr. Return true}}
\EndIf
\EndIf
\State $curr.unlock()$
\State $pred.unlock()$
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{The add method}\label{GCLBList_add}
\begin{algorithmic}[1]
\Function{bool add}{$Node\ head,\ int\ key$}\Comment{Add a node with key-value 'key' from the list}
\While{$true$} \label{retry}
\State $window \gets find(head, key)$
\State $pred \gets window.pred, curr \gets window.curr$
\State $predSt \gets window.predSt, currSt \gets window.currSt$
\Statex \Comment{Retrieve pred and curr, and their stamps, from the window object}
\State{$pred.lock()$}
\If{$!curr.tryLock()$}
\State {$pred.unlock()$}
\State{\textbf{go to 6}}
\Comment{\parbox[t]{.7\linewidth}{Lock both the nodes. tryLock() is to prevent deadlocks, since there is no guarantee, that keys are being locked in increasing order}}
\EndIf
\If{$validate(pred, predSt, curr, currSt)$} \Comment{\parbox[t]{.45\linewidth}{Use validate to ensure the consistency of pred and curr}}
\If{$curr.key == key$}
\State $curr.unlock()$
\State $pred.unlock()$
\State \Return $false$
\Comment{If key is already present, unlock both nodes. And return false}
\Else
\State $node \gets Pool.get()$
\Comment{Query the Pool for a node}
\If{$node \neq nullptr$}
\State $node.key \gets key$
\Comment{node has been retrieved from pool. Reuse for new add operation}
\Else
\State $node \gets new Node(key)$
\Comment{Pool is empty. Create new node.}
\EndIf
\State $stamp \gets node.infoNext.getStamp()$
\State $node.infoNext.set(curr, stamp)$
\Comment{\parbox[t]{.4\linewidth}{Set new node’s reference to curr. No need to change new node’s stamp}}
\State $stamp \gets pred.infoNext.getStamp()$
\State $pred.infoNext.set(node, stamp)$
\Comment{\parbox[t]{.4\linewidth}{Atomically set pred’s infoNext’s reference to new node. No need to change pred’s stamp}}
\State $curr.unlock()$
\State $pred.unlock()$
\State \Return $true$
\Comment{\parbox[t]{.4\linewidth}{Unlock pred and curr. Return true}}
\EndIf
\EndIf
\State $curr.unlock()$
\State $pred.unlock()$
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{The contains method}\label{GCLBList_contains}
\begin{algorithmic}[1]
\Function{bool contains}{$Node\ head,\ int\ key$}\Comment{Traverse from head and find node with key-value 'key'}
\State $breakTest \gets false$
\While{true}\label{retry}
\State $pred\gets head$ \Comment{Start from the head}
\State $curr \gets pred.infoNext.get(predSt)$
\Comment{Read pred’s infoNext’s reference \& stamp atomically}
\State $currKey \gets curr.key$
\Comment{Read curr's key-value}
\While{$true$}
\State $breakTest \gets key\le currKey$
\Comment{\parbox[t]{.45\linewidth}{Break when key-value greater than or equal to required key is found}}
\State $succ \gets curr.infoNext.get(currSt)$
\Comment{\parbox[t]{.45\linewidth}{Read curr’s infoNext’s reference \& stamp atomically. succ may be null if curr has been deleted}}
\State $nPredSt \gets pred.infoNext.getStamp()$
\Statex \Comment{\parbox[t]{.45\linewidth}{Read pred’s stamp again before advancing forward. This is the safety check to ensure we are traversing the list correctly, in increasing order of keys}}
\If{$predSt \neq nPredSt$}
\State{\textbf{go to 3}}
\Comment{\parbox[t]{.6\linewidth}{If pred’s new stamp is different from the one read previously, a synchronization conflict is detected. curr may have been deleted by another thread from the list. The thread restarts its traversal to ensure correctness. If pred’s stamp is still the same, then everything is fine.}}
\EndIf
\If{$breakTest$}
\State{\textbf{go to 22}}
\Comment{\parbox[t]{.6\linewidth}{If pred’s stamp has not changed, everything is fine. Check if required pair of nodes has been found. If yes, break. Else, continue.}}
\EndIf
\State $pred \gets curr$
\Comment{Keep advancing pred and curr in the list}
\State $curr \gets succ$
\State $predSt \gets currSt$
\Comment{\parbox[t]{.6\linewidth}{Keep track of new pred’s old stamp to be used later, to detect synchronization conflicts}}
\State $currKey \gets curr.key$
\Comment{Read curr's key-value}
\EndWhile
\State \Return {$currKey == key$} \label{return}
\Comment{\parbox[t]{.6\linewidth}{Return true if 'key' has been found. Else, return false.}}
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
\clearpage
\subsection{GCLFList Pseudo Code}
\begin{breakablealgorithm}
\caption{The find method}\label{GCLFList_find}
\begin{algorithmic}[1]
\Function{Window find}{$Node\ head,\ int\ key,\ Node\ prevCurr$}
\Statex \Comment{\parbox[t]{.4\linewidth}{Traverse from head and find node with key-value 'key'}}
\State $breakTest \gets false, snip \gets false$
\While{$true$}\label{retry}
\State $pred \gets head$
\Comment{Start from the head}
\State $curr \gets pred.infoNext.get(predSt)$
\Comment{Read curr’s infoNext’s reference \& stamp atomically}
\While{true}
\State $currKey \gets curr.key$
\Comment{Read curr’s key value}
\State $succ \gets curr.infoNext.get(currSt)$
\Comment{\parbox[t]{.45\linewidth}{Atomically read curr’s infoNext’s reference and stamp. Successor may be null if curr has been deleted}}
\If{$currSt\mod 2 ==1$}
\State $snip \gets pred.infoNext.compareAndSet(curr, succ, predSt, predSt+2)$
\Statex \Comment{\parbox[t]{.67\linewidth}{This is the "\textbf{helping}" step. If curr is marked(stamp is odd), attempt to physically remove from the list. Done by calling an atomic CAS operation on pred, to atomically set pred’s infoNext’s reference to successor and increment stamp by 2}}
\If{$!snip$}
\State{\textbf{go to 3}}
\Comment{If the CAS operation fails, restart the traversal}
\EndIf
\State $Pool.set(curr)$
\Comment{Else, add curr to the Pool.}
\State $predSt+=2$
\Comment{And keep track of updated pred’s stamp}
\EndIf
\State $breakTest \gets key \le currKey$
\Comment{\parbox[t]{.45\linewidth}{Break when key greater than or equal to required key is found}}
\State $nPredSt \gets pred.infoNext.getStamp()$
\Statex \Comment{\parbox[t]{.45\linewidth}{Read pred’s stamp again before advancing forward. This is the safety check to ensure we are traversing the list correctly, in increasing order of keys}}
\If{$predSt \neq nPredSt$}
\State\textbf{go to 3}
\Comment{\parbox[t]{.6\linewidth}{If pred’s new stamp is different from the one read previously, a synchronization conflict is detected. curr may have been deleted from the list by another thread. The thread restarts its traversal to ensure correctness. If pred’s stamp is still the same, then everything is fine}}
\EndIf
\If{$breakTest$}
\State{\textbf{go to 34}}
\Comment{\parbox[t]{.68\linewidth}{If pred’s stamp has not changed, everything is fine. Check if required pair of nodes has been found. If yes, break. Else, continue}}
\EndIf
\If{$!snip$}
\State{$pred \gets curr$}
\Comment{\parbox[t]{.63\linewidth}{If no helping was done i.e. no marked node was found,}}
\State{$curr \gets succ$}
\Comment{\parbox[t]{.63\linewidth}{Keep advancing pred and curr in the list}}
\State{$predSt \gets currSt$}
\Comment{\parbox[t]{.63\linewidth}{Keep track of new pred’s old stamp to be used later, to detect synchronization conflicts}}
\Else
\State{$curr \gets succ$}
\Comment{\parbox[t]{.63\linewidth}{If helping was done to remove an encountered marked node, it implies pred is still the same. Advance only curr}}
\State{$snip \gets false$}
\EndIf
\EndWhile
\State \Return {$Window(pred,\ curr,\ predSt,\ currSt)$} \label{return}
\Statex \Comment{\parbox[t]{.83\linewidth}{Return pred and curr, along with their stamps, encapsulated in a Window object}}
\EndWhile
\EndFunction
\end{algorithmic}
\end{breakablealgorithm}
\begin{algorithm}
\caption{The remove method}\label{GCLFList_remove}
\begin{algorithmic}[1]
\Function{bool remove}{$Node\ head,\ int\ key$}\Comment{\parbox[t]{.45\linewidth}{Remove a node with key-value 'key' from the list}}
\While{$true$} \label{rem_retry}
\State $window \gets find(head, key, nullptr)$
\State $pred \gets window.pred, curr \gets window.curr$
\State $predSt \gets window.predSt, currSt \gets window.currSt$
\Statex \Comment{Retrieve pred and curr, and their stamps, from the window object}
\If{$curr.key \neq key$}
\State \Return {$false$}
\Comment{If key is not present, return false}
\Else
\State{$succ \gets curr.infoNext.getReference()$}
\Comment{Read curr’s infoNext’s reference}
\State{$snip \gets curr.infoNext.compareAndSet(succ, succ, currSt, currSt+1)$}
\Statex \Comment{\parbox[t]{.52\linewidth}{Deletion Step 1: Atomically increment curr’s infoNext’s stamp by 1 using CAS i.e. Logical Deletion}}
\If{$!snip$}
\State{\textbf{go to 2}}
\Comment{\parbox[t]{.52\linewidth}{If CAS fails, restart the operation}}
\EndIf
\If{$pred.infoNext.compareAndSet(curr, succ, predSt, predSt+2)$}
\Statex \Comment{\parbox[t]{.52\linewidth}{Deletion Step 2: Atomically swing pred’s infoNext’s reference to successor. And increment pred’s infoNext’s stamp by 2 i.e. Physical Deletion}}
\State{$Pool.set(curr)$}
\Comment{\parbox[t]{.52\linewidth}{If physical deletion is successful, add curr to the Pool}}
\Else
\State{$find(head, key, nullptr)$}
\Comment{\parbox[t]{.52\linewidth}{This step is optional. If physical deletion is unsuccessful, retraverse the list to remove it. Or depend on some other thread to "help out"}}
\EndIf
\State \Return {$true$}
\Comment{\parbox[t]{.7\linewidth}{Return true on successful deletion. \textbf{Note:} Will return true even if only Logical deletion is successful}}
\EndIf
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{The add method}\label{GCLFList_add}
\begin{algorithmic}[1]
\Function{bool add}{$Node\ head,\ int\ key$}\Comment{Add a node with key-value 'key' to the list}
\State{$fromPool \gets false$}
\State {$node \gets Pool.get()$}
\Comment{Query the Pool for a node}
\If{$node == nullptr$}
\State {$node \gets new Node(key)$}
\Comment{If Pool is empty, create a new node}
\State{$fromPool \gets false$}
\Else
\State {$node.key \gets key$}
\Comment{\parbox[t]{.5\linewidth}{Else, node successfully retrieved from the Pool.}}
\State {$nodeSt \gets node.infoNext.getStamp()$}
\State {$node.infoNext.set(nullptr, nodeSt+1)$}
\Comment{\parbox[t]{.5\linewidth}{Increment new node’s stamp by 1, to make the stamp even}}
\State {$fromPool \gets true$}
\EndIf
\While{$true$} \label{add_retry}
\State $window \gets find(head, key, nullptr)$
\State $pred \gets window.pred, curr \gets window.curr$
\State $predSt \gets window.predSt, currSt \gets window.currSt$
\Statex \Comment{Retrieve pred and curr, and their stamps, from the window object}
\If{$curr.key \neq key$}
\State {$nodeSt \gets node.infoNext.getStamp()$}
\State {$node.infoNext.set(curr, nodeSt)$}
\Comment{\parbox[t]{.34\linewidth}{If 'key' is not already present in the list, set new node’s infoNext’s reference to curr}}
\If{$pred.infoNext.compareAndSet(curr, node, predSt, predSt)$}
\Statex \Comment{\parbox[t]{.65\linewidth}{Attempt to atomically CAS pred’s infoNext’s reference to new node. If CAS succeeds, return true}}
\State \Return {$true$}
\Else
\State{\textbf{go to 13}}
\Comment{\parbox[t]{.65\linewidth}{Else, restart the operation. \textbf{Note:} Next iteration, some other thread may have added the new key instead. If so, then this thread will return false}}
\EndIf
\Else
\Comment{Key is already present in the list}
\If{$!fromPool$}
\State{$delete\ node$}
\Comment{\parbox[t]{.65\linewidth}{node was newly created by this thread. It can be safely freed, since no other thread has a reference to this node}}
\Else
\State {$nodeSt \gets node.infoNext.getStamp()$}
\State{$node.infoNext.set(nullptr, nodeSt-1)$}
\State{$Pool.set(node)$}
\Comment{\parbox[t]{.63\linewidth}{node was retrieved from the Pool. Decrement node’s stamp to make it odd again and add the node back to the Pool}}
\EndIf
\State \Return {$false$}
\Comment{Return false since 'key' already present}
\EndIf
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{The contains method}\label{GCLFList_contains}
\begin{algorithmic}[1]
\Function{bool contains}{$Node\ head,\ int\ key$}\Comment{Traverse from head and find node with key-value 'key'}
\State $breakTest \gets false$
\While{true}\label{con_retry}
\State $pred\gets head$ \Comment{Start from the head}
\While{$true$}
\State {$curr \gets pred.infoNext.get(predSt)$}
\Comment{\parbox[t]{.45\linewidth}{Read curr’s infoNext’s reference \& stamp atomically}}
\State {$currKey \gets curr.key$}
\Comment{Read curr’s key value}
\State {$succ \gets curr.infoNext.get(currSt)$}
\Comment{\parbox[t]{.45\linewidth}{Atomically read curr’s infoNext’s reference and stamp. Successor may be null if curr has been deleted}}
\State {$breakTest \gets key \le currKey$}
\Comment{\parbox[t]{.45\linewidth}{Break when key greater than or equal to required key is found}}
\State {$nPredSt \gets pred.infoNext.getStamp()$}
\Comment{\parbox[t]{.45\linewidth}{Read pred’s stamp again before advancing forward. This is the safety check to ensure we are traversing the list correctly, in increasing order of keys}}
\If{$predSt \neq nPredSt$}
\State{\textbf{go to 3}}
\Comment{\parbox[t]{.6\linewidth}{If pred’s new stamp is different from the one read previously, a synchronization conflict is detected. curr may have been deleted from the list by another thread. The thread restarts its traversal to ensure correctness. If pred’s stamp is still the same, then everything is fine}}
\EndIf
\If{$breakTest$}
\State\textbf{go to 20}
\Comment{\parbox[t]{.65\linewidth}{If pred’s stamp has not changed, everything is fine. Check if required node has been found. If yes, break. Else, continue}}
\EndIf
\State {$pred \gets curr$}
\Comment{Keep advancing pred in the list}
\State {$predSt \gets currSt$}
\Comment{\parbox[t]{.65\linewidth}{Keep track of new pred’s old stamp to be used later, to detect synchronization conflicts}}
\EndWhile
\State {$marked \gets currSt\mod 2 ==1$} \label{return}
\Comment{Check if curr is marked i.e. odd stamp}
\State \Return {$currKey == key \ \&\&\ !marked$}
\Comment{\parbox[t]{.45\linewidth}{Return true if and only if key is found and node is unmarked. Else, return false}}
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
\clearpage
\subsection{The Pool}
\subsubsection{Lock-Based Queue} The C++ code for the lock-based queue is given below.
\begin{lstlisting}[language=C++]
class LBQueue
{
public:
    class QNode;
    mutex enqLock, deqLock;
    QNode *head, *tail;
    class QNode
    {
    public:
        Node *node;
        AtomicStampedReference<QNode> next;
        QNode()
        {
            node = nullptr;
            next.set(nullptr, 0);
        }
        QNode(Node *node)
        {
            this->node = node;
            next.set(nullptr, 0);
        }
    };
    LBQueue() {
        head = new QNode();
        tail = new QNode();
        (tail->next).set(nullptr, 0);
        (head->next).set(tail, 0);
    }
    void set(Node *node) {
        enqLock.lock();
        if (node != nullptr) {
            QNode *qNode = new QNode(node);
            int stamp = (tail->next).getStamp();
            (tail->next).set(qNode, stamp);
            tail = qNode;
        }
        enqLock.unlock();
    }
    Node* get() {
        Node* result = nullptr;
        deqLock.lock();
        if ((head->next).getReference() != tail) {
            result = head->node;
            head = (head->next).getReference();
        }
        deqLock.unlock();
        return result;
    }
};
\end{lstlisting}
\subsubsection{Lock-Free Queue} The C++ code for the lock-free queue is given below.
\begin{lstlisting}[language=C++]
class LFQueue {
public:
    class QNode;
    AtomicStampedReference<QNode> *head;
    AtomicStampedReference<QNode> *tail;
    class QNode {
    public:
        Node *node;
        AtomicStampedReference<QNode> next;
        QNode() {
            node = nullptr;
            next.set(nullptr, 0);
        }
        QNode(Node *node) {
            this->node = node;
            next.set(nullptr, 0);
        }
    };
    LFQueue() {
        QNode *sentinel = new QNode();
        head = new AtomicStampedReference<QNode>(sentinel, 0);
        tail = new AtomicStampedReference<QNode>(sentinel, 0);
    }
    void set(Node *node) {
        int lastStamp, nextStamp, stamp;
        if (node == nullptr)
            return;
        QNode *x = new QNode(node);
        while (true) {
            QNode *last = tail->get(&lastStamp);
            QNode *next = (last->next).get(&nextStamp);
            if (last == tail->get(&stamp) && stamp == lastStamp) {
                if (next == nullptr) {
                    if ((last->next).compareAndSet(next, x, nextStamp, nextStamp+1)) {
                        tail->compareAndSet(last, x, lastStamp, lastStamp+1);
                        return;
                    }
                } else {
                    tail->compareAndSet(last, next, lastStamp, lastStamp+1);
                }
            }
        }
    }
    Node* get() {
        int lastStamp, firstStamp, nextStamp, stamp;
        while (true) {
            QNode *first = head->get(&firstStamp);
            QNode *last = tail->get(&lastStamp);
            QNode *next = (first->next).get(&nextStamp);
            if (first == head->get(&stamp) && stamp == firstStamp) {
                if (first == last) {
                    if (next == nullptr) {
                        return nullptr;
                    }
                    tail->compareAndSet(last, next, lastStamp, lastStamp+1);
                } else {
                    Node *ret = first->node;
                    if (head->compareAndSet(first, next, firstStamp, firstStamp+1)) {
                        return ret;
                    }
                }
            }
        }
    }
};
\end{lstlisting}
\section{Conclusion and Future Work}
In this paper, we have presented \textbf{GCList}, a linked-list representation of a concurrent set with in-built garbage collection, in two versions: a lock-based \textbf{GCLBList} and a lock-free \textbf{GCLFList}.
\par
Our results show that \textbf{GCList} matches or outperforms most of the existing representations of a concurrent set, while consuming far less memory than higher-performing algorithms like LazyList. Its memory consumption is on par with generic garbage-collection facilities like Shared Pointers and Hazard Pointers, while outperforming them several times over.
\par
In future work, we plan to investigate whether GCList can be extended to other data structures that resemble a concurrent list or use one as a component, e.g.\ skip lists and hash tables.
\section{Evaluation}
We tested both versions of GCList against existing implementations of a concurrent set, namely LazyList\cite{lazy_list}, the Hand-over-Hand List\cite{hoh}, Harris's LockFreeList\cite{lfree}, a Shared Pointer\cite{SPtr} version of LazyList, and a LockFreeList using Hazard Pointers.
\par
We used both the lock-based queue and the lock-free queue, as a Pool, in combination with the two versions of GCList. The resultant set representation is named by using the list's name as prefix and pool's name as suffix. For example, the GCLBList using the lock-based queue would be named GCLBListLBQueue and the GCLFList using the lock-free queue would be named GCLFListLFQueue.\\
The LazyList based on Shared Pointers has been named LazyList\_SP. The LockFreeList using Hazard Pointers for Memory Reclamation has been named LockFreeList\_HP.
\par
We tested the above-mentioned algorithms against ours, for both performance and memory consumption, with varying workloads and randomized set operations.
\par
For performance, we fix the total number of operations that each thread can perform, divided in varying ratios among adds, removes and contains. We allowed each algorithm to run for 10 seconds and measured the number of operations completed during that period.\\
For memory consumption, we fix the same per-thread operation budget and ratios, and keep track of the number of times each thread allocates and deallocates memory. Whenever a thread allocates new memory, a thread-local variable is incremented, and whenever memory is released, the variable is decremented. After all thread operations finish, the main thread consolidates the sum of the thread-local variables. We then take the ratio of a list's node count to that of the Hand-over-Hand List, and use this ratio to compare the memory consumed by an algorithm over its entire execution.
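The per-thread accounting just described can be sketched as follows (all names here are ours, not the paper's): each thread updates only its own thread-local counter, and the totals are consolidated once, after the thread finishes its operations.

```cpp
#include <atomic>
#include <cassert>

// Per-thread allocation accounting (sketch). Each worker thread calls
// onAllocate()/onDeallocate() around its node allocations and frees, and
// consolidate() once before it exits; the main thread then reads the total.
thread_local long localNodeCount = 0;     // touched only by the owning thread
std::atomic<long> totalNodeCount{0};      // summed up at the end

void onAllocate()   { ++localNodeCount; }
void onDeallocate() { --localNodeCount; }
void consolidate()  { totalNodeCount += localNodeCount; }
```

Because each thread writes only its own thread-local counter during the run, the scheme adds no synchronization cost on the hot path; the single atomic addition per thread happens only at shutdown.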
\par
Based on the above criteria, we obtained the following graphs.
\input{graphs}
\subsection{Analysis of Results}
\subsubsection{Performance Analysis}
From the graphs, we can see that the performance of both versions of GCList, i.e.\ GCLBList and GCLFList, is on par with or better than Harris's LockFreeList. Both outperform the Hand-over-Hand List, the LazyList based on Shared Pointers, and the LockFreeList using Hazard Pointers for memory reclamation by a wide margin. The GCList versions are only outperformed by the original LazyList.
\subsubsection{Memory Consumption Analysis}
In terms of memory consumption, however, both versions of GCList consume far less memory than the original LazyList. They also need less memory than Harris's LockFreeList and the Hand-over-Hand List. In comparison with generic techniques like Shared Pointers and Hazard Pointers, the memory consumption of GCList remains comparable to both.
\par
The plots for LazyList and LockFreeList have not been shown in the graphs because they consume far more memory than the other lists; adding them reduces the other plots to straight lines similar to the Hand-over-Hand plot. This is because LazyList and LockFreeList can neither free deleted nodes nor reuse them, so new memory has to be allocated for every insert operation.
\section{Our Algorithm: GCList}
This paper introduces \textbf{GCList}, a list-based set algorithm with an in-built garbage collection scheme. The set is represented as a linked list of nodes, supporting the following operations:\\
- \textbf{add(key)}, adds key to the set, and returns true if and only if key was not already present in the set.\\
- \textbf{remove(key)}, removes key from the set, and returns true if and only if key was present in the set.\\
- \textbf{contains(key)}, searches for key in the set, and returns true if and only if key is present in the set.\\
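The sequential semantics these operations must respect can be sketched with std::set as a reference model (the helper functions are our stand-ins; the concurrent list must be linearizable to this behaviour):

```cpp
#include <cassert>
#include <set>

// Reference model (sketch): the return values of the three set operations,
// expressed against std::set. The concurrent list must behave as if each
// operation took effect atomically at its linearization point.
bool add(std::set<int>& s, int key)      { return s.insert(key).second; }
bool remove(std::set<int>& s, int key)   { return s.erase(key) == 1; }
bool contains(std::set<int>& s, int key) { return s.count(key) == 1; }
```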
We introduce two versions of GCList, a blocking version or \textbf{GCLBList} and a non-blocking version or \textbf{GCLFList}.
\par
The pseudo-code for both versions is given in the appendix.
\subsection{GCLBList}
Each node in the list consists of three fields: the key field, an \textbf{AtomicStampedReference}\cite{ASR} object called infoNext, and a lock associated with the node. We have implemented our own AtomicStampedReference\cite{ASR} in C++. The list is ordered according to the keys of the nodes. infoNext contains a reference to the next node in the list and an integer stamp associated with the node. Both the stamp and the reference can be read and updated atomically\cite{ASR}. The lock field is used for synchronization.
\begin{lstlisting}[language=C++,caption=GCLBList Node,captionpos=b]
class Node
{
    int key;
    AtomicStampedReference<Node> infoNext;
    mutex nodeLock;
};
\end{lstlisting}
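For reference, the interface of the AtomicStampedReference can be sketched as below. This stand-in serializes access with a small internal lock purely for clarity; the actual implementation referenced above would provide the same interface without blocking, e.g.\ by packing reference and stamp into a single machine word and using a hardware CAS.

```cpp
#include <cassert>
#include <mutex>

// Simplified stand-in for AtomicStampedReference: a reference and an
// integer stamp that are always read and updated together.
template <typename T>
class AtomicStampedReference {
    T* ref;
    int stamp;
    mutable std::mutex m;
public:
    AtomicStampedReference(T* r = nullptr, int s = 0) : ref(r), stamp(s) {}
    void set(T* r, int s) { std::lock_guard<std::mutex> g(m); ref = r; stamp = s; }
    // Returns the reference and writes the stamp read in the same step.
    T* get(int* stampOut) const { std::lock_guard<std::mutex> g(m); *stampOut = stamp; return ref; }
    T* getReference() const { std::lock_guard<std::mutex> g(m); return ref; }
    int getStamp() const { std::lock_guard<std::mutex> g(m); return stamp; }
    // Succeeds only if both the reference and the stamp match expectations.
    bool compareAndSet(T* expRef, T* newRef, int expSt, int newSt) {
        std::lock_guard<std::mutex> g(m);
        if (ref != expRef || stamp != expSt) return false;
        ref = newRef; stamp = newSt;
        return true;
    }
};
```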
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{Diagrams/GCList_Object.jpg}
\caption{GCList Node and its components}
\end{figure}
As mentioned earlier, we consider three operations on the list, i.e.\ add, remove and contains. However, we factor out functionality common to the add and remove methods by creating an inner Window class to help navigation. The common functionality is used to optimistically traverse the list and “find” the pair of nodes required for each operation. The find method then returns references to the nodes and their respective stamps in a Window object to the calling method.
\subsubsection{The find method}
The find method is used by the add and remove methods to optimistically traverse the list. The thread gets a reference to the “head” node and keeps traversing the list in an optimistic hand-over-hand fashion. At every step of the traversal, the infoNext's reference and stamp fields of a node are read atomically\cite{ASR}. The thread keeps traversing the list until it finds the relevant pair of nodes, pred and curr. curr holds a reference to the first node in the list whose key is greater than or equal to the key being searched for, with pred being curr's predecessor. The find method returns a Window object, containing references to pred and curr along with their respective stamps, to the calling method.
\par
An important observation to be made here is the use of stamps during traversal. Stamps are used by a traversing thread to detect synchronization conflicts, as can be inferred from the working of the remove method later. If at any time during a thread's traversal the stamp of the pred node changes, a synchronization conflict with another “removing” thread is detected. The current thread “retries” its traversal from the head node.
\subsubsection{The validate method}
The validate method is used to ensure that the calling method has locked the correct pair of nodes. It uses the stamps and references returned by the find method to ensure that both pred and curr are still present in the list and that pred still points to curr. If the stamp of either node has changed, or pred no longer points to curr, a synchronization conflict with another thread has occurred, and the current thread restarts its execution.
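The check itself can be sketched as follows, with simplified stand-ins for the paper's Node and AtomicStampedReference (here the reference and stamp are kept in two separate atomics purely for brevity; the real object reads both together):

```cpp
#include <atomic>
#include <cassert>

struct Node;
// Simplified stand-in: reference and stamp as separate atomics (sketch only).
struct StampedRef {
    std::atomic<Node*> ref{nullptr};
    std::atomic<int> stamp{0};
    Node* getReference() { return ref.load(); }
    int getStamp() { return stamp.load(); }
};
struct Node { int key; StampedRef infoNext; };

// Called with pred and curr locked: both stamps must be unchanged and pred
// must still point to curr, otherwise a conflicting remove has occurred.
bool validate(Node* pred, Node* curr, int predSt, int currSt) {
    return pred->infoNext.getStamp() == predSt
        && curr->infoNext.getStamp() == currSt
        && pred->infoNext.getReference() == curr;
}
```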
\subsubsection{The remove method}
The remove method is used to remove key from the set, returning true if and only if key was in the set. It calls the “find” method to determine the correct pair of nodes for the remove operation. The nodes are locked and then validated using the “validate” method. If validation fails, the nodes are unlocked and the thread retries; otherwise, it continues its operation.
\par
Deletion is performed in two steps:\\
- \textbf{Step 1:} pred's infoNext's reference is swung to curr's infoNext's reference and pred's infoNext's stamp is incremented by one. This operation to update pred's infoNext's reference and stamp fields is atomic.\\
- \textbf{Step 2:} curr's infoNext's stamp is incremented by 1. This marks the successful deletion of curr from the list.\\
Step 2 is the \textbf{\lp} for the remove method.
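With pred and curr locked and validated, the two steps can be sketched as follows. The packing of the (reference, stamp) pair into one 64-bit atomic word, with nodes addressed by 32-bit indices, is our simplification of the paper's AtomicStampedReference:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// (reference, stamp) packed into one 64-bit word (sketch; our assumption).
struct InfoNext {
    std::atomic<uint64_t> word{0};
    static uint64_t pack(uint32_t ref, uint32_t stamp) {
        return (uint64_t(ref) << 32) | stamp;
    }
    uint32_t ref()   { return uint32_t(word.load() >> 32); }
    uint32_t stamp() { return uint32_t(word.load()); }
    void set(uint32_t r, uint32_t s) { word.store(pack(r, s)); }
};
struct Node { int key; InfoNext infoNext; };

// pred and curr are locked and validated, so plain atomic stores suffice.
void removeLocked(Node& pred, Node& curr) {
    uint32_t succ = curr.infoNext.ref();
    // Step 1: swing pred past curr and bump pred's stamp by one.
    pred.infoNext.set(succ, pred.infoNext.stamp() + 1);
    // Step 2: bump curr's stamp by one -- the linearization point.
    curr.infoNext.set(succ, curr.infoNext.stamp() + 1);
}
```

The stamp increment in Step 1 is what makes a concurrent traverser's re-read of pred's stamp fail, forcing it to restart rather than follow the removed curr.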
\par
After curr has been successfully deleted, it is added to the “Pool”. A \textbf{Pool} is a concurrent data structure which is used to hold the deleted nodes. These deleted nodes can now be reused for later add operations.
\par
Now, an important point to discuss in this section is why a thread traversing the list, in the find or contains method, has to retry if pred's stamp changes. From the working of the “find” method, we can see that if a thread holds a reference to curr, it must also have read pred's old stamp, because reads from an AtomicStampedReference\cite{ASR} object are atomic. At this point, if curr were deleted from the list, pred's stamp would have been incremented in Step 1. Again, this update of pred's infoNext fields is atomic.
\par
If the current thread were to continue its traversal, it might instead traverse the Pool or some other part of the list, since we have no guarantees about curr's position after its deletion. Instead, before advancing pred and curr in the list, we check pred's stamp again. If it has changed, curr may have been deleted and the current thread is in a synchronization conflict with a removing thread; it then restarts its traversal from the list's head. If pred's stamp is unchanged, curr is still a part of the list and the thread can advance pred and curr.
\par
Conversely, if a thread has read pred's updated stamp at the first read, then it cannot hold a reference to curr. Again, this is because the update of pred's infoNext fields is atomic\cite{ASR}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.25]{Diagrams/GCLBList_Deletion.jpg}
\caption{GCLBList: Remove Steps}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.24]{Diagrams/Two_Threads_Deletion_1.jpg}
\caption{GCLBList: Two concurrent removing threads (Part 1)}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.24]{Diagrams/Two_Threads_Deletion_2.jpg}
\caption{GCLBList: Two concurrent removing threads (Part 2)}
\end{figure}
\subsubsection{The add method}
The add method is used to add a key to the list if and only if the key is not already present in the list. It calls the “find” method to determine the correct pair of nodes for the add operation. The nodes are locked and then validated using the “validate” method. If validation fails, the nodes are unlocked and the thread retries; otherwise, it continues its operation. The thread then queries the Pool (a data structure containing deleted nodes) for a node. If the Pool is not empty, a node is returned to be reused; else, the thread creates a new node. It then inserts the new node, unlocks pred and curr, and returns true.\\
The step in which pred's infoNext's reference is set to the new node is the \textbf{\lp} for the add method.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.23]{Diagrams/GCLBList_Add.jpg}
\caption{GCLBList: Add Steps}
\end{figure}
\subsubsection{The contains method}
The contains method is similar to the find method. It starts from the “head” node and keeps traversing the list in an optimistic hand-over-hand fashion. At every step of the traversal, the infoNext's reference and stamp fields of a node are read atomically\cite{ASR}. The thread keeps traversing the list until it finds the first node with a key greater than or equal to the key being searched for.
\par
Similar to the find method, stamps are used to detect synchronization conflicts during traversal. If at any time during a thread's traversal the stamp of the pred node changes, a synchronization conflict with another “removing” thread is detected. The current thread “retries” its traversal from the head node.
\par
The method returns true if and only if the key is present in the list. A successful contains is \textbf{linearized} when a matching key is found and the stamp of the predecessor hasn't changed from its previous value.
\subsection{GCLFList}
GCLFList is the non-blocking version of our list-based set algorithm.
\par
Each node in the list now consists of two fields: the key field and an AtomicStampedReference\cite{ASR} object called infoNext. The list is ordered according to the keys of the nodes. infoNext contains a reference to the next node in the list and an integer stamp associated with the node. Both the stamp and the reference can be read and updated atomically\cite{ASR}. There is no longer a lock field associated with the node.
\begin{lstlisting}[language=C++,float,floatplacement=H,caption=GCLFList Node,captionpos=b]
class Node
{
public:
    int key;
    AtomicStampedReference<Node> infoNext;
};
\end{lstlisting}
\par
We instead use atomic primitives like compareAndSet\cite{ASR}, or CAS, to perform our operations on the list. Atomic operations\cite{ASR} are used to atomically read and update the AtomicStampedReference object associated with each node. However, this also leads to complications. For example, if we follow the deletion steps of GCLBList, what happens in the case of two adjacent concurrent remove operations using CAS? One of the two nodes would not be removed from the list.
\par
To solve this problem, we need a way to identify a marked, i.e.\ logically deleted, node, even though it may still be physically present in the list. We differentiate between a logically deleted node and a node that is part of the list by using the parity of the stamp.\\
- A node with an \textbf{even stamp} is a part of the list.\\
- A node with an \textbf{odd stamp} denotes a node that has been deleted from the list.
\par
The deletion operation is also divided into two steps:\\
- \textbf{Logical Deletion:} Increment curr's stamp by 1 using CAS\cite{ASR}, i.e.\ marking curr. This step is the \textbf{\lp} of the remove method.\\
- \textbf{Physical Deletion:} Swing pred's infoNext's reference to curr's infoNext's reference and increment pred's infoNext's stamp by 2, atomically using CAS\cite{ASR}.
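The two CAS steps can be sketched as follows. As before, packing the (reference, stamp) pair into one 64-bit atomic word, with nodes addressed by 32-bit indices, is our simplification of the paper's AtomicStampedReference:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// (reference, stamp) packed into one 64-bit word so a single CAS covers
// both fields (sketch; the packing is our assumption).
struct InfoNext {
    std::atomic<uint64_t> word{0};
    static uint64_t pack(uint32_t ref, uint32_t stamp) {
        return (uint64_t(ref) << 32) | stamp;
    }
    uint32_t ref()   { return uint32_t(word.load() >> 32); }
    uint32_t stamp() { return uint32_t(word.load()); }
    bool compareAndSet(uint32_t er, uint32_t nr, uint32_t es, uint32_t ns) {
        uint64_t expected = pack(er, es);
        return word.compare_exchange_strong(expected, pack(nr, ns));
    }
};
struct Node { int key; InfoNext infoNext; };

// Step 1 -- logical deletion: keep curr's reference, make its stamp odd.
bool logicallyDelete(Node& curr, uint32_t succ, uint32_t currSt) {
    return curr.infoNext.compareAndSet(succ, succ, currSt, currSt + 1);
}
// Step 2 -- physical deletion: swing pred past curr; +2 keeps pred's stamp
// even (pred stays in the list) while invalidating concurrent traversals.
bool physicallyDelete(Node& pred, uint32_t curr, uint32_t succ, uint32_t predSt) {
    return pred.infoNext.compareAndSet(curr, succ, predSt, predSt + 2);
}
```

Because each CAS checks the stamp as well as the reference, a step fails as soon as any concurrent thread has touched the same node, which is exactly the conflict-detection behaviour the pseudocode relies on.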
\par
We also adopt the concept of \textbf{Helping}: if a traversing thread encounters a logically deleted, i.e.\ marked, node, it attempts to remove the node from the list before advancing.
\subsubsection{The find method}
The find method is used by the add and remove methods to optimistically traverse the list. The thread gets a reference to the “head” node and keeps traversing the list in an optimistic hand-over-hand fashion. At every step of the traversal, the next reference and stamp of a node are read atomically\cite{ASR}. The thread keeps traversing the list until it finds the relevant pair of nodes, pred and curr, and returns a Window object, containing references to pred and curr along with their respective stamps, to the calling method.
\par
As mentioned above, each time the thread encounters a marked node, i.e.\ a node with an odd stamp, it attempts to physically delete the node before advancing. If the CAS operation for the physical deletion succeeds, the thread advances forward; otherwise, it retries. Threads never traverse marked nodes, because doing so leads to consistency issues.
\par
For example, find could return a marked pred and an unmarked curr to an add method trying to insert a new node between pred and curr. If pred were then physically removed by another thread before the new node could be added, the new node would never become reachable from the list. This difficulty arises because the current thread does not hold locks on pred and curr.
\par
Similar to the previous find method, stamps are also used by a traversing thread to detect synchronization conflicts. If at any time during a thread's traversal the stamp of the pred node changes, a synchronization conflict with another “removing” thread is detected. The current thread “retries” its traversal from the head node.
\subsubsection{The remove method}
The remove method is used to remove key from the set, returning true if and only if key was in the set. It calls the “find” method to determine the correct pair of nodes for the remove operation.\\
Deletion of “curr” is performed in two steps, as mentioned earlier. The step for logical deletion of “curr” is the \lp for the remove method.\\
After curr has been successfully deleted, it is added to the “Pool”. These deleted nodes can now be reused for later add operations.
\par
Now, what happens if either of the two CAS operations fails?\\
\textbf{Case 1:} The CAS for logical deletion of curr fails. This implies that some other thread has performed a concurrent operation on curr and a synchronization conflict is detected. The current thread has to restart its operation.\\
\textbf{Case 2:} The CAS for physical deletion fails. This implies that some other thread has performed a concurrent operation on pred. The current thread has two choices: it can depend on other traversing threads to “help” physically delete curr, or it can traverse the list one more time to ensure curr's deletion.\\
Note that incrementing pred's stamp by 2 during physical deletion prevents the ABA problem.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.25]{Diagrams/GCLFList_Deletion.jpg}
\caption{GCLFList: Remove Steps}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.24]{Diagrams/GCLFList_Deletion_1.jpg}
\caption{GCLFList: Two concurrent removing threads (Part 1)}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.24]{Diagrams/GCLFList_Deletion_2.jpg}
\caption{GCLFList: Two concurrent removing threads (Part 2)}
\end{figure}
\subsubsection{The add method}
The add method is used to add a key to the list if and only if the key is not already present in the list. It calls the “find” method to determine the correct pair of nodes for the add operation. The thread then queries the Pool for a node. If the Pool is not empty, a node is returned to be reused; else, the thread creates a new node. If the node was obtained from the Pool, its stamp is incremented by 1, to make it even, before insertion. The thread then inserts the new node with a CAS on pred and returns true.
\par
An important observation to be made here: if the adding thread's CAS call on pred to insert the new node fails, it calls the find method again, resulting in a new pair of pred and curr. Meanwhile, another concurrent adding thread may have added the same key to the list. The current thread then cannot add the key anymore and has to return false. Before doing so, if the node was retrieved from the Pool, it is returned to the Pool; else, if it was a newly created node, it can be deleted, since we have a guarantee that no other thread has a reference to it.
\par
This scenario never occurred in GCLBList: once pred and curr were locked and validated and the key was absent from the list, the current thread was guaranteed to add the key successfully (provided it did not crash midway).\\
The CAS call to set pred's infoNext's reference to the new node is the \textbf{\lp} for this method.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.24]{Diagrams/GCLFList_Add.jpg}
\caption{GCLFList: Add Steps}
\end{figure}
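The steps above can be sketched as follows. This is a single-threaded illustration with invented names; the real find/retry machinery is elided, and the pool stand-in is not concurrent.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.atomic.AtomicStampedReference;

class AddSketch {
    static class Node {
        int key;
        final AtomicStampedReference<Node> infoNext =
                new AtomicStampedReference<>(null, 0);
        Node(int key) { this.key = key; }
    }

    // Non-concurrent stand-in for the Pool, just to show node reuse.
    static final Deque<Node> pool = new ArrayDeque<>();

    // pred and curr are assumed to come from a prior find(key) call, with
    // pred.infoNext currently pointing at curr.
    static boolean add(Node pred, Node curr, int key) {
        Node node;
        if (pool.isEmpty()) {
            node = new Node(key);                 // fresh node, even stamp 0
        } else {
            node = pool.pop();                    // reused node, odd stamp
            node.key = key;
            int s = node.infoNext.getStamp();
            node.infoNext.set(null, s + 1);       // odd -> even: live again
        }
        node.infoNext.set(curr, node.infoNext.getStamp());
        int predStamp = pred.infoNext.getStamp();
        // Linearization point: a single CAS splices the node after pred.
        if (pred.infoNext.compareAndSet(curr, node, predStamp, predStamp)) {
            return true;
        }
        // CAS failed: pred changed underneath us. The real algorithm re-runs
        // find(); here we just recycle the node (the paper deletes it instead
        // when it was newly created) and report failure.
        pool.push(node);
        return false;
    }
}
```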
\subsubsection{The contains method}
The contains method starts from the “head” node and keeps traversing the list in an optimistic hand-over-hand fashion. At every step of the traversal, the next reference and stamp of a node are read atomically\cite{ASR}. The thread keeps traversing the list until it finds the first node with a key greater than or equal to the key being searched for.
\par
Again, stamps are used to detect synchronization conflicts during traversal. If at any time during a thread's traversal the stamp of the pred node changes, a synchronization conflict with another “removing” thread is detected. The current thread retries its traversal from the head node.\\
The method returns true if and only if the key is present in the list and its infoNext stamp is even. A successful contains is \textbf{linearized} when a matching key is found and its stamp is even.
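A sketch of this traversal, assuming the even/odd stamp convention described earlier (names are illustrative, not the paper's code):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

class ContainsSketch {
    static class Node {
        final int key;
        final AtomicStampedReference<Node> infoNext;
        Node(int key, Node next) {
            this.key = key;
            this.infoNext = new AtomicStampedReference<>(next, 0);
        }
    }

    static boolean contains(Node head, int key) {
        retry:
        while (true) {
            Node pred = head;
            int[] stamp = new int[1];
            Node curr = pred.infoNext.get(stamp);   // atomic [ref, stamp] read
            int predStamp = stamp[0];
            while (curr != null && curr.key < key) {
                Node succ = curr.infoNext.get(stamp);
                // If pred's stamp changed, a remover interfered: restart.
                if (pred.infoNext.getStamp() != predStamp) continue retry;
                pred = curr;
                predStamp = stamp[0];
                curr = succ;
            }
            if (curr == null || curr.key != key) return false;
            // Present iff the matching node's stamp is even (not deleted).
            return curr.infoNext.getStamp() % 2 == 0;
        }
    }
}
```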
\section{The Pool}
The pool is a concurrent data structure that is used to hold the deleted nodes that have been reclaimed from the list. Ideally, any data structure that treats the node object as a "payload" can be used as the pool. In our experiments we used two different queue implementations to act as the pool. The code for both the queues has been kept in the appendix.
\subsection{The blocking unbounded total queue}
This lock-based concurrent queue\cite{HerlihyShavit:AMP:Book:2012} uses a separate lock for each queue operation: an enqLock to enqueue a deleted node to the queue and a deqLock to dequeue a node from the queue.
\par
Before a thread performs an enqueue or a dequeue operation, it acquires the corresponding lock on the queue. After acquiring the lock, the thread performs its operation and releases the lock upon completion. The locks ensure that, at any given time, at most one thread performs an enqueue and at most one thread performs a dequeue on the queue.
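A minimal sketch of such a two-lock queue, following the general shape of the Herlihy--Shavit design (unlike the book's version, this sketch returns null on an empty queue instead of throwing):

```java
import java.util.concurrent.locks.ReentrantLock;

class TwoLockQueue<T> {
    private static class QNode<T> {
        final T value;
        QNode<T> next;
        QNode(T value) { this.value = value; }
    }

    private final ReentrantLock enqLock = new ReentrantLock();
    private final ReentrantLock deqLock = new ReentrantLock();
    private QNode<T> head;   // sentinel; protected by deqLock
    private QNode<T> tail;   // protected by enqLock

    TwoLockQueue() {
        head = tail = new QNode<>(null);
    }

    void enq(T x) {
        enqLock.lock();
        try {
            QNode<T> node = new QNode<>(x);
            tail.next = node;
            tail = node;
        } finally {
            enqLock.unlock();
        }
    }

    // Total method: returns null instead of blocking when the queue is empty.
    T deq() {
        deqLock.lock();
        try {
            if (head.next == null) return null;
            T value = head.next.value;
            head = head.next;        // old sentinel is discarded
            return value;
        } finally {
            deqLock.unlock();
        }
    }
}
```

Because enqueuers touch only tail and dequeuers only head, one enqueue and one dequeue can proceed concurrently.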
\subsection{The unbounded Lock-free queue}
This lock-free concurrent queue\cite{HerlihyShavit:AMP:Book:2012} uses atomic compareAndSet or CAS calls instead of locks for the queue operations. The CAS calls are used to enqueue a node into the queue and also to dequeue a node from the queue.
\par
This lock-free implementation removes coarse-grained locks and thus prevents faster threads from being held up by slower ones. This queue implementation also uses the concept of helping, where faster threads help slower threads finish their queue operations.
\par
The enqueue operation is done in two steps:\\
- The thread locates the last node in the queue and uses a CAS call to append the new node to the queue.\\
- It then uses another CAS call to change the queue's tail from the previous last node to the current last node.
\par
Since the above two CAS calls are not a single atomic operation, threads help each other complete the second CAS if a half-finished enqueue operation is encountered.
\par
An important attribute of the queue is that it also uses AtomicStampedReference\cite{ASR} objects for its head and tail, to prevent the ABA problem\cite{queue_aba} from occurring in the queue.
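The enqueue path, including helping and the stamped tail, can be sketched roughly as follows. The names and exact stamp-update choices here are illustrative assumptions, not the cited code:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

class LockFreeQueueSketch<T> {
    static class QNode<T> {
        final T value;
        final AtomicStampedReference<QNode<T>> next =
                new AtomicStampedReference<>(null, 0);
        QNode(T value) { this.value = value; }
    }

    final AtomicStampedReference<QNode<T>> head;
    final AtomicStampedReference<QNode<T>> tail;

    LockFreeQueueSketch() {
        QNode<T> sentinel = new QNode<>(null);
        head = new AtomicStampedReference<>(sentinel, 0);
        tail = new AtomicStampedReference<>(sentinel, 0);
    }

    void enq(T x) {
        QNode<T> node = new QNode<>(x);
        while (true) {
            int[] tStamp = new int[1];
            QNode<T> last = tail.get(tStamp);
            int[] nStamp = new int[1];
            QNode<T> next = last.next.get(nStamp);
            if (last == tail.getReference()) {        // tail still valid?
                if (next == null) {
                    // Step 1: append the new node after the last node.
                    if (last.next.compareAndSet(null, node,
                                                nStamp[0], nStamp[0] + 1)) {
                        // Step 2: swing tail forward; a failed CAS here is
                        // fine, it means another thread already helped.
                        tail.compareAndSet(last, node, tStamp[0], tStamp[0] + 1);
                        return;
                    }
                } else {
                    // Help a half-finished enqueue by advancing tail first.
                    tail.compareAndSet(last, next, tStamp[0], tStamp[0] + 1);
                }
            }
        }
    }
}
```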
\section{Introduction}
List-based implementations of concurrent sets are fairly common. LazyList\cite{lazy_list}, Hand-over-Hand List\cite{hoh} and Harris's LockFreeList\cite{lfree} are some common examples. However, none of these implementations address the issue of garbage collection of nodes deleted from the list: either the algorithm ignores the issue, or it relies on the language's garbage collector to handle it.
\par
There are several reasons to implement our own memory management scheme. Languages such as C and C++ do not provide garbage collection, and it is often more efficient to do our own memory management. C++ has constructs like Shared Pointers\cite{SPtr} that offer a limited garbage collection facility, and other garbage collection techniques like Stop-the-World are also available. Although Shared Pointers, Hazard Pointers\cite{HP} and these other garbage collection schemes are very generic techniques, in that they can be applied to almost all concurrent data structures, they are expensive in terms of performance and of the extra data structures required to implement them.
\par
Integrating Shared Pointers, Hazard Pointers and these other garbage collection schemes into a concurrent data structure is also not a trivial task, and more often than not they are not well optimized for performance. They become even more complicated in the case of lock-free data structures employing lock-free methods; garbage collection, in these cases, is byzantine~\cite{queue_aba}.
\par
In this paper, we concentrate on the garbage collection scheme for a concurrent set. We introduce a new representation of a concurrent set, \textbf{GCList}, with in-built garbage collection. Nodes that are removed from the set are collected in a "Pool" of deleted nodes, to be reused for later add operations. We introduce both lock-based and lock-free versions of GCList. We use the terms node, key and value interchangeably in this paper.
\section{System Model \& Preliminaries}
\label{sec:System-Model-Preliminaries}
\vspace{-2mm}
In this paper, we assume that our system consists of a finite set of $p$ processors, accessed by a finite set of $n$ threads that run in a completely asynchronous manner and communicate using shared objects. The threads communicate with each other by invoking higher-level \mth{s} on the shared objects and getting corresponding responses. Consequently, we make no assumption about the relative speeds of the threads. We also assume that none of these processors or threads fail. \\
\noindent
\textbf{Safety:} To prove a concurrent data structure correct, \textit{\lbty}, proposed by Herlihy \& Wing \cite{HerlWing:1990:TPLS}, is the standard correctness criterion in the concurrent world. They consider a history generated by a data structure, which is a collection of \mth invocation and response events. Each invocation of a method call has a subsequent response. A history is \lble if it is possible to assign an atomic event as a \emph{\lp} inside the execution interval of each \mth such that the result of each of these \mth{s} is the same as it would be in a sequential history in which the \mth{s} are ordered by their \lp{s} \cite{HerlWing:1990:TPLS}. \\
\noindent
\textbf{Progress:} A \emph{progress} property specifies when a thread invoking \mth{s} on shared objects completes in the presence of other concurrent threads. The progress conditions used in this paper, based on the definitions of Herlihy \& Shavit, are: (1) Blocking: an unexpected delay by any thread (say, one holding a lock) can prevent other threads from making progress. (2) Deadlock-Free: a \textbf{blocking} condition which ensures that \textbf{some} thread (among the threads in the system) waiting to get a response to a \mth invocation will eventually receive it. (3) Wait-Free: a \textbf{non-blocking} condition which ensures that \textbf{every} thread trying to get a response to a \mth eventually receives it\cite{Herlihy:WFS:TPLS:1991}.
\section{Related Work}
We discuss some of the list-based set algorithms in this section and some existing garbage collection techniques that can be used in concurrent sets.
\subsection{Hand-Over-Hand List}
In this list-based representation of a set, also called \textbf{lock-coupling}\cite{hoh}, each thread traverses the list from the head of the list, while acquiring fine-grained locks in a hand-over-hand manner. Each thread acquires the lock for the next node and then releases the lock for the current node.\\
All operations require the usage of locks which may affect the overall performance of the list, even though garbage collection in this list is a fairly trivial task.
\subsection{LazyList}
An improvement over the Hand-over-Hand list is the LazyList\cite{lazy_list}. Threads traverse the list \textbf{optimistically}, without using any locks. Nodes are locked only when the required pair is found. An additional boolean field called the "marked" field is associated with every node; it is used to identify nodes that have been deleted but are still reachable from the head of the list.
\par
In LazyList, nodes are deleted in two steps:\\
- \textbf{Logical deletion:} The marked field is set to true.\\
- \textbf{Physical deletion:} The node's predecessor's next reference is swung to the node's successor.
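Assuming pred and curr are already locked and validated by the remove path, the two deletion steps amount to the following (an illustrative sketch, not the published LazyList code):

```java
import java.util.concurrent.locks.ReentrantLock;

class LazyListSketch {
    static class Node {
        final int key;
        volatile Node next;
        volatile boolean marked;                 // logical-deletion flag
        final ReentrantLock lock = new ReentrantLock();
        Node(int key, Node next) { this.key = key; this.next = next; }
    }

    // pred and curr are assumed already locked and validated by the caller
    // (pred.next == curr, neither marked), as in the LazyList remove path.
    static void removeLockedPair(Node pred, Node curr) {
        curr.marked = true;        // step 1: logical deletion (linearization)
        pred.next = curr.next;     // step 2: physical deletion (unlink)
    }
}
```

Traversing threads treat a node as absent as soon as its marked flag is set, even before the unlink happens.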
\par
The contains method is completely wait-free. It traverses the list without using any locks. It's easy to see that garbage collection, in this case, is not so trivial. It may lead to an issue known as the "\textbf{ABA Problem}"\cite{queue_aba}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.25]{ABA_Problem/ABA_1.jpg}
\caption{The ABA Problem in LazyList (Part 1)}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.25]{ABA_Problem/ABA_2.jpg}
\caption{The ABA Problem in LazyList (Part 2)}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.25]{ABA_Problem/ABA_3.jpg}
\caption{The ABA Problem in LazyList (Part 3)}
\end{figure}
\subsection{LockFreeList}
The LockFreeList\cite{HerlihyShavit:AMP:Book:2012} is an extension of the LazyList\cite{lazy_list}, where locks are eliminated altogether from the list operations and all the methods are non-blocking\cite{Herlihy:WFS:TPLS:1991}.
\par
The list uses an AtomicMarkableReference\cite{AMR} object as a part of its structure, which allows a thread to atomically read and update both the boolean mark and the next reference of a node. The list also uses compareAndSet or CAS calls for its operations.
\par
The remove method is similar to LazyList\cite{lazy_list}, in that deletion is done in two steps.\\
- A CAS call is used to set the marked field of a node.\\
- Another CAS call is used on the node's predecessor to physically delete the node from the list.
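These two CAS steps can be sketched with AtomicMarkableReference as follows. This is a simplified fragment; the full algorithm retries via find when the first CAS fails, and traversing threads perform the second CAS on the slow thread's behalf.

```java
import java.util.concurrent.atomic.AtomicMarkableReference;

class LockFreeRemoveSketch {
    static class Node {
        final int key;
        final AtomicMarkableReference<Node> next;
        Node(int key, Node next) {
            this.key = key;
            this.next = new AtomicMarkableReference<>(next, false);
        }
    }

    static boolean remove(Node pred, Node curr) {
        Node succ = curr.next.getReference();
        // CAS 1: logical deletion - set the mark on curr's next field.
        if (!curr.next.compareAndSet(succ, succ, false, true)) {
            return false;          // concurrent change: caller retries find()
        }
        // CAS 2: physical deletion - unlink curr. If this CAS fails, a
        // traversing thread will later "help" by snipping curr out itself.
        pred.next.compareAndSet(curr, succ, false, false);
        return true;
    }
}
```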
\par
An important difference between LockFreeList\cite{HerlihyShavit:AMP:Book:2012} and LazyList\cite{lazy_list} is that LockFreeList never traverses logically marked nodes. Instead, encountered marked nodes are physically deleted from the list. Essentially, threads "help" out other slower threads that have completed the first CAS call but not the second.
\par
It can also be seen that similar to LazyList\cite{lazy_list}, LockFreeList\cite{HerlihyShavit:AMP:Book:2012} is also vulnerable to the ABA problem\cite{queue_aba}.
\subsection{Reference Counting}
In a dynamic and concurrent data structure, arbitrary objects can continuously and concurrently be added to or removed from the data structure, and multiple owners may hold references to the shared objects. Unsafe freeing of a node may lead to safety issues and possible crashes\cite{RefCount}.\\
So, before freeing a shared object, it should be checked that there are no remaining references to it. This should also include possible local references to the shared object that any thread might have, as a read or write access to the memory of a reclaimed object might be fatal to the correctness of the data structure and/or to the whole system\cite{RefCount}.
\par
In the “Reference-Counting” category of garbage collection techniques, shared counters are assigned to objects and are used to count the number of references to any object at any given time\cite{RefCount}. In other words, a group of owners shares the ownership of an object. This group is responsible for deleting the object when the last one among them releases that ownership: the shared object can be freed if and only if the counter becomes zero\cite{RefCount2}.
\par
This method, however, is expensive. A shared atomic counter has to be associated with every object\cite{RefCount}\cite{RefCount2}. Getting a reference to an object and incrementing the shared counter has to be an atomic operation, and the same applies to losing the reference to the object and decrementing the shared counter. Even a simple read operation from the shared object has to increment the shared counter; essentially, the memory read becomes a read-modify-write operation\cite{ThrScan}.
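A minimal sketch of this counting discipline (illustrative only; real schemes must additionally guard the acquire itself against concurrent reclamation):

```java
import java.util.concurrent.atomic.AtomicInteger;

class RefCountedNode {
    private final AtomicInteger refs = new AtomicInteger(1); // creator's ref
    private volatile boolean reclaimed = false;

    // Taking a reference and bumping the counter must act as one atomic
    // step; incrementAndGet is the read-modify-write the text refers to.
    void acquire() {
        refs.incrementAndGet();
    }

    void release() {
        if (refs.decrementAndGet() == 0) {
            reclaimed = true;        // last owner: now safe to free the node
        }
    }

    boolean isReclaimed() { return reclaimed; }
}
```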
\subsection{Pointer-based techniques}
Pointer-based techniques such as Hazard Pointers\cite{HP} explicitly mark live objects (objects that threads can access) which are not de-allocated. Pointer-based schemes suffer from two limitations: they must be customized to the data structure at hand, which makes them difficult to deploy; they publish each pointer that is used in a shared memory location, which is expensive in terms of synchronization.
\par
Hazard Pointers (HP) and other pointer-based techniques will typically publish the pointer to each object they use, and then check that the pointer has not changed in the meantime. Such an approach guarantees that an object which has been deleted will not be later dereferenced, at the cost of each reader doing synchronization on a per-object basis.
\par
Because it requires validation of the pointer that will be accessed next, Hazard Pointers are lock-free for readers, although in some situations they can be made wait-free for readers. HP is wait-free bounded for reclamation, with the bound being proportional to the number of threads times the number of hazard pointers, because each reclaimer has to scan all the hazard pointers of all the other threads before deleting a node. In HP, the retired nodes are placed in a retired list which is scanned once its size reaches a threshold $R$. In terms of memory usage, when $R$ is set to the lowest setting of 1, each reclaimer can have at most a list of retired nodes whose size equals the number of threads minus 1, times the number of hazard pointers. If each thread has one such list of nodes pending to be deleted, at any given point in time there are at most $O(N_{threads}^2)$ nodes to be deleted.
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\parindent}{0ex plus 0.1ex minus 0.1ex}%
{0.5ex}{\normalfont\normalsize\itshape}}%
\renewcommand\paragraph{\@startsection{paragraph}{4}{2\parindent}{0ex plus 0.1ex minus 0.1ex}%
{0.3ex}{\normalfont\normalsize\itshape}}%
\renewcommand{\figurename}{Figure}
\makeatother
\newcommand{\ccomment}[1]{ }
\newcommand{\ensuremath {\mathit{FC}}{\xspace}}{\ensuremath {\mathit{FC}}{\xspace}}
\newcommand{\ensuremath {\mathit{CR}}{\xspace}}{\ensuremath {\mathit{CR}}{\xspace}}
\newcommand{\ensuremath {\mathit{DB}}{\xspace}}{\ensuremath {\mathit{DB}}{\xspace}}
\newcommand{\ensuremath {\mathit{SU}}{\xspace}}{\ensuremath {\mathit{SU}}{\xspace}}
\newcommand{\ensuremath {\mathit{PU}}{\xspace}}{\ensuremath {\mathit{PU}}{\xspace}}
\newcommand{\ensuremath {\mathit{FCC}}{\xspace}}{\ensuremath {\mathit{FCC}}{\xspace}}
\newcommand{\ensuremath {\mathit{CRN}}{\xspace}}{\ensuremath {\mathit{CRN}}{\xspace}}
\newcommand{\ensuremath {\mathit{DSA}}{\xspace}}{\ensuremath {\mathit{DSA}}{\xspace}}
\newcommand{\ensuremath {\mathit{BS}}{\xspace}}{\ensuremath {\mathit{BS}}{\xspace}}
\newcommand{\ensuremath {\mathit{AoA}}{\xspace}}{\ensuremath {\mathit{AoA}}{\xspace}}
\newcommand{\ensuremath {\mathit{ToA}}{\xspace}}{\ensuremath {\mathit{ToA}}{\xspace}}
\newcommand{\ensuremath {\mathit{TDoA}}{\xspace}}{\ensuremath {\mathit{TDoA}}{\xspace}}
\newcommand{\ensuremath {\mathit{DoA}}{\xspace}}{\ensuremath {\mathit{DoA}}{\xspace}}
\newcommand{\ensuremath {\mathit{RSS}}{\xspace}}{\ensuremath {\mathit{RSS}}{\xspace}}
\newcommand{\ensuremath {\mathit{REM}}{\xspace}}{\ensuremath {\mathit{REM}}{\xspace}}
\newcommand{\ensuremath {\mathit{t}}{\xspace}}{\ensuremath {\mathit{t}}{\xspace}}
\newcommand{\ensuremath {\mathit{SSE}}{\xspace}}{\ensuremath {\mathit{SSE}}{\xspace}}
\newcommand{\ensuremath {\mathit{SNR}}{\xspace}}{\ensuremath {\mathit{SNR}}{\xspace}}
\newcommand{\ensuremath {\mathit{SINR}}{\xspace}}{\ensuremath {\mathit{SINR}}{\xspace}}
\newcommand{\ensuremath {\mathit{PPSS}}{\xspace}}{\ensuremath {\mathit{PPSS}}{\xspace}}
\newcommand{\ensuremath {\mathit{chn}}{\xspace}}{\ensuremath {\mathit{chn}}{\xspace}}
\newcommand{\ensuremath {\mathit{QoS}}{\xspace}}{\ensuremath {\mathit{QoS}}{\xspace}}
\newcommand{\ensuremath {\mathit{PIR}}{\xspace}}{\ensuremath {\mathit{PIR}}{\xspace}}
\newcommand{\ensuremath {\mathit{ORAM}}{\xspace}}{\ensuremath {\mathit{ORAM}}{\xspace}}
\newcommand{\ensuremath {\mathit{OT}}{\xspace}}{\ensuremath {\mathit{OT}}{\xspace}}
\newcommand{\ensuremath {\mathit{MPC}}{\xspace}}{\ensuremath {\mathit{MPC}}{\xspace}}
\newcommand{\ensuremath {\mathit{n}}{\xspace}}{\ensuremath {\mathit{n}}{\xspace}}
\newcommand{\ensuremath {\mathit{MAC}}{\xspace}}{\ensuremath {\mathit{MAC}}{\xspace}}
\newcommand{\ensuremath {\mathit{OPE}}{\xspace}}{\ensuremath {\mathit{OPE}}{\xspace}}
\newcommand{\ensuremath {\mathit{GW}}{\xspace}}{\ensuremath {\mathit{GW}}{\xspace}}
\newcommand{\ensuremath {\mathit{SP}}{\xspace}}{\ensuremath {\mathit{SP}}{\xspace}}
\newtheorem{definition}{Definition}{\bfseries}{\rmfamily}
\usepackage{xcolor,cite,etoolbox}
\makeatletter
\pretocmd\@bibitem{\color{black}\csname keycolor#1\endcsname}{}{\fail}
\newcommand\citecolor[1]{\@namedef{keycolor#1}{\color{red}}}
\makeatother
\begin{document}
\title{Location Privacy in Cognitive Radio Networks: A Survey
}
\author{Mohamed~Grissa,
Bechir~Hamdaoui
and~Attila~A.~Yavuz\\
\small Oregon State University, Corvallis, OR 97331, grissam,hamdaoui,attila.yavuz@oregonstate.edu
\thanks{This work was supported in part by the US National Science Foundation under NSF award CNS-1162296. Mohamed Grissa, Bechir~Hamdaoui and~Attila~A.~Yavuz are with the Electrical Engineering and Computer Science (EECS) Department, Oregon State University, Corvallis, OR 97331-5501, USA (e-mail: grissam,hamdaoui,attila.yavuz@oregonstate.edu).}
\thanks{\copyright~2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.}
}
\maketitle
{\let\thefootnote\relax\footnote{{\\Digital Object Identifier 10.1109/COMST.2017.2693965}}}
\begin{abstract}
Cognitive radio networks (\ensuremath {\mathit{CRN}}{\xspace} s) have emerged as an essential technology to enable dynamic and opportunistic spectrum access which aims to exploit underutilized licensed channels to solve the spectrum scarcity problem. Despite the great benefits that \ensuremath {\mathit{CRN}}{\xspace} s offer in terms of their ability to improve spectrum utilization efficiency, they suffer from user location privacy issues. Knowing that their whereabouts may be exposed can discourage users from joining and participating in the \ensuremath {\mathit{CRN}}{\xspace} s, thereby potentially hindering the adoption and deployment of this technology in future generation networks. The location information leakage issue in the \ensuremath {\mathit{CRN}}{\xspace}~context has recently started to gain attention from the research community due to its importance, and several research efforts have been made to tackle it. However, to the best of our knowledge, none of these works have tried to identify the vulnerabilities that are behind this issue or discuss the approaches that could be deployed to prevent it. In this paper, we try to fill this gap by providing a comprehensive survey that investigates the various location privacy risks and threats that may arise from the different components of this \ensuremath {\mathit{CRN}}{\xspace}~technology, and explores the different privacy attacks and countermeasure solutions that have been proposed in the literature to cope with this location privacy issue. We also discuss some open research problems, related to this issue, that need to be overcome by the research community to take advantage of the benefits of this key \ensuremath {\mathit{CRN}}{\xspace}~technology without having to sacrifice the users' privacy.
\end{abstract}
\begin{keywords}
Location privacy, cognitive radio networks, dynamic spectrum access, privacy preserving protocols.
\end{keywords}
\section{Introduction}
\label{introduction}
{\em Cognitive radio networks} (\ensuremath {\mathit{CRN}}{\xspace} s) have been widely adopted as an efficient way to improve the spectrum utilization efficiency and alleviate the spectrum scarcity crisis caused by the huge demand on radio frequency resources. This technology has several applications and is considered one of the main enablers for 5G wireless networks to deal with their stringent spectrum requirements.
This paradigm, first coined by Mitola~\cite{mitola1999cognitive}, could be thought of as an intelligent wireless communication system that is aware of its surroundings and that can adapt dynamically to the changes in the RF environment. It enables {\em dynamic spectrum access} (\ensuremath {\mathit{DSA}}{\xspace}) and improves the spectrum utilization efficiency by allowing unlicensed/secondary users (\ensuremath {\mathit{SU}}{\xspace} s) to exploit unused spectrum bands of licensed/primary users (\ensuremath {\mathit{PU}}{\xspace} s). That is, \ensuremath {\mathit{SU}}{\xspace} s can opportunistically use unused spectrum bands (aka spectrum holes or white spaces), which are defined by FCC as the channels that are unused at a specific location and time~\cite{akhtar2016white},
so long as doing so does not cause harmful interference to \ensuremath {\mathit{PU}}{\xspace} s.
\subsection{The \ensuremath {\mathit{CRN}}{\xspace}~location privacy problem}
Despite its great potential for improving spectrum utilization efficiency, the \ensuremath {\mathit{CRN}}{\xspace}~technology suffers from serious privacy and security risks.
Although the survey covers location privacy issues arising at the various \ensuremath {\mathit{CRN}}{\xspace}~components, for motivation purposes, we focus in this section on the {\em spectrum discovery component} only, in which white spaces are identified using either the {\em cooperative spectrum sensing} or the {\em database-driven spectrum access} functions.
\subsubsection{Cooperative spectrum sensing} In cooperative sensing, a central entity called Fusion Center (FC) orchestrates the sensing operations as follows: It selects one channel for sensing and, through a control channel, requests that each \ensuremath {\mathit{SU}}{\xspace}~perform local sensing on that channel to detect the presence of \ensuremath {\mathit{PU}}{\xspace}~signals and send its sensing report back to it. Fusion Center then combines the received sensing reports, makes a decision about the channel availability, and diffuses the decision back to the \ensuremath {\mathit{SU}}{\xspace} s.
Here, a sensing report is essentially a sensed/measured quantity characterising some \ensuremath {\mathit{PU}}{\xspace}~signal strength the \ensuremath {\mathit{SU}}{\xspace}~observed on some \ensuremath {\mathit{PU}}{\xspace}~channel, and what quantity the \ensuremath {\mathit{SU}}{\xspace}~measures depends on the spectrum sensing method it uses (e.g., waveform~\cite{tian2006wavelet}, energy detection~\cite{poor2013introduction}, cyclostationarity~\cite{ghozzi2006cyclostatilonarilty}, etc.; see Section~\ref{section2-sensing} for more details). For example, when using the energy detection method, the sensed quantity is the energy strength of the sensed \ensuremath {\mathit{PU}}{\xspace}~signal, often referred to as {\bf received signal strength} (RSS)~\cite{fatemieh2011using}.
In cooperative sensing, communications between \ensuremath {\mathit{SU}}{\xspace} s and Fusion Center could be done via one of the following: $(i)$ direct, single-hop wireless links; $(ii)$ multi-hop links (with first link being wireless); $(iii)$ wired links (whether single or multiple hops).
In the first and second types, location privacy information can be easily leaked by observing the wireless radio signals sent by \ensuremath {\mathit{SU}}{\xspace} s to Fusion Center. In this case, existing (mostly mature) location privacy preservation technologies (e.g., see~\cite{conti2013providing,xi2006preserving} for sensor, ~\cite{jiang2007preserving} for WiFi and~\cite{gorlatova2011managing} for cellular) can be applied here to protect the location privacy of \ensuremath {\mathit{SU}}{\xspace} s during cooperative sensing.
In the third communication type when \ensuremath {\mathit{SU}}{\xspace} s communicate with Fusion Center via wired links, wireless signal-based localization techniques can no longer be used here to locate \ensuremath {\mathit{SU}}{\xspace} s.
However, unlike traditional wireless networks, in the case of cooperative sensing, preventing leakage of location information from wireless signals (e.g., by communicating via wired links) does not guarantee the preservation of \ensuremath {\mathit{SU}}{\xspace} s' location privacy. This is because location information can also be leaked from the sensing reports sent by \ensuremath {\mathit{SU}}{\xspace} s to the Fusion Center during cooperative sensing\cite{li2012location}.
Recall that a sensing report is essentially the {received signal strength} (RSS) value of some \ensuremath {\mathit{PU}}{\xspace}~signal that the \ensuremath {\mathit{SU}}{\xspace}~observed on a specific \ensuremath {\mathit{PU}}{\xspace}~channel.
And it has been shown that these values are highly correlated to the physical location of the reporting \ensuremath {\mathit{SU}}{\xspace}~\cite{li2012location}. Now, Fusion Center may know the actual physical locations of a few \ensuremath {\mathit{PU}}{\xspace} s as well as the channels these \ensuremath {\mathit{PU}}{\xspace} s communicate on, and thus, by knowing the RSS values measured by an \ensuremath {\mathit{SU}}{\xspace}~on each of these \ensuremath {\mathit{PU}}{\xspace}~channels, Fusion Center can easily locate the \ensuremath {\mathit{SU}}{\xspace}. Some illustrative scenarios, showing how Fusion Center can easily infer the physical locations of \ensuremath {\mathit{SU}}{\xspace} s by simply looking at a few sensing reports on different \ensuremath {\mathit{PU}}{\xspace}~channels, are given in~\cite{li2012location}. This is also illustrated in Figure~\ref{coopLocation}.
\begin{figure}[h!]
\vspace{-10pt}
\centering
\subfigure[\small { Cooperative sensing}]{\includegraphics[width=0.24\textwidth]{LocationPrivacyCoop.pdf}\label{coopLocation}}
\subfigure[\small { \ensuremath {\mathit{DB}}{\xspace}-driven access}]{\includegraphics[width=0.24\textwidth]{LocationPrivacyDB.pdf}\label{DBLocation}}
\vspace{-5pt}
\caption{\small {Location privacy issues during spectrum discovery}} \label{}
\end{figure}
\subsubsection{ Database-driven access} In database-driven spectrum access, spectrum availability information is provided to \ensuremath {\mathit{SU}}{\xspace} s by querying a spectrum database, often maintained and controlled by a third
party (e.g., Google, Spectrum Bridge, RadioSoft, etc.). Here, although \ensuremath {\mathit{SU}}{\xspace}~queries' final destination is the database, which is often located far away from the \ensuremath {\mathit{SU}}{\xspace} s, location information can also be leaked from wireless radio signals if the \ensuremath {\mathit{SU}}{\xspace} s' first hop is a wireless link; e.g., a cellular base station or a WiFi access point. In this case, the aforementioned, existing location privacy preservation techniques that overcome wireless signal-based leakage can also be applied to protect \ensuremath {\mathit{SU}}{\xspace} s' location privacy.
However, as illustrated in Figure~\ref{DBLocation}, there is a more straightforward location privacy threat specific to the database-driven access method: In order for an \ensuremath {\mathit{SU}}{\xspace}~to acquire spectrum availability information, it is required to query the database with its physical location, so that the database can inform it about spectrum availability in its vicinity. This explicit exposure of \ensuremath {\mathit{SU}}{\xspace} s' location information to third (commercial) parties raises serious privacy concerns and has some undesired consequences, as discussed next.
\subsection{Why worry about location information privacy?}
Most users are not comfortable having their whereabouts and private location information made public, especially in the presence of malicious entities that may be eager to exploit this information for malicious purposes and to gain more knowledge about other sensitive and private information.
A survey conducted in 2015 by Pew Research found that ``{\it Most Americans hold strong views about the importance of privacy in their everyday lives}'', and that ``{\it These feelings also extend to their wishes that they be able to maintain privacy in their homes, at work, during social gatherings, at times when they want to be alone and when they are moving around in public}'' (Madden et al.~\cite{privacy-report}). This same survey also reports that ``{\it 90\% say that controlling what information is collected about them is important}'' and
``{\it 88\% say it is important that they not have someone watch or listen to them without their permission}''.
For instance, with an operation as simple as a succession of database accesses, a database can easily monitor and track the \ensuremath {\mathit{SU}}{\xspace}'s daily life activities and communications, allowing the database to learn various behavioral information about the user; e.g., where he/she goes for shopping, which social circles he/she attends, and where and when he/she eats.
As spectrum databases are being managed by business entities, such private information is at risk of being sold and shared with other commercial entities.
Indeed, an \ensuremath {\mathit{SU}}{\xspace}'s fine-grained location information, when combined with other publicly available information, could easily be exploited to infer other personal information about an individual, including his/her behavior, preferences, personal habits or even beliefs. For instance, an adversary can learn an individual's religious belief by observing that he/she frequently visits places with religious affiliations.
Location traces could also reveal some information about the health condition of a user, for example if the adversary observes that the user regularly goes to a hospital. The frequency and duration of these visits can even reveal the seriousness of a user's illness, and even its type if the location corresponds to that of a specialty clinic. The adversary could sell this health information to pharmaceutical advertisers without the user's consent. Moreover, malicious adversaries with criminal intent could use the location information to pose a threat to individuals' security and privacy; for instance, they can commit crimes of theft and burglary when users are absent.
We envision that the public's acceptance of the dynamic and opportunistic
spectrum sharing paradigm will greatly depend on the robustness and trustworthiness of \ensuremath {\mathit{CRN}}{\xspace} s vis-\`a-vis their ability to address these privacy concerns. It is, therefore, imperative that the techniques and tools developed by the research community for \ensuremath {\mathit{CRN}}{\xspace} s be enabled with privacy-preserving capabilities that protect the location privacy of \ensuremath {\mathit{SU}}{\xspace} s while allowing them to harness the benefits of the \ensuremath {\mathit{CRN}}{\xspace}~paradigm, without disrupting the functionalities that these techniques are designed for and thus promoting {\em dynamic spectrum access}.
\subsection{Location privacy protection: pros and cons}
Ensuring that the location information of \ensuremath {\mathit{SU}}{\xspace} s is kept private has great benefits.
First and most importantly, it promotes dynamic and opportunistic sharing of spectrum resources, thereby increasing spectrum utilization efficiency. Knowing that their location privacy is protected, and hence that they do not have to worry about their whereabouts being tracked, \ensuremath {\mathit{SU}}{\xspace} s will be encouraged to participate in the cooperative spectrum sensing process, and to query spectrum databases for spectrum availability.
Ensuring location privacy protection can also be beneficial to \ensuremath {\mathit{PU}}{\xspace} s. For example, \ensuremath {\mathit{SU}}{\xspace} s concerned that their location information may be leaked by spectrum databases may attempt to use \ensuremath {\mathit{PU}}{\xspace}~channels without registering and querying the databases for spectrum availability, thereby causing harmful interference to \ensuremath {\mathit{PU}}{\xspace} s.
Providing location privacy preservation guarantees cannot, however, be done without a cost. It introduces additional communication, computation and storage overheads, which may, in turn, introduce additional delay when it comes to learning about the availability status of a channel, and can, in the extreme case, make the spectrum availability information outdated, thus possibly resulting in the use of a channel that is no longer vacant.
The pros and cons of providing location privacy protection are summarized in Table~\ref{advDis}.
\begin{table*}[th!]
\vspace{-5pt}
\centering
\caption{\small Pros and cons of preserving the location privacy of \ensuremath {\mathit{SU}}{\xspace} s}
\label{advDis}
\resizebox{\textwidth}{!}{%
\renewcommand{\arraystretch}{1.25}{
\begin{tabular}{@{}lp{8.5cm}p{8.5cm}@{}}
\toprule[1.5pt]
& Pros & Cons \\ \midrule
{$\boldsymbol{\ensuremath {\mathit{SU}}{\xspace}}$} & \begin{tabular}[c]{@{}p{8.5cm}@{}} {- Encourages \ensuremath {\mathit{SU}}{\xspace} s to participate in the cooperative spectrum sensing process, and hence in accurately locating spectrum opportunities. } \\{ - Discourages \ensuremath {\mathit{SU}}{\xspace} s from using spectrum opportunities without checking for availability first, either through spectrum databases or cooperative sensing, and thus prevents \ensuremath {\mathit{SU}}{\xspace} s from violating secondary spectrum access policies. } \\{- Promotes dynamic spectrum sharing, and thus increases spectrum utilization efficiency and helps address the spectrum supply shortage problem.} \end{tabular} & \begin{tabular}[c]{@{}p{8.5cm}@{}} {- Incurs additional \ensuremath {\mathit{SU}}{\xspace} s' communication, computation, and storage overheads; this can be problematic when \ensuremath {\mathit{SU}}{\xspace} s are resource-limited devices (e.g., IoT devices, sensors, etc.).} \\ {- Introduces delay in the process of querying spectrum databases for spectrum availability information in the case of database-driven \ensuremath {\mathit{CRN}}{\xspace}~approach.} \\ {- Introduces delay when locating and deciding about spectrum availability through the cooperative spectrum sensing approach. } \end{tabular} \\ \addlinespace[5pt] \hline \addlinespace[5pt]
{$\boldsymbol{\ensuremath {\mathit{PU}}{\xspace}}$ } & \begin{tabular}[c]{@{}p{8.5cm}@{}}{- Protects \ensuremath {\mathit{PU}}{\xspace} s from harmful interference that might come from \ensuremath {\mathit{SU}}{\xspace} s not willing to check for spectrum availability (either via the cooperative spectrum sensing approach or database-driven access approach) before using \ensuremath {\mathit{PU}}{\xspace}~channels.} \end{tabular} & \begin{tabular}[c]{@{}p{8.5cm}@{}} {- Outdated spectrum availability information due to the delays incurred as a result of protecting the location privacy may lead to the use of occupied \ensuremath {\mathit{PU}}{\xspace}~channels by \ensuremath {\mathit{SU}}{\xspace} s, thereby causing interference to \ensuremath {\mathit{PU}}{\xspace} s.} \end{tabular} \\ \addlinespace[5pt]
\bottomrule[1.5pt]
\end{tabular}}}
\end{table*}
\begin{figure*}[th!]
\center
\includegraphics[width=0.8\textwidth]{survey-structure.pdf}
\caption{\small Survey structure}
\label{fig:structure}
\end{figure*}
\subsection{State-of-the art surveys}
There have been several works that investigate and address the various \ensuremath {\mathit{CRN}}{\xspace}~vulnerabilities and security issues~\cite{araujo2012security,fragkiadakis2013survey,ling2015application,el2011survey,sharma2015advances}. However, most of these surveys focus on security and privacy issues in general, with little or no attention to the location privacy issue that we address in this survey. For instance, Ling et al.~\cite{ling2015application} present an extensive review of the use of reinforcement learning to achieve security enhancement in the context of \ensuremath {\mathit{CRN}}{\xspace} s while dealing with jamming and byzantine attacks. El-Hajj et al.~\cite{el2011survey} provide a per-layer classification of attacks targeting \ensuremath {\mathit{CRN}}{\xspace} s, and discuss existing countermeasure solutions that address these attacks. Sharma et al.~\cite{sharma2015advances} discuss security threats, attacks, and countermeasures in \ensuremath {\mathit{CRN}}{\xspace} s for both \ensuremath {\mathit{PU}}{\xspace} s and \ensuremath {\mathit{SU}}{\xspace} s with a focus on the physical layer. There have also been a few surveys exploring location privacy issues, but they generally do not focus on \ensuremath {\mathit{CRN}}{\xspace} s. For instance, Zhang et al.~\cite{zhang2015location} present a high-level overview of fundamental approaches for user localization and privacy preservation, but mainly in the context of location-based services (LBS). They also discuss this issue, though only briefly, in the context of indoor environments, wireless sensor networks, and cognitive radio networks.
To the best of our knowledge, this is the first comprehensive survey that digs into the different privacy threats and attacks that target the location information of \ensuremath {\mathit{SU}}{\xspace} s at the different \ensuremath {\mathit{CRN}}{\xspace}~components, along with the different techniques that have been proposed in the literature to mitigate and address these threats.
\label{related}
\subsection{Structure and acronyms}
This paper provides a comprehensive survey of the location privacy threats and vulnerabilities arising at the various components of \ensuremath {\mathit{CRN}}{\xspace} s, as well as the different techniques proposed in the literature to overcome these privacy issues. The general survey structure is depicted in Figure~\ref{fig:structure}, and is as follows:
\begin{itemize}
\item Section~\ref{sec:sources} investigates the vulnerabilities and sources of location information leakage in \ensuremath {\mathit{CRN}}{\xspace} s, and provides insights on how these vulnerabilities could become potential threats to \ensuremath {\mathit{SU}}{\xspace} s' location privacy.
\item Section~\ref{limitGeneric} explores the privacy enhancing technologies (PETs) that are most relevant to \ensuremath {\mathit{CRN}}{\xspace} s. The goal is to show that these techniques, though widely adopted in other contexts, cannot be applied off-the-shelf in the context of \ensuremath {\mathit{CRN}}{\xspace} s unless judiciously adapted to the unique requirements of \ensuremath {\mathit{CRN}}{\xspace} s.
\item Sections~\ref{lpsd} and~\ref{lpoc} discuss threats and attacks that have been identified in the literature with respect to the spectrum opportunity discovery component (Section~\ref{lpsd}), as well as other \ensuremath {\mathit{CRN}}{\xspace}~components (Section~\ref{lpoc}). They also discuss their impacts on \ensuremath {\mathit{SU}}{\xspace} s' privacy, and investigate countermeasure solutions that have been proposed in the literature to deal with these attacks. These two sections also explore and present the different metrics used to assess and evaluate both the achievable performance and the privacy level of these proposed solutions.
\item Section~\ref{openproblems} discusses unsolved research challenges pertaining to the location privacy in \ensuremath {\mathit{CRN}}{\xspace} s, with a special focus on the \ensuremath {\mathit{CR}}{\xspace}~components that have received the least attention from the research community. It also discusses open research problems arising from alternative \ensuremath {\mathit{CRN}}{\xspace}~architectures and from emerging \ensuremath {\mathit{CR}}{\xspace}-based technologies.
\item Section~\ref{con} concludes the survey.
\end{itemize}
For convenience, we summarize the acronyms used throughout this survey in Table~\ref{t:notations}.
\begin{table}[h!]
\vspace{-7pt}
\caption{\small Acronyms}
\centering
\resizebox{0.4\textwidth}{!}{
\label{t:notations}
\begin{tabular}{l l}
\hline
\noalign{\medskip }
$\ensuremath {\mathit{AoA}}{\xspace}$ & Angle of arrival \\
$\ensuremath {\mathit{BS}}{\xspace}$ & Base station \\
$\ensuremath {\mathit{CR}}{\xspace}$ & Cognitive radio \\
$\ensuremath {\mathit{CRN}}{\xspace}$ & Cognitive radio network \\
$\ensuremath {\mathit{DB}}{\xspace}$ & Spectrum database \\
$\ensuremath {\mathit{DSA}}{\xspace}$ & Dynamic spectrum access \\
$\ensuremath {\mathit{FC}}{\xspace}$ & Fusion center\\
$\ensuremath {\mathit{FCC}}{\xspace}$ & Federal Communications Commission \\
$\ensuremath {\mathit{GW}}{\xspace}$ & Gateway\\
$\ensuremath {\mathit{MAC}}{\xspace}$ & Medium Access Control \\
$\ensuremath {\mathit{MPC}}{\xspace}$ & Secure multiparty computation \\
$\ensuremath {\mathit{MTP}}{\xspace}$ & Maximum transmit power \\
$\ensuremath {\mathit{OPE}}{\xspace}$ & Order preserving encryption \\
$\ensuremath {\mathit{ORAM}}{\xspace}$ & Oblivious random access memory \\
$\ensuremath {\mathit{OT}}{\xspace}$ & Oblivious transfer \\
$\ensuremath {\mathit{PET}}{\xspace}$ & Privacy enhancing technology\\
$\ensuremath {\mathit{PIR}}{\xspace}$ & Private information retrieval \\
$\ensuremath {\mathit{PU}}{\xspace}$ & Primary user \\
$\ensuremath {\mathit{QoS}}{\xspace}$ & Quality of service \\
$\ensuremath {\mathit{REM}}{\xspace}$ & Radio environment map \\
$\ensuremath {\mathit{RSS}}{\xspace}$ & Received signal strength\\
$\ensuremath {\mathit{SINR}}{\xspace}$ & Signal to interference-plus-noise ratio \\
$\ensuremath {\mathit{SNR}}{\xspace}$ & Signal to noise ratio \\
$\ensuremath {\mathit{SP}}{\xspace}$ & Service provider \\
$\ensuremath {\mathit{SSE}}{\xspace}$ & Searchable symmetric encryption \\
$\ensuremath {\mathit{SU}}{\xspace}$ & Secondary user \\
$\ensuremath {\mathit{TDoA}}{\xspace}$ & Time difference of arrival \\
$\ensuremath {\mathit{ToA}}{\xspace}$ & Time of arrival \\
$\ensuremath {\mathit{WSN}}{\xspace}$ & Wireless sensor network \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}
}
\end{table}
\section{Sources of location information leakage in \ensuremath {\mathit{CRN}}{\xspace} s}
\label{sec:sources}
\label{sec:sourcesof}
\ensuremath {\mathit{CRN}}{\xspace} s need to perform a number of spectrum-aware operations to adapt to the dynamic spectrum environment. These operations form what is called a {\em cognition cycle} \cite{mitola1999cognitive,haykin2005cognitive,akyildiz2009crahns,hossain2009dynamic}, which mainly consists of four spectrum functions, as shown in Figure~\ref{cogCycle}: spectrum opportunity discovery, spectrum analysis, spectrum sharing and spectrum mobility. Despite their merit in enhancing spectrum utilization, \ensuremath {\mathit{CRN}}{\xspace} s may present privacy risks to \ensuremath {\mathit{SU}}{\xspace} s, especially in terms of their location privacy. In this section, we investigate the different aspects of the cognitive spectrum functions and discuss the different threats that can compromise the location privacy of \ensuremath {\mathit{SU}}{\xspace} s in \ensuremath {\mathit{CRN}}{\xspace} s during the execution of these functions.
\begin{figure}[h!]
\center
\includegraphics[width=0.36\textwidth]{cognitive-cycle.pdf}
\caption{\small Cognitive radio cycle~\cite{akyildiz2009crahns}.}
\label{cogCycle}
\vspace{-10pt}
\end{figure}
\subsection{Location information leakage in spectrum discovery}
\label{specDisc}
This is considered to be one of the most important components of the cognition cycle, as it provides information about spectrum holes and \ensuremath {\mathit{PU}}{\xspace} s' presence. There are mainly two approaches for obtaining this information: spectrum sensing, performed by \ensuremath {\mathit{SU}}{\xspace} s themselves \cite{yucek2009survey}, and geolocation databases. We first describe these two approaches, and then investigate the sources of location information leakage that each may present.
\subsubsection{Spectrum sensing}
\label{section2-sensing}
Spectrum sensing enables \ensuremath {\mathit{SU}}{\xspace} s to be aware of their surroundings and to identify spectrum holes in their vicinity so that they can exploit them opportunistically. It basically requires \ensuremath {\mathit{SU}}{\xspace} s to sense and detect primary signals without interfering with \ensuremath {\mathit{PU}}{\xspace} s' transmissions~\cite{ghasemi2008spectrum,axell2012spectrum}. Spectrum sensing can be divided into two main functionalities, \ensuremath {\mathit{PU}}{\xspace}~detection and cooperation, which are detailed next.
\paragraph{\ensuremath {\mathit{PU}}{\xspace}~detection}
\label{pudetection}
The first step towards discovering spectrum opportunities is to detect \ensuremath {\mathit{PU}}{\xspace} s' signals. To do so, each \ensuremath {\mathit{SU}}{\xspace}~needs to sense its local radio environment, as it is generally assumed not to have any prior knowledge about \ensuremath {\mathit{PU}}{\xspace} s' activities.
We now present existing techniques that have been proposed in the literature to detect primary signals.
\begin{itemize}
\item {\em Energy detection~\cite{poor2013introduction}}: This is the simplest and the most popular approach for signal detection~\cite{letaief2009cooperative}. It is also considered as the optimal sensing approach when no information about the primary signal is available~\cite{hoven2005some}.
The presence or absence of a \ensuremath {\mathit{PU}}{\xspace}~is decided by measuring \ensuremath {\mathit{PU}}{\xspace}~signal's energy (aka the received signal strength (\ensuremath {\mathit{RSS}}{\xspace})) on a target channel and comparing it against a detection energy threshold $\lambda$~\cite{fatemieh2011using,ma2009signal}.
\item {\em Matched filter detection~\cite{proakis2001intersymbol}}: This is considered the optimal signal detection method~\cite{cabric2004implementation,letaief2009cooperative} as it maximizes the signal to noise ratio, but it requires full knowledge of \ensuremath {\mathit{PU}}{\xspace}'s signal features such as modulation format, data rate, etc. It correlates the known signal (aka template) with the input signal to detect the presence of the template signal in the unknown signal. The output of the matched filter is then compared to a predetermined threshold to decide on \ensuremath {\mathit{PU}}{\xspace}'s presence or absence.
\item {\em Cyclostationary detection \cite{lunden2009collaborative,letaief2009cooperative}}:
\ensuremath {\mathit{PU}}{\xspace} s' transmitted signals are usually cyclostationary, i.e. their statistics exhibit periodicity~\cite{ma2009signal}. Such a periodicity is usually introduced to the primary signals so that receivers can use it for timing and channel estimation purposes. But it can also be exploited for detecting \ensuremath {\mathit{PU}}{\xspace} s~\cite{hossain2009dynamic}. \ensuremath {\mathit{SU}}{\xspace} s can detect this periodicity in the modulated signals by analyzing a spectral correlation function. This spectrum sensing technique is appealing because of its capability of differentiating the primary signal from noise and interference even in very low \ensuremath {\mathit{SNR}}{\xspace}~environments~\cite{ma2009signal}.
\item {\em Wavelet detection \cite{tian2006wavelet,ma2009signal}}:
This method uses wavelet transform, an attractive mathematical tool used to investigate signal local regularity to analyze singularities and irregular structures in the power spectrum density caused by spectrum usage~\cite{hossain2009dynamic}. Wavelets are used for detecting edges, which are boundaries between spectrum holes and occupied bands, in the power spectral density (PSD) of a wideband channel.
\end{itemize}
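As a toy illustration of the first and simplest of these techniques, the following Python sketch implements the energy-detection rule: average the squared magnitudes of the received samples (the test statistic) and compare the result to the detection threshold $\lambda$. The signal model, noise power and threshold value are illustrative assumptions, not taken from any cited work.

```python
import random

def energy_detect(samples, threshold):
    """Energy detection: average the squared received samples and
    compare the resulting test statistic to a threshold (lambda)."""
    test_statistic = sum(s * s for s in samples) / len(samples)
    return test_statistic > threshold  # True -> PU declared present

random.seed(0)
noise_power = 1.0   # assumed known noise variance (sigma^2)
threshold = 1.5     # detection threshold, tuned offline from a
                    # target false-alarm probability (illustrative)

# Channel vacant: noise only.
noise = [random.gauss(0, noise_power ** 0.5) for _ in range(1000)]
# Channel occupied: a toy primary signal (constant offset) in the same noise.
signal = [n + 1.2 for n in noise]

print(energy_detect(noise, threshold))   # expect False (channel vacant)
print(energy_detect(signal, threshold))  # expect True (PU present)
```

In practice the threshold is chosen from the noise statistics to trade off the false-alarm and missed-detection probabilities.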
Most of the above-mentioned techniques are based on a set of measurements sampled at the Nyquist rate and, because of hardware limitations, can sense only one band at a time~\cite{salahdine2016survey}. In addition, sensing a wideband spectrum requires dividing it into narrow bands and having each \ensuremath {\mathit{SU}}{\xspace}~sense these bands using multiple RF frontends simultaneously~\cite{salahdine2016survey}. This may result in very high processing time and hardware cost, which makes these approaches unsuitable for wideband sensing. Compressive sensing~\cite{donoho2006compressed} has been proposed to overcome these issues. In compressive sensing theory, a sparse signal can be acquired and compressed simultaneously in the same process, retaining only the essential information, at rates significantly lower than the Nyquist rate. This means that the signal can be recovered from far fewer measurements and at a lower rate (below the Nyquist rate) compared to traditional methods~\cite{salahdine2016survey,sharma2016application}. As the wideband spectrum is inherently sparse due to its low utilization, compressive sensing is a promising approach to realize wideband spectrum sensing.
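To make the sparsity idea concrete, here is a minimal pure-Python sketch of compressive recovery in the easiest possible setting: the wideband "spectrum" is assumed to contain a single occupied band (a 1-sparse vector), so a single iteration of orthogonal matching pursuit identifies it from far fewer random measurements than signal dimensions. The dimensions, seed and random dictionary are illustrative assumptions, not a practical sensing design.

```python
import random

def recover_1sparse(A, y):
    """One OMP iteration for a 1-sparse signal: pick the dictionary
    column of A that best explains the compressed measurements y,
    i.e. the column minimizing the least-squares residual."""
    m, n = len(A), len(A[0])
    best = None  # (residual, column index, coefficient)
    for j in range(n):
        col = [A[i][j] for i in range(m)]
        cc = sum(v * v for v in col)
        coef = sum(col[i] * y[i] for i in range(m)) / cc
        resid = sum((y[i] - coef * col[i]) ** 2 for i in range(m))
        if best is None or resid < best[0]:
            best = (resid, j, coef)
    return best[1], best[2]

random.seed(1)
n, m, k, amp = 16, 5, 7, 3.0   # 16 bands, only 5 measurements
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x = [amp if j == k else 0.0 for j in range(n)]  # 1-sparse "spectrum"
y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

idx, coef = recover_1sparse(A, y)
print(idx, round(coef, 6))  # recovers the occupied band and its amplitude
```

Real wideband signals have more than one occupied band, so practical schemes run several OMP iterations (or other sparse solvers), but the principle — recovery from sub-Nyquist measurement counts by exploiting sparsity — is the same.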
\paragraph{Cooperation}
\label{coop}
One widely adopted approach for improving spectrum sensing accuracy is cooperation, where \ensuremath {\mathit{SU}}{\xspace} s share their local sensing observations and collaboratively make spectrum availability decisions.
These observations can be made using one of \ensuremath {\mathit{PU}}{\xspace}~detection techniques discussed in Section~\ref{pudetection}.
The idea behind cooperation is to exploit spatial diversity of spatially distributed \ensuremath {\mathit{SU}}{\xspace} s to cope with problems, like shadowing, multipath fading, and receiver uncertainty, that may impact individual local observations of \ensuremath {\mathit{SU}}{\xspace} s~\cite{yucek2009survey}. There have been many cooperative approaches proposed in the literature~\cite{cheng2012energy,ganesan2005cooperative,ganesan2007cooperative,letaief2009cooperative,althunibat2013optimizing}, and cooperative spectrum sensing has been widely adopted in many cognitive radio standards, e.g. WhiteFi~\cite{bahl2009white}, IEEE 802.22 WRAN~\cite{ieee802.22} and IEEE 802.11af~\cite{ieee802.11af}. The collaboration between \ensuremath {\mathit{SU}}{\xspace} s is usually performed through a control channel~\cite{cabric2004implementation} and could be realized in two major ways: centralized and distributed~\cite{akyildiz2011cooperative}.
In the centralized approach, a central entity, referred to as the {\em fusion center} (\ensuremath {\mathit{FC}}{\xspace}), orchestrates the cooperative sensing task among \ensuremath {\mathit{SU}}{\xspace} s through a control channel, as shown in Figure~\ref{centralized}. \ensuremath {\mathit{FC}}{\xspace}~collects the local observations from \ensuremath {\mathit{SU}}{\xspace} s and combines them to determine \ensuremath {\mathit{PU}}{\xspace}'s presence on a specific channel.
In the distributed approach, \ensuremath {\mathit{SU}}{\xspace} s do not rely on \ensuremath {\mathit{FC}}{\xspace}~for making channel availability decisions. They instead exchange sensing information among one another to come to a unified decision~\cite{grissa2017preserving,akyildiz2011cooperative}, as shown in Figure~\ref{distributed}.
\begin{figure}[h!]
\vspace{-5pt}
\centering
\subfigure[\small Centralized]{\includegraphics[width=0.24\textwidth]{SingleHopInfrastructure.pdf}\label{centralized}}
\subfigure[ \small Distributed]{\includegraphics[width=0.24\textwidth]{SingleMultiAdhoc.pdf}\label{distributed}}
\vspace{-5pt}
\caption{\small Cooperative spectrum sensing}
\end{figure}
Another promising approach for enabling effective cooperative spectrum sensing over a large geographic area is to exploit the emerging {\em crowdsourcing}
paradigm, in which spectrum service providers outsource spectrum sensing tasks to distributed mobile users~\cite{fatemieh2010secure,fatemieh2011using,zhang2013secure,feng2014trac,jin2016privacy}.
Crowdsourcing is formally defined as the act of taking a job traditionally performed by a designated agent and outsourcing it to an undefined, generally large group of people in the form of an open call. This concept has been adopted in many contexts~\cite{sun2016securefind}, and was first applied to \ensuremath {\mathit{CRN}}{\xspace} s by Fatemieh et al.~\cite{fatemieh2010secure}.
The use of crowdsourcing for enabling spectrum sensing is motivated by several facts and trends. First, according to a recent Cisco report~\cite{cisco2016}, the number of mobile-connected devices is expected to hit $11.6$ billion. This huge number guarantees sufficient geographic coverage, especially in highly populated regions such as metropolitan areas~\cite{zhang2013secure} where {\em dynamic spectrum access (\ensuremath {\mathit{DSA}}{\xspace})} systems are expected to play important roles~\cite{jin2016privacy}. Moreover, future mobile devices are widely expected to be able to perform spectrum sensing tasks, given the expected pervasiveness of \ensuremath {\mathit{DSA}}{\xspace}~in future wireless systems~\cite{nika2014towards,zhang2013secure}. Finally, mobile devices are increasingly equipped with more powerful communication and computation resources, and are enabled with self-localization capabilities, making mobile crowdsourcing even more appealing and attractive~\cite{jin2016privacy}.
The cooperative spectrum sensing process is usually performed by a specified set of nodes that are considered to be trustworthy~\cite{fatemieh2010secure}. Crowdsourcing-based cooperative spectrum sensing, on the other hand, is to be realized by gathering and combining sensing reports from a large group of nodes that could be unreliable, untrustworthy, or even malicious~\cite{fatemieh2010secure}, thereby giving rise to new challenges.
Another important challenge that arises from the cooperative nature of spectrum sensing is how to combine the various \ensuremath {\mathit{SU}}{\xspace} s' sensing results or observations for hypothesis testing so as to decide accurately on the presence of primary signals. This process consists of sending the sensing results to \ensuremath {\mathit{FC}}{\xspace}~or to the neighboring \ensuremath {\mathit{SU}}{\xspace} s, depending on the topology, to make spectrum availability decisions. It is referred to as data fusion and can be done in one of two ways: soft combining and hard combining~\cite{teguig2012data}. In soft combining, local sensing reports, measured locally by \ensuremath {\mathit{SU}}{\xspace} s from their received signals, are combined to compute test statistics using combining rules such as square law combining (SLC), maximal ratio combining (MRC) and selection combining (SC)~\cite{teguig2012data}. In hard combining, \ensuremath {\mathit{SU}}{\xspace} s make decisions about the availability of the spectrum locally, and share their one-bit decisions (i.e., available or not available) to reach a voting decision about spectrum availability~\cite{teguig2012data}.
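The two fusion families can be sketched in a few lines of Python. The particular rules below, a majority vote for hard combining and an equal-gain sum for soft combining, are simplified stand-ins for the SLC/MRC/SC rules cited above, and all numbers are illustrative.

```python
def hard_combine(decisions):
    """Hard combining: each SU reports a one-bit local decision
    (1 = PU present); the FC takes a majority vote."""
    return sum(decisions) > len(decisions) / 2

def soft_combine(rss_reports, threshold):
    """A simple soft-combining rule (equal-gain style): sum the raw
    RSS reports and compare against a global decision threshold."""
    return sum(rss_reports) > threshold

# Five SUs; two are deeply shadowed and miss the PU locally.
local_decisions = [1, 1, 1, 0, 0]
print(hard_combine(local_decisions))      # majority still says PU present

rss = [2.1, 1.8, 2.3, 0.4, 0.5]           # raw per-SU RSS measurements
print(soft_combine(rss, threshold=5.0))   # 7.1 > 5.0 -> PU present
```

The privacy implication is visible in the signatures: soft combining requires each \ensuremath {\mathit{SU}}{\xspace}~to expose its raw, location-correlated measurements, while hard combining leaks only one bit per \ensuremath {\mathit{SU}}{\xspace}.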
\subsubsection{Geolocation database}
\label{db}
This is another approach for spectrum opportunity discovery, recently recommended by the FCC~\cite{federal2012third}.
A typical database-driven \ensuremath {\mathit{CRN}}{\xspace}~\cite{murty2012senseless,grissa2015cuckoo} consists of a geolocation database (\ensuremath {\mathit{DB}}{\xspace}) containing spectrum availability information, a set of \ensuremath {\mathit{SU}}{\xspace} s and a set of \ensuremath {\mathit{PU}}{\xspace} s, as shown in Figure~\ref{databaseDirect}. To learn about spectrum opportunities in its vicinity, a \ensuremath {\mathit{SU}}{\xspace}~no longer needs to detect the primary signal by itself. Instead, it queries \ensuremath {\mathit{DB}}{\xspace}~by including its exact location in the query. \ensuremath {\mathit{DB}}{\xspace}~then replies with the set of channels available at \ensuremath {\mathit{SU}}{\xspace}'s location, together with the appropriate transmission parameters (e.g. transmit power) for each channel, so as to avoid interfering with the incumbents. Afterwards, depending on the situation, \ensuremath {\mathit{SU}}{\xspace}~may optionally inform \ensuremath {\mathit{DB}}{\xspace}~of its choice and register the channel it is planning to operate on, during what is referred to as the notification or commitment phase~\cite{gao2013location,zhang2015optimal}. \ensuremath {\mathit{DB}}{\xspace}~keeps track of this information to gain more visibility over the \ensuremath {\mathit{CRN}}{\xspace}~and make its decisions adaptively, which allows it to reduce interference among \ensuremath {\mathit{SU}}{\xspace} s. \ensuremath {\mathit{SU}}{\xspace} s may be able to communicate directly with \ensuremath {\mathit{DB}}{\xspace}~as in Figure~\ref{databaseDirect}, or via a fixed base station that relays their queries to \ensuremath {\mathit{DB}}{\xspace}~as in Figure~\ref{databaseBS}.
\begin{figure}[h!]
\centering
\subfigure[without \ensuremath {\mathit{BS}}{\xspace}]{\includegraphics[width=0.24\textwidth]{databaseSingle.pdf}\label{databaseDirect}}
\subfigure[with \ensuremath {\mathit{BS}}{\xspace}]{\includegraphics[width=0.24\textwidth]{databaseDouble.pdf}\label{databaseBS}}
\vspace{-5pt}
\caption{\small Spectrum database-based \ensuremath {\mathit{CRN}}{\xspace}~topologies}
\end{figure}
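The query/notification exchange described above can be mocked up as follows. The class `SpectrumDB`, its methods, and the cell-indexed availability map are purely illustrative assumptions and do not correspond to any standardized database API.

```python
class SpectrumDB:
    """Toy model of a geolocation spectrum database (illustrative only)."""

    def __init__(self, availability):
        # availability: location cell -> {channel: max transmit power (dBm)}
        self.availability = availability
        self.registrations = {}  # channel -> set of SU ids committed to it

    def query(self, su_id, location):
        """The SU includes its exact location in the query; the DB
        replies with the vacant channels at that location and their
        transmit-power limits."""
        return dict(self.availability.get(location, {}))

    def register(self, su_id, channel):
        """Optional commitment phase: the SU tells the DB which channel
        it will operate on, giving the DB visibility to reduce
        SU-to-SU interference."""
        self.registrations.setdefault(channel, set()).add(su_id)

db = SpectrumDB({(3, 7): {21: 30.0, 24: 20.0}})
channels = db.query("su-1", (3, 7))   # the query itself leaks cell (3, 7)
db.register("su-1", max(channels, key=channels.get))  # pick highest-power channel
print(sorted(channels))
```

Even in this toy form, the privacy problem is apparent: both the query and the commitment tie the \ensuremath {\mathit{SU}}{\xspace}'s identifier to a precise location inside \ensuremath {\mathit{DB}}{\xspace}'s records.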
\subsubsection{Sources of location information leakage}
In this section, we investigate the different vulnerabilities in the spectrum opportunity discovery phase and the potential threats that could exploit them to localize \ensuremath {\mathit{SU}}{\xspace} s. We first explore the sources of leakage in the cooperative spectrum sensing approach, and then those in the database-driven approach.
\paragraph{Cooperative spectrum sensing} \label{coopSources}
In the cooperative spectrum sensing approach, \ensuremath {\mathit{SU}}{\xspace} s need to communicate with other entities in the \ensuremath {\mathit{CRN}}{\xspace}~to exchange and share their observations of the spectrum. This collaboration may lead to a significant leakage of information regarding the location of the collaborating \ensuremath {\mathit{SU}}{\xspace} s. In the following, we investigate and discuss the different vulnerabilities that arise from the cooperation process.
\textbf{Wireless signal:}
This is the most obvious and direct source of leakage in wireless networks in general and in \ensuremath {\mathit{CRN}}{\xspace} s in particular. The open, broadcast nature of the wireless medium in \ensuremath {\mathit{CRN}}{\xspace} s makes it much easier for an attacker to compromise a \ensuremath {\mathit{SU}}{\xspace}'s privacy, and more specifically its location~\cite{zekavat2011handbook,sithamparanathan2012cognitive}. Despite the many efforts to protect private location information at the system level, mainly through encrypted signal transmissions, the signal itself can still be used to localize a \ensuremath {\mathit{SU}}{\xspace}. Classic localization approaches are usually based on a small set of measurements on the wireless signals, including time-based ranging, received signal strength (\ensuremath {\mathit{RSS}}{\xspace}) and angle of arrival (\ensuremath {\mathit{AoA}}{\xspace})~\cite{zekavat2011handbook}.
\begin{itemize}
\item Time-based ranging: This is used to estimate the distance between two communicating nodes by measuring the signal propagation delay, also known as time-of-flight, $\tau_F = d/c$, where $d$ is the actual distance between the nodes and $c$ is the propagation speed ($c\simeq 3\times 10^8$~m/s)~\cite{sithamparanathan2012cognitive}. This can be accomplished using either time-of-arrival (\ensuremath {\mathit{ToA}}{\xspace}) or time difference-of-arrival (\ensuremath {\mathit{TDoA}}{\xspace}). If at time $\ensuremath {\mathit{t}}{\xspace}_1$ the victim node sends a packet that contains the timestamp $\ensuremath {\mathit{t}}{\xspace}_1$ to a semi-honest or malicious node that receives it at time $\ensuremath {\mathit{t}}{\xspace}_2$, then the latter node can estimate the distance that separates it from the victim node based on $\tau_F = \ensuremath {\mathit{t}}{\xspace}_2 - \ensuremath {\mathit{t}}{\xspace}_1$. This technique is known as \ensuremath {\mathit{ToA}}{\xspace}~ranging~\cite{sithamparanathan2012cognitive,zekavat2011handbook}. \ensuremath {\mathit{ToA}}{\xspace}~needs at least three distance measurements to localize the target via triangulation~\cite{vossiek2003wireless}, which means that a malicious entity cannot precisely localize a target \ensuremath {\mathit{SU}}{\xspace}~unless it is mobile or collaborates with two other malicious entities. \ensuremath {\mathit{TDoA}}{\xspace}, on the other hand, does not rely on the absolute distance between a pair of nodes but rather on the measurement of the difference in time between signals arriving at two base nodes.
\item Received signal strength (\ensuremath {\mathit{RSS}}{\xspace})-based ranging: In theory, the energy of a radio signal decreases with the square of the distance from the signal’s source. As a result, a node listening to a radio transmission should be able to use the strength of the received signal to estimate its distance from the transmitter~\cite{bachrach2005localization}. More details about the practicality of \ensuremath {\mathit{RSS}}{\xspace}-based ranging technique and its feasibility given various factors could be found in~\cite{whitehouse2007practical}.
\item Angle of arrival (\ensuremath {\mathit{AoA}}{\xspace})-based ranging: \ensuremath {\mathit{AoA}}{\xspace}~could be defined as the angle between the propagation direction of an incident wave and some reference direction known as orientation~\cite{peng2006angle}. The estimation of \ensuremath {\mathit{AoA}}{\xspace}~could be done using directive antennas or using an array of uniformly separated receivers~\cite{boukerche2007localization}. The relative angle could then be used to derive the distance between the two communicating nodes~\cite{bachrach2005localization}.
\end{itemize}
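As an illustration of how few measurements suffice, the following Python sketch shows \ensuremath {\mathit{ToA}}{\xspace}-style trilateration: three colluding receivers convert time-of-flight into distances and solve the resulting linearized system for the victim's 2-D position. The anchor placements and target location are illustrative assumptions.

```python
def toa_trilaterate(anchors, distances):
    """Locate a transmitter in 2-D from range estimates to three
    anchors (d_i = c * tau_i). Subtracting the first circle equation
    from the other two linearizes the problem into a 2x2 system,
    solved here by Cramer's rule (anchors must not be collinear)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Three colluding receivers at known positions range a victim SU at (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (3.0, 4.0)
dists = [((target[0] - x)**2 + (target[1] - y)**2) ** 0.5 for x, y in anchors]
print(toa_trilaterate(anchors, dists))  # recovers (3.0, 4.0) up to rounding
```

The same solver applies to \ensuremath {\mathit{RSS}}{\xspace}-based ranging: the attacker first converts received power into distance estimates via a path-loss model, then trilaterates exactly as above, only with noisier ranges.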
\textbf{Observations:}
The spectrum sensing measurements and observations that \ensuremath {\mathit{SU}}{\xspace} s share to identify spectrum holes may be another source of location information leakage in \ensuremath {\mathit{CRN}}{\xspace} s. Under a soft combining rule, where \ensuremath {\mathit{SU}}{\xspace} s have to share their raw measurements, \ensuremath {\mathit{SU}}{\xspace} s may see their location information exposed. Indeed, it has been shown in~\cite{fatemieh2011using,li2012location} that sensing reports containing \ensuremath {\mathit{RSS}}{\xspace}~measurements of \ensuremath {\mathit{PU}}{\xspace} s' signals are highly correlated with \ensuremath {\mathit{SU}}{\xspace} s' physical locations. The \ensuremath {\mathit{RSS}}{\xspace}~measurements could be used to localize \ensuremath {\mathit{SU}}{\xspace} s with respect to the \ensuremath {\mathit{PU}}{\xspace} s whose channels are sensed through these measurements. Note that this \ensuremath {\mathit{RSS}}{\xspace}~is different from the \ensuremath {\mathit{RSS}}{\xspace}~discussed previously for wireless signals, which is obtained through direct communication over the wireless medium between the adversary and the target victim. If the \ensuremath {\mathit{CRN}}{\xspace}~uses a hard combining rule for cooperative sensing, \ensuremath {\mathit{SU}}{\xspace} s need only share their binary decision values. This can still leak some information about \ensuremath {\mathit{SU}}{\xspace} s' locations, as it can reveal whether a \ensuremath {\mathit{SU}}{\xspace}~belongs to the coverage area of a \ensuremath {\mathit{PU}}{\xspace}, especially if the activity of that \ensuremath {\mathit{PU}}{\xspace}~is known to the attacker.
\textbf{Identity:}
One cannot talk about a location information leakage if the identity of the target victim is not revealed. Therefore, the identity of the user could also be considered as a source of location information leakage in the way that an attacker can match this identity to a specific location. In other words, if an attacker learns that a \ensuremath {\mathit{SU}}{\xspace}~is located at a specific location but at the same time fails to identify who it is, the location privacy of \ensuremath {\mathit{SU}}{\xspace}~cannot be considered as compromised. So, as long as a \ensuremath {\mathit{SU}}{\xspace}~is anonymous, its location privacy is preserved. In some cases, identity could also give an idea about the location of a \ensuremath {\mathit{SU}}{\xspace}~if combined with publicly known information of this specific \ensuremath {\mathit{SU}}{\xspace}. Take the example of a user whose identity is revealed. Based on this information, an adversary can learn the profession of this user, for instance a doctor that works at a specific hospital. This allows an attacker to estimate the position of the target \ensuremath {\mathit{SU}}{\xspace}~with high probability especially during regular working hours. This shows that the identity could be associated with a specific location of a \ensuremath {\mathit{SU}}{\xspace}.
\textbf{Radio hop count:}
The sensing information needs to be delivered to the appropriate nodes for the final decision, especially in multi-hop \ensuremath {\mathit{CRN}}{\xspace} s, which requires deploying efficient routing protocols. These routing protocols usually rely on hop count information~\cite{chowdhury2009search,youssef2014routing}, and such information turns out to be another potential source of location information leakage~\cite{bachrach2005localization}. Many approaches have been proposed in the literature, especially in the context of wireless sensor networks, to estimate node positions based on the number of hops between pairs of nodes~\cite{niculescu2001ad,rabaey2002robust}.
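As an illustration, a DV-hop-style estimate converts hop counts into distances via an average hop length computed from anchor nodes with known positions. This is a sketch under simplifying assumptions (roughly uniform node density; function names are ours), not the method of any specific cited work.

```python
import math

def avg_hop_distance(anchor_positions, hop_counts):
    """DV-hop correction factor: total inter-anchor distance divided by
    total inter-anchor hop count (anchors are nodes with known positions)."""
    total_dist, total_hops = 0.0, 0
    n = len(anchor_positions)
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = anchor_positions[i], anchor_positions[j]
            total_dist += math.hypot(xi - xj, yi - yj)
            total_hops += hop_counts[i][j]
    return total_dist / total_hops

def estimate_range(hops_to_anchor, hop_size):
    """Estimated node-to-anchor distance: hop count times average hop length."""
    return hops_to_anchor * hop_size
```

The ranges obtained this way can then feed any multilateration step, so leaked hop counts alone already narrow down a node's position.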
\textbf{Clustered network:}
\ensuremath {\mathit{SU}}{\xspace} s may need to form or join different clusters during the spectrum sensing phase in order to improve the overall sensing performance and overhead. Different approaches are proposed in the literature for cluster formation in \ensuremath {\mathit{CRN}}{\xspace} s based on several criteria and metrics, including geographical location, channel availability, signal strength and channel quality~\cite{yau2014clustering}. This clustering could leak information about \ensuremath {\mathit{SU}}{\xspace} s' locations, especially if the clustering criterion is based on the positions of \ensuremath {\mathit{SU}}{\xspace} s or on information correlated with these positions. Chang et al.~\cite{chan2005using} show that clustering information, along with knowledge of the positions of some anchor nodes in wireless sensor networks, can lead to localizing the remaining nodes in the network. The same idea could be exploited in the context of \ensuremath {\mathit{CRN}}{\xspace} s if, for example, some \ensuremath {\mathit{SU}}{\xspace} s are compromised and their locations are known to the adversary. In that case, the adversary can localize the remaining \ensuremath {\mathit{SU}}{\xspace} s. Moreover, if the clusters overlap, localization is further facilitated, as shown by Youssef et al. in~\cite{youssef2006wsn16}.
\textbf{Signal-to-noise ratio ($\boldsymbol{\ensuremath {\mathit{SNR}}{\xspace}}$):}
\ensuremath {\mathit{SU}}{\xspace} s may need to share their measured \ensuremath {\mathit{SNR}}{\xspace} s, with respect to the channels of interest, with other \ensuremath {\mathit{SU}}{\xspace} s to cooperate in forming coalitions and selecting the decision making nodes in ad hoc \ensuremath {\mathit{CRN}}{\xspace} s~\cite{hao2011coalition}. The average \ensuremath {\mathit{SNR}}{\xspace}~of \ensuremath {\mathit{PU}}{\xspace}'s received signal measured at \ensuremath {\mathit{SU}}{\xspace}~$i$ is given by:
\begin{equation}
\overline{\ensuremath {\mathit{SNR}}{\xspace}}_{i,\ensuremath {\mathit{PU}}{\xspace}} = \frac{P_\ensuremath {\mathit{PU}}{\xspace} \cdot\kappa}{d_{\ensuremath {\mathit{PU}}{\xspace},i}^\alpha\cdot \sigma^2}
\label{snr}
\end{equation}
where $P_\ensuremath {\mathit{PU}}{\xspace}$ is the transmission power of \ensuremath {\mathit{PU}}{\xspace}, $\sigma^2$ is the Gaussian noise variance, $\kappa$ is a path loss constant, $\alpha$ is the path loss exponent and $d_{\ensuremath {\mathit{PU}}{\xspace},i}$ is the distance between \ensuremath {\mathit{PU}}{\xspace}~and \ensuremath {\mathit{SU}}{\xspace}~$i$~\cite{saad2009coalitional}. As Equation~\eqref{snr} shows, the \ensuremath {\mathit{SNR}}{\xspace}~value measured by a \ensuremath {\mathit{SU}}{\xspace}~depends on the distance separating it from the corresponding \ensuremath {\mathit{PU}}{\xspace}. This constitutes a source of location information leakage, as this value could be exploited to localize a \ensuremath {\mathit{SU}}{\xspace}, especially when it has to share its \ensuremath {\mathit{SNR}}{\xspace}~with other \ensuremath {\mathit{SU}}{\xspace} s in the same coalition~\cite{kasiri2015privacy}.
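Concretely, Equation~\eqref{snr} can be solved for the distance: $d_{\ensuremath {\mathit{PU}}{\xspace},i} = (P_\ensuremath {\mathit{PU}}{\xspace}\,\kappa / (\overline{\ensuremath {\mathit{SNR}}{\xspace}}\,\sigma^2))^{1/\alpha}$. The sketch below performs this inversion; the parameter values in the usage note are illustrative, not from the cited works.

```python
def distance_from_snr(snr_avg, p_pu, kappa, alpha, sigma2):
    """Invert Eq. (snr): SNR = P_PU * kappa / (d**alpha * sigma^2),
    giving d = (P_PU * kappa / (SNR * sigma^2)) ** (1 / alpha)."""
    return (p_pu * kappa / (snr_avg * sigma2)) ** (1.0 / alpha)
```

An adversary that knows (or estimates) the propagation constants thus recovers the \ensuremath {\mathit{SU}}{\xspace}-to-\ensuremath {\mathit{PU}}{\xspace}~distance directly from a shared \ensuremath {\mathit{SNR}}{\xspace}~value.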
The vulnerabilities and sources of leakage that we have raised previously could lead to serious location privacy risks for \ensuremath {\mathit{SU}}{\xspace} s if exploited by malicious entities in the \ensuremath {\mathit{CRN}}{\xspace}. This leakage could happen in the following scenarios:
\begin{itemize}
\item \textbf{Cooperation and sharing observations:} In order to participate in the cooperation for spectrum sensing, \ensuremath {\mathit{SU}}{\xspace} s need to share their observations of the spectrum either with other \ensuremath {\mathit{SU}}{\xspace} s or with a central entity. Despite the fact that sharing this information considerably improves spectrum sensing performance, it exposes individual \ensuremath {\mathit{SU}}{\xspace} s' observations to other entities in the network. This becomes problematic if, during the sharing process, a malicious entity, external or internal to the network, gains access to these observations, since they could be exploited to compromise \ensuremath {\mathit{SU}}{\xspace} s' location privacy as discussed earlier.
\item \textbf{Dynamism:} Due to the dynamic nature of \ensuremath {\mathit{CRN}}{\xspace} s, \ensuremath {\mathit{SU}}{\xspace} s can leave or join the collaborative spectrum sensing task at any time, making privacy-preserving aggregation techniques designed for static networks, which hide the individual observations of \ensuremath {\mathit{SU}}{\xspace} s, unsuitable for \ensuremath {\mathit{CRN}}{\xspace} s. Indeed, this might allow a malicious entity that is collecting aggregated observations from \ensuremath {\mathit{SU}}{\xspace} s to estimate the individual observations of leaving or joining \ensuremath {\mathit{SU}}{\xspace} s, which, as discussed previously, is a source of location information leakage.
\item \textbf{Node failure:} The location privacy issue here is very similar to the situation of network dynamism. Indeed, if, for some reason, some \ensuremath {\mathit{SU}}{\xspace} s cannot sense the spectrum or fail to share their sensing reports during the cooperation process, the individual observations of these \ensuremath {\mathit{SU}}{\xspace} s could be estimated. Again, these individual observations could be exploited by an adversary for localization purposes.
\item \textbf{User selection:}
User selection is an important step in cooperative spectrum sensing; it aims to select the cooperating \ensuremath {\mathit{SU}}{\xspace} s that maximize the cooperative gain while minimizing the cooperation overhead. \ensuremath {\mathit{SU}}{\xspace} s are selected such that all the sensing reports are informative and uncorrelated, while saving energy by avoiding unnecessary sensing operations~\cite{akyildiz2011cooperative}. This selection could be done through a clustering approach that divides \ensuremath {\mathit{SU}}{\xspace} s into different clusters. Malady et al.~\cite{malady2008clustering} propose several approaches for grouping \ensuremath {\mathit{SU}}{\xspace} s into clusters in distributed \ensuremath {\mathit{CRN}}{\xspace} s to keep bandwidth and power requirements manageable. Their methods are based on different criteria, including \ensuremath {\mathit{SU}}{\xspace} s' positions with respect to a given reference or to \ensuremath {\mathit{PU}}{\xspace}~if \ensuremath {\mathit{PU}}{\xspace}'s position is known. In~\cite{ding2012decentralized}, Ding et al. propose a decentralized clustering-based user selection algorithm that relies on unsupervised learning to group the \ensuremath {\mathit{SU}}{\xspace} s with the best detection performance together to lead the sensing process. As discussed previously, this clustering information could be exploited to localize \ensuremath {\mathit{SU}}{\xspace} s during the cooperative spectrum sensing process. Another way of selecting \ensuremath {\mathit{SU}}{\xspace} s for spectrum sensing, which has recently started to gain interest in the context of \ensuremath {\mathit{CRN}}{\xspace} s, is {\em crowdsourcing}, as explained earlier. Crowdsourcing may, however, give rise to privacy risks, especially in terms of location privacy, as shown by Jin et al.~\cite{jin2016privacy}. The selection process in this case relies on an open call, made by \ensuremath {\mathit{FC}}{\xspace}, for users to contribute to the sensing at a specific location, which makes it easy for \ensuremath {\mathit{FC}}{\xspace}~to associate users with the location of interest.
\end{itemize}
\paragraph{Geolocation database}
\label{sourcesDB}
With this architecture, \ensuremath {\mathit{SU}}{\xspace} s are not anymore required to perform spectrum sensing to learn about spectrum opportunities. Instead, they only need to query a geolocation spectrum database to get the list of available channels in their vicinity. This brings new privacy challenges that are completely different from the ones that emerge from the cooperation in spectrum sensing. In the following, we investigate the different sources of location information leakage that may arise from this specific \ensuremath {\mathit{CRN}}{\xspace}~architecture.
\textbf{Query:}
This is the most direct source of location information, as every \ensuremath {\mathit{SU}}{\xspace}~needs to include its precise location every time it queries \ensuremath {\mathit{DB}}{\xspace}~for available channels. This information is usually sent in plaintext, allowing eavesdroppers to retrieve it. Even if the communication channel between \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{DB}}{\xspace}~is secured against eavesdropping, there remains the risk of a malicious \ensuremath {\mathit{DB}}{\xspace}.
\textbf{List of available channels in the query's response:}
This information could also be used by an adversary to narrow down the locations where a target \ensuremath {\mathit{SU}}{\xspace}~might possibly be. Indeed, knowing which channels are available to a certain \ensuremath {\mathit{SU}}{\xspace}~allows a malicious entity to place this \ensuremath {\mathit{SU}}{\xspace}~relative to the coverage areas of multiple \ensuremath {\mathit{PU}}{\xspace} s, especially if the adversary, \ensuremath {\mathit{DB}}{\xspace}~for example, is aware of these \ensuremath {\mathit{PU}}{\xspace} s' activities and status.
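To illustrate, the sketch below keeps only the candidate positions consistent with the list of available channels in a \ensuremath {\mathit{DB}}{\xspace}~response. It is entirely hypothetical: \ensuremath {\mathit{PU}}{\xspace}~coverage is modeled as simple disks and all names are ours.

```python
import math

def consistent_points(candidates, pus, channels, available):
    """Narrow down an SU's location from a DB response: a candidate point
    is kept iff, for every channel, the reported availability matches
    whether the point lies outside the coverage disks of all active PUs
    on that channel. `pus` is a list of (channel, (x, y), radius)."""
    def channel_free(p, ch):
        return all(math.dist(p, pos) > r for c, pos, r in pus if c == ch)
    return [p for p in candidates
            if all(channel_free(p, ch) == (ch in available) for ch in channels)]
```

Intersecting such constraints over several responses shrinks the feasible region quickly, which is exactly the leakage described above.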
\textbf{Maximum transmit power ($\boldsymbol{MTP}$):} The $MTP$ over a specific spectrum band is included in \ensuremath {\mathit{DB}}{\xspace}'s response to \ensuremath {\mathit{SU}}{\xspace}, and is assigned based on \ensuremath {\mathit{SU}}{\xspace}'s distance from its corresponding \ensuremath {\mathit{PU}}{\xspace}. It is usually calculated as follows:
\begin{equation}
P = \begin{cases} 0, & d \leq r_0 \\ h(d-r_0) , & d > r_0 \end{cases}
\label{mtp}
\end{equation}
where $d$ is the distance between the querying \ensuremath {\mathit{SU}}{\xspace}~and its closest \ensuremath {\mathit{PU}}{\xspace}, $r_0$ is the protected contour radius of the channel of interest and $h(\cdot)$ is a continuous, monotonically increasing function. As Equation~\eqref{mtp} shows, $MTP$ is highly correlated with the distance of \ensuremath {\mathit{SU}}{\xspace}~from \ensuremath {\mathit{PU}}{\xspace}. In situations where \ensuremath {\mathit{PU}}{\xspace} s' positions are publicly known, an attacker could exploit the $MTP$ values that \ensuremath {\mathit{SU}}{\xspace} s receive from \ensuremath {\mathit{DB}}{\xspace}~to infer \ensuremath {\mathit{SU}}{\xspace} s' locations.
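For instance, if $h$ takes the linear form $h(x) = c\,x$ (an assumption made here purely for illustration; Equation~\eqref{mtp} only requires $h$ to be continuous and increasing), the assigned power can be inverted directly:

```python
def distance_from_mtp(power, r0, c):
    """Invert Eq. (mtp) under the hypothetical linear form h(x) = c * x.
    A zero power only reveals that d <= r0; otherwise d = r0 + P / c."""
    if power <= 0:
        return None  # SU lies somewhere inside the protected contour
    return r0 + power / c
```

Since $h$ is monotonically increasing, the same inversion works for any known $h$, not just the linear case.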
These vulnerabilities and sources of leakage could become actual threats when exploited individually or in combination, and this can occur in the following scenarios:
\begin{itemize}
\item \textbf{Querying $\boldsymbol{\ensuremath {\mathit{DB}}{\xspace}}$:} When a \ensuremath {\mathit{SU}}{\xspace}~interacts with \ensuremath {\mathit{DB}}{\xspace}~to learn about spectrum availability, its location can easily be revealed, as it is included in the query. Even if a privacy-preserving scheme prevents \ensuremath {\mathit{DB}}{\xspace}~from retrieving \ensuremath {\mathit{SU}}{\xspace}'s location from its query while still providing it with spectrum availability information in its vicinity, an adversary can still localize \ensuremath {\mathit{SU}}{\xspace}~by exploiting the information included in \ensuremath {\mathit{DB}}{\xspace}'s response, as we discuss next.
\item \textbf{$\boldsymbol{\ensuremath {\mathit{DB}}{\xspace}}$'s response:} \ensuremath {\mathit{DB}}{\xspace}'s response to a \ensuremath {\mathit{SU}}{\xspace}'s query includes information like the list of available channels, and the maximum transmit power over each of those channels. This information could be used as explained earlier by a malicious \ensuremath {\mathit{DB}}{\xspace}~or an external adversary to infer the location of a target \ensuremath {\mathit{SU}}{\xspace}.
\item \textbf{Commitment phase:} Some implementations of the database-based \ensuremath {\mathit{CRN}}{\xspace} s require that a \ensuremath {\mathit{SU}}{\xspace}, upon receiving the response from \ensuremath {\mathit{DB}}{\xspace}, informs \ensuremath {\mathit{DB}}{\xspace}~about the channel that it chooses to operate on. This will make \ensuremath {\mathit{SU}}{\xspace}'s usage information available at least to \ensuremath {\mathit{DB}}{\xspace}. Hence, \ensuremath {\mathit{SU}}{\xspace} s in database-based \ensuremath {\mathit{CRN}}{\xspace} s will be prone to attacks that could exploit the vulnerabilities arising from spectrum utilization information as we explain in Section~\ref{sourcesMobility}.
\end{itemize}
\subsection{Location information leakage in spectrum analysis}
\label{sourcesDec}
This is an important step in the cognition cycle, as it analyzes the information obtained from spectrum sensing to gain knowledge about spectrum holes (e.g. interference estimation and duration of availability). Spectrum analysis usually consists of two major components: spectrum characterization and reconfiguration. In this section, we explain each of these two components and discuss their sources of location information leakage.
\subsubsection{Spectrum characterization}
Available spectrum bands may have different channel characteristics that vary over time. In order to determine the most suitable spectrum band, one needs to characterize these channels. Such a characterization requires the monitoring and observation of the RF environment, as well as the monitoring and awareness of \ensuremath {\mathit{PU}}{\xspace} s activities in these channels~\cite{masonta2013spectrum}.
\paragraph{RF environment characterization}
This process estimates some of the following key parameters to characterize the different spectrum bands.
\begin{itemize}
\item {\em Interference:} It is crucial to estimate and model the interference caused by \ensuremath {\mathit{SU}}{\xspace} s at the primary receiver to derive the permissible power of a \ensuremath {\mathit{SU}}{\xspace}~and ensure coexistence between \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{PU}}{\xspace} s. Rabbachin et al.~\cite{rabbachin2011cognitive} propose a statistical model for aggregate interference generated by \ensuremath {\mathit{SU}}{\xspace} s in a limited or finite region by taking into consideration the shape of the region and the position of \ensuremath {\mathit{PU}}{\xspace}. The interference signal at \ensuremath {\mathit{PU}}{\xspace}~generated by the $i^{th}$ \ensuremath {\mathit{SU}}{\xspace}~is modeled as~\cite{rabbachin2011cognitive}:
\begin{equation}
\label{interference}
I_i = \sqrt{P_I} R_i^{-b} X_i
\end{equation}
where $P_I$ is the interference power at the near-far region limit; $R_i$ is the distance between the $i^{th}$
\ensuremath {\mathit{SU}}{\xspace}~and \ensuremath {\mathit{PU}}{\xspace}; and $X_i$ is the
per-dimension fading channel path gain of the channel from
the $i^{th}$ \ensuremath {\mathit{SU}}{\xspace}~to \ensuremath {\mathit{PU}}{\xspace}.
\item {\em Path loss:} This is closely related to distance and frequency. Path loss increases as the operating
frequency increases, resulting in a decrease in the transmission
range. Increasing the transmission power may be used to compensate for the increased path loss, and hence for the decrease in transmission
range. But this may increase interference at other \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{PU}}{\xspace} s. According to~\cite{rappaport1996wireless}, the average path loss of a channel could be expressed using a path loss exponent $\alpha$. This exponent measures the rate at which the \ensuremath {\mathit{RSS}}{\xspace}~decreases with distance, and its value depends on the specific propagation environment~\cite{mao2007path}. It is also considered as a key parameter in the distance estimation based localization algorithms, where distance is estimated from the \ensuremath {\mathit{RSS}}{\xspace}~\cite{dantu2005robomote}.
\item {\em Channel switching delay:} This is basically the delay introduced by switching from one channel to another. In \ensuremath {\mathit{CRN}}{\xspace} s, the channel switching could be triggered by several events, such as the detection of \ensuremath {\mathit{PU}}{\xspace} s, the return of \ensuremath {\mathit{PU}}{\xspace} s to their channels, and/or the degradation of received \ensuremath {\mathit{QoS}}{\xspace}~in the current channel, as we discuss in Section~\ref{specMobility}.
\item {\em Channel holding time:} This is the expected duration during which \ensuremath {\mathit{SU}}{\xspace} s can occupy a licensed channel before being interrupted.
\item {\em Channel error rate:} This is the fraction of data elements received incorrectly out of the total number of data elements sent during a time interval. This rate may vary depending on the modulation scheme and the interference level of the channel~\cite{masonta2013spectrum}.
\end{itemize}
\paragraph{\ensuremath {\mathit{PU}}{\xspace}~activity modeling}
As spectrum availability depends not only on the RF environment characteristics but also on the activities of \ensuremath {\mathit{PU}}{\xspace} s, it is crucial that \ensuremath {\mathit{PU}}{\xspace}~activities are taken into account when characterizing the spectrum bands. This is essentially done by accounting for how long and how often \ensuremath {\mathit{PU}}{\xspace} s appear on their licensed spectrum bands. Existing approaches adopted for modeling this activity mainly rely on measured data obtained from the numerous spectrum measurement campaigns that have been conducted worldwide to quantify and study the \ensuremath {\mathit{PU}}{\xspace}~spectrum utilization and assess the current status of the spectrum~\cite{chen2016survey,saleem2014primary,hoyhtya2016spectrum}. These measurements are also performed to improve the accuracy of spectrum databases.
Many of these works consider only simple but important statistics of the spectrum occupancy, such as the maximum or the minimum and the average of power levels, the spectrum occupancy, and the duty cycle~\cite{chen2016survey}. These statistics are simple and reliable, but provide an incomplete model of the \ensuremath {\mathit{PU}}{\xspace} s' activities. Other approaches consider more advanced statistical models, such as probability
function models (e.g. CDF and PDF), Markov chains and linear regressions. These measurement-based modeling methods describe the statistical behaviors of the spectrum occupancy as a whole, but do not give the actual state
of the spectrum occupancy, i.e. whether a channel is busy or available.
Some other significant research models the
\ensuremath {\mathit{PU}}{\xspace}~activity as a Poisson process with exponentially distributed
inter-arrivals~\cite{lee2011spectrum,saleem2014primary}. However, such approaches fail to capture the short-term temporal fluctuations or variations exhibited by the \ensuremath {\mathit{PU}}{\xspace}~activity, and do not consider correlations and similarities within the monitored data~\cite{saleem2014primary}.
There are also approaches that try to predict future \ensuremath {\mathit{PU}}{\xspace}~activities and thus locate future spectrum opportunities by using learning techniques and by exploiting the history of spectrum band usage~\cite{masonta2013spectrum,saleem2014primary}. However, the prediction may go wrong resulting in harmful interference to \ensuremath {\mathit{PU}}{\xspace} s.
\paragraph{Sources of location information leakage}
\label{sourcesCharact}
As mentioned earlier, spectrum characterization consists of building knowledge about the radio environment and \ensuremath {\mathit{PU}}{\xspace}~activities. This knowledge, however, could be leveraged, intentionally or not, to leak location information of \ensuremath {\mathit{SU}}{\xspace} s, as discussed next.
\textbf{Interference:} As shown in Equation~\eqref{interference}, the interference is highly correlated with the distance that separates \ensuremath {\mathit{SU}}{\xspace}~from a \ensuremath {\mathit{PU}}{\xspace}. An adversary that has access to the characteristics of the interference caused by \ensuremath {\mathit{SU}}{\xspace} s can exploit this information to estimate that distance.
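A minimal sketch of such an inversion of Equation~\eqref{interference} is given below. It assumes the adversary knows $P_I$ and can approximate the fading gain $X_i$ (e.g. by its mean); the function names are ours.

```python
import math

def interference(p_i, r, b, x):
    """Eq. (interference): I_i = sqrt(P_I) * R_i**(-b) * X_i."""
    return math.sqrt(p_i) * r ** (-b) * x

def distance_from_interference(i_obs, p_i, b, x=1.0):
    """Invert the model for the SU-to-PU distance R_i, assuming the fading
    gain X_i is known or approximated: R_i = (sqrt(P_I) * X_i / I_i)**(1/b)."""
    return (math.sqrt(p_i) * x / i_obs) ** (1.0 / b)
```

In practice $X_i$ is random, so the recovered distance is an estimate whose accuracy depends on how well the fading statistics are known.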
\textbf{Radio environment map ($\boldsymbol{\ensuremath {\mathit{REM}}{\xspace}}$):} This is a widely used method to characterize the spectrum. It is an integrated database that could be deployed in \ensuremath {\mathit{CRN}}{\xspace} s to store information about the radio environment's interference, signal properties, geographical features, spectral regulations, locations and activities of radios, policies of \ensuremath {\mathit{SU}}{\xspace} s and/or service providers, and past
experiences~\cite{zhao2006overhead,zhao2006network}. The main functionality of a \ensuremath {\mathit{REM}}{\xspace}~is the construction of a dynamic interference map for each frequency at each location of interest. This could be done in two different ways: via field measurements or via propagation modeling. In the first approach, a \ensuremath {\mathit{REM}}{\xspace}~collects spectrum measurements from nodes with spectrum sensing capabilities. These nodes could be actual \ensuremath {\mathit{SU}}{\xspace} s or dedicated spectrum sensors~\cite{yilmaz2013radio}. Since it is impractical to have measurements at all times and at all possible locations, \ensuremath {\mathit{REM}}{\xspace}~fuses the collected measurements to estimate the interference level at locations with no measurement data by means of spatial and temporal interpolation~\cite{yilmaz2013radio}.
The field measurement approach is believed to provide the highest accuracy, but at a price: drive tests must be performed whenever changes occur in the radio environment to keep the \ensuremath {\mathit{REM}}{\xspace}~up to date. The second approach, propagation modeling, relies on mathematical models for radio propagation prediction, which allow easy, fast and inexpensive updating of the \ensuremath {\mathit{REM}}{\xspace}. Indeed, whenever there is a change in the radio environment, one only needs to rerun the propagation models with the new parameters to update the \ensuremath {\mathit{REM}}{\xspace}~\cite{zekavat2011handbook}.
A \ensuremath {\mathit{REM}}{\xspace}~differs from the spectrum geolocation database in that it generates a spectrum map by processing the data collected from multiple sources with its cognitive engine, and can therefore easily adapt to dynamic operating environments, whereas \ensuremath {\mathit{DB}}{\xspace}~stores quasi-static information. \ensuremath {\mathit{REM}}{\xspace}~introduces an environment awareness that would be harder to acquire through individual \ensuremath {\mathit{CR}}{\xspace}~capabilities via extensive spectrum analysis. Hence, \ensuremath {\mathit{REM}}{\xspace}~can also be seen as network support turning simple nodes into intelligent ones~\cite{yilmaz2013radio}.
This radio map, in the hands of a malicious entity in the network, could be exploited to localize a querying \ensuremath {\mathit{SU}}{\xspace}~that sends its measurements to the \ensuremath {\mathit{REM}}{\xspace}~manager in order to learn about spectrum availability. One way to exploit this information is the fingerprinting localization technique, which estimates the target position by finding the best-matched pattern, or fingerprint, for the measurement provided by \ensuremath {\mathit{SU}}{\xspace}~within a certain map~\cite{zekavat2011handbook}. Machine learning techniques could be used to build the radio signal map during a training phase and then to compare the online measured \ensuremath {\mathit{RSS}}{\xspace}~to the preconstructed map during the localization phase~\cite{zekavat2011handbook}. The map used for this localization could be the \ensuremath {\mathit{REM}}{\xspace}~itself. Since the \ensuremath {\mathit{REM}}{\xspace}~could be used in a distributed or a centralized manner, either a malicious \ensuremath {\mathit{BS}}{\xspace}~or a malicious \ensuremath {\mathit{SU}}{\xspace}~could exploit it to localize a target \ensuremath {\mathit{SU}}{\xspace}.
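A minimal sketch of such fingerprint matching is shown below: nearest-neighbor search over a hypothetical \ensuremath {\mathit{RSS}}{\xspace}~map (in a real \ensuremath {\mathit{REM}}{\xspace}~the grid, features and distance metric would be richer; names are ours).

```python
import math

def locate_by_fingerprint(measurement, radio_map):
    """Nearest-neighbor fingerprinting: return the map location whose stored
    RSS fingerprint is closest (Euclidean distance) to the reported one.
    `radio_map` maps a location tuple to its RSS fingerprint vector."""
    best_loc, best_d = None, float("inf")
    for loc, fingerprint in radio_map.items():
        d = math.dist(measurement, fingerprint)
        if d < best_d:
            best_loc, best_d = loc, d
    return best_loc
```

The same matching step is what a learning-based scheme performs online after its training phase has built the map.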
\subsubsection{Reconfiguration}
After the channel of choice has been characterized, \ensuremath {\mathit{SU}}{\xspace}'s transceiver parameters have to be reconfigured to adapt to channel conditions and satisfy the \ensuremath {\mathit{QoS}}{\xspace}~requirements and regulatory policies. These parameters include:
\begin{itemize}
\item Transmission power: Controlling this parameter aims to achieve several objectives that include minimizing energy usage, reducing co-channel interference, etc.~\cite{tragos2013spectrum,hoven2005power}.
\item Operating frequency: This parameter represents the capability of \ensuremath {\mathit{SU}}{\xspace} s to reconfigure their central frequency in response to variations in the RF environment.
\item Channel bandwidth: This refers to the width of the spectrum over which a \ensuremath {\mathit{SU}}{\xspace}~operates. It is essential for \ensuremath {\mathit{SU}}{\xspace} s to have variable channel adaptation capabilities to be able to operate in heterogeneous networks.
\item Communication technology: This allows interoperability between different communication technologies such as GSM, LTE, etc.
\end{itemize}
\textbf{Sources of location information leakage:}
Some of the reconfigurable parameters that we have listed could leak some information about \ensuremath {\mathit{SU}}{\xspace} s' location especially if these parameters are controlled in a shared way.
\begin{itemize}
\item {\em Power control}: This process may threaten \ensuremath {\mathit{SU}}{\xspace} s' location privacy. Most existing approaches for power control rely on the {\em signal-to-noise ratio} (\ensuremath {\mathit{SNR}}{\xspace}) or the {\em signal-to-interference-plus-noise ratio} (\ensuremath {\mathit{SINR}}{\xspace}) metric when solving the power control problem~\cite{islam2008joint,ghorbel2015power,hoven2005power,chakchouk2011traffic}. For example, Hoven et al.~\cite{hoven2005power} use local \ensuremath {\mathit{SNR}}{\xspace} s of primary signals measured by \ensuremath {\mathit{SU}}{\xspace} s as a metric to design an effective power control rule. Other works use \ensuremath {\mathit{SINR}}{\xspace}~as a constraint or a requirement to minimize the total transmission power of the \ensuremath {\mathit{CRN}}{\xspace}, as in~\cite{islam2008joint}, or to maximize the spectrum utilization of the \ensuremath {\mathit{CRN}}{\xspace}, as in~\cite{hoang2006maximizing}. Yang et al.~\cite{yang2010optimal} model this problem as a game with a \ensuremath {\mathit{SINR}}{\xspace}-based utility function. Power control might become threatening to the privacy of \ensuremath {\mathit{SU}}{\xspace} s, as information like \ensuremath {\mathit{SNR}}{\xspace}~and \ensuremath {\mathit{SINR}}{\xspace}~is usually correlated with the distance separating a \ensuremath {\mathit{SU}}{\xspace}~from a \ensuremath {\mathit{PU}}{\xspace}. This is problematic especially when the power control process is intended to achieve a system-level goal, like minimizing the total transmission power~\cite{islam2008joint} or maximizing the overall spectrum utilization~\cite{hoang2006maximizing} of the \ensuremath {\mathit{CRN}}{\xspace}. In that case, power control must be performed jointly among \ensuremath {\mathit{SU}}{\xspace} s, in a centralized~\cite{islam2008joint,qian2007power} or distributed~\cite{islam2008joint,hoang2006maximizing,yang2010optimal,qian2007power} way, thereby exposing local \ensuremath {\mathit{SNR}}{\xspace}~and \ensuremath {\mathit{SINR}}{\xspace}~values to other \ensuremath {\mathit{CRN}}{\xspace}~entities or intruders and putting \ensuremath {\mathit{SU}}{\xspace} s' location information at risk.
\end{itemize}
\subsection{Location information leakage in spectrum sharing}
\label{sourcesSharing}
Multiple \ensuremath {\mathit{SU}}{\xspace} s may try to access the same spectrum bands at the same time, thus necessitating multiple-access coordination mechanisms that allow multiple \ensuremath {\mathit{SU}}{\xspace} s to share the same spectrum~\cite{jiang2015effective}. Spectrum sharing then consists of enabling the coexistence of multiple \ensuremath {\mathit{SU}}{\xspace} s while avoiding interference (among \ensuremath {\mathit{SU}}{\xspace} s themselves as well as between \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{PU}}{\xspace} s) and maintaining some target \ensuremath {\mathit{QoS}}{\xspace}~levels. Broadly speaking, this functionality is composed of three elements: resource allocation, spectrum access and spectrum trading.
\subsubsection{Resource allocation}
Enabling dynamic spectrum sharing is crucial to the success of \ensuremath {\mathit{CRN}}{\xspace} s. It allows users to select, use, and share spectrum bands adaptively with the aim of maximizing the overall spectrum utilization efficiency while not causing harmful interference to legacy users~\cite{tragos2013spectrum,nie2007game,hamdaoui2009adaptive,peng2006utilization,khalfi2015dynamic}. In this section, we discuss two resource allocation functions: {\em spectrum selection and assignment} and {\em power control and beamforming}.
\paragraph{Spectrum selection and assignment}
\label{selectionassignment}
Once the spectrum holes are analyzed and characterized, the most suitable channel is selected based on the \ensuremath {\mathit{QoS}}{\xspace}~requirements of \ensuremath {\mathit{SU}}{\xspace} s, as well as the characteristics of the channels~\cite{peng2006utilization,ghorbel2014resources,tragos2013spectrum}. Several criteria may be taken into account while assigning spectrum bands to \ensuremath {\mathit{SU}}{\xspace} s. These include minimizing interference to \ensuremath {\mathit{PU}}{\xspace} s, maximizing overall spectrum efficiency, maximizing \ensuremath {\mathit{SU}}{\xspace} s' throughput, minimizing network delay, and increasing network connectivity, just to name a few~\cite{khalfi2015distributed,tragos2013spectrum}.
Spectrum assignment could be done in a centralized or a distributed way, and many approaches of both kinds have been proposed to address the spectrum assignment and selection problem in \ensuremath {\mathit{CRN}}{\xspace} s~\cite{tragos2013spectrum,nie2007game,ehsan2012radio,hamdi2015implementation,nie2006adaptive,bkassiny2013survey,alsaleh2011enabling,tan2012channel}. Generally speaking, these approaches are based on one of four mature concepts: graph theory, game theory, learning and adaptation, and optimization theory. Next, we explore these four concepts and investigate the sources of location information leakage that may arise from using them.
\subparagraph{Graph theory}
Graph theory has been extensively used to address the spectrum assignment problem especially when the structure of the \ensuremath {\mathit{CRN}}{\xspace}~is assumed to be known a priori~\cite{tragos2013spectrum}. Here the network is modeled as a graph, where the vertices usually represent \ensuremath {\mathit{SU}}{\xspace} s and the edges model the connection between these \ensuremath {\mathit{SU}}{\xspace} s. To solve the graph-based spectrum assignment problem, network conflict graphs and graph coloring are widely used~\cite{tragos2013spectrum}.
\begin{itemize}
\item Network conflict graph: This models and captures the interference among \ensuremath {\mathit{SU}}{\xspace} s caused by concurrent transmissions of nearby \ensuremath {\mathit{SU}}{\xspace} s communicating on the same or neighboring channels~\cite{tragos2013spectrum}.
The vertices of the graph represent the communication links among \ensuremath {\mathit{SU}}{\xspace} s, whereas the edges represent the pairs of links whose concurrent communications interfere with one another when assigned the same or adjacent spectrum bands~\cite{tragos2013spectrum,teotia2015conflict,peng2006utilization}. Conflict graphs are mostly used in centralized topologies, where a central entity (\ensuremath {\mathit{BS}}{\xspace}~or \ensuremath {\mathit{FC}}{\xspace}) constructs the graph and uses it to assign channels among \ensuremath {\mathit{SU}}{\xspace} s.
\item Graph coloring: In this approach, the \ensuremath {\mathit{CRN}}{\xspace}~is mapped to a graph that could be either directed or undirected depending on the algorithm's characteristics. The vertices in this graph represent \ensuremath {\mathit{SU}}{\xspace} s that need to share the spectrum, and the edges model the interference between \ensuremath {\mathit{SU}}{\xspace} s. \ensuremath {\mathit{PU}}{\xspace} s could also be included in the graph with pre-assigned colors. The spectrum assignment problem using graph coloring is equivalent to coloring each vertex (or edge) using different colors from a specific set of colors, each often representing an available spectrum band~\cite{tragos2013spectrum,yang2009historical,peng2006utilization}. The goal is to improve spectrum efficiency by increasing frequency reuse while meeting interference constraints by ensuring that two connected vertices (\ensuremath {\mathit{SU}}{\xspace} s) cannot be assigned the same color, i.e. the same band.
\end{itemize}
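To make the graph-coloring idea concrete, the following is a minimal Python sketch (an illustration with a made-up conflict-graph encoding, not any of the cited algorithms): each vertex (\ensuremath {\mathit{SU}}{\xspace}) greedily receives the first channel not already taken by an interfering neighbor.

```python
def greedy_spectrum_coloring(conflicts, channels):
    """Greedy vertex coloring for spectrum assignment.

    conflicts: dict mapping each SU to the set of SUs it interferes with.
    channels:  ordered list of available bands (the "colors").
    Returns {su: channel}, with None when no channel is feasible."""
    assignment = {}
    for su in sorted(conflicts):                      # deterministic order
        taken = {assignment.get(nb) for nb in conflicts[su]}
        assignment[su] = next((c for c in channels if c not in taken), None)
    return assignment

# A interferes with B, and B with C: A and C may reuse the same band.
conflicts = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(greedy_spectrum_coloring(conflicts, ["ch1", "ch2"]))
# → {'A': 'ch1', 'B': 'ch2', 'C': 'ch1'}
```

Greedy coloring is fast but not optimal; the cited works typically add interference weights and fairness objectives on top of this basic scheme.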
\textbf{Sources of location information leakage:} We identify two main sources of leakage that arise from graph-based approaches during the spectrum selection process: the topology and the connectivity information.
\begin{itemize}
\item {\em Topology:} The topology of the network, which could be learned via the graph-based spectrum assignment techniques, could be exploited to infer \ensuremath {\mathit{SU}}{\xspace} s' locations. In fact, some works have already used this information to localize nodes in wireless sensor networks~\cite{priyantha2003anchor,wymeersch2009cooperative}.
\item {\em Connectivity:} This information basically tells which nodes are located within each other's transmission range (i.e., connected to one another). Many approaches have used this information for positioning purposes~\cite{shang2003localization,shang2004localization,lederer2009connectivity,wang2009connectivity} and some of them can be used to localize target nodes even from connectivity information among the nodes themselves only~\cite{shang2003localization,shang2004localization}.
\end{itemize}
\subparagraph{Game theory}
Game theory has also been extensively used to solve the spectrum assignment problem in \ensuremath {\mathit{CRN}}{\xspace} s~\cite{wang2010game,nie2007game,nie2006adaptive}.
A game could be seen as a way of interaction between multiple players competing with each other while trying to adjust their strategies to optimize their utilities~\cite{hossain2009dynamic}.
Game theory is suitable for the spectrum assignment problem in \ensuremath {\mathit{CRN}}{\xspace} s as the spectrum allocation decision of one \ensuremath {\mathit{SU}}{\xspace}~has a direct impact on the performance of other neighboring \ensuremath {\mathit{SU}}{\xspace} s~\cite{tragos2013spectrum}.
Spectrum selection games in \ensuremath {\mathit{CRN}}{\xspace} s~usually consist of three components: the players, which represent \ensuremath {\mathit{SU}}{\xspace} s and may include \ensuremath {\mathit{PU}}{\xspace} s, the action space, and the utility function(s). Each player has a set of actions, representing the available frequency bands. The action space is the Cartesian product of the sets of actions of all players. Each player has a utility function that is used to translate the action space into real-world needs, e.g. the frequency bands that meet \ensuremath {\mathit{SU}}{\xspace}'s~requirements~\cite{tragos2013spectrum}. The goal of the game is to maximize each \ensuremath {\mathit{SU}}{\xspace}'s utility function, taking into consideration the impact of each \ensuremath {\mathit{SU}}{\xspace}'s decisions on the other players. For games with specific characteristics, there is always a steady-state solution (i.e., a Nash equilibrium), at which any unilateral strategy change by a player leads to a lower utility for that player~\cite{tragos2013spectrum,wang2010game}.
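As an illustration of how such a game can reach a Nash equilibrium, the sketch below runs best-response dynamics on a toy congestion game of our own devising (not a scheme from the cited works): each \ensuremath {\mathit{SU}}{\xspace}'s utility is the capacity of its chosen channel split among the \ensuremath {\mathit{SU}}{\xspace} s using it, and players keep deviating until no unilateral change improves anyone's utility.

```python
def best_response_dynamics(capacities, n_sus, max_rounds=100):
    """Each SU picks the channel maximizing capacity / (co-users + 1,
    counting itself). Iterate until no SU wants to deviate, i.e. a
    Nash equilibrium of this congestion game is reached."""
    choice = [0] * n_sus                        # everyone starts on channel 0
    for _ in range(max_rounds):
        changed = False
        for su in range(n_sus):
            def utility(ch):
                others = sum(1 for j, c in enumerate(choice) if j != su and c == ch)
                return capacities[ch] / (others + 1)
            best = max(range(len(capacities)), key=utility)
            if utility(best) > utility(choice[su]):
                choice[su], changed = best, True
        if not changed:
            break
    return choice

# Three SUs, two channels of capacity 10 and 5: at equilibrium two SUs
# share the wide channel and one takes the narrow one.
print(best_response_dynamics([10.0, 5.0], 3))
```

Best-response dynamics is guaranteed to converge for congestion games like this one; for general games, convergence depends on the game's structure.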
\textbf{Sources of location information leakage:}
Games may require that \ensuremath {\mathit{SU}}{\xspace} s share their channel selection decisions among one another. This information, just like spectrum availability information, could be used for \ensuremath {\mathit{SU}}{\xspace}~localization. In fact, it reveals the list of channels that a \ensuremath {\mathit{SU}}{\xspace}~may be interested in using, i.e. the list of available channels in its vicinity. Sharing this list with other \ensuremath {\mathit{SU}}{\xspace} s may put \ensuremath {\mathit{SU}}{\xspace}'s own privacy at risk, as this information could be used by an adversary to estimate its position, especially if this adversary has a global knowledge of the \ensuremath {\mathit{CRN}}{\xspace}.
\subparagraph{Learning and adaptation}
\ensuremath {\mathit{CRN}}{\xspace} s employ software-defined radios, which are capable of executing complex computational tasks through a specialized software module called the cognitive engine~\cite{huang2010modeling,bkassiny2013survey}. This engine has learning capabilities that allow \ensuremath {\mathit{SU}}{\xspace} s to make spectrum selection decisions and perform tasks in a distributed manner by only relying on what \ensuremath {\mathit{SU}}{\xspace} s learn from the environment~\cite{bahrak2013security,bkassiny2013survey}. This is usually done by means of machine learning techniques, which have recently attracted significant attention in the context of \ensuremath {\mathit{CRN}}{\xspace} s~\cite{noroozoliaee2013efficient,clancy2007applications,oliaee2013adaptive}. For example, in~\cite{baldo2009neural}, the authors propose a cognitive engine based on an artificial neural network (ANN) that learns how environmental measurements and the status of the network affect the \ensuremath {\mathit{CRN}}{\xspace}~performance on different channels. Based on this, the cognitive engine can dynamically select the channel expected to yield the best performance for \ensuremath {\mathit{SU}}{\xspace} s.
Li et al.~\cite{li2009multi} use a multi-agent Q-learning approach, a model-free type of reinforcement learning, to address the problem of channel selection in multi-user and multi-channel \ensuremath {\mathit{CRN}}{\xspace} s.
Each \ensuremath {\mathit{SU}}{\xspace}~considers both the channel and the other \ensuremath {\mathit{SU}}{\xspace} s as its environment, updates its Q values continuously, and uses the Q-table to select the best channel.
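To illustrate the flavor of such learning, the sketch below is a deliberately simplified, single-user and stateless variant (not Li et al.'s multi-agent scheme): the agent learns one Q value per channel from binary access rewards, where the channels' idle probabilities are hypothetical values unknown to the learner.

```python
import random

def q_learning_channel_selection(idle_prob, episodes=5000, alpha=0.1,
                                 epsilon=0.1, seed=0):
    """Stateless Q-learning: the action is the channel to access, the
    reward is 1 on a successful (idle) access and 0 otherwise.
    idle_prob[ch] is the idle probability of channel ch, used only to
    simulate the environment."""
    rng = random.Random(seed)
    q = [0.0] * len(idle_prob)
    for _ in range(episodes):
        if rng.random() < epsilon:              # explore a random channel
            ch = rng.randrange(len(q))
        else:                                   # exploit current estimates
            ch = max(range(len(q)), key=q.__getitem__)
        reward = 1.0 if rng.random() < idle_prob[ch] else 0.0
        q[ch] += alpha * (reward - q[ch])       # stateless Q update
    return q

q = q_learning_channel_selection([0.2, 0.9, 0.5])
print(max(range(3), key=q.__getitem__))  # index of the channel the learner prefers
```

With an epsilon-greedy policy the Q values track the channels' idle probabilities, so the learner ends up exploiting the most reliable channel.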
NoroozOliaee et al.~\cite{noroozoliaee2013efficient,noroozoliaee2011achieving} derive new private objective functions suitable for supporting elastic traffic that can be used by learning algorithms to enable cognitive users to locate and exploit
unused spectrum opportunities in a distributed manner while maximizing their received throughput.
These same authors also derive learning-based objective functions for the inelastic traffic model with non-cooperative~\cite{hamdaoui2012coordinating,hamdaoui2011aligning} and cooperative~\cite{noroozoliaee2012maximizing,noroozoliaee2011distributed} users.
Yau et al.~\cite{yau2009context} propose a context-aware and intelligent dynamic channel selection scheme that enables \ensuremath {\mathit{SU}}{\xspace} s to adaptively select channels for data transmission to enhance QoS.
\textbf{Sources of location information leakage:} The learning process may also lead to some location information leakage. This is mainly due to:
\begin{itemize}
\item {\em Environmental measurements:} In centralized \ensuremath {\mathit{CRN}}{\xspace} s, the learning agent, usually \ensuremath {\mathit{FC}}{\xspace}, needs to collect environmental measurements during the training phase~\cite{baldo2009neural} to be able to select the best channels for secondary transmissions. In the case of distributed \ensuremath {\mathit{CRN}}{\xspace} s, the learning process involves multiple agents, which often need to exchange measurement information among themselves. As we have shown previously, this information, when shared among the different \ensuremath {\mathit{CRN}}{\xspace}~entities, may reveal significant information about \ensuremath {\mathit{SU}}{\xspace} s' location.
\item {\em Activity prediction:} Prediction strategies through machine learning techniques could also be used to predict both \ensuremath {\mathit{PU}}{\xspace}~and \ensuremath {\mathit{SU}}{\xspace}~activities based on past measurements and experience~\cite{akter2008modeling,xing2013spectrum}. This can allow a malicious entity to predict which channels a \ensuremath {\mathit{SU}}{\xspace}~might be using in the future. Combining this information with the learned activity model of \ensuremath {\mathit{PU}}{\xspace} s and their coverage areas, it becomes possible to predict a \ensuremath {\mathit{SU}}{\xspace}'s location, just as explained in Section~\ref{sourcesMobility}.
\end{itemize}
\subparagraph{Optimization theory}
Optimization techniques (e.g., convex optimization, linear programming, non-linear programming, etc.) have also been widely used to solve the spectrum assignment problem in \ensuremath {\mathit{CRN}}{\xspace} s. For instance, Tan et al.~\cite{tan2012channel} formulate the channel assignment problem as an integer optimization problem with the aim of maximizing throughput, and propose two greedy non-overlapping and overlapping channel assignment algorithms to solve it. Bkassiny et al.~\cite{bkassiny2010optimal} model the channel assignment problem as a weighted bipartite graph, where \ensuremath {\mathit{PU}}{\xspace} s and \ensuremath {\mathit{SU}}{\xspace} s constitute the two disjoint sets of vertices in the bipartite graph. The authors use the well-known Hungarian method~\cite{kuhn1955hungarian} to solve this problem in polynomial time. Ding et al.~\cite{ding2010distributed} formulate the joint spectrum and power allocation problem as a convex optimization problem, and propose a distributed algorithm to solve it.
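The assignment problem that the Hungarian method solves can be stated compactly. The brute-force sketch below (our own illustration, viable only for tiny instances; the Hungarian method finds the same answer in polynomial time) searches all one-to-one \ensuremath {\mathit{SU}}{\xspace}-to-channel mappings for the maximum-weight one, with a made-up throughput matrix.

```python
from itertools import permutations

def best_assignment(weights):
    """Exhaustively find the SU-to-channel assignment maximizing total
    weight, where weights[su][ch] is e.g. the achievable throughput of
    SU `su` on channel `ch`. Brute force over all n! permutations
    stands in for the polynomial-time Hungarian method."""
    n = len(weights)
    best_perm, best_val = None, float("-inf")
    for perm in permutations(range(n)):         # perm[su] = assigned channel
        val = sum(weights[su][perm[su]] for su in range(n))
        if val > best_val:
            best_perm, best_val = perm, val
    return list(best_perm), best_val

# 3 SUs x 3 channels: entry [su][ch] is a hypothetical throughput.
weights = [[4, 1, 3],
           [2, 0, 5],
           [3, 2, 2]]
print(best_assignment(weights))  # → ([0, 2, 1], 11)
```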
Ben Ghorbel et al.~\cite{ghorbel2016distributed,ghorbel2014distributed} also propose two-phase optimization heuristics for the joint allocation of spectrum and power resources. Their heuristics split the spectrum and power allocation problem into two sub-problems and solve each of them separately: the spectrum allocation problem is solved during the first phase using learning, whereas the power allocation is formulated as a continuous optimization problem and solved, during the second phase, by traditional optimization solvers.
Salameh et al.~\cite{salameh2011throughput} formulate the joint rate/power control and channel assignment problem as a mixed-integer program with the aim of maximizing the sum-rate achieved by all contending \ensuremath {\mathit{SU}}{\xspace} s over all available spectrum opportunities. Due to the NP-hardness of this problem, they transform it into a binary linear programming problem which they solve in polynomial time. In~\cite{xin2009joint}, the authors formulate the joint QoS-aware admission control, channel assignment, and power allocation problem as a non-linear NP-hard optimization problem. In~\cite{salameh2008distance}, the channel assignment problem is expressed as an Integer Linear Programming (ILP) problem. Due to the complexity of the formulated optimization problems, these approaches rely on heuristics to solve the spectrum assignment problem.
\paragraph{Power control and beamforming}
\label{powerall}
Power control and beamforming are effective methods for mitigating co-channel interference and thus boosting the system capacity. The challenge with power control and beamforming in \ensuremath {\mathit{CRN}}{\xspace} s lies in making sure that \ensuremath {\mathit{SU}}{\xspace} s' transmissions do not cause the received interference at \ensuremath {\mathit{PU}}{\xspace} s to exceed a tolerable limit. In light of this, a number of beamforming and power allocation techniques have been proposed for \ensuremath {\mathit{CRN}}{\xspace} s with various objectives, such as capacity maximization~\cite{zhang2008joint} and transmit power minimization.
For instance, Le et al.~\cite{le2008resource} propose to formulate the joint rate and power allocation problems for the secondary links as optimization problems with both QoS and interference constraints under low network load conditions. This work relies on two popular fairness criteria, namely, the max-min and the proportional fairness criteria. Kim et al.~\cite{kim2008joint} develop joint admission control and rate/power allocation methods subject to QoS and minimum rate requirements as well as maximum transmit power and fairness constraints for \ensuremath {\mathit{SU}}{\xspace} s in MIMO ad hoc \ensuremath {\mathit{CRN}}{\xspace} s.
Zhang et al.~\cite{zhang2008joint} consider beamforming and power allocation jointly for SIMO-MAC, and formulate it as two optimization problems: sum-rate maximization and $\ensuremath {\mathit{SINR}}{\xspace}$ balancing. These problems are solved using a water-filling based algorithm and constraint decoupling techniques. The goal is to obtain the suboptimal power allocation strategy and to maximize the minimal ratio of the achievable $\ensuremath {\mathit{SINR}}{\xspace}$s relative to the target $\ensuremath {\mathit{SINR}}{\xspace}$s of the users in the system under a sum-power constraint. Zheng et al.~\cite{zheng2009robust} propose beamforming designs for a multi-antenna \ensuremath {\mathit{CRN}}{\xspace}~that allow multiple \ensuremath {\mathit{SU}}{\xspace}~transmissions to take place concurrently with the \ensuremath {\mathit{PU}}{\xspace}'s presence, while also achieving $\ensuremath {\mathit{SINR}}{\xspace}$ balancing subject to constraints on the total \ensuremath {\mathit{SU}}{\xspace} s' transmit power and the received interference power at the \ensuremath {\mathit{PU}}{\xspace} s. This is achieved by optimizing the beamforming vectors at the \ensuremath {\mathit{SU}}{\xspace}~transmitter based on imperfect channel state information (CSI).
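The water-filling principle underlying such power allocation schemes can be sketched as follows (a generic textbook version, not the exact algorithm of~\cite{zhang2008joint}): power $p_i = \max(0, \mu - n_i)$ is poured onto channels with noise levels $n_i$ until the power budget is spent, with the water level $\mu$ found here by bisection.

```python
def water_filling(noise, total_power, iters=60):
    """Allocate total_power across channels with noise levels `noise`
    so that p_i = max(0, mu - noise_i) and sum(p_i) equals the budget;
    the water level mu is found by bisection, since the total used
    power is monotonically increasing in mu."""
    lo, hi = min(noise), max(noise) + total_power
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - n) for n in noise)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - n) for n in noise]

# Quieter channels receive more power; the noisiest channel gets none.
p = water_filling([1.0, 2.0, 4.0], 3.0)
print([round(x, 3) for x in p])  # → [2.0, 1.0, 0.0]
```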
\subsubsection{Spectrum access}
Spectrum access of \ensuremath {\mathit{CRN}}{\xspace} s is responsible for the sharing of the spectrum among \ensuremath {\mathit{SU}}{\xspace} s by handling medium contention, interference avoidance, multi-user coexistence, etc.~\cite{de2012survey}.
\paragraph{Access paradigms}
There are three spectrum access paradigms in \ensuremath {\mathit{CRN}}{\xspace} s:
\textbf{Spectrum underlay:}
This paradigm allows \ensuremath {\mathit{SU}}{\xspace} s to transmit concurrently with \ensuremath {\mathit{PU}}{\xspace} s only if doing so generates an amount of interference at the primary receivers that is below some acceptable threshold~\cite{goldsmith2009breaking,kim2008joint}.
\textbf{Spectrum overlay:}
The spectrum overlay paradigm also allows concurrent primary and secondary transmissions, but \ensuremath {\mathit{SU}}{\xspace} s are assumed to have knowledge of certain primary transmission parameters so as to avoid interfering with the primary transmissions. The enabling premise for overlay systems is that \ensuremath {\mathit{SU}}{\xspace} s are allowed to use the spectrum for their own transmissions as long as they are willing to use some of their power to relay some of \ensuremath {\mathit{PU}}{\xspace} s' transmissions~\cite{srinivasa2007cognitive}.
\textbf{Spectrum interweave:}
This paradigm is based on the opportunistic spectrum access idea, which has been one of the main drivers for cognitive radio access. Different from the two previous paradigms, this paradigm does not allow simultaneous secondary and primary transmissions on the same frequency band. Instead, it allows \ensuremath {\mathit{SU}}{\xspace} s to access and use the licensed spectrum only when the spectrum is vacant~\cite{goldsmith2009breaking}.
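For the underlay paradigm, the admissible transmit power follows directly from the interference constraint. A minimal sketch, with made-up gain values and ignoring fading and aggregate-interference subtleties:

```python
def underlay_power_cap(p_max, interference_cap, gains_to_pus):
    """Largest SU transmit power such that (power * channel gain)
    stays below the interference cap at every PU, subject to the
    device's own limit p_max."""
    return min([p_max] + [interference_cap / g for g in gains_to_pus])

# Device limit 1 W, 10 mW interference cap at each PU; the PU with the
# strongest channel gain toward the SU (0.05) is the binding constraint.
print(underlay_power_cap(1.0, 0.01, [0.002, 0.05]))
```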
\paragraph{Spectrum access techniques}
Many \ensuremath {\mathit{MAC}}{\xspace}~protocols have been proposed to coordinate \ensuremath {\mathit{SU}}{\xspace} s' access to and sharing of the available channels and to avoid (or reduce) collisions among users~\cite{zhang2008joint}. Such a coordinated access could be performed in a distributed or a centralized way~\cite{de2012survey}.
These protocols can either be cooperative~\cite{kondareddy2008synchronized,hamdaoui2008mac} in that they require coordination among \ensuremath {\mathit{SU}}{\xspace} s to enable efficient sharing of spectrum and thus improve spectrum utilization, or contention-based~\cite{ma2005dynamic,jia2008hc} in that no coordination is required among users.
In contention-based protocols, cognitive senders and receivers exchange their sensing results through handshaking mechanisms to negotiate which channel they will use for their communications~\cite{de2012survey}. Tan et al.~\cite{tan2012channel} propose an overlapping channel assignment algorithm and design a MAC protocol to resolve the access contention problem when multiple \ensuremath {\mathit{SU}}{\xspace} s attempt to exploit the same available channel. Salameh et al.~\cite{salameh2009mac} propose a contention-based protocol that tries to satisfy QoS constraints by limiting the number of used channels per \ensuremath {\mathit{SU}}{\xspace}.
In coordination-based protocols, each \ensuremath {\mathit{SU}}{\xspace}~shares its channel usage information with its neighbors to increase sensing reliability, and to improve overall system performance~\cite{de2012survey}. For instance, Hamdaoui et al.~\cite{hamdaoui2008mac} propose a coordination-based MAC protocol that adaptively and dynamically seeks and exploits opportunities in both licensed and unlicensed spectra and along both the time and the frequency domains. Zhao et al.~\cite{zhao2005distributed} propose a heterogeneous distributed MAC protocol that permits distributed coordination of local clusters in a multi-hop \ensuremath {\mathit{CRN}}{\xspace}~through a local common channel.
\paragraph{Sources of location information leakage}
The sharing of information during this coordination process, though needed for enabling efficient multiple access, could expose the location information of \ensuremath {\mathit{SU}}{\xspace} s to one another.
\textbf{Sensing outcomes:} Contention-based \ensuremath {\mathit{MAC}}{\xspace}~protocols may require \ensuremath {\mathit{SU}}{\xspace} s to share their sensing outcomes with one another to negotiate their access to the spectrum. However, as we have shown in Section~\ref{coopSources}, these sensing outcomes can potentially leak \ensuremath {\mathit{SU}}{\xspace} s' location information.
\textbf{Channel usage information:} Channel usage information, when shared among \ensuremath {\mathit{SU}}{\xspace} s as in coordination-based MAC protocols, is shown to leak details about their location; this will be discussed later in Section~\ref{sourcesMobility}.
\subsubsection{Spectrum trading}
Spectrum trading could be seen as the economic aspect of spectrum sharing~\cite{maharjan2011economic}. It aims to maximize the revenue of the spectrum owners, i.e. \ensuremath {\mathit{PU}}{\xspace} s, while maximizing the satisfaction of \ensuremath {\mathit{SU}}{\xspace} s~\cite{niyato2008spectrum} that compete for gaining access to the spectrum. Spectrum trading can be done between \ensuremath {\mathit{PU}}{\xspace} s and \ensuremath {\mathit{SU}}{\xspace} s or among \ensuremath {\mathit{SU}}{\xspace} s only~\cite{maharjan2011economic}. It relies mainly on two concepts: Auction theory and market theory. Next, we highlight these two concepts and investigate their sources of leakage.
\paragraph{Auction}
A typical dynamic spectrum auction has three main phases: 1) {\em Spectrum discovery phase:} \ensuremath {\mathit{SU}}{\xspace} s obtain spectrum availability information through one of the spectrum opportunity discovery approaches, explained in Section~\ref{specDisc}, and determine the bid price for each available channel based on its quality. 2) {\em Bidding phase:} each \ensuremath {\mathit{SU}}{\xspace}~submits its bids and its location along with its ID to the auctioneer. 3) {\em Channel assignment phase:} once the auctioneer collects all the bids from \ensuremath {\mathit{SU}}{\xspace} s, it distributes channels among them and charges the winners accordingly~\cite{liu2013location}. Auctions are suitable for situations where the price of the spectrum is undetermined and depends on \ensuremath {\mathit{SU}}{\xspace}'s requirements~\cite{niyato2008spectrum}. Auction-based spectrum sharing for \ensuremath {\mathit{CRN}}{\xspace} s has been studied intensively in the literature (e.g.,~\cite{khaledi2013auction,wang2010spectrum,kasbekar2010spectrum}).
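As a concrete, deliberately simple example of the channel assignment phase, the sketch below runs a single-channel sealed-bid second-price (Vickrey) auction — a standard mechanism, not necessarily the one used in the cited works: the highest bidder wins but pays the second-highest bid, which makes truthful bidding a dominant strategy.

```python
def vickrey_auction(bids):
    """Single-channel sealed-bid second-price auction.
    bids: dict mapping SU id -> bid price.
    Returns (winner, price paid): the highest bidder wins and pays the
    second-highest bid (0 if it was the only bidder)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

print(vickrey_auction({"su1": 3.0, "su2": 5.0, "su3": 4.5}))  # → ('su2', 4.5)
```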
\paragraph{Market theory}
\textbf{Monopoly Market:}
This is the simplest market structure as there is only one seller, i.e. \ensuremath {\mathit{PU}}{\xspace}, in the system. Based on \ensuremath {\mathit{SU}}{\xspace} s' demand, the seller can optimize the trading process to obtain the highest profit~\cite{maharjan2011economic},\cite{tran2015joint,do2014optimal}.
\textbf{Oligopoly Market:}
This is a type of market that lies between full competition and no competition (or monopoly), and is defined in economics as a market with only a small number of firms and with substantial barriers to entry~\cite{hossain2009dynamic}. These firms, or primary service providers, compete with each other independently to achieve the highest profit by controlling the quantity or the price of the supplied commodity, which is the spectrum resource in this case. Unlike the monopoly case, in an oligopoly there are multiple firms that provide the same service, making it necessary for firms to consider each other's strategies~\cite{maharjan2011economic}. The most basic form of oligopoly is duopoly, where only two sellers exist in the market~\cite{tran2015joint,do2014optimal}.
\textbf{Market-equilibrium:}
In this spectrum trading model, the primary service provider or spectrum seller is assumed to be unaware of other service providers, which could be due to the lack of any centralized controller or of information exchange among sellers. This makes the spectrum seller naively set the price according to the spectrum demand of \ensuremath {\mathit{SU}}{\xspace} s. This price reflects the willingness of the spectrum seller to sell its spectrum, which is generally determined by the supply function. On the other hand, the willingness of a \ensuremath {\mathit{SU}}{\xspace}~to buy spectrum is determined by the demand function~\cite{wang2010game}. Market equilibrium aims at finding a price for which the spectrum supply from a primary service provider is equal to the spectrum demand from \ensuremath {\mathit{SU}}{\xspace} s~\cite{hossain2009dynamic}. This price achieves two goals: the spectrum supply of the primary service provider meets all the spectrum demand of \ensuremath {\mathit{SU}}{\xspace} s, and the spectrum market does not have an excess of supply~\cite{wang2010game}.
\paragraph{Sources of location information leakage} \label{sourcesTrading}
Spectrum trading may also introduce some sources of location information leakage as we discuss next.
\textbf{Location information:}
During the bidding phase of a spectrum auction, \ensuremath {\mathit{SU}}{\xspace} s may need to submit their locations to the auctioneer as suggested in~\cite{liu2013location}. This is clearly an obvious source of location information leakage, as it exposes the location information of \ensuremath {\mathit{SU}}{\xspace} s to the auctioneer and to any external adversary that may be eavesdropping on the communications of \ensuremath {\mathit{SU}}{\xspace} s during the auction process.
\textbf{Bid channels:}
\ensuremath {\mathit{SU}}{\xspace} s here need to submit their bids for their channels of interest to the auctioneer (or spectrum broker). An adversary aiming to infer a \ensuremath {\mathit{SU}}{\xspace}'s location can deduce, from the list of channels \ensuremath {\mathit{SU}}{\xspace}~bids for, that \ensuremath {\mathit{SU}}{\xspace}~is located somewhere where all these channels are available. A simple intersection of the availability areas of these channels can easily locate \ensuremath {\mathit{SU}}{\xspace}~\cite{liu2013location}.
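This intersection attack is straightforward to express. In the toy sketch below, channel availability areas are sets of grid-cell ids (entirely made up for illustration), and the adversary simply intersects the areas of all channels the victim bid for.

```python
def candidate_locations(availability, bid_channels):
    """Adversary inference: the SU must lie in a cell where every
    channel it bid for is available, so intersect the availability
    regions (here, sets of grid-cell ids) of those channels."""
    regions = [set(availability[ch]) for ch in bid_channels]
    return sorted(set.intersection(*regions))

# Hypothetical availability areas over a 6-cell grid.
availability = {
    "ch1": {1, 2, 3, 4},
    "ch2": {3, 4, 5},
    "ch3": {4, 6},
}
print(candidate_locations(availability, ["ch1", "ch2", "ch3"]))  # → [4]
```

The more channels the \ensuremath {\mathit{SU}}{\xspace}~bids for, the smaller the intersection, and the tighter the adversary's location estimate.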
\textbf{Bid prices:}
For each channel available for auction, a \ensuremath {\mathit{SU}}{\xspace}~can first evaluate its quality and, depending on the channel's quality, establish a price for it. It then submits its bid for the channel to the broker. These prices are shown to be a potential source of \ensuremath {\mathit{SU}}{\xspace} s' location information leakage~\cite{liu2013location}.
\subsection{Location information leakage in spectrum mobility}
\label{specMobility}
\ensuremath {\mathit{SU}}{\xspace} s communicating on a licensed spectrum band may need to vacate their current band at any time, for instance, due to the return of \ensuremath {\mathit{PU}}{\xspace} s to their licensed band. When this happens, \ensuremath {\mathit{SU}}{\xspace} s need to find and switch their ongoing communications to another vacant band to avoid the disruption of their ongoing transmissions. This is known as {\em spectrum mobility} or {\em spectrum handoff}~\cite{akyildiz2009spectrum}.
There are several events that could trigger spectrum handoff in \ensuremath {\mathit{CRN}}{\xspace} s, and next, we list some of them:
\begin{itemize}
\item {\em \ensuremath {\mathit{PU}}{\xspace}'s return:} Whenever a \ensuremath {\mathit{PU}}{\xspace}~returns to its channel, \ensuremath {\mathit{SU}}{\xspace}~is forced to vacate it and switch to another available one, if any. This initiates the handoff process. Finding a new available channel often requires \ensuremath {\mathit{SU}}{\xspace}~to perform spectrum sensing, making handoff more challenging~\cite{chengyu2013spectrum}.
\item {\em \ensuremath {\mathit{SU}}{\xspace}'s mobility:} Because spectrum availability is location dependent, moving while having an ongoing communication may trigger spectrum handoff, as the current channel may no longer be available in \ensuremath {\mathit{SU}}{\xspace}'s new location~\cite{lee2012spectrum}.
\item {\em Quality degradation:} Spectrum handoff could be triggered by the degradation of the channel quality. It can be triggered when, for example, the \ensuremath {\mathit{QoS}}{\xspace}~level received by \ensuremath {\mathit{SU}}{\xspace}~goes below a certain threshold, forcing it to find and switch to another channel.
\end{itemize}
\subsubsection{Spectrum handoff strategies}
Based on the handoff triggering timing, spectrum handoff techniques could be classified into four categories or strategies: Non-handoff strategy, reactive handoff strategy, proactive handoff strategy, and hybrid handoff strategy~\cite{christian2012spectrum,kumar2015spectrum}. We first explore these different strategies, then we investigate their sources of location information leakage.
\paragraph{Non-handoff strategy} In this strategy, when one of the triggering events for handoff occurs, \ensuremath {\mathit{SU}}{\xspace} s stop transmitting over the current channel and choose not to switch to another channel. Instead, they remain idle until the channel becomes available again~\cite{wang2012modeling}, as introduced in the non-hopping mode of the IEEE 802.22 WRAN standard~\cite{hu2007cognitive}. The performance of this strategy depends on the activities and loads of \ensuremath {\mathit{PU}}{\xspace} s. It causes very little to no \ensuremath {\mathit{PU}}{\xspace}~interference, but the waiting latency to resume secondary transmission could be unpredictably large, as it depends on when \ensuremath {\mathit{PU}}{\xspace}~leaves the spectrum. This strategy is best suited for systems with short \ensuremath {\mathit{PU}}{\xspace}~transmissions~\cite{christian2012spectrum}.
\paragraph{Pure reactive handoff strategy}
In this strategy, the target channel selection and the handoff are performed reactively after a spectrum handoff triggering event occurs~\cite{wang2008spectrum,kumar2015spectrum}. Here, \ensuremath {\mathit{SU}}{\xspace} s need to perform spectrum sensing in order to find the target backup channel to which communication is to be transferred. Several reactive handoff strategy-based approaches are proposed in the literature~\cite{willkomm2005reliable,wang2010modeling}. In general, this strategy has less handoff latency than that of the non-handoff strategy, but has larger latency when compared to the proactive spectrum handoff strategy~\cite{kumar2015spectrum,christian2012spectrum} (described next). The handoff performance of this strategy depends on the accuracy and speed of the spectrum sensing process in identifying a vacant target channel.
\paragraph{Pure proactive handoff strategy}
In this approach, the handoff and the target channel selection are performed proactively before a spectrum handoff triggering event takes place~\cite{song2012prospect,nejatian2013proactive}. \ensuremath {\mathit{SU}}{\xspace} s do so by periodically observing all channels to obtain spectrum usage statistics which allow them to determine the candidate channels for spectrum handoff~\cite{wang2008spectrum}. The selection of the target free channel for future spectrum handoff is usually made based on \ensuremath {\mathit{PU}}{\xspace}~traffic characteristics~\cite{kumar2015spectrum}, where \ensuremath {\mathit{SU}}{\xspace} s can predict \ensuremath {\mathit{PU}}{\xspace}~arrivals in the target spectrum band in advance. Hence, the handoff latency is reduced considerably when compared to the reactive spectrum handoff strategy, which requires taking action after the handoff triggering event takes place. However, if the prediction of \ensuremath {\mathit{PU}}{\xspace}~traffic is inaccurate or if the target backup channel is obsolete, for instance due to being occupied by other \ensuremath {\mathit{SU}}{\xspace} s at handoff time, this could lead to poor handoff performance~\cite{christian2012spectrum}. This makes this strategy best suited to networks with well-modeled \ensuremath {\mathit{PU}}{\xspace}~traffic characteristics.
\paragraph{Hybrid handoff strategy}
This approach combines proactive spectrum sensing with reactive spectrum handoff as suggested by Christian et al.~\cite{christian2012spectrum}. It performs proactive spectrum sensing to decide on the backup target channel in advance and before the handoff is triggered, and makes a reactive handoff decision after the triggering event takes place. Thus, it reduces the handoff latency when compared to the reactive handoff strategy. This hybrid approach could be seen as a tradeoff between reactive and proactive handoff strategies.
\subsubsection{Sources of location information leakage}
\label{sourcesMobility}
Spectrum mobility can also leak some location information about \ensuremath {\mathit{SU}}{\xspace} s, as highlighted next:
{\bf Handoff:}
Recall that a \ensuremath {\mathit{SU}}{\xspace}~utilizing a \ensuremath {\mathit{PU}}{\xspace}~channel is forced to vacate the channel (and possibly switch to another) when the \ensuremath {\mathit{PU}}{\xspace}~returns and reclaims it. In this situation, the \ensuremath {\mathit{PU}}{\xspace}~(and potentially other entities) knows that the \ensuremath {\mathit{SU}}{\xspace}~is located within its coverage area. Handoff can thus leak the location of the \ensuremath {\mathit{SU}}{\xspace}~performing it.
{\bf Spectrum utilization information:}
A \ensuremath {\mathit{SU}}{\xspace}'s spectrum usage history (e.g., sequence of channels \ensuremath {\mathit{SU}}{\xspace}~has used over some period of time) could easily be used to localize \ensuremath {\mathit{SU}}{\xspace}~(or to track it if it is moving). Recall that when a \ensuremath {\mathit{SU}}{\xspace}~is communicating over a \ensuremath {\mathit{PU}}{\xspace}~channel, it means that \ensuremath {\mathit{SU}}{\xspace}~is outside the coverage areas of all ON \ensuremath {\mathit{PU}}{\xspace} s associated with that channel, or inside the area of an OFF \ensuremath {\mathit{PU}}{\xspace}.
Now, for instance, by tracking which channels \ensuremath {\mathit{SU}}{\xspace}~has used over a period of time and by knowing when and which \ensuremath {\mathit{PU}}{\xspace} s are OFF/ON during that time period, an adversary can easily narrow down the area where \ensuremath {\mathit{SU}}{\xspace}~is located by intersecting the areas associated with \ensuremath {\mathit{PU}}{\xspace} s~\cite{gao2013location}.
Spectrum utilization history information could then be a significant source of location information leakage.
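The intersection attack can be illustrated with a toy grid model. The grid abstraction, data structures, and example values below are hypothetical and only meant to show how each observed transmission rules out cells covered by an ON \ensuremath {\mathit{PU}}{\xspace}:

```python
def candidate_cells(all_cells, pus, usage_log):
    """pus: list of dicts with keys 'channel', 'coverage' (set of cells)
    and 'state' (dict mapping time -> 'ON'/'OFF').
    usage_log: list of (time, channel) pairs the SU transmitted on.
    Each transmission implies the SU was outside every ON PU's coverage
    for that channel at that time; intersecting these constraints
    shrinks the set of cells where the SU can be."""
    candidates = set(all_cells)
    for t, ch in usage_log:
        forbidden = set()
        for pu in pus:
            if pu["channel"] == ch and pu["state"].get(t) == "ON":
                forbidden |= pu["coverage"]
        candidates -= forbidden
    return candidates

pus = [
    {"channel": "A", "coverage": {0, 1, 2}, "state": {1: "ON"}},
    {"channel": "B", "coverage": {2, 3, 4}, "state": {2: "ON"}},
]
# Two observed transmissions leave only cells 5-8 as possible locations.
where = candidate_cells(range(9), pus, [(1, "A"), (2, "B")])  # -> {5, 6, 7, 8}
```

The more usage history the adversary observes, the smaller the candidate set becomes, which is exactly why utilization history is a significant leakage source.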
{\bf Sensing reports:}
Before handoff, a \ensuremath {\mathit{SU}}{\xspace}~may need to sense the spectrum to identify a new target channel (using one of \ensuremath {\mathit{PU}}{\xspace}~detection techniques identified in Section~\ref{pudetection}). If cooperation is further required to select the appropriate channel for handoff, \ensuremath {\mathit{SU}}{\xspace} s will have to share their sensing reports, which can compromise their location privacy.
Location privacy-preserving protocols should therefore be designed with the objective of hiding information that can leak \ensuremath {\mathit{SU}}{\xspace}'s location during the handoff process and also reducing, as much as possible, the occurrences of handoff events.
\subsection{Summary}
In this section, we identified the sources of location privacy leakage emerging from the different components of \ensuremath {\mathit{CRN}}{\xspace} s, namely, spectrum discovery, spectrum analysis, spectrum sharing, and spectrum mobility. We highlighted the different functionalities of each of these components, and discussed how some of these functionalities can present some vulnerabilities that could be exploited to localize \ensuremath {\mathit{SU}}{\xspace} s. In the next section, we will go over a family of renowned privacy enhancing technologies and generic crypto schemes that we believe are the most relevant to \ensuremath {\mathit{CRN}}{\xspace} s. We will also discuss to which extent these technologies could be applied to design location privacy-preserving protocols that could prevent attacks exploiting the identified vulnerabilities.
\section{Limitations of generic privacy enhancing technologies in CRNs}
\label{limitGeneric}
Location privacy preservation is a mature technology for many wireless systems, such as sensor~\cite{conti2013providing}, vehicular~\cite{wei2010safe,tang2008privacy}, WiFi~\cite{jiang2007preserving},
cellular~\cite{gorlatova2011managing}, and others~\cite{gorlach2005survey}.
Depending on the wireless system and application at hand, location information can be leaked through various means, ranging from wireless signal localization~\cite{jiang2007preserving,conti2013providing} to traffic monitoring and analysis~\cite{xi2006preserving}. For instance, in sensor networks, location information can be inferred by monitoring packet reception times~\cite{xi2006preserving} or
analyzing packet traffic~\cite{jian2007protecting,ozturk2004source} of source nodes.
Countermeasure solutions for these attacks have also been proposed, ranging from introducing randomness to multi-hop path selection~\cite{deng2005countermeasures,ngai2013providing} to making the source nodes move randomly~\cite{xi2006preserving} to confuse the attackers.
Unlike other wireless systems, location privacy preservation that addresses vulnerabilities in \ensuremath {\mathit{CRN}}{\xspace} s has not, however, received much attention, though several works related to spectrum sensing~\cite{kasiri2015privacy,gao2013location,troja2014leveraging,troja2015efficient,li2015agent,zhang2015optimal,li2012location}, spectrum auction bids~\cite{huang2015general,liu2013location}, subscriber identification~\cite{reddy2014method}, and database-driven \ensuremath {\mathit{DSA}}{\xspace}~\cite{gao2013location,zeng2014location,troja2014leveraging,troja2015efficient,li2015agent,zhang2015optimal} have been proposed.
\subsection{Adaptation of existing privacy enhancing technologies}
Existing Privacy Enhancing Technologies (PETs) cannot be directly adapted to \ensuremath {\mathit{CRN}}{\xspace} s. Searchable Encryption (SE) (e.g.,~\cite{DSSE:EfficientUpdate:CCS2014:Hahn,DSSE:Yavuz:SAC:2015,DSSE:MultiKeyword:Fuzzy:Infocom:2014,DSSE:NDSS2014DavidCash,DSSE:SP2014MuhammadNaveed}) and Oblivious Random Access Memory (ORAM) (e.g.,~\cite{ORAM:Goldreich:1996:SPS,Stefanov_TowardPracticalORAM_NDSS12,ORAM:RevisitedPinkas:2010}), which enable a client to outsource its data to a database in encrypted form and still perform search queries on it, cannot, for example, be used as they are in database-driven \ensuremath {\mathit{DSA}}{\xspace}~to enable private spectrum information retrieval. Likewise, cryptographic techniques that enable generic (e.g., Fully Homomorphic Encryption (FHE)~\cite{FHE:IntegerRevisited:2015:Eurocrypt,FHE:overInteger:vanDijk:2010,FHE:Smart:2014:FHESIMD}) or specific (e.g., functional encryption~\cite{FunctionaEnc:ShenShiWaters:PredicatePrivacy:TTC:2009,FunctionalEnc:Garg:CandiateIndistObfus:FOCS:2013}) processing over encrypted data cannot be used off-the-shelf in the \ensuremath {\mathit{CRN}}{\xspace}~context to preserve \ensuremath {\mathit{SU}}{\xspace} s' location privacy while still allowing effective queries to the spectrum database for availability information.
Architectural differences and performance requirements of \ensuremath {\mathit{CRN}}{\xspace} s make direct adaptation extremely ineffective. Privacy-preserving search/access techniques, such as SE or ORAM, are specifically designed for a data outsourcing model~\cite{Curtmola:2006:SSE,DSSE:NDSS2014DavidCash,DSSE:SP2014MuhammadNaveed}, in which a client encrypts {\em its own data} with {\em its private key} and then outsources it to the database. However, in database-driven \ensuremath {\mathit{DSA}}{\xspace}, a third party owns and manages the spectrum database. Therefore, it is impractical for database owners to generate a searchable encrypted copy of the database for each single user (note that the initialization phase of these PETs is highly costly~\cite{DSSE:Yavuz:SAC:2015,ORAM:RevisitedPinkas:2010}). Fully generic techniques such as FHE~\cite{FHE:IntegerRevisited:2015:Eurocrypt,FHE:overInteger:vanDijk:2010} are, on the other hand, extremely costly and therefore impractical for \ensuremath {\mathit{CRN}}{\xspace} s.
That said, there have been several attempts to adapt existing PETs to fit the \ensuremath {\mathit{CRN}}{\xspace}~context. In the case of database-driven \ensuremath {\mathit{DSA}}{\xspace}~for example, the proposed techniques that aim to protect the location information of \ensuremath {\mathit{SU}}{\xspace} s when they are querying databases for spectrum availability information rely on either {\em $k$-anonymity}~\cite{samarati2001protecting,khoshgozaran2009private} or \ensuremath {\mathit{PIR}}{\xspace}~({\em private information retrieval})~\cite{chor1998private,wang2010generalizing}.
{\em $k$-anonymity} approaches (e.g.,~\cite{zhang2015optimal}) essentially rely on a third party, known as the anonymizer, to ensure that the probability of identifying the location of a querying user remains under $1/k$, where $k$ is the size of the anonymity set to be received by the untrusted database (alternatively, the anonymity set can be constructed in a distributed manner, without relying on a third party).
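A minimal sketch of the spatial-cloaking step such an anonymizer might perform, assuming user locations are plane coordinates (the function name and rectangle representation are illustrative assumptions):

```python
def cloak(locations):
    """Given the positions of the k users in the anonymity set,
    return a bounding rectangle ((min_x, min_y), (max_x, max_y))
    that the anonymizer forwards to the untrusted database in place
    of any exact position."""
    xs = [x for x, _ in locations]
    ys = [y for _, y in locations]
    return (min(xs), min(ys)), (max(xs), max(ys))

# With k = 3 users, the database only learns the enclosing rectangle.
region = cloak([(1, 2), (3, 1), (2, 5)])  # -> ((1, 1), (3, 5))
```

A larger $k$ widens the rectangle and improves privacy, but also degrades the usefulness of the spectrum answer for each user, which is precisely the tradeoff discussed next.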
{\em $k$-anonymity} approaches are known to suffer from one major problem: they cannot achieve high location privacy without incurring substantial communication/computation overhead (e.g., higher privacy means higher $k$). In practice, they trade location privacy for lower overhead, or vice versa~\cite{peddinti2011limitations}.
\ensuremath {\mathit{PIR}}{\xspace}-based approaches~\cite{troja2015efficient,gao2013location,troja2014leveraging}, on the other hand, offer much better privacy than $k$-anonymity approaches, but also incur substantial overhead, thus limiting their practical use for \ensuremath {\mathit{CRN}}{\xspace} s~\cite{ghinita2008private}. Proposed approaches relying on these technologies will be discussed in more detail in later sections.
In the remainder of this section, we take a closer look at some of the most known and generic PETs and discuss why they cannot be used off-the-shelf in the context of \ensuremath {\mathit{CRN}}{\xspace} s to protect \ensuremath {\mathit{SU}}{\xspace} s from location inference attacks that exploit the vulnerabilities identified in Section~\ref{sec:sources}. These techniques include {\em homomorphic encryption}, {\em oblivious transfer}, {\em private information retrieval}, {\em data outsourcing-based techniques}, {\em differential privacy}, and {\em secure multiparty computation}.
\subsection{Homomorphic encryption}
Homomorphic encryption is a special form of encryption that allows computations to be performed on ciphertexts. It generates an encrypted result whose decryption matches the result of operations performed on the plaintexts. There are two kinds of homomorphic encryption: full and partial.
\subsubsection{Fully homomorphic encryption}
This is a special type of homomorphic encryption which allows the computation of arbitrary functions on encrypted data without decrypting it. This concept was first introduced by Gentry~\cite{gentry2009fully} and is based on the properties of ideal lattices. Theoretically speaking, this is a very powerful concept as it permits the construction of a program that performs all kinds of operations on the ciphertexts. Since such a program does not need to decrypt its inputs, it can be run by an untrusted party without revealing its inputs and internal state, making it an attractive tool for preserving privacy.
This might seem applicable in the context of \ensuremath {\mathit{CRN}}{\xspace}~to hide, for example, the observations of \ensuremath {\mathit{SU}}{\xspace} s (shown to leak information about \ensuremath {\mathit{SU}}{\xspace} s' location, as discussed in Section~\ref{coopSources}) during the spectrum sensing phase and share them with \ensuremath {\mathit{FC}}{\xspace}~(or other \ensuremath {\mathit{SU}}{\xspace} s) without compromising \ensuremath {\mathit{SU}}{\xspace} s' location privacy.
The main issue, however, with this type of encryption is that it involves high computation and requires large storage, making it impractical. Another major issue is that the search time resulting from using fully homomorphic encryption is linear in the length of the dataset. This again makes it impractical, especially for applications with large datasets like spectrum geolocation databases.
\subsubsection{Partially homomorphic encryption}
A partially homomorphic cryptosystem is an encryption scheme that, unlike fully homomorphic encryption, can only perform either multiplication or addition on the ciphertexts, but not both. Several cryptosystems with homomorphic properties were proposed in the literature. The Paillier cryptosystem~\cite{paillier1999public} is one of the most famous additive homomorphic schemes. Examples of multiplicative homomorphic cryptosystems include ElGamal~\cite{elgamal1984public} and RSA~\cite{rivest1978method}. Thanks to their homomorphic properties, these schemes could be used in situations that require performing some basic operations on sensitive data while hiding user inputs (like when reporting sensing information).
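As an illustration of the additive property, here is a minimal textbook Paillier sketch. The tiny primes are for readability only; a real deployment would use 2048-bit moduli and a vetted library:

```python
import math
import random

def paillier_keygen(p, q):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)           # valid simplification because g = n + 1
    return (n, n + 1), (lam, mu)   # public key, private key

def paillier_encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:     # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def paillier_decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

pk, sk = paillier_keygen(61, 53)   # toy primes, illustration only
# Multiplying ciphertexts adds the underlying plaintexts: 42 + 17 = 59.
c = paillier_encrypt(pk, 42) * paillier_encrypt(pk, 17) % (pk[0] ** 2)
total = paillier_decrypt(pk, sk, c)  # -> 59
```

In a sensing scenario, each \ensuremath {\mathit{SU}}{\xspace}~could encrypt its report under the aggregator's public key, and the aggregator would multiply the ciphertexts to obtain an encryption of the sum without seeing any individual report.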
Partially homomorphic encryption is more practical than fully homomorphic encryption; however, to provide a high security level, these schemes still incur large communication and computation overhead, which makes them impractical for large \ensuremath {\mathit{CRN}}{\xspace} s if not used judiciously.
\subsection{Oblivious transfer}
Oblivious transfer (\ensuremath {\mathit{OT}}{\xspace}) is a privacy enhancing protocol that enables a sender to transfer one of many pieces of data to a receiver, while keeping the sender oblivious as to which piece has been sent and while making sure that the receiver receives only one message. The simplest flavor of this protocol, $1$-$out$-$of$-$2$, was first introduced by Rabin~\cite{rabin2005exchange} and was later generalized to the $1$-$out$-$of$-$n$ and $k$-$out$-$of$-$n$ cases. In the $1$-$out$-$of$-$n$ case, as explained in Figure~\ref{obt}, the sender has $n$ messages and the receiver has an index $i$. The receiver wants to learn the $i^{th}$ message without the sender learning $i$; the sender, in turn, wants the receiver to learn only one of the $n$ messages. This could be thought of as a suitable approach for extracting spectrum availability information from the spectrum \ensuremath {\mathit{DB}}{\xspace}. This approach, however, incurs very large communication and computational overheads, which makes it impractical for a delay-sensitive task like spectrum availability discovery.
\begin{figure}[h!]
\vspace{-2pt}
\center
\includegraphics[width=0.24\textwidth]{ot.pdf}
\caption{\small Oblivious transfer for the case $1$-$out$-$of$-$n$ }
\label{obt}
\end{figure}
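The flow in Figure~\ref{obt} can be sketched with the classic RSA-based construction of Even, Goldreich and Lempel, extended here to the $1$-$out$-$of$-$n$ case. The toy key size and the unpadded, unhashed variant below are illustrative assumptions; the sketch conveys the mechanics but is not secure as written:

```python
import random

# Toy RSA key pair for the sender (real OT needs full-size keys).
P_, Q_ = 61, 53
N = P_ * Q_                           # public modulus
E = 65537                             # public exponent
D = pow(E, -1, (P_ - 1) * (Q_ - 1))   # private exponent

def ot_1_of_n(messages, choice):
    """The sender holds `messages` (integers < N); the receiver obtains
    messages[choice] while the sender never sees `choice`, and the
    receiver cannot unblind any other slot without inverting RSA."""
    # Sender publishes one random value per message slot.
    x = [random.randrange(N) for _ in messages]
    # Receiver blinds its chosen slot with a secret k.
    k = random.randrange(1, N)
    v = (x[choice] + pow(k, E, N)) % N
    # Sender derives a pad for every slot; only pads[choice] equals k.
    pads = [pow((v - xi) % N, D, N) for xi in x]
    cipher = [(m + p) % N for m, p in zip(messages, pads)]
    # Receiver unblinds only its own slot.
    return (cipher[choice] - k) % N

secret = ot_1_of_n([111, 222, 333], 1)  # -> 222
```

Note that the sender performs one private-key exponentiation per slot, which hints at why \ensuremath {\mathit{OT}}{\xspace}~over a large spectrum database is costly.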
\subsection{Private information retrieval (\ensuremath {\mathit{PIR}}{\xspace})}
This concept was first introduced by Chor et al.~\cite{chor1998private}. It allows users to privately retrieve records from a database while preventing the latter from learning which records are being retrieved. \ensuremath {\mathit{PIR}}{\xspace}~could be thought of as a weaker version of $1$-$out$-$of$-$n$ \ensuremath {\mathit{OT}}{\xspace}, which additionally requires that the receiver learn nothing about the other entries in the database.
\ensuremath {\mathit{PIR}}{\xspace}~approaches could be classified into two categories: information-theoretic \ensuremath {\mathit{PIR}}{\xspace}~and computational \ensuremath {\mathit{PIR}}{\xspace}. In the information-theoretic setting, the reconstruction of the client's query is impossible no matter how much computation the adversary performs. A trivial \ensuremath {\mathit{PIR}}{\xspace}~approach could be to download the entire database. This offers information-theoretic, i.e., unbreakable, privacy, but involves enormous communication overhead. As proven by Chor et al.~\cite{chor1998private}, any information-theoretic \ensuremath {\mathit{PIR}}{\xspace}~solution has a communication overhead of at least the size of the database. Fortunately, this applies only to the case where the database is stored on a single server. One way to get around this extensive overhead is to assume that the database is replicated on several servers that do not communicate with each other. This way, a non-trivial information-theoretic \ensuremath {\mathit{PIR}}{\xspace}~solution with communication overhead smaller than the database size becomes feasible. An information-theoretic approach in this model means that an individual database server cannot learn which element was retrieved by the user, no matter how much computation it may perform, as long as it does not collude with the other servers~\cite{cachin1999computationally}. Several approaches proposed in the literature considerably reduce the communication overhead of information-theoretic \ensuremath {\mathit{PIR}}{\xspace}~(e.g.,~\cite{ambainis1997upper}, where the communication cost is $\mathcal{O}(n^{1/(2k-1)})$ with $k$ the number of database servers).
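A classic two-server construction illustrates why replication helps: the client splits its query so that each (non-colluding) server sees only a uniformly random set of indices. The bit-level database model below is a textbook simplification:

```python
import secrets

def xor_select(db_bits, subset):
    """A server's answer: the XOR of the requested database bits."""
    acc = 0
    for i in subset:
        acc ^= db_bits[i]
    return acc

def two_server_pir(db_bits, index):
    """Retrieve db_bits[index] without either server learning `index`:
    each server sees a set of indices that is uniformly random on its own,
    and the two answers XOR to exactly the wanted bit."""
    n = len(db_bits)
    s1 = {i for i in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {index}        # symmetric difference toggles `index`
    return xor_select(db_bits, s1) ^ xor_select(db_bits, s2)

db = [1, 0, 1, 1, 0]         # toy spectrum-availability bitmap
bit = two_server_pir(db, 2)  # -> 1 (== db[2])
```

All bits except `db[index]` appear in both `s1` and `s2` (or in neither) and cancel out under XOR, while `db[index]` appears exactly once, which is why the client recovers it.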
On the other hand, in computational \ensuremath {\mathit{PIR}}{\xspace}~approaches, security is based on well-known cryptographic problems that are believed to be hard, e.g., the discrete logarithm or factoring problems~\cite{menezes1996handbook}. This makes them secure against computationally bounded adversaries, but an adversary with sufficient computational resources can learn the client's query by breaking the underlying security system. Some computational \ensuremath {\mathit{PIR}}{\xspace}~approaches are able to provide poly-logarithmic communication complexity~\cite{cachin1999computationally}. Gentry et al.~\cite{gentry2005single} propose the most communication-efficient \ensuremath {\mathit{PIR}}{\xspace}~scheme, with constant communication overhead.
Even though research on \ensuremath {\mathit{PIR}}{\xspace}~is making progress in terms of reducing the overhead, \ensuremath {\mathit{PIR}}{\xspace}~approaches still suffer from large overhead that limits their practicality and impedes their off-the-shelf use without adaptation in the context of \ensuremath {\mathit{CRN}}{\xspace} s.
\subsection{Data outsourcing-based techniques}
These techniques are designed for applications that require secure data outsourcing, where a client's sensitive data is outsourced to a third-party storage provider, e.g. the cloud. Existing access control
solutions focus mainly on preserving confidentiality of stored data from
unauthorized access and the storage provider. Next, we discuss two well-known data outsourcing-based PETs: {\em searchable symmetric encryption (\ensuremath {\mathit{SSE}}{\xspace})} and {\em oblivious random access memory (\ensuremath {\mathit{ORAM}}{\xspace})}.
\subsubsection{Searchable symmetric encryption (\ensuremath {\mathit{SSE}}{\xspace})}
Searchable symmetric encryption is a PET that is largely deployed to privately outsource one's data to another party while maintaining the ability to selectively search over it~\cite{Curtmola:2006:SSE}. This means that a client needs to outsource its data to a database/server in an encrypted form to be able to later perform private search queries on it as shown in Figure~\ref{sse}.
\begin{figure}[h!]
\vspace{-5pt}
\center
\includegraphics[width=0.33\textwidth]{sse.pdf}
\caption{\small Searchable symmetric encryption}
\label{sse}
\end{figure}
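The core mechanism of Figure~\ref{sse} can be sketched with a deliberately simplified deterministic-token index. Real \ensuremath {\mathit{SSE}}{\xspace}~schemes additionally encrypt the index entries and hide more leakage; the names below are illustrative:

```python
import hashlib
import hmac

def trapdoor(key, keyword):
    """Search token: the server only ever sees this HMAC, not the keyword."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

def build_encrypted_index(key, documents):
    """Client-side: documents maps doc_id -> set of keywords.
    The resulting token -> doc_ids index is what gets outsourced."""
    index = {}
    for doc_id, keywords in documents.items():
        for kw in keywords:
            index.setdefault(trapdoor(key, kw), set()).add(doc_id)
    return index

def server_search(index, token):
    """Server-side: match the opaque token, learning nothing about the word."""
    return index.get(token, set())

key = b"client-secret-key"
index = build_encrypted_index(key, {1: {"tv", "radio"}, 2: {"radio"}})
hits = server_search(index, trapdoor(key, "radio"))  # -> {1, 2}
```

Only the holder of `key` can generate valid trapdoors, which is exactly why the scheme presupposes that the client owns and encrypts the data, the assumption that fails in database-driven \ensuremath {\mathit{DSA}}{\xspace}.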
Despite its efficiency and the high level of privacy that \ensuremath {\mathit{SSE}}{\xspace}~provides, it cannot be applied to database-based \ensuremath {\mathit{CRN}}{\xspace} s simply because in \ensuremath {\mathit{SSE}}{\xspace}, the data has to be outsourced by the client, whereas in database-based \ensuremath {\mathit{CRN}}{\xspace} s, the data about spectrum availability is generated and provided by the service operator that manages the spectrum database. This means that \ensuremath {\mathit{SU}}{\xspace} s have no control over this data and, thus, they cannot encrypt it and outsource it to \ensuremath {\mathit{DB}}{\xspace}~as required by \ensuremath {\mathit{SSE}}{\xspace}.
\subsubsection{Oblivious random access memory (\ensuremath {\mathit{ORAM}}{\xspace})}
Encrypting outsourced data is not sufficient for a user to protect its confidentiality: the user's access pattern to the data remains unprotected and may reveal private information. \ensuremath {\mathit{ORAM}}{\xspace}~was introduced by Goldreich et al.~\cite{ORAM:Goldreich:1996:SPS} not only to preserve data confidentiality but also to hide a user's access pattern to its outsourced data blocks. Traditionally, \ensuremath {\mathit{ORAM}}{\xspace}~arranges the data such that the user never touches the same piece of data twice without an intermediate shuffle. This erases the correlation between block locations and obfuscates the memory accesses, so that access patterns do not leak information about the stored data. Just like \ensuremath {\mathit{SSE}}{\xspace}, \ensuremath {\mathit{ORAM}}{\xspace}~fits only the data outsourcing setting and is thus unsuitable to the context of \ensuremath {\mathit{CRN}}{\xspace} s for the same reasons discussed for \ensuremath {\mathit{SSE}}{\xspace}.
\subsection{Differential privacy}
This is a recent privacy concept tailored to the statistical disclosure control problem which is defined as follows: how to release statistical information about a set of people without
compromising the privacy of any individual~\cite{dwork2006calibrating}. Its goal is to ensure good statistical accuracy while preserving individuals' privacy. It is a well established definition guaranteeing that queries to a database do not reveal too much information about specific individuals who have contributed to the database as suggested in~\cite{groce2011limits}. The formal definition of this concept could be found in~\cite{dwork2006differential}. The basic idea behind it is that for two almost identical input data sets, the outputs of
the mechanism that provides differential privacy are almost identical.
More precisely, it requires that the ratio between the probability that a query returns a value $v$ when applied to a database $\mathcal{D}$ and the probability that it returns the same value when applied to an adjacent database $\mathcal{D'}$ (i.e., $\mathcal{D}$ and $\mathcal{D'}$ differ in at most one entry) be bounded by $e^\epsilon$ for some privacy level $\epsilon$. Since differential privacy is a probabilistic concept, any differentially private mechanism is necessarily random. A typical way to achieve this notion is to add controlled random noise, drawn from a Laplace distribution for instance, to the query output. One benefit of this concept is that a mechanism can be shown to be differentially private independently of any side information that the adversary might have.
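The Laplace mechanism mentioned above fits in a few lines (inverse-CDF sampling; `sensitivity` denotes the maximum change of the query output when one database entry changes, and the function name is our own):

```python
import math
import random

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=random):
    """Release true_answer + Laplace(0, sensitivity/epsilon) noise,
    the standard mechanism achieving epsilon-differential privacy."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5
    if u == -0.5:        # avoid log(0) in the (measure-zero) edge case
        u = 0.0
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_answer + noise

# A count query (sensitivity 1) answered with privacy level epsilon = 0.5.
random.seed(7)
noisy_count = laplace_mechanism(128, 1, 0.5)
```

Smaller $\epsilon$ means a larger noise scale and hence stronger privacy but lower accuracy, the exact tension discussed next.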
However, due to the accumulated noise, standard differential privacy techniques usually perform poorly in situations where participants contribute time-series data that is aggregated and mined for useful information~\cite{rastogi2010differentially}. Examples of time-series data include users' current locations, weather information or information obtained from other participatory sensing applications like spectrum sensing in \ensuremath {\mathit{CRN}}{\xspace} s~\cite{rastogi2010differentially}. Moreover, differential privacy is by nature poorly suited to applications that involve a single user, such as spectrum database-based opportunity discovery, where the location of a single user has to be hidden: it would require that any change in the user's location have a negligible effect on the published output of the query, which makes it impossible to communicate any useful information to the service provider~\cite{andres2013geo}. Despite this, some approaches try to adapt this concept to the context of \ensuremath {\mathit{CRN}}{\xspace} s as we show in Sections~\ref{lpsd} \& \ref{lpoc}.
\subsection{Secure multiparty computation (MPC)}
The concept of secure multiparty computation (\ensuremath {\mathit{MPC}}{\xspace}) originates from the works of Yao~\cite{yao1986generate} and Goldreich et al.~\cite{goldreich1987play}. It allows a group of $n$ mutually distrusting parties $P_1,\ldots,P_n$, holding private inputs $x_1,\ldots,x_n$, to securely compute a joint function $f(x_1,\ldots,x_n)=(y_1,\ldots,y_n)$ on these inputs~\cite{bogetoft2009secure}. The goal is to make each party $P_i$ learn only $y_i$ but nothing else. This could be achieved through an interactive protocol, executed between these parties, whose execution should be equivalent to having a trusted party that privately receives the $x_i$s from the $P_i$s, computes $f$ and returns the $y_i$s to the $P_i$s. This protocol should be able to give the correct result to honest parties even if some parties are dishonest.
In a \ensuremath {\mathit{CRN}}{\xspace}~context, this could be an attractive tool to provide privacy for any task that involves some computation between several entities. For instance, this could be used in distributed cooperative spectrum sensing during the spectrum discovery phase to allow \ensuremath {\mathit{SU}}{\xspace} s to collaborate in order to compute statistics over the sensing reports while preserving the privacy of their reports and thus their location. Another potential use of \ensuremath {\mathit{MPC}}{\xspace}~could be during the coalition formation process, again in the spectrum discovery phase, to prevent leaking \ensuremath {\mathit{SNR}}{\xspace}~values that can compromise \ensuremath {\mathit{SU}}{\xspace} s' location as explained in Section~\ref{coopSources}. \ensuremath {\mathit{MPC}}{\xspace}~could also be used in game theoretical approaches during the spectrum sharing phase to prevent the leakage that can arise from the local decisions shared between different \ensuremath {\mathit{SU}}{\xspace} s during the game. Furthermore, this could be an attractive tool also to protect the bids of \ensuremath {\mathit{SU}}{\xspace} s during the auction process that is performed to ensure spectrum sharing among \ensuremath {\mathit{SU}}{\xspace} s. As explained in Section~\ref{sourcesTrading}, the auction process may leak some information about \ensuremath {\mathit{SU}}{\xspace} s' location which makes it natural to consider leveraging sealed bids or relying on a trusted party for the auction. Ideally, an \ensuremath {\mathit{MPC}}{\xspace}~protocol should be equivalent to a trusted third party; hence, \ensuremath {\mathit{MPC}}{\xspace}~could play this role and replace an untrusted auctioneer as suggested in~\cite{bogetoft2009secure}.
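As a concrete taste of the sensing-aggregation use case, additive secret sharing, one standard \ensuremath {\mathit{MPC}}{\xspace}~building block, lets \ensuremath {\mathit{SU}}{\xspace} s reveal only the sum of their measurements. The integer scaling of readings, the honest-but-curious non-colluding setting, and the modulus choice are assumptions of this sketch:

```python
import random

PRIME = 2 ** 61 - 1  # public modulus, comfortably larger than any sum here

def share(secret, n_parties):
    """Split an integer into n additive shares that sum to it mod PRIME;
    any n-1 shares together look uniformly random and reveal nothing."""
    parts = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    parts.append((secret - sum(parts)) % PRIME)
    return parts

# Three SUs with private (integer-scaled) sensing measurements.
readings = [12, 30, 7]
n = len(readings)
shares = [share(r, n) for r in readings]   # shares[i][j]: SU i -> party j
# Each aggregating party j sums the single share it got from every SU...
partial = [sum(shares[i][j] for i in range(n)) % PRIME for j in range(n)]
# ...and only the public recombination reveals the total, never the inputs.
total = sum(partial) % PRIME               # -> 49
```

Addition is essentially free in this scheme; it is multiplications and comparisons that drive up the interaction cost of general \ensuremath {\mathit{MPC}}{\xspace}~protocols.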
The potential applications of \ensuremath {\mathit{MPC}}{\xspace}~are thus manifold, thanks to its flexibility to emulate multiple scenarios. However, its bottleneck is its extensive computational and communication overhead, which makes its deployment difficult in practice, and in particular in the context of \ensuremath {\mathit{CRN}}{\xspace} s, at least for the time being.
\vspace{-5pt}
\subsection{Summary}
In this section, we explored a family of renowned PETs and generic crypto schemes that we believe are the most relevant to \ensuremath {\mathit{CRN}}{\xspace} s. We highlighted the benefits and limitations of applying these schemes to \ensuremath {\mathit{CRN}}{\xspace}~off-the-shelf as they are. In the following section, we will present and discuss location privacy preservation approaches proposed for protecting location privacy during the spectrum opportunity discovery process. We will explore the different threat models, location inference attacks, and location privacy preserving techniques that are specific to this spectrum discovery component.
\section{Location privacy preservation for spectrum opportunity discovery component}
\label{lpsd}
In this section, we investigate the different approaches proposed in the literature to deal with the location privacy issue in \ensuremath {\mathit{CRN}}{\xspace} s during the spectrum opportunity discovery phase. First, we discuss the challenges that face designing \ensuremath {\mathit{SU}}{\xspace}'s location privacy preserving protocols in both cooperative spectrum sensing and geolocation database-based approaches. Then, we list the different threat models that need to be considered in these two approaches. After that, we detail existing and potential attacks that could be performed by malicious entities to localize \ensuremath {\mathit{SU}}{\xspace} s by exploiting the vulnerabilities that we identified in Section~\ref{specDisc}. Subsequently, we describe existing solutions that are proposed to cope with these attacks and preserve \ensuremath {\mathit{SU}}{\xspace} s' location privacy. Finally, we explain the performance metrics that are or could be used to assess the performance and reliability of location privacy preserving protocols in \ensuremath {\mathit{CRN}}{\xspace} s, and present tradeoffs that are considered when designing these protocols.
\subsection{Location privacy in cooperative spectrum sensing}
\label{CPdiscovery}
As discussed in Section~\ref{coopSources}, the cooperation among \ensuremath {\mathit{SU}}{\xspace} s during the sensing process gives rise to several vulnerabilities that could be exploited to compromise \ensuremath {\mathit{SU}}{\xspace} s' location privacy.
Thus, location privacy preservation protocols for cooperative sensing need to be designed with several goals in mind:
\begin{itemize}
\item {\em Hide sensing information.} As explained in Section~\ref{coopSources}, \ensuremath {\mathit{SU}}{\xspace} s' sensing reports may leak information about their locations~\cite{bhattacharjee2013vulnerabilities}. Hence, one main goal of these protocols is to hide sensing reports by concealing the observed sensing information from decision makers or any potential external attackers that might eavesdrop \ensuremath {\mathit{SU}}{\xspace}'s communications~\cite{li2012location,mao2015protecting,wang2015privacy,grissa2015location,grissa2016efficient}.
\item {\em Achieve accurate spectrum availability information.} Protocols need to preserve the location privacy of \ensuremath {\mathit{SU}}{\xspace} s, but without compromising their ability to still provide accurate spectrum availability information. Achieving this design goal is very challenging,
due to its conflicting nature: hiding information for the privacy protection purpose may limit the ability to provide accurate spectrum availability information.
\item {\em Optimize resource usage.} An important limitation that needs to be accounted for when designing privacy preserving protocols is \ensuremath {\mathit{SU}}{\xspace} s' resource capability. It is then important to design protocols that require minimum computation and storage resources and incur limited communication overheads.
This, for instance, implies that expensive cryptographic approaches are to be avoided.
\item {\em Hide \ensuremath {\mathit{SNR}}{\xspace}~values.} Another goal is to hide the \ensuremath {\mathit{SNR}}{\xspace}~values that \ensuremath {\mathit{SU}}{\xspace} s might need to exchange to form coalitions, for example. As explained in Section~\ref{coopSources}, \ensuremath {\mathit{SNR}}{\xspace}~may leak significant information about \ensuremath {\mathit{SU}}{\xspace} s' location, and thus a reliable location privacy preserving scheme needs to conceal these values without hindering the \ensuremath {\mathit{CRN}}{\xspace}~operations relying on them.
\end{itemize}
\subsubsection{Threat models}
Several threat models are considered in the literature to study and address \ensuremath {\mathit{SU}}{\xspace} s' location privacy issue in cooperative spectrum sensing:
\begin{itemize}
\item {\em Dolev–Yao threat model.} In this model the adversary, usually an intruder, can overhear, intercept, and synthesize any message that is exchanged between \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{FC}}{\xspace}~or even between \ensuremath {\mathit{SU}}{\xspace} s themselves during the cooperative spectrum sensing process. The adversary is only limited by the constraints of the cryptographic methods used~\cite{dolev1983security}. This model is considered in~\cite{grissa2015location,grissa2015cuckoo,grissa2016efficient}.
\item {\em Semi-honest or honest-but-curious threat model.} Under this model the adversary, which could be \ensuremath {\mathit{FC}}{\xspace}~\cite{grissa2015location,grissa2016efficient,li2012location,mao2015protecting}, a \ensuremath {\mathit{SU}}{\xspace}~\cite{grissa2015location,grissa2016efficient} or an additional entity as in~\cite{grissa2016efficient}, follows the sensing protocol honestly without changing any of its parameters, but is interested in learning the location information of target \ensuremath {\mathit{SU}}{\xspace} s by exploiting their sensing reports.
\item {\em Malicious threat model.} Entities in the \ensuremath {\mathit{CRN}}{\xspace}~may be malicious, meaning that \ensuremath {\mathit{FC}}{\xspace}, \ensuremath {\mathit{SU}}{\xspace} s or any other entity involved in the cooperative spectrum sensing process can change their parameters and launch several attacks to localize a target \ensuremath {\mathit{SU}}{\xspace}.
\item {\em Non-collusion threat model.} \ensuremath {\mathit{FC}}{\xspace}, \ensuremath {\mathit{SU}}{\xspace} s and any other entities in the \ensuremath {\mathit{CRN}}{\xspace}~do not collude to infer target \ensuremath {\mathit{SU}}{\xspace} s' location~\cite{grissa2015location,grissa2016efficient}. This means that these entities do not share what they learned about target \ensuremath {\mathit{SU}}{\xspace} s' location during the cooperative spectrum sensing process.
\item {\em Collusion threat model.} \ensuremath {\mathit{FC}}{\xspace}~or some \ensuremath {\mathit{SU}}{\xspace} s may collude with other \ensuremath {\mathit{SU}}{\xspace} s or entities and work together to infer target \ensuremath {\mathit{SU}}{\xspace} s' location~\cite{li2012location,wang2015privacy} by exploiting their sensing reports and communication signals.
\end{itemize}
\subsubsection{Location inference attacks}
\label{attackcoop}
Location inference attacks exploit the vulnerabilities and sources of leakage explained in Section~\ref{coopSources} to localize \ensuremath {\mathit{SU}}{\xspace} s. These attacks could be performed by an internal entity (e.g. another \ensuremath {\mathit{SU}}{\xspace}~or \ensuremath {\mathit{FC}}{\xspace}) or by an external attacker that does not belong to the \ensuremath {\mathit{CRN}}{\xspace}. They can be classified into two categories, based on the information used for localization: geometric localization and fingerprinting.
\paragraph{Geometric localization based attacks}
These attacks exploit channel parameter measurements including \ensuremath {\mathit{RSS}}{\xspace}, \ensuremath {\mathit{SNR}}{\xspace}, \ensuremath {\mathit{AoA}}{\xspace}, \ensuremath {\mathit{ToA}}{\xspace}~and \ensuremath {\mathit{TDoA}}{\xspace}~to localize a target \ensuremath {\mathit{SU}}{\xspace}. \ensuremath {\mathit{RSS}}{\xspace}, \ensuremath {\mathit{SNR}}{\xspace}~and \ensuremath {\mathit{ToA}}{\xspace}~could be used to get the range information, as explained in Section~\ref{coopSources}, which is essential for the trilateration localization technique~\cite{zekavat2011handbook,kasiri2015privacy}. Trilateration is a very simple and intuitive approach that computes the position of a target node by finding the intersection of three circles that model the range with respect to at least three anchor nodes as depicted in Figure~\ref{trilateration}.
\begin{figure}[h!]
\vspace{-2pt}
\center
\includegraphics[width=0.25\textwidth]{trilateration.pdf}
\caption{{\small Localization of an \ensuremath {\mathit{SU}}{\xspace}~via Trilateration using the ranges $d_1$, $d_2$ and $d_3$ corresponding to $\ensuremath {\mathit{PU}}{\xspace}_1$, $\ensuremath {\mathit{PU}}{\xspace}_2$ and $\ensuremath {\mathit{PU}}{\xspace}_3$ respectively.} }
\label{trilateration}
\end{figure}
In the context of \ensuremath {\mathit{CRN}}{\xspace}, the anchor nodes could be three \ensuremath {\mathit{PU}}{\xspace} s whose locations, depending on the situation, could be publicly known. Thus, an attacker that has access to the \ensuremath {\mathit{RSS}}{\xspace} s that a \ensuremath {\mathit{SU}}{\xspace}~measures with respect to three channels could exploit this knowledge to localize \ensuremath {\mathit{SU}}{\xspace}~using trilateration. \ensuremath {\mathit{SNR}}{\xspace}~could also be used in a similar way, as reported in~\cite{kasiri2015privacy}, for ad hoc \ensuremath {\mathit{CRN}}{\xspace} s. The attack can occur during the process of forming coalitions and choosing coalition heads as these operations require exchanging \ensuremath {\mathit{SNR}}{\xspace}~information between \ensuremath {\mathit{SU}}{\xspace} s. Another attack scenario could involve multiple attackers or colluding nodes that belong to the \ensuremath {\mathit{CRN}}{\xspace}~and that have a direct communication with the target node.
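The trilateration computation itself can be sketched in a few lines. The sketch below is illustrative and not taken from any cited scheme: it assumes known 2-D anchor (\ensuremath {\mathit{PU}}{\xspace}) positions and range estimates, and solves the circle equations by subtracting the first one from the others, which yields a linear least-squares problem.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2-D position from >= 3 anchor positions and range
    estimates. Subtracting the first circle equation from the others
    removes the quadratic terms and leaves a linear system A x = b."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three PUs at known positions, exact ranges to an SU at (2, 1).
pus = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
su = np.array([2.0, 1.0])
ranges = [np.linalg.norm(su - np.array(p)) for p in pus]
print(trilaterate(pus, ranges))  # close to [2. 1.]
```

With noisy ranges the same least-squares formulation still applies; the circles then intersect only approximately and the solver returns the best-fit point.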
Triangulation is another technique that exploits channel parameter measurements for localization purposes. It uses angles instead of distances and requires at least two reference nodes to localize the target node~\cite{boukerche2008algorithms}. The two reference nodes measure the \ensuremath {\mathit{AoA}}{\xspace}~of the signal coming from the target node, whose position is the intersection of the two lines along these angles, as in Figure~\ref{triangulation}. Since this attack requires direct communication between the victim and the attackers, the attackers, which are also the reference nodes in this case, must belong to the \ensuremath {\mathit{CRN}}{\xspace}, e.g. two colluding malicious \ensuremath {\mathit{SU}}{\xspace} s.
\begin{figure}[h!]
\center
\includegraphics[width=0.18\textwidth]{Triangulation.pdf}
\caption{\small{ Localization of an \ensuremath {\mathit{SU}}{\xspace}~via Triangulation using the angles of arrivals, \ensuremath {\mathit{AoA}}{\xspace} s, $\theta_1$ and $\theta_2$ of the \ensuremath {\mathit{SU}}{\xspace} 's signal measured respectively at $\ensuremath {\mathit{PU}}{\xspace}_1$ and $\ensuremath {\mathit{PU}}{\xspace}_2$}}
\label{triangulation}
\end{figure}
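The triangulation step can be sketched similarly, under the assumption that both \ensuremath {\mathit{AoA}}{\xspace} s are measured in a common frame (angles from the positive $x$ axis); the target lies at the intersection of the two bearing lines.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect the bearing lines from reference nodes p1 and p2;
    theta1, theta2 are the AoAs of the target's signal (radians)."""
    x1, y1 = p1
    x2, y2 = p2
    a1, b1 = math.cos(theta1), math.sin(theta1)  # direction of line 1
    a2, b2 = math.cos(theta2), math.sin(theta2)  # direction of line 2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("bearing lines are parallel")
    t = ((x2 - x1) * b2 - (y2 - y1) * a2) / det
    return (x1 + t * a1, y1 + t * b1)

# Two PUs at (0,0) and (4,0) measure AoAs of 45 and 135 degrees.
print(triangulate((0.0, 0.0), math.pi / 4, (4.0, 0.0), 3 * math.pi / 4))
# approximately (2.0, 2.0)
```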
Geometric localization attacks may also be performed in \ensuremath {\mathit{CRN}}{\xspace} s that deploy crowdsourcing (explained in Section~\ref{coop}) for spectrum sensing. For instance, Jin et al.~\cite{jin2016privacy} propose an attack scenario that targets the location privacy of participants in the crowdsourcing process. They consider a special setting where these participants compete to perform spectrum sensing tasks at specific locations via a reverse combinatorial auction~\cite{nisan2007algorithmic}. During this auction, participants send bids corresponding to their claimed cost of performing the sensing tasks. This cost, as modeled by the authors, involves the round trip distance that a participant needs to travel to perform the sensing tasks and return to its current location, called the base location, which is the target of the proposed attack. The attack exploits the geometric relationship between users' bids and the distance they travel to perform the sensing.
\paragraph{Fingerprinting based attacks}
\label{fcattacks}
These attacks are better suited to situations where the geometric relationship between \ensuremath {\mathit{SU}}{\xspace} s' positions and measurements cannot be established. Fingerprinting estimates the victim's location by finding the best matching fingerprint for the corresponding measurement within a pre-built RF map. It consists mainly of two phases: an off-line (training) phase and an on-line (test) phase. In the off-line phase, the RF map is generated. This map could be the \ensuremath {\mathit{REM}}{\xspace}~(discussed in Section~\ref{sourcesCharact}) if the attacker is \ensuremath {\mathit{FC}}{\xspace}~or a \ensuremath {\mathit{SU}}{\xspace}~that has access to it, or it could be a map that an external attacker has built by itself. Figure~\ref{fingerprinting} shows a simplified example of how this kind of localization works.
\begin{figure}[h!]
\vspace{-2pt}
\center
\includegraphics[width=0.35\textwidth]{Fingerprinting.pdf}
\caption{{\small Localization of an \ensuremath {\mathit{SU}}{\xspace}~via Fingerprinting using its \ensuremath {\mathit{RSS}}{\xspace}~signature $[\ensuremath {\mathit{RSS}}{\xspace}_1,\ensuremath {\mathit{RSS}}{\xspace}_2,\ensuremath {\mathit{RSS}}{\xspace}_3]$ with respect to $3$ channels and the \ensuremath {\mathit{REM}}{\xspace}~database.}}
\label{fingerprinting}
\vspace{-5pt}
\end{figure}
Li et al.~\cite{li2012location} consider two attacks that rely on this principle to localize a \ensuremath {\mathit{SU}}{\xspace}~based on the \ensuremath {\mathit{RSS}}{\xspace}~measurements that it shares with \ensuremath {\mathit{FC}}{\xspace}~in a centralized \ensuremath {\mathit{CRN}}{\xspace}. They assume that an attacker constructs a signal propagation model by collecting all the sensing reports transmitted within the network~\cite{li2012location}, then uses machine learning techniques, for example a k-means classifier as in~\cite{li2012location}, to partition the \ensuremath {\mathit{RSS}}{\xspace}~data into multiple sets corresponding to various locations. The first attack, called the {\em single report location privacy (SRLP)} attack, involves an external attacker that eavesdrops on \ensuremath {\mathit{SU}}{\xspace} s' communications, or an internal attacker that could be an untrusted \ensuremath {\mathit{FC}}{\xspace}~or a compromised \ensuremath {\mathit{SU}}{\xspace}. Under this attack, the attacker exploits individual \ensuremath {\mathit{RSS}}{\xspace}~measurements of \ensuremath {\mathit{SU}}{\xspace} s to localize them, by computing the distance between each sensing report and the centroids of each cluster in the signal propagation model built beforehand. The second attack, called the {\em differential location privacy (DLP)} attack, estimates the sensing report of a \ensuremath {\mathit{SU}}{\xspace}~during the aggregation process performed by \ensuremath {\mathit{FC}}{\xspace}: the attacker compares the changes in the aggregation results after a \ensuremath {\mathit{SU}}{\xspace}~joins or leaves the \ensuremath {\mathit{CRN}}{\xspace}~and then infers its location by finding which cluster the estimated report belongs to, just like in the {\em SRLP} attack.
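The on-line phase of an SRLP-style attack reduces to nearest-centroid matching. The centroid signatures below are hypothetical, and a plain nearest-centroid rule stands in for the k-means-based propagation model of~\cite{li2012location}:

```python
import numpy as np

# Hypothetical off-line phase output: centroid RSS signatures (over 3
# channels) for four sub-regions, e.g. learned by clustering
# eavesdropped sensing reports.
centroids = np.array([
    [-60.0, -72.0, -80.0],   # region A
    [-75.0, -58.0, -83.0],   # region B
    [-82.0, -79.0, -55.0],   # region C
    [-70.0, -70.0, -70.0],   # region D
])
regions = ["A", "B", "C", "D"]

def srlp_match(report):
    """On-line phase: map an intercepted RSS report to the sub-region
    whose centroid is nearest in Euclidean distance."""
    dists = np.linalg.norm(centroids - np.asarray(report, float), axis=1)
    return regions[int(np.argmin(dists))]

print(srlp_match([-61.0, -73.5, -79.0]))  # region "A"
```

In a DLP attack, the same matching step is applied to a report that the attacker has estimated by differencing two aggregation results rather than intercepted directly.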
It is worth mentioning, however, that even though fingerprinting could be attractive for launching location inference attacks, it is not necessarily practical unless the attacker is powerful and well resourced. This is because the construction of accurate radio maps and fingerprints requires considerable off-line effort and raises several challenges. These include, but are not limited to, the huge number of measurements that need to be taken and the need to regularly update the radio map due to the inherent time-varying nature of wireless channels and networks~\cite{zekavat2011handbook}.
\subsubsection{Location privacy preserving approaches}
As explained in Section~\ref{coop}, \ensuremath {\mathit{SU}}{\xspace} s in cooperative spectrum sensing \ensuremath {\mathit{CRN}}{\xspace} s first need to share their observations either with \ensuremath {\mathit{FC}}{\xspace}~(in centralized \ensuremath {\mathit{CRN}}{\xspace} s) or with other \ensuremath {\mathit{SU}}{\xspace} s (in distributed \ensuremath {\mathit{CRN}}{\xspace} s). These local observations, which could be statistics computed over the signal or just local binary decisions made by each \ensuremath {\mathit{SU}}{\xspace}~individually, are then combined to make a cooperative spectrum availability decision. Both cases present privacy risks to \ensuremath {\mathit{SU}}{\xspace} s, as discussed in Section~\ref{sec:sourcesof}. Thus, research efforts should focus on hiding \ensuremath {\mathit{SU}}{\xspace} s' observations from the other entities in the network. Most of the existing works that we discuss in this section consider the location inference attack from the sensing reports that \ensuremath {\mathit{SU}}{\xspace} s share. We summarize these works in Table~\ref{solCoop} and discuss them in more detail in the following.
{\large
\begin{table*}
\centering
\small
\caption{\small Location privacy preserving schemes in cooperative spectrum sensing}
\label{solCoop}
\resizebox{\textwidth}{!}{%
\renewcommand{\arraystretch}{1.25}{
\begin{tabular}{@{}lp{4cm}lp{5.5cm}p{5.5cm}@{}}
\toprule[1.5pt]
Countermeasures & Attacks Considered & Techniques & Pros & Cons \\ \midrule
Li et al.~\cite{li2012location} & - Location inference from sensing reports (e.g. \ensuremath {\mathit{RSS}}{\xspace}) & \begin{tabular}[c]{@{}p{4cm}@{}}- Privacy preserving aggregation with encryption \\ - Dummy report injection \end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}}- Relatively efficient against differential privacy attacks \end{tabular}& \begin{tabular}[c]{@{}p{5.5cm}@{}}- Very high computational and communication overhead \\ - No fault tolerance \\ - Slightly degrades the sensing performance\end{tabular} \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Grissa et al.~\cite{grissa2015location} &- Location inference from sensing reports (e.g. \ensuremath {\mathit{RSS}}{\xspace}) & \begin{tabular}[c]{@{}p{4cm}@{}}- Private comparisons using Yao's millionaires protocol\\ - Order preserving encryption\end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}}- Low communication overhead\\ - High location privacy\end{tabular} & - Relatively high computational overhead \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Mao et al.~\cite{mao2015protecting} & - Location inference from sensing reports (e.g. \ensuremath {\mathit{RSS}}{\xspace}) & - {\em El Gamal} cryptosystem & - Considers both semi-honest and malicious adversaries & \begin{tabular}[c]{@{}p{5.5cm}@{}} - High communication overhead \\- Prone to {\em DLP} attack\end{tabular} \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Wang et al.~\cite{wang2015privacy} & \begin{tabular}[c]{@{}p{4cm}@{}} - Location inference from sensing reports (e.g. \ensuremath {\mathit{RSS}}{\xspace}) \\- Collusion between service providers \\- Collusion between service providers and \ensuremath {\mathit{SU}}{\xspace} s \end{tabular} & \begin{tabular}[c]{@{}p{4cm}@{}} - Cloaking of sensing reports \\ - Dimension reduction of sensing data through non-invertible projection \end{tabular}& \begin{tabular}[c]{@{}p{5.5cm}@{}} - Considers multiple malicious service providers \\ - Considers collusion between some entities \\ - Provides differential privacy \end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}} - Privacy level decreases with the decrease of service providers \\- Privacy level decreases with the increase of \ensuremath {\mathit{SU}}{\xspace} s\\- Some information distortion during the cloaking process \end{tabular}\\ \addlinespace[5pt] \hline \addlinespace[5pt]
Kasiri et al.~\cite{kasiri2015privacy} & - Location inference from \ensuremath {\mathit{SNR}}{\xspace}~during coalition formation & - Anonymization of \ensuremath {\mathit{SNR}}{\xspace} s & - Takes into account \ensuremath {\mathit{SU}}{\xspace} s' mobility & \begin{tabular}[c]{@{}p{5.5cm}@{}}- Privacy level decreases as the number of sensed channels increases \\- Providing high location privacy degrades sensing performance \end{tabular}\\ \addlinespace[5pt] \hline \addlinespace[5pt]
Grissa et al.~\cite{grissa2016efficient} &- Location inference from sensing reports (e.g. \ensuremath {\mathit{RSS}}{\xspace}) & \begin{tabular}[c]{@{}p{4cm}@{}}- Additional entity in the network\\ - Order preserving encryption\end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}}- Very low communication \& computational overhead\\ - High location privacy\end{tabular} & - Additional entity that needs to be managed by a third party for non-collusion \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Jin et al.~\cite{jin2016privacy} & \begin{tabular}[c]{@{}p{4cm}@{}}- Location inference from sensing cost during reverse auction \\- Location inference from auction result \\ - Location inference from changes in auction participation \end{tabular} & \begin{tabular}[c]{@{}p{4cm}@{}}- Exponential mechanism for differential privacy \end{tabular}& - Offers differential location privacy & - The lower the social cost the higher the location information leakage \\ \bottomrule[1.5pt]
\end{tabular}}}
\vspace{-7pt}
\end{table*}
}
Li et al.~\cite{li2012location} introduce an approach that uses secret sharing and the privacy preserving aggregation process proposed in~\cite{shi2011privacy} to conceal the content of the sensing reports. This scheme also uses dummy report injection to replace the report of a leaving \ensuremath {\mathit{SU}}{\xspace}, in order to cope with the differential location privacy attack (explained in Section~\ref{fcattacks}) and prevent a malicious \ensuremath {\mathit{FC}}{\xspace}~from estimating the sensing report of the leaving \ensuremath {\mathit{SU}}{\xspace}. Moreover, this scheme can withstand collusion attacks involving \ensuremath {\mathit{FC}}{\xspace}~and some compromised \ensuremath {\mathit{SU}}{\xspace} s. Despite its merits, it has several limitations: $(i)$ \ensuremath {\mathit{FC}}{\xspace}~needs to collect all the sensing reports in order to decode the aggregated result; this is not fault tolerant, since some reports may be missing due, for example, to the unreliable nature of wireless channels. $(ii)$ It cannot handle network dynamism when multiple \ensuremath {\mathit{SU}}{\xspace} s join or leave the network simultaneously, as it can only deal with one \ensuremath {\mathit{SU}}{\xspace}~leaving or joining the network at a time. $(iii)$ The pairwise secret sharing that this scheme requires incurs extra communication overhead and delay. $(iv)$ The underlying encryption scheme requires solving the discrete logarithm problem~\cite{menezes1996handbook} for decryption, which is extremely costly and is only feasible for a very small plaintext space.
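The aggregation idea underlying such schemes can be distilled into a short sketch. This is not the actual construction of~\cite{shi2011privacy}, which derives the masks from pairwise shared keys and works in a discrete-log group; here the zero-sum masks are sampled directly, which is enough to show both the aggregation property and the fault-tolerance limitation (if any blinded report is missing, the masks no longer cancel):

```python
import random

M = 2 ** 32  # modulus for the masked reports

def make_masks(n, seed=7):
    """Sample n secrets that sum to zero mod M, one per SU (in the
    real scheme these come from pairwise shared keys)."""
    rng = random.Random(seed)
    masks = [rng.randrange(M) for _ in range(n - 1)]
    masks.append((-sum(masks)) % M)
    return masks

rss = [42, 57, 33, 61]          # plaintext sensing reports
masks = make_masks(len(rss))
blinded = [(r + s) % M for r, s in zip(rss, masks)]

# FC sees only the blinded reports; the masks cancel in the aggregate,
# but only if every single report arrives (no fault tolerance).
aggregate = sum(blinded) % M
print(aggregate)  # 193 == sum(rss)
```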
Grissa et al.~\cite{grissa2015location,grissa2017preserving} propose a location privacy preserving protocol that aims to hide \ensuremath {\mathit{SU}}{\xspace} s' sensing reports (specifically \ensuremath {\mathit{RSS}}{\xspace}) from \ensuremath {\mathit{FC}}{\xspace}~and the sensing threshold used for the decision from \ensuremath {\mathit{SU}}{\xspace} s. This prevents \ensuremath {\mathit{FC}}{\xspace}~from trying to localize \ensuremath {\mathit{SU}}{\xspace} s using their sensing reports and, at the same time, prevents malicious \ensuremath {\mathit{SU}}{\xspace} s from using the sensing threshold to manipulate their measurements and bias \ensuremath {\mathit{FC}}{\xspace}'s decision. The scheme relies on {\em order preserving encryption}~\cite{boldyreva2009order} to let \ensuremath {\mathit{SU}}{\xspace} s encrypt their sensing reports while allowing \ensuremath {\mathit{FC}}{\xspace}~to learn only the relative order of these reports. Using this order and a binary search-like technique, \ensuremath {\mathit{FC}}{\xspace}~executes at most $\log n$ private comparisons between \ensuremath {\mathit{SU}}{\xspace} s' \ensuremath {\mathit{RSS}}{\xspace} s and its sensing threshold using {\em Yao's millionaires} protocol~\cite{yao1982protocols}; the learned order is precisely what keeps the number of private comparisons logarithmic in the number of \ensuremath {\mathit{SU}}{\xspace} s. This is shown to provide high location privacy to \ensuremath {\mathit{SU}}{\xspace} s while enabling efficient sensing performance. However, even though this approach has a low communication overhead and a computational overhead that is logarithmic in the number of \ensuremath {\mathit{SU}}{\xspace} s, the computation incurred is still relatively high, due to the use of the expensive {\em Yao's millionaires} protocol~\cite{yao1982protocols}, which itself relies on expensive homomorphic encryption.
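The effect of the learned order on the comparison count can be sketched as follows, with a plaintext comparison standing in for one execution of Yao's millionaires protocol (the actual protocol performs each comparison privately):

```python
def count_above_threshold(sorted_rss, threshold):
    """Count how many reports exceed the threshold via binary search
    over the (OPE-revealed) order; each comparison stands in for one
    run of Yao's millionaires protocol. Returns (count, comparisons)."""
    comparisons = 0
    lo, hi = 0, len(sorted_rss)   # search for first index with rss > threshold
    while lo < hi:
        mid = (lo + hi) // 2
        comparisons += 1          # one "private" comparison
        if sorted_rss[mid] > threshold:
            hi = mid
        else:
            lo = mid + 1
    return len(sorted_rss) - lo, comparisons

rss_in_ope_order = sorted([40, 55, 63, 71, 78, 84, 90, 97])
count, used = count_above_threshold(rss_in_ope_order, 70)
print(count, used)  # prints: 5 3
```

Without the order, the FC would need one private comparison per report ($n$ in total); with it, $\lceil \log_2 n \rceil$ comparisons suffice (here 3 instead of 8).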
Some approaches rely on an intermediate node or entity to help address the location privacy issue, e.g.~\cite{mao2015protecting,grissa2016efficient}. Mao et al.~\cite{mao2015protecting} provide an approach that requires \ensuremath {\mathit{SU}}{\xspace} s to encrypt their \ensuremath {\mathit{RSS}}{\xspace}~values using a variant of the {\em El Gamal} encryption scheme~\cite{elgamal1984public}. In their approach, one of the \ensuremath {\mathit{SU}}{\xspace} s is picked to play the role of a helper to \ensuremath {\mathit{FC}}{\xspace}. First, the {\em Helper} and \ensuremath {\mathit{FC}}{\xspace}~collaborate to construct a public/secret key pair, each keeping a part of the secret key for itself. Then, \ensuremath {\mathit{FC}}{\xspace}~and the {\em Helper} share the public key with \ensuremath {\mathit{SU}}{\xspace} s. Subsequently, \ensuremath {\mathit{SU}}{\xspace} s send their \ensuremath {\mathit{RSS}}{\xspace} s, encrypted under this public key, to the {\em Helper}, which permutes them, partially decrypts them with its part of the secret key, and then forwards them to \ensuremath {\mathit{FC}}{\xspace}, which completes the decryption using its own part of the key. Once the reports are decrypted, \ensuremath {\mathit{FC}}{\xspace}~aggregates the \ensuremath {\mathit{RSS}}{\xspace}~values to make a final decision. The authors consider a semi-honest threat model for \ensuremath {\mathit{FC}}{\xspace}~and the {\em Helper} and a restricted malicious model in which only \ensuremath {\mathit{SU}}{\xspace} s are malicious. However, even though this approach guarantees that individual sensing reports can be revealed neither to \ensuremath {\mathit{FC}}{\xspace}~nor to the {\em Helper}, it incurs high communication overhead: to provide a high enough security level, the keys of the El Gamal cryptosystem, and hence the size of the ciphertexts, need to be very large. This makes the communication cost very high, especially when the number of \ensuremath {\mathit{SU}}{\xspace} s is large.
Moreover, as \ensuremath {\mathit{FC}}{\xspace}~can learn aggregated sensing reports of \ensuremath {\mathit{SU}}{\xspace} s, this scheme is still prone to the {\em DLP} attack explained in Section~\ref{fcattacks}.
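The split-key decryption at the heart of such constructions can be sketched with toy parameters. This is an illustration of two-party El Gamal decryption in general, not the exact variant of~\cite{mao2015protecting}, and the tiny prime below must be replaced by a cryptographically large group in any real deployment:

```python
# Toy parameters only; real deployments need a large prime-order group.
p = 467          # small prime modulus
g = 2            # generator

x_helper, x_fc = 127, 311        # secret key shares
pk = pow(g, x_helper + x_fc, p)  # joint public key g^(x1+x2)

def encrypt(m, k):
    """El Gamal encryption under the joint key (k: ephemeral nonce)."""
    return pow(g, k, p), (m * pow(pk, k, p)) % p

def partial_decrypt(c, x):
    """Strip one share of the key: c2 <- c2 * c1^(-x) mod p."""
    c1, c2 = c
    return c1, (c2 * pow(c1, -x, p)) % p

rss = 95
c = encrypt(rss, k=201)
c = partial_decrypt(c, x_helper)   # at the Helper (who also permutes reports)
_, m = partial_decrypt(c, x_fc)    # at the FC
print(m)  # 95
```

Neither party alone can decrypt: each only removes its own share $g^{-k x_i}$, and the plaintext appears only after both partial decryptions.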
Grissa et al.~\cite{grissa2016efficient,grissa2017preserving} propose another approach that also relies on {\em order preserving encryption (\ensuremath {\mathit{OPE}}{\xspace})} and on deploying an additional node, referred to as the {\em gateway (\ensuremath {\mathit{GW}}{\xspace})}. \ensuremath {\mathit{GW}}{\xspace}~is deployed to perform private comparisons between \ensuremath {\mathit{SU}}{\xspace} s' sensing reports and the decision criterion, or threshold, of \ensuremath {\mathit{FC}}{\xspace}. This is done by making each \ensuremath {\mathit{SU}}{\xspace}~encrypt its \ensuremath {\mathit{RSS}}{\xspace}, using \ensuremath {\mathit{OPE}}{\xspace}~and a unique secret key shared with \ensuremath {\mathit{FC}}{\xspace}, and send it to \ensuremath {\mathit{GW}}{\xspace}. \ensuremath {\mathit{FC}}{\xspace}~also encrypts its sensing threshold $n$ times, using \ensuremath {\mathit{OPE}}{\xspace}~and the $n$ keys established with \ensuremath {\mathit{SU}}{\xspace} s, and sends the resulting ciphertexts to \ensuremath {\mathit{GW}}{\xspace}. On top of the \ensuremath {\mathit{OPE}}{\xspace}~encryption, each entity communicating with \ensuremath {\mathit{GW}}{\xspace}~encrypts its data with a key uniquely established with \ensuremath {\mathit{GW}}{\xspace}~to secure the communication. \ensuremath {\mathit{GW}}{\xspace}~removes this second layer of encryption and compares each \ensuremath {\mathit{OPE}}{\xspace}-encrypted \ensuremath {\mathit{RSS}}{\xspace}~to the corresponding \ensuremath {\mathit{OPE}}{\xspace}-encrypted sensing threshold (the one that \ensuremath {\mathit{FC}}{\xspace}~constructed with the same secret key). The main advantage of this approach is its high efficiency in terms of communication and computational complexity, owing to its reliance on symmetric encryption only.
These efficiency benefits come, however, at the cost of an additional architectural entity, \ensuremath {\mathit{GW}}{\xspace}, which has to be managed by a third party to avoid collusion with \ensuremath {\mathit{SU}}{\xspace} s or \ensuremath {\mathit{FC}}{\xspace}~and to provide the claimed privacy guarantees.
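The ciphertext comparison performed at \ensuremath {\mathit{GW}}{\xspace}~can be illustrated with a deliberately naive order-preserving map (a stand-in for the scheme of~\cite{boldyreva2009order}, not the real construction); the second, transport-layer encryption toward \ensuremath {\mathit{GW}}{\xspace}~is omitted for brevity:

```python
import hashlib

def ope(m, key):
    """Toy order-preserving 'encryption': a strictly increasing keyed
    map built from per-step pseudorandom positive increments
    (illustration only, not the Boldyreva et al. scheme)."""
    total = 0
    for j in range(m + 1):
        h = hashlib.sha256(f"{key}:{j}".encode()).digest()
        total += 1 + h[0]   # positive increment => strictly increasing
    return total

# Each SU_i shares key_i with the FC; the FC OPE-encrypts its threshold
# once per key, and the GW compares ciphertexts pairwise.
threshold = 70
keys = ["su1-key", "su2-key", "su3-key"]
rss = [65, 82, 70]

enc_rss = [ope(r, k) for r, k in zip(rss, keys)]
enc_thr = [ope(threshold, k) for k in keys]

decisions = [er > et for er, et in zip(enc_rss, enc_thr)]
print(decisions)  # [False, True, False] -- same as comparing plaintexts
```

Because each map is strictly increasing, comparing ciphertexts under the same key yields the same outcome as comparing the plaintexts, which is all \ensuremath {\mathit{GW}}{\xspace}~needs.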
Other approaches consider a different \ensuremath {\mathit{CRN}}{\xspace}~scenario that consists of multiple service providers (\ensuremath {\mathit{SP}}{\xspace} s) that may exchange sensing data among themselves, as in~\cite{wang2015privacy}. Wang et al.~\cite{wang2015privacy} propose a framework that aims to preserve \ensuremath {\mathit{SU}}{\xspace} s' privacy in collaborative spectrum sensing against malicious \ensuremath {\mathit{SP}}{\xspace} s. It assumes that the only trustworthy \ensuremath {\mathit{SP}}{\xspace}~for a \ensuremath {\mathit{SU}}{\xspace}~is the one serving it; the remaining \ensuremath {\mathit{SP}}{\xspace} s and \ensuremath {\mathit{SU}}{\xspace} s may collude to infer private information about a target \ensuremath {\mathit{SU}}{\xspace}, including its location. To preserve \ensuremath {\mathit{SU}}{\xspace} s' privacy, this framework hides individual sensing data by making each \ensuremath {\mathit{SP}}{\xspace}~transform the sensing reports of its \ensuremath {\mathit{SU}}{\xspace} s into cloaks. To find the optimal cloaking strategy, each \ensuremath {\mathit{SP}}{\xspace}~projects its original sensing data to a single-dimensional space, with minimal data distortion~\cite{wang2015privacy}, using a privacy-preserving non-invertible projection, and shares statistical information about the projected data with one \ensuremath {\mathit{SP}}{\xspace}~picked as a leader. The leader uses this information to decide on the optimal cloaking strategies, obtained via dynamic programming to minimize information distortion, and shares them with the other \ensuremath {\mathit{SP}}{\xspace} s. This scheme considers collusion between different malicious entities and provides {\em differential privacy} to \ensuremath {\mathit{SU}}{\xspace} s.
However, its privacy level decreases as the number of \ensuremath {\mathit{SP}}{\xspace} s decreases and as the number of \ensuremath {\mathit{SU}}{\xspace} s increases. It also introduces some distortion to the sensing information, which may impact the sensing accuracy.
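The non-invertible projection step can be sketched as a random projection: mapping each multi-channel report to a single scalar loses information, so individual reports cannot be reconstructed from what is shared. The data and projection direction below are illustrative, not from~\cite{wang2015privacy}:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: each row is one SU's RSS report over 4 channels.
reports = rng.normal(-70.0, 6.0, size=(5, 4))

w = rng.normal(size=4)
w /= np.linalg.norm(w)        # unit-norm projection direction

# 4-D -> 1-D projection: many distinct reports map to the same scalar,
# so the original reports cannot be recovered from the shared values.
projected = reports @ w
stats = {"mean": float(projected.mean()), "std": float(projected.std())}
print(stats)
```

Only `stats` (and the projected scalars) would be shared with the leader; the 4-D reports never leave the \ensuremath {\mathit{SP}}{\xspace}.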
Some works also try to address the location privacy issue in distributed cooperative sensing. For example, Kasiri et al.~\cite{kasiri2015privacy} address this issue in multi-channel cognitive radio {\em MANET}s. They propose a scheme that relies on anonymization to prevent location information leakage from the \ensuremath {\mathit{SNR}}{\xspace}~values that are exchanged between \ensuremath {\mathit{SU}}{\xspace} s for coalition formation purposes and that, as shown in Section~\ref{coopSources}, can leak information about the location of \ensuremath {\mathit{SU}}{\xspace} s. Anonymization is achieved by means of random manipulation and distortion of the exchanged \ensuremath {\mathit{SNR}}{\xspace} s: each \ensuremath {\mathit{SU}}{\xspace}~creates an anonymization area with respect to each sensed channel. However, a major limitation of this scheme is that the more channels a \ensuremath {\mathit{SU}}{\xspace}~senses, the more likely it is to be located, as the adversary can intersect the anonymization areas to narrow down the \ensuremath {\mathit{SU}}{\xspace}'s location. Another limitation is that it cannot achieve high location privacy without degrading the sensing performance of the \ensuremath {\mathit{CRN}}{\xspace}; indeed, the authors present a tradeoff between privacy and performance, as both cannot be maximized together.
Some works also try to preserve the location privacy of users that participate in the crowdsourcing process, which is used to recruit distributed mobile users to sense a given channel around specific locations. For instance, Jin et al.~\cite{jin2016privacy} formulate the participant selection process as a reverse auction in which participants compete to perform spectrum sensing tasks in return for rewards. Each participant's true cost for performing the sensing tasks is closely related to its current location, as explained in Section~\ref{attackcoop}. The authors rely on the exponential mechanism to protect the location information and prevent the attack that they identified (explained in Section~\ref{attackcoop}). Users are selected iteratively for each sensing sub-task following the exponential mechanism, which guarantees differential privacy for their bids and, consequently, differential location privacy. While protecting location privacy, this approach aims to minimize the social cost, i.e. the sum of the real costs of the users completing all the sensing tasks. However, minimizing this cost deteriorates the location privacy level, which is the main limitation of this approach.
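Exponential-mechanism-based winner selection can be sketched as follows, assuming the utility of selecting a participant is the negative of its claimed bid (lower cost preferred); the bids, $\epsilon$, and sensitivity below are illustrative, not from~\cite{jin2016privacy}:

```python
import math, random

rng = random.Random(11)

def exponential_mechanism(bids, eps, sensitivity=1.0):
    """Pick one participant with probability proportional to
    exp(-eps * bid / (2 * sensitivity)): lower claimed cost is more
    likely to win, yet no single bid deterministically decides."""
    weights = [math.exp(-eps * b / (2.0 * sensitivity)) for b in bids]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(bids) - 1

bids = [3.0, 1.0, 4.0, 1.5]   # claimed sensing costs (distance-related)
winner = exponential_mechanism(bids, eps=2.0)
print(winner)                 # an index in 0..3; low bids win most often
```

Because the selection is randomized, an observer of the auction outcome cannot reliably invert it to a bid, and hence to a base location; smaller $\epsilon$ flattens the weights (more privacy, higher social cost), which is exactly the tradeoff noted above.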
\subsubsection{Performance metrics and tradeoffs}
\paragraph{Performance metrics}
\label{coopPerf}
\noindent \textbf{Computational complexity:} This is an important metric as \ensuremath {\mathit{SU}}{\xspace} s are usually resource constrained. Thus, it is paramount to consider this when designing a location privacy preserving scheme for \ensuremath {\mathit{CRN}}{\xspace} s. This metric usually accounts for the overhead resulting from the various operations required by the scheme (e.g., cryptographic operations) and incurred by all different entities involved in the privacy preserving protocol, and could be measured separately for each entity or as a whole for the entire system. Computational complexity has a direct impact on the delay that a \ensuremath {\mathit{SU}}{\xspace}~may experience before getting the decision about the spectrum availability.
Computational complexity is considered in most of the research works that address the location privacy issue in cooperative spectrum sensing in \ensuremath {\mathit{CRN}}{\xspace} s, e.g.~\cite{li2012location,mao2015protecting,kasiri2015privacy,grissa2015location,grissa2016efficient}.
\noindent \textbf{Communication overhead:}
Communication overhead is another important metric that needs to be considered. Location privacy preserving schemes must not overwhelm the network with high communication overhead that may degrade overall system performance, especially given that bandwidth and/or energy resources are often limited. Encryption, which most proposed solutions rely on to ensure privacy, tends to incur heavy communication overheads, depending on the size of the ciphertexts. The number of \ensuremath {\mathit{SU}}{\xspace} s involved in the cooperative sensing task also contributes to this overhead.
\noindent \textbf{Spectrum availability accuracy:}
It is important to protect \ensuremath {\mathit{SU}}{\xspace} s' location privacy, but while making sure that doing so does not interfere with the cooperative sensing task. Therefore, another important metric is the ability of these privacy preserving schemes to perform the sensing task accurately. This is quantified, for example in~\cite{kasiri2015privacy}, using the detection probability to capture the impact of the privacy preserving scheme on detecting \ensuremath {\mathit{PU}}{\xspace} s presence.
\noindent \textbf{Location privacy level:}
As the ultimate goal of any location privacy preserving protocol is to preserve the location privacy of \ensuremath {\mathit{SU}}{\xspace} s, it is then paramount to have a metric that can be used to assess and quantify the privacy level. There are several metrics that could be used for capturing this:
\begin{itemize}
\item {\em Anonymity level}: This measures the level of anonymity provided by the cloaking algorithm and usually refers to the size of the area to which a \ensuremath {\mathit{SU}}{\xspace}~generalizes its location to achieve anonymity. One way to quantify this is by computing a relative measure normalized by the anonymity level required by a \ensuremath {\mathit{SU}}{\xspace}. Kasiri et al.~\cite{kasiri2015privacy} rely on a similar approach and define the location privacy level of a specific \ensuremath {\mathit{SU}}{\xspace}~as the ratio between the anonymized area with respect to all \ensuremath {\mathit{PU}}{\xspace} s and the maximum anonymized area of that \ensuremath {\mathit{SU}}{\xspace}. The privacy level for the whole network is obtained by computing the average of the location privacy levels over all \ensuremath {\mathit{SU}}{\xspace} s.
\item {\em Entropy}: This captures how uniform the probability of locating a \ensuremath {\mathit{SU}}{\xspace}~at a specific position is, and it is used to measure the adversary's level of uncertainty~\cite{shokri2011quantifyingsymp}. Li et al.~\cite{li2012location} have used this concept to quantify the location privacy level of their schemes. The area covered by the \ensuremath {\mathit{CRN}}{\xspace}~is divided into sub-regions, forming a set $\mathcal{G} = \{g_1,g_2,\cdots,g_m\}$. The uncertainty of the adversary, and thus the location privacy level of a \ensuremath {\mathit{SU}}{\xspace}~$i$ involved in the cooperative spectrum sensing, is then defined as:
\vspace{-6pt}
\begin{equation}
\label{uncertainty}
\mathcal{A}(i) = - \sum_{b=1}^{m} p_{i|b} \log(p_{i|b})
\end{equation}
where $p_{i|b}$ is the probability that \ensuremath {\mathit{SU}}{\xspace}~$i$ is located in sub-region $g_b$. The location privacy level for the overall system is then given by
$\mathcal{A} = \sum_{i=1}^{n} \mathcal{A}(i)$,
where $n$ is the number of \ensuremath {\mathit{SU}}{\xspace} s. If an attacker can uniquely infer that \ensuremath {\mathit{SU}}{\xspace}~$i$ is located in sub-region $g_b$, then $p_{i|b} = 1$ and hence $\mathcal{A}(i) = 0$. On the other hand, if the attacker is unable to tell which sub-region \ensuremath {\mathit{SU}}{\xspace}~$i$ is located in, which means \ensuremath {\mathit{SU}}{\xspace}~$i$ could be located in any sub-region with equal probability $p_{i|b} = 1/m$, then the privacy level for \ensuremath {\mathit{SU}}{\xspace}~$i$ would be $\mathcal{A}(i) = \log m$, the maximum privacy level it can get when participating in the cooperative sensing.
\item {\em $\epsilon$-differential privacy}:
This concept is based on the differential privacy concept (discussed in Section~\ref{limitGeneric}). A mechanism $\mathcal{M}$ is said to provide $\epsilon$-differential privacy for a \ensuremath {\mathit{SU}}{\xspace}~$i$ if for any two sets of sensing reports, $R = [r_1,\cdots,r_i,\cdots,r_\ensuremath {\mathit{n}}{\xspace}]$ and $R' = [r_1,\cdots,r'_i,\cdots,r_\ensuremath {\mathit{n}}{\xspace}]$, that differ only on $i$'s sensing report, we have:
\begin{equation}
\vert\ln \frac{Pr[\mathcal{M}(R)= \mathcal{O}]}{Pr[\mathcal{M}(R')= \mathcal{O}]} \vert \leq \epsilon
\end{equation}
for all $\mathcal{O} \in Range(\mathcal{M})$, where $Range(\mathcal{M})$ is the set of all possible outputs of $\mathcal{M}$~\cite{wang2015privacy}.
The privacy level is controlled by the parameter $\epsilon$, with higher privacy ensured by lower $\epsilon$ values. Very low values of $\epsilon$ ensure that $Pr[\mathcal{M}(R)= \mathcal{O}]$ and $Pr[\mathcal{M}(R')= \mathcal{O}]$ are roughly the same, meaning that the output $\mathcal{O}$ is not sensitive to changes in any single \ensuremath {\mathit{SU}}{\xspace}'s sensing report.
\end{itemize}
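To illustrate, the entropy metric of Eq.~(\ref{uncertainty}) can be computed with a few lines of Python (a minimal sketch; the probability values are hypothetical):

```python
import math

def location_privacy(p):
    """Entropy-based location privacy A(i) of one SU, given the
    adversary's probabilities p[b] that the SU is in sub-region g_b."""
    return -sum(q * math.log(q) for q in p if q > 0)

m = 4  # number of sub-regions
# Adversary pinpoints the SU: no privacy left.
assert location_privacy([1.0, 0.0, 0.0, 0.0]) == 0.0
# Adversary is completely uncertain: maximum privacy log(m).
uniform = [1.0 / m] * m
assert abs(location_privacy(uniform) - math.log(m)) < 1e-12
```

The system-wide level $\mathcal{A}$ is then just the sum of `location_privacy` over all \ensuremath {\mathit{SU}}{\xspace} s.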
Location privacy could also be quantified using the concepts of {\em inaccuracy} and {\em incorrectness} introduced by Shokri et al.~\cite{shokri2011quantifyingsymp}. These concepts could be redefined to fit the context of location privacy in \ensuremath {\mathit{CRN}}{\xspace} s as done in~\cite{bahrak2014protecting}. First, let $\Theta$~denote the observed sensory information that could be used to localize a \ensuremath {\mathit{SU}}{\xspace}, and $x$ and $x_c$ represent the location estimated by the attacker and the actual \ensuremath {\mathit{SU}}{\xspace}'s location, respectively. Let also $p(x|\Theta)$ be the probability distribution of all possible values of the target \ensuremath {\mathit{SU}}{\xspace}'s location given the observed information, and $\hat{p}(x|\Theta)$ be the adversary's estimate of this distribution. Essentially, $\hat{p}(x|\Theta)$ models the information that the adversary extracts from its observations.
\begin{itemize}
\item {\em Inaccuracy}: This is the discrepancy between the posterior distributions $p(x|\Theta)$ and $\hat{p}(x|\Theta)$, which quantifies the difference between \ensuremath {\mathit{SU}}{\xspace}'s real location distribution and the adversary's estimated location distribution.
\item {\em Incorrectness}: This is the distance (or expected distance) between the true \ensuremath {\mathit{SU}}{\xspace}'s location and that inferred by the attacker. This metric is shown in~\cite{shokri2011quantifyingsymp} to be the most appropriate for quantifying location privacy. The expected distance, which is the adversary's expected estimation error, can be written as
$\sum_x \hat{p}(x|\Theta) \Vert x - x_c\Vert$,
where $\Vert\cdot\Vert$ is a distance, e.g., Euclidean, between $x$ and $x_c$.
\end{itemize}
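The incorrectness metric translates directly into code: the adversary's expected estimation error is a probability-weighted sum of distances over its posterior (a minimal sketch with a hypothetical grid and distribution):

```python
import math

def expected_error(p_hat, x_c):
    """Adversary's expected estimation error: sum_x p_hat(x) * ||x - x_c||.
    p_hat maps candidate locations (grid cells) to probabilities."""
    return sum(p * math.dist(x, x_c) for x, p in p_hat.items())

# Hypothetical posterior over three candidate cells; true location (0, 0).
p_hat = {(0, 0): 0.5, (0, 1): 0.3, (1, 0): 0.2}
err = expected_error(p_hat, (0, 0))
assert abs(err - 0.5) < 1e-12  # 0.5*0 + 0.3*1 + 0.2*1
```

A larger `err` means the adversary's best guess is farther from the true location, i.e. higher location privacy.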
\paragraph{Performance tradeoffs}
Several performance tradeoffs could be made when designing location privacy preserving schemes for cooperative spectrum sensing:
\textbf{Scheme overhead vs. hardware cost:}
Scheme overhead in terms of communication, computation, and/or energy could be reduced at the cost of additional architectural components. For example, Grissa et al.~\cite{grissa2016efficient} introduce an extra network entity to reduce both communication and computational overheads while also improving privacy.
This new entity carries out the private comparisons between \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{FC}}{\xspace}~without disclosing \ensuremath {\mathit{RSS}}{\xspace}~values; without it, these comparisons would be very expensive, resulting in an excessive scheme overhead.
\textbf{Privacy level vs. scheme overhead:}
Achieving higher location privacy at the cost of deploying more expensive cryptosystems with higher communication and/or computation overhead is another tradeoff researchers often make. For example, the works in~\cite{li2012location,mao2015protecting,grissa2015location} make such tradeoffs in order to improve the location privacy of their schemes.
\textbf{Privacy level vs. sensing accuracy:}
Higher location privacy can also be obtained at the cost of degrading the sensing performance of the \ensuremath {\mathit{CRN}}{\xspace}. For example, such a tradeoff is made in the approach proposed by Kasiri et al.~\cite{kasiri2015privacy}, where the anonymization area, capturing the privacy level, is increased at the cost of decreasing the average detection probability, which represents the \ensuremath {\mathit{CRN}}{\xspace}~sensing performance.
\subsection{Location privacy in database-based spectrum discovery}
\label{DBdiscovery}
Here, the location privacy issue is completely different from that of the cooperative sensing-based \ensuremath {\mathit{CRN}}{\xspace} s. In fact, as explained in Section~\ref{db}, each \ensuremath {\mathit{SU}}{\xspace}~is now required to send its exact location to \ensuremath {\mathit{DB}}{\xspace}~in order to learn about spectrum opportunities in its vicinity. This makes preserving the location privacy of \ensuremath {\mathit{SU}}{\xspace} s more challenging, since the location information can be extracted directly from the query itself, without the adversary having to perform any extra computation to estimate the position.
Thus, location privacy preserving schemes for database-based \ensuremath {\mathit{CRN}}{\xspace} s need to be designed with two conflicting goals: $i)$ hiding or not including \ensuremath {\mathit{SU}}{\xspace}'s location information in the query to be sent to \ensuremath {\mathit{DB}}{\xspace},
and $ii)$ in response to a \ensuremath {\mathit{SU}}{\xspace}'s query, \ensuremath {\mathit{DB}}{\xspace}~needs to inform \ensuremath {\mathit{SU}}{\xspace}~about spectrum availability in \ensuremath {\mathit{SU}}{\xspace}'s vicinity.
The second goal above somehow entails that \ensuremath {\mathit{DB}}{\xspace}~needs to know where \ensuremath {\mathit{SU}}{\xspace}~is located at, and thus,
meeting these two conflicting requirements is very challenging. As we will see later, this cannot be achieved without making some performance tradeoffs.
\subsubsection{Threat models}
Several threat models are considered in the literature to study and address \ensuremath {\mathit{SU}}{\xspace} s' location privacy issue in database-driven \ensuremath {\mathit{CRN}}{\xspace} s:
\begin{itemize}
\item {\em Dolev–Yao threat model}: The adversary, usually an intruder, can overhear, intercept, and synthesize any message exchanged between \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{DB}}{\xspace}. More specifically, the adversary can learn the location of an \ensuremath {\mathit{SU}}{\xspace}~from the query that the latter sends to \ensuremath {\mathit{DB}}{\xspace}~to learn spectrum opportunities. The adversary here is only limited by the constraints of the used cryptographic schemes~\cite{dolev1983security}. This model has been considered in several works~\cite{gao2013location,zhang2015privacy}.
\item {\em Semi-honest} or {\em honest-but-curious threat model}: The adversary, usually \ensuremath {\mathit{DB}}{\xspace}, follows the sensing protocol honestly without changing any of its parameters, but shows some interest in learning the location of target \ensuremath {\mathit{SU}}{\xspace} s~\cite{zhang2015optimal,li2015agent,troja2015efficient,gao2013location}. This means that it responds to \ensuremath {\mathit{SU}}{\xspace} s' queries with correct spectrum availability information, but at the same time tries to learn their whereabouts.
\item {\em Malicious-entity threat model}: \ensuremath {\mathit{DB}}{\xspace}, or an intermediate \ensuremath {\mathit{BS}}{\xspace}, may be malicious, i.e. they can change protocol parameters to localize a target \ensuremath {\mathit{SU}}{\xspace}~that is querying \ensuremath {\mathit{DB}}{\xspace}. In some situations, the malicious entity could even be a sophisticated adversary that has considerable resources and has access to information from \ensuremath {\mathit{DB}}{\xspace}~\cite{zhang2015achieving}.
\end{itemize}
\subsubsection{Location inference attacks}
The most straightforward and basic attack is based on the content of \ensuremath {\mathit{SU}}{\xspace}'s query. A \ensuremath {\mathit{SU}}{\xspace}~needs to include its exact location in its query to \ensuremath {\mathit{DB}}{\xspace}. This makes it vulnerable to an intruder, which can learn its location by eavesdropping on its queries, or even to \ensuremath {\mathit{DB}}{\xspace}~itself, which has access to these queries.
Typically, \ensuremath {\mathit{DB}}{\xspace}'s response to a \ensuremath {\mathit{SU}}{\xspace}'s query contains spectrum availability information; e.g., the list of available channels in \ensuremath {\mathit{SU}}{\xspace}'s vicinity and the maximum allowed transmit powers in each of these available channels. An adversary that has access to this information could localize a target \ensuremath {\mathit{SU}}{\xspace}~by overlapping the availability areas of the different channels available at \ensuremath {\mathit{SU}}{\xspace}'s location as explained in Section~\ref{sourcesDB}. This kind of attack assumes that the adversary has knowledge about the RF environment covered by \ensuremath {\mathit{DB}}{\xspace}~as well as the activity and coverage of \ensuremath {\mathit{PU}}{\xspace} s. The adversary can also exploit the fact that the allowable secondary transmit powers are highly correlated to the relative distance between a \ensuremath {\mathit{SU}}{\xspace}~and a \ensuremath {\mathit{PU}}{\xspace}~as discussed in Section~\ref{sourcesDB}. This has been exploited by Zhang et al.~\cite{zhang2015optimal} to identify a unified attack framework to localize both \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{PU}}{\xspace} s based on the $MTP$ function introduced in~\cite{bahrak2014protecting}. The $MTP$ calculated by \ensuremath {\mathit{DB}}{\xspace}~is divided into several levels based on the distance between \ensuremath {\mathit{SU}}{\xspace}~and \ensuremath {\mathit{PU}}{\xspace}. Specifically, when this distance is less than a certain protection radius, \ensuremath {\mathit{SU}}{\xspace}~is not permitted to transmit on \ensuremath {\mathit{PU}}{\xspace}'s channel. Beyond the protection radius, \ensuremath {\mathit{SU}}{\xspace}~can transmit at an increased power level as its distance from \ensuremath {\mathit{PU}}{\xspace}~increases until it reaches the maximum allowed transmit power as regulated by FCC.
\subsubsection{Location privacy preserving approaches}
We summarize the approaches that are proposed in the literature to cope with the location privacy issue in database-based spectrum discovery in Table~\ref{solDB} and discuss them in more detail in the following. Generally speaking, most existing techniques attempt to protect \ensuremath {\mathit{SU}}{\xspace} s' location privacy by adopting one of two concepts: {\em k-anonymity}~\cite{sweeney2002k} or {\em \ensuremath {\mathit{PIR}}{\xspace}} ({\em private information retrieval})~\cite{chor1998private}.
As discussed in Section~\ref{limitGeneric}, {\em $k$-anonymity}-based approaches try to ensure that the probability of identifying the location of a querying \ensuremath {\mathit{SU}}{\xspace}~remains under $1/k$, where $k$ is the size of the anonymity set to be received by the untrusted \ensuremath {\mathit{DB}}{\xspace}. {\em k-anonymity}-based approaches are known to suffer from one major problem: they cannot achieve high location privacy without incurring substantial communication/computation overhead. Furthermore, it has been shown in a recent study led by Sprint and Technicolor~\cite{zang2011anonymization} that anonymization-based techniques are not effective in providing location privacy guarantees, and may even leak some location information.
For instance, Zhang et al.~\cite{zhang2015optimal} rely on the {\em k-anonymity} concept to provide a mechanism that protects the location privacy of both \ensuremath {\mathit{PU}}{\xspace} s and \ensuremath {\mathit{SU}}{\xspace} s. The proposed scheme requires that each \ensuremath {\mathit{SU}}{\xspace}~queries \ensuremath {\mathit{DB}}{\xspace}~by sending a square cloak region that includes its actual location instead of just sending this location. \ensuremath {\mathit{SU}}{\xspace}~keeps querying \ensuremath {\mathit{DB}}{\xspace}~using the same cloak region to avoid further location information leakage. This scheme entails a tradeoff between location privacy and spectrum utility: achieving a high location privacy level results in a decrease in spectrum utility. This limits the applicability of this kind of approach, as it impacts the main goal of \ensuremath {\mathit{CRN}}{\xspace} s, which is optimizing spectrum utilization efficiency. As discussed earlier, a good approach should provide location privacy to \ensuremath {\mathit{SU}}{\xspace} s without hindering the functioning of \ensuremath {\mathit{CRN}}{\xspace} s.
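The cloaking step of such a scheme can be sketched as follows (a minimal illustration; the side length and coordinates are hypothetical, and the fixed seed mimics the reuse of one region across queries):

```python
import random

def make_cloak(x, y, side, seed=None):
    """Return a square cloak region (x0, y0, x0+side, y0+side) that
    contains the true location (x, y) at a random offset.  The SU keeps
    reusing the same region for later queries, hence the fixed seed."""
    rng = random.Random(seed)
    x0 = x - rng.uniform(0, side)
    y0 = y - rng.uniform(0, side)
    return (x0, y0, x0 + side, y0 + side)

region = make_cloak(10.0, 20.0, side=5.0, seed=42)
x0, y0, x1, y1 = region
assert x0 <= 10.0 <= x1 and y0 <= 20.0 <= y1
# Reusing the same seed reproduces the same region across queries.
assert make_cloak(10.0, 20.0, side=5.0, seed=42) == region
```

Enlarging `side` increases the anonymity level but also the area over which \ensuremath {\mathit{DB}}{\xspace}~must guarantee availability, which is exactly the privacy/utility tradeoff discussed above.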
{\em $k$-anonymity} is also used by Li et al.~\cite{li2015agent} to protect \ensuremath {\mathit{SU}}{\xspace} s' location privacy during the commitment phase in which \ensuremath {\mathit{SU}}{\xspace} s have to register the channels that they are planning to use as explained in Section~\ref{sourcesDB}. In this approach, \ensuremath {\mathit{SU}}{\xspace} s first send their channel requests to the \ensuremath {\mathit{BS}}{\xspace}~that they are associated with, using pseudonyms that are randomly generated by a certification authority. \ensuremath {\mathit{BS}}{\xspace}, then, queries \ensuremath {\mathit{DB}}{\xspace}~on behalf of the querying \ensuremath {\mathit{SU}}{\xspace} s using their pseudonyms. After that, \ensuremath {\mathit{DB}}{\xspace}~performs hash matching of \ensuremath {\mathit{SU}}{\xspace} s' pseudonyms with a hash matrix provided by the certification authority to verify \ensuremath {\mathit{SU}}{\xspace} s' pseudonyms. Subsequently, \ensuremath {\mathit{DB}}{\xspace}~assigns a set of channels to \ensuremath {\mathit{BS}}{\xspace}~based on the latter's location. \ensuremath {\mathit{BS}}{\xspace}~then allocates the channels to its \ensuremath {\mathit{SU}}{\xspace} s using a coloring model to prevent interference between them. Finally, \ensuremath {\mathit{BS}}{\xspace}~registers the used channel of each \ensuremath {\mathit{SU}}{\xspace}~in \ensuremath {\mathit{DB}}{\xspace}~by including dummy information to provide {\em k-anonymity} to the utilization information. This is done by registering more channels than the number of \ensuremath {\mathit{SU}}{\xspace} s' requests to confuse attackers and prevent them from using the utilization information to localize \ensuremath {\mathit{SU}}{\xspace} s. 
Using \ensuremath {\mathit{BS}}{\xspace}~to register the used channels helps cut off the relation between the registered channels and \ensuremath {\mathit{SU}}{\xspace} s' identities, which makes it harder for \ensuremath {\mathit{DB}}{\xspace}~to associate this information with the corresponding \ensuremath {\mathit{SU}}{\xspace} s and, hence, localize them. Thus, the proposed scheme can decrease the probability of localizing \ensuremath {\mathit{SU}}{\xspace} s. However, it requires that \ensuremath {\mathit{BS}}{\xspace}~be trustworthy; otherwise, it would not be able to protect \ensuremath {\mathit{SU}}{\xspace} s' location. This assumption is usually not realistic, as it is hard to guarantee trustworthiness in practice. The scheme also suffers from the fact that the probability of localizing \ensuremath {\mathit{SU}}{\xspace} s increases as the number of switching events increases or as the number of \ensuremath {\mathit{BS}}{\xspace} s decreases.
\begin{table*}
\centering
\caption{\small Location privacy preserving schemes in database-driven spectrum opportunities discovery}
\label{solDB}
\resizebox{\textwidth}{!}{%
\renewcommand{\arraystretch}{1.25}{
\begin{tabular}{@{}lp{4cm}lp{5.5cm}p{5.5cm}@{}}
\toprule[1.5pt]
Countermeasures & Attacks Considered & Techniques & Pros & Cons \\ \midrule
Zhang et al.~\cite{zhang2015optimal} & \begin{tabular}[c]{@{}p{4cm}@{}}- Location inference from maximum transmission power\\ - Location inference from channel switch\end{tabular} & \begin{tabular}[c]{@{}p{4cm}@{}} - Cloaking the query of \ensuremath {\mathit{SU}}{\xspace}~within a square region based on {\em k-anonymity}\end{tabular} & - Provides location privacy for both \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{PU}}{\xspace} s & - High location privacy degrades spectrum utility \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Li et al.~\cite{li2015agent} & - Location inference from spectrum utilization information & \begin{tabular}[c]{@{}p{4cm}@{}}- Intermediate base stations to forward \ensuremath {\mathit{SU}}{\xspace} s' queries to \ensuremath {\mathit{DB}}{\xspace}\\- Intermediate base stations for spectrum allocation \\ - {\em k-anonymity} for registering used channels\end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}} - Adversaries cannot link usage information to \ensuremath {\mathit{SU}}{\xspace} s \\ - Decreases \ensuremath {\mathit{SU}}{\xspace} s' geolocation probability \end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}} - The probability of geolocating \ensuremath {\mathit{SU}}{\xspace} s increases with the number of available channels. \\ - The probability of geolocating \ensuremath {\mathit{SU}}{\xspace} s increases with the number of switching events \end{tabular} \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Gao et al.~\cite{gao2013location} & \begin{tabular}[c]{@{}p{4cm}@{}}- Location inference from query\\ - Location inference from spectrum utilization information\end{tabular} & \begin{tabular}[c]{@{}p{4cm}@{}}- Query blinding via \ensuremath {\mathit{PIR}}{\xspace}\\ - Spectrum mobility reduction\end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}}- Low communication overhead\\ - Reduces the localization probability of \ensuremath {\mathit{SU}}{\xspace} s \end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}} - High computational overhead \end{tabular} \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Grissa et al.~\cite{grissa2015cuckoo} & - Location inference from query & \begin{tabular}[c]{@{}p{4cm}@{}}- Sending portion of \ensuremath {\mathit{DB}}{\xspace}~to \ensuremath {\mathit{SU}}{\xspace}~using cuckoo filter \end{tabular}& \begin{tabular}[c]{@{}p{5.5cm}@{}}- Very low computational overhead\\ - Provides ideal location privacy\end{tabular} & - Large communication overhead if \ensuremath {\mathit{DB}}{\xspace}~is huge \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Troja et al.~\cite{troja2014leveraging} & - Location inference from query & \begin{tabular}[c]{@{}p{4cm}@{}} - Collaboration between \ensuremath {\mathit{SU}}{\xspace} s \\ - {\em private information retrieval} \end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}} - Minimal number of \ensuremath {\mathit{PIR}}{\xspace}~queries via collaboration between \ensuremath {\mathit{SU}}{\xspace} s \\ - Takes into account \ensuremath {\mathit{SU}}{\xspace}'s mobility \end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}}- Large communication overhead \\ - Relatively high computational overhead \end{tabular} \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Troja et al.~\cite{troja2015efficient} & - Location inference from query & \begin{tabular}[c]{@{}p{4cm}@{}}- Hilbert space filling curve indexing of \ensuremath {\mathit{DB}}{\xspace} \\ - {\em private information retrieval} \end{tabular}& \begin{tabular}[c]{@{}p{5.5cm}@{}} - Takes into account \ensuremath {\mathit{SU}}{\xspace}'s mobility \\ - Minimal number of \ensuremath {\mathit{PIR}}{\xspace}~queries via trajectory prediction \end{tabular} & - Relatively high computational overhead \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Zhang et al.~\cite{zhang2015achieving} & - Location inference from query & \begin{tabular}[c]{@{}p{4cm}@{}}- Random obfuscation using Laplacian noise \end{tabular}& - Provides differential location privacy for both \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{PU}}{\xspace} s & - Increasing the location privacy level decreases the utility of both \ensuremath {\mathit{PU}}{\xspace} s and \ensuremath {\mathit{SU}}{\xspace} s \\ \bottomrule[1.5pt]
\end{tabular}}}
\vspace{-8pt}
\end{table*}
\ensuremath {\mathit{PIR}}{\xspace}-based approaches~\cite{gao2013location,troja2014leveraging,troja2015efficient}, on the other hand, offer much
better privacy than {\em $k$-anonymity}-based approaches, but incur substantial computation and communication overhead, thus limiting their practical use for \ensuremath {\mathit{CRN}}{\xspace} s~\cite{ghinita2008private}, unless used judiciously as discussed in Section~\ref{limitGeneric}. For instance, Gao et al.~\cite{gao2013location} propose a \ensuremath {\mathit{PIR}}{\xspace}-based location privacy preserving scheme by adopting the \ensuremath {\mathit{PIR}}{\xspace}~protocol of Trostle et al.~\cite{trostle2010efficient}. Instead of sending its location, \ensuremath {\mathit{SU}}{\xspace}~hides its coordinates among other locations and transforms this information in such a way that only \ensuremath {\mathit{SU}}{\xspace}~can revert it. Upon receiving the blinded query, \ensuremath {\mathit{DB}}{\xspace}~multiplies it with the spectrum availability information matrix and sends the outcome back to \ensuremath {\mathit{SU}}{\xspace}. Since \ensuremath {\mathit{SU}}{\xspace}~is the only one who knows the blinding factors and the transformation applied to the original query, it alone can recover the spectrum availability information at its location from the result sent by \ensuremath {\mathit{DB}}{\xspace}. However, this approach suffers from a large computational overhead due to the use of the \ensuremath {\mathit{PIR}}{\xspace}~protocol, which, as we highlighted earlier, is known to be expensive to execute.
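Gao et al.'s scheme builds on a computational \ensuremath {\mathit{PIR}}{\xspace}~protocol; as a simpler illustration of the \ensuremath {\mathit{PIR}}{\xspace}~idea itself, the following sketches the classic two-server information-theoretic construction of~\cite{chor1998private}, with a hypothetical bit vector of channel availabilities (this is not the protocol used in~\cite{gao2013location}):

```python
import secrets

def pir_query(db_size, index):
    """Client side of a toy two-server PIR (Chor et al. style):
    S1 is a random subset of indices; S2 differs from S1 only at `index`.
    Each subset alone is uniformly random, so neither server learns index."""
    s1 = {i for i in range(db_size) if secrets.randbits(1)}
    s2 = s1 ^ {index}  # symmetric difference flips membership of `index`
    return s1, s2

def pir_answer(db, subset):
    """Server side: XOR of the requested availability bits."""
    ans = 0
    for i in subset:
        ans ^= db[i]
    return ans

db = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical per-cell availability bits
for idx in range(len(db)):
    s1, s2 = pir_query(len(db), idx)
    # XORing both answers cancels every bit except db[idx].
    assert pir_answer(db, s1) ^ pir_answer(db, s2) == db[idx]
```

The client recovers exactly the wanted bit while each (non-colluding) server sees only a uniformly random subset; single-server computational schemes such as the one adopted by Gao et al. achieve the same goal with cryptographic blinding instead of two servers.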
Grissa et al.~\cite{grissa2015cuckoo} propose an approach that offers unconditional privacy to \ensuremath {\mathit{SU}}{\xspace} s within \ensuremath {\mathit{DB}}{\xspace}'s coverage area. This approach uses a set membership data structure, more precisely a {\em cuckoo filter}~\cite{fan2014cuckoo}, to send a compressed version of \ensuremath {\mathit{DB}}{\xspace}~to~\ensuremath {\mathit{SU}}{\xspace}. In this scheme, \ensuremath {\mathit{SU}}{\xspace}~sends only its characteristics, but not its location, to \ensuremath {\mathit{DB}}{\xspace}, which uses them to adapt the content of the {\em cuckoo filter}. After receiving the filter, \ensuremath {\mathit{SU}}{\xspace}~constructs a query that includes its location and a combination of other parameters (e.g., frequency band, transmission power level, etc.) and probes the filter to check whether it contains the constructed query. If so, \ensuremath {\mathit{SU}}{\xspace}~can deduce that the channel is available and can use it by following the parameters specified in the query. Otherwise, \ensuremath {\mathit{SU}}{\xspace}~concludes that the specified combination does not exist in \ensuremath {\mathit{DB}}{\xspace}~and keeps querying the filter with different combinations until it finds one or reaches the filter's capacity. The main advantage of this scheme is that, unlike the other approaches, it provides optimal location privacy to \ensuremath {\mathit{SU}}{\xspace} s. However, it incurs a relatively large communication overhead, especially when \ensuremath {\mathit{DB}}{\xspace}~is huge. The authors address this issue by proposing to sacrifice one of \ensuremath {\mathit{SU}}{\xspace}'s coordinates, which considerably reduces the size of the filter while still providing reasonable privacy; this is not needed when \ensuremath {\mathit{DB}}{\xspace}~is not large.
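The \ensuremath {\mathit{SU}}{\xspace}-side query flow of this scheme can be sketched as follows; for brevity, a plain Python set stands in for the cuckoo filter (a real filter~\cite{fan2014cuckoo} adds compression and a small false-positive rate), and the cell/channel identifiers are hypothetical:

```python
# A plain set stands in for the downloaded cuckoo filter: it holds the
# (cell, channel, max_power) combinations present in DB -- hypothetical data.
available = {
    ("cell_17", "ch_3", "20dBm"),
    ("cell_17", "ch_8", "17dBm"),
}

def query_locally(filter_, cell, channels, powers):
    """SU-side lookup: probe the downloaded filter with candidate
    (location, channel, power) combinations.  DB never sees the location,
    since all probing happens locally at the SU."""
    for ch in channels:
        for pw in powers:
            if (cell, ch, pw) in filter_:
                return ch, pw  # first matching combination
    return None  # no combination found in the filter

hit = query_locally(available, "cell_17", ["ch_1", "ch_3"], ["20dBm"])
assert hit == ("ch_3", "20dBm")
assert query_locally(available, "cell_9", ["ch_3"], ["20dBm"]) is None
```

Because the membership test runs entirely on the \ensuremath {\mathit{SU}}{\xspace}'s side, the location never leaves the device; the price is shipping the (compressed) filter, which is the communication overhead discussed above.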
Troja et al.~\cite{troja2014leveraging} propose another \ensuremath {\mathit{PIR}}{\xspace}-based approach to protect the location privacy of mobile \ensuremath {\mathit{SU}}{\xspace} s. The \ensuremath {\mathit{PIR}}{\xspace}~mechanism used in this work allows a \ensuremath {\mathit{SU}}{\xspace}~to learn spectrum availability in a multi-cell block containing its current cell. As they move, \ensuremath {\mathit{SU}}{\xspace} s gradually develop a trajectory-specific spectrum knowledge cache via a series of \ensuremath {\mathit{PIR}}{\xspace}~queries. \ensuremath {\mathit{SU}}{\xspace} s within communication range of each other form groups and interact in a peer-to-peer (P2P) manner to privately exchange their anonymized cached channel availability information. This considerably reduces the number of \ensuremath {\mathit{PIR}}{\xspace}~queries, as fewer \ensuremath {\mathit{SU}}{\xspace} s need to query \ensuremath {\mathit{DB}}{\xspace}~since they can learn opportunities from \ensuremath {\mathit{SU}}{\xspace} s within their group. However, this approach still incurs a large communication cost and a relatively high computational overhead, especially when the group size is large.
Troja et al.~\cite{troja2015efficient} propose another \ensuremath {\mathit{PIR}}{\xspace}-based privacy-preserving protocol that relies on the Hilbert space-filling curve, a continuous fractal
that maps space from {\em 2-D} to {\em 1-D}~\cite{kamel1993packing}. \ensuremath {\mathit{DB}}{\xspace}~is indexed along this curve to cope with \ensuremath {\mathit{SU}}{\xspace} s' mobility, which allows neighboring cells to be stored in consecutive locations in \ensuremath {\mathit{DB}}{\xspace}. \ensuremath {\mathit{DB}}{\xspace}~is then split into multiple disjoint segments, which enables \ensuremath {\mathit{SU}}{\xspace}~to retrieve channel availability information for a large number of consecutive cells surrounding its location with a single \ensuremath {\mathit{PIR}}{\xspace}~query. \ensuremath {\mathit{SU}}{\xspace} s use trajectory information, known a priori or generated on the fly via a prediction mechanism, to minimize the number of future \ensuremath {\mathit{PIR}}{\xspace}~queries, as a \ensuremath {\mathit{SU}}{\xspace}~can obtain availability information for current and future positions in just one query. The main merits of this scheme are that it supports mobile \ensuremath {\mathit{SU}}{\xspace} s and exploits trajectory information to keep the communication overhead and the number of \ensuremath {\mathit{PIR}}{\xspace}~queries low. However, despite this reduction in the number of required queries, it still suffers from one of the well-known limitations of \ensuremath {\mathit{PIR}}{\xspace}-based approaches, namely the high computational overhead.
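The indexing step relies on the standard Hilbert-curve mapping from 2-D cell coordinates to a 1-D index, which places neighboring cells at nearby positions; a sketch (assuming a power-of-two grid side $n$):

```python
def xy2d(n, x, y):
    """Map cell (x, y) of an n-by-n grid (n a power of two) to its index
    along the Hilbert curve.  Nearby cells get nearby indices, so a block
    of consecutive DB entries covers a compact spatial neighborhood."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate the quadrant to keep the curve continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# On a 2x2 grid the curve visits (0,0), (0,1), (1,1), (1,0) in order.
assert [xy2d(2, *p) for p in [(0, 0), (0, 1), (1, 1), (1, 0)]] == [0, 1, 2, 3]
```

Indexing \ensuremath {\mathit{DB}}{\xspace}~rows by `xy2d` is what lets one \ensuremath {\mathit{PIR}}{\xspace}~query over a contiguous segment return availability for a whole neighborhood of cells.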
\vspace{-2pt}
Other approaches adapt the {\em differential privacy} concept, explained in Section~\ref{limitGeneric}, to the context of database-driven \ensuremath {\mathit{CRN}}{\xspace} s. For instance, Zhang et al.~\cite{zhang2015achieving} propose an approach to protect the bilateral location privacy of both \ensuremath {\mathit{PU}}{\xspace} s and \ensuremath {\mathit{SU}}{\xspace} s. \ensuremath {\mathit{SU}}{\xspace} s obfuscate their location using two-dimensional {\em Laplacian} noise satisfying the {\em $\epsilon$-geo-indistinguishability} mechanism, derived from {\em differential privacy} and introduced in~\cite{andres2013geo}. The obfuscation depends on the privacy preserving level, which is decided by both \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{PU}}{\xspace} s by solving an optimization problem that maximizes their bilateral utility. \ensuremath {\mathit{SU}}{\xspace}~sends its obfuscated location along with the privacy level, which represents the maximum distance that separates the sent location from the actual location. Based on these parameters, \ensuremath {\mathit{DB}}{\xspace}~decides the transmit power and the radius, or distance from \ensuremath {\mathit{PU}}{\xspace}, that \ensuremath {\mathit{SU}}{\xspace}~cannot exceed. The main advantage of this approach is that it provides differential location privacy for both \ensuremath {\mathit{PU}}{\xspace} s and \ensuremath {\mathit{SU}}{\xspace} s while allowing them to adjust their privacy level to maximize their utility. However, since privacy and utility always conflict, increasing the privacy level of both \ensuremath {\mathit{PU}}{\xspace} s and \ensuremath {\mathit{SU}}{\xspace} s often results in decreasing their utility, and striking a balance is challenging.
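The obfuscation step can be sketched concretely: planar Laplacian noise with density proportional to $e^{-\epsilon r}$ has a radial component distributed as $\mathrm{Gamma}(2, 1/\epsilon)$, which makes sampling straightforward (a minimal sketch of the mechanism of~\cite{andres2013geo}, not the authors' full optimization):

```python
import math
import random

def planar_laplace(x, y, eps, rng=random):
    """Obfuscate (x, y) with planar Laplacian noise (density ~ exp(-eps*r)),
    the epsilon-geo-indistinguishability mechanism.  In polar coordinates
    the radius has density r*exp(-eps*r), i.e. Gamma(shape=2, scale=1/eps)."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = rng.gammavariate(2.0, 1.0 / eps)
    return x + r * math.cos(theta), y + r * math.sin(theta)

random.seed(0)
eps = 0.5  # higher eps -> less noise, weaker privacy
samples = [planar_laplace(0.0, 0.0, eps) for _ in range(20000)]
mean_r = sum(math.hypot(px, py) for px, py in samples) / len(samples)
# Expected displacement radius of Gamma(2, 1/eps) is 2/eps = 4 here.
assert abs(mean_r - 2.0 / eps) < 0.1
```

Lowering `eps` inflates the expected displacement $2/\epsilon$, improving privacy while degrading the accuracy of \ensuremath {\mathit{DB}}{\xspace}'s answer, which is precisely the privacy/utility tension described above.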
\subsubsection{Performance metrics and tradeoffs}
\paragraph{Performance metrics}
\textbf{Computational complexity:}
Making sure that these schemes do not require heavy computation at both ends, \ensuremath {\mathit{SU}}{\xspace}~and \ensuremath {\mathit{DB}}{\xspace}, is crucial to their design. This is important because \ensuremath {\mathit{SU}}{\xspace}~devices, again, are usually resource constrained (in both energy and CPU), and the applications running on them may not tolerate delays. In addition, it is highly desirable not to overwhelm \ensuremath {\mathit{DB}}{\xspace}~by involving it in heavy computations, which can lead to congestion.
Several works (e.g.,~\cite{troja2015efficient,gao2013location,grissa2015cuckoo,troja2014leveraging}) use this as a metric for assessing the effectiveness of their proposed approaches. For example, Troja et al.~\cite{troja2015efficient} capture the computation overhead by measuring the average cumulative response time incurred by their proposed scheme. This time includes the query generation time at \ensuremath {\mathit{SU}}{\xspace}, the processing time at \ensuremath {\mathit{DB}}{\xspace}, the network transfer time, and the result extraction time at \ensuremath {\mathit{SU}}{\xspace}.
\textbf{Communication overhead:}
Another crucial performance metric is the amount of network traffic that the proposed scheme generates, which indicates whether adding a privacy preserving scheme would inundate the network and degrade its performance. Indeed, a large communication overhead may introduce a considerable delay that leaves the spectrum availability information outdated and causes interference to \ensuremath {\mathit{PU}}{\xspace} s if \ensuremath {\mathit{SU}}{\xspace} s decide to use channels based on this outdated information.
\textbf{Location privacy level:}
In addition to the privacy concepts already discussed in Section~\ref{coopPerf}, the following can be used to assess the privacy level of any given scheme.
\begin{itemize}
\item {\em Localization probability:} This is basically the probability that a \ensuremath {\mathit{SU}}{\xspace}~is geolocated successfully by an attacker under a given scheme. It may be influenced by different parameters, e.g. the number of channel switching events, the number of \ensuremath {\mathit{BS}}{\xspace} s in the network, etc. Some approaches like~\cite{li2015agent} have considered this metric to evaluate their approach's privacy level.
\item {\em Size of possible location set:} This measures the granularity of the location that an attacker can infer about a \ensuremath {\mathit{SU}}{\xspace}. A privacy preserving scheme fails completely to protect the location of a \ensuremath {\mathit{SU}}{\xspace}~if the size of this set is equal to $1$, which means that the attacker has succeeded in determining the exact cell in which \ensuremath {\mathit{SU}}{\xspace}~is located~\cite{gao2013location}.
\end{itemize}
\paragraph{Performance tradeoffs}
\textbf{Location privacy vs. spectrum utilization:} This tradeoff consists of sacrificing some utility to provide high location privacy guarantees: seeking a higher privacy level will necessarily reduce the utility in question. For instance, Zhang et al.~\cite{zhang2015optimal} make a tradeoff between the location privacy of both \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{PU}}{\xspace} s and spectrum utilization. \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{PU}}{\xspace} s can adjust their privacy levels to maximize their utilities. In this case, increasing the location privacy level decreases the spectrum utilization and vice versa.
\textbf{False positive rate vs. ideal privacy:} Some approaches, like~\cite{grissa2015cuckoo}, use set-membership data structures to construct a compact representation of \ensuremath {\mathit{DB}}{\xspace}~and make \ensuremath {\mathit{SU}}{\xspace} s query it for spectrum availability. However, this kind of data structure, despite its efficiency in compacting large sets of data, can introduce false positives when queried: a query may indicate that a channel is available while in reality it is not. Some data structures, like the {\em cuckoo filter} used in~\cite{grissa2015cuckoo}, make it possible to control this rate. Minimizing this rate will, however, increase the communication overhead. The tradeoff here is thus to allow some false positives in the filter in exchange for ideal privacy for \ensuremath {\mathit{SU}}{\xspace} s.
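To make this tradeoff concrete, the following toy fingerprint-based filter (a deliberate simplification of such set-membership structures, not the actual cuckoo filter of~\cite{grissa2015cuckoo}; all names are illustrative) shows how shrinking the per-item fingerprint reduces the transferred structure's size at the cost of more false positives.

```python
import hashlib

class FingerprintFilter:
    """Toy set-membership structure: each item is stored only as an
    f-bit fingerprint.  Lookups have no false negatives, but distinct
    items can share a fingerprint, so a query may return a false
    positive with probability roughly n / 2**f for n stored items."""

    def __init__(self, fingerprint_bits):
        self.bits = fingerprint_bits
        self.fingerprints = set()

    def _fingerprint(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        return int.from_bytes(digest[:8], "big") % (1 << self.bits)

    def add(self, item):
        self.fingerprints.add(self._fingerprint(item))

    def __contains__(self, item):
        return self._fingerprint(item) in self.fingerprints
```

An \ensuremath {\mathit{SU}}{\xspace}~that downloads such a compact structure can test a (cell, channel) pair locally without ever sending its location to \ensuremath {\mathit{DB}}{\xspace}; the fingerprint width is the knob that trades communication overhead against the false positive rate.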
\subsection{Summary}
In this section, we discussed the location privacy issues in the spectrum opportunity discovery component for both cooperative spectrum sensing-based and database-driven spectrum discovery. We detailed the different threat models and attacks that target the location information of \ensuremath {\mathit{SU}}{\xspace} s. We then presented the different approaches that are proposed in the literature to deal with these issues. Finally, we explained the different performance metrics that are or could be used to assess the efficiency and the privacy level of location privacy preserving protocols in \ensuremath {\mathit{CRN}}{\xspace} s. In the following section, we will follow the same structure and reasoning to discuss the location privacy issues in the remaining \ensuremath {\mathit{CRN}}{\xspace}~components.
\section{Location privacy preservation in other \ensuremath {\mathit{CRN}}{\xspace}~components}
\label{lpoc}
In this section, we investigate \ensuremath {\mathit{SU}}{\xspace} s' location privacy issue in the remaining \ensuremath {\mathit{CRN}}{\xspace}~components of the cognition cycle. Unlike the spectrum opportunity discovery component, much less attention has been given by the research community to the location privacy issue in these components. The design goals of privacy preserving schemes for each of these components are then to address the sources of location information leakage discussed in Section~\ref{sourcesDec} (spectrum analysis), Section~\ref{sourcesSharing} (spectrum sharing), and Section~\ref{specMobility} (spectrum mobility).
\ccomment{
\subsection{Design challenges of location privacy preserving protocols in the remaining components}
There are some challenges that may impede designing privacy preserving protocols to protect the location information of \ensuremath {\mathit{SU}}{\xspace} s during spectrum decision, spectrum sharing and spectrum mobility phases. Here we list some of these challenges.
\begin{itemize}
\item {Hide bid information during spectrum trading:} As this is shown to be a significant source of location information leakage, a privacy preserving protocol should be able to conceal this information while preserving the trading process for spectrum sharing.
\item {Leverage accurate decisions:} As discussed in Section~\ref{sourcesDec}, several sources of location information leakage could be exploited, e.g. interference, \ensuremath {\mathit{REM}}{\xspace}, topology, connectivity information, etc. A location privacy preserving spectrum decision protocol needs to determine the best spectrum bands to be used among \ensuremath {\mathit{SU}}{\xspace} s but at the same time needs to address the aforementioned vulnerabilities.
\item {Hide sensitive information during spectrum mobility:} As highlighted previously, several vulnerabilities may arise from the decision of a \ensuremath {\mathit{SU}}{\xspace}~to switch to another channel. A location privacy preserving protocol has to allow \ensuremath {\mathit{SU}}{\xspace} s to perform the spectrum handoff, if needed, while minimizing the location information that could be leaked from this process.
\end{itemize}
\subsection{Threat models}
The same threat models that we have discussed previously in the spectrum opportunity discovery phase apply to the remaining components of the cognition cycle. Thus, we skip these threat models here and we refer the reader to Sections~\ref{CPdiscovery} (cooperative spectrum sensing) and~\ref{DBdiscovery} (database-based spectrum opportunity discovery) for more details.
\subsection{Location inference attacks}
\label{attacksOther}
Some of these attacks may target \ensuremath {\mathit{SU}}{\xspace}'s location during the dynamic spectrum auction process. For instance, Liu et al.~\cite{liu2013location} identify two attacks that exploit two sources of leakage, highlighted in Section~\ref{sourcesTrading}: bid channels and bid prices. The first attack uses bid channels (i.e. channels that a \ensuremath {\mathit{SU}}{\xspace}~bids for). As explained earlier, a \ensuremath {\mathit{SU}}{\xspace}~bids only for channels that are available to it, i.e. \ensuremath {\mathit{SU}}{\xspace}~belongs to the complement area of each corresponding \ensuremath {\mathit{PU}}{\xspace}'s coverage. Hence, a malicious auctioneer can use the \ensuremath {\mathit{SU}}{\xspace}'s available set of channels, obtained from the \ensuremath {\mathit{SU}}{\xspace}'s bids, to narrow down its possible location range by intersecting the complements of the corresponding \ensuremath {\mathit{PU}}{\xspace}'s coverage areas, as shown in Figure~\ref{bpmbcm}.
\begin{figure}[th!]
\centering
\includegraphics[width=0.4\textwidth]{auctionAttack.pdf}
\caption{{\small An example of the attacks identified in~\cite{liu2013location} which first estimate the position of an \ensuremath {\mathit{SU}}{\xspace}~to be in the intersection of the available areas of channels $1$, $2$ and $3$. Then, the attacker further narrows down the estimated area by picking the cell having the smallest distance between the exact channels' qualities and those estimated from bid prices.}}
\label{bpmbcm}
\end{figure}
The second attack exploits the bid prices, which depend on the quality and characteristics of the spectrum, both known to be highly correlated with \ensuremath {\mathit{SU}}{\xspace}'s location. It can be used after the first attack to further narrow down the possible location area of the target \ensuremath {\mathit{SU}}{\xspace}. A higher bid price means that the \ensuremath {\mathit{SU}}{\xspace}~perceives a higher spectrum quality; hence, the auctioneer can estimate the channel quality perceived by a \ensuremath {\mathit{SU}}{\xspace}~from the \ensuremath {\mathit{SU}}{\xspace}'s bid price information. Since an attacker can easily have (or can reasonably be assumed to have) access to the statistics of channel qualities in each cell, it can compute the distance between these exact channel qualities and those estimated from bid prices. The cell with the minimum distance then corresponds to \ensuremath {\mathit{SU}}{\xspace}'s location with high probability, as depicted in Figure~\ref{bpmbcm}.
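The matching step of this second attack amounts to a nearest-neighbor search over per-cell channel-quality statistics. The sketch below is an illustrative reconstruction of that logic, not the authors' implementation; all names are hypothetical.

```python
import math

def infer_cell(candidate_cells, cell_quality_stats, estimated_qualities):
    """Among the cells that survived the bid-channel intersection step,
    return the one whose known per-channel quality statistics are
    closest (in Euclidean distance) to the channel qualities the
    auctioneer estimated from the target SU's bid prices."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(candidate_cells,
               key=lambda cell: distance(cell_quality_stats[cell],
                                         estimated_qualities))
```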
Other attacks may exploit the spectrum utilization information to localize \ensuremath {\mathit{SU}}{\xspace} s as explained in Section~\ref{specMobility}. Gao et al.~\cite{gao2013location}, for example, identify an attack that infers \ensuremath {\mathit{SU}}{\xspace} s' location in database-driven \ensuremath {\mathit{CRN}}{\xspace} s by exploiting the channels' utilization information.
The first component of the proposed attack arises from the fact that a \ensuremath {\mathit{SU}}{\xspace}~cannot access a \ensuremath {\mathit{PU}}{\xspace}~channel if the \ensuremath {\mathit{PU}}{\xspace}~is present, and hence, if a \ensuremath {\mathit{SU}}{\xspace}~is active in the presence of a \ensuremath {\mathit{PU}}{\xspace}, then the \ensuremath {\mathit{SU}}{\xspace}~must be outside the \ensuremath {\mathit{PU}}{\xspace}'s coverage area. This gives the attacker a clue that the \ensuremath {\mathit{SU}}{\xspace}~is located at the complement of the \ensuremath {\mathit{PU}}{\xspace}'s coverage area.
If the \ensuremath {\mathit{CRN}}{\xspace}~covered area is modeled as a grid, as shown in Figure~\ref{pucc}, the adversary keeps incrementing a score, initialized to $0$, for each cell that belongs to the available area of a specific channel. The location of the target \ensuremath {\mathit{SU}}{\xspace}~will be the cell with the maximum score, which represents the area where all the available areas of the channels overlap, as illustrated in Figure~\ref{pucc}.
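The scoring step just described can be sketched as follows; the grid, coverage sets, and function names are illustrative simplifications, not the implementation of~\cite{gao2013location}.

```python
def infer_location(grid_cells, pu_coverage, used_channels):
    """For every channel the SU is observed to use, increment the score
    of each cell lying outside that channel's PU coverage area (i.e.
    inside the channel's available area).  The cell with the maximum
    score, where all the available areas overlap, is the inferred
    location of the SU."""
    scores = {cell: 0 for cell in grid_cells}
    for chn in used_channels:
        for cell in grid_cells:
            if cell not in pu_coverage[chn]:  # complement of PU coverage
                scores[cell] += 1
    return max(scores, key=scores.get)
```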
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{gaoAttack.pdf}
\caption{{\small An example of the attack identified in~\cite{gao2013location} which uses the complement of the coverage area of each transmitting \ensuremath {\mathit{PU}}{\xspace}~to gradually localize an \ensuremath {\mathit{SU}}{\xspace}~by incrementing a score for each cell situated outside the coverage area of each \ensuremath {\mathit{PU}}{\xspace}. The inferred location will be the cell with the highest score.}}
\label{pucc}
\end{figure}
The second component of the proposed attack relies on the event in which a \ensuremath {\mathit{SU}}{\xspace}~plans to switch from some channel $\ensuremath {\mathit{chn}}{\xspace}_{k_1}$ to another channel $\ensuremath {\mathit{chn}}{\xspace}_{k_2}$ when $\ensuremath {\mathit{PU}}{\xspace}_{k_1}$ returns to its channel. There are two possible scenarios. First, $\ensuremath {\mathit{PU}}{\xspace}_{k_2}$ is also present and using its channel $\ensuremath {\mathit{chn}}{\xspace}_{k_2}$; in this case, since \ensuremath {\mathit{SU}}{\xspace}~cannot interfere with $\ensuremath {\mathit{PU}}{\xspace}_{k_2}$, the attacker learns that the target \ensuremath {\mathit{SU}}{\xspace}~is situated in the $\ensuremath {\mathit{PU}}{\xspace}_{k_1}$ coverage area and in the complement of the $\ensuremath {\mathit{PU}}{\xspace}_{k_2}$ coverage area. Second, $\ensuremath {\mathit{PU}}{\xspace}_{k_2}$ is absent; in this case, the adversary learns that \ensuremath {\mathit{SU}}{\xspace}~must be within the coverage area of $\ensuremath {\mathit{PU}}{\xspace}_{k_1}$, as it must have switched to $\ensuremath {\mathit{chn}}{\xspace}_{k_2}$ after $\ensuremath {\mathit{PU}}{\xspace}_{k_1}$'s return. The same attack is also used by Zhang et al.~\cite{zhang2015optimal} as the second component of their attack framework.
Physical-layer information based attacks are also possible during the spectrum sharing process.
In fact, an adversary can directly extract position-related parameters like \ensuremath {\mathit{RSS}}{\xspace}, \ensuremath {\mathit{AoA}}{\xspace}, \ensuremath {\mathit{ToA}}{\xspace}, etc., from \ensuremath {\mathit{SU}}{\xspace} s' signals and exploit them to locate \ensuremath {\mathit{SU}}{\xspace} s, as explained in Section~\ref{attackcoop}. This kind of attack is considered, for example, by Zhang et al.~\cite{zhang2015privacy}.
\subsection{Location privacy preserving approaches}
\label{lpparc}
Few works have addressed the location privacy issue in spectrum sharing and mobility, and none, to the best of our knowledge, has addressed this problem during the spectrum analysis phase. These works are summarized in Table~\ref{solSharing}.
\begin{table*}
\centering
\caption{\small Location privacy preserving schemes in spectrum sharing and spectrum mobility}
\label{solSharing}
\resizebox{\textwidth}{!}{%
\renewcommand{\arraystretch}{1.25}{
\begin{tabular}{@{}lp{4cm}p{4cm}p{5.5cm}p{5.5cm}@{}}
\toprule[1.5pt]
Countermeasures & Attack Considered & Techniques & Pros & Cons \\ \midrule
Liu et al.~\cite{liu2013location} & - Location inference from Bid channels and prices & \begin{tabular}[c]{@{}p{4cm}@{}} - Prefix membership matching \\ - HMAC \end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}}- Efficient in thwarting attacks that use bid prices \\ - Defends to some extent against attacks that exploit bid channels \end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}} - Requires a {\em trusted third party} \\ - Requires a tradeoff between location privacy and auction performance \end{tabular} \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Zhang et al.~\cite{zhang2015privacy} & - \ensuremath {\mathit{RSS}}{\xspace}-based PHY-layer attack & - Random power perturbation & - Mitigates a PHY-layer attack which is usually hard to thwart & - High location privacy level incurs significant degradation of network throughput \\ \addlinespace[5pt] \hline \addlinespace[5pt]
Gao et al.~\cite{gao2013location} & - Location inference from spectrum utilization information & - Spectrum mobility reduction & \begin{tabular}[c]{@{}p{5.5cm}@{}}- Low communication overhead\\ - Reduces the localization probability of \ensuremath {\mathit{SU}}{\xspace} s \end{tabular} & \begin{tabular}[c]{@{}p{5.5cm}@{}} - High computational overhead \\ - The localization probability of \ensuremath {\mathit{SU}}{\xspace} s increases with the increase of channel switches. \end{tabular} \\
\bottomrule[1.5pt]
\end{tabular}}}
\vspace{-5pt}
\end{table*}
\subsubsection{Spectrum sharing}
Some approaches try to prevent the location information leakage by hiding sensitive information exchanged during spectrum auction, e.g. location, bid channels, and bid prices, as discussed in Section~\ref{sourcesSharing}. Liu et al.~\cite{liu2013location} propose an approach that aims to preserve the location privacy of the \ensuremath {\mathit{SU}}{\xspace} s that participate in spectrum auction. This approach consists of two main components: The first component enables \ensuremath {\mathit{SU}}{\xspace} s to submit their encrypted locations and bid prices, while allowing the auctioneer to construct the conflict graph (explained in Section~\ref{selectionassignment}) and determine the maximum bid price. This is done using {\em HMAC}~\cite{krawczyk1997hmac} and the prefix membership verification scheme proposed in~\cite{chen2010safeq}. The second component enables the auctioneer to launch the auction using a greedy spectrum allocation algorithm to allocate the spectrum among \ensuremath {\mathit{SU}}{\xspace} s and a charging algorithm to securely determine the winning bids with the help of a trusted third party. Despite its merit in reducing the effectiveness of some of the attacks presented in Section~\ref{attacksOther}, and increasing the location privacy of \ensuremath {\mathit{SU}}{\xspace} s by hiding the bid prices and channels, this scheme suffers from some limitations. First, it relies on a trusted third party which is not always realistic. Second, it cannot achieve high location privacy without degrading the auction's performance.
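The prefix membership idea at the heart of the first component can be sketched as follows: a value lies in a range iff its prefix family intersects the range's prefix set, and both sides HMAC their prefixes with a shared key so the auctioneer compares only keyed digests. This is a simplified reconstruction in the spirit of~\cite{chen2010safeq} and~\cite{krawczyk1997hmac}, not the scheme's actual protocol; here the range is given directly as a prefix set and all names are illustrative.

```python
import hashlib
import hmac

def prefix_family(value, bits=8):
    """All prefixes of value's bits-bit binary form, padded with '*'
    (e.g. 6 -> {'00000110', '0000011*', ..., '********'})."""
    s = format(value, "0{}b".format(bits))
    return {s[:i] + "*" * (bits - i) for i in range(bits + 1)}

def keyed(key, prefix):
    """HMAC a single prefix so it can be compared without being revealed."""
    return hmac.new(key, prefix.encode(), hashlib.sha256).hexdigest()

def in_range_blinded(key, value, range_prefixes, bits=8):
    """The value lies in the range iff the HMAC'd prefix family of the
    value intersects the HMAC'd prefix set describing the range."""
    fam = {keyed(key, p) for p in prefix_family(value, bits)}
    rng = {keyed(key, p) for p in range_prefixes}
    return not fam.isdisjoint(rng)
```

An auctioneer holding only the HMAC'd sets can thus run the conflict test without learning the underlying locations or bid prices.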
Other approaches also try to prevent physical-layer based attacks during spectrum sharing, where attackers capture the target \ensuremath {\mathit{SU}}{\xspace} s' transmitted signals when they try to access the spectrum and use them to extract position-related measurements like \ensuremath {\mathit{RSS}}{\xspace}, \ensuremath {\mathit{ToA}}{\xspace}, \ensuremath {\mathit{AoA}}{\xspace}, etc., as explained in Section~\ref{coopSources}. For instance, Zhang et al.~\cite{zhang2015privacy} try to prevent attackers from measuring \ensuremath {\mathit{RSS}}{\xspace}~and using it to localize \ensuremath {\mathit{SU}}{\xspace} s following some of the approaches presented in Section~\ref{attackcoop}. The authors rely on a random power perturbation approach in which \ensuremath {\mathit{SU}}{\xspace} s perturb their transmission power level to obfuscate the \ensuremath {\mathit{RSS}}{\xspace}~values measured at the adversary side. This perturbation consists of reducing the transmission power to prevent an attacker from correctly estimating \ensuremath {\mathit{SU}}{\xspace} s' positions. They also design a socially-aware spectrum sharing algorithm that operates well together with the power perturbation based privacy protection approach. The main advantage of this scheme is that it addresses a physical-layer attack that is usually hard to prevent. Its main shortcoming, however, is that the higher the privacy level, the more significant the degradation of network throughput, meaning that using the scheme to preserve the location privacy of \ensuremath {\mathit{SU}}{\xspace} s degrades system performance.
\subsubsection{Spectrum mobility}
\label{solMobility}
Spectrum mobility necessarily involves the usage of different spectrum bands over time and as \ensuremath {\mathit{SU}}{\xspace} s move. However, as explained in Section~\ref{sourcesMobility}, spectrum utilization information can become a serious source of location information leakage especially when the number of used channels increases.
Gao et al.~\cite{gao2013location} propose a technique to prevent this in database-driven \ensuremath {\mathit{CRN}}{\xspace} s by relying on two observations: The first is that higher location information leakage takes place during the channel switching process; i.e., when \ensuremath {\mathit{SU}}{\xspace}~switches from one channel to another. This means that if there is a way to make a \ensuremath {\mathit{SU}}{\xspace}~only switch to a channel that it has already used previously, then this would not give extra information that could be exploited by the adversary. The second is that \ensuremath {\mathit{SU}}{\xspace} s that choose the most stable channels are less likely to switch channels. Based on these two observations, each \ensuremath {\mathit{SU}}{\xspace}~constructs a list that stores its used channels and a prediction list that contains the prediction of the duration of channels availability. \ensuremath {\mathit{SU}}{\xspace}~chooses a channel from the first list, containing the usage history, if it is available. Otherwise, \ensuremath {\mathit{SU}}{\xspace}~uses the second list containing the predicted availability duration of each channel to make sure that it picks the one with the best estimated duration, i.e. the most stable. Despite its merit in reducing the localization probability of \ensuremath {\mathit{SU}}{\xspace} s, this approach does not completely thwart the attack based on \ensuremath {\mathit{SU}}{\xspace}'s spectrum mobility. It just reduces the action space of the adversary which is still able to approximate \ensuremath {\mathit{SU}}{\xspace}'s location when it tunes to other channels. Hence, as the number of channel switching events increases, the localization probability increases. In addition, it suffers from a relatively high computational overhead.
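The selection rule just described can be sketched in a few lines; the list names and the scalar ``predicted duration'' are illustrative simplifications of the scheme in~\cite{gao2013location}.

```python
def choose_channel(available, history, predicted_duration):
    """Prefer an available channel the SU has already used, since
    re-tuning to it leaks no new information to the adversary;
    otherwise pick the available channel with the longest predicted
    availability, i.e. the most stable one, so that future switches
    become less likely."""
    for chn in history:                  # usage-history list, in order
        if chn in available:
            return chn
    return max(available, key=lambda c: predicted_duration[c])
```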
\subsection{Performance metrics and tradeoffs}
\subsubsection{Performance metrics}
\textbf{Computational complexity:}
This is again an essential metric that needs to be used to evaluate any proposed scheme. It has already been discussed in previous sections.
\textbf{Communication overhead:}
This is also an essential metric due to bandwidth constraints in \ensuremath {\mathit{CRN}}{\xspace} s, and has also been discussed in previous sections.
\textbf{Privacy level:}
The approaches used here are very similar to the approaches stressed in the previous sections. For instance, Liu et al.~\cite{liu2013location} rely on the previously discussed concepts of {\em uncertainty} and {\em incorrectness} (see Section~\ref{coopPerf}) to assess the privacy level of their proposed scheme.
Another metric could be the {\em number of used channels} as it is important to minimize the frequency of \ensuremath {\mathit{SU}}{\xspace} s' switching events to avoid attacks relying on the channel utilization as explained in Section~\ref{attacksOther}. So, the number of used channels could be seen as a suitable metric to evaluate how a privacy-preserving scheme performs in preventing such attacks as done in~\cite{gao2013location}.
\subsubsection{Performance tradeoffs}
As in the spectrum discovery phase, designing location privacy preserving protocols for spectrum analysis, sharing and mobility may require some tradeoffs between providing location privacy and maintaining some utility. For example, Zhang et al.~\cite{zhang2015privacy} consider a tradeoff between achieving high location privacy and maintaining high network throughput. Indeed, increasing the location privacy level using their approach, as explained in Section~\ref{lpparc}, is equivalent to increasing the perturbation level of the transmission power of \ensuremath {\mathit{SU}}{\xspace} s to prevent the adversary from accurately localizing them. However, as the perturbation level increases, and with it the privacy level, the network throughput decreases, thus hindering the \ensuremath {\mathit{CRN}}{\xspace}~performance.
\subsection{Summary}
In this section, we discussed the location privacy issues in the spectrum analysis, spectrum sharing and spectrum mobility components. We detailed the different threat models, location inference attacks, and location privacy preserving approaches that are proposed in the literature to protect the location privacy in \ensuremath {\mathit{CRN}}{\xspace} s with a focus on the aforementioned components. Finally, we explained the different performance metrics that could be used to assess the efficiency and the privacy level of location privacy preserving protocols in these components. In the following section, we will discuss some of the open research problems and challenges with respect to the location privacy in \ensuremath {\mathit{CRN}}{\xspace} s.
\section{Open research problems}
\label{openproblems}
There are still open research problems that could be further investigated when it comes to location privacy in \ensuremath {\mathit{CRN}}{\xspace} s. The following is a list of some of these challenges.
{\bf Location privacy in spectrum analysis:}
Location privacy issues arising during the spectrum analysis process have received little attention from the research community, despite the several vulnerabilities and sources of location information leakage that this process presents, as discussed in Section~\ref{sourcesDec}. Much work still needs to be done when it comes to investigating inference attack models that can exploit these sources of leakage, as well as developing countermeasure protocols that thwart those inference attacks.
For instance, an attack framework could combine information like topology, connectivity, interference and \ensuremath {\mathit{REM}}{\xspace}~to localize \ensuremath {\mathit{SU}}{\xspace} s, since this information could be accessible during the spectrum analysis process as highlighted in Section~\ref{sourcesDec}. To the best of our knowledge, none of the existing works have exploited these vulnerabilities, nor have they tried to defend against them.
{\bf Location privacy in spectrum sharing and mobility:}
Not many approaches in the literature have addressed the location privacy issue in these components of the cognition cycle despite the amount of information that could be leaked during spectrum sharing and mobility as stressed in Sections~\ref{sourcesSharing}~\&~\ref{specMobility}. This is still an open issue that requires further efforts from the research community.
{\bf Location privacy in distributed cooperative sensing:}
The research efforts on providing location privacy to \ensuremath {\mathit{SU}}{\xspace} s in cooperative spectrum sensing have focused on centralized approaches; only little work (e.g.~\cite{kasiri2015privacy}) has addressed this issue for distributed cooperative sensing. This research area is still not mature and requires further investigation.
{\bf Location privacy with malicious adversaries:}
Most of the existing location privacy preserving protocols in \ensuremath {\mathit{CRN}}{\xspace} s consider attack scenarios that assume no collusion between the different network entities; for example, in the context of cooperative spectrum sensing, it is almost always assumed that there is no collusion between \ensuremath {\mathit{FC}}{\xspace}~and some \ensuremath {\mathit{SU}}{\xspace} s. However, it is not unrealistic to assume that different entities can collude with one another to infer location information, especially since collusion often leads to better inference. Techniques that address colluding attackers still need to be developed and investigated, as not much has been done in this regard.
{\bf Location privacy for crowdsourced spectrum sensing:}
Crowdsourcing is an emerging tool that is gaining lots of interest in the context of \ensuremath {\mathit{CRN}}{\xspace} s. It enables the discovery of spectrum opportunities in regions with insufficient presence of \ensuremath {\mathit{SU}}{\xspace} s. In such cases, one can rely on other users (not necessarily \ensuremath {\mathit{SU}}{\xspace} s) to assess whether and which channels are available, mainly through an open-call process. To participate, these other users can be encouraged through various types of incentives (e.g., monetary, credit, etc.). In the context of \ensuremath {\mathit{CRN}}{\xspace} s, crowdsourcing suffers from location privacy risks that may expose the whereabouts of participating mobile users. Dealing with this issue is still an open problem, and only a few works in the literature have dealt with it~\cite{jin2016privacy}.
{\bf Location privacy of ${\boldsymbol \ensuremath {\mathit{PU}}{\xspace}}$s:}
This is another direction that is worth investigating, as the location of \ensuremath {\mathit{PU}}{\xspace} s could be of paramount importance, especially in the case of military incumbent systems that have stringent requirements in terms of security and privacy. Also, \ensuremath {\mathit{CRN}}{\xspace}~solutions that rely on the cooperation of \ensuremath {\mathit{PU}}{\xspace} s may fail or poorly perform if \ensuremath {\mathit{PU}}{\xspace} s are concerned about their location privacy.
Addressing the location privacy of
\ensuremath {\mathit{PU}}{\xspace} s is still in its infancy, and more still needs to be done~\cite{bahrak2014protecting,zhang2015achieving,zhang2015optimal,clarkcan2016can}.
{\bf Location privacy in emerging ${\bm \ensuremath {\mathit{CR}}{\xspace}}$-based technologies:} Emerging \ensuremath {\mathit{CR}}{\xspace}-based technologies~\cite{wang2011emerging} may bring additional location privacy challenges on top of the ones that we have discussed in this paper. For instance, in cognitive radio-based cellular networks~\cite{elsawy2013stochastic,thilina2015cellular,guizani2015large}, multiple base stations may localize or track \ensuremath {\mathit{SU}}{\xspace} s as they move across different cells. The relatively small size of the cells in this kind of networks could make it easier to localize \ensuremath {\mathit{SU}}{\xspace} s. In \ensuremath {\mathit{CRN}}{\xspace}-enabled smart grids~\cite{khalfi2014optimal,khan2016cognitive,bicen2012spectrum}, smart meters act as \ensuremath {\mathit{SU}}{\xspace} s and opportunistically search for the available spectrum to transmit their data. The location privacy concern here is quite different as it does not involve tracking a user but can lead to identifying his own personal address if a smart meter is localized. The location information when augmented with power consumption data sent by the smart meters can further reveal the presence or absence of home owners and could lead to burglary for example. Another emerging \ensuremath {\mathit{CR}}{\xspace}-based technology is cognitive radio sensor networks ($CRSN$)~\cite{akan2009cognitive,bukhari2016survey} where the sensor nodes are required to sense the environment and also the spectrum. Depending on the spectrum availability, sensor
nodes, acting as \ensuremath {\mathit{SU}}{\xspace} s, transmit their readings in an opportunistic manner to their next-hop cognitive radio sensor nodes, and ultimately, to the sink. As the sensor nodes exchange their sensing results of both the spectrum and the environment with other nodes, this presents considerable threats to the location privacy of these nodes and makes $CRSN$ inherit the location privacy issues of both $WSN$s and \ensuremath {\mathit{CRN}}{\xspace} s. All of these technologies share similar privacy threats but each also has its own unique vulnerabilities. Thus, there cannot be a one-size-fits-all solution to address location privacy in these technologies, and further research efforts need to be made to investigate and address the issues that are specific to each of them.
{\bf Location privacy in multi-database-driven ${\bm\ensuremath {\mathit{CRN}}{\xspace}}$s:} As \ensuremath {\mathit{FCC}}{\xspace}~has already approved several companies to administrate, operate and manage spectrum databases, leveraging the existence of these multiple databases (which are inherent to spectrum database-driven dynamic spectrum sharing) opens up a new class of very promising spectrum access techniques that can guarantee the protection of users' location privacy information without incurring significant overhead. This area has not been explored yet, and research efforts need to be made to investigate the potential of such an approach.
\section{Conclusion}
\label{con}
In this survey, we have first investigated \ensuremath {\mathit{SU}}{\xspace} s' location privacy issues in \ensuremath {\mathit{CRN}}{\xspace} s by exploring each functional component and identifying its inherent vulnerabilities. Then, we have discussed when and why generic, well-known privacy enhancing approaches cannot be applied off-the-shelf to provide location privacy for \ensuremath {\mathit{SU}}{\xspace} s. After that, we have explored existing attacks and approaches for providing location privacy solutions in the different \ensuremath {\mathit{CRN}}{\xspace}~components. Finally, we have highlighted some related open research problems that require future investigation and attention.
\section*{Acknowledgment}
This work was supported in part by the US National Science Foundation under NSF award CNS-1162296.
The authors would like to thank the editor and the reviewers for their valuable feedback that has improved this survey paper greatly.
\small{
\bibliographystyle{IEEEtran}
\section{Introduction}\label{intro}
We consider two types of radial similarity flows
for the compressible Euler system. These are particular types of
solutions with planar (slab), cylindrical, or spherical symmetry.\footnote{While all three types of flows are ``one-dimensional"
in the sense that they depend on a single spatial variable $r$, we reserve this term for
the case of slab symmetry (i.e., the case when there is a fixed direction in physical
space such that, at each fixed time, all flow quantities are constant in planes normal to this direction).}
Under a similarity assumption the Euler system reduces to a coupled,
nonlinear system of ODEs with respect to a similarity
variable $x=t/r^\lambda$, where $t$ is time, $r$ is distance to the origin, and $\lambda$ is the
similarity exponent.
Similarity flows provide a rare instance where exact solutions
to the multi-dimensional compressible Euler system can be constructed ``by hand''
and studied in considerable detail. Following Guderley's pioneering
study \cite{gud}, they have attracted substantial attention from physicists,
engineers, and mathematicians. For a recent overview of the literature, see
\cite{rkb_12} and references therein.
The existing literature provides examples of similarity flows
where a single (spherical or cylindrical) incoming shock wave propagates into a quiescent region
about the origin (i.e., the fluid there is at rest and at constant density
and pressure). The shock strengthens as it approaches the origin and the shock speed becomes
unbounded at the instant of collapse at the
origin. (For convenience, the time of collapse is chosen as $t=0$.)
One can construct a complete (similarity) solution for all later times as well by
having a diverging shock wave reflect off the origin. A different type of
similarity solution describes the situation where a gas fills a spherical or cylindrical
cavity (vacuum region) near the origin. Again, the speed of the fluid-vacuum interface blows
up at collapse. Also in this case
a global-in-time similarity solution can be constructed by inserting an
outgoing shock after collapse. We refer to these two types of solutions
as {\em similarity shock} and {\em similarity cavity flows}, respectively.
In either case the profiles for the fluid velocity, pressure, sound speed, and
temperature at time of collapse are unbounded, with behavior given by negative
powers of $r$ (in the cavity case, this applies also to the density profile).
For this, the similarity exponent must satisfy $\lambda>1$.
In Section \ref{eqns} we record the multi-dimensional (multi-d) Euler equations for compressible
flow of an ideal and polytropic gas with adiabatic exponent $\gamma>1$, including its
radial form. We also posit the form of the radial similarity solutions under consideration.
Section \ref{Gud_soln} outlines the setup for each type of solution and collects various properties
(initial data, jump relations, characteristics, etc.) of the similarity solutions under consideration.
For the actual construction of physically relevant similarity solutions with these properties,
we follow Lazarus \cite{L} who treats both shock and cavity flows. A complete breakdown
of the various possibilities, including the key determination of allowed values of the
similarity exponent $\lambda$, requires a detailed analysis and numerical calculations.
Our main purpose, verifying that the Euler system admits unbounded weak solutions,
does not require a full breakdown of all the cases.
Instead, Section \ref{constr_sim_solns} outlines enough of this analysis to obtain {\em some}
cases of Euler flows with unbounded amplitudes. In particular, we restrict attention to the
standard value of the similarity exponent $\lambda$. This is the so-called
``analytic'' value, denoted $\lambda_{std}$ by Lazarus \cite{L}. See Section
\ref{constr_sim_solns} for details, where we also describe how the solutions are
propagated past collapse to yield complete (i.e., global-in-time), radial similarity flows.
The resulting, well-known, solutions can be studied in detail.
In particular, we deduce their asymptotic behavior at $x=+\infty$, which plays a key role
in the analysis that follows. It turns out that the behavior of the resulting flows
after collapse is markedly different near the center of motion in the shock case
and in the cavity case; see Section \ref{refl_shck}.
We also include a discussion to the effect that, at least among similarity flows,
the continuation beyond collapse appears to be uniquely determined for both
types of flows.
Note that all jump discontinuities appearing in these similarity flows are,
by construction, entropy admissible: both the incoming and the reflected shocks are
compressive.
We then turn to our main concern: to what extent these types of similarity flows
represent genuine weak solutions of the original, multi-d compressible Euler system.
As the similarity solutions are singular and suffer blowup of primary flow variables at
the origin, it is not immediately clear in what sense the weak form is satisfied.
While some authors \cites{bk,L} have addressed the constraint of locally finite energy
for the similarity flows under consideration, we are not aware of a complete
analysis. Concentrating on similarity shock solutions, we demonstrate that the
flows constructed in the literature
are indeed {\it bona fide} weak solutions whenever the similarity exponent $\lambda$
satisfies the constraint $\lambda\leq\frac{n}{2}+1$, where $n$ is $2$ or $3$ for
cylindrical or spherical flow, respectively. The numerical values available in the
literature indicate that the solutions corresponding to the particular value
$\lambda_{std}$ always satisfy this constraint.
We shall show that the similarity shock solutions under consideration are {\it bona fide} weak
solutions in the following sense: all terms occurring in the weak formulation of
the multi-d Euler system are locally integrable in space-time; the amounts of mass,
momentum, and energy within any fixed, compact spatial region change continuously
with time (in particular, they are finite); and finally, the weak forms of the equations are satisfied.
(Their total mass, momentum and energy in all of space are
not bounded; however, this could be arranged via suitable modifications away from the origin
without affecting the blowup behavior near the origin.)
We emphasize that we verify the weak form of the original, {\em multi-d} Euler system.
Since the similarity solutions under consideration are radially symmetric, it is
convenient to first derive the corresponding weak formulation for general radial solutions.
This requires some care as the latter formulation involves different types of
``test functions'' for the different conservation laws. For completeness we include
the derivation of the radial weak form of the equations (see Definition
\ref{rad_symm_weak_soln} and Proposition \ref{rad_md}; here we follow the
analysis in \cite{hoff} for radial Navier-Stokes flow).
With these preparations, Section \ref{sim_weak_solns} provides the details
of the proof that genuine multi-d weak solutions are obtained from the
radially symmetric similarity solutions.
\paragraph{\bf Discussion}
The existence of singular flows suffering point-wise blowup of flow variables
is of obvious relevance in connection with the general Cauchy problem for the
compressible Euler system. With the notable exception of small-variation data
near a strictly hyperbolic state (Glimm \cite{glimm}), there is currently no general,
global-in-time existence result available for the one-dimensional (1-d) Cauchy problem
for hyperbolic systems. (See \cites{liu77,temple81} for extensions that cover
certain types of large variation data specifically for the Euler system.)
In more than one space dimension the situation is bleaker, and symmetric
flows offer a natural case to consider in isolation.
For results on isothermal and isentropic radial flow with general data,
see \cites{cp,cs,mmu}.
In view of the blowup exhibited by similarity shock and similarity cavity
solutions, it would appear that any existence result, applying to ``general'' data,
for the multi-d Euler system would necessarily have to involve
unbounded solutions. However, one should be careful not to draw
too general conclusions on the basis of the similarity flows we study
here. These are exceedingly special solutions, some aspects of which are
borderline physical.
In particular, both types of flows involve regions of vanishing pressure prior
to collapse.
In the case of a collapsing cavity this is due to the vacuum, and there is no reason why
the Euler model should provide an accurate description close to its collapse.
For the converging shock case, it turns out that the quiescent state into which the
converging shock propagates, must necessarily be at zero pressure (due to vanishing
temperature there) in order to generate an exact solution. In approximate
treatments this amounts to a ``strong shock'' assumption.
For the case of an incoming shock, it is physically reasonable that a nonzero
counter pressure would slow it down and possibly prevent unbounded
amplitudes.
This would provide a mechanism to ``save'' the Euler model from actual
blowup.\footnote{The situation for radial {\em isentropic} similarity flow
(constant entropy throughout, disregarding the energy equation
\cite{daf}) does not contradict this picture. In that case
a converging similarity shock can propagate into a quiescent region
only if $\lambda=1$; no blowup of primary flow variables occurs, and the upstream pressure
is strictly positive. The same applies to radial isothermal similarity flow.}
In particular, if indeed correct, this would
show that the strong shock approximation fails to capture a crucial aspect of
exact solutions near collapse of symmetric shock waves (blowup vs.\ no blowup of
primary flow variables). We note that a number of works consider the effect of a positive
counter pressure, e.g.\ \cites{ah,phpm,welsh,vrt} and
references therein. However, while amplitude blowup is still present in these works,
none of them provide {\em exact} weak solutions to the Euler system.
The conventional point of view appears to be that the blowup
exhibited by radial similarity flows results from multi-d wave focusing,
much like what occurs for radial solutions to the linear 3-d wave equation.
The remarks above raise the possibility that the unbounded amplitudes could
be due to the presence of regions of vanishing pressure.
We are not aware of a definitive argument one way or the other;
possibly both effects are required to generate blowup in $L^\infty$.
Unfortunately, 1-d (slab symmetry) similarity flows do not help in assessing
the situation: such solutions fail to generate physically
acceptable flows; see Remark \ref{no_1_d_ex}.
\section{Equations}\label{eqns}
The full, multi-d Euler system for compressible gas flow is given by
\begin{align}
\rho_t+\dv\left(\rho \vec u\right)&= 0\label{m_d_mass}\\
\left(\rho \vec u \right)_t+\dv\left(\rho \vec u\otimes \vec u\right)+\grad p &= 0\label{m_d_mom}\\
\Big[\rho e+\frac{\rho|\vec u|^2}{2}\Big]_t
+\dv\Big[\Big(\rho e+\frac{\rho |\vec u|^2}{2}+p\Big)\vec u\Big]&= 0.\label{m_d_energy}
\end{align}
The variables are $\rho=$ density, $\vec u=$ fluid velocity, $p=$ pressure, $e=$ specific
internal energy.
Under the assumption of radial symmetry (i.e., all unknowns depend only on time $t$ and
radial distance $r$ to the origin or an axis of symmetry, and
$\vec u$ is purely radial), the system takes the form: ($u=|\vec u|$)
\begin{align}
\left(r^m\rho\right)_t+\left(r^m\rho u\right)_r &= 0\label{mass}\\
\left(r^m\rho u \right)_t+\left(r^m(\rho u^2+p)\right)_r &= mr^{m-1}p\label{mom}\\
\Big(r^m\rho \Big[e+\frac{u^2}{2}\Big] \Big)_t
+\Big(r^m\rho u\Big[e+\frac{u^2}{2}+\frac{p}{\rho}\Big]\Big)_r &= 0.\label{energy}
\end{align}
Here $r$ varies over $\mathbb{R}^+$, subscripts denote differentiation, and $m=1,\, 2$ for flows with
cylindrical or spherical symmetry, respectively. With $m=0$ and $r$ varying over $\mathbb{R}$,
we have the one-dimensional Euler system.
We consider an ideal, polytropic gas with equation of state
\begin{equation}\label{perf}
p=(\gamma-1)\rho e=(\gamma-1)c_v\rho \theta,
\end{equation}
where $\gamma>1$ and $c_v$ are positive constants, and $\theta=$ temperature.
The specific entropy $S$ is related to $p$ and $\rho$ by
\begin{equation}\label{entr}
p\rho^{-\gamma}=\text{Constant}\cdot \exp\Big(\frac{S}{c_v}\Big).
\end{equation}
It is a consequence of the conservation laws above that $S$ remains constant along
particle trajectories in smooth regions of the flow:
\begin{equation}\label{entrpy_eul}
S_t+uS_r = 0.
\end{equation}
The sound speed $c$ is given by
\begin{equation}\label{sound}
c^2:=\frac{\gamma p}{\rho}\equiv \gamma(\gamma-1)e,
\end{equation}
and with $u$, $c$, and $\rho$ as primary unknowns, the system
\eq{mass}-\eq{energy} takes the form:
\begin{align}
u_t+ uu_r +\frac{1}{\gamma\rho}(\rho c^2)_r&= 0\label{u}\\
c_t+uc_r+\frac{(\gamma-1)}{2}c\Big(u_r+\frac{mu}{r}\Big)&=0\label{c}\\
\rho_t+u\rho_r+\rho\Big(u_r+\frac{mu}{r}\Big) &= 0.\label{rho}
\end{align}
Following the notation and setup of Lazarus \cite{L}, we introduce the similarity
coordinate
\begin{equation}\label{sim_coord}
x=\frac{t}{r^\ensuremath{\lambda}},
\end{equation}
where $\lambda$ is the similarity exponent (to be determined), and make the ansatz
\begin{align}
u(t,r) &= -\frac{r}{\ensuremath{\lambda} t}\ V(x)=-\frac{r^{1-\ensuremath{\lambda}}}{\ensuremath{\lambda}}\frac{V(x)}{x} \label{V}\\
c(t,r) &= -\frac{r}{\ensuremath{\lambda} t}\ C(x)=-\frac{r^{1-\ensuremath{\lambda}}}{\ensuremath{\lambda}}\frac{C(x)}{x} \label{C}\\
\rho(t,r) &=r^\kappa R(x),\label{R}
\end{align}
where $\kappa$ is a constant.
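As a quick sanity check on the scaling built into \eq{V}-\eq{R}, the following Python sketch evaluates the physical fields from given similarity profiles and confirms that, along any curve $x=\text{const}$, the velocity and sound speed scale like $r^{1-\lambda}$. (The profiles $V$, $C$, $R$ and the parameter values here are illustrative placeholders, not the actual solutions constructed below.)

```python
# Sanity check of the similarity ansatz (V)-(R): along x = const,
# u and c scale like r^(1-lambda) while rho scales like r^kappa.
# V, C, R and the parameters below are placeholders, not real solutions.

lam, kappa = 1.5, 0.0

def V(x): return -0.5 * x
def C(x): return 0.3 * x
def R(x): return 1.0

def fields(t, r):
    x = t / r**lam
    u = -(r / (lam * t)) * V(x)
    c = -(r / (lam * t)) * C(x)
    rho = r**kappa * R(x)
    return u, c, rho

x0 = -1.0                        # fixed similarity coordinate
r1, r2 = 1.0, 2.0
u1, c1, _ = fields(x0 * r1**lam, r1)   # choose t so that x = x0 at each r
u2, c2, _ = fields(x0 * r2**lam, r2)
print(u2 / u1, (r2 / r1)**(1 - lam))   # the two ratios agree
```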
We refer to solutions with this particular structure as {\em similarity flows}.
Their relevance relies on the fact that they include physically
meaningful flows where either symmetric shocks or cavities implode
(converge, focus, collapse) at the origin. Similarity flows are determined via solutions to
ODEs for $V$, $C$, and $R$. These are the {\em similarity ODEs} which we record
in Section \ref{sim_ODEs} below. We stress that, differently from many other
cases of similarity solutions, the similarity exponent $\lambda$ is not given a priori,
but must be determined as part of the solution.
\section{Similarity shock and similarity cavity solutions}\label{Gud_soln}
\subsection{Similarity shock solutions}\label{sim_shocks}
We shall first consider similarity flows where a single (spherical, cylindrical,
or planar) shock moves toward the origin for negative times, and
focuses at the origin at time $t=0$. Taking the existence
of such similarity flows for granted for now, in this section we consider the
Rankine-Hugoniot conditions, describe various constraints
that should be met by physically relevant similarity flows, and describe
a particular (critical) characteristic which plays a central role in
the construction of such flows.
First, the flows on both sides of the shock are assumed to be similarity flows
with the same values of $\ensuremath{\lambda}$, $\gamma$, and $\kappa$ in \eq{V}-\eq{R}.
We assume that the converging shock path is described by a constant
value of the similarity variable $x$, say
\begin{equation}\label{path}
x\equiv -1 \qquad\text{so that}\qquad r_{shock}=(-t)^\frac{1}{\ensuremath{\lambda}},\quad t<0.
\end{equation}
We shall only consider situations where the shock reaches the origin
with infinite speed, so that
\begin{equation}\label{constr_1}
\ensuremath{\lambda}>1.
\end{equation}
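Indeed, differentiating the shock path in \eq{path} gives
\[
\dot r_{shock}(t)=-\frac{1}{\lambda}\,(-t)^{\frac{1-\lambda}{\lambda}},\qquad t<0,
\]
so that $|\dot r_{shock}(t)|\to\infty$ as $t\uparrow 0$ precisely when $\lambda>1$.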
We follow \cite{L} and let subscripts $0$ and $1$ denote
evaluation immediately ahead of and behind of the shock,
respectively. The (exact) jump relations and entropy condition
then take the forms
\begin{align}
1+V_1 &=\frac{\gamma-1}{\gamma+1}(1+V_0)
+\frac{2C_0^2}{(\gamma+1)(1+V_0)}\label{V_jump}\\
C_1^2 &= C_0^2+\frac{\gamma-1}{2}[(1+V_0)^2-(1+V_1)^2] \label{C_jump}\\
R_1(1+V_1) &=R_0(1+V_0)\label{R_jump}\\
C_0^2 &< (1+V_0)^2.\label{entropic}
\end{align}
Here \eq{entropic} expresses that the shock is supersonic relative to the
state ahead; together these imply $C_1^2 > (1+V_1)^2$, amounting to
the admissibility of the similarity shocks.
The fluid on the inside (ahead) of the converging shock is assumed to be at rest and
at constant density and pressure (quiescent state). According to \eq{R},
the constant density there dictates that $\kappa=0$ and $R(x)$ is constant;
for concreteness let
\[R(x)\equiv 1\qquad\text{for $-\infty<x<-1$.}\]
Next, for an ideal gas $c^2\propto \frac{p}{\rho}$, so that
the sound speed is constant in the quiescent region.
As we assume $\ensuremath{\lambda}\neq 1$, \eq{C} implies that $C$ must
vanish identically there. As the fluid near the origin is assumed to
be at rest, we therefore have
\[V(x)=C(x)\equiv 0 \qquad \text{for $-\infty<x<-1$.}\]
We are thus considering a
single, converging shock which moves into a quiescent region at
zero pressure and unit density. For an ideal polytropic gas, this
means that the temperature vanishes identically in the region
inside the converging shock.
With $(V_0,C_0,R_0)=(0,0,1)$, inequality \eq{entropic} is satisfied, and the jump
relations \eq{V_jump}-\eq{R_jump} give the following
initial conditions for the similarity variables $V$, $C$, $R$ at $x=-1^+$:
\begin{equation}\label{init_data}
V(-1)=V_1=-\frac{2}{\gamma+1},\qquad
C(-1)=C_1=\frac{\sqrt{2\gamma(\gamma-1)}}{\gamma+1},\qquad
R(-1)=R_1=\frac{\gamma+1}{\gamma-1}.
\end{equation}
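As a numerical sanity check, the Python sketch below evaluates the jump relations \eq{V_jump}-\eq{R_jump} at the quiescent state $(V_0,C_0,R_0)=(0,0,1)$ and compares the result with the closed forms in \eq{init_data}; it also verifies the post-shock inequality $C_1^2>(1+V_1)^2$. (The value $\gamma=1.4$ is chosen purely for illustration.)

```python
from math import sqrt

def jump(V0, C0, R0, gamma):
    """Rankine-Hugoniot relations (V_jump)-(R_jump) in similarity variables."""
    V1 = -1 + (gamma - 1) / (gamma + 1) * (1 + V0) \
         + 2 * C0**2 / ((gamma + 1) * (1 + V0))
    C1sq = C0**2 + (gamma - 1) / 2 * ((1 + V0)**2 - (1 + V1)**2)
    R1 = R0 * (1 + V0) / (1 + V1)
    return V1, sqrt(C1sq), R1

gamma = 1.4
V1, C1, R1 = jump(0.0, 0.0, 1.0, gamma)

# Closed forms from (init_data):
V1_exact = -2 / (gamma + 1)
C1_exact = sqrt(2 * gamma * (gamma - 1)) / (gamma + 1)
R1_exact = (gamma + 1) / (gamma - 1)
print(V1, C1, R1)
```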
Along the immediate outside of the converging shock, the primary
flow variables are therefore given by \eq{V}-\eq{R} as (recall that
$\kappa=0$ in the present shock case):
\begin{equation}\label{u_c_rho_at_skock}
u=\frac{V_1}{\ensuremath{\lambda}}r^{1-\ensuremath{\lambda}}\qquad c
=\frac{C_1}{\ensuremath{\lambda}}r^{1-\ensuremath{\lambda}}\qquad \rho\equiv R_1.
\end{equation}
As we assume $\ensuremath{\lambda}>1$, it follows that the velocity $u$ and
the sound speed $c$ become unbounded along the outside
of the shock as it collapses at the origin, while the density
remains finite. (The same applies along any curve given by
$x\equiv constant \in(-1,0)$.)
Next, we are only interested in solutions where the
flow variables $u$, $c$ and $\rho$ are ``well behaved'' at
any location $r>0$ at time $t=0$.
In particular, for any fixed $r>0$ we require that
$u(t,r)$ and $c(t,r)$ tend to finite limits as $t\to 0$,
i.e., as $x\to 0$.
According to \eq{V} and \eq{C} we must therefore have that
\begin{equation}\label{V/x_C/x-zero}
\ell:=\lim_{x\to 0} \frac{V(x)}{x}\qquad \text{and}\qquad
L:=\lim_{x\to 0} \frac{C(x)}{x}\qquad\text{are finite.}
\end{equation}
Thus, in particular, we have
\begin{equation}\label{vc_zero}
V(0)=C(0)=0.
\end{equation}
It then follows from \eq{V}-\eq{C} and \eq{V/x_C/x-zero} that, at time of collapse
($t=0$), the radial flow speed $u$ and the sound speed $c$ blow up
according to
\begin{equation}\label{uc_0}
u(0,r)=-\frac{\ell}{\lambda}r^{1-\ensuremath{\lambda}}\qquad\text{and}\qquad c(0,r)=-\frac{L}{\lambda}r^{1-\ensuremath{\lambda}},
\end{equation}
while the density is constant, $\rho(0,r)\equiv R(0)$.
As a consequence, the pressure and temperature profiles at time of collapse
blow up according to
\begin{equation}\label{ptheta_0}
p(0,r),\, \theta(0,r)\, \propto \, r^{2(1-\ensuremath{\lambda})}.
\end{equation}
We point out that the limits in \eq{V/x_C/x-zero} will turn out to be non-zero and finite
for the solutions constructed below. It follows that all three characteristic speeds
($u\pm c$ and $u$) are bounded at all points except at $(t,r)=(0,0)$.
In particular, all fluid particles, except the one at the origin, are located away from
$r=0$ at time $t=0$; in other words, the solutions under consideration are not
of ``cumulative'' type where all (or a part of) the mass concentrates at the origin at collapse
(examples of such flows are given in \cites{kell,am}).
Next we note that, by \eq{init_data},
\begin{equation}\label{init_over}
C>1+V>0\qquad\text{at $x=-1$},
\end{equation}
while \eq{vc_zero} shows that the opposite inequality holds at $x=0$.
Thus, for some critical $x_c\in (-1,0)$ we must have
\[1+V(x_c)=C(x_c).\]
(For the solutions considered below, there is a unique critical value $x_c$.)
Now, to determine the full solution of the flow problem before collapse,
we must integrate the similarity ODEs
for $V(x)$, $C(x)$, and $R(x)$ for $x\in (-1,0)$, subject to the initial data in
\eq{init_data}. It so happens that these ODEs are singular at points where $1+V=C$
(see \eq{V_sim2}-\eq{D}), and we have
just seen that this must occur at some point $x_c\in (-1,0)$.
The corresponding curve in the $(t,r)$-plane turns out to be a 1-characteristic for the
corresponding Euler flow. (More generally, a calculation shows that the curve
$x\equiv \bar x=constant$ is a 1-characteristic if and only if $1+V(\bar x)=C(\bar x)$.)
Passing through $x=x_c$ corresponds to crossing the {\em critical
1-characteristic}, i.e. the 1-characteristic that catches up with the converging shock
as it collapses at the origin. See Figure 1.
\begin{figure}
\centering
\includegraphics[width=9cm,height=8cm]{incoming_shock.pdf}
\caption{Converging similarity shock before collapse (schematic).}\label{Figure_1}
\end{figure}
We point out that, in considering {\em weak} solutions, one should admit
solutions with jumps in the derivatives of the flow variables across characteristics.
In particular, $V$ and $C$ could enter and exit $x=x_c$ with different slopes.
However, we shall not exploit this feature in the present work.
\subsection{Similarity cavity solutions}\label{sim_cavs}
For the case of a collapsing cavity we consider a spherical vacuum
region centered at the origin, surrounded by fluid moving radially inward. Assuming
for now the existence of similarity flows \eq{V}-\eq{R} with this
structure, we assume that the vacuum-fluid interface
follows the path $x=-1$ for negative times. Again we consider
the case where this curve hits the origin
with infinite speed at time $t=0$, so that $\lambda>1$. The interface
is a particle trajectory, giving the initial condition for $V$ at $x=-1^+$ as
\begin{equation}\label{V_init_vac}
V(-1)=-1.
\end{equation}
To select initial conditions for $R$ and $C$ at $x=-1$,
we impose the further constraint that the entropy takes a fixed,
constant value $\bar S$ throughout the fluid region for negative times (before a
shock is reflected off the origin). The fluid pressure is then given
by $A\rho^\gamma$, where $A=A(\bar S)$ is a constant.
As the fluid pressure must vanish along the vacuum interface, it follows
that the same holds for the density $\rho$, and also the sound speed
$c=\sqrt{\gamma A\rho^{\gamma-1}}$.
Equations \eq{C} and \eq{R} thus give the initial conditions for
$C$ and $R$ at $x=-1^+$ as
\begin{equation}\label{CR}
C(-1)=R(-1)=0.
\end{equation}
For later reference we note that isentropic similarity flow requires
\begin{equation}\label{spec_kappa}
\kappa=-\frac{2(\lambda-1)}{\gamma-1};
\end{equation}
this is a consequence of the momentum equation \eq{u} with
$\rho c^2=\gamma A\rho^\gamma$, upon substituting for $u$
and $\rho$ from \eq{V} and \eq{R}, respectively.
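Explicitly, the power counting runs as follows. Along the ansatz \eq{V}-\eq{R}, each $r$-derivative at fixed $x$ lowers the power of $r$ by one (since $\partial x/\partial r=-\lambda x/r$), so the inertial terms in \eq{u} scale as
\[
u_t,\ uu_r \,\propto\, r^{1-2\lambda},
\]
while, using $\rho c^2=\gamma A\rho^\gamma$ and $\rho\propto r^\kappa$,
\[
\frac{1}{\gamma\rho}(\rho c^2)_r \,\propto\, r^{\kappa(\gamma-1)-1}.
\]
Matching exponents gives $\kappa(\gamma-1)-1=1-2\lambda$, which is \eq{spec_kappa}.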
It turns out that the similarity cavity flows constructed below
immediately leave the starting point $(V,C)=(-1,0)$ by moving into the
region $C>1+V>0$. Just as for the shock case discussed above,
we insist on ``well-behaved'' solutions satisfying \eq{V/x_C/x-zero}.
It follows that the cavity solution has to move back across the critical line
$\{C=1+V\}$, for some $x_c\in(-1,0)$, before continuing on toward
the origin.
We note that, in contrast to the case of a similarity shock,
in similarity cavity flow only the fluid velocity $u$ blows up along the
curve $x\equiv -1$, while $c$, $\rho$, $p$, and $\theta$ all
vanish there. On the other hand, \eq{V}-\eq{R} imply that all of
$u$, $c$, $\rho$, $p$, and $\theta$ blow up along all other
curves $x\equiv constant \in (-1,0)$ as $t\uparrow 0$. (This last
assertion requires that $V$ and $C$ do not vanish at any
$x\in (-1,0)$; this will be the case for the similarity cavity flows
constructed below.) Furthermore, the profiles for $u$, $c$, $p$,
and $\theta$ at time of collapse are again given by
\eq{uc_0}-\eq{ptheta_0} (provided the limits in \eq{V/x_C/x-zero}
are non-zero, which holds for the cavity flows constructed below).
Finally, for similarity cavity flow, the density is also unbounded
at time $t=0$:
\[\rho(0,r)=R(0)r^\kappa,\]
where $\kappa$, given by \eq{spec_kappa}, is strictly negative since
$\lambda>1$.
As the sound speed $c$ vanishes along the vacuum interface,
the characteristics degenerate there and become tangent to
the interface; a representative situation is recorded in Figure 2.
\begin{remark}
It can be verified that the situation in Figure 2 is valid for
the cavity flows constructed below. In particular, \eq{dZdV}
yields $C\sim \sqrt{1+V}$ near $x=-1$, and this implies that
any 1-characteristic between the interface $x=-1$ and the
critical characteristic $x=x_c$ will meet the interface at a
time strictly before collapse. It does so tangentially; at the
same point a 3-characteristic starts off tangentially into the
flow, as indicated.
\end{remark}
\begin{figure}
\centering
\includegraphics[width=9cm,height=8cm]{incoming_cavity.pdf}
\caption{Similarity cavity flow before collapse (schematic).}\label{Figure_2}
\end{figure}
\subsection{Similarity ODEs}\label{sim_ODEs}
Substituting \eq{V}-\eq{R} into \eq{u}-\eq{rho} we obtain a system of three
{\em similarity ODEs} for $V$, $C$, $R$. It is well-known that the constancy
of specific entropy along particle trajectories provides one exact integral
for the similarity ODEs (see \cite{rj}). Specifically, in any region where
the flow is smooth, we have
\begin{equation}\label{exact_integral}
R(x)^{q+1-\gamma}\left(\frac{C(x)}{x}\right)^2|1+V(x)|^q\equiv \text{constant},
\end{equation}
where
\begin{equation}\label{power}
q=\frac{\kappa(\gamma-1)+2(\lambda-1)}{\kappa+n},
\end{equation}
and $n=1,2,3$ is the spatial dimension. In the case of an incoming cavity,
the flow is isentropic for $t<0$, and $q$ vanishes according to \eq{spec_kappa},
while the right-hand side of \eq{exact_integral} is determined once the constant
value $\bar S$ of the entropy is assigned.
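Explicitly, substituting $\kappa(\gamma-1)=-2(\lambda-1)$ from \eq{spec_kappa} into \eq{power} yields
\[
q=\frac{-2(\lambda-1)+2(\lambda-1)}{\kappa+n}=0.
\]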
One can therefore obtain a closed
system for two of the unknowns, the standard choice being $V$ and $C$.
The resulting ODEs are (see \cites{cf,L})
\begin{align}
V'(x)&=-\frac{1}{\lambda x}\frac{G(V(x),C(x),\ensuremath{\lambda})}{D(V(x),C(x))}\label{V_sim2}\\
C'(x)&=-\frac{1}{\lambda x}\frac{F(V(x),C(x),\ensuremath{\lambda})}{D(V(x),C(x))}\label{C_sim2}
\end{align}
where $'=\frac{d}{dx}$ and the polynomial functions $D$ and $G$, and the rational
function $F$ are given by
\begin{align}
D(V,C)&=(1+V)^2-C^2\label{D}\\
G(V,C,\ensuremath{\lambda})&=C^2\left[nV+{\textstyle\frac{2(\ensuremath{\lambda}-1)}{\gamma+s-1}}\right]-V(1+V)(\ensuremath{\lambda}+V)\label{G}\\
F(V,C,\ensuremath{\lambda})&=C\left\{C^2\left[1+{\textstyle\frac{s(\ensuremath{\lambda}-1)}{\gamma(1+V)}}\right]
-\left[1+{\textstyle\frac{(n-1)(\gamma-1)}{2}}\right](1+V)^2\right.\label{F}\\
&\quad\qquad\left.+\left[{\textstyle\frac{(n-1)(\gamma-1)+(\gamma-3)(\ensuremath{\lambda}-1)}{2}}\right](1+V)
-{\textstyle\frac{(\gamma-1)(\ensuremath{\lambda}-1)}{2}}\right\}. \nonumber
\end{align}
Here $s$ is a switch variable: $s=1$ in the shock case and $s=0$ in the cavity case.
Combining \eq{V_sim2} and \eq{C_sim2} we obtain a single ODE
\begin{equation}\label{CV_ode}
\frac{dC}{dV}=\frac{F(V,C,\ensuremath{\lambda})}{G(V,C,\ensuremath{\lambda})}
\end{equation}
relating $V$ and $C$ along similarity solutions.
\section{Construction of complete similarity flows}\label{constr_sim_solns}
In this section we discuss the existence of solutions to the similarity
ODEs, and how these are used to build physically meaningful similarity
shock and similarity cavity flows. We seek {\em complete} solutions defined for all times.
The overall approach is, in principle, to
solve \eq{CV_ode} for $C=C(V)$ with the appropriate initial data, and
substitute the result into \eq{V_sim2}-\eq{C_sim2} to obtain
$x$-parametrizations for $V=V(x)$ and $C=C(x)$ via quadrature. From these $R=R(x)$
can be determined from the exact integral in \eq{exact_integral}. For the discontinuous
solutions under consideration, the Rankine-Hugoniot relations \eq{V_jump}-\eq{R_jump}
are used. These will uniquely determine the value of the constant on the right-hand side of
\eq{exact_integral} in each region where the solution is smooth. The
original flow variables $\rho$, $u$, and $c$ are then given via \eq{sim_coord}-\eq{R}.
Finally one needs to verify that the solution so obtained is
physically acceptable.
The analysis is complicated by the fact that the ODE \eq{CV_ode}
possesses a number of critical points (common zeros of $F$ and $G$),
whose location varies with $\gamma$, $\lambda$, and $s$.
Furthermore, these may or may not be located on the critical
lines
\[\mathcal C_\pm:=\{C=\pm(1+V)\},\]
along which the denominator $D$ in \eq{V_sim2} and \eq{C_sim2} vanishes.
As discussed below, this is a key issue.
Among the many treatments in the literature we find the work \cite{L} by Lazarus
to be the most useful for our needs. (Lazarus also studies solutions with several
converging similarity shocks, a scenario we do not consider in the present work.)
The location of the initial data for $(V,C)$ at $x=-1$ implies that the
solutions of \eq{CV_ode} need to cross the critical line $\mathcal C_+$
before continuing on to the origin in the $(V,C)$-plane.
Let
\[\mathcal F:=\{(V,C)\,|\, F(V,C,\lambda)=0\},\qquad \mathcal F_\pm:=\mathcal F\cap\{C\gtrless 0\},\]
and define $\mathcal G$, $\mathcal G_\pm$ similarly by replacing $F(V,C,\lambda)$ by
$G(V,C,\lambda)$. As shown in \cite{L}, the set $\mathcal F\cap\mathcal G$ of critical
points for \eq{CV_ode} can contain up to nine distinct points. One of these is $(V,C)=(-1,0)$, which
is the initial point for similarity cavity flow. In addition there may be up to two more critical points
located on $\mathcal C_+$; we follow Lazarus' terminology and refer to these as points 6 and 8.
Now, a similarity flow must solve the full ODE system \eq{V_sim2} and \eq{C_sim2}.
It follows from the form of these equations that any solution reaching the critical line $\mathcal C_+$,
in order to continue on to the origin in the $(V,C)$-plane, must cross at a common zero of both
$F$ and $G$. (Note that $F$ and $G$ are proportional along $\mathcal C_\pm$.)
It is this restriction that is used to determine what the relevant values of $\lambda$ can
be, for given values of $\gamma$, $n$, and $s$.
Lazarus \cite{L} provides a detailed analysis of the subtle issue of which
$\lambda$-values give complete flows. In particular, Lazarus defines a function
$\lambda_{std}=\lambda_{std}(\gamma,n,s)$ by the property that
the solution of \eq{CV_ode}, with $\lambda=\lambda_{std}$ and starting at the
appropriate initial point, passes {\em analytically} through point 6 or point 8.
As pointed out in \cite{L}, most other
authors have considered $\lambda_{std}$ to be the only physically relevant
value of the similarity exponent.
Lazarus argues against this and shows that
by removing the analyticity constraint one can, for fixed $\gamma$,
$n$, and $s$, obtain whole families of complete similarity flows as $\lambda$
varies over certain non-trivial intervals. To obtain a complete breakdown of the
possible cases requires numerical integration of the similarity ODEs.
Most of the details of this analysis are included in \cite{L}. In particular,
the numerical values of $\lambda_{std}$ for $n=2,\, 3$ and $s=0,\, 1$ have
been determined to several decimal places for a large number of
$\gamma$-values (cf.\ Tables 6.2-6.5 in \cite{L}). According to Lazarus,
``Numerically, it has been
determined beyond question that it [i.e., the function $\lambda_{std}$] exists
for the shock problem for all $\gamma>1$, and for the cavity problem for
$\gamma>\gamma_{std}$.'' Here $\gamma_{std}$ depends on the spatial
dimension and is approximately given by 2.9780 for $n=2$, and 2.4058
for $n=3$. In what follows we take these statements for granted.
In contrast to many other similarity solutions of PDEs, the similarity exponent $\lambda$
is not given a priori; no analytic expression for $\lambda_{std}$ is known.
Having determined those $\lambda$-values which give relevant solutions
to the similarity ODEs \eq{V_sim2} and \eq{C_sim2} for $x\in(-1,0)$,
it remains to continue the solution through the origin and extend it to all
$x>0$. As commented earlier, this is accomplished by inserting
an expanding similarity shock following a path of the form
$r(t)=(\frac{t}{B})^\frac{1}{\lambda}$ for $t>0$ (i.e., $x\equiv B$, where
$B>0$ is a constant).
The determination of $B$ and the construction of the solution for $x\in(B,\infty)$
are outlined in Section \ref{refl_shck} below; again, it appears necessary to
do so through numerical integration of the equations.
Having constructed a complete similarity shock or cavity solution in this manner,
it still remains to verify that the resulting flow is physically meaningful.
This includes describing the solution behavior at the origin $r=0^+$ for
$t>0$ (e.g., the velocity there should vanish), as well as checking that the
mass, momentum, and total energy are locally bounded quantities. As we show in
Section \ref{sim_weak_solns} (where we verify in detail that the similarity solutions
are genuine weak solutions to the Euler system), the latter integral constraints require that the
similarity exponent satisfies $\lambda<1+\frac{n}{2}$. It turns out that this is
satisfied for all known values of $\lambda_{std}$ (cf.\ Tables 6.2-6.5 in \cite{L}).
While we agree with \cite{L} on the relevance of non-analytic similarity flows,
the more important point, for our purposes, is that we obtain {\em some} examples
of shock and cavity flows that exhibit blowup. We therefore restrict attention to
solutions corresponding to the ``analytic'' similarity exponent $\lambda_{std}$.
\subsection{Existence of similarity shock solutions prior to collapse}\label{existn_shock}
For the shock problem we first observe that, by construction, the converging shock
along $x=-1$ is compressive. The same holds for the diverging shock following
collapse. For the present
case of an ideal gas, this implies that a fluid particle crossing the shock will suffer an increase
in its physical entropy \cite{gr_96}; i.e., all discontinuities under consideration that involve jumps of the primary
(undifferentiated) flow variables are genuine, ``entropy-satisfying'' shocks.
Next, there is no issue near the
initial point $(V_1,C_1)$ given by the first two expressions in \eq{init_data}:
the ODE \eq{CV_ode} is well behaved there and has a local solution
for any values of $\lambda>1$ and $\gamma>1$. As outlined earlier, the solution must
cross the critical line $\mathcal C_+=\{C=1+V\}$ before reaching $(V,C)=(0,0)$.
As explained above we restrict attention to the particular value $\lambda=\lambda_{std}$
for which the solution crosses the critical line $\mathcal C_+$ in an analytic manner.
\begin{remark}\label{no_1_d_ex}
The similarity ODEs \eq{V_sim2}-\eq{C_sim2} remain valid for $n=1$.
However, an analysis reveals that the solution starting out from
$(V_1,C_1)$ does not reach the critical line in this case, instead
ending at a critical point $(\bar V,\bar C)$ lying strictly above $\mathcal C_+$
(this corresponds to ``point 4'' in Lazarus' terminology \cite{L}).
The same applies to the case of 1-d similarity cavity flow.
At $(\bar V,\bar C)$, $F(V,C)$ and $G(V,C)$ vanish and are Lipschitz continuous,
while $D(V,C)$ does not vanish; therefore, the critical point is reached at $x=0$.
However, \eq{V} and \eq{C} then imply that the resulting flow is
physically meaningless at time of collapse in this case.
One could still attempt to build a 1-d flow exhibiting blowup by using only a
part of the similarity flow just described, say the part corresponding to
$x\in(-\infty,x_0)$, for an $x_0<0$. The idea would be to complete
the flow to all negative $x$, say, by a non-similarity
flow (e.g., a simple wave). However, any change made in the original similarity
flow for $x>x_0$ will necessarily influence the flow along the interface at $x=-1$,
{\em strictly} before $t=0$, and thus possibly prevent blowup. This is a
consequence of the fact that the original similarity solution does not reach the
critical line $\mathcal C_+$: there is no critical 1-characteristic in this case
(cf.\ Figure 2).
\end{remark}
After crossing the critical line the $\lambda_{std}$-solution approaches the origin
$(V,C)=(0,0)$, which is a star point for \eq{CV_ode}. $F(V,C)$ and $G(V,C)$ both vanish and
are Lipschitz continuous at the origin, while $D(V,C)$ does not vanish there.
It follows that the solution $(V(x),C(x))$ reaches the origin at $x=0$.
This critical point is again crossed in an analytic manner and the solution continues
into the lower half of the $(V,C)$-plane; see Section \ref{refl_shck}.
\begin{remark}
According to \eq{V/x_C/x-zero} the solution $(V(x),C(x))$
approaches the origin with a slope $L/\ell$. For all cases we
are aware of it is evident from numerical integration of the equations that
the limits in \eq{V/x_C/x-zero} are non-zero and finite. It follows from \eq{uc_0}
that the flow in these cases is ``well-behaved'' and physically meaningful
at time of collapse.
\end{remark}
\subsection{Existence of cavity similarity solutions prior to collapse}\label{existn_cavity}
For the cavity problem the initial point $(V,C)=(-1,0)$ for the ODE \eq{CV_ode} lies
on the critical line $\mathcal C_+=\{C=1+V\}$. This is a saddle point; a linearization about it in
the variables $(V,Z=C^2)$ shows that there is a solution leaving along the
direction
\begin{equation}\label{dZdV}
\frac{dZ}{dV}=\frac{\gamma(\gamma-1)(\lambda-1)}{n(\gamma-1)-2(\lambda-1)}.
\end{equation}
The solution $C(V)$ to \eq{CV_ode} therefore immediately enters the region
$\{C>1+V>0\}$, provided $\lambda<1+\frac{n}{2}(\gamma-1)$, which we assume
in what follows (for $s=0$).
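As a quick sanity check, the slope \eq{dZdV} and the stated sign condition can be evaluated directly; the following sketch simply implements the displayed formula.

```python
# Slope of the outgoing direction in the (V, Z=C^2) variables at the
# cavity initial point, per the displayed formula: the solution enters
# {C > 1+V > 0} precisely when the slope is positive, i.e. when
# lambda < 1 + n(gamma-1)/2.

def dZ_dV(gamma, n, lam):
    return gamma * (gamma - 1.0) * (lam - 1.0) / (n * (gamma - 1.0) - 2.0 * (lam - 1.0))

# Example (n=3, gamma=3): the threshold is lambda = 1 + n(gamma-1)/2 = 4,
# so lam = 1.5 gives a positive slope while lam = 5 does not.
```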
\begin{remark}
The corresponding solution $(V(x),C(x))$ of \eq{V_sim2}-\eq{C_sim2}
has $C(x)\to 0$ as $x\downarrow -1$. Note that
\eq{exact_integral} (with $q=0$) also gives $R(x)\to 0$ as $x\downarrow -1$.
It follows that the density $\rho$ vanishes as the
interface $\{x=-1\}$ is approached from within the fluid. Therefore,
the constructed solution satisfies the physical boundary condition that
$p\propto \rho^{\gamma-1}$ vanishes along the vacuum interface.
\end{remark}
Further along the solution, the situation is similar to that for the shock case: the
similarity exponent $\lambda$ must be chosen so that the solution of \eq{CV_ode}
crosses the critical line $\mathcal C_+$ at a common zero of $F$ and $G$, i.e., through
one of the critical points labeled 6 or 8 in \cite{L}.
In contrast to the shock case, this will not occur for
all values of $\gamma>1$. As noted earlier, for the cavity case, there is a minimal
$\gamma_{std}(n)$ below which no value of $\lambda$ yields a solution with the
required behavior.
After crossing the critical line $\mathcal C_+$, the situation is as in the
shock case. The solution proceeds toward the origin in the
$(V,C)$-plane, and passes through it in an analytic manner at $x=0$.
\subsection{Existence of similarity solutions beyond collapse; the reflected shock}\label{refl_shck}
The works \cites{L,RL_78,bg_96,RichtL_75,hun_60} consider the
continuation of similarity shock and cavity solutions beyond collapse,
to complete flows defined for all times.
We are not aware of a general result addressing the unique continuation of solutions to
\eq{m_d_mass}-\eq{m_d_energy}, symmetric or not, for unbounded initial
data. On the other hand, it is reasonable to assume that no symmetry breaking
occurs at time of collapse, and to restrict attention to radial similarity flows with the
same values of $\lambda$ and $\kappa$ also for $t>0$.
Furthermore, the unbounded pressure distribution at time of collapse (cf.\ \eq{ptheta_0})
suggests searching for a solution in which an expanding shock wave is generated
at the origin at time zero.
Following \cites{L,RL_78}, we outline the construction of a reflected similarity
shock propagating along a path $x=B=constant>0$. This
shock will decay as it moves outward through the originally converging flow, leaving a
non-isentropic flow region in its wake.
Providing a complete solution requires the continuation of the similarity solution
$(V(x),C(x))$ of \eq{V_sim2}-\eq{C_sim2} found earlier beyond $x=0$, the determination
of the reflected shock path (i.e., the value of $B$), and the solution of
\eq{V_sim2}-\eq{C_sim2} for all $x>B$. The latter part of the solution provides the
flow in the wake of the reflected shock; in particular, the asymptotic behaviors of $V(x)$
and $C(x)$ as $x\uparrow \infty$ yield the behavior of the flow variables at the center of motion
($r=0$).
Continuing the solution $(V(x),C(x))$ through the star point (proper node)
at the origin in the $(V,C)$-plane does not present any problem. This can be done in a
unique analytic manner, and the solution $(V(x),C(x))$ is continued into the lower half-plane
until it meets the critical line $\mathcal C_-=\{C=-1-V\}$.
Following \cite{L} we call this first part of the solution curve (in the lower half of
the $(V,C)$-plane) ``arc (a).''
For each point $(\tilde V_0,\tilde C_0)$ on arc (a), we
then apply the Rankine-Hugoniot relations \eq{V_jump} and \eq{C_jump} to
determine the unique point $(\tilde V_1,\tilde C_1)$, with $\tilde C_1<0$,
to which the system can potentially jump.
(Recall that the form \eq{V_jump}-\eq{R_jump} of the Rankine-Hugoniot relations
assumes the discontinuity follows a ``similarity path'' $x=constant$, with the same
values of $\lambda$, $\gamma$, and $\kappa$ on both sides of the discontinuity.)
As was noted in connection with \eq{V_jump}-\eq{R_jump}, since $\tilde C_0^2 < (1+\tilde V_0)^2$
along arc (a), the corresponding points $(\tilde V_1,\tilde C_1)$ necessarily
lie below the critical line $\mathcal C_-$.
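The geometric statement that the jumped-to points lie below $\mathcal C_-$ reflects the fact that the flow relative to a shock is supersonic ahead and subsonic behind. As an illustration, the following sketch uses the standard normal-shock relation of gas dynamics for the relative Mach number $M=|1+V|/|C|$; we assume this is consistent, up to the similarity scaling, with \eq{V_jump}-\eq{C_jump} (which are not reproduced here).

```python
# Standard normal-shock relation for an ideal gas: downstream relative
# Mach number M1 in terms of the upstream relative Mach number M0 >= 1.
# In the similarity variables M = |1+V|/|C|, and M1 < 1 corresponds to
# the jumped-to point lying strictly below the critical line C_-.

def downstream_mach(M0, gamma):
    M1_sq = ((gamma - 1.0) * M0**2 + 2.0) / (2.0 * gamma * M0**2 - (gamma - 1.0))
    return M1_sq ** 0.5
```

For $M_0=1$ the relation is the identity (no jump), and for any $M_0>1$ it yields a subsonic relative state behind the shock.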
As $x$ increases from $0$, the point $(\tilde V_0,\tilde C_0)\equiv (V(x),C(x))$
moves away from the origin along arc (a). At the same time the corresponding point
$(\tilde V_1,\tilde C_1)$ traces out a certain simple curve; we follow \cite{L} and refer to
it as the {\em jump locus} (of arc (a)).
(This jump locus is the dotted, smile-shaped curve in the lower half-plane indicated in Figure 3 below.)
According to \eq{V_jump}-\eq{C_jump} its left endpoint is $(V_1,-C_1)$ (corresponding to
the point $(\tilde V_0,\tilde C_0)=(0,0)$), where
$V_1$ and $C_1$ are given by \eq{init_data}. Its right endpoint lies on the critical
line $\mathcal C_-$ and coincides with the endpoint of arc (a).
At this stage, each point on the jump locus (except its endpoints) provides possible
initial data for \eq{V_sim2}-\eq{C_sim2}, from which a solution trajectory
should be continued for all $x>B$.
The issue now is to argue that there is a unique point $(\hat V_1,\hat C_1)$ on the
jump locus from which the solution can be continued to provide a
physically meaningful solution to \eq{m_d_mass}-\eq{m_d_energy}.
A computation shows that the ODE \eq{CV_ode} has a critical point at $(V,C)=(V_0,-\infty)$, where
\begin{equation}\label{V_naught}
V_0=-\frac{2(\ensuremath{\lambda}-1)}{n(\gamma+s-1)}
\end{equation}
gives the vertical asymptote for the zero-level of $G(V,C,\lambda)$ in the $(V,C)$-plane.
This point corresponds to a saddle point at the origin in the variables $(v,\zeta)=(V-V_0,C^{-2})$.
There is therefore exactly one solution of \eq{CV_ode} which approaches the vertical
asymptote $V=V_0$. Furthermore, it appears that this solution, when integrated in from infinity,
always lies entirely below the critical line $\mathcal C_-$ before intersecting the previously
determined jump locus at a single point $(\hat V_1,\hat C_1)$. This solution trajectory is
referred to as ``arc (b).'' We then apply \eq{V_jump} and
\eq{C_jump} to find the corresponding point $(\hat V_0,\hat C_0)$ on arc (a).
The $x$-value $B$ at which the expanding shock is located is then determined by
the condition that $(V(x),C(x))|_{x=B}=(\hat V_0,\hat C_0)$, where $(V(x),C(x))$ denotes the
$x$-parametrization of arc (a). Modulo the $x$-parametrization
of arc (b), this procedure determines the solution for all $x>0$, and provides a complete
solution for both types of radial similarity flows.
\begin{remark}\label{stagnation}
As is evident from Figure 8.30 in \cite{L}, and explicitly pointed out in \cite{bg_96},
for $\gamma\gtrsim3$ and $n=3$, the similarity shock solution suffers stagnation ($u=0$)
ahead of the reflected shock. In the phase plane this corresponds to the situation where
the solution $(V(x),C(x))$ moves along arc (a) into the left half plane
$\{V<0\}$ before jumping to arc (b).
\end{remark}
Before addressing the uniqueness of this solution, we
record how Lazarus \cite{L} obtains the $x$-parametrization of arc (b).
First $V$ and $C$ are expanded
in powers of the new independent variable $w=kx^{-\sigma}$, where $k$ and $\sigma>0$
are constants to be determined. With the ansatz
\begin{equation}\label{V_Z_of_w}
V(w)=\sum_{i=0}^\infty V_iw^i \qquad\text{and}\qquad C(w)=-\frac{1}{w}+\sum_{i=0}^\infty C_iw^i,
\end{equation}
substitution into \eq{V_sim2} and \eq{C_sim2} yields the value in \eq{V_naught} for $V_0$, and
\begin{equation}\label{sigma_z}
\sigma=\frac{1}{\lambda}\Big[1+\frac{s(n-1)z}{1+V_0}\Big] \qquad
\text{where}\qquad z=\frac{\lambda-1}{(n-1)(\gamma+s-1)}.
\end{equation}
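The constants in \eq{V_naught} and \eq{sigma_z} are straightforward to evaluate; the following sketch implements the displayed formulas directly (the numerical value of $\lambda_{std}(3,3,1)$ used below is the one from Table 6.5 in \cite{L}, quoted in the example following this discussion).

```python
# Direct evaluation of the constants V_0, z, and sigma appearing in the
# series expansion of arc (b), per the displayed formulas.

def arc_b_constants(lam, gamma, n, s):
    V0 = -2.0 * (lam - 1.0) / (n * (gamma + s - 1.0))
    z = (lam - 1.0) / ((n - 1.0) * (gamma + s - 1.0))
    sigma = (1.0 / lam) * (1.0 + s * (n - 1.0) * z / (1.0 + V0))
    return V0, z, sigma

# Shock case n = gamma = 3, s = 1, lam = lambda_std(3,3,1):
V0, z, sigma = arc_b_constants(1.5713126233, 3.0, 3, 1)
```

Note that for $s=0$ the formula reduces to $\sigma=1/\lambda$; and once $B$ and $w_1$ are known, the $x$-parametrization of arc (b) follows from $k=B^\sigma w_1$.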
To integrate the ODE system in from the critical point $(V_0,-\infty)$ at infinity, Lazarus
instead integrates the system for $V(w)$ and $C(w)$ from $w=0$, and thus obtains
the $w$-parametrization of arc (b). This provides the value $w_1$ for which
$(V(w),C(w))|_{w=w_1}=(\hat V_1,\hat C_1)$, the point where arc (b) intersects
the jump locus of arc (a). As explained above, this determines, via the Rankine-Hugoniot relations
\eq{V_jump}-\eq{C_jump} and the $x$-parametrization of arc (a),
the location $x=B$ of the reflected shock. Finally, the $x$-parametrization of
arc (b) requires the determination of the constant $k$, which is now given by $k=B^\sigma w_1$.
\begin{example}
In Figure 3 we have used Maple to display the complete similarity shock
solution ($s=1$) in the $(V,C)$-plane for the case $n=\gamma=3$.
We have used the values $\lambda=\lambda_{std}(3,3,1)\approx
1.5713126233$ and $B\approx 0.693970$ given by Table 6.5 in \cite{L}
(see erratum in \cite{L_errat}). The solution starts at the starred point above the critical line
$\{C=1+V\}$, moves downward, crosses $\{C=1+V\}$ and the origin smoothly,
and then crosses the critical line $\{C=-1-V\}$ by jumping, before continuing along arc (b)
toward the critical point at $(V_0,-\infty)$. Note that, in accordance with
Remark \ref{stagnation}, the first jump point, corresponding to the state ahead of the
reflected shock, is close to $\{V=0\}$.
\begin{figure}\label{Figure_3}
\centering
\includegraphics[width=9cm,height=9cm]{complete_shock.pdf}
\caption{Complete trajectory of similarity shock solution ($n=\gamma=3$) in the $(V,C)$-plane.
Thick dash $=$ solution for $-1<x<0$, thick solid $=$ arc (a), dotted $=$ jump locus, solid $=$ arc (b),
thin dash $=$ zero-level of $G(V,C)$, dash-dot $=$ critical lines, star $=$ starting point, circles $=$
jump points.}
\end{figure}
\end{example}
We note that, according to \eq{V}, the physical requirement that the particle velocity $u(t,r)$
vanish at the center of motion $r=0$ for all $t>0$ imposes the condition
$V(x)/x^\frac{1}{\lambda}\to 0$ as $x\uparrow\infty$.
Of course, this is satisfied for the solution determined above since $V(x)$ in that case
tends to the finite limit $V_0$ as $x\uparrow\infty$.
By combining the asymptotic behavior of $V(x)$ and $C(x)$ with the exact integral
\eq{exact_integral} we obtain that of $R(x)$, and thus a complete description of the
flow near the center of motion. A calculation shows that the result depends on
the value of $s$; at any fixed time $t>0$ and as $r\downarrow 0$, we have:
\begin{itemize}
\item[(O1)] for similarity cavity flow ($s=0$): $\rho(t,r)$, $p(t,r)$, and
$\theta(t,r)\propto c(t,r)^2$ all tend to nonzero constants (cf.\ Figures 8.19-8.22 in \cite{L});
\item[(O2)] for similarity shock flow ($s=1$): $\rho(t,r)\to 0$, $p(t,r)$
tends to a strictly positive constant, while $c(t,r)$ and
$\theta(t,r)$ both tend to $+\infty$ (cf.\ Figures 8.25-8.28 in \cite{L}).
\end{itemize}
(For a representative calculation, see the proof of Lemma \ref{prelim} below.)
It is noteworthy that, in the case of similarity shock flow, the density vanishes at the
center of motion after collapse, without the pressure tending to zero there. For the
ideal gas under consideration, this
yields unbounded temperature and sound speed
at $r=0$ for $t>0$. (This contradicts Lazarus' statement on p.\ 330 in \cite{L}
when $s=1$.)
In our view, this is another manifestation of the borderline physicality
of the radial similarity solutions under consideration.
It remains to discuss the uniqueness of the solution determined above, which
was obtained by exploiting the critical (saddle) point $(V_0,-\infty)$ at infinity for
the ODE \eq{CV_ode}. Consider first similarity cavity flow ($s=0$), in which case
\eq{CV_ode} has critical points also at $(-\infty,-\infty)$ and at $(\infty,-\infty)$.
However, neither of these appears to be reachable from the jump locus of arc (a).
Indeed, from the phase portraits it appears that all solution trajectories $(V(x),C(x))$
starting from points on the jump locus lying to the left of $(\hat V_1,\hat C_1)$ end up
(for a finite value of $x$) on the critical line $\mathcal C_+$, while all trajectories
starting from points on the jump locus lying to the right of $(\hat V_1,\hat C_1)$ end up
on $\mathcal C_-$. There is no way to continue these solutions to all $x>0$
and obtain complete, physically meaningful flows.
For the case of similarity shock flow ($s=1$), the ODE \eq{CV_ode} has an
additional critical point at $(V,C)=(-1,-\infty)$ (due to the $(1+V)^{-1}$-term
in $F(V,C,\lambda)$ in this case, cf.\ \eq{F}). From the phase portraits it appears
that all solution trajectories $(V(x),C(x))$ starting from points on the jump locus
lying to the left of $(\hat V_1,\hat C_1)$ approach this point. (All trajectories
starting from points on the jump locus lying to the right of $(\hat V_1,\hat C_1)$
appear again to end up on $\mathcal C_-$ for finite $x$-values). Changing to the variables
$(V,\frac{1}{C})$ and linearizing reveals that the point $(V,C)=(-1,-\infty)$ is necessarily
reached for a finite $x$-value, say $\check x$ (depending on where along the jump
locus the solution started). According to \cite{RL_78}, this shows that the critical point
$(-1,-\infty)$ cannot describe the physical state at $r=0^+$ for $t>0$ (since this
corresponds to $x=+\infty$), and is therefore irrelevant.
However, this does not resolve the issue completely. A calculation shows that if
the solution $(V(x),C(x))$ of \eq{V_sim2}-\eq{C_sim2} tends to $(-1,-\infty)$ as $x\uparrow \check x$,
then the density $\rho(t,r)$ at a fixed time $t>0$ will satisfy
\[\rho(t,r)\downarrow 0\quad\text{as}\quad r\downarrow
\big(\textstyle\frac{t}{\check x}\big)^\frac{1}{\lambda};\]
that is, a vacuum is reached.
This solution structure is not unreasonable: one might well
imagine an expanding vacuum region opening up in the wake of a strong,
expanding shock (a possibility considered by Hunter \cite{hun_60} for
the particular case of similarity cavity flow with $\gamma=7$). However, a further
calculation reveals that the pressure $p(t,r)$ does {\em not} tend to zero as
$ r\downarrow (t/\check x)^{1/\lambda}$ (for $t>0$ fixed).
This type of solution is therefore rejected as unphysical.
While these observations do not provide a rigorous proof, they support the view that the
only way to obtain a complete and physically admissible solution
is by having $(V(x),C(x))$ approach the
saddle point at $(V_0,-\infty)$ as $x\uparrow \infty$. It therefore appears that both similarity shock
and similarity cavity solutions are uniquely determined beyond collapse, at least among
similarity flows.
\section{Weak and radial weak Euler solutions}\label{weak_solns}
We next consider whether the radial similarity solutions constructed
above, considered as functions of time and space, provide weak solutions
to the original multi-d Euler system \eq{m_d_mass}-\eq{m_d_energy}.
For concreteness, in what follows, we focus on the case of similarity shock solutions,
in which case the radial velocity, sound speed, pressure and temperature
are unbounded at time of collapse, cf.\ \eq{uc_0}-\eq{ptheta_0}.
The formulation and verification of the weak form of the equations
therefore requires attention. Somewhat surprisingly, this does not
appear to have been addressed in the existing literature.
In this section we formulate the weak form of the Euler system (in the
absence of vacuum regions), first for general, multi-d solutions, and
then specialize it to the case of radial solutions.
\subsection{General, multi-d weak solutions}\label{multi-d_weak_solns}
We write $\rho(t)$ for $\rho(t,\cdot)$ etc., $\vec u=(u_1,\dots,u_n)$,
$u:=|\vec u|$, and let $z=(z_1,\dots,z_n)$ denote the spatial variable in $\mathbb{R}^n$.
We restrict attention to non-vacuum solutions.
\begin{definition}\label{weak_soln}
Consider the compressible Euler system \eq{m_d_mass}-\eq{m_d_energy}
in $n$ space dimensions, with a given pressure function $p=p(\rho,e)\geq 0$,
and let the measurable functions $\rho,\, u_1,\dots,u_n,\, e:\mathbb{R}_t\times \mathbb{R}_z^n\to \mathbb{R}$
be given. We say that these constitute a (non-vacuum) weak solution to
\eq{m_d_mass}-\eq{m_d_energy} provided that:
\begin{itemize}
\item[(i)] the functions $\rho$ and $e$ satisfy $\rho(t,z)>0$
and $e(t,z)\geq 0$ for a.a.\ $(t,z)\in \mathbb{R}\times \mathbb{R}^n$;
\item[(ii)] the maps $t\mapsto \rho(t)$, $t\mapsto \rho(t) u(t)$,
and $t\mapsto \rho(t)(e(t)+\frac{u(t)^2}{2})$ belong to $C(\mathbb{R}_t;L^1_{loc}(\mathbb{R}^n_z))$;
\item[(iii)] the functions $\rho u^2$, $p$, and
$\big[\rho \big(e+\textstyle\frac{u^2}{2}\big)+p\big]u$ belong to $L^1_{loc}(\mathbb{R}_t\times\mathbb{R}^n_z)$;
\item[(iv)] the conservation laws for mass, momentum, and energy are
satisfied weakly in the sense that
\begin{align}
\int_\mathbb{R}\int_{\mathbb{R}^n} \rho\varphi_t+\rho\vec u\cdot\nabla_{z}\varphi
\, dzdt &=0\label{m_d_mass_weak}\\
\int_\mathbb{R}\int_{\mathbb{R}^n} \rho u_i\varphi_t
+\rho u_i\vec u\cdot\nabla_{z}\varphi+p\varphi_{z_i}\, dzdt &=0
\qquad \text{for $i=1,\dots, n$} \label{m_d_mom_weak}\\
\int_\mathbb{R}\int_{\mathbb{R}^n} \rho \big(e+\textstyle\frac{u^2}{2}\big) \varphi_t
+\left[\rho \big(e+\textstyle\frac{u^2}{2}\big)+p\right]
\vec u\cdot\nabla_{z}\varphi\, dzdt &=0 \label{m_d_energy_weak}
\end{align}
whenever $\varphi\in C_c^1(\mathbb{R}_t\times \mathbb{R}^n_z)$ (the space of $C^1$-smooth functions with compact support).
\end{itemize}
\end{definition}
\begin{remark}
Note that we allow for the possibility that the density vanishes on sets of
measure zero.
This is relevant since, as noted above, the similarity shock solutions constructed earlier include a
vacuum state at the center of motion after collapse.
Also, we do not address admissibility of weak solutions in general. While this is
not the only possible criterion, we regard the similarity shock solutions under
consideration as admissible since their discontinuities are, by construction,
compressive shocks in ideal gases.
\end{remark}
\subsection{Radial weak Euler solutions}\label{rad_weak_solns}
Next, for completeness we detail the relationship between weak solutions of the multi-d
Euler system \eq{m_d_mass}-\eq{m_d_energy} and ``radial weak solutions'' of the radial
version \eq{mass}-\eq{energy}. This analysis has been provided earlier by
Hoff \cite{hoff} for radial solution of the compressible, isentropic Navier-Stokes
system.
Setting $m:=n-1$ we let
\[\mathbb{R}^+=(0,\infty),\qquad \mathbb{R}_0^+=[0,\infty),\qquad
L^1_{(loc)}(dt\times r^mdr)=L^1_{(loc)}(\mathbb{R}\times\mathbb{R}^+_0,dt\times r^mdr),\]
and let $C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$ denote the set of real-valued functions
$\psi(t,r)$ that are $C^1$ smooth
on $\mathbb{R}\times\mathbb{R}^+_0$ and vanish outside
$[-\bar t,\bar t]\times[0,\bar r]$ for some $\bar t,\, \bar r\in\mathbb{R}^+$.
In particular, for any $\psi$ in $C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$
the derivatives $\partial^l_t\partial_r^k \psi$ with $0\leq l+k \leq 1$ have well-defined (finite),
continuous, and possibly non-vanishing, traces along the $t$-axis.
Finally, we let $C^1_0(\mathbb{R}\times\mathbb{R}^+_0)$ denote the set of functions
$\psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$ with the additional property that $\psi(t,0)\equiv 0$.
\begin{remark}\label{psi_0_rmk}
It follows from this that for any $\psi\in C^1_0(\mathbb{R}\times\mathbb{R}^+_0)$, and any
compact time interval $[-T,T]$, there is a constant $A=A_{\psi,T}$ so that
\[|\psi(t,r)|\leq Ar\quad\text{for all $t\in[-T,T]$.}\]
\end{remark}
The relevance of these function classes is the following: when the weak
formulation of the full multi-d Euler system
\eq{m_d_mass}-\eq{m_d_energy} is applied to radial solutions, the relevant
``test functions'' for the radial continuity and energy equations
will belong to $C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$, while the relevant ``test functions'' for
the radial momentum equation will belong to $C^1_0(\mathbb{R}\times\mathbb{R}^+_0)$.
Before verifying this we define ``radial weak solutions.''
\begin{definition}\label{rad_symm_weak_soln}
Consider the radial version \eq{mass}-\eq{energy} of the compressible Euler
system \eq{m_d_mass}-\eq{m_d_energy}, where $(t,r)$ ranges over $\mathbb{R}\times \mathbb{R}^+$
and $p=p(\rho,e)\geq 0$ is a given pressure function.
Let the measurable functions $\rho,\, u,\, e:\mathbb{R}_t\times \mathbb{R}^+_r\to \mathbb{R}$
be given. We say that these constitute a (non-vacuum) radial weak solution to
\eq{mass}-\eq{energy} provided that:
\begin{itemize}
\item[(i)] the functions $\rho$ and $e$ satisfy $\rho(t,r)>0$
and $e(t,r)\geq 0$ for a.a.\ $(t,r)\in \mathbb{R}\times \mathbb{R}^+$;
\item[(ii)] the maps $t\mapsto \rho(t)$, $t\mapsto \rho(t)u(t)$, and
$t\mapsto \rho(t)(e(t)+\frac{u(t)^2}{2})$ belong to
$C(\mathbb{R}_t;L^1_{loc}(r^mdr))$;
\item[(iii)] the functions $\rho u^2$, $p$, and
$\big[\rho \big(e+\textstyle\frac{u^2}{2}\big)+p\big]u$ belong to $L^1_{loc}(dt\times r^mdr)$;
\item[(iv)] the conservation laws for mass, momentum, and energy are
satisfied in the distributional sense that
\begin{align}
\int_{\mathbb{R}}\int_{\mathbb{R}^+} \left(\rho\psi_t+\rho u\psi_r\right)\, r^mdrdt &=0
\qquad\forall \psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0) \label{radial_mass_weak}\\
\int_{\mathbb{R}}\int_{\mathbb{R}^+} \left(\rho u\psi_t
+\rho u^2\psi_r+p\big(\psi_r+\textstyle\frac{m\psi}{r}\big)\right)\, r^mdrdt &=0
\qquad\forall \psi\in C^1_0(\mathbb{R}\times\mathbb{R}^+_0)\label{radial_mom_weak}\\
\int_\mathbb{R}\int_{\mathbb{R}^+} \left(\rho \big(e+\textstyle\frac{u^2}{2}\big) \psi_t
+\left[\rho \big(e+\textstyle\frac{u^2}{2}\big)+p\right] u\psi_r\right)\, r^mdrdt &=0
\qquad\forall \psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0).\label{radial_energy_weak}
\end{align}
\end{itemize}
\end{definition}
\begin{proposition}\label{rad_md}
Consider the multi-d Euler system \eq{m_d_mass}-\eq{m_d_energy} with a
given pressure function $p=p(\rho,e)$, together with its radially symmetric
version \eq{mass}-\eq{energy}. Then:
given a radial weak solution
$(\tilde \rho, \tilde u, \tilde e)$ of \eq{mass}-\eq{energy},
and setting
\begin{equation}\label{rad_to_gen_soln}
\rho(t,z)=\tilde\rho(t,r),\qquad \vec u(t,z)
=\tilde u(t,r)\frac{z}{r},\qquad e(t,z)=\tilde e(t,r)
\qquad (r=|z|),
\end{equation}
we obtain a weak solution $(\rho,\vec u,e)$ of the multi-d Euler system
\eq{m_d_mass}-\eq{m_d_energy}.
\end{proposition}
\begin{proof}
First, it is immediate that the properties in parts (i)-(iii) of Definition
\ref{rad_symm_weak_soln}, together with \eq{rad_to_gen_soln},
imply parts (i)-(iii) of Definition \ref{weak_soln}, respectively.
It remains to verify the weak form of the equations.
To verify \eq{m_d_mass_weak} we fix $\varphi\in C_c^1(\mathbb{R}\times \mathbb{R}^n)$ and set
\begin{equation}\label{psi_1}
\psi(t,r):=\int_{|y|=1}\varphi(t,ry)\, dS_{y}.
\end{equation}
Then $\psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$ and \eq{radial_mass_weak} gives
\begin{align*}
0&=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \left(\tilde \rho\psi_t+\tilde \rho \tilde u\psi_r\right)\, r^mdrdt\\
&=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \Big[\tilde \rho \int_{|y|=1}\varphi_t(t,ry)\, dS_{y}
+\tilde \rho \tilde u\int_{|y|=1}\partial_r\left(\varphi(t,ry)\right)\, dS_{y}\Big]\,
r^mdrdt\\
&=\int_{\mathbb{R}}\int_{\mathbb{R}^+}\int_{|y|=1} \Big[\tilde \rho\varphi_t(t,ry)
+\tilde \rho \tilde u \nabla_{z}\varphi(t,ry)\cdot y\Big]\, r^m dS_{y}drdt
=\int_\mathbb{R}\int_{\mathbb{R}^n} \rho\varphi_t+\rho\vec u\cdot\nabla_{z}\varphi \, dzdt,
\end{align*}
verifying the weak form \eq{m_d_mass_weak} of the continuity equation
\eq{m_d_mass} in the multi-d Euler system.
Next, to verify \eq{m_d_mom_weak} we fix $i$ ($1\leq i\leq n$) and
$\varphi\in C_c^1(\mathbb{R}\times \mathbb{R}^n)$, and set
\begin{equation}\label{psi_2}
\psi(t,r):=\int_{|y|=1}y_i\varphi(t,ry)\, dS_{y}.
\end{equation}
Then $\psi\in C^1_0(\mathbb{R}\times\mathbb{R}^+_0)$ and \eq{radial_mom_weak} gives
\begin{equation}\label{m2}
\int_{\mathbb{R}}\int_{\mathbb{R}^+} \Big(\underbrace{\tilde \rho \tilde u\psi_t}_{I}
+\underbrace{\tilde \rho \tilde u^2\psi_r}_{I\!I}
+\underbrace{\tilde p\big(\psi_r+{\textstyle\frac{m\psi}{r}}\big)}_{I\!I\!I}\Big)\, r^mdrdt=0,
\end{equation}
where $\tilde p=p(\tilde \rho,\tilde e)$. Treating each term in turn, we have:
\begin{align*}
I&=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \tilde \rho \tilde u\psi_t\, r^mdrdt
=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \tilde \rho \tilde u\Big[ \int_{|y|=1}
y_i\varphi(t,ry)\, dS_{y}\Big]_t\, r^mdrdt\\
&=\int_{\mathbb{R}}\int_{\mathbb{R}^+}\int_{|y|=1} \tilde \rho \tilde u y_i \varphi_t(t,ry)\,
r^m dS_{y}drdt
=\int_\mathbb{R}\int_{\mathbb{R}^n} \rho u_i\varphi_t \, dzdt,
\end{align*}
and
\begin{align*}
I\!I&=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \tilde \rho \tilde u^2\psi_r\, r^mdrdt
=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \tilde \rho \tilde u^2\Big[ \int_{|y|=1}
y_i\varphi(t,ry)\, dS_{y}\Big]_r\, r^mdrdt\\
&=\int_{\mathbb{R}}\int_{\mathbb{R}^+}\int_{|y|=1}
\tilde \rho \tilde u^2 y_i \nabla_{z}\varphi(t,ry)
\cdot y\, r^m dS_{y}drdt
=\int_\mathbb{R}\int_{\mathbb{R}^n} \rho u_i\vec u\cdot\nabla_{z}\varphi \, dzdt.
\end{align*}
For $I\!I\!I$ we first calculate
\begin{align*}
\left(r^m\psi\right)_r&=\partial_r\Big(r^m\int_{|y|=1} y_i\varphi(t,ry)\,
dS_{y}\Big)
= \partial_r\Big(\int_{|z|=r} \varphi(t,z){\textstyle\frac{z_i}{|z|}}\, dS_{z}\Big)\\
&= \partial_r\Big(\int_{|z|\leq r} \varphi_{z_i}(t,z)\,
dz\Big)
= \partial_r\Big(\int_0^r\int_{|y|=1} \varphi_{z_i}(t,sy)\, s^m dS_{y}ds\Big)
= r^m\int_{|y|=1}\varphi_{z_i}(t,ry)\, dS_{y}.
\end{align*}
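The identity just derived for $(r^m\psi)_r$ can also be verified numerically; the following sketch checks it for $n=2$ (so $m=1$, $i=1$) with the illustrative test function $\varphi(z)=z_1e^{-|z|^2}$, comparing a finite-difference derivative of $r\psi(r)$ against the surface integral of $\varphi_{z_1}$.

```python
import math

# Numerical check (n=2, m=1, i=1) of the identity
#   d/dr [ r^m psi(r) ] = r^m \int_{|y|=1} phi_{z_1}(r y) dS_y,
# where psi(r) = \int_{|y|=1} y_1 phi(r y) dS_y, using the illustrative
# test function phi(z) = z_1 exp(-|z|^2).

def circle_int(f, r, N=2000):
    """Trapezoidal quadrature of f over the unit circle (dS = dtheta)."""
    total = 0.0
    for k in range(N):
        th = 2.0 * math.pi * k / N
        total += f(r * math.cos(th), r * math.sin(th), math.cos(th), math.sin(th))
    return total * (2.0 * math.pi / N)

def phi(z1, z2):
    return z1 * math.exp(-(z1**2 + z2**2))

def dphi_dz1(z1, z2):
    return (1.0 - 2.0 * z1**2) * math.exp(-(z1**2 + z2**2))

def psi(r):
    return circle_int(lambda z1, z2, y1, y2: y1 * phi(z1, z2), r)

def lhs(r, h=1e-5):     # d/dr [ r * psi(r) ] by central differences
    return ((r + h) * psi(r + h) - (r - h) * psi(r - h)) / (2.0 * h)

def rhs(r):             # r * \int phi_{z_1} dS
    return r * circle_int(lambda z1, z2, y1, y2: dphi_dz1(z1, z2), r)
```

For this $\varphi$ both sides equal $2\pi r(1-r^2)e^{-r^2}$, and the two numerical evaluations agree to high accuracy.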
Using this we obtain that
\begin{align*}
I\!I\!I &=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \tilde p\big(\psi_r+{\textstyle\frac{m\psi}{r}}\big)\, r^mdrdt
=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \tilde p\left(r^m\psi\right)_r\, drdt\\
&=\int_{\mathbb{R}}\int_{\mathbb{R}^+}\int_{|y|=1} \tilde p \varphi_{z_i}(t,ry)
r^m\, dS_{y}drdt
=\int_\mathbb{R}\int_{\mathbb{R}^n} p\varphi_{z_i} \, dzdt.
\end{align*}
Substituting these expressions for $I$, $I\!I$, and $I\!I\!I$ back into \eq{m2} shows that the
weak form \eq{m_d_mom_weak} of the momentum equation \eq{m_d_mom} in the multi-d
Euler system is satisfied.
Finally, to verify \eq{m_d_energy_weak} we fix $\varphi\in C_c^1(\mathbb{R}\times \mathbb{R}^n)$ and again
define $\psi(t,r)$ by \eq{psi_1}. Then $\psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$ and
\eq{radial_energy_weak} gives
\begin{align*}
0&=\int_\mathbb{R}\int_{\mathbb{R}^+} \Big(\tilde \rho \big(\tilde e+{\textstyle\frac{\tilde u^2}{2}}\big) \psi_t
+\Big[\tilde \rho \big(\tilde e+{\textstyle\frac{\tilde u^2}{2}}\big)+\tilde p\Big] \tilde u\psi_r\Big)\,
r^mdrdt\\
&=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \Big\{\tilde \rho \big(\tilde e+{\textstyle\frac{\tilde u^2}{2}}\big)
\int_{|y|=1}\varphi_t(t,ry)\, dS_{y}\\
&\qquad\qquad\qquad\qquad
+\Big[\tilde \rho \big(\tilde e+{\textstyle\frac{\tilde u^2}{2}}\big)+\tilde p\Big]
\tilde u\int_{|y|=1}
\partial_r\left(\varphi(t,ry)\right)\, dS_{y}\Big\}\, r^mdrdt\\
&=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \int_{|y|=1}\Big\{\tilde \rho \big(\tilde e
+{\textstyle\frac{\tilde u^2}{2}}\big) \varphi_t(t,ry)\\
&\qquad\qquad\qquad\qquad
+\Big[\tilde \rho \big(\tilde e+{\textstyle\frac{\tilde u^2}{2}}\big)+\tilde p\Big] \tilde u
y\cdot \nabla_{z}\varphi(t,ry)\Big\}\, r^mdrdt\\
&=\int_{\mathbb{R}}\int_{\mathbb{R}^n} \rho \big(e+{\textstyle\frac{u^2}{2}}\big) \varphi_t
+\Big[\rho \big(e+{\textstyle\frac{u^2}{2}}\big)+p\Big]\vec u\cdot \nabla_{z}\varphi\,
dzdt,
\end{align*}
verifying the weak form \eq{m_d_energy_weak} of the energy equation
\eq{m_d_energy} in the multi-d Euler system.
\end{proof}
\begin{remark}\label{psi_0_rmk}
Note that the ``test function'' $\psi$ in \eq{psi_1} typically has non-vanishing trace
along the $t$-axis (e.g., when $n=3$, $\psi(t,r)\to 4\pi \cdot \varphi(t,0)$ as $r\downarrow 0$),
while its $r$-gradient does vanish as $r\downarrow 0$.
The ``test function'' $\psi$ in \eq{psi_2} behaves in
the opposite manner: $\psi(t,r)\to 0$ as $r\downarrow 0$
(since $\int_{|y|=1} y_i\, dS_y=0$), while typically
$\psi_r(t,r)\not\to 0$ as $r\downarrow 0$.
\end{remark}
\section{Similarity shock solutions as radial weak solutions}\label{sim_weak_solns}
In this section we return to the case of an ideal gas and consider
the similarity shock solutions constructed in Section \ref{constr_sim_solns}
as candidates for weak solutions of the Euler system. The main result
is that these provide {\em bona fide} weak solutions that suffer blowup
of the primary flow variables at collapse. This conclusion holds for flows in two
and three space dimensions provided the similarity shock solution $(R(x),V(x),C(x))$ satisfies
the properties listed in (P1)-(P3) below. We stress that numerical computations
clearly indicate that these properties are satisfied for the ``standard'' similarity
solutions with $\lambda=\lambda_{std}(\gamma,n,1)$, for a large range of
$\gamma$-values (see Tables 6.4-6.5 in \cite{L}).
\begin{itemize}
\item[(P1)] the function $1+V(x)$ is uniformly bounded below away from zero,
and from above, as $x$ varies over all of $\mathbb{R}$;
\item[(P2)] the limits $\ell$ and $L$ in \eq{V/x_C/x-zero} satisfy $-\infty<L<0<\ell<\infty$;
\item[(P3)] $(V(x),C(x))\to(V_0,-\infty)$ as $x\uparrow \infty$, where $V_0$ is given by \eq{V_naught}.
\end{itemize}
We now fix $n=2$ or $n=3$ and let $s=1$, so that $\kappa$ in \eq{R} vanishes
and $\rho(t,r)=R(x)$.
\begin{lemma}\label{prelim}
With $n=2$ or $3$, and $1<\lambda<1+\frac{n}{2}$, assume (P1)-(P3) are satisfied for the
solution $(R(x),V(x),C(x))$ under consideration.
Then $R(x)>0$ for all $x\in\mathbb{R}$, the functions $R(x)$, $V(x)$, $V(x)/x$ are globally
bounded on $\mathbb{R}$, and the functions $R(x)$, $V(x)/x$, $C(x)/x$ are continuous
at $x=0$. Finally, the function $R(x)(C(x)/x)^2$ is globally bounded.
\end{lemma}
\begin{proof}
Clearly, (P1) and (P2) imply global boundedness of $V(x)$, continuity of
$V(x)/x$, $C(x)/x$ at $x=0$ (when the latter two functions are defined to
take values $\ell$ and $L$ there, respectively), and therefore also global
boundedness of $V(x)/x$.
Next, linearization of the ODE \eq{CV_ode} about $(V_0,-\infty)$
shows that the leading order behaviors of $V$ and $C$ there
are given by \eq{V_Z_of_w}-\eq{sigma_z}:
\begin{equation}\label{asymp_vals}
V(x)\sim V_0= -\frac{2(\ensuremath{\lambda}-1)}{\gamma n}\qquad\text{and}\qquad
C(x)\sim -x^\sigma \qquad\text{as $x\uparrow\infty$},
\end{equation}
where
\begin{equation}\label{sigma_q_shock}
\sigma=\frac{1}{\lambda}\Big(1+\frac{\lambda-1}{\gamma-q}\Big)\qquad\text{with}\qquad
q=\frac{2(\lambda-1)}{n}.
\end{equation}
We note that the constraint $\lambda<1+\frac{n}{2}$ implies $-1<V_0<0$, and thus
\begin{equation}\label{reln}
\gamma-q\equiv\gamma(1+V_0)>0.
\end{equation}
Also recall that the function $R(x)$ takes the value $1$ for $x<-1$; a calculation
using the Rankine-Hugoniot relations \eq{V_jump}-\eq{R_jump} together
with \eq{exact_integral}, shows that $R(x)>0$ for all $x>-1$ as well. By \eq{exact_integral},
the continuity of $V(x)$ and $C(x)/x$ at $x=0$ implies that of $R(x)$.
According to \eq{exact_integral} we also obtain
\begin{equation}\label{R_bnd}
R(x)\sim\big(\textstyle\frac{C(x)}{x}\big)^{-\frac{2}{q+1-\gamma}}\sim x^{-\frac{2}{\gamma-q}(1-\frac{1}{\lambda})}
\qquad\text{as $x\uparrow\infty$.}
\end{equation}
Thus, according to \eq{reln}, $R(x)$ tends to zero as $x\uparrow\infty$ (establishing the
first part of (O2) in Section \ref{refl_shck}); it is therefore globally bounded. Finally, a similar
calculation shows that
\begin{equation}\label{aux}
R(x)\Big|\frac{C(x)}{x}\Big|^2\sim x^{-2(1-\frac{1}{\lambda})}.
\end{equation}
Together with the continuity of $C(x)/x$ at $x=0$, this shows that $R(x)(C(x)/x)^2$ is globally bounded.
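For the reader's convenience we record the exponent computation behind the second
asymptotic relation in \eq{R_bnd} and behind \eq{aux}. By \eq{asymp_vals} we have
$C(x)/x\sim -x^{\sigma-1}$, and \eq{sigma_q_shock} gives
\[
\sigma-1=\Big(1-\frac{1}{\lambda}\Big)\Big(\frac{1}{\gamma-q}-1\Big)
=\Big(1-\frac{1}{\lambda}\Big)\frac{q+1-\gamma}{\gamma-q},
\]
so that
\[
\Big|\frac{C(x)}{x}\Big|^{-\frac{2}{q+1-\gamma}}\sim x^{-\frac{2(\sigma-1)}{q+1-\gamma}}
=x^{-\frac{2}{\gamma-q}(1-\frac{1}{\lambda})}
\qquad\text{and}\qquad
R(x)\Big|\frac{C(x)}{x}\Big|^2\sim
x^{-\frac{2}{\gamma-q}(1-\frac{1}{\lambda})+2(\sigma-1)}
=x^{-2(1-\frac{1}{\lambda})}.
\]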
\end{proof}
For the solution $(R(x),V(x),C(x))$ under consideration we now define $\rho$, $u$, $c$, and $e$
via \eq{V}-\eq{R} and \eq{sound}.
\begin{theorem}\label{main_thm}
With $n=2$ or $3$, and under the assumption that (P1)-(P3) hold, the triple $(\rho,u,e)$ constitutes a radial
weak solution to \eq{mass}-\eq{energy}, with ideal pressure
law \eq{perf}, according to Definition \ref{rad_symm_weak_soln} whenever
\begin{equation}\label{max_lam}
1<\lambda<1+\textstyle\frac{n}{2}.
\end{equation}
\end{theorem}
By Proposition \ref{rad_md}, these solutions therefore provide
(non-vacuum) weak solutions of the multi-d Euler system
\eq{m_d_mass}-\eq{m_d_energy}, according to Definition \ref{weak_soln},
with unbounded amplitudes.
The proof of Theorem \ref{main_thm} is organized as follows. First,
part (i) of Definition \ref{rad_symm_weak_soln} is
immediate from Lemma \ref{prelim} and the definitions of $\rho$ and $e$.
The next two subsections consider
the continuity and integrability requirements in parts (ii) and (iii) of
Definition \ref{rad_symm_weak_soln}, respectively. Subsection \ref{weak_forms} finishes the proof by
analyzing the weak form of the equations (part (iv) of Definition \ref{rad_symm_weak_soln}).
\subsection{Continuity and local integrability}\label{cont_and_loc_integr}
For a fixed $\bar r>0$ and with
\[M(t;\bar r):=\int_0^{\bar r} \rho(t,r)r^m\, dr,\qquad I(t;\bar r):=\int_0^{\bar r} \rho(t,r)|u(t,r)|r^m\, dr,\]
\[E(t;\bar r):=\int_0^{\bar r} \rho(t,r)e(t,r)r^m\, dr+\frac{1}{2}\int_0^{\bar r}\rho(t,r)u^2(t,r)r^m\, dr
=:E_P(t;\bar r)+E_K(t;\bar r),\]
the issue is to show that the maps $t\mapsto M(t;\bar r)$, $t\mapsto I(t;\bar r)$,
and $t\mapsto E(t;\bar r)$ are continuous at all times $t\in\mathbb{R}$.
Recall that the incoming and outgoing shock waves follow the paths
$r=r_i(t)= (-t)^{1/\lambda}$ and $r=r_o(t)=(t/B)^{1/\lambda}$,
respectively. In what follows we consider times $t$ small enough that $r_i(t)<\bar r$
if $t<0$ and $r_o(t)<\bar r$ if $t>0$. The calculations for the other cases are
simpler and do not change the conclusions. We set
\[\alpha:=\frac{n}{\lambda}.\]
\subsubsection{Continuity of $M(t;\bar r)$}
For $t<0$ we have $\rho(t,r)=1$ for $0<r<r_i(t)$, so that
\begin{equation}\label{M_neg_t}
M(t;\bar r)=\int_{0}^{r_i(t)} r^m\, dr+\int_{r_i(t)}^{\bar r} \rho(t,r)r^m\, dr
=\frac{|t|^{\alpha}}{n}+\frac{1}{\lambda}|t|^{\alpha}
\int_{-1}^{\frac{t}{\bar r^\lambda}}R(x)\,
\frac{dx}{|x|^{\alpha+1}},
\end{equation}
while for $t>0$ we have
\begin{equation}\label{M_pos_t}
M(t;\bar r)=\Big[\int_0^{r_o(t)} + \int_{r_o(t)}^{\bar r} \Big] \rho(t,r)r^m\, dr
= \frac{1}{\lambda}t^{\alpha}
\Big[\int_{\frac{t}{\bar r^\lambda}}^B+\int_B^{\infty}\Big]R(x)\,
\frac{dx}{x^{\alpha+1}}.
\end{equation}
As $R(x)$ is globally bounded, the integrals in \eq{M_neg_t} and
\eq{M_pos_t} are all finite, and
$t\mapsto M(t;\bar r)$ is continuous at all times $t\neq0$.
For $t=0$ we have
\begin{equation}\label{M_at_t_0}
M(0;\bar r)=\frac{\bar r^{n}}{n}R(0).
\end{equation}
Observe that, as $R(x)$ is globally bounded, the second integral
on the right-hand side of \eq{M_pos_t} and the first term on the right-hand side of \eq{M_neg_t}
are of order $|t|^{\alpha}$, and thus vanish when $t\downarrow 0$ and $t\uparrow 0,$ respectively.
Therefore, continuity from above at $t=0$
of $M(t;\bar r)$ follows once it is established that
\[\frac{1}{\lambda}t^{\alpha}
\int_{\frac{t}{\bar r^\lambda}}^B R(x)\,
\frac{dx}{x^{\alpha+1}}
\to M(0;\bar r)\qquad\text{as $t\downarrow 0$.}\]
This may be verified by using L'H\^opital's rule and the continuity of the
map $x\mapsto R(x)$ at $x=0$. The same argument shows that
\[\frac{1}{\lambda}|t|^{\alpha}
\int_{-1}^{\frac{t}{\bar r^\lambda}}R(x)\,
\frac{dx}{|x|^{\alpha+1}}
\to M(0;\bar r)
\qquad\text{as $t\uparrow 0$}\]
as well. Thus, the map $t\mapsto M(t;\bar r)$ is continuous
at all times.
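For definiteness, the first of these L'H\^opital computations runs as follows
(note that both numerator and denominator below diverge as $t\downarrow 0$
since $\alpha=n/\lambda>0$):
\[
\lim_{t\downarrow 0}\frac{1}{\lambda}t^{\alpha}
\int_{\frac{t}{\bar r^\lambda}}^B R(x)\,\frac{dx}{x^{\alpha+1}}
=\lim_{t\downarrow 0}
\frac{\frac{d}{dt}\int_{\frac{t}{\bar r^\lambda}}^B R(x)\, x^{-\alpha-1}\,dx}
{\frac{d}{dt}\,\lambda t^{-\alpha}}
=\lim_{t\downarrow 0}\frac{R\big(\frac{t}{\bar r^\lambda}\big)\,\bar r^{\lambda\alpha}}{\lambda\alpha}
=\frac{R(0)\,\bar r^{n}}{n}=M(0;\bar r),
\]
where the middle equality follows from the fundamental theorem of calculus,
and the last uses $\lambda\alpha=n$, the continuity of $R$ at $x=0$, and \eq{M_at_t_0}.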
\subsubsection{Continuity of $I(t;\bar r)$}
For $t<0$ we have $u(t,r)=0$ for $0<r<r_i(t)$, so that
\begin{equation}\label{I_neg_t}
I(t;\bar r)=\int_{r_i(t)}^{\bar r} \rho(t,r)|u(t,r)|r^m\, dr
=\frac{1}{\lambda^2}|t|^{\alpha-1+\frac{1}{\lambda}}
\int_{-1}^{\frac{t}{\bar r^\lambda}}R(x)\frac{|V(x)|}{|x|}\,
\frac{dx}{|x|^{\alpha+\frac{1}{\lambda}}},
\end{equation}
while for $t>0$ we have
\begin{equation}\label{I_pos_t}
I(t;\bar r)=\Big[\int_0^{r_o(t)} + \int_{r_o(t)}^{\bar r} \Big] \rho(t,r)|u(t,r)|r^m\, dr
= \frac{1}{\lambda^2}t^{\alpha-1+\frac{1}{\lambda}}
\Big[\int_{\frac{t}{\bar r^\lambda}}^B+\int_B^{\infty}\Big]R(x)\frac{|V(x)|}{x}\,
\frac{dx}{x^{\alpha+\frac{1}{\lambda}}}.
\end{equation}
As $R(x)$ and $V(x)/x$ are globally bounded, and $\alpha+\frac{1}{\lambda}>1$
(by assumption \eq{max_lam}), the integrals in \eq{I_neg_t} and
\eq{I_pos_t} are all finite, and $t\mapsto I(t;\bar r)$ is continuous
at any time $t\neq0$. For $t=0$ we have, by property (P2) and
with $\ell$ given by \eq{V/x_C/x-zero},
\begin{equation}\label{I_at_t_0}
I(0;\bar r)=\frac{1}{\lambda}R(0)\ell\frac{\bar r^{n+1-\lambda}}{n+1-\lambda}.
\end{equation}
As the second term on the right-hand side of \eq{I_pos_t} is of order
$t^{\alpha-1+\frac{1}{\lambda}}$, and thus vanishes when $t\downarrow 0$
(by \eq{max_lam}), the continuity of $I(t;\bar r)$ from above at $t=0$
follows once it is established that
\[\frac{1}{\lambda^2}t^{\alpha-1+\frac{1}{\lambda}}
\int_{\frac{t}{\bar r^\lambda}}^B R(x)\frac{|V(x)|}{x}\,
\frac{dx}{x^{\alpha+\frac{1}{\lambda}}}
\to I(0;\bar r)\qquad\text{as $t\downarrow 0$.}\]
This may be verified by using L'H\^opital's rule and the continuity of the
map $x\mapsto R(x)\frac{|V(x)|}{x}$ at $x=0$. The same argument shows that
\[\frac{1}{\lambda^2}|t|^{\alpha-1+\frac{1}{\lambda}}
\int_{-1}^{\frac{t}{\bar r^\lambda}}R(x)\frac{|V(x)|}{|x|}\,
\frac{dx}{|x|^{\alpha+\frac{1}{\lambda}}}
\to I(0;\bar r)
\qquad\text{as $t\uparrow 0$}\]
as well. Thus, the map $t\mapsto I(t;\bar r)$ is continuous
at all times.
\subsubsection{Continuity of $E(t;\bar r)$}
We consider first the kinetic energy
\[E_K(t;\bar r)=\frac{1}{2}\int_0^{\bar r} \rho(t,r)u^2(t,r)r^m\, dr,\]
which is given for $t<0$ and $t>0$ by
\begin{equation}\label{E_K_neg_t}
E_K(t;\bar r)
=\frac{|t|^{\alpha-2+\frac{2}{\lambda}}}{2\lambda^3}
\int_{-1}^{\frac{t}{\bar r^\lambda}}R(x)\Big|\frac{V(x)}{x}\Big|^2\,
\frac{dx}{|x|^{\alpha-1+\frac{2}{\lambda}}}
\end{equation}
and
\begin{equation}\label{E_K_pos_t}
E_K(t;\bar r)
= \frac{t^{\alpha-2+\frac{2}{\lambda}}}{2\lambda^3}
\Big[\int_{\frac{t}{\bar r^\lambda}}^B+\int_B^{\infty}\Big]R(x)\Big|\frac{V(x)}{x}\Big|^2\,
\frac{dx}{x^{\alpha-1+\frac{2}{\lambda}}},
\end{equation}
respectively. Global boundedness of $R(x)$ and $V(x)/x$, together with assumption
\eq{max_lam}, imply that $t\mapsto E_K(t;\bar r)$ is finite
and continuous whenever $t\neq 0$.
Evaluating at time $t=0$ yields, thanks to \eq{max_lam},
\[E_K(0;\bar r)=\frac{1}{2\lambda^2}R(0)\ell^2\frac{\bar r^{n+2-2\lambda}}{n+2-2\lambda}.\]
As the second term on the right-hand side of \eq{E_K_pos_t} is of order
$t^{\alpha-2+\frac{2}{\lambda}}$, and thus vanishes when $t\downarrow 0$
(by \eq{max_lam}), the continuity of $E_K(t;\bar r)$ from above at $t=0$
follows once it is established that
\[\frac{t^{\alpha-2+\frac{2}{\lambda}}}{2\lambda^3}
\int_{\frac{t}{\bar r^\lambda}}^B R(x)\left|\frac{V(x)}{x}\right|^2\,
\frac{dx}{x^{\alpha-1+\frac{2}{\lambda}}}
\to E_K(0;\bar r)\qquad\text{as $t\downarrow 0$.}\]
Again, this follows by continuity of $R(x)|V(x)/x|^2$ at $x=0$
and L'H\^opital's rule. The same argument applied to \eq{E_K_neg_t}
shows that $E_K(t;\bar r)$ tends to the same limit as $t\uparrow0$.
This shows that the map $t\mapsto E_K(t;\bar r)$ is continuous
at all times.
Finally, consider the potential energy:
\[E_P(t;\bar r)=\int_0^{\bar r} \rho(t,r)e(t,r)r^m\, dr=\frac{1}{\gamma(\gamma-1)}\int_0^{\bar r} \rho(t,r)c^2(t,r)r^m\, dr,\]
which is given for $t<0$ and $t>0$ by
\begin{align}
E_P(t;\bar r)
=\frac{|t|^{\alpha-2+\frac{2}{\lambda}}}{\lambda^3\gamma(\gamma-1)}
\int_{-1}^{\frac{t}{\bar r^\lambda}}R(x)\Big|\frac{C(x)}{x}\Big|^2\,
\frac{dx}{|x|^{\alpha-1+\frac{2}{\lambda}}}\label{E_P_neg_t}
\end{align}
and
\begin{align}
E_P(t;\bar r)
= \frac{t^{\alpha-2+\frac{2}{\lambda}}}{\lambda^3\gamma(\gamma-1)}
\Big[\int_{\frac{t}{\bar r^\lambda}}^B+\int_B^{\infty}\Big]R(x)\Big|\frac{C(x)}{x}\Big|^2\,
\frac{dx}{x^{\alpha-1+\frac{2}{\lambda}}},\label{E_P_pos_t}
\end{align}
respectively. Global boundedness of $R(x)$ and $C(x)/x$, together with assumption
\eq{max_lam}, imply that $t\mapsto E_P(t;\bar r)$ is finite and continuous at all times $t\neq 0$.
Evaluating at time $t=0$ yields, thanks to \eq{max_lam},
\[E_P(0;\bar r)=\frac{1}{\lambda^2\gamma(\gamma-1)}R(0)L^2\frac{\bar r^{n+2-2\lambda}}{n+2-2\lambda}.\]
As the second term on the right-hand side of \eq{E_P_pos_t} is of order
$t^{\alpha-2+\frac{2}{\lambda}}$, and thus vanishes when $t\downarrow 0$
(by \eq{max_lam}), the continuity of $E_P(t;\bar r)$ from above at $t=0$
follows once it is established that
\[\frac{t^{\alpha-2+\frac{2}{\lambda}}}{\lambda^3\gamma(\gamma-1)}
\int_{\frac{t}{\bar r^\lambda}}^BR(x)\left|\frac{C(x)}{x}\right|^2\,
\frac{dx}{x^{\alpha-1+\frac{2}{\lambda}}}\to E_P(0;\bar r)
\qquad\text{as $t\downarrow 0$.}\]
As above this follows by L'H\^opital's rule and the continuity of
$R(x)|C(x)/x|^2$ at $x=0$. Finally, the continuity of $E_P(t;\bar r)$
from below at time $t=0$ is established in the same manner.
This concludes the verification of part (ii) of Definition \ref{rad_symm_weak_soln}.
\subsubsection{Local space-time integrability}
Next, for part (iii) of Definition \ref{rad_symm_weak_soln}, we need to verify the
local integrability in time and space of the functions $\rho u^2$, $p$, and
$\big[\rho \big(e+\textstyle\frac{u^2}{2}\big)+p\big]u$. Recall that we consider
an ideal gas \eq{perf}, and that the incoming and outgoing shocks propagate
along $x=-1$ and $x=B$, respectively. As a consequence, to verify part (iii)
it suffices to show that, for any fixed $\bar r>0$, the space-time integrals
\[I_\beta(\bar r):=\int_{-\bar r^\lambda}^{B\bar r^\lambda}\int_0^{\bar r}
\rho|u|^\beta r^m\, drdt,\qquad\text{for $\beta=2,\, 3$,}\]
and
\[P_\beta(\bar r):=\int_{-\bar r^\lambda}^{B\bar r^\lambda}\int_0^{\bar r}
p|u|^\beta r^m\, drdt,\qquad\text{for $\beta=0,\, 1$,}\]
are finite. Transforming to $dxdt$-integrals, and recalling that the fluid is at
rest on the inside of the incoming shock, we have
\begin{align}
I_\beta(\bar r)&=\frac{1}{\lambda^{\beta+1}}
\left\{\int_{-1}^B\frac{R(x)|V(x)|^\beta}{|x|^{\alpha+1+\frac{\beta}{\lambda}}}
\Big[\int_0^{|x|\bar r^\lambda}t^{\alpha+\beta\left(\frac{1}{\lambda}-1\right)}\, dt\Big]\, dx\right.\nonumber\\
&\qquad\qquad\qquad\left.+\Big[\int_{B}^\infty\frac{R(x)|V(x)|^\beta}{|x|^{\alpha+1+\frac{\beta}{\lambda}}}\, dx\Big]
\Big[\int_0^{B\bar r^\lambda}t^{\alpha+\beta\left(\frac{1}{\lambda}-1\right)}\, dt\Big]\right\}\nonumber\\
&=\frac{1}{\lambda^{\beta+1}}\frac{\bar r^{\lambda(\alpha+1)+\beta(1-\lambda)}}{(\alpha+1)+\beta\left(\frac{1}{\lambda}-1\right)}
\left\{\int_{-1}^B\frac{R(x)|V(x)|^\beta}{|x|^\beta}\, dx+B^{\alpha+1+\beta\left(\frac{1}{\lambda}-1\right)}
\int_{B}^\infty\frac{R(x)|V(x)|^\beta}{x^{\alpha+1+\frac{\beta}{\lambda}}}\, dx \right\}.\label{I_beta}
\end{align}
Here we have used that the $dt$-integrals are finite since, for all values of
$\lambda$, $n$, and $\beta$ under consideration, \eq{max_lam} yields
\[\alpha+\beta\Big(\frac{1}{\lambda}-1\Big)>-1.\]
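Spelled out: with $\alpha=\frac{n}{\lambda}$, the displayed inequality is equivalent to
\[
\frac{n+\beta}{\lambda}>\beta-1,\qquad\text{i.e.}\qquad \lambda<\frac{n+\beta}{\beta-1},
\]
and the right-hand side equals $n+2$ for $\beta=2$ and $\frac{n+3}{2}$ for $\beta=3$;
both exceed $1+\frac{n}{2}$, so \eq{max_lam} indeed suffices.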
As $R(x)$, $V(x)/x$, and $V(x)$ are all globally bounded, it follows from \eq{I_beta}
that $I_\beta(\bar r)<\infty$ for any value of $\bar r$ and $\beta=2$ or $3$.
A similar computation for $P_\beta(\bar r)$ (now using that the pressure $p$
vanishes on the inside of the incoming shock), yields
\begin{align}
P_\beta(\bar r)&=\frac{1}{\gamma\lambda^{\beta+3}}
\left\{\int_{-1}^B R(x)\Big|\frac{C(x)}{x}\Big|^2\Big|\frac{V(x)}{x}\Big|^\beta
\frac{1}{|x|^{\alpha+1+(2+\beta)(\frac{1}{\lambda}-1)}}
\Big[\int_0^{|x|\bar r^\lambda}t^{\alpha+(2+\beta)\left(\frac{1}{\lambda}-1\right)}\, dt\Big]\, dx\right.\nonumber\\
&\qquad\qquad\qquad\left.+\Big[\int_{B}^\infty R(x)\Big|\frac{C(x)}{x}\Big|^2\Big|\frac{V(x)}{x}\Big|^\beta
\frac{dx}{|x|^{\alpha+1+(2+\beta)(\frac{1}{\lambda}-1)}}\Big]
\Big[\int_0^{B\bar r^\lambda}t^{\alpha+(2+\beta)\left(\frac{1}{\lambda}-1\right)}\, dt\Big] \right\}\nonumber\\
&=\frac{1}{\gamma\lambda^{\beta+3}}
\frac{\bar r^{\lambda(\alpha+1)+(2+\beta)(1-\lambda)}}{(\alpha+1)+(2+\beta)\left(\frac{1}{\lambda}-1\right)}
\left\{\int_{-1}^BR(x)\Big|\frac{C(x)}{x}\Big|^2\Big|\frac{V(x)}{x}\Big|^\beta\, dx\right.\nonumber\\
&\qquad\qquad\qquad \left.+B^{\alpha+1+(2+\beta)\left(\frac{1}{\lambda}-1\right)}
\int_{B}^\infty R(x)\Big|\frac{C(x)}{x}\Big|^2\Big|\frac{V(x)}{x}\Big|^\beta
\frac{dx}{|x|^{\alpha+1+(2+\beta)(\frac{1}{\lambda}-1)}} \right\}.\label{P_beta}
\end{align}
Here we have used that the $dt$-integrals are finite since, for all values of
$\lambda$, $n$, and $\beta$ under consideration, \eq{max_lam} yields
\[\alpha+(2+\beta)\left(\frac{1}{\lambda}-1\right)>-1.\]
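Again spelling this out: with $\alpha=\frac{n}{\lambda}$, the inequality is equivalent to
\[
\frac{n+2+\beta}{\lambda}>1+\beta,\qquad\text{i.e.}\qquad \lambda<\frac{n+2+\beta}{1+\beta},
\]
and the right-hand side equals $n+2$ for $\beta=0$ and $\frac{n+3}{2}$ for $\beta=1$;
both exceed $1+\frac{n}{2}$, as required by \eq{max_lam}.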
By global boundedness of $R(x)$, $V(x)$, and $C(x)/x$, and by \eq{max_lam},
both integrals on the right-hand side of \eq{P_beta} are finite for both $\beta=0$ and $\beta=1$.
This concludes the verification of part (iii) of Definition \ref{rad_symm_weak_soln},
under the constraint \eq{max_lam}.
\subsection{Weak form of the equations}
Finally, for part (iv) of Definition \ref{rad_symm_weak_soln}, we need to verify the
weak forms \eq{radial_mass_weak}, \eq{radial_mom_weak}, \eq{radial_energy_weak}
of the radial equations. This requires some care since the
solutions under consideration are unbounded at the origin. To handle this
we shall exploit that the local integrability properties in parts (ii) and (iii) of Definition
\ref{rad_symm_weak_soln} have been verified under the condition \eq{max_lam}.
The issue then reduces to estimating the fluxes of the conserved quantities
across spheres of vanishing radii.
\subsubsection{Weak form of the mass equation}\label{weak_forms}
For a fixed $\psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$, with $\supp\psi\subset[-T,T]\times [0,A]$,
and for any $\delta>0$, we have
\begin{align}
M(\psi)&:=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \left(\rho\psi_t+\rho u\psi_r\right)\, r^mdrdt
= \left\{\int_{\mathbb{R}}\int_0^\delta +\iint_{I_\delta} +\iint_{I\!I_\delta}+\iint_{I\!I\!I_\delta}\right\}
\left(\rho\psi_t+\rho u\psi_r\right)\, r^mdrdt \nonumber\\
&=:M_\delta(\psi)
+\left\{\iint_{I_\delta} +\iint_{I\!I_\delta}+\iint_{I\!I\!I_\delta}\right\}
\left(\rho\psi_t+\rho u\psi_r\right)\, r^mdrdt
\end{align}
where the (open) regions $I_\delta$, $I\!I_\delta$, and $I\!I\!I_\delta$ are indicated in Figure 4
(e.g., $I_\delta$ is bounded below by $\{t=-T\}$, on the left by $\{r=\delta\}$, and on the right by
the incoming shock path).
\begin{figure}
\centering
\includegraphics[width=8cm,height=9cm]{regions}
\caption{Regions of integration in the weak formulation.}
\label{Figure_4}
\end{figure}
Let ${\Gamma^-_\delta}$, ${\Gamma^0_\delta}$, and ${\Gamma^+_\delta}$
denote the parts of their boundaries $\partial I_\delta$, $\partial I\!I_\delta$, and $\partial I\!I\!I_\delta$,
respectively, contained in the set $\{(t,r)\,|\,r=\delta\}$. Recall that the similarity
shock solution is a bounded, classical solution of \eq{mass} in each of the regions
$I_\delta$, $I\!I_\delta$, and $I\!I\!I_\delta$, and that the Rankine-Hugoniot conditions
are satisfied across the incoming and outgoing shocks. Applying the divergence
theorem therefore gives
\[M(\psi)=M_\delta(\psi)
+\delta^m\left\{\int_{{\Gamma^0_\delta}}+\int_{{\Gamma^+_\delta}}\right\}
(\rho u\psi)(t,\delta)\,dt,\]
where we have used that $u$ vanishes along ${\Gamma^-_\delta}$.
By making the change of variables
$t\mapsto x=t/\delta^\lambda$, we obtain
\begin{equation}\label{mass1}
M(\psi)=M_\delta(\psi)
-\frac{\delta^n}{\lambda}\int_{-1}^{\frac{T}{\delta^\lambda}}
R(x)\frac{V(x)}{x}\psi(x\delta^\lambda,\delta)\,dx.
\end{equation}
As $R(x)$, $V(x)/x$ are globally bounded, the last term in \eq{mass1} is of order
$\delta^{n-\lambda}$, which vanishes as $\delta\downarrow 0$ by \eq{max_lam}.
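In more detail, with $\supp\psi\subset[-T,T]\times[0,A]$,
\[
\Big|\frac{\delta^n}{\lambda}\int_{-1}^{\frac{T}{\delta^\lambda}}
R(x)\frac{V(x)}{x}\,\psi(x\delta^\lambda,\delta)\,dx\Big|
\le\frac{\delta^n}{\lambda}\,
\sup_{x}\Big|R(x)\frac{V(x)}{x}\Big|\,\|\psi\|_\infty
\Big(1+\frac{T}{\delta^\lambda}\Big)\lesssim\delta^{n-\lambda},
\]
and $n-\lambda>0$ since \eq{max_lam} gives $\lambda<1+\frac{n}{2}\le n$ for $n\ge 2$.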
Finally, it follows from the analysis in Section \ref{cont_and_loc_integr} that both
$\rho$ and $\rho u$ belong to $L^1_{loc}(r^m drdt)$. Thus, $M_\delta(\psi)\to0$
as $\delta\downarrow 0$, so that $M(\psi)=0$ for each $\psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$.
This shows that the weak form \eq{radial_mass_weak} of the radial mass equation
is satisfied.
\subsubsection{Weak form of the momentum equation}
For a fixed $\psi\in C^1_0(\mathbb{R}\times\mathbb{R}^+_0)$ and any $\delta>0$ we have
\begin{align}
I(\psi)&:=\int_{\mathbb{R}}\int_{\mathbb{R}^+} \left(\rho u\psi_t
+\rho u^2\psi_r+p\big(\psi_r+\textstyle\frac{m\psi}{r}\big)\right)\, r^mdrdt \nonumber\\
&= \left\{\int_{\mathbb{R}}\int_0^\delta +\iint_{I_\delta} +\iint_{I\!I_\delta}+\iint_{I\!I\!I_\delta}\right\}
\left(\rho u\psi_t
+\rho u^2\psi_r+p\big(\psi_r+\textstyle\frac{m\psi}{r}\big)\right)\,r^mdrdt \nonumber\\
&=:I_\delta(\psi)
+\left\{\iint_{I_\delta}+\iint_{I\!I_\delta}+\iint_{I\!I\!I_\delta}\right\}
\left(\rho u\psi_t
+\rho u^2\psi_r+p\big(\psi_r+\textstyle\frac{m\psi}{r}\big)\right)\, r^mdrdt.
\end{align}
Arguing as above and applying the divergence theorem gives ($x=t/\delta^\lambda$)
\begin{align}
I(\psi)&=I_\delta(\psi)
+\delta^m\left\{\int_{{\Gamma^0_\delta}}+\int_{{\Gamma^+_\delta}}\right\}
((\rho u^2+p)\psi)(t,\delta)\,dt\nonumber\\
&= I_\delta(\psi)
+\frac{\delta^{n+1-\lambda}}{\lambda^2}\int_{-1}^{\frac{T}{\delta^\lambda}}
R(x)\Big[\Big|\frac{V(x)}{x}\Big|^2+\frac{1}{\gamma}\Big|\frac{C(x)}{x}\Big|^2\Big]
\psi(x\delta^\lambda,\delta)\,dx,\label{I_psi}
\end{align}
where we have used that $u$ and $p$ both vanish along ${\Gamma^-_\delta}$.
Recalling the observation in Remark \ref{psi_0_rmk}, and using global boundedness
of $R(x)(V(x)/x)^2$ and $R(x)(C(x)/x)^2$, we obtain that
\[\delta^{n+1-\lambda}\int_{-1}^{\frac{T}{\delta^\lambda}}
R(x)\Big[\Big|\frac{V(x)}{x}\Big|^2+\frac{1}{\gamma}\Big|\frac{C(x)}{x}\Big|^2\Big]
\psi(x\delta^\lambda,\delta)\,dx\lesssim \delta^{n+2-2\lambda},\]
which tends to zero as $\delta\downarrow 0$ by \eq{max_lam}. Finally, to show that $I_\delta(\psi)$ also
vanishes with $\delta$ we first use Remark \ref{psi_0_rmk} to bound the
function $\frac{\psi}{r}$ by a constant, and then use that, according to the analysis above,
the quantities $\rho u$, $\rho u^2$, and $p$ all belong to $L^1_{loc}(r^m drdt)$.
This shows that also $I_\delta(\psi)\to0$ as $\delta\downarrow 0$. Thus,
$I(\psi)=0$ for each $\psi\in C^1_0(\mathbb{R}\times\mathbb{R}^+_0)$,
showing that the weak form \eq{radial_mom_weak} of the momentum equation
is satisfied.
\subsubsection{Weak form of the energy equation}
For a fixed $\psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$ and any $\delta>0$ we have
\begin{align}
E(\psi)&:=\int_\mathbb{R}\int_{\mathbb{R}^+} \left(\rho \big(e+\textstyle\frac{u^2}{2}\big) \psi_t
+\left[\rho \big(e+\textstyle\frac{u^2}{2}\big)+p\right] u\psi_r\right)\, r^mdrdt \nonumber\\
&= \left\{\int_{\mathbb{R}}\int_0^\delta +\iint_{I_\delta} +\iint_{I\!I_\delta}+\iint_{I\!I\!I_\delta}\right\}
\left(\rho \big(e+\textstyle\frac{u^2}{2}\big) \psi_t
+\left[\rho \big(e+\textstyle\frac{u^2}{2}\big)+p\right] u\psi_r\right)\, r^mdrdt \nonumber\\
&=:E_\delta(\psi)
+\left\{\iint_{I_\delta}+\iint_{I\!I_\delta}+\iint_{I\!I\!I_\delta}\right\}
\left(\rho \big(e+\textstyle\frac{u^2}{2}\big) \psi_t
+\left[\rho \big(e+\textstyle\frac{u^2}{2}\big)+p\right] u\psi_r\right)\, r^mdrdt.
\end{align}
Arguing as above and applying the divergence theorem gives ($x=t/\delta^\lambda$)
\begin{align}
E(\psi)&=E_\delta(\psi)
+\delta^m\left\{\int_{{\Gamma^0_\delta}}+\int_{{\Gamma^+_\delta}}\right\}
\left[\rho u\big(e+\frac{1}{2}u^2+\frac{p}{\rho}\big)\psi\right](t,\delta)\,dt\nonumber\\
&= E_\delta(\psi)
+\frac{\delta^{n+2-2\lambda}}{\lambda^3}\int_{-1}^{\frac{T}{\delta^\lambda}}
R(x)\frac{V(x)}{x}\left(\frac{1}{2}\left|\frac{V(x)}{x}\right|^2+\frac{1}{\gamma-1}\left|\frac{C(x)}{x}\right|^2\right)
\psi(x\delta^\lambda,\delta)\,dx,\label{E_psi}
\end{align}
where we have used that $u$ vanishes along ${\Gamma^-_\delta}$.
Recalling the global boundedness
of $R(x)$, $V(x)$, $V(x)/x$, and $R(x)(C(x)/x)^2$, as well as the bound
\eq{aux}, we obtain that the last integral in \eq{E_psi} is bounded by
\[\lesssim 1+\int_{1}^{\frac{T}{\delta^\lambda}} x^{-3}
+x^{-2(1-\frac{1}{\lambda})-1}\,dx\lesssim1+\delta^{2\lambda}
+\delta^{2(\lambda-1)}\qquad\text{as $\delta\downarrow 0$.}\]
According to \eq{max_lam} we therefore have that the last term on the
right-hand side of \eq{E_psi} vanishes as $\delta\downarrow 0$.
Finally, under the same constraint on $\lambda$, the argument in Section
\ref{cont_and_loc_integr} showed that the quantities $\rho e\propto p$,
$\rho u^2$, $\rho ue\propto up$, and $\rho u^3$, all
belong to $L^1_{loc}(r^mdrdt)$. In particular, it follows that $E_\delta(\psi)$
vanishes as $\delta\downarrow 0$. Thus, $E(\psi)=0$ for each $\psi\in C^1_c(\mathbb{R}\times\mathbb{R}^+_0)$,
showing that the weak form \eq{radial_energy_weak} of the energy equation
is satisfied.
This concludes the proof of Theorem \ref{main_thm}.
\bigskip
\paragraph{\bf Acknowledgment:}
This work was supported in part by NSF awards DMS-1311353 (Jenssen)
and DMS-1714912 (Tsikkou).
\begin{bibdiv}
\begin{biblist}
\bib{am}{book}{
author={Atzeni, S.},
author={Meyer-ter-Vehn, J.},
title={The Physics of Inertial Fusion},
series={International Series of Monographs on Physics},
volume={125},
publisher={Oxford University Press, Oxford},
date={2004},
}
\bib{ah}{article}{
author={Axford, R. A.},
author={Holm, D. D.},
title={Converging finite-strength shocks },
journal={Physica D. Nonlinear phenomena},
volume={2},
date={1981},
number={1},
pages={194--202},
}
\bib{bg_96}{article}{
author={Bilbao, L. E.},
author={Gratton, J.},
title={Spherical and cylindrical convergent shocks},
journal={Il Nuovo Cimento D},
volume={18},
date={1996},
number={9},
pages={1041--1060},
}
\bib{bk}{article}{
author={Bru\v slinski\u\i , K. V.},
author={Ka\v zdan, Ja. M.},
title={Self-similar solutions of certain problems in gas dynamics},
language={Russian},
journal={Uspehi Mat. Nauk},
volume={18},
date={1963},
number={2 (110)},
pages={3--23},
issn={0042-1316},
review={\MR{0172577}},
}
\bib{cp}{article}{
author={Chen, Gui-Qiang G.},
author={Perepelitsa, Mikhail},
title={Vanishing viscosity solutions of the compressible Euler equations
with spherical symmetry and large initial data},
journal={Comm. Math. Phys.},
volume={338},
date={2015},
number={2},
pages={771--800},
issn={0010-3616},
review={\MR{3351058}},
}
\bib{cs}{article}{
author={Chen, Gui-Qiang G.},
author={Schrecker, Matthew R. I.},
title={Vanishing viscosity approach to the compressible Euler
equations for transonic nozzle and spherically symmetric flows},
journal={Arch. Ration. Mech. Anal.},
date={2018},
doi={https://doi.org/10.1007/s00205-018-1239-z}
}
\bib{cf}{book}{
author={Courant, R.},
author={Friedrichs, K. O.},
title={Supersonic flow and shock waves},
note={Reprinting of the 1948 original;
Applied Mathematical Sciences, Vol. 21},
publisher={Springer-Verlag},
place={New York},
date={1976},
}
\bib{daf}{book}{
author={Dafermos, Constantine M.},
title={Hyperbolic conservation laws in continuum physics},
series={Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences]},
volume={325},
edition={4},
publisher={Springer-Verlag, Berlin},
date={2016},
pages={xxxviii+826},
isbn={978-3-662-49449-3},
isbn={978-3-662-49451-6},
review={\MR{3468916}},
}
\bib{glimm}{article}{
author={Glimm, James},
title={Solutions in the large for nonlinear hyperbolic systems of
equations},
journal={Comm. Pure Appl. Math.},
volume={18},
date={1965},
pages={697--715},
issn={0010-3640},
review={\MR{0194770}},
}
\bib{gr_96}{book}{
author={Godlewski, Edwige},
author={Raviart, Pierre-Arnaud},
title={Numerical approximation of hyperbolic systems of conservation
laws},
series={Applied Mathematical Sciences},
volume={118},
publisher={Springer-Verlag, New York},
date={1996},
pages={viii+509},
isbn={0-387-94529-6},
review={\MR{1410987}},
doi={10.1007/978-1-4612-0713-9},
}
\bib{gud}{article}{
author={Guderley, G.},
title={Starke kugelige und zylindrische Verdichtungsst\"osse in der N\"ahe
des Kugelmittelpunktes bzw. der Zylinderachse},
language={German},
journal={Luftfahrtforschung},
volume={19},
date={1942},
pages={302--311},
review={\MR{0008522}},
}
\bib{hoff}{article}{
author={Hoff, David},
title={Spherically symmetric solutions of the Navier-Stokes equations for
compressible, isothermal flow with large, discontinuous initial data},
journal={Indiana Univ. Math. J.},
volume={41},
date={1992},
pages={1225--1302},
}
\bib{hun_60}{article}{
author={Hunter, C.},
title={On the collapse of an empty cavity in water},
journal={J. Fluid Mech.},
volume={8},
date={1960},
pages={241--263},
}
\bib{kell}{article}{
author={Keller, J. B.},
title={Spherical, cylindrical and one-dimensional gas flows},
journal={Quart. Appl. Math.},
volume={14},
date={1956},
pages={171--184},
}
\bib{mmu}{article}{
author={Makino, Tetu},
author={Mizohata, Kiyoshi},
author={Ukai, Seiji},
title={The global weak solutions of compressible Euler equation with
spherical symmetry},
journal={Japan J. Indust. Appl. Math.},
volume={9},
date={1992},
number={3},
pages={431--449},
issn={0916-7005},
review={\MR{1189949}},
}
\bib{L}{article}{
author={Lazarus, Roger B.},
title={Self-similar solutions for converging shocks and collapsing
cavities},
journal={SIAM J. Numer. Anal.},
volume={18},
date={1981},
number={2},
pages={316--371},
}
\bib{L_errat}{article}{
author={Lazarus, Roger B.},
title={Erratum: ``Self-similar solutions for converging shocks and
collapsing cavities''\ [SIAM J. Numer. Anal. {\bf 18} (1981), no. 2,
316--371;\ MR 82i:76054]},
journal={SIAM J. Numer. Anal.},
volume={19},
date={1982},
number={5},
pages={1090},
issn={0036-1429},
review={\MR{672580}},
doi={10.1137/0719079},
}
\bib{liu77}{article}{
author={Liu, Tai Ping},
title={Initial-boundary value problems for gas dynamics},
journal={Arch. Rational Mech. Anal.},
volume={64},
date={1977},
number={2},
pages={137--168},
issn={0003-9527},
review={\MR{0433017}},
doi={10.1007/BF00280095},
}
\bib{phpm}{article}{
author={Ponchaut, N. F.},
author={Hornung, H. G.},
author={Pullin, D. I.},
author={Mouton, C. A.},
title={On imploding cylindrical and spherical shock waves in a perfect
gas},
journal={J. Fluid Mech.},
volume={560},
date={2006},
pages={103--122},
issn={0022-1120},
review={\MR{2265076}},
}
\bib{rkb_12}{article}{
author={Ramsey, Scott D.},
author={Kamm, James R.},
author={Bolstad, John H.},
title={The Guderley problem revisited},
journal={Int. J. Comput. Fluid Dyn.},
volume={26},
date={2012},
number={2},
pages={79--99},
issn={1061-8562},
review={\MR{2892836}},
doi={10.1080/10618562.2011.647768},
}
\bib{RichtL_75}{article}{
author={Richtmyer, R. D.},
author={Lazarus, R. B.},
title={Singularity Fitting in Hydrodynamical Calculations II},
journal={Los Alamos Scientific Laboratory},
volume={LA-6108-MS},
pages={16 pp.},
place={Los Alamos, New Mexico},
date={1975},
}
\bib{RL_78}{article}{
author={Rodriguez, Manuel},
author={Li\~n\'an, Amable},
title={Implosiones autosemejantes isentr\'opicas y no isentr\'opicas},
language={Spanish},
journal={Junta de Energia Nuclear},
volume={J.E.N. 405},
pages={149 pp.},
place={Madrid, Spain},
date={1978},
}
\bib{rj}{book}{
author={Ro{\v{z}}destvenski{\u\i}, B. L.},
author={Janenko, N. N.},
title={Systems of quasilinear equations and their applications to gas
dynamics},
series={Translations of Mathematical Monographs},
volume={55},
note={Translated from the second Russian edition by J. R. Schulenberger},
publisher={American Mathematical Society},
place={Providence, RI},
date={1983},
}
\bib{sed}{book}{
author={Sedov, L. I.},
title={Similarity and dimensional methods in mechanics},
note={Translated from the Russian by V. I. Kisin},
publisher={``Mir''},
place={Moscow},
date={1982},
}
\bib{stan}{book}{
author={Stanyukovich, K. P.},
title={Unsteady motion of continuous media},
series={Translation edited by Maurice Holt; literal translation by J.
George Adashko},
publisher={Pergamon Press},
place={New York},
date={1960},
}
\bib{temple81}{article}{
author={Temple, J. Blake},
title={Solutions in the large for the nonlinear hyperbolic conservation
laws of gas dynamics},
journal={J. Differential Equations},
volume={41},
date={1981},
number={1},
pages={96--161},
issn={0022-0396},
review={\MR{626623}},
doi={10.1016/0022-0396(81)90055-3},
}
\bib{vrt}{article}{
author={Vallet, A.},
author={Ribeyre, X.},
author={Tikhonchuk, V.},
title={Finite Mach number spherical shock wave, application to shock ignition},
journal={Physics of Plasmas},
volume={20},
date={2013},
pages={082702},
}
\bib{welsh}{article}{
author={Welsh, R. L.},
title={Imploding shocks and detonations},
journal={J. Fluid Mech.},
volume={29},
date={1967},
pages={61--79},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
We consider the problem of spread of information between mobile agents on a $d$-dimensional torus of side-length $n$.
We will denote by \(N=n^d\) the number of vertices on the torus, and will refer to the agents as \emph{particles}.
At time 0, the particles are distributed on the vertices of the torus as a Poisson point process of intensity \(\lambda\).
Then, particles move by performing independent continuous-time simple random walks on the torus;
that is, at rate $1$ a particle chooses a neighboring vertex uniformly at random and jumps there.
It is not difficult to check that this system of particles is in stationarity. Thus, at any given time $t$,
the locations of the particles form a Poisson point process of intensity $\lambda$
on the torus.
However, the configuration of particles at time $t$ is not independent of the configuration of particles at time $0$,
and as we will explain below, it is this dependence that makes this model challenging to analyze.
Assume that at time \(0\) there is a particle at the origin with a piece of information that has to be distributed to all other particles.
Then, any uninformed particle (a particle that does not know the information) receives the information whenever it is at the same vertex as an informed particle
(a particle that knows the information).
We study the time it takes the information to reach all the particles, which is commonly referred to as the \emph{flooding time}.
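The flooding dynamics can be illustrated with a toy simulation. The sketch below is not the model analyzed in this paper (which is in continuous time and dimension $d\geq 2$): it uses lazy discrete-time walks on a one-dimensional torus, and all parameter values are illustrative choices.

```python
import math
import random

def sample_poisson(lam, rng):
    """Sample a Poisson(lam) variate by inverse-transform (Knuth's method)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def flooding_time(n=20, lam=2.0, max_steps=10000, seed=0):
    """Lazy discrete-time sketch of the flooding process on a 1-d torus.

    Particles are placed as a Poisson(lam) process, one informed particle
    starts at the origin, and co-located particles share the information.
    Returns the first step at which every particle is informed."""
    rng = random.Random(seed)
    pos, informed = [0], [True]          # the distinguished informed particle
    for v in range(n):
        for _ in range(sample_poisson(lam, rng)):
            pos.append(v)
            informed.append(False)
    for t in range(1, max_steps + 1):
        for k in range(len(pos)):        # lazy walk: stay with probability 1/2
            pos[k] = (pos[k] + rng.choice((-1, 0, 0, 1))) % n
        hot = {pos[k] for k in range(len(pos)) if informed[k]}
        for k in range(len(pos)):        # transmission at shared vertices
            if pos[k] in hot:
                informed[k] = True
        if all(informed):
            return t
    return None
```

The walks are made lazy to avoid the parity obstruction of synchronous discrete-time walks on an even cycle, mimicking the asynchrony of the continuous-time model.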
A big challenge in analyzing this model is due to the heavily dependent structure of the particles.
In fact, though particles move independently of one another, dependences do arise over time.
For example, if a ball of radius \(R\) centered at some vertex \(x\) of the torus turns out to have no particles at time 0, then the ball \(B(x,R/2)\) of radius \(R/2\) centered at \(x\)
will continue to be empty of particles up to time \(R^2\), with positive probability.
This means that the probability that the \((d+1)\)-dimensional, space-time cylinder \(B(x,R/2)\times[0,R^2]\) has no particle is at least \(\exp\{-cR^d\}\) for some constant \(c\).
This is just a stretched exponential in the volume of the cylinder, which prevents us from applying classical methods based on comparison with independent percolation~\cite{LSS}, since those require exponential
decay of correlations.
In addition to this, whenever one finds such a ball of radius $R$ empty of particles at time $0$, this affects regions of the torus in the vicinity of this ball. In particular,
during a time interval of length $R^2$, the density of particles in the vicinity of the ball will be smaller than the expected density $\lambda$.
In this work we develop a framework to control such dependences.
When the transmission radius is large (in the sense that information can be transmitted between particles at distance \(O( \log^{1/d}(n))\) of each other)
or the jump range is large (in the sense that a particle can
jump a distance of order $O(\log^{1/d} n)$ in one step), then the dependences can be more easily controlled.
These cases were analyzed in~\cite{Clementi2011,Clementi2013},
where tight bounds on the flooding time (up to constant factors) were established\footnote{In fact, \cite{Clementi2011} studies the flooding time for a larger class of dynamic graphs. However, due to space limitations, we
restrict our discussion to results on the specific model of spread of information among random walk particles.}.
Having a large transmission radius or jump range helps the analysis because of the following.
Tessellate the torus into boxes of side-length \(\Theta(\log^{1/d}(n))\), and tessellate time into intervals of constant length.
Then, since the system of particles is in stationarity and the boxes are so large,
we can apply a Chernoff bound for Poisson random variables
to show that, for any given
box and time interval, with probability $1-n^{-C}$, there is a large enough number of
particles inside the box during that time interval (when this happens, call the cell of the tessellation \emph{good}).
A union bound then shows that, with high probability, all cells of the tessellation are good.
Then, if the transmission radius is large enough to allow particles from neighboring boxes to exchange information,
one can establish a tight bound on the flooding time.
If it is the jump range that is large enough, then one can use the fact that, after a time interval of order 1,
the configuration of particles inside any given box is close to stationarity. In other words, the system of particles has a small mixing time.
This washes away the dependences of the system, and allows a tight bound (up to constant factors) to be derived.
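The Chernoff-bound computation behind the good-cell argument can be made concrete numerically. In the sketch below the constants $n$, $\lambda$, $C$, $\epsilon$ are arbitrary illustrative choices, not values from the cited works; the point is that for a box of volume $C\log n$ the standard Poisson lower tail bound becomes a negative power of $n$.

```python
import math

def poisson_lower_tail(mean, eps):
    """Chernoff bound: P(Poisson(mean) <= (1-eps)*mean) <= exp(-eps^2*mean/2)."""
    return math.exp(-eps ** 2 * mean / 2)

# A box of side length (C log n)^(1/d) has volume C log n, so it contains
# Poisson(lam * C * log n) particles; the tail bound equals n^(-eps^2*lam*C/2).
n, lam, C, eps = 10 ** 6, 1.0, 40.0, 0.5
mean = lam * C * math.log(n)
bound = poisson_lower_tail(mean, eps)
assert math.isclose(bound, n ** (-eps ** 2 * lam * C / 2), rel_tol=1e-9)
```

Taking $C$ large enough makes the exponent beat the union bound over the polynomially many cells of the tessellation.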
An important open problem has been to analyze the case where both the transmission radius and the jump range are of order 1, which is our setting here.
This was studied in \cite{Lam2011},
where it was shown that, with high probability,
the flooding time is at most \(\tilde O(n)\),
where the notation $\tilde O(\cdot)$ means that poly-logarithmic factors are neglected\footnote{We remark that \cite{Lam2011} considers also the case where the number of particles can be of order much smaller than $N$,
and~\cite{PPPU,Peres2012} analyze a variant of this model, but these settings are out of the scope of this work.}.
This bound is tight up to poly-logarithmic factors since, for a transmission radius and jump range of order 1, the flooding time is $\Omega(n)$ in all dimensions.
We note that, when neglecting poly-logarithmic factors, one can still work with the above tessellation --- of cells of side-length \(\Theta(\log^{1/d}(n))\) ---
for which all cells of the tessellation are good. This is because one can use suboptimal estimates to allow information to spread inside a cell (thereby
losing only a poly-logarithmic factor), and then use the fact
that cells are good, and full of particles, to let the information spread from one cell to the next.
Getting a bound that is tight up to \emph{constant} factors, on the other hand, involves a rather delicate issue,
since one is forced to consider tessellations of \emph{constant} side-length, which will naturally
contain a positive density of bad cells, forcing a more careful control of the dependences of the system.
Turning back to our setting, where particles can jump only across neighboring vertices and information can be transmitted only between particles located at the same vertex,
\cite{Kesten2005} analyzes the process in the whole of $\mathbb{Z}^d$ and shows that the information spreads with positive speed.
To prove this, the authors developed a complicated multi-scale framework to control the dependences of the system,
where tessellations of different side-lengths were considered and controlled.
This multi-scale technique is quite powerful, and has been employed in the mathematics literature to analyze other processes with slow decay of
correlations~\cite{Sidoravicius2009,Sznitman2012,Candellero2015}.
However, this technique is usually very difficult to implement,
and has to be tailored to each specific model and question being studied.
The goal of our work is to develop a robust and flexible multi-scale framework that can be more easily applied to answer questions involving systems of random walk particles, and we illustrate its usefulness by deriving
tight bounds on the flooding time.
\subsection{Our results}
We start by considering a more general setup.
Let \(\mathbb{T}^d\) be the \(d\)-dimensional integer torus of side length \(n\).
Let \(G=(\mathbb{T}^d,E)\) be the nearest neighbor graph on \(\mathbb{T}^d\). Let \(\{\mu_{x,y}\}_{(x,y)\in E}\) be a collection of i.i.d.\ symmetric weights, which we call \emph{conductances}.
We assume that the conductances are \emph{uniformly elliptic}; that is,
\begin{equation}\label{eq:mu_bounds_new}
\textrm{there exists a constant \(C_M>0\), such that }\mu_{x,y}\in[C_M^{-1},C_M]\textrm{ for all }(x,y)\in E.
\end{equation}
We say \(x\sim y\) if \((x,y)\in E\) and define \(\mu_x=\sum_{y\sim x}\mu_{x,y}\). At time \(0\), consider a Poisson point process of particles on \(\mathbb{T}^d\),
with intensity measure \(\lambda(x)=\lambda_0\mu_x\) for some constant \(\lambda_0>0\) and all \(x\in\mathbb{T}^d\).
That is, for each \(x\in\mathbb{T}^d\), the number of particles at \(x\) at time \(0\) is an independent Poisson random variable of mean \(\lambda_0\mu_x\).
Then, let the particles perform independent continuous-time simple random walks on the weighted graph so that a particle at \(x\in\mathbb{T}^d\) jumps to a neighbor \(y\sim x\) at rate \(\frac{\mu_{x,y}}{\mu_x}\).
It follows from the thinning property of Poisson random variables that the system of particles is in stationarity.
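The stationarity claim can also be checked by a direct flow computation: under the intensity $\lambda(x)=\lambda_0\mu_x$, the expected particle flow from $x$ to a neighbour $y$ is $\lambda_0\mu_x\cdot\frac{\mu_{x,y}}{\mu_x}=\lambda_0\mu_{x,y}$, which is symmetric in $(x,y)$, so flows across every edge cancel. The following sketch verifies this on a small weighted cycle (all parameter values illustrative).

```python
import random

def flows_balance(n=6, lam0=2.0, seed=1):
    """Verify that under intensity lam(x) = lam0 * mu_x, the expected flow
    x -> y, namely lam(x) * mu_xy / mu_x = lam0 * mu_xy, is symmetric, so
    the net expected flow through every vertex is zero."""
    rng = random.Random(seed)
    edge = lambda x, y: (min(x, y), max(x, y))
    # i.i.d. uniformly elliptic conductances on the cycle Z/nZ
    mu = {edge(x, (x + 1) % n): rng.uniform(0.5, 2.0) for x in range(n)}
    mu_x = lambda x: mu[edge(x, (x + 1) % n)] + mu[edge(x, (x - 1) % n)]
    for x in range(n):
        nbrs = ((x - 1) % n, (x + 1) % n)
        out_flow = sum(lam0 * mu_x(x) * mu[edge(x, y)] / mu_x(x) for y in nbrs)
        in_flow = sum(lam0 * mu_x(y) * mu[edge(x, y)] / mu_x(y) for y in nbrs)
        if abs(out_flow - in_flow) > 1e-12:
            return False
    return True
```

This is the detailed-balance identity underlying the thinning argument: each directed edge carries expected flow $\lambda_0\mu_{x,y}$ in both directions.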
Assume that at time \(0\) there is an informed particle at the origin, and all other particles are uninformed.
One of the main results of this paper is the following.
\begin{thrm}\label{thrm:total}
If \(d\geq 2\) and the conductances satisfy (\ref{eq:mu_bounds_new}), then with probability $1-n^{-\omega(1)}$ the flooding time is $\Theta(n)$.
\end{thrm}
Another main contribution of this paper is the framework we develop to establish Theorem~\ref{thrm:total}, which we believe gives a robust and more
easily applicable framework for addressing problems involving systems of moving particles. The idea is as follows.
We tessellate space and time into cells of constant length.
Then, for each cell we are given a local event, and call the cell good if the event of that cell
holds.
If, for any given cell, the probability that the cell is good is close enough to 1, then we can find a subset of good cells that form what we call
a Lipschitz surface and a Lipschitz net.
The Lipschitz surface and the Lipschitz net have percolative and geometric features that allow
the good event to propagate through space and time.
For example, for the problem of spread of information, the local event we use is to say that a given cell is good
if the following two things happen: (i) the cell contains sufficiently many particles, and
(ii) if there is an informed particle inside the cell, then that particle is able to inform a large number of other particles
that will move to neighboring cells. With this definition and the existence of the Lipschitz surface and net,
we obtain that once the information enters a cell of the Lipschitz surface, it can
propagate throughout the surface, from one cell of the surface to the next.
We believe our approach is flexible enough to allow other processes on moving particles to be analyzed. The main task reduces to defining a suitable local event.
Since this framework is quite involved, we will give its construction and all main technical theorems in Section~\ref{sec:lipschitz}.
Then, in Section~\ref{sec:spread}, we use this framework to analyze the spread of information.
Due to space limitations, we will not be able to give full proofs of the above framework, for which we refer to the full version~\cite{Gracar2016a}.
This extended abstract contains one additional result with respect to~\cite{Gracar2016a}: the construction and proof of the Lipschitz net, which is adapted to analyzing processes on finite graphs.
\section{Lipschitz net framework}\label{sec:lipschitz}
For the remainder of this paper, we assume \(d\geq 2\). Fix \(\ell>0\) and tessellate \(\mathbb{T}^d\) into cubes of side length \(\ell\), indexed by \(i\in \mathbb{Z}^{d}\).
To simplify the notation, assume that $n/\ell$ is an integer.
Next, tessellate time into intervals of length \(\beta\), indexed by \(\tau\in\mathbb{Z}\).
With this we denote by the \emph{space-time cell} \((i,\tau)\in\mathbb{Z}^{d+1}\) the region \(\prod_{j=1}^d[i_j\ell,(i_j+1)\ell]\times[\tau\beta,(\tau+1)\beta]\).
In the following, $\beta$ and $\ell$ are constants such that the ratio \(\beta/\ell^2\) is fixed first to be some small value, and then later \(\ell\) is made large enough.
We will also need to consider overlapping space-time cells.
Let \(\eta\geq 1\) be an integer which will represent the amount of overlap between cells. For each cube \(i=(i_1,\dots,i_d)\) and time interval \(\tau\), define the \emph{super cube} \(i\) as \(\prod_{j=1}^d[(i_j-\eta)\ell,(i_j+\eta+1)\ell]\) and the \emph{super interval} \(\tau\) as \([\tau\beta,(\tau+\eta)\beta]\). We define the \emph{super cell} \((i,\tau)\) as the Cartesian product of the super cube \(i\) and the super interval \(\tau\).
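The cell and super-cell indexing can be sketched in a few lines. These are purely illustrative helper functions (the wrap-around of the super cube on the torus is ignored for simplicity):

```python
def cell_of(point, t, ell, beta):
    """Space-time cell (i, tau) containing the spatial point `point` at time t."""
    return tuple(int(x // ell) for x in point), int(t // beta)

def super_cube(i, ell, eta):
    """Super cube of cell i: the product of [(i_j - eta)ell, (i_j + eta + 1)ell]."""
    return [((ij - eta) * ell, (ij + eta + 1) * ell) for ij in i]

def super_interval(tau, beta, eta):
    """Super interval of tau: [tau*beta, (tau + eta)*beta]."""
    return (tau * beta, (tau + eta) * beta)

i, tau = cell_of((7.5, 3.2), 4.9, ell=2.0, beta=1.5)
assert (i, tau) == ((3, 1), 3)
assert super_cube(i, ell=2.0, eta=1) == [(4.0, 10.0), (0.0, 6.0)]
assert super_interval(tau, beta=1.5, eta=2) == (4.5, 7.5)
```

Note how neighbouring super cells overlap: this overlap is what lets a local event restricted to one super cell be witnessed by particles shared with adjacent cells.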
For any time $s$, let $\Pi_s$ be the set of particles at time $s$, seen as a collection of vertices of $G$ with multiplicity when there is more than one particle at a vertex.
We say an event \(E\) is \emph{increasing} for \((\Pi_s)_{s\geq 0}\) if the fact that \(E\) holds for \((\Pi_s)_{s\geq 0}\) implies that it holds for all \((\Pi'_s)_{s\geq 0}\) for which \(\Pi_s'\supseteq \Pi_s\) for all \(s\geq 0\).
We need the following definitions.
\begin{mydef}\label{def:restricted}
We say an event \(E\) is \emph{restricted} to a region \(X\subset\mathbb{T}^d\) and a time interval \([t_0,t_1]\) if it is measurable with respect to the \(\sigma\)-field generated by all the particles that are inside \(X\) at time \(t_0\) and their positions from time \(t_0\) to \(t_1\).
\end{mydef}
\begin{mydef}\label{def:displacement}
We say a particle has displacement inside \(X'\) during a time interval \([t_0,t_0+t_1]\), if the location of the particle at all times during \([t_0,t_0+t_1]\) is inside \(x+X'\), where \(x\) is the location of the particle at time \(t_0\).
\end{mydef}
\begin{mydef}\label{def:probassoc}
\(\nu_E\) is called the \emph{probability associated} to an increasing event \(E\) that is restricted to \(X\) and a time interval \([0, t]\) if,
for an intensity measure \(\zeta\) and a region \(X'\subset\mathbb{T}^d\), \(\nu_E(\zeta,X,X',t)\) is the probability that \(E\) happens given that, at time \(0\),
the particles in \(X\) are distributed as a Poisson point process of intensity \(\zeta\)
and their motions from \(0\) to \(t\) are independent continuous time random walks on the weighted graph \((G,\mu)\), where the particles are conditioned to have displacement inside \(X'\) during \([0,t]\).
\end{mydef}
For each \((i,\tau)\in\mathbb{T}^{d}\times \mathbb{Z}\), let \(E_{\mathrm{st}}(i,\tau)\) be an increasing event restricted to the super cube \(i\) and the super interval \(\tau\). Here the subscript \(\mathrm{st}\) refers to space-time.
We say that a cell \((i,\tau)\) is \emph{bad} if \(E_{\mathrm{st}}(i,\tau)\) does not hold; otherwise, \((i,\tau)\) is called \emph{good}.
Our framework will establish that if, for any given $(i,\tau)$, the event $E_{\mathrm{st}}(i,\tau)$ occurs with large enough probability, then not only do the good cells percolate, but they also form a particularly
useful geometry, which we will call the Lipschitz net.
Before defining the Lipschitz net, we need to introduce a different way to index space-time cells, which we refer to as the \emph{base-height index}.
In the base-height index, we pick one of the \(d+1\) space-time dimensions and denote it as \emph{height}, using index \(h\in\mathbb{Z}\), while the remaining \(d\) space-time dimensions will form the base,
which will be indexed by \(b\in\mathbb{Z}^d\).
In this way, for each space-time cell \((i,\tau)\) there will be \((b,h)\in\mathbb{Z}^{d+1}\) such that the base-height cell \((b,h)\) corresponds to the space-time cell \((i,\tau)\). With this,
we set \(E_\mathrm{bh}(b,h)=E_\mathrm{st}(i,\tau)\). (Here the subscript \(\mathrm{bh}\) refers to base-height.)
It might be tempting to choose time as the height dimension; however, it turns out that selecting one of the spatial dimensions to act as height is a better choice, as will be shown below.
With this choice, note that $b\in \mathbb{T}^{d-1}\times \mathbb{Z}$ and $h\in \mathbb{T}$; thus, for notational purposes, we define $\mathbb{T}^{d}_* = \mathbb{T}^{d-1}\times \mathbb{Z}$
and $\mathbb{T}^{d+1}_* = \mathbb{T}^{d-1}\times \mathbb{Z}\times \mathbb{T}$.
\subsection{Two-sided Lipschitz surface}
\begin{mydef}\label{def:lip_fun}
A function \(F:\mathbb{T}^d_*\rightarrow \mathbb{T}\) is called a \emph{Lipschitz function}
if \(|F(x)-F(y)|\leq 1\) whenever \(\|x-y\|_1 = 1\).
\end{mydef}
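Definition~\ref{def:lip_fun} can be checked mechanically on a finite grid. The following sketch (illustrative; the height function is stored as a dictionary on a discrete torus, with wrap-around neighbours) tests the Lipschitz condition:

```python
def is_lipschitz(F, dims):
    """Check |F(x) - F(y)| <= 1 for every pair of l1-neighbours x, y on the
    discrete torus with side lengths `dims` (F maps tuples to integers)."""
    for x, fx in F.items():
        for j in range(len(dims)):
            # the +1 neighbour in coordinate j; each edge is seen from both ends
            y = x[:j] + ((x[j] + 1) % dims[j],) + x[j + 1:]
            if y in F and abs(F[y] - fx) > 1:
                return False
    return True

F = {(0, 0): 0, (1, 0): 1, (0, 1): 0, (1, 1): 1}
assert is_lipschitz(F, (2, 2))
F[(1, 0)] = 3   # a jump of size 3 between neighbours (0,0) and (1,0)
assert not is_lipschitz(F, (2, 2))
```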
\begin{mydef}\label{def:lip_surf}
A \emph{two-sided Lipschitz surface} \(F\) is a set of base-height cells \((b,h)\in\mathbb{T}^{d+1}_*\) such that for all \(b\in\mathbb{T}^d_*\) there are exactly two (possibly equal) integer values \(F_+(b)\geq 0\) and \(F_-(b)\leq0\) for which \((b,F_+(b)),(b,F_-(b))\in F\) and, moreover, \(F_+\) and \(F_-\) are Lipschitz functions.
\end{mydef}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.8\linewidth]{graph_light_jpg.jpg}
\end{center}
\caption{A realization of the two-sided Lipschitz surface for the case \(d=2\).}\label{fig:surface}
\end{figure}
We say a space-time cell \((i,\tau)\) belongs to \(F\) if its corresponding base-height cell \((b,h)\) belongs to \(F\).
For a positive integer $D$, we say a two-sided Lipschitz surface \emph{surrounds} a cell \((b',h')\) at distance \(D\)
if every path \((b',h')=(b_0,h_0),(b_1,h_1),\dots,(b_m,h_m)\) with \(\|(b_i,h_i)-(b_{i-1},h_{i-1})\|_1=1\) for all \(i\in\{1,\dots, m\}\) and \(\|(b_m,h_m)-(b_0,h_0)\|_1>D\) intersects \(F\).
For any \(z\in\mathbb{Z}_+\), define the cube \(Q_z=[-z/2,z/2]^d\).
The following theorem establishes the existence of the Lipschitz surface. Due to space limitations, the proof is given in~\cite{Gracar2016a}.
\begin{thrm}\label{thrm:surface_event_simple}
Consider the graph \((G,\mu)\) satisfying \eqref{eq:mu_bounds_new}, and the tessellation defined above.
There exist positive constants \(c_1, c_2, c_3, c_4\) and \(c_5\) such that, if \(\beta/\ell^2\leq c_5\), then the following holds.
Let \(E_{\mathrm{st}}(i,\tau)\) be any increasing event restricted to the space-time super cell \((i,\tau)\).
Fix \(\epsilon\in(0,1)\) and fix \(w\geq\sqrt{\frac{\eta\beta}{c_2\ell^2}\log\left(\frac{8c_1}{\epsilon}\right)}.\)
Then, there exists a positive number \(\alpha_0\) that depends on \(\epsilon\), \(\eta\) and the ratio \(\beta/\ell^2\) so that if
\begin{equation}
\min\left\{C_{M}^{-1}\epsilon^2\lambda_0\ell^d,\log\left(\frac{1}{1-\nu_{E_{\mathrm{st}}}((1-\epsilon)\lambda,Q_{(2\eta+1)\ell},Q_{w\ell},\beta)}\right)\right\}\geq\alpha_0,
\label{eq:lip}
\end{equation}
a two-sided Lipschitz surface \(F\) where \(E_{\mathrm{st}}(i,\tau)\) holds for all \((i,\tau)\in F\) exists almost surely, and
the probability that $F$ does not surround the origin at distance $r$ is at most
\begin{align*}
\begin{array}{ll}
\sum_{s\geq r}s^d\exp\left\{-c_3\lambda_0\frac{\ell s}{\log^{c_4}(\ell s)}\right\},&\textrm{for }d=2\\
\sum_{s\geq r}s^d\exp\left\{-c_3\lambda_0\ell s\right\},&\textrm{for }d\geq 3.
\end{array}
\end{align*}
\end{thrm}
\begin{remark}
The proofs in \cite{Gracar2016a} give the existence of the two-sided Lipschitz surface on the whole of \(\mathbb{Z}^d\), but the very same proof works for the torus.
\end{remark}
\begin{remark}
Theorem~\ref{thrm:surface_event_simple} is key to our framework.
We now briefly explain how it can be used.
The event $E_\mathrm{st}$ can be any local event, where in $\nu_{E_\mathrm{st}}$, $Q_{(2\eta+1)\ell}$ gives the region on which the event is measurable.
To control dependences, we consider the larger cube $Q_{w\ell}$, inside which the particles that start from $Q_{(2\eta+1)\ell}$ are conditioned to stay during the time interval $\beta$.
Then $w$ has to be large enough, as specified in the theorem, so that this conditioning is likely to happen.
Then, $\nu_{E_\mathrm{st}}$ gives the probability that the event happens given that the initial configuration of particles is a Poisson point process of intensity measure
$(1-\epsilon)\lambda$, just slightly smaller than the intensity measure $\lambda$ we started with. We disregard an ``$\epsilon$-fraction of the particles'' because naturally, in any given space-time cell,
some particles move atypically and
will not
be organized exactly as a Poisson point process; but those particles can be neglected using the assumption that $E_\mathrm{st}$ is increasing.
Thus~\eqref{eq:lip} requires that $\nu_{E_\mathrm{st}}$ is at least $1-\exp(-\alpha_0)$ for a Poisson point process of intensity $(1-\epsilon)\lambda$.
This is usually achievable by properly defining the event $E_\mathrm{st}$ to be such that its occurrence increases
with $\ell$ (the size of the tessellation).
Then, \eqref{eq:lip} also requires that $C_{M}^{-1}\epsilon^2\lambda_0\ell^d\geq \alpha_0$. After fixing $\epsilon$,
this can be satisfied either by setting $\ell$ large enough or by assuming that the constant $\lambda_0$ governing the density of particles is large enough.
This condition is natural in applications: one either requires the size of cells to be large (which will be the case in our application for the flooding time) or the tessellation is more restricted (for example,
limited to the transmission radius of the particles) and one requires the density of particles to be large enough, as in~\cite{Stauffer2014}.
\end{remark}
\subsection{Lipschitz net}
We are now ready to define the \emph{Lipschitz net} on the torus \(\mathbb{T}^d\) that we will use to prove Theorem \ref{thrm:total}.
The Lipschitz net, roughly speaking, will be an interlacement of Lipschitz surfaces, where we will take each spatial coordinate as being height in the base-height index,
and for each of them we will have a pile of surfaces. More formally,
let \(k\in\left\{0,1,\dots,\left\lfloor\frac{n}{\ell\log^3(n/\ell)}\right\rfloor\right\}\) and \(q\in\{1,2,\dots,d\}\).
For any $i\in\mathbb{T}^d$, let $i=(i_1,i_2,\ldots,i_d)$.
Define \(\mathbb{L}_k^q\) to be the \(d\)-dimensional
hyperplane in the space-time tessellation that is orthogonal to the \(q\)-th spatial coordinate, at distance \(k\left\lceil\log^3(n/\ell)\right\rceil\) from the origin, i.e.
\[
\mathbb{L}_k^q=\left\{(i,\tau)\in\mathbb{T}^{d}\times\mathbb{Z}:\:i_{q}=k\lceil\log^3(n/\ell)\rceil\right\}.
\]
We define \(F_k^q\) to be the Lipschitz surface corresponding to \(\mathbb{L}_k^q\), i.e.\
\(F_k^q\) is the two-sided Lipschitz surface for which \(h\) in the base-height index corresponds to \(i_q\) in the space-time index, and for which the Lipschitz functions satisfy \(F_+(b)\geq k\lceil\log^3(n/\ell)\rceil\) and \(F_-(b)\leq k\lceil\log^3(n/\ell)\rceil\). We define the \emph{height of the surface} \(F_k^q\) at \(b\in \mathbb{Z}^d\) to be
\[
\max_{h:(b,h)\in F_k^q}\left|k\lceil\log^3(n/\ell)\rceil-h\right|.
\]
Let \(C_0>0\) be an integer constant of our choosing. From now on we assume that \(F_k^q\) is a Lipschitz surface for which the height is at most \(\frac{\log^3(n/\ell)}{2}\) for all \((i,\tau)\in F_k^q\) satisfying \(\tau\in\{0,1,\dots,C_0n/\ell\}\).
\begin{mydef}
The \emph{Lipschitz net} \(F_{\textrm{net}}\) with constant \(C_0\) is the set of space-time cells \((i,\tau)\in\mathbb{T}^{d}\times\mathbb{Z}\) contained in the union of all \(F_k^q\); i.e.,
$
F_\mathrm{net}=\bigcup_{q=1}^d\bigcup_{k=0}^{\left\lfloor\frac{n}{\ell\log^3(n/\ell)}\right\rfloor} F_k^q.
$
Moreover, we say that $F_\mathrm{net}$ surrounds the origin at distance $D$ if $F_0^q$ surrounds the origin of $\mathbb{L}_0^q$ at distance $D$ for all $q\in\{1,2,\ldots,d\}$.
\end{mydef}
Note that we have for all \((i,\tau)\in F_{\textrm{net}}\) that the event \(E_{\textrm{st}}(i,\tau)\) holds, which follows directly from the fact that every space-time cell in \(F_{\textrm{net}}\) belongs to at least one Lipschitz surface \(F_k^q\) for some \(k\) and some \(q\).
\begin{thrm}\label{thrm:net}
For any constant $C_0$, there exists a constant $C_1>0$ such that, for any $\delta>0$ and any $\ell=O(n^{1-\delta})$ with $\ell\geq C_1$,
the Lipschitz net $F_{\textrm{net}}$ with constant $C_0$ exists and surrounds the origin at distance $O(\log^2 n)$ with probability $1-n^{-\omega(1)}$.
\end{thrm}
\begin{proof}
Start by considering the plane \(\mathbb{L}_0^1\) and its corresponding Lipschitz surface \(F_0^1\).
If the height of \(F_0^1\) at the origin is more than \(\frac{\log^3(n/\ell)}{2}\),
then the Lipschitz surface cannot surround the origin at a distance \(\frac{\log^3(n/\ell)}{2}\). Therefore, since \(\ell\) and \(\log^3(n/\ell)\) are both assumed sufficiently large, we have
by Theorem \ref{thrm:surface_event_simple} that the probability that a two-sided Lipschitz surface around the origin with height at most \(\frac{\log^3(n/\ell)}{2}\) does not exist is at most
\[
\sum\nolimits_{s\geq\log^3(n/\ell)/2}s^d\exp\left\{-c_3\lambda_0\frac{\ell s}{\log^{c_4}(\ell s)}\right\}
\leq \exp\left(-\omega(\log^2n)\right).
\]
Using this and a union bound across all space-time cells for which \(\tau\in\{0,1,\dots,C_0n/\ell\}\), we have that the probability that \(F_0^1\) has height at most \(\frac{\log^3(n/\ell)}{2}\)
for all \((i,\tau)\in F_0^1\) satisfying \(\tau\in\{0,1,\dots,C_0n/\ell\}\) is at least $1-\exp\left(-\omega(\log^2n)\right)$.
Next, consider the planes \(\mathbb{L}_k^q\). Since the probability space is translation invariant due to the weights \(\mu_{x,y}\) being i.i.d., this bound holds for any \(k\) and any $q$.
Therefore, by applying a union bound across \(k\in\left\{0,1,\dots,\left\lfloor\frac{n}{\ell\log^3(n/\ell)}\right\rfloor\right\}\) and \(q\in\{1,2,\dots,d\}\)
we obtain that the probability that \(F_k^q\) has maximum height at most \(\frac{\log^3(n/\ell)}{2}\) for all $k$ and $q$ is at least $1-\exp\left(-\omega(\log^2n)\right)$.
Under this assumption, for any given $q$ and two distinct $k,k'$, the surfaces $F_k^q$ and $F_{k'}^q$ do not intersect, producing the Lipschitz net.
\end{proof}
The usefulness of the Lipschitz net is that, once we know it exists for any local event $E_\mathrm{st}$ that is likely enough,
then one just needs to find a suitable choice for the event $E_\mathrm{st}$ and use the Lipschitz net to show that this event propagates throughout the torus.
For the case of spread of information, we will use the Lipschitz net to show that once an informed particle enters a cell that is part of the Lipschitz net, then
information spreads evenly across the torus, resulting in a positive density of informed particles.
For this, we will use a specific increasing event \(E_{\textrm{st}}\) to obtain that the information spreads with positive speed on each individual surface of \(F_{\textrm{net}}\).
Then, in order to show that the information also moves across different surfaces of the net, we will need the following geometric property.
\begin{lemma}\label{lem:change_surface}
Let \(F_{\textrm{net}}\) be the Lipschitz net with constant \(C_0\) and let
\( F_{k}^{q}\) and \( F_ {k'}^{q'}\) be any two given Lipschitz surfaces that are part of \(F_{\textrm{net}}\),
where \(q\neq q'\) and \(k,k'\in\left\{0,1,\dots,\left\lfloor\frac{n}{\ell\log^3(n/\ell)}\right\rfloor\right\}\).
For any \(\tau\in\{0,1,\dots,C_0n/\ell\}\) there exist space-time cells \((i,\tau)\in F_{k}^{q}\) and \((i',\tau)\in F_{k'}^{q'}\) such that \(\|(i',\tau)-(i,\tau)\|_{1}\leq 1\).
\end{lemma}
\begin{proof}
Let \(q=1\) and \(q'=2\); the proof for other combinations of parameters \(q\) and \(q'\) goes similarly.
We want to show that for any \(\tau\in\{0,1,\dots,C_0n/\ell\}\) there exist a space-time cell \((i_1,\dots,i_{d},\tau)\in F_k^1\) and a
space-time cell \((j_1,\dots,j_{d},\tau)\in F_{k'}^2\) such that \(\|(i_1,\dots,i_{d})-(j_1,\dots,j_{d})\|_{1}\leq 1\).
Fix \(\tau\in\{0,1,\dots,C_0n/\ell\}\) and set the components \((i_3,\dots,i_{d})\) to be the same as \((j_3,\dots,j_d)\).
Let \(F^1\) be either of the two Lipschitz functions (see Definition \ref{def:lip_surf}) corresponding to \( F_k^1\),
and let \(F^2\) be either of the Lipschitz functions corresponding to \( F_{k'}^2\).
Since \((i_3,\dots,i_{d})=(j_3,\dots,j_d)\), to simplify notation we
write \(F^1(y):=F^1(y,i_3,\dots,i_d,\tau)\in\mathbb{T}\) and
\(F^2(y):=F^2(y,i_3,\dots,i_d,\tau)\in\mathbb{T}\).
Therefore it remains to show that there exist $x,y\in\mathbb{T}$ such that
$\|(F^1(x),x)-(y,F^2(y))\|_{1}\leq 1$. Assume, by contradiction, that this is not the case.
Let $m^2=k'\lceil\log^3(n/\ell)\rceil$, which is the height of $\mathbb{L}_{k'}^2$, that is, $(0,m^2,0,0,\ldots,0)\in \mathbb{L}_{k'}^2$.
Next, if $F^2$ is the Lipschitz function corresponding to $F_+$ of $F_{k'}^2$ (refer to Definition~\ref{def:lip_surf}) then set $h^2=m^2+\frac{\log^3(n/\ell)}{2}+1$; otherwise,
set $h^2=m^2-\frac{\log^3(n/\ell)}{2}-1$. So for all $y\in\mathbb{T}$, we have $|F^2(y)-m^2| < |h^2-m^2|$.
For any point $(x,y)\in\mathbb{T}^2$ we say that it is \emph{under} $F^2$ if $|y-m^2|\leq |F^2(x)-m^2|$; otherwise we say it is above $F^2$.
Note that $(F^1(m^2),m^2)$ is ``under'' the surface $F^2$, and $(F^1(h^2),h^2)$ is above $F^2$. Therefore, taking the shortest sequence $x_1,x_2,\ldots,x_\iota$ from $m^2$ to $h^2$, there must
exist a point $x_r$ such that $(F^1(x_r),x_r)$ is under $F^2$ but $(F^1(x_{r+1}),x_{r+1})$ is above $F^2$. Since $F^1$ is Lipschitz, this implies that one of these two points is within distance $1$ of $F^2$, a contradiction.
\end{proof}
\section{Spread of information using the Lipschitz net}\label{sec:spread}
We proceed to show how the information spreads on \(F_{\textrm{net}}\). We do this by applying
Theorem \ref{thrm:net} with an event that results in the information spreading with positive speed along each individual Lipschitz surface of \(F_{\textrm{net}}\). More precisely, from now on let the increasing event \(E_{\textrm{st}}(i,\tau)\) be defined as in Definition \ref{def:mainEvent} below.
\begin{mydef}[Increasing event \(E_{\textrm{st}}(i,\tau)\)]\label{def:mainEvent}
Take any $(i,\tau)\in\mathbb{T}^d\times \mathbb{Z}$. Let \(\Upsilon\) be the collection of particles located inside $\prod_{j=1}^d[(i_j-\eta)\ell,(i_j+\eta+1)\ell]$ at time $\tau\beta$.
Consider a distinguished particle $x_0$ located inside $\prod_{j=1}^d[i_j\ell,(i_j+1)\ell]$ at time $\tau\beta$.
Define \(E_{\mathrm{st}}(i,\tau)\) to be the event that at time \((\tau+1)\beta\), for all \(i'\in\mathbb{T}^d\) with \(\|i-i'\|_{\infty}\leq\eta\),
there is at least one particle from \(\Upsilon\) in \(\prod_{j=1}^d[i_j'\ell,(i_j'+1)\ell]\) that collided with \(x_0\) during \([\tau\beta,(\tau+1)\beta]\).
\end{mydef}
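To illustrate the geometry of this event (a toy Python sketch, not part of the argument), the set of target cells \(i'\) with \(\|i-i'\|_{\infty}\leq\eta\) can be enumerated explicitly on the discrete torus of cells; there are exactly \((2\eta+1)^d\) of them, which is the counting factor that appears later in the union bound for the event \(F_3\).

```python
from itertools import product

def neighbors_linf(i, eta, n_cells):
    """All cell indices i' on the torus (Z / n_cells)^d with ||i - i'||_inf <= eta."""
    d = len(i)
    return {tuple((i[j] + o[j]) % n_cells for j in range(d))
            for o in product(range(-eta, eta + 1), repeat=d)}

# The event asks for an informed particle in every one of these cells;
# there are exactly (2*eta + 1)^d of them (for eta below the torus scale).
cells = neighbors_linf((0, 0), eta=2, n_cells=100)
assert len(cells) == (2 * 2 + 1) ** 2
```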
For \(E_{\textrm{st}}(i,\tau)\) defined as above, we have the following result.
The proof of this result uses a few heat-kernel estimates for random walks on $\mathbb{Z}^d$ with i.i.d.\ conductances.
\begin{lemma}\label{prop:event}
Fix any $\epsilon$, $\eta$ and the ratio $\beta/\ell^2$, and let $w$ satisfy the condition in
Theorem~\ref{thrm:surface_event_simple}. Then, if \(\ell\) is sufficiently large,
there exists a positive constant \(C\) such that for \(E_{\textrm{st}}(i,\tau)\) as defined in Definition \ref{def:mainEvent} and for any $(i,\tau)\in\mathbb{T}^d\times \mathbb{Z}$, we have
$
\nu_{E_{\mathrm{st}}}((1-\epsilon)\lambda,Q_{(2\eta+1)\ell},Q_{w\ell},\beta)\geq1-\exp\{-C(1-\epsilon)\lambda_0\ell^{1/3}\}.
$
\end{lemma}
\begin{proof}
Let \(T=\ell^{5/3}\).
Since $\beta/\ell^2$ is fixed, we can take $\ell$ large enough so that $T \ll \beta$ (i.e., $T$ is much smaller than the length of the time interval in the tessellation).
Define \(Q^*:=\prod_{j=1}^d[(i_j-\eta)\ell,(i_j+\eta+1)\ell]\) and assume that at time \(\tau\beta\), for all sites \(x\in Q^*\), the number of particles at \(x\) is a Poisson random variable with mean \((1-\epsilon)\lambda_0\mu_x\).
We start by stating two claims and using them to prove the lemma. Then, we give the proofs of the claims.
\begin{claim}\label{cl:1}
If the distinguished particle \(x_0\) is inside \(\prod_{j=1}^d[i_j\ell,(i_j+1)\ell]\) at time \(\tau\beta\) and \(x_0\) follows a fixed path \(\left(\rho(t)\right)_{\tau\beta\leq t\leq \tau\beta+T}\), then by time \(\tau\beta+T\) the number of particles that have collided with \(x_0\) during \([\tau\beta,\tau\beta+T]\), but were not at the same site as \(x_0\) at time \(\tau\beta\), is a Poisson random variable with intensity at least \(
C_1(1-\epsilon)\lambda_0\ell^{1/3}
\) for some positive constant \(C_1\), independent of \(\rho(t)\).
\end{claim}
\begin{claim}\label{cl:2}
Given that there are \(N\) particles inside \(Q^*\) at time \(\tau\beta+T\), the probability that at least one of these particles is inside \(Q^{**}:=\prod_{j=1}^d[(i'_j)\ell,(i'_j+1)\ell]\) for any given \(i'\) for which \(\|i-i'\|_{\infty}\leq\eta\) is at least \(
1-\exp\{-Nc_p\},
\)
where \(c_p\) is a positive constant that is bounded away from \(0\) and depends only on \(d\), \(\eta\) and the ratio \(\beta/\ell^2\).
\end{claim}
Now we use the above claims to prove the lemma.
Note that by Definition \ref{def:mainEvent}, \(E_{\textrm{st}}(i,\tau)\) is restricted to the super cube \(Q^*\) and time interval \([\tau\beta,(\tau+1)\beta]\).
We now define the following 3 events.
\begin{description}
\item[\(F_1\):] The distinguished particle \(x_0\) never leaves \(\prod_{j=1}^d[(i_j-\eta+1)\ell,(i_j+\eta-1)\ell]\) during \([\tau\beta,\tau\beta+T]\).
\item[\(F_2\):] Let \(C_1\) be the constant from Claim~\ref{cl:1}. During the time interval \([\tau\beta,\tau\beta+T]\) the distinguished particle \(x_0\) collides with at least \(\frac{C_1\lambda_0\ell^{1/3}}{2}\) different particles from \(\Upsilon\) that are in the super cube \(Q^{*}\) at time \(\tau\beta+T\).
\item[\(F_3\):] Out of the \(\frac{C_1\lambda_0\ell^{1/3}}{2}\) or more particles from \(F_2\), at least one of them is in the cube \(Q^{**}\) at time \((\tau+1)\beta\), for all \(Q^{**}\) for which \(Q^{**}\subset Q^{*}\).
\end{description}
By definition of the events, we clearly have that \(\mathbb{P}[E_{\mathrm{st}}(i,\tau)]\geq \mathbb{P}[F_1\cap F_2\cap F_3]\). Also note that \(F_1,F_2\) and \(F_3\) are clearly restricted to the super cube \(Q^*\) and the time interval \([\tau\beta,(\tau+1)\beta]\) and are all increasing events.
Using the exit probability bound from \cite[Proposition 3.7]{Barlow2004} we have
\begin{equation}\label{for:F1}
\mathbb{P}[F_1]\geq 1-C_2\exp\{-C_3\ell^2/T\}=1-C_2\exp\{-C_3\ell^{1/3}\}
\end{equation}
for some positive constants \(C_2\) and \(C_3\).
For the event \(F_2\), we apply the result of Claim~\ref{cl:1}. Note that the bound from Claim~\ref{cl:1} is uniform across all paths \(\rho(\cdot)\) and in particular holds for any path the distinguished particle from the event \(F_1\) might follow. This gives that the intensity of the Poisson point process of particles that are in \(Q^{*}\) at time \(\tau\beta\) and collide with \(x_0\) during \([\tau\beta,\tau\beta+T]\) is at least \((1-\epsilon)\lambda_0 C_1\ell^{1/3}\) for some positive constant \(C_1\). Since every particle that collides with \(x_0\) enters \(\prod_{j=1}^d[(i_j-\eta+1)\ell,(i_j+\eta)\ell]\) during \([\tau\beta,\tau\beta+T]\), we can again use the exit probability bound from \cite[Proposition 3.7]{Barlow2004} to bound from below the probability that such a particle is inside \(Q^{*}\) at time \(\tau\beta+T\) by
\[
1-C_a\exp\left\{-\frac{C_b\ell^2}{T}\right\}=1-C_a\exp\{-C_b\ell^{1/3}\},
\]
for some positive constants \(C_a\) and \(C_b\). This term can be made as close to \(1\) as possible by having \(\ell\) sufficiently large. We assume \(\ell\) is large enough so that this term is larger than \(2/3\). This gives that the intensity of the process of particles from \(\Upsilon\) that collided with \(x_0\) during \([\tau\beta,\tau\beta+T]\) and are in \(Q^{*}\) at time \(\tau\beta+T\) is at least
\[
\frac{2(1-\epsilon)\lambda_0 C_1\ell^{1/3}}{3}.
\]
Using Chernoff's bound (see Lemma \ref{lem:chernoff}) we have that
\begin{equation}\label{for:F2}
\mathbb{P}[F_2]\geq 1-\exp\{-(2/3)^2C_1(1-\epsilon)\lambda_0\ell^{1/3}\}.
\end{equation}
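As a sanity check on this Chernoff step (a hedged numeric sketch, not part of the proof), one can compare the exact Poisson lower tail with a standard bound of the form \(\mathbb{P}[\mathrm{Poi}(\mu)\le(1-\delta)\mu]\le e^{-\delta^2\mu/2}\); the exact constants in the exponent depend on the version of the Chernoff bound used (cf.\ Lemma \ref{lem:chernoff}).

```python
import math

def poisson_cdf(k, mu):
    """Exact P[Poisson(mu) <= k], via the series for the lower tail."""
    term = math.exp(-mu)
    total = term
    for j in range(1, k + 1):
        term *= mu / j
        total += term
    return total

def chernoff_lower_tail(mu, delta):
    """Standard Chernoff bound: P[Poisson(mu) <= (1 - delta) * mu] <= exp(-delta^2 mu / 2)."""
    return math.exp(-delta ** 2 * mu / 2.0)

# The exact lower tail never exceeds the Chernoff bound.
for mu in (5.0, 20.0, 80.0):
    for delta in (0.25, 0.5):
        assert poisson_cdf(int((1 - delta) * mu), mu) <= chernoff_lower_tail(mu, delta)
```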
We now turn to \(F_3\). Using the result of Claim~\ref{cl:2}, and a uniform bound across the number of cubes inside a super cube, we have that
\begin{equation}\label{for:F3}
\mathbb{P}[F_3]\geq 1-(2\eta+1)^d\exp\left\{-\frac{C_1(1-\epsilon)\lambda_0\ell^{1/3}}{2}c_p\right\},
\end{equation}
where \(c_p\) is a small but positive constant. Taking the product of the probability bounds in (\ref{for:F1}), (\ref{for:F2}) and (\ref{for:F3}), we see that the probability that \(E_{\mathrm{st}}(i,\tau)\) holds is at least
\begin{equation*}
1- \exp\{-C(1-\epsilon)\lambda_0\ell^{1/3}\}
\end{equation*}
for some constant \(C\) and all large enough \(\ell\), which proves the lemma.
\end{proof}
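The final step combines lower bounds of the form \(1-e^{-c\ell^{1/3}}\). A short numeric sketch (illustrative only, with arbitrary constants) of the elementary inequality \(\prod_i(1-\epsilon_i)\ge 1-\sum_i\epsilon_i\) behind this combination:

```python
import math

def product_vs_union(eps):
    """Return (prod_i (1 - e_i), 1 - sum_i e_i); for e_i in [0, 1] the first
    quantity is always >= the second (Weierstrass product inequality)."""
    prod = 1.0
    for e in eps:
        prod *= (1.0 - e)
    return prod, 1.0 - sum(eps)

# Failure probabilities of the form exp(-c * ell^(1/3)), as for F1, F2, F3.
ell = 1000.0
eps = [math.exp(-c * ell ** (1.0 / 3.0)) for c in (0.5, 1.0, 2.0)]
prod, union = product_vs_union(eps)
assert prod >= union
# The product is then itself >= 1 - exp(-C * ell^(1/3)) for a smaller constant C.
assert prod >= 1.0 - math.exp(-0.4 * ell ** (1.0 / 3.0))
```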
\begin{proof}[Proof of Claim~\ref{cl:1}]
For each time $t\in [\tau\beta,\tau\beta+T]$, let $\Psi_t$ be the Poisson point process on $\mathbb{T}^d$ giving the locations at time $t$ of the particles that belong to $\Upsilon$, excluding all particles located at \(\rho(\tau\beta)\) at time \(\tau\beta\). Since the particles that start in \(Q^*\) move around and can leave \(Q^*\), we need to find a lower bound for the intensity of \(\Psi_t\) for times in \([\tau\beta,\tau\beta+T]\). Note that the distinguished particle \(x_0\) we are tracking is not part of \(\Psi\), since \(\Psi\) does not include particles located at \(\rho(\tau\beta)\) at time \(\tau\beta\).
We will need to apply heat kernel bounds from \cite[Theorem 2.2]{Hambly2009} to the particles in \(Q^*\), so we need to ensure that the time intervals we consider are large enough for the bounds to hold.
We will only consider times \(t\in[\ell^{4/3},T]\) so that for large enough \(\ell\), we have \(t\geq\sup_{\substack{x\in Q^*\\y\in Q^*}}\|x-y\|_1\) and so the heat kernel bounds from \cite[Theorem 2.2]{Hambly2009} hold.
Then, we have that for all sites \(x\in Q^*\) that are at least \(\ell\) away from the boundary of \(Q^*\) and at any such time \(t\) the intensity of \(\Psi_{\tau\beta+t}\) at vertex $x\in \mathbb{T}^d$ is at least
\begin{equation*}
\Psi_{\tau\beta+t}(x)\geq\sum_{\substack{y\in Q^*\\y\neq \rho(\tau\beta)}}(1-\epsilon)\lambda_0\mu_y\cdot \mathbb{P}_y[Y_t=x]
= (1-\epsilon)\lambda_0\mu_x\sum_{\substack{y\in Q^*\\y\neq \rho(\tau\beta)}}\mathbb{P}_x[Y_t=y],
\end{equation*}
where $Y_t$ stands for the location of a simple random walk at time $t$, and $\mathbb{P}_y$ is the measure induced by a simple random walk starting from $y$.
In the last step above, we used that the simple random walk is reversible with respect to the measure $\mu$.
We now use the exit probability bound from \cite[Proposition 3.7]{Barlow2004} to get that
\[
\sum_{\substack{y\in Q^*}}\mathbb{P}_x[Y_t=y]\geq 1-c_3\exp\{-c_4\ell^2/t\}.
\]
Next, we use \cite[Theorem 2.2]{Hambly2009} to account for the particles at \(\rho(\tau\beta)\), yielding
\begin{equation*}
\sum_{\substack{y\in Q^*\\y\neq \rho(\tau\beta)}}\mathbb{P}_x[Y_t=y]\geq 1-c_3\exp\left\{-c_4\ell^2/t\right\}-C_Mc_5t^{-d/2}.
\end{equation*}
This gives that for any \(t\in[\ell^{4/3},T]\), the intensity of \(\Psi_{\tau\beta+t}\) is at least
\[
\Psi_{\tau\beta+t}(x)\geq (1-\epsilon)\lambda_0 \mu_x(1-c_3\exp\{-c_4\ell^2/T\}-C_Mc_5\ell^{-2d/3}).
\]
Let \([\tau\beta,\tau\beta+T]\) be divided into subintervals of length \(W\in(0,T]\), where we set \(W=\ell^{4/3}\) so that it is large enough to allow the use of the heat kernel bounds from \cite[Theorem 2.2]{Hambly2009}. Let \(J=\{1,\dots, \lfloor T/W\rfloor \}\) and \(t_j:=\tau\beta+jW\).
Then the intensity of particles that share a site with the distinguished particle \(x_0\) at least once among times \(\{t_1, t_2, \dots, t_{ \lfloor T/W\rfloor }\}\) is at least
\begin{align*}
&\sum_{j\in J}\Psi_{t_j}(\rho(t_j))\mathbb{P}_{\rho(t_j)}[Y_{r-t_j}\neq \rho(r) \;\forall r\in\{t_{j+1},\dots,t_{\lfloor T/W\rfloor}\}]\\
&\geq (1-\epsilon)\lambda_0 C_M^{-1}(1-c_3\exp\{-c_4\ell^2/T\}-C_Mc_5\ell^{-2d/3})\sum_{j\in J}\left(1-\sum_{z>j}\mathbb{P}_{\rho(t_j)}[Y_{t_z-t_j}= \rho(t_z)]\right).
\end{align*}
We want to make all of the terms of the sum over \(J\) positive, so we consider the term \(\sum_{z>j}\mathbb{P}_{\rho(t_j)}[Y_{t_z-t_j}= \rho(t_z)]\) and show that it is smaller than \(\frac{1}{2}\) for large enough \(\ell\).
To do this, we use \cite[Theorem 2.2]{Hambly2009}, which holds when \(W\geq\ell^{4/3}\) and \(\ell\) is large enough, to bound it from above by
\begin{align}
\sum_{z>j}\mathbb{P}_{\rho(t_j)}[Y_{t_z-t_j}= \rho(t_z)]
&\leq \sum_{z>j}C_MC_{HK}(t_z-t_j)^{-d/2}\nonumber\\
&\leq C_M C_{HK}W^{-d/2}\sum_{z=1}^{T/W-j}z^{-d/2}\label{for:TW}
\end{align}
where \(C_{HK}\) is a constant coming from \cite[Theorem 2.2]{Hambly2009}. Then, (\ref{for:TW}) can be bounded from above by
\begin{equation}
C_M C_{HK} W^{-d/2}\left(2+\sum_{z=3}^{T/W-j}z^{-d/2}\right)
\leq C_M C_{HK}W^{-d/2}\left(2+\int_{2}^{T/W}z^{-d/2}dz\right)\label{for:TWintegral}.
\end{equation}
Let \(C\) be a constant that can depend on \(C_{HK}\), \(C_M\) and \(d\). Then for \(d=2\), (\ref{for:TWintegral}) is smaller than \(CW^{-1}\log(T/W)\), and for \(d\geq 3\) the expression in (\ref{for:TWintegral}) is smaller than \(CW^{-d/2}\). Thus, setting \(\ell\) large enough, both terms are smaller than \(\frac{1}{2}\).
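The sum-versus-integral comparison above can be checked numerically (a hedged sketch, not part of the proof): since \(z^{-d/2}\) is decreasing, each summand is at most the integral over the preceding unit interval, the integral grows only logarithmically for \(d=2\), and it stays bounded for \(d\geq3\).

```python
import math

def tail_sum(M, d):
    """sum_{z=3}^{M} z^(-d/2)."""
    return sum(z ** (-d / 2.0) for z in range(3, M + 1))

def tail_integral(M, d):
    """integral_2^M z^(-d/2) dz, in closed form."""
    if d == 2:
        return math.log(M / 2.0)
    a = 1.0 - d / 2.0
    return (M ** a - 2.0 ** a) / a

# Since z^(-d/2) is decreasing, the sum from 3 is dominated by the integral from 2.
for d in (2, 3, 4):
    for M in (10, 100, 1000):
        assert tail_sum(M, d) <= tail_integral(M, d)

# For d >= 3 the integral stays bounded as M grows (here 2/sqrt(2) for d = 3),
# so the whole expression is O(W^(-d/2)); for d = 2 it grows only like log(M).
assert tail_integral(10 ** 6, 3) <= tail_integral(10 ** 7, 3) <= 2.0 / math.sqrt(2.0)
```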
Then, as a sum of Poisson random variables, the number of particles that share a site with \(x_0\) at some time in \(\{t_1,\dots,t_{\lfloor T/W\rfloor}\}\), which we denote by \(\Upsilon'\), is a Poisson random variable with mean at least
\[
(1-\epsilon)\lambda_0 C_M^{-1}(1-c_3\exp\{-2c_4\ell^2/T\}-C_Mc_5\ell^{-2d/3})\tfrac{T}{2W}.
\]
Using that \(T=\ell^{5/3}\) and setting \(\ell\) large enough establishes the claim, with \(C_1\) being any constant satisfying \(C_1<\frac{C_M^{-1}}{2}\).
\end{proof}
\begin{proof}[Proof of Claim~\ref{cl:2}]
We now prove that for large enough \(\ell\), if there are \(N\) particles inside \(Q^*\) at time \(\tau\beta+T\), then at least one of them is inside \(Q^{**}\) at time \((\tau+1)\beta\) with probability at least
\(
1-\exp\{-Nc_p\}.
\)
For \(t^{2/3}\geq\sup_{\substack{x\in Q^*\\y\in Q^{**}}}\|x-y\|_1\), define \(p_t:=\inf_{\substack{x\in Q^*}}\sum_{y\in Q^{**}}\mathbb{P}_x[Y_t=y]\). Then, if we define \(\mathrm{bin}(N,p_t)\) to be a binomial random variable with parameters \(N\in\mathbb{N}\) and \(p_t\in[0,1]\), it directly follows that we can bound the probability that at least one of the \(N\) particles from \(Q^*\) is inside \(Q^{**}\) at time \((\tau+1)\beta\) from below by
\[
\mathbb{P}[\mathrm{bin}(N,p_t)\geq1]\geq1-\exp\{-Np_t\}.
\]
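This bound is the elementary inequality \(1-(1-p)^N\geq1-e^{-Np}\), which follows from \(1-p\leq e^{-p}\). A quick numeric sketch (illustrative only):

```python
import math

def at_least_one(N, p):
    """Exact P[bin(N, p) >= 1] = 1 - (1 - p)^N."""
    return 1.0 - (1.0 - p) ** N

# Since 1 - p <= exp(-p), we have (1 - p)^N <= exp(-N p), giving the bound in the text.
for N in (1, 10, 1000):
    for p in (0.001, 0.05, 0.5):
        assert at_least_one(N, p) >= 1.0 - math.exp(-N * p)
```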
It remains to show that for \(t=\beta-T\), we have that \(p_t\geq c_p>0\) for some constant \(c_p\). We will again use the heat kernel bounds from \cite[Theorem 2.2]{Hambly2009} for the pair \(x,y\), which hold if \(\|x-y\|_1^{3/2}\leq\beta-T\) for all \(x\in Q^*,y\in Q^{**}\). Given the ratio \(\beta/\ell^2\), \(d\) and \(\eta\), this is satisfied if \(\ell\) is large enough. Then we have that
\begin{align*}
p_{\beta-T}=\inf_{x\in Q^*}\sum_{y\in Q^{**}}\mathbb{P}_x[Y_{\beta-T}=y]
\geq\inf_{x\in Q^*}C_{M}^{-1}\sum_{y\in Q^{**}}c_1 \beta^{-d/2}\exp\left\{-c_2\frac{\|x-y\|_1^2}{{\beta-T}}\right\}.
\end{align*}
Now we use that \(x\) and \(y\) can be at most \(c_\eta\ell\) apart where \(c_\eta\) is a constant depending on \(d\) and \(\eta\) only, and that \(\beta-T\geq\beta/2\) for \(\ell\) large enough. Hence,
\begin{align*}
p_{{\beta-T}}&\geq \inf_{x\in Q^*}C_M^{-1}\sum_{y\in Q^{**}}c_1 \beta^{-d/2}\exp\left\{-c_2\frac{2(c_\eta\ell)^2}{\beta}\right\}\\
&= C_M^{-1}c_1\ell^d\left(\frac{1}{\beta}\right)^{d/2}\exp\left\{-c_2\frac{2(c_\eta\ell)^2}{\beta}\right\}\\
&\geq c_p.
\end{align*}
\end{proof}
Lemma \ref{prop:event} implies that for \(\ell\) large enough, by setting \(\eta\) to be a large enough constant, and defining the increasing event \(E_{\textrm{st}}\) as in Definition~\ref{def:mainEvent},
the information spreads among neighboring cells.
Since the Lipschitz net surrounds the origin at distance $O(\log^2n)$, we have that in at most poly-logarithmic time,
the initially informed particle will enter some cell \(\prod_{j=1}^d[i_j\ell,(i_j+1)\ell]\) for which \((i,\tau)\) is in some Lipschitz surface \(F\) of $F_\mathrm{net}$.
Once that holds, we know that the event \(E_{\mathrm{st}}(i,\tau)\) occurs.
By the definition of \(E_{\mathrm{st}}(i,\tau)\), we obtain that the initially informed particle in \((i,\tau)\) informs other particles, causing
the information to spread to each \((i',\tau+1)\) for which \(\|i'-i\|_{\infty}\leq \eta\).
Let \((b,h)\) be the base-height index of the cell \((i,\tau)\in F\).
Recall that \(h\) is one of the spatial dimensions.
We will also select one of the \(d-1\) spatial dimensions from \(b\) and denote it \(b_1\).
Let \(b'\in\mathbb{T}^d_*\) be obtained from \(b\) by increasing the time dimension from
\(\tau\) to \(\tau+1\), and by increasing the chosen spatial dimension from \(b_1\) to \(b_1+1\).
Since \(\|b-b'\|_1=2\), we can choose \(h'\in\mathbb{T}\) such that \((b',h')\in F\) and \(|h-h'|\leq 2\), where the latter holds by the Lipschitz property of \(F\).
Therefore, there must exist \(i'\in\mathbb{T}^d\) such that \((i',\tau+1)\) is the space-time cell corresponding to \((b',h')\) and \(\|i-i'\|_{\infty}\leq 4\).
Hence, at time \((\tau+1)\beta\), there is an informed particle in the cube indexed by \(i'\) if $\eta$ is at least $4$ and $E_\mathrm{st}(i,\tau)$ holds.
Using this mechanism, we can show that after some time of order \(n\), the information has spread along the surfaces across the entire torus.
\begin{lemma}\label{prop:phase1}
Let \(F_{\textrm{net}}\) be the Lipschitz net with constant \(C_0\) which surrounds the origin at distance $O(\log^2n)$.
There exists a constant \(C_T>0\), independent of \(C_0\), such that for every \((i,\tau)\in F_{\textrm{net}}\) for which \(\tau\beta\geq C_Tn\),
there is at least one informed particle inside the cube \(\prod_{j=1}^d[(i_j-\eta+1)\ell,(i_j+\eta)\ell]\) for all times in \([\tau\beta,(\tau+1)\beta]\).
\end{lemma}
\begin{proof}
Let \(E_{\mathrm{st}}(i,\tau)\) be defined as in Definition \ref{def:mainEvent} and let \(F_{\textrm{net}}\) be the Lipschitz net with constant \(C_0\),
corresponding to the event \(E_{\mathrm{st}}(i,\tau)\).
We have by the fact that \( F_\mathrm{net}\) surrounds the origin at a distance \(O(\log^2n)\) and that each cell represents a time interval of length \(\beta\),
that it takes at most $O(\beta \log^2n)$ time for the information to enter \( F_\mathrm{net}\).
Once the informed particle is in a space-time cell of some surface \( F_{k}^q\) of $F_\mathrm{net}$,
we have by the definition of \(E_{\mathrm{st}}(i,\tau)\) with \(\eta=d\),
that it takes at most \(\frac{2n}{\ell}\) steps for the information to spread across the surface (moving between neighboring cells),
so that all space-time cells \((i,\tau)\in F_k^q\) for which \(\tau=\frac{2n}{\ell}+O(\log^2 n)\) contain an informed particle.
Next, for any $q',k'$ with $q'\neq q$, we know by Lemma \ref{lem:change_surface} that for any $\tau$ there are neighboring cells $(i,\tau)\in F_k^q$ and $(i',\tau)\in F_{k'}^{q'}$.
Therefore, it takes at most \(\beta\) time for the information to enter any surface $F_{k'}^{q'}$ with $q' \neq q$,
and another $\frac{2n}{\ell}\beta$ amount of time to spread to all cells in those surfaces, so that all cells \((i,\tau)\in F_{k'}^{q'}\)
for which \(\tau=\frac{4n}{\ell}+1+O(\log^2n)\) contain an informed particle.
It still remains to spread the information to the surfaces $F_{k'}^q$ with $k'\neq k$. Again, this takes at most $\frac{2n}{\ell}\beta+\beta$ time by the same argument above.
Putting everything together, we obtain that for any \(k\), any \(q\), and all \((i,\tau)\in F_k^q\) for which
\(\tau\beta\geq C_Tn\geq (\frac{6n}{\ell}+2+O(\log^2n))\beta\),
where we set \(C_T\) large enough for the second inequality to hold, there is at least one informed particle in the cube \(\prod_{j=1}^d[(i_j-\eta+1)\ell,(i_j+\eta)\ell]\) for all times in \([\tau\beta,(\tau+1)\beta]\).
\end{proof}
Using Lemma \ref{prop:phase1} and the geometric properties of the Lipschitz net, we can show that there is a density of informed particles everywhere on the torus for an interval of time of order \(n\).
\begin{thrm}\label{prop:phase1density}
There exist constants \(C_{\beta}\geq 1\) and \(C_{\ell}>0\) such that the following holds.
Let \(C_T\) be the constant from Lemma \ref{prop:phase1}. Tessellate \(\mathbb{T}^d\) into cubes \((Q_m)_m\) of side length \(C_{\ell}\log^3(n)\). Then, for all times \(t\in[C_Tn,(C_T+C_{\beta})n]\), there is at least one informed particle in each subcube \(Q_m\) with probability at least
$
1-n^{-\omega(1)}.
$
\end{thrm}
\begin{proof}
Fix \(\ell\) sufficiently large for Lemma \ref{prop:event} and Theorem \ref{thrm:surface_event_simple} to hold and recall that the ratio \(\beta/\ell^2\) is fixed.
Let also \(n\gg \ell\).
Then, there exists a constant \(C_T\) so that, for any large enough choice of \(C_0\), Lemma \ref{prop:phase1} gives that
for every space-time cell \((i,\tau)\) of the Lipschitz net \(F_{\textrm{net}}\) that satisfies \(\tau\beta\geq C_Tn\),
there is at least one informed particle in the region \(\prod_{j=1}^d[(i_j-\eta+1)\ell,(i_j+\eta)\ell]\) at all times in \([\tau\beta,(\tau+1)\beta]\).
We can, without loss of generality, assume \(C_T\) is such that \(C_Tn=\beta\tau^*\) for some \(\tau^*\in\mathbb{N}\). Then, we only have to show that for all cubes \(Q_m\) of side length \(C_{\ell}\log^3(n)\), there exist space-time cells \((i,\tau)\) such that the region \(\prod_{j=1}^d[(i_j-\eta+1)\ell,(i_j+\eta)\ell]\) is contained in \(Q_m\) and such that \([C_Tn,(C_T+C_{\beta}) n]\subseteq\bigcup_{\tau}[\tau\beta,(\tau+1)\beta]\), where \(C_{\beta}\) is a constant greater than or equal to \(1\).
Let \(C_{\beta}=k^*\beta\) where \(k^*\) is the smallest integer for which \(k^*\beta\geq 1\) and fix the Lipschitz net constant \(C_0\) to be greater than or equal to \((C_T+C_{\beta})\ell/\beta\). Then, we have from Theorem \ref{thrm:net} that the Lipschitz net with constant \(C_0\) exists with probability at least
$
1-n^{-\omega(1)}.
$
We now show that if this Lipschitz net exists, the theorem holds.
Let \( F_s^q\) and \( F_{s+1}^q\) be any two consecutive two-sided surfaces of the Lipschitz net and let \((b,h)\in F_s^q\) and \((b,h')\in F_{s+1}^q\) be two base-height cells with the same base.
By definition of the Lipschitz net, we have that the height of each Lipschitz surface in the net is at most \(\frac{\log^3(n/\ell)}{2}\) for all space-time cells that satisfy \(\tau\in\{0,1,\dots,C_0n/\ell\}\).
Since the base-height cells \((b,h)\) and \((b,h')\) might belong to opposite sides of the two-sided Lipschitz surfaces, we therefore have that \(|h-h'|\leq 2\log^3(n/\ell)\) for all base-height cells for which \(\tau\beta<(C_T+C_{\beta})n\leq C_0\beta n/\ell\). Note that this holds for all \(q\in\{1,\dots,d\}\) and
recall that by Lemma \ref{prop:phase1} there is an informed particle inside the region \(\prod_{j=1}^d[(i_j-\eta+1)\ell,(i_j+\eta-1)\ell]\) throughout the entire time interval \([\tau\beta,(\tau+1)\beta]\).
Therefore, for every cube of side length at least \(2\ell\log^3(n/\ell)+2\eta\ell\) on the torus and throughout every time interval of the form above, there is at least one informed particle inside the cube.
By repeating this argument for all \(\tau\) that satisfy \(\tau\beta\in[C_Tn,(C_T+C_{\beta})n)\), we have that this holds for the entire time interval \([C_Tn,(C_T+C_{\beta})n]\).
\end{proof}
Before turning to the proof of Theorem~\ref{thrm:total}, we state a theorem showing that if we start with a density of particles in a cube,
regardless of how they are placed inside its subcubes,
we can couple their positions after some time with a Poisson point process that is independent of their initial locations.
This gives a type of local mixing property for random walks on $\mathbb{T}^d$ with i.i.d.\ conductances.
For the proof of this technical result, refer to \cite[Theorem 3.1]{Gracar2016}.
\begin{thrm}\label{thrm:mixing}
Let \(G\) be a uniformly elliptic graph with edge weights \(\mu_{x,y}\). There exist constants \(c_0\), \(c_1\), \(C>0\) such that the following holds.
Fix \(K>\ell>0\) and \(\epsilon\in(0,1)\). Consider the cube \(Q_K\) tessellated into subcubes \((T_i)_{i}\) of side length \(\ell\) and assume that $\ell$ is large enough.
Let \((x_j)_{j}\subset Q_{K}\) be the locations at time \(0\) of a collection of particles, such that each subcube \( T_i\) contains at least \(\sum_{y\in T_i}\beta\mu_y\) particles for some \(\beta>0\).
Let \(\Delta\geq c_0\ell^2\epsilon^{-4/\Theta}\) where \(\Theta\) is a constant that depends on the weight bounds.
For each \(j\) denote by \(Y_j\) the location of the \(j\)-th particle at time \(\Delta\).
Fix \(K'>0\) such that \(K-K'\geq\sqrt{\Delta}c_1\epsilon^{-1/d}\). Then there exists a coupling \(\mathbb{Q}\) of an independent Poisson point process \(\psi\) with intensity measure \(\zeta(y)=\beta(1-\epsilon)\mu_y\), \(y\in \mathcal{C}_{\infty}\), and \((Y_j)_{j}\) such that within \( Q_{K'}\subset Q_K\), \(\psi\) is a subset of \((Y_j)_{j}\) with probability at least
\[
1-\sum_{y\in Q_{K'}}\exp\left\{-C\beta\mu_y\epsilon^2\Delta^{d/2}\right\}.
\]
\end{thrm}
\subsection{Proof of Theorem \ref{thrm:total}}
\begin{proof}
Let \(C_T\) be the constant from Lemma \ref{prop:phase1} and let \(C_{\beta}\) and \(C_{\ell}\) be the constants from Theorem \ref{prop:phase1density}.
We want to bound the probability that at time \((C_T+C_{\beta})n\) there is at least one particle on \(\mathbb{T}^d\) that is not informed.
By using that the particles on \(\mathbb{T}^d\) form a Poisson point process with intensity \(\lambda(y)=\lambda_0\mu_y\), we have that this probability can be bounded from above by
\[
\lambda_0p_n\sum_{y\in\mathbb{T}^d}\mu_y,
\]
where \(p_n\) is an upper bound for the probability that a single particle is not informed by time \((C_T+C_{\beta})n\) on the torus of side length \(n\), uniformly on the initial location of the particle.
We now proceed to find the bound \(p_n\).
Let \(F\) be the event that a particle located somewhere on the torus does not become informed during \([C_Tn,(C_T+C_{\beta})n]\). Note that the probability that a particle does not get informed by time \((C_T+C_{\beta})n\) is smaller than the probability of \(F\), so \(p_n\leq\mathbb{P}[F]\). Let \(t\in(0,C_{\beta}n)\) be a time step we will fix later and consider the time interval \([C_Tn,(C_T+C_{\beta})n]\) split into subintervals of length \(t\), i.e.\ let the interval be split into subintervals of the form \([C_Tn+kt,C_Tn+(k+1)t]\) for \(k\in\{0,1,\dots,\lfloor C_{\beta}n/t\rfloor-1\}\). Let \(F_k\) denote the event that a particle located somewhere on the torus does not become informed during the time interval \([C_Tn+kt,C_Tn+(k+1)t]\). We then have that
\[
\mathbb{P}[F]\leq\mathbb{P}[F_0\cap F_1\cap\dots\cap F_{\lfloor C_{\beta}n/t\rfloor-1}].
\]
Tessellate \(\mathbb{T}^d\) into cubes \((Q_i)_i\) of side length \(C_{\ell}\log^3(n)\), indexed by \(i\). Let \(D_k\) be the event that at time \(C_Tn+kt\) there is at least one informed particle in every cube \(Q_i\). We can then write
\begin{equation}
\mathbb{P}[F_0\cap F_1\cap\dots\cap F_{\lfloor C_{\beta}n/t\rfloor-1}]
\leq \mathbb{P}\left[\bigcap_{k=0}^{\lfloor C_{\beta}n/t\rfloor-1}\left(F_{k}\cap D_{k}\right)\right]
+\mathbb{P}\left[\bigcup_{k=0}^{\lfloor C_{\beta}n/t\rfloor-1}D^\mathsf{c}_{k}\right]\label{eq:density_noDensity}.
\end{equation}
To bound the second term, we apply Theorem \ref{prop:phase1density}, which gives that there is at least one informed particle in every cube \(Q_i\) of side length \(C_{\ell}\log^3(n)\) for all times \(t\in[C_Tn,(C_T+C_{\beta})n]\) with high probability. Therefore, it holds that
\begin{equation}\label{eq:densityBound}
\mathbb{P}\left[\bigcup_{k=0}^{\lfloor C_{\beta}n/t\rfloor-1}D^\mathsf{c}_{k}\right]= n^{-\omega(1)}.
\end{equation}
We now focus on the first term of (\ref{eq:density_noDensity}). By rearranging the expression inside the probability and using the chain rule, we have that
\[
\mathbb{P}\left[\left(\bigcap_{k=0}^{\lfloor C_{\beta}n/t\rfloor-1}F_{k}\right)\cap\left(\bigcap_{k=0}^{\lfloor C_{\beta}n/t\rfloor-1}D_{k}\right)\right]\leq \mathbb{P}[F_{0}\cap D_0]\prod_{k=1}^{\lfloor C_{\beta}n/t\rfloor-1}\mathbb{P}\left[F_{k}\cap D_{k}\;\Big|\;\bigcap_{j<k}F_{j}\cap D_{j}\right].
\]
In order to bound the terms \(\mathbb{P}\left[F_{k}\cap D_{k}\;\middle|\;\bigcap_{j<k}F_{j}\cap D_{j}\right]\), first note that
\begin{align*}
\mathbb{P}\left[F_{k}\cap D_{k}\;\Big|\;\bigcap_{j<k}F_{j}\cap D_{j}\right]&\leq\mathbb{P}\left[F_{k}\;\Big|\;D_{k}\cap\bigcap_{j<k}F_{j}\cap D_{j}\right]\mathbb{P}\left[D_{k}\;\Big|\;\bigcap_{j<k}F_{j}\cap D_{j}\right]\\
&\leq \mathbb{P}\left[F_{k}\;\Big|\;D_{k}\cap\bigcap_{j<k}F_{j}\cap D_{j}\right],
\end{align*}
and similarly,
$
\mathbb{P}[F_{0}\cap D_0]=\mathbb{P}[F_{0}\;|\; D_0]\mathbb{P}[D_0]\leq \mathbb{P}[F_{0}\;|\; D_0].
$
Next, we show a bound for \(\mathbb{P}\left[F_{k}\;\Big|\;D_{k}\cap\bigcap_{j<k}F_{j}\cap D_{j}\right]\) that holds uniformly on all configurations for which \(D_k\) holds.
We do this by applying Theorem \ref{thrm:mixing} to find a uniform bound on the probability of a particle remaining uninformed, given there is a density of informed particles on the torus \(\mathbb{T}^d\) at the beginning of the time interval we consider. More precisely, we set the terms of Theorem \ref{thrm:mixing} as follows, where we mark them with a bar to help distinguish them from other terms in this proof. Let
\(\bar K=n\),
\(\bar \ell=C_{\ell}\log^3(n)\), and
\(\bar \epsilon=\frac{1}{2}\). Let
\(\bar \Delta=C_{\Theta}\log^8(n)\), where \(C_{\Theta}\) is a constant sufficiently large for \(\bar \Delta\) to satisfy the conditions of Theorem \ref{thrm:mixing} for all \(n\). We fix the time step \(t\) to be equal to \(\bar \Delta\) and let
\(\bar K'=n-C_{\bar\epsilon}\sqrt{\bar\Delta}\), where \(C_{\bar\epsilon}=c_1\bar\epsilon^{-1/d}\).
We now have by the definition of \(D_k\) for every \(k\in\{0,1,\dots,\lfloor C_{\beta}n/\bar\Delta\rfloor-1\}\) that at time \(C_Tn+kt\) there is at least one informed particle in every subcube \(Q_i\), so there are at least
\[
\frac{1}{C_MdC_{\ell}^d\log^{3d}(n)}\sum_{y\in Q_i}\mu_y
\]
informed particles in every cube. We set the parameter \(\bar\beta\) from Theorem \ref{thrm:mixing} to be \(\bar\beta=\frac{1}{C_MdC_{\ell}^d\log^{3d}(n)}\) and apply the theorem. This gives us that after the informed particles move around for time \(\bar\Delta\), they stochastically dominate a Poisson point process of intensity \(\bar\zeta(y)=\frac{1}{2}\frac{1}{C_MdC_{\ell}^{d}\log^{3d}(n)}\mu_y\) inside the cube of side length \(\bar K'\). Using (\ref{eq:mu_bounds_new}), we have that this coupling fails with probability at most
\begin{equation}\label{eq:couplingFails}
\sum_{y\in Q_{K'}}\exp\left\{-C\frac{1}{4}\frac{1}{C_MdC_{\ell}^{d}\log^{3d}(n)}C_{\Theta}^{d/2}\log^{4d}(n)\mu_y\right\}\leq n^d\exp\{-C_1\log^d(n)\},
\end{equation}
where \(C\) is the constant from Theorem \ref{thrm:mixing} and \(C_1\) is some constant that depends on \(d\).
Note that this bound only depends on the size of \(Q_{K'}\) and as such is independent of the site the cube is centered around.
Next, if \(D_k\) holds and the coupling succeeds, the number of informed particles at a given site \(y\) of the torus at time \(C_Tn+(k+1)t\) stochastically dominates a Poisson random variable of intensity \(\frac{1}{2C_MdC_{\ell}^{d}\log^{3d}(n)}\mu_y\). Since the probability that a particle is not informed during the interval \([C_Tn+kt,C_Tn+(k+1)t]\) is at most the probability that it is not informed at the end of the interval, we have that \(\mathbb{P}[F_k\;|\;\{\textrm{coupling succeeds}\}\cap D_k]\) can be bounded by the probability that at the end of the time interval, there are no informed particles at the location of the particle we are considering. Using (\ref{eq:mu_bounds_new}) to bound \(\mu_y\), we have for some constant \(C_2\) that \(\mathbb{P}[F_k\;|\;\{\textrm{coupling succeeds}\}\cap D_k]\) is at most the probability that a Poisson random variable with intensity \(\frac{C_2}{\log^{3d}(n)}\) is \(0\), i.e.
\begin{equation}\label{eq:noParticle}
\mathbb{P}[F_k\;|\;\{\textrm{coupling succeeds}\}\cap D_k]\leq\exp\left\{-\frac{C_2}{\log^{3d}(n)}\right\}.
\end{equation}
This bound holds uniformly across all sites of the torus where the particle might be located and across all configurations of particles for which \(D_k\) holds.
Combining (\ref{eq:couplingFails}) and (\ref{eq:noParticle}) we therefore have for all \(k\in\{0,1,\dots,\lfloor C_{\beta}n/t\rfloor-1\}\) that
\begin{equation*}
\mathbb{P}\left[F_{k}\;\Big|\;D_{k}\cap\bigcap_{j<k}F_{j}\cap D_{j}\right]
\leq n^d\exp\{-C_1\log^d(n)\}+\exp\left\{-\tfrac{C_2}{\log^{3d}(n)}\right\}.
\end{equation*}
Using the definition of \(t\), the bound from (\ref{eq:densityBound}) and applying the above bound for all \(k\in\{0,1,\dots,\lfloor C_{\beta}n/t\rfloor-1\}\), we have that \(\mathbb{P}[F_0\cap F_1\cap\dots\cap F_{\lfloor C_{\beta}n/t\rfloor-1}]\) from (\ref{eq:density_noDensity}) is smaller than
\begin{equation*}
\left(n^d\exp\{-C_1\log^d(n)\}\right)^{C_{\beta}n/(C_{\Theta}\log^8(n))}+\exp\left\{-\tfrac{C_2C_{\beta}n}{C_{\Theta}\log^{3d+8}(n)}\right\}+n^{-\omega(1)}.
\end{equation*}
Using that \(p_n\leq \mathbb{P}[F_0\cap F_1\cap\dots\cap F_{\lfloor C_{\beta}n/t\rfloor-1}]\)
and \(\mu_y
\leq C_Md\) by (\ref{eq:mu_bounds_new}), we get that the probability that there exists a particle that has not been informed by time \((C_T+C_{\beta})n\) is at most
\begin{equation*}
C_Md\lambda_0n^d\bigg(\left(n^d\exp\{-C_1\log^d(n)\}\right)^{C_{\beta}n/(C_{\Theta}\log^8(n))}+\exp\left\{-\tfrac{C_2C_{\beta}n}{C_{\Theta}\log^{3d+8}(n)}\right\}+n^{-\omega(1)}\bigg).
\end{equation*}
Since the above is $n^{-\omega(1)}$, the proof is complete.
\end{proof}
\section{Conclusion}
We have established a tight bound on the flooding time (up to constant factors) for the spread of information between random walk particles on the discrete torus of size \(n\), equipped with i.i.d., uniformly elliptic conductances.
To prove this, we develop a framework for controlling dependences: given any increasing, local event that is sufficiently likely,
one can find a Lipschitz surface and a Lipschitz net through space-time on which this event holds.
We believe this result can be applied to analyze other processes and algorithms on systems of random walk particles.
We also believe that this framework can be adapted to other types of particle systems, for example particles that do not move independently of one another but nonetheless obey some form of
local mixing.
\section{Introduction}
The nearly macroscopic size of highly excited Rydberg atoms has inspired a variety of experiments that explore quantum-classical correspondence. These include excitation of wavepackets with varying degrees of localization~\cite{ten_wolde_observation_1988,yeazell_classical_1989,naudeau_core_1997,campbell_complete_1999,mestayer_realization_2008,dunning_engineering_2009}, creation of a Schr\"{o}dinger cat-like state~\cite{noel_excitation_1996,chen_dynamics_1997}, and studies in combined electric and magnetic fields, where an equivalent classical system would exhibit chaos~\cite{raithel_quasi-landau_1991,iu_diamagnetic_1991,yeazell_observation_1993,Freund_absorption_2002}. The large coupling between neighboring Rydberg states allows pairs of atoms to exchange energy in a dipole-dipole interaction~\cite{gallagher_resonant_1992}. For an ultracold highly-excited sample, many-body effects play an important role in this energy exchange~\cite{anderson_resonant_1998,mourachko_many-body_1998,carroll_many-body_2006}. An excitation blockade resulting from this strong coupling has also been exploited to entangle atoms and build quantum gates~\cite{jaksch_fast_2000,wilk_entanglement_2010,saffman_quantum_2010,maller_rydberg-blockade_2015,saffman_quantum_2016}. The large polarizability of Rydberg atoms makes them useful for precision measurements of electromagnetic fields~\cite{osterwalder_using_1999,carter_electric-field_2012,facon_sensitive_2016} as well as of the quantum state of a nanomechanical oscillator~\cite{sanz-mora_-chip_2017}.
In many experiments, population is spread among several Rydberg states, either during excitation or by subsequent interactions. Understanding the dynamics of these Rydberg systems typically requires accurate measurement of the electron's state distribution. Selective field ionization (SFI) is often used for this purpose \cite{gallagher_field_1977}. In this technique, an electric field ramp is applied to a sample of Rydberg atoms. As the field increases, more tightly bound states are ionized. Therefore, the time-resolved electron signal provides a measure of the distribution of population among Rydberg states, with earlier arrival corresponding to high principal quantum number and later arrival to more tightly bound states. While this simple picture provides a reasonable qualitative understanding of SFI, the details of the field ionization process complicate the signal, often making neighboring states difficult to resolve.
A modification to SFI was recently developed in which the electron is directed through the many Stark states it encounters on the way to ionization, thus controlling the shape of the time-resolved signal~\cite{gregoric_quantum_2017}. This is done by perturbing the electric field ramp with a continuous series of small fluctuations in the electric field. These perturbations manipulate the phase evolution of the Stark states, thus controlling the output amplitudes at each avoided crossing. A genetic algorithm (GA) is used to optimize the perturbation to manipulate the time-resolved signal.
In this work, we present the results of an experiment in which we use this directed field ionization to separate the signals from two nearby states, the $33p_{3/2,|m_j|=1/2}$ and the $34s$, whose time-resolved signals are almost completely indistinguishable when obtained using traditional SFI. Our choice of states is motivated by the $np_{3/2} + np_{3/2}\rightarrow (n+1)s + ns$ dipole-dipole interaction. Since the $34s$ and $33s$ signals are difficult to resolve from the $33p_{3/2}$, this dipole-dipole energy exchange is challenging to measure.
\begin{figure*}
\centering
\includegraphics{path33_34.eps}
\caption{(Color online) Calculated paths to ionization, population ionized from each state, and time-resolved field ionization signals for the unperturbed SFI ramp for $|m_j|=1/2$. The calculation was performed by constructing the time evolution operator using a basis including the Stark states from $n=26$ to $n=36$ with a time resolution of 0.01~ns, following the method previously described in~\cite{feynman_quantum_2015}. The paths to ionization for the $34s$ and $33p$ initial states are shown in (a) and (c), respectively. Each line is colored according to the population remaining in that state using the legend in (a). Note that there is very little overlap among the states populated by the $34s$ path and the $33p$ path. This can be seen by following the $32d$ state. At around 70~V/cm, where the $n=30$ and $n=31$ manifolds collide, the $32d$ state is in between the $34s$ and $33p$ states, neither of which couple significantly to the $32d$ until past 200~V/cm. The population ionized from each state in each 50~ns time interval for the initially populated $34s$ and $33p$ states is shown in (b) and (d) respectively, with each line colored by the legend in (b). Note that in (b) and (d) the color refers to the population \textit{leaving} the state, in contrast to (a) and (c) which show the population remaining in each state. Even though the $34s$ and $33p$ paths spread across a different, and nearly non-overlapping, set of states, they ionize at roughly the same fields. This is seen clearly in the calculated time-resolved field ionization signal shown in (e), where the $34s$ (red, solid) and $33p$ (blue, dashed) signals have a significant overlap of 73.2\%.
}
\label{fig:path}
\end{figure*}
Figure~\ref{fig:path}(a) and (c) show the calculated path to ionization using the unperturbed SFI ramp for the $34s$ and $33p$ states, respectively. This field rises to 600~V/cm in 1500~ns, resulting in a slew rate of 0.4~(V/cm)/ns. We will refer to the Stark states by the label of the zero-field state to which they are adiabatically connected. The population in each state is indicated by its color. As the field increases, each state encounters many avoided crossings. This leads to a spreading of population across many states as the ionization threshold is approached, resulting in an ionization signal that is spread out in time.
It is interesting to note that while the populations initially in the $34s$ and $33p$ states both spread across many states during field ionization, there is not much overlap in the sets of states that each populates near threshold. In spite of this, the time-resolved signals for field ionization of the $34s$ and $33p$ states are almost completely overlapped. This can be understood by considering Fig.~\ref{fig:path}(b) and (d), which show how much population has ionized from each state in each 50~ns time interval. Here, the color indicates the population that is ionizing rather than the population remaining. While the states do not overlap, much of the population ionizes over the same range of fields, thus producing the well-overlapped calculated ionization signal in Fig.~\ref{fig:path}(e), which compares favorably to the experimental signals shown in Fig.~\ref{fig:data}(a)~--~(c). In Fig.~\ref{fig:path}(b) and (d) we also see that neighboring states ionize with dramatically different rates. This is due to the relative orientation of the electron wave function and the electric field, providing the GA with opportunities near threshold to control the timing of ionization.
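The propagation scheme behind these calculations (building the time-evolution operator over a truncated basis in short steps, as described in the caption of Fig.~\ref{fig:path}) can be illustrated in miniature with a hypothetical two-level avoided crossing. All parameters below are illustrative, not the experimental Stark-map values.

```python
# Toy two-level version of the basis propagation: a linearly swept avoided
# crossing H(t) = [[a(t), V], [V, -a(t)]] propagated with short, exactly
# exponentiated steps. All parameters are illustrative.
import math

def step(psi, a, c, dt):
    """Apply exp(-i H dt) exactly for H = [[a, c], [c, -a]] (hbar = 1)."""
    w = math.sqrt(a * a + c * c)
    if w == 0.0:
        return psi
    co, si = math.cos(w * dt), math.sin(w * dt) / w
    p0 = (co - 1j * si * a) * psi[0] + (-1j * si * c) * psi[1]
    p1 = (-1j * si * c) * psi[0] + (co + 1j * si * a) * psi[1]
    return [p0, p1]

def sweep(V=0.5, rate=2.0, T=40.0, dt=0.01):
    """Sweep the diagonal splitting through zero; return final populations."""
    psi = [1.0 + 0j, 0j]          # all population starts in basis state 0
    t = -T / 2
    while t < T / 2:
        psi = step(psi, rate * t, V, dt)
        t += dt
    return abs(psi[0]) ** 2, abs(psi[1]) ** 2

p_stay, p_transfer = sweep()
# exact-exponential steps keep the evolution unitary: p_stay + p_transfer = 1
```

Slower sweeps or larger couplings push the passage toward the adiabatic limit, which is qualitatively the knob the field perturbations turn at each real avoided crossing.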
\section{Experiment}
To experimentally achieve state separation of the $34s$ and $33p$ states, we first confine about a million rubidium-85 atoms in a magneto-optical trap (MOT), which cools the atoms to approximately 200~$\mu$K. Homemade external cavity diode lasers of wavelengths 780~nm, 776~nm, and 1022~nm are used to excite the trapped atoms to the $34s$ state~\cite{fahey_excitation_2011}; for the $33p$ state, a 1270~nm laser is used in place of the 1022~nm laser~\cite{fahey_imaging_2015}. To alternate between exciting the $34s$ and $33p$ states on subsequent shots of the experiment, we tune the 1270~nm and 1022~nm lasers in and out of resonance by adjusting the acoustic frequency of two acousto-optic modulators.
After excitation, the Rydberg atoms are field ionized and the time-resolved ionization signal is recorded. This experimental cycle is repeated at a 60~Hz rate.
The electric field experienced by the atoms is controlled by three coaxial cylindrical electrodes as shown in Fig.~\ref{fig:exp}. Two concentric cylinders on one end of the trap can be independently biased to control the homogeneity of the electric field. A sufficiently homogeneous field was achieved when equal voltages were applied to these two cylinders. A static DC voltage is applied to these cylinders, producing an electric field of 13~V/cm which allows us to resolve the different $\left|m_j\right|$ sublevels so that we can selectively excite the $33p_{3/2,|m_j|=1/2}$ state. The ionizing field ramp is also applied to these electrodes using a trigger transformer circuit controlled by a MOSFET switch. The perturbing electric field to be optimized by the GA is applied to a third electrode on the opposite end of the trap. The arbitrary waveform generator used to produce this perturbing field has 14-bit resolution, a sample rate of 1~GS/s, and can switch from $+10$~V to $-10$~V (corresponding to electric fields of $\pm3.8$~V/cm) in 3.3~ns. This results in possible slew rates for the combined electric field ranging from $-1.6$~(V/cm)/ns to $+3.0$~(V/cm)/ns.
\begin{figure}
\centering
\includegraphics{exp8.eps}
\caption{(Color online) Electrode geometry. The MOT sits on the axis of a set of coaxial cylinders. Two cylinders, labeled inner and outer, are on one side of the MOT and a third, labeled detector-side, is on the opposite side. The field ionization ramp is applied to the inner and outer cylinders and the perturbing field is applied to the detector-side cylinder.
}
\label{fig:exp}
\end{figure}
The GA starts by generating a population of 120 random electric field perturbations. Each pulse is assigned a fitness score based on how well it achieves the desired outcome: either by moving the arrival times of the $34s$ and $33p$ ionization signals in opposite directions or by reducing the overlap of the two ionization signals. The next generation is populated with the eight best-scoring members of the population along with offspring that are created by mixing the genes, in this case the field values of the perturbation, from the more successful parents. Tournament selection with a tournament size of four is used to select the parents. Each gene is subjected to a 1\% chance of mutating; if a gene mutates, it is reset to a random value. For a fuller description of our algorithm, see~\cite{gregoric_quantum_2017}.
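The loop just described can be sketched as follows. The population size, elitism, tournament size, and mutation rate mirror the text, but the fitness function here is a stand-in (the real score comes from measured ionization signals), so this is only a schematic, not the authors' code.

```python
# Schematic of the GA loop: 120 individuals, 8 elites, tournament size 4,
# 1% per-gene mutation. The fitness function is a placeholder surrogate.
import random

POP, ELITE, TOURN, P_MUT, GENES = 120, 8, 4, 0.01, 50

def fitness(genes):
    # stand-in score; in the experiment a perturbation is scored from the
    # measured time-resolved ionization signals
    return -sum((g - 0.5) ** 2 for g in genes)

def tournament(pop):
    return max(random.sample(pop, TOURN), key=fitness)

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genes):
    return [random.random() if random.random() < P_MUT else g for g in genes]

def evolve(generations=20, seed=0):
    random.seed(seed)
    pop = [[random.random() for _ in range(GENES)] for _ in range(POP)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))
        parents = pop  # selection draws from the whole ranked population
        pop = pop[:ELITE] + [
            mutate(crossover(tournament(parents), tournament(parents)))
            for _ in range(POP - ELITE)]
    pop.sort(key=fitness, reverse=True)
    history.append(fitness(pop[0]))
    return pop[0], history

best, history = evolve()
```

Because the elites are carried over unchanged, the best fitness in the population can never decrease from one generation to the next.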
\begin{figure*}
\centering
\includegraphics{20180417_data.eps}
\caption{(Color online) GA scans to separate the $34s$ (red, solid) and $33p$ (blue, dashed) states. The unperturbed traces are shown in (a)~--~(c), while the best results from the last generation are shown in (d)~--~(f). In (g)~--~(i), the overlap between the two states is plotted vs.~generation for each member of the GA population; the large, open circles represent the unperturbed overlap, while the large, filled circles show the minimum overlap achieved in the last generation. The left and center columns correspond to GAs using the weighted shift and minimize overlap fitness scores, respectively. For the right column, a weighted shift fitness score was used for the first 40 generations before switching to the minimize overlap fitness score for the remainder of the optimization.
}
\label{fig:data}
\end{figure*}
The performance of a GA is highly dependent on how the fitness score is calculated, since this determines which genetic material is passed down to future generations. We have tested several different methods for calculating fitness scores in the case of two-state separation. One example is a ``weighted shift'' fitness score, in which the normalized signals from each state are multiplied by a linear weighting function. To shift the $34s$~state to the left or to earlier electron arrival times, this linear weight has a value of one on the left of the total signal gate and a value of zero on the right side of the gate. For shifting the $33p$~state to the right or to later electron arrival times, the weighting function is reflected horizontally (rising linearly from zero on the left side of the gate to one on the right side). The total fitness score is the geometric mean of the weighted signals.
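In vector form, the weighted shift score might be computed as below; the signal traces are made up for illustration, and the exact gating and normalization used in the experiment may differ.

```python
# Sketch of the "weighted shift" score: each normalized signal is weighted
# by a linear ramp (falling ramp for the state pushed early, rising ramp for
# the state pushed late), and the two weighted sums are combined by a
# geometric mean. Signal vectors here are illustrative only.
import math

def weighted_shift_score(sig_s, sig_p):
    n = len(sig_s)
    w_left = [1 - i / (n - 1) for i in range(n)]   # rewards early arrival (34s)
    w_right = [i / (n - 1) for i in range(n)]      # rewards late arrival (33p)
    score_s = sum(w * x for w, x in zip(w_left, sig_s))
    score_p = sum(w * x for w, x in zip(w_right, sig_p))
    return math.sqrt(score_s * score_p)

# a trace concentrated early scores well against the falling weight
early = [0.4, 0.3, 0.2, 0.1, 0.0]
late = [0.0, 0.1, 0.2, 0.3, 0.4]
well_separated = weighted_shift_score(early, late)
swapped = weighted_shift_score(late, early)
```

As expected, the score is higher when the $34s$-like trace sits early and the $33p$-like trace sits late than when they are swapped.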
We have used this fitness score to separate the $34s$~and $33p$~states, as shown in the left column of Fig.~\ref{fig:data}. The initial (unperturbed) signals for the~$34s$ state (red, solid) and the~$33p$ state (blue, dashed) are shown in Fig.~\ref{fig:data}(a), while the traces for the best result in the final generation are shown in Fig.~\ref{fig:data}(d). For these traces, the total area under each curve is normalized to one. The overlap between the $34s$~and $33p$~states is plotted in Fig.~\ref{fig:data}(g) as a function of generation for each perturbation tested. The large open and closed circles mark the overlap for the unperturbed case and the best result in the final generation, respectively. Using the weighted shift fitness score, we were able to decrease the overlap between the $34s$~and $33p$~states from~$77.0\%$ to~$37.9\%$.
While the weighted shift fitness score was able to significantly reduce the state overlap, we have made more progress by directly including the overlap into the fitness score calculation. Specifically, we define this fitness score as the difference between one and the overlap integral of the $34s$~and $33p$~signals, so that a smaller overlap corresponds to higher fitness. We calculate the overlap integral of two discrete signal traces by taking the dot-product. Our ``minimize overlap'' fitness score is normalized by dividing the overlap integral by the norms of the signal vectors. The results of running a GA using this fitness score are shown in the center column of Fig.~\ref{fig:data}, following the same conventions used for the left column. Compared to the weighted shift, the minimize overlap fitness score performs significantly better, decreasing the overlap from~$76.8\%$ to~$22.8\%$ over the course of the GA.
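A minimal sketch of this normalized-overlap computation, on illustrative vectors rather than measured traces:

```python
# "Minimize overlap" score: one minus the dot-product of the two discrete
# signal traces, normalized by the norms of the signal vectors.
import math

def overlap(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def minimize_overlap_score(a, b):
    return 1.0 - overlap(a, b)

identical = [0.2, 0.5, 0.3]        # identical traces: overlap 1, score 0
disjoint_a = [1.0, 0.0, 0.0]       # non-overlapping traces: overlap 0, score 1
disjoint_b = [0.0, 0.0, 1.0]
```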
One potential issue with the minimize overlap fitness score is that it can result in signals which alternate in time between the~$34s$ and~$33p$ states, such as the interleaved signals in Fig.~\ref{fig:data}(e). To avoid this, we have also tested a hybrid ``shift then overlap'' GA, which initially uses the weighted shift fitness score for a fixed number of generations before switching to the minimize overlap fitness score for the remainder of the optimization. This hybrid GA, shown in the right column of Fig.~\ref{fig:data}, outperforms both of the previous datasets, decreasing the overlap from~$76.6\%$ to~$15.4\%$ while avoiding interleaved signals. Note the repeated pattern of decrease and then plateau in the overlap for the hybrid GA in Fig.~\ref{fig:data}(i); this is a result of switching the fitness score from weighted shift to minimize overlap at generation 40.
The unperturbed electric field ramp is shown along with one of the optimized ramps in Fig.~\ref{fig:pulse}. While the perturbations are small compared to the size of the ramp, they are sufficient to control the phase along the path to ionization through many avoided crossings. Given the complexity of the Stark map along with the uncertainty in completely characterizing the experimental conditions, it is difficult to correlate the individual fluctuations in the optimized field with particular avoided crossings.
\begin{figure}
\centering
\includegraphics{pulsePlot.eps}
\caption{(Color online) (a) The unperturbed electric field ramp (dashed blue) and the optimized ramp (solid red) for the fitness score shown in Fig.~\ref{fig:data}(i). Since the perturbations are quite small on the scale of the whole ramp, a typical region is shown in (b). The perturbations extend through ionization, which is completed by about 400~V/cm for the $34s$ and $33p$ states studied here, and each perturbation is a few V/cm.
}
\label{fig:pulse}
\end{figure}
One way to gain some physical insight into the GA optimization process is to probe different regions of the Stark map by altering the duration of the field perturbations between otherwise identical GA scans. We have explored this by taking several datasets using the weighted shift fitness score. In each dataset, we begin the perturbation at an electric field of 9~V/cm which is before both the~$34s$ and~$33p$ states hit the high-$\ell$ manifolds. The perturbation end time is varied between datasets. For GA runs where the perturbation ends before either state hits a manifold, no change is observed in the ionization signals, as expected. If the perturbation is extended past 17~V/cm, corresponding to the point at which the $34s$~state hits the $n=31$ manifold, the GA is then able to shift the $34s$~state signal left. However, the GA is not able to shift the $33p$~state to the right unless the perturbation is extended nearly to the ionization threshold, well past the point at which the $33p$~state hits the $n=30$ manifold at 45~V/cm. This matches the computational analysis of the paths to ionization presented in Fig.~\ref{fig:path}. The~$34s$ state takes a mix of the adiabatic and diabatic pathways through the first few avoided crossings with the $n=31$ manifold, allowing our perturbations to shift this behavior toward either extreme. For the $33p$ state, however, the pathway is strongly adiabatic until $\approx$200~V/cm. As a result, our perturbations are not large enough to significantly shift the $33p$ state's ionization pathway toward the diabatic regime during these early crossings.
\section{Discussion}
A significant fraction of the success of the GA is due to the details of the ionization process near threshold. The addition of the ionizing ramp potential to the Coulomb potential creates a saddle point in the total potential. For electrons of sufficient energy, ionization is classically allowed at the saddle point, while electrons of lower energies can tunnel to ionization~\cite{littman_tunneling_1976}. Each state can be characterized by the spatial distribution of its wavefunction. Higher energy states, in which the electron is on the opposite side of the atom from the saddle point, are harder to ionize and typically referred to as ``blue'' states. Lower energy states, in which the electron is localized to the same side of the atom as the saddle point, are easier to ionize and typically referred to as ``red'' states.
In nonhydrogenic atoms like rubidium, the red and blue states are coupled by their interaction with the core. Rather than crossing as they do in hydrogen, red and blue states from neighboring $n$ will exhibit avoided crossings~\cite{gallagher_rydberg_1994}. In the region of the Stark map near ionization, coupled states can have dramatically different ionization rates; our calculated ionization rates (using the method of~\cite{damburg_hydrogen_1979}) show that it is easy to find examples of neighboring Stark states with ionization rates differing by more than five orders of magnitude. Adjacent states in Na around $n=13$ have been shown to reach a threshold ionization rate of $10^7$~s$^{-1}$ at fields differing by more than 10~kV/cm~\cite{littman_tunneling_1976}. The ionization rates can also change due to interference between the decay channels at an avoided crossing, an effect studied in the photoionization peaks of Rb~\cite{feneuille_field-induced_1982} and line narrowing in the photoionization spectrum of Na~\cite{liu_interference_1985}. These widely varying ionization rates provide an ideal landscape for the GA, which can choose perturbations that move population into either rapidly- or slowly-ionizing states, depending on whether it is desired to move the ionization signal earlier or later in time.
\begin{figure*}
\centering
\includegraphics{path33_34_ev.eps}
\caption{(Color online) Calculated paths to ionization, population ionized from each state, and time-resolved field ionization signals for an evolved ramp for $|m_j| = 1/2$. The ramp was evolved using the same hybrid fitness score as in Fig.~\ref{fig:data}(i), but limited to only 30 total generations due to computational time constraints. The calculation was performed in the same way as for the unevolved paths shown in Fig.~\ref{fig:path}. The evolved paths to ionization for the $34s$ and $33p$ initial states are shown in (a) and (c), respectively. Each line is colored according to the population remaining in that state using the legend in (a), which is the same scale as used in Fig.~\ref{fig:path}(a). The population ionized from each state in each 50 ns time interval for the initially populated 34s and 33p states is shown in (b) and (d) respectively, with each line colored by the legend in (b), which is the same scale as used in Fig.~\ref{fig:path}(b). Note that in (b) and (d) the color refers to the population leaving the state, in contrast to (a) and (c) which show the population remaining in each state. In comparing these evolved paths to Fig.~\ref{fig:path}, it is clear that the GA has made some effort to push the amplitudes to higher and generally earlier ionizing states for the $34s$ and to lower and generally later ionizing states for the $33p$. However, the local variation in ionization rates among neighboring states is as important as the general trend of higher ionization rates at higher energies. Even though the set of states from which the $33p$ and $34s$ finally ionize do not significantly overlap, there is still an overlap in the field ionization signals as seen in (e). The simulated GA successfully reduces the overlap from 73.2\% in Fig.~\ref{fig:path}(e) to 41.0\%.
}
\label{fig:evpath}
\end{figure*}
We have also simulated the GA by repeating the same calculation as shown in Fig.~\ref{fig:path} in parallel for a population of 48 electric field ramps over 30 generations. The final evolved paths to ionization are shown in Fig.~\ref{fig:evpath} along with the simulated time-resolved signal. The simulated GA reduced the overlap of the $34s$ and $33p$ states from 73.2\% in Fig.~\ref{fig:path}(e) to 41.0\% in Fig.~\ref{fig:evpath}(e). While the states from which the electron amplitude ionizes do not overlap, as shown in Fig.~\ref{fig:evpath}(b) and Fig.~\ref{fig:evpath}(d), there still remains some overlap in the time-resolved signal. This is because the local variation in ionization rates among neighboring states is significant compared to the general trend of higher ionization rates at higher energies.
Our simulations show that the GA transfers amplitude between slow and fast ionizing states near threshold. We have run simulations to compare perturbations that end much earlier than the ionization region to perturbations that are only present around the ionization region. Similar to the experimental datasets with varying perturbation length discussed above, the perturbations that are present only around the ionization region perform better. We have determined that about 2/3 of the improvement in fitness score is due to the portion of the perturbations just before and during ionization.
While our model is successful in accounting for many of the observed experimental features and yields information not accessible in the experiment, it cannot be used for more than general guidance for three primary reasons. First, the model is incomplete in the sense that its limited basis includes only bound states. We calculate ionization rates using a semi-empirical formula rather than directly from the couplings to free states. In Feynman \textit{et al}., essentially the same model was unable to correctly account for the phase evolution near ionization~\cite{feynman_quantum_2015}. Second, a significant advantage of the GA is that it automatically takes into account uncharacterized experimental conditions, such as electric and magnetic field inhomogeneity. Both the model and the experiment reveal that small changes in the electric field can have large effects. Since it is not feasible to measure all of the particular experimental conditions, the model cannot calculate a path to ionization that precisely captures the experiment. Finally, the simulated optimization of Fig.~\ref{fig:evpath} takes about 10 days to run on a modern supercomputer. The experiment is far more efficient, completing a similar optimization in only about one hour.
\section{Conclusion}
We have demonstrated the ability of our GA to separate the overlapped ionization signals from the~$34s$ and~$33p$ states of rubidium. By changing the fitness score calculation partway through the GA, we have been able to decrease the state overlap while avoiding interleaved signals. This technique will be useful in experiments requiring differentiation between the~$34s$ and~$33p$ states. Specifically, we plan to use the results of this work to study the dipole-dipole interaction \mbox{$np + np \rightarrow ns + (n+1)s$.} It should be straightforward to use this technique to separate the signals from other states whose ionization signals overlap when traditional SFI is used. This optimization technique may be useful for other goals as well. For example, the production of high-brightness, monochromatic electron beams using field ionized Rydberg atoms may benefit from the addition of an optimized perturbation to the ionizing field \cite{kime_high-flux_2013,mcculloch_field_2017,moufarej_forced_2017}.
This work was supported by the National Science Foundation under Grants No. 1607335 and No. 1607377.
\section{Introduction}
Counting problems in graphs can be very difficult, namely $\#P$-hard in
the general case, even for simple objects such as trees and independent sets.
Research on graph classes has been motivated by such ``hard'' decision and
optimization problems, and restricting the input to particular graph classes
has led to numerous polynomial-time algorithms.
Despite this, only a few useful algorithms for counting problems exist, and these are relatively recent.\\
In this paper, we focus on maximal matching counting and path
matching (linear forest) counting problems.
Matching counting and all extensions considered in this paper have been
proved $\#P$-complete in the general case.
Some sparse graph classes such as planar graphs or graphs of bounded
tree-width allow polynomial-time algorithms for perfect matching counting
(see \cite{bib11} and \cite{bib1}); on the negative side, Valiant,
when introducing the class $\#P$, proved that counting perfect matchings
as well as general matchings in bipartite graphs was
$\#P$-complete \cite{bib17,bib18}.
Valiant's proof concerning matchings has since been extended to 3-regular bipartite graphs \cite{bib8}, bipartite graphs of maximum degree 4 and bipartite planar graphs of maximum degree 6 \cite{bib16}.
The problem of counting perfect matchings in chordal and chordal bipartite graphs is also
$\#P$-complete \cite{bib14}, but good results on independent sets \cite{bib13}
suggest that the chordal structure could nevertheless be exploited
for matching counting. This led us to focus on a related graph class,
the $(5,2)$-crossing-chordal graphs.
We especially make use of the bounded clique-width of this graph class.
Courcelle et al.~introduced clique-width in \cite{bib5} as a generalization
of tree-width, and it attracted attention mainly for two reasons.
On the one hand, as with tree-width, putting a bound on
the clique-width makes many difficult problems solvable in polynomial time
(see for example \cite{bib6}).
On the other hand, this class contains dense graphs as well as sparse graphs,
which makes for more general results.
Makowsky et al.~already proved as a consequence of a result in \cite{bib12}
that matching counting on graphs of bounded clique-width is polynomial.
In this paper, we will extend this result by adapting their method to
maximal matchings and path matchings.
Our algorithms are polynomial in the graph size but exponential in the
clique-width $k$, i.e., they run in $O(n^{poly(k)})$ time.
It might be hard to develop a fixed-parameter tractable algorithm, i.e., an
$O(c^{poly(k)}poly(n))$ time algorithm, since many graph
problems, e.g.\ vertex coloring, require $O(n^{poly(k)})$ time
unless FPT $=$ $W[1]$ \cite{bib9a}.
The existing matching counting algorithms cannot be used to count
maximal matchings directly.
The algorithms in \cite{bib12} classify the matchings of local subgraphs according to their sizes and the colors of their endpoints, and then obtain information about larger graphs by merging the matchings.
However, each classified group may then contain both matchings that are included in some maximal matching and matchings that are not included in any maximal matching.
Indeed, it seems difficult to determine the number of matchings included in some maximal matching using only their sizes and endpoints.
In this paper, we introduce matching-cover pairs for this task. When we restrict a maximal matching to a subgraph, it can be decomposed into the matching edges belonging to the subgraph and the end vertices of matching edges not included in the subgraph.
By maximality, these end vertices cover every edge of the subgraph that is not already covered by a matching edge.
Thus, we count such pairs of a matching and a vertex cover according to their sizes and colors, and obtain a polynomial-time algorithm for the problem.
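The objects involved can be cross-checked by brute force on small graphs. The sketch below enumerates edge subsets and uses exactly the observation above (a matching is maximal if and only if its matched endpoints cover every edge) to count maximal matchings. It runs in exponential time and serves only as a reference implementation, not as the clique-width algorithm of this paper.

```python
# Brute-force count of maximal matchings, for cross-checking on small graphs.
from itertools import combinations

def is_matching(edges):
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def count_maximal_matchings(edges):
    count = 0
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            if not is_matching(sub):
                continue
            covered = {x for e in sub for x in e}
            # maximality <=> the matched endpoints form a vertex cover
            if all(u in covered or v in covered for u, v in edges):
                count += 1
    return count

# path a-b-c-d: maximal matchings are {ab, cd} and {bc}
path4 = [("a", "b"), ("b", "c"), ("c", "d")]
# triangle: each single edge is a maximal matching
triangle = [("a", "b"), ("b", "c"), ("c", "a")]
```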
For the problem of counting paths and path matchings, we need some way to handle
the connectivity of edge sets.
Connectivity is not easy to handle; for example,
checking for the existence of a Hamiltonian path is equivalent to checking whether
the number of paths of length $n-1$ is positive.
Gimenez et al. devised an algorithm based on Tutte polynomial computation
to count the number of forests in bounded-clique-width graphs in
sub-exponential time, running in $2^{O(n^c)}$ time for constant
$c<1$ \cite{GmHlNy05}.
We use the properties of bounded-clique-width graphs so that we can
classify the path matchings in a polynomial number of groups of
equivalent path matchings, and thereby compute the number of paths
and path matchings in polynomial time.
\section{Clique Width}
We shall introduce clique-width on undirected, non-empty labeled graphs by a construction method. Let $G_i$ be the subgraph of vertices labeled $i$ in a graph $G$. We define the singleton $S_i$ as the labeled graph with one vertex of label $i$ and no edge, and the following construction operations:
\begin{itemize}
\renewcommand{\labelitemi}{-}
\item Renaming : $\rho_{i\rightarrow j}(G)$ is $G$ where all labels $i$ are replaced by labels $j$;
\item Disjoint union : $(V_1,E_1)\oplus(V_2,E_2) = (V_1\cup V_2,E_1\cup E_2)$;
\item Edge creation : $\eta_{i,j}((V,E)) = (V,E\cup \{(v_1,v_2)\ |\ v_1 \in G_i, v_2\in G_j\})$.
\end{itemize}
The class of graphs with clique-width $\leq k$ is the smallest class containing the singletons $S_i$, closed under $\rho_{i\rightarrow j}, \oplus$ and $\eta_{i,j}$ ($1\leq i,j \leq k$). In other words, the {\em clique-width} of a graph $G$, denoted as $cwd (G)$, is the minimal number of labels necessary to construct $G$ by using singletons and renaming, disjoint union and edge creation operations. \\
For an unlabeled graph $G$, we define its clique-width by labeling all vertices with label 1. This is necessarily the best labeling, since any labeling can be renamed to a monochromatic labeling. Note that the clique-width of a graph of order $n$ is at most $n$.
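As an illustration, the three construction operations can be sketched in a few lines; the graph representation (a label map plus an edge set) and the vertex names below are our own choices, not from the paper:

```python
def singleton(v, i):
    """Labeled graph with one vertex v of label i and no edges."""
    return ({v: i}, set())

def rename(g, i, j):
    """rho_{i->j}: replace every label i by label j."""
    labels, edges = g
    return ({v: (j if l == i else l) for v, l in labels.items()}, set(edges))

def union(g1, g2):
    """Disjoint union; the vertex names are assumed to be distinct."""
    (l1, e1), (l2, e2) = g1, g2
    assert not set(l1) & set(l2), "vertex sets must be disjoint"
    return ({**l1, **l2}, e1 | e2)

def add_edges(g, i, j):
    """eta_{i,j}: connect every label-i vertex to every label-j vertex."""
    labels, edges = g
    new = {frozenset((u, w)) for u in labels for w in labels
           if u != w and labels[u] == i and labels[w] == j}
    return (dict(labels), edges | new)

# A 2-expression for the triangle K3:
# eta_{1,2}( rho_{2->1}( eta_{1,2}(S_1(a) + S_2(b)) ) + S_2(c) )
k3 = add_edges(
    union(rename(add_edges(union(singleton("a", 1), singleton("b", 2)), 1, 2),
                 2, 1),
          singleton("c", 2)),
    1, 2)
```

The 2-expression built here produces the triangle $K_3$, consistent with the fact that cographs are exactly the graphs of clique-width $\leq 2$.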
$(5,2)$-crossing-chordal graphs are known to have clique-width $\leq 3$ \cite{bib3}
(we recall that a $(5,2)$-crossing-chordal graph is a graph where any cycle of length $\geq 5$ has a pair of crossing diagonals).
Other interesting results include: cographs are exactly the graphs with
$cwd(G)\leq 2$, planar graphs of bounded diameter have bounded
clique-width, and any graph class of treewidth $\leq k$ also has bounded
clique-width $\leq 3\cdot 2^{k-1}$ \cite{bib4}.
A complete review can be found in \cite{bib10}.\\
An {\em $l$-expression} is a term using
$S_i, \rho_{i\rightarrow j}, \eta_{i,j}$ and $\oplus$ (with $i,j \leq l$) that respects the arity of each operation. It can be represented more conveniently in a tree structure, and we can inductively associate the current state of the construction to each node. If $G$ is the graph associated with the root, we say that this term is an $l$-expression for $G$, and it is a certificate that $G$ is of clique-width $\leq l$. An example is given in Fig.1.
\begin{figure}[t]
\centering
\begin{tikzpicture}
[style/.style={circle,draw = black, inner sep=1pt,minimum size=3.5mm},
small/.style={circle,draw = black, inner sep=0.5pt,minimum size=1.5mm}]
\node (1) at (0,3) [style] {};
\node (2) at (0,5) [style] {};
\node (3) at (1,4) [style] {};
\node (4) at (2,3) [style] {};
\node (5) at (2,5) [style] {};
\draw (4) -- (1) -- (2) -- (3) -- (4) -- (5) -- (2) (3) -- (5);
\node (0) at (6,7) [style] {\scriptsize$\eta_{1,3}$};
\node (1) at (6,6) [style] {\scriptsize{$\oplus$}};
\node (10) at (5,5) [style] {\scriptsize$S_3$};
\node (11) at (7,5) [style] {\scriptsize$\eta_{1,2}$};
\node (12) at (7,4) [style] {\scriptsize{$\oplus$}};
\node (120) at (6,3) [style] {\scriptsize{$\oplus$}};
\node (121) at (8,3) [style] {\scriptsize$\rho_{2\to1}$};
\node (1200) at (5,2) [style] {\scriptsize$S_2$};
\node (1201) at (7,2) [style] {\scriptsize$S_2$};
\node (122) at (8,2) [style] {\scriptsize$\eta_{1,2}$};
\node (123) at (8,1) [style] {\scriptsize{$\oplus$}};
\node (1230) at (7,0) [style] {\scriptsize$S_1$};
\node (1231) at (9,0) [style] {\scriptsize$S_2$};
\draw (0) -- (1) -- (10) (1) -- (11) -- (12) -- (120) -- (1200) (120) -- (1201) (12) -- (121) -- (122) -- (123) -- (1230) (123) -- (1231);
\node (1) at (9,6.5) [small] {\tiny3};
\node (2) at (9,7.5) [small] {\tiny2};
\node (3) at (9.5,7) [small] {\tiny1};
\node (4) at (10,6.5) [small] {\tiny2};
\node (5) at (10,7.5) [small] {\tiny1};
\draw (4) -- (1) -- (2) -- (3) -- (4) -- (5) -- (2) (3) -- (5);
\draw [->, thick, >=stealth] (8.4,7) -- (6.7, 7);
\node (2) at (9,5.5) [small] {\tiny2};
\node (3) at (9.5,5) [small] {\tiny1};
\node (4) at (10,4.5) [small] {\tiny2};
\node (5) at (10,5.5) [small] {\tiny1};
\draw (2) -- (3) -- (4) -- (5) -- (2) (3) -- (5);
\draw [->, thick, >=stealth] (9,5) -- (7.6, 5);
\node (1) at (9.5,2.75) [small] {\tiny1};
\node (2) at (10,3.25) [small] {\tiny1};
\draw (1) -- (2);
\draw [->, thick, >=stealth] (9.3,3) -- (8.5, 3);
\node (1) at (9.5,1.75) [small] {\tiny2};
\node (2) at (10,2.25) [small] {\tiny1};
\draw (1) -- (2);
\draw [->, thick, >=stealth] (9.3,2) -- (8.5, 2);
\node (1) at (4,2.75) [small] {\tiny1};
\node (2) at (4.5,3.25) [small] {\tiny1};
\draw (1) -- (2);
\draw [->, thick, >=stealth] (4.7,3) -- (5.5, 3);
\end{tikzpicture}
\caption {Graph of clique-width 3, and a possible 3-expression tree (the last renaming operations are omitted).}
\end{figure}
Fellows et al.~proved the NP-hardness of computing the minimum clique-width for general graphs \cite{bib9}. The current best approximation is due to Oum and Seymour \cite{bib15}, who provided an algorithm that, given a graph $G$ and an integer $c$ as input, returns in linear time a $2^{3c+2}$-expression for $G$ or certifies that the graph has clique-width larger than $c$.
This implies that we can compute in quadratic time a $2^{3k+2}$-expression for a graph of clique-width $k$ by applying this algorithm for $c=1,2,\dots$. As the bound is independent of $n$, algorithms requiring expressions as input will still run in polynomial time, although the time complexity will usually be extremely poor. For $(5,2)$-crossing-chordal graphs, though, this is not a concern since it is possible to compute a 3-expression in linear time \cite{bib3}.
An $l$-expression is called {\em irredundant} if every edge-creation operation $\eta_{i,j}$ is applied to a graph where no pair of vertices in $G_i$ and $G_j$ are adjacent. Any $l$-expression can be turned into an $l$-irredundant expression in linear time \cite{bib7}. Therefore, we can assume w.l.o.g. that the input expression is irredundant.
\section{Framework of Our Algorithms}
The input of our algorithms is a graph $G$ on $n$ vertices and an $l$-expression for $G$, and the output is the number of objects (e.g.\ matchings, paths) in $G$. The procedure works by counting these objects at each step of the construction, using the expression tree: we start from the leaves and process a node once all its children have been processed. Finally, the value at the root of the tree is the output of the algorithm. Instead of working directly with the considered objects, we introduce appropriate intermediate objects, and we compute tables of values at each step.
To avoid tedious case studies, we shall assume that requesting the value of
any vector outside of the range $\{0\dots n\}$ returns the value 0. Also, $\Delta_r(l)$ is the vector $(\delta_{i,r})_{1\leq i\leq l}$,
and $\Delta_{r,s}(l)$ is the vector $(\delta_{i,r}\cdot\delta_{j,s})_{\substack{0\leq i\leq j\leq l\\(i,j)\not = (0,0)}}$, where $\delta_{i,j}$ is the {\em Kronecker delta}:\[\delta_{i,j} = \left\{\begin{array}{rl}1&\mbox{if }i=j\\0&\mbox{otherwise}\end{array}\right.\]We will omit the $l$ when it is obvious in context.
\section{Counting Maximal Matchings}
\begin{theorem}
Computing the number of maximal matchings of a graph with $n$ vertices with a corresponding $l$-expression can be done in time polynomial in $n$ (but exponential w.r.t.\ $l$).
\end{theorem}
We cannot directly use the previous framework on maximal matchings. Indeed, consider $M$ a maximal matching of $G = \eta_{i,j}(G')$ and $M'$ the
induced matching in $G'$: $M'$ is not necessarily maximal. However, we can keep track of the vertices of $G'$ that are covered in $M$, and those vertices must form a vertex cover of the subgraph left uncovered by $M'$. See Fig.2 for an example.\\
\begin{figure}[t]
\centering
\begin{tikzpicture}
[style/.style={circle,draw = black, inner sep=1pt,minimum size=3.5mm},
tvick/.style={circle, very thick, draw = black, inner sep=1pt,minimum size=3.5mm}]
\node (0) at (0,6) [style] {\small3};
\node (10) at (2,6) [style] {\small2};
\node (11) at (2,4.5) [style] {\small2};
\node (12) at (2,3) [style] {\small2};
\node (13) at (2,1.5) [style] {\small2};
\node (14) at (2,0) [style] {\small2};
\node (21) at (4,4.5) [style] {\small1};
\node (22) at (4,3) [style] {\small1};
\node (23) at (4,1.5) [style] {\small1};
\node (24) at (4,0) [style] {\small1};
\draw (0) -- (10) -- (21) -- (11) -- (22) -- (12) -- (23) -- (14) -- (21) -- (13) -- (24) -- (12) -- (11) -- (24) -- (10) -- (23) -- (22) -- (13) -- (12) -- (21) (10) -- (22) -- (14) (23) -- (11);
\draw [very thick] (14) -- (24) (13) -- (23) (10) -- (11) (21) -- (22);
\draw (11) to [bend right = 20] (13);
\node (0) at (7,6) [style] {\small3};
\node (10) at (9,6) [style] {\small2};
\node (11) at (9,4.5) [style] {\small2};
\node (12) at (9,3) [style] {\small2};
\node (13) at (9,1.5) [tvick] {\small2};
\node (14) at (9,0) [tvick] {\small2};
\node (21) at (11,4.5) [style] {\small1};
\node (22) at (11,3) [style] {\small1};
\node (23) at (11,1.5) [tvick] {\small1};
\node (24) at (11,0) [tvick] {\small1};
\draw (0) -- (10) (11) -- (12) -- (13) (22) -- (23);
\draw [very thick] (10) -- (11) (21) -- (22);
\draw (11) to [bend right = 20] (13);
\end{tikzpicture}
\caption{Maximal matching of $\eta_{1,2}(G')$, and the corresponding matching-cover pair of $G'$.}
\end{figure}
A {\em matching-cover} pair of a graph $G = (V,E)$ is a pair $(m,c)$ such that:
\begin{itemize}
\item $m\subseteq E$ is a matching of $G$ (i.e. no vertex is covered more than once);
\item $c\subseteq V$ is a vertex cover of the subgraph left uncovered by $m$ (i.e.\ every edge of that subgraph has at least one endpoint in $c$).
\end{itemize}
We show that computing the number of matching-cover pairs of a graph with $n$ vertices with a corresponding $l$-expression can be done in polynomial time in $n$.\\
Let $M = (m_i)_{1\leq i \leq l}$ and $C= (c_i)_{1\leq i\leq l}$ be two vectors of non-negative integers. For a graph $G$, we say that a pair $(m,c)$ satisfies the condition $\varphi_{M,C}(G)$ if $m$ covers $m_i$ vertices in $G_i$ and $c$ uses $c_i$ vertices in $G_i$ for all $i$, and we denote by $mc_{M,C}(G)$ the number of pairs that satisfy $\varphi_{M,C}(G)$. Note that the maximal matchings are exactly the matchings that form a pair with the empty cover; therefore, the number of maximal matchings of $G$ is $\sum_{k\leq n} mc_{k\cdot\Delta_1,0}(G)$.
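As a sanity check of these definitions, the brute-force sketch below (our own illustration, not part of the algorithm; following Fig.2, we let the cover range over subsets of the vertices left uncovered by the matching) enumerates the matching-cover pairs of the 4-cycle and recovers its two maximal matchings as the pairs with an empty cover:

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def matching_cover_pairs(vertices, edges):
    """All pairs (m, c): m a matching, c a vertex cover of the
    subgraph induced on the vertices left uncovered by m."""
    pairs = []
    for m in powerset(edges):
        covered = set().union(*m) if m else set()
        if sum(len(e) for e in m) != len(covered):
            continue  # two edges of m share a vertex: not a matching
        free = set(vertices) - covered
        uncovered_edges = [e for e in edges if set(e) <= free]
        for c in powerset(free):
            if all(set(e) & set(c) for e in uncovered_edges):
                pairs.append((m, c))
    return pairs

# the 4-cycle a-b-c-d-a
V = "abcd"
E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
pairs = matching_cover_pairs(V, E)
maximal = [m for m, c in pairs if c == ()]  # its two perfect matchings
```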
Now we will follow the framework described above and compute $mc_{M,C}$ for all possible $M$ and $C$, at each step of the construction. We associate to each node of the tree a table of size $n^{2l}$ corresponding to the values of $mc_{M,C}$ on this graph for $M$ and $C$ ranging from $(0,..,0)$ to $(n,..,n)$. For a singleton $S_i$, we can easily see that:
\[ mc_{M,C}(S_i) =\left\{\begin{array}{rl}
1 & \mbox{if } M = 0\mbox{ and }C \in \{0, \Delta_i\} \\
0 & \mbox{otherwise}
\end{array}\right. \]
For the renaming operation $G = \rho_{i\rightarrow j}(G')$, the graph is not modified, but all vertices of label $i$ are set to label $j$. Hence, we modify the entries $i$ and $j$ accordingly.
\[mc_{M,C}(G)=\sum_{\substack{M':(M,M')\vdash \phi_{i,j}\\C':(C,C')\vdash \phi_{i,j}}} mc_{M',C'}(G')\]
\[ \mbox{where }(X,X')\vdash\phi_{i,j} \Leftrightarrow \left(\begin{array}{l}
x_j = x'_i+x'_j\\
x_i=0\\
\forall k \not\in \{i,j\}, x_k=x'_k
\end{array}\right)\]
For the disjoint union of two graphs $G=G_1\oplus G_2$, we have a bijection between matching-cover pairs $(m,c)$ in $G$ and pairs $(m_1,c_1),(m_2, c_2)$ of matching-cover pairs in $G_1$ and $G_2$, respectively. Moreover, if $(m,c)$ satisfies $\varphi_{M,C}$, $(m_1,c_1)$ satisfies $\varphi_{M_1,C_1}$ and $(m_2, c_2)$ satisfies $\varphi_{M_2,C_2}$, we have $M=M_1+M_2$ and $C=C_1+C_2$.
\[mc_{M,C}(G)=\sum_{\substack{M_1+M_2=M\\C_1+C_2=C}} mc_{M_1,C_1}(G_1)\cdot mc_{M_2,C_2}(G_2)\]
For the edge creation operation $G=\eta_{i,j}(G')$, we have to choose the extremities of the edges added to the matching among the vertices in the vertex cover. If $q$ is the number of new edges, we have:
\[mc_{M,C}(G)=\sum_{q=0}^n mc_{M',C'}(G')\cdot
\binom{c'_i}{q}\cdot\binom{c'_j}{q}\cdot q!
\]
\[\mbox{where } M'=M-q\Delta_i-q\Delta_j,\ C'=C+q\Delta_i+q\Delta_j\]
Once the maximal matchings of all sizes are computed, it is straightforward
to count the number of perfect matchings and the number of minimum maximal matchings in polynomial time.
Note that counting perfect matchings can be achieved in $O(n^{2l+1})$ time by adapting the matching counting algorithm presented in \cite{bib12} in a similar fashion.\\
{\bf Complexity study:}
Obviously, there are exactly $n$ singleton operations, and each operation requires a constant amount of time. Every other operation requires one to compute $n^{2l}$ values.
As the expression is irredundant, every edge creation operation adds at least one edge, so there are at most $n^2$ edge creation operations, processed in linear time. As a disjoint union operation has two children in the tree, and there are $n$ leaves, there are $n-1$ disjoint union operations, and they require $O(n^{2l})$ time.
For the renaming operation, consider the number of different labels at each step of the construction. This number is one for a singleton, the edge creation operation has no effect, the disjoint union is an addition in the worst case (no shared label) and the renaming operation diminishes this number by one. Therefore, there are at most $n$ renaming operations, and they are done in $O(n^4)$ time. The final sum requires $O(n^l)$ operations.\\
Therefore, the overall complexity of the algorithm is
\[O(n)+O(n^{2l}) \cdot \left(O(n^5)+O(n^{2l+1})+ O(n^3)\right)+ O(n^l)= O(n^{4l+1})~(l\geq 2).\]
For $(5,2)$-crossing-chordal graphs, we can compute an expression of width $l=3$ in linear time and the algorithm runs in time $O(n^{13})$.
\section{Counting paths and path matchings}
A {\em path matching} (or {\em linear forest}) is a disjoint union of paths, in other words, a cycle-free set of edges such that no vertex is covered more than twice.
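These objects can be enumerated by brute force on a small graph; the sketch below (our own illustration) counts the paths and path matchings of the 4-cycle:

```python
from itertools import chain, combinations

def components(vertices, edges):
    """Number of connected components of (vertices, edges), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

def is_path_matching(edge_subset):
    """Max degree <= 2 and no cycle, i.e. a disjoint union of paths."""
    verts = {v for e in edge_subset for v in e}
    deg = {v: 0 for v in verts}
    for u, w in edge_subset:
        deg[u] += 1
        deg[w] += 1
    if any(d > 2 for d in deg.values()):
        return False
    # a forest satisfies |E| = |V| - #components
    return len(edge_subset) == len(verts) - components(verts, edge_subset)

E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]  # the 4-cycle
subsets = chain.from_iterable(combinations(E, r) for r in range(5))
pms = [s for s in subsets if is_path_matching(s)]     # incl. the empty one
paths = [s for s in pms
         if s and components({v for e in s for v in e}, s) == 1]
```

On the 4-cycle, every proper edge subset is a path matching (15 of them, counting the empty one), and 12 of them are single paths.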
\begin{theorem}
Computing the number of paths $pth(G)$ and the number of path matchings
$pm(G)$ of a graph of clique-width $\leq k$ can be done in polynomial
time (but exponential w.r.t. $k$).
\end{theorem}
\begin{proof}
Let $K = (k_{i,j})_{\substack{0\leq i\leq j\leq l\\(i,j)\not = (0,0)}}$ be a vector of non-negative integers. We say that a path matching $P$ of $G$ satisfies the condition $\psi_K$ if:
\begin{itemize}
\renewcommand{\labelitemi}{-}
\item $\forall i>0, k_{0,i}$ vertices in $G_i$ are left uncovered by $P$;
\item $\forall (i,j),i\leq j$, $k_{i,j}$ paths in $P$ have extremities in $G_i$ and $G_j$.
\end{itemize}
We denote the number of path matchings in $G$ satisfying $\psi_K$ by $pm_K(G)$. For $i>j$, we write $k_{i,j}$ for $k_{j,i}$. As $K$ is of size $\frac{l(l+3)}{2}$, we compute tables of size $n^{\frac{l(l+3)}{2}}$ at each step.
For a singleton $S_i$, the only possible path matching is the empty one, which leaves the vertex uncovered.
\[\forall K, pm_K(S_i) =\left\{\begin{array}{rl}
1 & \mbox{ if }K =\Delta_{0,i}\\
0 & \mbox{otherwise}
\end{array}\right. \]
For the renaming operation $G= \rho_{i\rightarrow{} j}(G')$, the method is the same as for maximal matchings.
\[pm_K(G) = \sum_{K':(K,K')\vdash \phi} pm_{K'}(G')\]
\[\mbox{where }(K,K')\vdash \phi \Leftrightarrow \left(\begin{array}{l}
k_{j,j}=k'_{j,j}+k'_{i,j}+k'_{i,i}\\
\forall a \not\in \{i,j\}, k_{a,j} = k'_{a,i}+k'_{a,j}\\
\forall a, k_{a,i} = 0\\
\forall a \not\in \{i,j\}, b \not\in \{0,i,j\}, k_{a,b}=k'_{a,b}
\end{array}\right)\]
For the disjoint union operation $G = G_1 \oplus G_2$, we have a bijection between path matchings $p$ in $G$ and pairs $(p_1, p_2)$ of path matchings in $G_1$ and $G_2$, respectively. Moreover, if $p_1$ satisfies $\psi_{K_1}$, $p_2$ satisfies $\psi_{K_2}$ and $p$ satisfies $\psi_K$, then $K=K_1+K_2$.
\[pm_K(G) = \sum_{K_1+K_2=K} pm_{K_1}(G_1)\cdot pm_{K_2}(G_2)\]
Consider now the edge creation operation $G=\eta_{i,j}(G')$. We say a path matching $P$ in $G$ is an {\em extension} of a path matching $P'$ in $G'$ if $P\cap G'=P'$, so that $P=P'\cup E_{i,j}$ where $E_{i,j}$ is a subset of the edges added by the operation. Now, if we consider a path matching $P'$ in $G'$ that satisfies $\psi_{K'}$, we claim that the number of extensions of $P'$ in $G$ that satisfy $\psi_K$ depends only on $i,j,K'$ and $K$ (and not on $P'$ or $G'$), and we denote it by $N_{i,j}(K',K)$. Since every path matching of $G$ is an extension of a unique path matching of $G'$, we have:
\[ pm_K(G)=\sum_{K'} pm_{K'}(G')\cdot N_{i,j}(K', K) \]
Moreover, we can compute all the $N_{i,j}(K',K)$ beforehand in $O(n^{l(l+4)})$ time. The proof of these claims is given in the appendix.
We can then compute the number of paths $pth(G)$ and the number of path matchings $pm(G)$ with the formulas:
\[\begin{array}{rrlrl}
pth(G) =&\displaystyle \sum_{0\leq a \leq n}&pm_{K(a)}(G) &\mbox{where}&K(a)=a\cdot \Delta_{0,1}+\Delta_{1,1}\\
pm(G) =&\displaystyle \sum_{1\leq a+2b \leq n} &pm_{K(a,b)}(G) &\mbox{where}&K(a,b)=a\cdot \Delta_{0,1}+b\cdot \Delta_{1,1}
\end{array}\]
\end{proof}
{\bf Complexity study:} A singleton operation requires constant time. Every other operation requires us to compute $n^{\frac{l(l+3)}{2}}$ values. For each value, the renaming operation is processed in linear time, the disjoint union operation in $O(n^{l^2})$ time and the edge creation operation in $O(n^{\frac{l(l+3)}{2}})$ time.
The overall complexity of the algorithm is:
\[\left\{\begin{array}{rl}
O(n^{l^2+4l})&\mbox{for }l\leq 5 \\
O(n^{\frac{3}{2}(l^2+l)+1}) & \mbox{for } l>5
\end{array}\right.\]
For $(5,2)$-crossing-chordal graphs, we can compute in linear time an expression of width $l=3$ and we have an algorithm running in $O(n^{21})$ time.
\section{Conclusion}
These results seem to confirm the intuition that bounding clique-width is an efficient restriction on the input of $\#P$-hard problems in order to allow the use of polynomial algorithms. Notably, being able to count paths and path matchings in polynomial time is interesting because connected structures are usually very difficult to count. In that sense, the next logical step is to study the tree (or, equivalently, forest) counting problem. However, our attempts to do so by using a method similar to the one used in this paper only produced algorithms running in exponential time. Our feeling is that the tree counting problem remains $\#P$-complete for graphs of bounded clique-width, as this intuitive method keeps giving bad results. It remains an open problem for now.
\section{\normalsize{Introduction}}
\label{sec:introduction}
Real data often contain outliers,
which can create serious problems when analyzing them.
Many methods have been developed to deal with
outliers, often by constructing a fit that is robust to
them and then detecting the outliers by their large
deviation (distance, residual) from that fit.
For a brief overview of this approach see
\cite{Rousseeuw:WIRE-Anomaly}.
Unfortunately, most robust methods cannot handle data
with missing values, some rare exceptions being
\cite{Cheng:Missing} and \cite{Danilov:GSE}.
Moreover, they are typically restricted to casewise
outliers, which are cases that deviate from the majority.
We call these {\it rowwise outliers} because multivariate
data are typically represented by a rectangular matrix
in which the rows are the cases and the columns are the
variables (measurements).
In general, robust methods require that fewer than half
of the rows are outlying, see e.g.
\cite{Lopuhaa:BDP}.
However, recently a different type of outliers, called
{\it cellwise outliers}, has received much attention
\citep{Alqallaf:Propag,VanAelst:HSD,Agostinelli:Cellwise}.
These are suspicious cells (entries) that can occur
anywhere in the data matrix.
Figure \ref{fig:Cellmap_DDC} illustrates the difference
between these types of outliers.
The regular cells are shown in gray, whereas black means
outlying.
Rows 3 and 7 are rowwise outliers, and the other rows
contain a fairly small percentage of cellwise outliers.
As in this example, a small proportion of
outlying cells can contaminate over half the rows, which
causes most methods to break down.
This effect is at its worst when the dimension
(the number of columns) is high.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.42\textwidth]
{Cellmap_DDC_2rows.pdf}
\caption{Data matrix with missing values and
cellwise and rowwise contamination.}
\label{fig:Cellmap_DDC}
\end{figure}
In high-dimensional situations, which are becoming
increasingly common, one often applies principal
component analysis (PCA) to reduce the dimension.
However, the classical PCA (CPCA) method is not robust
to either rowwise or cellwise outliers.
Robust PCA methods that can deal with rowwise outliers
include \citet{Croux:Proj}, \citet{Hubert:RAPCA},
\citet{Locantore:Funcdata}, \citet{Maronna:ORreg} and the
ROBPCA method \citep{Hubert:ROBPCA}. The latter method
combines projection pursuit ideas with robust covariance
estimation.
In order to deal with missing values,
\cite{Nelson:miss} and \cite{Kiers:WLS} developed
the {\it iterative classical PCA algorithm} (ICPCA),
see \citet{Walczak:TutorialI} for a tutorial.
The ICPCA follows the spirit of the EM algorithm.
It starts by replacing the missing values by
initial estimates such as the columnwise means.
Then it iteratively fits a CPCA, yielding scores
that are transformed back to the original space
resulting in new estimates for the missing values,
until convergence.
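A minimal NumPy sketch of this iteration (our own illustration of the scheme, not the original implementation; the rank-one example data and the fixed iteration count are assumptions):

```python
import numpy as np

def icpca(X, k, n_iter=100):
    """Iterative classical PCA: impute NAs by column means, then
    alternate between fitting a rank-k PCA on the filled matrix and
    re-imputing the missing cells from the fitted model."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    filled = X.copy()
    col_means = np.nanmean(X, axis=0)
    filled[miss] = np.take(col_means, np.where(miss)[1])  # initial imputation
    for _ in range(n_iter):
        mu = filled.mean(axis=0)
        # rank-k PCA of the centered, currently filled matrix
        U, s, Vt = np.linalg.svd(filled - mu, full_matrices=False)
        fitted = mu + (U[:, :k] * s[:k]) @ Vt[:k]
        filled[miss] = fitted[miss]  # new estimates for the missing cells
    return filled

# exactly rank-one data with one missing cell (true value 3.0)
u = np.arange(1.0, 21.0)
X = np.outer(u, [1.0, 2.0, 3.0])
X[0, 2] = np.nan
out = icpca(X, k=1)
```

On this noiseless rank-one example the imputed cell converges to the value implied by the rank-one structure.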
\citet{Serneels:MPCA} proposed a rowwise
robust PCA method that can also cope with
missing values.
We will call this method MROBPCA (ROBPCA for
missing values) as its key idea is to combine the
ICPCA and ROBPCA methods.
MROBPCA starts by imputing the NA's by robust
initial estimates. The main difference with the
ICPCA algorithm is that in each iteration the PCA
model is fit by ROBPCA, which yields different
imputations and flags rowwise outliers.
As of yet there are no PCA methods that can deal
with cellwise outliers in combination with
rowwise outliers and NA's.
This paper aims to fill that gap by constructing a
new method called MacroPCA, where `Macro' stands
for {\bf M}issingness {\bf A}nd {\bf C}ellwise
and {\bf R}owwise {\bf O}utliers.
It starts by applying a multivariate method called
DetectDeviatingCells \citep{Rousseeuw:DDC} for
detecting cellwise outliers, which provides initial
imputations for the outlying cells and the NA's
as well as an initial measure of rowwise outlyingness.
In the next steps MacroPCA combines ICPCA and
ROBPCA to protect against rowwise outliers and to
create improved imputations of the outlying cells
and missing values.
MacroPCA also provides graphical displays to
visualize the different types of outliers.
R code for MacroPCA is publicly available
(Section \ref{sec:software}).
\section{The MacroPCA algorithm}
\label{sec:MacroPCA}
\subsection{Model}
\label{subsec:Model}
The data matrix is denoted as $\bX_{n,d}$ in which
the subscripts are the number of rows (cases) $n$
and the number of columns (variables) $d$.
In the absence of outliers and missing values the
goal is to represent the data in a lower dimensional
space, i.e.\
\begin{equation} \label{eq:pcamodel}
\bX_{n,d} = \bone_n \bmu_d' +
\bmT_{n,k} (\bmP_{d,k})' + \bmE_{n,d}
\end{equation}
with $\bone_n$ the column vector with all $n$
components equal to 1, $\bmu_d$ the
$d$-variate column vector of location,
$\bmT_{n,k}$ the $n \times k$ score matrix,
$\bmP_{d,k}$ the $d \times k$ loadings matrix
whose columns span the PCA subspace, and
$\bmE_{n,d}$ the $n \times d$ error matrix.
The reduced dimension $k$ can vary from
1 to $d$ but we assume that $k$ is low.
The $\bmu_d$\,, $\bmT_{n,k}$\, and $\bmP_{d,k}$
are unknown, and estimates of them will be
denoted by $\bm_d$\,, $\bT_{n,k}$ and
$\bP_{d,k}$\,.
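For clean, fully observed data these estimates can be obtained from the singular value decomposition of the column-centered data matrix; a small NumPy sketch (the example data are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 30, 5, 2
# data lying exactly on a k-dimensional affine subspace
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + rng.normal(size=d)

mu = X.mean(axis=0)                      # estimate of mu_d
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:k].T                             # d x k loading matrix
T = (X - mu) @ P                         # n x k score matrix
E = X - (mu + T @ P.T)                   # n x d error matrix
```

Since the example data are exactly of rank $k$ after centering, the error matrix vanishes up to numerical precision.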
Several realities complicate this simple model.
First, the data matrix may not be fully observed,
i.e., some cells $x_{ij}$ may be missing.
Here we assume that they are
\textit{missing at random} (MAR),
meaning that the missingness of a cell is
unrelated to the value the cell would have had,
but may be related to the values of other
cells in the same row; see, e.g.,
\cite{Schafer:missing}.
This is the typical assumption underlying
EM-based methods such as ICPCA and MROBPCA
that are incorporated in our proposal.
Secondly, the data may contain rowwise outliers,
e.g. cases from a different population.
The existing rowwise robust methods require
that fewer than half of the rows are outlying,
so we make the same assumption here.
Thirdly, cellwise outliers may occur
as described in the introduction.
The outlying cells may be imprecise, incorrect or
just unusual.
Outlying cells do not necessarily stand out in their
column because the correlations between the columns
matter as well, so these cells may not be detectable by
simple univariate outlier detection methods.
There can be many cellwise outliers, and
in fact each row may contain one or
more outlying cells.
\subsection{Dealing with missing values
and cellwise and rowwise outliers}
\label{subsec:Algorithm}
We propose the MacroPCA algorithm for analyzing data
that may contain one or more of the following issues:
missing values, cellwise outliers, and rowwise outliers.
Throughout the algorithm we will use the following
two notations:
\begin{itemize}
\item the {\it NA-imputed matrix} $\bcX_{n,d}$ only
imputes the missing values of $\bX_{n,d}$\;;
\item the {\it cell-imputed matrix} $\bfX_{n,d}$ has
imputed values for the outlying cells that do not
belong to outlying rows, and for all missing
values.
\end{itemize}
Both of these matrices still have $n$ rows.
Neither is intended to simply replace the true data
matrix $\bX_{n,d}$\;.
Note that $\bfX_{n,d}$ does not try to impute outlying
cells inside outlying rows, which would mask these rows
in subsequent computations.
Since we do not know in advance which cells and
rows are outlying, the set of flagged cellwise
and rowwise outliers (and hence $\bcX_{n,d}$ and
$\bfX_{n,d}$) will be updated in the course of
the algorithm.
The first part of MacroPCA is the
DetectDeviatingCells (DDC) algorithm.
The description of this method can be found in
\cite{Rousseeuw:DDC} and in
Section 1 of the Supplementary Material.
The main purpose of the DDC method is to detect
cellwise outliers.
DDC outputs their positions $I_{c,DDC}$ as
well as imputations for these outlying cells
and any missing values.
It also yields an initial outlyingness
measure on the rows, which is however not
guaranteed to flag all outlying rows.
The set of flagged rows $I_{r,DDC}$ will be
improved in later steps.
The second part of MacroPCA constructs principal
components along the lines of the ICPCA
algorithm but employing a version of ROBPCA
\citep{Hubert:ROBPCA} to fit subspaces.
It consists of the following steps, with all
notations listed in Section 2
of the Supplementary Material.
\begin{enumerate}
\item {\bf Projection pursuit.} The goal of this
step is to provide an initial indication of which
rows are the least outlying.
For this ROBPCA starts by
identifying the $h < n$ least outlying rows by a
projection pursuit procedure.
We write $0.5 \leqslant \alpha=h/n < 1$.
This means that we can withstand up to a fraction
$1-\alpha$ of outlying rows.
To be on the safe side the default is
$\alpha=0.5$\,.
However, due to cellwise outliers there may be far
fewer than $h$ uncontaminated rows, so we cannot
apply this step to the original data $\bX_{n,d}$.
We also cannot use the entire imputed matrix
$\btX_{n,d}$ obtained
from DDC in which all outlying cells are imputed, even
those in potentially outlying rows, as this could mask
outlying rows.
Instead we use the cell-imputed matrix
$\bfX_{n,d}^{(0)}$ defined as follows:
\begin{enumerate}
\item In all rows flagged as outlying we keep the
original data values. Only the missing values in
these rows are replaced by the values imputed by
DDC. More precisely, for all $i$ in $I_{r,DDC}$
we set $\bfx_i^{(0)} = \bcx_i^{(0)}$.
\item In the $h$ unflagged rows with the fewest cells
flagged by DDC we impute those cells,
i.e. $\bfx_i^{(0)} = \btx_i$.
\end{enumerate}
As in ROBPCA the outlyingness of a point
$\bfx_i^{(0)}$ is then computed as
\begin{equation}
\text{outl}(\bfx_i^{(0)}) = \max_{\bv \in B}
\frac{|\bv'\bfx_i^{(0)} - \lmcd(\bv'\bfx_j^{(0)})|}
{\smcd(\bv'\bfx_j^{(0)})} \label{outlo} \ \ ,
\end{equation}
where $\lmcd(\bv'\bfx_j^{(0)})$ and
$\smcd(\bv'\bfx_j^{(0)})$ are the univariate MCD location
and scale estimators \citep{Rousseeuw:RobReg} of
$\{\bv'\bfx_1^{(0)},\ldots,\bv'\bfx_n^{(0)}\}$ .
The set $B$ contains 250 directions through two data
points (or all of them if there are fewer than 250).
Finally, the indices of the $h$ rows $\bfx_i^{(0)}$
with the lowest outlyingness and not belonging to
$I_{r,DDC}$ are stored in the set $H_0$\,.
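A simplified sketch of this outlyingness computation (our own illustration: the univariate MCD location and scale are replaced by the median and the MAD for brevity, the directions are sampled through random pairs of data points, and the toy data are assumptions):

```python
import numpy as np

def outlyingness(X, n_dir=250, seed=1):
    """Projection-pursuit outlyingness: maximal robustly standardized
    distance over directions through two data points."""
    n, _ = X.shape
    rng = np.random.default_rng(seed)
    outl = np.zeros(n)
    for _ in range(n_dir):
        i, j = rng.choice(n, size=2, replace=False)
        v = X[i] - X[j]
        norm = np.linalg.norm(v)
        if norm < 1e-12:
            continue                      # degenerate direction
        proj = X @ (v / norm)
        med = np.median(proj)
        mad = np.median(np.abs(proj - med))
        if mad < 1e-12:
            continue                      # zero robust scale
        outl = np.maximum(outl, np.abs(proj - med) / mad)
    return outl

# regular grid of inliers plus one far-away point
grid = np.array([(x, y) for x in range(-5, 6) for y in range(-5, 6)],
                dtype=float) / 5
X = np.vstack([grid, [[10.0, 10.0]]])
outl = outlyingness(X)
```

The planted outlier receives by far the largest outlyingness value.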
\item {\bf Subspace dimension.}
Here we choose the number of principal components.
For this we build a new cell-imputed matrix
$\bfX^{(1)}_{n,d}$ which imputes the outlying cells in
the rows of $H_0$ and imputes the NA's in all rows.
This means that $\bfx_i^{(1)} = \btx_i$ for
$i \in H_0$\,, and $\bfx_i^{(1)} = \bcx_i^{(0)}$ if
$i \notin H_0$.
Then we apply classical PCA to the $\bfx_i^{(1)}$ with
$i \in H_0$.
Their mean $\bm_{d}^{(1)}$ is an estimate of the center,
whereas the spectral decomposition of their covariance
matrix yields a loading matrix $\bP_{d,d}^{(1)}$ and a
diagonal matrix $\bL_{d,d}^{(1)}$ with the eigenvalues
sorted from largest to smallest.
These eigenvalues can be used to construct a screeplot
from which an appropriate dimension $k$ of the subspace
can be derived.
Alternatively, one can retain a certain cumulative
proportion of explained variance, such as 80\%.
The maximal number of principal components that MacroPCA
will consider is the tuning constant $k_{\max}$ which is
set to 10 by default.
\item {\bf Iterative subspace estimation.}
This step aims to estimate the $k$-dimensional
subspace fitting the data.
As in ICPCA this requires iteration, for $s \gs 2$:
\begin{enumerate}
\item The scores matrix in \eqref{eq:pcamodel} based
on the cell-imputed cases is computed as
$\bfT_{n,k}^{(s-1)} = (\bfX_{n,d}^{(s-1)} -
\boldsymbol 1_n
(\bm_{d}^{(s-1)})') \bP_{d,k}^{(s-1)}$\;.
The predicted data values are set to
$\bhX_{n,d}^{(s)} = \boldsymbol 1_n (\bm_d^{(s-1)})'
+ \bfT_{n,k}^{(s-1)} (\bP_{d,k}^{(s-1)})'$\;.
We then update the imputed matrices to
$\bcX_{n,d}^{(s)}$ and $\bfX_{n,d}^{(s)}$ by replacing
the appropriate cells by the corresponding cells of
$\bhX_{n,d}^{(s)}$.
That is, for $\bcX_{n,d}^{(s)}$ we update all the
imputations of missing cells, whereas for
$\bfX_{n,d}^{(s)}$ we update the imputations of the
outlying cells in rows of $H_0$ as well as the
NA's in all rows.
\item The PCA model is re-estimated by applying
classical PCA to the $\bfx_i^{(s)}$ with $i \in H_0$.
This yields a new estimate $\bm_{d}^{(s)}$
as well as an updated loading matrix
$\bP_{d,k}^{(s)}$\;.
\end{enumerate}
The iterations are repeated until $s=20$ or until
convergence is reached, i.e.\ when
the maximal angle between a vector in the new subspace
and the vector most parallel to it in the previous
subspace is below some tolerance
(by default 0.005).
Following~\citet{Krzanowski:GroupPC} this angle is
computed as $\arccos(\sqrt{\delta_k})$
where $\delta_k$ is the smallest eigenvalue of
$(\bP_{d,k}^{(s)})' \bP^{(s-1)}_{d,k}
(\bP^{(s-1)}_{d,k})' \bP_{d,k}^{(s)}$\;.
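This convergence criterion can be sketched directly (a minimal illustration; the two test subspaces are our own):

```python
import numpy as np

def max_subspace_angle(P_new, P_old):
    """Largest principal angle between the column spans of two d x k
    orthonormal loading matrices: arccos(sqrt(delta_k)), with delta_k
    the smallest eigenvalue of P_new' P_old P_old' P_new."""
    M = P_new.T @ P_old @ P_old.T @ P_new
    delta_k = np.linalg.eigvalsh(M)[0]    # eigenvalues in ascending order
    return np.arccos(np.sqrt(np.clip(delta_k, 0.0, 1.0)))

I4 = np.eye(4)
same = max_subspace_angle(I4[:, :2], I4[:, :2])   # identical planes -> 0
perp = max_subspace_angle(I4[:, :2], I4[:, 2:])   # orthogonal planes -> pi/2
```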
After all iterations we have the cell-imputed
matrix $\bfX_{n,d}^{(s)}$ as well as the estimated
center $\bm_{d}^{(s)}$ and the
loading matrix $\bP_{d,k}^{(s)}$\,.
\item {\bf Reweighting.}
In robust statistics one often follows an initial
estimate by a reweighting step in order to improve
the statistical efficiency at a low computational
cost, see e.g.
\citep{Rousseeuw:RobReg,Engelen:PCA}.
Here we use the orthogonal distance of each
$\bfx_i^{(s)}$ to the current PCA subspace:
\begin{equation*}
\fod_i = \|\bfx_i^{(s)} - \bfhx_i^{(s)}\| =
\| \bfx_i^{(s)}- (\bm_d^{(s)} +
(\bfx^{(s)}_i- \bm_d^{(s)})\bP_{d,k}^{(s)}
(\bP_{d,k}^{(s)})') \| \ .
\end{equation*}
The orthogonal distances to the power 2/3 are roughly
Gaussian except for the outliers \citep{Hubert:ROBPCA},
so we compute the cutoff value
\begin{equation}\label{eq:cutoffOD}
c_{\od} :=
\left(\lmcd(\{\fod_j^{2/3}\})+
\smcd(\{\fod_j^{2/3}\})
\, \Phi^{-1}(0.99) \right)^{3/2} \;\;.
\end{equation}
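A minimal numpy sketch of this cutoff computation follows. As simple stand-ins for the univariate MCD location and scale (lmcd and smcd) we use the median and the normal-consistent MAD, and the quantile $\Phi^{-1}(0.99) \approx 2.3263$ is hardcoded; these substitutions are illustrative only.

```python
import numpy as np

def od_cutoff(od, z99=2.3263):
    """Cutoff for orthogonal distances: transform to od^(2/3), which is
    roughly Gaussian for clean cases, take a robust location and scale
    (median and consistent MAD here, as stand-ins for lmcd and smcd),
    and transform back with the power 3/2.  z99 = Phi^{-1}(0.99)."""
    t = np.asarray(od) ** (2.0 / 3.0)
    loc = np.median(t)
    scale = 1.4826 * np.median(np.abs(t - loc))   # normal-consistent MAD
    return (loc + scale * z99) ** 1.5

# example: distances whose 2/3 power is half-normal
ods = np.abs(np.random.default_rng(0).normal(size=500)) ** 1.5
c = od_cutoff(ods)
```

Cases with distance above `c` would then be flagged with respect to the PCA subspace.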
All cases for which $\fod_i \ls c_{\od}$
are considered non-outlying with respect to the
PCA subspace, and their indices are stored in a
set $H^*$. As before, any $i \in I_{r,DDC}$ is
removed from $H^*$. The cases not in $H^*$ are
flagged as rowwise outliers.
The final cell-imputed matrix $\bfX_{n,d}$
is given by
$\fx_{i,j} = \hfx_{i,j}^{(s)}$ if
$i \in H^*$ and $j \in I_{c,DDC}$ and
$\fx_{i,j} = \cx_{i,j}$ otherwise.
Applying classical PCA to the $n^*$ rows
$\bfx_i$ in $H^*$ yields a new center $\bm_d^*$
and a new loading matrix $\bP_{d,k}^*$\;.
\item {\bf DetMCD.} Now we want to estimate
a robust basis of the estimated subspace.
The columns of $\bP_{d,k}^*$ from step 4 need not
be robust, because some of the $n^*$ rows in
$H^*$ might be outlying inside the subspace.
These so-called
good leverage points do not harm the
estimation of the PCA subspace but they can still
affect the estimated eigenvectors and eigenvalues,
as illustrated by a toy example in Section
\ref{A:toy} of the Appendix.
In this step we first project the $n^*$ points
of $H^*$ onto the subspace, yielding
\begin{equation*}
\bfT_{n^*,k} = \left(\bfX_{n^*,d} -
\boldsymbol 1_{n^*} \bm_d^{*'} \right)
\bP_{d,k}^* \ \ .
\end{equation*}
Next, the center and scatter matrix of the scores
$\bfT_{n^*,k}$ are estimated by the DetMCD method
of \citet{Hubert:DetMCD}.
This is a fast, robust and deterministic algorithm
for multivariate location and scatter, yielding
$\bm_k^{\mcd}$ and $\bS_{k,k}^{\mcd}$.
Its computation is feasible because the dimension
$k$ of the subspace is quite low.
The spectral decomposition of $\bS_{k,k}^{\mcd}$
yields a loading matrix $\bP_{k,k}^{\mcd}$
and eigenvalues $\hlam_j$ for $j=1,\ldots,k$\;.
We set the final center to
$\bm_d = \bm_d^{*} +
\bP_{d,k}^{*}\bm_k^{\mcd}$
and the final loadings to
$\bP_{d,k} = \bP_{d,k}^*\bP_{k,k}^{\mcd}$.
\item{\bf Scores, predicted values and residuals.}
We now provide the final output.
We compute the scores of $\bcX_{n,d}$
as $\bcT_{n,k} = (\bcX_{n,d} -
\boldsymbol 1_{n}\bm_d')\bP_{d,k}$
and the predictions of $\bcX_{n,d}$
as $\bchX_{n,d} = \boldsymbol 1_{n} \bm_d' +
\bcT_{n,k} (\bP_{d,k})'$\;.
(The formulas for $\bfT_{n,k}$ and $\bfhX_{n,d}$
are analogous.)
This yields the difference matrix
$\bcX_{n,d}-\bchX_{n,d}$ which we then
robustly scale by column, yielding the
final standardized residual matrix $\bR_{n,d}$\,.
The orthogonal distance of $\bcx_i$ to the PCA
subspace is given by
\begin{equation}\label{eq:od}
\cod_i = \| \bcx_i - \bchx_i \| \;.
\end{equation}
\end{enumerate}
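The final output step can be sketched as follows (an illustrative numpy fragment: the center and loadings are obtained here by a plain mean and SVD rather than by the robust steps above, and a column MAD stands in for the robust column scale).

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 50, 8, 2
X = rng.normal(size=(n, d))        # stands in for the cell-imputed matrix
m = X.mean(axis=0)                 # stand-in for the final center
P = np.linalg.svd(X - m, full_matrices=False)[2][:k].T   # d x k loadings

T = (X - m) @ P                    # scores
Xhat = m + T @ P.T                 # predicted values
R = X - Xhat                       # difference matrix
scale = 1.4826 * np.median(np.abs(R - np.median(R, axis=0)), axis=0)
Rstd = R / np.where(scale > 0, scale, 1.0)   # standardized residual matrix
od = np.linalg.norm(X - Xhat, axis=1)        # orthogonal distances, eq. (od)
```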
See Section \ref{sec:software} for the R code
carrying out MacroPCA.
MacroPCA can be carried out
in $O(nd(\min(n,d) +\log(n) + \log(d)))$
time (see\linebreak
Section \ref{A:complexity} of the Appendix)
which is not much more than the
complexity\linebreak
$O(nd\min(n,d))$ of classical PCA.
Figure \ref{fig:times} shows times as a
function of $n$ and $d$ indicating that
MacroPCA is quite fast.
The fraction of NA's in the data had no
substantial effect on the computation
time, as seen in Figure
\ref{fig:timesNAs}
in Section \ref{A:complexity}.
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]
{MCAR_A09_Comptimes_in_n.pdf} &
\includegraphics[width=0.45\textwidth]
{MCAR_A09_Comptimes_in_d.pdf}
\vspace{-0.4cm}
\end{tabular}
\caption{Computation times of MacroPCA in
seconds on Intel i7-4800MQ at 2.70 GHz,
as a function of the number of
cases $n$ (left) and of the dimension
$d$ (right).}
\label{fig:times}
\end{figure}
Note that PCA loadings are highly influenced by
the variables with the largest variability.
For this reason the MacroPCA code provides the option to
divide each variable by a robust scale. This does
not increase the computational complexity.
\section{Outlier detection}
\label{sec:detection}
MacroPCA provides several tools for outlier detection.
We illustrate them on a dataset collected by
\citet{Alfons:robustHD} from the website of the
Top Gear car magazine.
It contains data on 297 cars, with 11 continuous
variables.
Five of these variables (price, displacement, BHP,
torque, top speed) are highly skewed, and were
logarithmically transformed.
The dataset contains 95 missing cells, which is only
2.9\% of the $297 \times 11 = 3267$ cells.
We retained two principal components ($k=2$).
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\textwidth]
{TopGear_IPCA_MacroPCA_residualMap_cropped}
\vspace{-1.0cm}
\caption{Residual map of selected rows from Top Gear
data: (left) when using ICPCA; (right) when using
MacroPCA. The numbers shown in the cells are
the original data values (with price in units
of 1000 UK Pounds).}
\label{fig:Cars1}
\end{figure}
The right hand panel of Figure~\ref{fig:Cars1} shows
the results of MacroPCA by a modification of the
cell map introduced by \citet{Rousseeuw:DDC}.
The computations were performed on all 297 cars, but
in order to make the map fit on a page it only shows
24 cars, including some of the more eventful cases.
The color of the cells stems from the standardized
residual matrix $\bR_{n,d}$ obtained by MacroPCA.
Cells with $|r_{ij}| \ls \sqrt{\chi^2_{1,0.99}} = 2.57$
are considered regular and colored yellow in the
residual map, whereas the missing values are white.
Outlying residuals receive a color which ranges from light
orange to red when $r_{ij} > 2.57$ and from light purple
to dark blue when $r_{ij} < -2.57$\;.
So a dark red cell indicates that its observed value is
much higher than its fitted value, while a dark blue
cell means the opposite.
To the right of each row in the map is a circle
whose color varies from white to black according
to the orthogonal distance $\cod_i$ given by
\eqref{eq:od} compared to the cutoff
\eqref{eq:cutoffOD}.
Cases with $\cod_i \ls c_{\od}$ lie close to the PCA
subspace and receive a white circle.
The others are given darker shades of gray up to black
according to their $\cod_i$\;.
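The cell flagging behind these colors can be sketched as a small numpy function (illustrative only; `flag_cells` is a hypothetical helper, and the integer codes merely mimic the color scheme).

```python
import numpy as np

CUT = 2.57   # square root of the 0.99 quantile of chi-squared with 1 df

def flag_cells(Rstd):
    """Map standardized residuals to cell codes:
    0 = regular (yellow), +1 = outlying high (orange/red),
    -1 = outlying low (purple/blue), 9 = missing (white)."""
    code = np.zeros(Rstd.shape, dtype=int)
    code[Rstd > CUT] = 1
    code[Rstd < -CUT] = -1
    code[np.isnan(Rstd)] = 9     # NA's encoded as np.nan
    return code

R = np.array([[0.1, 3.0], [-4.0, np.nan]])
codes = flag_cells(R)
```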
On these data we also ran the ICPCA method, which handles
missing values in classical PCA.
It differs from MacroPCA in some important ways: the
initial imputations are by nonrobust column means, the
iterations carry out CPCA and do not exclude outlying
rows, and the residuals are standardized by the
nonrobust standard deviation.
By itself ICPCA does not provide a residual map, but
we can construct one anyway by plotting the nonrobust
standardized residuals with the same color scheme,
yielding the left panel of Figure~\ref{fig:Cars1}.
The ICPCA algorithm finds high orthogonal distances
(dark circles)
for the BMW i3, the Chevrolet Volt, the Renault Twizzy
and the Vauxhall Ampera.
These are hybrid or purely electrical cars with a high
or missing MPG (miles per gallon).
Note that the Ssangyong Rodius and Renault Twizzy get
blue cells for their acceleration time of zero seconds,
which is physically impossible.
On this dataset the ICPCA algorithm provides decent results
because the total number of outliers is small compared
to the size of the data, and indeed the residual map of
all 297 cars was mostly yellow.
But MacroPCA (right panel) detects more deviating behavior.
The orthogonal distances of the hybrid Citroen DS5 and the
electric Mitsubishi i-MiEV are now on the high side,
and the method flags the Bugatti Veyron and Pagani Huayra
supercars as well as the Land Rover Defender and
Mercedes-Benz G all-terrain vehicles.
It also flags more cells, giving a more complete
picture of the special characteristics of some cars.
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth]
{TopGear_ICPCA_outlierMap_cropped} &
\includegraphics[width=0.48\textwidth]
{TopGear_MacroPCA_outlierMap_cropped}
\end{tabular}
\vspace{-0.7cm}
\caption{Outlier map of Top Gear data: (left) when
using ICPCA; (right) when using MacroPCA.}
\label{fig:Cars2}
\end{figure}
We can also compute the {\it score distance} of each
case, which is the robustified Mahalanobis distance of
its projection on the PCA subspace among all such
projected points.
It is easily computed as
\begin{equation}
\csd_i = \sqrt{\sum_{j=1}^k (\ct_{ij})^2/\hlam_j}
\label{eq:sd}
\end{equation}
where $\ct_{ij}$ are the scores and $\hlam_j$ the
eigenvalues obtained by MacroPCA.
This allows us to construct a PCA outlier map of cases
as introduced in~\citet{Hubert:ROBPCA}, which plots
the orthogonal distances $\cod_i$ on the vertical axis
versus the score distances $\csd_i$\;.
The MacroPCA outlier map of these data is the right
panel of Figure~\ref{fig:Cars2}.
The vertical line indicates the cutoff
$c_{\sd} = \sqrt{\chi^2_{k,0.99}}$
and the horizontal line is the cutoff $c_{\od}$\,.
Regular cases are those with a small
$\csd_i \ls c_{\sd}$ and a small $\cod_i \ls c_{\od}$\;.
Cases with large $\csd_i$ and small $\cod_i$ are called
good leverage points.
The cases with large $\cod_i$ can be divided into
orthogonal outliers (when their $\csd_i$ is small) and
bad leverage points (when their $\csd_i$ is large too).
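This four-way classification can be sketched directly from the scores, eigenvalues and orthogonal distances (a numpy illustration; `classify` is a hypothetical helper and the cutoffs are passed in explicitly).

```python
import numpy as np

def classify(T, lam, od, c_sd, c_od):
    """Score distance sd_i = sqrt(sum_j t_ij^2 / lambda_j) and the
    four-way case classification of the PCA outlier map."""
    sd = np.sqrt(np.sum(np.asarray(T) ** 2 / np.asarray(lam), axis=1))
    labels = np.empty(len(sd), dtype=object)
    labels[(sd <= c_sd) & (od <= c_od)] = "regular"
    labels[(sd >  c_sd) & (od <= c_od)] = "good leverage"
    labels[(sd <= c_sd) & (od >  c_od)] = "orthogonal outlier"
    labels[(sd >  c_sd) & (od >  c_od)] = "bad leverage"
    return sd, labels

T = np.array([[0.5, 0.2], [4.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
lam = np.array([1.0, 1.0])
od = np.array([0.3, 0.2, 6.0, 7.0])
sd, labels = classify(T, lam, od, c_sd=3.0, c_od=2.0)
```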
We see several orthogonal outliers such as the Vauxhall
Ampera as well as some bad leverage points, especially
the BMW i3.
There are also some good leverage points.
The left panel displays the outlier map for ICPCA.
It flags the BMW i3 as an orthogonal outlier.
This behavior is typical because a bad leverage point
will attract the fit of classical methods, making it
appear less special.
For the same reason ICPCA considers some of the good
leverage points as regular cases.
That the ICPCA outlier map is still able to flag some
outliers is due to the fact that this dataset
only has a small percentage of outlying rows.
\section{Online data analysis}
\label{sec:predict}
Applying MacroPCA to a data set $\bX_{n,d}$ yields a
PCA fit.
Now suppose that a new case (row) $\bx$ comes in, and
we would like to impute its missing values,
detect its outlying cells and impute them, estimate
its scores, and find out whether it is a rowwise outlier.
We could of course append $\bx$ to $\bX_{n,d}$ and rerun
MacroPCA, but that would be very inefficient.
Instead we propose a method to analyze $\bx$ using
only the output of MacroPCA on the initial
set $\bX_{n,d}$\;.
This can be done quite fast, which makes the procedure
suitable for online process control.
For outlier-free data with NA's this was
studied by \citet{Nelson:miss} and
\citet{Walczak:TutorialI}.
\citet{FolchFortuny:PCAmissing} call this model
exploitation, as opposed to model building
(fitting a PCA model).
Our procedure consists of two stages, along the
lines of MacroPCA.
\begin{enumerate}[label={\arabic*.}]
\item {\bf DDCpredict} is a new function
which only uses $\bx$ and the output of DDC on the
initial data $\bX_{n,d}$\;.
First the entries of $\bx$ are standardized using
the robust location and scale estimates from DDC.
Then all $x_j$ with
$|x_j| > \sqrt{\chi^2_{1,0.99}} = 2.57$
are replaced by NA's.
Next all NA's are estimated
as in DDC making use of the pre-built
coefficients $b_{jh}$ and weights $w_{jh}$\;.
Also the deshrinkage step uses the original
robust slopes.
The \textit{DDCpredict} stage yields the imputed
vector $\btx^{(0)}$ and the standardized residual
of each cell $x_j$.
\item {\bf MacroPCApredict} improves on the initial
imputation $\btx^{(0)}$\;.
The improvements are based solely on the $\bm_d$
and $\bP_{d,k}$ that were obtained by MacroPCA
on the original data $\bX_{n,d}$\;.
Step $s \gs 1$ is of the following form:
\begin{enumerate}
\item Project the imputed case $\btx^{(s-1)}$
on the MacroPCA subspace to obtain its scores
vector
$\bt^{(s)} = (\bP_{d,k})'(\btx^{(s-1)} - \bm_d)$;
\item Transform the scores to the original
space, yielding
$\hat{\bx}^{(s)} = \bm_d + \bP_{d,k} \bt^{(s)}$;
\item Reimpute the outlying cells and missing values
of $\bx$ by the corresponding values of
$\hat{\bx}^{(s)}$, yielding $\btx^{(s)}$\,.
\end{enumerate}
These steps are iterated until convergence
(when the new imputed values are within a tolerance
of the old ones) or
the maximal number of steps (by default 20) is
reached.
We denote the final $\btx^{(s)}$ as $\btx$.
Next we create $\bcx$ by replacing
the missing values in $\bx$ by the corresponding
cells in $\btx$.
We then compute the orthogonal distance
$\OD(\bcx)$ and the score distance $\SD(\bcx)$.
If $\OD(\bcx) > c_{\od}$ the new case $\bx$ is
flagged as an orthogonal outlier.
Finally the cell residuals $\cx_j - \chx_j$
are standardized as in the last step of MacroPCA,
and used to flag outlying cells in $\bx$.
\end{enumerate}
To illustrate this prediction
procedure we re-analyze the Top Gear data set.
We exclude the 24 cars shown in the residual map
of Figure~\ref{fig:Cars1}
and build the MacroPCA model on the remaining data.
This model was then provided to analyze the 24
selected cars as `new' data.
Figure~\ref{fig:Cars3} shows the result.
As before the cells are colored according to their
standardized residual, and the circles on
the right are filled according to their $\cod$.
The left panel is the MacroPCA residual map shown
in Figure~\ref{fig:Cars1}, which was obtained by
applying MacroPCA to the entire data set.
The right panel shows the result of analyzing these
24 cases using the fit obtained without them.
The residual maps are quite similar.
Note that each cell now shows its standardized
residual (instead of its data value as in
Figure~\ref{fig:Cars1}), making it
easier to see the differences.
\begin{figure}[!ht]
\centering
\includegraphics[width=1.0\textwidth]
{TopGear_MacroPCApredict_residualMap_cropped}
\vspace{-1.0cm}
\caption{Top Gear data set: residual maps obtained by
(left) including and (right) excluding these 24
cars when fitting the PCA model.}
\label{fig:Cars3}
\end{figure}
\section{Simulations}
\label{sec:Simulations}
We have compared the performance of ICPCA, MROBPCA
and MacroPCA in an extensive simulation study.
Several contamination models were considered with
missing values, cellwise outliers, rowwise outliers,
and combinations of them. Only a few of the results
are reported here since the others yielded similar
conclusions.
The clean data $\bX_{n,d}^0$ are generated from a
multivariate Gaussian with $\bmu = \bzero$ and two
types of covariance matrices $\bSigma_{d,d}$.
The first one is based on the structured
correlation matrix called A09 where each off-diagonal
entry is $\rho_{i,j} = \left(-0.9\right)^{|i-j|}$.
The second type of covariance matrix is based on the
random correlation matrices of
\cite{Agostinelli:Cellwise} and will be called ALYZ.
These correlation matrices are turned into covariance
matrices with other eigenvalues.
More specifically, the diagonal elements of the
matrix $\bL_{d,d}$ from the spectral decomposition
$\bSigma_{d,d}={\bP_{d,d}}\bL_{d,d}{\bP'_{d,d}}$
are replaced by the
desired values listed below.
The specifications of the clean
data are $n=100$, $d=200$,
$\bL_{d,d} = \text{diag}(30, 25, 20, 15, 10, 5,
0.098, 0.0975, \ldots, 0.0020, 0.0015)$ and $k=6$
(since $\sum_{j=1}^{6} \lambda_j /
\sum_{j=1}^{200} \lambda_j = 91.5\% $).
MacroPCA takes less than a second for $n=100$,
$d=200$ as seen in Figure \ref{fig:times}.
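The A09 construction of the clean data can be sketched as follows (illustrative numpy code with a smaller $d$ than in the paper; the eigenvalue choice here is hypothetical, and the pairing of new eigenvalues to eigenvectors is arbitrary).

```python
import numpy as np

d, n = 20, 100
# A09 correlation matrix: rho_ij = (-0.9)^{|i-j|}
idx = np.arange(d)
R = (-0.9) ** np.abs(idx[:, None] - idx[None, :])

# replace the eigenvalues of R by desired values to obtain Sigma
eigvals, eigvecs = np.linalg.eigh(R)
L = np.linspace(0.1, 30.0, d)[::-1]          # hypothetical eigenvalue choice
Sigma = eigvecs @ np.diag(L) @ eigvecs.T

# clean Gaussian data with mean zero
rng = np.random.default_rng(0)
X0 = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
```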
In a first simulation setting, the clean data
$\bX_{n,d}^0$ are modified by replacing a random
subset of 5\%, 10\%, ... up to 30\% of the
$n \times d$ cells with NA's.
The second simulation setting generates NA's and
outlying cells by randomly replacing 20\% of the cells
$x_{ij}$ by missing values and 20\% by the value
$\gamma\sigma_j$
where $\sigma^2_{j}$ is the $j$-th diagonal element of
$\bSigma_{d,d}$ and $\gamma$ ranges from 0 to 20.
The third simulation setting generates NA's and
outlying rows.
Here 20\% of random cells are replaced by NA's
and a random subset of 20\% of the rows is replaced
by rows generated from
$N(\gamma \bv_{k+1},\bSigma_{d,d})$ where
$\gamma$ varies from 0 to 50 and $\bv_{k+1}$ is the
$(k+1)$th eigenvector of $\bSigma_{d,d}$.
The last simulation setting generates 20\% of NA's,
together with 10\% of cellwise outliers and 10\% of
rowwise outliers in the same way.
In each setting we consider the set C consisting
of the rows $i$
that were not replaced by rowwise outliers,
with $c := \#C$, and the data matrix
$\bX_{c,d}^0$ consisting of those rows of the
clean data $\bX_{n,d}^0$\,.
As a baseline for the simulation we apply
classical PCA to $\bX_{c,d}^0$ and denote the
resulting predictions by $\hat{x}_{ij}^C$ for
$i$ in $C$.
We then measure the mean squared error (MSE) from
the baseline:
\begin{equation*}
\text{MSE} = \frac{1}{c d}\,
\sum_{i \in C} \sum_{j=1}^{d}
\left(\hat{x}_{ij} - \hat{x}_{ij}^{C}\right)^2
\end{equation*}
where $\hat{x}_{ij}$ is the predicted value for
$x_{ij}$ obtained by applying the different
methods to the contaminated data.
The MSE is then averaged over 100 replications.
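The MSE computation itself is straightforward; a numpy sketch (with a toy prediction matrix, and `mse_vs_baseline` a hypothetical helper name):

```python
import numpy as np

def mse_vs_baseline(Xhat, Xhat_C, C):
    """MSE of the predictions on the clean rows C, relative to the baseline
    predictions obtained by classical PCA on the clean data X^0_{c,d}."""
    diff = np.asarray(Xhat)[C] - np.asarray(Xhat_C)
    return np.mean(diff ** 2)

Xhat = np.arange(12.0).reshape(4, 3)     # toy predictions on contaminated data
Xhat_C = Xhat[[0, 2]] + 1.0              # toy baseline predictions for rows C
mse = mse_vs_baseline(Xhat, Xhat_C, [0, 2])
```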
\begin{figure}[!ht]
\centering
\vskip2mm
\begin{tabular}{cc}
\hskip5mm A09, fraction $\varepsilon$ of missing
values & \hskip0mm ALYZ, fraction $\varepsilon$
of missing values \\
\includegraphics[width=0.45\textwidth]
{MCAR_d200_A09_MSE} &
\includegraphics[width=0.45\textwidth]
{MCAR_d200_ALYZ_MSE}
\vspace{-0.3cm}
\end{tabular}
\caption{Average MSE as a function of the fraction
$\eps$ of missing values. The data were generated
using A09 (left) and ALYZ (right).}
\label{fig:MCAR}
\end{figure}
Figure \ref{fig:MCAR} shows the performance of
ICPCA, MROBPCA and MacroPCA when some of the data are missing.
As CPCA and ROBPCA cannot deal with NA's, they are
not included in this comparison.
Since there are no outliers, the classical ICPCA performs
best, followed by MROBPCA and MacroPCA, which perform
similarly to each other and only slightly worse than
ICPCA; note that the scale of the vertical axis is
much smaller than in the other three simulation settings.
\begin{figure}[!ht]
\centering
\vskip2mm
\begin{tabular}{cc}
\hskip7mm A09, missing values \& cellwise
& \hskip3mm ALYZ, missing values \& cellwise \\
\includegraphics[width=0.45\textwidth]
{ICMMCAR_d200_20_A09_MSE} &
\includegraphics[width=0.45\textwidth]
{ICMMCAR_d200_20_ALYZ_MSE}
\vspace{-0.2cm}
\end{tabular}
\caption{Average MSE for data with 20\% of missing
values and 20\% of cellwise outliers, as a
function of $\gamma$ which determines the
distance of the cellwise outliers.}
\label{fig:ICM}
\end{figure}
Now we set 20\% of the data cells to missing and
add 20\% of cellwise contamination given by $\gamma$.
Figure~\ref{fig:ICM} shows the performance of ICPCA,
MROBPCA and MacroPCA in this situation.
The MSE of both ICPCA and MROBPCA grows very fast with
$\gamma$ which indicates that these methods are not at
all robust to cellwise outliers.
Note that $d=200$ so on average
$1-(1-0.2)^{200}\approx 100 \%$ of the rows are
contaminated, whereas no purely rowwise method
can handle more than 50\%.
MacroPCA is the only method that can withstand cellwise
outliers here. When $\gamma$ is smaller than 5 the MSE
goes up, but this is not surprising as in that case the
values in the contaminated cells are still close to the
clean ones. As soon as the contamination is sufficiently
far away, the MSE drops to a very low value.
\begin{figure}[!ht]
\centering
\vskip2mm
\begin{tabular}{cc}
\hskip8mm A09, missing values \& rowwise &
\hskip3mm ALYZ, missing values \& rowwise \\
\includegraphics[width=0.45\textwidth]
{THCMMCAR_d200_20_A09_MSE} &
\includegraphics[width=0.45\textwidth]
{THCMMCAR_d200_20_ALYZ_MSE}
\vspace{-0.2cm}
\end{tabular}
\caption{Average MSE for data with 20\% of missing
values and 20\% of rowwise outliers, as a
function of $\gamma$ which determines the
distance of the rowwise outliers.}
\label{fig:THCM}
\end{figure}
Figure~\ref{fig:THCM} presents the results of ICPCA,
MROBPCA and MacroPCA when there are 20\% of missing
values combined with 20\% of rowwise contamination.
As expected, the ICPCA algorithm breaks down while
MROBPCA and MacroPCA provide very good results.
MROBPCA and MacroPCA are affected the most (but
not much) by nearby outliers, and very little by far
contamination.
\begin{figure}[!ht]
\centering
\vskip2mm
\begin{tabular}{cc}
\hskip 1mm A09, missing \& cellwise \& rowwise &
\hskip 1mm ALYZ, missing \& cellwise \& rowwise \\
\includegraphics[width=0.45\textwidth]
{BOTHMCAR_d200_20_A09_MSE} &
\includegraphics[width=0.45\textwidth]
{BOTHMCAR_d200_20_ALYZ_MSE}
\vspace{-0.2cm}
\end{tabular}
\caption{Average MSE for data with 20\% of missing
values, 10\% of cellwise outliers and 10\% of
rowwise outliers, as a function of $\gamma$
which determines the distance of both the
cellwise and the rowwise outliers.}
\label{fig:BOTH}
\end{figure}
Finally, Figure~\ref{fig:BOTH} presents the results
in the situation of 20\% of missing values combined
with 10\% of cellwise and 10\% of rowwise
contamination.
In this scenario the ICPCA and MROBPCA algorithms
break down whereas MacroPCA still provides reasonable
results.
In this section the missing values were generated
in a rather simple way. In Section \ref{A:MAR}
they are generated in a more challenging way but
still MAR, with qualitatively similar results.
\section{Real data examples}
\label{sec:Examples}
\subsection{Glass data}
\label{sec:glass}
The {\it glass} dataset \citep{Lemberge:PLS}
contains spectra with $d=750$ wavelengths of $n=180$
archeological glass samples.
It is available in the R package
{\it cellWise} \citep{Raymaekers:cellWise}.
The MacroPCA method selects 4 principal components
and yields a $180 \times 750$ matrix of
standardized residuals. There is not enough
resolution on a page to show so many individual
cells in a residual map.
Therefore we created a map (the top panel of
Figure \ref{fig:Glass1}) which combines the residuals
into blocks of $5 \times 5$ cells.
The color of each block now depends on the most
frequent type of outlying cell in it, the resulting
color being an average.
For example, an orange block indicates that quite a
few cells in the block were red and most of the
others were yellow.
The more red cells in the block, the darker red the
block will be.
We see that MacroPCA has flagged a lot of cells,
which happen to be concentrated in a minority of the
rows, where they show patterns.
In fact, the colors indicate that some of the glass
samples (between 22 and 30) have a higher
concentration of phosphorus, whereas rows 57--63 and
74--76 have an unusually high concentration of calcium.
The bottom part of the residual map looks very
different,
due to the fact that the measuring instrument was
cleaned before recording the last 38 spectra.
One could say that those outlying rows belong to a
different population.
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\textwidth]
{Glass_MacroPCA_ROBPCA_residualMap_cropped}
\vspace{-1.0cm}
\caption{Residual maps of the glass dataset when
fitting the PCA model by MacroPCA (top) and
ROBPCA (bottom).}
\label{fig:Glass1}
\end{figure}
Since the dataset has no NA's and we found that
fewer than half of the rows are outlying, it can
also be analyzed by the original ROBPCA method as
was done by \cite{Hubert:ROBPCA}, also for $k = 4$.
This detects the same rowwise outliers.
In principle ROBPCA is a purely rowwise method that
does not flag cells.
Even though ROBPCA does not produce a residual map,
we can construct one analogously to that of MacroPCA.
First we construct the residual matrix of ROBPCA,
the rows of which are given by $\bx_i - \bhx_i$
where $\bhx_i$ is the projection of $\bx_i$ on the
ROBPCA subspace.
We then standardize the residuals in each
column by dividing them by a
robust 1-step scale M-estimate.
This yields the bottom panel of
Figure \ref{fig:Glass1}.
We see that the two residual maps look quite
similar.
This example illustrates that purely rowwise robust
methods can be useful to detect cellwise outliers
when these cells occur in fewer than 50\% of the rows.
But if the cellwise outliers contaminate more rows,
this approach is insufficient.
\subsection{DPOSS data}
\label{sec:DPOSS}
In our last example we analyze data from the
Digitized Palomar Sky Survey (DPOSS) described by
\cite{Odewahn:Sky}.
This is a huge database of celestial objects,
from which we have drawn 20,000 stars at random.
Each star has been observed in the color bands
J, F, and N.
Each band has 7 variables.
Three of them measure light intensity:
for the J band they are MAperJ, MTotJ and
MCoreJ where the last letter indicates the band.
The variable AreaJ is the size of the star
based on its number of pixels.
The remaining variables IR2J, csfJ and EllipJ
combine size and shape.
(There were two more variables in the original
data, but these measured the background rather
than the star itself.)
There are substantial correlations between these
21 variables.
\begin{figure}[!ht]
\centering
\vspace{0.3cm}
\begin{tabular}{cc}
\includegraphics[width=0.485\textwidth]
{DPOSS_MacroPCA_loadings_cropped} &
\includegraphics[width=0.485\textwidth]
{DPOSS_MacroPCA_Scores12_cropped}\\
\end{tabular}
\vspace{-0.8cm}
\caption{DPOSS stars data: (left) loadings of the
first (black full line) and the second (blue
dashed line) component of MacroPCA, with vertical
lines separating the three color bands; (right)
plot of
the first two scores, with filled red circles
for stars with high orthogonal distance
$\protect\cod$ and
open black circles for the others.}
\label{fig:DPOSSloadings}
\end{figure}
In this dataset 84.6\% of the rows contain NA's
(in all, 50.2\% of the entries are missing).
Often an entire color band is missing, and
sometimes two.
We applied MacroPCA to these data, choosing $k=4$
components according to the scree plot.
The left panel of Figure \ref{fig:DPOSSloadings}
shows the loadings of the first and second
component.
It appears that the first component captures the
overall negative correlation between two groups
of variables: those measuring light intensity
(the first 3 variables in each band) and the
others (variables 4 to 7 in each band).
The right panel is the corresponding scores plot,
in which the 150 stars with the
highest orthogonal distance $\cod$ are shown in red.
Most of these stand out in the space of PC1 and
PC2 (bad leverage points), whereas some only
have a high $\cod$ (orthogonal outliers).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\textwidth]
{DPOSS_MacroPCA_residualMap_cropped}
\vspace{-0.5cm}
\caption{MacroPCA residual map of stars in the
DPOSS data, with 25 stars per row block.
The six row blocks at the top correspond
to the stars with highest
$\protect\cod$.}
\label{fig:DPOSScellmap}
\end{figure}
Figure \ref{fig:DPOSScellmap} shows the residual
map of MacroPCA, in which each row
block combines 25 stars.
The six rows at the top correspond to the
150 stars with highest $\cod$.
We note that the outliers tend to be more
luminous (MTot) than expected
and have a larger Area, which suggests
giant stars.
The analogous residual map of ICPCA (not shown)
did not reveal much.
Note that the non-outlying rows in the bottom
part of the residual map are yellow, and the
missing color bands show up as blocks in
lighter yellow (a combination of yellow and
white cells).
\section{Conclusions}
\label{sec:Conclusions}
The MacroPCA method is able to handle missing
values, cellwise outliers, and rowwise outliers.
This makes it well-suited for the analysis of
possibly messy real data.
Simulation showed that its performance is similar
to a classical method in the case of outlier-free
data with missing values, and to an existing
robust method when the data only has rowwise
outliers.
The algorithm is fast enough to deal with many
variables, and we intend to speed it up
by recoding it in C.
MacroPCA can analyze new data as they come in,
only making use of its existing output obtained
from the initial dataset.
It imputes missing values in the new data,
flags and imputes outlying cells, and flags
outlying rows.
This computation is fast, so it can be used to
screen new data in quality control or even
online process control. (One can update the
initial fit offline from time to time.)
The advantage of MacroPCA is that it not only
tells us when the process goes out of control,
but also which variables are responsible.
Potential extensions of MacroPCA include methods
of PCA regression and partial least squares
able to deal with rowwise and
cellwise outliers and missing values.
\section{Software Availability}
\label{sec:software}
The R code of MacroPCA, as well as the data
sets and an example script, are available at
{\it https://wis.kuleuven.be/stat/robust/software}.
It will be incorporated in the R package
{\it cellWise} on CRAN.
\section{Introduction}
\label{S_Intro}
This paper establishes asymptotic refinements of the nonparametric iid bootstrap for $t$ tests and confidence intervals (CI's), and for Wald tests and confidence regions, based on the empirical likelihood (EL), exponential tilting (ET), and exponentially tilted empirical likelihood (ETEL) estimators. This is done without recentering the moment function in implementing the bootstrap, a step that has been considered critical for overidentified moment condition models. Moreover, the proposed bootstrap is robust to misspecification: the resulting bootstrap tests and CI's achieve asymptotic refinements for the true parameter when the model is correctly specified, and the same rate of refinements is achieved for the pseudo-true parameter when it is misspecified. This is a new result because the literature contains no formal proof of asymptotic refinements of the bootstrap for EL, ET, or ETEL estimators even under correct specification. In fact, any bootstrap procedure with recentering for these estimators would be inconsistent if the model is misspecified, because recentering imposes the correct model specification in the sample. This paper is motivated by three questions: (i) Why these estimators? (ii) Why bootstrap? (iii) Why care about misspecification?
First of all, EL, ET, and ETEL estimators are used to estimate a finite dimensional parameter characterized by a moment condition model. Traditionally, the generalized method of moments (GMM) estimators of Hansen (1982) have been used to estimate such models. However, it is well known that the two-step GMM may suffer from finite sample bias and inaccurate first-order asymptotic approximation to the finite sample distribution of the estimator when there are many moments, the model is non-linear, or instruments are weak. See Altonji and Segal (1996) and Hansen, Heaton, and Yaron (1996) among others on this matter.
Generalized empirical likelihood (GEL) estimators of Newey and Smith (2004) are alternatives to GMM as they have smaller asymptotic bias. GEL circumvents the estimation of the optimal weight matrix, which has been considered a significant source of the poor finite sample performance of the two-step GMM. GEL includes the EL estimator of Owen (1988, 1990), Qin and Lawless (1994), and Imbens (1997), the ET estimator of Kitamura and Stutzer (1997) and Imbens, Spady, and Johnson (1998), the continuously updating (CU) estimator of Hansen, Heaton, and Yaron (1996), and the minimum Hellinger distance estimator (MHDE) of Kitamura, Otsu, and Evdokimov (2013). Newey and Smith (2004) show that EL has more favorable higher-order asymptotic properties than the other GEL estimators. Although EL is preferable to the other GEL estimators as well as to GMM, its nice properties no longer hold under misspecification. In contrast, ET is often considered robust to misspecification. Schennach (2007) proposes the ETEL estimator, which shares the same higher-order properties with EL under correct specification while possessing the robustness of ET under misspecification. Hence, this paper considers the most widely used estimator, EL, the most robust, ET, and a hybrid of the two, ETEL.\footnote{Precisely speaking, ETEL is not a GEL estimator. However, the analysis is quite similar because it is a combination of two GEL estimators. Therefore, this paper uses the term ``GEL'' to include ETEL as well as EL and ET, to save space and to prevent any confusion.} An extension of the result to other GEL estimators is possible, but is not attempted here in order to keep the argument succinct.
Secondly, many efforts have been made to accurately approximate the finite sample distribution of GMM. These include analytic correction of the GMM standard errors by Windmeijer (2005) and the bootstrap by Hahn (1996), Hall and Horowitz (1996), Andrews (2002), Brown and Newey (2002), Inoue and Shintani (2006), Allen, Gregory, and Shimotsu (2011), and Lee (2014), among others. The bootstrap tests and CI's based on the GMM estimators achieve asymptotic refinements over the first-order asymptotic tests and CI's, meaning that their actual test rejection probability and CI coverage probability have smaller errors than those of the asymptotic tests and CI's. In particular, Lee (2014) applies a similar idea of non-recentering to GMM by using the misspecification-robust variance estimators of Hall and Inoue (2003) to achieve the same sharp rate of refinements as Andrews (2002).
Although GEL estimators are favorable alternatives to GMM, there is little evidence that the finite sample distribution of GEL test statistics is well approximated by the first-order asymptotics. Guggenberger and Hahn (2005) and Guggenberger (2008) find by simulation studies that the first-order asymptotic approximation to the finite sample distribution of EL estimators may be poor. Thus, it is natural to consider bootstrap $t$ tests and CI's based on GEL estimators to improve upon the first-order asymptotic approximation. However, few published papers deal with bootstrapping for GEL. Brown and Newey (2002) and Allen, Gregory, and Shimotsu (2011) employ the EL implied probability in resampling for GMM, but not for GEL. Canay (2010) shows the validity of a bootstrap procedure for the EL ratio statistic in the moment inequality setting. Kundhi and Rilstone (2012) argue that analytical corrections by Edgeworth expansion of the distribution of GEL estimators work well compared to the bootstrap, but they assume correct model specification.
Lastly, the validity of inferences and CI's critically depends on the correctly specified model assumption. Although model misspecification can be asymptotically detected by an overidentifying restrictions test, there is always a possibility that one does not reject a misspecified model or rejects a correctly specified model in finite samples. Moreover, there is a view that all models are misspecified and will be rejected asymptotically. The consequences of model misspecification are twofold: a potentially biased probability limit of the estimator and a different asymptotic variance. The former is called the pseudo-true value, and it is impossible to correct the bias in general. Nevertheless, there are cases in which the pseudo-true values are still the object of interest: see Hansen and Jagannathan (1997), Hellerstein and Imbens (1999), Bravo (2010), and Almeida and Garcia (2012). GEL pseudo-true values are less arbitrary than GMM ones because the latter depend on a weight matrix, which is an arbitrary choice by a researcher. In contrast, each of the GEL pseudo-true values can be interpreted as a unique minimizer of a well-defined discrepancy measure; see, e.g., Schennach (2007).
The asymptotic variance of the estimator, however, can be consistently estimated even under misspecification. If a researcher wants to minimize the consequence of model misspecification, a misspecification-robust variance estimator should be used for $t$ tests or CI's, and for Wald tests and confidence regions. The proposed bootstrap uses the misspecification-robust variance estimator for EL, ET, and ETEL in constructing the $t$ or Wald statistics. This makes the proposed bootstrap robust to misspecification without recentering, and enables researchers to make valid inferences and CI's against unknown misspecification.
The remainder of the paper is organized as follows. Section \ref{S_outline} explains the idea of non-recentering by using a misspecification-robust variance estimator. Section \ref{S_est} defines the estimators and test statistics. Section \ref{S_MRB} describes the nonparametric iid misspecification-robust bootstrap procedure. Section \ref{S_Main} states the assumptions and establishes asymptotic refinements of the misspecification-robust bootstrap. Section \ref{S_MC} presents Monte Carlo experiments. An application to estimate the returns to schooling of Hellerstein and Imbens (1999) is presented in Section \ref{S_HI}. Section \ref{S_Con} concludes the paper. Lemmas and proofs are collected in Appendix A. A longer version of Lemmas and Proofs is available at the author's website.
\section{Asymptotic Refinement without Recentering}
\label{S_outline}
How does the proposed procedure achieve asymptotic refinements without recentering? The key idea is to construct an asymptotically pivotal statistic regardless of misspecification. Bootstrapping an asymptotically pivotal statistic is critical to get asymptotic refinements of the bootstrap (Beran, 1988; Hall, 1992; Horowitz, 2001).
Suppose that $\chi_{n}=\{X_{i}:i\leq n\}$ is an iid sample. Let $F$ be the corresponding cumulative distribution function (cdf). Let $\theta$ be a parameter of interest and $g(X_{i},\theta)$ be a moment function. The moment condition model is correctly specified if the hypothesis $H_{C}: Eg(X_{i},\theta_{0}) = 0$ holds for a unique $\theta_{0}$.\footnote{This definition is from Hall and Inoue (2003) and assumes point identification.} The hypothesis of interest is $H_{0}:\theta = \theta_{0}.$ The usual $t$ statistic $T_{C}$ is asymptotically standard normal under $H_{0}$ \textit{and} $H_{C}$.
Now define the bootstrap sample. Let $\chi_{n_{b}}^{*}=\{X_{i}^{*}:i\leq n_{b}\}$ be a random draw with replacement from $\chi_{n}$ according to the empirical distribution function (edf) $F_{n}$. In this section, I distinguish the sample size $n$ and the bootstrap sample size $n_{b}$, following Bickel and Freedman (1981). The bootstrap versions of $H_{C}$ and $H_{0}$ are $H_{C}^{*}: E^{*}g(X_{i}^{*},\hat{\theta})=0$ and $H_{0}^{*}: \theta=\hat{\theta}$, where $E^{*}$ is the expectation taken over the bootstrap sample and $\hat{\theta}$ is a GEL estimator. Note that $\hat{\theta}$ is considered as the true value in the bootstrap world. The bootstrap version of the usual $t$ statistic $T_{C}^{*}$, however, is not asymptotically pivotal conditional on the sample because $H_{C}^{*}$ is not satisfied in the sample if the model is overidentified: $E^{*}g(X_{i}^{*},\hat{\theta}) = n^{-1}\sum_{i=1}^{n}g(X_{i},\hat{\theta}) \neq 0.$ Thus, Hall and Horowitz (1996), Andrews (2002), and Brown and Newey (2002) recenter the bootstrap version of the moment function to satisfy $H_{C}^{*}$. The resulting $t$ statistic based on the recentered moment function, $T_{C,R}^{*}$, tends to the standard normal distribution as $n_{b}$ grows conditional on the sample almost surely, and asymptotic refinements of the bootstrap are achieved.
This paper takes a different approach. Instead of jointly testing $H_{C}$ and $H_{0}$, I focus solely on $H_{0}$, allowing for the possibility that $H_{C}$ does not hold. If the model is misspecified, then there is no $\theta$ that satisfies $H_{C}$, i.e. $Eg(X_{i},\theta) \neq 0, \forall \theta\in\Theta,$ where $\Theta$ is a compact parameter space. This may happen if the model is overidentified. Since there is no true value, a pseudo-true value $\theta_{0}$ must be defined. Instead of through $H_{C}$, $\theta_{0}$ is defined as the unique minimizer of the population version of the empirical discrepancy used in the estimation. For EL, this discrepancy is the Kullback-Leibler Information Criterion (KLIC). For ET, the pseudo-true value maximizes a quantity known as entropy. This definition is more flexible since it includes correct specification as a special case in which $H_{C}$ holds at $\theta_{0}$. Without assuming $H_{C}$, we can find regularity conditions for $\sqrt{n}$-consistency and asymptotic normality of $\hat{\theta}$ for the pseudo-true value $\theta_{0}$. Under misspecification and suitable regularity conditions, as the sample size grows,
\begin{equation}
\sqrt{n}(\hat{\theta}-\theta_{0})\rightarrow_{d}N(0,\Sigma_{MR}).
\end{equation}
The asymptotic variance matrix $\Sigma_{MR}$ is different from the standard asymptotic variance matrix, but it coincides with the standard one under correct specification. $\Sigma_{MR}$ can be consistently estimated using the formula given in the next section. Let $\hat{\Sigma}_{MR}$ be a consistent estimator for $\Sigma_{MR}$. The misspecification-robust $t$ statistic $T_{MR}$ is studentized with $\hat{\Sigma}_{MR}$. Thus, $T_{MR}$ is asymptotically standard normal under $H_{0}$, without assuming $H_{C}$.
Similarly, we construct the bootstrap version of the $t$ statistic using the same formula as the sample misspecification-robust $t$ statistic. Conditional on the sample almost surely, $T_{MR}^{*}$ tends to the standard normal distribution as $n_{b}$ grows under $H_{0}^{*}$. Since the conditional asymptotic distribution does not depend on $H_{C}^{*}$, we need not recenter the bootstrap moment function to satisfy $H_{C}^{*}$. In other words, the misspecification-robust $t$ statistic $T_{MR}$ is asymptotically pivotal under $H_{0}$, while the usual $t$ statistic $T_{C}$ is asymptotically pivotal under $H_{0}$ \textit{and} $H_{C}$. This paper develops a theory for bootstrapping $T_{MR}$, instead of $T_{C}$. Note that both can be used to test the null hypothesis $H_{0}:\theta=\theta_{0}$ under correct specification. Under misspecification, however, only $T_{MR}$ can be used to test $H_{0}$ because $T_{C}$ is not asymptotically pivotal. This is useful when the pseudo-true value is an interesting object even if the model is misspecified.
To find the formula for $\Sigma_{MR}$, I use a just-identified system of the first-order conditions (FOC's) of the EL, ET, and ETEL estimators. This idea is not new: Schennach (2007) uses it to find the asymptotic variance matrix of the ETEL estimator robust to misspecification, and for GMM estimators, the idea of rewriting the overidentified GMM as a just-identified system appears in Imbens (1997, 2002) and Chamberlain and Imbens (2003).
A natural question is whether we can use GEL implied probabilities to construct the cdf estimator $\hat{F}$ and use it instead of the edf $F_{n}$ in resampling. This is possible when the moment condition is correctly specified. For instance, Brown and Newey (2002) argue that using the EL-estimated cdf $\hat{F}_{EL}(z)\equiv\sum_{i}\mathbf{1}(X_{i}\leq z)p_{i}$, where $p_{i}$ is the EL implied probability, in place of the edf $F_{n}$ in resampling would improve efficiency of bootstrapping for GMM. Their argument relies on the fact that $\hat{F}_{EL}$ is an efficient estimator of the true cdf $F$. If the moment condition is misspecified, however, then the cdf estimator based on the implied probability is inconsistent for $F$ because $E_{\hat{F}}g(X_{i},\hat{\theta})=\sum_{i}p_{i}g(X_{i},\hat{\theta})=0$ holds even in large samples, while $Eg(X_{i},\theta_{0})\neq 0$.\footnote{Bootstrapping the EL ratio test statistics of Owen (1988, 1990) under misspecification may not be a good idea for this reason, because the EL likelihood function is a product of EL implied probabilities that are inconsistent.} In contrast, the edf $F_{n}$ is uniformly consistent for $F$ regardless of whether the moment condition holds, by the Glivenko--Cantelli theorem. For this reason, I mainly focus on resampling from $F_{n}$ rather than $\hat{F}$ in this paper.\footnote{However, a shrinkage-type cdf estimator combining $F_{n}$ and $\hat{F}$, similar to Antoine, Bonnal, and Renault (2007), can be used to improve both robustness and efficiency. For example, a shrinkage that has the form
$\pi_{i} = \epsilon_{n}\cdot p_{i} + (1-\epsilon_{n})\cdot n^{-1}$, where $\epsilon_{n}\rightarrow 0$ as $n$ grows, would work with the proposed misspecification-robust bootstrap because $E_{\pi}g(X_{i},\hat{\theta}) = (1-\epsilon_{n})n^{-1}\sum_{i}g(X_{i},\hat{\theta}) \neq 0,$ where the expectation is taken with respect to $\hat{F}_{\pi}(z)\equiv\sum_{i}\mathbf{1}(X_{i}\leq z)\pi_{i}$.}
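The footnoted shrinkage scheme is easy to verify numerically. A minimal Python sketch follows; the implied probabilities $p_{i}$ below are hypothetical placeholders drawn from a Dirichlet distribution, not estimated from any model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# hypothetical implied probabilities p_i; any point in the simplex serves the illustration
p = rng.dirichlet(np.ones(n))
eps_n = n ** (-0.5)                   # epsilon_n -> 0 as n grows
pi = eps_n * p + (1.0 - eps_n) / n    # shrink the implied probabilities toward the edf weights 1/n
```

Since $|\pi_{i}-n^{-1}|\leq\epsilon_{n}$, the shrinkage weights approach the edf weights as $n$ grows, so resampling from $\hat{F}_{\pi}$ inherits the robustness of resampling from $F_{n}$.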
\section{Estimators and Test Statistics}
\label{S_est}
Let $g(X_{i},\theta)$ be an $L_{g}\times 1$ moment function, where $\theta\in\Theta\subset \mathbf{R}^{L_{\theta}}$ is a parameter of interest and $L_{g}\geq L_{\theta}$. Let $G^{(j)}(X_{i},\theta)$ denote the vector of partial derivatives with respect to $\theta$ of order $j$ of $g(X_{i},\theta)$. In particular, $G^{(1)}(X_{i},\theta)\equiv G(X_{i},\theta)\equiv(\partial/\partial\theta')g(X_{i},\theta)$ is an $L_{g}\times L_{\theta}$ matrix and $G^{(2)}(X_{i},\theta)\equiv(\partial/\partial\theta')vec\{G(X_{i},\theta)\}$ is an $L_{g}L_{\theta}\times L_{\theta}$ matrix, where $vec\{\cdot\}$ denotes the vectorization of a matrix. To simplify notation, write $g_{i}(\theta)=g(X_{i},\theta)$, $G_{i}^{(j)}(\theta) = G^{(j)}(X_{i},\theta)$, $\hat{g}_{i}=g(X_{i},\hat{\theta})$, and $\hat{G}_{i}^{(j)}=G^{(j)}(X_{i},\hat{\theta})$ for $j=1,...,d+1$, where $\hat{\theta}$ is the EL, ET, or ETEL estimator. In addition, let $g_{i0}=g_{i}(\theta_{0})$ and $G_{i0}=G_{i}(\theta_{0})$, where $\theta_{0}$ is the (pseudo-)true value.
\subsection{Empirical Likelihood and Exponential Tilting Estimators}
Following the notation of Newey and Smith (2004) and Anatolyev (2005), let $\rho(\nu)$ be a concave function in a scalar $\nu$ on the domain that contains zero. For EL, $\rho(\nu)=\log(1-\nu)$ for $\nu\in(-\infty,1)$. For ET, $\rho(\nu) = 1-e^{\nu}$ for $\nu\in\mathbf{R}$. In addition, let $\rho_{j}(\nu)=\partial^{j}\rho(\nu)/\partial\nu^{j}$ for $j=0,1,2,\cdots$.
The EL or the ET estimator, $\hat{\theta}$, and the corresponding Lagrange multiplier, $\hat{\lambda}$, solve a saddle point problem
\begin{equation}
\min_{\theta\in\Theta}\max_{\lambda}n^{-1}\sum_{i=1}^{n}\rho(\lambda'g_{i}(\theta)).
\label{GEL}
\end{equation}
The FOC's for $(\hat{\theta},\hat{\lambda})$ are
\begin{equation} \underset{L_{\theta}\times 1}{0}=n^{-1}\sum_{i=1}^{n}\rho_{1}(\hat{\lambda}'\hat{g}_{i})\hat{G}_{i}'\hat{\lambda},\hspace{1em}\underset{L_{g}\times 1}{0}=n^{-1}\sum_{i=1}^{n}\rho_{1}(\hat{\lambda}'\hat{g}_{i})\hat{g}_{i}.
\end{equation}
A useful by-product of the estimation is the implied probabilities. The EL and the ET implied probabilities for the observations are, for $i=1,...,n$,
\begin{equation}
\text{EL: } p_{i} = \frac{1}{n(1-\hat{\lambda}'\hat{g}_{i})},\hspace{1em}\text{ET: } p_{i} = \frac{e^{\hat{\lambda}'\hat{g}_{i}}}{\sum_{j=1}^{n}e^{\hat{\lambda}'\hat{g}_{j}}}.
\end{equation}
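To make the saddle-point problem \eqref{GEL} concrete, the following minimal Python sketch estimates ET for a toy overidentified model (two moments for a normal mean). The model, data, and SciPy-based optimization are illustrative assumptions of this sketch, not part of the paper's procedure: the inner maximization over $\lambda$ with $\rho(\nu)=1-e^{\nu}$ reduces to minimizing $n^{-1}\sum_{i}e^{\lambda'g_{i}(\theta)}$, and the implied probabilities follow the displayed formula.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=500)

def g(theta):
    # toy overidentified moments for x ~ N(theta, 1): E[x - theta] = 0, E[x^2 - theta^2 - 1] = 0
    return np.column_stack([x - theta, x**2 - theta**2 - 1.0])

def inner(theta):
    # inner ET problem: max_lambda n^{-1} sum rho(lambda'g_i) with rho(v) = 1 - e^v,
    # which is equivalent to minimizing the mean of e^{lambda'g_i}
    gi = g(theta)
    obj = lambda lam: np.mean(np.exp(np.clip(gi @ lam, -30.0, 30.0)))  # clip guards the line search
    res = minimize(obj, np.zeros(2), method="BFGS")
    return res.x, res.fun

# outer problem: the saddle value is 1 - min_lambda mean(e^{lambda'g_i}),
# so theta_hat maximizes the inner minimum over theta
theta_hat = minimize_scalar(lambda th: -inner(th)[1], bounds=(-5.0, 5.0), method="bounded").x
lam_hat, _ = inner(theta_hat)
w = np.exp(g(theta_hat) @ lam_hat)
p = w / w.sum()  # ET implied probabilities, matching the displayed formula
```

At the solution the inner FOC $\sum_{i}p_{i}g_{i}(\hat{\theta})=0$ holds up to optimizer tolerance, which is exactly the property exploited (and criticized under misspecification) in Section \ref{S_outline}.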
The FOC's hold regardless of model misspecification and form a just-identified moment condition. Let $\psi(X_{i},\beta)$ be a $(L_{\theta}+L_{g})\times 1$ vector such that
\begin{equation}
\psi(X_{i},\beta)\equiv\left[
\begin{array}{c}
\psi_{1}(X_{i},\beta) \\
\psi_{2}(X_{i},\beta) \\
\end{array}
\right]
=\left[
\begin{array}{c}
\rho_{1}(\lambda'g_{i}(\theta))G_{i}(\theta)'\lambda \\
\rho_{1}(\lambda'g_{i}(\theta))g_{i}(\theta) \\
\end{array}
\right].
\label{GEL_EE}
\end{equation}
Then, the EL or the ET estimator and the corresponding Lagrange multiplier denoted by an augmented vector, $\hat{\beta}=(\hat{\theta}',\hat{\lambda}')'$, solves $n^{-1}\sum_{i=1}^{n}\psi(X_{i},\hat{\beta})=0$. In the limit, the pseudo-true value $\beta_{0}=(\theta_{0}',\lambda_{0}')'$ solves the population version of the FOC's:
\begin{equation} \underset{L_{\theta}\times 1}{0}=E\rho_{1}(\lambda_{0}'g_{i0})G_{i0}'\lambda_{0},\hspace{1em}\underset{L_{g}\times 1}{0}=E\rho_{1}(\lambda_{0}'g_{i0})g_{i0}.
\end{equation}
The asymptotic distribution of $\hat{\beta}=(\hat{\theta}',\hat{\lambda}')'$ can be derived by using standard asymptotic theory of just-identified GMM, e.g. Newey and McFadden (1994).
For EL, Chen, Hong, and Shum (2007) provide regularity conditions for $\sqrt{n}$-consistency and asymptotic normality under misspecification. In particular, they assume that the moment function is uniformly bounded:
\begin{equation}
\textbf{UBC: } \sup_{\theta\in\Theta,x\in\chi}\|g(x,\theta)\|<\infty \hspace{0.5em}\text{ and } \inf_{\theta\in\Theta,\lambda\in\Lambda(\theta),x\in\chi}(1-\lambda'g(x,\theta))>0,
\label{UBC}
\end{equation}
where $\Theta$ and $\Lambda(\theta)$ are compact sets and $\chi$ is the support of $X_{1}$. Schennach (2007) shows that the EL estimator is no longer $\sqrt{n}$-consistent if the moment function is unbounded for any $\theta$. Nevertheless, if the data is bounded or the moment function is constructed to satisfy UBC, then the EL estimator would be $\sqrt{n}$-consistent for the pseudo-true value and the bootstrap can be implemented. For ET, UBC is not required.
Assuming regularity conditions such as Assumption 3 of Chen, Hong, and Shum (2007) for EL and Assumption 3 of Schennach (2007) for ET,\footnote{Schennach's assumptions are for ETEL but can be easily modified for ET. First, Assumption 3(2) needs to be replaced with the ET saddle-point problem. In addition, we only require $k_{2}=0,1,2$ instead of $k_{2}=0,1,2,3,4$ in Assumption 3(6).} we have the following proposition:
\begin{proposition}
Let $\hat{\beta}=(\hat{\theta}',\hat{\lambda}')'$ be either the EL or the ET estimator and its Lagrange multiplier, and $\beta_{0}=(\theta_{0}',\lambda_{0}')'$ be the corresponding pseudo-true value. Then,
\[\sqrt{n}(\hat{\beta}-\beta_{0})\rightarrow_{d}N(0,\Gamma^{-1}\Psi(\Gamma')^{-1}),\]
where $\Gamma = E(\partial/\partial\beta')\psi(X_{i},\beta_{0})$ and $\Psi=E\psi(X_{i},\beta_{0})\psi(X_{i},\beta_{0})'$.
\label{PropEL}
\end{proposition}
The Jacobian matrix for EL or ET is given by
\begin{equation}
\frac{\partial\psi(X_{i},\beta)}{\partial\beta'}=\left[
\begin{array}{cc}
(\partial/\partial\theta')\psi_{1}(X_{i},\beta) & (\partial/\partial\lambda')\psi_{1}(X_{i},\beta) \\
(\partial/\partial\theta')\psi_{2}(X_{i},\beta) & (\partial/\partial\lambda')\psi_{2}(X_{i},\beta) \\
\end{array}
\right],
\end{equation}
where
\begin{eqnarray}
\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\theta'} &=& \rho_{1}(\lambda'g_{i}(\theta))(\lambda'\otimes I_{L_{\theta}})G_{i}^{(2)}(\theta) + \rho_{2}(\lambda'g_{i}(\theta))G_{i}(\theta)'\lambda\lambda'G_{i}(\theta),\\
\nonumber \frac{\partial\psi_{1}(X_{i},\beta)}{\partial\lambda'} &=& \frac{\partial\psi_{2}(X_{i},\beta)}{\partial\theta} = \rho_{1}(\lambda'g_{i}(\theta))G_{i}(\theta)' + \rho_{2}(\lambda'g_{i}(\theta))G_{i}(\theta)'\lambda g_{i}(\theta)',\\
\nonumber \frac{\partial\psi_{2}(X_{i},\beta)}{\partial\lambda'} &=& \rho_{2}(\lambda'g_{i}(\theta))g_{i}(\theta)g_{i}(\theta)'.
\end{eqnarray}
$\Gamma$ and $\Psi$ can be estimated by
\begin{equation}
\hat{\Gamma}=n^{-1}\sum_{i=1}^{n}\frac{\partial\psi(X_{i},\hat{\beta})}{\partial\beta'}\hspace{0.5em}\text{ and }\hspace{0.5em}\hat{\Psi}=n^{-1}\sum_{i=1}^{n}\psi(X_{i},\hat{\beta})\psi(X_{i},\hat{\beta})',
\label{VAREST}
\end{equation}
respectively. The upper left $L_{\theta}\times L_{\theta}$ submatrix of $\Gamma^{-1}\Psi(\Gamma')^{-1}$, denoted by $\Sigma_{MR}$, is the asymptotic variance matrix of $\sqrt{n}(\hat{\theta}-\theta_{0})$. This matrix coincides with the usual asymptotic variance matrix $\Sigma_{C}=(EG_{i0}'(Eg_{i0}g_{i0}')^{-1}EG_{i0})^{-1}$ under correct specification. Let $\hat{\Sigma}_{MR}$ be the corresponding submatrix of the variance estimator $\hat{\Gamma}^{-1}\hat{\Psi}(\hat{\Gamma}')^{-1}$. Even under correct specification, $\hat{\Sigma}_{MR}$ is different from $\hat{\Sigma}_{C}$, the conventional variance estimator, because $\hat{\Sigma}_{MR}$ contains additional terms which are assumed away in $\hat{\Sigma}_{C}$.
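As a computational sketch, $\hat{\Gamma}$, $\hat{\Psi}$, and the sandwich $\hat{\Gamma}^{-1}\hat{\Psi}(\hat{\Gamma}')^{-1}$ in \eqref{VAREST} can be assembled as follows in Python. A central-difference Jacobian stands in for the analytic derivative formulas above, and the generic \texttt{psi} argument is any stacked just-identified FOC system; this is a minimal illustration, not the paper's implementation.

```python
import numpy as np

def sandwich_variance(psi, beta_hat, X, eps=1e-6):
    """Misspecification-robust sandwich Gamma^{-1} Psi (Gamma')^{-1} from the
    stacked just-identified FOCs psi(X_i, beta). Gamma_hat is the sample mean
    of a central-difference Jacobian; Psi_hat is the mean outer product."""
    n = len(X)
    P = np.array([psi(X[i], beta_hat) for i in range(n)])
    Psi = P.T @ P / n                                  # mean of psi psi'
    k = beta_hat.size
    Gamma = np.zeros((k, k))
    for j in range(k):
        b_up, b_dn = beta_hat.copy(), beta_hat.copy()
        b_up[j] += eps
        b_dn[j] -= eps
        Gamma[:, j] = np.mean(
            [(psi(X[i], b_up) - psi(X[i], b_dn)) / (2 * eps) for i in range(n)],
            axis=0,
        )
    Ginv = np.linalg.inv(Gamma)
    return Ginv @ Psi @ Ginv.T
```

A just-identified sanity check: with $\psi(x,\beta)=x-\beta$ (the sample mean), $\Gamma=-1$ and $\Psi$ is the sample variance, so the sandwich reduces to the usual variance of the mean.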
\subsection{Exponentially Tilted Empirical Likelihood Estimator}
Schennach (2007) proposes the ETEL estimator, which is robust to misspecification without UBC while maintaining the same nice higher-order properties as EL under correct specification. The ETEL estimator and the Lagrange multiplier $(\hat{\theta},\hat{\lambda})$ solve
\begin{equation}
\argmin_{\theta\in\Theta}-n^{-1}\sum_{i=1}^{n}\log\big(n\hat{w}_{i}(\theta)\big),\hspace{1em}\hat{w}_{i}(\theta)=\frac{e^{\hat{\lambda}(\theta)'g_{i}(\theta)}}{\sum_{j=1}^{n}e^{\hat{\lambda}(\theta)'g_{j}(\theta)}},
\label{ETEL}
\end{equation}
where $\hat{\lambda}\equiv \hat{\lambda}(\hat{\theta})$ and
\begin{equation}
\hat{\lambda}(\theta)=\argmax_{\lambda}-n^{-1}\sum_{i=1}^{n}e^{\lambda'g_{i}(\theta)}.
\end{equation}
This estimator is a hybrid of the EL estimator and the ET implied probability. Equivalently, the ETEL estimator $\hat{\theta}$ minimizes the objective function
\begin{equation}
\hat{l}_{n}(\theta) = \log \left(n^{-1}\sum_{i=1}^{n}e^{\hat{\lambda}(\theta)'(g_{i}(\theta)-\bar{g}_{n}(\theta))}\right),
\label{ETEL1}
\end{equation}
where $\bar{g}_{n}(\theta)=n^{-1}\sum_{i=1}^{n}g_{i}(\theta)$. To derive the asymptotic distribution of the ETEL estimator, Schennach introduces auxiliary parameters to formulate the problem as a just-identified GMM. Let $\beta = (\theta',\lambda',\kappa',\tau)'$, where $\kappa\in\mathbf{R}^{L_{g}}$ and $\tau\in\mathbf{R}$. By Lemma 9 of Schennach (2007), the ETEL estimator $\hat{\theta}$ is given by the subvector of $\hat{\beta}=(\hat{\theta}',\hat{\lambda}',\hat{\kappa}',\hat{\tau})'$ that solves
\begin{eqnarray}
n^{-1}\sum_{i=1}^{n}\psi(X_{i},\hat{\beta})=0,
\label{OBJ}
\end{eqnarray}
where
\begin{equation}
\psi(X_{i},\beta)\equiv\left[
\begin{array}{c}
\psi_{1}(X_{i},\beta) \\
\psi_{2}(X_{i},\beta) \\
\psi_{3}(X_{i},\beta) \\
\psi_{4}(X_{i},\beta) \\
\end{array}
\right]
=\left[
\begin{array}{c}
e^{\lambda'g_{i}(\theta)}G_{i}(\theta)'\left(\kappa + \lambda g_{i}(\theta)'\kappa - \lambda\right) + \tau G_{i}(\theta)'\lambda \\
(\tau-e^{\lambda'g_{i}(\theta)})\cdot g_{i}(\theta) + e^{\lambda'g_{i}(\theta)}\cdot g_{i}(\theta)g_{i}(\theta)'\kappa \\
e^{\lambda'g_{i}(\theta)}\cdot g_{i}(\theta) \\
e^{\lambda'g_{i}(\theta)}-\tau \\
\end{array}
\right].
\label{ETEL_EE}
\end{equation}
Note that the estimators of the auxiliary parameters, $\hat{\kappa}$ and $\hat{\tau}$, are given by
\begin{equation}
\hat{\tau} = n^{-1}\sum_{i=1}^{n}e^{\hat{\lambda}'\hat{g}_{i}}\hspace{0.5em}\text{and}\hspace{0.5em}\hat{\kappa} = -\left(n^{-1}\sum_{i=1}^{n}\frac{e^{\hat{\lambda}'\hat{g}_{i}}}{\hat{\tau}}\hat{g}_{i}\hat{g}_{i}'\right)^{-1}\hat{\bar{g}}_{n},
\end{equation}
where $\hat{\bar{g}}_{n}=n^{-1}\sum_{i=1}^{n}\hat{g}_{i}$. The probability limit of $\hat{\beta}$ is the pseudo-true value $\beta_{0}= (\theta_{0}',\lambda_{0}',\kappa_{0}',\tau_{0})'$ that solves $E\psi(X_{i},\beta_{0})=0$. In particular, a function $\lambda_{0}(\theta)$ is the solution to $Ee^{\lambda'g_{i}(\theta)}g_{i}(\theta)=0$, where $\lambda_{0}\equiv\lambda_{0}(\theta_{0})$ and $\theta_{0}$ is a unique minimizer of the population objective function:
\begin{equation}
l_{0}(\theta) = \log \left(Ee^{\lambda_{0}(\theta)'(g_{i}(\theta)-Eg_{i}(\theta))}\right).
\end{equation}
By Theorem 10 of Schennach,
\begin{equation}
\sqrt{n}(\hat{\beta}-\beta_{0})\rightarrow_{d}N(0,\Gamma^{-1}\Psi(\Gamma')^{-1}),
\end{equation}
where $\Gamma = E(\partial/\partial\beta')\psi(X_{i},\beta_{0})$ and $\Psi=E\psi(X_{i},\beta_{0})\psi(X_{i},\beta_{0})'$.
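Returning to the auxiliary parameters, the closed-form expressions for $\hat{\tau}$ and $\hat{\kappa}$ displayed above translate directly into code. A minimal Python sketch, with hypothetical inputs $g$ (an $n\times L_{g}$ matrix of moment evaluations) and $\lambda$:

```python
import numpy as np

def etel_aux(g, lam):
    """Auxiliary ETEL parameter estimates: tau_hat = mean of e^{lam'g_i} and
    kappa_hat = -(mean of (e^{lam'g_i}/tau_hat) g_i g_i')^{-1} times gbar,
    following the displayed formulas."""
    w = np.exp(g @ lam)                                # e^{lam'g_i}, i = 1,...,n
    tau = w.mean()
    gbar = g.mean(axis=0)
    M = (g * (w / tau)[:, None]).T @ g / g.shape[0]    # mean of (e^{lam'g_i}/tau) g_i g_i'
    kappa = -np.linalg.solve(M, gbar)
    return tau, kappa
```

At $\lambda=0$ the weights are flat, so $\hat{\tau}=1$ and $\hat{\kappa}$ reduces to $-(n^{-1}\sum_{i}g_{i}g_{i}')^{-1}\bar{g}_{n}$, a convenient check on the formulas.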
$\Gamma$ and $\Psi$ are estimated by the same formulas as in \eqref{VAREST}. In order to estimate $\Gamma$, we need a formula for $(\partial/\partial\beta')\psi(X_{i},\beta)$. The partial derivative of $\psi_{1}(X_{i},\beta)$ is given by
\begin{equation}
\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\beta'}=\left(
\begin{array}{cccc}
\underset{L_{\theta}\times L_{\theta}}{\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\theta'}}& \underset{L_{\theta}\times L_{g}}{\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\lambda'}} &\underset{L_{\theta}\times L_{g}}{\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\kappa'}}& \underset{L_{\theta}\times 1}{\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\tau}}\\
\end{array}
\right),
\end{equation}
where
{
\allowdisplaybreaks
\begin{eqnarray}
\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\theta'} &=& e^{\lambda'g_{i}(\theta)}\left\{G_{i}(\theta)'(\kappa\lambda'+\lambda\kappa'+\lambda g_{i}(\theta)'\kappa\lambda'-\lambda\lambda')G_{i}(\theta) \right.\\
\nonumber && \left. + ((\kappa'+\kappa'g_{i}(\theta)\lambda'-\lambda')\otimes I_{L_{\theta}})G_{i}^{(2)}(\theta)\right\} + \tau(\lambda'\otimes I_{L_{\theta}})G_{i}^{(2)}(\theta),\\
\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\lambda'} &=& e^{\lambda'g_{i}(\theta)}G_{i}(\theta)'\left\{( \lambda g_{i}(\theta)'\kappa+\kappa-\lambda) g_{i}(\theta)' + (g_{i}(\theta)'\kappa-1)I_{L_{g}}\right\}\\
\nonumber && +\tau G_{i}(\theta)',\\
\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\kappa'} &=& e^{\lambda'g_{i}(\theta)}G_{i}(\theta)'(I_{L_{g}}+\lambda g_{i}(\theta)'),\\
\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\tau} &=& G_{i}(\theta)'\lambda.
\end{eqnarray}
}
The partial derivative of $\psi_{2}(X_{i},\beta)$ is given by
\begin{equation}
\frac{\partial\psi_{2}(X_{i},\beta)}{\partial\beta'}=\left(
\begin{array}{cccc}
\underset{L_{g}\times L_{\theta}}{\frac{\partial\psi_{2}(X_{i},\beta)}{\partial\theta'}} &\underset{L_{g}\times L_{g}}{\frac{\partial\psi_{2}(X_{i},\beta)}{\partial\lambda'}} & \underset{L_{g}\times L_{g}}{e^{\lambda'g_{i}(\theta)}g_{i}(\theta)g_{i}(\theta)'} & \underset{L_{g}\times 1}{g_{i}(\theta)}
\end{array}
\right),
\end{equation}
where
\begin{eqnarray}
\frac{\partial\psi_{2}(X_{i},\beta)}{\partial\theta'}&=&\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\lambda},\\
\frac{\partial\psi_{2}(X_{i},\beta)}{\partial\lambda'}&=&e^{\lambda'g_{i}(\theta)}g_{i}(\theta)g_{i}(\theta)'(\kappa g_{i}(\theta)'-I_{L_{g}}).
\end{eqnarray}
The partial derivative of $\psi_{3}(X_{i},\beta)$ is given by
\begin{equation}
\frac{\partial\psi_{3}(X_{i},\beta)}{\partial\beta'}=\left(
\begin{array}{cccc}
\underset{L_{g}\times L_{\theta}}{\frac{\partial\psi_{1}(X_{i},\beta)}{\partial\kappa}} & \underset{L_{g}\times L_{g}}{e^{\lambda'g_{i}(\theta)}g_{i}(\theta)g_{i}(\theta)'} &\underset{L_{g}\times L_{g}}{\mathbf{0}} &\underset{L_{g}\times 1}{\mathbf{0}} \\
\end{array}
\right),
\end{equation}
and the partial derivative of $\psi_{4}(X_{i},\beta)$ is given by
\begin{equation}
\frac{\partial\psi_{4}(X_{i},\beta)}{\partial\beta'}=\left(
\begin{array}{cccc}
\underset{1\times L_{\theta}}{e^{\lambda'g_{i}(\theta)}\lambda'G_{i}(\theta)} & \underset{1\times L_{g}}{e^{\lambda'g_{i}(\theta)}g_{i}(\theta)'} & \underset{1\times L_{g}}{\mathbf{0}} & \underset{1\times 1}{-1} \\
\end{array}
\right).
\end{equation}
The upper left $L_{\theta}\times L_{\theta}$ submatrix of $\Gamma^{-1}\Psi(\Gamma')^{-1}$, denoted by $\Sigma_{MR}$, is the asymptotic variance matrix of $\sqrt{n}(\hat{\theta}-\theta_{0})$. Let $\hat{\Sigma}_{MR}$ be the corresponding submatrix of the variance estimator $\hat{\Gamma}^{-1}\hat{\Psi}(\hat{\Gamma}')^{-1}$. Again, $\Sigma_{MR}$ is different from $\Sigma_{C}$ in general under misspecification, but they become identical under correct specification.\footnote{Under correct specification, the asymptotic variance matrix $\Sigma_{C}$ is the same for EL, ET, and ETEL, which is the asymptotic variance matrix of the two-step efficient GMM.}
\subsection{Test statistics}
Let $\hat{\theta}$ be either the EL, the ET, or the ETEL estimator and let $\hat{\Sigma}_{MR}$ be the corresponding variance matrix estimator. Let $\theta_{r}$, $\theta_{0,r}$, and $\hat{\theta}_{r}$ denote the $r$th elements of $\theta$, $\theta_{0}$, and $\hat{\theta}$ respectively. Let $\hat{\Sigma}_{MR,r}$ denote the $r$th diagonal element of $\hat{\Sigma}_{MR}$. The $t$ statistic for testing the null hypothesis $H_{0}:\theta_{r}=\theta_{0,r}$ is
\begin{equation}
T_{MR} = \frac{\hat{\theta}_{r}-\theta_{0,r}}{\sqrt{\hat{\Sigma}_{MR,r}/n}}.
\end{equation}
Since $T_{MR}$ is studentized with the misspecification-robust variance estimator $\hat{\Sigma}_{MR,r}$, it has an asymptotic $N(0,1)$ distribution under $H_{0}$, without assuming the correct model, $H_{C}$. This is the source of achieving asymptotic refinements without recentering regardless of misspecification. In contrast, the usual $t$ statistic $T_{C}$ is studentized with $\hat{\Sigma}_{C}$, a non-robust variance estimator. Hence, it is not asymptotically pivotal if the model is misspecified. Note that the only difference between $T_{MR}$ and $T_{C}$ is the variance estimator.
We also consider the Wald statistic for multivariate tests and confidence regions. Let $\eta(\theta)$ be an $\mathbf{R}^{L_{\eta}}$-valued function that is continuously differentiable at $\theta_{0}$. The Wald statistic for testing $H_{0}:\eta(\theta_{0})=0$ against $H_{1}:\eta(\theta_{0})\neq0$ is
\begin{equation}
\label{wald}
\mathcal{W}_{MR}=n\cdot \eta(\hat{\theta})'\left(\frac{\partial}{\partial\theta'}\eta(\hat{\theta})\hat{\Sigma}_{MR}\left(\frac{\partial}{\partial\theta'}\eta(\hat{\theta})\right)'\right)^{-1}\eta(\hat{\theta}).
\end{equation}
This Wald statistic is different from the conventional one because $\hat{\Sigma}_{MR}$ is used. Thus, its asymptotic distribution is a chi-square distribution with $L_{\eta}$ degrees of freedom, denoted by $\chi^{2}_{L_{\eta}}$, regardless of misspecification.
Both one-sided and two-sided $t$ tests with asymptotic significance level $\alpha$ and CI's with asymptotic confidence level $1-\alpha$ are considered. The asymptotic one-sided $t$ test of $H_{0}: \theta_{r}\leq\theta_{0,r}$ against $H_{1}:\theta_{r}>\theta_{0,r}$ rejects $H_{0}$ if $T_{MR}>z_{\alpha}$, where $z_{\alpha}$ is the $1-\alpha$ quantile of the standard normal distribution. This one-sided test corresponds to the lower endpoint one-sided CI, $[\hat{\theta}_{r}- z_{\alpha}\sqrt{\hat{\Sigma}_{MR,r}/n},\infty)$. The asymptotic two-sided $t$ test of $H_{0}: \theta_{r}=\theta_{0,r}$ against $H_{1}:\theta_{r}\neq\theta_{0,r}$ rejects $H_{0}$ if $|T_{MR}|>z_{\alpha/2}$. The two-sided asymptotic CI is $[\hat{\theta}_{r}\pm z_{\alpha/2}\sqrt{\hat{\Sigma}_{MR,r}/n}]$. The asymptotic Wald test of $H_{0}:\eta(\theta_{0})=0$ against $H_{1}:\eta(\theta_{0})\neq0$ rejects the null if $\mathcal{W}_{MR}>z_{\mathcal{W},\alpha}$, where $z_{\mathcal{W},\alpha}$ is the $1-\alpha$ quantile of a chi-square distribution with $L_{\eta}$ degrees of freedom. The Wald-based asymptotic confidence region for $\eta(\theta_{0})$ is $\{\eta\in\mathbf{R}^{L_{\eta}}:n\cdot (\eta(\hat{\theta})-\eta)'((\partial\eta(\hat{\theta})/\partial\theta')\hat{\Sigma}_{MR}(\partial\eta(\hat{\theta})/\partial\theta')')^{-1}(\eta(\hat{\theta})-\eta)\leq z_{\mathcal{W},\alpha}\}$. All the tests and CI's have the correct asymptotic significance and confidence levels regardless of misspecification because they are based on the misspecification-robust test statistics.
\section{The Misspecification-Robust Bootstrap Procedure}
\label{S_MRB}
The nonparametric iid bootstrap is implemented by resampling $X_{1}^{*},\cdots,X_{n}^{*}$ randomly with replacement from the sample $X_{1},\cdots,X_{n}$. Based on the bootstrap sample, $\chi_{n}^{*}=\{X_{i}^{*}:i\leq n\}$, the bootstrap GEL estimator $\hat{\theta}^{*}$ solves \eqref{GEL} for EL or ET, and \eqref{ETEL} for ETEL. The bootstrap version of the variance matrix estimator is $\hat{\Gamma}^{*-1}\hat{\Psi}^{*}(\hat{\Gamma}^{*'})^{-1}$, which is calculated by the same formula as \eqref{VAREST} using the bootstrap sample instead of the original sample. Let $\hat{\Sigma}^{*}_{MR}$ be the upper left $L_{\theta}\times L_{\theta}$ submatrix of $\hat{\Gamma}^{*-1}\hat{\Psi}^{*}(\hat{\Gamma}^{*'})^{-1}$. I emphasize that no additional corrections, such as the recentering of Hall and Horowitz (1996) and Andrews (2002), are required.
The misspecification-robust bootstrap $t$ and Wald statistics are
\begin{eqnarray}
T_{MR}^{*} &=& \frac{\hat{\theta}_{r}^{*}-\hat{\theta}_{r}}{\sqrt{\hat{\Sigma}^{*}_{MR,r}/n}},\\
\mathcal{W}_{MR}^{*} &=& n\cdot (\eta(\hat{\theta}^{*})-\eta(\hat{\theta}))'\left(\frac{\partial}{\partial\theta'}\eta(\hat{\theta}^{*})\hat{\Sigma}_{MR}^{*}\left(\frac{\partial}{\partial\theta'}\eta(\hat{\theta}^{*})\right)'\right)^{-1}(\eta(\hat{\theta}^{*})-\eta(\hat{\theta})).
\end{eqnarray}
Let $z^{*}_{T,\alpha}$, $z^{*}_{|T|,\alpha}$, $z^{*}_{\mathcal{W},\alpha}$ denote the $1-\alpha$ quantiles of $T_{MR}^{*}$, $|T_{MR}^{*}|$, and $\mathcal{W}^{*}_{MR}$, respectively. Let $P^{*}$ be the probability distribution of the bootstrap sample conditional on the sample. Following Andrews (2002), we define $z^{*}_{|T|,\alpha}$ to be a value that minimizes $|P^{*}(|T_{MR}^{*}|\leq z)-(1-\alpha)|$ over $z\in \mathbf{R}$, because the distribution of $|T_{MR}^{*}|$ is discrete. The definitions of $z^{*}_{T,\alpha}$ and $z^{*}_{\mathcal{W},\alpha}$ are analogous. Each of the following bootstrap tests is of asymptotic significance level $\alpha$. The one-sided bootstrap $t$ test of $H_{0}: \theta_{r}\leq\theta_{0,r}$ against $H_{1}:\theta_{r}>\theta_{0,r}$ rejects $H_{0}$ if $T_{MR}>z_{T,\alpha}^{*}$. The symmetric two-sided bootstrap $t$ test of $H_{0}:\theta_{r}=\theta_{0,r}$ versus $H_{1}:\theta_{r}\neq \theta_{0,r}$ rejects if $|T_{MR}|>z^{*}_{|T|,\alpha}$. The equal-tailed two-sided bootstrap $t$ test of the same hypotheses rejects if $T_{MR}<z^{*}_{T,1-\alpha/2}$ or $T_{MR}>z^{*}_{T,\alpha/2}$. The bootstrap Wald test of $H_{0}:\eta(\theta_{0})=0$ against $H_{1}:\eta(\theta_{0})\neq0$ rejects the null if $\mathcal{W}_{MR}>z^{*}_{\mathcal{W},\alpha}$. Similarly, each of the following bootstrap CI's is of asymptotic confidence level $1-\alpha$. The lower endpoint one-sided bootstrap CI is $[\hat{\theta}_{r}- z_{T,\alpha}^{*}\sqrt{\hat{\Sigma}_{MR,r}/n},\infty)$, which corresponds to the one-sided bootstrap $t$ test above. The symmetric and the equal-tailed bootstrap percentile-$t$ intervals are $[\hat{\theta}_{r}\pm z^{*}_{|T|,\alpha}\sqrt{\hat{\Sigma}_{MR,r}/n}]$ and $[\hat{\theta}_{r}- z^{*}_{T,\alpha/2}\sqrt{\hat{\Sigma}_{MR,r}/n},\hat{\theta}_{r}- z^{*}_{T,1-\alpha/2}\sqrt{\hat{\Sigma}_{MR,r}/n}]$\footnote{The formula may look confusing to readers. 
It is correct that the upper end of the CI is $\hat{\theta}_{r}- z^{*}_{T,1-\alpha/2}\sqrt{\hat{\Sigma}_{MR,r}/n}$, not $\hat{\theta}_{r}+ z^{*}_{T,1-\alpha/2}\sqrt{\hat{\Sigma}_{MR,r}/n}$ because $z^{*}_{T,1-\alpha/2}<z^{*}_{T,\alpha/2}$.}, respectively. The Wald-based bootstrap confidence region for $\eta(\theta_{0})$ is $\{\eta\in\mathbf{R}^{L_{\eta}}:n\cdot (\eta(\hat{\theta})-\eta)'((\partial\eta(\hat{\theta})/\partial\theta')\hat{\Sigma}_{MR}(\partial\eta(\hat{\theta})/\partial\theta')')^{-1}(\eta(\hat{\theta})-\eta)\leq z^{*}_{\mathcal{W},\alpha}\}$.
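Because $P^{*}$ puts mass $1/B$ on each of the $B$ bootstrap draws, the minimizing critical value $z^{*}_{|T|,\alpha}$ can be taken to be an order statistic of the absolute draws. A minimal sketch in Python (the function name and inputs are illustrative, not from the paper):

```python
import numpy as np

def boot_critical_value(t_star, alpha):
    """Pick z minimizing |P*(|T*| <= z) - (1 - alpha)| over z.

    Since P* puts mass 1/B on each draw, a minimizer is an order
    statistic of the absolute bootstrap draws."""
    abs_t = np.sort(np.abs(np.asarray(t_star)))
    B = len(abs_t)
    # P*(|T*| <= abs_t[k-1]) = k/B, so choose k closest to (1 - alpha) * B
    k = int(np.clip(round((1 - alpha) * B), 1, B))
    return abs_t[k - 1]
```

The same order-statistic construction gives $z^{*}_{T,\alpha}$ and $z^{*}_{\mathcal{W},\alpha}$ when applied to the raw draws of $T^{*}_{MR}$ and $\mathcal{W}^{*}_{MR}$.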
In sum, the misspecification-robust bootstrap procedure is as follows:
\begin{enumerate}
\itemsep0em
\item Draw $n$ random observations $\chi_{n}^{*}$ with replacement from the original sample, $\chi_{n}$.
\item Calculate $\hat{\theta}^{*}$ and $\hat{\Sigma}^{*}_{MR}$ from the bootstrap sample, using the same formulas as their sample counterparts.
\item Construct and save $T_{MR}^{*}$ or $\mathcal{W}^{*}_{MR}$.
\item Repeat steps 1-3 $B$ times to obtain the empirical distribution of $T_{MR}^{*}$ or $\mathcal{W}^{*}_{MR}$.
\item Find $z^{*}_{|T|,\alpha}$, $z^{*}_{T,\alpha}$, or $z^{*}_{\mathcal{W},\alpha}$ from the distribution of $|T_{MR}^{*}|$, $T_{MR}^{*}$, or $\mathcal{W}^{*}_{MR}$.
\end{enumerate}
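The five steps above can be sketched for a generic scalar estimator; here `estimate` is a hypothetical stand-in for the GEL estimator together with the misspecification-robust variance formula, not code from the paper:

```python
import numpy as np

def mr_bootstrap_quantiles(data, estimate, alpha=0.05, B=999, seed=0):
    """Steps 1-5 for the t statistic; `estimate(sample)` is assumed to
    return (theta_hat, Sigma_MR_hat) for a scalar parameter."""
    rng = np.random.default_rng(seed)
    n = len(data)
    theta_hat, _ = estimate(data)
    t_star = np.empty(B)
    for b in range(B):
        sample = data[rng.integers(0, n, n)]            # step 1: resample with replacement
        theta_b, Sigma_b = estimate(sample)             # step 2: bootstrap estimates
        t_star[b] = np.sqrt(n) * (theta_b - theta_hat) / np.sqrt(Sigma_b)  # step 3
    # steps 4-5: quantiles of the bootstrap distribution
    return (np.quantile(t_star, 1 - alpha),             # z*_{T,alpha}
            np.quantile(np.abs(t_star), 1 - alpha))     # z*_{|T|,alpha}
```

Note that $T^{*}_{MR}$ is centered at $\hat{\theta}$, not at $\theta_{0}$, so no recentering of the moment conditions is needed.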
\section{Main Result}
\label{S_Main}
Let $f(X_{i},\beta)$ be a vector containing the unique components of $\psi(X_{i},\beta)$ and its derivatives with respect to the components of $\beta$ through order $d$, and $\psi(X_{i},\beta)\psi(X_{i},\beta)'$ and its derivatives with respect to the components of $\beta$ through order $d-1$.
\begin{assumption}
$X_{i}$, $i=1,2,\ldots,n$, are iid.
\label{A1}
\end{assumption}
\begin{assumption}\
\vspace{-0.5em}
\begin{description}
\itemsep0em
\item[(a)] $\Theta$ is a compact parameter space of $\theta$ such that $\theta_{0}$ is an interior point of $\Theta$; $\Lambda(\theta)$ is a compact parameter space of $\lambda(\theta)$ such that it contains a zero vector and $\lambda_{0}(\theta)$ is an interior point of $\Lambda(\theta)$.
\item[(b)] $(\hat{\theta},\hat{\lambda})$ solves \eqref{GEL} for EL or ET, or \eqref{ETEL} for ETEL; $(\theta_{0},\lambda_{0})$ is the pseudo-true value that uniquely solves the population version of \eqref{GEL} for EL or ET, or \eqref{ETEL} for ETEL.
\item[(c)] For some function $C_{g}(x)$, $\|g(x,\theta_{1})-g(x,\theta_{2})\|<C_{g}(x)\|\theta_{1}-\theta_{2}\|$ for all $x$ in the support of $X_{1}$ and all $\theta_{1},\theta_{2}\in\Theta$; $EC_{g}^{q_{g}}(X_{1})<\infty$ and $E\|g(X_{1},\theta)\|^{q_{g}}<\infty$ for all $\theta\in\Theta$ for all $0<q_{g}<\infty$.
\item[(d)] For some function $C_{\rho}(x)$, $|\rho(\lambda_{1}'g(x,\theta_{1}))-\rho(\lambda_{2}'g(x,\theta_{2}))|<C_{\rho}(x)\|(\theta_{1}',\lambda_{1}')-(\theta_{2}',\lambda_{2}')\|$ for all $x$ in the support of $X_{1}$, all $\theta_{l}\in\Theta$, and all $\lambda_{l}\in\Lambda(\theta_{l})$ for $l=1,2$; $EC_{\rho}^{q_{1}}(X_{1})<\infty$ for some $q_{1}>4$. In addition, UBC \eqref{UBC} holds for EL.
\end{description}
\label{A2}
\end{assumption}
\begin{assumption}\
\vspace{-0.5em}
\begin{description}
\itemsep0em
\item[(a)] $\Gamma$ is nonsingular and $\Psi$ is positive definite.
\item[(b)] $g(x,\theta)$ is $d+1$ times differentiable with respect to $\theta$ on $N(\theta_{0})$, some neighborhood of $\theta_{0}$, for all $x$ in the support of $X_{1}$, where $d\geq 4$.
\item[(c)] There is a function $C_{G}(x)$ such that $\|G^{(j)}(x,\theta)-G^{(j)}(x,\theta_{0})\|\leq C_{G}(x)\|\theta-\theta_{0}\|$ for all $x$ in the support of $X_{1}$ and all $\theta\in N(\theta_{0})$ for $j=0,1,...,d+1$; $EC_{G}^{q_{G}}(X_{1})<\infty$ and $E\|G^{(j)}(X_{1},\theta_{0})\|^{q_{G}}<\infty$ for $j=0,1,...,d+1$ for all $0<q_{G}<\infty$.
\item[(d)] There is a function $C_{\partial\rho}(x)$ such that \[|\rho_{j}(\lambda'g(x,\theta))-\rho_{j}(\lambda_{0}'g(x,\theta_{0}))|\leq C_{\partial\rho}(x)\|(\theta',\lambda')-(\theta_{0}',\lambda_{0}')\|\] for all $x$ in the support of $X_{1}$, all $\lambda\in\Lambda(\theta)$, all $\theta\in N(\theta_{0})$ for $j=1,...,d+1$; $EC_{\partial\rho}^{q_{2}}(X_{1})<\infty$ for some $q_{2}>16$.
\item[(e)] $f(X_{1},\beta_{0})$ is once differentiable with respect to $X_{1}$ with uniformly continuous first derivative.
\item[(f)] For the Wald statistic, the $\mathbf{R}^{L_{\eta}}$-valued function $\eta(\cdot)$ is $d$ times continuously differentiable at $\theta_{0}$ and $(\partial/\partial\theta')\eta(\theta_{0})$ is full rank $L_{\eta}\leq L_{\theta}$.
\end{description}
\label{A3}
\end{assumption}
\begin{assumption}
For $t\in\mathbf{R}^{dim(f)}$, $\limsup_{\|t\|\rightarrow\infty}\left|Ee^{it'f(X_{1},\beta_{0})}\right|<1,$ where $i=\sqrt{-1}$.
\label{A4}
\end{assumption}
Assumption \ref{A1} is that the sample is iid, which is also assumed in Schennach (2007) and Newey and Smith (2004). Assumptions \ref{A2}(a)-(c) are similar to Assumption 2(a)-(b) of Andrews (2002). Assumption \ref{A2}(d) is similar to but slightly stronger than Assumption 3(4) of Schennach (2007) for ET or ETEL, and it includes Assumption 3(1) of Chen, Hong, and Shum (2007) for EL to avoid a negative implied probability under misspecification. Assumptions \ref{A2}(c)-(d) are required to obtain the uniform convergence of the objective function. Assumption \ref{A3}(a) is a standard regularity condition for a well-defined asymptotic covariance matrix. Assumption \ref{A3}, except for (d), is similar to Assumption 3 of Andrews (2002). The assumptions on $q_{g}$ and $q_{G}$ are slightly stronger than necessary, but yield a simpler result. This is also assumed in Andrews (2002) for the same reason. Assumption \ref{A3}(d) is similar to but stronger than Assumption 3(6) of Schennach (2007). It ensures that the components of the higher-order Taylor expansion of the FOC have well-defined probability limits.\footnote{The values of $q_{g}$, $q_{G}$, $q_{1}$, and $q_{2}$ are determined to ensure the existence of higher-order moments, which is required for asymptotic refinements through Edgeworth expansions. Consistency of the bootstrap, however, can be shown under weaker assumptions.} Assumption \ref{A4} is the standard Cram\'{e}r condition for Edgeworth expansions, and it is satisfied if the distribution of $f(X_{1},\beta_{0})$ has a probability density with respect to Lebesgue measure (Horowitz, 2001).
Theorem \ref{T1} formally establishes asymptotic refinements of the bootstrap $t$ and Wald tests based on EL, ET, and ETEL estimators. This result is new, because asymptotic refinements of the bootstrap for this class of estimators have not been established in the literature even under correct model specification.
\begin{theorem}
(a) Suppose Assumptions \ref{A1}-\ref{A4} hold with $q_{1}>4$ and $q_{2}>\frac{16}{1-2\xi}$ for some $\xi\in[0,1/2)$. Under $H_{0}:\theta_{r}=\theta_{0,r}$,
\begin{eqnarray}
\nonumber && P(T_{MR}>z^{*}_{T,\alpha}) = \alpha + o(n^{-(1/2+\xi)}) \text{ and}\\
\nonumber && P(T_{MR}<z^{*}_{T,\alpha/2} \text{ or }T_{MR}>z^{*}_{T,1-\alpha/2})=\alpha+o(n^{-(1/2+\xi)}).
\end{eqnarray}
(b) Suppose Assumptions \ref{A1}-\ref{A4} hold with $q_{1}>6$, $q_{2}>\frac{30}{1-2\xi}$ for some $\xi\in[0,1/2)$, and $d\geq5$. Under $H_{0}:\theta_{r}=\theta_{0,r}$,
\[P(|T_{MR}|>z^{*}_{|T|,\alpha})=\alpha+o(n^{-(1+\xi)}).\]
Under $H_{0}:\eta(\theta_{0})=0$,
\[P(\mathcal{W}_{MR}>z^{*}_{\mathcal{W},\alpha})=\alpha+o(n^{-(1+\xi)}).\]
(c) Suppose Assumptions \ref{A1}-\ref{A4} hold with $q_{1}>8$, a sufficiently large $q_{2}$, and $d\geq6$. Under $H_{0}:\theta_{r}=\theta_{0,r}$,
\[P(|T_{MR}|>z^{*}_{|T|,\alpha})=\alpha+O(n^{-2}).\]
\label{T1}
\end{theorem}
\vspace{-2em}
\begin{remark}[Remark 1]
By the duality of $t$ tests and CI's, asymptotic refinements of the same rate for the bootstrap CI's follow from Theorem \ref{T1}. The equal-tailed percentile-$t$ CI corresponds to Theorem \ref{T1}(a). The symmetric percentile-$t$ CI corresponds to Theorem \ref{T1}(b)-(c). The Wald confidence region corresponds to Theorem \ref{T1}(b). Recall that the asymptotic $t$, Wald tests, and CI's based on $T_{MR}$ and $\mathcal{W}_{MR}$ are correct up to $O(n^{-1/2})$, $O(n^{-1})$, and $O(n^{-1})$ for (a), (b), and (c), respectively. The two-sided bootstrap $t$ and Wald tests, and the symmetric percentile-$t$ CI achieve a higher rate of refinements because the $O(n^{-1/2})$ terms of the Edgeworth expansions of the corresponding statistics are zero by a symmetry property.
\end{remark}
\begin{remark}[Remark 2]
The result in Theorem \ref{T1}(c) is sharp and based on the argument of Hall (1988). By using the Edgeworth and Cornish-Fisher expansions, Hall showed that the $O(n^{-3/2})$ term of the coverage probability of the symmetric percentile-$t$ CI is also zero. Since his derivation is based on the one-dimensional $t$ statistic, I do not formally state a similar result for the Wald test. However, it is likely that the same sharp rate of refinements would hold (e.g. Hall, 1992, Section 4.2; Horowitz, 2001, Section 3.3).
\end{remark}
The proof of Theorem \ref{T1} follows the steps of Andrews (2002) that establish asymptotic refinements of the bootstrap for GMM estimators under correct specification. I briefly outline the proof. The conclusion of Theorem \ref{T1}(a) follows from
\begin{equation}
\label{outline1}
P\left(\sup_{z\in\mathbf{R}}\left|P^{*}(T_{MR}^{*}\leq z)-P(T_{MR}\leq z)\right|>n^{-(1/2+\xi)}\varepsilon\right)=o(n^{-1})
\end{equation}
for any $\varepsilon>0$, which leads to
\begin{equation}
\label{outline2}
1-\alpha-n^{-(1/2+\xi)}\varepsilon+o(n^{-1})\leq P(T_{MR}\leq z^{*}_{T,\alpha}) \leq 1-\alpha+n^{-(1/2+\xi)}\varepsilon+o(n^{-1}).
\end{equation}
Since the $o(n^{-1})$ terms in \eqref{outline2} are directly related to the $o(n^{-1})$ term on the right-hand side (RHS) of \eqref{outline1}, it is critical to show in \eqref{outline1} that the random cdf $P^{*}(T^{*}_{MR}\leq z)$ differs from the nonrandom cdf $P(T_{MR}\leq z)$ by a small amount, $n^{-(1/2+\xi)}$, on a set with probability $o(n^{-1})$, rather than $o(1)$. Similar arguments apply to the conclusions of Theorem \ref{T1}(b)-(c). \eqref{outline1} is shown by using Hall (1992)'s argument on Edgeworth expansion of a smooth function of sample averages. That is, I show that $T_{MR}$ and $T_{MR}^{*}$ are well approximated by a smooth function of the sample and the bootstrap sample moments (Lemma \ref{L7}), and the smooth function allows Edgeworth expansions up to a certain order (Lemma \ref{L9}). The argument of the smooth function consists of the elements of $n^{-1}\sum_{i=1}^{n}f(X_{i},\beta_{0})$ and $n^{-1}\sum_{i=1}^{n}f(X_{i}^{*},\hat{\beta})$, whose consistency is shown in Lemma \ref{L6}. The components of the Edgeworth expansions are well defined and consistent (Lemma \ref{L8}). Lemmas \ref{L2}-\ref{L5} establish consistency of the sample and the bootstrap GEL estimators.
Since I derive the asymptotic distribution of GEL under misspecification by using the fact that the GEL FOC forms a just-identified GMM, one might wonder why the proof is different from that of GMM. The proof of this paper can be divided into two parts: (i) consistency, and (ii) higher-order analysis, and each part is a nontrivial extension. First, consistency of GEL estimators should be shown for the solution to the original GEL minimax criterion, as the FOC can have multiple roots even when the original minimax criterion has a unique solution (e.g. Newey and McFadden, 1994, p. 2117). Additional complications arise due to the fact that we require a stronger result than usual consistency to control the error in the bootstrap approximation as in \eqref{outline1}. Thus, the consistency proof of this paper cannot be simplified to that of a just-identified GMM. Second, an economical higher-order analysis is required because the existence and finiteness of GEL higher-order moments may restrict model misspecification. Although it is commonly assumed that all higher-order moments of the GMM moment function and its higher-order derivatives are finite (Andrews, 2002), this does not affect robustness to misspecification of the bootstrap (Lee, 2014). However, this conclusion cannot be directly applied to GEL, because the GEL FOC, which forms a just-identified GMM moment function, contains the Lagrange multiplier $\hat{\lambda}$, whose probability limit is zero under correct specification but is non-zero under misspecification. For example, suppose that Assumptions \ref{A2}(d) and \ref{A3}(d) hold for all $0<q_{l}<\infty$ for $l=1,2$, as would be assumed if we naively mapped the assumptions of GMM onto GEL. Since a zero vector is in $\Lambda(\theta)$, this implies $Ee^{q_{l}\lambda_{0}'g(X_{i},\theta_{0})}<\infty$ for ET, which is a strong assumption on the DGP and the model.
Since $\lambda_{0}\neq 0$ under misspecification, $Ee^{q_{l}\lambda_{0}'g(X_{i},\theta_{0})}$ may not be finite if $q_{l}$ is too large, and the bootstrap would not achieve the desired asymptotic refinements. Lee (2014b) provides an example in which the model cannot be too misspecified if the assumptions are to be satisfied, and the set of possible misspecifications shrinks to zero as $q_{l}$ gets larger. This implies that by assuming $0<q_{l}<\infty$, one may completely rule out model misspecification. Thus, it is important to find the least stringent conditions on $q_{1}$ and $q_{2}$, and this requires an analysis of GEL higher-order moments and their higher-order derivatives.
\section{Monte Carlo Results}
\label{S_MC}
This section compares the finite sample CI coverage probabilities under correct specification and misspecification. To reduce the computational burden of calculating GEL estimators $B$ times for each Monte Carlo repetition, the warp-speed Monte Carlo method of Giacomini, Politis, and White (2013) is used. The method also appears in White (2000) and Davidson and MacKinnon (2002, 2007), but the validity of the method is formally established by Giacomini, Politis, and White. The key difference between the warp-speed method and a usual Monte Carlo is that the bootstrap sample is drawn only once for each Monte Carlo repetition rather than $B$ times, and thus computation time is significantly reduced. The number of Monte Carlo repetitions is 5,000 throughout this section. I consider the AR(1) dynamic panel model of Blundell and Bond (1998). For $i=1,...,n$ and $t=1,...,T$,
\begin{eqnarray}
y_{it} &=& \rho_{0} y_{i,t-1} + \eta_{i} + \nu_{it},
\label{AR1}
\end{eqnarray}
where $\eta_{i}$ is an unobserved individual-specific effect and $\nu_{it}$ is an error term. To estimate $\rho_{0}$, we use two sets of moment conditions:
\begin{eqnarray}
Ey_{i(t-s)}(\Delta y_{it}-\rho_{0}\Delta y_{i(t-1)}) = 0,&& \hspace{0.5em}t=3,...T, \text{ and }s\geq 2,
\label{DIF}\\
E\Delta y_{i(t-1)}(y_{it}-\rho_{0}y_{i(t-1)}) = 0,&& \hspace{0.5em}t=3,...T.
\label{SYS}
\end{eqnarray}
The first set \eqref{DIF} is derived from taking differences of \eqref{AR1}, and uses the lagged values of $y_{it}$ as instruments. The second set \eqref{SYS} is derived from the initial conditions on the DGP and mitigates the weak instruments problem from using only the lagged values. Blundell and Bond (1998) suggest using the system-GMM estimator based on the two sets of moment conditions. The number of moment conditions is $(T+1)(T-2)/2$.
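The count can be verified directly: for each $t=3,\ldots,T$, \eqref{DIF} contributes one condition per lag $s=2,\ldots,t-1$, and \eqref{SYS} contributes one condition. A small sanity check (illustrative only):

```python
def n_moments(T):
    """Number of moment conditions implied by (DIF) and (SYS)."""
    dif = sum(t - 2 for t in range(3, T + 1))  # s = 2, ..., t-1 for each t
    sys = T - 2                                # one condition per t = 3, ..., T
    return dif + sys

# matches the closed form (T+1)(T-2)/2 from the text
assert all(n_moments(T) == (T + 1) * (T - 2) // 2 for T in range(3, 11))
```

For the designs below, $T=4$ gives 5 conditions and $T=6$ gives 14.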
Four DGP's are considered: two correctly specified and two misspecified models. For each of the DGP's, $T=4, 6$ and $n=100, 200$ are considered. To minimize the effect of the initial condition, I generate $100+T$ time periods and use the last $T$ periods for estimation. In Tables \ref{table_C1}-\ref{table_M2}, ``Boot'' and ``Asymp'' mean the bootstrap CI and the asymptotic CI, respectively. The third column shows on which estimator the CI is based. GMM denotes the two-step GMM based on the system moment conditions. The fourth column shows which standard error is used: ``C'' denotes the conventional standard error and ``MR'' denotes the misspecification-robust one. The fifth column shows how the bootstrap is implemented for the bootstrap CI's: ``L'' denotes the misspecification-robust bootstrap proposed in this paper and in Lee (2014), ``HH'' denotes the recentering method of Hall and Horowitz (1996), and ``BN'' denotes the efficient bootstrapping of Brown and Newey (2002). The columns under ``CI'' show the coverage probabilities. The column under ``J test'' shows the rejection probabilities of the overidentification tests: the Hall-Horowitz bootstrap J test, the asymptotic J test, and the likelihood-ratio tests based on EL, ET, and ETEL are presented.
In sum, eight bootstrap CI's and eight asymptotic CI's are compared. Boot-GMM-C-HH serves as a benchmark, as its properties have been relatively well investigated. Boot-GMM-MR-L is suggested by Lee (2014). The theoretical advantage of Boot-EL-MR-L is established in this paper. Boot-EL-MR-BN uses the EL probabilities in resampling. This paper does not establish asymptotic refinements for this CI and the efficient resampling method (BN) is not robust to misspecification as is discussed in Section 2. However, its performance is worth attention and the efficient resampling may be modified using shrinkage to make the CI robust to misspecification. CI's based on ET and ETEL are defined similarly. Note that CI's using the conventional standard error (C), either bootstrap or asymptotic, are not robust to misspecification.
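The warp-speed scheme described earlier can be sketched on a toy problem — a symmetric percentile-$t$ CI for a scalar mean — where the single bootstrap draw from each repetition is pooled into one critical value. This is an illustration only, not the GEL setting of the experiments:

```python
import numpy as np

def warp_speed_coverage(n=50, R=2000, alpha=0.05, seed=0):
    """Warp-speed Monte Carlo coverage of a symmetric percentile-t CI."""
    rng = np.random.default_rng(seed)
    t_mc, t_boot = np.empty(R), np.empty(R)
    for r in range(R):
        x = rng.standard_normal(n)                     # DGP with true mean 0
        t_mc[r] = np.sqrt(n) * x.mean() / x.std(ddof=1)
        xb = x[rng.integers(0, n, n)]                  # a single bootstrap draw
        t_boot[r] = np.sqrt(n) * (xb.mean() - x.mean()) / xb.std(ddof=1)
    z = np.quantile(np.abs(t_boot), 1 - alpha)         # pooled critical value
    return np.mean(np.abs(t_mc) <= z)                  # estimated coverage
```

A standard Monte Carlo would instead draw $B$ resamples inside each repetition; the warp-speed version cuts the cost by a factor of $B$.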
The DGP for a correctly specified model is identical to that of Bond and Windmeijer (2005). For $i=1,...,n$ and $t=1,...T$,
\begin{eqnarray}
\nonumber \text{DGP C-1: }&& y_{it} = \rho_{0} y_{i,t-1} + \eta_{i} + \nu_{it},\\
\nonumber && \eta_{i}\sim N(0,1);\nu_{it}\sim \frac{\chi_{1}^{2}-1}{\sqrt{2}},\\
\nonumber && y_{i1} = \frac{\eta_{i}}{1-\rho_{0}}+u_{i1};u_{i1}\sim N\left(0,\frac{1}{1-\rho_{0}^{2}}\right).
\end{eqnarray}
Since the bootstrap does not solve the weak instruments problem (Hall and Horowitz, 1996), I let $\rho_{0}=0.4$ so that the performance of the bootstrap is not affected by the problem. The simulation result is given in Table \ref{table_C1}. First of all, the bootstrap CI's show significant improvement over the asymptotic CI's across all the cases considered. Second, similar to the result of Bond and Windmeijer (2005), the bootstrap CI's coverage probabilities tend to be too high for $T=6$. This over-coverage problem becomes less severe as the sample size increases, especially for those based on EL, ET, and ETEL. Interestingly, efficient resampling (BN) seems to mitigate this problem. Third, the asymptotic CI's using the robust standard error (MR) work better than the ones using the usual standard error (C). This result is surprising given that the model is correctly specified. One reason is that both standard errors underestimate the standard deviation of the estimator, while the robust standard error is relatively large in this case. For example, when $T=6$ and $n=100$, the difference in the coverage probabilities between Asymp-ET-C and Asymp-ET-MR is quite large. The unreported standard deviation of the ET estimator is .085, while the means of the robust and the conventional standard errors are .059 and .047, respectively. Finally, the overidentification tests except for the asymptotic J test show severe size distortion, especially when $T=6$.
Next a heteroskedastic error term across individuals is considered. The DGP is
\begin{eqnarray}
\nonumber \text{DGP C-2: }&& y_{it} = \rho_{0} y_{i,t-1} + \eta_{i} + \nu_{it},\\
\nonumber && \eta_{i}\sim N(0,1);\nu_{it}\sim N(0,\sigma_{i}^{2});\sigma_{i}^{2}\sim U[0.2,1.8],\\
\nonumber && y_{i1} = \frac{\eta_{i}}{1-\rho_{0}}+u_{i1};u_{i1}\sim N\left(0,\frac{\sigma_{i}^{2}}{1-\rho_{0}^{2}}\right).
\end{eqnarray}
The result is given in Table \ref{table_C2}. The findings are similar to those of Table \ref{table_C1}, except that the over-coverage problem of the bootstrap CI's based on GEL estimators improves more quickly as the sample size grows.
To allow misspecification, suppose that the DGP follows an AR(2) process while the model is based on the AR(1) specification, \eqref{AR1}. For $i=1,...,n$ and $t=1,...T$,
\begin{eqnarray}
\nonumber \text{DGP M-1: }&& y_{it} = \rho_{1} y_{i,t-1} + \rho_{2} y_{i,t-2} + \eta_{i} + \nu_{it},\\
\nonumber && \eta_{i}\sim \text{tr}N(0,1);\nu_{it}\sim \frac{\text{tr}\chi_{1}^{2}-1}{\sqrt{2}},\\
\nonumber && y_{i1} = \frac{\eta_{i}}{1-\rho_{1}-\rho_{2}}+u_{i1};u_{i1}\sim \sqrt{\frac{1-\rho_{2}}{(1+\rho_{2})[(1-\rho_{2})^{2}-\rho_{1}^{2}]}}\cdot \text{tr}N\left(0,1\right),
\end{eqnarray}
where $\text{tr}N(0,1)$ and $\text{tr}\chi_{1}^{2}$ are a truncated standard normal distribution between -4 and 4, and a truncated chi-square distribution with one degree of freedom between 0 and 16, respectively. The truncated distributions are used to satisfy the UBC \eqref{UBC}. DGP M-2 is identical to DGP M-1 except that $\nu_{it}$ follows a truncated log-normal distribution between $-\sqrt{e}$ and $e^{3.5}$ with mean zero.
If the model is misspecified, then there is no true parameter that satisfies the moment conditions simultaneously. It is important to understand what is identified and estimated under misspecification. The moment conditions \eqref{DIF} and \eqref{SYS} impose
\begin{equation}
\frac{Ey_{i1}\Delta y_{it}}{Ey_{i1}\Delta y_{i(t-1)}} = \cdots = \frac{Ey_{i(t-3)}\Delta y_{it}}{Ey_{i(t-3)}\Delta y_{i(t-1)}} = \frac{Ey_{i(t-2)}\Delta y_{it}}{Ey_{i(t-2)}\Delta y_{i(t-1)}}=\frac{E\Delta y_{i(t-1)} y_{it}}{E\Delta y_{i(t-1)} y_{i(t-1)}},
\label{MCS}
\end{equation}
for $t=3,...,T$. Under correct specification, the restriction \eqref{MCS} holds and a unique parameter is identified. However, each of the ratios identifies a different parameter under misspecification, and the probability limits of GMM and GEL estimators are weighted averages of these parameters. For example, when $T=4$, we have five moment conditions. Four of them identify $\rho_{T4}^{a}\equiv\rho_{1}-\rho_{2}$ and the other identifies $\rho_{T4}^{b}\equiv\rho_{1} + \frac{\rho_{2}}{\rho_{1}-\rho_{2}}$. Thus, the pseudo-true value $\rho_{0}$ is defined as $\rho_{0}=w\rho_{T4}^{a}+(1-w)\rho_{T4}^{b}$ where $w$ is between 0 and 1. Similarly, the pseudo-true value when $T=6$ is a weighted average of four different parameters. Since GMM and GEL use different weights, the pseudo-true values would be different. If $\rho_{2}=0$, then the pseudo-true values coincide with $\rho_{1}$, the AR(1) coefficient. Thus, $\rho_{0}$ captures the deviation from the AR(1) model. If $|\rho_{2}|$ is relatively small, then $\rho_{0}$ would not be much different from $\rho_{1}$, while there is an advantage of using a parsimonious model. If one accepts the possibility of misspecification and decides to proceed with the pseudo-true value, then GEL pseudo-true values have a better interpretation than GMM ones, because GEL weights are implicitly calculated according to a well-defined distance measure while GMM weights depend on the researcher's choice of a weight matrix.
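For $T=4$, the two identified parameters and the resulting pseudo-true value can be computed as follows; the weight $w$ is estimator-specific and taken as given here:

```python
def pseudo_true_T4(rho1, rho2, w):
    """Pseudo-true value rho_0 = w*rho^a + (1-w)*rho^b for the T = 4 case."""
    rho_a = rho1 - rho2                    # identified by four of the conditions
    rho_b = rho1 + rho2 / (rho1 - rho2)    # identified by the remaining one
    return w * rho_a + (1 - w) * rho_b
```

As the text notes, when `rho2 = 0` both identified parameters equal `rho1`, so the pseudo-true value reduces to the AR(1) coefficient regardless of the weight.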
Tables \ref{table_M1}-\ref{table_M2} show the CI coverage probabilities under DGP M-1 and M-2, respectively. I set $\rho_{1}=0.6$ and $\rho_{2}=0.2$. The pseudo-true values are calculated using the sample size of $n=30,000$ for $T=4$ and $n=20,000$ for $T=6$.\footnote{The two-step GMM and GEL pseudo-true values are not that different. They are around 0.4 when $T=4$ and around 0.5 when $T=6$.} It is clearly seen that the bootstrap CI's outperform the asymptotic CI's. In particular, the performances of the Boot-EL-MR-L, Boot-ET-MR-L, and Boot-ETEL-MR-L CI's are excellent for $T=4$. When $T=6$, these CI's exhibit slight over-coverage, though less severe than that of Boot-GMM-MR-L.\footnote{Observing that the over-coverage problem of the bootstrap CI's becomes severe as $T$ gets larger, I conjecture that this problem is related to the estimation of the misspecification-robust variance matrix, because the dimension of the matrix increases along with $T$.} The bootstrap CI's using the efficient resampling (BN) show some improvement on the over-coverage problem, but they are not robust to misspecification. Indeed, their coverage probabilities deviate from the nominal ones as the sample size grows in DGP M-2. One may wonder why the HH bootstrap CI works quite well under misspecification even though the CI is not robust to misspecification. This is spurious and cannot be generalized. In this case, the conventional standard error is considerably smaller than the robust standard error, while the HH bootstrap critical value is much larger than the asymptotic one, which offsets the smaller standard error. Lee (2014) reports that the performance of the HH bootstrap CI under misspecification is much worse than that of the MR bootstrap CI. In addition, the HH bootstrap J test shows very low power relative to the asymptotic tests. Among the asymptotic CI's, those based on GEL estimators and the robust standard errors (MR) show better performances.
Finally, Table \ref{table_W1} compares the width of the bootstrap CI's under different DGP's. Since this paper establishes asymptotic refinements in the size and coverage errors, the width of CI's is not directly related to the main result. Nevertheless, the table clearly demonstrates a reason to consider GEL as an alternative to GMM, especially when misspecification is suspected. Under correct specification (C-1 and C-2), all the bootstrap CI's have similar width. This conclusion changes dramatically under misspecification (M-1 and M-2). Among robust CI's, (Boot-)GMM-MR-L is much wider than those based on GEL. For example, when $T=4$ and $n=200$ in DGP M-1, the width of the (Boot-)GMM-MR-L 95\% CI is 2.418, while that of (Boot-)ETEL-MR-L 95\% CI is only .880. The main reason is that the GEL standard errors are smaller than the GMM ones under misspecification, at least for the considered DGP's. The bootstrap CI's using the efficient resampling (BN) are generally narrower than those using the iid resampling. This suggests that the GEL probabilities with appropriate shrinkage may be used to shorten CI's under misspecification.
The findings of the Monte Carlo experiments can be summarized as follows. First, the misspecification-robust bootstrap CI's based on GEL estimators are generally more accurate than other bootstrap and asymptotic CI's regardless of misspecification. Not surprisingly, the coverage of non-robust CI's is very poor under misspecification. Second, the GEL-based bootstrap CI's improve on the severe over-coverage of the GMM-based bootstrap CI's, which is also a concern of Bond and Windmeijer (2005). Lastly, it is recommended to use the misspecification-robust variance estimator in constructing $t$ statistics and CI's regardless of whether the model is correctly specified or not, because the coverage of the misspecification-robust CI's tends to be more accurate even under correct specification.
\section{Application: Returns to Schooling}
\label{S_HI}
Hellerstein and Imbens (1999) estimate the Mincer equation by weighted least squares, where the weights are calculated using EL. The equation of interest is
\begin{eqnarray}
\nonumber \log(\text{wage}_{i}) &=& \beta_{0} + \beta_{1}\cdot \text{education}_{i} + \beta_{2}\cdot \text{experience}_{i} + \beta_{3} \cdot \text{experience}^{2}_{i}\\
&& + \beta_{4}\cdot \text{IQ}_{i} + \beta_{5} \cdot \text{KWW}_{i} + \varepsilon_{i},
\label{Mincer}
\end{eqnarray}
where KWW denotes Knowledge of the World of Work, an ability test score. Since the National Longitudinal Survey Young Men's Cohort (NLS) dataset reports both ability test scores and schooling, the equation \eqref{Mincer} can be estimated by OLS. However, the NLS sample size is relatively small, and it may not correctly represent the whole population. In contrast, the Census data is a very large dataset that can be considered to represent the whole population, but we cannot directly estimate the equation \eqref{Mincer} using the Census because it does not contain ability measures. Hellerstein and Imbens calculate weights by matching the Census and the NLS moments and use the weights to estimate the equation \eqref{Mincer} by least squares. This method can be used to reduce the standard errors or to change the estimand toward one more representative of the Census.
Let $y_{i}\equiv \log(\text{wage}_{i})$ and $\mathbf{x}_{i}$ be the regressors on the right-hand-side of \eqref{Mincer}. The Hellerstein-Imbens weighted least squares can be viewed as a special case of the EL estimator using the following moment condition:
\begin{equation}
E_{s}g_{i}(\beta_{0})=0,
\label{mc}
\end{equation}
where $E_{s}[\cdot]$ is the expectation over a probability density function $f_{s}(y_{i},\mathbf{x}_{i})$, which is labeled the \textit{sampled population}. The moment function $g_{i}(\beta)$ is
\begin{equation}
g_{i}(\beta) = \left(\begin{array}{c} \mathbf{x}_{i}(y_{i}-\mathbf{x}_{i}'\beta) \\ m(y_{i},\mathbf{x}_{i}) - E_{t}m(y_{i},\mathbf{x}_{i})\end{array}\right),
\label{mf}
\end{equation}
where $\beta$ is a parameter vector, $m(y_{i},\mathbf{x}_{i})$ is a $13 \times 1$ vector, and $E_{t}[\cdot]$ is the expectation over a probability density function $f_{t}(y_{i},\mathbf{x}_{i})$, labeled the \textit{target population}. The first set of moment conditions is the FOC of OLS and the second set matches the sample (NLS) moments with the known population (Census) moments. In particular, the thirteen moments consisting of the first, second, and cross moments of log(wage), education, experience, and experience squared are matched. If the sampled population is identical to the target population, i.e., the NLS sample is randomly drawn from the Census distribution, the moment condition model is correctly specified and \eqref{mc} holds. Otherwise, the model is misspecified and no $\beta$ satisfies \eqref{mc}. In this case, the probability limit of the EL estimator solves the FOC of OLS with respect to an artificial population that minimizes a distance between the sampled and the target populations. This pseudo-true value is an interesting estimand because we are ultimately interested in the parameters of the target population, rather than the sampled population.
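In this notation, the moment function \eqref{mf} simply stacks the OLS first-order condition with the moment-matching block. A minimal sketch (the array shapes and argument names are assumptions for illustration, not from the paper):

```python
import numpy as np

def g_i(beta, y_i, x_i, m_i, m_target):
    """Moment function for one observation: the OLS FOC stacked with the
    difference between the sample moments m_i and the known Census
    moments m_target."""
    return np.concatenate([x_i * (y_i - x_i @ beta), m_i - m_target])
```

In the application, `m_i` and `m_target` would each be $13\times 1$, so `g_i` has dimension $\dim(\beta)+13$ and the model is overidentified by 13.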
Table \ref{tb_est} shows the estimation results of the OLS, two-step GMM, EL, ET, and ETEL estimators. Without the Census moments, the equation \eqref{Mincer} is estimated by OLS and the estimate of the returns to schooling is 0.054 with a standard error of 0.010. By using the Census moments, the coefficient estimates and the standard errors change. The two-step GMM estimator is calculated using the OLS estimator as a preliminary estimator, and it serves as a benchmark. EL, ET, and ETEL produce higher point estimates and smaller standard errors than those of OLS. Since the J test rejects the null hypothesis of correct specification for all of the estimators using the Census moments, it is likely that the target population differs from the sampled population. If this is the case, then the conventional standard errors are no longer valid, and the misspecification-robust standard errors should be used. The misspecification-robust standard errors, s.e.$_{MR}$, of EL, ET, and ETEL are slightly larger than the usual standard errors assuming correct specification, s.e.$_{C}$, but still smaller than the standard errors of OLS. In contrast, s.e.$_{MR}$ of GMM is much larger than s.e.$_{C}$, which is consistent with the simulation results given in Section \ref{S_MC}.
Table \ref{tb_CI} shows the lower and upper bounds of the CI's based on various estimators and their respective widths. The GMM-based CI's are wider than those based on GEL estimators. Among the GEL estimators, the ET estimator has the widest CI, while the EL estimator has the narrowest. The asymptotic CI's are narrower than the bootstrap CI's, but this is likely to cause under-coverage given the simulation results in Section \ref{S_MC}. The upper bounds of the bootstrap CI's range from 9.6\% to 11.5\%, which are higher than those of the asymptotic CI's. I also present a nonparametric kernel estimate of the bootstrap distribution of the $t$ statistics based on the GMM, EL, ET, and ETEL estimators in Figure \ref{fig_bootdist}. The distributions are skewed to the left, which implies the presence of a downward bias. Overall, the estimation of \eqref{Mincer} using GEL estimators and the resulting bootstrap CI's suggests that the returns to schooling are likely to be higher than originally estimated by Hellerstein and Imbens.
\section{Conclusion}
\label{S_Con}
GEL estimators are attractive alternatives to GMM. Although asymptotic refinements of the bootstrap have been established for GMM, no such results existed for GEL. In addition, the existing literature on the bootstrap did not consider model misspecification, which undermines both the refinements and the validity of the bootstrap. This paper formally established asymptotic refinements of the bootstrap for $t$ and Wald tests, CI's, and confidence regions based on GEL estimators. Moreover, the proposed bootstrap is robust to misspecification: the refinements are not affected by model misspecification. Simulation results support this finding. As an application, the returns to schooling were estimated by extending the method of Hellerstein and Imbens (1999). The exercise found that the estimates of Hellerstein and Imbens are robust across different GEL estimators, and that the returns to schooling could be even higher.
\section*{Acknowledgment}
I am very grateful to Bruce Hansen and Jack Porter for their encouragement and helpful comments. I also thank the co-editor Han Hong, an associate editor, three anonymous referees, Guido Imbens, Xiaohong Chen, and Yoon-Jae Whang, as well as seminar participants at UW-Madison, Monash, ANU, Adelaide, UNSW, and U of Sydney for their suggestions and comments. This paper was also presented at the 2013 NASM and SETA 2013.
\section{Introduction}
Computer-aided diagnosis (CAD) methods first appeared in the 1960s \cite{CAD_1960}, but without great success at that time. Large-scale CAD usage arrived in the 1980s with the new approach of not replacing the medical professional but only assisting in the diagnosis \cite{CAD_review}.
The recent growth of computational capacity has allowed applications of convolutional neural networks (ConvNets) to image recognition and detection to expand \cite{Lecun_2015}, especially after the introduction of \texttt{AlexNet} in 2012 \cite{ImageNet}. This growth also applies to CAD systems that use ConvNets to classify patients.
A ConvNet can normally be divided into two parts. The first extracts features from the image, processing low-level attributes such as edges in a variety of orientations (e.g. horizontal or vertical) and combining them into larger attributes, such as the presence of square or rounded objects. The second part is the classifier, which receives the features extracted by the first part and uses them to assign the image to a class (i.e. a group of similar images). Training the network to extract features and classify is time-consuming and demands a large dataset if good features are to be learned.
To appreciate the difficulty of training these networks, note that the dataset of the ImageNet competition \cite{ILSVRC15} has over 1,000,000 images, and even with such a large dataset some networks work better than others; the same network can even outperform itself given better training. To avoid this training problem, two important techniques have emerged for reusing ConvNets on small datasets:
\begin{description}
\item[Transfer Learning] is the method of reusing a network well trained on a similar task by copying its parameters and weights to the network for the current task. Commonly, the trained network is used with only its final classifying layer (or layers) replaced to fit the current task (e.g. removing the last layer, which outputs the 1,000 ImageNet classes, and adding a layer that outputs just the 2 classes of the current problem). Alternatively, an extra layer can be appended that receives the original output and converts it to the current problem's classes.
\item[Data augmentation] is one of the techniques used in \texttt{AlexNet} \cite{ImageNet} to improve training; it consists of geometric transformations that generate new training images from the original ones, such as horizontally flipping the original images or randomly cropping and resizing them.
\end{description}
In this work we build a CAD system to classify chest x-ray images into two groups, \texttt{Normal} and \texttt{Pneumonia}, using ConvNets and comparing the performance of four different strategies:
\begin{description}
\item[Scratch] Networks initialized from scratch with random parameters, with no prior training.
\item[Transfer Learning] Networks initialized with parameters copied from a network trained on ImageNet, replacing the final layers with new randomly initialized layers and training only those final layers.
\item[Fine Tuning] Networks likewise initialized with parameters copied from ImageNet-trained ones and with the final layer replaced, but now training the whole classifier or even the entire network.
\item[Data Augmentation] Networks trained with the same strategy as Fine Tuning but on an augmented dataset, as explained later.
\end{description}
\section{Methodology}
We used the \texttt{PyTorch}\footnote{\url{https://pytorch.org/}} platform for all our neural network code, since it superseded (and incorporates the code of) the \texttt{Caffe2} platform used by many other papers in this field.
The dataset used in our study can be found in \cite{Dataset} and was also used in the study of \cite{Kermany2018}. Some statistics of this dataset are given in Table \ref{tabela_dataset}. In this study we use only the Test and Train sets, ignoring the Validation set, as it is too small to provide a good estimate of the trained network's quality or accuracy. The original images have varying sizes, all well above the input size normally used by ImageNet networks, $224 \times 224$\footnote{ConvNets normally use $224 \times 224$ RGB images, with the exception of Inception, which uses $299 \times 299$.}.
\begin{table}[ht]
\centering
\begin{tabular}{lrr}
Subset & Original & Balanced\\
\midrule
Validation & 23 & 16\\
Test & 631 & 468 \\
Train & 5,223 & 2,682 \\
\end{tabular}
\caption{Number of chest x-rays images in the dataset from \cite{Dataset}}
\label{tabela_dataset}
\centering
\begin{tabular}{lrrr}
Network & Top-1 & Top-5 & Reference\\
\toprule
\texttt{Inception} v3 & 22.55 & 6.44 & \cite{inception}\\
\texttt{ResNet} 18 & 30.24 & 10.92 & \cite{resnet}\\
\texttt{SqueezeNet} 1.1 & 41.81 & 19.38 & \cite{squeezenet}\\
\texttt{AlexNet} & 43.45 & 20.91 & \cite{ImageNet}\\
\end{tabular}
\caption{List of the ConvNets used}
\label{tabela_convnets}
\end{table}
We use the ConvNets listed in Table \ref{tabela_convnets}, noting that in the initial phase of our study only some of the networks were used, to minimize the time spent on training.\footnote{A complete list of ConvNets available on the platform used can be seen at \url{https://pytorch.org/docs/master/torchvision/models.html}}
The original dataset was unbalanced, so we discarded excess images from the \texttt{Pneumonia} class, which had the larger number. Images were removed at random so as to reach a 1:1 class ratio in both the \texttt{Train} and \texttt{Test} sets.
To choose an optimizer for training, we ran simple tests with all the \texttt{PyTorch} optimizers that could be swapped into our code without modification. The results can be seen in Fig. \ref{grafico_opt}. The long-term results are all very similar; except for \texttt{SGD}, we used the platform's default parameters. We chose the \texttt{Adam} optimizer for the remaining tests, since it showed the fastest convergence, with higher accuracy and lower loss on the training set. The full list of optimizers tested is:
\begin{enumerate*}[noitemsep,nolistsep,label={\alph*)}]
\item \texttt{ADADELTA} \cite{adadelta}
\item \texttt{Adagrad} \cite{adagrad}
\item \texttt{Adam} \cite{adam}
\item \texttt{Adamax}
\item \texttt{ASGD} \cite{asgd}
\item \texttt{RMSprop} \cite{rmsprop}
\item \texttt{Rprop}
\item \texttt{SGD} \cite{sgd}
\end{enumerate*}
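The update rule behind the chosen \texttt{Adam} optimizer can be sketched in a few lines of plain Python; this is the textbook scalar form, not PyTorch's implementation:

```python
def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter (textbook form)."""
    m = b1 * m + (1 - b1) * grad        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - b1 ** t)           # bias corrections for the
    v_hat = v / (1 - b2 ** t)           # zero-initialized moments
    return theta - lr * m_hat / (v_hat ** 0.5 + eps), m, v


# Minimize f(x) = (x - 3)^2 starting from x = 0; the gradient is 2 * (x - 3).
x, m, v = 0.0, 0.0, 0.0
for t in range(1, 5001):
    x, m, v = adam_step(x, 2.0 * (x - 3.0), m, v, t, lr=0.05)
```

The per-coordinate rescaling by $\sqrt{\hat v}$ is what gives Adam its fast early convergence relative to plain SGD.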
\begin{figure}[h]
\subfloat[][]{\includegraphics[width=0.9\linewidth]{paper_files/optimizers_train}}
\\
\subfloat[][]{\includegraphics[width=0.9\linewidth]{paper_files/optimizers_val}}
\caption{Optimizers comparison results using default parameters except where noted on SGD}
\label{grafico_opt}
\end{figure}
For the transfer learning strategy we tested several of the methods seen in other studies, all on the \texttt{AlexNet} model:
\begin{itemize}[noitemsep]
\item Adding an extra layer with 2 neurons connected to the 1,000 outputs of the original last layer, training only this extra layer.
\item Replacing the final layer of the original network with one of only 2 neurons, thereby producing only 2 output classes, and training only this final layer.
\item Replacing the entire \texttt{AlexNet} classifier network, reducing the number of neurons in the hidden layers to analyse the impact on performance.
\end{itemize}
And for the fine tuning strategy:
\begin{itemize}[noitemsep]
\item Replacing the final layer of the original network with one of only 2 output neurons, but now training the entire classifier network.
\end{itemize}
In the data augmentation strategy we also used the fine-tuning approach of replacing the final layer to obtain 2 output classes, but now augmenting the data with the following methods from the PyTorch class \texttt{torchvision.transforms}\footnote{\url{https://pytorch.org/docs/master/torchvision/transforms.html}}:
\begin{description}
\item[transforms.RandomResizedCrop()] generates a new image by cropping the original image with a random resize scale and resizing the crop to the network's input size.
\item[transforms.RandomHorizontalFlip()] randomly flips the image horizontally.
\end{description}
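What these two transforms do can be illustrated on a plain 2-D array. This is a pure-Python sketch of the underlying operations, not the torchvision code (which operates on PIL images or tensors); the crop scale range used here is illustrative:

```python
import random


def horizontal_flip(img):
    """Mirror each row of a 2-D image stored as a list of lists."""
    return [row[::-1] for row in img]


def random_resized_crop(img, out, rng):
    """Crop a random square region and resize it to out x out pixels by
    nearest-neighbour sampling (a crude stand-in for the real transform)."""
    h, w = len(img), len(img[0])
    side = rng.randint(min(h, w) // 2, min(h, w))  # random crop scale
    top = rng.randint(0, h - side)
    left = rng.randint(0, w - side)
    return [[img[top + (i * side) // out][left + (j * side) // out]
             for j in range(out)]
            for i in range(out)]


rng = random.Random(0)
img = [[10 * r + c for c in range(8)] for r in range(8)]   # toy 8x8 image
aug = random_resized_crop(horizontal_flip(img), 4, rng)    # one augmented view
```

Because the random parameters are redrawn on every access, the network never sees exactly the same training image twice.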
\section{Results}
\begin{figure}[h]
\subfloat[][]{\includegraphics[width=0.9\linewidth]{paper_files/scratch_train}}
\\
\subfloat[][]{\includegraphics[width=0.9\linewidth]{paper_files/scratch_val}}
\caption{Training results of \texttt{ResNet18} network from scratch}
\label{grafico_zero}
\end{figure}
Training the \texttt{ResNet18} network from scratch, we obtained a maximum validation accuracy of $86.38\%$ (Fig. \ref{grafico_zero}). As the epochs progressed, the training accuracy reached $100\%$, with the training loss dropping as low as $0.00034$, but with no corresponding improvement in validation accuracy and a considerable growth in validation loss. This is likely evidence of overfitting: the network adjusted itself too closely to the training set, using attributes irrelevant to our classification problem, since validation accuracy did not increase. Similar results occurred with the other networks trained under this strategy.
\begin{figure}[h]
\subfloat[][]{\includegraphics[width=0.9\linewidth]{paper_files/tl_train}}
\\
\subfloat[][]{\includegraphics[width=0.9\linewidth]{paper_files/tl_val}}
\caption{Transfer Learning results using the \texttt{AlexNet} network}
\label{grafico_tl}
\end{figure}
Using the \texttt{AlexNet} network with the transfer learning strategy, the accuracy improves a little over the previous strategy, but not all of the tested variants could reduce the training loss. This shows the limited capacity of a single layer to classify our problem.
Only with fine tuning, when we trained the entire classifier network of \texttt{AlexNet}, could we eliminate the training loss; but in this case we also observed some overfitting in the validation loss. The validation accuracies of the three transfer learning approaches were similar, as can be seen in Fig. \ref{grafico_tl}.
\begin{figure}[h]
\subfloat[][]{\includegraphics[width=0.9\linewidth]{paper_files/aug_train}}
\\
\subfloat[][]{\includegraphics[width=0.9\linewidth]{paper_files/aug_val}}
\caption{Training results of ConvNets using data augmentation - solid lines are accuracy and dotted lines loss}
\label{grafico_augmentation}
\end{figure}
The best accuracies came from the transfer learning technique combined with data augmentation, as can be seen in Fig. \ref{grafico_augmentation}. Data augmentation reduced the overfitting effect on the validation loss and also kept the network from approaching $100\%$ training accuracy or fully minimizing the training loss. This can be credited to data augmentation preventing the network from adjusting itself to the training set, since the set is changed by augmentation on every iteration. The best accuracies were obtained with \texttt{ResNet18} ($96.37\%$) and \texttt{Inception} ($95.51\%$).
\begin{figure}[h]
\includegraphics[width=0.9\linewidth]{paper_files/confusion}
\caption[Confusion matrices]{Confusion matrices - NOR stands for Normal, PNE for Pneumonia and (pre) means predicted}
\label{grafico_confusao}
\end{figure}
Fig. \ref{grafico_confusao} shows the confusion matrices for the different strategies used in this study. We can see a high number of false positives for the networks without data augmentation (top matrices), even though they score many hits in the \texttt{Pneumonia} class. With data augmentation, both the false positive and false negative counts drop (bottom matrices), with high accuracy on both the training and validation sets.
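The quantities read off such matrices reduce to simple counts; a minimal sketch with hypothetical counts for a balanced 468-image test set:

```python
def rates(tp, fn, fp, tn):
    """Accuracy, sensitivity and false-positive rate from a 2x2 confusion
    matrix, taking Pneumonia as the positive class."""
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    sensitivity = tp / (tp + fn)   # Pneumonia cases correctly flagged
    fpr = fp / (fp + tn)           # Normal cases wrongly flagged
    return accuracy, sensitivity, fpr


# Hypothetical counts, not the paper's results.
acc, sens, fpr = rates(tp=225, fn=9, fp=25, tn=209)
```

A classifier can thus show many \texttt{Pneumonia} hits (high sensitivity) while still having a poor false-positive rate, which is exactly the pattern seen in the top matrices.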
\begin{figure}[h]
\subfloat[][]{\includegraphics[width=0.45\linewidth]{paper_files/roc_normal}}
\subfloat[][]{\includegraphics[width=0.45\linewidth]{paper_files/roc_pneumonia}}
\\
\subfloat[][]{\includegraphics[width=0.45\linewidth]{paper_files/roc_aug_normal}}
\subfloat[][]{\includegraphics[width=0.45\linewidth]{paper_files/roc_aug_pneumonia}}
\caption{ROC curves for networks}
\label{grafico_roc}
\end{figure}
Fig. \ref{grafico_roc} shows the ROC curves for the networks, which better characterize the selectivity of the classifiers. The confusion matrices suggest that the networks without data augmentation had better accuracy on the \texttt{Pneumonia} class than those with it, but the ROC curves show that this is not really the case. Data augmentation provided a better separation of the two classes, yielding a more robust classifier, which is reflected in ROC curves approaching the northwest corner of the graph.
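An ROC curve is obtained by sweeping the decision threshold over the classifier scores; a minimal pure-Python sketch with toy scores (not the paper's data):

```python
def roc_points(scores, labels):
    """ROC curve as (FPR, TPR) pairs obtained by sweeping the decision
    threshold over the scores; labels: 1 = Pneumonia, 0 = Normal."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = [(0.0, 0.0)]
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts


def auc(points):
    """Area under the ROC curve by the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(points, points[1:]))


# A perfectly separating classifier reaches the northwest corner (AUC = 1).
pts = roc_points([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
```

The closer the curve hugs the northwest corner, the larger the area under it, which is why the augmented networks look better here despite similar per-class accuracies.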
\section{Conclusions}
It is possible and realistic to create a computer-aided diagnosis (CAD) system using ConvNets even with modest computational resources for network training and a small dataset. In the best cases shown, only a few hours were needed to complete the training, and acceptable results emerged even in the first epochs.
To obtain a reliable CAD system it is advisable to rely on data augmentation techniques to avoid the overfitting problem of ConvNets. For large datasets this may not be necessary, but with our dataset of $2,682$ images data augmentation was required to obtain reliable results. One could even argue that the current dataset is not small, since in the ImageNet case each class had about $1,000$ images and data augmentation was still used to improve the results in \cite{ImageNet}.
\section*{Acknowledgement}
The author would like to thank professor Dr. Renato Tinós for his class on Bio-Inspired Computation, which helped in grasping the concepts behind neural networks.
\printbibliography
\end{document}
\section{Introduction}
Decay studies
of exotic nuclear species at the focal plane of the FAIR-NUSTAR Super Fragment Separator in the DESPEC experiment \cite{DESPEC} will provide information on the nuclear structure and the astrophysics impact of exotic nuclei. Far from stability, the $Q_{\beta}$ values are very large, and the corresponding increase in level density implies, on the one hand, the fragmentation of the $\beta$ feeding into many levels populated in the decay and, on the other hand, the fragmentation of the $\gamma$ intensity between many possible cascades. Total Absorption $\gamma$-Ray Spectroscopy (TAGS) has been shown to be an accurate tool to determine $\beta$-decay intensity distributions for such nuclei far from the valley of $\beta$ stability. This technique avoids the so-called \textit{Pandemonium} effect \cite{Pandemonium}, related to the relatively poor efficiency of
HPGe detectors. Instead of detecting individual $\gamma$ rays as in high-resolution experiments with HPGe detectors, TAGS aims to detect the full $\beta$-delayed electromagnetic cascade. This is achieved with large scintillator crystals covering a solid angle of $\sim$ 4$\pi$.
For this reason, a new spectrometer has been designed and constructed for the DESPEC experiment \cite{DTAS_design}. The Decay Total Absorption $\gamma$-Ray Spectrometer (DTAS) is a segmented detector that consists of a maximum of eighteen NaI(Tl) crystals with dimensions 150~mm $\times$ 150~mm $\times$ 250~mm \cite{DTAS_design}. The advantage of the segmentation in this case is threefold: the possibility to extract information from the multiplicity spectra, as will be explained later, the possibility of using the individual modules as single $\gamma$ detectors, and the mechanical flexibility of the set-up. In fact, we consider two main configurations for DTAS: a sixteen-module configuration designed for
experiments at fragmentation facilities,
and an eighteen-module configuration for experiments at ISOL-type facilities.
Both configurations without shielding can be seen in Fig. \ref{DTAS_conf}. In the eighteen-module configuration
side holes
can be made
by moving away the modules of the horizontal central plane, thus allowing access from both sides of the detector, as shown in Fig. \ref{DTAS_conf} bottom.
In this way DTAS can be combined with ancillary detectors and it is possible to position a beam pipe in the centre of the spectrometer.
This configuration has recently been commissioned at IGISOL \cite{NIMB_DTAS}, with holes of 10~cm used to place a HPGe detector from one side and the beam pipe with a $\beta$ detector from the other side. The two central modules were separated by 16~cm instead of 10~cm in order to lower their counting rate, so that it was comparable to the
external modules.
The configuration foreseen for FAIR \cite{DTAS_design}, with sixteen modules, will be coupled to the Advanced Implantation Detector Array (AIDA) \cite{AIDA}. In order to place AIDA in the center of DTAS, the two central modules in the eighteen-module configuration are removed and the two modules above the central hole are supported by a specially designed aluminium frame with external dimensions identical to a module, as shown in Fig. \ref{DTAS_conf} upper panel.
The shielding surrounding DTAS is composed of stainless steel sheets, lead bricks and aluminium, and it served to reduce the background counting rate by one order of magnitude in the measurements of this work. The
allocation of individual modules to positions in the arrangement was done
according to their resolutions, ranging from 7$\%$ to 9$\%$
at 661.7~keV, so that the positions associated with the lowest counting rates (the eight corners of the assembly shown in Fig. \ref{DTAS_conf}) were occupied by the modules with the poorest resolution.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.37 \textwidth]{DTAS16.JPG}
\vspace{0.2cm}
\includegraphics[width=0.37 \textwidth]{DTAS18.JPG}
\caption{DTAS detector in the sixteen-module configuration (top) and in the eighteen-module configuration (bottom)
without radiation shielding.}
\label{DTAS_conf}
\end{center}
\end{figure}
The outline of the article is the following: in section \ref{sec-1} we will describe the procedure to reconstruct the full energy deposited in the detector from the signals of the individual modules. In section \ref{sec-2} a method to evaluate the summing-pileup contamination will be explained, and its validation with calibration sources will be discussed. Finally, the Monte Carlo (MC) response function of the detector will be described in section \ref{MC_DTAS}, and the reproduction of several calibration sources and the neutron contamination coming from $\beta$-delayed neutron emitters will be
discussed.
\section{Total energy reconstruction: hardware sum and software sum}\label{sec-1}
In this section we will describe the electronic chain employed to process the signals from the individual modules of DTAS, and the procedure to reconstruct the total energy deposited in the detector. In particular, two methods to calculate the total
energy sum
will be
discussed:
the hardware sum and the software sum.
\subsection{Signal processing}\label{electronics}
In order to analyse data from DTAS we have to reconstruct accurately, for each event, the energy deposited in the full spectrometer and the multiplicity, $M_m$ (the number of modules that fire above threshold). The full energy released in the spectrometer is obtained by summing the energy deposited in the individual modules, either electronically or via software. The electronic chain that processes the signals from the modules was designed with this in mind, and it is represented in Fig. \ref{fig_electronics}.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=1 \textwidth]{electronics.png}
\caption{Schematic diagram of the electronic chain. The labels correspond to: Preamplifier (Preamp), Spectroscopic Amplifier (Amp), Timing Filter Amplifier (TFA), Constant Fraction Discriminator (CFD), Gate/Delay Generator (GDG), Time to Digital Converter (TDC), Analog to Digital Converter (ADC).}
\label{fig_electronics}
\end{center}
\end{figure*}
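Per event, the two quantities defined above amount to a thresholded sum over the modules; a minimal sketch (the threshold value is illustrative, not the experimental setting):

```python
def event_sum(module_energies, threshold=100.0):
    """Total deposited energy (keV) and module multiplicity M_m for one
    event: only modules firing above the threshold contribute.
    The threshold value here is illustrative."""
    fired = [e for e in module_energies if e > threshold]
    return sum(fired), len(fired)


# One event: three modules fire (a 2296.5 keV cascade), the rest see nothing.
total, mult = event_sum([511.0, 511.0, 1274.5] + [0.0] * 15)
```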
We use Mesytec MSI-8p preamplifiers \cite{mesytec} for both
anode and dynode signals from the photomultiplier tubes (PMTs).
After the preamplifier, dynode
signals
are split into two branches; one branch is sent to a CAEN N625 Quad Linear FAN-in FAN-out \cite{caen}, and the other to Mesytec MSCF-16
shapers.
The N625 module acts as an analog signal adder
and one of the outgoing signals is processed in an ORTEC 671 amplifier \cite{ortec} to produce the sum energy signal
(hardware sum)
sent to the analog to digital converter (ADC), a CAEN V785 module,
of the data acquisition system (DACQ).
Another output from the N625 module is used to construct a
common stop signal sent to a time to digital converter (TDC), CAEN V775, using an ORTEC 474 Timing Filter Amplifier and an
ORTEC 584 Constant Fraction Discriminator.
The MSCF-16 shapers provide individual energy and timing output signals that are sent to the individual channels of the ADC
and TDC modules respectively.
The anode signals after the preamplifier are sent to sampling digitizers of a second digital DACQ,
running in self-triggered mode, which is not discussed in this publication.
In order to carry out
the hardware
sum properly we need to match the gains of
the different PMTs
by adjusting the high voltage (HV) applied to them, so that
the signals of individual modules
are aligned. Note that aligned here means having the same amplitude for the same
energy deposited.
The software sum is reconstructed offline from the individual signals processed with the
MSCF-16
shapers. In the following subsections we will show a method of correcting possible changes in the gain of the modules, as well as the way to properly perform the alignment and determine the software sum of these signals.
\subsection{Gain correction system}\label{gain}
A system to correct changes in the gain of
individual
modules has been developed. These changes may be due to temperature
variations
\cite{Temperature_NaI}, drift of the PMT
current
and fluctuations in the HV supply.
In this system the gain of each module is monitored
checking the position of
the peak produced by a pulsed light source.
An additional external reference detector,
with a weak $^{137}$Cs radioactive source,
is used to monitor the stability of the light pulse generator. The following elements are employed in this system:
\begin{itemize}
\item An external reference well-type NaI(Tl) detector of 3'' diameter $\times$ 3'' length manufactured by Saint Gobain \cite{saint-gobain}. The well has 15~mm diameter and 40~mm depth. The
crystal is
mounted on a 3'' diameter ETI 9305 PMT as shown in Fig. \ref{Ref_det}.
\item A 490~nm light pulse generator model 6010 from BNC \cite{bnc}.
The generator is triggered with an external 100~Hz clock signal.
\item A 2~m long bundle of borosilicate glass fibres split into 20 bundles of 2~mm diameter, manufactured by FiberTech Optica \cite{fibertech}. The fibres are terminated with SMA type connectors.
\item A weak $^{137}$Cs source of $\sim$ 300~Bq.
\end{itemize}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.4 \textwidth]{wellDet.png}
\caption{NaI(Tl) reference well type detector. Inset: a view of the front face with the hole where the weak $^{137}$Cs source is placed.}
\label{Ref_det}
\end{center}
\end{figure}
The fibre bundle
splitter
is used to distribute the light pulse from the generator to the reference detector and to each of the eighteen modules. The $^{137}$Cs source is placed inside the well of the reference detector. The reference detector is surrounded by lead shielding and is placed close to DTAS. Since both the reference detector and DTAS have shielding, this weak source does not affect the DTAS measurements.
The position of the 661.7~keV peak in the well detector provides a reference for possible
changes in the gain of this detector. Comparing the position of the light pulser peak
with this peak we can determine if there are variations of the intensity of the light source.
With this information we can separate in each module variations in the gain from variations
in the light source intensity.
The
gain
correction is calculated for short
time intervals,
and the procedure will be detailed in the next subsection. An example of the spectra of the reference detector and
one
individual module of DTAS showing the light pulser peaks can be seen in Fig. \ref{Reference_peaks}.
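The separation of module-gain drift from light-source drift described above can be written as a ratio of peak positions. The function below is a sketch of our reading of this procedure, with hypothetical channel numbers:

```python
def gain_correction(pulser_mod, pulser_mod0,
                    pulser_ref, pulser_ref0,
                    cs_ref, cs_ref0):
    """Multiplicative gain correction for one module's amplitudes.

    The 661.7 keV peak position in the reference detector (cs_ref) tracks
    that detector's own gain, so the ratio pulser_ref / cs_ref follows the
    light-source intensity alone.  Dividing the module's pulser-peak shift
    by the intensity shift isolates the module's gain drift; its inverse
    is the correction to apply.  Index 0 denotes the calibration run.
    """
    light_change = (pulser_ref / cs_ref) / (pulser_ref0 / cs_ref0)
    gain_drift = (pulser_mod / pulser_mod0) / light_change
    return 1.0 / gain_drift


# Hypothetical channels: module gain sagged 2 %, light source brightened 1 %.
corr = gain_correction(pulser_mod=2969.4, pulser_mod0=3000.0,
                       pulser_ref=2424.0, pulser_ref0=2400.0,
                       cs_ref=800.0, cs_ref0=800.0)
```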
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5 \textwidth]{Individual_draw_FINAL.eps}
\vspace{0.2cm}
\includegraphics[width=0.5 \textwidth]{Ref_draw_FINAL.eps}
\caption{Individual DTAS detector spectrum with the light pulser peak in a $^{60}$Co measurement (top). Reference detector spectrum with the 661.7~keV peak from the weak $^{137}$Cs source, and the light pulser peak (bottom).}
\label{Reference_peaks}
\end{center}
\end{figure}
In order not to disturb the measured individual spectra, the
peak due to the light pulser has to be located beyond the energy region of interest,
see Fig. \ref{Soft_sum_ex} as an example.
When choosing an optical fibre bundle for each module, we took into account that the 20 bundles do not all transport the same amount of light, and that the individual modules do not convert the same amount of incident light into the same signal amplitude in the PMT. For both reasons, in order to minimize the differences in the light pulser peak positions between modules, we assigned the bundles that transport more light to the modules that are worst in terms of light conversion.
Apart from applying the gain correction offline, the gain correction system could also be used for maintaining the alignment of the signals of the modules
during the measurement
by applying
periodic
HV corrections to the PMTs. This
requires information about the dependence of the gain on the HV for each module. Although we have tested this online correction
method, it was not used in the actual measurements.
\subsection{Software sum}\label{sum}
Just as in the case of the hardware sum, before performing the software sum the amplitude of the signals
stored for each event must be properly aligned.
Although
signal
amplitudes were
gain-matched
before the FAN-in FAN-out for the hardware sum, and even though the gains of the shapers are set to a common value,
the stored amplitude information needs to be corrected
due to slight variations in gain and offset of the individual electronic channels.
The first idea for
making
this alignment was to convert signal amplitude (proportional to light collected) into energy for each of the individual channels. This conversion between light collected and deposited energy is what we will call energy calibration. A solution like this has been successfully adopted for a 12-fold segmented BaF$_2$ spectrometer in previous works \cite{vTAS_PRL,Zak_PRL,vTAS_PRC,Simon_PRC}. Nevertheless, we soon realized that it cannot be applied in the case of a segmented detector made of NaI(Tl) because of the non-proportionality of the light yield in this material \cite{Non_Prop_Discover, non-prop3}. The reason is
related to
what was
pointed out in \cite{TAS_MC},
explaining the shift of
the position of full energy peaks due to $\gamma$-ray cascades
with respect to single $\gamma$-ray peaks of the same energy.
For every primary electron created by $\gamma$-ray interactions there is a shift of about
10~keV in the apparent energy. Since $\gamma$-rays of several hundreds of keV to a few MeV
typically require on the order of three interactions (two Compton, one photoelectric) to deposit their full energy,
this explains why
for a $\gamma$-cascade of two $\gamma$-rays ($\gamma$-multiplicity, $M_{\gamma}$=2) the shift is approximately 30~keV, while for $M_{\gamma}$=3 it is 60~keV and so on. In the case of a segmented detector the situation is more complicated, and the shift depends not only on the $\gamma$-multiplicity, $M_{\gamma}$, but also on the number of modules where the energy is deposited, $M_m$,
which determines the distribution of the number of primary electrons in each module.
Taking into account the different ways that $3 \times M_{\gamma}$ electrons
can be distributed in $M_{m}$ modules one can determine that
the apparent energy shifts follow approximately the numbers in Table \ref{non-prop_TABLE}.
The first row in the table corresponds to the behaviour of a single NaI(Tl) crystal spectrometer like LUCRECIA at ISOLDE \cite{LucreciaTAS} or the LBNL spectrometer used at GSI \cite{NIMB_TAS_GSI}.
\begin{table}[h]
\begin{center}
\begin{tabular}{ccccc}
\textbf{$M_{\gamma}$} &\makebox[3em]{1}&\makebox[3em]{2}&\makebox[3em]{3}
&\makebox[3em]{4}\\
\hline
\textbf{$M_m$} &&&&\\
$1$ & $0$ & $+30$ & $+60$ & $+90$\\
$2$ & $-30$ & $0$ & $+30$ & $+60$\\
$3$ & $-60$ & $-30$ & $0$ & $+30$\\
$4$ & $-90$ & $-60$ & $-30$ & $0$\\
\hline
\end{tabular}
\caption{Shift in keV of the sum peak
position
due to the non-proportionality of the light yield in a segmented NaI(Tl) spectrometer
when
the individual modules
are
calibrated in energy before the software sum.}
\label{non-prop_TABLE}
\end{center}
\end{table}
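The pattern of Table \ref{non-prop_TABLE} is linear in the difference between the two multiplicities; with $\sim$10~keV per primary electron and three interactions per $\gamma$ ray, a short script reproduces it (the constants are the approximate values quoted above):

```python
def sum_peak_shift(m_gamma, m_modules, kev_per_electron=10, hits_per_gamma=3):
    """Approximate sum-peak shift (keV) for a cascade of m_gamma gamma rays
    detected in m_modules crystals, when each module is individually
    calibrated in energy before the software sum."""
    return kev_per_electron * hits_per_gamma * (m_gamma - m_modules)


# Reproduce the table: rows are M_m = 1..4, columns M_gamma = 1..4.
table = [[sum_peak_shift(g, m) for g in range(1, 5)] for m in range(1, 5)]
```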
For a single crystal spectrometer the non-proportionality is not a problem as far as this effect is included in the MC simulations in the way detailed in \cite{TAS_MC}. Likewise, it does not present any problem for the hardware sum of a segmented NaI(Tl) spectrometer,
as long as the PMTs are gain-matched.
However, the consequence of
applying an energy calibration to individual modules before summing,
is that the resolution of the sum peaks is worsened due to the displacement of the different multiplicities contributing to the sum. The non-proportionality of the light yield in NaI(Tl) is known to have an important contribution to the resolution of single crystal detectors \cite{non-prop1,non-prop2}, but this is an additional effect for multi-crystal detectors. In Fig. \ref{Shift_22Na} these shifts are shown for a measurement of $^{22}$Na ($M_{\gamma}$=3) and for the corresponding MC simulation of this source that includes the non-proportionality of the light yield as in \cite{TAS_MC}. In both cases an energy calibration has been applied to all the individual modules before summing. The vertical black line corresponds to 2296.5~keV, the sum of the energies of the three $\gamma$-rays involved: 511~keV, 511~keV and 1274.5~keV. The sum peaks of the different multiplicities are not aligned, showing a displacement in agreement with Table \ref{non-prop_TABLE}. Only $M_m$=3 is aligned with the nominal sum, since it corresponds to a 0~keV shift in Table \ref{non-prop_TABLE}, with three $\gamma$-rays detected in three crystals. Note that the experimental spectra are not background subtracted, whereas the MC is only widened by the light function from \cite{TAS_MC}, without taking into account
additional contributions to the resolution.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5 \textwidth]{non_prop_22Na_MC_zoom_label.eps}
\vspace{0.2cm}
\includegraphics[width=0.5 \textwidth]{non_prop_22Na_exp_zoom_label.eps}
\caption{$^{22}$Na software sum of the light produced in the individual detectors
calibrated
in energy. Both the MC (top) and the experimental measurement (bottom) show the shifts from Table \ref{non-prop_TABLE}. The vertical black line corresponds to the sum of the energies of the three $\gamma$-rays involved. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)}
\label{Shift_22Na}
\end{center}
\end{figure}
In summary, in order to maintain the resolution, we have to align the stored amplitudes of the individual modules, thus reproducing with our software sum the same behaviour as the hardware sum and, equivalently, as a single crystal detector.
In addition, there are other effects that may worsen the resolution, such as changes in the gain of the PMTs and of the electronic chain. The correction we apply to counteract these effects is performed sequentially on a reduced number of events. This number should be sufficiently large to determine the peak positions accurately and sufficiently small to limit the effect of gain variations during the acquisition time. We have verified that 1 million events (corresponding to approximately 4 minutes at a typical counting rate of 4-5~kHz in DTAS) fulfil this condition.
The stored amplitude is represented by the bin number in the histogram accumulated by each ADC channel (detector module), and the first step is to determine the offset and the gain. To determine the ADC offset for each channel we use the position of the peak due to the electronic noise. The gain is obtained from the positions of the two peaks from a calibration run with a $^{22}$Na source (511~keV and 1274.5~keV). With the offset and gain so obtained, the alignment of the first million events is performed choosing one arbitrary module as a reference.
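As a rough illustration, the offset and gain determination and the alignment to a reference module can be sketched as follows. The exact form of the alignment transformation is our assumption (a linear mapping between the two channel scales); the function names are hypothetical:

```python
def adc_parameters(noise_peak, p511, p1274):
    """Offset and gain of one ADC channel: the offset is taken as the
    position of the electronic-noise peak, and the gain (keV per ADC
    channel) follows from the 511 keV and 1274.5 keV peak positions
    of a 22Na calibration run."""
    offset = noise_peak
    gain = (1274.5 - 511.0) / (p1274 - p511)
    return offset, gain

def align(x, offset, gain, offset_ref, gain_ref):
    """Assumed linear mapping of a stored amplitude x of one module
    onto the scale of the (arbitrarily chosen) reference module."""
    return offset_ref + gain / gain_ref * (x - offset)
```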
After the alignment, the reference values of the parameters involved in the gain correction procedure are determined.
The ADC offset is represented by $a_j$, with $j=0,\ldots,18$, where $j=0$ is the well detector and $j=1,\ldots,18$ are the DTAS modules.
The reference position of the light pulser peak for each module, $L_j$, is
obtained by peak fitting.
Analogously, the $^{137}$Cs peak and the light pulser peak reference positions for the well detector, $P_0$ and $L_0$ respectively, are determined.
The next group of one million events is then processed.
We define $L^{'}_j$ as the new light pulser peak position of module $j$, $b_j$ as the gain change factor of module $j$, and $C$ as the change factor in the light source intensity.
The procedure described below is followed in order to calculate the gain corrections and sum the amplitudes of all modules stored in each event:
\begin{itemize}
\item The new position of the $^{137}$Cs peak for the
well detector, $P^{'}_0$, is determined, as well as the position of the light pulser peak $L^{'}_0$.
\item The change in the gain of the PMT of the
well detector is calculated:
\begin{center}
\begin{equation}
b_0=\frac{P_0-a_0}{P^{'}_0-a_0}
\end{equation}
\end{center}
\item The
change in the light produced by the light pulse generator, $C$, is calculated:
\begin{center}
\begin{equation}\label{Lfluc}
C=\frac{L_0-a_0}{b_0(L^{'}_0-a_0)}
\end{equation}
\end{center}
\item $L^{'}_j$ is determined for each of the DTAS modules, and with this value the gain change
factor
is calculated taking into account the change of intensity of the light source from Equation \ref{Lfluc}:
\begin{center}
\begin{equation}
b_j=\frac{L_j-a_j}{C(L^{'}_j-a_j)}
\end{equation}
\end{center}
\end{itemize}
Once the parameters $b_j$ are determined we reprocess the same group of events applying the
gain correction factor in order to align the amplitudes of all modules to the first group of events used
as a reference.
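The gain-correction formulas above can be collected into a short sketch. The final realignment formula, $x' = a_j + b_j(x - a_j)$, is our reading of the procedure and is not spelled out explicitly in the text:

```python
def gain_corrections(P0, P0_new, L0, L0_new, Lj, Lj_new, a):
    """Gain-change factors for the well detector (index 0) and the
    DTAS modules (indices 1..18). `a` holds the ADC offsets a_j;
    the *_new quantities are the peak positions found in the current
    group of events."""
    # Well-detector gain change from the 137Cs peak shift
    b0 = (P0 - a[0]) / (P0_new - a[0])
    # Change of the light-pulser intensity, decoupled from the PMT gain
    C = (L0 - a[0]) / (b0 * (L0_new - a[0]))
    # Gain change of each DTAS module from its light-pulser peak
    b = [b0] + [(Lj[j] - a[j]) / (C * (Lj_new[j] - a[j]))
                for j in range(1, len(a))]
    return b, C

def correct_amplitude(x, j, b, a):
    """Assumed realignment of a stored amplitude x of module j
    to the reference group of events."""
    return a[j] + b[j] * (x - a[j])
```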
As a result of applying this procedure, the software sum
can be performed properly,
as can be seen in Figure \ref{Soft_sum_ex} for a $^{60}$Co source. At this point an energy calibration can be applied (a conversion between light collected and energy) by using single peaks ($M_{\gamma}$=1), as in the case of the hardware sum.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5 \textwidth]{60Co_SOFT_cal_FINAL_labels.eps}
\caption{Software sum spectrum of a $^{60}$Co source
measurement and individual module spectra showing the alignment. The peaks in individual modules
above 8~MeV are due to the light pulser.}
\label{Soft_sum_ex}
\end{center}
\end{figure}
The software sum reconstructed in this way exhibits the same behaviour as the hardware sum in terms of the non-proportionality of the light yield, as seen in Fig. \ref{hard-soft} for two calibration sources. In both cases the segmented detector behaves as a single crystal detector in terms of the position of the sum peak. The main differences are a slightly better resolution in the software sum with respect to the hardware sum, due to the gain corrections, and a different shape in the pileup region that will be commented on in the next section.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5 \textwidth]{60Co_soft_hard_NIM_BW_FINAL.eps}
\includegraphics[width=0.5 \textwidth]{22Na_soft_hard_NIM_BW_FINAL.eps}
\caption{Comparison of the hardware sum (grey) and the software sum (black) in DTAS for a $^{60}$Co source (top) and a $^{22}$Na source (bottom).}
\label{hard-soft}
\end{center}
\end{figure}
In order to ensure that the treatment of the non-proportionality is correct, we can check the spectra of the multiplicities after this process. In Fig. \ref{Mult_ok}, we show the good alignment of the different $M_m$ multiplicity spectra achieved with this method for a $^{22}$Na source ($M_{\gamma}$=3) and a $^{60}$Co source ($M_{\gamma}$=2), in contrast with the results shown in Fig. \ref{Shift_22Na}. The vertical black lines correspond to the sum peak positions calculated using the shift associated with single crystals according to \cite{TAS_MC} (first row of Table \ref{non-prop_TABLE}): 2296.5~keV+60~keV for the $^{22}$Na source, and 2505.7~keV+30~keV for the case of $^{60}$Co.
We should point out that the procedure followed here solves the misalignment problems
between simulation and experiment encountered in the calibration of the Modular Total Absorption
Spectrometer (MTAS) \cite{non-prop4}, as will be shown in Section \ref{MC_DTAS}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5 \textwidth]{22Na_mult_NIM_FINAL.eps}
\vspace{0.2cm}
\includegraphics[width=0.5 \textwidth]{60Co_mult_NIM_FINAL.eps}
\caption{Multiplicity
spectra
of a $^{22}$Na source (top) and a $^{60}$Co source (bottom). Vertical black lines show the corresponding energy of the sum peak
for a single crystal detector.
(For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)}
\label{Mult_ok}
\end{center}
\end{figure}
\section{Summing-pileup calculation}\label{sec-2}
An important source of distortion in the measured spectra is the random superposition of electronic signals within the time length of the ADC gate, due to the statistical nature of the decays. This superposition affects the pulse shape of a single
detector
leading to so-called pulse-pileup \cite{TAS_pileup}. This
applies to
individual crystals as well as to the hardware sum of a multi-crystal detector.
In the software sum of a segmented detector the distortion due to the superposition of events within the ADC gate
takes an additional form; namely, the sum of the signals detected in different modules and corresponding to different decays that are, however, stored in the same event.
Thus, to calculate the distortion of the final spectrum, both processes must be taken into account: the pulse-pileup (which we will simply call pileup) and the random summing (which will simply be called summing, but must be distinguished from the traditional use of this term in spectroscopy). We have developed a method to treat the distortion of spectra due to summing-pileup that was already used in previous works \cite{vTAS_PRL,Zak_PRL,vTAS_PRC,Simon_PRC}, and will be detailed here. The quality of the reproduction of this type of spectrum distortion for a set of calibration sources, listed in Table \ref{sources}, has been studied.
\begin{table}[h]
\begin{center}
\begin{tabular}{c|c}
Source & Rate [kHz] \\ \hline
$^{22}$Na & 4 and 5\\
$^{60}$Co & 7 \\
$^{24}$Na & 14 \\
$^{137}$Cs & 19 \\
$^{152}$Eu-$^{133}$Ba & 44 \\
Background & 3 \\ \hline
\end{tabular}
\caption{Set of sources used in the study of the summing-pileup, and their counting rates in DTAS, together with the environmental background
rate.
All these measurements were performed with shielding.}
\label{sources}
\end{center}
\end{table}
\subsection{Procedure}
The evaluation of the summing-pileup contamination is based on the event structure of the experimental data, and on the true
electronic
pulse shape of the individual modules after the MSCF-16 shapers. For the first order
summing-pileup
calculation, two arbitrary random events are read
from the list-mode event file
and the time difference between them is sampled randomly within the ADC gate length. If an individual detector has fired in both events, two pulses with their corresponding amplitudes are summed, and the maximum within our effective ADC gate $\tau=$ 5.6~$\mu$s (the ADC gate minus the peaking time of the individual signals) is taken, according to \cite{TAS_pileup}. If, on the contrary, the individual detector has only fired in one event, it contributes to the
summing evaluation. The total summing-pileup is the sum of all contributions, as depicted in Fig. \ref{Summing-pileup_scheme}.
This procedure assumes implicitly that the distortion
of measured events is small. This approximation is valid if the rate is below 10~kHz. For higher rates
a similar procedure, but based on simulated data, is used as explained in the next subsection.
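A minimal sketch of this first-order sampling follows. The pulse shape and the full gate length are placeholders (the actual analysis uses the true MSCF-16 pulse shape), and the event representation as a dict of module amplitudes is our assumption:

```python
import random

TAU = 5.6e-6   # effective ADC gate (full gate minus peaking time), seconds
GATE = 8.0e-6  # assumed full ADC gate length; only the sampling range here

def first_order_event(ev1, ev2, shape, dt=None, n_samples=100):
    """One first-order summing-pileup sample built from two random
    list-mode events ev1, ev2 (dicts: module id -> stored amplitude).
    `shape` stands in for the true normalized pulse shape and must
    vanish for negative times."""
    if dt is None:
        # random time difference between the two events within the gate
        dt = random.uniform(0.0, GATE)
    out = {}
    for j in set(ev1) | set(ev2):
        if j in ev1 and j in ev2:
            # pulse pileup: superimpose the two shaped pulses and take
            # the maximum within the effective gate tau
            out[j] = max(ev1[j] * shape(t) + ev2[j] * shape(t - dt)
                         for t in (k * TAU / n_samples
                                   for k in range(n_samples)))
        else:
            # summing: the module fired in only one of the two events
            out[j] = ev1.get(j, 0.0) + ev2.get(j, 0.0)
    return out
```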
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5 \textwidth]{Summing-pileup_F.pdf}
\caption[First order summing-pileup scheme]{First order summing-pileup scheme. The red squares represent a signal over the threshold stored in the ADC for
a module in a given event.
When the same module fires in the two events used for the summing-pileup reconstruction, it is processed as a pulse pileup. In any other case, the signals are added giving rise to the summing contribution.}
\label{Summing-pileup_scheme}
\end{center}
\end{figure}
It is worth mentioning that, in our measurements, the majority of the summing-pileup events come from the summing contribution, as shown in Fig. \ref{Summing-pileup_contributions} for a $^{60}$Co source, where the total summing-pileup contains around 87$\%$ of events with only summing, $\sim$1$\%$ with only pulse pileup, and $\sim$12$\%$ where both contribute.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5 \textwidth]{60Co_pileup_contributions_L.eps}
\caption{First order summing-pileup for the $^{60}$Co source. The contributions of the pulse pileup (dotted red) and the random summing (dashed blue) are separated to show the region where they are affecting the spectrum. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)}
\label{Summing-pileup_contributions}
\end{center}
\end{figure}
The normalization factors needed to compare
the calculated summing-pileup contribution
with experimental spectra are obtained from the theoretical expression of Eq. \ref{Pileup_teo_factor}, which is based on the expression
used
for pileup order $n$ in \cite{TAS_pileup}, but adapted to a segmented detector. Here $\alpha_i$ are the individual counting rates of the 18 crystals and $\tau$ is the length of the effective ADC gate.
\begin{equation}
N^{n}_{theo}=\sum_{i=1}^{18}e^{-\alpha_i \tau}(1-e^{-\alpha_i \tau})^n
\label{Pileup_teo_factor}
\end{equation}
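Eq. \ref{Pileup_teo_factor} translates directly into code; for instance (function name illustrative):

```python
import math

def summing_pileup_norm(rates, tau, order):
    """Theoretical normalization factor for summing-pileup of order n,
    from the individual counting rates alpha_i of the crystals and
    the effective ADC gate length tau."""
    return sum(math.exp(-alpha * tau) * (1.0 - math.exp(-alpha * tau)) ** order
               for alpha in rates)
```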
When the counting rate is high (approximately above 10~kHz), second order
summing-pileup
contributions must be evaluated. This is the case for the $^{24}$Na source and the $^{137}$Cs source, whereas for the $^{152}$Eu-$^{133}$Ba source even the third order contribution was needed in order to reproduce the measured spectrum. The procedure in those cases represents just an extension of the method already described. In the second order contribution, for example, three events are taken each time, instead of two.
The quality of the reproduction of this contamination in the set of calibration sources of Table \ref{sources} can be seen in Figs. \ref{Summing-pileup_contributions} and \ref{Summing-pileup_sources}.
\begin{figure*}[!h]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.5 \textwidth]{22Na_pileup_NIM_L.eps} &
\includegraphics[width=0.5 \textwidth]{24Na_pileup_NIM_L.eps} \\
\includegraphics[width=0.5 \textwidth]{152Eu133Ba_pileup_NIM_L.eps} &
\includegraphics[width=0.5 \textwidth]{137Cs_pileup_NIM_L.eps}
\end{tabular}
\caption{Summing-pileup contributions for a set of calibration sources. Two sources of $^{22}$Na with different counting rates are compared, normalized by the time of the measurement. For the $^{137}$Cs and the $^{152}$Eu-$^{133}$Ba sources the
calculated summing-pileup contribution is based on simulated data (see text for details).}
\label{Summing-pileup_sources}
\end{center}
\end{figure*}
\subsection{Calculation with MC simulated data}
We encountered difficulties in reproducing the shape of the summing-pileup contribution for the sources with high counting rates, $^{137}$Cs and $^{152}$Eu-$^{133}$Ba. This is because, for these sources, a large fraction of the events used for the calculation are already distorted by summing-pileup. However, we are unable to distinguish whether a measured event is distorted or not. The way out of this dilemma is to use realistic simulated data.
For this purpose, we simulated the decay of sources with Geant4 \cite{GEANT4} as will be explained in Section \ref{MC_DTAS}
and we stored the relevant information for the modules fired in each decay event in a format similar to that of the experimental data. In the simulation the deposited energy is converted into light and the experimental resolution is introduced. The appropriate calibration from light to experimental amplitude is then applied.
For the calculation of the summing-pileup contribution,
the same procedure explained in the previous section is used with small modifications.
In particular,
we have to supplement the simulated data file with a real background data file.
We assume that the summing-pileup distortion in background events is small. Consequently, only the source-source and source-background summing-pileup contributions are calculated. For this reason the first event is always chosen from the pure source (MC simulation) and the second is taken either from the source or from the background experimental file. The proportion between source and background for the second event is roughly fixed by the counting rates, and it is a parameter that can be adjusted by looking at the resulting spectra.
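The choice of the event pair for the high-rate case can be sketched as follows (names and the uniform sampling are illustrative; `f_bkg` stands for the source/background proportion tuned on the resulting spectra):

```python
import random

def pick_event_pair(mc_source_events, bkg_events, f_bkg):
    """First event always from the pure (undistorted) MC source file;
    second from the measured background file with probability f_bkg,
    otherwise from the MC source file."""
    ev1 = random.choice(mc_source_events)
    if random.random() < f_bkg:
        ev2 = random.choice(bkg_events)
    else:
        ev2 = random.choice(mc_source_events)
    return ev1, ev2
```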
The use of MC data files to reconstruct the summing-pileup contribution has proven to be successful: the summing-pileup of $^{137}$Cs and $^{152}$Eu-$^{133}$Ba shown in Fig. \ref{Summing-pileup_sources} has been reconstructed using MC simulated data instead of the experimental source file, thus validating this method for high counting rates.
\section{Validation of MC simulations}\label{MC_DTAS}
The aim of the TAGS technique is to determine a $\beta$-intensity distribution from an experimentally measured spectrum by solving the inverse problem represented by:
\begin{center}
\begin{equation}\label{inverse}
d_i=\sum\limits_{j}R_{ij}f_j + c_i
\end{equation}
\end{center}
\noindent where $d_i$ is the number of counts in channel $i$ of the experimental spectrum, $f_j$ is the number of events that feed level $j$ in the daughter nucleus, and $R_{ij}$ is the response function of the detector that represents the probability that feeding to the level $j$ gives a count in channel $i$ of the spectrum.
The sum of all contaminants in channel $i$ is represented by $c_i$.
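One standard way to invert such a relation is iterative expectation-maximization unfolding; the following sketch is illustrative only and is not the analysis code actually used:

```python
def em_unfold(d, R, c, n_iter=100):
    """Illustrative EM unfolding of d_i = sum_j R_ij f_j + c_i.
    R[i][j] is the response matrix, d the measured spectrum,
    c the contaminants; returns the feeding distribution f_j."""
    nj = len(R[0])
    f = [1.0] * nj  # flat starting feeding distribution
    for _ in range(n_iter):
        # fold the current feeding estimate and add the contaminants
        m = [sum(R[i][j] * f[j] for j in range(nj)) + c[i]
             for i in range(len(d))]
        # multiplicative EM update, weighted by the response
        f = [f[j] * sum(R[i][j] * d[i] / m[i] for i in range(len(d)))
             / max(sum(R[i][j] for i in range(len(d))), 1e-300)
             for j in range(nj)]
    return f
```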
In order to perform this deconvolution and obtain the feeding distribution, a method was developed by the group of Valencia \cite{TAS_algorithms} which has been successfully applied to a large number of cases. An essential ingredient of this process is the determination of the response function, which is unique to each detector and to each decay scheme, and has to be calculated by means of MC codes. For this reason, a mandatory step in the characterization of the detector is to validate the MC simulation. This is achieved by comparing simulations with measurements of calibration sources to verify that the best possible agreement is reached.
The package Geant4 \cite{GEANT4} has been used for this purpose, and the geometry of DTAS has been included in great detail, as shown in Fig. \ref{MC_setup_geometry}. In addition the relevant physics processes involved in particle detection have been incorporated. In particular, the non-proportional light yield in NaI(Tl) has been taken into account according to the parametrization and the procedure detailed in \cite{TAS_MC}.
In the next subsection we present the results of such a comparison.
\begin{figure}[!hbt]
\begin{center}
\includegraphics[width=0.5 \textwidth]{DTAS18.eps}
\includegraphics[width=0.3 \textwidth]{inner_setup3.eps}
\caption{Geometry of the set-up implemented in the MC. The DTAS detector in the eighteen-module configuration (top) is represented. Inside we considered the beam pipe, a plastic $\beta$ detector and a HPGe detector (bottom).}
\label{MC_setup_geometry}
\end{center}
\end{figure}
The efficiency of the detector for $\gamma$-rays and $\beta$ particles can be
obtained from the
MC simulations once the geometry and the physics have been validated. In Fig. \ref{efficiency} we
show the calculated efficiencies
for the complete set-up used in the commissioning with radioactive beams performed at IGISOL
with the
eighteen-module configuration \cite{NIMB_DTAS}. The beam pipe, a 3~mm thick plastic scintillator $\beta$ detector with its PMT, and a HPGe detector,
all inserted in DTAS, are included in the geometry.
The efficiency shown is calculated without applying an energy threshold to the individual modules
before reconstructing the sum energy.
The total efficiency is above 80$\%$ over the whole range, while the peak efficiency at 1~MeV is 66$\%$. When we consider the individual modules in the array, the peak $\gamma$ efficiency at 1~MeV is 50$\%$.
\begin{figure}[!hbt]
\begin{center}
\includegraphics[width=0.5 \textwidth]{Efficiency_NIM_2018_VLC.eps}
\caption{Simulated efficiencies as a function of the energy of the DTAS detector in the eighteen-module configuration. Total $\gamma$ efficiency (solid line) and peak $\gamma$ efficiency (dashed-dotted line) are shown. The peak $\gamma$ efficiency for the array of individual modules is shown as a dotted line. The efficiency for $\beta$ particles as a function of the end-point energy is shown as a dashed line.
Ancillary detectors are included in the simulation (see text for details).}
\label{efficiency}
\end{center}
\end{figure}
The total and peak efficiencies are limited by the solid angle covered and the amount of both sensitive and dead material.
The solid angle covered can be increased and the dead material decreased if we remove the HPGe detector and close the gap between the modules that it occupies. In this case the efficiencies would increase to $\varepsilon_{\gamma}^{T} = 94$\% and $\varepsilon_{\gamma}^{P} = 69$\%, respectively, at 1~MeV.
The efficiency of the present eighteen-module configuration is similar to that of the Lucrecia spectrometer at ISOLDE \cite{LucreciaTAS} and the LBNL spectrometer at GSI \cite{NIMB_TAS_GSI}, both single crystal spectrometers.
When compared with the recently built MTAS spectrometer \cite{MTAS_effic} our peak efficiency is 9\% smaller
at 1~MeV and 25\% smaller at 3~MeV.
This difference is a consequence of the much larger NaI(Tl) volume in MTAS, which is a factor of
2.5 larger than DTAS. It should be noted that the smaller efficiency of DTAS does not affect its performance
as a total absorption spectrometer. A nice example is provided by the decay of $^{137}$I which has been
measured by DTAS \cite{NIMB_DTAS} (see also subsection 4.2) and MTAS \cite{MTAS_137I}. Figure 4
in \cite{MTAS_137I} compares the spectrum measured with the full MTAS with the spectrum measured
with the sub-detector consisting of the 7 most central modules. This sub-detector is equivalent to
DTAS in volume (about 100 litres) and efficiency. As can be observed the differences are minimal
except for the contamination induced by the interaction of delayed neutrons emitted in the decay, which
is much larger in the full MTAS. The reason why the much larger volume brings a seemingly small
effect is to be found in the complex de-excitation pattern, with relatively large cascade multiplicities,
and the effect of $\beta$ penetration which tends to wash out the increase in single $\gamma$-ray
peak efficiencies. Such a consideration was taken into account during the design of DTAS
when choosing the detector size \cite{DTAS_design}.
\subsection{Reproduction of the calibration sources}
In this subsection we compare the results of the MC simulations with measurements for the calibration
sources in Table \ref{sources}. In the comparison, the different sources of contamination in the
measured spectra are taken into account. The environmental background is subtracted from
the measured spectra. However we choose to show explicitly the summing-pileup contribution, calculated
as described in the previous section, adding it to the MC simulated spectra for the comparison.
We use the DECAYGEN event generator \cite{TAS_decaygen} to generate the primary particles in the MC simulations. As can be observed in Fig. \ref{MC-exp}, we obtain an excellent reproduction of the experimental spectra.
\begin{figure*}[h]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.5 \textwidth]{24Na_NIM_2.eps} &
\includegraphics[width=0.5 \textwidth]{137Cs_NIM_2.eps} \\
\includegraphics[width=0.5 \textwidth]{60Co_NIM_2.eps} &
\includegraphics[width=0.5 \textwidth]{152Eu133Ba_NIM_2.eps}
\end{tabular}
\caption{Experimental
spectra
of calibration sources after subtracting the environmental background (solid grey) compared with the MC simulations (solid black) taking into account the summing-pileup contamination (dashed blue).}
\label{MC-exp}
\end{center}
\end{figure*}
One of the key features of DTAS is its segmentation. This allows one to obtain much richer information, provided by the
energy spectra of the individual modules and more importantly by the sum energy spectra gated with different conditions
on the number of modules that fired ($M_m$). These additional spectra are sensitive to the details of the de-excitation
cascades (energies and multiplicities $M_{\gamma}$).
In the case of laboratory sources with known decay schemes the multiplicity information provides a more stringent
test of the accuracy of the MC simulation, both of the geometry and the physical processes included.
As can be seen in Fig. \ref{MC_multiplicities} for the $^{22}$Na source an excellent agreement is obtained
proving that we have the MC simulations under good control.
It should be noted that all calculated spectra shown in Fig. \ref{MC_multiplicities} are obtained simultaneously
using the same energy calibration, a common normalization factor and the same summing-pileup calculation.
\begin{figure*}[h]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.5 \textwidth]{22Na_NIM_2.eps} &
\includegraphics[width=0.5 \textwidth]{22Na_1_NIM_2.eps} \\
\includegraphics[width=0.5 \textwidth]{22Na_2_NIM_2.eps} &
\includegraphics[width=0.5 \textwidth]{22Na_3_NIM_2.eps} \\
\includegraphics[width=0.5 \textwidth]{22Na_4_NIM_2.eps} &
\includegraphics[width=0.5 \textwidth]{22Na_5_NIM_2.eps}
\end{tabular}
\caption{$^{22}$Na experimental
spectra
after subtracting the environmental background (solid grey) compared with the MC simulations (solid black) taking into account the summing-pileup contamination (dashed blue).
The sum energy spectrum without conditions and with a condition on
module multiplicity $M_m$ from 1 to 5 is shown.}
\label{MC_multiplicities}
\end{center}
\end{figure*}
\subsection{Reproduction of neutron interactions}
The emission of $\beta$-delayed neutrons in the decay of exotic neutron-rich nuclei is a source of background for total absorption spectrometers like DTAS.
Neutrons interact with the detector materials producing $\gamma$-rays, either in inelastic reactions or after neutron capture. When detected, these $\gamma$-rays are indistinguishable from $\beta$-delayed $\gamma$-rays.
The ability to reproduce this type of background correctly with MC simulations is fundamental for the analysis of
TAGS spectra from $\beta$-delayed neutron emitters. The issues related to the simulation of neutron interactions
in inorganic scintillators, and in particular the use of the Geant4 simulation tool, have been discussed before
\cite{neutrons,DTAS_design}. One important item is the quality of the information in nuclear data bases concerning
reaction cross sections for all the materials encountered by the neutrons. In the present simulations we used the library ENDF-VII.0, which gave good results before.
The data from this library were converted into the G4NDL data format \cite{CIEMAT_neutrons}.
Another important item is the description of the $\gamma$-ray de-excitations
of excited states resulting from neutron interactions. We replace the standard capture cascade generator of Geant4,
which is rather schematic, with a generator that uses the statistical model to describe realistically the multiplicity
and energy distribution of the cascades. In the case of inelastic scattering we use the standard Geant4
PhotonEvaporation data base which relies on evaluated spectroscopic data \cite{ENSDF}.
We have studied the decay of two $\beta$-delayed neutron emitters measured at IGISOL:
$^{137}$I and $^{95}$Rb. Preliminary results for $^{137}$I were presented in \cite{NIMB_DTAS}. Relevant decay information for these nuclei is well established in the data bases. The values from ENSDF \cite{ENSDF} of $Q_{\beta}$, neutron separation energy in the daughter $S_n$ and neutron emission probability $P_n$ are given in Table~\ref{bdn}. We recently performed an accurate $P_n$ measurement for both isotopes using the BELEN neutron counter \cite{NIM_BELEN} which gave 9.08(14)$\%$ for $^{95}$Rb and 7.76(14)$\%$ for $^{137}$I, close to the values in the table.
\begin{table}[h]
\begin{center}
\begin{tabular}{cccc}
Isotope & $Q_{\beta}$ [MeV] & $S_n$ [MeV] & $P_n$ [\%]\\ \hline
$^{137}$I & 6.027(9) & 4.02556(10) & 7.14(23) \\
$^{95}$Rb & 9.228(21) & 4.348(7) & 8.7(3) \\
\hline
\end{tabular}
\caption{Properties of $\beta$-delayed neutron emitters used to test MC simulations.}
\label{bdn}
\end{center}
\end{table}
We compare the sum energy spectra gated with $\beta$ particles detected in a thin plastic scintillator with the simulation. These spectra are free from environmental background but are affected by the end-point energy dependence of the $\beta$ efficiency, which suppresses the $\beta$ decays to states close to $Q_{\beta}$. In order to take this effect properly into account
an event generator was implemented \cite{vTAS_PRC} that reproduces the known sequence of
$\beta$-neutron-$\gamma$ emission in the decay. It also reproduces the measured neutron spectra obtained from the ENDF/B VII.1 database, based on the work in \cite{Brady_thesis}.
The generator requires the reconstruction of the $\beta$ intensity distribution from the measured neutron spectra using the information on neutron branchings to the excited levels in the final nucleus, $I_n$. The generator uses the associated $\gamma$ branchings, $I_{\gamma}$, as well. Both $I_n$ and $I_{\gamma}$ data were retrieved from the ENSDF database \cite{ENSDF}.
In the simulations a time window for accumulation of the energy deposited after multiple neutron interactions was applied. This window takes into account the existence of a delay between neutron-induced $\gamma$-rays
and the prompt $\beta$ signal. A window of 500~ns was employed in accordance with the experimental coincidence time window between DTAS and the $\beta$ plastic detector. This window ensures the collection of all the energy deposited. Figure \ref{neutrons_MC} shows the comparison of measured and simulated spectra for the two $\beta$-delayed neutron emitters studied. As can be observed
the reproduction of the gross structure above 6.8 MeV, mainly due to neutron capture in the
iodine ($^{127}$I) in the crystal, is very good. In the case of $^{137}$I the shape
of the structure depends on the $\beta$-delayed neutron energy spectrum.
It should be noted, however, that in the case of $^{95}$Rb this structure includes partial summing of capture $\gamma$-rays with $\gamma$-rays emitted from excited states populated after neutron emission.
The strongest of these $\gamma$-rays is also visible as a peak in the simulated
spectra at 837~keV superimposed on the $\gamma$-ray background from neutron inelastic collisions.
We found that the shape of the gross structure is quite sensitive to the de-excitation pattern after neutron emission. For $^{95}$Rb, the use of the evaluated decay scheme available in ENSDF produced a wrong shape for the spectrum. However, when we used the de-excitation scheme in $^{94}$Sr measured by Kratz et al. \cite{Kratz_95Rb}, a good reproduction was obtained, as can be seen in Fig. \ref{neutrons_MC}. Thus, except when the neutron emission proceeds entirely to the ground state, the effect of the neutron energy distribution on the shape of the capture peak is obscured by the final-nucleus $\gamma$ spectra. Since the latter are often unknown or poorly known, it seems difficult to obtain reliable information about the shape of the $\beta$-delayed neutron spectrum from TAGS spectra in the general case.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5 \textwidth]{137I_NIM_2018_500.eps}
\includegraphics[width=0.5 \textwidth]{95Rb_NIM_2018_500.eps}
\caption{
Simulation of the $\beta$-delayed neutron decay branch for
neutron emitters measured in the commissioning of DTAS at IGISOL: $^{137}$I (top) and $^{95}$Rb (bottom). Experimental $\beta$-gated spectra (in grey)
are compared to simulations (in black).}
\label{neutrons_MC}
\end{center}
\end{figure}
Ideally the normalization of the $\beta$-delayed neutron contribution to the total spectrum should be determined by the $P_n$ value. From the normalization of the simulated and measured counts in the capture bump at 6.8~MeV we have obtained $P_n$ values of 6.8$\%$ and 10.9$\%$ for $^{137}$I and $^{95}$Rb respectively, after taking into account all the contaminants (summing-pileup and activity of the descendants). Note that in comparison with the numbers given in Table \ref{bdn}, we found a 5$\%$ smaller value for $^{137}$I, while for $^{95}$Rb a 25$\%$ larger value is obtained. Compared to our recently measured values \cite{NIM_BELEN} the differences are -12$\%$ and +20$\%$ respectively. We studied the dependence of the extracted $P_n$ on the length of the time window applied to the experiment (coincidence gate) and to the MC simulation. We found that in the range 300-500~ns the results were stable within 3$\%$. It should be noted that the value we obtain is 14$\%$ lower than the value of 7.9(4)$\%$ obtained by a similar procedure with MTAS \cite{MTAS_137I}. In view of this discrepancy and the fact that for $^{95}$Rb we also obtain a large difference but of opposite sign, we conclude that further investigations are needed before deciding on the reliability of $P_n$ extraction from TAGS spectra \cite{MTAS_137I}.
In any case, the key point for us is the reproducibility of the shape of the spectra of the $\beta$-delayed neutron contamination that affects the extraction of $I_{\beta}(E_x)$ from the analysis of TAGS spectra. A proper determination of this background component is particularly relevant when extracting an accurate value for the $\beta$ intensity above the neutron separation energy that proceeds by $\gamma$ emission, $P_{\gamma}$ \cite{vTAS_PRC}. The investigation of $\gamma$/neutron competition from neutron unbound states is a topic of current active research. The importance of the correction of the background due to $\beta$-delayed neutrons for the determination of $P_{\gamma}$ using TAGS spectrometers made of NaI(Tl) cannot be overlooked. This material has a large capture cross-section resulting in large $\beta$-delayed neutron detection efficiencies, of the order of 40$\%$.
The sensitivity of the MC simulation of the $\beta$-n decay contamination to the knowledge of the decay (neutron spectrum and $\gamma$-cascades
after neutron emission) represents a challenge for very neutron-rich nuclei in the general case where this information
is poorly known or not known at all. Given that $\gamma$-rays produced by neutron interactions are delayed
with respect to $\beta$-particle emission one can use timing information to discriminate between these signals \cite{DTAS_design}. We have tested this idea for $^{137}$I and $^{95}$Rb with reasonable results, as will be shown in a forthcoming publication. However, this type of time discrimination cannot be applied to the $\gamma$-ray de-excitation in the final nucleus after neutron emission, since these $\gamma$-rays are prompt with respect to the $\beta$-particles. The best option here seems to be to use the spectrometer itself to obtain information about this type of contamination, as was suggested in \cite{MTAS_effic}. The modularity of DTAS helps here, since there will be a certain degree of spatial separation between $\gamma$-rays coming from the final nucleus and those coming from neutron interactions. This can be exploited to tag $\beta$-delayed neutron events by setting a coincidence gate on the neutron capture ``peak'' observed, for example, in one half of the spectrometer and looking at the spectra in the other half of the spectrometer. Work to demonstrate the feasibility of this approach is in progress.
\section{Conclusions}
The characterization of the DTAS detector has been carried out. A gain stabilization system based on a light pulse generator has been tested successfully. Non-proportionality effects in the light yield of a NaI(Tl) multi-crystal spectrometer were taken into account to properly reconstruct the sum of the total energy deposited in the spectrometer.
The summing-pileup distortion of the spectrum was successfully computed using a revised version of a previously developed method; for high-rate measurements, a further improvement of this method was introduced with the help of MC-simulated data.
A careful Geant4 MC simulation of the DTAS detector
response to $\beta$-decays
has been performed.
The quality of the response function, needed for any TAGS analysis, has been validated by the excellent agreement obtained in comparisons with measurements of calibration sources.
This includes, in particular, good agreement of multiplicity-gated spectra.
Good agreement between the measured and simulated shape of the $\beta$-delayed neutron background was also obtained
for two well-known neutron emitters.
\section{Acknowledgements}
This work has been supported by the Spanish Ministerio de Econom\'ia y Competitividad under grants FPA2011-24553, AIC-A-2011-0696, FPA2014-52823-C2-1-P and the program Severo Ochoa (SEV-2014-0398), by the European Commission under the FP7/EURATOM contract 605203, and by the Spanish Ministerio de Educaci\'on Cultura y Deporte under the FPU12/01527 grant. The work was also supported by the UK Science and Technology Facilities Council (STFC) grant ST/P005314/1. E. Ganio\u{g}lu was supported by the Istanbul University Scientific Research Project Unit under FYO-2017-24144 project.
\section{INTRODUCTION}
\label{sec:intro}
In the year 1600 the Luminous Blue Variable (LBV) P~Cygni (P~Cyg) experienced a major eruption \citep{deGroot1969,deGroot1988}, also known as a ``Supernova Impostor'' (e.g., \citealt{DavidsonHumphreys2012}).
Since this historical event precedes the invention of the telescope, it was recorded by naked-eye observers:
the eruption (referred to at the time as a nova) brightened the star from below naked-eye detection to visible magnitude $\simeq 3$.
Later in the seventeenth century the star underwent a series of four more eruptions, with decreasing time intervals between them.
P~Cyg was traditionally considered to be a single star.
Even though P~Cyg is the closest LBV to us (at a distance of $1.7 \pm 0.1 ~\rm{kpc}$ ; \citealt{Najarroetal1997}), no companion has ever been observed.
After the progenitor was recognized as an LBV, its eruptions were attributed to single-star processes (e.g., \citealt{HumphreysDavidson1994,LamersdeGroot1992}).
These kinds of pre-supernova eruptions are thought to occur in the final evolutionary stages of a star.
The best investigated example of a very massive star that had gone through such eruptions and survived is $\eta$~Car \citep{HumphreysMartin2012}.
But the latter is at least $90 ~\rm{M_{\sun}}$, and probably twice this value \citep{KashiSoker2016Massive}, while P~Cyg is only $\approx 25 ~\rm{M_{\sun}}$
(though we note that there are higher estimates for the mass, such as the one of
\cite{ElEidHartmann1993} who suggested that stellar evolution tracks support a $50 ~\rm{M_{\sun}}$ star, and \citealt{Lamersetal1983a, Lamersetal1983b}
who favored a $60$--$80 ~\rm{M_{\sun}}$ star).
The peculiar morphology of the nebula which was formed by the eruption of P~Cyg
\citep{Notaetal1995} led \cite{IsraeliandeGroot1999} to suggest that a different physical process is responsible for the eruptions of $\eta$~Car and P~Cyg,
though the details of such a process were not investigated.
\cite{Kashietal2010} showed that the eruption of P~Cyg lies on a
strip in the total energy vs. timescale diagram (ETD) together with other intermediate
luminosity optical transients, including the two nineteenth century eruptions of $\eta$~Car.
\cite{KashiSoker2010b} suggested that the same physical mechanism that is applicable to the giant eruptions of LBV stars applies to the other transients in the ETD, including P~Cyg: accretion onto a main-sequence (MS) companion star and release of gravitational energy.
\cite{Kashi2010} explained the series of eruptions of P~Cyg by mass transfer to a B-type binary companion in an eccentric orbit.
He assumed that the luminosity peaks occurred close to periastron passages, as
at these times mass was accreted by the companion and liberated gravitational
energy, part of which went to an increase in luminosity.
\cite{Kashi2010} suggested that mass transfer of $\approx 0.1 ~\rm{M_{\sun}}$ to a B-type binary
companion of $\approx 3$--$6 ~\rm{M_{\sun}}$ can account for the energy of the
eruption, and for the continuously decreasing time interval between the peaks in
the visual light curve of the eruption.
Such a companion was predicted to have an orbital period of $\approx 7$~years, and it was calculated that its Doppler shifts should be detectable with high resolution spectroscopic observations.
An early attempt to find a periodicity in the observations of P~Cyg was performed by \cite{Israelianetal1996}, who suggested a period of $206 \pm 11$ days.
This periodicity was found in spectra of \ion{Fe}{3} lines.
\cite{Richardsonetal2011} performed a spectroscopic analysis over a period of 15 years but found no periodic radial velocity variation. As they state, the radial velocity variation in the H$\alpha$ line they observed cannot be caused by the companion as the line is formed in a volume much larger than the semi-major axis of the companion predicted by \citet{Kashi2010}.
\cite{Richardsonetal2013} count the non-detection of \cite{Richardsonetal2011} as an argument disfavoring the existence of the companion, but this is inconsistent with the statement of \cite{Richardsonetal2011} regarding the large H$\alpha$ volume.
Recently, \cite{Kochiashvilietal2018} used unpublished observations of P~Cyg obtained by Kharadze and Magalashvili at the Abastumani Observatory to deduce a number of quasi-periods:
($1480 \pm 31$) days; ($736 \pm 27$) days; ($1123 \pm 36$) days; $\sim 579$ days and $\sim128.7$ days.
The reason for the quasi-periodicity was not discussed in their paper.
In this paper we use photometric observations from the last two-thirds of a century in an attempt to recover a periodic signal.
Our premise is that if such a signal exists it would be buried in the data, and if a companion star is present in an orbit of a few years, it is likely
obscured by the high density LBV wind for most of the orbit, and visible only for a short time.
\section{Observations and Analysis}
\label{sec:observations}
We use photometric observations taken from the American Association of Variable Star Observers (AAVSO, \citealt{Kafka2018}).
In addition we use two sets of observations described in \cite{Kochiashvilietal2018}.
Those observations were obtained by Nino Magalashvili and Eugene Kharadze using the 33 cm and 48 cm reflectors of the Abastumani Astrophysical observatory during 1951--1983. They used 29 Cyg and 36 Cyg as comparison and check stars, and obtained two sets of observations corresponding to these two references.
The accuracy of the AAVSO data is $\approx 0.01$ mag in the V band.
This translates to a precision of about $1\%$ in the flux measurements.
However, on most nights there are multiple observations, taken by different observers. Averaging these observations increases the precision to $0.1$--$0.5 \%$.
Assuming P~Cyg is a binary system with two stars of the masses quoted above,
we take for the LBV $T_1 = 18\,200 ~\rm{K}$ and $L_1=5.6 \times 10^5 ~\rm{L_{\sun}}$ \citep{Najarroetal1997}, and for the companion MS star we take the values from \citet{Kashi2010} most favorable for detection, $T_2 = 19\,000 ~\rm{K}$ and $L_2= 1\,500 ~\rm{L_{\sun}}$.
Calculating black-body emission, the expected ratio in the intensity in the visible is $\approx 0.3 \%$.
This gives an estimate of the magnitude variation that may be detected in V-band observations.
We therefore conclude that only for optimistic parameters is the AAVSO data of approximately the precision required for our analysis.
We nevertheless proceed with the analysis with the hope of detecting a binary signal.
The other observations we use are of much higher quality and can therefore be used with no concern.
We first have to join the P~Cyg photometric data from the three sources into one coherent dataset.
To do so, we average same-night observations to obtain a single observation per-night, for each source.
We zero-pad the signal at times where no observations have been taken;
namely, for nights with no data we take $\Delta V =0$.
The next step is to apply a median filter to each signal (per source).
We then re-normalize the data using the following technique. We identify similar measurement points for the three sources, and use them to normalize all data.
We do that by re-quantifying the data to generate a normalized unified dataset that has one point for each night.
All our following analysis is done on this unified, renormalized dataset from our three sources. We hereafter refer to it as the unified signal.
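The unification steps above (nightly averaging per source, zero-padding empty nights, median filtering, and merging with overlap normalization) can be sketched as follows; the function name, the data layout, and the median-filter kernel size are our own illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.signal import medfilt

def unify_sources(sources, n_nights, kernel=5):
    """Merge per-source nightly photometry into one unified signal.

    sources: list of (nights, delta_v) pairs, where `nights` holds integer
    day indices from the start of the campaign.  Nights with several
    observations are averaged; nights with none are zero-padded.
    """
    unified = np.zeros(n_nights)
    counts = np.zeros(n_nights)
    for nights, dv in sources:
        # average same-night observations within this source
        per_night = np.zeros(n_nights)
        per_count = np.zeros(n_nights)
        np.add.at(per_night, nights, dv)
        np.add.at(per_count, nights, 1)
        have = per_count > 0
        per_night[have] /= per_count[have]
        # median filter this source's signal (zero-padded nights included)
        per_night = medfilt(per_night, kernel_size=kernel)
        unified += per_night
        counts += have
    # where several sources overlap, average them to renormalize
    unified[counts > 0] /= counts[counts > 0]
    return unified
```

The zero-padding keeps a uniform 1/day sample rate, which is what allows a plain FFT to be applied directly in the next step.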
Next, we analyze the frequency spectrum using two methods:
\begin{enumerate}
\item Performing a conventional Fourier transformation using the Fast Fourier Transform (FFT) algorithm.
\item Calculating the power spectrum density (PSD), defined as the spectral power of the auto-correlation of the unified signal.
\end{enumerate}
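A minimal sketch of the two methods, applied to a synthetic daily-sampled signal containing a 1~year sinusoid (the amplitudes, noise level, and segment length here are illustrative assumptions, not the paper's actual data):

```python
import numpy as np
from scipy.signal import welch

# ~66 years of daily sampling with a synthetic 1-year sinusoid plus white noise
n_days = 24000
t = np.arange(n_days)
rng = np.random.default_rng(0)
signal = 0.1 * np.sin(2 * np.pi * t / 365.25) + 0.02 * rng.standard_normal(n_days)

# Method 1: conventional FFT; the sample rate is 1/day, so freqs are in 1/day
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n_days, d=1.0)
peak_period = 1.0 / freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin

# Method 2: Welch estimate of the power spectral density
f_w, psd = welch(signal, fs=1.0, nperseg=4096)
peak_period_welch = 1.0 / f_w[np.argmax(psd[1:]) + 1]
```

Both estimators recover a period near 365 days. Note that the frequency-bin width ($1/n_\mathrm{days}$ for the FFT, $1/n_\mathrm{perseg}$ for Welch) translates into a period uncertainty that grows quadratically with the period, which is why long periods of several years are intrinsically harder to pin down than short ones.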
In order to validate our methods, we add a synthetic signal with a period of 1 year and intensity equal to the variance of the unified signal, and recover it using each of the two methods.
In the upper panel of Fig. \ref{fig:m_vs_t} we show the unified signal.
The time axis is in days starting June 6 1951 (JD~2433804) and contains about 66 years (24292 days) of measurements.
The vertical axis is V-magnitude relative to the data as described above (unified signal).
The lower panel shows the synthetic signal, defined as the unified signal with the added 1~year period signal.
At times where no data is available for the unified signal we did not add the 1~year period signal to our analysis.
The inset zooms on part of the signal to illustrate the way the synthetic signal was constructed.
From the knowledge of the synthetic period we can reverse the analysis process to get a perspective of what we look for in our analysis and how it should be seen.
We use it to ensure the correctness of the analysis and as a proof of our methods.
\begin{figure*}
\includegraphics[width=0.95\textwidth]{m_vs_t.eps}
\includegraphics[width=0.95\textwidth]{m_vs_t_synth.eps}
\caption{Upper panel: normalized magnitude per day from all three data sets. We use data set normalized to 29~Cyg and 36~Cyg (July 1951 -- September 1983) and AAVSO (August 1972 -- December 2017). We normalize the data using the overlap time range, merge it to obtain unified data and create a data set with continuous sample rate of 1 day.\\
Lower panel: We add a synthetic signal that simulates measurements similar to the original signal, with 1~year period. In the figure we add the full values of the 1~year sinusoidal signal for clarity. This signal is used to validate our analysis.
}
\label{fig:m_vs_t}
\end{figure*}
In Fig. \ref{fig:spec} we see the spectrum of the unified signal obtained from our FFT analysis.
We find several periods in the signal: $P_1=1735 \pm 115$ days, $P_2=1428 \pm 79$ days, $P_3=1619 \pm 101$ days, $P_4=398 \pm 7$ days and more, in descending order of strength.
We notice that, despite the higher resolution at low frequencies, we find there distinct peaks that stand out from their surroundings.
We can clearly see that the synthetic signal produced both the peaks of the unified signal ($27.5$ db at $1735$ days), and the 1~year period signal with power of $37.5$ db. The signal is very strong.
We calculated the statistical properties of the FFT intensity of the synthetic signal and found the mean intensity (the absolute value of the real and complex parts of the FFT) to be 4.2 and the standard deviation to be 2.6.
In this linear scale the intensity of the 1~year peak is 75.6.
This very distinct synthetic signal gives perspective to the other periodicities we find.
\begin{figure*}
\includegraphics[width=1\textwidth]{spec.eps}
\includegraphics[width=1\textwidth]{spec_synth.eps}
\caption{Upper panel: spectrum where the x axis is in days and the y axis in db (i.e., $20\log_{10}\left| m(\nu) \right|$). We regularized our data to a sample rate of 1/day. As a result we can see periods from 2 days up to $\simeq 33$ years. As expected we see a lot of variability in the high-frequency (short-period) region. From $P_1=1735$ days onward to lower frequencies we see a continuous decline in the signal.\\
Lower panel: the influence of a 365 day periodic signal on the analysis. The strong peak, representing the simulated synthetic sinusoidal signal, reaches $\sim37$ db, while the maximum of the real signal is seen at $P_1=1735$ days with about $27.5$ db, followed by peaks weaker by $1$--$2$~db.
}
\label{fig:spec}
\end{figure*}
To get the spectral density estimation we use Welch's method.
The result is presented in Fig. \ref{fig:welch}, as the Power Spectrum Density (PSD).
Here we can see the real signal peak at $(6.7 \pm 0.23) \times10^{-9}$~Hz and the synthetic 1~year signal as a peak at $(3.1 \pm 0.23) \times 10^{-8}$~Hz, as expected.
\begin{figure*}
\includegraphics[width=1\textwidth]{welch.eps}
\includegraphics[width=1\textwidth]{welch_synth.eps}
\caption{Upper panel: the Power Spectrum Density (PSD). The most distinct peak, $62.7$ db, is found at frequency $(6.7 \pm 0.23) \times10^{-9}$~Hz (corresponding to $\sim 4.7$ years). \\
Lower panel: our simulated signal (alongside the real signal) with a peak of $74.7$ db at $(3.1 \pm 0.23) \times 10^{-8}$~Hz ($\simeq 1$ year).
}
\label{fig:welch}
\end{figure*}
\section{Results}
\label{sec:resultss}
From the FFT analysis (Fig. \ref{fig:spec}), supported by the power spectrum (Fig. \ref{fig:welch}) we find a few peaks.
The most evident period in the signal is $P_1$ at $1735 \pm 115$~days with $27.5$~db. There is a secondary peak $P_2$ with a power of $26.1$ db and peaks $P_3$ to $P_5$ with powers of the order of $\simeq25$ db. We also notice that even though there is naturally high temporal resolution at high frequencies (the resolution is $\propto \nu$, up to the frequency corresponding to the sampling of 1 day), there is a distinct gap between the strong peaks in the high-frequency region.
The peaks at the low frequencies are considerably stronger than the peaks at high frequencies.
From point $P_1$ onward to lower frequencies the signal declines.
The synthetic signal gives a peak of $37.5$~db at $362 \pm 5$~days, as expected.
The peak of the synthetic signal has its origin in a pure sinusoidal function with an amplitude of $\Delta V \simeq 0.1$ mag. Thus, its large power is a direct result of this large amplitude.
As expected, the power we obtain from our data is much weaker, since it relies on much smaller amplitudes.
Examining the PSD, we also find that the most distinct peak of $62.7$ db is at $\approx 4.7 \pm 0.3$ years.
As both methods provided the same periods, we conclude that the periodicities exist in the data.
Using Welch's method we obtain a flat spectrum, most of which is contained below 45~db. This indicates that the noise in the signal is white, namely frequency independent. The strong peak we obtained is almost 20~db above the white-noise level.
To show the significance of the peak quantitatively we go back to the FFT of the unified signal.
We calculated the statistical properties of the FFT intensity and found the mean intensity (the absolute value of the real and complex parts of the FFT) to be 3.8 and the standard deviation to be 4.1 (note that the zero-padding has no effect on these statistical properties, only on the frequency resolution).
The obtained strongest peak gives a normalized intensity of 23.7. Even the fifth peak has an intensity of 18.8.
We therefore conclude that the peaks are detected with high certainty.
\section{SUMMARY and Discussion}
\label{sec:summary}
Explaining the eruption of P~Cyg by mass transfer further supports
the conjecture that all major LBV eruptions are triggered by interaction
of an unstable LBV with a stellar companion.
The model of \cite{Kashi2010} predicted an orbital period of about 7 years while the longest period we found here was 4.7 years.
This gives rise to a few questions.
(a) \textit{Does the peak at 4.7 years indicate the existence of a binary companion?} There is no conclusive answer, but the chances are quite good. At first glance it may seem that in order for the periodicity in the signal to be related to the light from a binary star the P~Cyg system needs to be almost edge-on, so that the companion will be obscured for most of the orbit. But this is not the case. The radius of the LBV is large, and we can add to it a wide region of dense wind that is optically thick in the visible range up to a considerable distance. It is therefore quite likely that a companion will shine for part of the orbit when it is on the observer side, and be obscured for the remainder of the orbit.
(b) \textit{If not a binary, what else can the period indicate?} There are a few other possibilities: internal variation of the star, magnetic periodicity, unknown effects related to the LBV recovery from its eruption,
instabilities in the wind, and more.
(c) \textit{Is an orbit of 4.7 years compatible with the predictions of \cite{Kashi2010}?}
It is possible, but not probable, as we now explain.
The prediction of \cite{Kashi2010} that a companion exists with an orbital period of $\simeq 7$ years comes from the assumption that mass accretion and mass loss from the LBV ended after its series of eruptions during 1654--1685. The period between the last two peaks was $\simeq 7$ years and no eruption has been observed since then.
A period of $4.7$ years is incompatible with the model of \cite{Kashi2010}, unless the orbit shrank from $\simeq7$ years in 1685 to 4.7 years in the twentieth century.
In order to examine whether an orbit of 4.7 years is compatible with the predictions of \cite{Kashi2010} we repeat their calculations and find that in order to have an orbital period reduced to 4.7 years at the end of the twentieth century, the LBV should have transferred $\simeq0.28 ~\rm{M_{\sun}}$ to the secondary. Since not all the mass lost from the LBV is accreted, the mass loss from the LBV over $\approx 350$ years would have to be $\approx 0.5 ~\rm{M_{\sun}}$, or on average the mass loss rate would have to be $\approx 1.4 \times 10^{-3} ~\rm{M_{\odot}}~\rm{yr^{-1}}$. While this is theoretically possible for an LBV, the accreted mass would have to emit its gravitational energy, at least partially as radiation that would have an observational signature.
However, no increase in luminosity as a result of these hypothetical eruptions has been recorded in the literature. Theoretically, an eruption might have occurred every one or a few orbital periods, but they all were weak and obscured by the ejecta from the strong eruptions observed in the seventeenth century.
Other mechanisms such as tidal interactions are weak and cannot account for the shortening of the period (see discussion in \citealt{Kashi2010}).
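As a back-of-the-envelope check (our own, not a calculation from \cite{Kashi2010}), Kepler's third law in solar units gives the orbital separations implied by the two candidate periods, assuming a total system mass of $\approx 30 ~\rm{M_{\sun}}$:

```python
# Kepler's third law in solar units: a[AU]**3 = M_tot[Msun] * P[yr]**2
def semi_major_axis_au(m_total_msun, period_yr):
    return (m_total_msun * period_yr ** 2) ** (1.0 / 3.0)

m_tot = 25.0 + 5.0  # ~25 Msun LBV plus an assumed ~5 Msun companion
a_7yr = semi_major_axis_au(m_tot, 7.0)    # separation for the predicted 7 yr orbit
a_47yr = semi_major_axis_au(m_tot, 4.7)   # separation for the 4.7 yr signal
```

Either separation (roughly 9--11~AU) is much larger than the stellar radius, so obscuration of a companion for most of the orbit must indeed come from the optically thick wind rather than from the star itself.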
In summary, it is theoretically possible that we detected a companion star as predicted by \cite{Kashi2010}, but it requires the rather strong assumption of obscuration of eruptions that succeeded the ones observed.
Even if the 4.7 years period does not represent the companion proposed by \cite{Kashi2010},
their model is still valid for three reasons.
First, it still explains almost perfectly the series of eruptions of P~Cyg in the seventeenth century.
Second, our present search for periodicity had a very small chance to find such a long period of $\approx 7$ years, since it only spanned 66 years. It is very clear from Fig. \ref{fig:spec} that a period of $7$ years is at the edge of the figure where the frequencies spread very thinly, so the observational duration is too short to allow finding such a long period.
Third, as discussed in section \ref{sec:observations}, the precision of the AAVSO observations could only detect the companion suggested by \cite{Kashi2010} for the most optimistic companion parameters and best observations precision.
Not detecting the companion suggested by \cite{Kashi2010} with the presently available data is therefore not a big surprise.
We also note that the ratio between the periods $\approx 4.7$ and $\simeq 7$ years is $2:3$, which might indicate some resonance with the companion suggested by \cite{Kashi2010}.
We hope that AAVSO observers and other campaigns will continue to document the photometric variation of P~Cyg with an increasing precision, such that this exercise can be repeated in a few decades with a longer duration of observations.
\vspace{0.5cm}
We thank an anonymous referee for helpful comments that helped improve the paper.
We thank Noam Soker for very helpful discussions.
We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research.
AK acknowledges support from the R\&D authority in Ariel University and the Rector of Ariel University.
NK acknowledges Shota Rustaveli National Science Foundation (SRNSF grant No 218070).
\section{Introduction}
The quantum theories of elementary particles are tested by comparing the results of scattering
experiments to the theoretical predictions of the corresponding quantum field theories. The latter
are organized by a sum over Feynman graphs, which encode the possible (virtual) particle histories.
These predictions then amount to the calculation of integrals, which (for $D$-dimensional euclidean
scalar theories) take the form
\begin{equation*}
I_{G}(q,m) := \int_{H_{1}(G,\mathbb{R}^{D})}\prod_{e\in E_{G}}\frac{1}{(k_{e}+q_{e})^{2}+m^{2}_{e}}\mathrm{d} k,
\end{equation*}
where the integration is over a collection of internal momenta $k\in \mathbb{R}^{D}$, one for each
independent cycle of the graph. These integrals are in general ill-defined. UV divergences present
themselves when some of the internal momenta become large and the integrand does not fall off
sufficiently rapidly. If some of the edges are massless ($m_{e}=0$), then the integral can also
develop IR-divergences for small momenta, due to non-integrable singularities in the integrand.
Since the work of Weinberg (\cite{Weinberg_1960}) it is known that, at least for generic values of
the external momenta, these singularities are completely described by a simple power-counting
argument for each subgraph $\gamma\subset G$. To make these singularities more apparent, it is often
convenient to work in a parametric representation, which expresses $I_{G}$ as the projective
integral
\begin{equation*}
I_{G}(q,m) = \Gamma(\omega_{G})\int_{P^{E_{G}}(\mathbb{R}^{+})}\left( \frac{\psi_{G}}{\Phi_{G}} \right)^{\omega_{G}}\frac{\Omega_{P^{E_{G}}}}{\psi_{G}^{D/2}}.
\end{equation*}
Here $P^{E_{G}}$ is the projective space over the set of edges $E_{G}$ and
$\omega_{G}=|E_{G}|-\frac{D}{2}\rk H_{1}(G)$. The polynomials $\psi_{G}$ and $\Phi_{G}$ are the
Symanzik polynomials of the graph. In this representation, the overall divergence of the graph has
been absorbed in the factor $\Gamma(\omega_{G})$. The UV and IR divergences due to subgraphs
$\gamma\subset G$ now appear because the vanishing loci $V(\psi_{G})$ and $V(\Phi_{G})$ of
$\psi_{G}$ and $\Phi_{G}$ can intersect the boundary of the integration domain
$P^{E_{G}}(\mathbb{R}^{+})\cong \Delta^{E_{G}}$ in the coordinate linear space corresponding to
$\gamma$.
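To fix ideas, we include a standard illustrative example (ours, not drawn from the discussion above): the one-loop bubble, with two edges joining two vertices, one loop, external momentum $q$ and massless propagators. Its Symanzik polynomials and the exponent $\omega_{G}$ are
\begin{equation*}
\psi_{G} = \alpha_{1}+\alpha_{2},\qquad \Phi_{G} = q^{2}\alpha_{1}\alpha_{2},\qquad \omega_{G} = |E_{G}|-\tfrac{D}{2}\rk H_{1}(G) = 2-\tfrac{D}{2}.
\end{equation*}
As $D\to 4$, the factor $\Gamma(\omega_{G})$ develops a pole, which is precisely the overall logarithmic UV divergence, while the vanishing locus $V(\Phi_{G})$ meets the boundary of $P^{E_{G}}(\mathbb{R}^{+})\cong \Delta^{1}$ exactly at the two vertices $\alpha_{i}=0$.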
For some applications, this representation is still somewhat inconvenient. Motivated by arithmetic
questions, Bloch, Esnault and Kreimer (\cite{Bloch_2006}) and later Brown (\cite{Brown_2017})
constructed new projective varieties by iteratively blowing-up certain coordinate linear spaces, such
that the strict transforms of $V(\psi_{G})$ and $V(\Phi_{G})$ did not meet the strict transform of
the integration domain. The possible singularities then simply manifest as poles along the
exceptional divisors.
For a completely different purpose, Binoth and Heinrich (\cite{Binoth_2000}) developed an iterative
strategy to decompose the integration domain $P^{E_{G}}(\mathbb{R}^{+})$ into cubical sectors, such
that, after an appropriate coordinate change, the integral over each sector takes the form
\begin{equation}\label{eq:1}
I_{s} = \int_{[0,1]^{|E_{G}|-1}}x^{\lambda}F(x,q,m)\mathrm{d} x,
\end{equation}
where $F(x,q,m)$ is a rational function which is regular on $[0,1]^{|E_{G}|-1}$. This allowed them
to completely automate the calculation of $I_{G}$ in dimensional regularization. Kaneko and Ueda
(\cite{Kaneko_2010}) later introduced a different, noniterative sector decomposition strategy, which
is based on polyhedral geometry.
The goal of this paper is to show that both constructions can be unified in the framework of toric
geometry. A complex variety is toric if it has an action of a complex torus with a dense orbit.
Intuitively, we can think of a toric variety as constructed from a torus by coherently adding tori
of lower dimension at infinity. Normal toric varieties are constructed in terms of a collection of
rational polyhedral cones $\Sigma$, called a fan. The corresponding toric variety $X_{\Sigma}$ is constructed by gluing affine pieces
$U_{\sigma_{1}},U_{\sigma_{2}}$ corresponding to cones $\sigma_{1},\sigma_{2}\in\Sigma$ along the open,
dense subvariety $U_{\tau}$ given by the intersection $\tau=\sigma_{1}\cap \sigma_{2}$. This gives a
fruitful interplay between polyhedral and toric geometry. In particular, we can attach a toric
variety $X_{P}$ to every lattice polytope $P\subset \mathbb{Z}^{n}$. Each toric variety $X_{\Sigma}$
also has a natural real, semi-algebraic subset $X_{\Sigma}(\mathbb{R}^{+})\subset
X_{\Sigma}(\mathbb{R})$ called its real-positive locus, which generalizes the simplex
$P^{n}(\mathbb{R}^{+})\cong \Delta^{n}$.
It will be instructive to work in greater generality. We consider general Mellin transforms of
the form
\begin{equation*}
\mathcal{M}(f_{i},s,c)=\int_{T_{N}(\mathbb{R^{+}})}t^{s}\prod_{i=1}^{k} f_{i}(t)^{-c_{i}}
\frac{\mathrm{d} t}{t},
\end{equation*}
where $(s,c)\in \mathbb{C}^{n}\times \mathbb{C}^{k}$ are analytic parameters and the $f_{i}$ are general
Laurent polynomials on the complex torus $T_{N}\cong (\mathbb{C}^{*})^{n}$, with suitable
conditions on their coefficients such that the powers $f_{i}^{-c_{i}}$ are well-defined. We prove
that there is always a smooth complete toric variety $X_{\Sigma}$ such that the closures $V(f_{i})$
do not meet the real-positive locus $X_{\Sigma}(\mathbb{R}^{+})$. We will show that the latter is
satisfied if and only if $\Sigma$ refines the normal fan of the Newton polytope of $f=f_{1}\cdots
f_{k}$. In the local coordinates corresponding to a maximal cone $\sigma\in\Sigma$, the integrand of
$\mathcal{M}$ can then be factorized as in Equation \eqref{eq:1}. This allows us to give a precise
characterization of the convergence domain $\Lambda(f_{i})\subset \mathbb{C}^{n}\times
\mathbb{C}^{k}$ of $\mathcal{M}$, generalizing results of Nilsson, Passare, Berkesch and Forsg\aa rd
(\cite{Nilsson_2011},\cite{Berkesch_2014}). For each cone, we can consider the cube
$I_{\sigma}=[0,1]^{n}\subset [0,\infty)^{n}\cong U_{\sigma}(\mathbb{R}^{+})$. These cubes cover
$X_{\Sigma}(\mathbb{R}^{+})$, intersect in a set of measure zero, and in local coordinates, the
integrand has the sector form \eqref{eq:1}. Hence each such toric variety $X_{\Sigma}$ gives a
sector decomposition strategy. In fact, this is just a reformulation of the strategy given by Kaneko
and Ueda.
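A one-dimensional example may be helpful here (it is ours, for illustration): take $n=k=1$ and $f_{1}(t)=1+t$, so that
\begin{equation*}
\mathcal{M} = \int_{0}^{\infty} t^{s}(1+t)^{-c}\,\frac{\mathrm{d} t}{t} = \frac{\Gamma(s)\Gamma(c-s)}{\Gamma(c)},
\end{equation*}
convergent for $0<\operatorname{Re} s<\operatorname{Re} c$. The Newton polytope of $f_{1}$ is the segment $[0,1]$, whose normal fan is the fan of $P^{1}$ with two maximal cones. Splitting the integration at $t=1$ and substituting $t\mapsto 1/t$ on the piece $t\geq 1$ yields the two corresponding sector integrals
\begin{equation*}
\int_{0}^{1} t^{s-1}(1+t)^{-c}\,\mathrm{d} t \quad\text{and}\quad \int_{0}^{1} t^{c-s-1}(1+t)^{-c}\,\mathrm{d} t,
\end{equation*}
each of the form \eqref{eq:1}, and the monomial prefactors $t^{s-1}$ and $t^{c-s-1}$ exhibit the two boundary conditions cutting out the convergence domain.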
Coming back to Feynman integrals, it is then natural to promote the integral $I_{G}$ to the Mellin
transform
\begin{equation*}
I_{G}(\lambda,D,q,m) = \Gamma(\omega_{G})\int_{P^{E_{G}}(\mathbb{R}^{+})}\prod_{i\in E_{G}}\frac{\alpha_{i}^{\lambda_{i}-1}}{\Gamma(\lambda_{i})}
\left( \frac{\psi_{G}}{\Phi_{G}} \right)^{\omega_{G}}\frac{\Omega_{P^{E_{G}}}}{\psi_{G}^{D/2}}.
\end{equation*}
This corresponds to analytic regularization, i.e.\ raising each propagator
$((k_{i}+q_{i})^{2}+m^{2}_{i})^{-1}$ to a power $\lambda_{i}\in \mathbb{C}$.
We will compute the Newton polytope $P_{G}=P(\psi_{G}\Phi_{G})$ for a Feynman graph with generic
euclidean kinematics. In this case, a facet presentation can be easily deduced from the
factorization formulas given by Brown (\cite{Brown_2017}).
As a corollary, we obtain that the
convergence of the Feynman integral is determined by power-counting. It is actually sufficient
to check convergence only for those subgraphs $\gamma\subset G$, such that both $\gamma$ and its quotient
$G/\gamma$ do not contain detachable subgraphs without kinematics. Following Smirnov
(\cite{Smirnov_2012}), we call such graphs s-irreducible.
The polytope $P_{G}$ turns out to be a generalized permutahedron in the sense of (\cite{Postnikov_2009},\cite{aguiar17:hopf}). To such
a polytope, we can canonically attach a smooth refinement of its normal fan, using a combinatorial
version of the wonderful model construction introduced by Feichtner and Kozlov
(\cite{Feichtner_2004}).
For Feynman graphs, this is an iterated blow-up of projective space along coordinate linear spaces
of s-irreducible subgraphs $\gamma\subsetneq G$. The corresponding sectors are the ones constructed
by Smirnov (\cite{Smirnov_2009},\cite{Smirnov_2012}). A further refinement then gives the motic
blowup of Brown (\cite{Brown_2017}), which specializes for massive graphs to the original
Bloch-Esnault-Kreimer construction. The toric structure of the latter variety has already been used
by Bloch and Kreimer (\cite{Bloch2008},\cite{Bloch_2006}) to interpret renormalization in terms of
mixed Hodge structures. We will also reinterpret Speer's sectors (\cite{Speer:1975dc}) as
given by a certain smooth toric variety, which is not a blowup of projective space in general.
In the last section, we will use our results to give a rigorous construction of dimensional
regularization. We review two approaches, one based on sector decomposition (\cite{Smirnov_1983},
\cite{HEINRICH_2008}, \cite{Bogner_2008}), and the other based on analytic continuation
(\cite{Panzer:2015ida}, \cite{von_Manteuffel_2015}). The results of this section are well known in
the physics literature, but we include a detailed exposition here since rigorous proofs are
difficult to find. Note that this generalizes the construction given by Etingof
(\cite{DEFJKMMRW99}) for massive graphs.
\paragraph{Acknowledgements.}
I thank Dirk Kreimer for encouragement and support. I also greatly benefitted from discussions with Christian Bogner, Henry Ki\ss{}ler
and Marko Berghoff.
\section{Toric varieties}
For the convenience of the reader, we will give a very brief review of the theory of toric
varieties. Our presentation is based on \cite{CLS}, which provides a comprehensive introduction
to toric geometry.
\paragraph{Cones and fans.}
\label{sec:TV:fans}
Let $N$ be a lattice, i.e. a free abelian group of finite rank $n=\rk N$. Its dual lattice is
$M=\Hom_{\mathbb{Z}}(N,\mathbb{Z})$. The algebraic torus associated to $N$ is
$T_{N}=N\otimes_{\mathbb{Z}}\mathbb{C}^{*}\cong G_{m}^{n}(\mathbb{C})$. Elements $m\in M$ define
characters $t^{m}\in \Hom_{\mathbb{Z}}(T_{N},\mathbb{C}^{*})$. Under this identification, the
coordinate ring of $T_{N}$ is the ring of Laurent polynomials
\begin{equation*}
\mathcal{O}(T_{N})=\mathbb{C}[M].
\end{equation*}
Similarly, elements $u\in N$ define one-parameter subgroups
\begin{equation*}
\mathbb{C}^{*}\rightarrow T_{N},\quad \lambda\mapsto u \otimes \lambda.
\end{equation*}
\begin{defin}
A complex variety $X$ is \emph{toric} if it has an action of a torus $T_{N}$ with a dense torus
orbit.
\end{defin}
We will only consider \emph{normal} toric varieties. They can be completely described by certain
polyhedral data. The relevant definitions of polyhedral geometry are collected in the appendix.
Let $\sigma\subset N_{\mathbb{R}}:= N \otimes_{\mathbb{Z}} \mathbb{R}$ be a strongly convex
polyhedral cone. $\sigma$ is \emph{rational} if there are lattice elements $u_{1},\ldots,u_{s}\in
N$, such that
\begin{equation*}
\sigma = \pos(u_{1},\ldots,u_{s}).
\end{equation*}
Its dual cone
\begin{equation*}
\sigma^{\vee} := \{m\in M_{\mathbb{R}}\ \rvert\ \langle m, u \rangle\ge 0 \text{ for all } u\in\sigma\}
\subseteq M_{\mathbb{R}}
\end{equation*}
is again a rational polyhedral cone and the set $S_{\sigma}=M\cap \sigma^{\vee}$ is a finitely
generated semigroup. The toric variety associated to $\sigma$ is
$U_{\sigma}=\Spec(\mathbb{C}[S_{\sigma}])$. The torus action is given in terms of the coordinate
rings by
\begin{equation*}
\Delta:\mathbb{C}[S_{\sigma}]\rightarrow \mathbb{C}[M]\otimes \mathbb{C}[S_{\sigma}], \quad t^{m}\mapsto
t^{m}\otimes t^{m}.
\end{equation*}
The inclusion $T_{N}\hookrightarrow U_{\sigma}$ of the dense torus orbit is dually given by the map
$\mathbb{C}[S_{\sigma}]\hookrightarrow \mathbb{C}[M]$.
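To make the construction concrete, consider the basic example of affine space.
\begin{exam}
Let $N=\mathbb{Z}^{2}$ and $\sigma=\pos(e_{1},e_{2})$. Then $\sigma^{\vee}=\pos(e^{1},e^{2})$,
where $(e^{1},e^{2})$ denotes the dual basis, and $S_{\sigma}$ is generated by $e^{1},e^{2}$. Hence
$U_{\sigma}=\Spec(\mathbb{C}[x_{1},x_{2}])=\mathbb{C}^{2}$ with $x_{i}=t^{e^{i}}$. For the face
$\tau=\pos(e_{1})$ we have $S_{\tau}=\mathbb{Z}_{\ge 0}\,e^{1}+\mathbb{Z}\,e^{2}$, so
$U_{\tau}=\mathbb{C}\times\mathbb{C}^{*}$, and the trivial cone $\{0\}$ recovers the torus
$T_{N}=(\mathbb{C}^{*})^{2}$ itself.
\end{exam}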
We can construct all normal toric varieties by gluing such affine varieties along open subsets, as
long as the corresponding cones intersect nicely.
\begin{defin}
A \emph{fan} $\Sigma$ in $N$ is a collection of rational, strongly convex polyhedral cones such
that:
\begin{enumerate}
\item If $\sigma\in\Sigma$ and $\tau\subseteq \sigma$ is a face of $\sigma$ then $\tau\in\Sigma$.
\item The intersection of two cones $\sigma_{1},\sigma_{2}\in\Sigma$ is a common face
$\tau\in\Sigma$ of both.
\end{enumerate}
The set of cones of dimension $k$ is denoted by $\Sigma(k)$. Let
$|\Sigma|=\bigcup_{\sigma\in\Sigma}\sigma$ be the support of $\Sigma$.
We call $\Sigma$ a \emph{generalized fan} if it consists of rational polyhedral cones which are
not necessarily strongly convex.
\end{defin}
If $\sigma_{1},\sigma_{2}$ are two rational, strongly convex cones intersecting in the common face
$\tau=\sigma_{1}\cap \sigma_{2}$, then the dual inclusions $\sigma_{1}^{\vee}\subset
\tau^{\vee}\supset \sigma_{2}^{\vee}$ define the inclusions
\begin{equation*}
\mathbb{C}[S_{\sigma_{1}}] \hookrightarrow \mathbb{C}[S_{\tau}] \hookleftarrow \mathbb{C}[S_{\sigma_{2}}].
\end{equation*}
One can show that $\mathbb{C}[S_{\tau}]$ is a common localization of $\mathbb{C}[S_{\sigma_{i}}]$,
such that $U_{\tau}\subset U_{\sigma_{i}}$. Gluing $U_{\sigma_{1}}$ and $U_{\sigma_{2}}$ along the
dense open subset $U_{\tau}$ gives a new toric variety. This can be done coherently for all cones in
a fan and we obtain the following:
\begin{prop}[{\cite[Thm. 3.1.5]{CLS}}]
If $\Sigma$ is a fan in $N_{\mathbb{R}}$ then the $U_{\sigma}$ for $\sigma\in\Sigma$ glue together
to give a normal toric variety $X_{\Sigma}$ and every normal toric variety is of this form up to
isomorphism.
\end{prop}
Properties of $X_{\Sigma}$ are reflected in properties of the fan:
\begin{prop}\label{prop:properties-fan-variety}
\begin{enumerate}
\item $X_{\Sigma}$ is complete if and only if $|\Sigma|=N_{\mathbb{R}}$.
\item $X_{\Sigma}$ is smooth if and only if every cone $\sigma\in\Sigma$ can be generated by part
of a $\mathbb{Z}$-basis of $N$.
In this case the fan $\Sigma$ and its cones $\sigma\in\Sigma$ are called smooth.
\end{enumerate}
\end{prop}
\begin{proof}
1. is \cite[Thm. 3.4.1]{CLS} and 2. is \cite[Thm. 3.1.19]{CLS}.
\end{proof}
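The simplest complete example illustrates the gluing.
\begin{exam}
Let $N=\mathbb{Z}$ and $\Sigma=\{0,\pos(1),\pos(-1)\}$. The two maximal cones give
$U_{\pos(1)}=\Spec(\mathbb{C}[t])$ and $U_{\pos(-1)}=\Spec(\mathbb{C}[t^{-1}])$, glued along
$U_{0}=\Spec(\mathbb{C}[t,t^{-1}])$. Hence $X_{\Sigma}=\mathbb{P}^{1}$, which is complete and
smooth, in accordance with the proposition below: $|\Sigma|=\mathbb{R}$ and each cone is generated
by part of a $\mathbb{Z}$-basis.
\end{exam}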
\begin{defin}
Let $X_{\Sigma_{1}},X_{\Sigma_{2}}$ be toric varieties with fans $\Sigma_{i}\subseteq
(N_{i})_{\mathbb{R}}$. A morphism $\varphi:X_{\Sigma_{1}}\rightarrow X_{\Sigma_{2}}$ is \emph{toric}
if $\varphi|_{T_{N_{1}}}$ induces a group morphism $T_{N_{1}}\rightarrow T_{N_{2}}$.
\end{defin}
Being toric automatically implies that $\varphi$ is $T_{N_{i}}$-equivariant. We can identify $N_{i}$
with the one-parameter subgroups of $T_{N_{i}}$ and since $\varphi$ is a group morphism, we get a
homomorphism
\begin{equation*}
\overline\varphi:N_{1}\rightarrow N_{2}
\end{equation*}
of lattices. We say that such a morphism is compatible with the fans $\Sigma_{i}$ if for every
$\sigma_{1}\in\Sigma_{1}$ there is $\sigma_{2}\in\Sigma_{2}$ with
$\overline\varphi(\sigma_{1})\subseteq \sigma_{2}$.
\begin{prop}[{\cite[Thm. 3.3.4]{CLS}}]
If $\varphi:X_{\Sigma_{1}}\rightarrow X_{\Sigma_{2}}$ is toric, then the induced map
$\overline\varphi:N_{1}\rightarrow N_{2}$ is compatible with the fans $\Sigma_{1},\Sigma_{2}$.
Conversely, every morphism $\overline\varphi:N_{1}\rightarrow N_{2}$ compatible with the fans
uniquely determines a toric morphism $\varphi:X_{\Sigma_{1}}\rightarrow X_{\Sigma_{2}}$ which extends
\begin{equation*}
\overline\varphi \otimes 1:N_{1} \otimes \mathbb{C}^{*}=
T_{N_{1}}\rightarrow N_{2} \otimes \mathbb{C}^{*}=T_{N_{2}}.
\end{equation*}
\end{prop}
\begin{remark}\label{rem:proper-toric-morphism}
One can show that $\varphi:X_{\Sigma_{1}}\rightarrow X_{\Sigma_{2}}$ is proper if and only if
$\overline\varphi^{-1}(|\Sigma_{2}|)=|\Sigma_{1}|$. See \cite[Thm. 3.4.11]{CLS}.
\end{remark}
\begin{exam}
Suppose $\varphi$ is the identity and $\Sigma_{1}$ is a refinement of $\Sigma_{2}$, i.e.
$|\Sigma_{1}| = |\Sigma_{2}|$ and every cone $\sigma_{1}\in \Sigma_{1}$ is contained in some cone
$\sigma_{2}\in\Sigma_{2}$. Then the corresponding map $X_{\Sigma_{1}}\rightarrow X_{\Sigma_{2}}$
is proper and birational.
\end{exam}
For some applications, it is useful to work with the following weakened version of smoothness.
\begin{defin}
A strongly convex polyhedral cone $\sigma$ is called \emph{simplicial} if its ray generators are
linearly independent. A toric variety $X_{\Sigma}$ is \emph{simplicial} if its fan $\Sigma$
consists of simplicial cones.
\end{defin}
We will see below that a simplicial variety has only abelian quotient singularities. For many
purposes, this is as good as smoothness.
\paragraph{The orbit-cone correspondence.}
\label{sec:TV:Orbit-Cone}
A cone $\sigma\in\Sigma$ defines a distinguished point $\gamma_{\sigma}\in U_{\sigma}\subseteq
X_{\Sigma}$, given by the semigroup morphism $S_{\sigma}\rightarrow\mathbb{C}$,
\begin{equation*}
m\mapsto
\begin{cases}
1 ,\quad m\in \sigma^{\bot}\cap M\\
0 ,\quad \text{otherwise}
\end{cases}
\end{equation*}
This is a fixed point for the $T_{N}$ action if and only if $\dim\sigma=\dim N_{\mathbb{R}}$.
\begin{thm}[Orbit-Cone Correspondence]\label{sec:orbit-cone-corr}
There is a bijective correspondence
\begin{align*}
\{\sigma\in\Sigma \} &\longleftrightarrow \{T_{N}\text{-orbits}\subseteq X_{\Sigma}\} \\
\sigma &\longleftrightarrow O(\sigma):= T_{N}\cdot \gamma_{\sigma}
\end{align*}
having the following properties:
\begin{enumerate}
\item $\dim \sigma + \dim O(\sigma)=\dim N_{\mathbb{R}}$
\item The affine open set $U_{\sigma}$ decomposes into orbits as
\begin{equation*}
U_{\sigma}=\bigcup_{\tau\preceq\sigma}O(\tau)
\end{equation*}
\item $\tau\preceq\sigma$ if and only if $O(\sigma)\subseteq \overline{O(\tau)}$, and
\begin{equation*}
V(\tau):=\overline{O(\tau)} = \bigcup_{\tau\preceq\sigma}O(\sigma)
\end{equation*}
\end{enumerate}
\end{thm}
\begin{proof}
\cite[Theorem 3.2.6 and Prop. 3.2.7]{CLS}
\end{proof}
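For the affine plane, the correspondence reads as follows.
\begin{exam}
Let $\sigma=\pos(e_{1},e_{2})\subset\mathbb{R}^{2}$ and $\Sigma$ the fan of faces of $\sigma$, so
$X_{\Sigma}=U_{\sigma}=\mathbb{C}^{2}$. The trivial cone corresponds to the dense orbit
$(\mathbb{C}^{*})^{2}$, the rays $\pos(e_{1})$ and $\pos(e_{2})$ to the punctured coordinate axes
$\{0\}\times\mathbb{C}^{*}$ and $\mathbb{C}^{*}\times\{0\}$, and $\sigma$ itself to the fixed point
$\gamma_{\sigma}=(0,0)$. The orbit closure of the ray $\pos(e_{1})$ is the coordinate axis
$V(\pos(e_{1}))=\{0\}\times\mathbb{C}$, in accordance with part 3.
\end{exam}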
\paragraph{Divisors and the homogeneous coordinate ring.}
\label{sec:TV:divisor-coordinate-ring}
Let $X_{\Sigma}$ be a toric variety associated to the fan $\Sigma$. A one-dimensional cone
$\rho\in\Sigma(1)$ gives a torus-invariant divisor $D_{\rho}=V(\rho)$ under the orbit-cone
correspondence and every torus-invariant divisor is an integral linear combination of these.
Denoting the group of torus-invariant divisors by
$Div_{T}(X_{\Sigma})$, we have an identification
\begin{equation*}
\mathbb{Z}^{\Sigma(1)}\cong Div_{T}(X_{\Sigma}).
\end{equation*}
Since $\rho$ is a one-dimensional rational cone, there is a unique smallest lattice generator of
$\rho$, i.e. an element $u_{\rho}\in N$ such that $\rho=\mathbb{R}^{+}u_{\rho}$.
\begin{prop}[{\cite[Thm 4.1.3]{CLS}}]\label{prop:divis-toric-vari}
There is an exact sequence
\begin{center}
\begin{tikzcd}
M \arrow{r}{} & \mathbb{Z}^{\Sigma(1)} \arrow{r}{} & \Cl(X_{\Sigma{}}) \arrow{r} & 0
\end{tikzcd}
\end{center}
where $\Cl(X_{\Sigma})$ denotes the class group. The first morphism maps $m\in M$ to the divisor
\begin{equation*}
div(t^{m}) = \sum_{\rho\in\Sigma(1)}\langle m, u_{\rho} \rangle D_{\rho}
\end{equation*}
of the rational function $t^{m}$. The second is the natural quotient map
$\mathbb{Z}^{\Sigma(1)}\cong Div_{T}(X_{\Sigma})\rightarrow \Cl(X_{\Sigma})$. If $X_{\Sigma}$ has
no torus factors, i.e. it is not of the form $X_{\Sigma}\cong X_{\Sigma'}\times T^{k}$, then there
is a short exact sequence
\begin{center}
\begin{tikzcd}
0 \arrow{r} & M \arrow{r}{} & \mathbb{Z}^{\Sigma(1)} \arrow{r}{} & \Cl(X) \arrow{r} & 0.
\end{tikzcd}
\end{center}
\end{prop}
The global sections of torus-invariant divisors are described by polyhedra as follows (\cite[Prop.
4.3.3]{CLS}): If $D=\sum_{\rho}a_{\rho}D_{\rho}$ is a torus-invariant divisor on $X_{\Sigma}$, then
\begin{equation*}
\Gamma(X_{\Sigma},\mathcal{O}_{X_{\Sigma}}(D)) = \bigoplus_{m\in P_{D}\cap M}\mathbb{C}\cdot t^{m},
\end{equation*}
where
\begin{equation*}
P_{D} := \{m\in M_{\mathbb{R}}\ \rvert\ \langle m, u_{\rho} \rangle\ge -a_{\rho} \text{ for all } \rho\in\Sigma(1)\}.
\end{equation*}
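As a sanity check, this reproduces the classical description of sections of line bundles on
$\mathbb{P}^{1}$.
\begin{exam}
Let $X_{\Sigma}=\mathbb{P}^{1}$ with rays generated by $u_{0}=-1$ and $u_{\infty}=1$, and let
$D=d\,D_{u_{0}}$ for $d\ge 0$. Then
$P_{D}=\{m\in\mathbb{R}\ \rvert\ \langle m,-1\rangle\ge -d,\ \langle m,1\rangle\ge 0\}=[0,d]$, so
$\Gamma(\mathbb{P}^{1},\mathcal{O}(D))$ has basis $1,t,\ldots,t^{d}$ and dimension $d+1$, as
expected for $\mathcal{O}_{\mathbb{P}^{1}}(d)$.
\end{exam}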
Now suppose $X_{\Sigma}$ is a simplicial toric variety without torus factors. We want to describe
$X_{\Sigma}$ by a graded ring $S_{\Sigma}$, generalizing the homogeneous coordinate description of
projective space. Tensoring the exact sequence of the class group with
$\Hom_{\mathbb{Z}}(-,\mathbb{C}^{*})$ gives the exact sequence
\begin{center}
\begin{tikzcd}
1 \arrow{r} & G \arrow{r}{} & (\mathbb{C}^{*})^{\Sigma(1)}\arrow{r} & T_N \arrow{r} & 1
\end{tikzcd}
\end{center}
where $G=\Hom_{\mathbb{Z}}(\Cl(X),\mathbb{C}^{*})$ is the character group of $\Cl(X)$. This is a
reductive group isomorphic to the product of a torus and a finite group. We can describe $G$
concretely as
\begin{equation*}
G = \{(t_{\rho})\in
(\mathbb{C}^{*})^{\Sigma(1)}\ \rvert\ \prod_{\rho}t_{\rho}^{\langle m,
u_{\rho} \rangle}=1 \text{ for all } m\in M\}.
\end{equation*}
\begin{defin}
The \emph{homogeneous coordinate ring} of $X_{\Sigma}$ is
\begin{equation*}
S_{\Sigma}=\mathbb{C}[x_{\rho}\ \rvert\ \rho\in\Sigma(1)]=\mathcal{O}(\mathbb{C}^{\Sigma(1)}).
\end{equation*}
\end{defin}
The ring $S_{\Sigma}$ is graded by $\Cl(X)$:
\begin{equation*}
\deg(x_{\rho}) = [D_{\rho}].
\end{equation*}
This gives an action of $G$ by duality, which is just the restriction of the natural scaling action
of $(\mathbb{C}^{*})^{\Sigma(1)}$. The corresponding eigenspaces are the graded components of
$S_{\Sigma}$:
\begin{equation*}
S_{\Sigma} = \bigoplus_{\beta\in \Cl(X)}S_{\beta}.
\end{equation*}
We want to describe $X_{\Sigma}$ as a suitable quotient of
$\Spec(S_{\Sigma})=\mathbb{C}^{\Sigma(1)}$ by $G$. For this to work, we first have to throw out
some badly behaved $G$-orbits.
\begin{defin}
For $\sigma\in\Sigma$ let $x^{\hat\sigma}=\prod_{\rho\notin \sigma(1)}x_{\rho}\in S_{\Sigma}$. The
\emph{irrelevant ideal} is
\begin{equation*}
B_{\Sigma}=\langle x^{\hat\sigma}\ \rvert\ \sigma\in\Sigma_{max}\rangle.
\end{equation*}
The corresponding zero set $Z_{\Sigma}=V(B_{\Sigma})$ is the \emph{irrelevant locus}.
\end{defin}
The variety $\mathbb{C}^{\Sigma(1)}\backslash Z_{\Sigma}$ is again toric. Its fan can be described
as follows: For $\sigma\in\Sigma$ let
\begin{equation*}
\tilde\sigma = \pos(e_{\rho}\ \rvert\ \rho\in\sigma(1))\subseteq \mathbb{R}^{\Sigma(1)}.
\end{equation*}
The collection of all $\tilde\sigma$ constitutes the fan $\tilde\Sigma$ of
$\mathbb{C}^{\Sigma(1)}\backslash Z_{\Sigma}$. The lattice morphism
\begin{align*}
\overline \pi:\mathbb{Z}^{\Sigma(1)}\rightarrow N,\quad e_{\rho}\mapsto u_{\rho}
\end{align*}
is obviously compatible with the fans. Hence we get a toric morphism
\begin{equation*}
\pi:\mathbb{C}^{\Sigma(1)}\backslash Z_{\Sigma}\rightarrow X_{\Sigma}.
\end{equation*}
\begin{thm}[{\cite[Thm. 5.1.11]{CLS}}]\label{thm:geometric-quotient}
Let $X_{\Sigma}$ be a simplicial toric variety without torus factors. The map $\pi$ describes
$X_{\Sigma}$ as the geometric quotient
\begin{equation*}
X_{\Sigma}=\left(\mathbb{C}^{\Sigma(1)}\backslash Z_{\Sigma}\right)//G.
\end{equation*}
\end{thm}
\begin{remark}
We refer to \cite[Section 5.0]{CLS} for background on geometric invariant theory and geometric
quotients. Let us point out some consequences of this result:
\begin{enumerate}
\item The $G$-orbits on $\mathbb{C}^{\Sigma(1)}\backslash Z_{\Sigma}$ are closed and the set of
closed points of $X_{\Sigma}$ is the orbit space.
\item For an affine open subset $U=\Spec(R)\subseteq X_{\Sigma}$ we have $\pi^{-1}(U)=\Spec(\tilde
S)$, where $\tilde S$ is a localization of $S_{\Sigma}$ with an induced $G$-action.
That $\pi$ is a geometric quotient implies that $R=\tilde S^{G}$, i.e. $R$ is the subring of
$G$-invariants.
\end{enumerate}
\end{remark}
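The motivating example is projective space.
\begin{exam}
Let $\Sigma$ be the fan in $N=\mathbb{Z}^{n}$ with ray generators $u_{i}=e_{i}$ for
$i=1,\ldots,n$ and $u_{0}=-e_{1}-\ldots-e_{n}$, whose maximal cones are
$\pos(u_{j}\ \rvert\ j\neq i)$ for $i=0,\ldots,n$. Then $X_{\Sigma}=\mathbb{P}^{n}$ and
$S_{\Sigma}=\mathbb{C}[x_{0},\ldots,x_{n}]$ with $\Cl(X_{\Sigma})=\mathbb{Z}$ and
$\deg x_{i}=1$. The irrelevant ideal is $B_{\Sigma}=\langle x_{0},\ldots,x_{n}\rangle$ with
$Z_{\Sigma}=\{0\}$, and $G=\mathbb{C}^{*}$ acts by scaling, so Theorem
\ref{thm:geometric-quotient} recovers the classical quotient
$\mathbb{P}^{n}=(\mathbb{C}^{n+1}\backslash\{0\})/\mathbb{C}^{*}$.
\end{exam}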
Let us specialize the above remark to an affine open $U_{\sigma}\subseteq X_{\Sigma}$ given by a
cone $\sigma\in\Sigma$. For the inverse image we have
\begin{align*}
\pi^{-1}(U_{\sigma}) = U_{\tilde\sigma}=\Spec(\mathbb{C}[\tilde \sigma^{\vee}\cap \mathbb{Z}^{\Sigma(1)}]),
\end{align*}
where $\tilde\sigma=\pos(e_{\rho} \ \rvert\ \rho\in\sigma(1))$ and we identify
$\mathbb{Z}^{\Sigma(1)}$ with its dual lattice via the standard basis. The coordinate ring is then
\begin{equation*}
\mathbb{C}[\tilde \sigma^{\vee}\cap \mathbb{Z}^{\Sigma(1)}]
=\mathbb{C}\left[\prod_{\rho}x_{\rho}^{a_{\rho}}\ \rvert\ a_{\rho}\ge 0 \text{ for } \rho\in\sigma(1)\right]
= S_{x^{\hat\sigma}},
\end{equation*}
i.e. we invert every variable $x_{\rho}$ for $\rho\notin \sigma(1)$. Hence we get
\begin{equation*}
\pi^{-1}(U_{\sigma})=\Spec(S_{x^{\hat\sigma}}).
\end{equation*}
The map on coordinate rings is given by homogenization:
\begin{align*}
\pi^{*}:\mathbb{C}[\sigma^{\vee}\cap M]&\longrightarrow S_{x^{\hat\sigma}}\\
\pi^{*}(t^{m})&=\prod_{\rho}x^{\langle m, u_{\rho} \rangle}_{\rho}
\end{align*}
Its image is the space of $G$-invariants $S^{G}_{x^{\hat\sigma}}$. This gives the isomorphism
\begin{equation*}
U_{\sigma}=\Spec( \mathbb{C}[\sigma^{\vee}\cap M])\cong \Spec(S^{G}_{x^{\hat\sigma}}).
\end{equation*}
For top-dimensional cones $\sigma\in\Sigma(\dim N)$, we can describe the above isomorphism by
dehomogenization, i.e. setting some of the variables $x_{\rho}$ to 1. Let
\begin{align*}
\varphi_{\sigma}&:\mathbb{C}^{\sigma(1)}\rightarrow \mathbb{C}^{\Sigma(1)} \\
\varphi_{\sigma}(a)_{\rho} &=
\begin{cases}
a_{\rho}, &\quad \rho\in\sigma(1) \\
1, &\quad \rho\notin\sigma(1).
\end{cases}
\end{align*}
The diagram
\begin{center}\begin{tikzcd}
\mathbb{C}^{\sigma(1)}\arrow[hook]{r}{\varphi_\sigma}\arrow{d}{\pi_{\sigma}} & \mathbb{C}^{\Sigma(1)}\backslash Z_{\Sigma} \arrow{d}{\pi_{\Sigma}} \\
U_{\sigma} \arrow[hook]{r} & X_{\Sigma}
\end{tikzcd}\end{center}
commutes and the left vertical map is an isomorphism if $\sigma$ is smooth. For simplicial cones, we
still have the following:
\begin{prop}
The map $\pi_{\sigma}:\mathbb{C}^{\sigma(1)}\rightarrow U_{\sigma}$ is the geometric quotient of
$\mathbb{C}^{\sigma(1)}$ by the finite group
$G_{\sigma}=\Hom_{\mathbb{Z}}(\Cl(U_{\sigma}),\mathbb{C}^{*})$.
\end{prop}
\begin{proof}
This is a special case of Theorem \ref{thm:geometric-quotient}. We have $Z_{\sigma}=\emptyset$ and
the class group is the quotient
\begin{equation*}
\Cl(U_{\sigma}) = \mathbb{Z}^{\sigma(1)}/ M.
\end{equation*}
This group is torsion and thus finite, since the ray generators of $\sigma(1)$ furnish an
isomorphism $\mathbb{R}^{\sigma(1)}\cong N_{\mathbb{R}}$. The character group $G_{\sigma}$ must
then also be finite.
\end{proof}
\begin{col}
A simplicial toric variety without torus factors $X_{\Sigma}$ has a natural orbifold structure
with abelian local groups.
\end{col}
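The simplest singular example is the quadric cone.
\begin{exam}
Let $N=\mathbb{Z}^{2}$ and $\sigma=\pos(e_{1}+e_{2},\,e_{1}-e_{2})$, a simplicial but not smooth
cone. The semigroup $S_{\sigma}$ is generated by $u=t^{(1,1)}$, $v=t^{(1,0)}$, $w=t^{(1,-1)}$ with
the single relation $uw=v^{2}$, so $U_{\sigma}=\Spec(\mathbb{C}[u,v,w]/(uw-v^{2}))$ is the quadric
cone. Here $\Cl(U_{\sigma})=\mathbb{Z}^{\sigma(1)}/M\cong\mathbb{Z}/2$ and
$G_{\sigma}=\{\pm(1,1)\}$ acts diagonally on $\mathbb{C}^{2}$, exhibiting
$U_{\sigma}\cong\mathbb{C}^{2}/(\mathbb{Z}/2)$ as an $A_{1}$-singularity.
\end{exam}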
There is a general correspondence between graded $S_{\Sigma}$-modules and quasi-coherent sheaves on
$X_{\Sigma}$. We want to explain this in the case of the canonical sheaf, which is defined as
follows: Let $j:X^{sm}_{\Sigma}\rightarrow X_{\Sigma}$ be the inclusion of the smooth part of
$X_{\Sigma}$. The canonical sheaf is then
\begin{equation*}
\omega_{\Sigma} = j_{*}\Omega^{n}_{X^{sm}_{\Sigma}},
\end{equation*}
where $\Omega^{n}_{X_{\Sigma}^{sm}}$ is the sheaf of holomorphic $n$-forms on $X^{sm}_{\Sigma}$. In
other words, a section of $\omega_{\Sigma}$ over an open $U\subseteq X_{\Sigma}$ is given by a
holomorphic $n$-form on $U^{sm}=U \cap X^{sm}$.
Let $(e_{1},\ldots,e_{n})$ be a basis of $M$ and $I=\{\rho_{1},\ldots,\rho_{n}\}\subseteq \Sigma(1)$
an $n$-element subset. Let $u_{I}=\det\left(\langle e_{i}, u_{\rho_{j}} \rangle_{ij}\right)$ and set
\begin{equation*}
\Omega_{\Sigma} = \sum_{I}u_{I}\left( \prod_{\rho\notin I}x_{\rho} \right)
dx_{\rho_{1}}\wedge\ldots\wedge dx_{\rho_{n}}.
\end{equation*}
This is an element of the $S_{\Sigma}$-module
\begin{equation*}
\bigwedge^{n}\Omega^{1}_{S_{\Sigma}} \cong \Gamma(\mathbb{C}^{\Sigma(1)},\Omega^{n}_{\mathbb{C}^{\Sigma(1)}}),
\end{equation*}
where $\Omega^{1}_{S_{\Sigma}}$ is the module of K\"ahler differentials over $S_{\Sigma}$. Note that
$\Omega_{\Sigma}$ is independent of the above choices up to sign (i.e. up to the choice of an
orientation of $M$ or $N$). The group $G$ acts on $\bigwedge^{n}\Omega^{1}_{S_{\Sigma}}$ by
pullback. An element $g=(t_{\rho})\in G\subseteq (\mathbb{C}^{*})^{\Sigma(1)}$ acts by
\begin{equation*}
g\cdot \Omega_{\Sigma} = \Big(\prod_{\rho}t_{\rho}\Big)\,\Omega_{\Sigma}.
\end{equation*}
Hence $\Omega_{\Sigma}$ has degree
\begin{equation*}
\beta = \left[\sum_{\rho}D_{\rho}\right]\in \Cl(X_{\Sigma}).
\end{equation*}
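For projective space, this recovers the classical Euler form.
\begin{exam}
For $X_{\Sigma}=\mathbb{P}^{n}$ with rays $u_{i}=e_{i}$, $u_{0}=-e_{1}-\ldots-e_{n}$, all
determinants satisfy $u_{I}=\pm 1$ and, up to a global sign,
\begin{equation*}
\Omega_{\Sigma}=\sum_{i=0}^{n}(-1)^{i}x_{i}\,dx_{0}\wedge\ldots\wedge\widehat{dx_{i}}\wedge\ldots\wedge dx_{n},
\end{equation*}
which has degree $\beta=n+1$ under the identification $\Cl(\mathbb{P}^{n})=\mathbb{Z}$, i.e.
$\beta=[\mathcal{O}_{\mathbb{P}^{n}}(n+1)]=[-K_{\mathbb{P}^{n}}]$.
\end{exam}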
If $F,H\in S_{\Sigma}$ are polynomials, such that $\deg F-\deg H=-\beta$, then the meromorphic
$n$-form
\begin{equation*}
\frac{F(x)}{H(x)}\Omega_{\Sigma}
\end{equation*}
is $G$-invariant and descends to a global meromorphic section of $\omega_{\Sigma}$. Conversely, let
$f,h\in \mathcal{O}(T_{N})$ be Laurent polynomials and consider the section
\begin{equation*}
\alpha = \frac{f(t)}{h(t)}\frac{dt_{1}}{t_{1}}\wedge \ldots \wedge \frac{dt_{n}}{t_{n}} \in \Gamma(T_{N},\omega_{T_{N}}),
\end{equation*}
where $t_{1},\ldots,t_{n}$ are generators of $\mathcal{O}(T_{N})$. Pulling back along the quotient
map $\pi: \mathbb{C}^{\Sigma(1)} \backslash Z_{\Sigma}\rightarrow X_{\Sigma}$ gives a meromorphic
form $\pi^{*}\alpha$ on $\mathbb{C}^{\Sigma(1)}$, which we can describe as follows.
\begin{prop}\label{prop:differential-form}
Let $F=\pi^{*}f$ and $H=\pi^{*}h\prod_{\rho}x_{\rho}$. Then the pullback $\pi^{*}\alpha$ is given
by
\begin{equation*}
\pi^{*}\alpha = \frac{F(x)}{H(x)}\Omega_{\Sigma}.
\end{equation*}
\end{prop}
\begin{proof}
From $\pi^{*}(t^{m})=\prod_{\rho}x_{\rho}^{\langle m, u_{\rho} \rangle}$ it follows that
\begin{equation*}
\pi^{*}\left( \frac{dt_{i}}{t_{i}} \right) = \sum_{\rho}\langle e_{i}, u_{\rho} \rangle \frac{dx_{\rho}}{x_{\rho}}.
\end{equation*}
Taking the wedge product and multiplying by $\prod_{\rho}x_{\rho}$ gives
\begin{equation*}
\left( \prod_{\rho}x_{\rho} \right)\pi^{*}\left( \frac{dt_{1}}{t_{1}}\wedge \ldots \wedge \frac{dt_{n}}{t_{n}} \right) = \Omega_{\Sigma}
\end{equation*}
and the above formula follows.
\end{proof}
Over the maximal cone $\sigma=\pos(u_{i}\ \rvert\ i\in I_{0})$, we formally set $x_{\rho}=1$ for
$\rho\notin\sigma(1)$ and get the section
\begin{equation*}
\alpha\big\rvert_{U_{\sigma}} = u_{\sigma}\frac{\tilde f(x)}{\tilde h(x)}\mathrm{d} x_{i_{1}}\wedge \ldots \wedge \mathrm{d} x_{i_{n}}\in \Gamma(U_{\sigma},\omega_{\Sigma}),
\end{equation*}
where $u_{\sigma}=u_{I_{0}}$ and $\tilde f(x)$ and $\tilde h(x)$ are the dehomogenizations of $F$
and $H$, i.e. the restriction of $F$ and $H$ to the set $\{x_{\rho}=1 \ \rvert\
\rho\notin\sigma(1)\}$.
\paragraph{Lattice polytopes.}
\label{sec:TV:Polytopes}
Now let $P \subseteq M_{\mathbb{R}}$ be a full-dimensional lattice polytope, i.e. it is the convex
hull of finitely many lattice points. Denote by $P(k)$ the set of $k$-dimensional faces. $P$ can be
described by the facet presentation
\begin{equation*}
P = \{m\in M_{\mathbb{R}}\ \rvert\ \langle m,u_{F}\rangle \ge -a_{F} \text{ for all } F\in P(n-1)\},
\end{equation*}
where $a_{F}\in \mathbb{Z}$ and $u_{F}$ is the minimal lattice generator of the ray $\rho_{F}$
consisting of inward pointing normal vectors of $F$.
We can construct the \emph{normal fan} of $P$ as follows: Let $v\in P$ be a vertex and $C_{v}$ the
cone generated by $P\cap M -v$. Its dual cone $\sigma_{v}=C_{v}^{\vee}$ is again rational and
strongly convex. In terms of the facet presentation above, we have
\begin{equation*}
\sigma_{v} = \pos(u_{F}\ \rvert\ F\in P(n-1), v\in F).
\end{equation*}
More generally, we can define for any face $Q\in P(k)$ the cone
\begin{equation*}
\sigma_{Q} = \pos(u_{F}\ \rvert\ F\in P(n-1), Q\subseteq F).
\end{equation*}
\begin{prop}\label{prop:normal-fan-polytope}
The cones $\sigma_{Q}$ constitute a complete fan $\Sigma_{P}$ in $N_{\mathbb{R}}$ and define a
complete toric variety $X_{P}:=X_{\Sigma_{P}}$. A vector $u\in N_{\mathbb{R}}$ defines the face
$F_{u}P:=\{m\in P\ \rvert\ \langle m,u \rangle = \min_{m'\in P}\langle m',u \rangle\}$, and
$F_{u}P=Q$ if and only if $u\in \relint(\sigma_{Q})$. This defines an inclusion reversing
bijection between $\Sigma_{P}$ and the set of faces of $P$.
\end{prop}
\begin{proof}
See \cite[Prop. 2.3.7 and Prop. 3.1.6]{CLS}.
\end{proof}
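The standard simplex recovers projective space.
\begin{exam}
Let $P\subset M_{\mathbb{R}}\cong\mathbb{R}^{n}$ be the standard simplex, the convex hull of
$0,e^{1},\ldots,e^{n}$, where $(e^{i})$ denotes the dual basis. Its facet presentation is
$\langle m, e_{i} \rangle \ge 0$ for $i=1,\ldots,n$ together with
$\langle m, u_{0} \rangle \ge -1$ for $u_{0}=-e_{1}-\ldots-e_{n}$. The normal fan $\Sigma_{P}$
therefore has rays $e_{1},\ldots,e_{n},u_{0}$ and is the fan of $X_{P}=\mathbb{P}^{n}$; for the
vertex $v=0$ one finds $\sigma_{v}=\pos(e_{1},\ldots,e_{n})$.
\end{exam}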
If $P$ is not full-dimensional, then the same construction will give a generalized fan.
\begin{remark}
The Orbit-Cone correspondence takes the following form: The cones $\sigma\in \Sigma_{P}$
correspond to faces $Q\subseteq P$, hence every $Q$ gives a torus orbit $O(Q)\subseteq X_{P}$
and its closure $V(Q)$. The latter is a closed toric subvariety and hence again given by a
complete fan, which we can describe as follows: By translating $Q$ by one of its vertices we can
assume that $0\in Q$. Let then $M_{Q}=M\cap\mathrm{span}_{\mathbb{R}}(Q)$, so that $Q\subseteq (M_{Q})_{\mathbb{R}}$
becomes a full-dimensional lattice polytope with normal fan $\Sigma_{Q}$. Then we have
$V(Q)\cong X_{Q}$. See \cite[Prop 3.2.9]{CLS}.
Identifying rays $\rho\in\Sigma_{P}(1)$ with facets $F$ of $P$, we get the divisor
$D_{P}=\sum_{F}a_{F}D_{F}$ canonically attached to $X_{P}$. One can show that $D_{P}$ is ample and
there is a bijective correspondence
\begin{equation*}
P \longleftrightarrow (X_{\Sigma},D)
\end{equation*}
between full dimensional lattice polytopes $P \subseteq M_{\mathbb{R}}$ and complete toric
varieties $X_{\Sigma}$ with fan $\Sigma\subseteq N_{\mathbb{R}}$ together with a distinguished
torus-invariant ample divisor $D$. See \cite[Thm. 6.2.1]{CLS}.
\end{remark}
The following proposition will be needed later.
\begin{prop}\label{prop:maximal-cones}
Suppose $P\subset M_{\mathbb{R}}$ is a (not necessarily full-dimensional) lattice polytope. An $n$-dimensional
cone $\sigma=\pos(u_{1},\ldots,u_{n})$ is contained in a maximal cone $\tilde
\sigma\in\Sigma_{P}(n)$ if and only if there is a unique vertex $m_{\sigma}\in P(0)$ such that
\begin{equation*}
\langle m_{\sigma}, u_{i} \rangle = \min_{m\in P}\langle m, u_{i} \rangle
\end{equation*}
for all $i=1,\ldots,n$.
\end{prop}
\begin{proof}
Let $\tilde \sigma\in\Sigma_{P}(n)$ correspond to the vertex $m_{0}\in P(0)$. The
cone $\sigma=\pos(u_{1},\ldots, u_{n})$ is contained in $\tilde \sigma$ if and only if every
weight vector $w=\sum_{i=1}^{n}\lambda_{i}u_{i}\in \Int(\sigma)$ defines the face
$F_{w}P=\{m_{0}\}$. This means that
\begin{equation*}
\langle m_{0}, w \rangle < \langle m, w \rangle, \text{ for all } m\in P \backslash\{m_{0}\}.
\end{equation*}
Varying $\lambda_{i}$, it is easy to see that this is only possible if
\begin{equation*}
\langle m_{0}, u_{i} \rangle = \min_{m\in P}\langle m, u_{i} \rangle,
\end{equation*}
for all $i=1,\ldots,n$. Conversely, suppose $m_{0}$ minimizes $\langle m, u_{i} \rangle$ for
all $i$ and thus for all $w\in\sigma$. Suppose there is another $\tilde m\in P$, such that
$\langle \tilde m, w \rangle$ is minimal. Then $\langle\tilde m-m_{0}, w \rangle=0$ and $\langle
\tilde m-m_{0}, v\rangle\ge 0$ for all other $v\in\sigma$. This means that $w$ lies in a face of $\sigma$
and thus $w\notin\Int(\sigma)$, a contradiction.
\end{proof}
For two lattice polytopes $P_{1},P_{2}\subset M_{\mathbb{R}}$, let
\begin{equation*}
Q = P_{1} + P_{2} = \{m_{1}+m_{2} \ \rvert\ m_{1}\in P_{1}, m_{2}\in P_{2}\}
\end{equation*}
be their Minkowski sum. This is clearly a lattice polytope again.
\begin{prop}[{\cite[Prop. 7.12]{Ziegler_1995}}]\label{prop:common-refinement}
The normal fan $\Sigma_{Q}$ of $Q$ is the coarsest common refinement of the normal fans
$\Sigma_{P_{1}},\Sigma_{P_{2}}$.
\end{prop}
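A simple two-dimensional illustration:
\begin{exam}
Let $P_{1}=[0,e^{1}]$ and $P_{2}=[0,e^{2}]$ be two segments in $M_{\mathbb{R}}=\mathbb{R}^{2}$.
Their normal fans are generalized fans, each consisting of two closed half-planes and their common
boundary line. The Minkowski sum $Q=P_{1}+P_{2}$ is the unit square, and $\Sigma_{Q}$ consists of
the four closed quadrants together with their faces, which is exactly the common refinement of the
two generalized fans; here $X_{Q}=\mathbb{P}^{1}\times\mathbb{P}^{1}$.
\end{exam}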
\paragraph{Real-positive locus.}
In the sequel, we want to integrate holomorphic forms over a compact $n$-cycle, which is naturally
associated to every complete toric variety. Let $\sigma\in\Sigma$ be a cone in the fan defining
$X_{\Sigma}$. The complex points of the corresponding affine toric variety are given by the
semigroup homomorphisms
\begin{equation*}
U_{\sigma}(\mathbb{C})=\Hom(\sigma^{\vee}\cap M,\mathbb{C}),
\end{equation*}
where $\mathbb{C}$ carries its multiplicative monoid structure. Restricting the image to
$\mathbb{R}^{+}=[0,\infty)$ gives the locus $U_{\sigma}(\mathbb{R}^{+})$. These glue
together to give the real positive locus $X_{\Sigma}(\mathbb{R}^{+})$. A toric morphism
$X_{\Sigma}\rightarrow X_{\tilde{\Sigma}}$ induces a map $X_{\Sigma}(\mathbb{R}^{+})\rightarrow
X_{\tilde{\Sigma}}(\mathbb{R}^{+})$.
\begin{exam}
Suppose $X_{\Sigma}$ is a projective toric variety associated to the polytope $P$, such that the
divisor $D_{P}$ is very ample. Its sections
\begin{equation*}
t^{m_{i}}\in\Gamma(X_{\Sigma},\mathcal{O}_{X_{P}}(D_{P})),\quad m_{i}\in P\cap M
\end{equation*}
furnish a projective embedding
\begin{equation*}
X_{\Sigma}\rightarrow \mathbb{P}^{s},\quad x\mapsto [t^{m_{0}}(x):\ldots:t^{m_{s}}(x)].
\end{equation*}
The (algebraic) moment map is defined as
\begin{align*}
f:X_{\Sigma}&\rightarrow M_{\mathbb{R}}\\
f(x) &= \frac{\sum_{m\in P\cap M}|t^{m}(x)|m}{\sum_{m\in P\cap M}|t^{m}(x)|}.
\end{align*}
By \cite[Thm. 12.2.5]{CLS}, this induces a homeomorphism
\begin{equation*}
f:X_{\Sigma}(\mathbb{R}^{+})\tilde{\longrightarrow} P,
\end{equation*}
which identifies a face $Q\subseteq P$ with $V(Q)\cap X_{\Sigma}(\mathbb{R}^{+})$.
\end{exam}
\paragraph{Star subdivision of fans.}
\label{sec:TV:blowup}
There is a standard construction to refine a given fan $\Sigma$. Let $\nu\in |\Sigma|\cap N$ be a
primitive element, i.e. such that $\nu$ is the lattice generator of $\pos(\nu)$. For
$\sigma\in\Sigma$ with $\nu\in\sigma$ let
\begin{equation*}
\Sigma_{\sigma}(\nu) = \{\pos(\tau,\nu)\ \rvert\ \tau\preceq \sigma,\ \nu\notin \tau\}.
\end{equation*}
The \emph{star subdivision} of $\Sigma$ with respect to $\nu$ is the fan
\begin{equation*}
\Sigma^{*}(\nu) = \{\sigma\in\Sigma \ \rvert\ \nu \notin\sigma\}\cup\bigcup_{\nu\in\sigma}\Sigma_{\sigma}(\nu).
\end{equation*}
The identity map $N\rightarrow N$ is compatible with the fans $(\Sigma^{*}(\nu),\Sigma)$ and induces
a toric morphism
\begin{equation*}
\pi:X_{\Sigma^{*}(\nu)}\rightarrow X_{\Sigma},
\end{equation*}
which is proper and birational.
We are interested in two special cases of this construction. First let $X_{\Sigma}$ be a smooth
toric variety associated to the fan $\Sigma$. From Prop. \ref{prop:properties-fan-variety} we know
that every cone $\sigma\in \Sigma$ is smooth, i.e. can be generated by part of a $\mathbb{Z}$-basis
of $N$. For a cone $\tau\in\Sigma$, the closure $V(\tau)=\overline O(\tau)$ is a smooth toric
subvariety. Let $\Sigma_{\tau}$ be the star subdivision of $\Sigma$ with respect to the vector
\begin{equation*}
\nu_{\tau} = \sum_{\rho\in\tau(1)}u_{\rho}.
\end{equation*}
\begin{prop}[{\cite[Prop. 1.26]{Oda88}}]
The map $\pi:X_{\Sigma_{\tau}}\rightarrow X_{\Sigma}$ is the blow-up of $X_{\Sigma}$ with center
$V(\tau)$.
\end{prop}
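The classical example is the blow-up of the plane at the origin.
\begin{exam}
Let $\Sigma$ be the fan of faces of $\sigma=\pos(e_{1},e_{2})$, so $X_{\Sigma}=\mathbb{C}^{2}$, and
take $\tau=\sigma$, so that $V(\tau)=\{0\}$ is the torus fixed point. Then
$\nu_{\tau}=e_{1}+e_{2}$ and the star subdivision $\Sigma_{\tau}$ has maximal cones
$\pos(e_{1},e_{1}+e_{2})$ and $\pos(e_{1}+e_{2},e_{2})$. The resulting morphism
$X_{\Sigma_{\tau}}\rightarrow \mathbb{C}^{2}$ is the blow-up at the origin, with exceptional
divisor $D_{\nu_{\tau}}\cong\mathbb{P}^{1}$.
\end{exam}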
Now let $\Sigma$ be a not necessarily simplicial fan and $\nu_{\rho}$ the ray generator of a ray
$\rho\in\Sigma(1)$. A non-simplicial cone $\sigma\in\Sigma$ containing $\rho$ gets subdivided in
$\Sigma^{*}(\nu_{\rho})$. Iterating this construction gives the following:
\begin{thm}[{\cite[Prop. 11.1.7]{CLS}}]\label{thm:simplicial-refinement}
Let $\Sigma$ be a non-simplicial fan. Then there is a fan $\Sigma'$, obtained from $\Sigma$ by a
series of star subdivisions in rays $\rho_{1},\ldots,\rho_{s}\in\Sigma(1)$, such that $\Sigma'$ is
simplicial and $\Sigma'(1)=\Sigma(1)$.
\end{thm}
\paragraph{Toric wonderful models.}
\label{sec:TV:iterblowup}
In this section, we want to describe certain compactifications of the torus $T^{n-1}$ given by
iteratively blowing up coordinate subspaces in the projective compactification
$T^{n-1}\hookrightarrow P^{n-1}$. These are special cases of the wonderful model compactifications
of \cite{De_Concini_1995}.
Let $E$ be a finite set with $n$ elements and $P^{E}$ the projective space of dimension $n-1$,
where we label the homogeneous coordinates by elements of $E$. Let
\begin{equation*}
N_{E}=\mathbb{Z}^{E}/\mathbb{Z}\left(\sum_{i\in E}e^{i}\right)\cong \mathbb{Z}^{n-1}
\end{equation*}
and
\begin{equation*}
M_{E} = \left\{ m\in \mathbb{Z}^{E}| \sum_{i\in E}m_{i}=0 \right\}
\end{equation*}
the dual lattice. The fan $\Sigma_{E}$ of $P^{E}$ is given by the cones
\begin{equation*}
\tau_{I} := \pos([e^{i}]\ \rvert\ i\in I)
\end{equation*}
for all $I\subsetneq E$. Every such proper subset $I\subsetneq E$ then gives the linear subspace
\begin{equation*}
L_{I} = \{[\alpha_{j}]\ \rvert\ \alpha_{i}=0 \text{ for } i\in I\}\cong P^{I^{c}},
\end{equation*}
which is the orbit closure associated to the cone $\tau_{I}$.
Consider a set of subsets $B\subseteq 2^{E}$ satisfying the following conditions.
\begin{enumerate}
\item $E\notin B$
\item $\{i\}\notin B$ for all $i\in E$.
\item $I_{1},I_{2}\in B, I_{1}\cap I_{2}\neq \emptyset \Rightarrow I_{1}\cup I_{2}\in B$.
\end{enumerate}
The iterated blow-up
\begin{equation*}
\pi_{B}:P^{B}\rightarrow P^{E}
\end{equation*}
is defined by inductively blowing up the elements of
\begin{equation*}
\mathcal{L}_{B} = \{L_{I}\ \rvert\ I\in B\},
\end{equation*}
in order of increasing dimension. More precisely, let $B=\{I_{1},\ldots,I_{m}\}$ be linearly ordered
such that $I_{j}\subseteq I_{k}$ implies $j\ge k$. We then define the sequence of blow-ups by
$P_{0}=P^{E}$ and $P_{k}=Bl_{\tilde L_{I_{k}}}P_{k-1}$, where $\tilde L_{I_{k}}$ is the strict transform
of $L_{I_{k}}$ in $P_{k-1}$. The results of the last section show that this is again a smooth projective
toric variety. Let $\Sigma_{k}$ be the fan of $P_{k}$. Then we have that $\Sigma_{k} =
St_{\tau_{k}}\Sigma_{k-1}$ is the star subdivision with respect to the cone
\begin{equation*}
\tau_{k} = \pos\{[e^{i}]\ \rvert\ i\in I_{k}\}.
\end{equation*}
Blowing up in order of increasing dimension ensures that this is well-defined, i.e. that $\tau_{k}$
is indeed a cone of $\Sigma_{k-1}$. The strict transform of $L_{I_{k}}$ in $P_{k-1}$ is just the
orbit closure $V(\tau_{k})$ of $\tau_{k}\in\Sigma_{k-1}$. Let $P^{B}=P_{m}$ be the last blowup and
$\Sigma_{B}=\Sigma_{m}$ its fan.
To fully describe the fan, we will use the combinatorial approach to wonderful models developed in
\cite{Feichtner_2004}. Note that any fan $(\Sigma,\preceq)$ with its face relation is a meet
semi-lattice, i.e. every collection $\sigma_{1},\ldots,\sigma_{k}\in \Sigma$ has the greatest
lower bound
\begin{equation*}
\bigwedge_{i} \sigma_{i} = \bigcap_{i} \sigma_{i}\in\Sigma.
\end{equation*}
The minimal element of $\Sigma$ is the trivial cone $\{0\}$. For $\Sigma=\Sigma_{E}$, there is an
obvious poset isomorphism
\begin{equation*}
(\Sigma_{E},\preceq) \cong (2^{E}\backslash\{E\},\subseteq),
\end{equation*}
identifying a subset $I\subsetneq E$ with the cone $\tau_{I}$.
Now let $(\mathcal{L},\preceq)$ be any finite meet-semilattice with least element $\hat 0$. For a
subset $\mathcal{G}\subseteq\mathcal{L}$, and $X\in\mathcal{L}$, let
\begin{equation*}
\mathcal{G}^{\preceq X} := \{G\in\mathcal{G}|G\preceq X\}
\end{equation*}
and
\begin{equation*}
[\hat 0,X] = \{Y\in\mathcal{L}|\hat 0\preceq Y\preceq X\}.
\end{equation*}
\begin{defin}
Let $(\mathcal{L},\preceq)$ be a finite meet-semilattice. A subset
$\mathcal{G}\subseteq\mathcal{L}\backslash\{\hat 0\}$ is called a \emph{building set}, if the
following holds for all $X\in\mathcal{L}$: Let
\begin{equation*}
\max \mathcal{G}^{\preceq X}=\{G_{1},\ldots,G_{k}\}
\end{equation*}
be the maximal elements of $\mathcal{G}^{\preceq X}$. Then there is an order isomorphism
\begin{equation*} [\hat 0, X] \cong \prod_{i=1}^{k}[\hat 0, G_{i}].
\end{equation*}
\end{defin}
\begin{exam}\label{exam:building-set}
Suppose $\mathcal{L}=2^{E}\backslash \{E\}$ with the subset relation. Then a subset
$\mathcal{G}\subseteq \mathcal{L}\backslash\{\emptyset\}$ is a building set if and only if for all $I\subsetneq E$, the
maximal elements
\begin{equation*}
\max \mathcal{G}^{\subseteq I}=\{G_{1},\ldots,G_{k}\}
\end{equation*}
form a partition $I=\coprod_{i=1}^{k}G_{i}$. It is easy to check that this is equivalent to the
condition that $\mathcal{G}$ contains all singleton subsets and for all $I_{1},I_{2}\in
\mathcal{G}$:
\begin{equation*}
I_{1}\cap I_{2}\neq \emptyset \Rightarrow I_{1}\cup I_{2}\in \mathcal{G} \text{ or } I_{1}\cup I_{2} = E.
\end{equation*}
The set $\mathcal{\tilde G} = \mathcal{G}\cup \{E\}$ is then a building set in $2^{E}$.
\end{exam}
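To make the partition criterion of the example concrete, here is a small brute-force check in Python (an illustrative sketch of our own; the function names are not from the literature):

```python
from itertools import chain, combinations

def subsets(E):
    """All subsets of E as frozensets."""
    E = list(E)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))]

def is_building_set(G, E):
    """Partition criterion: for every nonempty proper subset I of E,
    the maximal elements of {G' in G : G' subset of I} partition I."""
    E = frozenset(E)
    G = {frozenset(g) for g in G}
    for I in subsets(E):
        if not I or I == E:
            continue
        below = [g for g in G if g <= I]
        maximal = [g for g in below if not any(g < h for h in below)]
        covered = frozenset().union(*maximal)
        # the maximal elements must be pairwise disjoint and cover I
        if covered != I or sum(len(g) for g in maximal) != len(I):
            return False
    return True
```

For $E=\{1,2,3\}$, the family $\{\{1\},\{2\},\{3\},\{1,2\}\}$ passes, while dropping the singleton $\{3\}$ fails the criterion at $I=\{3\}$.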
To describe the face structure of $\Sigma_{B}$, we will also need the notion of nested sets.
\begin{defin}
Let $\mathcal{G}\subseteq \mathcal{L}\backslash \hat 0$ be a building set in a finite
meet-semilattice. A subset $\mathcal{N}\subseteq \mathcal{G}$ is called nested if for all pairwise
non-comparable elements $N_{1},\ldots,N_{k}\in \mathcal{N}$ with $k\ge 2$, the join
$\bigvee_{i=1}^{k}N_{i}\in\mathcal{L}$ exists in $\mathcal{L}$ but is not in $\mathcal{G}$.
\end{defin}
\begin{exam}\label{exam:nested-sets}
Suppose $\mathcal{G}\subset 2^{E}\backslash\{E\}$ is a building set. Then
$\mathcal{I}\subset \mathcal{G}$ is a nested set if and only if:
\begin{enumerate}
\item For all $I_{1},I_{2}\in\mathcal{I}$, either $I_{1}\cap I_{2}=\emptyset$ or $I_{1}\subseteq I_{2}$ or
$I_{2}\subseteq I_{1}$.
\item If $I_{1},\ldots,I_{k}\in \mathcal{I}$ are pairwise disjoint and $k\ge 2$, then
\begin{equation*}
\bigcup_{j=1}^{k} I_{j} \notin \mathcal{G} \cup \{E\}.
\end{equation*}
\end{enumerate}
It follows from \cite[Prop. 2.8]{Feichtner_2004} that all maximal nested sets are generated by the
following construction: Let $E=\{i_{1}<\ldots<i_{n}\}$ be a total ordering of $E$. Set
$J_{k}=\{i_{1},\ldots,i_{k}\}$ and $\mathcal{I}_{k}=\max \mathcal{G}^{\subseteq J_{k}}$. The union
$\mathcal{I}=\bigcup_{k=1}^{n} \mathcal{I}_{k}$ is then a maximal nested set.
\end{exam}
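The two conditions of this example can likewise be verified mechanically; the following Python sketch (our own illustration, not from the cited references) checks them by enumerating antichains of pairwise disjoint members:

```python
from itertools import combinations

def is_nested(Nset, G, E):
    """Check the two conditions of the example: members are pairwise
    nested or disjoint, and the union of any >= 2 pairwise disjoint
    members lies neither in G nor equals E."""
    E = frozenset(E)
    G = {frozenset(g) for g in G}
    Nset = [frozenset(s) for s in Nset]
    # condition 1: pairwise nested or disjoint
    for A, B in combinations(Nset, 2):
        if A & B and not (A <= B or B <= A):
            return False
    # condition 2: unions of pairwise disjoint families of size >= 2
    for r in range(2, len(Nset) + 1):
        for fam in combinations(Nset, r):
            if all(not (X & Y) for X, Y in combinations(fam, 2)):
                U = frozenset().union(*fam)
                if U in G or U == E:
                    return False
    return True
```

With the building set $\{\{1\},\{2\},\{3\},\{1,2\}\}$ on $E=\{1,2,3\}$, the family $\{\{1\},\{1,2\}\}$ is nested, while $\{\{1\},\{2\}\}$ is not, since $\{1\}\cup\{2\}=\{1,2\}$ lies in the building set.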
The nested sets of a building set $\mathcal{G}$ are partially ordered by inclusion. We denote the
corresponding poset by $\mathcal{N}(\mathcal{G})$. We can now state the results of \cite[Theorem
4.10]{Feichtner_2004}.
\begin{thm}
Let $\Sigma$ be the fan of a toric variety $X_{\Sigma}$ and $\mathcal{G}\subseteq
(\Sigma,\preceq)$ be a building set in its face semilattice. Suppose
$\mathcal{G}=\{\tau_{1},\ldots,\tau_{k}\}$ is linearly ordered such that $\tau_{i}\preceq\tau_{j}$
implies $i\ge j$. Let $\Sigma_{\mathcal{G}}$ be the fan obtained by star subdividing
$\Sigma$ along the $\tau_{i}$ in increasing order of $i$. Then there is an isomorphism of semilattices
\begin{equation*}
(\Sigma_{\mathcal{G}},\preceq) \cong (\mathcal{N}(\mathcal{G}),\subseteq),
\end{equation*}
identifying a nested set $\mathcal{N}=\{\tau_{i_{1}},\ldots,\tau_{i_{s}}\}\subset \mathcal{G}$ with the cone
\begin{equation*}
\tau_{\mathcal{N}} = \pos(\nu_{\tau}\ \rvert\ \tau\in \mathcal{N}),
\end{equation*}
where $\nu_{\tau}=\sum_{\rho\in\tau(1)}u_{\rho}$.
\end{thm}
To our original set $B\subset 2^{E}$, we associate the set
\begin{equation*}
\mathcal{G}_{B}=B\cup \{\{i\}\ \rvert\ i\in E \}.
\end{equation*}
This is a building set by Example \ref{exam:building-set}. To a subset $I\subseteq E$, we associate the
vector
\begin{equation*}
e^{I} = \sum_{i\in I}e^{i}.
\end{equation*}
Applying the above theorem to $\mathcal{G}_{B}$ then gives:
\begin{col}\label{col:blowup-nested-sets}
$P^{B}$ is a smooth, projective toric variety, independent of the chosen blow-up order. Its fan
$\Sigma_{B}$ consists of the cones
\begin{equation*}
\sigma_{\mathcal{I}} = \pos([e^{I}]\ \rvert\ I\in \mathcal{I}),
\end{equation*}
where $\mathcal{I}\subset \mathcal{G}_{B}$ ranges over the nested sets with respect to
$\mathcal{G}_{B}$.
\end{col}
In particular we have $\Sigma_{B}(1)\cong \mathcal{G}_{B}=B\cup \{\{i\}\ \rvert\ i\in E \}$. The map $P^{B}\rightarrow P^{E}$ fits into the
commutative diagram
\begin{center}\begin{tikzcd}
\mathbb{C}^{\Sigma_{B}(1)}\backslash Z_{\Sigma_{B}}\arrow{d}\arrow{r} & P^B \arrow{d} \\
\mathbb{C}^{E}\backslash \{0\} \arrow{r} & P^E
\end{tikzcd}\end{center}
The left vertical map is given on coordinates as
\begin{equation*}
\alpha_{i} = x_{i}\prod_{\substack{I\in B\\ i\in I}}x_{I}.
\end{equation*}
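As a sanity check of this coordinate formula, the following Python sketch (entirely illustrative; the names are ours) evaluates $\alpha_{i}=x_{i}\prod_{I\ni i}x_{I}$ for a small blow-up configuration:

```python
from math import prod

def blowdown(x_E, x_B):
    """Evaluate alpha_i = x_i * prod over I in B with i in I of x_I,
    i.e. the left vertical map of the commutative diagram above."""
    return {i: xi * prod(xI for I, xI in x_B.items() if i in I)
            for i, xi in x_E.items()}

# E = {1, 2, 3}, B = {{1, 2}}: blowing up the point L_{{1,2}} = [0:0:1]
alpha = blowdown({1: 2.0, 2: 3.0, 3: 5.0}, {frozenset({1, 2}): 7.0})
```

Here $\alpha_{1}=x_{1}x_{\{1,2\}}$, $\alpha_{2}=x_{2}x_{\{1,2\}}$, and $\alpha_{3}=x_{3}$, since no element of $B$ contains $3$.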
\paragraph{Generalized permutahedra.}
\label{sec:gener-perm}
The Feynman polytopes we consider later will turn out to be generalized permutahedra in the sense of
\cite{Postnikov_2009,aguiar17:hopf}, which have an especially nice structure.
Consider first the building set $G_{max}=2^{E}\backslash\{\emptyset,E\}$. Its fan $\Sigma_{G_{max}}$ is spanned
by the cones $\sigma=\pos([e^{I_{1}}],\ldots, [e^{I_{n-1}}])$ such that
\begin{equation*}
I_{0}=\emptyset\subsetneq I_{1}\subsetneq \ldots\subsetneq I_{n-1}\subsetneq I_{n}=E
\end{equation*}
is a complete flag of subsets. On the other hand, let $\pi_{E}$ be the convex hull of all points
\begin{equation*}
a_{\sigma} = \sum_{k=1}^{n}k\,e^{\sigma(k)},
\end{equation*}
where $\sigma$ runs over the bijections $\{1,\ldots,n\}\cong E$. The polytope $\pi_{E}$ is the
(regular) permutahedron of the finite set $E$. The following proposition is then well-known, see
e.g. \cite{Postnikov_2009}.
\begin{prop}
The normal fan $\Sigma_{\pi_{E}}$ of $\pi_{E}$ coincides with $\Sigma_{G_{max}}$.
\end{prop}
The facet structure of $\pi_{E}$ is very well understood. It is advocated in \cite{aguiar17:hopf} to
exploit this fact by expressing many questions in algebraic combinatorics in terms of deformations
of $\pi_{E}$:
\begin{defin}[{\cite{aguiar17:hopf}}] A lattice polytope $P\subset \mathbb{R}^{E}$ contained in an
affine hyperplane
\begin{equation*}
P \subset \{m\in \mathbb{R}^{E}\ \rvert\ \langle m, e^{E} \rangle = d_{P}\}
\end{equation*}
is a \emph{generalized permutahedron} if its normal fan is a coarsening of the fan
$\Sigma_{\pi_{E}}$.
\end{defin}
It will be convenient to have alternative characterizations of generalized permutahedra. Suppose
$z:2^{E}\rightarrow \mathbb{Z}\cup \{-\infty\}$ is a set function with $z(\emptyset) = 0$. To $z$ we
associate the base polyhedron
\begin{equation*}
P(z) = \{m\in \mathbb{R}^{E} \ \rvert\
\langle m, e^{E} \rangle = z(E),\ \langle m, e^{I} \rangle \ge z(I) \text{ for } I\subsetneq E \}.
\end{equation*}
We will call $z$ \emph{supermodular}, if
\begin{equation*}
z(I) + z(J) \le z(I\cap J) + z(I\cup J),
\end{equation*}
for all $I,J\in 2^{E}$.
\begin{remark}
It is more common in the literature to consider \emph{submodular} functions $\tilde
z:2^{E}\rightarrow \mathbb{R}\cup \{\infty\}$, which satisfy the opposite inequality:
\begin{equation*}
\tilde z(I) + \tilde z(J) \ge \tilde z(I\cap J) + \tilde z(I\cup J).
\end{equation*}
It is easy to show that $\tilde z$ is submodular if and only if its dual $\tilde z^{\#}$, defined
as $\tilde z^{\#}(I)=\tilde z(E)-\tilde z(E \backslash I)$, is supermodular. The translation
between the two conventions is usually straightforward.
\end{remark}
\begin{prop}
Let $P\subset \mathbb{R}^{E}$ be a lattice polytope. Then the following are equivalent:
\begin{enumerate}
\item $P$ is a generalized permutahedron.
\item Every edge of $P$ is parallel to a vector of the form $e^{i}-e^{j}$ for $i,j\in E$.
\item There is a supermodular function $z:2^{E}\rightarrow \mathbb{R}$, such that $P=P(z)$.
\end{enumerate}
\end{prop}
\begin{proof}
See \cite[Thm. 12.3]{aguiar17:hopf} and references therein.
\end{proof}
\begin{exam}\label{exam:matroid}
Let $M$ be a matroid on the set $E$ and $B(M)\subset 2^{E}$ its set of bases. We refer to
\cite{oxley2006matroid} for the theory of matroids. The matroid polytope of $M$ is
\begin{equation*}
P_{M} = \Conv(e^{I}\ \rvert\ I\in B(M)).
\end{equation*}
It is proven in \cite{Gelfand_1987} that every edge of $P_{M}$ is parallel to some $e^{i}-e^{j}$, hence
$P_{M}$ is a generalized permutahedron. The corresponding supermodular function is given by
\begin{equation*}
z(I) = r_{M}(E)-r_{M}(E \backslash I) = r^{\#}(I),
\end{equation*}
where $r_{M}$ is the rank function of the matroid.
\end{exam}
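As a concrete sanity check (our own illustration), one can verify by brute force that the dual rank function $r^{\#}$ of the uniform matroid $U_{2,3}$ is supermodular, while the rank function itself is not:

```python
from itertools import chain, combinations

def subsets(E):
    """All subsets of E as frozensets."""
    E = list(E)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))]

def is_supermodular(z, E):
    """z(I) + z(J) <= z(I n J) + z(I u J) for all I, J."""
    S = subsets(E)
    return all(z(I) + z(J) <= z(I & J) + z(I | J) for I in S for J in S)

# uniform matroid U_{2,3}: rank r(I) = min(|I|, 2)
E = frozenset({0, 1, 2})
r = lambda I: min(len(I), 2)
z = lambda I: r(E) - r(E - I)   # the dual r^# from the example
```

Here $z(I)=\max(|I|-1,0)$, which the check confirms to be supermodular; $r$ itself is submodular and fails the supermodularity test.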
\begin{exam}
Let $\mathcal{G}\subseteq 2^{E}\backslash\{E\}$ be a building set and $\mathcal{\tilde
G}=\mathcal{G}\cup \{E\}$. For $I\in \mathcal{\tilde G}$, let $\Delta_{I}=\Conv(e^{i} \ \rvert\ i\in
I)$ be the simplex on $I$ and let $P_{\mathcal{G}}$ be the Minkowski sum
\begin{equation*}
P_{\mathcal{G}} = \sum_{I\in \mathcal{\tilde G}}\Delta_{I}.
\end{equation*}
It is shown in \cite{math/0609184} that its normal fan is $\Sigma_{\mathcal{G}}$ and that
$P_{\mathcal{G}}$ is the base polyhedron of the supermodular function
\begin{equation*}
z_{\mathcal{G}}(J) = |\{I\in \mathcal{\tilde G} \ \rvert\ I\subseteq J\}|.
\end{equation*}
Hence $P_{\mathcal{G}}$ is a generalized permutahedron and $\Sigma_{\mathcal{G}}$ is a coarsening
of $\Sigma_{\pi_{E}}$.
Suppose $\mathcal{G}_{1}\subset \mathcal{G}_{2}$ are two building sets in $2^{E}\backslash\{E\}$.
It follows from the above description and Prop. \ref{prop:common-refinement} that
$\Sigma_{\mathcal{G}_{2}}$ is a refinement of $\Sigma_{\mathcal{G}_{1}}$.
\end{exam}
For $I\subsetneq E$ define the restriction $z\vert_{I}$ and contraction $z/_{I}$ by
\begin{align*}
z\vert_{I}(J) &= z(J),\quad J\subseteq I,\\
z/_{I}(J) &= z(J\cup I)-z(I),\quad J\subseteq E \backslash I.
\end{align*}
It is easy to check that if $z$ is supermodular, then so are its restrictions and contractions. The
face $F_{e^{I}}P(z)$ can then be described as follows.
\begin{prop}[{\cite[Lemma 3.1]{MR1095782}}]\label{prop:supermodular-faces}
Let $P(z)$ be the generalized permutahedron defined by the supermodular function
$z:2^{E}\rightarrow \mathbb{R}$. The natural isomorphism $\mathbb{R}^{I}\oplus
\mathbb{R}^{I^{c}}\cong \mathbb{R}^{E}$ induces a bijection
\begin{equation*}
P(z\vert_{I})\times P(z/_{I}) \cong F_{e^{I}}P(z).
\end{equation*}
\end{prop}
\begin{exam}\label{exam:matroid-restr-contract}
If $z=r^{\#}$ is the dual of the rank function of a matroid $M$ on $E$, then $z\vert_{I}$ and
$z/_{I}$ correspond to the contraction $M/_{I^{c}}$ and restriction $M\vert_{I^{c}}$.
\end{exam}
Let
\begin{equation*}
\mathcal{I}: I_{0}=\emptyset\subsetneq I_{1}\subsetneq \ldots\subsetneq I_{n-1}\subsetneq I_{n}=E
\end{equation*}
be a maximal flag of $2^{E}$. The corresponding cone $\sigma_{\mathcal{I}}=\pos( e^{I} \ \rvert\
I\in\mathcal{I})$ is a maximal cone of $\Sigma_{\pi_{E}}$. Since $\Sigma_{\pi_{E}}$ is a refinement
of $\Sigma_{P(z)}$, any vector $w\in \Int(\sigma_{\mathcal{I}})$ defines a vertex $m_{\mathcal{I}}=F_{w}P(z)$.
\begin{prop}[{\cite[Corollary 3.17]{MR1095782}}]\label{prop:supermodular-vertex-generation}
The coordinates of the vertex $m_{\mathcal{I}}$ are given by
\begin{equation*}
(m_{\mathcal{I}})_{i_{k}}=z(I_{k})-z(I_{k-1}), \quad \text{where } \{i_{k}\}=I_{k}\backslash I_{k-1}.
\end{equation*}
\end{prop}
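The vertex formula can be tested directly: the following Python sketch (our own illustration) computes $m_{\mathcal{I}}$ along a maximal flag and checks that it lies in $P(z)$, for the supermodular function $z(I)=\max(|I|-1,0)$, the dual rank function of the uniform matroid $U_{2,3}$:

```python
from itertools import chain, combinations

def subsets(E):
    """All subsets of E as frozensets."""
    E = list(E)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))]

def flag_vertex(z, flag):
    """Vertex coordinates m_{i_k} = z(I_k) - z(I_{k-1}) along a maximal
    flag emptyset = I_0 < I_1 < ... < I_n = E."""
    m = {}
    for prev, cur in zip(flag, flag[1:]):
        (i,) = cur - prev          # the single element added at this step
        m[i] = z(cur) - z(prev)
    return m

def in_base_polyhedron(m, z, E):
    """m in P(z): equality on E, inequality >= z(I) on proper subsets."""
    val = lambda I: sum(m[i] for i in I)
    return val(E) == z(E) and all(val(I) >= z(I) for I in subsets(E) if I != E)

E = frozenset({0, 1, 2})
z = lambda I: max(len(I) - 1, 0)   # dual rank of U_{2,3}
flag = [frozenset(), frozenset({0}), frozenset({0, 1}), E]
m = flag_vertex(z, flag)
```

The resulting vertex is $(0,1,1)$, the indicator vector $e^{\{1,2\}}$ of a basis of $U_{2,3}$, consistent with Example \ref{exam:matroid}.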
Let us call a generalized permutahedron $P(z)$ \emph{irreducible}, if there is no decomposition $E =
I \coprod J$ into nonempty subsets such that $z = z\vert_{I}+z\vert_{J}$.
\begin{prop}[{\cite[Thm. 3.38]{MR1095782}}]\label{prop:supermodular-decomposition}
For each generalized permutahedron $P(z)$ there is a unique decomposition
$E=\coprod_{k=1}^{r}I_{k}$ such that the $P(z\vert_{I_{k}})$ are irreducible and
\begin{equation*}
z = \sum_{k=1}^{r}z\vert_{I_{k}}.
\end{equation*}
The polytope $P(z)$ is irreducible if and only if it has maximal dimension $|E|-1$.
\end{prop}
\begin{col}\label{col:supermodular-facets}
Suppose $P(z)$ is irreducible. A subset $I\subsetneq E$ defines a facet $F_{e^{I}}P(z)$ of $P(z)$
if and only if $P(z\vert_{I})$ and $P(z/_{I})$ are both irreducible.
\end{col}
Let us use the preceding results to construct a smooth refinement of $P(z)$. Consider the subset
system
\begin{equation*}
\mathcal{\tilde G}_{z} = \{I\subseteq E \ \rvert\ P(z\vert_{I}) \text{ is irreducible }\} \subseteq 2^{E}
\end{equation*}
\begin{prop}\label{prop:supermodular-wonderful-refinement}
$\mathcal{\tilde G}_{z}$ is a building set in $2^{E}$. If $P(z)$ is irreducible, then the fan $\Sigma_{\mathcal{G}_{z}}$
associated to the reduced building set $\mathcal{G}_{z}=\mathcal{\tilde G}_{z}
\backslash\{E\}$ is a smooth refinement of $\Sigma_{P(z)}$.
\end{prop}
\begin{proof}
The building set property is immediate from the unique decomposition of Prop.
\ref{prop:supermodular-decomposition}.
To prove that $\Sigma_{\mathcal{G}_{z}}$ is a refinement of $\Sigma_{P(z)}$, we must prove that
for each maximal nested set $\mathcal{I}\subset \mathcal{G}_{z}$, there is $m\in P(z)$ such that
$\langle m, e^{I} \rangle = z(I)$ for all $I\in\mathcal{I}$. By Example \ref{exam:nested-sets} we
can find a maximal chain
\begin{equation*}
J_{0}=\emptyset\subsetneq J_{1}\subsetneq \ldots\subsetneq J_{n-1}\subsetneq J_{n}=E
\end{equation*}
such that $\mathcal{I}=\bigcup \mathcal{I}_{k}$, where $\mathcal{I}_{k}=\max
\mathcal{G}_{z}^{\subseteq J_{k}}$. Let $m$ be the vertex of $P(z)$ defined by
$m_{i_{k}}=z(J_{k})-z(J_{k-1})$, where $\{i_{k}\}=J_{k}\backslash J_{k-1}$. Since $\mathcal{I}_{k}$ is the decomposition of $J_{k}$ into
irreducible components, we have
\begin{equation*}
\sum_{I\in\mathcal{I}_{k}} z(I) = z(J_{k}) = \langle m, e^{J_{k}} \rangle = \sum_{I\in\mathcal{I}_{k}} \langle m, e^{I} \rangle.
\end{equation*}
Since $m\in P(z)$, this equality is only possible if $\langle m, e^{I} \rangle=z(I)$ for all
$I\in\mathcal{I}_{k}$.
\end{proof}
\section{Multivariate Mellin transforms}
\label{sec:analyt-dimens-regul}
Let us apply the theory of toric varieties to the investigation of Mellin transforms of Laurent
polynomials. It was shown in \cite{Nilsson_2011,Berkesch_2014} that the convergence
properties of these transforms are controlled by the Newton polytopes of the rational functions. We
supply an alternative proof of their results by considering certain toric compactifications
associated to the Newton polytopes, which make the possible singularities apparent.
This gives a precise characterisation of the convergence domain.
For application to dimensional regularization, we will also review their construction of the
meromorphic extension.
In the last part of this section, we will show that the geometric sector decomposition strategy of
Kaneko and Ueda \cite{Kaneko_2010} is equivalent to the construction of these compactifications.
\paragraph{Mellin transforms.}
\label{sec:gener-mell-transf}
As in the previous section, let $N$ be a lattice of rank $n$, $M$ its dual lattice and $T_{N}$
the associated complex torus. Let
\begin{equation*}
f(z)=\sum_{m\in A}a_{m}t^{m}\in \mathcal{O}(T_{N})
\end{equation*}
be a Laurent polynomial on $T_{N}$, where $A\subset M$ is a finite subset such that
$a_{m}\neq 0$ for $m\in A$. Its \emph{Newton polytope} is the convex hull
\begin{equation*}
P(f) = \Conv(A)\subset M_{\mathbb{R}}.
\end{equation*}
Note that $P(f\cdot g)=P(f)+P(g)$.
Let $f_{1},\ldots,f_{k}$ be a collection of Laurent polynomials as above. Assume that all
non-vanishing coefficients of the $f_{i}$ are contained in an open, strongly convex polyhedral cone
$U\subset \mathbb{C}\cong\mathbb{R}^{2}$. Then the complex powers $f_{i}(t)^{c_{i}}$ are well-defined for every
$c_{i}\in \mathbb{C}$, $t\in T_{N}(\mathbb{R}^{+})$ and fixed choice of branch of $w\mapsto
w^{c_{i}}=e^{c_{i}\log(w)}$. These conditions imply that each $f_{i}$ is totally non-vanishing on
$T_{N}(\mathbb{R}^{+})$ in the sense of \cite{Nilsson_2011}.
Choose a $\mathbb{Z}$-basis of $N$, inducing isomorphisms $N\cong \mathbb{Z}^{n}$,
$M\cong \mathbb{Z}^{n}$ and $T_{N}\cong (\mathbb{C}^{*})^{n}$. Let
\begin{equation*}
\Log: T_{N} \rightarrow N_{\mathbb{C}}, \quad t\mapsto (\log(t_{1}),\ldots,\log(t_{n}))
\end{equation*}
be the componentwise logarithm. For $s\in M_{\mathbb{C}}\cong \mathbb{C}^{n}$, we define the
multivalued monomial
\begin{align*}
t^{s} = \prod_{i=1}^{n}t_{i}^{s_{i}} = e^{\langle s, \Log t \rangle}.
\end{align*}
These definitions are clearly independent of the choice of basis. Let us also denote by
\begin{equation*}
\frac{\mathrm{d} t}{t} = \frac{\mathrm{d} t_{1}}{t_{1}} \wedge \ldots \wedge \frac{\mathrm{d} t_{n}}{t_{n}}
\end{equation*}
the holomorphic $T_{N}$-invariant volume form, which is independent of the choices up to sign, i.e.
up to the choice of an orientation of $N$.
We are interested in the analytic properties of the multivariate Mellin transform
\begin{equation*}
\mathcal{M}(f_{i},s,c)=\int_{T_{N}(\mathbb{R^{+}})}t^{s}\prod_{i=1}^{k} f_{i}(t)^{-c_{i}}
\frac{\mathrm{d} t}{t}.
\end{equation*}
We will see that the convergence properties of the above integral are governed by the polytope
\begin{equation*}
P=P(f_{1})+\ldots+P(f_{k})=P(f_{1}\cdots f_{k}).
\end{equation*}
For a weight vector $u\in N_{\mathbb{R}}$ and Laurent polynomial $h=\sum_{m}h_{m}t^{m}$, let
\begin{equation*}
d_{u}(h) = \min_{m\in P(h)}\langle m, u \rangle.
\end{equation*}
If $u=u_{\rho}\in N$ is the lattice generator of a rational ray $\rho\subset
N_{\mathbb{R}}$, then we define $d_{\rho}(h)=d_{u_{\rho}}(h)$. For $c\in \mathbb{C}^{k}$ let us also
set
\begin{equation*}
d_{u}(c) = \sum_{i=1}^{k}c_{i}d_{u}(f_{i})
\end{equation*}
and $d_{\rho}(c)=d_{u_{\rho}}(c)$.
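Since $d_{u}(h)$ is the minimum of the linear form $\langle\cdot,u\rangle$ over a polytope, it is attained at a vertex, so it can be computed directly from the exponent set; a minimal Python sketch (the names are ours):

```python
def d_u(u, exponents):
    """d_u(h): minimum of <m, u> over the Newton polytope of h; the
    minimum of a linear form is attained at a vertex, so it suffices
    to minimize over the exponent set A."""
    return min(sum(mi * ui for mi, ui in zip(m, u)) for m in exponents)

def d_u_c(u, c, exponent_sets):
    """d_u(c) = sum_i c_i * d_u(f_i)."""
    return sum(ci * d_u(u, A) for ci, A in zip(c, exponent_sets))

# f_1 = 1 + t_1 + t_2 has exponent set A_1 = {(0,0), (1,0), (0,1)}
A1 = [(0, 0), (1, 0), (0, 1)]
```

For instance, $d_{(1,1)}(f_{1})=0$, while $d_{(-1,0)}(f_{1})=-1$.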
Let $\Sigma_{P}$ be the normal fan of the Newton polytope $P=P(f_{1}\cdots f_{k})$
and denote by $\Lambda(f_{i})\subseteq M_{\mathbb{C}}\times
\mathbb{C}^{k}$ the region of pairs $(s,c)\in M_{\mathbb{C}}\times \mathbb{C}^{k}$ satisfying
\begin{equation*}
\Real\langle s, u_{\rho} \rangle > \Real d_{\rho}(c)
\end{equation*}
for all $\rho\in\Sigma_{P}(1)$.
\begin{thm}\label{thm:mellin-convergence}
If the polytope $P$ is full-dimensional, then $\Lambda(f_{i})$ is nonempty and the Mellin
transform converges absolutely if and only if $(s,c)\in\Lambda(f_{i})$.
If instead $P$ is not full-dimensional, then the Mellin transform is not absolutely convergent
for any choice of $(s,c)\in M_{\mathbb{C}}\times \mathbb{C}^{k}$.
\end{thm}
\begin{remark}
For $r=(r_{1},\ldots,r_{k})\in (0,\infty)^{k}$, let
\begin{equation*}
P(r) = r_{1}P(f_{1})+\ldots+r_{k}P(f_{k})\subset M_{\mathbb{R}}
\end{equation*}
be the Minkowski sum of the scaled Newton polytopes. This polytope is full-dimensional if and only if
$P=P(1,\ldots,1)$ is full-dimensional. In that case the interior of $P(r)$ is non-empty and $P(r)$ has the
facet presentation
\begin{equation*}
P(r) = \bigcap_{\rho\in\Sigma_{P}(1)}\{\langle m, u_{\rho} \rangle\ge d_{\rho}(r)\}.
\end{equation*}
Hence the region $\Lambda(f_{i})$ contains the set
\begin{equation*}
\{(s,c)\in M_{\mathbb{C}}\times \mathbb{C}^{k}\ \rvert\ \Real(c)\in (0,\infty)^{k}, \Real(s)\in \Int(P(\Real(c)))\}.
\end{equation*}
This recovers the corresponding results of \cite{Nilsson_2011,Berkesch_2014}.
\end{remark}
The basic idea of the proof is to find a compactification $X_{\Sigma}$ of $T_{N}$ such that the
strict transforms $\overline{V(f_{i})}$ do not intersect the real locus
$X_{\Sigma}(\mathbb{R}^{+})$. The convergence of the integral then essentially reduces to the
absence of poles along the divisor $D_{\Sigma}=X_{\Sigma}\backslash T_{N}$.
\begin{prop}\label{prop:simplicial-refinement}
Suppose $f=\sum_{m\in A}a_{m}t^{m}$ is a Laurent polynomial on $T_{N}$, such that the coefficients
$a_{m}$ are contained in an open, strongly convex polyhedral cone $U\subset \mathbb{C}$. Let
$X_{\Sigma}$ be a complete, simplicial toric variety with torus $T_{N}$. Then the following are
equivalent:
\begin{enumerate}
\item The fan $\Sigma$ is a refinement of the (possibly degenerate) normal fan $\Sigma_{P(f)}$ of
the Newton polytope of $f$.
\item The closure of the zero set $\overline{ V(f) }$ of $f$ does not intersect the real positive
locus $X_{\Sigma}(\mathbb{R}^{+})$.
\end{enumerate}
\end{prop}
\begin{proof}[Proof of Prop. \ref{prop:simplicial-refinement}]
The variety $X_{\Sigma}$ is covered by the orbifold charts
$U_{\sigma}=\mathbb{C}^{\sigma(1)}//G_{\sigma}$, where
$\sigma=\pos(u_{1},\ldots,u_{n})\in\Sigma(n)$ is a maximal cone. The Laurent monomials are
expressed in the coordinates $x_{i}$ of $U_{\sigma}$ as
\begin{equation*}
t^{m} = \prod_{i=1}^{n}x_{i}^{\langle m,u_{i} \rangle} =: x^{m}.
\end{equation*}
Suppose $\sigma$ is contained in a maximal cone of $\Sigma_{P(f)}$. By Prop. \ref{prop:maximal-cones},
there is $m_{\sigma}\in P(f)$ with
\begin{equation*}
\langle m_{\sigma}, u_{i} \rangle = \min_{m\in P(f)}\langle m, u_{i} \rangle,
\end{equation*}
for all $i=1,\ldots,n$. The Laurent polynomial $f$ is then expressed in these coordinates as
\begin{align*}
f(x) &= \sum_{m\in A}a_{m}x^{m} = x^{m_{\sigma}}\left(a_{m_{\sigma}}+
\sum_{m\in A \backslash\{m_{\sigma}\}}a_{m}x^{m-m_{\sigma}} \right) \\
&=: x^{m_{\sigma}}f_{\sigma}(x).
\end{align*}
The polynomial $f_{\sigma}(x)$ is regular and non-vanishing on $U_{\sigma}(\mathbb{R}^{+})$, since all its coefficients lie in the strongly convex cone $U$. It
follows that
\begin{equation*}
\overline{V(f)}\cap U_{\sigma}(\mathbb{R}^{+}) = V(f_{\sigma})\cap U_{\sigma}(\mathbb{R}^{+})
=\emptyset.
\end{equation*}
Conversely, suppose $\overline{V(f)}\cap U_{\sigma}(\mathbb{R}^{+})$ is empty. The Zariski closure
$\overline{V(f)}\subset U_{\sigma}$ is described by a polynomial $\tilde f\in
\mathbb{C}[x_{1},\ldots,x_{n}]$ such that $f(x)=x^{\tilde m}\tilde f(x)$, i.e. $\tilde f$ has the form
\begin{equation*}
\tilde f(x) = \sum_{m\in A}a_{m}x^{m-\tilde m}.
\end{equation*}
Since $\tilde f$ is regular on $\mathbb{C}^{n}$, we must have $\langle m-\tilde m, u_{i}
\rangle\ge 0$ for all $m\in A$ and $i=1,\ldots,n$. The intersection $V(\tilde f)\cap
U_{\sigma}(\mathbb{R}^{+})$ can only be empty if there is at least one $m\in A$, such that
$\langle m-\tilde m, u_{i} \rangle=0$ for all $i$. But this would imply that $\langle m, u_{i}
\rangle$ is minimal for all $i=1,\ldots,n$ and Prop. \ref{prop:maximal-cones} shows that $\sigma$
is contained in a maximal cone of $\Sigma_{P(f)}$.
Since the open sets $U_{\sigma}$ cover $X_{\Sigma}$, we have shown that
\begin{equation*}
\overline{V(f)}\cap
X_{\Sigma}(\mathbb{R}^{+})=\emptyset
\end{equation*}
if and only if every maximal cone of $\Sigma$ is contained in a cone of $\Sigma_{P(f)}$, which means
that $\Sigma$ is a refinement of $\Sigma_{P(f)}$.
\end{proof}
By Prop. \ref{prop:common-refinement}, the normal fan $\Sigma_{P}$ of $P$ is the coarsest common
refinement of the normal fans of the Newton polytopes $P(f_{i})$. Suppose $\Sigma$ is a simplicial
refinement of $\Sigma_{P}$. The above proposition shows that $\overline{V(f_{i})}\cap
X_{\Sigma}(\mathbb{R}^{+})=\emptyset$ for all $i$. By (a straightforward generalization of) Prop.
\ref{prop:differential-form} we can express the Mellin transform as
\begin{equation*}
\mathcal{M}(f_{i},s,c) = \int_{X_{\Sigma}(\mathbb{R}^{+})}
\prod_{\rho\in\Sigma(1)}x_{\rho}^{\langle s, u_{\rho} \rangle -1}\prod_{i=1}^{k}F_{i}(x)^{-c_{i}} \Omega_{\Sigma},
\end{equation*}
where $F_{i}(x)$ is the homogenization of $f_{i}$.
\begin{remark}
The above integral should be understood in the orbifold sense. More precisely, let
$U_{\sigma}=\mathbb{C}^{n}//G_{\sigma}$ be an orbifold chart and $\alpha$ a compactly supported,
$G_{\sigma}$-invariant $n$-form on $\mathbb{C}^{n}$, absolutely integrable on $(\mathbb{R}^{+})^{n}$. We
define the integral over $U_{\sigma}(\mathbb{R}^{+})$ as
\begin{equation*}
\int_{U_{\sigma}(\mathbb{R}^{+})}\alpha = \frac{1}{|G_{\sigma}|}\int_{(\mathbb{R}^{+})^{n}}\alpha.
\end{equation*}
For a general analytic section $\alpha$ of $\omega_{\Sigma}$ which is locally integrable on
$X_{\Sigma}(\mathbb{R}^{+})$, we define
\begin{equation*}
\int_{X_{\Sigma}(\mathbb{R}^{+})}\alpha = \sum_{\sigma\in\Sigma(n)}\int_{U_{\sigma}(\mathbb{R}^{+})}\rho_{\sigma}\alpha,
\end{equation*}
where $(\rho_{\sigma})$ is a smooth partition of unity subordinate to the cover
$(U_{\sigma})_{\sigma\in\Sigma(n)}$. See \cite[Section 2.1]{Adem_2007} for further details.
\end{remark}
\begin{prop}
Let $\Sigma$ be a simplicial refinement of $\Sigma_{P}$. Then the above integral converges
absolutely if and only if
\begin{equation*}
\Real\langle s, u_{\rho} \rangle > \Real d_{\rho}(c),
\end{equation*}
for all $\rho\in\Sigma(1)$.
\end{prop}
\begin{proof}
The fan $\Sigma$ is complete since it is a refinement of the normal fan of a lattice polytope.
Hence the integration domain is compact and it is enough to show that the integrand is locally
integrable. Let
\begin{equation*}
\sigma=\pos(u_{1},\ldots,u_{n})\in\Sigma(n)
\end{equation*}
be a maximal cone. The proof of the previous proposition shows that
\begin{equation*}
F_{i}|_{U_{\sigma}} = \prod_{j=1}^{n}x_{j}^{\langle m_{\sigma,i}, u_{j} \rangle}f_{\sigma,i},
\end{equation*}
where $f_{\sigma,i}$ does not vanish on $U_{\sigma}(\mathbb{R}^{+})$ and $m_{\sigma,i}$ satisfies
\begin{equation*}
\langle m_{\sigma,i}, u_{j} \rangle = d_{u_{j}}(f_{i}).
\end{equation*}
The integrand then has the local expression
\begin{align*}
\prod_{j=1}^{n}x_{j}^{\langle s, u_{j} \rangle -1}\prod_{i=1}^{k}F_{i}(x)^{-c_{i}}\Big\vert_{U_{\sigma}}
&= \prod_{j=1}^{n}x_{j}^{\langle s, u_{j} \rangle-d_{u_{j}}(c) -1}\prod_{i=1}^{k}f_{\sigma,i}(x)^{-c_{i}}.
\end{align*}
This is locally integrable on $U_{\sigma}(\mathbb{R}^{+})\cong \mathbb{R}^{n}_{+}//G_{\sigma}$ if and only
if
\begin{equation*}
\Real(\langle s, u_{j} \rangle -d_{u_{j}}(c)) > 0
\end{equation*}
for all $\mathbb{R}^{+}u_{j}\in\sigma(1)$. Letting $\sigma\in\Sigma(n)$ vary over all
$n$-dimensional cones gives the result.
\end{proof}
\begin{proof}[Proof of Thm. \ref{thm:mellin-convergence}]
Suppose $P$ is full-dimensional. By Theorem \ref{thm:simplicial-refinement}, we can construct a
simplicial refinement $\Sigma$ of $\Sigma_{P}$ with $\Sigma(1)=\Sigma_{P}(1)$. The previous
proposition shows that the integral converges if and only if $(s,c)$ satisfies
\begin{equation*}
\Real\langle s, u_{\rho} \rangle > \Real d_{\rho}(c),
\end{equation*}
for all $\rho\in\Sigma(1)=\Sigma_{P}(1)$.
Now suppose $P$ is not full-dimensional, i.e. it is contained in a hyperplane
\begin{equation*}
P \subseteq \{m\in M_{\mathbb{R}}\ \rvert\ \langle m, u \rangle = d\}
\end{equation*}
for some $u\in N\backslash\{0\}$ and $d\in\mathbb{R}$. This is only possible if each $P(f_{i})$ is contained in the hyperplane
\begin{equation*}
\{m\in M_{\mathbb{R}} \ \rvert\ \langle m, u\rangle=d_{u}(f_{i})\}.
\end{equation*}
Every simplicial refinement $\Sigma$ of $\Sigma_{P}$ must contain the cones
$\rho^{\pm}=\mathbb{R}^{\pm}u$. The inequalities
\begin{equation*}
\Real\langle s, u \rangle > \Real d_{u}(c) \quad\text{and}\quad \Real\langle s,-u \rangle > \Real d_{-u}(c)
\end{equation*}
corresponding to $\rho^{+}$ and $\rho^{-}$ are not both satisfiable, since
\begin{equation*}
d_{u}(c) = \sum_{i=1}^{k}c_{i}d_{u}(f_{i}) = -d_{-u}(c).
\end{equation*}
Thus the integral does not converge for any choice of $(s,c)\in M_{\mathbb{C}}\times
\mathbb{C}^{k}$.
\end{proof}
\paragraph{Analytic continuations.}
Suppose $g=\sum_{m\in B}b_{m}t^{m}$ is another Laurent polynomial with Newton polytope $P(g)$.
Theorem \ref{thm:mellin-convergence} implies that the integral
\begin{equation*}
\mathcal{M}(f_{i},g,s,c) = \int_{T_{N}(\mathbb{R^{+}})}t^{s}g(t)\prod_{i=1}^{k}f_{i}(t)^{-c_{i}} \frac{\mathrm{d} t}{t}
\end{equation*}
is convergent if
\begin{equation*}
\Real \langle s+m, u_{\rho} \rangle > \Real(d_{\rho}(c)),
\end{equation*}
for all $\rho\in\Sigma_{P}(1)$ and $m\in B$. This is clearly equivalent to
\begin{equation*}
\Real \langle s, u_{\rho} \rangle > \Real(d_{\rho}(c))-d_{\rho}(g).
\end{equation*}
Let $\Lambda(f,g)\subset M_{\mathbb{C}}\times \mathbb{C}^{k}$ be the open set of parameters $(s,c)$
satisfying the above inequalities. The articles \cite{Nilsson_2011} and \cite{Berkesch_2014}
construct a meromorphic continuation of $\mathcal{M}(f_{i},g,s,c)$ to $M_{\mathbb{C}}\times
\mathbb{C}^{k}$. Let us briefly sketch their argument.
For a ray $\rho\in\Sigma_{P}(1)$ with lattice generator $u_{\rho}\in N$, let
\begin{equation*}
\mathbb{C}^{*}\rightarrow T_{N},\quad \lambda \mapsto \lambda_{\rho}:= u_{\rho} \otimes \lambda
\end{equation*}
be the one-parameter subgroup defined by $u_{\rho}$. Composing with the left action of $T_{N}$ and
restricting to $(0,\infty)\subset \mathbb{C}^{*}$ gives the action
\begin{equation*}
(0,\infty)\times T_{N} \rightarrow T_{N},\quad (\lambda,t)\mapsto \lambda_{\rho}\cdot t.
\end{equation*}
Let $h(t)=\sum_{m\in C}h_{m}t^{m}$ be an arbitrary Laurent polynomial. Set
\begin{equation*}
h_{\rho}(t) = \frac{\mathrm{d}}{\mathrm{d}\lambda}\left( \frac{h(\lambda_{\rho}\cdot t)}{\lambda^{d_{\rho}(h)}} \right){\bigg\rvert}_{\lambda=1}.
\end{equation*}
For monomials $m\in F_{u_{\rho}}P(h)$ we have $(\lambda_{\rho}\cdot t)^{m}=\lambda^{d_{\rho}(h)}t^{m}$,
so these terms are annihilated by the derivative, which implies that
\begin{equation*}
d_{\rho}(h_{\rho}) \ge d_{\rho}(h)+1.
\end{equation*}
The differential form $\frac{\mathrm{d} t}{t}$ and integration domain $T_{N}(\mathbb{R^{+}})$ are invariant
under the action of $\lambda_{\rho}$. Hence we have
\begin{align*}
\mathcal{M}(f_{i},g,s,c) &= \int_{T_{N}(\mathbb{R^{+}})}(\lambda_{\rho}\cdot t)^{s}g(\lambda_{\rho}\cdot t)\prod_{i=1}^{k}f_{i}(\lambda_{\rho}\cdot t)^{-c_{i}} \frac{\mathrm{d} t}{t} \\
&= \int_{T_{N}(\mathbb{R^{+}})}\lambda^{\langle s, u_{\rho}\rangle -d_{\rho}(c)+d_{\rho}(g)}
t^{s}\frac{g(\lambda_{\rho}\cdot t)}{\lambda^{d_{\rho}(g)}}
\prod_{i=1}^{k}\left(\frac{f_{i}(\lambda_{\rho}\cdot t)}
{\lambda^{d_{i,\rho}}}\right)^{-c_{i}} \frac{\mathrm{d} t}{t}.
\end{align*}
Differentiating with respect to $\lambda$ and setting $\lambda=1$ gives
\begin{equation*}
\mathcal{M}(f_{i},g,s,c) = \frac{-1}{\langle s, u_{\rho} \rangle - d_{\rho}(c)+d_{\rho}(g)}\left( I_{g}(s,c)-\sum_{i=1}^{k}c_{i}I_{i}(s,c) \right),
\end{equation*}
where
\begin{align*}
I_{g}(s,c) &= \mathcal{M}(f_{i},g_{\rho},s,c) \\
I_{i}(s,c) &= \mathcal{M}(f_{i},gf_{i,\rho},s,c+e_{i}).
\end{align*}
The integral $I_{g}(s,c)$ converges for $(s,c)$ satisfying
\begin{align*}
\Real(\langle s, u_{\tilde \rho} \rangle) > \Real(d_{\tilde \rho}(c)) - d_{\tilde \rho}(g_{ \rho}),
\end{align*}
for all $\tilde\rho\in\Sigma_{P}(1)$. Since $P(g_{\rho})\subset P(g)$ and $d_{\tilde\rho}(g_{\rho})\ge
d_{\tilde\rho}(g) +\delta^{\tilde \rho}_{\rho}$ by construction, we have
\begin{equation*}
\Real(d_{\tilde\rho}(c)) - d_{\tilde\rho}(g_{\rho})
\le \Real(d_{\tilde\rho}(c)) - d_{\tilde\rho}(g) - \delta^{\tilde \rho}_{\rho}.
\end{equation*}
Similarly, $I_{i}(s,c)$ converges iff
\begin{align*}
\Real(\langle s, u_{\tilde \rho} \rangle) > \Real(d_{\tilde \rho}(c+e_{i})) - d_{\tilde \rho}(g) - d_{\tilde \rho}(f_{i, \rho}).
\end{align*}
From $d_{\tilde \rho}(f_{i, \rho})\ge d_{i,\tilde \rho}+\delta^{\tilde\rho}_{\rho}$ we get the
inequality
\begin{align*}
\Real(d_{\tilde\rho}(c+e_{i})) - d_{\tilde\rho}(g) - d_{\tilde\rho}(f_{i, \rho})
&\le \Real(d_{\tilde \rho}(c)) - d_{\tilde\rho}(g) - \delta^{\tilde \rho}_{\rho}.
\end{align*}
Hence the integrals $I_{g}(s,c),I_{i}(s,c)$ converge if
\begin{equation*}
\langle \Real(s), u_{\tilde \rho} \rangle > \Real(d_{\tilde \rho}(c))- d_{\tilde \rho}(g) - \delta_{\rho}^{\tilde \rho}
\end{equation*}
for all $\tilde \rho\in\Sigma_{P}(1)$. Thus we have found an analytic continuation which improves
the convergence in the direction of $\rho$.
Iteratively differentiating with respect to the one-parameter subgroups $\lambda\mapsto\lambda_{\rho}$ as above
then gives the following result.
\begin{thm}[{\cite[Theorem 2.4]{Berkesch_2014}}]\label{thm:meromorphic-cont}
Suppose the polytope $P=P(f_{1})+\ldots+P(f_{k})$ is full-dimensional. Then for every
$(s_{0},c_{0})\in M_{\mathbb{C}}\times \mathbb{C}^{k}$ the Mellin transform
$\mathcal{M}(f_{i},g,s,c)$ can be expressed as a sum of the form
\begin{equation*}
\mathcal{M}(f_{i},g,s,c) = \sum_{\beta}L_{\beta}(s,c)\mathcal{M}(f_{i},g_{\beta},s,c+n_{\beta}),
\end{equation*}
for certain Laurent polynomials $g_{\beta}$ and $n_{\beta}\in \mathbb{Z}_{\ge 0}^{k}$, such that
the Mellin transforms on the right hand side are convergent in a neighbourhood of $(s_{0},c_{0})$.
The functions $L_{\beta}(s,c)$ are rational functions of $(s,c)$ with simple poles along divisors
of the form
\begin{equation*}
\{\langle s, u_{\rho} \rangle - d_{\rho}(c) + d_{\rho}(g) = -m\}
\end{equation*}
for $m\in \mathbb{N}$. Thus the Mellin transform can be expressed as
\begin{equation*}
\mathcal{M}(f_{i},g,s,c)=\Phi(s,c)
\prod_{\rho\in\Sigma_{P}(1)}\Gamma(\langle s, u_{\rho} \rangle -d_{\rho}(c)+d_{\rho}(g)),
\end{equation*}
where $\Phi(s,c)$ is entire on $M_{\mathbb{C}}\times \mathbb{C}^{k}$.
\end{thm}
\begin{proof}[Proof sketch]
Fix $(s_{0},c_{0})\in M_{\mathbb{C}}\times \mathbb{C}^{k}$ and let
\begin{equation*}
a_{\rho} = -\min(\lceil{ \Real(\langle s_{0}, u_{\rho} \rangle - d_{\rho}(c_{0})+d_{\rho}(g))}\rceil-1,0).
\end{equation*}
For each $\rho\in\Sigma_{P}(1)$ we partially integrate (at most) $a_{\rho}$-times in the direction
$u_{\rho}$. This expresses the Mellin transform as a sum
\begin{equation*}
\mathcal{M}(f_{i},g,s,c) = \sum_{\beta}L_{\beta}\mathcal{M}(f_{i},g_{\beta},s,c+n_{\beta}),
\end{equation*}
where $L_{\beta}$ is the product of the rational factors $(\langle s, u_{\rho} \rangle -
d_{\rho}(c) + d_{\rho}(g) +m)^{-1}$ introduced by the partial integrations. One can check that
these poles are all distinct, so that $L_{\beta}$ is a rational function with simple poles as
prescribed above.
The Mellin transforms on the right hand side are guaranteed to converge in a neighbourhood of
$(s_{0},c_{0})$, since each partial integration in the $\rho$ direction improves the convergence
in this direction by at least one unit. The Gamma functions $\Gamma(\langle s, u_{\rho} \rangle
-d_{\rho}(c)+d_{\rho}(g))$ have simple poles along every hypersurface of the form
\begin{equation*}
\{\langle s, u_{\rho} \rangle - d_{\rho}(c) + d_{\rho}(g)\in -\mathbb{N}\}.
\end{equation*}
Hence dividing the above expression for $\mathcal{M}(f_{i},g,s,c)$ by
\begin{equation*}
\prod_{\rho\in\Sigma_{P}(1)}\Gamma(\langle s, u_{\rho} \rangle -d_{\rho}(c)+d_{\rho}(g))
\end{equation*}
cancels the simple poles and gives an entire function $\Phi(s,c)$.
\end{proof}
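In the one-variable example $f(t)=1+t$, $g=1$, the ray $u_{\rho}=+1$ gives $g_{\rho}=0$ and $f_{\rho}=t$, so one step of the recursion specializes to $\mathcal{M}(f,1,s,c)=\frac{c}{s}\int_{0}^{\infty}t^{s}(1+t)^{-(c+1)}\mathrm{d} t$, whose right hand side converges on the larger strip $-1<\Real(s)<\Real(c)$ and exhibits the expected simple pole at $s=0$. A numerical sketch of this one-step continuation, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# One step of the continuation for f(t) = 1 + t, g = 1 along the ray u = +1:
#   M(s, c) = (c/s) * int_0^infty t^s (1 + t)^(-(c+1)) dt,
# valid on -1 < Re(s) < Re(c), one unit beyond the original strip.
def mellin_continued(s, c):
    integrand = lambda t: t**s * (1.0 + t)**(-(c + 1.0))
    val = quad(integrand, 0.0, 1.0)[0] + quad(integrand, 1.0, np.inf)[0]
    return (c / s) * val

c = 2.0
# Inside the original strip it reproduces Gamma(s) Gamma(c-s) / Gamma(c) ...
assert abs(mellin_continued(0.5, c) - np.pi / 2.0) < 1e-6
# ... and at s = -0.5, where the direct integral diverges, it returns the value
# of the meromorphic continuation, Gamma(-1/2) Gamma(5/2) / Gamma(2) = -3 pi / 2.
assert abs(mellin_continued(-0.5, c) - gamma(-0.5) * gamma(2.5) / gamma(2.0)) < 1e-6
```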
\paragraph{Sector decomposition.}
\label{sec:sect-decomp}
In applications to quantum field theory, one often wants to compute the analytic continuation of
$\mathcal{M}(f_{i},g,s,c)$ along a single parameter (the dimension of spacetime), i.e. one wants to
compute the restriction of $\mathcal{M}(f_{i},g,s,c)$ to a line $l\subset M_{\mathbb{C}}\times
\mathbb{C}^{k}$. The corresponding function $M(d)=\mathcal{M}(f_{i},g,s(d),c(d))$ is a meromorphic
function of $d\in \mathbb{C}$ and the main goal is to compute the coefficients of a Laurent
expansion around a fixed pole $d_{0}\in \mathbb{C}$.
For this purpose, Binoth and Heinrich \cite{Binoth_2000} introduced a recursive strategy, which
iteratively decomposes (a suitable compactification) of the integration domain
$T_{N}(\mathbb{R^{+}})$ into cubical sectors and performs blow-ups along coordinate subspaces until
the integral in every sector is of the form
\begin{equation*}
I_{\alpha} = \int_{[0,1]^{n}}x^{s_{\alpha}(d)}\prod_{i=1}^{k}\tilde f_{i}(x)^{-c_{i}(d)}\mathrm{d} x,
\end{equation*}
such that $\tilde f_{i}$ does not vanish along the coordinate subspaces $x_{j}=0$. The analytic
continuation in $d$ can then be computed by a simple Taylor expansion.
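The Taylor expansion rests on the usual endpoint subtraction: for $\tilde f$ regular at the origin, $\int_{0}^{1}x^{\varepsilon-1}\tilde f(x)\mathrm{d} x=\tilde f(0)/\varepsilon+\int_{0}^{1}x^{\varepsilon-1}(\tilde f(x)-\tilde f(0))\mathrm{d} x$, and the subtracted integral is analytic at $\varepsilon=0$. A numerical sketch of this identity, assuming SciPy and using the illustrative integrand $\tilde f(x)=1/(1+x)$:

```python
import math
from scipy.integrate import quad

# Endpoint subtraction behind the Taylor expansion in a sector:
#   int_0^1 x^(eps-1) f(x) dx = f(0)/eps + int_0^1 x^(eps-1) (f(x) - f(0)) dx,
# where the subtracted integrand is O(x^eps), hence analytic at eps = 0.
f = lambda x: 1.0 / (1.0 + x)

def direct(eps):
    return quad(lambda x: x**(eps - 1.0) * f(x), 0.0, 1.0)[0]

def subtracted(eps):
    return f(0.0) / eps + quad(lambda x: x**(eps - 1.0) * (f(x) - f(0.0)), 0.0, 1.0)[0]

# Away from eps = 0 both sides agree:
assert abs(direct(0.5) - subtracted(0.5)) < 1e-6
# At eps = 0 only the subtracted form survives; its finite part is -log(2) here:
finite_part = quad(lambda x: (f(x) - f(0.0)) / x, 0.0, 1.0)[0]
assert abs(finite_part + math.log(2.0)) < 1e-8
```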
The original strategies of Binoth and Heinrich had the drawback that the recursion did not always
terminate. This fault was corrected by Bogner and Weinzierl in \cite{Bogner_2008}, where ideas from
the resolution of singularities were used to devise strategies guaranteed to succeed. Unfortunately,
these strategies often result in a large number of sectors, which greatly impacts the time needed
for numerical computations.
The results of the last section suggest that one should try to find a strategy which is adapted to
the Newton polytope of $f_{1}\cdots f_{k}$. Such a strategy has indeed been described by Kaneko and
Ueda in \cite{Kaneko_2010} and it fits in nicely with our toric point of view. One of the main
results of \cite{Kaneko_2010} can be rephrased as follows.
\begin{prop}\label{prop:sec-decom}
Let $X_{\Sigma}$ be a complete simplicial toric variety of dimension $n$. For $\sigma\in\Sigma(n)$
let
\begin{equation*}
I_{\sigma} := [0,1]^{n}//G_{\sigma}\subset \mathbb{R}_{+}^{n}//G_{\sigma}= U_{\sigma}(\mathbb{R}^{+}).
\end{equation*}
Then
\begin{equation*}
X_{\Sigma}(\mathbb{R}_{+}) = \bigcup_{\sigma\in\Sigma(n)}I_{\sigma}
\end{equation*}
and the intersections $I_{\sigma}\cap I_{\sigma'}$ for $\sigma\neq\sigma'$ have measure zero.
\end{prop}
\begin{proof}
The complement of $T_{N}(\mathbb{R^{+}})\subset X_{\Sigma}(\mathbb{R}_{+})$ has real codimension
one, so it is enough to show that the restrictions
\begin{equation*}
I^{\circ}_{\sigma} = I_{\sigma}\cap T_{N}(\mathbb{R^{+}}) \cong (0,1]^{n}//G_{\sigma}
\end{equation*}
cover $T_{N}(\mathbb{R^{+}})$ and intersect in a set of measure zero. The map
\begin{equation*}
L : T_{N}(\mathbb{R^{+}})\rightarrow N_{\mathbb{R}},\quad L(t) = -\Log(t)
\end{equation*}
is a diffeomorphism. We claim that $L$ identifies $I^{\circ}_{\sigma}$ with the cone
$\sigma\subset \mathbb{R}^{n}$. Let $\sigma$ have lattice generators $u_{1},\ldots,u_{n}$. In the
coordinates of $\sigma$ we have $t_{j}=\prod_{i=1}^{n}x_{i}^{\langle e_{j}, u_{i} \rangle}$ and
\begin{align*}
L(t(x)) &= \sum_{j=1}^{n}-\log(t_{j}(x))e_{j}\\
&= \sum_{i,j=1}^{n}-\log(x_{i})\langle e_{j}, u_{i} \rangle e_{j}\\
&= \sum_{i=1}^{n}-\log(x_{i})u_{i}.
\end{align*}
Since $I^{\circ}_{\sigma}$ is defined by the inequalities $0<x_{i}\le 1$, we get
\begin{equation*}
L(I^{\circ}_{\sigma})=\pos(u_{1},\ldots,u_{n})=\sigma.
\end{equation*}
But $\Sigma$ is a complete fan, so the cones $\sigma\in\Sigma(n)$ cover $\mathbb{R}^{n}$ and
intersect in sets of codimension at least one. The same must then be true for the $I_{\sigma}^{\circ}$.
\end{proof}
Combining the above proposition with proposition \ref{prop:simplicial-refinement} gives the main
result of \cite{Kaneko_2010}:
\begin{col}[\cite{Kaneko_2010}]\label{col:sec-decomp}
Let $X_{\Sigma}$ be a simplicial refinement of $P=P(f_{1}\cdots f_{k})$. Then the Mellin transform
can be decomposed as
\begin{equation*}
\mathcal{M}(f_{i},s,c) = \sum_{\sigma\in\Sigma(n)}\int_{[0,1]^{n}} x_{\sigma}^{s_{\sigma}}\prod_{i=1}^{k}(f_{\sigma,i}(x_{\sigma}))^{-c_{i}} \mathrm{d} x_{\sigma},
\end{equation*}
where:
\begin{enumerate}
\item $x_{\sigma}=(x_{\sigma,1},\ldots,x_{\sigma,n})$ are the coordinates associated to the
maximal cone $\sigma=\pos(u_{1},\ldots,u_{n})\in \Sigma$.
\item The monomial $x_{\sigma}^{s_{\sigma}}$ is given by
\begin{align*}
x_{\sigma}^{s_{\sigma}} = \prod_{j=1}^{n}x_{\sigma,j}^{\langle s - \sum_{i}c_{i}m_{\sigma,i}, u_{j} \rangle -1}
= \prod_{j=1}^{n}x_{\sigma,j}^{\langle s, u_{j}\rangle -d_{u_{j}}(c)-1}
\end{align*}
with $m_{\sigma,i}$ the vertex of $P(f_{i})$ on which the functions $\langle \cdot, u_{j} \rangle$
all attain their minimum.
\item The polynomials $f_{\sigma,i}(x_{\sigma})=f_{i}(x_{\sigma})x_{\sigma }^{-m_{\sigma,i}}$ are
regular and non-vanishing on $[0,1]^{n}$.
\end{enumerate}
\end{col}
\begin{proof}
The previous proposition shows that we can write the integral as a sum over the integration
domains $I_{\sigma}=[0,1]^{n}//G_{\sigma}$. Over this domain the integral becomes
\begin{align*}
\int_{I_{\sigma}}t^{s}\prod_{i=1}^{k}(f_{i}(t))^{-c_{i}} \frac{\mathrm{d} t}{t}
&= \int_{I_{\sigma}}\prod_{\rho\in\Sigma(1)}x_{\rho}^{\langle s, u_{\rho} \rangle -1}
\prod_{i=1}^{k}F_{i}(x)^{-c_{i}} \Omega_{\Sigma}{\bigg\rvert}_{U_{\sigma}} \\
&= \frac{u_{\sigma}}{|G_{\sigma}|}\int_{[0,1]^{n}} \prod_{j=1}^{n}x_{j}^{\langle s, u_{j} \rangle -1}
\prod_{i=1}^{k}f_{i}(x_{\sigma})^{-c_{i}} \mathrm{d} x_{\sigma} \\
&= \frac{u_{\sigma}}{|G_{\sigma}|}\int_{[0,1]^{n}} \prod_{j=1}^{n}x_{j}^{\langle s-\sum_{i}c_{i}m_{\sigma,i}, u_{j} \rangle -1}
\prod_{i=1}^{k}f_{\sigma,i}(x_{\sigma})^{-c_{i}} \mathrm{d} x_{\sigma}.
\end{align*}
It follows from the proof of proposition \ref{prop:simplicial-refinement} that
$f_{\sigma,i}(x_{\sigma})$ is regular and non-vanishing on $[0,1]^{n}$. From the discussion in
section \ref{sec:TV:divisor-coordinate-ring} we can assume that $u_{\sigma}>0$ and we have
\begin{equation*}
u_{\sigma}=\det(u_{1},\ldots,u_{n}) = |\Cl(U_{\sigma})| = |G_{\sigma}|,
\end{equation*}
so the factor $\frac{u_{\sigma}}{|G_{\sigma}|}$ cancels.
\end{proof}
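In the simplest case $n=1$, $f(t)=1+t$, the fan of $\mathbb{P}^{1}$ has the two maximal cones generated by $u=\pm 1$, and the corollary amounts to splitting the integral at $t=1$ and substituting $t\mapsto 1/x$ on the second sector; both sectors carry the non-vanishing polynomial $f_{\sigma}(x)=1+x$. A numerical sketch of this two-sector decomposition, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Two-sector decomposition of M(s, c) = int_0^infty t^(s-1) (1 + t)^(-c) dt:
# the cone u = +1 gives the sector integrand x^(s-1) (1 + x)^(-c) on [0, 1];
# the cone u = -1 (vertex m = 1, via t = 1/x) gives x^(c-s-1) (1 + x)^(-c) on [0, 1].
def sector_sum(s, c):
    s_plus = quad(lambda x: x**(s - 1.0) * (1.0 + x)**(-c), 0.0, 1.0)[0]
    s_minus = quad(lambda x: x**(c - s - 1.0) * (1.0 + x)**(-c), 0.0, 1.0)[0]
    return s_plus + s_minus

# The two sectors reassemble the Beta function Gamma(s) Gamma(c-s) / Gamma(c).
s, c = 0.5, 2.0
assert abs(sector_sum(s, c) - gamma(s) * gamma(c - s) / gamma(c)) < 1e-6
```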
\begin{remark}
The integral over $I_{\sigma}$ converges for $(s,c)\in M_{\mathbb{C}}\times \mathbb{C}^{k}$
satisfying
\begin{equation*}
\Real\langle s, u_{\rho} \rangle > \Real d_{\rho}(c)
\end{equation*}
for all $\rho\in \sigma(1)$. If $r:=\Real(c)>0$, this means that $\Real(s)$ lies in the
interior of the convex cone $m_{\sigma}(r)+\sigma^{\vee}$, where
$m_{\sigma}(r)=\sum_{i}r_{i}m_{\sigma,i}$ is the vertex of $P(r)$ corresponding to $\sigma$. Hence
the convergence region for a single sector is always nonempty, even if $P$ is not
full-dimensional. In fact, if $r>0$ is fixed, then
\begin{equation*}
\bigcap_{\sigma\in\Sigma(n)} m_{\sigma}(r) + \sigma^{\vee} = P(r).
\end{equation*}
Thus the integrals over the sectors $I_{\sigma}$ converge simultaneously if and only
if $\Real(s)\in \Int(\bigcap_{\sigma\in\Sigma(n)} m_{\sigma}(r) + \sigma^{\vee}) = \Int P(r)$, which
recovers our earlier result.
\end{remark}
\section{Feynman integrals}
\label{sec:feynman-graphs}
Let us finally apply our work to the investigation of Feynman integrals. We will work with scalar
Feynman graphs with generic euclidean kinematics. The parametric representation then expresses the
amplitude as a Mellin transform to which our previous results apply. We will show that this gives
two equivalent ways to rigorously construct the dimensionally regularized amplitudes.
\paragraph{Feynman graphs. }
We will consider a \emph{graph} $G$ to consist of a triple
\begin{equation*}
G = (E_{G},V_{G},\partial)
\end{equation*}
of finite sets of edges $E_{G}$ and vertices $V_{G}$, together with a map
\begin{equation*}
\partial:E_{G}\rightarrow \Sym^{2}V_{G}=V_{G}\times V_{G}/\mathbb{Z}_{2},
\end{equation*}
mapping an edge to its endpoints. This definition allows multiple edges and loops, but our graphs
will not have external half-edges. An edge $e\in E_{G}$ is called a \emph{selfloop} if
$\partial(e)=(v,v)$. A subgraph $\gamma\subset G$ is given by subsets $E_{\gamma}\subset E_{G},
V_{\gamma}\subset V_{G}$, such that $\partial(E_{\gamma})\subset \Sym^{2}V_{\gamma}$.
Every graph has an obvious geometric realization as a one-dimensional CW-complex, so that we can
speak about topological notions like connectedness and simply-connectedness. In particular, we
denote by $h^{0}(G)$ and $h^{1}(G)$ the zeroth and first Betti numbers of (the geometric realization
of) $G$. For a connected graph $G$, we call a connected subgraph $T\subset G$ a \emph{spanning tree}
if $V_{T}=V_{G}$ and $h^{1}(T)=0$. Note that these are precisely the maximal simply-connected
subgraphs of $G$.
A spanning 2-tree is a simply-connected subgraph $F\subset G$, with $V_{F}=V_{G}$ and exactly two
connected components $F=T_{1}\cup T_{2}$. Every spanning 2-tree is obtained from a spanning tree by
deleting an edge.
Every subset $I\subset E_{G}$ gives the \emph{edge subgraph} $\gamma\subset G$, where $E_{\gamma}=I$
and $V_{\gamma}$ consists of all vertices incident to an edge in $I$. We almost exclusively deal
with edge subgraphs, so we will often identify an edge subgraph with its set of edges. Notable
exceptions are spanning 2-trees, where it is important to allow isolated vertices.
A \emph{Feynman graph} is a graph together with distinguished (possibly empty) sets of external
vertices $V^{ext}_{G}\subseteq V_{G}$ and massive edges $E_{G}^{M}\subseteq E_{G}$. To every external
vertex $v\in V^{ext}_{G}$ we associate an inflowing external momentum $q_{v}\in \mathbb{C}^{D}$ and to
every massive edge $e\in E^{M}_{G}$, a mass $m_{e}\in\mathbb{C}\backslash \{0\}$. The external
momenta are additionally subject to momentum conservation in each connected component: If $G=\cup
G_{i}$ is the decomposition of $G$ into connected components, then
\begin{equation*}
\sum_{v\in V^{ext}_{G_{i}}} q_{v} = 0,
\end{equation*}
for all $i$.
Now suppose $G$ is a connected Feynman graph. Choosing an orientation for each edge gives $G$ the
structure of a one-dimensional cell complex. Then a truncated part of the cellular chain complex
gives the exact sequence
\begin{center}\label{eq:graph-exact-sequence}
\begin{tikzcd}
0 \arrow{r} & H_{1}(G,\mathbb{Z}) \arrow{r}{i} & \mathbb{Z}^{E_{G}} \arrow{r}{\partial} & V^0_G
\arrow{r} & 0
\end{tikzcd}
\end{center}
Here, $\partial$ is the boundary map and
\begin{equation*}\label{eq:mom-conv}
V^{0}_{G} := \{(n_{v})\in \mathbb{Z}^{V_{G}}\ \rvert\ \sum n_{v}=0\}
\end{equation*}
is the image of $\partial$, imposing overall momentum conservation. The external momentum $q$ is
then naturally an element of $V^{0}_{G}\otimes \mathbb{C}^{D}$. We can choose a section
$B:V^{0}_{G}\rightarrow \mathbb{Z}^{E_{G}}$ of $\partial$, since $V^{0}_{G}$ is a free abelian
group. In the physics literature, it is customary to choose a spanning tree $T\subset G$ and then
define a section by
\begin{equation*}
B_{T}: V^{0}_{G}=V^{0}_{T}\cong \mathbb{Z}^{E_{T}}\hookrightarrow \mathbb{Z}^{E_{G}}.
\end{equation*}
Let $p\mapsto p_{e}$ be the projection $\mathbb{C}^{E_{G}}\otimes \mathbb{C}^{D}\rightarrow
\mathbb{C}^{D}$ to the momentum flowing through the edge $e$. To each edge we associate the affine
quadric
\begin{equation*}
P_{e}:\mathbb{C}^{E_{G}}\otimes\mathbb{C}^{D}\rightarrow \mathbb{C},\quad P_{e}(p)=p_{e}^{2}+m^{2}_{e},
\end{equation*}
where $p_{e}^{2}=\sum_{i=1}^{D}(p_{e}^{i})^{2}$ denotes the square with respect to the euclidean
bilinear form. Its inverse
$P_{e}^{-1}$ gives the (scalar) propagator. As outlined in the introduction, we will work with
analytically regularized integrals, which means raising each propagator to a complex power
$\lambda_{e}\in \mathbb{C}$.
\begin{defin}
The (formal) euclidean Feynman amplitude $I_{G}(\lambda,D,q,m)$ is given by
\begin{equation*}
I_{G}(\lambda,D,q,m) := \int_{H_{1}(G,\mathbb{R}^{D})}\prod_{e\in E_{G}}(P_{e}(k+B(q)))^{-\lambda_{e}}\mathrm{d}\mu,
\end{equation*}
where $\mathrm{d} \mu = \frac{\mathrm{d}^{Dl}k}{\pi^{Dl/2}}$, with $l=h^{1}(G)$, is a convenient multiple of the Lebesgue measure.
\end{defin}
\begin{remark}
In general quantum field theories (e.g. gauge theories), the Feynman rules also give polynomials
of invariant scalar products in the numerator. But it is well known that these ``tensor''
integrals can be expressed as a linear combination of scalar Feynman integrals with shifted values
of the dimension and propagator decorations, see e.g. \cite{Tarasov_1996}. Since this is naturally
part of our approach anyway, we do not lose any generality if this reduction is understood.
\end{remark}
\paragraph{Parametric representation.}
\label{sec:schwinger}
In order to apply the results of section \ref{sec:analyt-dimens-regul}, we want to express $I_{G}$
as a suitable Mellin transform. This will only work if the external momenta and masses are
sufficiently generic.
\begin{defin}\label{defin:generic-kinematics}
A Feynman graph $G$ has generic euclidean kinematics if
\begin{align*}
\Real\left( \sum_{i\in I}q_{i} \right)^{2} &> 0 \\
\Real\left( \sum_{i\in I}q_{i} \right)^{2} + \Real(m^{2}_{e}) &> 0
\end{align*}
for all nonempty proper subsets $I\subsetneq V^{ext}_{G}$ and massive edges $e\in E^{M}_{G}$.
\end{defin}
We will assume that $G$ has generic euclidean kinematics from now on.
When none of the $P_{e}$ vanish, we can use the identity
\begin{equation*}
\frac{1}{p^{\lambda}} = \int_{0}^{\infty}\frac{\alpha^{\lambda-1}}{\Gamma(\lambda)} e^{-\alpha p}\mathrm{d} \alpha, \quad \Real(p) > 0,\ \Real(\lambda)>0,
\end{equation*}
to write the integrand of the momentum-space amplitude as
\begin{equation*}
\prod_{e\in E_{G}}P_{e}^{-\lambda_{e}}(p)=\int_{[0,\infty]^{E_{G}}}\prod_{e\in E_{G}}\frac{\alpha_{e}^{\lambda_{e}-1}}{\Gamma(\lambda_{e})}\mathrm{d}\alpha_{e}\,e^{-\sum\alpha_{e}P_{e}(p)}.
\end{equation*}
Formally exchanging the order of integration gives the amplitude as
\begin{align*}
I_{G}(\lambda,D,q,m)
&= \int_{[0,\infty]^{E_{G}}}\prod_{e\in E_{G}}\frac{\alpha_{e}^{\lambda_{e}-1}}{\Gamma(\lambda_{e})}\mathrm{d}\alpha_{e}\int_{H_{1}(G,\mathbb{R}^{D})}e^{-\sum\alpha_{e}P_{e}(k+B(q))}\mathrm{d} \mu \\
&=: \int_{[0,\infty]^{E_{G}}}\prod_{e\in E_{G}}\frac{\alpha_{e}^{\lambda_{e}-1}}{\Gamma(\lambda_{e})}\mathrm{d} \alpha_{e}A_{G}(\lambda,D,q,m,\alpha).
\end{align*}
The integrand $A_{G}$ can be computed by reducing it to a gaussian integral. The answer will involve
the following graph polynomials:
\begin{defin}
The \emph{first Symanzik} polynomial of the connected graph $G$ is
\begin{equation*}
\psi_{G}:= \sum_{T}\prod_{e\notin T}\alpha_{e},
\end{equation*}
where the sum is over all spanning trees of $G$. The (massless) \emph{second Symanzik} polynomial of $G$
with external momentum $q\in \mathbb{C}^{E}\otimes \mathbb{C}^{D}$ is
\begin{equation*}
\varphi_{G}(q,\alpha) = \sum_{T_{1}\cup T_{2}}q^{2}_{T_{1}}\prod_{e\notin T_{1}\cup T_{2}}\alpha_{e},
\end{equation*}
where the sum is over all spanning $2$-trees $T_{1}\cup T_{2}$ and
\begin{equation*}
q_{T_{1}} = \sum_{v\in V(T_{1})}q_{v}
\end{equation*}
is the total momentum flowing through $T_{1}$. The full second Symanzik polynomial of $G$ is
defined as
\begin{equation*}
\Phi_{G}(\alpha,q,m) = \varphi_{G}(q,\alpha)+\left( \sum_{i\in E_{G}}\alpha_{i}m^{2}_{i} \right)\psi_{G}(\alpha).
\end{equation*}
\end{defin}
\begin{remark}
By momentum conservation we have
\begin{equation*}
q_{T_{1}}^{2} = (-q_{T_{2}})^{2} = q_{T_{2}}^{2}.
\end{equation*}
Hence the above definition is unambiguous.
\end{remark}
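For small graphs, $\psi_{G}$ can be checked by brute force. The following sketch enumerates spanning trees of a multigraph by testing every $(|V_{G}|-1)$-element edge subset for acyclicity with a union-find structure; since $\psi_{G}$ is square free with unit coefficients, we represent it as a set of monomials, each monomial being the set of edge labels in the complement of a spanning tree. This is exponential in $|E_{G}|$ and meant purely for illustration:

```python
from itertools import combinations

# Enumerate spanning trees of a multigraph, given as a list of (v, w) endpoint pairs,
# by testing every (n-1)-element edge subset for acyclicity with union-find.
def spanning_trees(n_vertices, edges):
    trees = []
    for subset in combinations(range(len(edges)), n_vertices - 1):
        parent = list(range(n_vertices))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for i in subset:
            a, b = find(edges[i][0]), find(edges[i][1])
            if a == b:
                acyclic = False
                break
            parent[a] = b
        if acyclic:  # n-1 edges and no cycle => a spanning tree
            trees.append(frozenset(subset))
    return trees

# psi_G = sum over spanning trees T of prod_{e not in T} alpha_e, represented as a
# set of monomials (frozensets of edge labels), since psi_G is square free.
def psi(n_vertices, edges):
    all_edges = frozenset(range(len(edges)))
    return {all_edges - T for T in spanning_trees(n_vertices, edges)}

# One-loop bubble (two vertices, two parallel edges): psi = a_0 + a_1.
assert psi(2, [(0, 1), (0, 1)]) == {frozenset({0}), frozenset({1})}
# Two-loop sunset (two vertices, three parallel edges): psi = a_1 a_2 + a_0 a_2 + a_0 a_1.
assert psi(2, [(0, 1), (0, 1), (0, 1)]) == {frozenset({1, 2}), frozenset({0, 2}), frozenset({0, 1})}
```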
The following theorem is then well-known (See e.g. \cite{Panzer:2015ida},
\cite{itzykson1985quantum}).
\begin{thm}
The integrand $A_{G}$ can be calculated as
\begin{equation*}
A_{G}(\lambda,D,\alpha,q,m) = \frac{e^{-\frac{\Phi_{G}}{\psi_{G}}}}{\psi_{G}^{D/2}}.
\end{equation*}
\end{thm}
Now consider the diffeomorphism
\begin{align*}
F:(0,\infty)\times \Delta_{E_{G}} &\cong \mathbb{R}^{E_{G}}_{+}\backslash\{0\}, \\
(t,\alpha) &\mapsto t\alpha.
\end{align*}
A simple calculation shows that the standard volume form $\prod_{e}\mathrm{d}\alpha_{e}$ pulls back to
\begin{equation*}
F^{*}\left( \prod_{e}\mathrm{d}\alpha_{e} \right) = t^{E-1}\mathrm{d} t \wedge \Omega_{\Delta_{E_{G}}},
\end{equation*}
where
\begin{equation*}
\Omega_{\Delta_{E_{G}}} = \sum_{i=1}^{E}(-1)^{i-1}\alpha_{i}\mathrm{d} \alpha_{1}\wedge \ldots \widehat{\mathrm{d} \alpha_{i}} \ldots \wedge \mathrm{d} \alpha_{E}\bigg\rvert_{\Delta_{E_{G}}},
\end{equation*}
where we have fixed some ordering $E_{G}\cong\{1,\ldots,E\}$. Now let
\begin{equation*}
\omega_{G} = \sum_{e\in E_{G}}\lambda_{e} - \frac{D}{2}h^{1}_{G}=:-sd_{G},
\end{equation*}
where $sd_{G}$ is the superficial degree of divergence of $G$.
Performing the integral over $t$ gives
(still formally)
\begin{align*}
I_{G}(\lambda,D,q,m) &= \int_{[0,\infty]^{E_{G}}}\prod_{e\in E_{G}}\frac{\alpha_{e}^{\lambda_{e}-1}}{\Gamma(\lambda_{e})}\mathrm{d} \alpha_{e}A_{G}(\lambda,D,q,m,\alpha)\\
&= \int_{\Delta_{E_{G}}}\prod_{e\in E_{G}}\frac{\alpha_{e}^{\lambda_{e}-1}}{\Gamma(\lambda_{e})}\frac{\Omega_{\Delta_{E_{G}}}}{\psi_{G}^{D/2}}\int_{0}^{\infty}t^{\omega_{G}-1}e^{-t \frac{\Phi_{G}}{\psi_{G}}}\mathrm{d} t\\
&= \int_{\Delta_{E_{G}}}\prod_{e\in E_{G}}\frac{\alpha_{e}^{\lambda_{e}-1}}{\Gamma(\lambda_{e})}\frac{\Omega_{\Delta_{E_{G}}}}{\psi_{G}^{D/2}}\Gamma(\omega_{G})\left( \frac{\psi_{G}}{\Phi_{G}} \right)^{\omega_{G}}.
\end{align*}
There is a natural homeomorphism
\begin{equation*}
P^{E_{G}}(\mathbb{R}^{+}) \rightarrow \Delta_{E_{G}},\quad [\alpha_{1}:\ldots:\alpha_{E}]\mapsto \left( \frac{\alpha_{i}}{\sum_{j\in E_{G}}\alpha_{j}} \right)_{i\in E_{G}},
\end{equation*}
which identifies $\Omega_{\Delta_{E_{G}}}$ with the volume form $\Omega_{P^{E_{G}}}$ of $P^{E_{G}}$
constructed in section \ref{sec:TV:divisor-coordinate-ring}. We can then express the above result as
follows.
\begin{col}
The amplitude can be expressed as the (possibly still divergent) projective integral
\begin{equation*}
I_{G}(\lambda,D,q,m) = \Gamma(\omega_{G})\int_{P^{E_{G}}(\mathbb{R}^{+})}\prod_{e\in E_{G}}\frac{\alpha_{e}^{\lambda_{e}-1}}{\Gamma(\lambda_{e})}\left( \frac{\psi_{G}}{\Phi_{G}} \right)^{\omega_{G}}\frac{\Omega_{P^{E_{G}}}}{\psi_{G}^{D/2}}.
\end{equation*}
\end{col}
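As a sanity check, consider the massive tadpole: one vertex with a single massive self-loop, so $\psi_{G}=\alpha$, $\Phi_{G}=m^{2}\alpha^{2}$, $\omega_{G}=\lambda-D/2$, and the projective integral collapses to a point, giving $I_{G}=\frac{\Gamma(\lambda-D/2)}{\Gamma(\lambda)}m^{D-2\lambda}$. For $D=1$ this can be compared numerically with the momentum-space integral; a sketch assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Massive tadpole in D = 1: the momentum representation
#   I_G = int dk / pi^(1/2) * (k^2 + m^2)^(-lambda)
# against the parametric result Gamma(lambda - D/2)/Gamma(lambda) * m^(D - 2 lambda),
# obtained from psi = alpha, Phi = m^2 alpha^2 at the single point alpha = 1.
lam, m, D = 1.3, 2.0, 1.0

momentum = quad(lambda k: (k**2 + m**2)**(-lam), -np.inf, np.inf)[0] / np.sqrt(np.pi)
parametric = gamma(lam - D / 2.0) / gamma(lam) * m**(D - 2.0 * lam)
assert abs(momentum - parametric) < 1e-7
```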
We can interpret the above representation of $I_{G}$ as an integral over the open simplex
\begin{equation*}
T_{N_{E_{G}}}(\mathbb{R}^{+})\subset P^{E_{G}}(\mathbb{R}^{+}),
\end{equation*}
which expresses it as a Mellin transform.
For generic euclidean kinematics, the coefficients of $\Phi_{G}$ are contained in an open
strongly convex cone. Hence we are finally in a position to apply the result of section
\ref{sec:analyt-dimens-regul}.
\begin{remark}
We will soon see that (under mild conditions on the graph $G$) we can choose the $\lambda_{e}$ such
that $\Real(\omega_{G})>0$ and the above expression for $I_{G}$ is absolutely convergent for generic
euclidean momenta. In this case, we get an equality of absolutely convergent integrals
\begin{align*}
I_{G}(\lambda,D,q,m)
&= \int_{H_{1}(G,\mathbb{R}^{D})}\prod_{e\in E_{G}}(P_{e}(k + B(q)))^{-\lambda_{e}}\mathrm{d}\mu\\
&= \Gamma(\omega_{G})\int_{P^{E_{G}}(\mathbb{R}^{+})}\prod_{e\in E_{G}}\frac{\alpha_{e}^{\lambda_{e}-1}}{\Gamma(\lambda_{e})}\left( \frac{\psi_{G}}{\Phi_{G}} \right)^{\omega_{G}}\frac{\Omega_{P^{E_{G}}}}{\psi_{G}^{D/2}}.
\end{align*}
\end{remark}
\paragraph{The Newton polytope of a Feynman graph.}
The results of section \ref{sec:analyt-dimens-regul} show that, to understand the convergence domain
of the integral $I_{G}$, we should understand the polytope $P(\psi_{G}\Phi_{G})$.
\begin{defin}
The Newton polytope $P(\psi_{G}\Phi_{G})$ of a Feynman graph $G$ is called the Feynman polytope of
$G$ and denoted by $P_{G}$.
\end{defin}
Let us first recall the factorization formulas of Brown (\cite{Brown_2017}). If $G$ has some massless
edges (and thus possible IR divergences), a special role is played by subgraphs which carry all the
kinematics, in the following sense:
\begin{defin}
An edge subgraph $\gamma\subset G$ containing all external vertices of $G$ in a single connected
component will be called \emph{momentum spanning}. If $\gamma$ additionally contains all massive
edges, then it will be called \emph{mass-momentum spanning} (m.m. for short).
\end{defin}
For an m.m. subgraph $\gamma$, we set $V^{ext}_{\gamma}=V^{ext}_{G}$ and $E^{M}_{\gamma}=E^{M}_{G}$
and the kinematics of $\gamma$ are the same as those of $G$. Otherwise we consider $\gamma$ to be
scaleless, i.e. $V^{ext}_{\gamma}=E^{M}_{\gamma}=\emptyset$. If $\gamma$ is a possibly disconnected
subgraph, we define the quotient $G/\gamma$ by contracting every connected component to a vertex.
The kinematics of $G/\gamma$ are inherited from $G$ in the obvious way. This implies that $G/\gamma$
has nontrivial kinematics if and only if $\gamma$ is not mass-momentum spanning.
In this way, we can associate to every subgraph $\gamma\subset G$ the ``flat deformation''
\begin{equation*}
G|\gamma = \gamma \cup G/\gamma,
\end{equation*}
where exactly one of $\gamma$ and $G/\gamma$ has nontrivial kinematics. For a possibly disconnected
Feynman graph $\Gamma$ as above, we generalize the definitions of the Symanzik polynomials as
follows: If $\Gamma=\cup_{i=1}^{k} \Gamma_{i}$ is the disjoint union of connected graphs
$\Gamma_{i}$, then we set
\begin{align*}
\psi_{\Gamma} &= \prod_{i=1}^{k}\psi_{\Gamma_{i}}\\
\varphi_{\Gamma} &= \sum_{i=1}^{k}\varphi_{\Gamma_{i}}\prod_{j\neq i}\psi_{\Gamma_{j}}\\
\Phi_{\Gamma} &= \sum_{i=1}^{k}\Phi_{\Gamma_{i}}\prod_{j\neq i}\psi_{\Gamma_{j}}.
\end{align*}
It will also be convenient to define $\delta^{m}_{\gamma}$ to be 1 if $\gamma$ is momentum-spanning
and $0$ otherwise. Similarly, we set $\delta^{mm}_{\gamma}$ to be 1 if $\gamma$ is mass-momentum
spanning and $0$ otherwise. With this notation, we can formulate the factorization formulas as
follows:
\begin{prop}[\cite{Brown_2017}]\label{prop:factor}
Let $G$ be a connected Feynman graph and $\gamma\subset G$ a subgraph with connected components
$\gamma_{0},\ldots,\gamma_{k}$.
\begin{enumerate}
\item There are polynomials $R^{\psi}_{G|\gamma},R^{\varphi}_{G|\gamma}$ and
$R^{\Phi}_{G|\gamma}$, such that
\begin{align*}
\psi_{G} &= \psi_{G|\gamma} + R^{\psi}_{G|\gamma}\\
\varphi_{G} &= \varphi_{G|\gamma} + R^{\varphi}_{G|\gamma} \\
\Phi_{G} &= \Phi_{G|\gamma} + R^{\Phi}_{G|\gamma}.
\end{align*}
The degree $\deg_{\gamma}(R^{\cdot}_{G|\gamma})$ of the rest terms in the variables
$(\alpha_{e})_{e\in\gamma}$ satisfies
\begin{align*}
\deg_{\gamma}(R^{\psi}_{G|\gamma}) &> \deg_{\gamma}(\psi_{G|\gamma})
= h^{1}_{\gamma} \\
\deg_{\gamma}(R^{\varphi}_{G|\gamma}) &> \deg_{\gamma}(\varphi_{G|\gamma})
= h^{1}_{\gamma} + \delta^{m}_{\gamma}\\
\deg_{\gamma}(R^{\Phi}_{G|\gamma}) &> \deg_{\gamma}(\Phi_{G|\gamma})
= h^{1}_{\gamma}+\delta^{mm}_{\gamma}
\end{align*}
\item The polynomial $R^{\psi}_{G|\gamma}$ vanishes if and only if, for any spanning tree
$T\subset G$, the graphs $\gamma_{i}\cap T$ are connected.
\item Suppose $\gamma$ is not momentum spanning. Then the polynomial $R^{\varphi}_{G|\gamma}$
vanishes if and only if the intersections $\gamma_{i}\cap F$ are connected for any spanning
$2$-tree $F=T_{1}\cup T_{2}$ with $(q_{T_{1}})^{2}\neq 0$.
\item Suppose $\gamma$ is momentum spanning and all external vertices are contained in the
component $\gamma_{0}$. Then $R^{\varphi}_{G|\gamma}$ vanishes if and only if, for any spanning
$2$-tree $F=T_{1}\cup T_{2}$ with $(q_{T_{1}})^{2}\neq 0$, the graphs $F\cap \gamma_{i}$ are connected for
$i>0$ and $F\cap \gamma_{0}$ has exactly two connected components.
\item The polynomial $R^{\Phi}_{G|\gamma}$ is given by
\begin{equation*}
R^{\Phi}_{G|\gamma} = R_{G|\gamma}^{\varphi} + \psi_{G|\gamma}\left( \sum_{e\in E^{M}_{G}\cap E_{\gamma}}m^{2}_{e}\alpha_{e} \right) + R^{\psi}_{G|\gamma}\left( \sum_{e\in E^{M}_{G}}m^{2}_{e}\alpha_{e} \right)
\end{equation*}
if $\gamma$ is not mass-momentum spanning and by
\begin{equation*}
R^{\Phi}_{G|\gamma} = R_{G|\gamma}^{\varphi} + R^{\psi}_{G|\gamma}\left( \sum_{e\in E^{M}_{G}}m^{2}_{e}\alpha_{e} \right)
\end{equation*}
if $\gamma$ is mass-momentum spanning.
\end{enumerate}
\end{prop}
\begin{proof}[Proof sketch]
We sketch the proof and refer to \cite{Brown_2017} for more details. For $k=1,2$, consider the
subset $\mathcal{T}^{k}_{\gamma}$ of spanning $k$-trees $T$ of $G$, such that the intersections
$T\cap\gamma_{i}$ are connected for all $i$. This set is always
nonempty (\cite[Lemma 2.1.]{Brown_2017}) and its elements are exactly those spanning $k$-trees,
such that $\sum_{i}|T\cap \gamma_{i}|$ is maximal. The corresponding monomial $\alpha^{S}$ for
$S=E_{G}\backslash T$ is then minimal in the variables $(\alpha_{e})_{e\in \gamma}$. Decomposing
the sum over $k$-trees into a sum over $\mathcal{T}^{k}_{\gamma}$ and its complement gives the
decompositions
\begin{align*}
\psi_{G}&=\psi_{\gamma}\psi_{G/\gamma} + R^{\psi}_{G|\gamma} \\
\varphi_{G}&=\psi_{\gamma}\varphi_{G/\gamma} + R^{\varphi}_{G|\gamma},
\end{align*}
and $R^{\psi}_{G|\gamma}$ (resp. $R^{\varphi}_{G|\gamma}$) vanishes, if and only if
$\mathcal{T}^{1}_{\gamma}$ (resp. $\mathcal{T}^{2}_{\gamma}$) consists of all spanning trees
(resp. spanning $2$-trees). If $\gamma$ is not momentum spanning, then
$\varphi_{G|\gamma}=\psi_{\gamma}\varphi_{G/\gamma}$ and the proposition follows. Now suppose
$\gamma$ is momentum spanning, such that all external vertices are contained in the component
$\gamma_{0}$. Then the polynomial $\varphi_{G/\gamma}$ vanishes and we have to use a different
decomposition. In this case, we define $\mathcal{\tilde T}^{2}_{\gamma}$ to be those spanning
$2$-trees $F$, such that $F\cap \gamma_{i}$ are trees for $i>0$ and $F\cap\gamma_{0}$ has exactly
two connected components. Splitting the sum over all $2$-trees into a sum over $\mathcal{\tilde
T}^{2}_{\gamma}$ and its complement as above, gives the decomposition
\begin{equation*}
\varphi_{G} = \varphi_{\gamma_{0}}\psi_{\gamma_{1}}\cdots\psi_{\gamma_{k}}\psi_{G/\gamma} + R^{\varphi}_{G|\gamma}.
\end{equation*}
The first term is just $\varphi_{G|\gamma}$ and the second term vanishes if and only if the
complement of $\mathcal{\tilde T}^{2}_{\gamma}$ is empty. This establishes the first $4$ points.
The expression for $R^{\Phi}_{G|\gamma}$ follows immediately from the decompositions of $\psi_{G}$
and $\varphi_{G}$.
\end{proof}
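The decomposition $\psi_{G}=\psi_{\gamma}\psi_{G/\gamma}+R^{\psi}_{G|\gamma}$ and the degree bound can be verified by brute force on the two-loop sunset graph (two vertices joined by three edges) with $\gamma$ the bubble on the edges $\{0,1\}$, so that $h^{1}_{\gamma}=1$ and $R^{\psi}_{G|\gamma}=\alpha_{0}\alpha_{1}$. A self-contained sketch, enumerating spanning trees by testing edge subsets for acyclicity:

```python
from itertools import combinations

# psi_G as a set of square-free monomials (frozensets of edge labels), computed by
# enumerating spanning trees of the multigraph via union-find acyclicity tests.
def psi(n_vertices, edges):
    monomials = set()
    all_edges = frozenset(range(len(edges)))
    for subset in combinations(range(len(edges)), n_vertices - 1):
        parent = list(range(n_vertices))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for i in subset:
            a, b = find(edges[i][0]), find(edges[i][1])
            if a == b:
                acyclic = False
                break
            parent[a] = b
        if acyclic:
            monomials.add(all_edges - frozenset(subset))
    return monomials

# Sunset graph G: edges 0, 1, 2 between two vertices; gamma = {0, 1} is a bubble with h^1 = 1.
psi_G = psi(2, [(0, 1), (0, 1), (0, 1)])    # a_1 a_2 + a_0 a_2 + a_0 a_1
psi_gamma = psi(2, [(0, 1), (0, 1)])        # a_0 + a_1
# Contracting gamma leaves a tadpole on edge 2, so psi_{G/gamma} = a_2 and
# psi_{G|gamma} = psi_gamma * psi_{G/gamma} = (a_0 + a_1) a_2.
psi_G_gamma = {monomial | frozenset({2}) for monomial in psi_gamma}
rest = psi_G - psi_G_gamma
assert rest == {frozenset({0, 1})}          # R^psi = a_0 a_1
# Every monomial of the rest term has gamma-degree > h^1(gamma) = 1:
assert all(len(monomial & {0, 1}) > 1 for monomial in rest)
```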
\begin{col}
The face of $P_{G}$ corresponding to the weight vector $e^{\gamma}$ is
\begin{equation*}
F_{e^{\gamma}}P_{G} = P_{\gamma} \times P_{G/\gamma}.
\end{equation*}
\end{col}
In the above Corollary, we have identified an edge subgraph $\gamma\subseteq G$ with its set of
edges $E_{\gamma}\subseteq E_{G}$. Using this convention, we define the subset function
$s_{G}:2^{E_{G}}\rightarrow \mathbb{Z}$ by
\begin{equation*}
s_{G}(\gamma) = 2h^{1}(\gamma) + \delta^{mm}_{\gamma}.
\end{equation*}
The above proof shows in particular that
\begin{equation*}
h^{1}(\gamma) = |E_{\gamma}| - |E_{\gamma}\cap E_{T}|,
\end{equation*}
where $T$ is a spanning tree of $G$, such that $T\cap \gamma$ is a maximal forest, i.e. intersects
every connected component of $\gamma$ in a spanning tree. With the notation of section
\ref{sec:gener-perm}, we can write this as
\begin{equation*}
h^{1}(\gamma) = |S| - |E^{c}_{\gamma}\cap S|= (r^{*}_{G})^{\#}(\gamma),
\end{equation*}
where $S=E_{G}\backslash T$ and $r^{*}_{G}$ is the rank function of the dual graph matroid
$M^{*}(G)$, whose bases are the complements of spanning trees. This shows that
\begin{equation*}
h^{1}:2^{E_{G}}\rightarrow \mathbb{Z}
\end{equation*}
is supermodular.
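The supermodularity of $h^{1}$ can also be verified directly on small examples. The following Python snippet (an illustration on a hypothetical example graph, not part of the formal development) computes $h^{1}(\gamma)=|E_{\gamma}|-|V_{\gamma}|+c(\gamma)$ for every edge subset of a small graph and checks the supermodular inequality on all pairs:

```python
from itertools import combinations

# Illustrative check (hypothetical example graph): h1 = |E| - |V| + #components
# of an edge subgraph is supermodular as a function of the edge set.
def h1(edges):
    edges = list(edges)
    verts = {v for e in edges for v in e}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    comps = len({find(v) for v in verts})
    return len(edges) - len(verts) + comps

# two triangles glued along an edge
E = [(0, 1), (1, 2), (0, 2), (1, 3), (2, 3)]
subsets = [frozenset(s) for r in range(len(E) + 1) for s in combinations(E, r)]
H = {A: h1(A) for A in subsets}
assert all(H[A] + H[B] <= H[A | B] + H[A & B]
           for A in subsets for B in subsets)
```

The check succeeds for any graph, since $h^{1}(\gamma)=|E_{\gamma}|-r(E_{\gamma})$ with $r$ the submodular graphic matroid rank and $|E_{\gamma}|$ modular.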
Let us also call a pair $(T,i)$ of a spanning tree $T\subset G$ and an edge $i\in E_{G}$
\emph{admissible} if either $i\in E^{M}_{G}$ or $i\in T$ and both connected components of $T
\backslash i$ contain external momenta. Hence a pair $(T,i)$ is admissible if and only if the
monomial $\alpha_{i}\prod_{j\notin T}\alpha_{j}$ appears in $\Phi_{G}$. A subgraph $\gamma\subset G$
is then mass-momentum spanning if, for every admissible pair $(T,i)$, either $i\in \gamma$ or $T\cap
\gamma$ is not a maximal forest in $\gamma$.
\begin{prop}
The function $s_{G}$ is supermodular.
\end{prop}
\begin{proof}
Since we know from the above discussion that $2h^{1}$ is supermodular, we only need to show that
\begin{equation*}
s_{G}(\gamma_{1}) + s_{G}(\gamma_{2}) \le s_{G}(\gamma_{1}\cup \gamma_{2}) + s_{G}(\gamma_{1}\cap \gamma_{2}),
\end{equation*}
where $\gamma_{1}$ and $\gamma_{2}$ (and therefore $\gamma_{1}\cup\gamma_{2}$) are mass-momentum
spanning. We can also assume that
$h^{1}(\gamma_{1})+h^{1}(\gamma_{2})=h^{1}(\gamma_{1}\cup\gamma_{2})+h^{1}(\gamma_{1}\cap
\gamma_{2})$, since the inequality is trivial otherwise and we can reduce to the case
$\gamma_{1}\cup\gamma_{2}=G$. We will show that $\gamma_{1}\cap\gamma_{2}$ is also mass-momentum
spanning. For this we must show that for every admissible pair $(T,i)$, where $T\subset
\gamma_{1}\cup \gamma_{2}$ is a spanning tree such that $T\cap \gamma_{1}\cap \gamma_{2}$ is a
maximal forest, we must have $i\in \gamma_{1}\cap\gamma_{2}$. From the inclusion-exclusion
principle we have
\begin{align*}
h^{1}_{G}(\gamma_{1}\cup \gamma_{2}) + h^{1}_{G}(\gamma_{1}\cap\gamma_{2})
&= |E_{\gamma_{1}\cup\gamma_{2}}| + |E_{\gamma_{1}\cap\gamma_{2}}| - |E_{(\gamma_{1}\cup\gamma_{2})\cap T}| - |E_{\gamma_{1}\cap\gamma_{2}\cap T}| \\
&= |E_{\gamma_{1}}| + |E_{\gamma_{2}}| - |E_{\gamma_{1}}\cap E_{T}| - |E_{\gamma_{2}}\cap E_{T}| \\
&\le h^{1}(\gamma_{1}) + h^{1}(\gamma_{2})
\end{align*}
with equality only if $T\cap \gamma_{1}$ and $T\cap \gamma_{2}$ are maximal forests. By our
assumption on $h^{1}$, equality holds, so $T\cap \gamma_{1}$ and $T\cap \gamma_{2}$ are indeed
maximal forests. Since $(T,i)$ is an admissible pair and both $\gamma_{1}$ and $\gamma_{2}$ are
mass-momentum spanning, we must have $i\in\gamma_{1}$ and $i\in\gamma_{2}$. But this means that
$i\in \gamma_{1}\cap \gamma_{2}$, so $\gamma_{1}\cap\gamma_{2}$ is also mass-momentum spanning.
\end{proof}
Our conventions on the kinematics of sub- and quotient graphs are justified by the following.
\begin{lem}
The restrictions and contractions of $s_{G}$ by a subgraph $\gamma\subsetneq G$ are given by
\begin{align*}
s_{G}\vert_{\gamma} = s_{\gamma}, \quad s_{G}/_{\gamma} = s_{G/\gamma}.
\end{align*}
\end{lem}
\begin{proof}
The equality $s_{G}\vert_{\gamma}=s_{\gamma}$ for the restrictions follows immediately from the
definitions. For $\eta\subset G/\gamma$, let $\tilde \eta$ be the edge subgraph corresponding to
$E_{\gamma}\cup E_{\eta}$. The contraction equality then claims that
\begin{align*}
s_{G}/_{\gamma}(\eta) &= 2h^{1}(\tilde \eta) -2h^{1}(\gamma) + \delta^{mm}_{\tilde\eta}-\delta^{mm}_{\gamma}\\
&= 2h^{1}(\eta) + \delta^{mm}_{\eta} = s_{G/\gamma}(\eta).
\end{align*}
The equality $h^{1}(\tilde \eta)-h^{1}(\gamma)=h^{1}(\eta)$ follows from Example
\ref{exam:matroid-restr-contract} and the matroid equality $M^{*}(G)\vert_{E^{c}_{\gamma}} = M^{*}(G/\gamma)$.
We then only have to prove that $\tilde\eta$ is mass-momentum spanning if and only if $\eta$ is. Clearly
$\eta$ contains all massive edges of $G/\gamma$ if and only if $\tilde \eta$ contains all masses
of $G$. Similarly, if $\tilde \eta$ contains all external vertices in a single connected
component, then so does its contraction $\eta$. On the other hand, if all external vertices of
$G/\gamma$ lie in the component $\eta^{0}\subseteq\eta$ and $\gamma_{1},\ldots,\gamma_{s}\subseteq
\gamma$ are the components of $\gamma$ containing external vertices, then the subgraph defined by
the edge set $E_{\eta^{0}}\cup E_{\gamma_{1}}\cup\ldots\cup E_{\gamma_{s}}$ is connected and contains
all external vertices.
\end{proof}
\begin{thm}\label{thm:feynman-supermodular}
The Feynman polytope $P_{G}$ is the generalized permutahedron associated to the function $s_{G}$.
\end{thm}
\begin{proof}
It is immediate from Prop. \ref{prop:factor}, that $P_{G}\subseteq P(s_{G})$. We will prove the
reverse inclusion by induction over the number of edges. Hence we can assume that
$P_{\gamma}=P(s_{\gamma})$ and $P_{G/\gamma}=P(s_{G/\gamma})$ for all nontrivial edge subgraphs
$\gamma\subsetneq G$. The previous lemma then shows that
\begin{equation*}
F_{e^{\gamma}}P_{G} = P_{\gamma}\times P_{G/\gamma} = P(s_{\gamma})\times P(s_{G/\gamma})=F_{e^{\gamma}}P(s_{G}).
\end{equation*}
Since all vertices of $P(s_{G})$ are given by intersections of the above faces, we must have
$P(s_{G})(0)\subset P_{G}$ and thus $P_{G}=P(s_{G})$.
\end{proof}
\begin{remark}
The above factorization formulas break down for some special kinematic configurations, e.g. when
some combination of momenta $q_{F}$ is on-shell, i.e. when $q_{F}^{2}=0$ but $q_{F}\neq 0$. In
these cases, $P(\Phi_{G})$ is not always a generalized permutahedron. An example is the box
integral with all external momenta on-shell, which is discussed in \cite{Panzer:2015ida}.
\end{remark}
Let us recall that a graph $G$ is called \emph{1-vertex reducible} if the removal of some vertex
disconnects the graph and \emph{1-vertex irreducible (1VI)} otherwise. We consider graphs with a
single edge to be 1VI and disconnected graphs to be 1-vertex reducible. A graph on two vertices is
1VI if it contains no self-loops. Any graph then has a unique decomposition into 1VI-subgraphs. Note
that these are exactly the connected components of the graph matroid $M(G)$ and its dual $M^{*}(G)$
(see e.g. \cite[Section 2.3]{oxley2006matroid}). In particular, a subgraph $\gamma\subseteq G$ is a
union of 1VI components if and only if its intersection with every spanning tree $T\subset G$ is a
maximal forest. By Prop. \ref{prop:factor}, this is equivalent to $R^{\psi}_{G|\gamma}=0$.
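As a sketch of the 1VI condition (using a brute-force connectivity test, ignoring self-loops; the example graphs below are hypothetical), one can check 1-vertex irreducibility as follows:

```python
# Brute-force sketch: a connected graph is 1VI iff removing any single
# vertex leaves it connected (graphs with a single edge count as 1VI).
def is_connected(verts, edges):
    if not verts:
        return True
    seen = {next(iter(verts))}
    stack = list(seen)
    while stack:
        v = stack.pop()
        for a, b in edges:
            for u, w in ((a, b), (b, a)):
                if u == v and w in verts and w not in seen:
                    seen.add(w)
                    stack.append(w)
    return seen == verts

def is_1vi(edges):
    verts = {v for e in edges for v in e}
    if len(edges) <= 1:
        return True
    if not is_connected(verts, edges):
        return False
    return all(
        is_connected(verts - {v}, [e for e in edges if v not in e])
        for v in verts)

assert is_1vi([(0, 1), (1, 2), (0, 2)])                    # a triangle
bowtie = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]  # cut vertex 2
assert not is_1vi(bowtie)
```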
Following \cite{Smirnov_2012}, we will call a Feynman graph $G$ with generic euclidean kinematics
\emph{s-irreducible} if every 1VI-component $\gamma$ has nontrivial kinematics. This means that
either $\gamma$ contains massive edges, or its removal would disconnect the graph into two
components, each containing external vertices. This is compatible with the notion of irreducibility
defined in section \ref{sec:gener-perm}:
\begin{prop}\label{prop:polytope-dimension}
Let $G$ be a connected Feynman graph with generically euclidean kinematics. The Feynman polytope
$P_{G}$ is an irreducible generalized permutahedron if and only if $G$ is s-irreducible.
\end{prop}
\begin{proof}
Suppose first that $G$ is s-irreducible. The generalized permutahedron $P_{G}$ is reducible if and
only if it has dimension less than $|E_{G}|-1$. By Prop. \ref{prop:factor}, this is equivalent to
$R^{\psi}_{G|\gamma}=R^{\Phi}_{G|\gamma}=0$ for some $\gamma\subsetneq G$. Then $\gamma$ must be a
union of 1VI-components. If $\gamma$ were mass-momentum spanning, then its complement would be a
union of 1VI-components with no kinematics and $G$ would not be s-irreducible. For $\gamma$ not
mass-momentum spanning, Prop. \ref{prop:factor} gives
\begin{equation*}
R^{\Phi}_{G|\gamma} = R_{G|\gamma}^{\varphi} + \psi_{G|\gamma}\left( \sum_{e\in E^{M}_{G}\cap E_{\gamma}}m^{2}_{e}\alpha_{e} \right).
\end{equation*}
Since every 1VI component of $\gamma$ contains non-trivial kinematics, we must have either
$R^{\varphi}_{G|\gamma}\neq 0$ or $ \sum_{e\in E^{M}_{G}\cap E_{\gamma}}m^{2}_{e}\alpha_{e}\neq
0$, so that $R^{\Phi}_{G|\gamma}$ can not vanish.
On the other hand, suppose $\gamma\subsetneq G$ is a 1VI component with no kinematics. Then
$R^{\psi}_{G|\gamma}=R^{\varphi}_{G|\gamma}=0$ and the above formula shows
$R^{\Phi}_{G|\gamma}=0$. Hence $P_{G}$ is reducible.
\end{proof}
Now suppose $G$ is s-irreducible. Let $\mathcal{F}_{G}$ denote the set of all edge subgraphs
$\gamma\subsetneq G$, such that $\gamma$ and $G/\gamma$ are both $s$-irreducible, when given the
kinematics described in the beginning of this section. $\mathcal{F}_{G}$ is the disjoint union of
the two subsets
\begin{align*}
\mathcal{S}_{G} &=\{\gamma\subsetneq G \ \rvert\ \gamma \text{ is $s$-irreducible and m.m., } G/\gamma \text{ is irreducible} \} \\
\mathcal{H}_{G} &=\{\gamma\subsetneq G \ \rvert\ \gamma \text{ is irreducible and not m.m., } G/\gamma \text{ is }s\text{-irreducible} \}.
\end{align*}
By Cor. \ref{col:supermodular-facets}, these are exactly the facets of $P_{G}$.
\begin{col}
Let $G$ be an $s$-irreducible Feynman graph. Then the polytope $P_{G}$ has the facet presentation
\begin{align*}
P_{G} = \{\langle m, e^{E_{G}} \rangle = 2h^{1}(G)+1\}\cap\bigcap_{\gamma\in \mathcal{F}_{G}} \{\langle m, e^{\gamma} \rangle \ge 2h^{1}(\gamma)+\delta^{mm}_{\gamma}\}.
\end{align*}
\end{col}
\begin{remark}
In the terminology of \cite{Speer:1975dc}, $s$-irreducible, mass-momentum spanning subgraphs
$\gamma\subset G$ are called links and a subgraph $\gamma$ whose quotient $G/\gamma$ is
$s$-irreducible is called saturated.
\end{remark}
\paragraph{Building sets and sectors.}
We continue to assume that $G$ is an s-irreducible Feynman graph. We want to construct smooth
refinements of the normal fan $\Sigma_{P_{G}}$. These correspond to sector decompositions by
Corollary \ref{col:sec-decomp}. First let $\Sigma_{Hepp}$ be the normal fan of the permutahedron
$\pi_{E_{G}}$ on the set of edges $E_{G}$. By Theorem \ref{thm:feynman-supermodular} we have the
following.
\begin{prop}
The fan $\Sigma_{Hepp}$ is a smooth refinement of $\Sigma_{P_{G}}$.
\end{prop}
Each maximal cone $\sigma\in\Sigma_{Hepp}$ is given by a complete flag $\{\emptyset=I_{0}\subsetneq
\ldots \subsetneq I_{n}=E_{G}\}$, or equivalently by a total order $E_{G}=\{i_{1}<\ldots<i_{n}\}$. The
coordinates of $\sigma$ are then defined by
\begin{equation*}
\alpha_{i} = \prod_{j\le i}x_{j}.
\end{equation*}
The corresponding sectors are the classical Hepp sectors. Thus they are easy to describe, but their
number grows superexponentially with the number of edges.
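As an illustration of how a single Hepp sector is parametrized (the ordering conventions below are our own), one can verify numerically that the monomial substitution maps the unit cube onto the ordered region and is invertible:

```python
import random

# Illustrative sketch (our own ordering conventions): the substitution
# alpha_k = x_1 * ... * x_k maps the cube [0,1]^n onto the Hepp sector
# {1 >= alpha_1 >= ... >= alpha_n >= 0} of the chosen total order, and
# it is inverted by x_k = alpha_k / alpha_{k-1}.
def hepp_alpha(x):
    alpha, prod = [], 1.0
    for xi in x:
        prod *= xi
        alpha.append(prod)
    return alpha

random.seed(0)
n = 5
for _ in range(1000):
    x = [random.random() for _ in range(n)]
    alpha = hepp_alpha(x)
    assert all(a >= b for a, b in zip(alpha, alpha[1:]))  # ordered sector
    y = [alpha[0]] + [alpha[k] / alpha[k - 1] for k in range(1, n)]
    assert all(abs(a - b) < 1e-9 for a, b in zip(x, y))   # inverse map
```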
Let us apply the theory of section \ref{sec:gener-perm} to construct more economical refinements.
First consider the subset system
\begin{equation*}
\mathcal{G}_{s} = \{E_{\gamma} \ \rvert\ \gamma\subsetneq G \text{ is s-irreducible}\}.
\end{equation*}
A special case of Prop. \ref{prop:supermodular-wonderful-refinement} then gives
\begin{prop}
The set $\mathcal{G}_{s}$ is a building set. The corresponding fan $\Sigma_{\mathcal{G}_{s}}$ is a
smooth refinement of $\Sigma_{P_{G}}$.
\end{prop}
\begin{remark}
The sectors corresponding to $\Sigma_{\mathcal{G}_{s}}$ are the Smirnov-Speer sectors considered in
(\cite{Smirnov_2012}, \cite{Smirnov_2009}).
\end{remark}
Another possibility was recently introduced in \cite{Brown_2017}. Let us call a subgraph
$\gamma\subseteq G$ \emph{motic} if
\begin{equation*}
s_{G}(\gamma \backslash i) < s_{G}(\gamma)
\end{equation*}
for all edges $i\in E_{\gamma}$, i.e. deleting an edge either drops the loop number, or destroys the
property of being mass-momentum spanning. Note that for massive Feynman graphs, the motic subgraphs
are exactly the disjoint unions of one-particle irreducible (1PI) graphs. Let $B_{motic}$ be the set
of motic subgraphs and
\begin{equation*}
\mathcal{G}_{motic} = B_{motic}\cup\{\{i\} \ \rvert\ i\in E_{G}\}.
\end{equation*}
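For the fully massive case, where motic subgraphs are exactly the disjoint unions of 1PI graphs, the motic condition for a proper subgraph reduces to the requirement that deleting any edge lowers $h^{1}$, i.e. that the subgraph has no bridges. A small illustrative check (the example graphs are hypothetical):

```python
# Sketch for fully massive graphs: a proper subgraph is motic iff deleting
# any edge lowers h1, i.e. iff it has no bridges (disjoint union of 1PI).
def h1(edges):
    verts = {v for e in edges for v in e}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    return len(edges) - len(verts) + len({find(v) for v in verts})

def is_motic_massive(edges):
    return all(h1([e for j, e in enumerate(edges) if j != i]) < h1(edges)
               for i in range(len(edges)))

triangle = [(0, 1), (1, 2), (0, 2)]
assert is_motic_massive(triangle)                 # 1PI: no bridges
assert not is_motic_massive(triangle + [(2, 3)])  # (2,3) is a bridge
```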
\begin{prop}
The set $\mathcal{G}_{motic}$ is a building set and the corresponding fan $\Sigma_{motic}$ is a
smooth refinement of $\Sigma_{P_{G}}$.
\end{prop}
\begin{proof}
By \cite[Thm. 3.6]{Brown_2017}, the union of two motic subgraphs is again motic. Hence
$\mathcal{G}_{motic}$ is a building set. Since $s_{G}(\gamma \backslash i)=s_{G}(\gamma)$ implies
that $s_{\gamma}=s_{G}\vert_{\gamma}$ is reducible, we must have $\mathcal{G}_{s}\subseteq
\mathcal{G}_{motic}$. Example \ref{exam:building-set} then shows that $\Sigma_{motic}$ refines
$\Sigma_{\mathcal{G}_{s}}$ and hence $\Sigma_{P_{G}}$.
\end{proof}
\begin{remark}
The results of section \ref{sec:TV:iterblowup} show that the toric variety associated to
$\Sigma_{motic}$ is $P^{B_{motic}}$, the iterated blowup constructed by Brown in
\cite{Brown_2017}. Its 1PI variant was earlier introduced by Bloch-Esnault-Kreimer
(\cite{Bloch_2006}).
\end{remark}
Let us also mention the original construction of Speer \cite{Speer:1975dc}. To describe his sectors,
we identify the set $\mathcal{S}_{G}$ of mass-momentum spanning facets with the corresponding
quotient graphs:
\begin{equation*}
\mathcal{Q}_{G} = \{q = G/\gamma \ \rvert\ \gamma\in\mathcal{S}_{G}\}.
\end{equation*}
For an irreducible Feynman graph $G$ with generic euclidean kinematics, Speer defined a collection
of sub- and quotient graphs $\mathcal{I}\subseteq \mathcal{H}_{G}\cup \mathcal{Q}_{G}\cup \{G\}$
called s-families.
By definition, these families satisfy $G\in\mathcal{I}$ and if
$\Gamma_{1},\Gamma_{2}\in\mathcal{I}$, then either
$E_{\Gamma_{1}}\subset E_{\Gamma_{2}}$, $E_{\Gamma_{2}}\subset E_{\Gamma_{1}}$ or $E_{\Gamma_{1}}\cap E_{\Gamma_{2}}=\emptyset$.
We refer to \cite{Speer:1975dc} for the (rather involved) complete definition. The key
results of \cite{Speer:1975dc} can be summarized as follows.
\begin{thm}
Each s-family $\mathcal{I}$ has the following properties:
\begin{enumerate}
\item For each $\Gamma\in\mathcal{I}$, the set
\begin{equation*}
E_{\Gamma} \backslash \bigcup_{\tilde\Gamma\in\mathcal{I}, E_{\tilde\Gamma}\subsetneq E_{\Gamma}}E_{\tilde\Gamma}
\end{equation*}
consists of precisely one element $\beta(\Gamma)$. The map $\beta:\mathcal{I}\rightarrow E_{G}$
is a bijection.
\item There is an admissible pair $(T,i)$ of $G$ adapted to $\mathcal{I}$, such that:
\begin{itemize}
\item $i\notin E_{\Gamma}$ for all $\Gamma\in\mathcal{I}$.
\item $T\cap \gamma$ is a maximal forest for each $\gamma\in\mathcal{H}_{G}\cap \mathcal{I}$.
\item $T/T\cap \gamma$ is a spanning tree for each $q=G/\gamma\in\mathcal{Q}_{G}\cap
\mathcal{I}$.
\end{itemize}
\end{enumerate}
To each s-family, associate the Speer sector $D_{\mathcal{I}}\subset P^{E_{G}}(\mathbb{R}^{+})$
defined by the inequalities
\begin{equation*}
\max_{\substack{\gamma\in\mathcal{H}_{G}\cap \mathcal{I}\\E_{\gamma}\subset E_{\Gamma}}}\alpha_{\beta(\gamma)}\le\alpha_{\beta(\Gamma)}\le \min_{\substack{q\in\mathcal{Q}_{G}\cap \mathcal{I}\\E_{q}\subset E_{\Gamma}}}\alpha_{\beta(q)}
\end{equation*}
for all $\gamma\in \mathcal{H}_{G}\cap \mathcal{I}$ and $q\in\mathcal{Q}_{G}\cap \mathcal{I}$.
Then the sectors $D_{\mathcal{I}}$ for different s-families cover $P^{E_{G}}(\mathbb{R}^{+})$ and
intersect in sets of measure zero.
\end{thm}
For a quotient graph $q=G/\gamma$, we define $e^{q}=-e^{E_{q}}$. Note that in $N_{E_{G}}$, we have
the equality
\begin{equation*}
[e^{\gamma}] = [e^{q}].
\end{equation*}
We can then rephrase Speer's result in terms of toric geometry as follows:
\begin{prop}
For $\mathcal{I}\subset\mathcal{F}_{G}\cup \{E_{G}\}$, define the cone
\begin{equation*}
\sigma_{\mathcal{I}} = \pos([e^{\Gamma}] \ \rvert\ \Gamma\in\mathcal{I}\backslash \{G\}).
\end{equation*}
Then the cones $\{\sigma_{\mathcal{I}} \ \rvert\ \mathcal{I} \text{ an s-family}\}$ are the
maximal cones of a smooth fan $\Sigma_{Speer}$, which refines the normal fan of $P_{G}$.
\end{prop}
\begin{proof}
Let us first prove that the cones $\sigma_{\mathcal{I}}$ are smooth. Since $G\in\mathcal{I}$, this
is equivalent to showing that
\begin{equation*}
\tilde \sigma_{\mathcal{I}} = \pos(e^{\Gamma} \ \rvert\ \Gamma\in\mathcal{I})
\end{equation*}
is a smooth cone of $\mathbb{Z}^{E_{G}}$, i.e. the vectors $(e^{\Gamma}\ \rvert\
\Gamma\in\mathcal{I})$ form a $\mathbb{Z}$-basis.
We can then adapt the proof of \cite[Prop. 2]{Feichtner_2004_2}: Choose a linear order
\begin{equation*}
\mathcal{I}=\{\Gamma_{1}<\ldots<\Gamma_{E-1}<\Gamma_{E}\},
\end{equation*}
refining the natural order $\Gamma\preceq\Gamma'\Leftrightarrow E_{\Gamma}\subset E_{\Gamma'}$ by
edge-inclusion and let $E_{G}=\{i_{1}<\ldots<i_{E}\}$ be the order induced by the bijection
$\beta:\mathcal{I}\rightarrow E_{G}$. By construction of $\beta$, we have
\begin{equation*}
e^{\Gamma_{1}}\wedge \ldots \wedge e^{\Gamma_{r}} = \pm e^{\Gamma_{1}}\wedge \ldots \wedge e^{\Gamma_{r-1}}\wedge e^{i_{r}},
\end{equation*}
and an obvious induction gives
\begin{equation*}
e^{\Gamma_{1}}\wedge \ldots \wedge e^{\Gamma_{E}} = \pm e^{i_{1}}\wedge \ldots \wedge e^{i_{E}},
\end{equation*}
which shows that the $e^{\Gamma_{i}}$ form a $\mathbb{Z}$-basis.
The coordinates $x_{\Gamma}$ of $\sigma_{\mathcal{I}}$ can then be described by
\begin{equation*}
\alpha_{i} = \prod_{i\in\gamma\in \mathcal{H}_{G}\cap \mathcal{I}}x_{\gamma}\prod_{i\notin q\in\mathcal{Q}_{G}\cap \mathcal{I}}x_{q}.
\end{equation*}
In these coordinates, the Speer sector corresponding to $\mathcal{I}$ is described by $0\le
x_{\Gamma}\le 1$ for $\Gamma\in\mathcal{I}$. Applying the logarithm map from Prop.
\ref{prop:sec-decom} then shows that the cones generate a smooth complete fan.
Let $(T,i)$ be an admissible pair, which is adapted to the s-family $\mathcal{I}$ as above. Note
that $T/T\cap \gamma$ is a spanning tree in $q=G/\gamma$ if and only if $\gamma\cap T$ is a maximal
forest. To $(T,i)$ corresponds the point $m=2e_{E_{G}\backslash T}+e_{i}$ of the Feynman polytope
$P_{G}$ and we have
\begin{align*}
\langle m, e^{\gamma} \rangle &= 2h^{1}(\gamma)=s_{G}(\gamma),\quad \gamma\in\mathcal{H}_{G}\cap\mathcal{I}\\
\langle m, e^{\gamma} \rangle &= 2h^{1}(\gamma)+1=s_{G}(\gamma),\quad G/\gamma\in\mathcal{Q}_{G}\cap\mathcal{I}.
\end{align*}
Since $[e^{G/\gamma}]=[e^{\gamma}]$, we obtain from Prop. \ref{prop:maximal-cones} that
$\Sigma_{Speer}$ refines $\Sigma_{P_{G}}$.
\end{proof}
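The unimodularity established in the above proof can be illustrated on a toy laminar family (chosen ad hoc): the matrix of indicator vectors of a nested family, in which each member contributes one new element, has determinant $\pm 1$.

```python
from itertools import permutations

# Illustrative sketch: determinant of the indicator matrix of a laminar
# family (members pairwise nested or disjoint, each adding a new element).
def det(m):
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= m[i][p[i]]
        total += sign * prod
    return total

family = [{0}, {0, 1}, {2}, {0, 1, 2, 3}]  # laminar, covers E = {0,1,2,3}
M = [[1 if i in F else 0 for i in range(4)] for F in family]
assert det(M) in (1, -1)
```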
The Speer sectors are very economical but they are quite difficult to understand. It would be useful
to have a generalization of Speer's construction which works for all generalized permutahedra. We
conjecture the following:
\begin{con}
Let $\overline{GP}_{E}$ be the set of generalized permutahedra on a set $E$ up to normal
equivalence and $\overline{SGP}_{E}$ the subset consisting of polytopes whose connected
components are simple. Then there is a natural map
\begin{equation*}
\overline{GP}_{E}\rightarrow \overline{SGP}_{E},\quad P\mapsto P^{s},
\end{equation*}
which commutes with contraction and restriction and such that $\Sigma_{P}(1)=\Sigma_{P^{s}}(1)$.
\end{con}
\begin{remark}
In the language of \cite{aguiar17:hopf}, we ask for a (necessarily idempotent) morphism of Hopf
monoids $\overline{GP}_{E}\rightarrow \overline{SGP}_{E}$. If we drop the condition that
$\Sigma_{P}(1)=\Sigma_{P^{s}}(1)$, then mapping $P=P(z)$ to the polytope of its building set of
irreducibility components $\mathcal{G}_{z}$ provides an example of such a morphism.
\end{remark}
\paragraph{Dimensional regularization.}
We can now use the results of section \ref{sec:gener-mell-transf} to define the dimensional
regularization of a Feynman integral
\begin{equation*}
I_{G}(\lambda,D,q,m) = \Gamma(\omega_{G})\int_{P^{E_{G}}(\mathbb{R}^{+})}\prod_{e\in E_{G}}\frac{\alpha_{e}^{\lambda_{e}-1}}{\Gamma(\lambda_{e})}\left( \frac{\psi_{G}}{\Phi_{G}} \right)^{\omega_{G}}\frac{\Omega_{P^{E_{G}}}}{\psi_{G}^{D/2}}.
\end{equation*}
Recall that
\begin{equation*}
\omega_{G} = \sum_{i\in E_{G}}\lambda_{i} - \frac{D}{2}h^{1}(G).
\end{equation*}
For $\gamma$ a sub- or quotient graph of $G$, we define similarly
\begin{equation*}
\omega_{\gamma} = \sum_{i\in E_{\gamma}}\lambda_{i} - \frac{D}{2}h^{1}(\gamma).
\end{equation*}
The convergence domain of $I_{G}$ for a graph with generic euclidean kinematics can then be
calculated as follows:
\begin{prop}
Suppose the graph $G$ has generic euclidean kinematics. Then the Feynman Integral
$I_{G}(\lambda,D,q,m)$ has convergence domain
\begin{equation*}
\Lambda_{G}=\{(\lambda,D)\in \mathbb{C}^{E_{G}}\times \mathbb{C}\ \rvert\
\omega_{\gamma} > 0 \text{ for } \gamma\in \mathcal{H}_{G},\ \omega_{G/\gamma}<0 \text{ for } \gamma\in \mathcal{S}_{G}\}.
\end{equation*}
This domain is nonempty if and only if $G$ is s-irreducible.
\end{prop}
\begin{proof}
Combining Theorem \ref{thm:mellin-convergence} with Proposition \ref{prop:polytope-dimension}
shows that $(\lambda,D)\in \Lambda_{G}$ if and only if
\begin{align*}
\langle \lambda, e^{\gamma} \rangle - \frac{D}{2}h^{1}(\gamma) &> 0, \quad \gamma\in\mathcal{H}_{G} \\
\langle \lambda, e^{\gamma} \rangle - \frac{D}{2}h^{1}(\gamma) - \omega_{G} &> 0, \quad \gamma\in\mathcal{S}_{G}.
\end{align*}
Since $\omega_{G}=\omega_{\gamma}+\omega_{G/\gamma}$ for every subgraph $\gamma$, this is
equivalent to
\begin{align*}
\omega_{\gamma} &> 0, \quad \gamma\in\mathcal{H}_{G} \\
- \omega_{G/\gamma} &> 0, \quad \gamma\in\mathcal{S}_{G}.
\end{align*}
This domain is nonempty if and only if $P_{G}$ has dimension $|E_{G}|-1$, which is equivalent to
s-irreducibility by Proposition \ref{prop:polytope-dimension}.
\end{proof}
\begin{remark}
That $\Lambda_{G}$ is nonempty for $G$ 1VI was originally proven by Speer \cite{Speer:1975dc}. The
extension to s-irreducible graphs seems to be well-known. Another proof can be found in
\cite{Smirnov_2012}.
\end{remark}
The proposition shows that singularities corresponding to mass-momentum spanning subgraphs are more
closely associated to the quotient graphs. It will then be convenient to set
\begin{align*}
\tilde \omega_{\gamma} =
\begin{cases}
-\omega_{G/\gamma}, \quad& \text{ if } \gamma \text{ is mass-momentum spanning}\\
\omega_{\gamma}, \quad& \text{ otherwise}
\end{cases}.
\end{align*}
In dimensional regularization, one keeps the analytic parameters $\lambda\in \mathbb{C}^{E_{G}}$
fixed (usually at integer values) and tries to expand the above integral in a Laurent series around
a point $D_{0}\in \mathbb{N}$ of the spacetime dimension. There are essentially three different
procedures to achieve this in the literature:
\begin{enumerate}
\item In the classical approach to dimensional regularization (\cite{Collins},
\cite{_t_Hooft_1972}), the $D$-dimensional euclidean space is embedded into an
infinite-dimensional space and the Feynman integral is split into a finite-dimensional subspace
containing all external momenta and its orthogonal complement. Formally integrating over this
infinite-dimensional complement gives an expression which is naturally analytic in the dimension
$D$.
\item In the sector decomposition approach (\cite{Smirnov_1983}, \cite{HEINRICH_2008},
\cite{Bogner_2008}), one decomposes the integration domain into cubical sectors as in section
\ref{sec:sect-decomp}. The $\epsilon$-expansion is then explicitly computed in each sector by a
Taylor subtraction.
\item In the analytic continuation approach (\cite{Panzer:2015ida}, \cite{von_Manteuffel_2015}),
one applies the integration by parts procedure of section \ref{sec:gener-mell-transf} to
analytically continue the integrals into the domain of absolute convergence.
\end{enumerate}
To our knowledge, there is no mathematically rigorous construction of the first approach. On the other
hand, the second and third fit very naturally into the framework we have developed so far.
Let us start with the sector decomposition approach. Let $\Sigma$ denote one of the smooth
refinements of $\Sigma_{P_{G}}$ constructed in the last section. From Corollary
\ref{col:sec-decomp}, we have the formula
\begin{equation*}
I_{G}(\lambda,D,q,m) = \frac{\Gamma(\omega_{G})}{\prod_{e}\Gamma(\lambda_{e})}\sum_{\sigma}\int_{[0,1]^{|E_{G}|-1}} x_{\sigma}^{\lambda_{\sigma}}
\left( \frac{\psi_{\sigma,G}(x_{\sigma})}{\Phi_{\sigma,G}(x_{\sigma})} \right)^{\omega_{G}}\psi_{\sigma,G}^{-D/2} \mathrm{d} x_{\sigma}.
\end{equation*}
The sum runs over smooth, maximal cones $\sigma=\pos(e^{\gamma}\ \rvert\ \gamma\in
\mathcal{I}_{\sigma})\in\Sigma(|E_{G}|-1)$, where $\mathcal{I}_{\sigma}\subset
2^{E_{G}}\backslash\{E_{G}\}$ is a collection of subgraphs
and $x_{\sigma}= (x_{\gamma} \ \rvert\ \gamma\in\mathcal{I}_{\sigma})$ are the associated
coordinates. The leading monomial $x_{\sigma}^{\lambda_{\sigma}}$ in the sector $\sigma$ is given by
\begin{equation*}
x^{\lambda_{\sigma}}_{\sigma} = \prod_{\gamma\in\mathcal{I}_{\sigma}}x_{\gamma}^{\tilde\omega_{\gamma}-1}.
\end{equation*}
The polynomials $\psi_{\sigma,G},\Phi_{\sigma,G}$ are obtained as
\begin{equation*}
\psi_{\sigma,G}(x_{\sigma}) = \Big(\prod_{\gamma\in\mathcal{I}_{\sigma}}x_{\gamma}^{-h^{1}(\gamma)}\Big)\psi_{G}(x_{\sigma}), \quad \Phi_{\sigma,G}(x_{\sigma}) = \Big(\prod_{\gamma\in\mathcal{I}_{\sigma}}x_{\gamma}^{-h^{1}(\gamma)-\delta^{mm}_{\gamma}}\Big)\Phi_{G}(x_{\sigma}),
\end{equation*}
and are regular and non-vanishing on the sector $[0,1]^{|E_{G}|-1}$ defined by $\sigma$. Fix
$\lambda^{0}\in \mathbb{Z}^{E_{G}}$ and $D_{0}\in \mathbb{Z}$ and let $\tilde
\omega_{\gamma}^{0}=\tilde \omega_{\gamma}\vert_{\lambda=\lambda^{0},D=D_{0}}$.
Define the multi-index $\alpha_{\sigma}\in \mathbb{N}^{\mathcal{I}_{\sigma}}$ by
$\alpha_{\sigma}(\gamma) = \max(0,-\tilde \omega_{\gamma}^{0}+1)$. Let
\begin{equation*}
F_{\sigma}(x_{\sigma}) = \left( \frac{\psi_{\sigma,G}(x_{\sigma})}{\Phi_{\sigma,G}(x_{\sigma})} \right)^{\omega_{G}}\psi_{\sigma,G}^{-D/2}
\end{equation*}
and consider its Taylor expansion in the $x_{\sigma}$ variables up to $\alpha_{\sigma}$:
\begin{equation*}
F_{\sigma}(x_{\sigma}) = \sum_{\substack{\beta\in \mathbb{N}^{\mathcal{I}_{\sigma}}\\\beta\preceq\alpha_{\sigma}}}\frac{\partial^{\beta} F_{\sigma}(0)}{\beta!}x^{\beta}_{\sigma} + \tilde F_{\sigma}(x_{\sigma}).
\end{equation*}
The integral over the polynomial part can be explicitly calculated as a rational function in
$\epsilon$:
\begin{equation*}
\int_{[0,1]^{|E_{G}|-1}}x_{\sigma}^{\lambda_{\sigma}}x^{\beta}_{\sigma}\frac{1}{\beta!}\partial^{\beta}F_{\sigma}(0)\mathrm{d} x_{\sigma} =
\frac{1}{\beta!}\partial^{\beta}F_{\sigma}(0)\prod_{\gamma\in\mathcal{I}_{\sigma}}\frac{1}{\tilde \omega_{\gamma}+\beta_{\gamma}}.
\end{equation*}
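The evaluation used here is the elementary identity $\int_{0}^{1}x^{a}\,\mathrm{d}x=1/(a+1)$ for $a>-1$, applied with $a=\tilde\omega_{\gamma}+\beta_{\gamma}-1$ in each variable. A quick numerical sanity check at arbitrary sample exponents:

```python
from math import isclose

# Numerical sketch: int_0^1 x^a dx = 1/(a+1) for a > -1, checked by
# midpoint quadrature at a few arbitrary sample exponents.
def integral(a, n=200000):
    dt = 1.0 / n
    return sum(((i + 0.5) * dt) ** a for i in range(n)) * dt

for omega, beta in ((1.5, 0), (1.5, 2), (0.25, 3)):
    a = omega + beta - 1
    assert isclose(integral(a), 1.0 / (omega + beta), rel_tol=1e-6)
```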
The integral over $\tilde F_{\sigma}(x_{\sigma})$ is analytic around $\epsilon=0$ by construction,
where $D=D_{0}+2\epsilon$, and can be expanded as a power series in $\epsilon$.
For Feynman graphs which are not s-irreducible, it is conventional to set $I_{G}=0$ in dimensional
regularization. On the other hand, we have seen that the sector decomposition approach still
provides an $\epsilon$-expansion in this case. Luckily, the two prescriptions agree.
\begin{prop}
If $G$ is not s-irreducible, then regularization by sector decomposition gives
$I_{G}(\lambda,D,q,m)=0$.
\end{prop}
\begin{proof}
Let us first show that the sector decomposition value is independent of the choice of refinement.
If $\tilde\Sigma$ is another smooth fan, which refines $\Sigma$ (and thus $\Sigma_{P_{G}}$), then
every maximal cone $\sigma\in\Sigma$ is a union of cones $\sigma_{k}\in\tilde\Sigma$, which only
overlap in common faces. Thus the sector corresponding to $\sigma$ is the union of the sectors
corresponding to the $\sigma_{k}$. By analytic continuation, the sum over the $\sigma_{k}$-sectors
must equal the contribution of the $\sigma$-sector. Thus the values of $I_{G}$ computed with
respect to $\Sigma$ and $\tilde \Sigma$ are the same.
If now $\Sigma'$ is any other smooth fan which refines $\Sigma_{P_{G}}$, then we can always find a
smooth fan which refines both $\Sigma$ and $\Sigma'$ and which gives the same value for $I_{G}$.
Thus the value of $I_{G}$ is independent of the choice of $\Sigma$.
Suppose $\gamma\subset G$ is a 1VI component with no external kinematics. Choose smooth
refinements $\Sigma_{1},\Sigma_{2}$ of the normal fans
$\Sigma_{P_{\gamma}},\Sigma_{P_{G/\gamma}}$. We have the exact sequence of lattices
\begin{center}
\begin{tikzcd}
0 \arrow{r} & \mathbb{Z}e^{\gamma} \arrow{r}{} & N_{E_G} \arrow{r}{} & N_{E_{\gamma}}\oplus
N_{E_{G/\gamma}} \arrow{r} & 0
\end{tikzcd}
\end{center}
which has a (non-canonical) splitting $N_{E_{G}}\cong \mathbb{Z}e^{\gamma}\oplus
N_{E_{\gamma}}\oplus N_{E_{G/\gamma}}$. Let $\Sigma_{0}=\Sigma_{P^{1}}$ be the fan on
$\mathbb{Z}e^{\gamma}$ with maximal cones $\sigma^{\pm}=\pm\mathbb{R}^{+}e^{\gamma}$. With this
splitting, the fan $\Sigma=\Sigma_{0}\times \Sigma_{1}\times \Sigma_{2}$ is a refinement of
$\Sigma_{P_{G}}$. Every maximal cone of $\Sigma$ is of the form $\sigma^{\pm}=(\pm \mathbb{R}^{+})\times
\sigma$, for $\sigma\in(\Sigma_{1}\times\Sigma_{2})(|E_{G}|-2)$. Let $x_{\gamma}$ be the variable
corresponding to $\mathbb{R}^{+}e^{\gamma}\in \Sigma_{0}$. In the variables of $\sigma^{\pm}$, the
integrand takes the form
\begin{equation*}
x_{\gamma}^{\pm \tilde\omega_{\gamma}-1}x_{\sigma}^{\lambda_{\sigma}}\left( \frac{\psi_{\sigma,G}(x_{\sigma})}{\Phi_{\sigma,G}(x_{\sigma})} \right)^{\omega_{G}}\psi_{\sigma,G}^{-D/2}
= x_{\gamma}^{\pm \tilde\omega_{\gamma}-1}x_{\sigma}^{\lambda_{\sigma}}F_{\sigma}(x_{\sigma}),
\end{equation*}
where $x_{\sigma}$ denotes the variables of the cone $\sigma\in\Sigma_{1}\times\Sigma_{2}$.
Crucially, the function $F_{\sigma}(x_{\sigma})$ does not depend on $x_{\gamma}$.
Let $I_{\sigma}=\int_{[0,1]^{|E_{G}|-2}}x_{\sigma}^{\lambda_{\sigma}}F_{\sigma}(x_{\sigma})\mathrm{d} x_{\sigma}$. Then
we get
\begin{align*}
\frac{\prod_{e}\Gamma(\lambda_{e})}{\Gamma(\omega_{G})}I_{G}(\lambda,D,q,m)
&= \sum_{\sigma^{\pm}}I_{\sigma}\int_{[0,1]}x_{\gamma}^{\pm\tilde\omega_{\gamma}-1}\mathrm{d} x_{\gamma} \\
&= \sum_{\sigma}I_{\sigma}\left( \frac{1}{\tilde \omega_{\gamma}}-\frac{1}{\tilde\omega_{\gamma}} \right) = 0.
\end{align*}
\end{proof}
The sector decomposition approach has the downside that the integrals over the different sectors
usually lead to analytic functions which are much more complicated than their sum $I_{G}$. For this
reason, the recent articles (\cite{von_Manteuffel_2015}, \cite{Panzer:2015ida}) advocate calculating
the $\epsilon$-expansion by the integration by parts procedure described in section
\ref{sec:analyt-dimens-regul}. First let us combine Theorem \ref{thm:meromorphic-cont} with our
results on the Feynman polytope $P_{G}$.
\begin{thm}
If $G$ is s-irreducible, then the amplitude $I_{G}$ can be expressed as
\begin{equation*}
I_{G}(\lambda,D,q,m) = \omega_{G}\left( \prod_{\gamma\in\mathcal{F}_{G}}\Gamma(\tilde \omega_{\gamma}) \right)J_{G}(\lambda,D,q,m),
\end{equation*}
where $J_{G}$ is analytic for all $(\lambda,D)\in \mathbb{C}^{E_{G}}\times \mathbb{C}$ and
external momenta and masses $(q,m)$ satisfying the inequalities of definition
\ref{defin:generic-kinematics}.
\end{thm}
We can describe the analytic continuation more concretely as follows. Let again
$a(\gamma)=\max(0,-\tilde \omega^{0}_{\gamma}+1)$. For each $\gamma\in\mathcal{F}_{G}$, we integrate
by parts $a(\gamma)$-times and obtain an expression of the form
\begin{equation*}
I(\lambda^{0},D^{0}+2\epsilon,q,m) = \sum_{\beta}L_{\beta}(\epsilon)I(\lambda^{\beta},D^{\beta}+2\epsilon,q,m):= \sum_{\beta}L_{\beta}(\epsilon)I_{\beta}(\epsilon)
\end{equation*}
where $(\lambda^{\beta},D^{\beta})\in \mathbb{Z}^{E_{G}}\times\mathbb{Z}$ are shifted values of the
analytic parameters and dimension and $L_{\beta}(\epsilon)$ are rational functions in $\epsilon$
depending polynomially on the external kinematics. By construction, each $I_{\beta}$ is now analytic
in a neighbourhood of $\epsilon=0$.
\begin{remark}
The authors of \cite{von_Manteuffel_2015} remark that the partial integrations are easy to
calculate, but the number of terms in the above sum can grow very rapidly. Note that the result
depends on the order of partial integrations in general. The proof of Theorem
\ref{thm:meromorphic-cont} suggests the following naive algorithm: At each stage, choose the next
direction such that the Newton polytopes of the Laurent monomials appearing in the numerator are
as small as possible.
It is known that the space of integrals of the above form is finite-dimensional
(\cite{Smirnov_2010}) and the above sum can be considerably simplified. But reducing a given
integral to a basis of so-called ``Master'' integrals is quite difficult (see e.g.
\cite{LAPORTA_2000},\cite{Chetyrkin_1981} for the classical IBP technique). We refer to the recent
article (\cite{bitoun17:feynm}) for a $D$-module approach which is quite close to our toric
viewpoint.
\end{remark}
We can now express the $\epsilon$-expansion of $I_{G}$ in terms of the homogeneous coordinates of
$X_{\Sigma}$ as follows.
\begin{thm}
Let $G$ be s-irreducible and $\Sigma$ a smooth refinement of $\Sigma_{P_{G}}$. The functions
$I_{\beta}(\epsilon)$ have the series expansion
\begin{align*}
I_{\beta}(\epsilon) = \sum_{k_{1},k_{2}=0}^{\infty}\frac{h^{1}(G)^{k_{2}}}{k_{1}!k_{2}!}\epsilon^{k_{1}+ k_{2}}
\int_{X_{\Sigma}(\mathbb{R}^{+})}x^{\lambda^{\beta}}\left( \frac{\psi_{G}}{\Phi_{G}} \right)^{\omega^{\beta}_{G}}\psi_{G}^{-D^{\beta}/2}
\log^{k_{1}}\left( \psi_{G} \right)\log^{k_{2}}\left( \frac{\psi_{G}}{\Phi_{G}} \right)\Omega_{X_{\Sigma}},
\end{align*}
where $\omega^{\beta}_{G} = \omega_{G}\vert_{\lambda=\lambda^{\beta},D=D^{\beta}}$.
\end{thm}
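The structure of this expansion reflects the elementary series $h^{\epsilon}=\sum_{k\geq 0}\epsilon^{k}\log^{k}(h)/k!$ applied to the $\epsilon$-dependent powers of $\psi_{G}$ and $\psi_{G}/\Phi_{G}$ under the integral sign. A numerical sanity check of this mechanism (not of the theorem itself), in Python:

```python
import math

def power_series(h, eps, terms=40):
    # h**eps expanded as sum_k eps^k * log(h)^k / k!
    return sum((eps * math.log(h)) ** k / math.factorial(k) for k in range(terms))

print(power_series(2.5, 0.3), 2.5 ** 0.3)  # the two values agree
```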
\begin{remark}
Suppose the masses and momenta are generically Euclidean and \emph{rational}. We can write the
logarithmic powers appearing above as
\begin{equation*}
\log^{k}(h(x))=\int_{[0,1]^{k}}\prod_{i=1}^{k}\frac{h(x)-1}{(h(x)-1)t_{i}+1}\mathrm{d} t_{i}.
\end{equation*}
Inserting this relation into the above expression for $I_{\beta}$ shows that the coefficients of
the $\epsilon$-expansion are then periods in the sense of Kontsevich-Zagier
(\cite{Kontsevich_2001},\cite{MR3618276}), a fact that was first proven by Bogner and Weinzierl
(\cite{Bogner_2009}) using sector decompositions.
\end{remark}
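Since the integrand in the representation of $\log^{k}$ above factorises over the $t_{i}$, with each one-dimensional factor integrating to $\log h(x)$, the identity is easy to verify numerically; a sketch with a placeholder constant value for $h(x)$:

```python
import math

def log_power(h, k, n=20000):
    # Each t_i-integral is independent: int_0^1 (h-1)/((h-1)t+1) dt = log(h),
    # so the k-fold integral over [0,1]^k is the k-th power of the 1-D integral.
    one_dim = 0.0
    for i in range(n):            # midpoint rule on [0, 1]
        t = (i + 0.5) / n
        one_dim += (h - 1.0) / ((h - 1.0) * t + 1.0)
    return (one_dim / n) ** k

print(log_power(3.0, 2))  # ~ log(3)**2
```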
\section{Introduction}
The study of meson-baryon scattering is very important for understanding
the properties of hadron resonances.
Because hadron resonances are short-lived and decay immediately through the strong interaction,
they appear only in scattering processes, and
one can deduce their properties only from the investigation of
those processes.
Describing the scattering amplitude is therefore one of the first steps in
investigating hadronic resonances.
Once one obtains realistic scattering amplitudes reproducing the scattering cross sections
in terms of analytic functions, one can perform analytic continuation
to the complex energy plane
and obtain the properties of the resonances, such as
their masses, widths and coupling strengths.
One theoretical tool for describing the scattering amplitude analytically is
chiral effective theory, in which
the low energy theorems of chiral symmetry constrain the hadronic interactions.
Chiral perturbation theory describes scattering amplitudes for the lowest channels, while
some unitarization procedure is necessary when one encounters resonances and open channels,
where hadronic dynamics plays an important role. For instance, chiral perturbation theory works
well for $\pi N$ scattering at low energies, while, for the $\bar KN$ channel, since the
$\Lambda(1405)$ resonance is located in the $I=0$ channel below the threshold and the
$\pi\Sigma$ and $\pi\Lambda$ channels are open, one needs to unitarize the amplitude and
take coupled channels into account.
In this article, we reexamine the elastic $KN$ scattering amplitude at low energies,
$p_{\rm lab}<800$ MeV/c, based on the chiral unitary approach,
and study the possibility of an $S=+1$ exotic resonance in the $I=0$ channel.
Baryons with strangeness $S=+1$ are so-called exotic hadrons, because
their quantum numbers cannot be described by three constituent quarks.
An $S=+1$ baryon requires at least one anti-strange quark, and four quarks are then needed
to compensate the negative baryon number of the anti-strange quark
so that the total baryon number is $+1$.
Thus, the minimal quark content is $uudd\bar{s}$ for charge $Q=+1$.
Although there is no reason in quantum chromodynamics
to forbid the existence of such states,
the experimental evidence for $S=+1$ baryons is not well established.
The $K^{+}N$ scattering amplitudes at low energies have been studied for a long time.
A comprehensive review can be found in Ref.~\cite{dover1982}.
There are three $K^{+}N$ amplitudes, $K^{+}p \to K^{+}p$, $K^{+}n \to K^{+}n$ and $K^{+}n \to K^{0}p$,
and isospin symmetry reduces them to two independent amplitudes for $I=0$ and $I=1$.
The $K^{+}p$ amplitude can be observed directly in $K^{+}p \to K^{+}p$ scattering experiments
and provides the $I=1$ amplitude,
while for the $K^{+}n$ amplitudes one needs nuclear targets, such as deuterium,
and the $I=0$ amplitude is extracted with the help of the $I=1$ amplitude.
It is known that
the $K^{+}N$ scattering is almost elastic for $p_{\rm lab} < 800$ MeV/c and inelastic contributions
are not significant~\cite{Bland:1969cb}. At low energies, the $K^{+}p$ scattering is described
by the $S$-wave~\cite{Goldhaber:1962zz}. In addition, the differential cross section of low energy $K^{+}p$ scattering
shows constructive interference between the Coulomb and strong interactions at very forward angles,
which implies that the low energy $K^{+}p$ interaction is repulsive~\cite{Goldhaber:1962zz,cameron1974}.
In contrast, the $I=0$ amplitude is more ambiguous.
In Refs.~\cite{Slater:1961zz,PhysRev.134.B1111},
it was shown that the $I=0$ scattering amplitude requires a $P$-wave contribution
to reproduce the $K^{+}d$ scattering up to $p_{\rm lab} < 500$ MeV/c.
The phase shift analysis up to $1.5$ GeV/c in Ref.~\cite{Giacomelli:1974az} found several solutions,
one of which implies that the low energy scattering is dominated by the $S$-wave,
while another reproduces the amplitude mainly by the $P$-wave.
The phase shift analyses with new data performed by Refs.~\cite{Sakitt:1975hu,Sakitt:1976ny,Glasser:1977xs}
supported the latter $P$-wave solution.
The analysis carried out by Ref.~\cite{martin1975} treated
both $I=0$ and $I=1$ amplitudes at the same time,
and found that the $P$-wave contribution was significant for the low energy $I=0$ amplitude.
The search for an $S=+1$ resonance has been carried out in the past.
In earlier studies,
a possible broad $S=+1$ resonance $Z^{\ast}$ in $KN$ scattering with $I=0$
was discussed~\cite{Cool:1966zz,Tyson:1967zz,bugg1968,abrams1969,Giacomelli:1972uj,wilson1972,Giacomelli:1973ed,carroll1973,Giacomelli:1974az}.
In Ref.~\cite{abrams1969}, it was pointed out that the $K^{+}N$ total cross
sections \cite{Cool:1966zz} and $K^{-}$ photoproduction \cite{Tyson:1967zz} showed
two bump structures which could have arisen from possible $KN$ resonances with $I=0$.
Although the phase shift analysis by Martin~\cite{martin1975} found that there were
no significant resonances in the partial wave amplitude, the Argand diagram suggested
that there could be some broad resonances appearing in $P_{01}$ and $D_{03}$ \cite{Lea:1968,Watts:1980qs,Robertson:1980ma}.
These resonances were reported as broad resonances above
the energies where the inelastic contributions start to be significant.
The studies of bump structures in cross sections and of the behavior of the Argand diagrams
are not sufficient to establish the existence of resonance states. One of the promising ways
is to analyze the scattering amplitude as an analytic function and to carry out analytic continuation
of the amplitude into the complex energy plane, where resonance states are expressed as poles of the
scattering amplitude.
Another kind of resonance with $S=+1$ was suggested by the LEPS collaboration
in photoproduction experiments~\cite{nakano}. They claimed a narrow resonance, $\Theta^{+}$,
with $S=+1$ and a mass of 1.5 GeV/${\rm c}^{2}$~\cite{nakano,Nakano:2008ee}.
This experiment was motivated by a theoretical work \cite{diakonov}
predicting a resonance with $S=+1$, a mass around 1540 MeV/${\rm c}^{2}$ and a narrow width $\Gamma < 15$ MeV.
Further studies in the chiral soliton model were performed in Refs. \cite{jaffe, weigel}.
The $\Theta^{+}$ resonance is obviously different from the previous $Z^{\ast}$ resonance.
Here we revisit the possibility of the existence of a $Z^{*}$-type
broad resonance with $S=+1$ and $I=0$
at energies below those where the inelastic contributions become significant.
As mentioned above, it is important to understand the resonance properties by
studying the scattering amplitude, especially in terms of an analytic function.
There are several approaches to describing baryon resonances.
We use the chiral unitary model, in which the scattering problem is solved in a simplified manner
by considering elastic unitarity of the two-body scattering, while the elementary interaction
is given by chiral perturbation theory; this approach was first suggested in Ref.~\cite{kaiser1995} and
developed in Ref.~\cite{oset1998} for the $S=-1$ channel,
and recent progress can be found in the review article~\cite{hyodo2012}.
The chiral unitary model imposes the unitarity condition by an infinite summation of specific diagrams
and describes the scattering amplitude as an analytic function.
Thus, it is easy to perform analytic continuation of the scattering amplitude
and to pin down the position of the resonance state in the complex energy plane.
One of the most successful examples of this approach is the finding of the
double pole structure of the $\Lambda(1405)$ and investigation of
its physical significance~\cite{Oller:2000fj,Jido:2003cb}.
It was reported in Refs.~\cite{Hyodo:2006yk,Hyodo:2006kg} that the Tomozawa-Weinberg interactions
in the exotic channels do not provide enough attraction to form two-body bound states
of a Nambu-Goldstone boson and a hadron. In fact, the Tomozawa-Weinberg term vanishes
for $I=0$ and $S=+1$; here the next-to-leading order contributions are responsible for the attraction
that may provide a broad resonance.
This paper is organized as follows.
In Sec. \ref{sec:sec2}, we construct $KN$ scattering amplitude using chiral perturbation theory
and chiral unitary model.
In Sec. \ref{sec:sec3}, we determine the parameters which reproduce $KN$ scattering data.
The total cross section and differential cross section data are compared with our results.
Using the constructed amplitude, we discuss the possibility of resonance with large width.
In Sec. \ref{sec:sec4}, we summarize the results of this paper.
\section{Formulation}
\label{sec:sec2}
For our theoretical investigation, we would like to represent the $KN$ scattering amplitude
as an analytic function of the center of mass energy $W$. Once we parametrize
the scattering amplitude as an analytic function, analytic continuation allows us
to extend the amplitude to the complex energy plane, where resonances
are represented as poles, and extract the properties of resonances,
such as mass, decay width and coupling strength.
For this purpose, we describe the $KN$ elastic scattering amplitude based on the chiral unitary approach by solving
Lippmann-Schwinger equation
\begin{equation}
T = V + VGT
\end{equation}
in a simplified way. In the chiral unitary approach,
the interaction kernel $V$ is given by chiral perturbation theory and
we restrict the intermediate state to the elastic channel.
The model parameters are determined so as to reproduce the observed $KN$ cross section.
\subsection{Scattering amplitude}
\label{sec:amp}
Let us denote the momenta of the kaon and nucleon in the initial (final) state
by $p_{1}$ and $p_{2}$ ($p_{3}$ and $p_{4}$), respectively. By Lorentz invariance,
the $T$-matrix of the $KN$ scattering can be written in terms of two Lorentz invariant functions,
$A(s,t)$ and $B(s,t)$, in general, as
\begin{equation}
T(s,t) = \bar u (\vec p_{4},s_{4}) \left[A(s,t) + \frac12(p \hspace{-5pt}/ _{1}+p \hspace{-5pt}/ _{3}) B(s,t) \right] u(\vec p_{2},s_{2})
\end{equation}
with the Mandelstam variables $s=(p_{1}+p_{2})^{2}$ and $t=(p_{1}-p_{3})^{2}$ and
the on-shell Dirac spinor $u(\vec p, s)$ for nucleon with momentum $p$ and spin $s$. The Dirac spinor $u(\vec p, s)$ is normalized by $\bar u(\vec p,s) u(\vec p,s^{\prime}) = 2M_{N} \delta_{ss^{\prime}}$ with the nucleon mass $M_{N}$.
For partial wave decomposition, we write the $T$-matrix in the center of mass system
with two functions $f$ and $g$ in terms of the spin-nonflip and spin-flip parts as
\begin{equation}
T(s,t)
= \chi^{\dagger}(\lambda_{4})\left[ f(W,\theta) - i (\vec \sigma \cdot \hat n) g(W,\theta)\right]
\chi(\lambda_{2})
\end{equation}
where $W$ and $\theta$ are the total energy and scattering angle (angle between $\vec p_{1}$ and $\vec p_{3}$)
in the center of mass system, respectively, $\hat n$ is the normal vector of the scattering plane
defined by $\hat n = (\vec p_{3} \times \vec p_{1})/ | \vec p_{3} \times \vec p_{1}|$, and
$\chi(\lambda)$ is Pauli spinor of nucleon with helicity $\lambda$.
The relation between $A, B$ and $f, g$ in the $KN$ elastic scattering is given by
\begin{eqnarray}
f(W,\theta) &=& (E_{N}+M_{N}) (A + \omega B )
+ k^{2} B
+ \frac{(E_{N}+M_{N}+\omega)B-A}{E_{N}+M_{N}} k^{2} \cos\theta, \\
g(W, \theta) &=& \frac{A- (E_{N} + M_{N} + \omega)B}{(E_{N}+M_{N})} k^{2} \sin\theta
\end{eqnarray}
with the kaon energy $\omega$, the 3-momentum in the center of mass system $k$ and
the nucleon energy $E_{N}$.
The amplitudes $f(W,\theta)$ and $g(W,\theta)$ can be decomposed into partial waves with
Legendre polynomials $P_{\ell}(x)$ as
\begin{eqnarray}
f(W,\theta) &=& \sum_{\ell=0}^{\infty} f_{\ell}(W) P_{\ell}(\cos\theta) ,\\
g(W,\theta) &=& \sum_{\ell=1}^{\infty} g_{\ell}(W) \sin\theta \frac{dP_{\ell}(\cos\theta)}{d\cos\theta}.
\end{eqnarray}
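Given $f(W,\theta)$, the coefficients $f_{\ell}$ can be recovered from the orthogonality of the Legendre polynomials, $f_{\ell} = \frac{2\ell+1}{2}\int_{-1}^{1}f(W,\theta)\,P_{\ell}(\cos\theta)\,d\cos\theta$. A minimal Python sketch of this projection (the amplitude used below is a synthetic placeholder, not a fitted one):

```python
def legendre(ell, x):
    # Bonnet recursion: (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}
    p0, p1 = 1.0, x
    if ell == 0:
        return p0
    for n in range(1, ell):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def project(f, ell, n=20000):
    # f_ell = (2 ell + 1)/2 * int_{-1}^{1} f(x) P_ell(x) dx, midpoint rule
    acc = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * 2.0 / n
        acc += f(x) * legendre(ell, x)
    return (2 * ell + 1) / 2.0 * acc * (2.0 / n)

# synthesize an amplitude with known coefficients and recover one of them
f = lambda x: 0.5 * legendre(0, x) + 1.5 * legendre(1, x) - 0.7 * legendre(2, x)
print(round(project(f, 1), 6))  # 1.5
```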
It is convenient to introduce the amplitude $T_{\ell \pm}$ having definite total angular momentum
$j=\ell \pm \frac12$ by
\begin{eqnarray}
f_{\ell}(W) &=& (\ell+1) T_{\ell+}(W) + \ell T_{\ell-}(W), \\
g_{\ell}(W) &=& T_{\ell+} (W) - T_{\ell-}(W),
\end{eqnarray}
or equivalently
\begin{eqnarray}
T_{\ell+}(W) &=& \frac{1}{2\ell+1} (f_{\ell} (W) + \ell g_{\ell}(W)), \\
T_{\ell-}(W) &=& \frac{1}{2\ell +1} (f_{\ell}(W) - (\ell+1) g_{\ell}(W)).
\end{eqnarray}
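The two pairs of relations between $(f_{\ell}, g_{\ell})$ and $(T_{\ell+}, T_{\ell-})$ are mutual inverses; a quick numerical round trip (with arbitrary complex test values) confirms this:

```python
def to_fg(ell, Tp, Tm):
    # f_ell = (ell+1) T_{ell+} + ell T_{ell-},  g_ell = T_{ell+} - T_{ell-}
    return (ell + 1) * Tp + ell * Tm, Tp - Tm

def to_T(ell, f, g):
    # inverse: T_{ell+} = (f + ell g)/(2 ell + 1), T_{ell-} = (f - (ell+1) g)/(2 ell + 1)
    return (f + ell * g) / (2 * ell + 1), (f - (ell + 1) * g) / (2 * ell + 1)

Tp, Tm = 0.3 + 0.1j, -0.2 + 0.05j   # arbitrary complex test amplitudes
f2, g2 = to_fg(2, Tp, Tm)
print(to_T(2, f2, g2))  # recovers (Tp, Tm)
```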
We also introduce the partial-wave decomposed interaction kernels $V_{\ell+}(W)$ and $V_{\ell-}(W)$
in the same way.
Here we show the $KN$ scattering amplitudes in the isospin channels, $T^{I=0}$ and $T^{I=1}$.
The amplitudes in the particle basis can be obtained as
\begin{eqnarray}
T_{K^{+}p \to K^{+}p} &=& T^{I=1}, \\
T_{K^{+}n \to K^{+}n} &=& \frac{1}{2} (T^{I=1} + T^{I=0}), \label{eq:Kn} \\
T_{K^{+}n \to K^{0}p} &=& \frac{1}{2} (T^{I=1} - T^{I=0}). \label{eq:CE}
\end{eqnarray}
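These relations imply the sum rule $T_{K^{+}n \to K^{+}n} + T_{K^{+}n \to K^{0}p} = T^{I=1}$. A minimal sketch of the basis change (the numerical amplitudes are placeholders):

```python
def particle_basis(T0, T1):
    # KN amplitudes in the particle basis from the isospin amplitudes T^{I=0}, T^{I=1}
    return {"K+p->K+p": T1,
            "K+n->K+n": 0.5 * (T1 + T0),
            "K+n->K0p": 0.5 * (T1 - T0)}

amps = particle_basis(0.4 - 0.1j, 1.2 + 0.3j)
print(amps["K+n->K+n"] + amps["K+n->K0p"])  # equals T^{I=1}
```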
Taking the spin average in the initial state and the spin sum in the final state
for the nucleon,
we calculate the differential cross section in the center of mass frame as
\begin{equation}
\frac{d \sigma}{d \Omega} = \frac{1}{64 \pi^{2} s} \left( |f(W,\theta)|^{2} + |g(W,\theta)|^{2} \right)
\end{equation}
and the total cross section by integrating the differential cross section in terms of the scattering angle as
\begin{equation}
\sigma = \frac{1}{32 \pi s} \int_{-1}^{1} d\cos\theta \left( |f(W,\theta)|^{2} + |g(W,\theta)|^{2} \right).
\end{equation}
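Since the amplitudes are azimuthally symmetric, $d\Omega = 2\pi\,d\cos\theta$, and integrating $d\sigma/d\Omega$ over the full solid angle reproduces the expression for $\sigma$ above. A numerical sketch with toy amplitudes (not fitted ones):

```python
import math

def dsigma_dOmega(f, g, s, x):
    # differential cross section at cos(theta) = x
    return (abs(f(x)) ** 2 + abs(g(x)) ** 2) / (64 * math.pi ** 2 * s)

def sigma_total(f, g, s, n=4000):
    # sigma obtained by integrating dsigma/dOmega over the solid angle,
    # dOmega = 2*pi*dcos(theta); midpoint rule over cos(theta)
    acc = sum(dsigma_dOmega(f, g, s, -1.0 + (i + 0.5) * 2.0 / n) for i in range(n))
    return 2 * math.pi * acc * (2.0 / n)

# toy amplitudes: constant spin-nonflip, sin(theta)-like spin-flip
f = lambda x: 1.0 + 0.2j
g = lambda x: 0.5 * math.sqrt(max(0.0, 1.0 - x * x))
s = 1500.0 ** 2
closed = (2 * abs(1.0 + 0.2j) ** 2 + 0.25 * (4.0 / 3.0)) / (32 * math.pi * s)
print(sigma_total(f, g, s), closed)  # the two agree
```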
\subsection{Chiral Lagrangian}
The leading order chiral Lagrangian for the baryon field $B$ reads
\begin{eqnarray}
{\cal L}_{MB}^{(1)}=
{\rm Tr} \left[\bar{B}(i D \hspace{-7pt} / \, - M_{0}) B\right]
- \frac{D}{2} {\rm Tr} \left(\bar{B} \gamma_{\mu} \gamma_{5} \{u^{\mu},B\} \right)
-\frac{F}{2} {\rm Tr} \left (\bar{B} \gamma_{\mu} \gamma_{5} [u^{\mu},B] \right)
\label{eq:MB_1},
\end{eqnarray}
where $M_{0}$ is the baryon mass at the chiral limit,
the baryon and meson fields, $B$ and $\Phi$, are written in the SU(3) matrix form
\begin{eqnarray}
B&=&\left(
\begin{array}{ccc}
\frac{\Sigma^{0}}{\sqrt{2}}+\frac{\Lambda}{\sqrt{6}} &\Sigma^{+}& p \\
\Sigma^{-} & -\frac{\Sigma^{0}}{\sqrt{2}}+\frac{\Lambda}{\sqrt{6}} & n \\
\Xi^{-}& \Xi^{0} & -\frac{2\Lambda}{\sqrt{6}}
\end{array}
\right), \\
\Phi&=&\left(
\begin{array}{ccc}
\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}} & \pi^{+} &K^{+} \\
\pi^{-} & -\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}} & K^{0} \\
K^{-}& \bar{K}^{0} & -\frac{2\eta}{\sqrt{6}}
\end{array}
\right).
\end{eqnarray}
We parametrize the chiral field $U$ in the CCWZ form as
\begin{equation}
U = \xi^{2} = \exp \left( i \frac{\sqrt 2}{f} \Phi\right)
\end{equation}
with a scale parameter $f$, which turns out to be identified with the meson decay constant in the leading order
calculation of chiral perturbation theory,
the covariant derivative for the baryon field is introduced as
\begin{equation}
D_{\mu} B= \partial_{\mu} B + [ \Gamma_{\mu}, B ],
\end{equation}
with the mesonic vector current
\begin{equation}
\Gamma_{\mu} = \frac{1}{2} ( \xi^{\dagger} \partial_{\mu} \xi + \xi \partial_{\mu} \xi^{\dagger} ),
\end{equation}
and the meson-baryon coupling is given through the mesonic axial vector current
\begin{equation}
u_{\mu} = i \left(\xi^{\dagger} \partial_{\mu} \xi - \xi \partial_{\mu} \xi^{\dagger} \right)
\end{equation}
with low energy constants $D$ and $F$. The parameters $D$ and $F$ are to be determined
by the axial couplings of the baryons at tree level.
The next-leading order of the chiral Lagrangian
is composed of several terms
\begin{align}
{\cal L}_{MB}^{(2)}
=&b_{D} {\rm Tr} \left(\bar{B} \{ \chi_{+}, B \} \right)
+b_{F} {\rm Tr} \left(\bar{B} [\chi_{+}, B] \right)
+b_{0} {\rm Tr} (\bar{B}B) {\rm Tr} (\chi_{+})
+d_{1} {\rm Tr} \left(\bar{B} \{ u_{\mu}, [u^{\mu}, B] \} \right)
\nonumber \\ &
+d_{2} {\rm Tr} \left(\bar{B} [u_{\mu}, [u^{\mu}, B]] \right)
+d_{3} {\rm Tr} \left(\bar{B} u_{\mu} \right) {\rm Tr} \left(u^{\mu} B \right)
+d_{4} {\rm Tr} \left(\bar{B} B \right) {\rm Tr} \left(u^{\mu} u_{\mu} \right)
\nonumber \\
&-\frac{g_{1}}{8M_{N}^{2}} {\rm Tr} \left( \bar B \{ u_{\mu}, [ u_{\nu}, \{D^{\mu},D^{\nu}\}B] \} \right)
-\frac{g_{2}}{8M_{N}^{2}} {\rm Tr} \left( \bar B [ u_{\mu}, [ u_{\nu}, \{D^{\mu},D^{\nu}\}B] ] \right)
\nonumber \\
&-\frac{g_{3}}{8M_{N}^{2}} {\rm Tr} (\bar B u_{\mu} ) {\rm Tr} ( u_{\nu} \{D^{\mu},D^{\nu}\}B)
-\frac{g_{4}}{8M_{N}^{2}} {\rm Tr} (\bar B\{D^{\mu},D^{\nu}\}B) {\rm Tr} (u_{\mu} u_{\nu})
\nonumber \\
&-\frac{h_{1}}{4} {\rm Tr} \left( \bar B [\gamma^{\mu},\gamma^{\nu}] B u_{\mu} u_{\nu} \right)
-\frac{h_{2}}{4} {\rm Tr} \left( \bar B [\gamma^{\mu},\gamma^{\nu}] u_{\mu} [u_{\nu}, B] \right)
\nonumber \\
&-\frac{h_{3}}{4} {\rm Tr} \left(\bar B [\gamma^{\mu},\gamma^{\nu}] u_{\mu} \{u_{\nu}, B\} \right)
-\frac{h_{4}}{4} {\rm Tr} (\bar B [\gamma^{\mu},\gamma^{\nu}] u_{\mu}) {\rm Tr} (u_{\nu} B) + {\rm h.c.}
\label{eq:MB_2}
\end{align}
where $b_{i}$, $d_{i}$, $g_{i}$ and $h_{i}$ are low energy constants.
The terms with $b_{i}$ and $d_{i}$ appear in the typical SU(3) chiral Lagrangians,
while the terms with $g_{i}$ and $h_{i}$ are introduced as an extension of the SU(2) chiral Lagrangian \cite{Bernard:1995dp,Fettes:2000gb}. (See also Ref.~\cite{Lutz:2001yb}.)
The scalar field $\chi_{+}$ is given by
\begin{equation}
\chi_{+} = 2 B_{0} \left (\xi {\cal M} \xi + \xi^{\dagger} {\cal M} \xi^{\dagger} \right),
\end{equation}
with the quark mass matrix
\begin{equation}
{\cal M}={\rm diag} \left(\hat{m}, \hat{m}, m_{s} \right),
\end{equation}
where $\hat m$ stands for the common mass of the $u$ and $d$ quarks, assuming isospin symmetry,
while $m_{s}$ is the strange quark mass.
The parameter $B_{0}$ is a positive constant related to the meson masses and always appears
together with the quark masses.
It is fixed by $B_{0}=M_{K}^{2}/(\hat{m}+m_{s})$, where $M_{K}$ is the kaon mass.
Certain combinations of the low energy constants
$b_{i}$, $d_{i}$, $g_{i}$ and $h_{i}$ are to be fixed
by the $KN$ scattering cross sections.
\subsection{Interaction kernel}
The tree level amplitude of the $KN$ scattering up to the next-to-leading order
is composed of three parts: the leading order contact term, the crossed hyperon Born terms,
and the next-to-leading order contact terms.
The leading order contact term, called the Tomozawa-Weinberg term,
is determined by the SU(3) group structure of the hadrons
without any low energy constants. It is known to be absent for the $KN$ channel with $I=0$:
\begin{eqnarray}
V^{I=0}_{\rm TW} = 0, \qquad
V^{I=1}_{\rm TW} = \frac{1}{2f_{K}^{2}}
\bar{u}(\vec{p}_4,s_4)(p \hspace{-5pt}/ _1+p \hspace{-5pt}/ _{3}) u(\vec{p}_2,s_2).
\end{eqnarray}
The corresponding invariant amplitudes read
\begin{eqnarray}
A^{I=0}_{\rm TW} = B^{I=0}_{\rm TW} = A^{I=1}_{\rm TW} = 0, \qquad
B^{I=1}_{\rm TW} = \frac{1}{f_{K}^{2}}.
\end{eqnarray}
For the Born term, we do not consider explicit baryonic states with strangeness $S=+1$.
The pentaquark $\Theta^{+}$ is a candidate for such a state, but it is known to have a
narrow width and a very weak coupling to $KN$. The hyperons with $S=-1$, $\Sigma$ and $\Lambda$,
contribute to the $KN$ amplitude as crossed Born terms.
With the chiral Lagrangian (\ref{eq:MB_1}), we obtain the crossed Born terms as
\begin{eqnarray}
V^{I=0}_{{\rm Born}}&=& -\frac{3}{4}\frac{(D-F)^{2}}{f^{2}_{K}}
\bar{u}(\vec{p}_4,s_4)p \hspace{-5pt}/ _1\gamma_5
\frac{M_{\Sigma} + (p \hspace{-5pt}/ _2 - p \hspace{-5pt}/ _3) }{M_{\Sigma}^{2} - (p_{2} - p_{3} )^{2} - i\epsilon}
p \hspace{-5pt}/ _3\gamma_5u(\vec{p}_2,s_2)
\nonumber\\
&&+\frac{1}{12} \frac{(3F+D)^{2}}{f^{2}_{K}}\bar{u}(\vec{p}_4 ,s_4)p \hspace{-5pt}/ _1\gamma_5
\frac{M_{\Lambda} + (p \hspace{-5pt}/ _2 - p \hspace{-5pt}/ _{3}) }{M_{\Lambda}^{2} - (p_{2} - p_{3} )^{2} - i\epsilon}
p \hspace{-5pt}/ _3\gamma_5u(\vec{p}_2,s_2), \label{eq:I02} \\
V^{I=1}_{\rm Born} &=& -\frac{1}{4} \frac{(D-F)^{2}}{f^{2}_{K}}
\bar{u}(\vec{p}_4,s_4)p \hspace{-5pt}/ _1\gamma_5
\frac{M_{\Sigma} + (p \hspace{-5pt}/ _2 - p \hspace{-5pt}/ _{3}) }{M_{\Sigma}^{2} - (p_{2} - p_{3} )^{2} - i\epsilon}
p \hspace{-5pt}/ _3\gamma_5 u(\vec{p}_2,s_2)
\nonumber\\
&&-\frac{1}{12}\frac{(3F+D)^{2}}{f^{2}_{K}}\bar{u}(\vec{p}_4 ,s_4)p \hspace{-5pt}/ _1\gamma_5
\frac{M_{\Lambda} + (p \hspace{-5pt}/ _2 - p \hspace{-5pt}/ _{3} ) }{M_{\Lambda}^{2} - (p_{2} - p_{3} )^{2} - i\epsilon}
p \hspace{-5pt}/ _3\gamma_5u(\vec{p}_2,s_2) \label{eq:I12}
\end{eqnarray}
with the $\Sigma$ mass $M_{\Sigma}$ and $\Lambda$ mass $M_{\Lambda}$.
The invariant amplitudes are written as
\begin{eqnarray}
A^{I=0}_{\rm Born} &=& \frac34 \frac{(D-F)^{2}}{f^{2}_{K}}
\frac{(M_{N}+M_{\Sigma})(M_{N}^{2}-u)}{u - M_{\Sigma}^{2}}
-\frac{1}{12} \frac{(3F+D)^{2}}{f^{2}_{K}}
\frac{(M_{N}+M_{\Lambda})(M_{N}^{2} - u)}{u - M_{\Lambda}^{2}},
\\
B^{I=0}_{\rm Born} &=& -\frac34 \frac{(D-F)^{2}}{f^{2}_{K}}
\frac{u+M_{N}^{2} + 2 M_{\Sigma} M_{N} }{u - M_{\Sigma}^{2}}
+\frac{1}{12} \frac{(3F+D)^{2}}{f^{2}_{K}}
\frac{u+M_{N}^{2} +2 M_{\Lambda} M_{N}}{u - M_{\Lambda}^{2}},\\
A^{I=1}_{\rm Born} &=& \frac14 \frac{(D-F)^{2}}{f^{2}_{K}}
\frac{(M_{N}+M_{\Sigma})(M_{N}^{2}-u)}{u - M_{\Sigma}^{2}}
+\frac{1}{12} \frac{(3F+D)^{2}}{f^{2}_{K}}
\frac{(M_{N}+M_{\Lambda})(M_{N}^{2} - u)}{u - M_{\Lambda}^{2}},
\quad \\
B^{I=1}_{\rm Born} &=& -\frac14 \frac{(D-F)^{2}}{f^{2}_{K}}
\frac{u+M_{N}^{2} + 2 M_{\Sigma} M_{N} }{u - M_{\Sigma}^{2}}
-\frac{1}{12} \frac{(3F+D)^{2}}{f^{2}_{K}}
\frac{u+M_{N}^{2} +2 M_{\Lambda} M_{N}}{u - M_{\Lambda}^{2}}
\end{eqnarray}
with Mandelstam variable $u = (p_{1}-p_{4})^{2} = 2M_{N}^{2}+2M_{K}^{2} - s -t $,
the nucleon mass $M_{N}$ and the kaon mass $M_{K}$.
The $KN$ invariant amplitudes at the next-to-leading order chiral perturbation theory
are calculated for each isospin channel as
\begin{eqnarray}
V^{I}_{\rm NLO}&=&
\left[
\frac{4B_0}{f^{2}_{K}}(\hat{m}+m_s)b^{I}
+\frac{2}{f^{2}_{K}} (p_1\cdot p_3) d^{I}
\right .\nonumber \\ && \quad \left.
+\frac{(p_{2}\cdot p_{1})(p_{2}\cdot p_{3}) + (p_{4}\cdot p_{1})(p_{4}\cdot p_{3})}{2M_{N}^{2} f_{K}^{2}} g^{I}
\right]
\bar{u}(\vec{p}_4,s_4)u(\vec{p}_2, s_2)
\nonumber \\ &&
- \frac{ h^{I}}{2f_{K}^{2}} p_{1}^{\mu} p_{3}^{\nu}\, \bar{u}(\vec{p}_4,s_4) [\gamma_{\mu}, \gamma_{\nu}]
u(\vec{p}_2, s_2),
\label{eq:I03}
\end{eqnarray}
and the corresponding invariant amplitudes $A$ and $B$ read
\begin{eqnarray}
A_{\rm NLO}^{I} &=& \frac{4B_0}{f^{2}_{K}}(\hat{m}+m_s)b^{I}
+\frac{2}{f^{2}_{K}} (p_1\cdot p_3) d^{I}
\nonumber \\ &&
+\frac{(p_{2}\cdot p_{1})(p_{2}\cdot p_{3}) + (p_{4}\cdot p_{1})(p_{4}\cdot p_{3})}{2M_{N}^{2} f_{K}^{2}} g^{I}
+ \frac{p_{1}\cdot(p_{2}+p_{4})}{f_{K}^{2}} h^{I}\\
B_{\rm NLO}^{I} &=& -\frac{2 M_{N}}{f_{K}^{2}} h^{I}.
\end{eqnarray}
In these equations, the parameters $b^{I}$, $d^{I}$, $g^{I}$ and $h^{I}$ are defined by
\begin{align}
b^{I=0} &= b_{0} - b_{F}, & b^{I=1} &= b_{0} + b_{D}, \\
d^{I=0} &= 2d_{1} + d_{3} - 2d_{4}, & d^{I=1} &= -2d_{2} - d_{3} - 2d_{4}, \\
g^{I=0} &= 2g_{1} + g_{3} - 2 g_{4}, & g^{I=1} &= -2g_{2} - g_{3} - 2 g_{4}, \\
h^{I=0} &= h_{1}+h_{2}+h_{3}+h_{4}, & h^{I=1} &= h_{1}-h_{2}-h_{3}-h_{4},
\end{align}
in terms of the low energy constants appearing in Lagrangian (\ref{eq:MB_2}).
We treat these combinations of the low energy constants as free parameters
to be adjusted to reproduce observed $KN$ cross sections.
\subsection{Unitarization}
Unitarization is performed in each partial wave amplitude~\cite{Jido:2002zk}.
Because the total angular momentum is a good quantum number, Lippmann-Schwinger equation is also
decomposed into partial waves as
\begin{equation}
T_{\ell \pm}^{I} = V_{\ell \pm}^{I} + V_{\ell \pm}^{I} G T_{\ell \pm}^{I}
\end{equation}
where we have assumed a non-relativistic Green's function and do not consider
so-called zig-zag diagrams, which mix the large and small components of the Dirac spinor.
Supposing that we take only the on-shell contribution of the interaction kernel
in the loop integral, we can solve Lippmann-Schwinger equation algebraically
\begin{equation}
\label{eq:less_amp}
T_{\ell \pm}^{I} = (1 - V_{\ell \pm}^{I}G)^{-1} V_{\ell \pm}^{I}
\end{equation}
\end{equation}
where the loop contribution $G$ for the $KN$ channel is given
as a function of the center of mass energy, $W$, by
\begin{equation}
G(W) = i\int \frac{d^{4} q}{(2\pi)^{4}} \frac{1}{(P-q)^{2} - M_{N}^{2} + i \epsilon}
\frac{1}{q^{2} - M_{K}^{2} + i \epsilon} .
\end{equation}
This integral can be
performed by the dimensional regularization as
\begin{eqnarray}
G(W)&=&\frac{1}{(4 \pi )^{2}}
\biggl \{ a(\mu)
+\ln \frac{M_{N}^{2}}{\mu^{2}}
+\frac{M_{K}^{2}-M_{N}^{2} +s}{2 s} \ln \frac{M_{K}^{2}}{M_{N}^{2}} \nonumber \\
&&+\frac{k}{\sqrt{s}} \Bigl [ \ln(s - (M_{N}^{2} - M_{K}^{2})
+ 2\sqrt{s} k) + \ln \left(s + \left(M_{N}^{2} - M_{K}^{2} \right) +2\sqrt{s} k \right) \nonumber \\
&&-\ln \left(-s + \left(M_{N}^{2} - M_{K}^{2} \right)
+2\sqrt{s} k \right)
-\ln\left(-s-\left(M_{N}^{2}-M_{K}^{2}\right)
+2\sqrt{s} k \right) \Bigl] \biggl \},
\label{eq:loop}
\end{eqnarray}
where $\mu$ is the scale parameter of the dimensional regularization and
$a(\mu)$ is the subtraction constant depending on $\mu$.
We remove the divergent part as part of the renormalization procedure, and
the subtraction constant is determined so as to reproduce experiments.
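The loop function (\ref{eq:loop}) and the algebraic unitarization (\ref{eq:less_amp}) are simple to implement numerically. The Python sketch below uses the isospin-averaged masses; the expression for the center of mass momentum $k$ and the test value of the kernel are our own assumptions, not values from the fit. It also checks that above threshold the formula gives ${\rm Im}\,G = -k/(8\pi W)$, so that the unitarized amplitude satisfies elastic unitarity, ${\rm Im}\,T = -\frac{k}{8\pi W}|T|^{2}$, for a real kernel $V$:

```python
import cmath, math

M_N, M_K = 938.9, 495.6   # isospin-averaged masses in MeV

def cm_momentum(s):
    # standard two-body center of mass momentum (our assumption, not from the text)
    return cmath.sqrt((s - (M_N + M_K) ** 2) * (s - (M_N - M_K) ** 2)) / (2 * cmath.sqrt(s))

def G(W, a=-1.150, mu=1000.0):
    # dimensionally regularised KN loop function, Eq. (loop)
    s = W * W + 0j
    k = cm_momentum(s)
    d = M_N ** 2 - M_K ** 2
    r = a + math.log(M_N ** 2 / mu ** 2) \
        + (M_K ** 2 - M_N ** 2 + s) / (2 * s) * math.log(M_K ** 2 / M_N ** 2)
    r += (k / cmath.sqrt(s)) * (cmath.log(s - d + 2 * cmath.sqrt(s) * k)
                                + cmath.log(s + d + 2 * cmath.sqrt(s) * k)
                                - cmath.log(-s + d + 2 * cmath.sqrt(s) * k)
                                - cmath.log(-s - d + 2 * cmath.sqrt(s) * k))
    return r / (4 * math.pi) ** 2

def unitarize(V, G_val):
    # algebraic solution T = (1 - V G)^{-1} V of the partial-wave LS equation
    return V / (1.0 - V * G_val)

W = 1600.0
k = cm_momentum(W * W + 0j).real
print(G(W).imag, -k / (8 * math.pi * W))   # numerically equal above threshold
T = unitarize(100.0, G(W))                 # arbitrary real test kernel (illustrative units)
print(T.imag, -k / (8 * math.pi * W) * abs(T) ** 2)   # elastic unitarity check
```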
For the interaction kernel, here we take the chiral perturbation amplitudes
calculated in the previous section up to the next-to-leading order as
\begin{equation}
V^{I} = V_{\rm TW}^{I} + V_{\rm Born}^{I} + V_{\rm NLO}^{I}
\end{equation}
and perform partial wave decomposition in the way explained in Sec.~\ref{sec:amp}.
\subsection{Coulomb correction}
For the $K^{+}p$ amplitude, we introduce the Coulomb correction
as done in Ref.~\cite{hashimoto}.
To the strong interaction part of the $K^{+}p$ scattering amplitude
calculated in the center of mass frame,
we add
the Coulomb amplitude
\begin{equation}
f_{C} = -\frac{\alpha}{2 k v \sin^{2}(\theta/2)} \exp\left[ - i \frac{\alpha}{v}
\ln \left(\sin^{2} \frac{\theta}{2} \right)\right]
\end{equation}
with the scattering angle $\theta$, the fine structure constant $\alpha$
and the $KN$ relative velocity $v$ defined by
\begin{equation}
v = \frac{ k (E_{K} + E_{p})}{E_{K}E_{p}},
\end{equation}
and multiply each partial wave amplitude by the Coulomb phase factor $e^{2i \Phi_{\ell}}$
with
\begin{equation}
\Phi_{\ell} = \sum_{n=1}^{\ell} \tan^{-1} \frac{\alpha}{n v}
\end{equation}
for $\ell >0$ ($\Phi_{0} = 0$) as
\begin{eqnarray}
f^{K^{+}p} &=& \sum_{\ell=0}^{\infty} \left[ (\ell+1) T_{\ell+}^{I=1}
+ \ell T_{\ell-}^{I=1} \right] e^{2i\Phi_{\ell}} P_{\ell}(\cos\theta)
- 8 \pi \sqrt s f_{C}, \\
g^{K^{+}p} &=& \sum_{\ell=1}^{\infty}
\left[T_{\ell+}^{I=1} - T_{\ell-}^{I=1}\right] e^{2i\Phi_{\ell}}
\sin\theta \frac{dP_{\ell}(\cos\theta)}{d\cos\theta} .
\end{eqnarray}
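The Coulomb correction formulas are closed-form and straightforward to code. A Python sketch (the kinematical input values below are purely illustrative):

```python
import cmath, math

ALPHA = 1.0 / 137.036   # fine structure constant

def coulomb_amplitude(k, v, theta):
    # f_C = -alpha/(2 k v sin^2(theta/2)) * exp(-i (alpha/v) ln sin^2(theta/2))
    s2 = math.sin(theta / 2.0) ** 2
    return -ALPHA / (2.0 * k * v * s2) * cmath.exp(-1j * (ALPHA / v) * math.log(s2))

def coulomb_phase(ell, v):
    # Phi_0 = 0; Phi_ell = sum_{n=1}^{ell} arctan(alpha/(n v))
    return sum(math.atan(ALPHA / (n * v)) for n in range(1, ell + 1))

def relative_velocity(k, E_K, E_p):
    # v = k (E_K + E_p) / (E_K E_p)
    return k * (E_K + E_p) / (E_K * E_p)

v = relative_velocity(300.0, 580.0, 990.0)   # illustrative MeV values
print(coulomb_phase(0, v), coulomb_phase(2, v))
print(abs(coulomb_amplitude(300.0, v, 0.5)))
```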
\section{Results}
\label{sec:sec3}
In the previous section, we constructed the unitarized $KN$ amplitudes using the chiral unitary model.
In this section, we carry out a $\chi^{2}$ fit of the unitarized amplitude to the experimental data
and determine the low-energy constants appearing at next-to-leading order.
We fix the subtraction constants for $I=0$ and $1$ at $a^{I=0,1} = -1.150$
with the regularization scale $\mu=1$ GeV,
in order to reduce the number of parameters in a situation where the experimental data have somewhat large errors
and show disagreements among different experiments.
A moderate change of the subtraction constants can be absorbed into the low-energy constants.
We assume isospin symmetry and use the isospin averaged masses for the hadrons.
We take the kaon decay constant as $f_{K} = (1.19 \pm 0.01)f_{\pi} = 110.0$ MeV where $f_{\pi}$ = 92.4 MeV.
The low-energy constants of the leading-order chiral Lagrangian, $D$ and $F$,
are already determined from the semileptonic hyperon beta decays reported in \cite{DandF} as
\begin{eqnarray}
D=0.80, \enspace F= 0.46.
\end{eqnarray}
The values of these parameters are summarized in Table~\ref{tab:masses}.
\begin{table}[t]
\begin{center}
\caption{The values of the fixed parameters. We take the isospin averaged masses. }
\label{tab:masses}
\begin{tabular}{ccccccc}
\hline
$M_{N}$ & $M_{K}$ & $M_{\Lambda}$ & $M_{\Sigma}$ & $f_{K}$ & $D$ & $F$ \\
\hline\hline
$938.9$ MeV & $495.6$ MeV & $1115.7$ MeV & $1193.2$ MeV & $110.0$ MeV & 0.80 & 0.46 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Determining the amplitude}
We determine the $KN$ amplitudes for $I=1$ and $I=0$
reproducing the experimental data by a $\chi^{2}$ fit of the low-energy constants, minimizing
the $\chi^{2}$ function
\begin{eqnarray}
\chi^{2} = \sum_{i}^{N} \left(\frac{y_{i} - f(x_{i})}{\sigma_{i}} \right)^{2},
\end{eqnarray}
where $y_{i}$, $f(x_{i})$, $\sigma_{i}$ and $N$ are the experimental data,
the theoretical values calculated with the parameters, the errors of the data and the number of data points, respectively.
In our analysis, we consider the partial waves up to the $D$-wave ($\ell=2$).
We restrict the energy region up to $p_{\rm lab} = 800$ MeV/c, where the inelastic contribution
such as pion production starts to be significant.
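The $\chi^{2}$ evaluation itself is elementary; a Python sketch (the data arrays below are placeholders, not the actual cross section data):

```python
def chi2(y, model, sigma):
    # chi^2 = sum_i ((y_i - f(x_i)) / sigma_i)^2
    return sum(((yi - fi) / si) ** 2 for yi, fi, si in zip(y, model, sigma))

y     = [12.1, 12.5, 12.8]   # placeholder cross sections (mb)
model = [12.0, 12.6, 12.7]   # placeholder theory values
sigma = [0.2, 0.2, 0.2]      # placeholder errors
print(chi2(y, model, sigma) / len(y))  # chi^2 per data point, ~ 0.25 here
```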
\begin{table}[t]
\begin{center}
\caption{The determined parameters for the $I=1$ and $I=0$ amplitudes. There are two
parameter sets,
Solutions 1 and 2, which differ in how they reproduce the experimental data (see text for details).
The values of the parameters $b^{I}, d^{I}, g^{I}$ and $h^{I}$ are shown in units of $10^{-3}\ {\rm MeV}^{-1}$.
The subtraction constants for $I=0$ and $I=1$ are fixed as $a^{I=0, 1}=-1.150$.}
\begin{tabular}{c c| D{.}{.}{4}D{.}{.}{4} }
\hline
& & \multicolumn{1}{c}{Solution 1} & \multicolumn{1}{c}{Solution 2} \\
\hline \hline
&$b^{I=1}$ &0.54 &0.30 \\
&$d^{I=1}$ &-0.29 &-0.24 \\
$I=1$ &$g^{I=1}$ &0.05 &0.72 \\
&$h^{I=1}$ &0.03 &1.05 \\
& $\chi^{2}/N$ &2.96 &2.97 \\
\hline
&$b^{I=0}$ &0.11 &-0.46 \\
&$d^{I=0}$ &0.33 &0.73 \\
$I=0$ &$g^{I=0}$ &-0.42 &0.56 \\
&$h^{I=0}$ &1.14 &-3.54 \\
& $\chi^{2}/N$ &4.54 &4.06 \\
\hline
\end{tabular}
\label{tab:parameter}
\end{center}
\end{table}
First of all, we determine the $I=1$ scattering amplitude from the $K^{+}p$
elastic scattering data, which are well measured with small
uncertainties and tightly constrain the $I=1$ parameters.
To fix the $I=1$ low-energy constants $b^{I=1}, d^{I=1}, g^{I=1}$ and $h^{I=1}$,
we use the $K^{+}p$ differential cross section between
$p_{{\rm lab}} = 145$ and 726 MeV/c \cite{cameron1974}
and the total cross section between $p_{{\rm lab}} = 145$ and 788 MeV/c
\cite{bugg1968, bowen1970, adams1971,bowen1973, carroll1973, cameron1974}.
The fitted values of the parameters are summarized in Table \ref{tab:parameter}.
We find two solutions for $I=1$, which reproduce the cross sections equally well.
In Fig.~\ref{fig:i1_tot}, we present the total cross sections of the $I=1$ $KN$ elastic scattering
obtained with the fitted parameters and compare with the experimental
data~\cite{bugg1968, bowen1970, adams1971,bowen1973, carroll1973, cameron1974}.
The calculated amplitude gives a good reproduction of the data up to $p_{{\rm lab}}=800$~MeV/c.
The figure shows that the $S$-wave contribution dominates
and that the contributions from partial waves higher than the $P$-wave are negligibly small,
which is consistent with previous observations.
In Fig.~\ref{fig:kp_diff}, we show the calculated differential cross sections
and make a comparison with the experimental data.
The figure shows that the obtained amplitudes reproduce the experimental data well
for all the energies considered here.
\begin{figure}[]
\begin{tabular}{c}
\begin{minipage}[t]{1.0\hsize}
\centering
\includegraphics[keepaspectratio, scale=0.8,bb=0 0 360 252]{tot1_sol1.pdf}
\end{minipage} \\
\begin{minipage}[t]{1.0\hsize}
\centering
\includegraphics[keepaspectratio, scale=0.8,bb=0 0 360 252]{tot1_sol2.pdf}
\end{minipage} \\
\end{tabular}
\caption{The calculated $I=1$ total cross sections using Solutions 1 and 2 in comparison
with the experimental data \cite{bugg1968, bowen1970, adams1971,
bowen1973, carroll1973, cameron1974}.
The partial-wave components are shown by the dashed lines.
The horizontal axis is the $K^{+}$ meson incident momentum
in the laboratory frame, $p_{{\rm lab}}$, in units of MeV/c,
and the vertical axis is the total cross section $\sigma$ in units of mb.
}
\label{fig:i1_tot}
\end{figure}
\begin{figure}[]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl145.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl175.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl205.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl235.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl265.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl295.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl325.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl355.pdf}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}[]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl385.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl500.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl613.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl726.pdf}
\end{center}
\end{minipage}
\vspace{-0.5cm}
\caption{
The calculated differential cross sections of the $K^{+}p$ elastic scattering
using Solutions 1 and 2
in comparison with the experimental data of Ref. \cite{cameron1974}.
}
\label{fig:kp_diff}
\end{figure}
Next, we determine the $I=0$ low-energy constants $b^{I=0}, d^{I=0}, g^{I=0}$ and $h^{I=0}$
using the data of the $K^{+}n \to K^{+}n$ and $K^{+}n \to K^{0} p$ differential cross sections
at $p_{{\rm lab}} = 526$, 604 and 640 MeV/c given in Refs.~\cite{Giacomelli:1972uj, Giacomelli:1973ed, dam1975},
together with the $I=0$ total cross section of Ref.~\cite{bowen1970}
between $p_{{\rm lab}} = 366$ and 717 MeV/c,
referred to as Bowen 1970 in Fig.~\ref{fig:i0_tot}.
We have confirmed that, even if we include the $I=0$ total cross section
of Ref.~\cite{carroll1973}, referred to as Carroll 1973 in Fig.~\ref{fig:i0_tot},
we obtain similar parameter sets with much worse $\chi^{2}$ values.
This implies that our model prefers the data of Bowen 1970 \cite{bowen1970} and 1973 \cite{bowen1973}.
In the final fit, we therefore use the data of Bowen 1970 for the $I=0$ total cross section.
The $K^{+}n$ elastic and charge exchange scattering amplitudes
are linear combinations of the $I=0$ and $I=1$ amplitudes
as shown in Eqs.~(\ref{eq:Kn}) and (\ref{eq:CE}).
The $I=1$ amplitude has already been determined from $K^{+}p$ elastic scattering,
so the $I=1$ parameters are kept fixed while the $I=0$ parameters are determined.
The fitted results for the $I=0$ parameters are summarized in Table \ref{tab:parameter}.
Here we propose two solutions which have different characters in the $I=0$ total cross section,
as we will discuss in detail later.
In Fig.~\ref{fig:i0_tot}, we show the $I=0$ total cross sections calculated with Solutions 1 and 2 and
find that both solutions reproduce the observed total cross section well.
The bands shown in the figure indicate the allowed region of each solution in the
vicinity of the local minimum of $\chi^{2}$, namely
$4.54<\chi^{2}/N<5.53$ for Solution 1
and $4.06<\chi^{2}/N<5.01$ for Solution 2.
As one can see, the $I=0$ total cross section rapidly increases around $p_{\rm lab} = 500$ MeV/c.
In the two solutions, different partial waves are responsible for the rapid increase of the cross section.
As we shall see later, this feature is linked to the properties of
a possible broad resonance appearing in the $I=0$ channel.
In Solution 1, the $P_{01}$ amplitude\footnote{
We use the partial wave convention $L_{I\, 2J}$ with orbital angular momentum $L$,
isospin $I$ and total angular momentum $J = L \pm 1/2$.}
dominantly contributes, and thus
the rapid increase is caused by the $P_{01}$ amplitude. In Solution 2,
both the $P_{01}$ and $P_{03}$ amplitudes contribute to the $I=0$ total
cross section, and the $P_{03}$ amplitude is responsible for the rapid increase.
In this way, the two solutions have
their own characteristic features in the $I=0$ total cross section.
In summary, Solution 1 is the ``$P_{01}$-dominant solution" and
Solution 2 is the ``$P_{03}$-dominant solution".
In the following, we show the differential cross sections calculated with the $I=0$ and $I=1$ amplitudes.
As for the $I=0$ total cross section, we show the allowed region of each solution as a band.
Figures \ref{fig:kn_diff_sol1} and \ref{fig:kn_diff_sol2} show
the $K^{+}n$ elastic differential cross section for Solutions 1 and 2.
Solutions 1 and 2 are mostly consistent with the experimental data.
Figures \ref{fig:cex_diff_sol1} and \ref{fig:cex_diff_sol2} show the $K^{+}n$ charge exchange differential cross section.
Both solutions reproduce the data relatively well, except for forward and backward scattering.
We find no sizable contradictions between either solution and the experimental data.
As seen later, other analyses support Solution 1.
\begin{figure}[]
\begin{tabular}{c}
\begin{minipage}[t]{1.0\hsize}
\centering
\includegraphics[keepaspectratio, scale=0.8, bb=0 0 360 252]{tot0_sol1.pdf}
\end{minipage} \\
\begin{minipage}[t]{1.0\hsize}
\centering
\includegraphics[keepaspectratio, scale=0.8,bb=0 0 360 252]{tot0_sol2.pdf}
\end{minipage} \\
\end{tabular}
\caption{
The $I=0$ total cross sections calculated using Solutions 1 and 2 in comparison
with the experimental data \cite{bowen1970, bowen1973, carroll1973}.
The solid line shows the best-fit solution.
The shaded area shows the allowed region of the parameters in
the vicinity of the best-fit solution.
}
\label{fig:i0_tot}
\end{figure}
\begin{figure}[]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm, bb=0 0 360 252]{pl434_kn.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm, bb=0 0 360 252]{pl526_kn.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm, bb=0 0 360 252]{pl604_kn.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm, bb=0 0 360 252]{pl640_kn.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm, bb=0 0 360 252]{pl688_kn.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm, bb=0 0 360 252]{pl720_kn.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm, bb=0 0 360 252]{pl771_kn.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm, bb=0 0 360 252]{pl780_kn.pdf}
\end{center}
\end{minipage}
\vspace{-0.5cm}
\caption{
The differential cross sections of $K^{+}n$ elastic scattering
using Solution 1
in comparison with the experimental data of Refs.~\cite{Giacomelli:1973ed, dam1975}.
The data at $p_{{\rm lab}}=640$, 720 and 780 MeV/c are from Ref.~\cite{Giacomelli:1973ed};
the others are from Ref.~\cite{dam1975}.}
\label{fig:kn_diff_sol1}
\end{figure}
\begin{figure}[]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl434_kn2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl526_kn2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl604_kn2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl640_kn2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl688_kn2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl720_kn2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl771_kn2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl780_kn2.pdf}
\end{center}
\end{minipage}
\vspace{-0.5cm}
\caption{
The differential cross sections of $K^{+}n$ elastic scattering
using Solution 2.}
\label{fig:kn_diff_sol2}
\end{figure}
\begin{figure}[]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl434_n.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl526_n.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl604_n.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl640_n.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl688_n.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl720_n.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl771_n.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl780_n.pdf}
\end{center}
\end{minipage}
\vspace{-0.5cm}
\caption{
The differential cross sections of $K^{+}n$ charge exchange scattering
using Solution 1 in
comparison with the experimental data of Refs.~\cite{Giacomelli:1972uj, dam1975}.
The data at $p_{{\rm lab}}=640$, 720 and 780 MeV/c are from Ref.~\cite{Giacomelli:1972uj};
the others are from Ref.~\cite{dam1975}. }
\label{fig:cex_diff_sol1}
\end{figure}
\begin{figure}[]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl434_n2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl526_n2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl604_n2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl640_n2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl688_n2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl720_n2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl771_n2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{pl780_n2.pdf}
\end{center}
\end{minipage}
\vspace{-0.5cm}
\caption{
The differential cross sections of $K^{+}n$ charge exchange scattering
using Solution~2.}
\label{fig:cex_diff_sol2}
\end{figure}
\subsection{Possible broad resonances}
We have constructed the $KN$ amplitudes which reproduce the experimental data well.
In the following, we concentrate on the $KN$ partial wave amplitudes with $I=0$ and
discuss the outcome from the obtained amplitude.
First of all, we look for poles of the scattering amplitude in the complex energy plane.
Having the $KN$ scattering amplitude
in an analytic form, we can perform analytic continuation of the scattering amplitude into
the complex energy plane.
We find a pole in the $P_{01}$ amplitude of Solution 1 at $z = 1617 - 153 i$~MeV,
which corresponds to a resonance state with mass 1617~MeV/$\rm c^{2}$,
width 305~MeV and $J^{P} = (1/2)^{+}$.
The resonance has quite a large width, and it could be hard to pin it down
in production experiments.
Similarly, we find a pole of the $P_{03}$ amplitude of Solution 2 in the complex energy plane at
$z=1678 - 232 i $~MeV corresponding to a resonance state with mass
1678~MeV/${\rm c^{2}}$, width 463~MeV and $J^{P} = (3/2)^{+}$.
Since this resonance state
is located far from the real axis, it
is not well constrained by experimental observations
on the real axis; the theoretical uncertainty should therefore be large, and this solution
could be unstable against small deviations of the experimental data.
These results are summarized in Table \ref{tab:amp}.
These resonances could be compared with the state found in the chiral soliton model \cite{weigel}
with a mass of around 1700 MeV/${\rm c^{2}}$, although that state has a narrow width.
In Fig. \ref{fig:poles}, we show the distribution of the poles in the vicinity of the best-fit value.
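The masses and widths in Table~\ref{tab:amp} follow from the standard identification of the pole position $z = M - i\,\Gamma/2$; the quoted widths agree with $-2\,{\rm Im}\,z$ up to rounding. A minimal sketch of the conversion:

```python
def pole_to_mass_width(z):
    """Identify a complex pole z = M - i*Gamma/2 (in MeV) with
    the resonance mass M and width Gamma."""
    return z.real, -2.0 * z.imag

# Solution 1, P01 pole at z = 1617 - 153i MeV
mass, width = pole_to_mass_width(1617 - 153j)
print(mass, width)  # 1617.0 306.0 (quoted as 305 MeV in the text)
```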
\begin{table}[]
\begin{center}
\caption{The resonance states of Solutions 1 and 2.}
\begin{tabular}{l c c} \hline
amplitude ($J^{P}$) & mass [MeV] & width [MeV] \\ \hline \hline
Solution 1 \enspace $P_{01}$ ($ \frac{1}{2}^{+}$) & 1617 & 305 \\ \hline
Solution 2 \enspace $P_{03}$ ($ \frac{3}{2}^{+}$) & 1678 & 463 \\ \hline
\end{tabular}
\label{tab:amp}
\end{center}
\end{table}
\begin{figure}[]
\begin{center}
\includegraphics[width=100mm,bb=0 0 3946 2705]{poles4.pdf}
\end{center}
\caption{The distribution of the poles of the amplitude in the complex energy plane $z$.
The red star stands for the best-fit value.}
\label{fig:poles}
\end{figure}
Even though we find the resonance state as a pole of the scattering amplitude,
there is no peak structure in the scattering amplitude around the resonance energy.
One usually expects
resonance states to appear as a peak in the cross section, but this is not necessarily true
when the resonance has a large width and a substantial coupling to the non-resonant background.
We demonstrate this situation by using a simple amplitude in which a resonance pole is embedded
in a constant background with a relative phase $\delta$:
\begin{equation}
f(E) = \frac{i}{E - M + i \Gamma/2} + b e^{i\delta}. \label{eq:BWamp}
\end{equation}
In Fig.~\ref{fig:fano}, we show the cross sections of the amplitudes (\ref{eq:BWamp}) with
$\delta= 0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}$ for $M=1600$ MeV, $\Gamma=300$ MeV and
$b=0.01$ MeV$^{-1}$.
As one can see in the figure, the resonance shape depends on the relative phase.
For $\delta=0$, the resonance and background contributions interfere constructively
and a resonance peak appears in the cross section, while for $\delta = \pi$, the resonance
and background interfere destructively and the resonance is seen as a dip.
It is very interesting to see that, for $\delta=\pi/2$, a rapid increase
takes place at the resonance energy; this is also one of the resonance shapes.
Such resonances are known as Fano resonances~\cite{Fano:1961zz}.
The resonance structure in the $KN$ $I=0$ channel might be an
example of a Fano resonance.
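The interference patterns described above are easy to reproduce numerically. The sketch below evaluates $|f(E)|^{2}$ for the amplitude of Eq.~(\ref{eq:BWamp}) at the four phases used in Fig.~\ref{fig:fano}; the overall normalization of the cross section is omitted:

```python
import numpy as np

def f_bw_plus_bg(E, M=1600.0, Gamma=300.0, b=0.01, delta=0.0):
    """Breit-Wigner pole embedded in a constant background with
    relative phase delta, as in Eq. (BWamp). Energies in MeV."""
    return 1j / (E - M + 1j * Gamma / 2) + b * np.exp(1j * delta)

E = np.linspace(1400.0, 1800.0, 401)
for delta in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
    intensity = np.abs(f_bw_plus_bg(E, delta=delta))**2
    # delta=0: constructive peak; delta=pi: dip;
    # delta=pi/2: step-like rapid increase at the resonance energy
    print(delta, intensity.max(), intensity.min())
```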
\begin{figure}[]
\begin{minipage}{0.24\hsize}
\begin{center}
\includegraphics[width=53mm,bb=0 0 360 252]{fano1_new.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.24\hsize}
\begin{center}
\includegraphics[width=53mm,bb=0 0 360 252]{fano2_new.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.24\hsize}
\begin{center}
\includegraphics[width=53mm,bb=0 0 360 252]{fano3_new.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.24\hsize}
\begin{center}
\includegraphics[width=53mm,bb=0 0 360 252]{fano4_new.pdf}
\end{center}
\end{minipage}
\caption{Fano resonance. Cross sections of amplitudes composed of a resonance and
a continuum background with relative phase $\delta$ are shown.
The resonance is assumed to have a mass of 1600~MeV and a width of 300 MeV.
The resonance shape depends on the relative phase $\delta$. The vertical axis is in arbitrary units. }
\label{fig:fano}
\end{figure}
In Fig.~\ref{fig:amp_i0}, we show the real and imaginary parts of
the $I=0$ scattering amplitudes of $P_{01}$ for Solution 1 and
$P_{03}$ for Solution 2, in which the resonances are found. As seen in the figure,
a typical resonance structure appears in the amplitudes, but the roles of the real and imaginary
parts are interchanged. (Usually the imaginary part has a peak structure, while the real part
increases around the resonance point.) This is due to the strong coupling of the resonance
to the continuum background with some relative phase. In order to confirm that
the structure in the amplitude comes from the resonance state, we subtract
the resonance contribution from the amplitude. We express the resonance contribution
in Breit-Wigner form, with the numerator obtained
from the residue of the amplitude at the resonance pole.
The subtracted amplitudes are shown as dotted lines in Fig.~\ref{fig:amp_i0};
they are almost constant, without significant structure.
Thus, the structure appearing in the amplitudes is caused by the resonance state.
As we have mentioned above, the imaginary part of the amplitude rapidly
increases around the resonance energy. According to the optical theorem,
the total cross section is proportional to the imaginary part. Therefore,
we conclude that the rapid increase seen in the $I=0$ total cross section around
$p_{\rm lab}=500$ MeV/c can be a sign of the possible existence of a
resonance with a large width. In addition, the spin-parity of the resonance
can be inferred from which partial wave is responsible for the rapid increase
of the $I=0$ total cross section.
Here we have proposed
two solutions: in Solution 1, the rapid increase appears in the $P_{01}$-wave,
so the resonance should have $J^{P} = (1/2)^{+}$; in Solution 2, it appears
in the $P_{03}$-wave, so the resonance should have $J^{P} = (3/2)^{+}$.
It would be very interesting if one could understand the feature of the
$I=0$ total cross section around $p_{\rm lab}=500$ MeV/c with more accurate
experimental data.
\begin{figure}[]
\begin{tabular}{c}
\begin{minipage}[t]{1.0\hsize}
\centering
\includegraphics[keepaspectratio, scale=0.8,bb=0 0 360 252]{sol1_p01.pdf}
\end{minipage} \\
\begin{minipage}[t]{1.0\hsize}
\centering
\includegraphics[keepaspectratio, scale=0.8,bb=0 0 360 252]{sol2_p03.pdf}
\end{minipage} \\
\end{tabular}
\caption{
The real and imaginary parts of the $I=0$ amplitudes of $P_{01}$ for Solution 1,
$P_{03}$ for Solution 2 around the resonance energy.
The solid lines stand for the original amplitudes, while
the dotted lines stand for the amplitudes obtained by subtracting the resonance pole.}
\label{fig:amp_i0}
\end{figure}
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{S01_SOL1.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{S01_SOL2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{P01_SOL1.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{P01_SOL2.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.50\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{P03_SOL1.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{P03_SOL2.pdf}
\end{center}
\end{minipage}
\vspace{-0.5cm}
\caption{The Argand diagrams of Solutions 1 and 2 in the $I=0$ states up to the momentum $p_{{\rm lab}}=800$ MeV/c.
Solution 1 is compared with Martin's amplitude \cite{martin1975} and the SAID amplitude
\cite{said}, shown as dotted lines. Solution 1 is consistent with the existing partial-wave amplitudes.}
\label{fig:arg_iso0}
\end{figure}
In Fig. \ref{fig:arg_iso0},
we show the Argand diagrams of the $I=0$ scattering amplitudes for the $S$ and $P$-wave
up to $p_{{\rm lab}}=$ 800 MeV/c using Solutions 1 and 2.
We then compare them with Martin's amplitude~\cite{martin1975} and the amplitude of the
SAID program~\cite{said}.
We find that the partial-wave amplitudes of Solution 1 are very similar to Martin's amplitude
and the SAID amplitude.
Thus, a pole corresponding to a broad resonance could also be found in Martin's amplitude
and the SAID amplitude.
It is also interesting to point out that, for Solution 2, the $P_{03}$ channel has an attractive
interaction and actually hosts a broad resonance, while the $P_{01}$ channel is repulsive,
even though some contribution of $P_{01}$ is seen in the total cross section.
In~Fig.~\ref{fig:energy_amp}, we show the momentum dependence of the $I=0$ partial-wave
$T$-matrix~$T^{\prime}_{l \pm}$ defined by $T^{\prime}_{l \pm} = -kT_{l \pm}/(8\pi\sqrt{s})$,
where $T_{l \pm}$ is given
in Eq.~(\ref{eq:less_amp}), for Solutions~1 and 2 in comparison with Martin's amplitude~\cite{martin1975} and SAID amplitude~\cite{said}.
The solid and dashed lines stand for the real and imaginary parts of the amplitudes, respectively.
\begin{figure}[t]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{amp1_S01.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{amp2_S01.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{amp1_P01.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{amp2_P01.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.50\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{amp1_P03.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=80mm,bb=0 0 360 252]{amp2_P03.pdf}
\end{center}
\end{minipage}
\vspace{-0.5cm}
\caption{
The dimensionless partial-wave amplitudes $T^{\prime}_{l \pm} = -kT_{l \pm}/(8\pi\sqrt{s})$
of Solutions 1 and 2 in the $I=0$ channel up to the momentum $p_{{\rm lab}}=$800~MeV/c
in comparison with Martin's~\cite{martin1975} and SAID~\cite{said} amplitudes.
The solid and dashed lines stand for the real and imaginary parts of the amplitudes, respectively.}
\label{fig:energy_amp}
\end{figure}
It would be interesting to show a theoretical amplitude
in which the rapid increase of the $I=0$ total cross section comes from the $S$-wave.
We find such a solution with the parameter set called Solution 3 given in Table \ref{tab:sol3}.
Figure \ref{fig:tot3} shows the $I=0$ total cross section.
It shows that the $S_{01}$ amplitude substantially contributes and
the rapid increase of the cross section stems from the $S_{01}$ amplitude.
Solution 3 cannot reproduce the angular dependence of the differential cross section
of the charge exchange, because the amplitude of Solution 3 is dominated by the $S$-wave contribution.
Thus, Solution 3 can be ruled out.
Nevertheless, we also discuss Solution 3 here, because we want to point out
the relation between the rapid increase of the $I=0$ total cross section and the existence of a
possible broad resonance.
We find a pole in the $S_{01}$ amplitude of Solution 3 at $z = 1624 - 132i$ MeV,
which corresponds to a resonance state with mass 1624 MeV/${\rm c^{2}}$, width 264 MeV and $J^{P}=(1/2)^{-}$.
In Fig.~\ref{fig:amp3}, we show the real and imaginary parts of the $S_{01}$ amplitude
for Solution 3 and find the resonance structure in the amplitude.
\begin{table}[]
\begin{center}
\caption{
The $S$-wave parameter set.
}
\begin{tabular}{c c| D{.}{.}{4}}
\hline
& & {\rm Solution 3} \\
\hline \hline
&$b^{I=1}$ &0.30 \\
&$d^{I=1}$ &-0.24 \\
$I=1$ &$g^{I=1}$ &0.72 \\
&$h^{I=1}$ &1.05 \\
& $\chi^{2}/N$ &4.06\\
\hline
&$b^{I=0}$ &0.38 \\
&$d^{I=0}$ &0.01 \\
$I=0$ &$g^{I=0}$ &-1.35 \\
&$h^{I=0}$ &-0.11 \\
& $\chi^{2}/N$ &29.7\\
\hline
\end{tabular}
\label{tab:sol3}
\end{center}
\end{table}
\begin{figure}[]
\begin{center}
\includegraphics[width=100mm,bb=0 0 360 252]{tot0_sol3.pdf}
\end{center}
\caption{The $I=0$ total cross section calculated using Solution 3.}
\label{fig:tot3}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=100mm,bb=0 0 360 252]{sol3_s01.pdf}
\end{center}
\caption{The real and imaginary parts of the $S_{01}$ amplitude calculated using Solution 3.}
\label{fig:amp3}
\end{figure}
\section{Conclusion}
\label{sec:sec4}
We have investigated $KN$ elastic scattering below the energy where the
inelastic contributions become significant, that is, $p_{\rm lab} < 800$ MeV/c,
by describing the scattering amplitude in the chiral unitary approach as an analytic function.
We utilize a next-to-leading-order
chiral Lagrangian for the kernel interaction of the unitarized amplitude, and the
low-energy constants in the amplitude are determined so as to reproduce
the differential cross sections of $K^{+}p \to K^{+}p$, $K^{+}n \to K^{+}n$, $K^{+}n \to K^{0} p$
and the $I=0, 1$ total cross sections.
We have obtained scattering amplitudes which reproduce the observed
cross sections very well. In particular, the $I=1$ scattering amplitude,
namely the $K^{+}p$ elastic amplitude, has been determined well thanks to
experimental data with small errors, and we have found that the $I=1$ scattering
amplitude at $p_{\rm lab}<800$ MeV/c is essentially described by the
$S$-wave contribution, which is consistent with conventional knowledge.
For the $I=0$ amplitude, we have proposed two possible parameter sets,
which reproduce the $I=0$ scattering cross sections equally well but attribute
the rapid increase appearing in the $I=0$ total cross section around
$p_{\rm lab} = 500$ MeV/c to different partial waves. In Solution 1, the rapid increase appears in the $P_{01}$-wave
contribution, while in Solution 2 it stems from the $P_{03}$-wave contribution.
We have also presented Solution 3, in which the rapid increase of the $I=0$ total cross section
is produced by the $S_{01}$ amplitude, even though it is not a realistic solution.
Having performed the analytic continuation of the obtained $I=0$ scattering amplitudes
to the complex energy plane, we have found in each scattering amplitude a pole
corresponding to a broad resonance state, e.g.\ around
$E_{\rm c.m.} = 1617~{\rm MeV}$ with a 305 MeV width for Solution 1.
We would like to emphasize that the existence of a broad resonance
is responsible for the rapid increase of the $I=0$ total cross section around $p_{\rm lab} =
500$ MeV/c. Thus, further investigation of the nature of this rapid increase
could directly reveal the existence of an $S=+1$ exotic resonance
state. Usually resonance states, especially narrow resonances, appear as a bump
in the total cross section. For broad resonances, however, because they strongly
couple to the non-resonant background, the resonance shape seen in the cross section
can be modified; this is known as a Fano resonance.
In order to pin down the existence of the $S=+1$ broad resonance, further
detailed investigation is needed. First of all, the resonance found in this work has a broad width
and is located far from the real axis in the complex energy plane, whereas the experimental
information lies on the real axis and constrains the scattering amplitude well only close to it.
To constrain the scattering amplitude, and hence the position of the pole, further,
more accurate experimental data are necessary. In addition, one also needs a more reliable
theoretical description; for instance, it could be necessary to introduce more terms
into the interaction kernel. It is also important to describe $K^{+}d$ scattering theoretically with
the deuteron wavefunction, which would allow a direct comparison
of the theoretical calculation with the experimental observations.
\section*{Acknowledgment}
The authors would like to thank Dr.\ T.~Hyodo for his helpful comments.
The work of D.J.\ was partly supported by Grants-in-Aid for Scientific Research from JSPS (17K05449).
\section{Introduction}
Given two images, style transfer aims to transfer the style feature representation of one onto the content of the other. \textit{Convolutional neural networks} have been shown to effectively learn lower-level representations as well as more abstract features of an image. This means we can use CNNs for style transfer, preserving the style feature representations of one image and then applying them to a content image. In this paper, we first define the problem of style transfer, describe the different approaches we explore as well as their advantages and disadvantages, attempt to find evaluation measures for our results, and finally show some qualitative results.
\section{Style Transfer}
\label{style-transfer}
As discussed above, style transfer is achieved by minimizing a loss function which combines the semantic information of the style image with the salient features of the content image. We used the VGG-16 model \cite{SimonyanZ14a} for both neural style transfer and universal style transfer, and either directly optimize the loss function below or train a feed-forward network to approximate the optimization over the two losses, the \textit{content loss} and the \textit{style loss} \cite{nst}:
\begin{equation}
L = \alpha \ || I_o - I_c ||^2_2 + \beta \ || \phi(I_o) - \phi(I_s)||^2_2
\end{equation}
Here, $I_o$, $I_c$ and $I_s$ denote the feature maps from the forward pass of the VGG-16 network at heuristically chosen layers. The weights $\alpha$ and $\beta$ scale the two loss components: $\alpha$ sets the strength of the \textit{Frobenius norm} between the content and generated feature maps, and $\beta$ sets the strength of the \textit{Frobenius norm} between the \textit{feature-map correlations}, given by the \textit{Gram matrices} ($\phi$), of the style and generated images.
Here, the Gram Matrix $\phi$ can be computed as:
\begin{equation}
\phi(x)_{c,c'} = \frac{1}{HWC} \sum_{h=1}^{H} \sum_{w=1}^W x_{h,w,c} \ x_{h,w,c'}
\end{equation}
where $x$ denotes the feature maps of a given layer of the VGG-16 network, and $\phi(x)$ is proportional to the uncentered covariance of the channels in that layer, treating each spatial location as an independent sample.
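A minimal NumPy sketch of this Gram-matrix computation; the feature-map array below is a random placeholder standing in for a VGG-16 activation:

```python
import numpy as np

def gram_matrix(x):
    """Gram matrix of feature maps x with shape (H, W, C):
    phi(x)[c, c'] = (1/(H*W*C)) * sum_{h,w} x[h,w,c] * x[h,w,c'],
    i.e. the uncentered channel covariance, treating each
    spatial location as an independent sample."""
    H, W, C = x.shape
    feats = x.reshape(H * W, C)
    return feats.T @ feats / (H * W * C)

x = np.random.rand(14, 14, 256)  # placeholder feature maps
G = gram_matrix(x)
print(G.shape)  # (256, 256), symmetric
```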
While this objective function is sufficient, the resulting generated images are particularly noisy due to significant differences between the feature correlations of certain layers. To reduce this graininess, we incorporate a regularizer, called \textit{Total Variation regularization} \cite{aly2005totalvariation}, which mitigates the above problem. It can be defined as:
$$ J(I) = \int_{\Omega} L\left( \left\| \nabla I(x) \right\| \right) dx $$
More details and explanations about total variation regularization can be found in the paper by \textit{Aly et al} \cite{aly2005totalvariation}.
Finally, the objective is the minimization of the linearly weighted sum of the three losses described above. The values of the two scaling factors $(\alpha, \beta)$ are taken from the paper by \textit{Johnson et al.} \cite{johnson2016perceptual}. For the value of $\lambda$, we experimented with several values via grid search and chose $\lambda = 8.5 \times 10^{-2}$, which balances image crispness against graininess.
\begin{equation}
L = \alpha \ || I_o - I_c ||^2_2 + \beta \ || \phi(I_o) - \phi(I_s)||^2_2 + \lambda \ J(I_o)
\end{equation}
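The total variation term $J$ can be sketched with a minimal numpy approximation using forward finite differences and a small smoothing constant (this is an illustrative discretization, not the exact implementation used in our experiments):

```python
import numpy as np

def total_variation(img, eps=1e-12):
    """Discrete (isotropic) total variation of an image of shape
    (H, W, C): sums the gradient magnitude over all interior pixels."""
    dh = img[1:, :-1, :] - img[:-1, :-1, :]  # vertical differences
    dw = img[:-1, 1:, :] - img[:-1, :-1, :]  # horizontal differences
    return float(np.sum(np.sqrt(dh ** 2 + dw ** 2 + eps)))

# A flat image has (near-)zero TV; adding pixel noise increases it,
# which is why this term penalizes grainy outputs.
flat = np.full((8, 8, 3), 0.5)
noisy = flat + 0.1 * np.random.default_rng(0).standard_normal(flat.shape)
```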
\section{Approaches}
\label{approaches}
\subsection{Neural Style Transfer}
Originally, style transfer could be achieved simply by optimizing an image initialized with Gaussian noise and minimizing the above loss function using an optimizer such as L-BFGS or Adam \cite{kingma2014adam} for a few thousand iterations. It was subjectively observed that L-BFGS obtained a much more appealing final output than Adam, although Adam was more memory efficient.
The original style transfer algorithm can be improved with a variety of techniques discussed by \textit{Novak et al.} \cite{NovakN16improving}. We incorporate a few of the proposed improvements, such as utilizing all of the convolutional layers of VGG-16 to compute the overall style loss, using a geometric weighting of the style loss from each of these layers ($w_l^s = 2 ^ {(D - d(l))}$), incorporating \textit{activation shift} in the \textit{Gram matrices}
\begin{equation}
\phi(x)_{c,c'} = \frac{1}{HWC} \sum_{h=1}^{H} \sum_{w=1}^W (x-1)_{h,w,c} \ (x-1)_{h,w,c'}
\end{equation}
and applying \textit{Chained Correlation} to determine feature correlations between adjacent layers of the network at the same spatial dimensions ($\{ \phi(x_l, x_{l-1}) \ | \ l = 2 \dots 13 \}$) where
\begin{equation}
\phi(x, y)_{c,c'} = \frac{1}{HWC} \sum_{h=1}^{H} \sum_{w=1}^W (x-1)_{h,w,c} \ (y-1)_{h,w,c'}
\end{equation}
When applied to the original style transfer technique, the combination of all of these improvements significantly improves the subjective quality of the generated images.
A significant drawback of style transfer is that the feature correlations obtained from the \textit{Gram matrices} do not incorporate the color information of the original content image. This causes the generated image to take on the color palette of the style image, which might not be realistic or appealing. Work by \textit{Gatys et al.} \cite{colorpreserve} incorporates a \textit{color transform}, a method of preserving the color statistics of the content image in the generated image. While two techniques exist, \textit{luminance matching} and \textit{histogram matching}, we focus primarily on \textit{histogram matching}.
We choose this transformation so that the mean and covariance of the RGB values in the new style image $S'$ match those of the content image $C$. Let $\mu_C$ and $\mu_S$ be the mean colors of the content and style images respectively, and $\Sigma_C$ and $\Sigma_S$ their pixel covariances. We then need to choose $A$ and $b$ such that the transform $x' = Ax + b$ yields $\mu_{S'} = \mu_C$ and $\Sigma_{S'} = \Sigma_C$, where $A$ is a $3 \times 3$ matrix and $b$ is a 3-dimensional vector. These conditions are satisfied by the constraints:
$$b = \mu_C - A\mu_S$$
$$A \Sigma_S A^T = \Sigma_C$$
While a family of solutions exists for the above problem, we can quickly find one using the 3D color matching formulation. First, let the eigenvalue decomposition of a covariance matrix be $\Sigma = U \Delta U^T$. The matrix square root is then defined as $\Sigma^{1/2} = U \Delta^{1/2} U^T$. Finally, the \textit{histogram color transform} can be computed as:
\begin{equation}
A_{IA} = \Sigma_C^{1/2} \Sigma_S^{{-1}/2}
\end{equation}
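The color-matching transform can be sketched as follows: a minimal numpy version of the eigendecomposition-based matrix square root and the affine transform $x' = Ax + b$. The flattened $(N, 3)$ pixel layout and the small regularizer are our assumptions for this sketch:

```python
import numpy as np

def matrix_sqrt(sigma):
    """Symmetric matrix square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(sigma)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def match_color(style, content, eps=1e-8):
    """Affine transform x' = A x + b so the style pixels take on the
    content image's mean and covariance. Inputs: (N, 3) RGB arrays."""
    mu_s, mu_c = style.mean(0), content.mean(0)
    cov_s = np.cov(style.T) + eps * np.eye(3)  # regularize for inversion
    cov_c = np.cov(content.T)
    A = matrix_sqrt(cov_c) @ np.linalg.inv(matrix_sqrt(cov_s))
    b = mu_c - A @ mu_s
    return style @ A.T + b

# Example: after matching, random "style pixels" take on the mean and
# covariance of the "content pixels".
rng = np.random.default_rng(1)
style = rng.random((1000, 3))
content = rng.normal(0.5, 0.2, size=(1000, 3))
matched = match_color(style, content)
```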
An important extension of style transfer is the ability to mask certain regions where the transfer process should not occur. This problem is discussed in the work of \textit{Chan et al.} \cite{ChanMaskedStyleTransfer}, which proposes using binary masks to tell the algorithm which regions of the content image must not be transformed. The provided binary mask is rescaled for each of the layers where the style loss is computed, and the Hadamard product of the mask with each feature map of that layer is then used to compute the style loss. This technique enables several extensions of style transfer, such as scaled style transfer (where the magnitude of the mask, with values in the range $[0, 1]$, determines the strength of the style loss at a given position), binary masked style transfer (where binary masks determine which of two styles is applied at a given position) and even n-ary masked style transfer (where more than two styles are disambiguated using pre-determined mask values).
\subsection{Universal Style Transfer}
Universal style transfer performs style transfer by approaching the problem as an image reconstruction process coupled with feature transformations, i.e., whitening and coloring \cite{ust}. The authors of the original paper constructed a VGG-19 auto-encoder network for image reconstruction: the encoder was fixed, and a decoder network was trained to invert the VGG-19 features back to the original image.
The main difference between Universal Style Transfer and previous approaches is the introduction of the feature transformations: \textit{whitening} and \textit{coloring}. Given a pair of content image $I_c$ and style image $I_s$, the algorithm first extracts the vectorized VGG-19 feature maps $f_c \in \mathbb{R}^{C \times H_c W_c}$ and $f_s \in \mathbb{R}^{C \times H_s W_s}$ at a certain layer (e.g., Relu\_5\_1), where $H_c$, $W_c$ ($H_s$, $W_s$) are height and width of the content (style) feature, and C is the number of channels. The decoder then reconstructs the image $I_c$ given $f_c$. \\
\subsubsection{Feature Transformations}
\textbf{Whitening Transform.} The model first centers $f_c$ by subtracting its mean vector $m_c$. Then $f_c$ is transformed linearly so that its feature maps are uncorrelated, i.e. $\hat{f_c}\hat{f_c}^T=I$. This is given by: $$\hat{f_c}=E_cD_c^{-1/2}E_c^Tf_c,$$ where $D_c$ is a diagonal matrix with the eigenvalues of the covariance matrix $f_cf_c^T \in \mathbb{R}^{C \times C}$, and $E_c$ is the corresponding orthogonal matrix of eigenvectors, satisfying $f_cf_c^T = E_cD_cE_c^T$.
\textbf{Coloring Transform.} The same centering operation as in the whitening transform is applied, this time to the style features: we first center $f_s$ by subtracting its mean vector $m_s$. We then carry out the coloring transform, the inverse of whitening, on $\hat{f_c}$ to obtain $\hat{f_{cs}}$, which has the desired correlations between its feature maps ($\hat{f_{cs}}\hat{f_{cs}}^T=f_sf_s^T$),
$$\hat{f_{cs}}=E_s D_s^{1/2}E_s^T\hat{f_c},$$
where $D_s$ is a diagonal matrix with the eigenvalues of the covariance matrix $f_sf_s^T \in \mathbb{R}^{C \times C}$, and $E_s$ is the corresponding orthogonal matrix of eigenvectors. Finally, we re-center $\hat{f_{cs}}$ by adding the mean vector $m_s$ of the style. Compared to histogram matching, WCT transfers the global color of the style image as well as its salient visual patterns. After WCT, we blend $\hat{f_{cs}}$ with the content feature map $f_c$ before feeding it into the decoder: $$\hat{f_{cs}} = \alpha \hat{f_{cs}} + (1-\alpha)f_c,$$ where $\alpha$ serves as the style weight controlling the transfer effect.
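The whitening and coloring steps can be sketched as a minimal numpy routine operating on vectorized $(C, N)$ feature maps; the $\epsilon$ regularizer and the default blending weight are our choices for this sketch:

```python
import numpy as np

def wct(f_c, f_s, alpha=0.6, eps=1e-5):
    """Whitening and coloring transform on vectorized feature maps.

    f_c, f_s: (C, N) arrays (channels x spatial positions). Returns
    features whose channel covariance matches that of f_s, blended
    with the content features by the style weight alpha."""
    def center(f):
        m = f.mean(axis=1, keepdims=True)
        return f - m, m

    fc, _ = center(f_c)
    fs, m_s = center(f_s)

    # Whitening: decorrelate the content channels.
    Dc, Ec = np.linalg.eigh(fc @ fc.T + eps * np.eye(fc.shape[0]))
    f_hat = Ec @ np.diag(Dc ** -0.5) @ Ec.T @ fc

    # Coloring: impose the style covariance, then restore the style mean.
    Ds, Es = np.linalg.eigh(fs @ fs.T + eps * np.eye(fs.shape[0]))
    f_cs = Es @ np.diag(Ds ** 0.5) @ Es.T @ f_hat + m_s

    return alpha * f_cs + (1 - alpha) * f_c

# Example: with alpha = 1 the output covariance matches the style's.
rng = np.random.default_rng(0)
f_c = rng.standard_normal((4, 500))
f_s = 2.0 * rng.standard_normal((4, 500))
out = wct(f_c, f_s, alpha=1.0)
```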
\subsubsection{Multi-level coarse-to-fine stylization}
Different layers of VGG networks (Relu\_X\_1) capture different levels of image structure. Higher layers capture more complicated local structures while lower layers capture more low-level information. This is due to the increasing size of receptive field and feature complexity in network hierarchy. Thus, it is more advantageous to use features from all layers instead of just the last layer. \\
WCT is first applied to the Relu\_5\_1 features to obtain a coarse stylized result, which is then treated as the new content image for adjusting features at the next lower level. Experiments clearly show that higher layers capture the salient patterns of the style while lower levels improve the details. Going the other way (fine-to-coarse), the low-level information cannot be preserved.
\section{Evaluation}
\label{evaluation}
Evaluating artistic style transfer is difficult. Note that the loss measures defined above are based on the Gram matrices, which makes the measured loss, and any reduction in it, specific to the particular pair of images. There is no good quantitative measure of the overall effectiveness of a style transfer model. Our primary evaluation will therefore be \textbf{qualitative} and will rely on the user's perception of effective style transfer, generally meaning how well the style has been adapted onto the content without overwhelming it.
Another aspect on which models can be compared is \textit{speed} and \textit{efficiency}: how fast one algorithm produces visually appealing images compared to the others. This is a \textbf{speed vs.\ quality} trade-off whose resolution is ultimately up to the user.
Further, a third aspect for evaluation is user control: how flexible a method is in adapting to a user's particular requirements on the \textbf{stylization}, on the \textbf{sizes of images} that can be fed to the models, and on the \textbf{sizes of the outputs}.
\section{Results}
\label{results}
\begin{figure}[t!]
\begin{tabular}{cc}
\includegraphics[width=65mm]{gatys2.jpg} & \includegraphics[width=65mm, height=40.5mm]{gatys4.jpg} \\
(a) Japanese shrine \& Starry Night (Van Gogh) & (b) Milky Way \& Blue Strokes + Color \\[6pt]
\includegraphics[width=65mm]{gatys5.jpg} & \includegraphics[width=65mm]{gatys7.jpg} \\
(c) Itsukushima Shrine \& Blue Strokes + Color & (d) Japanese shrine \& Patterned Leaf + Color \\[6pt]
\includegraphics[width=65mm]{gatys6.jpg} & \includegraphics[width=65mm]{gatys8.jpg} \\
(e) Cat's Eyes \& Brush Strokes + Mask & (f) Moon Overlooking Lake \& Starry Night (Van Gogh)\\[6pt]
\end{tabular}
\caption{Improved Neural Style Transfer with Color + Mask Transfer}
\label{table1}
\end{figure}
The model was trained on the 80K training images of the MS-COCO dataset \cite{lin2014microsoft} for 1 million iterations (12.5 epochs). The total training time was 45 hours for all 5 decoders; the last two decoders alone took 10 and 22 hours respectively. Training was done on a Google Cloud Platform instance with 16 Intel Skylake CPUs, 64~GB of RAM, and one Nvidia P100 GPU. The results of \textit{Improved Neural Style Transfer} can be observed in Figure \ref{table1}.
The above generated images in Figure \ref{table1} were up-scaled by a factor of 4 and then de-noised using \textit{Gaussian blurring} as post-processing to reduce noise from the upscaled images. We can observe that the quality of the generated images is excellent. We reiterate that this quality was obtained by using the improvements suggested by \textit{Novak et al} \cite{NovakN16improving}.
We now compare the above with the generated images obtained from the \textit{Universal Style Transfer}, which are generated at 1080p quality in less than 5 seconds each on a single GPU. Since color transfer and mask transfer cannot be obtained during the forward pass, we instead apply them as post-processing steps on the generated 1080p image. The generated images can be compared with the above in Figure \ref{table2}.
\begin{figure}[t!]
\begin{tabular}{cc}
\includegraphics[width=65mm]{uni2.jpg} & \includegraphics[width=65mm, height=40.5mm]{uni4.jpg} \\
(a) Japanese shrine \& Starry Night (Van Gogh) & (b) Milky Way \& Blue Strokes + Color \\[6pt]
\includegraphics[width=65mm]{uni5.jpg} & \includegraphics[width=65mm]{uni7.jpg} \\
(c) Itsukushima Shrine \& Blue Strokes + Color & (d) Japanese shrine \& Patterned Leaf + Color \\[6pt]
\includegraphics[width=65mm]{uni6.jpg} & \includegraphics[width=65mm]{uni8.jpg} \\
(e) Cat's Eyes \& Brush Strokes + Mask & (f) Moon Overlooking Lake \& Starry Night (Van Gogh)\\[6pt]
\end{tabular}
\caption{Universal Neural Style Transfer with Color + Mask Post Processing}
\label{table2}
\end{figure}
\section{Conclusion}
We learned about style transfer using encoder-decoder networks. We explored the various algorithms and methods tried by previous authors and how they compare. The predominant conclusion is that there is a massive trade-off between speed and quality in the generated images. This is clearly seen: the \textit{neural style transfer} model lets users control every aspect of tuning and training and produces images of excellent quality, but takes a long time to train per style.
The \textit{universal style transfer} model aims to alleviate some of the disadvantages of \textit{neural style transfer} by trading off some quality and introducing a general model that does not need to be fine tuned for each style image, can generate images with comparable speed, and produces visually appealing images. Users can also input larger images and get outputs that don't need rescaling or denoising using this model, unlike the \textit{neural style transfer} model. In the end, there is a huge margin for improvement in this task and much can be explored.
\subsubsection*{Acknowledgments}
We would like to thank Professor \href{https://www.cs.uic.edu/~zhangx/}{Xinhua Zhang} and \href{https://www.linkedin.com/in/vigneshganapathiraman}{Vignesh Ganapathiraman} for their invaluable knowledge, advice, and guidance.
\section{Introduction}
An interface, the frontier between two media, is often a region of interest for scientists.
In parti\-cular, surface physics becomes more relevant the smaller the scale, as volume forces get weaker and weaker compared to surface forces.
In a fluid, another effect of a smaller scale is that the relative importance of inertia over viscosity decreases~\cite{batchelor2000}.
With this in mind, one could wonder what role interfaces can play in viscosity-dominated microscopic flows, and in particular how surface forces affect the locomotion of microorganisms and the swimming microrobots that mimic them.
This paper will first present the generalities of microscopic locomotion in a fluid, and what becomes of these principles when the presence of an interface is taken into account.
Then, several experiments from the literature will be discussed that use surface effects for locomotion.
Lastly, we will describe in more detail two experiments with surface swimmers powered by external magnetic fields.
This last section will contain original experimental and theoretical results.
\section{Scallops, interfaces and symmetry}
\subsection{Swimming in the bulk}
We generally have a good enough intuition about what it means to swim in a fluid at our scale.
Water is pushed away in a given direction, and motion ensues in the opposite direction by conservation of momentum~\cite{taylor1951}.
Sustain the motion by repeating this periodically and we obtain a working swimming strategy.
However, as is often the case in nature, the physics of swimming is highly influenced by the relevant length and time scales~\cite{taylor1951,purcell1977,lauga2009}.
Consider the general case of a body that is able to actively deform, moving in an infinite fluid volume.
The conservation of momentum at each point in the fluid is described by the Navier-Stokes equation~\cite{navier1823,stokes1845} which, for an incompressible fluid, submitted to gravity, of density $\rho$ and kinematic viscosity $\nu$, reads
\begin{equation}
\frac{\partial \textbf{u}}{\partial t} + \textbf{u} \cdot \nabla \textbf{u} = -\frac{1}{\rho}\nabla p + \nu\nabla^2 \textbf{u} + \textbf{g}.
\label{NS}
\end{equation}
This equation must be completed with the continuity equation $\nabla \cdot \textbf{u} = 0$ and with the appropriate boundary conditions, taking into account the position at all times of the surface of the deformable body.
For example, no-slip boundary conditions stipulate that $\textbf{u}_{\mathrm{fluid}} = \textbf{u}_{\mathrm{body}}$ at each point on the surface of the body.
It is often appropriate to compare the magnitude of the various terms in equation~(\ref{NS}), in order to identify the relevant effects and provide some simplification.
Let $L$ and $U$ be a typical length and a typical speed of the flow, respectively.
For example, this could be the body length and speed of the swimmer, although it is sometimes more useful to look at the dimensions associated with what is producing the flow, like a beating fin or a rotating flagellum.
The left member of equation~(\ref{NS}) represents inertial forces and contains the unsteady term $\partial \textbf{u} / \partial t$ and the advection term $\textbf{u} \cdot \nabla \textbf{u}$.
Both terms have the units of $U^2 /L$.
The viscous forces per unit mass $\nu \nabla^2 \textbf{u}$ scale like $\nu U / L^2$.
Therefore, the ratio of inertial and viscous forces in the flow is given by the Reynolds number Re~$=UL/\nu$~\cite{stokes1851,reynolds1884}.
This means that, for a given liquid, viscous forces tend to dominate over inertia at small scales.
Water has a kinematic viscosity of about $\nu=10^{-6}$~m$^2/$s, so that for a swimmer moving at one body length per second, \emph{i.e.} $L/U = 1$~s, we would have Re $<$ 1 for $L<1$~mm.
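This scaling can be checked with a short numerical sketch (the value of $\nu$ for water follows the text; the function name is ours):

```python
def reynolds(L, U, nu=1e-6):
    """Reynolds number Re = U * L / nu (nu ~ 1e-6 m^2/s for water)."""
    return U * L / nu

# One body length per second means U = L / (1 s), so Re = L**2 / nu:
re_um = reynolds(1e-6, 1e-6)  # L = 1 um -> Re ~ 1e-6: deep Stokes regime
re_m = reynolds(1.0, 1.0)     # L = 1 m  -> Re ~ 1e6: inertia dominates
```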
For Reynolds numbers close to zero, the left member of equation~(\ref{NS}) can be neglected.
This leads to the Stokes equation
\begin{equation}
-\frac{1}{\rho}\nabla p + \nu\nabla^2 \textbf{u} + \textbf{g} = 0
\label{Stokes}
\end{equation}
which is linear and independent of time.
This has some serious consequences on the swimming mechanisms of microorganisms~\cite{purcell1977,lauga2009}.
The fact that time does not intervene in the Stokes equation means that flows are typically reversible and rate independent.
Consider a body composed of two segments linked by a hinge, as shown in Figure~\ref{scallop}(i).
The opening angle between the segments is the only degree of freedom.
At high Reynolds number, this simple structure can swim by rapidly closing, expelling water, and then reopening slowly.
This is a swimming strategy similar to what some scallops do, with valves in place of the segments, except that the water is expelled through small openings on either side of the hinge.
However, the rate of closing and opening does not influence the flow in the Stokes regime, meaning that only the succession of geometric configurations adopted by the swimmer matters.
With only one degree of freedom, our model scallop can only go back and forth between the open and closed configurations.
Even if water is pushed during the closing phase, it will always produce the inverse flow by reopening, so that the center of mass is not displaced over one period.
This leads to what is colloquially known as the ``scallop theorem'', stating that, at Re~$=0$, if the succession of configurations adopted by the swimmer is unchanged by a time-reversal transformation, then it cannot produce a net motion~\cite{purcell1977}.
Several other swimming strategies that have been proven to work at higher Reynolds number will fail for the same reasons.
For instance, two spheres of different sizes linked by an oscillating spring, as shown in Figure~\ref{scallop}(ii), would not be able to swim in the Stokes regime, as there is only one degree of freedom for deformation.
Such a swimmer has been shown to produce virtually no net motion below a critical value of the Reynolds number, at around Re~$\approx 20$~\cite{klotsa2015}.
Similarly, a beating rigid tail, shown in Figure~\ref{scallop}(iii), can produce no net motion if the Reynolds number is smaller than unity~\cite{alben2005}.
Depending on the particular case, the onset of a net motion for a reciprocal swimmer, following an increase in the Reynolds number, can occur either continuously or discontinuously~\cite{lauga2007}.
Another way to discuss the implications of equation~(\ref{Stokes}) is to consider that a swimmer, in the Stokes regime, is incapable of exerting a net force, or conversely, a net torque, on the surrounding fluid~\cite{lauga2009}.
Indeed, considering that inertia is negligible when Re~$\approx 0$ is equivalent to stating that the swimmer experiences a resultant force from the fluid equal to zero at all times, granted that there is no external force pushing the swimmer.
If the flow field around the swimmer is expressed as a multipole expansion, the term decaying like $1/r$, which corresponds to a point force and is called a stokeslet, is therefore zero.
The leading term in the far-field flow is thus a symmetric force dipole at best, \emph{i.e.} the flow generated by two opposite point forces, which decays like $1/r^2$.
This is called a stresslet and is a useful tool for describing in general terms the motion of a microswimmer and its interactions with its environment~\cite{lauga2009,trouilloud2008,pushkin2013}.
In general, a swimmer generates a flow that is a combination of stresslets and higher order terms, such as the source dipole, whose velocity field decays like $1/r^3$.
\begin{figure}
\includegraphics[width=\linewidth]{FIG-scallop.pdf}
\caption{On top, reciprocal swimming strategies that do not work in the bulk at low Reynolds number.
This includes (i)~a scallop-like swimmer~\cite{purcell1977}, (ii)~two oscillating spheres of different sizes~\cite{klotsa2015}, and (iii)~a body with a rigid beating tail~\cite{alben2005}.
In the middle, non-time-reversible deformation sequences that can produce a net motion in the Stokes regime.
The three-link swimmer~(iv) has two hinges that move out of phase~\cite{purcell1977}.
The two arms of the three-linked-spheres swimmer~(v) also oscillate out of phase~\cite{najafi2004}.
A deformable body such as a flexible magnetic tail~(vi) can also produce a non-reciprocal motion~\cite{dreyfus2005}.
Another way to propel three spheres is to use a triangular configuration~(vii), where the oscillation of one pair is accompanied by an out-of-phase rotation~\cite{grosjean2015}.
Below, swimming strategies that work in the Stokes regime only with a nearby interface.
A reciprocal swimmer such as the scallop-like one~(viii) can move in all directions when close to a deformable interface~\cite{trouilloud2008}.
Two stacked spheres in a precession movement~(ix) can move with an interface nearby~\cite{tierno2008b}.
Rotating spheres can also move close to an interface~(x), where they self-assemble into a colloidal conveyor belt under a combination of rotating and oscillating fields~\cite{martinez2015}.}
\label{scallop}
\end{figure}
For decades now, researchers have come up with swimming strategies that satisfy the conditions imposed by the scallop theorem~\cite{purcell1977,lauga2009,najafi2004,dreyfus2005,tierno2008,grosjean2015}.
Such strategies already exist in nature and are used by motile bacteria and sperm cells.
This includes the rotation of one or several helical flagella~\cite{purcell1977}, the sequential motion of a series of cilia~\cite{lauga2009} or the transfer of mass by deformation of the whole cell membrane~\cite{farutin2013}.
However, it is possible to devise strategies that are conceptually simpler, more adapted to analytical calculations or numerical simulations, and/or easier to implement experimentally with existing technologies.
One early example, proposed by Purcell in 1976~\cite{purcell1977}, is the addition of one degree of freedom in the scallop-like system from Figure~\ref{scallop}(i), which is now composed of two hinges and three segments, the so-called three-link swimmer, as shown in Figure~\ref{scallop}(iv).
This makes it possible to move the external arms one after the other, leading to a sequence of configurations that is not time-reversible.
As shown, the sequence leads to a net motion to the right.
This is easier to understand when picturing this swimmer as standing on a sinusoidal wave travelling to the left, where every configuration change moves the wave by a quarter wavelength.
Experimental implementations of this swimmer have been made, though they require macroscopic elements such as motors.
An arguably simpler, one-dimensional deformation sequence has been proposed by Najafi and Golestanian in 2004~\cite{najafi2004}.
Like the swimmer from Figure~\ref{scallop}(ii), it consists of spheres linked by arms whose length can vary.
In order to beat the scallop theorem, a minimum of three spheres and two independent arms is required.
The sequence as depicted in Figure~\ref{scallop}(v) leads to a net motion to the right.
A lot of theoretical work has been based on this swimmer, sometimes using oscillating springs instead of arms~\cite{golestanian2008,zargar2009,pickl2012,pande2015,pande2017}.
This can be attributed in part to its one-dimensional nature which greatly facilitates analytical calculations.
Notably, it can be shown that the speed of this swimmer over one period is proportional to the area of the cycle drawn by the swimmer in the plane defined by the two arms' lengths~\cite{golestanian2008}.
If the arms oscillate harmonically at a frequency $\omega$, this can be expressed as the product of the amplitudes of oscillations of each spring with the sine of their phase difference $\phi$, namely
\begin{equation}
V = K A_1 A_2 \, \omega \sin \left( \phi \right) = K W,
\label{golestanian}
\end{equation}
where $K$ is a geometrical prefactor that can be determined analytically, and where we define the swimming efficiency $W$.
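Equation~(\ref{golestanian}) can be evaluated directly: in-phase arms ($\phi = 0$) enclose no area in configuration space and give zero net speed, while $\phi = \pi/2$ maximizes it (a trivial numerical sketch; the parameter values are illustrative):

```python
import math

def swim_speed(K, A1, A2, omega, phi):
    """Mean speed V = K * A1 * A2 * omega * sin(phi) of the
    three-linked-spheres swimmer (K: geometric prefactor)."""
    return K * A1 * A2 * omega * math.sin(phi)

# In-phase arms trace no area in the (arm-length, arm-length) plane:
v_inphase = swim_speed(1.0, 0.1, 0.1, 2 * math.pi, 0.0)
# Oscillations in quadrature maximize the enclosed area, hence the speed:
v_max = swim_speed(1.0, 0.1, 0.1, 2 * math.pi, math.pi / 2)
```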
Despite its simplicity, the three-linked-spheres swimmer is far from being the first to have been implemented experimentally.
This honour goes to the work by Dreyfus \emph{et al.} in 2005, which is often regarded as the first artificial microswimmer~\cite{dreyfus2005}.
A magnetic tail, composed of superparamagnetic colloids linked by DNA strands, is attached to a red blood cell.
In an oscillating magnetic field, the tail deforms and aligns itself periodically with the field, following a non-time-reversible sequence illustrated in Figure~\ref{scallop}(vi).
Another possible deformation sequence uses three spherical particles forming a regular triangle, as shown in Figure~\ref{scallop}(vii)~\cite{lumay2013,grosjean2015}.
It only requires one pair of spheres to oscillate, while the other two arms can remain rigid.
In this case, the rotation of the ensemble is the key ingredient to generate a non-reciprocal cycle.
Indeed, the center of rotation is determined by the hydrodynamic interaction between the spheres.
During each contraction of the pair, it moves away from the center of mass, which is then displaced by the rotation.
Once the swimmer goes back to the equilateral configuration, the center of mass and the center of rotation coincide again, so that a net displacement has been produced over one cycle.
Compared to the one-dimensional three-bead-swimmer, the triangular one can freely move in the plane with two degrees of freedom.
\subsection{Beyond the scallop theorem}
While the scallop theorem has been the basis for many studies, there are several cases where it is not applicable.
For instance, the independence in the rate of deformation of the body is only valid in a Newtonian fluid.
In a shear thinning or shear thickening fluid, for example, the change in apparent viscosity can be used to produce a net displacement with a reciprocal motion~\cite{lauga2009b,qiu2014}.
The proximity of another body can also be used to relax the condition imposed by the scallop theorem.
For instance, two out-of-phase reciprocal swimmers can essentially act as one non-reciprocal swimmer~\cite{lauga2008}.
A nearby interface, which is a common scenario in biological fluids or microfluidic devices, can also be used to beat the scallop theorem.
In their 2008 paper, Trouilloud \emph{et al.} studied the flow induced by a reciprocal swimmer near an interface, by looking at the flow in the far field as a superposition of stresslets and source dipoles~\cite{trouilloud2008}.
In this case, while the proximity of a rigid wall can induce an additional velocity component, it does not suffice to overcome the scallop theorem.
However, a reciprocal swimmer can move when the interface in question is deformable, such as the interface between two fluids, or between a fluid and a deformable solid, like a membrane or a gel.
Swimming is possible towards, away and parallel to the interface, depending on the stresslets and source dipoles considered.
To generate a significant motion, the swimmer must produce a large enough deformation of the interface.
The swimmer exerts a typical viscous force $\eta U L$ on the interface, where $L$ represents both the swimmer size and its distance to the interface, which must be compared to a typical restoring force, such as a capillary force $\gamma L$ in the case of an interface between two fluids.
In this case, one obtains the capillary number $\eta U/\gamma$ which must be larger than unity while keeping the Reynolds number small.
This leads to a typical length scale $\eta^2 / \rho \gamma$, the Ohnesorge length, under which this kind of propulsion is effective.
Note that for simplicity, two fluids of similar viscosity and density were considered.
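For a water-air interface, this length scale is easily estimated (standard values for water; a back-of-the-envelope sketch with names of our choosing):

```python
def ohnesorge_length(eta, rho, gamma):
    """Length scale eta**2 / (rho * gamma) below which a reciprocal
    swimmer can deform a fluid-fluid interface enough to self-propel."""
    return eta ** 2 / (rho * gamma)

# Water-air: eta ~ 1e-3 Pa.s, rho ~ 1e3 kg/m^3, gamma ~ 0.072 N/m,
# giving a length of order ten nanometers.
l_water = ohnesorge_length(1e-3, 1e3, 0.072)
```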
While a deformable interface is required in the work by Trouilloud \emph{et al.}, it is possible to generate motion parallel to a rigid interface with reciprocal motion.
In experiments performed by Tierno \emph{et al.}, a small paramagnetic sphere is attached to a larger one~\cite{tierno2008,tierno2008b,tierno2010}.
The doublet is then submitted to a precessing magnetic field, as illustrated in Figure~\ref{scallop}(ix).
Motion is induced in a direction perpendicular to the precession axis and parallel to the wall.
Here, the presence of the wall creates a difference in viscous dissipation between each part of the rotation, far and close to the interface.
Indeed, the ``return stroke'' experiences a lower viscous dissipation than the ``forwards stroke'', which is closer to the interface.
This could be compared with the previously discussed reciprocal motion in a non-Newtonian fluid, except that the modulation in viscous dissipation is due to the proximity of the interface, and not the fluid itself.
This asymmetry explains the apparition of a net motion that would not be observed in the bulk.
Similarly, a simple rotational motion in a plane perpendicular to a nearby interface can lead to propulsion, as the flows below and above the rotating body differ.
On the other hand, several magnetic colloids rotating in a plane parallel to the interface can self-assemble into a two-dimensional colloidal carpet, as the time-averaged dipole-dipole interaction between the spheres is attractive.
By combining the two effects, it is possible to generate a colloidal carpet that moves along the interface~\cite{martinez2015}.
This is illustrated in Figure~\ref{scallop}(x).
The speed of the carpet increases with the number of particles and saturates around $N \approx 300$.
This structure can work as a conveyor belt able to transport a cargo.
\section{Swimming at the interface}
\begin{figure}
\includegraphics[width=\linewidth]{FIG-surface.pdf}
\caption{On top, swimmers that propel using a gradient in surface tension, also known as the Marangoni effect.
In the classic camphor boat~(i), surface tension is locally lowered by a piece of camphor dissolving at the stern~\cite{nakata2013}.
Marangoni-driven propulsion can also be obtained by releasing a solvent~(ii) contained in a gel~\cite{bassik2008} or in a droplet coated with colloids~\cite{bormashenko2015}.
Note that motion can arise even with symmetric objects, as a spontaneous breaking of symmetry can be observed.
The gradient in surface tension can also come from a temperature change~(iii), for instance using a light source~\cite{okawa2009}.
In the middle, two systems that use surface waves for propulsion.
A droplet placed on a vertically vibrating bath can be deformed by a Faraday standing wave~(iv), leading to a net motion after a symmetry breaking in the position of the nodes~\cite{ebata2015}.
Magnetic colloids~(v) can also generate surface waves under a vertical oscillating field~\cite{snezhko2006}.
The particles arrange to form an aster~(vi), which can swim if the spatial symmetry is broken~\cite{snezhko2011}.
Below, floating magnetic spheres under a vertical constant field~(vii) arrange into structures~(viii) due to a competition between magnetic and capillary forces~\cite{lumay2013}.
These assemblies can move under oscillating fields~(ix), for example by mimicking the deformation sequence of the three-linked-spheres swimmer from Figure~\ref{scallop}(v)~\cite{grosjean2016}.}
\label{surface}
\end{figure}
In the previous section, we discussed how the breaking of spatial symmetry provided by a nearby interface can make it possible to beat the scallop theorem.
We will now show that interfacial phenomena, such as the Marangoni effect, surface waves or the so-called Cheerios effect, can also be used to generate microswimmers.
\subsection{Marangoni effect}
In the scallop-theorem paradigm, it is assumed that the self-propulsion of the swimmer is achieved through the deformations of the body.
However, it is also possible for a rigid body to achieve a force-free self-propulsion through a self-generated gradient.
This is the basic principle behind self-phoretic swimmers, which induce flows on their surface through gradients in concentration, temperature or electrostatic potential~\cite{golestanian2007}.
For example, self-diffusio\-phoresis can arise when a particle is partially covered with a catalyst for a chemical reaction that can occur in the surrounding fluid, locally creating a gradient in concentration~\cite{howse2007,popescu2016}.
Asymmetry is not strictly required, as a symmetry breaking can also spontaneously occur with isotropic particles~\cite{michelin2013}.
The process is similar with self-thermo\-phoresis~\cite{jiang2010} or self-electro\-phoresis~\cite{pumera2010}.
Self-phoretic swimmers are a class of their own, involving many distinct mechanisms, which is why they will not be discussed in more detail here.
However, the principle of a self-generated gradient is also used for motion along an interface, in the case of propulsion by Marangoni effect.
The camphor boat, which has been known for more than a century, is now used as a model system for low-Reynolds-number locomotion~\cite{nakata2013,karasawa2014}.
A piece of camphor is attached to a floating object.
When it dissolves in the water, the camphor molecules adsorbed at the water surface locally lower surface tension, as illustrated in figure~\ref{surface}(i).
The resulting surface tension gradient propels the object forwards.
While the camphor boat is the earliest and most well-known example of propulsion by Marangoni effect, it is far from being the only one.
For example, a body releasing a solvent in the surrounding liquid can generate a gradient in surface tension, as represented in Figure~\ref{surface}(ii).
Examples of bodies placed on a water bath include a gel disk soaked in oxolane (tetrahydrofuran)~\cite{mitsumata2001} or ethanol~\cite{bassik2008}, a droplet of aqueous ethanol coated with colloidal particles, called a liquid marble~\cite{bormashenko2015}, and a soap disk at an oil-water interface~\cite{nakata2005}.
This effect has also been observed with pure water droplets placed on an oil-surfactant bath~\cite{izri2014}.
Note that these objects can be isotropic, as any small anisotropy due to the initial conditions can increase when the object starts to move.
A variation on this principle is to generate a surface tension gradient by locally heating the fluid, for example by illuminating an object with intense light.
This is illustrated in Figure~\ref{surface}(iii), where a light-absorbing element is placed at the back of an object heated with focused light~\cite{okawa2009}.
Using a laser as heat source makes it possible to move isotropic objects such as steel spheres, as it allows light to hit the surface at a precise point~\cite{mallea2017}.
Finally, stationary heated structures on a chip suspended above the surface can generate many types of behaviours by using point, line, annular or triangular heat sources~\cite{basu2007}.
\subsection{Surface waves}
Another type of interfacial phenomenon that can be used for propulsion is surface waves.
For instance, one can use Faraday waves, \emph{i.e.} standing waves that appear on a vibrating bath, for locomotion.
A famous example is the case of walking droplets, where an oil droplet bouncing on a vibrating bath, just below the onset of the Faraday instability, generates waves that help propel it forwards~\cite{couder2005}.
This does not technically qualify as swimming, as the droplet is never immersed in the bath.
It is possible, however, to produce a swimmer by using a similar system~\cite{ebata2015}.
In this case, a water droplet is placed on a vibrating bath of silicon oil.
The droplet is mostly immersed, with a small cap peeking above the surface of the bath.
Under a strong enough acceleration of vibration $\Gamma$, a Faraday standing wave can appear on the surface of the water droplet, as depicted in Figure~\ref{surface}(iv).
This generates a flow in the surrounding oil that can lead to motion.
Depending on the forcing parameters, several types of motions are observed, including spinning, rotation on an orbit, zig-zag and translational motion.
This is linked to the number and positions of the nodes of the standing wave on the droplet.
Indeed, if the nodes are in a straight line, the droplet is either stationary or spinning.
Conversely, for some values of the forcing parameters, a symmetry breaking in the position of the nodes can spontaneously appear, leading to a net motion.
The Reynolds number for this system is typically around 0.1.
With a less viscous bath, and thus a higher Reynolds number of around 10, another type of motion can be observed.
In this case, the wave on the droplet is a travelling wave, leading to locomotion in the opposite direction.
This resembles the squirming model for microswimmers, where a sphere deforms its surface to generate a flow, similarly to what is observed with some bacteria such as ciliates~\cite{pak2014}.
In a second example, ferromagnetic particles floating on water can produce waves when submitted to a vertical oscillating field $B_z$.
Indeed, while the interaction between vertical dipoles on the surface is repulsive, nearby particles can form chains with a resulting moment in the plane of the interface.
These chains can deform the interface as they try to align themselves with the vertical field, which leads to the formation of self-organized structures, such as ``snakes''~\cite{snezhko2006} and asters~\cite{snezhko2011}.
A side view of an aster is depicted in Figure~\ref{surface}(v).
The colloidal chains stand on the slope of the standing wave they produce.
The addition of a constant horizontal field $B_x$ can break the circular symmetry of the aster, as shown in Figure~\ref{surface}(vi).
The asymmetric aster generates a net fluid flow, leading to locomotion.
Note that, while the particles have a typical size of 90~$\mu$m, inertia is not negligible in the flow, as the Reynolds number is of the order of 10.
\subsection{Magnetocapillary swimmers}
\begin{figure}
\includegraphics[width=\linewidth]{SwimmerCrop.jpg}
\includegraphics[width=\linewidth]{SwimmerCrop2.jpg}
\caption{On top, a triangular magnetocapillary swimmer, composed of three 500~$\mu$m spheres floating on water submitted to a vertical field $B_z\approx 3$~mT.
On bottom, a collinear swimmer composed of two 400~$\mu$m and one 500~$\mu$m sphere.
In this case we have a vertical $B_z\approx 5$~mT and horizontal $B_x\approx 3$~mT.}
\label{magnetocapillary}
\end{figure}
The last type of surface microswimmer that we will discuss is the magnetocapillary swimmer~\cite{lumay2013,hubert2013,grosjean2015,chinomona2015,lagubeau2016,grosjean2016,grosjean2017}.
Metallic spheres of 500~$\mu$m in diameter are placed on a water surface under a constant vertical magnetic field $B_z$.
The spheres experience an attractive force due to capillarity.
Indeed, each particle is surrounded by a meniscus, as the weight of the particle deforms the surface.
This leads to the emergence of a lateral capillary force, which is attractive in the case of similar particles~\cite{kralchevsky1994}.
This effect is colloquially known as the Cheerios effect, as it can be observed simply with breakfast cereals in a bowl of liquid~\cite{vella2005}.
The vertical field $B_z$ is used to counter this attraction.
While they are made of a ferromagnetic material, the spheres have an almost linear magnetization due to finite size effects~\cite{lagubeau2016}.
This means that they behave essentially like paramagnets, albeit with a comparatively large effective susceptibility $\chi_{\mathrm{eff}} \approx 3$.
However, the beads can reorient in an external field, an effect which can be attributed to a small residual magnetism of the order of 100~A/m.
Under a vertical field, the magnetic interaction between the spheres is a repulsion.
The combination of these two forces can lead to a finite equilibrium distance, as illustrated in Figure~\ref{surface}(vii).
For more than two particles, organized structures emerge, typically following a triangular lattice~\cite{lumay2013}.
The basic structures observed, up to $N=7$, are shown in Figure~\ref{surface}(viii).
A photograph of a triangular magnetocapillary swimmer is shown in Figure~\ref{magnetocapillary}.
\begin{figure}
\includegraphics[width=\linewidth]{FIG-cycles.pdf}
\caption{(a) Interdistances $d_1$ and $d_2$ describe a non-reciprocal cycle in a collinear magnetocapillary swimmer.
The experiment was run for ten oscillations at 3~Hz.
(b) An internal angle $\alpha$ and the orientation of the swimmer $\theta$ also describe a non-reciprocal cycle in a triangular swimmer.
Orientation $\theta$ is defined as the average of the orientations of the three particles in the frame of the center of mass.
Ten oscillations at 0.5~Hz are shown.
}
\label{cycles}
\end{figure}
\subsubsection{Collinear swimmer}
In order to beat the scallop theorem, a minimum of three rigid spheres is required~\cite{lumay2013}, which is why the triangular swimmer was the first one to be studied in depth~\cite{grosjean2015}.
However, this triangular swimmer is neither the only, nor the most simple three-particle swimmer obtainable with a magnetocapillary system.
In fact, under a large enough constant horizontal field, a collinear configuration becomes stable~\cite{chinomona2015}.
An example of such a configuration is shown in Figure~\ref{magnetocapillary}.
This means that it is possible to mimic the deformation cycle of the three-link swimmer depicted in Figure~\ref{scallop}(v)~\cite{grosjean2016}.
To generate the deformation, a horizontal field of the form
\begin{equation}
B_x = B_{x,0} + \delta B \sin \left(2\pi f t\right)
\end{equation}
is used, where $\delta B \ll B_{x,0}$ in order to maintain the swimmer in the collinear state.
Identical particles will oscillate around their equilibrium position in a time-reversible way, with the magnetocapillary interaction essentially acting as a non-linear spring that brings the particles back to their equilibrium position.
In order to break time-reversal symmetry, one must introduce an asymmetry in the system.
This is simply done by changing the size of one of the external particles, as shown in Figure~\ref{surface}(ix) as well as Figure~\ref{magnetocapillary}.
In this spring analogy, this is equivalent to changing the spring constant of one of the two springs, which can introduce a phase difference between the oscillations.
The swimmer therefore follows a deformation sequence similar to the one proposed in~\cite{najafi2004} and depicted in Figure~\ref{scallop}(v).
Note that, while inertia in the flow can be neglected, the oscillations of the springs must not be overdamped in order to observe the phase shift, which means that the inertia of the particles themselves must be considered.
A typical experimental deformation cycle is shown in Figure~\ref{cycles}(a).
This approximately circular trajectory in the $(d_1,d_2)$ plane means that the two oscillations are in quadrature for $f \approx 3$~Hz.
Note that the maximum speed is not necessarily reached at the phase quadrature, as the amplitudes of both oscillations are also a function of $f$.
The amplitudes reach their maximum at the radial magnetocapillary resonance frequency described in~\cite{lagubeau2016}, which happens typically around 2 or 3~Hz.
An analytical expression for the swimming speed can be obtained by combining the equations of motion of the particles with equation~(\ref{golestanian}).
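Equation~(\ref{golestanian}) is not reproduced in this excerpt, but the scaling it encodes can be sketched: for a three-linked-spheres swimmer, the time-averaged speed grows with the product of the two arm amplitudes, the driving frequency, and the sine of their phase difference. The snippet below uses a purely illustrative prefactor $K$ (not taken from the text) and shows why the phase quadrature of the two oscillations matters:

```python
import math

# Time-averaged speed of a three-linked-spheres swimmer scales as
#   V ~ K * A1 * A2 * omega * sin(phi)
# where A1, A2 are the arm oscillation amplitudes, omega = 2*pi*f,
# phi is their phase difference, and K is a geometry-dependent
# prefactor (the default below is illustrative, not from the text).
def mean_speed(A1, A2, f, phi, K=1.0):
    return K * A1 * A2 * 2.0 * math.pi * f * math.sin(phi)

# Speed is maximal at phase quadrature (phi = pi/2) and vanishes for
# in-phase, reciprocal oscillations, as the scallop theorem demands.
v_quad = mean_speed(1e-4, 1e-4, 3.0, math.pi / 2)
v_sync = mean_speed(1e-4, 1e-4, 3.0, 0.0)
```

The vanishing speed at $\varphi = 0$ is the scallop theorem at work: in-phase arm strokes trace a reciprocal, zero-area cycle in the deformation plane.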
Within this framework, and with the possibility of analytical developments, the collinear swimmer could be used as a model system to verify some general principles of microswimmers~\cite{pande2017,grosjean2016}.
For instance, it has been shown that the influence of the viscosity of the surrounding fluid on a swimmer is not always trivial and, in particular, that an increase in viscosity can counter-intuitively lead to an increase in speed~\cite{pande2017}.
This is also expected in the case of the magnetocapillary swimmer, as shown in Figure~\ref{speed}(a).
There seems to exist an optimal viscous damping in terms of swimming speed, which is a function of the excitation frequency $f$.
The magnetocapillary collinear swimmer could therefore offer a way to experimentally validate the results from~\cite{pande2017}.
Similarly, there is an optimal surface tension $\gamma$ for a given excitation frequency, as shown in Figure~\ref{speed}(b), where $\gamma$ has been varied while keeping the contact angle constant.
If we consider that the particles are linked by a magnetocapillary spring, the role of surface tension is essentially to act as an extension spring force while the dipole-dipole repulsion induced by $B_z$ acts as a compression spring force.
\begin{figure}
\includegraphics[width=\linewidth]{speedGamma.pdf}
\caption{(a) Swimming efficiency $W$ of the collinear magnetocapillary swimmer as a function of viscosity, for various values of the excitation frequency $f$.
(b) Swimming efficiency $W$ of the collinear magnetocapillary swimmer as a function of surface tension, for various values of the excitation frequency $f$.
The contact angle between the spheres and water was kept constant.
}
\label{speed}
\end{figure}
\subsubsection{Triangular swimmer}
To generate a non-reciprocal motion, the triangular swimmer is submitted to a horizontal oscillating field
\begin{equation}
B_x = B \sin \left(2\pi f t\right)
\end{equation}
with $B<B_z/\sqrt{2}$ to avoid contact between the particles~\cite{grosjean2017}.
A constant horizontal field $B_{x,0}$ can also be added, which tends to force the swimmer into a particular swimming mode by further breaking spatial symmetry in the system~\cite{grosjean2015}.
In general, the frequency of the oscillating field is below 1~Hz, which leads to a relatively low Reynolds number for the size of the particles, usually between $10^{-3}$ and $10^{-1}$.
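As a rough consistency check, this Reynolds number can be estimated from the values quoted in this paper (500~$\mu$m spheres, a driving frequency below 1~Hz, and a typical speed of about one particle radius per period); the kinematic viscosity of water, $\nu \approx 10^{-6}$~m$^2$/s, is an assumed standard value here:

```python
# Order-of-magnitude Reynolds number Re = U*d/nu for the triangular
# magnetocapillary swimmer, using values quoted in the text; the
# kinematic viscosity of water is an assumed standard value.
nu = 1e-6        # kinematic viscosity of water (m^2/s), assumed
d = 500e-6       # sphere diameter (m)
f = 0.5          # driving frequency (Hz), below 1 Hz as stated
U = (d / 2) * f  # ~one particle radius per period (m/s)
Re = U * d / nu
```

This gives $U \approx 0.13$~mm/s and $\mathrm{Re} \approx 0.06$, inside the $10^{-3}$ to $10^{-1}$ range quoted above.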
Higher excitation frequencies tend to lead to more complex behaviours~\cite{hubert2013}.
One typical deformation sequence observed is similar to the one depicted in Figure~\ref{scallop}(vii).
The regular triangle deforms into an isosceles triangle as the magnitude of the field increases.
A rotation is then observed, kick-started by the presence of a small residual magnetism in the spheres, which leads to a torque on the assembly.
The center of mass CM moves, as it is distinct from the center of rotation CR, which is determined by the hydrodynamic interactions between the spheres.
The swimmer then goes back to a less deformed state and rotates back to its initial orientation, leading overall to a net displacement of the center of mass.
Figure~\ref{lepto} illustrates this process, showing a typical cycle going from equilateral to a pointy isosceles, which then deforms back into an equilateral and rotates back to its initial configuration.
The pointy isosceles state, whose apex angle $\alpha$ is below $\pi/3$ and which is called \emph{lepto}, is shown in more detail.
The center of rotation is defined by the hydrodynamic coupling between the spheres.
Indeed, assuming each sphere rotates individually due to the magnetic field, the induced flow field leads to a force on the neighbouring particles.
The resulting forces define a center of rotation that is aligned with CM and the apex.
In the case of a flat isosceles triangle, whose apex angle $\alpha$ is above $\pi/3$ and which is called \emph{platy}, CM would be further away from the apex than CR.
The \emph{lepto} and \emph{platy} configurations were observed in both experiments and quasi-static simulations, though with a different particle at the apex~\cite{grosjean2015}.
Their effect in the cycle is complementary, as they rotate in opposite directions.
For the sake of simplicity though, the cycle from Figure~\ref{lepto} only goes back and forth between a regular and a \emph{lepto} configuration, as the \emph{platy} state is less pronounced and not essential to explain the non-reciprocal cycle.
An experimental deformation cycle is shown in Figure~\ref{cycles}(b).
The two degrees of freedom shown are the internal angle $\alpha$, corresponding to the apex of the \emph{lepto} triangle, and the orientation of the triangle $\theta$, defined as the average orientation of the spheres in the frame of the center of mass.
\begin{figure}
\includegraphics[width=\linewidth]{FIG-lepto.pdf}
\caption{
The magnetocapillary triangle swims thanks to a combination of deformation and rotation.
The rotation-translation hydrodynamic coupling (dotted lines), caused by the individual rotation of the spheres, defines a rotation center CR (red circle) that, in an isosceles of apex angle $\alpha$, is distinct from the center of mass CM (red dot).
Therefore, the rotation leads to a displacement of CM that is preserved if the return rotation happens when the triangle is equilateral.
CR moves closer to or further away from the apex for $\alpha>\pi/3$ (\emph{platy}) or $\alpha<\pi/3$ (\emph{lepto}), respectively.
In the experiment, the swimmer tends to deform into a flat isosceles during the return rotation~\cite{grosjean2015}, which further increases the effect.
}
\label{lepto}
\end{figure}
Because the particles are further away from each other on average and due to the restrictions on the amplitude $\delta B$, the collinear swimmer is usually about ten times slower than the triangular one, whose typical speed is around one particle radius per period of the oscillating field.
This is why the triangular swimmer, despite being more complex, is more appropriate in the study of potential applications of microswimmers.
Indeed, it is possible to control rather precisely its trajectory in the plane of the interface~\cite{grosjean2015}.
The possibility of capturing, transporting and releasing a floating cargo has been demonstrated experimentally, as well as the mixing of fluids at low Reynolds number~\cite{grosjean2017}.
\subsection{Surface effects at larger scales}
\begin{figure}
\includegraphics[width=\linewidth]{FIG-meso.pdf}
\caption{
Examples of centimeter-scale surface locomotion.
The larva of the waterlily leaf beetle (i) can move up a meniscus by deforming its body to generate a Cheerios effect.
Water striders (ii) float and propel themselves by shedding vortices thanks to their hydrophobic legs~\cite{hu2003}.
A simple surface swimmer (iii) composed of two arms and an asymmetric body can swim by expelling vortices.
There is a net motion to the right thanks to the asymmetry in momentum transfer to the fluid.
Experimentally (iv), the oscillation of the piece is achieved thanks to embedded magnets in the arms and an external oscillating field.
}
\label{meso}
\end{figure}
While the systems described in this paper mainly belong to the realm of low-Reynolds-number flows, it should be noted that interfacial effects are typically relevant up to the centimeter scale.
In fact, the capillary length $l_c = \sqrt{\gamma/\rho g}$, below which surface tension dominates over gravity, is around 2.7~mm in water.
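This value is easily checked (a sketch assuming standard room-temperature properties of water, $\gamma \approx 0.072$~N/m and $\rho = 1000$~kg/m$^3$):

```python
import math

# Capillary length l_c = sqrt(gamma / (rho * g)) for water.
gamma = 0.072  # surface tension of water (N/m), assumed room-temperature value
rho = 1000.0   # density of water (kg/m^3)
g = 9.81       # gravitational acceleration (m/s^2)
l_c = math.sqrt(gamma / (rho * g))  # ~2.7e-3 m, i.e. ~2.7 mm
```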
This explains why some insects and other invertebrates rely on surface forces for propulsion.
For instance, some small animals can use the Cheerios effect to ascend a meniscus~\cite{hu2005}.
Water treaders, semiaquatic insects, achieve this by pulling the interface upwards with their legs.
Alternatively, some terrestrial insects such as beetle larvae can bend their whole body to generate the same effect, allowing them to reach land after an unintended fall onto water.
Figure~\ref{meso}(i) depicts a beetle larva climbing a meniscus.
Millimeter-long nematodes, also called roundworms, have been shown to not only climb a meniscus, but aggregate and remain grouped together thanks to the Cheerios effect~\cite{gart2011}.
Other invertebrates, such as water striders~\cite{hu2003} and fisher spiders~\cite{suter1997}, float on water thanks to surface tension.
They use their hydrophobic legs to transfer momentum to the liquid by generating U-shaped vortex rings attached to the interface, as represented in Figure~\ref{meso}(ii).
This type of locomotion relies on a higher Reynolds number, typically around 100 or more~\cite{hu2003}.
To achieve this, water striders possess three pairs of legs that secrete a hydrophobic wax.
They are covered with microscopic needles, called setae, which themselves are marked with a multitude of nanogrooves~\cite{feng2007}.
Only the middle pair of legs is used for propulsion.
This motion resembles rowing, as the return stroke happens outside of water.
In addition, some aquatic insects such as riffle bugs, smaller relatives of the water striders, can secrete surfactants to move by Marangoni effect~\cite{bush2007}.
This is based on the same principle that was depicted in Figure~\ref{surface}~(ii).
It generates a fast motion that is used as an escape mechanism.
It is possible to design artificial surface swimmers based on similar mechanisms.
For instance, artificial water striders have been built using an elastic thread~\cite{hu2003}, piezoelectric actuators~\cite{song2007} or small dc motors~\cite{zhang2011} to power the legs.
A larger number of supporting legs can allow such robots to support heavier loads~\cite{song2007,zhang2011}.
Using 3D-printing technology, we designed a very simple structure that captures the basics of this swimming strategy, \emph{i.e.} floating on water and transferring momentum to the fluid by producing vortices.
This swimmer is composed of a body that has the shape of a disk, with a pair of arms attached, as shown in Figure~\ref{meso}(iii).
However, to generate propulsion, the fore-aft symmetry of the piece must be broken.
Therefore, the central disk has a small radius on one side (fore) and a larger radius on the other (aft).
In the example shown in Figure~\ref{meso}(iv), the arm reach of the piece is 2~cm and the radii of the body are 0.35 (fore) and 0.50~cm (aft).
When this object oscillates, it can generate vortex half-rings on each side.
Thanks to the geometric asymmetry of the piece, the expulsion of vortices itself is asymmetric.
This leads to a net motion of about 1.8~cm per period of oscillation.
Figure~\ref{meso}(iv) shows the trail left by such a swimmer in coloured water.
One can see the rather large vortices on the side with the larger radius (aft).
In practice, a permanent magnet is embedded in each arm, oriented perpendicular to it.
The oscillation of the piece is powered by an oscillating magnetic field which creates a time-dependent torque.
The field is sinusoidal, of amplitude 2.8~mT and frequency 0.5~Hz and oscillates perpendicular to the swimming direction.
A small offset of 0.28~mT is added perpendicular to the oscillation, which prevents the piece from performing a full turn.
Figure~\ref{meso}(iv) also shows the trajectory of the arms' ends over one period of oscillation.
One can see that the piece swings back and forth between $-\pi$ and $\pi$ radians.
The Reynolds number of this system is typically a few hundred.
Another useful dimensionless number to describe vortex shedding is the Strouhal number, often written St~$= Af/U$ where $A$ is the stroke amplitude, $f$ its frequency and $U$ is the swimming speed~\cite{batchelor2000,taylor2003}.
In the case of undulatory propulsion of fish, the optimal Strouhal number has been theoretically predicted, and ranges between 0.15 for large animals like cetaceans up to 0.8 for small animals such as tadpoles~\cite{eloy2012}.
An oscillating piece like the one in Figure~\ref{meso}(iv) has a Strouhal number of about 0.55, suggesting that vortex shedding is the relevant swimming mechanism.
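The quoted value can be reproduced from the numbers given above: the net motion of 1.8~cm per period at $f=0.5$~Hz fixes $U$, while the stroke amplitude of the arm tips, $A \approx 1$~cm, is an assumed value not stated in the text:

```python
# Strouhal number St = A*f/U for the 3D-printed surface swimmer.
f = 0.5        # oscillation frequency (Hz)
U = 0.018 * f  # swimming speed (m/s), from 1.8 cm of net motion per period
A = 0.01       # stroke amplitude of the arm tips (m) -- assumed, not in text
St = A * f / U
```

With these numbers, $\mathrm{St} \approx 0.56$, consistent with the value of about 0.55 quoted above.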
Further studies could aim to optimize the efficiency of the swimming piece by varying the geometrical parameters as well as the applied field.
This could provide a model structure to study the laws that govern biolocomotion, as well as a basic element to construct untethered swimming robots.
\section{Conclusion}
It certainly makes sense, from a theoretical standpoint, to study microswimmers in an unbounded volume of fluid.
However, in real world systems such as microfluidic devices or the human body, microswimmers are highly likely to encounter obstacles, interfaces or membranes.
Here, we discussed the necessary conditions for swimming imposed by the so-called scallop theorem, which stipulates that a deforming body must adopt a non-time-reversible series of shapes in order to produce a net motion at low Reynolds number.
This condition can be relaxed in the vicinity of an interface, which can for example add an extra degree of freedom in the system.
Not only can an interface help produce the breaking of symmetry necessary for propulsion, but interfacial phenomena can also play a role in generating motion.
This includes the Marangoni effect, where a gradient in surface tension leads to a net motion; surface waves, which can generate flows in the surrounding fluid; and the Cheerios effect, where particles self-assemble into a swimmer thanks to a lateral capillary force.
The latter swimmer was studied in more depth in this paper.
Two distinct swimming mechanisms were evidenced and discussed both experimentally and numerically.
Interfacial forces can also play a role in systems with a larger Reynolds number, as seen in some insects and other invertebrates.
This was illustrated experimentally by designing an asymmetric oscillator that swims by vortex shedding.
\section{Acknowledgements}
GG thanks FRIA (FNRS) for financial support.
This work was financially supported by FNRS PDR grant T.0129.18.
\section{Authors contributions}
All the authors were involved in the preparation of the manuscript.
All the authors have read and approved the final manuscript.
\vfill
\bibliographystyle{epj}
\section{Introduction}
Global concern has arisen owing to rapid industrial development and population growth, resulting in energy scarcity and environmental pollution. In this regard, developing green and sustainable methods for producing clean energy and solving environmental pollution problems has attracted enormous attention~\cite{abe2010recent, samadi2016recent}. Among various auspicious strategies, semiconductor photocatalysis has been widely studied in recent years owing to its capabilities to obtain hydrogen as an energy carrier, to remove organic pollutants, and to reduce CO$_2$ by converting solar energy into chemical energy. The performance of photocatalytic materials is greatly dependent on the efficiency of visible light absorption, because approximately 50 percent of sunlight consists of the visible part~\cite{abe2010recent, li2013photoelectrochemical, ebrahimi2018facile}.
Recently, a metal-free semiconductor photocatalyst based on graphitic carbon nitride, g-C$_3$N$_4$ (GCN), has received much attention \cite{PhysRevB.50.10362, PhysRevB.64.235416, PhysRevB.73.125427, wei2013strong} from a photocatalytic perspective because of its high thermal stability, chemical stability, and visible light absorption~\cite{ding2016does}. However, pure g-C$_3$N$_4$ displays a poor photocatalytic efficiency owing to its low surface area, high recombination rate of photogenerated electron-hole pairs, and poor optical absorption above 420 nm \cite{zhang2015origin, naseri2017graphitic}. To avoid these drawbacks and enhance the photocatalytic performance, many strategies have been pursued to achieve a reasonable efficiency, such as exfoliating layered GCN into nanosheets \cite{xu2013chemical}, incorporating foreign elements and impurities including Fe~\cite{wang2009metal}, Na~\cite{xiong2016bridging}, K~\cite{xiong2016bridging}, Li~\cite{zhu2014lithium}, P~\cite{guo2016phosphorus}, O~\cite{bu2014effect}, and N~\cite{zhou2016n}, coupling with metals such as Ag~\cite{bai2014enhancement} and Au~\cite{cheng2013nanoparticle}, with inorganic semiconductors like TiO$_2$~\cite{chen2016heterojunctions}, and with layered semiconductors such as MoS$_2$~\cite{ge2013synthesis}.
To improve the charge transfer kinetics in GCN, many efforts have been put into designing and constructing GCN-based heterojunctions using different semiconductors, such as Fe$_2$O$_3$ and TiO$_2$, with proper valence band and conduction band potentials~\cite{ong2016graphitic}. Concerning GCN-nanosheet-based heterojunctions, structures composed of two components and three heterostructures~\cite{xu2015sulfur, li2016novel, yan2016construction} with different types of interfaces have been proposed so as to enhance the photocatalytic performance of GCN. Furthermore, the incorporation of GCN with metals (particularly noble metals such as Au and Ag) is an effective way to improve the charge kinetics of GCN \cite{ong2016graphitic}. In this regard, upon light absorption, collective oscillations of free electrons, known as the localized surface plasmon resonance effect, occur. This surface plasmon results in extending light absorption substantially into the visible range and thus increases the number of photogenerated electron-hole pairs in the adjacent semiconductor. Additionally, the incorporation of foreign elements and impurities into the GCN framework is an intriguing way to promote the electrical, optical, and surface properties of GCN \cite{ong2016graphitic}.
Co-doping is a promising technique that can be used for effectively tuning the dopant populations and the electronic and optical properties. It can enhance the solubility of dopants and improve the stability of desired defects. Recently, Zhang et al.~\cite{zhang2010phosphorus} have investigated the effect of P doping on the electrical characteristics of g-C$_3$N$_4$. Their results indicated that the electrical conductivity increases remarkably upon phosphorus doping, leading to a higher charge carrier density. Furthermore, Sagara et al.~\cite{sagara2016photoelectrochemical} have found that a B doped GCN electrode shows a far better CO$_2$ reduction activity than a pure g-C$_3$N$_4$ electrode. Very recently, boron and phosphorus co-doped g-C$_3$N$_4$ has been experimentally reported by Raziq et al.~\cite{raziq2017synthesis}. On the basis of their report, the optimized nanocomposites exhibit improved visible-light activities for CO$_2$ conversion as well as for phenol and acetaldehyde degradation. Additionally, Srinivasu et al.~\cite{srinivasu2014porous} theoretically illustrated that the incorporation of P or B elements into g-CN, another form of graphitic carbon nitride with CN stoichiometry, enhances the charge carrier mobility. It is worth mentioning that in the last five years, various studies have been conducted on the photocatalytic properties of GCN as a two-dimensional (2D) layered system synthesized via chemical, liquid, ultrasound, and thermal exfoliation. The interest in GCN nanosheets could be attributed to the fact that bulk GCN possesses a high degree of grain boundary defects, due to preparation at high temperatures, resulting in a high electron-hole recombination rate. Hence, by exfoliating layered GCN into nanosheets, improved electronic properties and a high specific surface area can be achieved.
In this work, the GCN monolayer is doped with phosphorous and boron impurities. Using first-principles calculations, we study the geometric, electronic, and optical properties of B doped, P doped, and B/P co-doped g-C$_3$N$_4$ monolayers and compare the results. We show that the incorporation of both B and P into the hexagonal lattice of GCN reduces the energy band gap from $3.1$ eV for pristine GCN to $1.9$ eV. Moreover, the co-doped system exhibits an improved absorption intensity in the visible region and additional electronic transitions which are prohibited in pristine GCN. Therefore, B/P co-doped GCN is expected to be a promising material for many chemical and optical applications.
This paper is organized as follows. In Sec.~\ref{sec:model}, we introduce our system and model and explain the method used to calculate the electronic and optical properties. In Sec.~\ref{sec:results}, we present and discuss the numerical results for the effects of co-doping on the GCN monolayer. Finally, we summarize our main results in Sec.~\ref{sec:concl}.
\begin{figure*}[t]
\centering
\includegraphics[width=15cm,height=18cm]{Figure1.jpg}
\caption{ (Color online) (a) Optimized structure of a GCN unit cell with hexagonal layered structure and (b) band structure of bulk GCN, which indicates a semiconducting behavior with a direct band gap energy of $2.70$ eV; (c) optimized ($2\times2$) supercell, (d) band structure of the GCN monolayer with a band gap of $3.11$ eV, and (e) PDOS of the GCN monolayer. Notice that the upper valence band is composed of the p states of N$_{\rm ring}$, while the lowest conduction band predominantly originates from the p states of the C and N$_{\rm ring}$ atoms. Moreover, some electrons, extracted from the C atoms, are delocalized over the N atoms. The $1\times 1$ supercell is used here.}
\label{fig1}
\end{figure*}
\section{Methods and Computational details}\label{sec:model}
In this study, density functional theory (DFT) simulations are carried out using the CASTEP code~\cite{segall2002first}. All crystalline cells, including bulk GCN, the GCN monolayer, B and P doped, and B/P co-doped GCN, are optimized within the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional. To treat the ion-electron interactions, ultrasoft pseudopotentials are employed. A kinetic energy cutoff of $500$ eV is used, and the reciprocal-space Brillouin zone is sampled with a $6\times6$ Monkhorst-Pack $k$-point grid for the geometry optimization and electronic structure calculations. A vacuum slab of $20$ $\mbox{\AA}$ along the $z$ direction (normal to the GCN monolayer) is applied to all pure and doped monolayer systems to avoid interactions between neighboring cells. All atomic positions and lattice parameters are relaxed until the convergence thresholds for the energy, the maximum Hellmann-Feynman force on each atom, and the maximum displacement are less than $1.0\times10^{-5}$ Hartree, 0.002 Hartree/\AA, and 0.005 \AA, respectively. To obtain more accurate band gaps, the HSE06 hybrid functional is also employed. Moreover, we consider dilute doping concentrations; the defect-defect interaction should therefore be very small and is ignored in our calculations.
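The Brillouin-zone sampling above uses a Monkhorst-Pack grid. As an aside, the fractional $k$-point coordinates of such a grid can be generated with a short sketch; the standard Monkhorst-Pack formula is assumed, and the single point along the vacuum direction (i.e.\ a $6\times6\times1$ grid) is our assumption, since the text only quotes $6\times6$:

```python
import numpy as np

def monkhorst_pack(q1, q2, q3):
    """Fractional coordinates of a Monkhorst-Pack grid.

    Along each axis the points are u_r = (2r - q - 1) / (2q), r = 1..q,
    i.e. a uniform mesh symmetric about the zone centre.
    """
    axes = [np.array([(2.0 * r - q - 1.0) / (2.0 * q) for r in range(1, q + 1)])
            for q in (q1, q2, q3)]
    return np.array([[u, v, w]
                     for u in axes[0] for v in axes[1] for w in axes[2]])

# 6x6 in-plane sampling with a single point along the vacuum (z) direction.
kpts = monkhorst_pack(6, 6, 1)
print(len(kpts))  # 36
```

For even subdivisions this mesh avoids the zone centre, which is why Monkhorst-Pack grids sample the zone efficiently with few symmetry-distinct points.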
\section{Results and discussion}\label{sec:results}
\subsection{Bulk and monolayer GCN}
In this section, we present our main numerical results based on first-principles simulations. Our aim is to explore the impact of co-doping on electronic and optical properties of GCN. All the first-principles calculations are performed at zero temperature.
Figure \ref{fig1}(a) presents the unit cell of triazine-based bulk GCN with hexagonal layered structure, belonging to space group P$\bar{6}$m2 (No.~187). The calculated optimized lattice constants, $a = b = 4.81$ \AA{} and $c = 6.27$ \AA, are in good agreement with experimental measurements and theoretical studies \cite{Liu2016, sun2008solvent}. The calculated band structure of bulk GCN is plotted in Fig. \ref{fig1}(b). Clearly, bulk GCN exhibits a semiconducting behavior with a direct band gap energy of $2.70$ eV, which is consistent with the experimental band gaps \cite{yang2013exfoliated}. In the GCN monolayer, shown in Fig. \ref{fig1}(c), two kinds of nitrogen atoms, namely N$_{\rm ring}$ and N$_{\rm bridge}$, are observed because of their different chemical environments~\cite{Liu2016}. In fact, while N$_{\rm bridge}$ atoms are fully saturated by three surrounding C atoms, N$_{\rm ring}$ atoms connect only two C atoms, leaving a non-bonding character behind~\cite{xu2015insights}. Accordingly, two kinds of bond lengths are calculated, around 1.47 and 1.33 \AA{} for C-N$_{\rm bridge}$ and C-N$_{\rm ring}$, respectively. The calculated band gap of the GCN monolayer is $3.11$ eV (Fig.~\ref{fig1}(d)), which is in reasonable agreement with the HSE06 band gap of $3.18$ eV~\cite{cui2015structural}. Moreover, the different chemical bonding environments of the nitrogen atoms are also confirmed by the calculated partial density of states (PDOS), shown in Fig. \ref{fig1}(e). On the basis of the PDOS, the upper valence band is composed of the p states of N$_{\rm ring}$, while the lowest conduction band predominantly originates from the p states of the C and N$_{\rm ring}$ atoms~\cite{xu2015insights}. Moreover, since nitrogen is more electronegative than carbon, some electrons, extracted from the C atoms, are delocalized over the N atoms~\cite{ma2012strategy}.
This behavior is confirmed by Mulliken population analysis, in which the electron distributions at N$_{\rm ring}$, N$_{\rm bridge}$, and C are -0.410, -0.330, and +0.520 electron, respectively. It is worth mentioning that the Kohn-Sham eigenvalues do not, in general, correspond to physical excitation energies of the system; the DFT-derived PDOS therefore provides only a qualitative picture of the accurate PDOS of the system.
Table \ref{tab1} presents some reported band gap values obtained experimentally as well as theoretically from DFT calculations employing different exchange-correlation functionals. It should be noted that the calculated band gap of pure GCN is always underestimated by generalized gradient approximations~\cite{wei2013strong}. The increase of the band gap for the GCN monolayer can be assigned to the quantum confinement effect.
\begin {table}[b]
\caption{The calculated and experimental band gap energies of pure and doped GCN}
\begin{center}
\begin{tabular}{|c|c|p {1.6 cm}|p {1.3 cm}|p {1 cm}|}
\hline
Structure & Method & Exchange-correlation functional & Band gap (eV) & Ref.\\
\hline
GCN monolayer & Experimental & - & 2.92 &\cite{xu2013chemical} \\
\hline
Layered GCN & Experimental & - & 2.79 & \cite{lin2015efficient}\\
\hline
Layered GCN & Experimental & - & 2.82 &\cite{ma2016water} \\
\hline
GCN monolayer & Theoretical & HSE06 & 3.03 & \cite{zhang2015origin}\\
\hline
GCN monolayer & Theoretical & HSE06 & 3.18 & \cite{cui2015structural}\\
\hline
Layered GCN & Theoretical & LDA & 1.43 &\cite{wei2013strong}\\
\hline
Layered GCN & Theoretical & GGA & 1.60 & \cite{gao2016atomically}\\
\hline
P doped GCN & Theoretical & HSE06 & 2.01 & \cite{ma2012strategy} \\
\hline
P doped GCN & Theoretical & HSE06 & 2.55 & \cite{srinivasu2014porous} \\
\hline
B doped GCN & Theoretical & HSE06 & Half metallic & \cite{meng2015half} \\
\hline
GCN monolayer & Theoretical & HSE06 & 3.10 & Present work \\
\hline
B doped GCN & Theoretical & HSE06 & Metallic & Present work \\
\hline
P doped GCN & Theoretical & HSE06 & Half-filled metallic & Present work \\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=3.3 in]{figure2.jpg}
\caption{ (Color online) The front views of the (a) B doped, (b) P doped, and (c) B/P co-doped GCN monolayer. }
\label{fig2}
\end{figure}
\subsection{Doped GCN monolayer}
In order to dope GCN with boron and phosphorous atoms, four possible sites, namely C, N$_{\rm ring}$, N$_{\rm bridge}$, and interstitial, can be considered. In the case of boron, substitution of B for C atoms is energetically most favorable~\cite{ding2016does}. For phosphorous doping, it is found that the P atom cannot substitute for the carbon or either type of nitrogen atom; instead, interstitially P doped GCN is reported to be the thermodynamically most stable configuration~\cite{wen2017review}. Finally, for the B/P co-doped system, the B atom substitutes for a C atom and the P atom chemisorbs on the GCN monolayer, as shown in Figs. \ref{fig2}(a), \ref{fig2}(b), and \ref{fig2}(c).
\begin{figure}[t]
\centering
\includegraphics[width = 3.1 in]{figure3v.jpg}
\caption{(Color online) (a) Band structure and (b) PDOS calculations of the B doped GCN monolayer. A metallic behavior is induced in the GCN monolayer when a C atom is replaced by the B atom. The metallicity is principally dominated by the p orbitals of the N$_{\rm ring}$ atoms connected with the B atom; the N$_{\rm bridge}$ and boron atoms make only a small contribution, as the PDOS in (b) illustrates. The $2\times2$ supercell contains 28 atoms, so the dopant concentration is $3.57$\% for the B doped GCN, $3.44$\% for the interstitially P doped GCN, and $6.89$\% for the co-doped system.}
\label{fig3}
\end{figure}
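The dopant concentrations quoted in the caption of Fig.~\ref{fig3} follow from simple atom counting. A sketch, assuming 7 atoms per C$_3$N$_4$ formula unit (28 host atoms in the $2\times2$ supercell) and that the quoted percentages are truncated, not rounded, to two decimals:

```python
from math import floor

# Atom counting behind the dopant concentrations of the 2x2 supercell,
# assuming 7 atoms per C3N4 formula unit and truncation to two decimals.

def truncate2(x):
    return floor(x * 100) / 100

host_atoms = 4 * 7                            # 2x2 supercell of C3N4 -> 28 atoms

c_B = truncate2(1 / host_atoms * 100)         # B substitutes a C atom: still 28 atoms
c_P = truncate2(1 / (host_atoms + 1) * 100)   # interstitial P adds an atom: 29 atoms
c_BP = truncate2(2 / (host_atoms + 1) * 100)  # B substitution + P interstitial: 29 atoms

print(c_B, c_P, c_BP)  # 3.57 3.44 6.89
```

With these assumptions the three quoted percentages are reproduced exactly.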
\begin{figure}[t]
\centering
\includegraphics[width = 3.1 in]{figure4.jpg}
\caption{(Color online) (a) Band structure and (b) PDOS calculations of the P doped GCN monolayer. A half-filled metallic behavior is found. Inset: the small crossing between the band structure and the Fermi energy is shown. Notice that the PDOS of P interstitially doped GCN is remarkably different from that of the undoped system. Moreover, doping GCN with the P atom leads to a reduction in the contribution of the p states of the N$_{\rm ring}$ atoms to the valence band edge, whereas there is a slight increase in the contribution of the p states of the N$_{\rm bridge}$ atoms to the conduction band edge. Besides, the P atom prefers to bind two adjacent N$_{\rm ring}$ atoms. The $2\times 2$ supercell is used here.}
\label{fig4}
\end{figure}
To study the stability of the mono-doped system, the formation energy ($E_f$) can be calculated using the following equations:
\begin{eqnarray}
E_{f}&=& E_{T}(sub)-E_{T}(pure)-\mu_{A}+\mu_{B}\nonumber\\
E_{f}&=& E_{T}(int)-E_{T}(pure)-\mu_{A}
\end{eqnarray}
where $E_{T}(pure)$, $E_{T}(sub)$, and $E_{T}(int)$ are the total energies of the pristine GCN and of the GCN doped with substitutional and interstitial dopants, respectively. $\mu_{A}$ and $\mu_{B}$ are the chemical potentials of the dopant (boron or phosphorous) and of the substituted host atom (carbon or nitrogen), respectively. For GCN, the relation $3\mu_{C}+4\mu_{N}=\mu(\mathrm{GCN})$ should be satisfied. To determine the chemical potentials, solid graphite \cite{ding2016does, ma2012strategy, Liu2016}, boron nitride (BN) \cite{ding2016does}, and P$_{4}$ are used \cite{ma2012strategy, Liu2016}: $\mu_{C}=\frac{\mu(\mathrm{graphite})}{4}$, $\mu_{N}=\frac{\mu(\mathrm{GCN})-3\mu_{C}}{4}$, $\mu_{B}=\mu(\mathrm{BN})-\mu_{N}$, and $\mu_{P}=\frac{\mu(\mathrm{P_4})}{4}$ \cite{franck1990jd, wiberg1972chemische, chase1974janaf}. In the triazine GCN monolayer, there are two inequivalent N sites, N$_{\rm ring}$ and N$_{\rm bridge}$, while all carbon atoms are chemically equivalent. The dopant formation energies of the B doped and P doped GCN monolayers are reported in Table \ref{tab2}. As seen from Table \ref{tab2}, the B substituted carbon and the P interstitially doped systems show the lowest formation energies; therefore, these configurations are energetically the most favorable.
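The bookkeeping of Eq.~(1) can be sketched numerically. All total energies and chemical potentials below are hypothetical placeholders in eV (the paper's DFT values are not reproduced here); only the arithmetic structure follows the text, including the constraint $3\mu_{C}+4\mu_{N}=\mu(\mathrm{GCN})$:

```python
# Sketch of the formation-energy bookkeeping of Eq. (1). All total
# energies and chemical potentials below are HYPOTHETICAL placeholder
# values in eV; only the arithmetic structure follows the text.

def e_form_substitutional(E_sub, E_pure, mu_dopant, mu_host):
    # E_f = E_T(sub) - E_T(pure) - mu_A + mu_B: dopant A replaces host atom B
    return E_sub - E_pure - mu_dopant + mu_host

def e_form_interstitial(E_int, E_pure, mu_dopant):
    # E_f = E_T(int) - E_T(pure) - mu_A: dopant A is simply added
    return E_int - E_pure - mu_dopant

# Chemical potentials tied together by 3*mu_C + 4*mu_N = mu(GCN):
mu_C = -9.2     # hypothetical per-atom carbon reference
mu_GCN = -70.0  # hypothetical energy per C3N4 formula unit
mu_N = (mu_GCN - 3 * mu_C) / 4

# A lower (more negative) formation energy marks the more favorable site.
print(e_form_substitutional(-305.0, -300.0, -6.0, mu_C))
print(e_form_interstitial(-310.0, -300.0, -6.0))
```

Comparing such values across candidate sites is what singles out B-on-C substitution and interstitial P as the favored configurations in Table~\ref{tab2}.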
\begin {table}[ht]
\caption{Dopant formation energies (eV) of the GCN monolayer}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
B$_{\rm Nring}$ & B$_{\rm Nbridge}$ & B$_{\rm C}$ & B$_{\rm i}$ & P$_{\rm Nring}$& P$_{\rm Nbridge}$ & P$_{\rm C}$ & P$_{\rm i}$\\
\hline
3.95 & 3.25 & 1.35 & 1.95 & 3.52 & 0.83 & 1.42 & 0.75 \\
\hline
\end{tabular}
\label{tab2}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width = 3.1 in]{figure5.jpg}
\caption{(Color online) The band structure of the B/P co-doped GCN monolayer. The band gap is almost 1.95 eV. The $2\times 2$ supercell is used here. }
\label{fig5}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 3.3 in]{figure6.jpg}
\caption{(Color online) PDOS calculations of the B/P co-doped GCN monolayer. The contribution of the p states of the N$_{\rm ring}$ atoms is reduced for the B/P co-doped GCN. Moreover, the p states of phosphorous and carbon contribute to both the conduction and valence bands, and electrons can be excited from the P and C atoms. The two P-N$_{\rm ring}$ bond lengths are almost the same as those of the P doped GCN. The bond lengths of the B atom with its two adjacent N$_{\rm ring}$ atoms are slightly different in the co-doped system. This difference can be assigned to the severe deformation of the planar shape of the GCN monolayer when P is added to the system.}
\label{fig6}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 3.1 in]{figure7.jpg}
\caption{(Color online) Difference charge density contour maps projected on the parallel plane for (a) pristine and (b) B/P codoped GCN monolayer. }
\label{fig7}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=15cm,height=6cm]{figure8.jpg}
\caption{(Color online) LUMO and HOMO of the (a) pure, (b) B doped, (c) P doped, and (d) B/P co-doped GCN monolayer. The Fermi level is set to the zero of energy. Gray and blue spheres represent the C and N atoms, respectively.}
\label{fig8}
\end{figure*}
We show the electronic band structure of the B doped GCN monolayer in Fig. \ref{fig3}(a). Our numerical calculations show that a metallic behavior is induced in the GCN monolayer when a C atom is replaced by the B atom. The metallicity is principally dominated by the p orbitals of the N$_{\rm ring}$ atoms connected with the B atom; nevertheless, the N$_{\rm bridge}$ and boron atoms make a small contribution, as the PDOS in Fig. \ref{fig3}(b) illustrates. The lattice parameter $a$ increases to $4.99$ {\AA} upon doping with the B atom. Furthermore, the lengths of the B-N$_{\rm ring}$ and B-N$_{\rm bridge}$ bonds are 1.45 and 1.51 \AA, respectively, which are longer than the C-N$_{\rm ring}$ and C-N$_{\rm bridge}$ bonds in the pure GCN monolayer. The formation of weaker covalent bonds between the B atom and the adjacent N$_{\rm ring}$ and N$_{\rm bridge}$ atoms may stem from the smaller absolute electronegativity of the B atom (4.29) compared to the C atom (6.27) and the N atom (7.30)~\cite{ma2012strategy}. Mulliken population analysis suggests that the nitrogen atoms gained 0.150 and 0.260 electron, whereas the C atoms lost 0.060 electron. Hence, the electron distribution at N$_{\rm ring}$, N$_{\rm bridge}$, and C becomes -0.560, -0.590, and +0.580 electron, respectively. This charge redistribution at the N and C atoms produces an electric field near the surface of the GCN monolayer. Although the charge redistribution may favor charge carrier separation, the observed metallic behavior is not appropriate for photocatalytic applications. It should be noted that a half-metallic behavior was observed by Gao et al.~\cite{gao2016atomically} for B doped heptazine GCN prepared by heating a mixture of melamine and boron oxide.
For the P doped GCN monolayer, a metallic behavior is observed. The band structure is shown in Fig. \ref{fig4}(a); the Fermi level crosses the top of the highest valence band in a small window in k-space. Since a suitable band gap higher than about 1.9 eV is required for photocatalytic applications, this system cannot be utilized as a visible-light photocatalyst. To get further insight into the electronic structure of the P doped GCN monolayer, its PDOS is illustrated in Fig. \ref{fig4}(b). As shown, the PDOS of P interstitially doped GCN is remarkably different from that of the undoped system. Doping GCN with the P atom leads to a reduction in the contribution of the p states of the N$_{\rm ring}$ atoms to the valence band edge, whereas there is a slight increase in the contribution of the p states of the N$_{\rm bridge}$ atoms to the conduction band edge. The p states of the P and C atoms contribute to both the valence and conduction band edges of the P doped GCN, and, as a result, electrons in the valence band edge can be excited from the P and C atoms. Although the B doped system preserves its planar structure after optimization, the original planar shape is broken in the P doped system, resulting in the deformation of the overall $\pi$-conjugated electronic states in the triazine unit. Moreover, according to the obtained results, the P atom prefers to bind two adjacent N$_{\rm ring}$ atoms. The two P-N$_{\rm ring}$ bond lengths are almost equal, calculated to be around 1.78 \AA; these bonds are weaker than the C-N$_{\rm ring}$ bonds. The Mulliken charge population analysis also shows that the P atom loses electrons and carries a positive charge (+0.430 electron). It should be noted that, in the P doped GCN configuration, an sp$^2$ hybrid orbital of phosphorus bonds with two sp$^2$ hybrid orbitals of the adjacent nitrogen atoms, while a lone pair of electrons localizes around the P atom and an electron delocalizes along the N-P-N chain.
\begin{figure*}[t]
\centering
\includegraphics[width=15cm,height=12.5cm]{figure9new.jpg}
\caption{(Color online) Imaginary part of the dielectric function of the (a) pure, (b) B doped, (c) P doped, and (d) B/P co-doped GCN monolayer; (e) absorption coefficient of the pure and doped GCN monolayers, and (f) side views of the pristine and B/P co-doped GCN. Gray and blue spheres represent C and N atoms. The imaginary part of the dielectric function depends on the light polarization. The inset in (e) is a zoomed-in view near the absorption edge. The pure GCN monolayer shows a strong absorption peak around 270-320 nm, attributed to the $\pi$-$\pi^*$ electronic transition. Notice that the B/P co-doped GCN monolayer improves the utilization of the visible portion of solar irradiation, as observed experimentally~\cite{raziq2017synthesis}. In addition, the dielectric function depends on the material density, the interlayer distance, and the type of doping.}
\label{fig9}
\end{figure*}
For the B/P co-doped GCN monolayer, the band gap is determined to be around 1.95 eV, larger than that found here for the P doped GCN; the numerical results are shown in Fig. \ref{fig5}. This increase of the band gap to 1.95 eV makes the co-doped system an appropriate candidate for photocatalytic water splitting applications. It is worth mentioning that the bandwidths are smaller for the P doped and B/P co-doped GCN monolayers than for the pure or B doped systems; therefore, the mobility of the charge carriers decreases. In spite of this reduction in carrier mobility, which may be compensated by the migration of excitons through a GCN single layer, doping GCN with B and P can be an effective strategy to enhance the optical properties of this structure.
The PDOS of the co-doped system is shown in Fig. \ref{fig6}. Similar to the P doped GCN, the contribution of the p states of the N$_{\rm ring}$ atoms is reduced for the B/P co-doped GCN. Moreover, the p states of phosphorous and carbon contribute to both the conduction and valence bands, and, as a result, electrons can be excited from the P and C atoms. As seen, unlike the P atom, the B atom has no contribution to the conduction and valence band edges. The two P-N$_{\rm ring}$ bond lengths are almost the same as those of the P doped GCN; however, the B-N$_{\rm ring}$ and B-N$_{\rm bridge}$ bond lengths differ slightly from those of the B doped GCN. Furthermore, the lattice parameters $a$ and $b$ vary by around 2.9 and 2.7 percent, respectively, for the co-doped system. It should be noted that, unlike the B doped system, in which the lengths of the two B-N$_{\rm ring}$ bonds are equal, the bond lengths of the B atom with its two adjacent N$_{\rm ring}$ atoms are slightly different in the co-doped system. This difference can be assigned to the severe deformation of the planar shape of the GCN monolayer when P is added. Furthermore, the Mulliken charges of the P and B atoms in the co-doped configuration are +0.51 and +1.06 electron, which differ from those in the B and P mono-doped systems.
Regarding the charge distribution, the pure GCN monolayer shows a homogeneous electron density difference projected on the plane containing the layer (Fig. \ref{fig7}(a)), whereas the B/P co-doped monolayer, owing to the formation of a sinusoidal-like shape, does not (Fig. \ref{fig7}(b)). This distortion leads to charge redistribution and, as a result, an internal electric field may form, leading to the retardation of charge recombination.
\begin{figure}[h]
\centering
\includegraphics[width = 3.3 in]{figure10new.jpg}
\caption{(Color online) Optical conductivity of the pure and doped GCN monolayers. Note that the doped systems exhibit more effective ultraviolet absorption and an enhanced visible-light response. This behavior is also confirmed by our optical conductivity calculations: a rather sharp increase in the optical conductivity takes place in the ultraviolet region, and the corresponding peak moves toward shorter wavelengths upon B, P, and B/P doping.}
\label{fig10}
\end{figure}
To investigate the variation of the frontier orbitals induced by the addition of B, P, and both B/P into the GCN monolayer, the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) are illustrated in Fig. \ref{fig8}. It is well known that pure GCN is $\pi$-delocalized with sp$^2$ hybridization \cite{li2017mechanistic}. According to Fig. \ref{fig8}(a), the LUMO is mainly dominated by the 2p states of the C and N$_{\rm ring}$ atoms, so these atoms tend to provide reduction sites for H$^+$ to H$_2$ \cite{lu2017effects}, whereas the HOMO originates mainly from the 2p states of the N$_{\rm ring}$ atoms, which prefer to act as oxidation sites for H$_2$O to O$_2$~\cite{lu2017effects}. This result is in good agreement with the PDOS calculations. It should be noted that, since the electrons and holes are separated on neighboring C and N$_{\rm ring}$ atoms upon light irradiation, recombination can occur easily; therefore, GCN has a low photocatalytic efficiency under visible light irradiation. Upon doping GCN with B, P, and both B/P, a certain redistribution of the HOMO and LUMO is observed. In the case of the B doped GCN monolayer, the HOMO and LUMO are mostly located on the N$_{\rm ring}$ atom bonded with boron; hence, the LUMO and HOMO of B doped GCN are dominated by the 2p states of the N$_{\rm ring}$ atom bonded with B (Fig. \ref{fig8}(b)). For the P doped and B/P co-doped GCN monolayers, the formation of the sinusoidal-like structure weakens the $\pi$-delocalization, so the intrinsic electron distributions differ from those of pure GCN. According to the results shown in Figs. \ref{fig8}(c) and \ref{fig8}(d), for the P doped and B/P co-doped systems, the LUMO and HOMO are mainly distributed over the N$_{\rm ring}$, C, and P atoms. It is believed that the photogenerated charge carriers transfer freely via the C-N-P-N-C chain between two adjacent triazine units \cite{ma2012strategy}.
On the basis of the partly or totally separated HOMO and LUMO of the doped GCN monolayers, the lifetime of the photogenerated electron-hole pairs as well as the carrier mobility may be greatly enhanced.
\subsection{Optical characteristics}
In order to evaluate the performance of a photocatalyst in the visible-light region, it is crucial to investigate its optical properties. In this context, the absorption spectra, the imaginary part of the dielectric function, and the optical conductivity of the pure and doped systems are calculated using the Fermi golden rule within the dipole approximation by means of the HSE06 functional. Measurement of light absorption is one of the most important optical probes of solids. The absorption coefficient, $\alpha(\omega)=2\omega \kappa(\omega)/c$, where $\kappa(\omega)$ is the imaginary part of the complex index of refraction, is obtained from the following equation~\cite{srinivasu2014porous,fox2002optical}
\begin{equation}
\begin{split}
\alpha(\omega)= \frac{\sqrt{2}\omega}{c}(\sqrt{\varepsilon_1(\omega)^2+\varepsilon_2(\omega)^2}-\varepsilon_1(\omega))^{\frac{1}{2}}
\end{split}
\end{equation}
where $\varepsilon_1$ and $\varepsilon_2$ are the real and imaginary parts of the dielectric function, respectively. The real part of the dielectric function is obtained by a Kramers-Kronig transformation, and the imaginary part can be expressed by the following \cite{saha2000structural},
\begin{equation}
\begin{split}
\varepsilon_2(\omega)= \frac{Ve^2}{2\pi\hbar m^2\omega^2} \int d^3k \sum_{n,n'}|\langle kn|p|kn'\rangle|^2 f_n(k)\\
(1-f_{n'}(k))\,\delta(E_{n'}(k)-E_{n}(k)-\hbar\omega)
\end{split}
\end{equation}
Here, $\hbar\omega$ is the energy of the incident photon, $m$ is the electron mass, $p$ is the momentum operator, $|kn\rangle$ is a crystal wave function, and $f_n(k)$ is the Fermi distribution function for the state with energy $E_n(k)$ in band $n$. The dielectric function is calculated from the Kohn-Sham wave functions, which change with the structure and inter-atomic interactions. Therefore, the dielectric function depends on the material density, the interlayer distance, and the type of doping.
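As an illustration of the $\alpha(\omega)$ expression in Eq.~(2), the following sketch evaluates the absorption coefficient for a toy Lorentz-oscillator dielectric function; the oscillator parameters are purely illustrative and are not fitted to GCN:

```python
import numpy as np

# Numerical sketch of alpha(w) = (sqrt(2) w / c) *
# (sqrt(eps1^2 + eps2^2) - eps1)^(1/2), evaluated for a toy
# Lorentz-oscillator dielectric function (illustrative parameters only).

c = 3.0e8  # speed of light, m/s

def absorption(omega, eps1, eps2):
    # The argument of the outer sqrt is non-negative since |eps| >= eps1.
    return (np.sqrt(2.0) * omega / c) * np.sqrt(np.sqrt(eps1**2 + eps2**2) - eps1)

# Toy Lorentz oscillator: eps(w) = 1 + wp^2 / (w0^2 - w^2 - i*gamma*w)
w = np.linspace(0.1, 10.0, 500) * 1e15          # angular frequency, rad/s
w0, wp, gamma = 4.0e15, 3.0e15, 0.5e15          # hypothetical parameters
eps = 1.0 + wp**2 / (w0**2 - w**2 - 1j * gamma * w)
alpha = absorption(w, eps.real, eps.imag)

# The absorption peaks near the oscillator resonance w0, as expected.
print(w[np.argmax(alpha)])
```

The same function applied to the computed $\varepsilon_1(\omega)$ and $\varepsilon_2(\omega)$ of each system yields the spectra of Fig.~\ref{fig9}(e).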
The absorption coefficient determines the fraction of energy lost by an electromagnetic wave when it penetrates a unit thickness of the material~\cite{ma2012strategy}. Figures \ref{fig9}(a)-(d) show the imaginary part of the dielectric function of the pristine and doped GCN monolayers for different light polarizations. Compared with the pristine GCN monolayer, the doped systems exhibit more effective ultraviolet absorption and an enhanced visible-light response. As seen from the curves of the imaginary part of the dielectric function, the dielectric spectrum for polarization along the $z$ direction is clearly different from those along the $x$ and $y$ directions. This difference reflects the fact that the symmetry of the dielectric spectra corresponds to the symmetry of the lattice structure. It can also be seen that, for the pure GCN, the dielectric function for polarization parallel to the $x$ axis is the same as that parallel to the $y$ direction, whereas for the B/P co-doped system there is a slight difference, which can be attributed to the deformation occurring upon doping. Figure \ref{fig9}(e) illustrates the total optical absorption spectra of the pure and doped GCN monolayers. The pure GCN monolayer is found to have a strong absorption peak around 270-320 nm, attributed to the $\pi$-$\pi^*$ electronic transition, which is commonly observed in aromatic s-triazine compounds~\cite{srinivasu2014porous, li2006self}. Considering the inset of Fig. \ref{fig9}(e), the absorption edge of the undoped GCN is approximately 420 nm. In this regard, although pure GCN is a visible-light semiconductor photocatalyst, its visible-light absorption is not sufficient to yield a high photocatalytic performance. The optical absorption spectra of the doped systems, in particular the P and B/P doped GCN monolayers, show a very strong absorption tail (Urbach tail) in the visible region.
A similar behavior has also been reported recently~\cite{raziq2017synthesis, ran2015porous}. The remarkable enhancement of the absorption in the visible region for the P and B/P doped systems can be ascribed to $\pi^*$ electronic transitions involving the lone pairs on the edge N atoms of the triazine rings. These transitions, prohibited in the planar pristine GCN, become possible because of the charge redistribution upon doping caused by the distorted configuration of B/P co-doped GCN (Fig. \ref{fig9}(f)), as confirmed by both the electron density and the Mulliken charge population. In other words, the forbidden $\pi^*$ electronic transitions in the planar GCN are allowed in the distorted configuration, mainly due to the deviation of the ring units from trigonal symmetry \cite{jorge2013h2, chen2014activation}.
The behavior observed in the imaginary part of the dielectric function is also confirmed by the optical conductivity calculations. This quantity, which links the current density to the electric field, is one of the promising tools for studying the electronic states in materials. A redistribution of charges occurs when a system is subjected to an external electric field, resulting in an induced current~\cite{lahiji2016first}. Figure \ref{fig10} shows the variation of the optical conductivity as a function of wavelength for the pure, B doped, P doped, and B/P co-doped GCN monolayers. As exhibited in the figure, a rather sharp increase in the optical conductivity takes place in the ultraviolet region, and the corresponding peak moves toward shorter wavelengths upon B, P, and B/P doping. In the visible region, an almost linear trend of the optical conductivity can be observed, which increases with B, P, and B/P doping. This reflects the contribution of additional electrons by the dopants to the host material, resulting in a higher optical conductance. According to the obtained results, it is believed that the B/P co-doped GCN monolayer can improve the utilization of the visible portion of solar irradiation, leading to a high photocatalytic performance. This enhancement was experimentally observed by Raziq et al. for the B and P co-doped GCN system~\cite{raziq2017synthesis}.
Aside from being visible-light active, a suitable material for photocatalytic water splitting should have band edges positioned appropriately with respect to the reduction and oxidation levels of water in order to generate hydrogen and oxygen. The conduction band minimum (CBM) and valence band maximum (VBM) are calculated to be -3.40 and -6.10 eV, respectively. These values are in good agreement with experimental results~\cite{ma2016water}. The calculated CBM of the monolayer is similar to that of bulk GCN and is found to be 1.12 eV above the water reduction level. The VBM, however, shifts downward and is calculated to be 0.75 eV below the water oxidation level.
In the case of B/P co-doping, the VBM shifts remarkably upward, while the CBM shifts significantly downward compared to the pure GCN monolayer. It should be noted that both the pure and co-doped GCN nanosheets can be utilized in water splitting reactions. Although the pure system is more favorable for the reduction-oxidation reactions, the co-doped GCN is expected to exhibit better photocatalytic performance for the following reasons: (i) it uses the visible part of sunlight because of its lower band gap energy; (ii) the absorption coefficient in the visible region increases because of the activation of $\pi^*$ electronic transitions in the distorted configuration; and (iii) the lifetime of the photo-excited electron-hole pairs is prolonged owing to the partially or totally separated HOMO and LUMO.
\section{Conclusion}\label{sec:concl}
We have studied and compared the electronic and optical properties of s-triazine based graphitic carbon nitride (GCN) monolayers mono-doped with B and P as well as co-doped with B/P using density functional theory calculations. The single-layer 2D GCN system is found to have an increased band gap of 3.10 eV compared with 2.70 eV for bulk GCN, due to the quantum confinement effect. The B doped GCN monolayer exhibits a metallic character, while the P doped system shows a half-filled metallic behavior; therefore, neither system is suitable for photocatalytic applications. Interestingly, the co-doped system displays an appropriate band gap of 1.95 eV, making this configuration a promising candidate for the water splitting reaction. Moreover, since additional $\pi^*$ electronic transitions are activated in the distorted configurations, the optical absorption coefficients of the P doped and B/P co-doped systems increase in the visible region, which is beneficial for photocatalytic applications.
We remark that an accurate study of the optical properties of co-doped GCN may require a model beyond standard DFT, such as GW or time-dependent DFT simulations, to account for the increased correlation effects.
\section{Acknowledgment}
We would like to thank S. Tafreshi, S. Yousefzadeh, and M. Sabzali for useful discussions. In addition, the financial assistance of the Research and Technology Council of the Sharif University of Technology, the support of the Iran National Science Foundation through the Research Chair Award of Surface and Interface Physics, Grant No. 940009, and the Iran Science Elites Federation, Grant No. 11/66332, are highly appreciated.
\bibliographystyle{aipnum4-1}
\section{Introduction}
Automatic building extraction from high-resolution satellite imagery creates new opportunities for urban planning and world population monitoring. Traditionally, building boundaries are delineated through manual labeling of digital images in stereo view using photogrammetric stereo plotters \cite{san2010building}. However, this process is tedious and requires qualified people and expensive equipment. For this reason, building extraction using automatic techniques has great potential and importance. The advantages of satellite imagery compared to aerial imagery are its almost worldwide availability and its typically wider spectral range, which includes optical, infrared, and additional channels. The geometric resolution of 0.3-1.0 m per pixel is worse than that of aerial imagery, but is sufficient to extract large objects such as buildings. The worldwide availability of the data makes it possible to produce topographic databases for nearly any region of the Earth.
In recent years, different methods have been proposed to tackle the problem by creating convolutional neural networks (CNN) that can produce a segmentation map for an entire input image in a single forward pass. One of the most successful state-of-the-art deep learning methods is based on Fully Convolutional Networks (FCN) \cite{long2015fully}. The main idea of this approach is to use a CNN as a powerful feature extractor that creates high-level feature maps. Those maps are then upsampled to produce a dense pixel-wise output. The method allows training a CNN end to end for semantic segmentation with input images of arbitrary size. This method was further improved with skip connections and is now known as the U-Net neural network \cite{ronneberger2015u}. Skip connections allow combining low-level feature maps with higher-level ones, which enables precise pixel-level localization. A large number of feature channels in the upsampling part allows propagating context information to higher-resolution layers. This type of network architecture has proved itself well in satellite image analysis competitions \cite{goldberg2018urban, iglovikov2017satellite, zhang2017building}. Another modification of the U-Net architecture, which led to first place in the Carvana Image Masking Challenge \cite{kaggle_carvana}, was to replace the encoder with the first few convolution blocks of the VGG11 network. This modification was called TernausNet \cite{iglovikov2018ternausnet}, which we naturally extend in the current work (see also \cite{shvets2018automatic, shvets2018angiodysplasia}).
Semantic segmentation alone is not able to separate different instances because the predicted boundaries are usually not fine and closely packed objects of the same class collapse into one connected component. It may also happen that there is no distance between objects at all, so that even a perfect network will predict different instances as parts of the same connected blob. In \cite{bai2017deep}, the authors propose a method that utilizes three stacked networks: the first performs semantic segmentation, the second predicts gradients of the distance transform, and the last predicts the energy levels that are used in the post-processing step during the watershed transformation. Our method is similar in spirit, but much more straightforward.
In this work, we solve two different problems. First, we use all available multispectral information. Second, we need a way to modify the network so that a combination of its outputs allows segmentation at the instance level. To resolve the first problem, we suggest an extension of the TernausNet architecture \cite{iglovikov2018ternausnet} that replaces the VGG11 encoder with a more powerful ABN WideResnet-38 \cite{bulo2017place}. We also extend the input from 3 RGB channels to 11 multispectral channels, which makes it possible to perform transfer learning from RGB to RGB + multispectral inputs. For the second issue, we use ideas that were developed in the winning solutions of recent data science challenges \cite{goldberg2018urban, dsbowl2018}. Specifically, to separate buildings in the predicted binary masks, we add an additional output channel that predicts areas where objects touch or are close to each other. This output is used in a post-processing step and allows partitioning the mask into separate instances.
\section{Dataset}
The training data for the building detection sub-challenge originate from the SpaceNet dataset \cite{spacenet_dataset}. The dataset uses satellite imagery with 30 cm resolution collected from DigitalGlobe's WorldView-3 satellite. Each image has 650x650 pixels and covers 195x195 $m^2$ of the Earth's surface. Each region consists of high-resolution RGB, panchromatic, and 8-channel lower-resolution multi-spectral images. The satellite data come from 4 different cities: Vegas, Paris, Shanghai, and Khartoum, with (3831, 1148, 4582, 1012) images in the train and (1282, 381, 1528, 336) images in the test sets, respectively. All images in the train set have a paired list of polygons that describe building instances. The labels are not perfect due to the cost of mask annotation, especially in places with high building density. To evaluate model performance, the predicted masks for the test images should be uploaded to the DeepGlobe website \cite{deepglobe_website, demir2018deepglobe}. An example of a test image and the predictions of our method is depicted in Fig. \ref{fig:buildings}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{./figures/buildings.png}
\end{center}
\caption{From left to right: RGB part of the input image, predicted binary mask in blue and touching borders in green, building instances after the watershed transform.}
\label{fig:buildings}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth,height=9cm]{./figures/TernausNetV2.png}
\end{center}
\caption{TernausNetV2: an encoder-decoder network with skip connections that has ABN WideResnet-38 as the encoder. The input is an RGB image plus extra channels. B1-B5 are the first five convolutional blocks of the base network, pre-trained on ImageNet. Each decoder block performs upsampling, followed by a series of convolution layers. Skip connections are added between encoder and decoder convolution blocks of the corresponding size. At the end, a 1x1 convolution reduces the number of channels to the desired two: one for the binary mask and one for touching instances.
}
\label{fig::fpn}
\end{figure*}
\section{Model}
Our approach leverages an encoder-decoder architecture with skip connections, also known as U-Net \cite{ronneberger2015u}. In general, U-Net consists of a contracting path to capture context and a symmetrically expanding path. This enables precise localization via skip connections added between blocks of the same size in the contracting and expansive parts. Skip connections allow information to flow directly from low-level to high-level feature maps without alteration, which further improves localization accuracy and speeds up convergence \cite{ronneberger2015u}. The contracting path follows the typical architecture of a convolutional network with alternating convolution and pooling operations, progressively downsampling the feature maps while increasing the number of feature channels per layer. Every step in the expansive path consists of an upsampling of the feature map followed by a series of convolution layers. The output of the model is a pixel-by-pixel mask that assigns a class to each pixel.
As an improvement over the U-Net architecture, we replace the encoder with the convolutional part of the WideResNet-38 network with in-place activated batch normalization \cite{bulo2017place}, pre-trained on ImageNet. In-place activated batch normalization merges the batch normalization layer with the activation layer, which leads to up to 50\% memory savings. This makes it possible to fit larger batches into GPU memory and to work with input images of larger size. A model based on this encoder showed state-of-the-art performance on semantic segmentation tasks for the Mapillary Vistas \cite{neuhold2017mapillary} and Cityscapes \cite{cordts2016cityscapes} datasets. Compared to the original ResNet architecture \cite{he2016deep}, WideResnet uses layers with more channels while reducing the number of layers. We use the first five convolutional blocks of the network as an encoder. The decoder of our network consists of five decoder blocks, each connected to the encoder block of the corresponding size. The transmitted block from the encoder is concatenated to the corresponding decoder block. Each decoder block contains two sets of 3x3 convolutions with ReLU activations \cite{glorot2011deep}, followed by an upsampling layer that doubles the size of the feature map. To prevent artifacts at the edges of the predicted buildings, we use nearest neighbor upsampling, which showed the best results in our experiments. The output of the model is a two-channel pixel-by-pixel image: the first channel contains a binary mask of the combined building footprint, and the second channel contains building borders that touch each other or are separated by a few pixels (see Fig. \ref{fig:buildings}).
To allow the encoder, which was pre-trained on RGB images, to take 11 channels as input (RGB + 8 multispectral), we replace the first convolutional layer with a larger one that accepts 11-channel images. We copy the weights of the original pre-trained WideResnet-38 into the first three channels and initialize the remaining channels with zeros.
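The weight-transfer step for the first convolution can be sketched as follows (a minimal NumPy sketch; the actual implementation operates on the framework's weight tensors, and the filter count and kernel size here are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical pre-trained first conv of the encoder:
# 64 filters over 3 (RGB) input channels with 3x3 kernels
w_rgb = rng.standard_normal((64, 3, 3, 3)).astype(np.float32)

# enlarged first conv accepting 11 channels (RGB + 8 multispectral):
# copy the RGB weights and zero-initialize the extra channels
w_new = np.zeros((64, 11, 3, 3), dtype=np.float32)
w_new[:, :3] = w_rgb
```

Because the extra channels start at zero, the enlarged layer initially computes exactly what the pre-trained RGB layer computed.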
\section{Training}
The satellite imagery in the SpaceNet dataset comes in an 11-bit format. In order to make the pixel intensity distributions closer to those of usual RGB images, we perform per-channel min-max normalization $(x - x_{min}) / (x_{max} - x_{min})$. Then, we normalize the RGB part, subtracting (0.485, 0.456, 0.406, 0, 0, 0, 0, 0, 0, 0, 0) and dividing by (0.229, 0.224, 0.225, 1, 1, 1, 1, 1, 1, 1, 1) channel-wise.
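The two normalization steps above can be sketched as follows (the random 11-channel tile is a stand-in for a real SpaceNet image; the mean/std vectors are the ones given in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for a real 11-channel tile in the 11-bit range
img = rng.integers(0, 2048, size=(11, 650, 650)).astype(np.float32)

# per-channel min-max normalization to [0, 1]
mn = img.min(axis=(1, 2), keepdims=True)
mx = img.max(axis=(1, 2), keepdims=True)
img = (img - mn) / (mx - mn)

# ImageNet mean/std on the RGB channels, identity on the rest
mean = np.array([0.485, 0.456, 0.406] + [0.0] * 8).reshape(-1, 1, 1)
std = np.array([0.229, 0.224, 0.225] + [1.0] * 8).reshape(-1, 1, 1)
img = (img - mean) / std
```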
During training, to perform a smooth transition from RGB to RGB + multi-spectral data, we use the following schedule. In the first epoch, we freeze all weights in the encoder so that only the decoder weights are trained. Because the weights corresponding to the extra input channels are zero-initialized, only the RGB part of the input contributes during this epoch. By the end of the first epoch, the decoder weights have values meaningful for the problem. From the second epoch on, we unfreeze all layers and train the network end to end. As a result, the network learns to go from three to a larger number of input channels in a delicate, careful manner, slowly increasing the weights of the multi-spectral part of the input.
The output of the network is an image with two independent channels, each predicting a binary mask: one for building footprints and one for touching borders. As a loss function, we use a combination of binary cross entropy and a soft Jaccard loss. This loss was inspired by \cite{iglovikov2017satellite}, where the authors proposed a way to generalize the discrete Jaccard index (also known as intersection over union) into a differentiable form. This allows the network to optimize the loss directly during training.
The Jaccard index can be interpreted as a similarity measure between a finite number of sets. For two sets $A$ and $B$, it can be defined as follows:
\begin{equation}
\label{jaccard_iou}
J(A, B) = \frac{|A\cap B|}{|A\cup B|} = \frac{|A\cap B|}{|A|+|B|-|A\cap B|}
\end{equation}
Since an image consists of pixels, the last expression can be generalized to real-valued predictions in the following way:
\begin{equation}
\label{dicrjacc}
J=\frac{1}{n}\sum_{c=1}^2w_c\sum\limits_{i=1}^n\left(\frac{y_i^c\hat{y}^c_i}{y_{i}^c+\hat{y}^c_i-y_i^c\hat{y}_i^c}\right)
\end{equation}
where $y_i^c$ and $\hat{y}_i^c$ are the binary label and the corresponding predicted probability for pixel $i$ and class $c$. For simplicity, we choose $w_1 = w_2 = 1$.
An image segmentation task can also be considered as a pixel classification problem. We therefore additionally use the common classification loss, binary cross entropy, denoted $H$, which we apply independently to each output channel.
The final expression for the generalized loss function is obtained by combining Eq. (\ref{dicrjacc}) and $H$ as follows:
\begin{equation}
\label{free_en}
L=\alpha H +(1-\alpha)(1-J)
\end{equation}
By minimizing this loss function, we simultaneously maximize the predicted probabilities of the correct class for each pixel and maximize the intersection over union $J$ between the masks and the corresponding predictions. In our experiments we set $\alpha = 0.7$.
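The loss in Eq. (\ref{free_en}) can be sketched in NumPy as follows (a minimal single-channel sketch with $w_1 = w_2 = 1$; the `eps` guard for empty masks is our addition, not from the paper):

```python
import numpy as np

def soft_jaccard(y, p, eps=1e-7):
    # Eq. (2): mean over pixels of y*p / (y + p - y*p);
    # eps guards the 0/0 case where both y and p are zero
    return np.mean(y * p / (y + p - y * p + eps))

def bce(y, p, eps=1e-7):
    # binary cross entropy H, with clipping for numerical safety
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def total_loss(y, p, alpha=0.7):
    # Eq. (3): L = alpha * H + (1 - alpha) * (1 - J)
    return alpha * bce(y, p) + (1 - alpha) * (1 - soft_jaccard(y, p))
```

A better prediction yields a lower loss, e.g. `total_loss` on confident correct probabilities is smaller than on uniform 0.5 predictions.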
As additional regularization, we apply extensive data augmentation, both spatial and in color space. For spatial augmentation we use random re-sizing, choosing a scale between 0.5 and 1.5 of the input image and mask. We apply random rotations in the full (0, 360) degree range, using reflection padding if needed. From the resulting image and mask, we crop random regions of size 384x384 pixels. These images are then subject to color transformations such as random contrast/brightness and gamma corrections, with the gamma coefficient randomly chosen between two discrete values, 0.8 and 1.2. One GTX 1080 Ti video card with 11 GB of memory allows a batch size of 5 images; in our case, we use four GTX 1080 Ti cards with a batch size of 20.
We train our network using the Adam optimizer with learning rate 1e-4. The training is done for 800 epochs. At inference time, we make predictions on the whole image, padding it with 11 pixels on each side to the size 672x672 so that it is divisible by $32=2^5$ (5 being the number of downsampling steps, which constrains the allowed input sizes). After prediction, the padded regions are cropped.
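The padding arithmetic above amounts to rounding each dimension up to the next multiple of 32; a small sketch:

```python
def pad_to_multiple(size, m=32):
    # total padding required so that `size` becomes divisible by m
    return (m - size % m) % m

# the 650x650 SpaceNet tiles need 22 extra pixels per dimension,
# i.e. 11 on each side, giving 672x672
pad = pad_to_multiple(650)
```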
The last step at inference time is to post-process the predicted binary masks and touching borders so that the binary mask is split into separate instances. To do this, we subtract the touching borders from the corresponding mask to obtain seeds, and use both the mask and these newly generated seeds as input to the watershed transform. We do not fine-tune the model for different cities. We also do not use bagging, checkpoint averaging, test-time augmentation or any other ensembling techniques in our solution. The end-to-end pipeline, including network inference and the watershed transformation, processes ten samples per second on one GTX 1080 Ti.
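The seed extraction and instance splitting can be sketched as follows (a simplified stand-in: the paper uses a watershed transform to grow the seeds, here nearest-seed assignment plays that role, which behaves the same on thin touching borders):

```python
import numpy as np
from scipy import ndimage

def split_instances(mask, borders):
    """Partition a binary building mask into instances using the
    predicted touching borders as separators."""
    # seeds: building pixels that do not lie on touching borders
    seeds = np.logical_and(mask, np.logical_not(borders))
    markers, n = ndimage.label(seeds)
    # grow each seed back over the full mask by assigning every
    # pixel the label of its nearest marker (watershed stand-in)
    _, (ix, iy) = ndimage.distance_transform_edt(
        markers == 0, return_indices=True)
    return np.where(mask, markers[ix, iy], 0), n
```

For example, two building footprints fused into one blob but separated by a predicted one-pixel border column come out as two labeled instances.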
\section{Conclusions}
We developed a model for building detection in satellite imagery. We used a fully convolutional neural network of the kind traditionally used for semantic segmentation and added an additional output that provides instance segmentation functionality. As an encoder, we chose the WideResnet-38 network with in-place activated batch normalization, pre-trained on ImageNet, which generates good semantic features while being memory efficient. We also generalized this pre-trained encoder and proposed a training schedule that allows transfer learning from RGB to multi-spectral data. Based on the public leaderboard, our model provides a state-of-the-art result with a score of 0.74.
\section*{Acknowledgment}
The authors would like to thank Open Data Science community \cite{ods_website} for many valuable discussions and educational help in the growing field of machine/deep learning.
{\small
\bibliographystyle{ieee}
\begin{abstract} In their work \cite{GR}, Gaitsgory and Rozenblyum introduce a derived version of the well-studied arc spaces of classical algebraic geometry. They observe that these derived spaces do not differ from their classical counterparts in the case of smooth schemes. In this note we will see that the same holds for reduced local complete intersection schemes. \end{abstract}
\section{Acknowledgements} The results presented here were obtained whilst I was a PhD student under the supervision of Ian Grojnowski at Cambridge University. I owe him a tremendous debt of gratitude. Thanks are also owed to the University of Cambridge for providing an ideal working environment.
\section{INTRODUCTION} Let $X$ be a classical scheme. We will be dealing with the space $X(\mathcal{D})$ of maps from a formal disc $\mathcal{D}=spec(\mathbb{C}[[t]])$ to $X$; we call this the \emph{arc space} of $X$ and will introduce it and its variants in detail below. This space is obtainable as a tower of simpler \emph{truncated} arc spaces, and this tower is very easy to understand in the case where $X$ is smooth: each map in the tower is an etale-locally trivial affine space bundle. For non-smooth schemes the situation is substantially subtler, and the study of arcs in this setting has led to some beautiful constructions, perhaps most notably the \emph{motivic vanishing cycles} of Denef and Loeser, cf. \cite{DL}. It is something of a foundational insight of Derived Algebraic Geometry that constructions which are simple for smooth spaces and more complicated for singular ones are often clarified when viewed as derived objects. Gaitsgory and Rozenblyum remark in their work \cite{GR} that the derived versions of arc spaces genuinely can differ from their classical counterparts. This points the way to a potentially interesting avenue of study, as it implies that every \emph{classical} arc space is endowed with a highly canonical family of quasi-coherent sheaves, namely the higher homotopy sheaves of the derived version of the arc space. It had originally been our hope that in the example of a singular hypersurface $\{f=0\}$ inside a smooth scheme $U$ we could relate these sheaves to interesting invariants of the singularities of $f$, e.g.\ to vanishing cycles cohomology. We will see that this is not the case. In fact we will prove that the derived arc spaces do not differ from their classical counterparts in quite substantial generality. We view this as ultimately disappointing.
The proof will proceed by constructing explicit (cofibrant) models for the algebras of functions on the derived spaces in the tower of truncated arcs and using these to define a sequence of degenerations of each of the maps in the tower. Eventually we will prove the following:\begin{tcolorbox} \begin{theorem} If $X$ is a reduced local complete intersection scheme then the inclusion of the classical arcs into the derived arcs, \[X(\mathcal{D})^{cl}\hookrightarrow X(\mathcal{D}),\] is an equivalence. \end{theorem}\end{tcolorbox}
\section{BASICS}\subsection{Arcs} We recall here the basics of the theory of arc spaces. For a more thorough introduction we refer the reader to \cite{DL}. $X$ will be a scheme below. The spaces we define will be defined on the pre-sheaf level and $A$ will denote an arbitrary test ($\mathbb{C}$-) algebra.\begin{definition}\begin{enumerate}\item The \emph{n-truncated arc space} of $X$, denoted $X(\mathcal{D}_{n})$, is defined by $X(\mathcal{D}_{n})(A)=X(A[t]/t^{n+1})$. \item The \emph{formal arc space}, denoted $X(\mathcal{D}_{\infty})$ is defined as the pro-limit of the truncated arc spaces, where the limit is induced by the natural maps $A[t]/t^{n+1}\rightarrow A[t]/t^{n}$. \item The \emph{arc space} of $X$, $X(\mathcal{D})$, is defined to have $A$-points $X(A[[t]])$. \end{enumerate}\end{definition}
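To unpack the definition in the simplest case: an $A$-point of $\mathbb{A}^{1}(\mathcal{D})$ is just a power series with coefficients in $A$, so the algebra of functions on the arc space is a polynomial ring on the coefficient functionals,
\[ x(t)=\sum_{i\geq 0}x_{i}t^{i}\in A[[t]], \qquad \mathcal{O}(\mathbb{A}^{1}(\mathcal{D}))=\mathbb{C}[x_{0},x_{1},x_{2},\ldots], \]
and truncating at $t^{n+1}$ gives $\mathbb{A}^{1}(\mathcal{D}_{n})\cong\mathbb{A}^{n+1}$.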
\begin{remark} The arc space $X(\mathcal{D})$ is endowed with natural maps to all the truncated arc spaces, and thus by definition to the formal arc space. It is a difficult result of Bhatt (\cite{Bh}) that this map is an isomorphism if $X$ is assumed to be quasi-compact and quasi-separated. Note that this is obvious in the case that $X$ is affine.\end{remark} We will quickly summarise the representability properties of these pre-sheaves. \begin{lemma}\begin{enumerate}\item If $X$ is a scheme (resp. affine scheme), then the spaces of truncated arcs are schemes (resp. affine schemes). \item The same holds true for $X(\mathcal{D}_{\infty})$. \item If $X$ is affine then the arc space $X(\mathcal{D})$ is an affine scheme.\end{enumerate}\end{lemma} \begin{proof} Cf. \cite{DL}; in all cases one simply observes it for the affine line and uses the appropriate compatibilities with (arbitrary) limits and (Zariski) colimits. \end{proof} \begin{remark} Note that according to the result of Bhatt mentioned above, for $X$ qcqs it is in fact the case that $X(\mathcal{D})$ is representable even for $X$ non-affine. \end{remark}\begin{example}\begin{itemize}\item For arbitrary $X$, the space $X(\mathcal{D}_{0})$ is $X$ itself. \item For arbitrary $X$, the space $X(\mathcal{D}_{1})$ is the geometric tangent bundle of $X$, i.e. the total space of the cotangent sheaf. \item The arc space of the affine line is an infinite dimensional affine space, $\mathbb{A}^{\infty}=spec(\mathbb{C}[x_{0},x_{1},x_{2},...])$. \end{itemize}\end{example} \subsection{Recollections on Derived Geometry} There are numerous good introductions to derived geometry; let us mention as examples \cite{TV2}, \cite{Lu} and \cite{GR}. We choose to use the language of \emph{pre-stacks} as it is developed in \cite{GR}. We find this convenient as it allows us to define the objects of interest to us at the level of their functors of points (valued in $\infty$-groupoids).
The following definition is really just the fixing of some notation; the reader familiar with D.A.G. can skip it and refer back to it as needed. \begin{definition}\begin{itemize} \item We denote the $\infty$-category of \emph{derived algebras} $\mathbf{dAlg}_{\mathbb{C}}$; its elements are commutative differential graded algebras concentrated in non-positive degree. For non-negative $i$ we write $\pi_{i}A$ for the $-i^{th}$ cohomology group of a derived algebra $A$. If these vanish for strictly positive $i$ then we refer to $A$ as \emph{classical}. \item The category of \emph{pre-stacks} is the $\infty$-category of functors from derived algebras to the $\infty$-category of spaces, $\mathbf{Fun}(\mathbf{dAlg}_{\mathbb{C}},\mathbf{sSet})$. \item Given a derived algebra $A$, we denote by $\mathbf{Mod}_{A}$ the stable $\infty$-category of modules for $A$ and by $\mathbf{Perf}_{A}$ those which are perfect. Left Kan extension extends both of these notions to an arbitrary pre-stack, $\mathcal{X}$; we denote the resulting categories $QC(\mathcal{X})$ and $\mathbf{Perf}_{\mathcal{X}}.$\item If $\mathcal{X}$ is a pre-stack then its \emph{classical truncation}, denoted $\mathcal{X}^{cl}$, is defined as the right Kan extension (to all of $\mathbf{dAlg}_{\mathbb{C}}$) of the restriction of $\mathcal{X}$ to classical algebras.\item The pre-stack represented by a derived algebra $A$ will be denoted $spec(A)$ and pre-stacks locally of this form (cf. \cite{Lu}, chapter 7 for a precise definition) will be called \emph{derived schemes}. \end{itemize}\end{definition}\begin{remark} The pre-stacks we deal with throughout will all be representable by derived schemes. In the case of a derived scheme locally of the form $spec(A)$, the classical truncation is locally of the form $spec(\pi_{0}A)$ as one would certainly hope. \end{remark}
There is one additional recollection we require, the proofs below will make use of explicit models for the algebras of functions on derived arc spaces and it is crucial that we be able to work explicitly with them. To this end we make the following remark; \begin{remark} The $\infty$-category of derived algebras is obtainable as the localisation of a model category, which we denote $\mathbf{CDGA}_{\mathbb{C}}$. The elements are non-positively graded commutative differential graded algebras and the weak equivalences are quasi-isomorphisms. Most importantly, maps for which the underlying map of graded algebras is free are all cofibrant, and indeed generate the class of such maps.\end{remark}\section{DERIVED ARCS} We are now in a position to mix the objects described in the above sub-sections. Henceforth whenever we mention the arc spaces above we shall use a superscript $^{cl}$ so as to emphasise that their definition is in terms of classical algebraic geometry. $X$ will be a derived scheme in what follows; \begin{definition}\begin{enumerate}\item The pre-stack of \emph{n-truncated} arcs , denoted $X(\mathcal{D}_{n})$, is defined by $X(\mathcal{D}_{n})(A)=X(A[t]/t^{n+1})$. \item The \emph{formal arc space}, denoted $X(\mathcal{D}_{\infty})$ is defined as the pro-limit of the truncated arc spaces, where the limit is induced by the natural maps $A[t]/t^{n+1}\rightarrow A[t]/t^{n}$. \item The \emph{arc space} of $X$, $X(\mathcal{D})$, is defined to have $A$-points $X(A[[t]])$. \end{enumerate}\end{definition} This is of course a carbon copy of the definition in the classical case. We have, as was first observed by Gaitsgory and Rozenblyum, an analogue of the representability results above. \begin{lemma}\begin{enumerate}\item If $X$ is a derived scheme (resp. derived affine scheme), then the spaces of truncated arcs are derived schemes (resp. derived affine schemes). \item The same holds true for $X(\mathcal{D}_{\infty})$. 
\item If $X$ is derived affine then the arc space $X(\mathcal{D})$ is a derived affine scheme.\end{enumerate}\end{lemma}\begin{proof} Cf. \cite{GR}; it is not fundamentally different from the proof in the classical case. \end{proof}\begin{remark} We will be particularly interested in the case where $X$ is taken to be a classical scheme. In the case where $X$ is further assumed to be quasi-compact and quasi-separated, the result of Bhatt mentioned above implies once again that the space of (derived) arcs, $X(\mathcal{D})$, is representable by a derived scheme. \end{remark} We have the following lemma due to \cite{GR}:\begin{lemma} Let $X$ be a smooth classical scheme; then the spaces $X(\mathcal{D}_{n})$ and $X(\mathcal{D})$ are classical. \end{lemma}\begin{proof} We reproduce the proof of \cite{GR}, although the arguments we present below for our main result give an independent proof. It suffices to prove the result for $X$ affine and for the spaces $X(\mathcal{D}_{n})$ for all $n$. We assume $X$ is given as the zeroes of a smooth map $f:\mathbb{A}^{p}\rightarrow\mathbb{A}^{q}$. Formation of truncated arc spaces commutes with the formation of limits, so $X(\mathcal{D}_{n})$ is obtained as the zeroes of the induced map $f(\mathcal{D}_{n})$. The infinitesimal lifting criterion for smoothness implies this map is also smooth. It is finitely presented and thus in fact flat, according to a standard piece of commutative algebra. It follows that there are no tors and the classical fibre product computing $X(\mathcal{D}_{n})^{cl}$ also computes the derived space $X(\mathcal{D}_{n})$. \end{proof}\begin{remark} Gaitsgory and Rozenblyum note further that for non-smooth spaces this equivalence need not hold, and point out that if $X$ is taken to be $spec(\mathbb{C}[z]/z^{2})$ then the derived arc space of $X$ is a non-trivial derived thickening of its classical arc space.
They observe further that even for singular spaces the derived thickening \emph{can be} trivial, for example for an ordinary double-point $(xy=z^{2})$. In fact they generalise this to the case of the nilpotent cone $\mathfrak{N}$ inside a classical Lie algebra $\mathfrak{g}$, the ordinary double point being the special case of $\mathfrak{sl}_{2}$. According to a theorem of Kostant, the nilpotent cone is a reduced complete intersection, we take this as our starting point.\end{remark}\section{THE MAIN RESULTS} \subsection{Explicit Cofibrant Models} Let $X=spec(A)$ be an affine derived scheme and assume $A$ is given as a cofibrant element of $\mathbf{CDGA}_{\mathbb{C}}$. We may assume that it is of the form \[\mathbb{C}\Big[x^{\lambda}\, \big|\, \partial_{A}(x^{\lambda})=f_{\lambda}(x^{\underline{\mu}}) \, \Big],\] where the $\lambda\in\Lambda$ form an indexing set, $\partial_{A}$ denotes the differential and $x^{\underline{\mu}}$ denotes a multi-variable. We wish to give a description of the algebra of functions on $X(\mathcal{D})$ in these terms, in particular we want to produce a cofibrant model for functions on $X(\mathcal{D})$. We will refer to this algebra as $A(\mathcal{D})$, so that we will have $spec(A(\mathcal{D}))=X(\mathcal{D})$.\\ \\ What follows is as much a construction as a definition, we will show below that the algebras constructed in the following definition are indeed models for the algebras on the relevant arc spaces. \begin{definition}\begin{itemize}\item $A(\mathcal{D})$ will be freely generated by elements $x^{\lambda}_{i}$ for $\lambda\in\Lambda$ and for $i\geq 0$. The differential $\partial_{A(\mathcal{D})}$ is best described via a generating function and to this end we introduce the formal sum, $x^{\lambda}(t)=\sum x^{\lambda}_{i}t^{i}$. 
We now define $\partial_{A(\mathcal{D})}$ via the generating function $\partial_{A(\mathcal{D})}(x^{\lambda}(t))=f_{\lambda}(x^{\underline{\mu}}(t)).$\item Restricting to those variables $x^{\lambda}_{i}$ with $i$ at most $n$ defines an algebra we will denote $A(\mathcal{D}_{n})$. \item Setting $x^{\lambda}_{i}$ to be of degree $i$ defines a grading on $A(\mathcal{D})$ which we refer to as the grading by \emph{conformal weight}. It is in fact induced by the rotation action of $\mathbb{G}_{m}$ on $\mathcal{D}$. \end{itemize}\end{definition} We now have the following simple lemma, hinted at above: \begin{lemma} With notation as above, the algebra $A(\mathcal{D})$ is a model for the algebra of functions on the arc space $X(\mathcal{D})$.\end{lemma} \begin{proof} If $A\rightarrow B$ is a cofibration in $\mathbf{CDGA}_{\mathbb{C}}$ then it is easily seen that so too is the induced map $A(\mathcal{D})\rightarrow B(\mathcal{D})$. A cofibrant algebra is a (possibly transfinitely) iterated coproduct of symmetric algebras of derived vector spaces, for which the result is clear. \\ \\ The result now follows since formation of arc spaces commutes with arbitrary homotopy limits, and the assignment, $A\mapsto A(\mathcal{D})$ preserves \emph{homotopy} colimits, as it preserves classical colimits and cofibrations. \end{proof}\begin{example} Set $X=spec(\mathbb{C}[z]/z^{2})$. A cofibrant model for $\mathcal{O}(X)$ can be taken to be $A=\mathbb{C}\big[x,\zeta \, \big| \, \partial(\zeta)=x^{2} \,\big]$. Then $A(\mathcal{D})$ has $$\partial(\zeta_{0})=x_{0}^{2}, \partial(\zeta_{1})=2x_{0}x_{1}, \partial(\zeta_{2})=2x_{0}x_{2}+x_{1}^{2}, \ \&c.$$ Let us note that the class $\eta=2x_{1}\zeta_{0}-x_{0}\zeta_ {1}$ is a \emph{non-zero} element of $\pi_{1}A(\mathcal{D})$, and so this genuinely differs from its classical counterpart, as mentioned above. If we now write $X_{n}=spec(\mathbb{C}[z]/z^{n})$ we can consider $\mathcal{O}(X_{n}(\mathcal{D}))$. 
This comes with two gradings, one from conformal weight and one from the $\mathbb{G}_{m}$-action on the space $X_{n}$. We can compute the bi-graded Euler characteristic of $\mathcal{O}(X_{n}(\mathcal{D}))$ as follows, where we write $q$ for the conformal weight variable and $z$ for the internal weight one. Then we have $\chi(\mathcal{O}(X_{n}(\mathcal{D})))=(-z^{n};q)_{\infty}(z;q)^{-1}_{\infty}.$ Here we have written $(z;q)_{\infty}=\prod(1-q^{i}z)$ as is standard in the $q$-series literature. We remark that this is a hugely more pleasant answer than one would get by computing the bi-graded dimension of the algebra of functions on the space of \emph{classical} arcs into $X_{n}$. \end{example} \subsection{Weak Smoothness} We require a simple definition before stating our criterion for classicality of derived arc spaces. Before we state it we remind the reader that for $X$ a scheme there is an object $\mathbb{L}_{X}\in QC(X)$ called the \emph{cotangent complex}. Its $0^{th}$ homotopy sheaf is the cotangent sheaf $\Omega^{1}_{X}$ and in the case of a \emph{smooth} scheme $X$ the two things agree. Inspired by this we define: \begin{definition} We say a scheme $X$ is \emph{weakly smooth} if its cotangent complex has no higher homotopy groups, i.e. if there is an isomorphism $\mathbb{L}_{X}= \Omega^{1}_{X}[0]$ inside $QC(X)$. \end{definition} We may then state the main result of this sub-section:\begin{tcolorbox} \begin{theorem} If $X$ is a classical scheme, then the derived scheme of arcs, $X(\mathcal{D}),$ is classical iff $X$ is weakly smooth. \end{theorem}\end{tcolorbox}\begin{proof} We may assume $X=spec(A)$ is affine; once again we assume given a cofibrant model for $A$ of the form $$\mathbb{C}\Big[x^{\lambda}\, \big|\, \partial_{A}(x^{\lambda})=f_{\lambda}(x^{\underline{\mu}}) \, \Big],$$ and write $A(\mathcal{D})$ for the associated cofibrant model for $\mathcal{O}(X(\mathcal{D}))$. \\ \\ Let us first assume that $X(\mathcal{D})$ is classical.
As mentioned above this means that $A(\mathcal{D})$ has no higher homotopy groups. $A(\mathcal{D})$ is graded by conformal weight $q$ and this of course descends to the homotopy groups. There is a sub-complex $V$ of $A(\mathcal{D})$ consisting of elements of conformal weight $1$. This is simply the cotangent complex $\mathbb{L}_{A}$, and thus we have deduced weak smoothness of $A$.

We now focus on the converse. We will show that each space $X(\mathcal{D}_{n})$ is classical, i.e.\ that for all $n$ and $i>0$ we have the vanishing $\pi_{i}A(\mathcal{D}_{n})=0$. Below we will introduce an increasing filtration, $\mathcal{F}_{n}^{\leq}$, counting weight in the top conformal weight variables $x^{\lambda}_{n}$. Examining the generating function description for the differential $\partial_{A(\mathcal{D}_{n})}$ we see that we have $$\partial_{A(\mathcal{D}_{n})}(x^{\lambda}_{n})=\sum_{\mu}(\partial_{\mu}f_{\lambda})x^{\mu}_{n}+\mathcal{O}(<n),$$ where $f_{\lambda}$ is meant to be understood as a polynomial in the weight $0$ variables and $\mathcal{O}(<n)$ denotes a sum of monomials containing no conformal weight $n$ variables. Now we can define $\mathcal{F}_{n}^{\leq}$ by letting $\mathcal{F}_{n}^{\leq i}A(\mathcal{D}_{n})$ be spanned by monomials of weight at most $i$ in the conformal weight $n$ generators. The formula for $\partial_{A(\mathcal{D}_{n})}(x^{\lambda}_{n})$ above shows that this respects the differential.
Further, it immediately implies that we have $$Gr_{\mathcal{F}_{n+1}}A(\mathcal{D}_{n+1})\cong sym_{A(\mathcal{D}_{n})}(\mathbb{L}_{A}\otimes_{A}A(\mathcal{D}_{n})).$$ We have a convergent $E^{1}$ spectral sequence: $$\pi_{*} sym_{A(\mathcal{D}_{n})}(\mathbb{L}_{A}\otimes_{A}A(\mathcal{D}_{n}))=\pi_{*}(Gr_{\mathcal{F}_{n+1}}A(\mathcal{D}_{n+1}))\implies Gr(\pi_{*}A(\mathcal{D}_{n+1})).$$ Weak smoothness now allows us to prove by induction on $n$ that all the algebras $A(\mathcal{D}_{n})$ are classical.\end{proof}

\begin{remark} Geometrically (according to the Rees construction) we are constructing a derived $\mathbb{A}^{1}$-family, $\mathcal{X}\rightarrow \mathbb{A}^{1}$, with generic fibres $\mathcal{X}_{\eta}=X(\mathcal{D}_{n+1})$ and central fibre $$Tot_{X(\mathcal{D}_{n})}(X(\mathcal{D}_{n})\otimes_{X}\mathbb{L}_{X}).$$ \end{remark}

\subsection{Classicality for lci schemes} We now prove that the derived arc spaces of a reduced local complete intersection inside a smooth scheme (henceforth an lci scheme) are classical. This is standard commutative algebra given the above characterisation in terms of weak smoothness. \begin{lemma} If $X$ is an lci scheme, then it is weakly smooth.\end{lemma} \begin{proof} After an \'etale localisation we can assume that $X=spec(R)$ is a complete intersection inside an affine space $\mathbb{A}^{d}$, with ideal sheaf $I$ being cut out by equations $(f_{1},...,f_{c})$. Being a complete intersection means that $$\mathbb{C}\big[x_{1},...,x_{d},\zeta_{1},...,\zeta_{c}\ | \ \partial(\zeta_{i})=f_{i}\big]\rightarrow R$$ is a cofibrant resolution. From this we see that the cotangent complex $\mathbb{L}_{R}$ is computed as $$Jac(f_{1},...,f_{c}): R^{\oplus c}\rightarrow R^{\oplus d}.$$ This identifies with the map $$I/I^{2}\rightarrow\Omega^{1}(\mathbb{A}^{d})\otimes_{\mathbb{A}^{d}}X$$ coming from the conormal sequence.
As explained in Ex 17.2 of Eisenbud's book \cite{Ei}, this conormal sequence is exact on the left in the case of a reduced complete intersection, and so we deduce that $\pi_{1}\mathbb{L}_{R}=0$. Noting that $\pi_{>1}$ manifestly vanishes, we have proven weak smoothness of $X$. \end{proof}

Finally, we deduce the main theorem, which we restate here:

\begin{tcolorbox} \begin{theorem} If $X$ is a reduced local complete intersection scheme then the inclusion of the classical arcs into the derived arcs, \[X(\mathcal{D})^{cl}\hookrightarrow X(\mathcal{D}),\] is an equivalence. \end{theorem}\end{tcolorbox}
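For the example $X=spec(\mathbb{C}[z]/z^{2})$ above, the differential on $A(\mathcal{D})$ is obtained by extracting $t$-coefficients from $x(t)^{2}$; this bookkeeping is easy to check mechanically. The following is a sketch with an ad hoc monomial encoding of our own (it is an illustration, not part of the construction):

```python
from collections import Counter

def d_zeta(n):
    """Differential on A(D) for X = spec(C[z]/z^2), via the generating
    function d(zeta(t)) = x(t)^2 with x(t) = sum_i x_i t^i.

    Returns, for k = 0..n, the coefficient of t^k as a Counter mapping
    the monomial x_i * x_j (stored as a sorted index pair) to its
    integer coefficient."""
    coeffs = [Counter() for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i + j <= n:
                coeffs[i + j][tuple(sorted((i, j)))] += 1
    return coeffs

# d(zeta_0) = x0^2, d(zeta_1) = 2 x0 x1, d(zeta_2) = 2 x0 x2 + x1^2, ...
for k, c in enumerate(d_zeta(2)):
    print(k, dict(c))
```

One can read off directly that $\partial(2x_{1}\zeta_{0}-x_{0}\zeta_{1})=2x_{1}x_{0}^{2}-x_{0}\cdot 2x_{0}x_{1}=0$, which is the cycle $\eta$ of the example.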
\section*{ACKNOWLEDGMENT}
This work was partially supported by JST CREST and JSPS KAKENHI Grant Number JP15K16074.
\bibliographystyle{IEEEtran}
\section{Introduction
\label{sec_intro}
}
Low-cost or disposable wireless sensors have a huge potential impact on environmental monitoring and hazardous event detection. In this study, we consider the problem of the automatic deployment of sensor networks using a drone. Typical use cases include monitoring flash floods in a desert\cite{A9}, detecting humans after landslides, and detecting contamination on a mountain.
In most such applications, humans are not supposed to enter the target area for safety, cost, or other reasons; therefore, unmanned sensor deployment is required. In this paper, we use a drone to transport sensors to a target area to monitor it. Because most drones have limited battery resources, careful planning of their transportation is required to maximize a certain information-theoretic gain.
In this paper, we define a sensor scattering (SS) problem as a planning problem where drones scatter sensors in a target area to maximize a certain information criterion. In an SS problem, we have to consider the following two issues. First, because sensors are dropped from the air, their final positions on the ground are uncertain depending on the terrain and their construction material.
Second, it is reasonable to update the plan online because of uncertainty in sensor positions.
The SS problem has a close relationship with the sensor placement problem\cite{K27}. Both problems are challenging because they are typically NP-hard\cite{Y4}. The number of possible combinations of sensors increases exponentially as the number of sensors increases. Recently, Krause's work\cite{K24} proved that a $(1-1/e)$--approximation can be obtained using submodularity in the mutual information criterion. This means that 63\% of the optimal score is guaranteed using a greedy method, which avoids a combinatorial explosion. However, the method assumes that the sensor positions are known, which is invalid in SS problems.
\begin{figure}[t]
\centering
\includegraphics[clip,width=85mm,height=57mm]{fig/eye_catch/eye_catch_drone03.jpg}
\caption{Typical task scenario in which the task is to deploy sensors in the target area.
First, the drone takes off in the loading area and a sensor is attached to it.
Given the previously scattered sensors, the next target position $\hat{y}_{pos}$ is planned by the SS method (Plan).
The drone flies to $\hat{y}_{pos}$ and drops the sensor (Drop). The drone returns to the loading position (Return).
}
\label{eye_catch}
\vspace{-4mm}
\end{figure}
In this paper, we propose an SS method that plans sensor positions in an online manner.
It does not suffer from combinatorial explosion but obtains a $(1-1/e)$--approximation of the optimal solution.
A typical task scenario is illustrated in \figref{eye_catch}.
We built a customized physical drone that could scatter sensors in an indoor environment in addition to a simulation environment. In this paper, we present the theoretical background of our proposed method and its experimental validation.
To make the experimental results reproducible, the experiments were performed in the simulated environment shown in \figref{simenv}.
The following is our key contribution:
\begin{itemize}
\item We propose the SubModular Optimization Sensor Scattering (SuMo-SS) method that considers distance-based uncertainty in sensor positions, which is relevant for practical applications. The method is explained in \secref{sec_proposed}.
\end{itemize}
\section{Related Work
\label{related}
}
There have been many studies on sensor placement, especially in the fields of sensor networks and robotics\cite{Y4,W9,H13,K27}. For readability, we use the term ``drone'' instead of ``Unmanned Aerial Vehicle (UAV)'' or ``multirotor helicopter''.
Research on optimized node placement in wireless sensor networks was previously summarized in \cite{Y4}. Some recent studies used drones for deploying sensors for optimal topology\cite{C14} or connectivity\cite{V2}. In \cite{A9}, low-cost sensors were scattered from a drone and used for detecting a flash flood; however, the work did not discuss how to optimally scatter the sensors.
In the wireless sensor network community, drone-based monitoring has been investigated to improve the quality of experience (QoE)\cite{H16}. A method to minimize a cost function based on a cover function was proposed in \cite{H16}. Energy-efficient 3D placement of a drone that maximizes the number of covered users with the minimum required transmit power was proposed in \cite{A11}.
Uncertainty in positions, poses, and maps has been widely investigated in path planning and simultaneous localization and mapping (SLAM) studies\cite{T23}. In \cite{S24}, a path planning method for mobile robots based on expected uncertainty reduction was proposed. Uncertainty in the maps and poses was modeled with a Rao-Blackwellized particle filter. Sim and Roy proposed a path planning method based on an active learning approach utilizing A-optimality\cite{S23}. In other studies, the path was planned to maximize a certain information-theoretic gain of sensors mounted on drones\cite{N11}.
The first attempt that introduced submodularity in path planning was done in \cite{C15}. Singh et al. also proposed a path planning method utilizing submodularity, and conducted real-world experiments with river- and lake-monitoring robots\cite{S22}. The submodularity objective proposed in \cite{K28} included sensor failure and a penalty reduction for the worst case. Their target application included the detection of contamination in a large water distribution network.
Golovin et al. proposed the concept of adaptive submodularity in order to extend the optimization policy from a greedy method to adaptive policies\cite{G8}. In their work, uncertainty in sensor failure was discussed.
However, none of the above studies discussed uncertainty in sensor positions.
Submodularity has a close relationship with the combinatorial theory of matroids. Williams et al. recently proposed to model multi-robot tasks as functionality-requirement pairs, and applied a matroid optimization method to task allocation\cite{W8}. In their model, no uncertainty was handled; in particular, unlike our method, theirs does not consider uncertainty in sensor positions.
There have also been many attempts at alternative sensor placement methods such as evolutionary computation\cite{C16}. The method proposed in \cite{A12} can handle uncertainty in line-of-sight coverage; however, it cannot handle uncertainty in sensor positions. Moreover, it is not applicable to an SS problem because it does not support online planning. Indeed, most evolutionary algorithms cannot learn in an online manner.
Recently, drone-based monitoring has been extended to sound source localization. In \cite{W7}, a microphone array equipped to a drone was used for robustly localizing sound sources on the ground. Nakadai et al. proposed an online outdoor sound source localization method and evaluated it with a microphone array embedded on a drone\cite{N12}.
\section{Problem Statement and Task Scenario
\label{task}
}
\begin{figure}[t]
\centering
\includegraphics[clip,width=87mm,height=70mm]{fig/env/real_env06.jpg}
\caption{
Model environment (8 $\times$ 12 m).
(a) Target area. (b) A sensor is attached by the experimenter. The yellow objects are sensors already scattered.
(c) The drone drops the sensor at the target position $\hat{y}_{pos}$. (d) The sensor bounces on the terrain and stops at the final position. The double arrow represents deviation.
}
\label{realenv02}
\end{figure}
In this paper, we define an SS problem as follows:
\begin{itemize}
\item A planning problem in which drones scatter sensors in a target area to maximize a certain information criterion.
\end{itemize}
A typical task scenario of SS is illustrated in \figref{eye_catch}. SS is an online planning problem based on uncertain information. Previously scattered sensors affect the position of subsequent sensors, and actual sensor positions might deviate from their planned positions. In this paper, we define the term ``{\bf deviation}'' as follows:
\begin{itemize}
\item The distance between the position at which the drone drops the sensor and the position at which it lands. Although we consider only two-dimensional deviation and distance, the method is not limited to two dimensions. In this paper, the distance is simply projected onto the ground.
\end{itemize}
The following are the input and output of the planning method:
\begin{description}
\item[{\bf Input}]: Covariance between previously scattered sensors and their target positions
\item[\hspace{-2.5mm}{\bf Output}]: Target position of the next sensor and its informational gain
\end{description}
The target area is defined as the area to be monitored by the sensors. Humans are not supposed to enter it.
The task scenario is summarized as follows:
\begin{enumerate}
\item[(0)] Initialization: The drone takes off from the loading area.
While the drone is hovering, the experimenter attaches a sensor to it. Although it could be autonomously loaded by the drone, that idea is outside the scope of this study.
\item[(1)] Plan: Given the previously scattered sensors, target position $\hat{y}_{pos}$ is planned by our method.
\item[(2)] Drop: The drone flies to $\hat{y}_{pos}$ and drops the sensor. The actual sensor position on the ground, $y_{pos}$ is randomly deviated from $\hat{y}_{pos}$.
\item[(3)] Return: The drone returns to the loading position, and loads the next sensor. Go to Step (1) until the maximum number of sensors have been placed.
\end{enumerate}
A demo video clip is available online\footnote{\url{https://youtu.be/cLx9_Zv10Oo}}.
We assume that no remote control is performed by humans; therefore, the drone must navigate itself based on its sensor observations and a known map. Indeed, in our experiments explained in Section \ref{exp}, we used the monocular SLAM method proposed by Engel {\it et al.}\cite{E2}.
The input to the method is images taken by a monocular camera mounted on the drone. Because no external position estimation devices are used in the experiments, our method works both indoors and outdoors.
\section{Sensor Model
\label{sec_model}
}
\begin{table}[t]
\caption{Symbol notations}
\centering
{\normalsize
\begin{tabular}{c p{5cm} }
\toprule
$y, y'$ & Sensors\\
$V$ & Set of target position candidates\\
$A$ & Set of previously scattered sensors\\
$\bar{A}$ & $V \backslash (A \cup \{y\})$\\
$MI(A)$ & Mutual information of $A$ and $V \backslash A$\\
$\delta_y$ & Increase in $MI(A)$ when sensor $y$ is added\\
$y_{pos}$ & Actual position of sensor $y$\\
$\hat{y}_{pos}$ & Next target position\\
$\vec{\epsilon}_{dev}$ & Deviation\\
$\vec{\Sigma}_{dev}$ & Covariance matrix of $\vec{\epsilon}_{dev}$\\
$d$ & Traveling distance of drone\\
$\mathcal{N}(\cdot, \cdot)$ & Gaussian distribution\\
$K(\cdot, \cdot)$ & Kernel function\\
$y_{obs}$ & Observation of sensor $y$\\
$\mathcal{Y}_A$ & Observation vector of sensor set $A$\\
$p(y_{obs})$ & Probability distribution of $y_{obs}$ (Gaussian)\\
$p(\mathcal{Y}_A)$ & Joint distribution of $\mathcal{Y}_A$ (Gaussian)\\
$\mu_y, \sigma^2_y$ & Mean and variance of $y_{obs}$\\
${\bm \mu}_A, \vec{\Sigma}_{AA}$ & Mean and covariance of $\mathcal{Y}_A$\\
$\mu_{y|A}, \sigma^2_{y|A}$ & Mean and variance of $y_{obs}$ conditioned by $\mathcal{Y}_A$\\
$\vec{\Sigma}_{yA}$ & Covariance vector of $y_{obs}$ and $\mathcal{Y}_A$\\
$\sigma^2_{yy'}$ & Covariance of $y_{obs}$ and $y'_{obs}$\\
\bottomrule
\end{tabular}
}
\label{tab01}
\end{table}
The symbol notations used in this paper are summarized in \tabref{tab01} for readability.
First, we explain the sensor models used in this study. We assume that the sensor observations are modeled by Gaussian processes. That is, when a new sensor is introduced to the environment, its observations are modeled by a Gaussian distribution:
\begin{align}
p(y_{obs}) &= \mathcal{N}(\mu_y, \sigma^2_y).
\end{align}
The observations obtained from the sensor set $A$ are also modeled by a Gaussian distribution:
\begin{align}
p(\mathcal{Y}_A) &= \mathcal{N}( {\bm \mu}_A, \vec{\Sigma}_{AA} ).
\end{align}
We make the same assumption as the previous study\cite{K24}: the covariance between two sensors can be approximated by a radial basis function (RBF) kernel using the sensor positions as its inputs.
Thus, the covariance between sensors $y$ and $y'$ is modeled as follows:
\begin{align}
\sigma^2_{yy'} \simeq K(y_{pos}, y'_{pos}) &= \exp \left\{ -\frac{||y_{pos} - y'_{pos}||^2}{2 \phi^2}
\right\},
\end{align}
where $\phi$ denotes the kernel parameter. The intuition behind the above equation is that sensors that are close together will have similar values.
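A minimal sketch of this kernel follows; note the minus sign in the exponent, which makes the covariance decay with distance, and note that $\phi=1$ is an arbitrary illustrative choice:

```python
import math

def rbf_cov(p, q, phi=1.0):
    """K(p, q) = exp(-||p - q||^2 / (2 phi^2)): covariance decays with
    distance, so nearby sensors are modeled as strongly correlated."""
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.exp(-d2 / (2.0 * phi ** 2))

print(rbf_cov((0, 0), (0, 0)))                              # 1.0: same position
print(rbf_cov((0, 0), (0.5, 0)) > rbf_cov((0, 0), (3, 0)))  # True: nearby > distant
```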
We assume that a sensor is dropped at one of the target candidates defined in the target area beforehand. Let $V$ and $A$ be the set of target candidates and the set of previously selected target positions, respectively. Krause's method\cite{K24} uses mutual information as the information gain obtained by introducing a new sensor $y$ given $A$. Let $MI(A)$ be the mutual information between observations obtained from $A$ and $V \backslash A$:
\begin{align}
MI(A) \triangleq I(A;V \backslash A).
\end{align}
Note that we cannot directly obtain observations from $V \backslash A$; therefore, we use the sensor model.
When a sensor $y$ is newly introduced, the increase in $MI(A)$ is:
\begin{align}
\delta_y = MI(A\cup y) - MI(A).
\end{align}
Although a greedy method does not always give the optimal solution, it is guaranteed to give a $(1-1/e)$--approximation for monotonic submodular functions\cite{N10}. $MI(A)$ is a monotonic submodular function when the number of sensors is less than $|V|/2$\cite{K24}.
Because $(1-1/e)$ is approximately 0.63, this means that 63\% of the optimal score is guaranteed even in the worst case. In typical sensor placement tasks, about 90\% of the optimal score was reported empirically in the above work.
Under a condition where sensors can be placed without uncertainty, the near-optimal target position $\hat{y}_{pos}$ is obtained as follows:
\begin{align}
\hat{y}_{pos} &= \argmax_{y \in V\backslash A} \delta_y.
\label{eq41}
\end{align}
Details are explained in Appendix B.
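A pure-Python sketch of this greedy rule is given below, written with the conditional-variance ratio form of $\delta_y$ derived in Appendix B (maximizing the ratio $\sigma^2_{y|A}/\sigma^2_{y|\bar{A}}$ is equivalent to maximizing $\delta_y$). The grid, kernel width, and diagonal jitter are illustrative choices of ours, not values from the experiments:

```python
import math

def rbf(p, q, phi=1.5):
    """K(p, q) = exp(-||p - q||^2 / (2 phi^2))."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return math.exp(-d2 / (2.0 * phi ** 2))

def solve(M, b):
    """Solve M x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    A = [list(row) + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                fac = A[r][c] / A[c][c]
                A[r] = [x - fac * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

def cond_var(y, S, jitter=1e-6):
    """sigma^2_{y|S} = sigma^2_y - Sigma_yS Sigma_SS^{-1} Sigma_Sy."""
    if not S:
        return rbf(y, y)
    K = [[rbf(a, b) + (jitter if i == j else 0.0)
          for j, b in enumerate(S)] for i, a in enumerate(S)]
    k = [rbf(y, a) for a in S]
    w = solve(K, k)
    return rbf(y, y) - sum(wi * ki for wi, ki in zip(w, k))

def greedy_mi(V, n_pick):
    """Greedily add the candidate maximising sigma^2_{y|A} / sigma^2_{y|Abar},
    i.e. the candidate with the largest MI gain delta_y."""
    A = []
    for _ in range(n_pick):
        rest = [v for v in V if v not in A]
        y = max(rest, key=lambda y: cond_var(y, A) /
                max(cond_var(y, [v for v in rest if v != y]), 1e-12))
        A.append(y)
    return A

V = [(i, j) for i in range(5) for j in range(5)]  # 25 candidate grid points
picked = greedy_mi(V, 3)
print(picked)
```

The small diagonal jitter only regularizes the kernel matrix inversion; it plays no role in the criterion itself.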
\section{Proposed Method: SuMo-SS
\label{sec_proposed}
}
The main difference between the ordinary sensor placement problems and SS is that sensor positions have uncertainty. Instead of \eqref{eq41}, SuMo-SS maximizes the expectation of $\delta_y$ over a deviation distribution as follows:
\begin{align}
\hat{y}_{pos} &= \argmax_{y \in V\backslash A}
\left\{ \mathbb{E}_{pos}[MI(A\cup y)] - \mathbb{E}_{pos}[MI(A)]\right\}\nonumber\\
&= \argmax_{y \in V\backslash A}
\mathbb{E}_{pos}[MI(A\cup y) - MI(A)].
\label{eq43}
\end{align}
In Appendix A, we explain that the above expected mutual information is submodular.
Using the transformation explained in Appendix B, we obtain the following:
\begin{align}
\hat{y}_{pos}&= \argmax_{y \in V\backslash A}
\mathbb{E}_{pos}
\left[
\frac{ \sigma^2_y - \vec{\Sigma}_{yA} \vec{\Sigma}_{AA}^{-1} \vec{\Sigma}_{Ay} }
{\sigma^2_y - \vec{\Sigma}_{y\bar{A}} \vec{\Sigma}_{\bar{A}\bar{A}}^{-1} \vec{\Sigma}_{\bar{A}y}}
\right].
\label{eq44}
\end{align}
To obtain the expectation above, we model the final position of a dropped sensor as follows:
\begin{align}
y_{pos} &= \hat{y}_{pos} + \vec{\epsilon}_{dev}\\
\vec{\epsilon}_{dev} &\sim \mathcal{N}( \vec{0}, \vec{\Sigma}_{dev} ).
\end{align}
This means that the deviations are modeled by a Gaussian distribution, where the mean is zero and the covariance matrix is $\vec{\Sigma}_{dev}$. Because prior knowledge about the sensor materials and the environment's terrain is given in most practical applications in industry, we assume that $\vec{\Sigma}_{dev}$ is set with reasonable values by the developer.
In a preliminary investigation with the physical environment shown in \figref{realenv02}, the deviation depended mainly on the distance from the loading position and on the direction ($x$ and $y$ axes); therefore, we model $\vec{\Sigma}_{dev}$ as a linear function of the distance as follows:
\begin{align}
\vec{\Sigma}_{dev} &=
\begin{bmatrix}
w_1 d + \gamma & \gamma\\
\gamma & w_2 d + \gamma
\end{bmatrix}
,
\end{align}
where $d$ denotes the Euclidean distance between the loading position and $\hat{y}_{pos}$, $(w_1, w_2)$ denotes weight parameters with regard to the directions, and $\gamma$ denotes a positive small number so that the variances are always strictly positive.
Although the distribution of the previously scattered sensors' positions is continuous, the expectation can be approximated by a discrete mesh with appropriate granularity. \eqref{eq44} can be computed in parallel because the discrete points are independent.
\section{Experiments
\label{exp}
}
To validate our method, we conducted simulation experiments in which SuMo-SS was compared with a reasonable baseline method. In the following, we first explain the physical drone and environment that were used for building the simulation. Then, we explain qualitative and quantitative results.
\subsection{Robot and Environment Models}
\input{section5.hardware}
\subsection{Experimental Settings}
\begin{figure}[b]
\centering
\includegraphics[clip,width=85mm,height=60mm]{fig/env/drone_simenv02.png}
\caption{
Simulation environment used in the experiments. The right bottom image is a sample camera image taken by the drone.
The blue cubes represent sensors.
The sensors are supposed to be placed in the area inside the green lines.
}
\label{simenv}
\end{figure}
Experiments were conducted in the simulation environment shown in \figref{simenv}. In the figure, the blue cubes represent the deployed sensors. In the environment, we set 25 grid points as the target candidates, $V$. The 25 grid points were evenly spaced within the 5$\times$5 m area surrounded by the green lines. Note that even if a sensor was dropped inside the area, it might land outside of it because of deviation.
A sample camera image taken by the drone is shown in the bottom-right panel of \figref{simenv}. Feature points detected by the aforementioned monocular SLAM method\cite{E2} are shown as red, green, and yellow dots.
\subsection{Qualitative Results}
\input{section5.fig}
We compared our proposed method (SuMo-SS) and a baseline method. We used the method proposed by Krause\cite{K24} as the baseline. Unlike SuMo-SS, the baseline method does not consider the uncertainty of sensor positions.
\figref{quali_0302} shows the qualitative results, where $(w_1, w_2)=(0.3,0.2)$. The subfigures in the left and right columns are the results of the baseline and SuMo-SS, respectively. The color strength represents $\delta_y$ (the increase in mutual information when sensor $y$ is added) under three conditions: (a) and (b) show two sensors, (c) and (d) show five sensors, and (e) and (f) show eight sensors. In the subfigures, the squares, cross marks (``\ding{53}''), and black circles represent $V$ (the set of target position candidates), $y_{pos}$ (the actual position of sensor $y$), and the loading position, respectively. Note that $y_{pos}$ was unobservable to the methods.
Numbers in blue represent the ordering of $y_{pos}$. In each subfigure, '$\hat{y}$' in blue represents the penultimate target position $\hat{y}_{pos}$.
The difference in the sensor positions between subfigures (a) and (b) is thought to be caused by deviation, which indicates that SuMo-SS and the baseline show no significant difference when two sensors are scattered. Although there is a difference between subfigures (c) and (d), the bias in the sensor positions is not significant. By contrast, subfigure (e) shows that the sensors are scattered in a biased manner compared with subfigure (f). This indicates that SuMo-SS could plan to scatter sensors without such bias under uncertainty. We validate this observation quantitatively in \figref{cumuMIcomp}.
\subsection{Quantitative Results}
A quantitative comparison is shown in \figref{cumuMIcomp}. The horizontal axis represents the number of sensors, $n$. The vertical axis represents $MI(A_n)$, which is the mutual information when $n$ sensors are introduced to the environment. $MI(A_n)$ is defined as follows:
\begin{align}
MI(A_n) = \sum_{i=2}^{n} \delta_{y_i},
\end{align}
where $\delta_{y_i}$ denotes the information gain when the $i$-th sensor is introduced. To satisfy the condition that $MI(A_n)$ is a monotonic submodular function, $n$ needs to be less than $|V|/2$. SuMo-SS and the baseline method require at least one sensor in the environment. Therefore, the first target position was manually given as the center of the area. Then, the second to $n$-th target positions were planned by the proposed and baseline methods.
\figref{cumuMIcomp} compares the results from (a) SuMo-SS (proposed), (b) baseline\cite{K24}, and (c) random selection. The random selection method was introduced as a lower-bound method, which selected the next target position $\hat{y}_{pos}$ randomly from $V$. The left-hand and right-hand figures show the results where $(w_1, w_2)=(0.3,0.2)$ and $(w_1, w_2)=(0.35,0.35)$, respectively. The average results of 10 experimental runs are shown.
From the left-hand figure of \figref{cumuMIcomp}, SuMo-SS obtained larger $MI(A_n)$ than the baseline method. $MI(A_n)$ values at $n=12$ obtained by (a) SuMo-SS (proposed), (b) baseline\cite{K24}, and (c) random selection were 22.14, 20.47, and 16.89, respectively. In the right-hand figure, $MI(A_n)$ at $n=12$ obtained by (a), (b), and (c), were 20.59, 19.26, and 17.40, respectively.
The above results show that our method obtained better results in both settings.
\begin{figure}[t]
\begin{minipage}[c]{40mm}
\centering
\includegraphics[clip,trim=0 0 30 10,width=40mm,height=38mm]{fig/pattern051/cumuMIcomp0_3_0_2.png}
\end{minipage}
\hspace{1mm}
\begin{minipage}[c]{40mm}
\centering
\includegraphics[clip,trim=0 0 30 10,width=40mm,height=38mm]{fig/pattern085/cumuMIcomp0_35_0_35.png}
\end{minipage}
\caption{
Comparison of (a) SuMo-SS (proposed), (b) baseline\cite{K24}, and (c) random method.
$MI(A_n)$ is plotted against the number of sensors, $n$. The average results of ten experimental runs are shown.
Left: $(w_1, w_2)=(0.3,0.2)$. Right: $(w_1, w_2)=(0.35,0.35)$.
}
\label{cumuMIcomp}
\end{figure}
\subsection{Sensitivity Analysis}
To validate SuMo-SS under various deviations, we evaluated its performance under various combinations of $(w_1, w_2)$. The conditions were $w_1 \in \{0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5\}$ and $w_2 \in \{0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5\}$. We ran 10 simulations for each combination of $(w_1, w_2)$. The evaluation was conducted for SuMo-SS and the baseline\cite{K24}. Therefore, we ran the simulation 980 ($=7\times7\times10\times2$) times in total.
\tabref{delta02} shows the performance difference between SuMo-SS and the baseline. The performance difference $\Delta_n$ is defined as follows:
\begin{align}
\Delta_n = MI(A_n)_{proposed} - MI(A_n)_{baseline},
\end{align}
where $n$ represents the number of sensors. A positive $\Delta_n (> 0)$ indicates that SuMo-SS obtained larger $MI(A_n)$ than the baseline. The subtables (a), (b), (c), and (d) show $\Delta_3$, $\Delta_6$, $\Delta_9$, and $\Delta_{12}$, respectively.
In each sub-table, the top and bottom three results are displayed in red and blue, respectively.
Sub-table (a) shows that SuMo-SS outperformed the baseline in all conditions (49 out of 49) when the number of introduced sensors was three ($n=3$). Sub-tables (b), (c) and (d) show that SuMo-SS outperformed the baseline in 41, 46, and 44 conditions when $n=6$, $n=9$, and $n=12$, respectively. These results indicate that SuMo-SS could obtain larger $MI(A_n)$ under most of the conditions.
\begin{table}[t]
\centering
\caption{Performance difference $\Delta_N$ at $N=3, 6, 9, 12$. Averages of ten experimental runs are shown.}
\label{delta02}
(a) $N=3$
\input{./fig/sensitivity/sensitivity_table03}
\\[3mm]
(b) $N=6$
\input{./fig/sensitivity/sensitivity_table06}
\\[3mm]
(c) $N=9$
\input{./fig/sensitivity/sensitivity_table09}
\\[3mm]
(d) $N=12$
\input{./fig/sensitivity/sensitivity_table12}
\end{table}
\subsection{Discussions
}
First, we discuss the covariance of the sensor observations. In this study, covariance was obtained based on the sensor positions.
This does not mean that SuMo-SS requires precise sensor position information.
Instead, this was because (a) our focus is not to model realistic sensor observations, and (b) simulations require a certain level of approximation of sensor observations. However, this assumption does not hold in real-world applications; therefore, covariance should be calculated from sensor observations. By doing so, we will be able to apply the proposed method to real-world applications, including cases in which scattered sensors are washed away by rain.
We used mutual information $MI(A)$ as the criterion for submodular optimization. However, we can use other criteria that have submodularity, such as the monitoring area size and the number of grid points covered by the area.
Future work includes improving the optimization policy beyond the greedy method. Golovin et al. proposed adaptive policies by introducing the concept of adaptive submodularity\cite{G8}. Although the assumptions for submodularity and adaptive submodularity differ, the SS problem could possibly be extended to satisfy them.
\figref{cumuMIcomp} might give the impression that the performance gains of the baseline and SuMo-SS slightly decrease at $n=12$. This is caused by the fact that the marginal gain $\delta_{y_i}$ is non-increasing for monotonic submodular functions. In this study, the maximum number of sensors was 12, which is the greatest integer less than $|V|/2$. However, this does not mean that the method is limited to 12 sensors. By increasing $|V|$, more sensors can be deployed without any fundamental changes. For example, if a developer needs to deploy 100 sensors in a practical use case, then $|V|$ can be set to $201, 202, \ldots$ because $|V|$ is arbitrary in SuMo-SS.
One might question whether sensors should simply be dropped at all of the grid points; however, not all sensors would be informative, because local events cannot always be monitored at a coarse granularity. Although the sensor material can be changed to reduce the deviation, reducing it to zero will not be easy. Although we used a drone to transport sensors, SuMo-SS can be applied to settings in which sensors are deployed with catapult-like devices, provided that the deviation can be modeled.
\section{Summary}
In this paper, we made the following contribution:
\begin{itemize}
\item We proposed the SuMo-SS method. Unlike existing methods, it can deal with uncertainty in sensor positions, which is relevant for practical applications. Its experimental validation against a baseline method was explained in \secref{exp}.
\end{itemize}
Target use cases of our method include building sensor networks for environmental monitoring.
Future work includes an experimental validation with physical drones in outdoor environments.
\section{Appendix}
\subsection{Submodularity in Expected Mutual Information}
A set function $f$ is called submodular if $f(A \cup \{e\}) - f(A) \geq f(B \cup \{e\}) - f(B)$
for every $A, B \subseteq E$ with $A \subseteq B$ and every $e \in E \backslash B$. $MI(A)$ is proved to be a monotone submodular function under particular conditions\cite{N10,K24}.
If the probability distribution of $y_{pos}$ is discrete, \eqref{eq43} can be rewritten as follows:
\begin{align}
\hat{y}_{pos} &=
\argmax_{y \in V\backslash A}
\sum_{y_{pos}} p(y_{pos})[MI(A\cup y) - MI(A)],
\label{eqA7}
\end{align}
where $p(y_{pos})$ denotes the probability distribution of $y_{pos}$. A nonnegative linear combination of submodular functions is also submodular \cite{K28,K29}; therefore, the right-hand side of \eqref{eqA7} is submodular. If $p(y_{pos})$ is continuous instead, we can approximate it arbitrarily well by sufficiently fine discrete distributions of the form used in \eqref{eqA7}. Therefore, the expected mutual information in \eqref{eq43} is a submodular function.
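The resulting selection rule can be sketched in a few lines (an illustration with hypothetical names, not the authors' implementation): greedily add the placement with the largest expected marginal gain, averaging over a discrete distribution of realized positions. Any monotone submodular objective can stand in for the mutual information.

```python
def expected_gain(gain, A, y, positions):
    """Expected marginal gain of nominal placement y, as in eq. (A7):
    average the gain over realized positions. `positions(y)` yields
    (position, probability) pairs."""
    return sum(p * (gain(A | {pos}) - gain(A)) for pos, p in positions(y))

def greedy_select(gain, candidates, positions, k):
    """Greedily add the placement with the largest expected gain."""
    A = set()
    for _ in range(k):
        best = max(candidates - A,
                   key=lambda y: expected_gain(gain, A, y, positions))
        A.add(best)
    return A
```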
\subsection{Submodular Optimization Using Mutual Information}
Hereinafter, we explain the method proposed in \cite{K24}. For readability, $y_{obs}$ and $\mathcal{Y}_A$ are written as $y$ and $A$, respectively.
From the definition of mutual information, $MI(A)$ is decomposed as follows:
\begin{align*}
MI(A)
&= H(A) - H(A|V \backslash A) = H(A) - H(A| \bar{A} \cup y )\\
MI(A \cup y) &= H(A \cup y) - H(A \cup y | \bar{A}),
\end{align*}
where $H(\cdot)$ represents entropy.
Let $\delta_y$ be the difference between $MI(A \cup y)$ and $MI(A)$ as follows:
\begin{align}
\delta_y &= MI(A \cup y) - MI(A) \nonumber\\
&= H(A \cup y) - H(A \cup y | \bar{A}) - H(A) + H(A|\bar{A} \cup y) \label{eqA2}
.
\end{align}
From the definition of conditional entropy, $H(A \cup y | \bar{A})$ can be written:
\begin{align}
H(A \cup y | \bar{A}) &= H(A \cup y, \bar{A}) - H(\bar{A}) \nonumber \\
&= H(V) - H(\bar{A}).
\end{align}
We can also transform $H(A|\bar{A} \cup y)$ in the same manner. Thus \eqref{eqA2} can be rewritten:
\begin{align}
\delta_y
&= H(A \cup y) - H(A) - H(\bar{A} \cup y) + H(\bar{A}) \nonumber\\
&= H(y| A) - H(y|\bar{A}) . \label{eqA3}
\end{align}
Writing out the conditional entropy $H(y|A)$ for Gaussian variables gives:
\begin{align}
H(y|A)
&= -\int p(y,A) \log \mathcal{N} (\mu_{y|A}, \sigma^2_{y|A})dydA \nonumber\\
&= \frac{1}{2} \log 2 \pi e \sigma^2_{y|A}, \label{eqA4}
\end{align}
where the standard formula for the entropy of a Gaussian distribution is used. $H(y|\bar{A})$ is obtained in the same way.
From Equations (\ref{eqA3}) and (\ref{eqA4}), we obtain
\begin{align}
\delta_y = \frac{1}{2} \log \frac{\sigma^2_{y|A}}{\sigma^2_{y|\bar{A}}}. \label{eqA5}
\end{align}
When a multivariate Gaussian distribution is conditioned on a subset of its variables, the conditional variance satisfies:
\begin{align}
\sigma^2_{y|A} &= \sigma^2_y - \vec{\Sigma}_{yA} \vec{\Sigma}_{AA}^{-1} \vec{\Sigma}_{Ay}. \label{eqA51}
\end{align}
From Equations (\ref{eqA5}) and (\ref{eqA51}), we obtain the following:
\begin{align}
\delta_y = \frac{1}{2} \log
\frac{ \sigma^2_y - \vec{\Sigma}_{yA} \vec{\Sigma}_{AA}^{-1} \vec{\Sigma}_{Ay} }
{\sigma^2_y - \vec{\Sigma}_{y\bar{A}} \vec{\Sigma}_{\bar{A}\bar{A}}^{-1} \vec{\Sigma}_{\bar{A}y}}.
\label{eqA6}
\end{align}
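Equation \eqref{eqA6} can be evaluated directly from the joint covariance matrix. The sketch below (our illustration, with hypothetical names) computes $\delta_y$ for index sets $A$ and $\bar{A}$ using the Schur complement of \eqref{eqA51}:

```python
import numpy as np

def delta_y(Sigma, y, A, Abar):
    """MI gain of sensor y, eq. (A6): half the log-ratio of the
    conditional variances sigma^2_{y|A} / sigma^2_{y|Abar}."""
    def cond_var(idx):
        # Schur complement, eq. (A51): sigma^2_y - S_yI S_II^{-1} S_Iy
        S = Sigma[np.ix_(idx, idx)]
        s = Sigma[y, idx]
        return Sigma[y, y] - s @ np.linalg.solve(S, s)
    return 0.5 * np.log(cond_var(A) / cond_var(Abar))
```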
\section{Introduction}
Consider a problem domain like the one shown in figure~\ref{fig:example}.
A holonomic two-dimensional agent is tasked with navigating to a specified goal region as quickly as possible.
The path is blocked by doors that can only be opened by pressing the appropriate switch.
Planning the sequence of switches to toggle requires combinatorial search; deciding if a path exists to each switch requires motion planning.
As in many real-world planning domains, such as object manipulation or navigation among movable objects, the combinatorial search and motion planning problems are coupled and cannot be completely separated.
\begin{figure}
\centering
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=\textwidth]{example_problem.pdf}
\caption{The door puzzle problem}
\label{fig:example_problem}
\end{subfigure}%
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=\textwidth]{example_solution.pdf}
\caption{The optimal solution}
\label{fig:example_solution}
\end{subfigure}
\caption{%
The door-switch problem, an example task and motion planning domain.
A two-dimensional robot must navigate from the start location to a goal location, but the way is obstructed by doors that can only be opened by toggling a corresponding switch.
The optimal solution to this problem instance is to toggle the switches in the order $(1, 3, 2, 4, 5)$ and then go to the goal set.
Because the size of the configuration space grows exponentially with the number of doors, planning is computationally challenging.
Abstraction can render such planning problems tractable.
\label{fig:example}
}
\end{figure}
A standard approach to making such problems computationally tractable is to use abstraction to reason about the properties of groups of primitive plans simultaneously.
For example, we could choose a sequence of high-level operations using a task planner, ignoring the details of the underlying motion plan.
If we later determine that we cannot find a motion plan consistent with our high-level plan, we can use that information to modify our high-level plan.
For example, \cite{gravot2005asymov} describe an integrated approach that relies on a heuristic search for a high-level plan and uses motion planners as subroutines to deal with detailed geometry.
\cite{kaelbling2011hierarchical} use a hierarchy to guide high-level decision making, resolving low-level decisions arbitrarily and trusting in the reversibility of the system to ensure hierarchical completeness.
Although these and other approaches (e.g.,~\cite{garrett2015ffrob,cambon2009hybrid,srivastava2013using}) vary in how they deal with the interaction between geometric planning and combinatorial search, they share a common weakness: they can only make guarantees about the plans they generate relative to the abstraction they are provided.
Even optimizing approaches (\cite{wolfe2010combined}) are generally limited to guarantees of hierarchical optimality.
Angelic semantics (\cite{marthi2008angelic}) provide a way to describe an abstraction that preserves optimality, but it is not clear what criteria an angelic abstraction must satisfy in order to make guarantees about the quality of synthesized plans.
In this paper, we describe conditions under which an abstraction will preserve the ability to find the optimal motion plan while accelerating planning.
We derive abstractions for two continuous planning domains, and using these abstractions we can dramatically reduce the complexity of search relative to a direct motion planner.
We find near-optimal motion plans in planning problems involving $10^{13}$ states without using a separate task planner.
\section{Problem Formulation}
We are interested in planning problems involving some underlying continuous configuration space $\mathcal{X}$, such as the position of a robot or the configuration of its joints.
Our task is to find a path through free space that starts in a specified state $s_0$ and ends in a goal set $S_{\mathrm{goal}}$.
This goal set may be specified implicitly, as the set of all states satisfying some constraint.
A path is a continuous map $p:[0, 1]\to \mathcal{X}$.
We define a concatenation operator $\circ$ for paths.
\begin{equation}
(p_1\circ p_2)(t) =
\begin{cases}
p_1(2t) & \mathrm{if\,} t \le \frac{1}{2} \\
p_2(2t-1) & \mathrm{if\,} \frac{1}{2} < t \le 1
\end{cases}
\end{equation}
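In code, the concatenation operator above is a simple time reparameterization (an illustrative sketch; paths are modeled as plain functions from $[0,1]$ to the configuration space):

```python
def concat(p1, p2):
    """Concatenation (p1 o p2) of two paths [0, 1] -> X, as defined
    in the equation above: run p1 on [0, 1/2] and p2 on (1/2, 1]."""
    def p(t):
        return p1(2 * t) if t <= 0.5 else p2(2 * t - 1)
    return p
```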
Let $\mathcal{P}_{\mathcal{X}}(S, S')$ be the set of all paths starting in $S \subset \mathcal{X}$ and ending in $S' \subset \mathcal{X}$.
Let $c: \mathcal{X} \times T\mathcal{X} \to \mathbb{R}_{>0}$ be a cost function, where $T\mathcal{X}$ is the tangent space of $\mathcal{X}$.
We can define an associated cost functional $\mathcal{C}:\mathcal{P}_{\mathcal{X}}(\mathcal{X}, \mathcal{X}) \to \mathbb{R}_{\ge 0}$.
\begin{equation}
\mathcal{C}[p] = \int_0^1 c(p(t), \dot{p}(t))\,\mathrm{d}t
\end{equation}
Because $\mathcal{C}$ is additive, $\mathcal{C}[p_1 \circ p_2] = \mathcal{C}[p_1] + \mathcal{C}[p_2]$.
We define the \emph{optimal cost function} $c^*: 2^{\mathcal{X}} \times 2^{\mathcal{X}} \to \mathbb{R}_{\ge 0}$ as
\begin{equation}
c^*(S, S') = \inf \{\mathcal{C}(p): p \in \mathcal{P}_{\mathcal{X}}(S, S')\}.
\end{equation}
We define the $\epsilon$-approximate planning problem as the search for a path $\hat{p} \in \mathcal{P}_{\mathcal{X}}(\{s_0\}, S_g)$ with cost at most $(1+\epsilon)$ times the optimal cost, for any $\epsilon \in \mathbb{R}_{\ge 0} \cup \{\infty\}$.
\begin{equation}
\hat{p} \in \{ p \in \mathcal{P}_{\mathcal{X}}(\{s_0\}, S_g) : \mathcal{C}[p] \le (1 + \epsilon)\, c^*(\{s_0\}, S_g) \}
\end{equation}
The case where $\epsilon=\infty$, when we wish to find any feasible path to the goal set, is the problem of \emph{satisficing} planning.
The case where $\epsilon=0$ is optimal planning.
The set $\mathcal{P}_{\mathcal{X}}(\mathcal{X}, \mathcal{X})$ of all possible paths from all possible start and goal locations is continuous and topologically complex.
To simplify planning, we assume we have available a finite set $\mathcal{A}_0$ of \emph{primitive operators}, low-level actions that can be executed in the real world.
The problem of constructing such a set of operators in continuous motion planning domains is well studied; in this document, we will assume the set of operators is given by the edges in a probabilistic roadmap (PRM*).
That is, we randomly sample a finite set of configurations $\mathcal{V}_n \subset \mathcal{X}$, and for each such configuration $v$, we define an operator $p_v$.
The operator $p_v$ ensures that the robot will end at the state $v$ if executed from any state in the open ball of radius $r_n$ around $v$, where $r_n \propto (\log n / n)^{1/d}$ is a radius that increases slowly with the size of the discretization.
Any feasible plan can be well-approximated by a sequence of these randomly sampled operators as the number of sampled configurations tends to infinity.
For example, we can show that if $\mathcal{A}_{0,n}^*$ is the set of all paths through a PRM* with $n$ sampled configurations, then
\begin{multline}
\label{eq:asymptotically_optimal}
\lim_{n\to\infty} \inf \{\mathcal{C}[p]: p \in \mathcal{A}_{0,n}^* \cap \mathcal{P}_{\mathcal{X}}(\{s_0\}, S_g)\} = \\
\inf \{\mathcal{C}[p]: p \in \mathcal{P}_{\mathcal{X}}(\{s_0\}, S_g)\}.
\end{multline}
This was proven by \cite{karaman2011sampling} for the case where the system is subject to analytic differential constraints, and by \cite{vega-brown2016asymptotically} when the system has piecewise-analytic differential constraints (as in object manipulation problems).
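The PRM* construction referenced above can be sketched as follows (our illustration; the constant `gamma` and the unit-cube sampling are assumptions, not values from the text): sample configurations uniformly and connect pairs within the shrinking radius $r_n$.

```python
import math, random

def prm_star(n, d=2, gamma=2.0, free=lambda q: True):
    """Sketch of PRM* construction: sample n collision-free
    configurations in the unit cube and connect pairs closer than
    r_n ~ (log n / n)^(1/d). `free` tests collision-freeness."""
    r_n = gamma * (math.log(n) / n) ** (1.0 / d)
    nodes = []
    while len(nodes) < n:
        q = tuple(random.random() for _ in range(d))
        if free(q):
            nodes.append(q)
    edges = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
             if math.dist(u, v) <= r_n]
    return nodes, edges, r_n
```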
Because the set of primitive operators can grow quite large, especially in problems with high-dimensional configuration spaces, a direct search for primitive plans is computationally intractable.
Instead, we will use angelic semantics to encode bounds on the cost of large groups of plans.
We can use these bounds to plan efficiently while preserving optimality.
\section{Angelic Semantics}
An \emph{abstract operator} $\mathbf{a}$ represents a set $\mathbf{a} \subset \mathcal{P}_{\mathcal{X}}$ of primitive plans.
Because the space of plans is infinite, we define operators implicitly, using constraints on the underlying primitive plans.
For example, in a navigation problem, we might define an operator as any primitive plan that remains inside a given set of configuration space and ends in a different set of configuration space.
This is depicted graphically in figure~\ref{fig:abstraction:operator}: the operator $\mathbf{a}_{23}$ contains every path that is contained in region $2$ and ends in region $3$.
The concatenation of two operators $\mathbf{a}_i \circ \mathbf{a}_j$ is an abstract plan containing all possible concatenations of primitive plans in the operators.
\begin{equation}
\mathbf{a}_i \circ \mathbf{a}_j = \{
p_i \circ p_j : p_i \in \mathbf{a}_i, p_j \in \mathbf{a}_j, p_i(1) = p_j(0)
\}
\end{equation}
The condition $p_i(1) = p_j(0)$ is necessary to enforce that only feasible plans are contained in $\mathbf{a}_i \circ \mathbf{a}_j$.
In a problem with nontrivial dynamic constraints, the condition would need to be more complex.
In figure~\ref{fig:abstraction:plan}, we show samples from the plan $\mathbf{a}_{12} \circ \mathbf{a}_{23} \circ \mathbf{a}_{34} \circ \mathbf{a}_{4g}$, which contains paths that move from region 1 to 2 to 3 to 4 to the goal.
The concatenation operation allows us to express complicated sets of plans in a compact way.
\begin{figure*}
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{abstraction_operator.pdf}
\caption{Plans in operator $\mathbf{a}_{23}$}
\label{fig:abstraction:operator}
\end{subfigure}%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{abstraction_plan.pdf}
\caption{Plans in $\mathbf{a}_{12} \circ \mathbf{a}_{23} \circ \mathbf{a}_{34} \circ \mathbf{a}_{4g}$}
\label{fig:abstraction:plan}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{abstraction_valuation.pdf}
\caption{Bounds for $V[\mathbf{a}_{34}]$}
\label{fig:abstraction:valuation}
\end{subfigure}%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{abstraction_propagation.pdf}
\caption{Bounds for $V[\mathbf{a}_{12} \circ \mathbf{a}_{23} \circ \mathbf{a}_{34} \circ \mathbf{a}_{4g}]$}
\label{fig:abstraction:propagation}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{abstraction_refinement_a.pdf}
\caption{Plans in $\mathbf{a}_{12} \circ \textsc{Act}$}
\label{fig:abstraction:refinement_a}
\end{subfigure}%
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{abstraction_refinement_b.pdf}
\caption{Plans in $\mathbf{a}_{12} \circ \mathbf{a}_{23} \circ \textsc{Act}$}
\label{fig:abstraction:refinement_b}
\end{subfigure}
\caption{%
A schematic description of angelic semantics.
Abstract operators (a) are sets of primitive plans, and can be defined implicitly in terms of constraints.
For example, the operator $\mathbf{a}_{23}$ contains all plans that end in region $3$ and do not leave region $2$.
We can sequence abstract operators into abstract plans (b).
The red lines link the centroids of successive regions, while the black lines are randomly sampled primitive plans representative of the abstract plan that move from regions $1$ to $2$ to $3$ to $4$ to the goal.
We can use domain-specific information to compute bounds on the cost of any plan in an operator starting from a specific set of states (c).
Here, lower bounds are drawn using dashed lines, while upper bounds are drawn in dotted lines.
Note the dependence on the initial state: the cost of a plan starting in $1 \cap 3$ is strictly higher than the cost of a plan starting in $2 \cap 3$.
We can sequence these bounds (d) to compute bounds on the cost of an abstract plan.
Finally, a refinement of an abstract plan $\mathbf{p}$ (e) is a less abstract plan (f) $\mathbf{p}' \subset \mathbf{p}$.
Primitive plans in $\mathbf{p}'$ are shown with heavy lines, while plans in $\mathbf{p} \setminus \mathbf{p}'$ are shown with finer lines.
\label{fig:abstraction}}
\end{figure*}
Because our operators are defined implicitly, it can be difficult to find the best plan in the abstract plan, or even to decide if there exists a plan consistent with the constraints of an abstract plan.
Note that it is easy to write down abstract plans that are empty; in the toy navigation example in figure~\ref{fig:abstraction}, the plan $\mathbf{a}_{12} \circ \mathbf{a}_{34}$ contains no primitive plans, as the intersection of regions $1$, $2$, and $3$ is empty.
For planning with abstract operators to be feasible, we need a way to reason about the primitive plans contained by an abstract plan \emph{without} first enumerating those primitive plans.
Specifically, we will develop a way to compare abstract plans, and we will show this comparison is sufficient for planning.
We do this using the \emph{valuation} of an operator or plan.
A \emph{valuation} $V[\mathbf{a}]$ for an operator or plan $\mathbf{a}$ is the map $V[\mathbf{a}]: \mathcal{X} \times \mathcal{X} \to \mathbb{R}_{\ge 0}$ that takes a pair of states and gives the cost of the cheapest path in $\mathbf{a}$ between them.
\begin{equation}
V[\mathbf{a}](s_1, s_2) = \inf \{\mathcal{C}(\sigma):
\sigma\in \mathbf{a}, \sigma(0)=s_1, \sigma(1)=s_2 \} \label{eq:valuation}
\end{equation}
Note that if there are no paths in $\mathbf{a}$ linking $s_1$ and $s_2$, then $V[\mathbf{a}](s_1, s_2) = \inf \varnothing = \infty$.
Valuations allow us to compare abstract plans without reference to the primitive plans they contain.
Given two abstract plans $\mathbf{p}$ and $\mathbf{p}'$, if we can prove that for any pair of states $x, x'$, either $V[\mathbf{p}](x, x') < V[\mathbf{p}'](x, x')$ or $V[\mathbf{p}'](x, x') = \infty$, then either there is a solution to our planning problem in $\mathbf{p}$, or there is no solution in $\mathbf{p}$ or $\mathbf{p}'$.
Either way, we do not need to consider any plan in $\mathbf{p}'$; we can prune $\mathbf{p}'$ from our search space.
Under such a condition, we say that $\mathbf{p}$ \emph{dominates} $\mathbf{p}'$ and we write $V[\mathbf{p}] \prec V[\mathbf{p}']$.
Similarly, if either $V[\mathbf{p}](x, x') \le V[\mathbf{p}'](x, x')$ or $V[\mathbf{p}'](x, x') = \infty$, then we say that $\mathbf{p}$ \emph{weakly dominates} $\mathbf{p}'$ and we write $V[\mathbf{p}] \preceq V[\mathbf{p}']$.
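On a finite set of states, the dominance test can be written directly (our sketch; the paper's valuations range over a continuous space, and all names here are hypothetical):

```python
import math

def dominates(Vp, Vq, states, strict=True):
    """Pruning test from the text, on a finite state set: plan p
    dominates q if, for every state pair, either V[p] < V[q]
    (<= for weak dominance) or V[q] is infinite."""
    better = (lambda a, b: a < b) if strict else (lambda a, b: a <= b)
    return all(better(Vp[s, t], Vq[s, t]) or math.isinf(Vq[s, t])
               for s in states for t in states)
```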
Unfortunately, determining the valuation of an operator is itself an optimization problem, and one that is not necessarily any easier than the planning problem we are trying to solve.
In many cases, however, we can derive a computational advantage from reasoning about \emph{bounds} on the valuation of an abstract operator.
By representing these bounds \emph{symbolically}, we will be able to reason without reference to the underlying states or plans.
We first define bounds on the valuation of an operator over a set of states.
\begin{align}
V_L[\mathbf{a}](\mathbf{s}, \mathbf{s}') = \inf \{ \inf \{ V[\mathbf{a}](s, s'): s' \in \mathbf{s}' \}: s \in \mathbf{s} \} \\
V_U[\mathbf{a}](\mathbf{s}, \mathbf{s}') = \sup \{ \inf \{ V[\mathbf{a}](s, s'): s' \in \mathbf{s}' \}: s \in \mathbf{s} \}
\end{align}
A symbolic valuation bound $\hat{V}[\mathbf{a}]$ is a set of tuples $\{(\mathbf{s}, \mathbf{s}', l, u)\}$, where $\mathbf{s}, \mathbf{s}'$ are symbolic states and $l < u \in \mathbb{R}_{\ge0} \cup \{\infty\}$.
A bound $\hat{V}[\mathbf{a}]$ is admissible if
\begin{align}
\exists (\mathbf{s}, \mathbf{s}', l, u) &\in \hat{V}[\mathbf{a}]:&
l &\le V_L[\mathbf{a}](\mathbf{s}, \mathbf{s}') \\
\forall (\mathbf{s}, \mathbf{s}', l, u) &\in \hat{V}[\mathbf{a}]:&
u &\ge V_U[\mathbf{a}](\mathbf{s}, \mathbf{s}').
\end{align}
In words, a bound $(\mathbf{s}, \mathbf{s}', l, u)$ is admissible if for any state $x$ in $\mathbf{s}$ there exists a plan $p$ ending in some state $x'$ in $\mathbf{s}'$ with cost $c = \mathcal{C}[p]$ bounded above by $u$ and below by $l$.
We can also interpret a symbolic valuation bound $\hat{V}$ as a bound over sets of states.
\begin{align}
\hat{V}_L[\mathbf{a}](\mathbf{s}, \mathbf{s}') =&
\inf \{l: (\mathbf{s}_0, \mathbf{s}_1, l, u) \in \hat{V}[\mathbf{a}], \mathbf{s} \cap \mathbf{s}_0 \ne \varnothing, \nonumber\\
&\qquad\qquad \mathbf{s}' \cap \mathbf{s}_1 \ne \varnothing\} \\
\hat{V}_U[\mathbf{a}](\mathbf{s}, \mathbf{s}') =&
\inf \{u: (\mathbf{s}_0, \mathbf{s}_1, l, u) \in \hat{V}[\mathbf{a}], \mathbf{s} \subseteq \mathbf{s}_0, \mathbf{s}' \subseteq \mathbf{s}_1\}.
\end{align}
Note that if $\hat{V}[\mathbf{a}]$ is admissible, then $\hat{V}_L[\mathbf{a}](\mathbf{s}, \mathbf{s}') \le V_L[\mathbf{a}](\mathbf{s}, \mathbf{s}')$ and $V_U[\mathbf{a}](\mathbf{s}, \mathbf{s}') \le \hat{V}_U[\mathbf{a}](\mathbf{s}, \mathbf{s}')$ for all abstract state pairs $\mathbf{s}, \mathbf{s}'$ (see appendix~\ref{sec:proofs}, proposition~\ref{thm:bounds}).
This observation has important consequences in a few interesting limiting cases.
If a bound $\hat{V}[\mathbf{a}]$ contains at least one element $(\mathbf{s}, \mathbf{s}', l, u)$ with finite $u$, then the operator $\mathbf{a}$ must contain some plan.
If $\hat{V}[\mathbf{a}]$ contains no element $(\mathbf{s}, \mathbf{s}', l, u)$ with finite $l$, then $\mathbf{a}$ is empty.
Similarly, $\hat{V}_U[\mathbf{a}](\mathbf{s}, \mathbf{s}') < \infty$ implies $\mathbf{a}$ contains feasible plans connecting each state in $\mathbf{s}$ to some state in $\mathbf{s}'$, while $\hat{V}_L[\mathbf{a}](\mathbf{s}, \mathbf{s}') = \infty$ implies $\mathbf{a}$ contains no plan connecting a state in $\mathbf{s}$ to a state in $\mathbf{s}'$.
In addition, if $\hat{V}[\mathbf{p}]$ and $\hat{V}[\mathbf{p}']$ are admissible, then $\hat{V}[\mathbf{p} \cup \mathbf{p}'] = \hat{V}[\mathbf{p}] \cup \hat{V}[\mathbf{p}']$ is admissible (see appendix~\ref{sec:proofs}, proposition~\ref{thm:join} for a proof).
It is also important to recognize the state-dependence of valuation bounds.
Consider the operator $\mathbf{a}_{34}$ in figure~\ref{fig:abstraction:valuation}; the operator is defined as containing any plan contained in region $3$ that ends in region $4$.
Because regions $3$ and $4$ intersect, the global lower bound on the cost of a plan in this operator is zero.
However, we can compute nontrivial bounds for specific states, or for specific sets of states.
For example, paths achieving lower and upper bounds are drawn from the abstract states $R_2 \cap R_3$ and $R_1 \cap R_3$ to the termination set of the operator.
As we will see in sections~\ref{sec:abstraction:navigation} and \ref{sec:abstraction:door_puzzle}, for many domains we will not need to write down a valuation explicitly.
Instead, we can use domain information to make metric computations and generate the necessary elements of a valuation procedurally.
Moreover, by working with symbolic bounds we can efficiently compute bounds on the cost of plans consisting of sequences of abstract operators, without reference to a dense discretization of the underlying space of plans.
For example, if we have bounds on a plan $\hat{V}[\mathbf{a}]$ and an operator $\hat{V}[\mathbf{a}']$, we can compute a bound $\hat{V}[\mathbf{a} \circ \mathbf{a}']$.
\begin{align}
\hat{V}[\mathbf{a} \circ \mathbf{a}'] =
&\{(\mathbf{s}, \mathbf{s}''', l+l', u+u'):
(\mathbf{s}, \mathbf{s}', l, u) \in \hat{V}[\mathbf{a}],
\nonumber \\&\quad
(\mathbf{s}'', \mathbf{s}''', l', u') \in \hat{V}[\mathbf{a}'],
\mathbf{s}' \subseteq \mathbf{s}''
\} \,\cup \nonumber \\
& \{(\mathbf{s}, \mathbf{s}''', l+l', u):
(\mathbf{s}, \mathbf{s}', l, u') \in \hat{V}[\mathbf{a}],
\nonumber \\&\quad
(\mathbf{s}'', \mathbf{s}''', l', \infty) \in \hat{V}[\mathbf{a}'],
\mathbf{s}' \cap \mathbf{s}'' \ne \varnothing
\}
\end{align}
If $\hat{V}[\mathbf{a}]$ and $\hat{V}[\mathbf{a}']$ are admissible, then $\hat{V}[\mathbf{a} \circ \mathbf{a}']$ is admissible as well (see appendix~\ref{sec:proofs}, proposition~\ref{thm:propagation} for a proof).
We call this process \emph{propagation}.
This process is depicted graphically in figure~\ref{fig:abstraction:propagation}.
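The propagation rule can be sketched concretely (our reading of the propagation equation, under the assumption that the overlap case yields no finite upper bound; abstract states are modeled as frozensets of discrete states):

```python
import math

def propagate(Va, Vb):
    """Sequence two symbolic valuation bounds. Each bound is a set of
    tuples (s, s', l, u) over abstract states represented as frozensets."""
    out = set()
    for s0, s1, l1, u1 in Va:
        for s2, s3, l2, u2 in Vb:
            if s1 <= s2:
                # every end state of the first bound is a valid start
                # state of the second: lower and upper bounds both add
                out.add((s0, s3, l1 + l2, u1 + u2))
            elif s1 & s2:
                # only some end states carry over: lower bounds add,
                # but no finite upper bound survives
                out.add((s0, s3, l1 + l2, math.inf))
    return out
```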
\section{Admissible Abstractions}
\label{sec:refinement}
We will use angelic semantics to specify abstractions that enable efficient planning.
Suppose that $\mathbf{p}, \mathbf{p}'$ are abstract plans, with $\mathbf{p} \subset \mathbf{p}'$.
Then $\mathbf{p}' \preceq \mathbf{p}$, since any plan in $\mathbf{p}$ is also in $\mathbf{p}'$---but because $\mathbf{p}$ is a smaller set than $\mathbf{p}'$, our bounds may be tighter.
If $\hat{V}_U[\mathbf{p}'] \prec \hat{V}_L[\mathbf{p}]$, then we can also conclude that $\mathbf{p}' \setminus \mathbf{p} \prec \mathbf{p}'$.
We can incrementally construct an increasingly accurate estimate of $V[\mathbf{p}]$ by iteratively considering smaller and smaller subsets of an operator $\mathbf{p}$ and pruning those subsets that cannot contain an optimal plan.
This is depicted graphically in figures~\ref{fig:abstraction:refinement_a} and \ref{fig:abstraction:refinement_b}.
We can make precise the construction of these increasingly fine subsets by introducing a \emph{refinement relation} $\mathcal{R} \subset \mathcal{A}^* \times \mathcal{A}^*$, where $*$ denotes the Kleene closure.
The elements of $\mathcal{R}$ are ordered pairs $(\mathbf{p}, \mathbf{p}')$ such that $\mathbf{p}' \subset \mathbf{p}$.
We can construct a relation $\mathcal{R}$ by defining a procedure to generate plans $\mathbf{p}'$ given a plan $\mathbf{p}$.
First, define an operation $\textsc{Head}:\mathcal{A}^*\to \mathcal{A}$, which takes a plan $\mathbf{p}$ and selects a single operator $\mathbf{a}$ from it to replace with a more constrained refinement.
We then define operations $\textsc{Base}:\mathcal{A}^* \to \mathcal{A}^*$ and $\textsc{Ext}: \mathcal{A}^* \to \mathcal{A}^*$ that return the part of $\mathbf{p}$ before and after $\textsc{Head}(\mathbf{p})$, respectively.
Together, the three operators split a plan $\mathbf{p}$ into three segments so that $\mathbf{p} = \textsc{Base}(\mathbf{p}) \circ \textsc{Head}(\mathbf{p}) \circ \textsc{Ext}(\mathbf{p})$.
Finally, we define a domain-specific relation $\bar{\mathcal{R}} \subset \mathcal{A} \times \mathcal{A}^*$; this can be thought of as a function mapping an abstract operator to a set of abstract plans.
Then $(\mathbf{p}, \mathbf{p}') \in \mathcal{R}$ if and only if $\mathbf{p}' = \textsc{Base}(\mathbf{p}) \circ \mathbf{p}'' \circ \textsc{Ext}(\mathbf{p})$ and $(\textsc{Head}(\mathbf{p}), \mathbf{p}'') \in \bar{\mathcal{R}}$.
If $(\mathbf{a}, \mathbf{p}) \in \bar{\mathcal{R}}$, we call $\mathbf{p}$ a refinement of $\mathbf{a}$; similarly, if $(\mathbf{p}, \mathbf{p}') \in \mathcal{R}$, we call $\mathbf{p}'$ a refinement of $\mathbf{p}$.
We can combine these elements into an \emph{abstraction} over a problem domain $(\mathcal{X}, c, s_0, S_g)$.
Formally, an abstraction is a tuple $(\mathcal{S}, \mathcal{A}, \bar{\mathcal{R}}, \hat{V})$, where
\begin{itemize}
\item $\mathcal{S}$ is a collection of propositional symbols,
\item $\mathcal{A}$ is a collection of operators, including a distinguished top-level operator $\mathrm{Act}$,
\item $\bar{\mathcal{R}} \subset \mathcal{A} \times \mathcal{A}^*$ is a refinement relation, and
\item $\hat{V}$ is a symbolic valuation bound.
\end{itemize}
The valuation bound encodes both the cost and the dynamics of our problem domain.
The refinement relation structures the space of abstract plans.
Angelic planning algorithms accept an abstraction as an argument in much the same way that the A* search algorithm \cite{hart1968formal} accepts a heuristic.
This raises an important question: under what circumstances will an abstraction $(\mathcal{S}, \mathcal{A}, \bar{\mathcal{R}}, \hat{V})$ allow us to find the optimal primitive plan for a domain $(\mathcal{X}, c, s_0, S_g)$, and to prove we have done so?
We will generalize the idea of an admissible heuristic to define an \emph{admissible} abstraction.
As we will show in section~\ref{sec:algorithms}, two properties suffice.
\begin{definition}
\label{thm:admissibility}
An abstraction $(\mathcal{S}, \mathcal{A}, \bar{\mathcal{R}}, \hat{V})$, defined over a planning domain $(\mathcal{X}, c)$, is admissible if
\begin{enumerate}
\item For each abstract operator $\mathbf{a} \in \mathcal{A}$, for each primitive plan $p$ in $\mathbf{a}$, there is a refinement $\mathbf{p}$ of $\mathbf{a}$ such that $p \in \mathbf{p}$, i.e.,
\begin{equation}
\forall \mathbf{a} \in \mathcal{A}, \forall p \in \mathbf{a}, \exists (\mathbf{a},\mathbf{p}) \in \bar{\mathcal{R}}: p \in \mathbf{p}.
\end{equation}
\item $\hat{V}$ is admissible, i.e., $\hat{V}_L[\mathbf{p}] \preceq V[\mathbf{p}] \preceq \hat{V}_U[\mathbf{p}]$ for each abstract operator $\mathbf{p} \in \mathcal{A}$.
\end{enumerate}
\end{definition}
The first property ensures that we do not ``lose track'' of any primitive plans while refining a plan.
Plans are only removed from consideration when they are deliberately pruned.
The second property ensures that if abstract plans $\mathbf{p}, \mathbf{p}' \in P$, where $P$ is a collection of abstract plans, and $\hat{V}_U[\mathbf{p}] \prec \hat{V}_L[\mathbf{p}']$, then no optimal plan is in $\mathbf{p}'$ and thus the best plan in $P$ is also in the set $P' = P \setminus \{\mathbf{p}'\}$.
Taken together, these properties ensure that if $P'$ is the result of refining and pruning a collection of plans $P$, then for every plan in $P$ there is a plan that is no worse in $P'$.
If we start with the set $P_0 = \{\mathrm{Act}\}$, no sequence of refinement and pruning operations will discard an optimal solution.
This ensures completeness.
To construct planning algorithms, we simply need to choose an order in which to refine and prune, and keep track of bounds to know when we can terminate the search.
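One such ordering is a best-first refine-and-prune loop, sketched below (an illustration under our own assumptions, not the authors' algorithm; plans are opaque objects with caller-supplied bound functions):

```python
import heapq, itertools

def angelic_search(top, refine, lower, upper):
    """Best-first refine-and-prune: keep a queue of abstract plans
    ordered by lower bound, refine the most promising, and prune
    refinements whose lower bound exceeds the best sibling upper bound.
    `refine` returns the refinements of a plan, or [] if primitive."""
    tie = itertools.count()          # tiebreaker for the heap
    queue = [(lower(top), next(tie), top)]
    while queue:
        l, _, plan = heapq.heappop(queue)
        children = refine(plan)
        if not children:
            return plan, l           # primitive: bound equals true cost
        best_u = min(upper(c) for c in children)
        for c in children:
            if lower(c) <= best_u:   # prune refinements that cannot win
                heapq.heappush(queue, (lower(c), next(tie), c))
    return None, float("inf")
```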
\subsection{The Flat Abstraction for Graph Search}
We illustrate the construction of an admissible abstraction with graph search.
Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ be a graph, where each edge $e \in \mathcal{E}$ has an associated cost $c_e$.
Suppose our objective is to find the shortest path to a goal node $v_g \in \mathcal{V}$ and we have an admissible heuristic $h: \mathcal{V} \to \mathbb{R}_{\ge0}$.
Then the abstraction $\mathcal{A}_\mathcal{G} = (\mathcal{V}, \mathcal{E} \cup \{\textsc{Act}\}, \bar{\mathcal{R}}, \hat{V})$ is admissible, where
\begin{itemize}
\item $\hat{V}[\mathbf{a}_e] = \{(\{e_0\}, \{e_1\}, c_e, c_e)\}$,
\item $\hat{V}[\textsc{Act}] = \{ (\{v\}, \{v_g\}, h(v), \infty): v \in \mathcal{V}\}$
\item $\bar{\mathcal{R}}$ is the union of $\{(\textsc{Act}, e \circ \textsc{Act}) : e \in \mathcal{E} \}$ and $\{(\textsc{Act}, e) : e \in \mathcal{E}, e_1 = v_g \}.$
\end{itemize}
Admissibility of $\hat{V}$ follows immediately from the admissibility of $h$, and the admissibility of $\bar{\mathcal{R}}$ is easily proven.
By definition, any primitive plan $p$ is contained in $\textsc{Act}$.
Every primitive plan in the abstract plan $p \circ \textsc{Act}$ is of the form $p \circ p'$.
Suppose the first primitive operator in $p'$ is $e$.
For each such $p$ and $p'$, $(\textsc{Act}, e \circ \textsc{Act}) \in \bar{\mathcal{R}}$.
Therefore $\bar{\mathcal{R}}$ is admissible, and so $\mathcal{A}_\mathcal{G}$ is admissible.
This demonstrates that the machinery of angelic abstractions is at least as general as heuristics in graph search: every graph search problem can be reformulated as an abstract search, using the edges to define a refinement operation and an admissible heuristic to define lower bounds.
Often, however, we can use domain-specific information to devise even more informative abstractions.
In the remainder of this section, we will provide concrete examples of admissible abstractions for a pair of simple \emph{continuous} planning problems.
\subsection{An Abstraction for Navigation}
\label{sec:abstraction:navigation}
A common problem in robotics is navigating to some specified goal location in a structured environment.
Simple heuristics like the Euclidean distance to the goal work well in environments that are cluttered but largely unstructured, where the distance is a good proxy for the true cost.
In highly structured environments, however, the Euclidean distance can be quite a bad proxy for cost.
Consider the example in figure~\ref{fig:navigation}, in which the robot starts just on the other side of a wall from the goal.
Using A* with a Euclidean heuristic requires searching almost the entire space.
We can plan more efficiently by taking advantage of structure in the environment.
Suppose we have a decomposition of the environment into a finite set of overlapping regions, as in figure~\ref{fig:abstraction}, and we know which regions overlap.
These regions can be derived from a semantic understanding of the environment, such as rooms and doorways, or they can be automatically extracted using (for example) the constrained Delaunay triangulation.
Then any plan can be described by the sequence of regions it moves through.
We can use this region decomposition to define an abstraction.
Let $\mathcal{S} = \{R_i\}$, where $\cup_i R_i = \mathcal{X}$, and let $\mathcal{A} = \mathcal{A}_0 \cup \{\mathbf{a}_{ij}: R_i \cap R_j \ne \varnothing \} \cup \{\textsc{Act}\}$, where $p \in \mathbf{a}_{ij}$ if $p(s) \in R_i$ for all $s \in [0, 1)$ and $p(1) \in R_j$.
The refinement can be defined as follows.
\begin{equation}
\begin{aligned}
\bar{\mathcal{R}} = \bigcup_{ij}
& \{(\textsc{Act}, \mathbf{a}_{ij} \circ \textsc{Act}), (\textsc{Act}, \mathbf{a}_{ij}) \} \,\cup \\
& \{(\mathbf{a}_{ij}, a \circ \mathbf{a}_{ij}): a(t) \in R_i \forall t\} \,\cup \\
& \{(\mathbf{a}_{ij}, a ): a(t) \in R_i \forall t, a(1) \in \mathrm{cl}(R_j) \}
\end{aligned}
\end{equation}
We can use spatial indices like k-D trees and R-trees to quickly find the operators that are valid from a particular state.
It is straightforward to show this refinement relation is admissible (see appendix~\ref{sec:proofs}, proposition~\ref{thm:regions_admissible}).
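To make the construction concrete, here is a minimal Python sketch that enumerates the abstract operators $\mathbf{a}_{ij}$ from a region decomposition. It assumes, purely for illustration, that regions are axis-aligned rectangles `(xmin, ymin, xmax, ymax)`; the text allows arbitrary regions, e.g.\ derived from a constrained Delaunay triangulation.

```python
from itertools import permutations

# Illustrative assumption: a region is an axis-aligned rectangle
# (xmin, ymin, xmax, ymax). Real systems would use arbitrary polygons.
def overlaps(r, s):
    """True if two rectangles have nonempty intersection."""
    return r[0] < s[2] and s[0] < r[2] and r[1] < s[3] and s[1] < r[3]

def abstract_operators(regions):
    """Enumerate the abstract operators a_ij for every overlapping pair
    of distinct regions: move through R_i, ending in R_j."""
    ops = []
    for i, j in permutations(range(len(regions)), 2):
        if overlaps(regions[i], regions[j]):
            ops.append(('a', i, j))
    return ops
```

Two rooms whose rectangles overlap at a doorway yield operators in both directions; an isolated region yields none.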
If the cost function is path length, then we can compute bounds using geometric operations.
Executing the action $\mathbf{a}_{ij}$ from a state in $R_k \cap R_i$ would incur a cost at least as great as the set distance $\inf \{\Vert s - s' \Vert: s \in R_i \cap R_k, s' \in R_i \cap R_j\}$.
If the intersections between sets are small and well-separated, this lower bound will be an accurate estimate.
This has the effect of heuristically guiding the search towards the next region, allowing us to perform a search in the (small) space of abstract plans rather than the (large) space of primitive plans.
The Euclidean heuristic can deal with things like clutter and unstructured obstacles, while the abstraction can take advantage of structure in the environment.
Note that we have made no reference to the shape of the regions, nor even to their connectedness.
If regions can be disconnected, for instance by an obstacle, abstract operators can have no upper bound, which can make the search inefficient.
On the other hand, if we require the regions to be convex, then executing the action $\mathbf{a}_{ij}$ from a state in $R_k \cap R_i$ would incur a cost no greater than the Hausdorff distance $d_H(R_i \cap R_k, R_i \cap R_j)$, where
\begin{equation}
d_H(X, Y) = \max(\sup_{x \in X} \inf_{y \in Y}\Vert x - y \Vert, \sup_{y \in Y} \inf_{x \in X}\Vert x - y \Vert).
\end{equation}
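Both bounds reduce to elementary geometric computations. The sketch below evaluates them on finite point samples of the intersections $R_i \cap R_k$ and $R_i \cap R_j$ (an assumption made for brevity; an exact implementation would operate on the geometric primitives themselves).

```python
import math

def set_distance(X, Y):
    """Lower bound: infimum distance between two sampled point sets."""
    return min(math.dist(x, y) for x in X for y in Y)

def hausdorff(X, Y):
    """Upper bound (valid for convex regions): symmetric Hausdorff
    distance between two sampled point sets."""
    d_xy = max(min(math.dist(x, y) for y in Y) for x in X)
    d_yx = max(min(math.dist(x, y) for x in X) for y in Y)
    return max(d_xy, d_yx)
```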
\begin{figure}
\centering
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=\textwidth]{convex_convex.pdf}
\caption{}
\label{fig:convex:convex}
\end{subfigure}%
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=\textwidth]{convex_eps_convex.pdf}
\caption{}
\label{fig:convex:epsilon_convex}
\end{subfigure}
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=\textwidth]{convex_nonconvex.pdf}
\caption{}
\label{fig:convex:nonconvex}
\end{subfigure}%
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=\textwidth]{convex_nonconvex_connected.pdf}
\caption{}
\label{fig:convex:nonconvex_connected}
\end{subfigure}
\caption{%
Useful regions for defining abstract operators are nearly convex.
In all four examples here, the lower bound is given by the Euclidean distance in work space.
In a convex region (a), the gap between the lower bound on the cost of a plan and the true optimal cost is zero.
In an $\epsilon$-convex region (b), the gap between the lower bound $l$ on the cost of a plan and the true optimal cost $c$ is small: $l < c < (1+\epsilon) l$.
Some regions are not $\epsilon$-convex for any finite $\epsilon$; for example, the region might not be connected (c).
This can happen even if the region is connected in the work space (d) if it is not connected in configuration space.
Here, the object cannot fit through the narrow gap, and so the region is not $\epsilon$-convex.
In the presence of dynamic constraints, regions can fail to be path-connected even if they are connected in the workspace.
\label{fig:convex}
}
\end{figure}
Convexity is quite a strong requirement.
In a cluttered environment, a convex representation may need to contain many regions.
We can relax the requirement of convexity, and generalize to costs besides path length, by defining $\epsilon$-convexity.
A region $R$ is $\epsilon$-convex if, for all $x, x' \in R$,
\begin{equation}
\inf_{p \in \mathcal{P}_R(x, x')} \mathcal{C}[p] \le (1+\epsilon)\Vert x - x'\Vert.
\end{equation}
This is shown graphically in figure~\ref{fig:convex}.
Intuitively, a region is $\epsilon$-convex if the shortest path between any two points is only slightly longer than the distance between the points.
For example, if $\mathcal{X} \subset \mathbb{R}^n$, a convex region $R$ cluttered with convex objects of diameter less than $d$ is $\epsilon$-convex, with $\epsilon = \pi \sqrt{\frac{n}{2(n+1)}}$; this is an elementary consequence of Jung's theorem \cite{jung1899kleinste}.
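One way to check $\epsilon$-convexity in practice is by sampling: compare shortest-path cost against straight-line distance for sampled point pairs. The sketch below (a hypothetical helper, not from the text) runs Dijkstra on an 8-connected grid discretization of the region, with diagonal moves costing $\sqrt{2}$.

```python
import heapq
import math

def epsilon_estimate(free, pairs):
    """Estimate epsilon for a region given as a set of free grid cells.

    For each sampled pair, compare the shortest grid path (Dijkstra on
    the 8-connected grid) with the straight-line distance. This is a
    sampling-based sketch, not an exact epsilon-convexity test.
    """
    def dijkstra(src, dst):
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, (x, y) = heapq.heappop(pq)
            if (x, y) == dst:
                return d
            if d > dist[(x, y)]:
                continue  # stale queue entry
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if (dx or dy) and n in free:
                        nd = d + math.hypot(dx, dy)
                        if nd < dist.get(n, float('inf')):
                            dist[n] = nd
                            heapq.heappush(pq, (nd, n))
        return float('inf')

    eps = 0.0
    for a, b in pairs:
        eps = max(eps, dijkstra(a, b) / math.dist(a, b) - 1.0)
    return eps
```

A straight corridor gives an estimate near zero; an L-shaped region forces a detour and yields a positive estimate.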
\subsection{An Abstraction for the Door Puzzle}
\label{sec:abstraction:door_puzzle}
\begin{figure}
\centering
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{door_puzzle_many.pdf}
\caption{Problem}
\label{fig:door_puzzle:tsp}
\end{subfigure}
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{door_puzzle_tsp.pdf}
\caption{Lower bound}
\label{fig:door_puzzle:mst}
\end{subfigure}
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{door_puzzle_solution.pdf}
\caption{Solution}
\label{fig:door_puzzle:plans}
\end{subfigure}
\caption{%
In the problem shown in (a), it is easy to conclude that all $N=6$ doors must be opened before the robot can reach the goal.
However, there are $N!=720$ possible orders in which we might press the switches.
We can bound the cost of any sequence by solving a travelling salesperson problem (b, dotted lines), where the edge costs are the minimal distance the robot must travel to move between switches.
Although this is an NP-hard problem, we can compute a lower bound on the cost of a solution in polynomial time by computing a minimum spanning tree (b, solid red line).
This allows the planner to quickly find a near-optimal solution (c).
\label{fig:door_puzzle}
}
\end{figure}
The door puzzle introduced in the introduction combines the motion-planning aspects of navigation with a high-level task planning problem: the choice of which doors to open and in which order.
Unlike in the navigation problem, the configuration space for the door problem involves discrete components: $\mathcal{X} \subset \mathbb{R}^2 \times \{0, 1\}^N$, where $N$ is the number of doors.
This creates an element of combinatorial search that is not present in the navigation experiment.
We use the same region-based abstraction to guide the search for motion plans, and construct a relaxed representation of the effects of toggling switches in PDDL by omitting geometric constraints like collision.
Using this representation, we can quickly compute a partial ordering on the sequence of switches that need to be pressed in order to reach the goal.
For example, in figure~\ref{fig:door_puzzle}, the path to the goal is blocked by six doors.
Before we can move towards the goal, we must move to and press each of the six switches.
This leaves us with the task of computing a lower bound on the cost to reach and toggle each switch.
We can find such a bound in two steps.
First, we construct a directed graph whose vertices are the possible effects of executing each operator, and whose edges have weights that lower bound the cost of executing each operator.
This reduces the problem of finding a lower bound to solving a travelling salesperson problem (TSP).
While the TSP is NP-hard, we can lower bound the cost of its optimal solution by computing a minimum spanning tree of the directed graph---a computation that can be done in polynomial time with standard methods.
This graph, and its minimum spanning tree, are drawn in figure~\ref{fig:door_puzzle:mst}.
Although this bound neglects possible interactions between the operators, it is admissible; in fact, it is an admissible special case of the more general (and inadmissible) $h_{\mathrm{add}}$ heuristic \cite{haslum2000admissible}.
We can use this bound to guide the search for a more detailed motion plan (figure~\ref{fig:door_puzzle:plans}).
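The minimum-spanning-tree bound can be computed with any textbook algorithm. The sketch below uses Prim's algorithm on a symmetric cost matrix; this simplifies the directed-graph case in the text, where a minimum spanning arborescence would be the exact analogue.

```python
def mst_weight(cost):
    """Prim's algorithm: weight of a minimum spanning tree of a complete
    graph given by a symmetric cost matrix. The MST weight is a classic
    polynomial-time lower bound on the cost of an optimal TSP path."""
    n = len(cost)
    in_tree = [False] * n
    best = [float('inf')] * n  # cheapest edge connecting each vertex
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v] and cost[u][v] < best[v]:
                best[v] = cost[u][v]
    return total
```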
\section{Algorithms}
\label{sec:algorithms}
We now describe several algorithms that leverage an admissible angelic abstraction to search efficiently, even in high-dimensional continuous spaces.
Our algorithms are all derived from the angelic hierarchical A* algorithm developed by \cite{marthi2008angelic}.
We begin by reviewing this algorithm (section~\ref{sec:algorithms:angelic}), then discuss a subtle variation that dramatically improves efficiency in some common degenerate or nearly degenerate cases (section~\ref{sec:algorithms:acyclic}).
Finally, we discuss an extension that solves the approximately optimal planning problem, embracing the key insights of weighted A* (section~\ref{sec:approximate}).
\subsection{Angelic A*}
\label{sec:algorithms:angelic}
Angelic A* (algorithm~\ref{alg:angelic}) is a reformulation of the angelic hierarchical A* algorithm \cite{marthi2008angelic}.
This algorithm solves the optimal planning problem using a best-first forward search over abstract plans.
The primary data structure maintained by our algorithm is a tree.
Each node in the tree is a tuple $(\mathbf{a}, \mathbf{p}_{-}, \textsc{Base}(\mathbf{p}), \hat{V}[\mathbf{p}])$ representing a plan $\mathbf{p} = \mathbf{p}_{-} \circ \mathbf{a}$, where
\begin{itemize}
\item $\mathbf{a}$ is an abstract operator,
\item $\mathbf{p}_{-}$ is a pointer to the predecessor of the node,
\item $\textsc{Base}(\mathbf{p})$ is a pointer to the base plan, which is used in choosing refinements, and
\item $\hat{V}[\mathbf{p}]$ is an admissible bound on the valuation of $\mathbf{p}$.
\end{itemize}
The root of the tree is the node $(\varnothing, \varnothing, \varnothing, \{(\{x_s\}, \{x_s\}, 0, 0)\})$, representing the start of any plan.
\begin{algorithm}
\caption{Angelic A*\label{alg:angelic}}
\begin{algorithmic}[1]
\Function{Search}{abstraction $(\mathcal{S}, \mathcal{A}, \bar{\mathcal{R}}, \hat{V})$}
\State $\mathrm{root} = (\varnothing, \varnothing, \varnothing, \{(x_{\mathrm{s}}, x_{\mathrm{s}}, 0, 0)\})$ \label{alg:angelic:root}
\State $\mathbf{p}^* = \varnothing$
\State $\textsc{Bound}(\varnothing) = \hat{V}[\textsc{Act}]$
\State $\mathbf{p}_0 = \textsc{Propagate}(\mathrm{root}, [\textsc{Act}])$ \label{alg:angelic:initialize}
\State $Q = \{\mathbf{p}_0\}$ \label{alg:angelic:enqueue}
\While {$\vert Q \vert > 0$}
\State $\mathbf{p} = \mathrm{arg\,min} \{ \hat{V}_L[\mathbf{p}](\{x_s\}, X_g): \mathbf{p} \in Q \}$ \label{alg:angelic:min}
\If {$\textsc{Primitive}(\mathbf{p}^*)$ and $ \hat{V}_U[\mathbf{p}^*] \prec \hat{V}_L[\mathbf{p}]$} \label{alg:angelic:check}
\State\Return $\mathbf{p}^*$ \label{alg:angelic:success}
\Else
\State $Q \gets Q \setminus \{\mathbf{p}\}$ \label{alg:angelic:begin_expand}
\State $S \gets \textsc{Successors}(\mathbf{p})$
\For{$\mathbf{p}' \in S$}
\If {$\hat{V}_U[\mathbf{p}'] < \hat{V}_U[\mathbf{p}^*]$}
\State $\mathbf{p}^* \gets \mathbf{p}'$ \label{alg:angelic:store_best}
\EndIf
\EndFor
\State $Q \gets Q \cup \{\mathbf{p}' \in S: \neg (\hat{V}_U[\mathbf{p}^*] \prec \hat{V}_L[\mathbf{p}'])\}$ \label{alg:angelic:end_expand}
\EndIf
\EndWhile
\State\Return $\varnothing$
\EndFunction
\Function{Successors}{plan node $\mathbf{p}$}
\label{alg:angelic:start_successors}
\State $\textsc{Post}(\textsc{Base}(\mathbf{p})) = \{\mathbf{s}': (\mathbf{s}, \mathbf{s}', l, u) \in \hat{V}[\textsc{Base}(\mathbf{p})]\}$
\State $\mathbf{a} = \textsc{Operator}(\textsc{Head}(\mathbf{p}))$
\State $S = \varnothing$
\For {$\mathbf{p}' : (\mathbf{a}, \mathbf{p}') \in \bar{\mathcal{R}}, \exists \mathbf{s} \in \textsc{Post}(\textsc{Base}(\mathbf{p})):\textsc{Head}(\mathbf{p}') \cap \mathbf{s} \ne \varnothing$}
\State $\mathbf{p}_{\mathrm{ref}} \gets \textsc{Propagate}(\textsc{Base}(\mathbf{p}), \mathbf{p}' \circ \textsc{Ext}(\mathbf{p}))$ \label{alg:angelic:call_propagate}
\If {$\hat{V}_L[\mathbf{p}_{\mathrm{ref}}](x_s, X_g) < \infty$}
\State $S \gets S \cup \{\mathbf{p}_{\mathrm{ref}}\}$
\EndIf
\EndFor
\State\Return $S$
\label{alg:angelic:end_successors}
\EndFunction
\Function{Propagate}{base node $\mathbf{p}$, list $\mathbf{p}_{\mathrm{ext}}$}
\label{alg:angelic:propagate}
\State $\mathbf{b} \gets \mathbf{p}$
\While {$\mathbf{p}_{\mathrm{ext}}$ is not empty}
\State $\mathbf{a} \gets \textsc{Pop}(\mathbf{p}_{\mathrm{ext}})$
\If {$\mathbf{a}$ is more primitive than $\textsc{Operator}(\mathbf{p})$}
\State $\mathbf{b} \gets \mathbf{p}$
\EndIf
\State $\mathbf{p} \gets (\mathbf{a}, \mathbf{p}, \mathbf{b}, \hat{V}[\mathbf{p} \circ \mathbf{a}])$
\If {$\hat{V}[\mathbf{p}] = \varnothing$}
\State \Return $\varnothing$
\ElsIf {$\textsc{Bound}(\mathbf{p}_{\mathrm{ext}}) \prec \hat{V}_L[\mathbf{p}]$}
\State \Return $\varnothing$ \label{alg:angelic:prune}
\Else
\State $\textsc{Bound}(\mathbf{p}_{\mathrm{ext}}) \gets \textsc{Bound}(\mathbf{p}_{\mathrm{ext}}) \cup \hat{V}[\mathbf{p}]$
\label{alg:angelic:join}
\EndIf
\EndWhile
\State\Return $\mathbf{p}$
\EndFunction
\end{algorithmic}
\end{algorithm}
The main entry point for the algorithm is the $\textsc{Search}$ routine, which first constructs the root plan node (line~\ref{alg:angelic:root}) then computes an initial abstract plan that includes all possible primitive plans (line~\ref{alg:angelic:initialize}).
This abstract plan is then added to the plan queue (line~\ref{alg:angelic:enqueue}).
Then, as long as a plan remains on the queue, AA* repeatedly finds the abstract plan in $Q$ with the lowest lower bound (line~\ref{alg:angelic:min}).
If this plan is dominated by a previously discovered primitive plan, then the algorithm returns successfully, as any remaining plan on the queue is also dominated.
Otherwise, AA* expands the active plan by computing its successors and adding them to the queue if they cannot be pruned (lines~\ref{alg:angelic:begin_expand}-\ref{alg:angelic:end_expand}).
If the queue becomes empty without discovering a primitive plan that reaches the goal, then no plan exists and the algorithm returns failure.
\begin{figure*}
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\input{original_plan.tikz.tex}
\end{subfigure}
\begin{subfigure}[t]{\textwidth}
\centering
\input{split_plan.tikz.tex}
\end{subfigure}
\begin{subfigure}[t]{\textwidth}
\centering
\input{refinement.tikz.tex}
\end{subfigure}
\caption{%
A schematic illustration of the process by which we construct successor plans.
A plan is represented by a collection of nodes representing operators.
Each node has a pointer to its predecessor, and represents the concatenation of the predecessor with its operator.
Each node also has a pointer to a \emph{base} node.
To form the successors of a plan, we first break the plan into three pieces: the base, the node after the base (called the head), and the rest of the plan (called the extension).
We then replace the head with a valid refinement, chosen to be optimistically feasible.
Finally, we propagate, creating new nodes corresponding to the operators in the refinement and the extension.
\label{fig:successors}
}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth]{plan_tree_solution.pdf}
\caption{%
Part of the plan tree constructed by AA* for the problem shown in figure~\ref{fig:plan_tree:problem}.
Each node represents a plan; edges link a node to its predecessor.
Nodes that are part of the optimal plan are highlighted in red.
Branches of the tree not drawn are indicated with an ellipsis.
The act of opening the door is referred to as $\textsc{Toggle}(S)$.
Primitive motion operators are referred to as $\textsc{Go}(x, y)$, where $x$ and $y$ are coordinates.
An abstract motion to a region $R_j$ through a region $R_i$ is referred to as $\textsc{Go}(R_i, R_j)$.
The top level operator is labelled $\textsc{Act}$.
\label{fig:plan_tree:solution}
}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{plan_tree_problem.pdf}
\caption{%
The problem solved by the tree in figure~\ref{fig:plan_tree:solution}.
The plan first toggles the switch in the lower left hand corner, then moves through the opened door to reach the goal.
Primitive operators (edges) are displayed as dotted lines, while the optimal plan is highlighted in red.
\label{fig:plan_tree:problem}
}
\end{figure}
AA* generates successors to a plan using the refinement relation.
It then constructs a set of \emph{child} plans by selecting one operator from the plan and replacing it with its refinements.
Any successor plan that cannot possibly contain an acceptable solution is pruned, while any plan that could contain an acceptable solution is added to the priority queue.
The algorithm terminates when we remove a plan from the queue that is dominated by a previously expanded primitive plan.
We compute the valuation of each new plan incrementally (line~\ref{alg:angelic:propagate}).
If that new plan does not optimistically reach some state with lower cost than a previously explored plan ending with the same extension, we discard it (line~\ref{alg:angelic:prune}).
Otherwise, we update the bounds on any plan with the current extension to include the new plan (line~\ref{alg:angelic:join}).
Next, if the upper bound on the cost of reaching the goal under the new plan is better than any previous plan, we record this new plan as the best yet found (line~\ref{alg:angelic:store_best}).
Finally, if the lower bound on the cost of reaching the goal under the new plan is better than the upper bound under any previous plan, we add it to the set $Q$ of active plans.
Marthi~et~al.~showed that this algorithm will return the optimal refinement of the top-level operator $\textsc{Act}$ after a finite number of iterations, provided the lower bound on the cost of every operator is greater than zero.
\begin{theorem}
\label{thm:angelic:hierarchical}
Algorithm~\ref{alg:angelic} will return the optimal primitive refinement of the abstract plan $\textsc{Act}$, provided the lower bound on the cost of every operator is strictly positive \cite{marthi2008angelic}.
\end{theorem}
However, if the abstraction is admissible, we can prove the following stronger claim.
\begin{restatable*}{theorem}{AngelicOptimality}
\label{thm:angelic:optimality}
If the abstraction $\mathcal{A}$ is admissible and a feasible plan exists, then algorithm~\ref{alg:angelic} returns an optimal sequence of primitive operators in finite time, provided the lower bound on the cost of every operator is greater than zero.
\end{restatable*}
\begin{corollary}
If the set of primitive operators $\mathcal{A}_{0,n}$ is asymptotically optimal (equation~\ref{eq:asymptotically_optimal}), then
\begin{equation}
\lim_{n\to\infty}\mathrm{Pr}(C[\textsc{Search}(\mathcal{A}_n)] < (1 +\epsilon) c^*) = 1.
\end{equation}
\end{corollary}
\begin{proof}
See appendix~\ref{sec:proofs}, theorem~\ref{thm:angelic:optimality}.
\end{proof}
The distinction between these claims is subtle, but important.
Theorem~\ref{thm:angelic:hierarchical} implies hierarchical optimality: if a plan is returned, no better plan can be expressed as a refinement of the top-level operator.
Theorem~\ref{thm:angelic:optimality} implies primitive optimality: if a plan is returned, no better plan exists.
If we can ensure our abstraction is admissible, then using our abstraction provides the same guarantees as a direct search over the space of primitive plans, but may be much faster.
\subsection{Acyclic Angelic A*}
\label{sec:algorithms:acyclic}
\begin{figure}
\centering
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=\textwidth]{acyclic_loop_zero.pdf}
\caption{}
\label{fig:acyclic:loop_zero}
\end{subfigure}%
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=\textwidth]{acyclic_loop_small.pdf}
\caption{}
\label{fig:acyclic:loop_small}
\end{subfigure}
\begin{subfigure}[t]{0.33\columnwidth}
\centering
\includegraphics[width=\textwidth]{acyclic_loop_necessary.pdf}
\caption{}
\label{fig:acyclic:loop_necessary}
\end{subfigure}%
\begin{subfigure}[t]{0.33\columnwidth}
\centering
\includegraphics[width=\textwidth]{acyclic_loop_expensive.pdf}
\caption{}
\label{fig:acyclic:loop_expensive}
\end{subfigure}%
\begin{subfigure}[t]{0.33\columnwidth}
\centering
\includegraphics[width=\textwidth]{acyclic_loop_connected.pdf}
\caption{}
\label{fig:acyclic:loop_connected}
\end{subfigure}
\caption{%
An illustration of the problems with cyclic paths.
Many natural operators in continuous domains have a cost with a lower bound of zero and no upper bound.
For example, deciding whether the irregularly-shaped object (a) can reach the blue region requires detailed geometric analysis.
Since regions $0$ and $2$ touch, the greatest lower bound on the cost of a plan in $\mathbf{a}_{01} \circ \mathbf{a}_{12}$ is the same as the bound on $\mathbf{a}_{01} \circ \mathbf{a}_{12} \circ \mathbf{a}_{10} \circ \mathbf{a}_{12}$.
Any number of repetitions of the cycle $\mathbf{a}_{10} \circ \mathbf{a}_{12}$ will have the same cost, and so if $\mathbf{a}_{01} \circ \mathbf{a}_{12}$ is ever selected for expansion, the algorithm will only ever refine this infinite sequence of cyclic plans.
Separating the regions (b) eliminates the infinite recursion, but remains inefficient; each cycle of $\mathbf{a}_{10} \circ \mathbf{a}_{12}$ adds only a small cost $\epsilon$ to the lower bound, meaning that if the next plan on the queue has a cost $\delta$ greater, the planner will consider $\lceil \delta / \epsilon \rceil$ cycles before considering the next acyclic plan.
We cannot simply ignore these `cyclic' plans; in some scenarios, the best plan (c) or any feasible plan (d) is cyclic.
This can occur even if the regions defining our operators are connected in configuration space: in the diagram in (e), although there is a feasible plan in $\mathbf{a}_{12} \circ \mathbf{a}_{2g}$, the optimal plan is in $\mathbf{a}_{12} \circ \mathbf{a}_{21} \circ \mathbf{a}_{2g}$.
\label{fig:acyclic}
}
\end{figure}
Algorithm~\ref{alg:angelic} requires strictly positive lower bounds on the cost of any operator.
In discrete problems, this is a reasonable restriction, but it presents challenges in continuous problems.
For example, suppose we have a plan consisting of two operators $\mathbf{a}_{ij} \circ \mathbf{a}_{i'j'}$ from our navigation abstraction.
If the destination regions intersect---if $R_j \cap R_{j'} \ne \varnothing$---then the largest possible lower bound for the valuation of $\mathbf{a}_{i'j'}$ is zero.
This phenomenon can lead to a zero-cost cycle: a sequence of operators that can optimistically return to a given state with zero cost (figure~\ref{fig:acyclic:loop_zero}).
Even positive-cost cycles are problematic if the lower bound $l$ on the cost of a cycle is much smaller than the upper bound $u$: the algorithm can only prune a plan after it executes the cycle $\lceil u/l \rceil$ times (figure~\ref{fig:acyclic:loop_small}).
Unfortunately, we cannot simply discard any abstract plan with a cycle: the optimal plan may leave and return to an abstract state if the state is non-convex, even if the state is connected (figure~\ref{fig:acyclic:loop_necessary}-\ref{fig:acyclic:loop_connected}).
Often, this indicates a poor choice of abstraction, but it can arise even with natural choices of abstraction, especially in domains with topologically complex configuration spaces.
We can deal with such edge cases while still avoiding cycles with a minor modification to the algorithm.
We define an acyclic plan as any plan $\mathbf{p}$ that cannot be partitioned into two plans $\mathbf{p}_0 \circ \mathbf{p}_1$ such that $\hat{V}_L[\mathbf{p}_0] \preceq \hat{V}_L[\mathbf{p}]$ (algorithm~\ref{alg:acyclic_fn}).
When we compute the successors of a plan $\mathbf{p}$, if we find the extension $\mathbf{p}_\mathrm{ext}$ would create a cyclic (i.e.~not acyclic) plan when propagated on top of $\textsc{Base}(\mathbf{p})$, we do not add $\mathbf{p} \circ \mathbf{p}_\mathrm{ext}$ to the set of successors.
Instead, we add $(\textsc{Base}(\mathbf{p}), \mathbf{p}_\mathrm{ext})$ to the set of deferred plans (algorithm~\ref{alg:acyclic}, line~\ref{alg:acyclic:defer}).
When any descendant of $\mathbf{p}$ is expanded, we consider activating any deferred extension of $\mathbf{p}$ by propagating it on top of the descendant plan.
If the resulting plan is no longer cyclic, we add it to the set of successors (line~\ref{alg:acyclic:activate}).
This ensures that only acyclic plans will ever be added to the queue of plans, while also ensuring all plans that are not pruned will eventually be considered.
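As a much-simplified illustration of the acyclicity test, the sketch below reduces each prefix of a plan to a single abstract state and a scalar cumulative lower bound; the real test in algorithm~\ref{alg:acyclic_fn} compares full valuations under the ordering $\preceq$.

```python
def acyclic(prefixes):
    """Scalar sketch of the Acyclic test.

    `prefixes` is a list of (abstract_state, cumulative_lower_bound)
    pairs, one per prefix of the plan, ending with the full plan. The
    plan is cyclic if some proper prefix already reaches the same
    abstract state no more expensively, i.e. a suffix adds zero cost.
    """
    state, bound = prefixes[-1]
    return not any(s == state and l <= bound for s, l in prefixes[:-1])
```

For example, a plan that returns to region `R1` with the same lower bound as an earlier prefix is flagged as cyclic.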
\begin{algorithm}
\caption{Acyclic angelic A*\label{alg:acyclic}}
\begin{algorithmic}[1]
\Function{Successors}{plan node $\mathbf{p}$}
\State\Comment $D$ is a global variable, initially set to $\varnothing$.
\label{alg:acyclic:start_successors}
\State $\textsc{Post}(\textsc{Base}(\mathbf{p})) = \{\mathbf{s}': (\mathbf{s}, \mathbf{s}', l, u) \in \hat{V}[\textsc{Base}(\mathbf{p})]\}$
\State $S = \varnothing$
\State $\mathbf{a} = \textsc{Operator}(\textsc{Head}(\mathbf{p}))$
\For {$\mathbf{p}' : (\mathbf{a}, \mathbf{p}') \in \bar{\mathcal{R}}, \exists \mathbf{s} \in \textsc{Post}(\textsc{Base}(\mathbf{p})):\textsc{Head}(\mathbf{p}') \cap \mathbf{s} \ne \varnothing$}
\State $\mathbf{p}_{\mathrm{ref}} \gets \textsc{Propagate}(\textsc{Base}(\mathbf{p}), \mathbf{p}' \circ \textsc{Ext}(\mathbf{p}))$ \label{alg:acyclic:call_propagate}
\If {$\hat{V}_L[\mathbf{p}_{\mathrm{ref}}](x_s, X_g) < \infty$}
\If {$\textsc{Acyclic}(\mathbf{p}_{\mathrm{ref}}, \varnothing)$}
\State $S \gets S \cup \{\mathbf{p}_{\mathrm{ref}}\}$
\Else
\State $D \gets D \cup \{(\textsc{Base}(\mathbf{p}), \textsc{Ext}(\mathbf{p}_{\mathrm{ref}}))\}$
\label{alg:acyclic:defer}
\EndIf
\EndIf
\EndFor
\State $\mathbf{p}_a \gets \mathbf{p}$
\While {$\textsc{Base}(\textsc{Parent}(\mathbf{p}_a)) \ne \varnothing$}
\State $\mathbf{p}_a \gets \textsc{Base}(\textsc{Parent}(\mathbf{p}_a))$
\For {$\mathbf{p}_{\mathrm{ext}} : (\mathbf{p}_a, \mathbf{p}_{\mathrm{ext}}) \in D$}
\State $\mathbf{p}_{\mathrm{ref}} \gets \textsc{Propagate}(\textsc{Base}(\mathbf{p}), \mathbf{p}_{\mathrm{ext}})$
\If {$\textsc{Acyclic}(\mathbf{p}_{\mathrm{ref}}, \varnothing)$ and $\hat{V}_L[\mathbf{p}_{\mathrm{ref}}](x_s, X_g) < \infty$}
\State $S \gets S \cup \{\mathbf{p}_{\mathrm{ref}}\}$
\label{alg:acyclic:activate}
\EndIf
\EndFor
\EndWhile
\State\Return $S$
\label{alg:acyclic:end_successors}
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Acyclic angelic A*\label{alg:acyclic_fn}}
\begin{algorithmic}[1]
\Function{Acyclic}{plan nodes $\mathbf{p}, \mathbf{p}'$}
\If {$\mathbf{p} = \varnothing$}
\State\Return$\mathbf{true}$
\ElsIf {$\mathbf{p}'=\varnothing$}
\State $\mathbf{p}_- \gets \textsc{Predecessor}(\mathbf{p})$
\State\Return$\textsc{Acyclic}(\mathbf{p}_-, \varnothing) \wedge \textsc{Acyclic}(\mathbf{p}_-, \mathbf{p})$
\Else
\State $\mathbf{p}_- \gets \textsc{Predecessor}(\mathbf{p})$
\State\Return$\neg (\hat{V}_L[\mathbf{p}] \preceq \hat{V}_L[\mathbf{p}']) \wedge \textsc{Acyclic}(\mathbf{p}_-, \varnothing)$
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{restatable*}{theorem}{AcyclicOptimality}
\label{thm:acyclic:optimality}
If the abstraction $\mathcal{A}$ is admissible and a feasible plan exists, then the acyclic angelic A* algorithm returns a sequence of primitive operators with cost no greater than $c^*(\{x_s\}, X_g)$ in finite time.
\end{restatable*}
\begin{corollary}
\label{thm:acyclic:asymptotic}
If $\mathcal{A}_{0,n}$ is asymptotically optimal, then
\begin{equation}
\forall \epsilon \ge 0, \lim_{n\to\infty}\mathrm{Pr}(C[\textsc{Search}(\mathcal{A}_n)] < (1 +\epsilon) c^*) = 1.
\end{equation}
\end{corollary}
\begin{proof}
See appendix~\ref{sec:proofs}, theorem~\ref{thm:acyclic:optimality}.
\end{proof}
\subsection{Approximate Angelic A*}
\label{sec:approximate}
Even with a good abstraction, finding an optimal solution may be intractable for many problems.
By modifying the order in which plans are expanded and the conditions under which the algorithm terminates, we can accelerate the search process while still ensuring approximate optimality.
This modification is described in algorithm~\ref{alg:approximate}.
Often, admissible valuations are unduly optimistic: the lower bound $L[\mathbf{p}]$ on the cost of a plan is much less than the true optimal cost $c^*(\{x_s\}, X_g)$.
This problem is well-understood in the context of graph search, where it is often mitigated by using an admissible heuristic that has been inflated, as in weighted A* \cite{pohl1970heuristic}.
WA* keeps a queue of states, and expands the state minimizing
\begin{equation}
\textsc{Key}_{\mathrm{WA}^*}(x; w) = g(x) + w h(x),
\end{equation}
where $g(x)$ is the estimated cost to reach a state $x$ and $h(x)$ is an admissible estimate of the cost to reach the goal from a state $x$.
This biases the search towards plans that pessimistically reach states close to the goal.
If $h(x)-c^*(\{x\}, X_g)$ has only shallow local minima, this will explore far fewer states before finding a path to the goal than A* would.
Moreover, when a path to the goal is found, it will have cost less than $w c^*(\{x_s\}, X_g)$.
Unfortunately, we cannot directly apply this computation of an inflated priority in the context of angelic search.
When we use angelic semantics, we may not have a distinct cost $g(x)$ to reach a state; we only have bounds on the cost of plans.
In order to apply the idea of WA* to angelic search, we need to compute a priority that satisfies the same properties as $\textsc{Key}_{\mathrm{WA}^*}(x; w)$.
A na\"ive approach, such as inflating the lower bound on each operator, does not have the desired effect: it would inflate the priority of all plans equally and would not affect the order in which plans are expanded.
A more reasonable approach might be to inflate the lower bounds on each nonprimitive operator; for a flat abstraction, this exactly reproduces the priority computed by WA*---but this approach does not properly take upper bounds into account.
Consider an operator $\mathbf{a}$ for which $\hat{V}_U[\mathbf{a}](\mathbf{s}, \mathbf{s}') = (1+\varepsilon) L[\mathbf{a}](\mathbf{s}, \mathbf{s}')$ for some abstract state pair $(\mathbf{s}, \mathbf{s}')$.
If $\varepsilon$ is zero, the operator would be treated as primitive and its lower bound would not be inflated.
If it is small but positive, it would be treated as nonprimitive.
This would artificially bias the search away from operators that are almost, but not quite, primitive.
To avoid this undesirable bias, we recursively compute a priority $\textsc{Key}(\mathbf{p}; w)$.
If $\mathbf{p}_{-}=\textsc{Predecessor}(\mathbf{p})$,
\begin{equation}
\textsc{Key}(\mathbf{p}; w) =
\min(\textsc{Key}(\mathbf{p}_{-}; w) + w (\hat{V}_L[\mathbf{p}] - \hat{V}_L[\mathbf{p}_{-}]), \hat{V}_U[\mathbf{p}]),
\end{equation}
with $\textsc{Key}(\varnothing; w) = 0$.
$\textsc{Key}(\mathbf{p}; w)$ has several useful properties.
For each plan $\mathbf{p}$, $\textsc{Key}(\mathbf{p}; w)$ is no greater than the upper bound $U[\mathbf{p}]$ or the inflated lower bound $w L[\mathbf{p}]$.
If $w=1$, then $\textsc{Key}(\mathbf{p}; w) = L[\mathbf{p}]$.
For a primitive plan, $\textsc{Key}(\mathbf{p}; w)$ equals the cost of the plan.
In a flat abstraction, $\textsc{Key}(\mathbf{p}; w)$ is precisely equal to the cost estimate used by WA*.
We refer to this approach as \emph{approximate} angelic A*, and present pseudocode in algorithm~\ref{alg:approximate}.
The pseudocode is substantially similar to algorithm~\ref{alg:angelic}, and identical subroutines have been omitted.
Changes are highlighted in red.
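The recursion for $\textsc{Key}$ is easy to exercise in isolation. The scalar sketch below represents a plan by the cumulative (lower, upper) bounds of its prefixes; real valuations are functions of abstract state pairs, so this is only an illustration.

```python
def key(bounds, w):
    """Scalar sketch of the approximate angelic A* priority.

    `bounds` is a list of cumulative (lower, upper) bound pairs, one per
    prefix of the plan. Implements the recursion
        Key(p; w) = min(Key(p_-; w) + w * (L[p] - L[p_-]), U[p]),
    with Key(empty; w) = 0.
    """
    k, prev_l = 0.0, 0.0
    for l, u in bounds:
        k = min(k + w * (l - prev_l), u)
        prev_l = l
    return k
```

With $w=1$ the key reduces to the lower bound; with larger $w$ the inflated lower bound is capped by the upper bound, so nearly-primitive operators are not artificially penalized.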
\begin{algorithm}
\caption{Approximate angelic A*\label{alg:approximate}}
\begin{algorithmic}[1]
\Function{Search}{abstraction $(\mathcal{S}, \mathcal{A}, \bar{\mathcal{R}}, \hat{V})$, \textcolor{red}{weight $w$}}
\State $\mathrm{root} = (\varnothing, \varnothing, \varnothing, \{(x_{\mathrm{s}}, x_{\mathrm{s}}, 0, 0)\})$
\State $\mathbf{p}^* = \varnothing$
\State $\textsc{Bound}(\varnothing) = \hat{V}[\textsc{Act}]$
\State $\mathbf{p}_0 = \textsc{Propagate}(\mathrm{root}, [\textsc{Act}])$
\State $Q = \{\mathbf{p}_0\}$
\While {$\vert Q \vert > 0$}
\State $\mathbf{p} = \textcolor{red}{\mathrm{arg\,min} \{
\textsc{Key}(\mathbf{q}, w): \mathbf{q} \in Q
\}}$
\If {$\textsc{Primitive}(\mathbf{p}^*)$ and $ \hat{V}_U[\mathbf{p}^*] \prec \hat{V}_L[\mathbf{p}]$}
\State\Return $\mathbf{p}^*$
\Else
\State $Q \gets Q \setminus \{\mathbf{p}\}$
\State $S \gets \textsc{Successors}(\mathbf{p})$
\For{$\mathbf{p}' \in S$}
\If {$\hat{V}_U[\mathbf{p}'] < \hat{V}_U[\mathbf{p}^*]$}
\State $\mathbf{p}^* \gets \mathbf{p}'$
\EndIf
\EndFor
\State $Q \gets Q \cup S$
\EndIf
\EndWhile
\State\Return $\varnothing$
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Approximate angelic search priority}
\begin{algorithmic}[1]
\Function{Key}{node $\mathbf{p}$, weight $w\in\mathbb{R}_{\ge 1}$}
\If {$\mathbf{p} = \varnothing$}
\State\Return $0$
\Else
\State $\mathbf{p}_- \gets \textsc{Predecessor}(\mathbf{p})$
\State\Return $\min(\textsc{Key}(\mathbf{p}_-, w) + w (\hat{V}_L[\mathbf{p}] - \hat{V}_L[\mathbf{p}_-]), \hat{V}_U[\mathbf{p}])$
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
This algorithm is approximately optimal, in the sense that any plan it returns has cost at most $w$ times the cost of the optimal plan.
If we combine the acyclic generation of successor plans with the approximate search, the resulting algorithm is approximately optimal even if there are zero-cost operators.
\begin{restatable*}{theorem}{ApproximateOptimality}
\label{thm:approximate:optimality}
If the abstraction $\mathcal{A}$ is admissible and a feasible plan exists, then algorithm~\ref{alg:approximate} returns a sequence of primitive operators with cost no greater than $w \cdot c^*(\{x_s\}, X_g)$ in finite time.
\end{restatable*}
\begin{corollary}
\label{thm:approximate:asymptotic}
If $\mathcal{A}_{0,n}$ is asymptotically optimal, then
\begin{equation}
\forall \epsilon > 0, \lim_{n\to\infty}\mathrm{Pr}(C[\textsc{Search}(\mathcal{A}_n)] < (1 +\epsilon) w \cdot c^*) = 1.
\end{equation}
\end{corollary}
\begin{proof}
See appendix~\ref{sec:proofs}, theorem~\ref{thm:approximate:optimality}.
\end{proof}
If we use this idea in the angelic A* algorithm, rather than the acyclic angelic A* algorithm, the same theorem holds with the additional constraint that all operators must have a strictly positive lower bound.
\section{Results}
We implemented algorithms~\ref{alg:angelic}-\ref{alg:approximate} and the abstractions described in sections~\ref{sec:abstraction:navigation} and \ref{sec:abstraction:door_puzzle} in the Python programming language.
We then compared the performance of the planner to the original angelic A* search algorithm \cite{marthi2008angelic} and to a search without abstraction using A*.
In the navigation domain, we constructed a random discretization with $10^4$ states.
Examples of the search trees constructed by A* and by algorithm~\ref{alg:acyclic} are given in figure~\ref{fig:navigation}.
By using the abstraction, the algorithm can avoid exploring large parts of the configuration space.
Our quantitative results bear this out: using abstraction allows us to reduce the number of states explored by a factor of three and the number of plans considered by several orders of magnitude.
Using abstraction in the door puzzle domain resulted in even larger speedups.
Even in easy problem instances with only a few doors, search without abstraction quickly became infeasible (figure~\ref{fig:quantitative}).
Using abstraction reduced the number of states explored by orders of magnitude.
However, the unmodified angelic search spent a great deal of time exploring plans with cycles.
By deferring these plans, our algorithms were able to reduce the number of plans expanded by an order of magnitude.
In fact, only our algorithm was able to solve problem instances with more than ten doors.
We were able to find 2-optimal plans for instances with up to 32 doors and $10^4$ sampled configurations (corresponding to a discretized state space with approximately 40 trillion states).
Unfortunately, software limitations prevented us from experimenting on instances with more than 32 doors.
\begin{figure}
\centering
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=\textwidth]{navigation_astar_tree.pdf}
\caption{A*}
\label{fig:navigation:astar}
\end{subfigure}%
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=\textwidth]{navigation_aastar_tree.pdf}
\caption{Acyclic Angelic A*}
\label{fig:navigation:angelic}
\end{subfigure}
\caption{%
The search trees constructed by A* (a) and by algorithm~\ref{alg:acyclic} (b).
Note that the A* search needs to explore almost the entire space, due to limitations of the Euclidean distance as a heuristic.
In contrast, when provided with a decomposition of the world into nearly-convex regions, angelic A* can find a path to the goal while exploring far fewer states.
By avoiding plans with cycles, our modified angelic planning algorithm can explore these states while expanding far fewer plans.
\label{fig:navigation}
}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{quantitative_door_puzzle_expanded.pdf}
\end{subfigure}
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{quantitative_door_puzzle_explored.pdf}
\end{subfigure}
\caption{%
Quantitative evaluation on an easy instance of the door puzzle domain with only two doors.
More difficult instances could not be solved by any algorithm considered except algorithm~\ref{alg:acyclic}.
The abscissa measures the number of randomly sampled states in the discretization of the configuration space.
The ordinate axes measure the number of plans expanded by each algorithm and the number of distinct configurations explored during search.
\label{fig:quantitative}
}
\end{figure}
\begin{table}
\footnotesize
\centering
\input{statistics.tex}
\caption{Quantitative performance on a problem instance in the navigation domain.
The discretized state space includes $10^4$ sampled configurations.
We see that abstraction and approximation result in fewer plans expanded and fewer states explored, yielding a faster search and optimal or nearly optimal results.
\label{table:quantitative}
}
\end{table}
\section{Related Work}
There is a long history of using abstraction to solve robotic planning problems \cite{nilsson1984shakey,lozano1987handey}, and although our formulation is different from most standard approaches to task or motion planning, many authors \cite{alami1990geometrical,simeon2004manipulation} have employed our underlying approach of searching for paths through a graph of configurations connected by feasible motion plans.
Practical algorithms often overcome the high computational cost of searching these planning graphs using clever heuristics.
For example, aSyMov \cite{cambon2009hybrid} and FFRob \cite{garrett2016ffrob} both employ the fast-forward heuristic, augmented with information derived from the geometric and kinematic computation.
Like these approaches, our work is built atop a heuristically-guided search; however, angelic semantics allow us to define upper bounds which can be used to prune away abstract plans, and allow for admissible hierarchies of arbitrary depth.
Our definition of abstract plans is closely related to the notion of ``plan skeletons'' considered by several authors \cite{erdogan2013planning,desilva2013towards,lozano2014constraint}.
Plan skeletons fix a sequence of operators but leave continuous parameters undefined.
There are many approaches to determining the feasibility of a given skeleton; for example, \cite{toussaint2015logic} uses continuous optimization techniques to search for optimal values of the real-valued variables.
\cite{lozano2014constraint} fix a discretization of the continuous variables then find feasible values by formulating and solving a constraint satisfaction problem.
\cite{lagriffoul2014efficiently} use linear programming to find valid values of the free variables or prove that none exist.
The primary difference between our approach and these plan skeletons is the choice of formalism.
By defining our abstract operators as implicitly defined \emph{sets} of primitive motion plans, we can reason about plans at varying levels of abstraction in a unified way, which is essential to the generality of our guarantees.
Another approach to task and motion planning represents geometric information in a way amenable to search using classical AI search techniques.
For example, \cite{dornhege2010integrating} model geometric information as predicates that can be resolved by solving motion planning problems during the task planning process.
More recently, \cite{ferrer-mestres2017combined} show that by fixing a discretization, in some domains all geometric information can be represented compactly in planning languages more expressive than PDDL, avoiding the need to make geometric queries during the planning process.
Other authors \cite{erdem2011combining,srivastava2013using,dantam2016incremental} use the task planner as a partial or approximate representation of the underlying geometric task, which can be improved during search.
For instance, \cite{erdem2011combining} use a high-level task planner to find an optimal task plan, then use a motion planner to attempt to find a kinematically feasible primitive solution to that task plan.
If no feasible solution exists, additional kinematic constraints are extracted from the motion planner and provided to the task planner, and the process is repeated.
Many authors have devised planning algorithms tailored to more specific task and motion planning domains.
For example, the problem of navigation among movable obstacles has long been of practical interest, and probabilistically complete solutions have been known since 2008 \cite{stilman2008planning,nieuwenhuisen2008effective}.
Planning for non-prehensile manipulation has been addressed by \cite{dogar2011framework} and by \cite{barry2013hierarchical}.
Our work could provide a new analytical tool with which to study these special classes of problems, and perhaps formulate new algorithms with stronger performance guarantees.
\section{Conclusions}
We have defined conditions on an abstraction that allow us to accelerate planning while preserving the ability to find an optimal or near-optimal solution to complex motion planning problems.
We motivate these conditions by deriving two admissible abstractions and showing they improve the efficiency of search without adversely affecting the quality of the resulting solutions.
We view this work as a proof of concept, demonstrating that a good abstraction can render optimal planning feasible even on large problems.
The classical planning community has developed several powerful families of admissible heuristics \cite{haslum2000admissible}; by reformulating these heuristics to employ angelic abstractions, we may be able to obtain optimal or near-optimal solutions to practical manipulation planning problems.
\section*{Acknowledgements}
This research was sponsored by Northrop Grumman and by the Robotics Collaborative Technology Alliance (RCTA) of the US Army.
Their support is gratefully acknowledged.
\bibliographystyle{named}
\section{Introduction}
Automatic emotion recognition from the speech signal has attracted the research community in recent years due to its real-life applicability. Human beings use a lot of emotions along with textual messages to convey the intended information. Emotions improve human-computer interaction (HCI) systems such as interactive movies \cite{nakatsu2000emotion}, storytelling and E-tutoring applications \cite{ververidis2003state}, and retrieval and indexing of video/audio files \cite{sagar2007characterisation}. Emotion recognition systems help improve the quality of service of call attendants at call centers \cite{lee2005toward}. Automatic emotion detection could be helpful in psychological treatment \cite{ooi2012early,low2011detection,yang2013detecting}. It can also be useful in surveillance systems \cite{clavel2008fear}.
Modern speech-based systems are designed largely using neutral speech. Here, the components of emotions can be used as an add-on to improve the accuracy in practical applications.
Excitation source features have not been much exploited for recognizing emotions. The literature reveals that the majority of previous works used prosodic and system features for emotion recognition from speech \cite{wang2004investigation,nicholson2000emotion}. The system features MFCCs, Linear Predictive Cepstral Coefficients (LPCCs) and their derivatives reflect emotion specific information. Prosodic features such as fundamental frequency, duration, energy and intonation are also used for emotion recognition, as are combinations of prosodic and system features. Reference \cite{ververidis2004automatic} uses supra-segmental features such as energy, F0, formant locations, and the dynamics of F0 and formant contours for emotion classification. The statistical parameters of F0 like maximum, minimum, and median values, and the slopes of F0 contours carry emotion specific information \cite{dellaert1996recognizing}. However, not much work has been done on using excitation source features for emotion recognition.
Reference \cite{wang2004investigation} combined 55 features (24 MFCCs, 25 prosodic and 6 formant frequencies) for recognizing six emotions. Prosodic and spectral features are combined in reference \cite{nicholson2000emotion} for emotion classification. It is well established in the literature that combining complementary features improves the accuracy of an emotion recognition system. Most features are extracted from speech under the assumption that the speech signal is stationary over a small segment. However, speech features -- whether source features or system features -- vary rapidly in emotional speech because of the rapid changes in the vibration of the vocal cords. In reference \cite{krothapalli2013characterization}, an emotion recognition model is developed using a combination of epoch and MFCC features. That model used the zero frequency filter (ZFF) method for extracting epoch features. The accuracy of epoch detection using ZFF decreases for emotional speech because it requires a priori knowledge of the pitch period to detect epoch locations, while the pitch period of emotional speech varies frequently within an utterance. The emotion recognition model in reference \cite{krothapalli2013characterization} was developed using auto-associative neural networks (AANN) and support vector machines (SVM) on the IITKGP-SESC database.
In our earlier work \cite{yadav2017epoch}, we proposed a robust method to detect epoch locations. In this paper, epoch features, namely instantaneous pitch, phase and strength of excitation (SOE), are extracted. These features are explored for different emotions and combined with MFCCs for classifying four emotions. With this method, a significant increase in the accuracy of the emotion recognition model is observed: the average accuracy when using MFCC and epoch features separately is 59.25\% and 54.52\% respectively, and this improves to 64.2\% when MFCC and epoch features are combined.
The rest of the paper is organized as follows. Section \ref{a} contains the description of speech databases, Sec. \ref{b} describes detection of epoch features and Sec. \ref{c} briefly discusses MFCC and development of emotion recognition models. The results are discussed in Sec. \ref{d}. Section \ref{e} concludes the paper.
\section{Databases}{\label{a}}
Our proposed model has been evaluated on IEMOCAP (Interactive emotional dyadic motion capture database) \cite{busso2008iemocap} and IITKGP-SEHSC (Indian Institute of Technology Kharagpur: Simulated Emotion Hindi Speech Corpus) \cite{koolagudi2011iitkgp}.
IEMOCAP is a multi-modal database which contains audio, video, text and gesture information of conversations arranged in dyadic sessions. The database is recorded with ten actors (five male and five female) in five sessions. In each session, there are conversations of two actors, one from each gender, on two subjects. The conversation of one session is approximately five minutes long. The contents of the database are recorded in both scripted and spontaneous scenarios. The total number of utterances in the database is 10,039, of which 4,784 are from the spontaneous sessions and 5,225 from the scripted sessions. The average duration of an utterance is 4.5 seconds while the average word count per utterance is 11.4 words. The duration of the database is about 12 hours. The database is labeled as per two popular schemes: discrete categorical labels (i.e., happy, anger, neutral and sad) and continuous dimensional labels (i.e., valence, activation and dominance). We have only used the audio tracks and the corresponding discrete categorical labels for emotion recognition.
In IITKGP-SEHSC, fifteen emotionally neutral Hindi text prompts were used for recording the emotions in multiple sessions to capture diversity. In each session, 15 sentences in eight basic emotions are uttered by each artist. Recording was done with a SHURE dynamic cardioid microphone C660N at a 16 kHz sampling frequency. The Hindi emotional speech database has 10 speakers (five males and five females), and 15 sentences were recorded for eight emotions (neutral, happy, angry, sad, disgust, sarcastic, surprise and fear). There are a total of 12,000 speech utterances (10 speakers $\times$ 15 sentences $\times$ 8 emotions $\times$ 10 sessions) in the Hindi emotional speech database, i.e., 1,500 utterances per emotion. The number of syllables and words in the sentences lie in the ranges 9-17 and 4-7 respectively.
\section{Extraction of Epoch features using Zero time Windowing method}{\label{b}}
In our method, voiced regions are detected using the phase of the zero frequency filtered speech signal \cite{kumar2016voice}. After that, the Zero Time Windowing (ZTW) method \cite{bayya2013spectro} is applied to get the Hilbert envelope of the Numerator Group Delay (HNGD) spectrum of each of the voiced segments. The sum of the amplitudes of the three most prominent peaks is obtained from each HNGD spectrum. The resulting output reproduces the instantaneous energy profile of the windowed signal. The spectral energy profile, obtained from the HNGD spectrum, shows high energy at the epoch locations because of the high SNR (signal to noise ratio) at these locations. Further, the spectral energy profile is normalized using a mean smoothing filter. The normalized spectral energy profile is then convolved with a Gaussian filter to highlight the peaks. The positive peaks -- selected after removing spurious peaks -- are considered as epochs. Next, each of the above steps is described in detail.
\subsection{Voiced Activity Detection (VAD)}
Epochs are present in the voiced regions due to the vibration of the vocal cords. Hence, we first divide the speech into voiced and unvoiced regions based on its characteristics. In the present paper, voiced regions are detected \cite{kumar2016voice} using the phase of the Zero Frequency Filtered Signal (ZFFS). The ZFFS of a speech utterance is obtained by using a zero frequency resonator \cite{murty2008epoch}. The phase of the ZFFS is determined using the Hilbert transform. Further, the phase signal is split into frames of size 30 ms with a frame shift of 5 ms, and each frame is multiplied with a Hanning window. The amplitude spectrum of each Hanning-windowed frame is computed, and the sum of the first 10 harmonics is obtained. The decision between voiced and unvoiced regions is taken by an appropriate threshold on the global maxima of the sum of phase harmonics (SPH), because the global maxima of the SPH of voiced regions are significantly higher than those of unvoiced regions.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\linewidth]{v}
\caption{Detection of voiced and unvoiced regions using the phase of ZFFS. (a) Speech signal. (b) its corresponding global maxima of SPH. (c) unvoiced and voiced regions correspond to low and high amplitude respectively.}
\label{m}
\end{center}
\end{figure*}
Voiced and unvoiced regions of a speech signal are detected by setting a threshold of 0.08 on the global maxima of the SPH of each frame, as shown in Fig. \ref{m}.
Fig. \ref{m}(a) shows the speech signal. The corresponding global maxima of the SPH are shown in Fig. \ref{m}(b), and the separation into voiced and unvoiced speech is shown in Fig. \ref{m}(c) as a rectangular waveform. Here, voiced speech is labeled 1 (high) and unvoiced speech is labeled 0 (low).
\subsection{Sequence of Steps for Epoch extraction }
The steps to detect epoch locations are described next.
\begin{enumerate}
\item The voiced segment is detected using the phase of zero frequency filtered speech signal \cite{kumar2016voice}.
\item The voiced speech signal is differentiated to remove any low frequency bias in the speech signal using the formula
\begin{equation}
y[n] = s[n]-s[n-1]
\end{equation}
where:\\
$y[n]$ is the differentiated signal at $n^{\textit{th}}$ sample\\
$s[n]$ is the actual speech signal at $n^{\textit{th}}$ sample, and,\\
$s[n-1]$ is the actual speech signal at $(n-1)^{\textit{th}}$ sample\\
\item Three-millisecond segments of the differentiated speech signal ($M = 48$ samples at the 16 kHz sampling rate) were taken at each sampling point. These were appended with $N-M$ ($2048-48$) zeros to obtain sufficient resolution in the frequency domain.
\item The time domain signal is multiplied with the square of window function $h_{1}$ (defined below) to achieve the smoothened spectrum by integration in the frequency domain.
\begin{equation}
h_{1}[n] = \begin{cases} 0 & n = 0 \\ \frac{1}{4 \sin^{2}(\frac{\pi n}{N})} & n=1,2,\ldots,N-1 \end{cases}
\end{equation}
\item The ripple effect due to truncation is reduced by multiplying the signal of the previous step with the window $h_{2}$, which is defined as:
\begin{equation}
h_{2}[n] = 4 \cos^{2}\Big(\frac{\pi n}{2M}\Big), \quad n = 0,1,\ldots,M-1
\end{equation}
The resultant signal $x[n]$ is called windowed signal.
\item To highlight the spectral features, the numerator of the group delay of the windowed signal, denoted $g[k]$, is computed as:
\begin{equation}
g[k] = X_{R}[k]Y_{R}[k]+X_{I}[k]Y_{I}[k], \quad k= 0,1,\ldots,N-1
\end{equation}
where $X[k]$ and $Y[k]$ are the DFTs of $x[n]$ and $n\,x[n]$ respectively, and the subscripts $R$ and $I$ denote real and imaginary parts. The resultant signal is known as the DNGD signal.
\item Hilbert envelope of the DNGD spectrum is computed to prominently highlight the spectral peaks. The Hilbert envelope $h_{e}[k]$ of DNGD signal $g[k]$ is computed as:
\begin{equation}
h_{e}[k] = \sqrt{g^{2}[k]+g_{h}^{2}[k]}
\end{equation}
where $g_{h}[k]$ is the Hilbert transform of the sequence $g[k]$. It is computed as:
\begin{equation}
g_{h}[k] = \mathrm{IDFT}\{E_{h}(\omega)\}
\end{equation}
where $E(\omega)$ is the DTFT of the sequence $g[k]$, and $E_{h}(\omega)$ is defined as:
\begin{equation}
E_{h}(\omega) = \begin{cases} -jE(\omega), & 0 < \omega < \pi \\ jE(\omega), & -\pi < \omega < 0 \end{cases}
\end{equation}
\item The sum of the three most prominent peaks of the HNGD spectrum is determined at each sampling instant. The resultant amplitude shows high SNR around glottal closure. Further, the amplitude contour is smoothened using 5-point mean smoothing filter to eliminate any outliers.
\item The sum of the three prominent peaks obtained from each HNGD spectrum is called the spectral energy profile. The spectral energy profile is convolved with a Gaussian filter whose length equals the average pitch period of that segment. A Gaussian filter of length $L$ is given by
\begin{equation}
G[n] = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{n^{2}}{2\sigma^{2}}}, \quad n = 1,2,\ldots,L
\end{equation}
where the standard deviation $\sigma$ is one-fourth of the filter length $L$.
\item The spurious peaks are eliminated by using following sub steps:
\begin{itemize}
\item[(a)] First, spurious peaks are eliminated based on the constraint that the difference between successive peaks should not be less than 2 ms, since 2 ms is the minimum possible pitch period. If two successive peaks are separated by less than 2 ms, the peak with the smaller amplitude is removed.
\item[(b)] Two successive peaks must bound a negative region between them. This criterion also eliminates some spurious peak locations.
\end{itemize}
\item The positive peaks in epoch evidence plot represent epoch locations.
\end{enumerate}
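Steps 3 through 7 above can be sketched as follows. This is an illustrative numpy sketch, not the authors' implementation: it assumes a 16 kHz sampling rate, takes $X[k]$ and $Y[k]$ as the DFTs of $x[n]$ and $n\,x[n]$, and approximates the three most prominent HNGD peaks by the three largest envelope values.

```python
import numpy as np

def hngd_peak_energy(frame, n_fft=2048):
    """Sum of the three largest HNGD spectral peak values for one
    3 ms frame (48 samples at 16 kHz); a sketch of steps 3-7."""
    m = len(frame)
    # h2: cosine taper over the M frame samples (reduces truncation ripple)
    h2 = 4.0 * np.cos(np.pi * np.arange(m) / (2.0 * m)) ** 2
    # h1: zero-time window (h1[0] = 0 by definition)
    h1 = np.zeros(n_fft)
    k = np.arange(1, n_fft)
    h1[1:] = 1.0 / (4.0 * np.sin(np.pi * k / n_fft) ** 2)
    x = np.zeros(n_fft)
    x[:m] = frame * h2            # taper, then zero-pad to n_fft
    x *= h1 ** 2                  # multiply with the square of h1
    X = np.fft.fft(x)
    Y = np.fft.fft(np.arange(n_fft) * x)
    g = X.real * Y.real + X.imag * Y.imag   # numerator of group delay
    # Hilbert envelope of the (real) DNGD sequence over half the band
    half = g[: n_fft // 2]
    G = np.fft.fft(half)
    h = np.zeros(n_fft // 2)
    h[0] = h[n_fft // 4] = 1.0
    h[1 : n_fft // 4] = 2.0
    env = np.abs(np.fft.ifft(G * h))
    return np.sort(env)[-3:].sum()
```

Applying this at every sampling instant of a voiced segment yields the spectral energy profile that is then smoothed and convolved with the Gaussian filter of step 9.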
Epoch detection using the ZTW method is shown in Fig. \ref{j}. The angry emotional speech segment is shown in Fig. \ref{j}(a) and its differentiated EGG signal is shown in Fig. \ref{j}(b). The spectral energy profile obtained from the HNGD spectrum of the speech signal using ZTW analysis is plotted in Fig. \ref{j}(c). The epoch evidence plot obtained after convolving the spectral energy profile with a Gaussian window of 2 ms is shown in Fig. \ref{j}(d). Epoch locations are shown in Fig. \ref{j}(e).
\begin{figure}[h]
\begin{center}
\includegraphics[width=1.0\linewidth]{proposed}
\caption{Epoch extraction using proposed method. (a) Angry speech segment. (b) Differentiated EGG signal. (c) Spectral energy profile obtained from HNGD spectrum. (d) Epoch evidence plot. (e) Epoch locations.}
\label{j}
\end{center}
\end{figure}
The ZTW method for epoch detection is robust for emotional speech \cite{yadav2017epoch}. Since this method is based on spectral peak energy, it preserves the energy of the signal.
\subsection{Epoch Features}
The epoch features such as instantaneous pitch, strength of the epoch, slope of the strength of the epoch, the change of phase at the epoch are specific to each emotion \cite{krothapalli2013characterization}. The above mentioned features are determined by the epoch signal obtained by ZTW method \cite{yadav2017epoch}. The advantage of this method is that the value at epoch location is actually the sum of the glottal formants. Therefore, the epochs retain both time and spectral information.
\subsubsection{Instantaneous Frequency}
Instantaneous Period (IP) is the duration between two successive epoch locations; instantaneous frequency, denoted $\Delta f$, is computed as the reciprocal of IP \cite{koolagudi2010emotion,narendra2015robust}:
\begin{equation}
\Delta f = \frac{1}{t(i+1)-t(i)}, \quad i=1,2,\ldots,(n-1)
\end{equation}
where $t(i)$ represents $i^{th}$ epoch location.
\subsubsection{Strength Of Excitation}
The Strength Of Excitation (SOE) is computed as the difference between two successive epoch values \cite{gangamohan2014excitation}:
\begin{equation}
y(i) = x(i)-x(i+1), \quad i=1,2,\ldots,(n-1)
\end{equation}
where $x(i)$ is the epoch strength at $i^{th}$ epoch.
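Given epoch locations and strengths, the two features above reduce to simple first differences; a minimal sketch (an illustrative helper, not the authors' code, with the pitch taken as the reciprocal of the interval between successive epochs):

```python
import numpy as np

def epoch_features(t, x):
    """Instantaneous pitch (Hz) and strength of excitation from epoch
    locations t(i) (in seconds) and epoch strengths x(i)."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    pitch = 1.0 / np.diff(t)   # reciprocal of the instantaneous period
    soe = x[:-1] - x[1:]       # difference of successive epoch strengths
    return pitch, soe
```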
\subsubsection{Instantaneous Phase}
The instantaneous phase of a glottal signal is obtained as the cosine of the phase function of the corresponding analytic signal.
\begin{itemize}
\item The analytic signal $g_{a}(n)$ corresponding to glottal signal $g(n)$ is given by
\begin{equation}
g_{a}(n) = g(n) +jg_{h}(n)
\end{equation}
\item where $g_{h}(n)$ is the Hilbert transform of $g(n)$, obtained as
\begin{equation}
g_{h}[n] = \mathrm{IDFT}\{G_{h}(\omega)\}
\end{equation}
where $G_{h}(\omega)$ is defined as:
\begin{equation}
G_{h}(\omega) = \begin{cases} -jG(\omega), & 0 < \omega < \pi \\ jG(\omega), & -\pi < \omega < 0 \end{cases}
\end{equation}
Here $G(\omega)$ is the DTFT of the sequence $g(n)$, and IDFT denotes the Inverse Discrete Fourier Transform.
\item The Hilbert envelope of glottal signal $g(n)$ is calculated as:
\begin{equation}
h_{e}[n] = \sqrt{g^{2}[n]+g_{h}^{2}[n]}
\end{equation}
\item The cosine of the phase of the analytic signal $g_{a}(n)$ is
given by
\begin{equation}
\cos\Phi(n) = \frac{\mathrm{Re}\, g_{a}(n)}{|g_{a}(n)|} =\frac{g(n)}{h_{e}[n]}
\end{equation}
where $g(n)$ is glottal signal derived from speech signal $s(n)$ using ZTW method.
\end{itemize}
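The steps above can be sketched with an FFT-based analytic signal (illustrative code, assuming an even-length real input; not the authors' implementation):

```python
import numpy as np

def cos_phase(g):
    """Cosine of the instantaneous phase of the glottal signal g(n):
    Re{g_a(n)} / |g_a(n)| = g(n) / h_e(n)."""
    N = len(g)
    X = np.fft.fft(g)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0
    h[1:N // 2] = 2.0           # build g_a = g + j*g_h in the frequency domain
    ga = np.fft.ifft(X * h)     # analytic signal g_a(n)
    return np.real(ga) / np.abs(ga)  # g(n) divided by the Hilbert envelope
```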
In Fig. \ref{k}, instantaneous frequency and SOE values of the same speech utterance by the same speaker in different emotions are plotted. Figure \ref{k}(a) shows the instantaneous pitch for two emotions, angry and sad: red indicates the angry emotion while black indicates the sad emotion. It is clear from Fig. \ref{k}(a) that the instantaneous pitch ranges over 250-400 Hz for the angry emotion while for sad it ranges over 100-200 Hz. The instantaneous pitch contours of same-arousal emotions (happy and angry) cover a similar range, but their variation with time differs; this property of the instantaneous pitch contour is well captured by a dynamic model such as a Hidden Markov Model (HMM) or a Long Short-Term Memory (LSTM) network. Figure \ref{k}(b) shows the SOE for the two emotions: the variation of SOE is high for the angry emotion and considerably lower for the sad emotion. Figure \ref{k}(c) shows the phase of the glottal signal, which is higher for sad than for angry. The SOE and glottal phase features thus also discriminate between same-arousal emotions (happy and angry).
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=1.0\linewidth]{phase}
\caption{Instantaneous pitch and SOE contours of angry and sad speech signal using proposed method. (a) Instantaneous pitch contour, (b) SOE contour, and (c) Instantaneous phase contour of angry and sad speech signal.}
\label{k}
\end{center}
\end{figure*}
\section{Development of Emotion Recognition System}{\label{c}}
The emotion recognition system is an outcome of two principal stages. In the first stage, training is performed using the features extracted from the known emotional speech utterances. In the second stage, i.e., the testing phase, evaluation of the trained model is carried out on unseen emotional speech utterances. The schematic diagram of the proposed emotion recognition system is shown in Fig. \ref{l}. We combined the MFCC features with the epoch features, namely instantaneous pitch, instantaneous phase and strength of excitation (SOE). The excitation source and system features carry complementary information for recognizing emotions; hence, the combined features significantly improve the accuracy of emotion recognition.
\begin{figure}
\begin{center}
\includegraphics[height= 14cm, width=8cm]{block}
\caption{Schematic diagram of the proposed emotion recognition model. }
\label{l}
\end{center}
\end{figure}
\subsection{MFCC Feature extraction}
Mel Frequency Cepstral Coefficient (MFCC) features also carry emotion specific information, and we combine them with epoch features in our model for recognizing emotions. Gradual spectral variations are captured using 13 MFCCs extracted from the speech signal. The speech signal is segmented into frames of size 20 ms, where each frame overlaps the adjacent frame by 10 ms. For each frame, 13 MFCC features are extracted. To minimize spectral distortion at the beginning and end of each frame, a Hamming window is applied to each frame segment. MFCC features are extracted from these frames using the algorithm given in \cite{rabiner1993fundamentals}. Recording variations are countered by subtracting the cepstral mean and normalizing the variance of the MFCCs at the utterance level. The schematic diagram of the proposed feature extraction and transformation is shown in Fig. \ref{s}.
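The per-utterance normalization just described, together with the $\pm 4$-frame splicing used later before the LDA projection, can be sketched as follows (illustrative numpy helpers operating on a frames-by-coefficients matrix; not the authors' pipeline):

```python
import numpy as np

def cmvn(feats):
    """Per-utterance cepstral mean and variance normalization
    (frames in rows, coefficients in columns)."""
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)

def splice(feats, context=4):
    """Stack each frame with `context` left/right neighbours
    (edge frames repeated), giving 9-frame windows for context=4."""
    T = len(feats)
    padded = np.pad(feats, ((context, context), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + T] for i in range(2 * context + 1)])
```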
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\linewidth]{feat}
\caption{Schematic diagram of the proposed feature extraction and transformation.}
\label{s}
\end{center}
\end{figure*}
\subsection{DNN-HMMs}
In our work, the emotion recognition system has been developed using Hidden Markov Models (HMMs) \cite{rabiner1986introduction} -- a dynamic modeling approach that captures the temporal dynamics of the epoch features of the corresponding emotions. In a conventional HMM, the observation probabilities of the HMM states are estimated by Gaussian mixture models (GMMs). The GMMs used in such a conventional HMM are statistically inefficient at modeling non-linear data in the feature space. Therefore, we have replaced the GMMs with a DNN to estimate the probability of observing the input sequence at each state in the training phase. In this work, we have developed four HMMs for four discrete emotions. An emotion label is assigned to an unknown speech utterance using the Viterbi algorithm. The procedure for training and recognition with the DNN-HMM follows \cite{li2013hybrid,hinton2012deep}. To the best of our knowledge, this is the first time that such a model is being used in an emotion recognition system.
To provide class labels to the DNN, we use a GMM-HMM model with five states per emotion class. Specifically, for each speech utterance in the training set, the Viterbi algorithm is applied to find an optimal state sequence. The optimal state sequences are stored in a state-label mapping table, which is used to assign a label to each state. The training utterances, together with their labeled state sequences, are then fed as input to the DNN, whose output is the posterior probabilities of the 20 output units. The observation probability of each state, denoted $p(i_{t}|q_{t})$, is calculated using Bayes' theorem as follows:
\begin{equation}
p(i_{t}|q_{t}) = \frac{p(q_{t}|i_{t})\, p(i_{t})}{p(q_{t})}
\end{equation}
where $I = (i_{1},i_{2},\ldots,i_{T})$ is the input sequence and $p(q_{t}|i_{t})$ is the posterior probability obtained as output from the DNN. The state prior $p(q_{t})$ is computed from the initial state-level alignment of the training set, while $p(i_{t})$ remains constant because the input feature vectors are assumed to be mutually independent. During decoding, for an unseen speech utterance, the probability of each emotion is estimated and the utterance is assigned to the class whose estimated probability is maximum.
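Since $p(i_{t})$ is constant across states, decoding uses the DNN posteriors divided by the state priors ("scaled likelihoods"), usually in the log domain. A minimal sketch (function name ours):

```python
import numpy as np

def scaled_log_likelihoods(log_posteriors, log_priors):
    """Convert DNN log posteriors log p(q_t|i_t) into log observation
    scores: log p(i_t|q_t) = log p(q_t|i_t) - log p(q_t) + const,
    where the constant log p(i_t) is dropped as it does not affect
    the Viterbi path."""
    return log_posteriors - log_priors
```

These scores replace the GMM likelihoods in the per-emotion HMM Viterbi decoding.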
\section{Experimental Results and Discussion}{\label{d}}
Three models were developed for emotion recognition: one using system (MFCC) features, one using source (epoch) features, and one combining MFCC and epoch features. The model trained on the combined features has significantly higher accuracy than either individual model.
The experiments were performed on the IEMOCAP and IITKGP:SEHSC databases, restricted to four emotions, namely angry, happy, sad and neutral. Three-fourths of each database is used for training and the remaining one-fourth for evaluating the model. We used MATLAB for feature extraction and the KALDI toolkit \cite{povey2011kaldi} for developing the system. For the emotion recognition system developed using MFCC features, 13 MFCCs are extracted from each frame. Cepstral mean and variance normalization (CMVN) \cite{viikki1998cepstral} is performed at the utterance level to mitigate recording variations. We also take the derivative and double derivative of the normalized MFCCs as features, so the total number of MFCC features per frame is 39. To preserve contextual information, we use the triphone model approach from speech recognition, where each frame is spliced with the four frames to its left and the four frames to its right; a significant improvement in emotion recognition accuracy is observed with this triphone model. Feature transformation is applied on top of the 9 spliced frames: the features are projected into a lower-dimensional space using Linear Discriminant Analysis (LDA), and a diagonalizing Maximum Likelihood Linear Transform (MLLT) \cite{gales1998maximum,gales1999semi} is then applied to further improve the result. Speaker Adaptive Training (SAT) is also used to further enhance the accuracy of the emotion recognition model; for SAT, the Feature Space Maximum Likelihood Linear Regression (fMLLR) transformation is used during both training and testing. Thus, the accuracy of the system is further improved using LDA+MLLT+SAT \cite{rath2013improved}. Four DNN-HMM models, one per emotion class, are built using the transformed feature vectors.
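The splicing step above (each frame concatenated with its four left and four right neighbours before LDA) can be sketched as follows; this is an illustrative NumPy version (function name ours) that repeats edge frames, one common convention:

```python
import numpy as np

def splice(feats, left=4, right=4):
    """Concatenate each frame with `left` preceding and `right` following
    frames (edge frames repeated), producing 9 x D dimensional vectors
    for the default left=right=4."""
    padded = np.concatenate([np.repeat(feats[:1], left, axis=0),
                             feats,
                             np.repeat(feats[-1:], right, axis=0)])
    T = feats.shape[0]
    width = left + 1 + right
    return np.stack([padded[t:t + width].reshape(-1) for t in range(T)])
```

With 39-dimensional MFCC+derivative frames this yields 351-dimensional spliced vectors, which LDA then projects down.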
The DNN architecture used is 80:512x5:20, where 80 is the number of transformed input features to the DNN and 512x5 represents 512 nodes in each of the 5 hidden layers. This configuration was found to be optimal after experimenting with differently sized configurations, and the results discussed in this paper were obtained with it. There are 20 output classes in the DNN model (20 = 4x5, where 4 is the number of emotion classes and 5 is the number of states per HMM). These output classes are treated as "ground-truth" states and are obtained by GMM-HMM based Viterbi alignment. The initial learning rate of 0.005 is gradually decreased to 0.0005 over 25 epochs, after which 20 additional epochs are performed; the training batch size is 512. The training of the DNN is performed in three stages, as in \cite{vesely2013sequence}: (i) unsupervised pre-training consisting of layer-wise training of Restricted Boltzmann Machines (RBMs) by the Contrastive Divergence algorithm; (ii) frame classification training based on mini-batch Stochastic Gradient Descent (SGD), optimizing frame cross-entropy; and (iii) sequence-discriminative training consisting of SGD with per-sentence updates, optimizing state Minimum Bayes Risk (MBR).
In our study, we consider four categorical (class-labeled) emotions, namely angry, happy, sad and neutral. The numbers of utterances in these classes are 1103, 595, 1084 and 1708 respectively, for a total of 4490; the IEMOCAP database is thus imbalanced.
The model was trained in a speaker-independent fashion: we used four sessions as training data and the remaining session for testing, following leave-one-speaker-out cross-validation to generalize the model. Since the test set is also imbalanced across emotion classes, we calculate both weighted accuracy (WA) and unweighted accuracy (UWA). Weighted accuracy is the total number of correctly classified test examples divided by the total number of test samples. Unweighted accuracy is the accuracy calculated for each emotion category, averaged over all emotion classes; it is also called class accuracy.
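The WA/UWA distinction above can be made concrete with a short sketch (function name ours): WA weights every test sample equally, while UWA weights every class equally, which matters on an imbalanced test set.

```python
from collections import defaultdict

def weighted_unweighted_accuracy(y_true, y_pred):
    """WA: overall fraction of correctly classified samples.
    UWA: per-class accuracies averaged over classes (class accuracy)."""
    total = correct = 0
    per_class = defaultdict(lambda: [0, 0])  # class -> [correct, count]
    for t, p in zip(y_true, y_pred):
        total += 1
        per_class[t][1] += 1
        if t == p:
            correct += 1
            per_class[t][0] += 1
    wa = correct / total
    uwa = sum(c / n for c, n in per_class.values()) / len(per_class)
    return wa, uwa
```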
Similarly, for epoch features, the emotion recognition system is developed using three epoch features, namely instantaneous pitch, phase and the strength of epoch. These features are extracted using the ZTW method, with 20 ms frames, the same as for the MFCC features. The number of epochs differs from frame to frame, so to fix the length of the epoch-feature vector we use a length of 10, the maximum number of epochs encountered in any frame; shorter feature vectors are padded with zeros. Padding has no adverse effect on training the network because the input feature vectors are transformed (using LDA+MLLT). The total number of epoch features per frame is therefore 30 (10 epochs $\times$ 3 features per epoch). We developed a DNN-HMM model for each emotion using these 30 epoch features.
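The fixed-length packing of per-frame epoch features described above might look as follows; this is a sketch under our own layout assumption (the 10 slots for each of the three features stored contiguously), since the paper does not specify the ordering:

```python
import numpy as np

MAX_EPOCHS = 10  # maximum number of epochs observed in any 20 ms frame

def epoch_frame_vector(pitch, phase, strength):
    """Pack the per-epoch features of one frame into a fixed 30-dim
    vector (10 epochs x 3 features), zero-padding frames that contain
    fewer than 10 epochs."""
    vec = np.zeros(3 * MAX_EPOCHS)
    for i, feats in enumerate((pitch, phase, strength)):
        k = min(len(feats), MAX_EPOCHS)
        vec[i * MAX_EPOCHS : i * MAX_EPOCHS + k] = feats[:k]
    return vec
```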
Finally, we combined the epoch and MFCC features to improve the performance of emotion recognition system. After combining the MFCC and epoch features, the length of the feature vector becomes 69.
We developed baseline GMM-HMM systems using (1) monophone training, (2) triphone training with $MFCC+\Delta+\Delta^{2}$, and (3) triphone training with LDA+MLLT, as well as a DNN-HMM system with LDA+MLLT. Table \ref{o} shows the results of the emotion recognition system using only MFCC and derivative features, with and without the LDA+MLLT transformation. The triphone system gives better results than the monophone system because it captures contextual information. We also estimate the observation probabilities using a DNN instead of a GMM, as described in the previous section; the system gives its best results with the DNN-HMM, the average accuracy increasing by approximately 3.5\% when the observation probabilities are computed by the DNN. The confusion matrix for the experiments using only $MFCC+\Delta+\Delta^{2}$ features with the LDA+MLLT transformation on the DNN-HMM system is shown in Table \ref{f}. The results show more confusion between angry and happy, both high-arousal emotions, and between sad and neutral, both low-arousal emotions.
\begin{table}[h]
\centering
\caption{Emotion classification performance (\%) using the MFCC features on IEMOCAP database} \label{o}
\begin{tabular}{lcc}
\hline
\bf Features & \bf Model & \bf UWA (\%) \\ \hline
MFCC (monophone) & GMM-HMM & 44.70 \\
$MFCC+\Delta+\Delta^{2}$ (triphone) & GMM-HMM & 47.70 \\
MFCC (LDA+MLLT) & GMM-HMM & 51.25 \\
MFCC (LDA+MLLT) & DNN-HMM & 54.35 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!h]
\centering
\caption{Emotion recognition performance on IEMOCAP Database, based on MFCC feature vector of voiced region using DNN-HMM. Abbreviations: A-Anger, H-Happy, N-Neutral, S-Sad}
\label{f}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
&\multicolumn{4}{l}{MFCC feature vector (Average: 59.58)} \\
\cline{2-5}
& A & H & N & S\\
\hline
\noalign{\smallskip}
Anger & \bf 60.21 & 23.29 & 9.45 & 7.05 \\
Happy & 26.56 & \bf 58.17 &8.70 & 7.57 \\
Neutral & 8.13 & 11.43 & \bf 59.71& 20.73 \\
Sadness & 8.3 & 8.45 & 23.00 & \bf 60.25 \\
\hline
\end{tabular}
\end{table}
Similarly, we developed the system using epoch features. The average recognition rate of the model using MFCC features only is 54.35\%, while that of the model using epoch features only is 54.15\%.
\begin{table}
\centering
\caption{Emotion recognition performance on IEMOCAP Database, based on Epoch feature vector of voiced region. Abbreviations: A-Anger, H-Happy, N-Neutral, S-Sad}
\label{t}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
&\multicolumn{4}{l}{Epoch feature vector (Average: 54.52)} \\
\cline{2-5}
& A & H & N & S\\
\hline
\noalign{\smallskip}
Anger & \bf 57.21 & 15.29 & 22.45 & 5.05 \\
Happy & 13.56 & \bf 52.24& 21.70 & 12.5 \\
Neutral & 15.23 & 14.40 & \bf 53.71& 16.66 \\
Sadness & 7.00 & 9.05 & 29.00 & \bf 54.95 \\
\hline
\end{tabular}
\end{table}
The confusion matrix in Table \ref{t} shows the recognition performance for each emotion using epoch features; the diagonal elements give the per-emotion recognition rates. The experimental results show that epoch features discriminate between angry and happy emotions better than MFCC features do.
The average recognition rate of the model using the combination of MFCC and epoch features is 60.14\%. The performance of the models using MFCC features, epoch features, and the combination of MFCC and epoch features is compared in Table \ref{h}; the combined features significantly improve the accuracy of emotion recognition.
\begin{table}[h]
\centering
\caption{Emotion classification performance (\%) using the Epoch, MFCC and Combined(MFCC+Epoch) features on IEMOCAP database}
\label{h}
\begin{tabular}{p{3.7cm}cc}\hline
\bf Features & \bf Model & \bf UWA (\%) \\ \hline
Epoch Features+LDA+ MLLT(triphone) & GMM-HMM& 50.25 \\\vspace{1mm}
Epoch Features+LDA+ MLLT(triphone)& DNN-HMM& 54.15 \\ \vspace{1mm}
Epoch Features+MFCC+ $\Delta+\Delta^{2}$ (LDA+MLLT)& GMM-HMM & \bf57.25 \\\vspace{1mm}
Epoch Features+MFCC+ $\Delta+\Delta^{2}$(LDA+MLLT) & DNN-HMM & \bf60.14\\
\hline
\end{tabular}
\end{table}
\subsection{Speaker Adaptation}
Adaptation is necessary for emotion recognition: we generally train the model on a limited dataset, while a real environment presents different speakers and noise, so a robust method is needed to adapt the trained model. In this paper, we apply cepstral mean and variance normalization (CMVN) at the utterance level to mitigate recording variations, and the fMLLR transformation per speaker to adapt to the emotion variation of different speakers. After the LDA+MLLT transformation of the feature vectors, we transform them in feature space using constrained maximum likelihood linear regression (CMLLR). The model is developed using leave-one-speaker-out cross-validation, where each time two speakers that were not part of the training dataset are used for testing.
\begin{table}[!hbt]
\centering
\caption{Emotion classification performance (\%) using the Epoch, MFCC and Combined (MFCC+Epoch) features, with and without speaker adaptive training, on IEMOCAP database}
\label{u}
\begin{tabular}{lcc}
\hline
\bf Features & \bf Model & \bf UWA (\%) \\ \hline
\vspace{1mm}
MFCC(LDA+MLLT) & DNN-HMM & 54.35 \\\vspace{1mm}
Epoch(LDA+MLLT) & DNN-HMM & 54.15 \\ \vspace{1mm}
MFCC+Epoch(LDA+MLLT) & DNN-HMM & 60.14 \\ \vspace{1mm}
MFCC(LDA+MLLT+SAT) & DNN-HMM & \bf59.58 \\ \vspace{1mm}
Epoch (LDA+MLLT+SAT) & DNN-HMM & \bf54.52 \\\vspace{1mm}
MFCC+Epoch(LDA+MLLT+SAT)&DNN-HMM & \bf64.20\\ \hline
\end{tabular}
\end{table}
There is a significant improvement in recognition rate after applying speaker adaptive training for MFCC features. As shown in Table \ref{u}, after applying fMLLR the emotion recognition rate increases by up to 4\% for MFCC features, but there is no improvement for epoch features. This suggests that epoch features are speaker independent and require no speaker adaptation technique. The bar graph in Fig. \ref{g} shows that emotion recognition accuracy is higher for the combined (MFCC+Epoch) feature set than for either feature set alone. The average performance of the combined features is 5.34\% higher than that of the model using MFCC features only, which shows that system features and excitation source features carry complementary information for emotion recognition.
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{draw}
\caption{Emotion classification performance (\%) using the Epoch, MFCC and Combined(MFCC+Epoch) features on IEMOCAP database}\label{g}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{draw1}
\caption{Emotion classification performance (\%) using the Epoch, MFCC and Combined(MFCC+Epoch) features on IEMOCAP and IITKGP-SEHSC databases}\label{i}
\end{figure}
We evaluate our proposed approach on two databases, IEMOCAP and IITKGP:SEHSC. The bar graphs for the MFCC, epoch and combined feature sets on each database are shown in Fig. \ref{i}. On both databases, the accuracy increases for the combined features. The accuracy is higher on the IITKGP:SEHSC database because it is a scripted database, whereas IEMOCAP contains both scripted and spontaneous sessions and is more natural; it is also a text-independent database. Utterance lengths in IITKGP:SEHSC are almost equal, whereas large variations in utterance length are observed in IEMOCAP.
We compare our results with prior work on the IEMOCAP database. In \cite{han2014speech}, a DNN was used to extract features from speech segments, and utterance-level features constructed from them were fed to an Extreme Learning Machine (ELM). In \cite{mirsamadi2017automatic}, raw spectrograms and Low Level Descriptor (LLD) features were modeled with an attentive LSTM; the accuracy was higher for LLDs than for spectrograms. In \cite{fayek2017evaluating}, a Convolutional Neural Network (CNN) was used to extract features from speech frames, which were fed to a dense neural network. In \cite{satt2017efficient}, a Long Short-Term Memory (LSTM) network was used to preserve the contextual information of CNN-based features. CNN-based features are fully data driven: they are extracted from the raw spectrogram, a representation of speech that does not properly capture temporal resolution, since improving temporal resolution requires restricting frequency resolution. All of these methods use the spectrogram as the speech representation, which can limit accuracy. Our feature extraction approach is not data driven; we identify the desired temporal and spectral features using signal processing techniques, and the HMM captures the contextual information of the epoch features. As can be seen from Table \ref{q}, both our weighted and unweighted accuracies outperform the other methods. Our results show that MFCC and source (epoch) features contain complementary information.
\begin{table*}[t]
\centering
\caption{Comparison of emotion classification performance (\%) reported in prior work on the IEMOCAP database} \label{q}
\begin{tabular}{cp{2.7cm}cc}
\hline\vspace{1mm}
\bf Model &\bf Features &\bf WA(\%) &\bf UWA ( \%) \\\hline
\vspace{1mm}
DNN+ELM \cite{han2014speech} & MFCC features, pitch-based features and their derivatives& 54.3 & 48.00 \\\vspace{1mm}
LSTM with attention \cite{mirsamadi2017automatic} & Low Level Descriptors and Spectrogram & 63.5 & 58.8 \\ \vspace{1mm}
CNN \cite{fayek2017evaluating} & Spectrogram & 64.78 & 60.89 \\ \vspace{1mm}
CNN+LSTM \cite{satt2017efficient} & frame-level Spectrogram & 68.8 & 59.4 \\
DNN-HMM & Epoch (Proposed) & 58.60 & 54.52 \\
DNN-HMM & MFCC (Proposed) & 64.3 & 59.58 \\
DNN-HMM & MFCC+Epoch (Proposed) & 69.5 & 64.2 \\
\hline
\end{tabular}
\end{table*}
\section{SUMMARY AND CONCLUSION}{\label{e}}
This paper highlights the robustness of the ZTW method for extracting epoch features. A DNN-HMM model is developed for each emotion using epoch features, namely instantaneous pitch, instantaneous phase and the strength of epoch (SOE). The average emotion recognition rate of the proposed model using epoch features is 54.52\%. The model developed using epoch features is then combined with the model developed using MFCC feature vectors; the observed accuracy of the proposed model using MFCC and epoch features together is 64.20\%. The experimental results show that the epoch feature set is complementary to the MFCC feature set for emotion classification. Our future work is to use an LSTM network to capture the contextual information of epoch features, and to explore epoch features in other speech processing applications such as speaker identification, speech recognition and synthesis, and language identification.
\section{ElasticOS Primitives in Action}
\label{sec:abstractions}
\iffalse
\begin{figure*}[th]
\centering
\includegraphics[scale=0.50]{elasticos_abstractions.pdf}
\label{fig:abstractions1}
\vspace{-0.2in}
\caption{Illustration of ElasticOS abstractions. Each box labeled with a number above is a compute node,
with the shaded boxes within represent individual pages. Starting with execution on a single
machine in (0), when memory nears being filled, we stretch to two nodes in (1) and balance the pages in (2). We then push and
pull pages in (3), with the red shaded pages going from node 1 to 2 (push) and from node 2 to 1 (pull). Finally, in (4) and (6) we
are seeing too many page faults (resulting in pull), so decide to jump from node 1 to 2 in (5) and from node 2 to 1 in (7), respectively.}
\end{figure*}
\fi
In this section, we describe the four primitives through an illustration of a running program.
Figure 2
graphically presents each of the primitives.
In this figure, we can see nodes 1 and 2, with the pages inside each node representing physical memory and whether a given page is used (shaded) or unused (unshaded).
As a starting point, an application is running on a single machine. Over time, this application
grows in memory use to nearly the size of the amount of memory in the entire node (label 0 in the figure).
This is when ElasticOS decides to stretch the process, that is to scale out by
using memory on a second node (label 1). At this point, the memory available to the application has grown (doubled in the figure, since the process now spans two nodes with equal memory, though ElasticOS does not require equal memory). ElasticOS can choose to balance the pages at this point, transferring pages to the (new) remote node (label 2). Pages can be selected by some policy, such as least recently used.
Once the process is stretched, this means that the process is effectively running
on multiple machines, but each node only hosts some of the pages.
At this point, execution continues on the original machine. As not all
of the pages are on this machine (which would have naturally happened over time, even if we didn't balance pages),
when the process tries to access a page, it might trigger a page fault. In ElasticOS, the
page fault handler is modified to handle this situation. At this point, we perform a pull, where a page from
a remote machine (that caused the fault), is transferred to the local machine and the process is resumed.
The process will be able to make progress, as the page that is being accessed (and caused a fault) is now local.
If space is needed to perform a pull, we can perform a push, freeing memory for the incoming page by transferring a page to a remote node (to which the application has been stretched). Push and pull are versatile operations, as they can also be performed proactively, moving pages around in the background to optimize placement for locality (label 3).
The idea of locality is important, especially in regard to our final primitive, jump.
Assuming that programs have locality, there are points at which execution transitions into a new pocket of locality whose data set is large. It is then advantageous to jump execution to the data, rather than pull it all into the local node (as is done in network swap). In the figure, in steps (4) and (6), the area highlighted in red represents an island of locality that is more advantageous to jump to than to pull wholesale to the local machine. When to jump is an important decision -- jumping too much can hurt performance (constantly transferring execution without making progress), but not jumping enough can also hurt performance (transferring lots of data back and forth between machines). We therefore created an initial algorithm and implemented it as a flexible module into which new decision-making algorithms can be integrated seamlessly.
\section{ElasticOS Architecture}
\begin{figure}
\centerline{\includegraphics[width=\linewidth,keepaspectratio]{Picture1.png}}
\caption{EOS Architecture.}
\label{fig:arch}
\end{figure}
In this section, we describe the main components of the ElasticOS architecture. ElasticOS can be built as a service integrated into existing, commercially available operating systems. Figure \ref{fig:arch} illustrates the main functional elements that enable a process (e.g., a.out) to be stretched for distributed execution over two ElasticOS nodes. For clarity, we depict pushing and pulling from the perspective of node 1, but in reality all nodes have symmetric capabilities, enabling pushing, pulling, and jumping in all directions.
In the remainder of this section, we will provide a more detailed architectural overview focusing on mechanisms that are roughly OS-independent in order to achieve stretching (\ref{sec:stretching}), pushing (\ref{sec:pushing}), pulling (\ref{sec:pulling}), and jumping (\ref{sec:jumping}). The discussion of OS-dependent elements specific to the Linux implementation is reserved for Section \ref{sec:implementation}.
\subsection{Stretching}
\label{sec:stretching}
\begin{figure}
\centerline{\includegraphics[width=\linewidth,keepaspectratio]{stretching.png}}
\caption{Stretching.}
\label{fig:stretching}
\end{figure}
Stretching is responsible for enabling a process to span multiple nodes. This consists of an initial stretch operation as well as ongoing synchronization.
\textbf{Initial stretch operation:}
In order for a process to span multiple nodes, it needs a process shell on each machine. In this way,
stretching resembles previous Checkpoint/Restore (C/R) works \cite{CRIU, MOSIX}, except that less information needs to be written into the checkpoint. Here we will need to create a process shell that will remain in a suspended state rather than wholly-independent runnable replica. This makes stretching faster than standard C/R. It requires kernel-space process meta-data. These include virtual memory mappings (mmaps), the file descriptor table, scheduling class, and any other meta-data which is not updated frequently. Other information that is typically modified at a high rate such as pending signals, register state, and stack frames need not be in the checkpoint and will be carried over from the running process whenever it jumps (\ref{sec:jumping}).
As shown in Figure~\ref{fig:stretching}, stretching is triggered by the EOS manager, which continuously monitors process memory usage and issues a newly-created signal (\texttt{SIGSTRETCH}) whenever it detects a process that is too big to fit into the node where it is running. Our special kernel-space handler (eos\_sig\_handler) intercepts the signal and instructs the process-export module (p\_export) to send the checkpoint over a pre-created TCP socket to a process-import module (p\_import) waiting on the other node, which then creates a shell process by allocating the necessary kernel-space structures and filling them in with checkpoint data.
\textbf{State Synchronization:} After the process has been stretched, and its replica has been created on another machine, additional changes in process state on the first machine will need to be propagated to the replica. This is handled in two ways. Rapid changes in state are handled using the jumping mechanism, as explained later. Changes in state at a more intermediate time scale such as mapping new memory regions and opening or closing files are handled using multicast sockets to listeners on each participating node.
\iffalse
Stretching creates a process shell on "slave" machines, and jumping carries over with it information that change at a high rate. Furthermore, some other events are synchronized they happen. Examples of such, include mapping new memory regions, and opening or closing files.
For this purpose, a pair of multi-cast socket and lister are setup on each participating node. Once one of those events is performed on a process replica, a message containing the necessary information is broadcast to all other participating nodes. The same operation is then carried out on all of the other replicas.
\fi
One pitfall to avoid here is that the operating system scheduler may delay flushing such synchronization messages until after a jump is performed. If this happens, the system may arrive at an incorrect state or even crash, so it is crucial to flush all synchronization messages before a jump is performed.
\subsection{Pushing}
\label{sec:pushing}
Now that the process has presence on more than one machine, its memory pages are \textit{pushed} between nodes in order to balance the load among participating nodes. Our page pusher piggybacks on existing OS's swap management (See Figure \ref{fig:pushing}).
Typically, the swap daemon scans least-recently-used (LRU) lists to select page frames for swapping. Our page balancer modifies this page scanner to identify pages mapped by elasticized processes (shaded pages in Figure \ref{fig:pushing}) using the reverse-mapping information associated with each page. These pages are then sent to a virtual block device (VBD) client, similar to the one described in \cite{Infiniswap}, after the respective page table entries (PTEs) in the elastic page table have been updated. The VBD forwards each page, along with relevant information such as the process ID and the page's starting virtual address, to the page injection module (pg\_inject) on the other node, which allocates a new page, fills it with the proper content, and updates the replica's elastic page table.
\begin{figure}
\centerline{\includegraphics[width=\linewidth,keepaspectratio]{pushing.png}}
\caption{Pushing.}
\label{fig:pushing}
\end{figure}
Maintaining accurate information in the elastic page tables when pushing pages is crucial for correct execution. As we will see later, jumping depends on this information to locate pages in the system.
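The push bookkeeping described above can be modeled in user space as follows. This is a simplified Python sketch for illustration only: the real implementation modifies the kernel's LRU scanner in C, and names such as `push_pages`, `ElasticPageTable`, and the `send` callback are ours.

```python
from collections import OrderedDict

class ElasticPageTable:
    """Toy model of the elastic page table: maps each virtual page
    number (vpn) to the node currently holding the page."""
    def __init__(self):
        self.location = {}

    def update(self, vpn, node):
        self.location[vpn] = node

def push_pages(lru, page_table, local_node, remote_node, n, send):
    """Evict up to n least-recently-used pages of an elasticized process.

    lru: OrderedDict of vpn -> page data, oldest entries first.
    For each evicted page, record its new location in the elastic page
    table before handing the content to the VBD transport (send)."""
    for _ in range(min(n, len(lru))):
        vpn, data = lru.popitem(last=False)   # pop least recently used
        page_table.update(vpn, remote_node)   # table now points remote
        send(remote_node, vpn, data)          # VBD forwards the content
```

Updating the table before sending mirrors the requirement that the elastic page table stay accurate, since jumping relies on it to locate pages.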
\subsection{Pulling}
\label{sec:pulling}
Partitioning the process's memory footprint will inevitably result in references to remote pages. These are handled by our modified page fault handler (Figure \ref{fig:pulling}). On a page fault, the handler consults the elastic page table to identify the page's location. If the page is on a remote node, its starting virtual address and the process ID are forwarded to the VBD, which contacts the page extraction module (pg\_extract) on that node to pull the page. Once it receives the page's content, the VBD client restores the process's access to the page.
Whenever a remote page fault is handled as described above, page fault counters are updated. This is required by ElasticOS's jumping policy (Section \ref{sec:jumping}), which will always try to co-locate execution with its most-referenced memory.
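The remote-fault path, including the counter update the jumping policy relies on, can be sketched in user space as follows (Python for illustration; the real handler is kernel code, and all names here are ours):

```python
def handle_page_fault(vpn, page_table, local_node, fetch, fault_counters):
    """Model of the modified fault handler.

    page_table: dict mapping vpn -> node id (the elastic page table).
    If the faulting page is remote, pull it via the VBD (fetch plays the
    role of pg_extract on the owner node), map it locally, and bump the
    per-node fault counter that the jumping policy reads."""
    owner = page_table.get(vpn, local_node)
    if owner == local_node:
        return None                      # ordinary local fault, nothing to pull
    data = fetch(owner, vpn)             # pull page content from the owner
    page_table[vpn] = local_node         # page is now local
    fault_counters[owner] = fault_counters.get(owner, 0) + 1
    return data
```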
\begin{figure}
\centerline{\includegraphics[width=\linewidth,keepaspectratio]{pulling.png}}
\caption{Pulling.}
\label{fig:pulling}
\end{figure}
\subsection{Jumping}
\label{sec:jumping}
\begin{figure}
\centerline{\includegraphics[width=\linewidth,keepaspectratio]{jumping.png}}
\caption{Jumping.}
\label{fig:jumping}
\end{figure}
Jumping is the act of transferring execution from one node to another. For this, there is both a jumping mechanism that performs a lightweight process migration, and a jumping policy that determines when to jump.
\textbf{Jumping mechanism:}
Jumping is a lightweight mechanism similar to checkpoint/restore. In contrast to stretching, jumping actually transfers execution, and carries in its checkpoint only the information that changes at a high rate: CPU state, the top stack frames, pending signals, auditing information, and I/O context. The overall size of the jumping checkpoint is dominated by the stack frames, so it is important to include only the topmost stack pages necessary for correct execution.
As shown in Figure~\ref{fig:jumping}, whenever a jump is deemed necessary by the jumping policy in the EOS Manager, it sends a special signal (\texttt{SIGJUMP}) to the process, which is routed to the eos\_sig\_handler; the handler instructs the p\_export module to checkpoint the process and send the information to the other node's p\_import module, which fills in the appropriate kernel-space structures and sets the process's state to runnable. Note that when jumping, no new structures need to be allocated, since the process has already been stretched to the target node; the process at the source node simply remains in a suspended state. In essence, jumping resembles rescheduling a process from one CPU to another, across the boundaries of a single machine.
\textbf{Jumping Policy Algorithm:} Maximizing locality is crucial to the application's performance. A naive approach to moving execution and memory pages around the system will inevitably increase the rate of remote page faults, leading to poor performance. Thus, a good policy for moving processes close to their most frequently used memory is of critical importance.
ElasticOS achieves this goal by overcoming two challenges: grouping inter-dependent memory pages together on the same node, and detecting which of those groups is the most frequently accessed.
The first challenge can be overcome by taking advantage of the natural groupings that memory pages belonging to an application tend to form due to recency of reference. This property is already evident in the wide adoption of the LRU algorithm for page replacement in most modern OSs. Thus, we can extend LRU algorithms to work in a multi-node system, where pages evicted from one node's RAM are immediately shipped to another node via our pushing mechanism.
The second challenge can be addressed by implementing a jumping policy that 1) monitors the process's page accesses to find the "preferred" node, and 2) reschedules the process to the preferred node if it is running on any other node.
Bear in mind that accurately tracking memory references for a particular process is challenging, since CPUs do not report every memory access to the OS, for performance reasons. This leaves us with options that provide the ``next best thing'', such as counting the number of times the CPU sets the \texttt{PG\_ACCESSED} flag of a page frame on access (on the x86\_64 architecture) or tracking handled page faults.
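Such coarse-grained tracking can be sketched as a simple plurality vote: sample the pages touched recently (from page faults or \texttt{PG\_ACCESSED} sweeps), look up which node holds each one, and pick the node with the most votes. All names here are illustrative assumptions, not ElasticOS code.

```python
from collections import Counter

def preferred_node(sampled_pages, page_owner):
    """Given a sample of recently touched page ids and a page->node map,
    return the node holding the plurality of the sampled accesses."""
    votes = Counter(page_owner[p] for p in sampled_pages)
    return votes.most_common(1)[0][0]

# Pages 0-1 are resident on the home node, 2-4 on the remote node.
page_owner = {0: "home", 1: "home", 2: "remote", 3: "remote", 4: "remote"}
sample = [2, 3, 4, 2, 0]        # mostly remote pages were touched
assert preferred_node(sample, page_owner) == "remote"
```

If the preferred node differs from where the process currently runs, the policy reschedules (jumps) it there.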
\subsection*{Abstract}
\input{abstract}
\input{Introduction}
\input{abstractions}
\input{Architecture}
\section{ElasticOS Implementation}
\label{sec:implementation}
\input{Implementation}
\section{Performance Evaluation}
\input{Evaluation}
\section{Discussion and Future Work}
\input{FutureWork}
\section{Conclusion}
\input{Conclusion}
\section{Acknowledgments}
This research was supported by NSF CCF grant \# 1337399, funded under the NSF program Exploiting Parallelism and Scalability ``XPS: SDA: Elasticizing the Linux Operating System for the Cloud''. We also wish to thank Sepideh Goodarzy and Ethan Hanner.
\newpage
{\footnotesize \bibliographystyle{acm}
\subsection{Experimental Setup}
We evaluated ElasticOS on the Emulab testbed~\cite{emulab}. We used Emulab D710 nodes, each with a 64-bit quad-core Xeon processor, 12 GB of RAM, and a gigabit NIC. We chose D710 nodes because they support Linux kernel 2.6. Our setup for each experiment consists of two nodes connected via gigabit Ethernet through a network switch.
To evaluate ElasticOS, we ran tests on a variety of algorithms representative of its target use case -- processing large graphs or lists. Table~\ref{tab:alg} summarizes these applications and the memory footprint of each; note that each footprint exceeds the memory of a single Emulab node. Specifically, these algorithms typically use 11~GB of memory on the first machine and stretch to a remote machine for the additional memory.
\begin{table}
\centering
\caption{Tested algorithms and their memory footprints.}
\label{tab:alg}
\begin{tabular}{|l|c|}
\hline
Algorithm & Memory Footprint \\ \hline
Depth First Search & 330 million nodes (15 GB) \\ \hline
Linear Search & 2 billion long int (15 GB) \\ \hline
Dijkstra & 3.5 billion int weights (14 GB) \\ \hline
Block Sort & 1.8 billion long int (13 GB) \\ \hline
Heap Sort & 1.8 billion long int (14 GB) \\ \hline
Count Sort & 1.8 billion long int (14 GB) \\ \hline
\end{tabular}
\end{table}
In our experimental setup, we employed a basic jumping algorithm to trigger transfer of execution: a remote page fault counter is incremented on each remote pull, and whenever the counter reaches a threshold value, the process jumps its execution to the remote machine and the counter is reset. We tested the algorithms with threshold values ranging from 32 up to 4M.
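The policy just described reduces to a few lines; sketched below (a model of the experiment's logic, not the actual kernel code):

```python
def run_policy(fault_stream, threshold):
    """Count remote page faults; jump and reset the counter whenever it
    reaches the threshold. Returns the number of jumps triggered."""
    counter, jumps = 0, 0
    for is_remote_fault in fault_stream:
        if is_remote_fault:
            counter += 1
            if counter >= threshold:
                jumps += 1    # jump execution to the remote node...
                counter = 0   # ...and reset the counter
    return jumps

# 100 consecutive remote faults with a threshold of 32 -> 3 jumps
assert run_policy([True] * 100, threshold=32) == 3
```

Small thresholds jump eagerly (as Linear Search prefers), while large thresholds jump rarely, degenerating toward Nswap behavior.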
For each algorithm, we measure execution time as well as the network traffic generated, and compare ElasticOS against network swap, hereafter termed Nswap. To isolate the gains to the benefit of jumping alone, rather than to implementation differences, our Nswap baseline is the ElasticOS code with jumping disabled: the process is pinned on one machine but uses the memory of a remote machine as swap space. In our experiments, both ElasticOS and Nswap spanned two machines, and Emulab isolates the testbed's networking and execution from external disturbances.
\begin{table}
\centering
\caption{Micro-benchmarks of ElasticOS primitives.}
\label{tab:prim}
\begin{tabular}{|l|l|l|}
\hline
Primitive & Latency & Network Transfer \\
\hline \hline
Stretch & 2.2ms & 9KB \\
\hline
Push & 30-35us & 4KB \\
\hline
Pull & 30-35us & 4KB \\
\hline
Jump & 45-55us & 9KB \\
\hline
\end{tabular}
\end{table}
\subsection{Micro-benchmarks}
An important metric when evaluating ElasticOS is the performance of each individual primitive. These are summarized in Table~\ref{tab:prim}, based on our measurements on Emulab D710 nodes. Jumping is very fast, taking only 45-55 microseconds. This is substantially lower than reported numbers for process or VM migration, which are measured in seconds (e.g., one benchmark states CRIU's downtime is roughly 3 seconds~\cite{criu_perf}). Stretching is performed only once -- when a decision is made that the process would benefit from scaling in the future. We measured pushing and pulling at 30-35 microseconds -- roughly the time to submit the request and transfer a page (4KB) of data across the network.
For jumping to be effective in speeding up execution in ElasticOS, there must be locality: the cost of a single jump must be less than the cost of the remote page pulls it avoids. Given our microbenchmarks, a jump is profitable once the process saves at least two remote page pulls. As we show next, the locality is much greater than this, resulting in substantial speedups.
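Using the midpoints of the measured latencies in Table~\ref{tab:prim}, the break-even point works out as follows:

```python
import math

jump_us = 50.0          # measured jump latency, midpoint of 45-55 us
pull_us = 32.5          # measured pull latency, midpoint of 30-35 us

# A jump pays off once it avoids more remote pulls than its own cost covers.
break_even_pulls = math.ceil(jump_us / pull_us)
assert break_even_pulls == 2   # a jump must save at least two remote pulls
```

Any island of locality containing more than two remotely resident pages therefore makes jumping cheaper than pulling.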
\begin{figure}[ht]
\centering
\includegraphics[scale=0.65,trim={0 1.5cm 0 1.5cm}, clip]{execution_time.eps}
\caption{Execution Time Comparison.}
\label{fig:execution}
\end{figure}
\subsection{Execution Time and Network Traffic}
There are two key metrics to consider when comparing ElasticOS (with jumping, pulling and pushing), to network swap (just pulling and pushing). The first is overall execution time.
Here, the key premise behind jumping is that to exploit locality, we should transfer execution to where the data is, rather than pull in the data
to where the execution is. The second is the amount of network traffic -- jumping needs to transfer context (e.g., the current stack), and pulling/pushing
transfers pages.
In Figure~\ref{fig:execution}, we show our measured average execution time for both Nswap and ElasticOS for each of the algorithms we evaluated. These execution times are averaged over four runs using the threshold that achieves the most improvement. In the best case, ElasticOS shows substantial performance benefits for most algorithms: Linear Search experienced about an order of magnitude speedup, Depth First Search (DFS) achieved about a 1.5x delay improvement, while Dijkstra's algorithm achieved no speedup.
Table \ref{tab:jumping} lists the threshold value at which each algorithm achieved its best performance in ElasticOS, along with the total number of jumps and the jumping frequency at that threshold. The jumping rate ranges from less than once per second to hundreds of times per second.
While Figure~\ref{fig:execution} represents the best case, we were also interested in whether there are universal threshold values that achieve performance improvements -- perhaps not the best ones -- regardless of the algorithm. Our analysis found that with any threshold value above 128, ElasticOS performs better than Nswap for every algorithm, in delay, network overhead, or both.
The use of jumping to exploit locality improves execution time by enabling more pages to be accessed locally rather than across a network (which is orders of magnitude slower). It also reduces the amount of network traffic, even accounting for the data transferred to perform a jump. Figure \ref{fig:traffic} shows our measured results for each of the algorithms tested. ElasticOS significantly reduces network traffic for all algorithms tested -- from a 5x reduction for Linear Search to about a 2x reduction for DFS. By avoiding repeated swapping in and out to remote machines through lightweight jumping, we save a large amount of data and control traffic associated with avoidable remote page faults. Moreover, even where ElasticOS achieves no delay improvement, it can still reduce network traffic: Dijkstra's algorithm saw no speedup, yet its 520 jumps (Table~\ref{tab:jumping}) reduced its network overhead by 70\%. Examining Dijkstra's behavior, its initial set of jumps, made before settling down to execution on one machine, produced substantial overhead savings.
\begin{table}
\centering
\caption{Jumping Thresholds.}
\label{tab:jumping}
\begin{tabular}{|l|l|l|l|}
\hline
Algorithm & Threshold & Number & Jumping \\
& & of jumps & frequency \\
& & & (jumps/sec)\\
\hline \hline
DFS & 8K & 180 & 0.6 \\
\hline
Block Sort & 512 & 1032 & 12.3 \\
\hline
Heap Sort & 512 & 3454 & 12.4 \\
\hline
Linear Search & 32 & 3054& 157.4\\
\hline
Count Sort & 4096 & 198& 0.6\\
\hline
Dijkstra & 512 & 520& 1.4\\
\hline
\end{tabular}
\end{table}
\begin{figure}[ht]
\includegraphics[scale=0.65,trim={0 1.2cm 0 1.5cm}, clip]{network_traffic.eps}
\caption{Network Traffic Comparison.}
\label{fig:traffic}
\end{figure}
\subsection{Understanding Application Specific Behavior}
We previously showed that the algorithms see varying degrees of improvement. While the simple explanation is locality, here we examine three of the algorithms in detail to understand this behavior.
\subsubsection{Linear Search}
For Linear Search, the memory access pattern is simple and predictable: the memory address space is accessed linearly. As a result, consecutive memory pages tend to age together in the LRU lists and end up being swapped to the remote machine together. When a process jumps toward a remote page, it is therefore very likely to find a chunk of consecutive pages to access, exploiting the locality of these pages and avoiding a significant amount of swap overhead. Figure \ref{fig:lienar_time_ext} shows delay improvements for Linear Search with respect to the jumping threshold. Linear Search tends to perform better with a smaller counter threshold; jumping early pays off when the address space is accessed linearly. Accordingly, Table~\ref{tab:jumping} shows that Linear Search has the highest jumping frequency and the lowest threshold value. We also observe that as the jumping threshold increases, jumping occurs less often and eventually not at all, which explains why the ElasticOS delay curve converges to Nswap.
\begin{figure}[ht]
\includegraphics[scale=0.6,trim={0 1.5cm 0 1.5cm}]{linear_search_execution_time.eps}
\caption{Linear Search Execution Time.}
\label{fig:lienar_time_ext}
\end{figure}
\subsubsection{Depth First Search}
On the other hand, Depth First Search has a non-linear memory access pattern. The search starts at the root node and traverses the graph branch by branch, from the root to the end (depth) of each branch. While the graph nodes are laid out in a certain order in memory,
the access pattern of DFS does not match this layout. This increased randomness of page accesses means there is less locality to exploit on each jump than for Linear Search, and hence less gain over Nswap.
Figure \ref{fig:dfs_time_ext} shows DFS execution times for various counter threshold sizes. ElasticOS achieves at best about a 1.5x delay improvement over Nswap across a wide range of counter thresholds, namely those larger than 64; for threshold values of 64 or less, DFS performs worse. Figure~\ref{fig:dfs_jumps_threshold} shows that when the threshold value is very small, DFS experiences a large number of jumps. Our tests also showed that DFS performs best with a threshold value that is large compared to the other algorithms, as shown in Table~\ref{tab:jumping}.
\begin{figure}
\includegraphics[scale=0.6,trim={0 1.5cm 0 1.5cm}]{dfs_exec_counter_size.eps}
\caption{Depth First Search Execution Time.}
\label{fig:dfs_time_ext}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.6, trim={0 1.5cm 0 1.5cm}]{dfs_exec_jumps_threshold.eps}
\caption{Depth First Search Number of Jumps.}
\label{fig:dfs_jumps_threshold}
\end{figure}
The shape of the graph can also impact DFS's memory access pattern. For example, increasing the depth of the graph makes branches longer, so a single branch occupies more memory pages and is more likely to have pages located on both the local and remote machines. This increases the chances of jumping frequently and performing poorly. Figure \ref{fig:dfs_time_depths} shows DFS performance on ElasticOS for different graph depths with a fixed jumping counter size of 512. Increasing the graph depth eventually results in poorer performance, and Figure \ref{fig:dfs_jumps_depths} shows that this poorer performance coincides with excessive jumping on deep graphs. For ElasticOS to perform well at such depths, the jumping counter size must be increased beyond 512 to avoid jumping too often.
\begin{figure}
\includegraphics[scale=0.6,trim={0 1.5cm 0 1.5cm}]{dfs_exec_depth.eps}
\caption{Depth First Search Performance on Different Depths.}
\label{fig:dfs_time_depths}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6,trim={0 1.5cm 0 1.5cm}]{dfs_exec_jumps.eps}
\caption{Depth First Search Jumps on Different Depths.}
\label{fig:dfs_jumps_depths}
\end{figure}
\subsubsection{Dijkstra's Algorithm}
ElasticOS achieved very little gain over Nswap when executing Dijkstra's algorithm.
Dijkstra's algorithm scans through an adjacency matrix and records information about the shortest path in a separate array. It does not necessarily access all nodes in the adjacency matrix, because some nodes are not connected or a path was excluded for being too long. Since the algorithm keeps the shortest-path information in a separate array, it accesses each adjacency-matrix entry only once. As a result, it does not access memory frequently, touches only part of the allocated memory, and incurs few remote page faults during most of its execution. Because jumping saves only the time wasted on remote page faults, Dijkstra gains little delay improvement: with so few remote page faults, it rarely jumps. Figure \ref{fig:max_time_no_jump} confirms that Dijkstra's algorithm spends most of its execution time on one machine without jumping; our experiments showed that only a relatively small set of jumps happened at the beginning, with execution remaining on one machine thereafter.
\begin{figure}
\includegraphics[scale=0.6,trim={0 1.5cm 0 1.5cm}]{max_time_spent_without_jumping.eps}
\caption{Maximum Time Spent on a Machine without Jumping.}
\label{fig:max_time_no_jump}
\end{figure}
\subsection{EOS Features}
\subsubsection{Improving The Locality Of Reference}
Imagine an application that iterates several times over a working set larger than what can fit in physical memory. Normally, it will thrash constantly. ElasticOS deals with this scenario by allowing the process to jump to the remote machine where some of the required data resides in memory, once it detects that ``too many'' pages are being pulled from the remote machine. Caching reveals that there is temporal locality in the referencing of data (and code). This locality is reflected spatially in the storage of pages on remote nodes, creating ``islands of locality'' on networked nodes. Jumping allows ElasticOS to move execution to an island of locality on another node and make considerable progress while operating primarily on the pages in that island, with limited network overhead from pushing and pulling pages outside it. This is a very profitable trade-off, since jumping involves copying very few memory pages.
Another scenario that shows the power of jumping is when two processes on two different nodes in a DSM system contend for write access to a memory page. This forces the coherency protocol to constantly invalidate and copy the page between the two nodes. In contrast, ElasticOS allows one of the processes to jump closer to the other, so that both can contend for the page locally via local synchronization. In this way, ElasticOS avoids the non-scalable invalidate-copy operations on that page characteristic of DSM-like approaches. \\
\subsubsection{Improving Scalability}
Fine-grained jumping of execution can provide better scalability than DSM and remote-paging/network-swap based approaches. Jumping treats the memory of all nodes as a first-class resource. For example, consider a process stretched over two machines. Forcing it to jump from one to the other after pulling too many pages turns remote memory into local memory, effectively increasing the pool of memory the process sees as local physical memory. Network swap-based techniques move only memory back and forth, not execution, and hence cannot exploit islands of locality during execution to reduce network overhead. \\
\begin{figure}
\includegraphics[width=\linewidth]{fig2.PNG}
\caption{Example Natural Groupings Within Nodes In A Graph.}
\label{fig:Example Natural Groupings Within Nodes In A Graph.}
\end{figure}
\subsubsection{Improving Page Placement}
Jumping also opens the door to optimizing page placement within the system. Remote-paging-based approaches and DSM leave no option but to keep as much memory as possible on the local machine. ElasticOS can instead employ a multi-node page placement algorithm that exploits the natural groupings of memory pages -- the islands of locality -- that arise from reference patterns.
For example, when traversing the graph shown in Figure~\ref{fig:Example Natural Groupings Within Nodes In A Graph.}, a good page placement may group nodes of the same color on the same machine, so that when traversing an edge connecting the brown nodes to one of the yellow nodes, execution can transfer to the machine where the latter are located.
\subsubsection{Zero-downtime Vertical Scaling}
Stretching in concert with jumping, pulling and pushing can enable zero-downtime vertical elasticity
by amortizing the cost of atomic handover discussed in section 2.3.2 since it allows the process to distribute
its identity over multiple machines. That is, ElasticOS typically first stretches the address space of a process
to span the memory resources of another machine, and eventually fills that stretched node’s memory with
pages through normal pushing and pulling. At some point, execution of the process jumps to the stretched
remote node when it is beneficial to do so, namely when there is an island of locality to exploit on the remote
node. This approach naturally lends itself to vertical scaling, i.e. scaling up (or down), when the remote
node is substantially more resource-rich (or poor) than the initial machine. ElasticOS’s approach can be
seen to be more general and flexible than conventional process and VM migration approaches discussed in
section 2.2.6, which confine the migrated process or VM to one presumably larger target machine, limited
to the resources of the target machine. In contrast, ElasticOS permits a process or VM to span not only the
target machine but other machines nearby. Also, unlike standard migration approaches, there is no need to
freeze the process during an atomic handover phase from one machine to another.\\
Having described the major properties of ElasticOS, we summarize in the following table the benefits of ElasticOS compared to the other approaches to elasticity described earlier in related work.
\begin{table*}[t]
\begin{center}
\begin{tabular}{||c c c c c c c||}
\hline
& ElasticOS & DSM & MPI &PGAS&MapReduce&Remote Paging/Network Swap \\[0.5ex]
\hline\hline
Vertical Scaling& Yes& No& No &No& No& No\\
\hline
Horizontal Scaling& Yes& Yes& Yes& Yes& Yes& Yes\\
\hline
Transparent &Yes &Yes& No& No& No& Yes\\
\hline
Coherency Messaging& No& Yes& No& Yes& No& No\\
\hline
Explicit Messaging& No& No& Yes &Yes& No &No\\[1ex]
\hline
\end{tabular}
\caption{Comparison of the properties of ElasticOS to related work.}
\end{center}
\end{table*}
ElasticOS provides a single virtual address space, which can map physical memories from different
nodes. Every node is identical, containing several modules as shown in Figure \ref{fig:ElasticOS Components}, each of which has a
different function. In addition, ElasticOS enables execution to jump among machines that are contributing
their physical memories to the single virtual address space.
The EOS manager continuously queries the system monitor to detect variations in real-time memory demand. When it finds that the active memory set of a memory-intensive application is too big to fit into RAM, it issues a command to start a new VM instance and instructs the multi-node scheduler to stretch the application over the two nodes.
Stretching, which resembles a remote fork, is initiated by a special signal sent to the checkpoint module, which creates a snapshot of the process, bundles it, and sends it to the restart module on the remote node. That module then uses the snapshot data to create a clone of the original process. The application then resumes, and its clone remains in a suspended state.
With stretching done, the process’s presence extends to multiple nodes. At this point, a page balancer starts selecting pages on the home node and pushing them to the slave node. Additionally, any modifications to the process’s metadata, such as new memory mappings or opened files, are also applied to the clone.
\begin{figure}
\includegraphics[width=\linewidth,keepaspectratio]{fig3.PNG}
\caption{ElasticOS Architecture Highlight.}
\label{fig:ElasticOS Architecture Highlight.}
\end{figure}
To maximize the opportunities for exploiting locality of reference, ElasticOS tries to keep the elasticized process near the ``better'' part of its memory. The memory-access monitor continually observes the process’s page faults. If one of them references a remote page, the fault is trapped and forwarded to the elastic page fault handler, which restores access to the page in question by pulling it, and updates its internal per-process page fault history to record the occurrence of this fault type. Over time, the collected history provides a picture of where the process’s memory accesses are going. The EOS manager can then make an informed decision: either keep the process running locally, or instruct the multi-node scheduler to force the process to jump to the remote machine. The latter should result in more machine-local memory accesses and, thus, better performance.
When vertical scaling is desired, ElasticOS can resize the compute node by starting a new instance that is larger than the one currently in use and stretching the application to it. With more aggressive page balancing, migration of the bulk of the process’s memory can be accelerated. Then, once the process jumps to the larger node, it can be pinned there, forcing all remaining pages to be pulled from the original node and thereby migrating the whole process. Once all these steps are done, the original node can be safely shut down. In contrast to other proposals, ElasticOS’s approach to vertical scaling imposes no significant downtime, since it operates while the process is active.
Vertical scaling is limited by the physical size of the server; beyond that, horizontal scaling is a must. Additionally, cloud service pricing policies may favor a set of small machines over one large machine in terms of cost efficiency. In such cases and others, horizontal scaling may be more feasible and favorable.
ElasticOS’s framework supports horizontal scaling by stretching the target process address space to the newly started node. As before, page balancing and process jumping then kick in to maximize locality and balance the load on both machines.
Unlike other proposed distributed operating systems such as fos [79] and Barrelfish [21], ElasticOS can be built on top of a commodity operating system. The next section describes how it can be built on top of the well-known Linux kernel, which in essence eliminates the need to adopt new programming models. Furthermore, ElasticOS does not require any modifications to application source code, in contrast to many other approaches, including MPI, PGAS, and MapReduce. To compensate for the lack of information passed from software developers via programming-language annotations or specialized APIs, ElasticOS instead monitors system events and memory reference patterns to infer key information that helps partition the memory footprint of the elasticized process in a near-optimal way.
Linux uses multi-level page tables [15] to map virtual addresses to their physical counterparts. These are organized in a tree-shaped index, with page table entries (PTEs) as leaf nodes. Each PTE maps the starting address of a virtual page to a physical page frame number if a valid flag is set. If the flag is clear, the page is not valid (i.e., it has not been allocated, or it has been swapped out), in which case the operating system handles fetching the page from swap or creating a new mapping to a fresh physical page. ElasticOS, which is built on top of Linux, extends this by creating a virtual swap device that maps the physical memory of another node as swap space; each node in a system of $N$ nodes thus has at least $N-1$ virtual swap devices. This approach reduces the steps needed to locate a remote page, allowing an elasticized process to quickly find and fault in a remote page whenever it is referenced.
When a page is sent over the wire upon a request from another node, the page is reclaimed and an invalid PTE is placed in its slot in the process’s address space. This entry also serves as a handle to the page’s new location. These steps guarantee that only one copy of each page is active at any given time, a design choice that eliminates the need for an expensive coherence protocol.
\fi
\section{Introduction}
We are in the midst of a significant transition in computing, in which we consume infrastructure rather than build it. Applications thus have the power of a dynamic infrastructure underlying them, yet many struggle to leverage that flexibility. In this paper, we propose supporting this flexibility at the operating system level, with new primitives for scaling.
To gain some context on the challenge with scaling, we first discuss how it is predominantly handled today.
The most straightforward option, which requires no changes to applications, is simply to get a bigger (virtual) machine as an application's load increases.
Cloud providers, such as Amazon~\cite{amazon}, offer a wide range of machine sizes costing anywhere from less than a penny per hour to a few dollars per hour. For cost efficiency, companies wish to use the ``right'' size machine, which may change over time. But transitioning from one VM size to another can pose challenges. In some cases, we can take snapshots (e.g., with CRIU~\cite{CRIU}) and migrate the application to a bigger/smaller VM, but this can be disruptive, and managing the application requires scripts and other infrastructure to trigger scaling.
An alternative is to re-write the application with scaling in mind. To leverage scaling, applications are commonly built around frameworks such as Hadoop~\cite{hadoop}, Apache Spark~\cite{spark}, MPI~\cite{mpi} or PGAS~\cite{pgas}, which are designed to execute tasks on a varying amount of distributed resources. The problem here is twofold. First, the application must be built for the framework -- a significant challenge (requiring a re-write) for any existing application, forcing developers to evaluate and become fluent in the latest frameworks and potentially adapt the application as the frameworks change. Second, and perhaps more challenging, not every application fits into one of these frameworks.
Another approach to scaling is to replicate VMs/containers when an application becomes popular and requires more resources. This too burdens the programmer, who must synchronize shared data and state across multiple replicas and script the application to spawn or delete replicas depending on load.
In short, in each case, \emph{the burden of scaling is placed on programmers}.
We argue that developers should not need to be experts in cloud management and other frameworks in addition to being fluent in programming and their application domain. Instead, the operating system should provide more support. Broadly speaking, the job of an operating system is to make the life of an application developer easier (through abstraction). A modern OS provides virtual memory abstractions, so developers do not have to coordinate memory use among applications; network socket abstractions, so developers can send messages without needing to be intimately familiar with the underlying network protocols; and many other abstractions (file system, device, multi-tasking), all to support developers. \emph{We propose that scaling should be an OS abstraction}.
\textbf{Related Work:}
We are not the first to propose that operating systems should support scaling. Approaches that scale memory are popular and include efforts such as RAMCloud~\cite{RamCloud}, which requires refactoring in user space to utilize its memory scaling capabilities. An early approach to sharing memory, DSM~\cite{DSM, mether,KaiLimemory,apollo,treadmarks,farm}, suffered from scaling issues, but more recently disaggregation-based approaches to memory have emerged that center on transparent scaling of memory behind the swap interface, such as Nswap, Infiniswap, X-Swap and Memx~\cite{Nswap, Infiniswap, X-Swap,memx}.
Scaling of computation approaches include process migration to machines with more resources~\cite{CRIU, BLCR, MOSIX,smith1988survey,milojivcic2000process,stellner1996cocheck}, in addition to the scaling frameworks and replication methods mentioned previously. Approaches to accelerate process migration~\cite{fast96,Kerrighed} have been proposed to hide the latency of migration by copying most of the process state in the background and only copying a small delta to the new machine after halting the process.
\iffalse
such that once the decision to migrate is made, the process state is copied in the background to the remote machine while the process continues to run locally, and only at the final instant is the process halted and snapshotted, whereupon a small delta change in state is copied to the remote machine. This approach enables the process to restart quickly on the remote machine, reducing the latency seen by the process during migration.
\fi
Single system image (SSI) OSs such as Kerrighed, MOSIX, Sprite and Amoeba~\cite{Kerrighed,MOSIX,sprite-migrate,amoeba}
have been created to support operation across a distributed cluster of machines. These approaches typically employ a process migration model to move computation among cluster nodes, and they require applications to be recompiled for these specialized OSs.
These prior efforts in OS scaling suffer from a variety of limitations. Network swap-based approaches, while a step in the right direction of disaggregation in the data center, miss the opportunity to exploit \emph{joint disaggregation} of computation and memory for improved performance. Execution is typically assumed to be pinned on one machine, while memory pages are swapped back and forth across the network. This can result in excessive swapping of pages over the network. In these cases, moving the computation to a remote machine that holds a cluster of localized data in its memory would result in substantially faster execution and lower network overhead, as we show later.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{elasticos_vision.pdf}
\caption{ElasticOS Vision for Cloud Data Centers.}
\label{fig:Vision}
\end{figure}
Combining current network swap approaches with existing process migration techniques to alleviate excessive network swapping would suffer from two major limitations. First, each decision to move computation would incur the overhead of copying the entire address space, imposing a significant load on the network. Second, even with accelerated process migration, there is a substantial delay between when the decision to migrate is made and when the migration completes; by then, the conditions that triggered the migration may already be obsolete.
\begin{figure*}[th]
\centering
\includegraphics[scale=0.50]{elasticos_abstractions.pdf}
\vspace{-0.2in}
\caption{Illustration of ElasticOS abstractions. Each box labeled with a number above is a compute node,
within which the shaded boxes represent individual pages. Starting with execution on a single
machine in (0), when memory nears capacity we stretch to two nodes in (1) and balance the pages in (2). We then push and
pull pages in (3), with the red shaded pages going from node 1 to 2 (push) and from node 2 to 1 (pull). Finally, in (4) and (6) we
see too many page faults (each resulting in a pull), so we decide to jump from node 1 to 2 in (5) and from node 2 to 1 in (7), respectively.}
\label{fig:abstractions1}
\end{figure*}
\textbf{Introducing ElasticOS:}
In response to these shortcomings, we introduce four primitives to realize the scaling OS
abstraction -- \emph{stretch}, \emph{jump}, \emph{push}, and \emph{pull}. These scaling abstractions are designed to be transparent, efficient, and practically useful. Our approach is inspired by an early work that
hypothesized elasticizing operating systems as a hot research topic, but did not build a working implementation
of the proposed concept~\cite{hotos13eos}. \emph{Stretch} is used when an application becomes
overloaded (e.g., a lot of thrashing to disk is occurring), so the operating system \emph{stretches}
the application's address space to another machine -- extending the amount of memory available to the application.
Push and pull allow memory pages to be transferred between the machines to which the application has been stretched,
either proactively to optimize placement, or reactively
to make data available where it is needed.
\emph{Jump} allows program execution to transfer to a machine to which the application has been stretched. Unlike heavyweight process migration, our jump primitive is a lightweight transfer of execution that copies only the small amount of state needed to begin executing immediately on the remote machine, such as the register state and the top of the stack. Any additional state that is needed is faulted in via pulls from the rest of the distributed address space. Having both jump and push/pull allows the OS to choose between moving the data to
where the execution needs it, and moving the execution to where the data is. This supports the
natural, but not necessarily perfect, locality that exists in applications.
To demonstrate the feasibility of this scaling approach, we extended the Linux kernel with these four primitives, and call the
extended Linux, ElasticOS.
Figure~\ref{fig:Vision} provides a high level view of ElasticOS.
We see that an instance of ElasticOS is capable of spanning a
number of nodes in the data center, and that the number of spanned nodes can
elastically scale up or down depending upon application demand. The application is executed
within ElasticOS, and the scaling primitives are used to support this execution across a distributed collection
of resources.
To demonstrate the desirability of these four primitives,
we evaluated a set of applications with large memory footprints and compared against network swap, which
supports the push and pull primitives and has itself shown the performance benefits of transparently scaling
memory resources across multiple machines. We illustrate the additional
benefit of also transparently scaling computing resources across multiple machines, forming
a system with joint disaggregation of memory and computation.
Our evaluation shows up to a 10x speedup over network swap, as well as a 2x to 5x reduction in
network transfer.
In summary, we make the following contributions.
\begin{itemize}
\item Introduce scaling as a new OS abstraction, specifically with four primitives: stretch, push, pull, and jump.
\item Provide an architecture and implementation of these abstractions in Linux.
\item Demonstrate through an evaluation on Emulab servers that ElasticOS achieves up to a 10x speedup over network swap across a range of applications, and up to a 5x reduction in network overhead.
\end{itemize}
\if{false}
\begin{figure}
\includegraphics[width=\linewidth]{fig1.PNG}
\caption{Vision.}
\label{fig:Vision}
\end{figure}
In the past few years cloud computing has been gaining popularity due to its ability to help organizations scale up/down their compute infrastructure to meet the requirements of dynamically changing workload characteristics. Before that, enterprises built their own data centers and had to over-provision in order to keep up with varying compute power needs. By sharing a pool of resources, cloud computing created the illusion of unlimited resources. To understand the importance of this emerging technology consider an enterprise that offers a web based service housed in a classical data center. It is very crucial for the service to handle varying volume sizes, forcing the enterprise to over-provision when planning for capacity to meet business needs. This over-provisioning renders the data center resources under utilized during off-peak hours, days, or months. In contrast, consider the same service running in a cloud computing environment. It can respond to fluctuations of service demand by either one of two ways. First, it can scale horizontally by spinning up additional compute nodes to handle larger volumes during peak demand hours, and then, scale down by taking those additional nodes out of service and shut them down during off-peak hours. Second, it can scale up the service by migrating to a larger compute node during those peak hours, and scale down again by migrating back to a smaller one during off-peak ones. Certainly, this flexibility is made possible by the cloud’s promise of shared resources permitting on-demand compute power availability, which helps customers maximize their return on investment. \\
Cloud computing offers the ability to use more compute power on demand, but it is up to the cloud application to actually benefit from this offering. And what can limit its ability to do so is its architecture. In the web service application mentioned above, one approach to achieve dynamic scaling is that more web servers can be run in newly spawned nodes and then load balancers would redistribute the workload among all active servers. That may not be the case, however, for legacy applications, such as simulation software, or large graph analysis. For elasticizing such classes of applications, major recoding efforts and reconfigurations are necessary. Distributed systems research produced several works that seek to address this problem. Previous
approaches such as distributed shared memory (DSM) \cite{Farm,munin,KaiLimemory,mether,apollo}, MapReduce \cite{mapreduce}, message passing interface (MPI) \cite{mpi}, partitioned global address space (PGAS) \cite{de2015partitioned}, and remote paging based approaches \cite{memx,Infiniswap,Nswap}, can be fitted with machine hot-plugging and hot-removal capabilities to allow applications to scale horizontally. Also, process and virtual migration proposals can be used for vertical scaling. Adopting these approaches, however, either degrades the application’s performance, or involve significant efforts for elasticizing applications that may render them infeasible or undesirable. Clearly, software remains a key problem to achieving elasticity today. \\
This paper addresses the challenge described above. It proposes offering elasticity as a generic service supported by the software system. This service allows applications to scale up, scale out, and scale down to accommodate changing workload needs in an automatic and transparent manner to software developers. This new service is implemented as an integral part of the operating system and allows processes to stretch the limits of available resources beyond one machine, while supporting process mobility between nodes to minimize performance loss stemming from resource distribution.We introduce ElasticOS, a new operating system built on top of Linux, which is a realization of the vision put forward above. Figure 1 provides a high level view of ElasticOS. If each element of the grid represents a node in a datacenter, then we see that an instance of ElasticOS is capable of spanning a
number of nodes in the data center, and that the number of spanned nodes can elastically scale up or down depending upon application demand. This new operating system supports four new primitives: (1) stretching the address space of processes across a cluster of compute nodes, allowing processes to jump (2) from one
machine to another within the set of nodes participating in the stretched address space, to maximize locality, pushing (3) pages between nodes for optimal placement, and pulling (4) memory pages to serve remote page faults. We term our approach of stretching virtual memory address space across multiple physical nodes as
elastic virtual memory. The focus of our efforts towards elasticizing operating systems for cloud applications will be on jointly elasticizing memory and computation. Outside of the scope of our work is elasticizing other elements
of the OS, such as I/O devices and file systems.\\
In the remainder of this disseration, we first describe related work that may help achieve software elasticity in Chapter 2. In that chapter, we also identify what is still missing in the state of the art. Next, in Chapter 3, we first describe our approach to address those gaps. This is followed by a detailed description of
the architecture and components that were introduced into the Linux kernel to achieve ElasticOS. Chapter 4 describes our evaluation of ElasticOS, including analyzing the latency performance versus networked swap, as well as a performance analysis of individual components of ElasticOS. We conclude with a discussion of
future work in Chapter 5.
\fi
\newcommand{\mysubsection}[1]{\smallskip \noindent \textbf{#1}}
\title{Modeling realistic degradations in non-blind deconvolution}
\name{J\'er\'emy Anger$^\dagger$, Mauricio Delbracio$^{\S}$, and Gabriele Facciolo$^\dagger$\thanks{We thank Jean-Michel Morel for fruitful comments and discussions. Work partly financed by Agencia Nacional de Investigaci\'on e Innovaci\'on (ANII, Uruguay) grant FCE\_1\_2017\_135458; Office of Naval research grant N00014-17-1-2552, Programme ECOS Sud -- UdelaR - Paris Descartes U17E04, DGA Astrid project << filmer la Terre >> n$^{\circ}$ANR-17-ASTR-0013-01, MENRT; DGA PhD scholarship jointly supported with FMJH.}}
\address{
$^\dagger$CMLA,
ENS Cachan,
CNRS,
Universit\'e Paris-Saclay,
94235 Cachan,
France\\
$^\S$IIE, Universidad de la Rep\'ublica, Uruguay
}
\begin{document}
\maketitle
\begin{abstract}
Most image deblurring methods assume an over-simplistic image formation model and as a result are sensitive to more realistic image degradations.
We propose a novel variational framework that explicitly handles pixel saturation, noise, quantization, as well as a non-linear camera response function due to, e.g., gamma correction.
We show that accurately modeling a more realistic image acquisition pipeline leads to significant improvements, both in terms of image quality and PSNR.
Furthermore, we show that incorporating the non-linear response in both the data and the regularization terms of the proposed energy leads to a more detailed restoration than a naive inversion of the non-linear curve.
The minimization of the proposed energy is performed using stochastic optimization. A dataset consisting of realistically degraded images is created in order to evaluate the method.
\end{abstract}
\begin{keywords}
Non-blind deconvolution, image deblurring, saturation, quantization, gamma correction
\end{keywords}
\section{Introduction}%
\label{sec:intro}
One of the major sources of image blur is due to camera motion during the sensor integration time.
This phenomenon is most visible in low light conditions, when the integration time must be long enough to capture a minimum number of photons. In this situation, any strong light source in the scene will likely lead to pixel saturation, since the dynamic range to be captured is too large for the sensor.
Most motion deblurring strategies consist of estimating a blur kernel (which represents the effect of the camera motion in the image plane) and then deconvolving the blurred image with the estimated kernel. In this paper, we propose a non-blind deconvolution algorithm, i.e., we assume that the kernel is known.
The simplest image acquisition model is
\begin{equation}
v = u \ast k + n,
\label{eq:model-simplest}
\end{equation}
where $u$ represents the sharp noiseless ideal image, $\ast{}$ denotes the convolution operator, $k$ is a known blurring kernel which we assume stationary, $v$ is the observed blurry image, and $n$ is a realization of white Gaussian or Poisson noise, depending on the formulation.
The inverse problem defined in~\eqref{eq:model-simplest} is linear but severely ill-posed. A large body of work seeks to restore images under this formulation~\cite{Wang2014a}.
Most methods are cast as the minimization of an energy of the form
\begin{equation}
E(u) = D(u;v)+ \lambda R(u), \label{eq:energy_form}
\end{equation}
where $D(u;v)$ (denoted $D(u)$ from now on) is a data fitting term that enforces the image formation model~\eqref{eq:model-simplest} and $R(u)$ is a regularizer that imposes prior knowledge on the solution. The total variation penalization~\cite{Rudin1992} is often used:
\begin{equation}
R(u) = \text{TV}(u) = \int |\nabla u({\mathbf x})| d{\mathbf x}.
\label{eq:tv}
\end{equation}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.32\linewidth}
\overimg[trim=0 0 0 0,clip,width=\linewidth]{exp/realistic/8/nonlinear_256}{\scriptsize 18.41dB}
\caption{Degraded image.}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\overimg[trim=0 0 0 0,clip,width=\linewidth]{exp/realistic/8/out_naive_best_nonlinear_256}{\scriptsize{25.56dB}}
\caption{$D_{\gamma\text{inv}}$-TV.}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\overimg[trim=0 0 0 0,clip,width=\linewidth]{exp/realistic/8/out_gamma_quantize_tv_best_nonlinear_256}{\scriptsize{28.60dB}}
\caption{\textbf{$D_\text{full}$-TV$^\gamma$.}}
\end{subfigure}%
\vspace{-0.5em}
\caption{
Image deconvolution can be significantly improved by defining a data fitting term that considers the whole image pipeline (quantization, noise, saturation, gamma correction) as shown in (c). Details best seen in the electronic version.}
%
%
\label{fig:results-realistic}
\vspace{-0.2em}
\end{figure}
Although interesting, the model~\eqref{eq:model-simplest} is over-simplistic.
In more realistic scenarios, the physical image acquisition and the complex processing pipeline introduce non-invertible, non-linear degradations such as quantization and compression.
Under these circumstances, a more accurate forward model is needed:
\begin{equation}
v = Q_q(S_c(u \ast k + n)^\frac{1}{\gamma}),
\label{eq:model-complex-truenoise}
\end{equation}
where $S_c(u) = \min(c, u)$ is the pixel saturation operator, $Q_q(u) = q \cdot \operatorname{round}(\frac{u}{q})$ is the pixel quantization with step $q$, and $\gamma$ is a gamma correction coefficient, generally introduced by the camera manufacturer (usually $q\!=\!\frac{1}{256}$, as $u({\mathbf x}) \!\in\! [0,1]$).
To avoid modeling the effects of the non-linear processing on the noise, as done by Whyte~\textit{et al}.~\cite{Whyte2014a}, we approximate the forward model~\eqref{eq:model-complex-truenoise} by
\begin{equation}
v = Q_q(S_c(u \ast k)^\frac{1}{\gamma}) + n,
\label{eq:model-complex}
\end{equation}
where $n$ is assumed white and Gaussian.
While this is an approximation, the effect of the gamma correction on shot noise (which follows a Poisson distribution) is similar to that of a variance-stabilizing transform~\cite{Anscombe1948}.
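As a concrete illustration, the degradation pipeline of~\eqref{eq:model-complex} can be simulated in a few lines of numpy. This is a minimal sketch with illustrative parameter values; the function name and defaults are ours, not taken from an existing implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(u, k, c=0.8, q=1.0 / 256, gamma=2.2, sigma=0.0, rng=None):
    """Simulate v = Q_q(S_c(u * k)^(1/gamma)) + n for a linear image u in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = fftconvolve(u, k, mode="same")                      # u * k
    saturated = np.minimum(c, blurred)                            # S_c: clip at sensor limit c
    compressed = np.clip(saturated, 0.0, None) ** (1.0 / gamma)   # gamma correction
    quantized = q * np.round(compressed / q)                      # Q_q: quantize with step q
    return quantized + sigma * rng.standard_normal(u.shape)       # + white Gaussian noise n
```

With `sigma=0` the pipeline is deterministic, which makes it easy to build noiseless test pairs $(u, v)$ for evaluating the data terms below.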
\JPEG{
We also study JPEG compression as a degradation given by the following forward model
\begin{equation}
v = \text{JPEG}(u \ast k + n).
\end{equation}
}
Due to this model mismatch, traditional approaches, which assume the simple linear model~\eqref{eq:model-simplest}, must impose a strong image prior to overcome these degradations. In this work, we propose to adapt the data fitting term to explicitly account for these typical degradations.
This yields better results, as illustrated in Fig.~\ref{fig:results-realistic}, where a naive restoration model~\eqref{eq:model-simplest} using total variation regularization is compared to the proposed model~\eqref{eq:model-complex} with a gamma corrected TV regularization.
The paper is organized as follows. In Section~\ref{sec:related-work} we review state-of-the-art methods that consider realistic acquisition models. In Section~\ref{sec:method} we present a deconvolution method that works under real practical degradations, such as saturation or quantization, \JPEG{compression, }while considering gamma correction in a rigorous way.
Finally, in Section~\ref{sec:experiments} we demonstrate the effectiveness of our approach on a new dataset of degraded images and conclude in Section~\ref{sec:conclusions}.
\section{Related Work}\label{sec:related-work}
Although image deconvolution has received significant attention in the past decades~\cite{kundur1996blind}, only a few works address the deconvolution problem under a realistic image pipeline (saturated pixels, quantization, gamma correction\JPEG{, JPEG compression}).
Cho~\textit{et al}.{}~\cite{Cho} proposed a robust method that explicitly models outliers in the degradation process. However, this method is only effective when outliers are sparse and confined to well-localized areas (e.g., saturated regions).
Gregson~\textit{et al}.{}~\cite{Gregson2013} proposed a variational stochastic deconvolution framework, inspired by stochastic tomography reconstruction~\cite{gregson2012stochastic}, that works with different image priors. The method handles saturation by discarding saturated pixels and uses a prior in the non-linear space.
The method was later extended to blind deconvolution~\cite{Xiao2015}, introducing a two-step reconstruction that improves saturation handling. The first step reconstructs the latent image while discarding unreliable blurred pixels, and the second one works on the regions that were masked out in the first phase.
Our method is simpler and does not need to distinguish reliable from unreliable pixels.
Whyte~\textit{et al}.{}~\cite{Whyte2014a} claim that while saturation can be handled by discarding saturated pixels, a better solution is obtained by modifying the data term to handle saturation explicitly.
The saturation operator (clipping) is approximated with a smooth function so that its derivative can be computed. In this work, we present a similar approach that does not require approximating the non-smooth saturation operator. The authors also proposed a split update between reliable and unreliable pixels.
Although it effectively reduces ringing, it introduces blur, as we show in the experimental section.
Camera response functions, including the gamma curve, are typically invertible. As such, some methods~\cite{Cho} directly invert the non-linear curve before deconvolving the image. However, if the image was quantized in the non-linear space, inverting the response curve results in non-uniform quantization in the linear pixel space.
\JPEG{Xu~\textit{et al}.{}~\cite{XuNN} trained a convolutional neural network that is robust to JPEG compression artifacts and saturation. Their network is trained on images that went through such degradations so that the filters learned by the network are able to properly restore the latent image.}
\section{Method}%
\label{sec:method}
In this section, we present different formulations that incorporate data fitting terms to handle the following degradations: saturation, quantization, and gamma correction.
Each problem is formulated as an energy of the form~\eqref{eq:energy_form}, that is minimized using the Stochastic Deconvolution framework~\cite{Gregson}.
This framework is based on a coordinate descent algorithm with a Metropolis-Hastings strategy guiding the pixel sampling.
The method is derivative-free and can be applied to
any energy minimization problem,
although its convergence is not guaranteed for non-smooth functions.
At each step of Stochastic Deconvolution, a pixel is drawn either near the previously sampled one or at a random new position.
Given a pixel position, the method evaluates the change in energy that a small increase or decrease of that pixel's value would produce.
If the energy decreases, the new value is kept and the algorithm is more likely to choose a nearby pixel in the next iteration.
Since this process affects a single pixel of the solution at a time, the energy change due to the data and regularization terms can be computed by evaluating a small number of pixels surrounding the sampled one~\cite{Gregson}.
Convolution boundaries are handled by padding the image and considering the data fitting term only on valid pixels.
In what follows, unless otherwise specified, we use a total variation penalization~\eqref{eq:tv}.
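To make the procedure concrete, the accept/reject loop can be sketched as follows for the simple model~\eqref{eq:model-simplest} with TV regularization. This is a toy version with parameters of our choosing: it recomputes the full energy at every proposal, whereas the actual framework evaluates only the local energy change around the sampled pixel.

```python
import numpy as np
from scipy.signal import fftconvolve

def stochastic_deconv(v, k, lam=0.002, delta=4.0 / 256, iters=5000, rng=None):
    """Toy derivative-free coordinate descent with locality-biased pixel sampling."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = v.copy()

    def energy(x):
        r = fftconvolve(x, k, mode="same") - v          # data term ||u*k - v||^2
        gy, gx = np.gradient(x)
        return float(np.sum(r * r) + lam * np.sum(np.hypot(gx, gy)))  # + lambda TV(u)

    e = energy(u)
    h, w = u.shape
    y, x = int(rng.integers(h)), int(rng.integers(w))
    for _ in range(iters):
        trial = u.copy()
        trial[y, x] += delta if rng.random() < 0.5 else -delta  # propose +/- delta
        e_trial = energy(trial)
        if e_trial < e:                    # accept, then prefer a nearby pixel
            u, e = trial, e_trial
            y = int(np.clip(y + rng.integers(-1, 2), 0, h - 1))
            x = int(np.clip(x + rng.integers(-1, 2), 0, w - 1))
        else:                              # reject, jump to a random pixel
            y, x = int(rng.integers(h)), int(rng.integers(w))
    return u
```

Because only strictly energy-decreasing proposals are accepted, the total energy is non-increasing by construction; swapping the `energy` closure for any of the data terms defined in this section changes nothing else in the loop.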
\mysubsection{Saturation.} Pixel saturation occurs when the scene dynamic range is larger than the one captured by the camera sensor.
In this case, high intensities are clipped to the maximum sensor capacity, resulting in information loss.
The saturation model that we use is very simple, yet leads to competitive results. If $c$ is the sensor saturation limit, the considered data term is
\begin{equation}
D_S(u) = \|\min(c, u \ast k) - v\|^2.
\label{eq:saturation-energy}
\end{equation}
Instead of discarding saturated pixels, this formulation expresses the fact that the estimated sharp image convolved by the motion blur kernel has to be saturated in the same pixels as the observed image, even if the exact intensity values are lost.
Note that the data fitting term is equal to zero in the saturated pixels that match. This implies that these regions are regularized by the prior, independently of the regularization strength (controlled by $\lambda$).
This model works well on small saturated regions. For larger regions, the restoration is over-smoothed and missing pixel values cannot be properly recovered (see Fig.~\ref{subfig:saturation200-results-images-us}).
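The saturation term~\eqref{eq:saturation-energy} is a one-liner to evaluate; the sketch below uses a hypothetical helper name and an illustrative default for $c$.

```python
import numpy as np
from scipy.signal import fftconvolve

def d_saturation(u, k, v, c=0.8):
    """Saturation data term D_S: the clipped prediction min(c, u * k) must match v."""
    predicted = np.minimum(c, fftconvolve(u, k, mode="same"))
    return float(np.sum((predicted - v) ** 2))
```

Note that wherever both the prediction and the observation are clipped at $c$, the residual vanishes, which is exactly the behavior discussed above: those regions are driven purely by the regularizer.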
\mysubsection{Quantization.}\label{sec:method-quantization}
A direct way to handle quantization is to explicitly introduce it as a constrained minimization problem
\begin{equation}
\,\, \argmin_u R(u) \,\,\text{ s.t. }\,\, Q(u\ast{}k) = v,
\end{equation}
where $Q$ is the quantization operator.
Note that this problem considers a noiseless observation.
The data term in the associated Lagrangian relaxation is
\begin{equation}
D_{Q\text{fw}}(u) = \| Q(u\ast{}k) - v \|^2.
\label{eq:quantization-energy-naive}
\end{equation}
Let us denote Equation~\eqref{eq:quantization-energy-naive} as the \emph{forward quantization energy}.
This data fitting term is piecewise constant; in general, a small perturbation of $u$ does not change the energy at all, making it difficult to optimize.
More importantly, this model does not exploit the nature of the problem: given two different images $u_1$ and $u_2$ such that $Q(u_1\ast{}k) = Q(u_2\ast{}k) \neq v$, the data fitting term~\eqref{eq:quantization-energy-naive} assigns them the same cost, even though one image may be closer to the true latent sharp one; the cost should favor that image over the other.
Thus, we propose to replace the constraint $Q(u\ast k) = v$ by $(u\ast k) ({\mathbf x}) \in Q^{-1}(v({\mathbf x}))$ where $Q^{-1}(s) = [s-\frac{q}{2}, s+\frac{q}{2}]$ is the quantization error interval centered at $s$. This yields
\begin{equation}
\argmin_u R(u) \,\,\text{ s.t. }\,\, (u\ast k)({\mathbf x}) \in Q^{-1}(v({\mathbf x})), \,\,\forall {\mathbf x}
\label{eq:quantization-problem-adapted}.
\end{equation}
In this formulation, if the estimate is not within the quantization error, we can compute a distance to the interval, namely
\begin{equation}
D_{Q\text{cx}}(u) = \left\| \left( \left| u\ast{}k - v\right| - \frac{q}{2} \right)_+ \right\|^2,
\label{eq:quantization-energy-adapted}
\end{equation}
where $(\cdot)_+ = \max(\cdot, 0).$
Let us denote \eqref{eq:quantization-energy-adapted} as the \emph{convexified quantization energy}.
With this formulation, the penalization is zero when the current residual is within the quantization error, and quadratic when the estimated sharp image $u$ is far from the solution.
Overfitting the observed image beyond the quantization precision is thus avoided, and the model can successfully restore the image even with a low regularization weight.
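The convexified quantization term~\eqref{eq:quantization-energy-adapted} can be evaluated as below; function name and defaults are ours, for illustration only.

```python
import numpy as np
from scipy.signal import fftconvolve

def d_q_convex(u, k, v, q=1.0 / 256):
    """Convexified quantization term D_Qcx: zero inside the quantization
    interval [v - q/2, v + q/2], quadratic distance to the interval outside."""
    residual = np.abs(fftconvolve(u, k, mode="same") - v) - q / 2.0
    return float(np.sum(np.maximum(residual, 0.0) ** 2))
```

Any $u$ whose blurred version falls inside the per-pixel quantization intervals incurs zero data cost, so such solutions are discriminated only by the regularizer, as intended.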
\mysubsection{Gamma correction.} Since images are stored in a non-linear color space through gamma correction, deconvolution cannot be performed on them directly.
Indeed, if not properly handled, the gamma correction produces ringing around strong edges during deconvolution~\cite{tai2013nonlinear}.
The usual way to deal with gamma correction is to apply the inverse function directly on the observed image~\cite{Cho}, leading to
\begin{equation}
D_{\gamma\text{inv}}(u) = \| u \ast k - v^\gamma \|^2.
\label{eq:invgamma-energy}
\end{equation}
In this case, the model is fitted in linear space.
\begin{figure}
\centering
\begin{subfigure}[b]{0.46\linewidth}
\overimg[trim=50 140 90 50,clip,width=\linewidth]{exp/gammatv_noquantize/out_naive2.png}{\scriptsize 29.19dB}
\end{subfigure}
\begin{subfigure}[b]{0.46\linewidth}
\overimg[trim=50 140 90 50,clip,width=\linewidth]{exp/gammatv_noquantize/out_gamma2.png}{\scriptsize 31.54dB}
\end{subfigure}
\caption{Effect of a gamma corrected data fitting term. On the left, the deconvolution is performed in linear space; on the right, it is performed in gamma-corrected space.}%
\label{fig:linear_vs_nonlinear}
\vspace{-0.2em}
\end{figure}
In this work, we argue that the data fitting should be computed directly in the non-linear color space,
\begin{equation}
D_{\gamma}(u) = \| (u\ast{}k)^\frac{1}{\gamma} - v \|^2,
\label{eq:gamma-energy}
\end{equation}
where $v$ is the observed image in non-linear space, and $u$ is the restored sharp image in linear space.
Fitting the model in the non-linear space reduces the importance of bright regions and improves the restoration of dark regions, to which the eye is more sensitive.
The effects of fitting the data in the gamma corrected space are easily visible to the human eye, as shown in Fig.~\ref{fig:linear_vs_nonlinear}.
Furthermore, Gregson~\textit{et al}.{}~\cite{Gregson} proposed to adapt the TV regularization in order to account for the non-linearity of the eye sensitivity. This is done by defining a new regularizer using a $3\times{}3$ neighborhood that computes absolute differences in the non-linear space.
As such, the noise, which is amplified in dark regions by the gamma correction, is better taken into account.
For our experiments, we use a similar regularization, expressed as $\text{TV}^\gamma(u)=\text{TV}(u^\frac{1}{\gamma})$, which is the total variation of the gamma-corrected image.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{exp/results/21077/saturation200_quantization16.pdf}
\vspace{-1.5em}
\caption{Numerical results on individual degradations. Both plots show the behavior of the methods for various regularization weights under a strong degradation.}%
\label{fig:results-plot}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.19\linewidth}
\includegraphics[trim=0 50 0 40,clip,width=\linewidth]{exp/results/21077/clip/200/groundtruth.png}
\caption{Ground-truth.}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\overimg[trim=0 50 0 40,clip,width=\linewidth]{exp/results/21077/clip/200/input.png}{\scriptsize 17.00dB}
\caption{Observation.}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\overimg[trim=0 50 0 40,clip,width=\linewidth]{exp/results/21077/clip/200/out_almeida.png}{\scriptsize 22.77dB}
\caption{Almeida~\cite{Almeida2013}.}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\overimg[trim=0 50 0 40,clip,width=\linewidth]{exp/results/21077/clip/200/out_whyte.png}{\scriptsize 23.54dB}
\caption{Whyte~\cite{Whyte2014a}.}
\end{subfigure}
\begin{subfigure}[b]{0.19\linewidth}
\overimg[trim=0 50 0 40,clip,width=\linewidth]{exp/results/21077/clip/200/out_stodecus.png}{\scriptsize 26.74dB}
\caption{$D_{S}$~\eqref{eq:saturation-energy}.}\label{subfig:saturation200-results-images-us}
\end{subfigure}
\vspace{-.5em}
\caption{Large saturation results (image clipped at intensity 200). The proposed model presents fewer artifacts than~\cite{Almeida2013} and~\cite{Whyte2014a}. }%
\label{fig:saturation200-results-images}
\vspace{-0.2em}
\end{figure*}
\mysubsection{Model composition.} We have seen how to independently address deconvolution under saturation, quantization and gamma correction. We now propose a straightforward data term that combines these degradations in a single one,
\begin{equation}
D_\text{full}(u) = \left\| \left( \left| (\min(c, u\ast{}k))^\frac{1}{\gamma} - v\right| - \frac{q}{2} \right)_+ \right\|^2.
\label{eq:every-degradations-energy}
\end{equation}
While saturation is independent of the other degradations, quantization and gamma correction interact with each other: quantization in the non-linear space leads to non-uniform quantization in the linear space.
In dark regions, where the human eye is most sensitive, the quantization step is less than one graylevel and the methods are usually unaffected. In bright regions, however, the gamma correction compresses the dynamic range, so the quantization induces larger errors.
For this reason, a simple model such as the naive gamma inversion~\eqref{eq:invgamma-energy}, which treats the gamma correction as invertible even though the image is quantized, produces artifacts, especially around bright regions. Our model effectively handles the interactions between all degradations.
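To make the combined term concrete, here is a minimal NumPy sketch of the data term in Eq.~\eqref{eq:every-degradations-energy} (our own illustrative reimplementation, not the authors' code; the periodic convolution and the default parameter values are assumptions):

```python
import numpy as np

def d_full(u, k, v, c=200/255, gamma=2.2, q=1/256):
    """Sketch of the combined data term D_full(u): blur, clip at the
    saturation level c, apply gamma correction, then penalize only the
    part of the residual exceeding the quantization half-step q/2."""
    blurred = np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(k, s=u.shape)))
    saturated = np.minimum(c, blurred)                        # min(c, u*k)
    nonlinear = np.clip(saturated, 0.0, None) ** (1.0 / gamma)
    residual = np.abs(nonlinear - v) - q / 2.0                # |.| - q/2
    return float(np.sum(np.maximum(residual, 0.0) ** 2))      # ||(.)_+||^2
```

The outer $(\cdot)_+$ implements the dead zone: residuals that quantization alone can explain contribute no cost.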
\JPEG{
\mysubsection{JPEG compression.} JPEG encoding can be considered a non-invertible degradation.
The most lossy step of the JPEG compression standard is the quantization of DCT coefficients of $8\times{}8$ blocks.
Similarly to our quantization model, quantization in the DCT domain can be handled effortlessly.
We propose to adapt Eq.~\eqref{eq:quantization-energy-adapted} to incorporate the JPEG compression:
\begin{equation}
E(u) = \| ( |\text{JPEG}(u\ast{}k) - \text{DCT}(v)| - \frac{q_i}{2} )_+ \|^2 + \lambda R(u),
\end{equation}
where the JPEG operator consists of the encoding steps of JPEG, up to the DCT coefficient quantization.
The coefficients $q_i$ depend on the quantization table that was used to compress the image.
For our experiments, we simplified the forward model to a quantization of DCT coefficients, without the other JPEG steps (colorspace transformation and chromatic downsampling).
While this does not strictly conform to the JPEG standard, it is a close enough prototype to derive results for synthetic images.
We have found that current methods do not produce many artifacts for realistic JPEG images (quality above 80\%) and our model does not improve the results.
}
\vspace{-.5em}
\section{Experiments}\label{sec:experiments}
\vspace{-.5em}
First, we study the effectiveness of the different models individually.
Our results are compared to two state-of-the-art methods: those of Almeida~\textit{et al}.{}~\cite{Almeida2013} and Whyte~\textit{et al}.{}~\cite{Whyte2014a}.
The first uses the TV prior, which is representative of the literature, and accurately handles boundary conditions.
The second is based on the Richardson-Lucy deconvolution algorithm~\cite{Richardson1972}, which is more robust to ringing, and, in this version, also handles boundary conditions as well as saturation.
Then, we present qualitative and quantitative results on a synthetic but realistic dataset, and show that modeling the complete degradation pipeline significantly improves the results.
\mysubsection{Individual degradations.} We evaluate two of our degradation models: saturation and quantization. Gamma correction is not evaluated individually as it can be directly inverted if no other degradation is present. For each modality, we apply the forward model to images of BSDS300~\cite{MartinFTM01}, vary the strength of the degradation and record the best PSNR obtained by optimizing the regularization weight.
Fig.~\ref{fig:results-plot} shows the PSNR obtained by varying the regularization weight for the different methods under quantization with 16 levels and saturation %
at intensity 200.
We note that, while the PSNR of our method for quantization is slightly lower than that of the others, it is more stable when sweeping the regularization weight.
For saturation, our model clearly outperforms both~\cite{Almeida2013} and~\cite{Whyte2014a}; this is confirmed by the qualitative evaluation shown in Fig.~\ref{fig:saturation200-results-images}.
\mysubsection{A realistic model.}
To assess the gain of our individual models over a traditional model that does not consider the degradations, we created a realistic dataset.
The dataset was created from eight sharp natural images. The images are first converted to linear space, by applying the inverse gamma curve, and subsampled to reduce the residual quantization and noise. Then, each image is synthetically blurred using one of the kernels of Levin~\textit{et al}.{}~\cite{Levin2009}, and saturated by clipping the pixels at the 98th percentile. Images are converted back to the non-linear color space, where additive white Gaussian noise of $\sigma^2 =5$ is added. Finally, a quantization with $q=\frac{1}{256}$ is applied.
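A hypothetical sketch of this generation pipeline (our own reimplementation; the periodic convolution and the reading of $\sigma^2=5$ on the $0$--$255$ scale are assumptions):

```python
import numpy as np

def degrade(sharp, kernel, gamma=2.2, sigma=np.sqrt(5)/255, q=1/256, seed=0):
    """Sketch of the synthetic degradation pipeline: linearize, blur,
    saturate at the 98th percentile, re-apply gamma, add noise, quantize."""
    rng = np.random.default_rng(seed)
    linear = sharp ** gamma                                     # inverse gamma
    blurred = np.real(np.fft.ifft2(np.fft.fft2(linear) *
                                   np.fft.fft2(kernel, s=linear.shape)))
    c = np.percentile(blurred, 98)                              # saturation level
    saturated = np.minimum(blurred, c)
    nonlinear = np.clip(saturated, 0.0, None) ** (1.0 / gamma)  # gamma correction
    noisy = nonlinear + sigma * rng.standard_normal(nonlinear.shape)
    return np.round(np.clip(noisy, 0.0, 1.0) / q) * q           # quantization
```

Each degraded image is thus an exact multiple of the quantization step $q$, as in the real 8-bit pipeline.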
Fig.~\ref{fig:realistic-psnr-plot} shows the PSNR results obtained with three models.
From these results, we observe that considering all the degradations improves the results. Modeling the quantization does not always improve the PSNR but makes the minimization less sensitive to the regularization weight. We also report the results obtained by using the gamma-corrected total variation. When combined with our model, it yields a large gain in PSNR as well as in image quality (see Fig.~\ref{fig:results-realistic}). Due to space constraints, the dataset and the full-resolution results are available on the project webpage: %
{\url{https://goo.gl/oids7H}}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{exp/realistic/psnrplot-new-new.pdf}
\vspace{-2em}
\caption{PSNR statistics for the different models on a realistic dataset. $D_{S,\gamma}$ is a combination of Eq.~\eqref{eq:saturation-energy} and Eq.~\eqref{eq:gamma-energy}.
The orange bar indicates the median PSNR, and the highest and lowest bars indicate the maximum and minimum PSNR obtained over the eight images.}%
\label{fig:realistic-psnr-plot}
\vspace{-0.2em}
\end{figure}
\JPEG{
\begin{figure}
\centering
\includegraphics[trim=0 0 0 50,clip,width=0.23\linewidth]{exp/realistic/db/1}
\includegraphics[trim=0 0 0 50,clip,width=0.23\linewidth]{exp/realistic/db/2}
\includegraphics[trim=0 0 0 50,clip,width=0.23\linewidth]{exp/realistic/db/3}
\includegraphics[trim=0 0 0 50,clip,width=0.23\linewidth]{exp/realistic/db/4}
\\[0.5mm]
\includegraphics[trim=0 0 0 50,clip,width=0.23\linewidth]{exp/realistic/db/5}
\includegraphics[trim=0 0 0 50,clip,width=0.23\linewidth]{exp/realistic/db/6}
\includegraphics[trim=0 0 0 50,clip,width=0.23\linewidth]{exp/realistic/db/7}
\includegraphics[trim=0 0 0 50,clip,width=0.23\linewidth]{exp/realistic/db/8}
\caption{Dataset of degraded images.}%
\label{fig:realistic-dataset}
\end{figure}
}
\vspace{-.5em}
\section{Conclusion}\label{sec:conclusions}
\vspace{-.5em}
We proposed a non-blind image deconvolution method that handles non-linear degradations including saturation, noise, gamma correction, and quantization. The optimization is made possible by a relaxed formulation of the quantization data term. The minimization of the resulting energy is performed by stochastic deconvolution~\cite{Gregson}.
Our experiments highlight the importance of modeling these realistic degradations present in the image processing pipeline. For the gamma correction, we show that the usual gamma inversion might introduce errors when the image is quantized in a non-linear color space, such as sRGB{}.
As future work, we would like to extend the method to blind image deconvolution, and study if kernel estimation can benefit from a more accurate image formation model. We also plan to explore the incorporation of other regularization terms, which could further improve the results.
\newpage
\bibliographystyle{IEEEbib}
\section{Introduction}
Supervised learning can often be formalized as the problem of
minimizing the expected squared loss
\begin{align*}
\mathcal{E}(f)=\int\limits_{X\times Y} (f(x)-y)^2d\rho(x,y),
\end{align*}
given a training set $z=\left\{z_i,i=1,2,...,\left|z\right| \right\}$
of samples $z_i=(x_i,y_i)$ drawn independently from a fixed but
unknown (joint) probability distribution $\rho$ on $Z=X\times Y$,
where $X$ is a set of $d$-dimensional input vectors $x$, and $Y$ is a
set of corresponding outputs labeled by real numbers $y$. Here
$\left|z\right|$ denotes the number of observations. The
primary object of interest is the regression function $f_\rho$, which minimizes
$\mathcal{E}(f)$ and can be written as
\begin{align*}
f_\rho(x)=\int\limits_{Y} yd\rho(y|x), \quad x\in X,
\end{align*}
where $\rho(y|x)$ is the conditional distribution at $x$ induced by
$\rho$ such that $\rho(x,y)=\rho(y|x)\rho_X(x)$, and $\rho_X$ is the
marginal probability measure on $X$.
Since the conditional distribution $\rho(y|x)$
is unknown, the above integral representation for $f_\rho$ is of
no help in practice, and the goal is to find an estimator $\hat f_z$,
on the base of the given training data $z$, that approximates the
unknown regression function~$f_\rho$ well with high probability.
Ideally, a good estimator $\hat f_z$ should have small excess loss
$\mathcal{E}(\hat f_z)-\mathcal{E}(f_\rho)$. Due to a version of
Fubini's theorem we have
\begin{align*}
\mathcal{E}(\hat
f_z)-\mathcal{E}(f_\rho)=\left\|\hat f_z-f_\rho\right\|_\rho^2,
\end{align*}
where
$\left\|\cdot\right\|_\rho:=\left\|\cdot\right\|_{L{_2}(X,\rho_X)}$ is
the norm in the space $L{_2}(X,\rho_X)$ of square integrable functions
with respect to the marginal probability measure. Therefore, the
standard way of measuring the performance of the estimator $\hat f_z$
is by studying its convergence to $f_\rho$ in
$ \left\|\cdot\right\|_\rho$-norm.
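For completeness, the identity above follows by expanding the square and
using that $\int_Y (y-f_\rho(x))\,d\rho(y|x)=0$ for ($\rho_X$-almost) every $x\in X$:
\begin{align*}
\mathcal{E}(f)&=\int\limits_{X\times Y}\big((f(x)-f_\rho(x))+(f_\rho(x)-y)\big)^2d\rho(x,y)\\
&=\left\|f-f_\rho\right\|_\rho^2+\mathcal{E}(f_\rho)
+2\int\limits_{X}\big(f(x)-f_\rho(x)\big)\int\limits_{Y}\big(f_\rho(x)-y\big)\,d\rho(y|x)\,d\rho_X(x),
\end{align*}
and the last term vanishes.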
In kernel machine learning the estimator~$\hat f_z$ is sought within
some hypothesis space, often taken to be a
reproducing kernel Hilbert space $\mathcal{H}_K$ associated with a
Mercer kernel $K:X\times X\rightarrow\mathbb{R}$. The space~$\mathcal
H_{K}$ is then defined to be the
closure of the linear span of the set of functions $K_x=K(\cdot,x)$,
$x\in X$, with the inner product satisfying
\begin{align*}
\left\langle K_x,K_{y} \right\rangle_{\mathcal{H}_{K}} := K(x,y),\quad x,y\in X.
\end{align*}
One of the main drawbacks of kernel learning machines is that
storing and manipulating the kernel Gram matrix
$\mathbb{K}_{\left|z\right|}={\left\{K(x_i,x_j)\right\}}^{\left|z\right|}_{i,j=1}$
requires $\mathcal O(\left|z\right|^2)$ space, while the amount of computation
required to find $\hat f_z\in \mathcal{H}_{K}$ scales as
$\mathcal O(\left|z\right|^3)$; both can become intractable in the case of the
so-called Big Data, as $\left|z\right|$ grows.
The Nystr{\"o}m type subsampling \cite{Williams,Smola} is a popular
tool for overcoming these limitations.
Up to now, the theoretical analysis of the Nystr{\"o}m approach has
been carried out extensively in the well-specified case, when
the regression function $f_\rho\in\mathcal{H}_{K}$
\cite{Bach2013,Kriukova2017,LP2013,Myleiko2017,Rudi2015,Rudi2017FALKONAO}. In
the present paper we concentrate on the misspecified case,
where $f_\rho\in L_2 (X,\rho_X)\setminus\mathcal{H}_{K}$,
which is much less understood, in spite of its practical importance.
The quality of the approximation~$\hat f_{z}$ depends on smoothness
properties of the underlying regression function~$f_{\rho}$, often
given in terms of source conditions and the canonical inclusion
operator $J_K:\mathcal{H}_K\hookrightarrow L_{2}(X,\rho_X)$.
We highlight that in the misspecified case the (unknown) target
function $f_{\rho}$ does not belong to~$\mathcal H_{K}$, and hence,
for the regularized empirical risk functional~$T_{z}^{\lm}$
from~(\ref{eq_generalfunctional}) below, we
have $T_{z}^{\lambda}(f_{\rho}) = +\infty$. Such an oversmoothing
penalty term is not standard in classic regularization theory, see
e.g.~\cite{EHN1996}, but it has gained attention in numerical
differentiation \cite{WJC2002,WWY2006} and regularization in Hilbert
scales \cite{Natterer1984,HM2018}.
In the present setting the convergence analysis shall be carried out in the
norm in the space $L_2(X, \rho_X)$ instead of the norm in the
reproducing kernel Hilbert space $\mathcal{H}_K$.
Due to \cite{Rudi2015} the Nystr{\"o}m approach can be seen as a
combination of random projections with a regularization scheme, and
the regularization theory tells us that such a scheme should have
enough qualification to utilize the whole smoothness of $f_\rho$. On
the other hand, from this perspective it follows that, because of low
smoothness of $f_\rho$, even a scheme with a modest qualification,
such as the standard Tikhonov regularization, is sufficient for the
misspecified case. For this reason, in the present study we restrict
ourselves to the Nystr{\"o}m subsampling for Tikhonov regularization
known also as the kernel ridge regression (KRR).
The learning rate (i.e., the convergence rate of the approximant
to the target function $f_\rho $ in
$\left\|\cdot\right\|_\rho $-norm) of KRR in the misspecified case
was first studied in \cite{Smale2007}. As one may see from that study,
for $f_\rho\in L_2 (X,\rho_X) \setminus\mathcal{H}_{K}$ the learning
rate of KRR cannot be in general described by the same formula as in
the well-specified case $f_\rho\in\mathcal{H}_{K}$. A uniform
description for both cases was obtained in \cite{Steinwart} under
additional assumptions on the inclusion operator $J_{K}$,
which may not be always satisfied. To the best of our knowledge, the
best known learning rates, that are valid for KRR with arbitrary
Mercer kernel functions $K$, have been recently given in
\cite{ShaoboLin}. In the present research we study conditions under
which the above mentioned rates can be achieved at a subquadratic
cost (with respect to
the number of observations $\left|z\right|$)
by KRR combined with the Nystr{\"o}m approach.
The paper is organized as follows. In the next section we recall the KRR
setting. Then we follow \cite{Rudi2015} and consider the Nystr{\"o}m
approach to KRR as a projection method regularized by Tikhonov
regularization. In Section 3 we estimate the learning rate of KRR
combined with the plain Nystr{\"o}m subsampling in the misspecified case.
The technical proofs are given separately in Section~\ref{sec:proofs}.
In contrast to previous studies, we employ general source
conditions to measure the smoothness of
$f_\rho\in L_2 (X,\rho_X)\setminus \mathcal{H}_{K}$. One important
and interesting observation here is that almost for the whole range of
source conditions describing the misspecified case the corresponding
learning rate bounds can be achieved with the same value of Tikhonov
regularization parameter that can be chosen a priori and without any
knowledge of the smoothness of $f_\rho$. This observation allows us to
formulate simple conditions under which the plain Nystr{\"o}m
subsampling can be realized with subquadratic cost, still maintaining
guaranteed learning rates.
\section{KRR with Nystr\"{o}m subsampling}
Recall that in KRR the goal is to approximate $f_\rho$ by the
minimizer $f_{z}^\lambda$ of the regularized empirical risk functional
\begin{align}\label{eq_generalfunctional}
T_{z}^\lambda(f)&
:=\left|z\right|^{-1}\sum\limits_{i=1}^{\left|z\right|}(f(x_i)-y_i)^2+\lambda\left\|f\right\|^2_{\mathcal{H}_{K}},\quad
f\in \mathcal H_{K}.
\end{align}
For the subsequent analysis we shall use the identity
operator~$I\colon \mathcal H_{K}\to \mathcal H_{K}$,
the canonical inclusion~$J_{K}\colon \mathcal H_{K}\to L_2
(X,\rho_X)$,
and the sampling
operator~$S_{x}:\mathcal{H}_K\rightarrow\mathbb{R}^{\left|z\right|}$,
given by~$S_{x} f=(f(x_i))^{\left| z\right|}_{i=1} $,
together with its adjoint~$S_{x}^{\ast}:\mathbb{R}^{\left|z\right|}\rightarrow\mathcal{H}_K$.
It is known, cf.~\cite{ShaoboLin}, that the product of the inclusion
operator $J_K$ and its adjoint is the integral operator defined by
\begin{align}\label{eq:jkjkast}
\left( J_{K}J_{K}^* \right) f(x)= \int\limits_{X} K(x,t) f(t)
d\rho_X(t),\quad f\in L_2 (X,\rho_X),\ x\in X.
\end{align}
The celebrated representer theorem of G. S. Kimeldorf and G. Wahba
tells us that the minimizer of~(\ref{eq_generalfunctional}) has the form
\begin{align*}
f_z^\lambda &=\sum\limits_{i=1}^{\left|z\right|}c_i\cdot K(\cdot,
x_i), \
c=(c_i)^{\left|z\right|}_{i=1}=(\mathbb{K}_{\left|z\right|}+\lambda
\mathbb{I})^{-1}\mathbb{Y},
\end{align*}
where $\mathbb{I}$ is the $\left|z\right|\times\left|z\right|$
diagonal identity matrix and
$\mathbb{Y}=(y_i)^{\left|z\right|}_{i=1}$.
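A minimal numerical sketch of this closed form, assuming a Gaussian kernel on $X=\mathbb R$ (illustrative only; the names `gauss`, `krr_fit` and `krr_predict` are our own):

```python
import numpy as np

def gauss(s, t):
    """An assumed Mercer kernel K(s, t) = exp(-(s - t)^2) on X = R."""
    return np.exp(-(s - t) ** 2)

def krr_fit(x, y, lam):
    """Coefficients c = (K_{|z|} + lam * I)^{-1} Y of the representer theorem."""
    K = gauss(x[:, None], x[None, :])                  # kernel Gram matrix
    return np.linalg.solve(K + lam * np.eye(len(x)), y)

def krr_predict(x_new, x, c):
    """Evaluate f_z^lam(.) = sum_i c_i K(., x_i) at new inputs."""
    return gauss(x_new[:, None], x[None, :]) @ c
```

Taking $\lambda\to 0$ drives the fit toward interpolation of the data, while a large $\lambda$ shrinks the coefficients at the expense of the data fit.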
It is clear that KRR has at least quadratic computational cost
$\mathcal O(\left|z\right|^2)$, as this is the cost of computing the
kernel Gram matrix
$\mathbb{K}_{\left|z\right|}=\lbrace {K(x_{i},
x_{j})}\rbrace_{i,j=1}^{\left|z\right|}$.
Therefore, in the setting where $\left|z\right|$ is large,
one tries to avoid the computation of the minimizer
$f_{z}^\lambda$.
\subsection{Plain Nystr\"{o}m subsampling}
\label{sec:subsampling}
In the Nystr{\"o}m algorithms this is done by replacing
$\mathbb{K}_{\left|z\right|}$ with a smaller low-rank matrix obtained
by random subsampling of columns of $\mathbb{K}_{\left|z\right|}$.
In the forthcoming
analysis, we restrict attention to the so-called \emph{plain
Nystr{\"o}m subsampling}
approach, where the points $(x_i, y_i)$ forming the subsample~$z^\nu$ are sampled
uniformly at random without replacement from the training set
$z$.
An important observation made in \cite{Rudi2015} is that the Nystr{\"o}m
subsampling can be interpreted as a restriction of the minimization of
$T_z^\lambda(f)$ to the (randomly chosen) space
\begin{align*}
\mathcal{H}_K^{z^\nu}:=\lbrace{f : f=\sum\limits_{x_i:(x_i,y_i)\in
z^\nu}}c_iK(\cdot,x_i), \ c_i \in \mathbb{R}\rbrace\subset\mathcal{H}_K ,
\end{align*}
where $z^\nu$ is a randomly selected subset of $z$ with the
cardinality $\left|z^\nu\right|\ll\left|z\right|$.
We let~$P_{z^\nu}:\mathcal{H}_K\rightarrow \mathcal{H}^{z^\nu}_K$ be
the orthogonal projection operator in $\mathcal{H}_K$ with the range
$\mathcal{H}^{z^\nu}_K$.
It is clear that $P_{z^\nu}$ has a probabilistic character and depends
on the way we perform the subsampling $z^\nu$.
In the analysis the composition~$B_{\nu}:= S_{x}
P_{z^\nu}\colon \mathcal H_{K}\to \mathbb R^{|z|}$ will be relevant.
Then the minimizer
$f_{z,z^\nu}^\lambda$ of $T_z^\lambda (f)$ over
$\mathcal{H}_K^{z^\nu}$ is given as
\begin{align} \label{eq:2.3} f_{z,z^\nu}^\lambda &=(\lambda
I+P_{z^{\nu}}S_{x}^{\ast}S_{x} P_{z^\nu})^{-1} P_{z^\nu} S_{x}^{\ast} \mathbb{Y},\\
&= P_{z^{\nu}}(\lambda
I+ B_{\nu}^{\ast}B_{\nu})^{-1} B_{\nu}^{\ast} \mathbb{Y},\notag
\end{align}
where the latter follows because~$f_{z,z^\nu}^\lambda\in \mathcal{H}_K^{z^\nu}$.
Note that $f^\lambda_{z,z^\nu}$ can be computed with a computational
cost
$$
\operatorname{cost}(f^\lambda_{z,z^\nu}) = \mathcal
O(\left|z\right|\cdot\left|z^{\nu}\right|^2),\quad \text{as }
\left|z\right|\geq \left|z^{\nu}\right|\to\infty.
$$
(see, e.g., \cite{Rudi2015}).
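As an illustrative sketch (with an assumed Gaussian kernel and our own function names), the Nystr{\"o}m estimator can be obtained by solving the normal equations of the regularized empirical risk restricted to $\mathcal H_K^{z^\nu}$:

```python
import numpy as np

def gauss(s, t):
    return np.exp(-(s - t) ** 2)             # assumed Mercer kernel

def nystrom_krr(x, y, sub_idx, lam):
    """Minimize ||Knm c - y||^2 + lam * c^T Kmm c over the span of the
    kernel sections at the subsampled points (plain Nystrom subsampling):
    (Knm^T Knm + lam * Kmm) c = Knm^T y."""
    xm = x[sub_idx]
    Knm = gauss(x[:, None], xm[None, :])     # |z| x |z^nu| cross Gram matrix
    Kmm = gauss(xm[:, None], xm[None, :])    # |z^nu| x |z^nu| Gram matrix
    c = np.linalg.lstsq(Knm.T @ Knm + lam * Kmm, Knm.T @ y, rcond=None)[0]
    return xm, c

def nystrom_predict(x_new, xm, c):
    return gauss(x_new[:, None], xm[None, :]) @ c
```

With `sub_idx` covering the whole sample this reduces to full KRR, while for $\left|z^{\nu}\right|\ll\left|z\right|$ only the smaller $\left|z\right|\times\left|z^{\nu}\right|$ and $\left|z^{\nu}\right|\times\left|z^{\nu}\right|$ matrices are formed.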
\subsection{Assumptions}
\label{sec:assumptions}
The attainable learning rate depends on additional assumptions.
The first two assumptions, concerning properties of the underlying
kernel and the noise moments, will not be referenced explicitly throughout the study.
\begin{ass}
[kernel properties]\label{ass:kernel}
The kernel~$K\colon X\times X\to \mathbb R$ is
continuous, symmetric, positive definite and
$$
\sup\limits_{ x\in X } K(x,x) = \kp < \infty.
$$
Then it is clear that
$$
\sup\limits_{x\in X} \norm{ K_x }_{ L_2 \kl{ X, \rho_X } } \leq
\sup\limits_{x\in X} \norm{ K_x }_{ C(X) } \leq
\sup\limits_{x\in X} \norm{ K_x }_{ \Hcl_K }^{2} = \kp.
$$
\end{ass}
Under Assumption~\ref{ass:kernel} the operator~$J_{K}J_{K}^*$
from~(\ref{eq:jkjkast}) has a
finite trace. Specifically, for each~$\lambda>0$ it holds that
$$
\mathcal N_x(\lm) :=\langle K(\cdot,x),(\lambda
I+J^*_KJ_K)^{-1}K(\cdot,x)\rangle_{\mathcal{H}_K} = \| (\lm I+
J^*_KJ_K)^{-1/2} K_{x}\|^{2}_{\mathcal{H}_K}< \infty.
$$
We highlight the related quantities
\begin{align}
\label{eq:2.2n}
\Ncl_{\infty}(\lm) &:=\sup \limits_{x\in X} \Ncl_x(\lambda) \quad (\leq \kp/\lm),
\\
\intertext{and }
\mathcal N(\lm) &:=\int\limits_X
\mathcal N_x(\lambda)d\rho_X(x)=\operatorname{trace}\lbrace(\lambda I+J^*_KJ_K)^{-1}J^*_KJ_K\rbrace.
\end{align}
The function~$\mathcal N$ measures the capacity of the
RKHS $\mathcal H_{K}$ in the space~$L_{2}(X,\rho_{X})$, and it is
called {\it the effective dimension}. It is well known that this is a
decreasing function of~$\lm$ with~$\lim_{\lm\to 0+}\mathcal
N(\lm)=\infty$, provided that the RKHS~$\mathcal H_{K}$ is infinite
dimensional, c.f. \cite{Zha05}.
Extended discussion on properties of the effective dimension for general operators can be found in \cite{LM2014}.
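As an aside, $\mathcal N(\lm)$ can be estimated empirically: the eigenvalues of $J_K^*J_K$ are approximated by those of the scaled Gram matrix $\mathbb K_{\left|z\right|}/\left|z\right|$. A hypothetical sketch (function name and kernel are our own):

```python
import numpy as np

def effective_dimension(K, lam):
    """Empirical estimate of N(lam) = trace((lam*I + J*J)^{-1} J*J),
    replacing the eigenvalues of J*J by those of K / |z|."""
    mu = np.linalg.eigvalsh(K) / K.shape[0]
    mu = np.clip(mu, 0.0, None)      # guard against round-off negatives
    return float(np.sum(mu / (mu + lam)))
```

Each summand $\mu_i/(\mu_i+\lambda)$ lies in $[0,1)$, so the estimate is bounded by $\left|z\right|$ and decreases in $\lambda$, mirroring the properties of $\mathcal N$.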
For the efficiency of the Nystr\"{o}m
subsampling we shall need an additional assumption on the kernel,
specified in Assumption~\ref{ass:kernel-src}, below.
\begin{ass}
[noise moments]\label{ass:noise}
The family of random variables~$\varepsilon_{x}:= y - f_{\rho}(x),\
x\in X$ has all moments~$p\geq 2$, which satisfy
$$
\mathbf E| \varepsilon_{x}|^{p} \leq \frac 1 2 p! M^{p-2}
\sigma^{2},\quad x\in X, \ \text{ a.e.},
$$
for some positive constants~$M$ and~$\sigma$.
\end{ass}
Next, an assumption is made on the underlying smoothness of the regression function~$f_\rho$.
\begin{ass}
[source condition]\label{ass:smoothness}
There is an operator concave index function\footnote{
A function~$\varphi:\ekl{0, d} \rightarrow[0,\infty)$
is called an index function if it is continuous, strictly increasing,
obeying $\varphi (0)=0$. It is called operator concave if for
self-adjoint operator~$C,C_{1}\colon L_2 (X,\rho_X) \to L_2 (X,\rho_X) $ we
have that~$\varphi\left( \frac 1 2 (C + C_{1}) \right) \geq \frac 1 2 \left(
\varphi(C) + \varphi(C_{1})\right)$.
}~$\varphi\colon [0,d]\to[0,\infty) $, for some~$d> \|J_{K}J_{K}^*\|$, such that
$$
f_\rho=\varphi (J_{K}J_{K}^*)v_f, \ \left\|v_f\right\|_\rho \leq 1.
$$
The function~$t \mapsto \sqrt t/\varphi(t)$ is nondecreasing.
\end{ass}
\begin{remark}\label{rem:smoothness}
First, from~\cite{Mathe2008} we know that for every
$f\in L_2 (X,\rho_X)$ and $\varepsilon>0$ there exists an
index function
$\varphi:\ekl{ 0, \left\|J_{K} J_{K}^* \right\| }
\rightarrow[0,\infty)
$
such that
\begin{align} \label{eq:2.1} f=\varphi (J_{K}J_{K}^*)v_f, \
\left\|v_f\right\|_\rho \leq (1+\varepsilon)\left\| f \right\|_\rho.
\end{align}
In the
context of learning, such source conditions have been used since
\cite{Smale2007}, where $f=f_\rho$ was assumed to satisfy
\eqref{eq:2.1} with $\varphi(t)=t^r,\ r\in (0,1]$.
Secondly, the following is known: if the function $t\mapsto\sqrt t/\varphi(t)$ is
nonincreasing, then the image of $\varphi (J_{K}J_{K}^*)$ is contained
in $\mathcal{H}_K$. Therefore, in order to treat the low-smoothness
case we assume that
$t\mapsto\sqrt{t}/\varphi(t)$ is nondecreasing.
Thus, the misspecified case studied here corresponds
to \eqref{eq:2.1} with $\varphi(t)$ increasing not faster than
$\sqrt{t}$.
Moreover, as in \cite{Kriukova2017}, in order to control the effect
of subsampling, we assume that $\varphi$ is operator concave on
$[0,d],\ d > \left\|J_{K}J_{K}^*\right\|$. Note that previously
considered H{\"o}lder-type index functions
$\varphi(t)=t^r, r\in (0,\frac{1}{2}]$, as well as logarithmic
functions $\varphi(t)=\log^{-r}\frac{1}{t},\ { 0 < r\leq 1}$, are
operator monotone, and hence operator concave. An important
implication of operator monotonicity is that there is a number
$d_\varphi$ depending only on $\varphi$ such that for any self-adjoint operators
$C,C_1:L_2(X,\rho_X)\rightarrow L_2(X,\rho_X)$ with spectra in $[0,d]$
it holds
\begin{align} \label{eq:2.2}
\left\|\varphi(C)-\varphi(C_1)\right\|_{L_2(X,\rho_X)\to
L_2(X,\rho_X)}\leq d_\varphi
\varphi\left(\left\|C-C_1\right\|_{L_2(X,\rho_X)\to
L_2(X,\rho_X)}\right).
\end{align}
\end{remark}
\section{Main results}
\label{sec:results}
\subsection{Error bound for Nystr{\"o}m subsampling in the misspecified case}
We formulate the main result below. For it to hold, the
parameter~$\lm>0$, the overall sample size~$\left|z\right|$, and the
subsample size~$\left|z^{\nu}\right|$ must obey the following relations.
Given, for~$0< \delta < 1$, a confidence level~$1-\delta$, we require that
\begin{align}\label{eq:2.5}
\left|z^{\nu}\right| &\geq c\mathcal N_\infty
(\lambda)\log\dfrac{1}{\lambda}\log\dfrac{1}{\delta},\\
\intertext{and}
\lambda & \in \left[c{\left|z\right|}^{-1}\log {\dfrac{{\left|z\right|}}{\delta}},\ \left\|J^*_KJ_K\right\|_{\mathcal{H}_K\rightarrow\mathcal{H}_K} \right].\label{eq:3.3}
\end{align}
Concerning the choice of the size~$|z^{\nu}|$ of the subsample, two competing
goals are relevant. First, it should be large enough to maintain the
learning rate obtained by using the full sample~$z$. On
the other hand, it should be as small as possible to reduce the
computational burden. Here the choice in (\ref{eq:2.5}) is analyzed in the low-smoothness situation.
Here and in the sequel, we adopt the convention that $c$ denotes a generic positive constant, which can vary from estimate
to estimate and may depend only on basic parameters, such as $K$ and $\rho$.
Also, for functions~$a$, $b$ depending on $\lm$ or $\abs{z}$, respectively, the relation $a \asymp b$ means that
$a = \mathcal O(b)$ and $b = \mathcal O(a)$ as $\lm\to 0$, or $\abs{z}\to\infty$.
Note that the Nystr{\"o}m approximant from~\eqref{eq:2.3}
represents an element of $\mathcal{H}_K$, and to estimate
$\mathcal{E}(f^\lambda_{z,z^\nu})$ we need to embed
$f^\lambda_{z,z^\nu}$ in $L_2(X,\rho_X)$. Then the error decomposes as
\begin{align}\label{eq:3.6}
\begin{split}
\left\|J_Kf^\lambda_{z,z^\nu}-f_{\rho}\right\|_{\rho}
&\leq \left\|f_{\rho}-J_K(\lambda I+
B_{\nu}^{\ast}B_{\nu})^{-1}P_{z^\nu}J^*_Kf_{\rho}\right\|_{\rho}
\\
&+ \left\|J_K(\lambda I+B_{\nu}^{\ast}B_{\nu})^{-1}
P_{z^\nu}(J^*_Kf_{\rho}-S_{x}^{\ast}\mathbb{Y})\right\|_{\rho},
\end{split}
\end{align}
which can be regarded as decomposition into approximation error and the
sample error, respectively. By estimating both terms in the right-hand side of (\ref{eq:3.6}), we establish the main error estimate.
\begin{theorem}\label{th:1}
Assume that in the plain Nystr{\"o}m subsampling the values
$\left|z^{\nu}\right|$ and $\lambda$ satisfy \eqref{eq:2.5} and
\eqref{eq:3.3}. If $f_{\rho}$ obeys Assumption~\ref{ass:smoothness}
for the index function~$\varphi$, then with probability at
least $1-\delta$ we have
\begin{align*}
& \left\|f_{\rho}-J_K(\lambda I+ B_{\nu}^{\ast}B_{\nu})^{-1}P_{z^\nu}J^*_Kf_{\rho}\right\|_{\rho} \leq c\varphi(\lambda)\log {\dfrac{1}{\delta}}
\left( 1+\sqrt{\dfrac{{\Ncl(\lambda)}}{\lambda{\left|z\right|}}}\right); \\
& \left\|J_K(\lambda I+B_{\nu}^{\ast}B_{\nu})^{-1}
P_{z^\nu}(J^*_Kf_{\rho}-S_{x}^{\ast}\mathbb{Y})\right\|_{\rho} \leq c\sqrt \lm
\log \dfrac{1}{\delta}\left( 1 + \sqrt{\dfrac{\Ncl(\lambda)}{\lambda\left|z\right|}}\right);
\end{align*}
and the total error estimate
\begin{align*}
\left\|J_Kf^\lambda_{z,z^\nu}-f_{\rho}\right\|_{\rho}\leq c\varphi(\lambda)\log {\dfrac{1}{\delta}}
\left( 1+\sqrt{\dfrac{{\Ncl(\lambda)}}{\lambda{\left|z\right|}}}\right) .
\end{align*}
\end{theorem}
It is interesting to observe that the approximation error
dominates the sample error, which is not standard in regularization
theory. This is a consequence of the misspecified source condition,
Assumption~\ref{ass:smoothness}, for a function~$\varphi$
with~$\varphi(\lm)\geq c \sqrt\lm$. We will provide more explanation in Remark~\ref{rem:approxi} after the proof of the above theorem, which is postponed to Section~\ref{se_4}.
\subsection{Parameter choice}
\label{sec:parameter}
A somewhat surprising message of the above theorem is that the $\lambda$-dependent term
\begin{align*}
\theta_{\varphi}(\lambda)=\varphi(\lambda)\left(1+\sqrt{\frac{\Ncl(\lambda)}{\left|z\right|\lambda}}\right),\quad \lm>0,
\end{align*}
bounding (the square root of) the excess loss
$\mathcal{E}(f^\lambda_{z,z^\nu})-\mathcal{E}(f_{\rho})$ attains its
minimum (up to a constant factor) at a value of the regularization
parameter $\lambda=\lambda_0$, which can be chosen a priori and does
not require the knowledge of the index function $\varphi$. Precisely,
let~$\lambda_{0} = \lm_{0}(\left|z\right|)$ solve the equation
\begin{equation}
\label{eq:lambda0}
\mathcal N(\lambda)=\lambda\left|z\right|.
\end{equation}
Notice that this equation always has a unique solution, and that it
does not depend on the underlying smoothness, as expressed in the
function~$ \varphi$.
Also, as~$\left|z\right|\to \infty$ we have
that~$\lm_{0}(\left|z\right|)\to 0$.
\begin{cor}\label{cor:lambda0}
For any index function~$\varphi$ in Assumption~\ref{ass:smoothness} we have
\begin{align}
\varphi(\lambda_0)\leq \min\limits_{\lambda}\theta_{\varphi}(\lambda)\leq2 \varphi(\lambda_0),\label{eq:lambda-minr}
\end{align}
where $\lambda_0$ is chosen in~(\ref{eq:lambda0}).
Consequently, under the conditions of Theorem~\ref{th:1}, and if~$\lm_{0}$
obeys~(\ref{eq:3.3}) then we have that
$$
\|J_Kf^{\lm_{0}}_{z,z^\nu}-f_{\rho} \|_{\rho} =
\mathcal O(\varphi(\lm_{0}(|z|))),\quad \text{as}\ \ |z|\to \infty.
$$
\end{cor}
\begin{remark}
\label{rem:emp-eff-dim}
The effective dimension $\Ncl(\lambda)$ can be rather
accurately estimated from the data (see,
e.g.,~\cite[Prop.~1]{Rudi2015}) that makes the parameter choice
$\lambda=\lambda_0$ practically feasible.
\end{remark}
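A hypothetical numerical sketch of this parameter choice, solving $\mathcal N(\lambda)=\lambda\left|z\right|$ by bisection with the empirical effective dimension in place of $\mathcal N$ (the names are ours; bisection applies because $\mathcal N$ is decreasing while $\lm\mapsto\lm\left|z\right|$ is increasing, so the crossing point is unique):

```python
import numpy as np

def lambda_0(K, lo=1e-12, hi=None, iters=200):
    """Solve N(lam) = lam * |z| by bisection for the a priori parameter
    lambda_0, with N estimated from the Gram matrix K."""
    n = K.shape[0]
    mu = np.clip(np.linalg.eigvalsh(K) / n, 0.0, None)
    N = lambda lam: np.sum(mu / (mu + lam))
    if hi is None:
        hi = mu.max() + 1.0          # here N(hi) < n <= hi * n
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if N(mid) > mid * n:         # still left of the crossing point
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

No knowledge of the index function $\varphi$ enters this computation, in line with the a priori nature of the choice.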
\begin{remark}
\label{rem:lm0-feasible}
We comment on when the above choice of~$\lm_{0}$ obeys the
condition from~(\ref{eq:3.3}). We claim that this holds true whenever
the effective dimension grows at least as~$\log(1/\lm)$, as~$\lm\to 0$.
Indeed, we have that~$|z|\lm_{0}\leq 1$, and hence that~$\log(|z|)
\leq \log(1/\lm_{0})$, such that in this case we find that
$$
|z|\lm_{0} = \mathcal N(\lm_{0}) \geq \log(1/\lm_{0})\geq \frac 1 2 \log\left(\frac{|z|}{\delta}\right),
$$
provided that for given confidence level~$1-\delta$, the sample
size~$|z|$ is large
enough.
This condition on the effective dimension is fulfilled for all types
of behavior of the effective dimension discussed in the literature (see, e.g., the discussion of
power-type behavior in Section~\ref{sec:efficiency} below).
\end{remark}
\subsection{Full data}
\label{sec:full-data}
Note that in the case when $\left|z^{\nu}\right| = \left|z\right|$, the inequality~\eqref{eq:2.5} is satisfied
because in view of~\eqref{eq:2.2n} and~\eqref{eq:3.3}, $\Ncl_{\infty}(\lm) \log(1/\lm) $ is of
lower order than $\left|z^{\nu}\right| = \left|z\right|$, i.e.
$ \Ncl_{\infty}(\lm) \log(1/\lm) = \mathcal O\kl{
\abs{z} \cdot \log^{-1}\abs{z} \cdot \log\log \abs{z}
} $. Therefore, Theorem~\ref{th:1} has the following corollary.
\begin{cor}
If $z^{\nu} = z$, then under the conditions of Theorem~\ref{th:1}, we have
\begin{align}\label{eq:3.14}
\left\| J_K f^{\lm_0}_{z} - f_{\rho} \right\|_{\rho}=
\mathcal O\left(\varphi(\lambda_0)\log\dfrac{1}{\delta}\right)\quad
\text{as}\ |z| \to \infty.
\end{align}
\end{cor}
The error of Nystr\"{o}m subsampling that follows from Theorem~\ref{th:1} and
Corollary~\ref{cor:lambda0} coincides with the best learning rate known in the
misspecified case for KRR with general Mercer kernels and full data,
i.e.\ $z^{\nu}=z$.
\begin{xmpl}
We discuss the previously considered H{\"o}lder-type index
functions $\varphi(t)=t^r, \ r\in (0,1/2]$, and under the usual
assumption on the effective dimension
$\Ncl(\lambda)=\mathcal O(\lambda^{-s}), \ s\in(0,1]$. Then the bound~\eqref{eq:3.14}
is of order $\mathcal O\left(\left|z\right|^{ -r/(s+1) } \right)$.
For KRR with full data, this result is in accordance with~\cite{ShaoboLin}.
\end{xmpl}
\subsection{Efficiency of Subsampling}
\label{sec:efficiency}
Now we are in a position to discuss conditions under which the plain
Nystr{\"o}m subsampling achieves \eqref{eq:3.14} with subquadratic
cost~$ \textfrc o(\left|z\right|^2)$.
In order to establish the superiority of the subsampling, an
additional assumption is made, borrowed from~\cite{VitRosToi14}.
\begin{ass}
[source condition for kernel]\label{ass:kernel-src}
There exist $\gamma\in (0,1]$ and
$c_{\gamma}>0$ such that for all $x\in X$ the kernel sections
$K(\cdot,x)\in\mathcal{H}_K$ satisfy the source condition
\begin{align} \label{eq:3.16} K(\cdot,x)= (J^*_KJ_K)^{\gamma
/2}\upsilon_{x}, \ \left\|\upsilon_x\right\|_{\mathcal{H}_K}\leq
c_\gamma.
\end{align}
\end{ass}
\begin{remark}\label{rem:kernel}
As discussed in Remark~\ref{rem:smoothness} there is always an index
function, say~$\psi$, which guarantees that
$$
K(\cdot,x)= \psi(J^*_KJ_K)\upsilon_{x}, \
\left\|\upsilon_x\right\|_{\mathcal{H}_K}\leq
(1 + \varepsilon) \| K_{x}\|_{ \Hcl_K }
\leq (1 + \varepsilon) \sqrt{\kp} .
$$
Thus, Assumption~\ref{ass:kernel-src} requires this index function to be
of at least power type.
\end{remark}
We mention the following consequence of Assumption~\ref{ass:kernel-src}.
\begin{lemma}\label{lem:gamma-bound}
Under Assumption~\ref{ass:kernel-src} we have that
$$
\mathcal N_{\infty}(\lambda) \leq c^2_\gamma \lambda^{\gamma-1}.
$$
\end{lemma}
\begin{proof}
This simply follows from
\begin{align*}
\mathcal N_{\infty}(\lambda) &=\sup\limits_{x\in X}\left\|(\lambda
I+J^*_KJ_K)^{-1/2}K(\cdot,x)\right\|^2_{\mathcal{H}_K}\\
&\leq\sup\limits_{x\in X}\left\|\upsilon_x\right\|^2_{\mathcal{H}_K}\sup\limits_{t>0}[(\lambda+t)^{-1/2}t^{\gamma/2}]^2
\\
&\leq c^2_\gamma \sup_{t>0} (\lambda+t)^{-1}t^{\gamma}\leq c^2_\gamma\lambda^{\gamma-1},
\end{align*}
which completes the proof.
\end{proof}
Recall that $f^{\lambda}_{z,z^{\nu}}$ can be computed with a
computational cost
$\mathcal O(\left|z\right|\cdot\left|z^{\nu}\right|^2)$, and note that
\eqref{eq:2.5} is the only condition on the subsampling size
$\left|z^{\nu}\right|$ that is needed in Theorem~\ref{th:1}. Then from
Corollary~\ref{cor:lambda0} it follows that the Nystr{\"o}m approximation
$f^{\lambda_0}_{z,z^{\nu}}$ realizing the order \eqref{eq:3.14} can be
computed with a computational cost
\begin{align}\label{eq:3.15}
\operatorname{cost}\left(f^{\lambda_0}_{z,z^{\nu}}\right)=\mathcal O \left(\left|z\right|\cdot\left(\mathcal
N_{\infty}(\lambda_0)\log\dfrac{1}{\lambda_0}\right)^2\right).
\end{align}
If we stay with the standard assumption that
$\mathcal N(\lambda) \asymp \lambda^{-s}$,
then, since by definition $\mathcal N(\lambda)\leq \mathcal N_{\infty}(\lambda)$,
Lemma~\ref{lem:gamma-bound} yields
\begin{align*}
s+\gamma\leq 1.
\end{align*}
In the considered scenario, from~\eqref{eq:lambda0}
and~\eqref{eq:3.15}, we have $\lambda_0 \asymp \left|z\right|^{-1/(s+1)}$, and the cost can be
bounded as
\begin{align*}
\operatorname{cost}(f^{\lambda_0}_{z,z^{\nu}})=\mathcal O\left(\left|z\right|\cdot\left|z\right|^{ 2(1-\gamma)/(s+1) }
\log^2\left|z\right|\right)=\mathcal O\left(\left|z\right|^{ (3+s-2\gamma)/(1+s) }
\log^2\left|z\right|\right),
\end{align*}
which is subquadratic whenever~$2\gamma+s>1$. We summarize this as
\begin{proposition}\label{prop:2}
Assume that Assumption~\ref{ass:kernel-src} holds true and $\Ncl(\lm) \asymp \lm^{-s}$, $s\in ( 0, 1-\gm ]$.
If $2\gm + s > 1$, then the plain Nystr\"{o}m approximation
$ f_{ z, z^{\nu} }^{ \lm_0 } $
can be computed at a subquadratic computational cost, and it preserves the learning rate~\eqref{eq:3.14}
guaranteed for the full amount of data.
\end{proposition}
In particular,
if Assumption~\ref{ass:kernel-src} is satisfied with $\gm > 1/2$,
then the plain Nystr\"{o}m approximation can always be computed at a subquadratic cost still maintaining guaranteed
learning rates.
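The subquadratic-cost condition above admits a quick numeric sanity check. The following sketch (our own script; the function names are hypothetical, not from the analysis) evaluates the cost exponent $(3+s-2\gamma)/(1+s)$ and confirms that it drops below $2$ exactly when $2\gamma+s>1$:

```python
# Our own numeric sanity check of the discussion above: the cost
# exponent (3 + s - 2*gamma)/(1 + s) is below 2 iff 2*gamma + s > 1.

def cost_exponent(gamma, s):
    """Exponent of |z| in the cost bound, for s in (0, 1 - gamma]."""
    assert 0 < gamma <= 1 and 0 < s <= 1 - gamma
    return (3 + s - 2 * gamma) / (1 + s)

def subquadratic(gamma, s):
    """True when the Nystroem cost bound is o(|z|^2)."""
    return cost_exponent(gamma, s) < 2
```

For instance, $\gamma=s=1/2$ gives the exponent $5/3<2$, while $\gamma=0.4$, $s=0.2$ sits exactly on the boundary $2\gamma+s=1$ and yields the exponent $2$.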
\section{Proofs}\label{se_4}
\label{sec:proofs}
\subsection{A regularization perspective to KRR}
\label{sec:regularization}
Here we briefly emphasize the aspects of regularization theory which
will be relevant in the subsequent proofs.
We recall the structure of the estimator~$f_{z,z^{\nu}}^{\lm}$
from~(\ref{eq:2.3}) as
$$
f_{z,z^\nu}^\lambda = \left(\lambda
I+B_{\nu}^{\ast}B_{\nu}\right)^{-1} B_{\nu}^{\ast}\mathbb{Y},
$$
with~$B_{\nu}= S_{x} P_{z^{\nu}}$. We can write this as
$ f_{z,z^\nu}^\lambda = g_{\lambda}(B_{\nu}^{\ast}B_{\nu})B_{\nu}^{\ast}\mathbb{Y}$,
where we introduced the KRR filter function~$g_{\lambda}(t):= 1/(t + \lm ),\
t,\lm>0$, applied to the non-negative self-adjoint
operator~$B_{\nu}^{\ast}B_{\nu}$ via spectral calculus.
We shall also employ the fact that for any linear bounded operator,
say~$B$ acting between Hilbert spaces, and for any bounded function $g$
it holds~$g(B^*B)B^*=B^*g(BB^*)$. The corresponding residual
function is given as~$r_{\lambda}(t) := 1 - g_{\lambda}(t) t =
\lm/(t+\lm),\ t,\lm >0 $. In particular we have that~$0 < r_{\lambda}(t) \leq 1$. The impact of the residual function on the
given solution smoothness is measured by its qualification, and we
mention the well known result that
\begin{equation}
\label{eq:quali}
\sup_{t>0}\left|r_{\lambda}(t) \varphi(t)\right| \leq \varphi(\lm),\quad \lm>0,
\end{equation}
provided that the index function~$\varphi$ is such that~$\varphi(t)/t$
is non-increasing, as is the case for the functions~$\varphi$
which obey Assumption~\ref{ass:smoothness}.
Applying this for the index function~$t \mapsto t^{q}\varphi(t)$,
with~$0 \leq q \leq 1/2$, we find that this still obeys the assumption
for~(\ref{eq:quali}), and hence (see, e.g., (16) in \cite{Kriukova2017}) we have that
\begin{align}\label{eq:3.9}
\sup_{t>0}\left|r_{\lambda}(t)t^q\varphi(t)\right|\leq
c\lambda^q\varphi(\lambda), \ \text{when } q\in[0,1/2].
\end{align}
Finally, by the specific structure we see that~$g_{\lambda}(t) = r_{\lambda}(t)/ \lm $,
such that~(\ref{eq:3.9}) yields with~$q:= 1/2$ that
\begin{equation}
\label{eq:ga-bound}
\left|g_{\lambda}(t)\sqrt t \varphi(t) \right| \leq \varphi(\lm)
\lm^{-1/2},\quad \lm>0.
\end{equation}
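The qualification bound~(\ref{eq:quali}) can be illustrated numerically for the H\"older-type functions from the example above. The following script is our own check (not part of the original analysis) and evaluates the supremum on a logarithmic grid:

```python
import math

# Grid check (our own) of the qualification bound for the KRR residual
# r_lam(t) = lam/(t + lam): for phi(t) = t**r with r in (0, 1] one
# expects sup_t r_lam(t) * phi(t) <= phi(lam) = lam**r.

def qualification_gap(lam, r, n=20000, lo=1e-8, hi=1e6):
    """Grid maximum of r_lam(t) * t**r minus lam**r (expected <= 0)."""
    best = 0.0
    for i in range(n + 1):
        # logarithmically spaced grid on [lo, hi]
        t = math.exp(math.log(lo) + (math.log(hi) - math.log(lo)) * i / n)
        best = max(best, lam / (t + lam) * t ** r)
    return best - lam ** r
```

For $r\leq 1$ the maximizer is $t^{*}=r\lambda/(1-r)$ with value $\lambda^{r} r^{r}(1-r)^{1-r}\leq\lambda^{r}$, so the gap stays nonpositive.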
\subsection{Probabilistic bounds}
\label{sec:prob-bounds}
We shall also use probabilistic bounds.
From~\cite[Lem.~6]{Rudi2015} and \cite[Cor.~1]{Myleiko2017} it follows that if
$z^\nu$ is subsampled according to the plain Nystr{\"o}m approach,
then with probability at least $1-\delta$ we have
\begin{align} \label{eq:2.4} \left\|\
J_K(I-P_{z^\nu})\right\|^2_{\mathcal{H}_K\rightarrow
L_2(X,\rho_X)} \leq 3\lambda,
\end{align}
provided that~\eqref{eq:2.5} holds.
We also have recourse to the following inequality
from~\cite[Lem.~5]{Rudi2015}, which asserts that
\begin{align} \label{eq:3.2} \left\|(\lambda I+J^*_KJ_K)^{1/2}(\lambda
I+S_{x}^{\ast}S_{x})^{-1/2}\right\|_{\mathcal{H}_K\rightarrow\mathcal{H}_K}\leq2,
\end{align}
which is satisfied with probability at least $1-\delta$ provided that~\eqref{eq:3.3} holds.
Moreover, from \cite[Lem.~5.1]{Blanchard2016} it follows that for
$\lambda$ satisfying \eqref{eq:3.3}, with probability at least
$1-\delta$ we have
\begin{align} \label{eq:3.4} \left\|(\lambda
I+J^*_KJ_K)^{-1/2}(J^*_KJ_K-S_{x}^{\ast}S_{x})\right\|_{\mathcal{H}_K\rightarrow\mathcal{H}_K}\leq
c
\log{\dfrac{1}{\delta}}\sqrt{\dfrac{{\Ncl(\lambda)}}{{\left|z\right|}}},
\end{align}
and
\begin{align}\label{eq:3.5}
\left\|(\lambda I+J^*_KJ_K)^{-1/2}(J^*_Kf_{\rho}-S_{x}^{\ast}\mathbb{Y})\right\|_{\mathcal{H}_K} \leq
c \log{\dfrac{1}{\delta}}\sqrt{\dfrac{{\Ncl(\lambda)}}{{\left|z\right|}}}.
\end{align}
\subsection{Proof of Theorem~\ref{th:1}}
\label{sec:proof-thm}
We first mention the following well known bound, using spectral calculus.
\begin{align}\label{eq:J_K-bound}
\begin{split}
\left\|J_K(\lambda I+J_K^* J_K)^{-1/2}\right\|_{\mathcal{H}_{K}\rightarrow L_2(X,\rho_X)}
& = \left\|(J^*_KJ_K)^{1/2}(\lambda
I+J^*_KJ_K)^{-1/2}\right\|_{\mathcal{H}_K\rightarrow \mathcal{H}_K} \\
&\leq \sup\limits_{t>0}(t/(\lambda +t))^{1/2}\leq 1.
\end{split}
\end{align}
Furthermore, the following result will be used, and we refer to~\cite[Lem.~2\&8]{Rudi2015}. For every
choice~$z^{\nu}$ from the sample~$z$ we have that
\begin{align} \label{eq:3.1} \left\|(\lambda I+ S_{x}^{\ast}S_{x})^{1/2}P_{z^\nu} (\lambda
I+P_{z^\nu}S_{x}^{\ast}S_{x} P_{z^\nu})^{-1}P_{z^\nu}(\lambda
I+S_{x}^{\ast}S_{x})^{1/2}\right\|_{\mathcal{H}_K\rightarrow\mathcal{H}_K}\leq 1.
\end{align}
Recall the error decomposition
\begin{align*}
\left\|J_Kf^\lambda_{z,z^\nu}-f_{\rho}\right\|_{\rho}\nonumber
&\leq \left\|f_{\rho}-J_K(\lambda I+B_{\nu}^{\ast}B_{\nu})^{-1}P_{z^\nu}J^*_Kf_{\rho}\right\|_{\rho}\nonumber \\
&\quad +\left\|J_K(\lambda I+ B_{\nu}^{\ast}B_{\nu})^{-1} P_{z^\nu}(J^*_Kf_{\rho}-S_{x}^{\ast}\mathbb{Y})\right\|_{\rho}.
\end{align*}
The sample error, i.e.\ the second term on the right hand side of the above inequality, can be estimated with the use of~\eqref{eq:J_K-bound}, \eqref{eq:3.1}, \eqref{eq:3.2} and~\eqref{eq:3.5} as follows:
\begin{align} \label{eq:3.7} &\left\| J_K(\lambda
I+ B_{\nu}^{\ast}B_{\nu})^{-1}P_{z^{\nu}}(J^*_Kf_{\rho}-S_{x}^{\ast}\mathbb{Y})
\right\|_{\rho}\nonumber
\\
&\leq \left\| J_K(\lambda
I+J^*_KJ_K)^{-1/2}\right\|_{\mathcal{H}_K\rightarrow
L_2(X,\rho_X)}\nonumber
\\
&\qquad \times\left\|(\lambda
I+J^*_KJ_K)^{1/2}(\lambda I+S_{x}^{\ast}S_{x}
)^{-1/2}\right\|_{\mathcal{H}_K\rightarrow\mathcal{H}_K}\nonumber
\\
&\qquad \times \left\|(\lambda
I+S_{x}^{\ast}S_{x})^{1/2}(\lambda
I+ B_{\nu}^{\ast}B_{\nu})^{-1}P_{z^{\nu}}(\lambda
I+S_{x}^{\ast}S_{x})^{1/2}\right\|_{\mathcal{H}_K\rightarrow\mathcal{H}_K}\nonumber
\\
&\qquad \times \left\|(\lambda
I+S_{x}^{\ast}S_{x})^{-1/2}(\lambda
I+J^*_KJ_K)^{1/2}
\right\|_{\mathcal{H}_K\rightarrow\mathcal{H}_K}\nonumber
\\
&\qquad \times\left\|(\lambda
I+J^*_KJ_K)^{-1/2}(J^*_Kf_{\rho}-S_{x}^{\ast}\mathbb{Y})\right\|_{\mathcal{H}_K}\nonumber
\\
&\leq 4c \log
\dfrac{1}{\delta}\sqrt{\dfrac{\Ncl(\lambda)}{\left|z\right|}}
\leq 4c \sqrt\lm \log
\dfrac{1}{\delta}\left( 1 +
\sqrt{\dfrac{\Ncl(\lambda)}{\lm\left|z\right|}}
\right).
\end{align}
The rest of the proof is to estimate the
approximation error, i.e.\ the first term
on the right hand side of~\eqref{eq:3.6}. This can further be decomposed as
\begin{align} \label{eq:3.8} \left\| f_{\rho}-J_K(\lambda
I+B_{\nu}^{\ast}B_{\nu})^{-1}P_{z^\nu}J^*_Kf_{\rho}\right\|_{\rho}\leq
I_1+I_2,
\end{align}
where
\begin{align*}
\begin{split}
I_1&=\left\|f_{\rho}-J_K(\lambda
I+P_{z^\nu}J^*_KJ_KP_{z^\nu})^{-1}P_{z^\nu}J^*_Kf_{\rho}\right\|_{\rho},
\\
I_2&=\left\| J_K[(\lambda I+P_{z^\nu}J^*_KJ_KP_{z^\nu})^{-1}-
(\lambda I+B_{\nu}^{\ast}B_{\nu})^{-1}]P_{z^\nu}J^*_Kf_{\rho}\right\|_{\rho}
\\
&=\left\| J_K(\lambda
I+B_{\nu}^{\ast}B_{\nu})^{-1}P_{z^\nu}(J^*_KJ_K- S_{x}^{\ast}S_{x})
\right.
\\
&\quad \times\left.P_{z^\nu}(\lambda
I+P_{z^\nu}J^*_KJ_KP_{z^\nu})^{-1}P_{z^\nu}J^*_Kf_{\rho}\right\|_{\rho}.
\end{split}
\end{align*}
To estimate $I_1$ we recall~\eqref{eq:3.9}.
Moreover, from \eqref{eq:2.2}, \eqref{eq:2.4} it follows that under
the condition \eqref{eq:2.5} we have
\begin{align} \label{eq:3.10}
&\left\|\varphi(J_KJ^*_K)-\varphi(J_KP_{z^\nu}J^*_K)\right\|_{L_2(X,\rho_X)\rightarrow
L_2(X,\rho_X)}\nonumber
\\
&\quad \leq
d_{\varphi}\varphi(\left\|J_K(I-P_{z^{\nu}})J^*_K\right\|_{L_2(X,\rho_X)\rightarrow
L_2(X,\rho_X)})\leq c\varphi(\lambda).
\end{align}
Then, using the source condition~\eqref{eq:2.1} with $f=f_{\rho}$,
and the qualification of KRR as in~(\ref{eq:quali}), we can estimate
$I_1$, by using~$r_{\lambda}(t):= \lm/(\lm + t),\ t,\lm>0$, as follows
\begin{align*}
I_1&=\left\|(I-J_KP_{z^\nu}J^*_K(\lambda I+J_KP_{z^\nu}J^*_K)^{-1})f_{\rho}\right\|_{\rho}\nonumber
\\
&\leq\left\|r_{\lambda}(J_KP_{z^\nu}J^*_K)\varphi(J_KP_{z^\nu}J^*_K)\upsilon_{f_{\rho}}\right\|_{\rho}\notag\\
& \qquad
+\left\|r_{\lambda}(J_KP_{z^\nu}J^*_K)(\varphi(J_KJ^*_K)-\varphi(J_KP_{z^\nu}J^*_K))\upsilon_{f_{\rho}}\right\|_{\rho}\nonumber
\\
&\leq \left\|\upsilon_{f_{\rho}}\right\|_{\rho}
\kl{ \sup_{t>0}r_{\lambda}(t)\varphi(t)
+\left\|\varphi(J_KJ^*_K)-\varphi(J_KP_{z^\nu}J^*_K)\right\|_{L_2(X,\rho_X)\rightarrow L_2(X,\rho_X)}
}
\nonumber \\
& \leq c\varphi(\lambda).
\end{align*}
To estimate $I_2$ we observe that $I_2\leq I_{2,1}\cdot I_{2,2}$,
where
\begin{align*}
I_{2,1}&=\left\|J_K(\lambda I+B_{\nu}^{\ast}B_{\nu})^{-1}P_{z^{\nu}}(J^*_KJ_K-S_{x}^{\ast}S_{x}) \right\|_{\mathcal{H}_K\rightarrow\mathcal{H}_K},
\\ I_{2,2}&=\left\|(\lambda I+P_{z^{\nu}}J^*_KJ_KP_{z^{\nu}})^{-1}P_{z^{\nu}}J^*_Kf_{\rho} \right\|_{\mathcal{H}_K}.
\end{align*}
By the same chain of arguments as in \eqref{eq:3.7} we obtain that
\begin{align*}
I_{2,1}\leq c\log \dfrac{1}{\delta}\sqrt{\dfrac{\Ncl(\lambda)}{\left|z\right|}},
\end{align*}
where the only difference is that one needs to use \eqref{eq:3.4}
instead of \eqref{eq:3.5}.
Observing that $I_{2,2}\leq I_{2,2,1}+I_{2,2,2}$, we then have
\begin{align*}
I_{2,2,1}&=\left\|P_{z^{\nu}}J^*_K(\lambda I+J_KP_{z^{\nu}}J^*_K)^{-1}\varphi(J_KP_{z^{\nu}}J^*_K)\upsilon_{f_{\rho}} \right\|_{\mathcal{H}_K},
\\
I_{2,2,2}&=\left\|P_{z^{\nu}}J^*_K(\lambda I+J_KP_{z^{\nu}}J^*_K)^{-1}(\varphi(J_KJ^*_K)-\varphi(J_KP_{z^{\nu}}J^*_K))\upsilon_{f_{\rho}} \right\|_{\mathcal{H}_K}.
\end{align*}
Using~(\ref{eq:ga-bound}) we derive
\begin{align*}
I_{2,2,1}& =
\|g_{\lambda}(J_KP_{z^{\nu}}J^*_K)(J_KP_{z^{\nu}}J^*_K)^{1/2}\varphi(J_KP_{z^{\nu}}J^*_K)\|\left\|\upsilon_{f_{\rho}}\right\|_{\rho}\\
&\leq \left\|\upsilon_{f_{\rho}}\right\|_{\rho}
\sup_{t>0}\left|g_{\lambda}(t)\right|t^{1/2}\varphi(t) \leq c \varphi(\lambda)\lambda^{-1/2}.
\end{align*}
Moreover, similarly to~(\ref{eq:J_K-bound}), and \eqref{eq:3.10} there holds
\begin{align}\label{eq:approxi}
I_{2,2,2}\leq c \left\|\upsilon_{f_{\rho}}\right\|_{\rho} \varphi(\lambda)\sup_{t>0}\left|g_{\lambda}(t)\right|t^{1/2}\leq c\varphi(\lambda)\lambda^{-1/2}.
\end{align}
Thus, we have that~$I_{2,2}\leq c \varphi(\lambda)\lambda^{-1/2}$,
and hence overall
\begin{align}\label{eq:approxi2}
I_2\leq c \log\dfrac{1}{\delta}\varphi(\lambda)
\sqrt{ \dfrac{\Ncl(\lambda)}{ \lm \left|z\right| } },\quad \lm>0.
\end{align}
Combining this with \eqref{eq:3.6}, \eqref{eq:3.7}, \eqref{eq:3.8}
and \eqref{eq:3.1} we obtain the statement of the theorem, and the
proof is complete.
\begin{remark}\label{rem:approxi}
We emphasize that in Theorem \ref{th:1} the total error estimate is dominated by the approximation error, which is induced by the estimate of the term $I_{2,2,2}$ in (\ref{eq:approxi}). The misspecified source condition of Assumption \ref{ass:smoothness} then yields the factor $\varphi(\lambda)\lambda^{-\frac{1}{2}}$, which blows up as $\lambda\rightarrow 0$. As a consequence, the estimate of $I_2$ in (\ref{eq:approxi2}) dominates the sample error.
\end{remark}
\subsection{Proof of Corollary~\ref{cor:lambda0}}
\label{sec:proof-prop}
The right inequality in~(\ref{eq:lambda-minr}) is obvious by the
choice of~$\lm_{0}$ from~(\ref{eq:lambda0}). To prove the left
inequality we distinguish two cases. First, if $\lambda>\lambda_0$
then $ \theta_{\varphi}(\lambda)>\varphi(\lambda)>\varphi(\lambda_0)$.
Otherwise, if~$\lambda\leq\lambda_0$, then we use that by assumption the function
$\lambda\mapsto\varphi(\lambda)/\sqrt{\lambda}$ is decreasing,
and hence
\begin{align*}
\theta_{\varphi}(\lambda)=
\dfrac{\varphi(\lambda)}{\sqrt{\lambda}}\left(\sqrt{\lambda}+\sqrt{\dfrac{\Ncl(\lambda)}{\left|z\right|}}\right)\geq
\dfrac{\varphi(\lambda)}{\sqrt{\lambda}}\sqrt{\dfrac{\Ncl(\lambda)}{\left|z\right|}}\geq
\dfrac{\varphi(\lambda_0)}{\sqrt{\lambda_0}}\sqrt{\dfrac{\Ncl(\lambda_0)}{\left|z\right|}}=\varphi(\lambda_0).
\end{align*}
This proves the left hand side bound and completes the proof of the
first assertion. The second one is an immediate application of the
theorem, and the proof is complete.
\input{ms_bbl.bbl}
\end{document}
\section{Introduction}
\begin{figure}[ht]
\centering
\fbox{
\includegraphics[width=.7\linewidth]{images/essentialMVRrecovery.png}}
\caption{\small{The procedure for learning the structure of an essential MVR CG from a faithful distribution.}}
\label{fig:my_label}
\end{figure}
Probabilistic graphical models (PGMs) use graphs, either undirected, directed, bidirected, or mixed, to represent possible dependencies among the variables of a multivariate probability distribution. Two types of graphical representations of distributions are commonly used, namely, Bayesian networks (BNs) and Markov networks (MNs), also known as Markov random fields, whose graphical parts are, respectively, a directed acyclic graph (DAG) and an undirected graph. Both families encompass the properties of factorization and independencies, but they differ in the set of independencies they can encode and in the factorization of the distribution that they induce.
Currently, systems containing both causal and non-causal relationships are mostly modeled with directed acyclic graphs (DAGs). An alternative approach is to use chain graphs (CGs). Chain graphs may
have both directed and undirected edges, under the constraint that there do not exist any semi-directed cycles \citep{d}. So, CGs may contain two types of edges:
the directed type, which corresponds to the causal relationship in DAGs, and a
second type of edge representing a symmetric relationship \citep{s2}. In
particular, $X_1$ is a direct cause of $X_2$ only if $X_1\to X_2$ (i.e., $X_1$ is a parent
of $X_2$), and $X_1$ is a (possibly indirect) cause of $X_2$ only if there is a directed
path from $X_1$ to $X_2$ (i.e., $X_1$ is an ancestor of $X_2$). So, while the interpretation of the directed edge in a CG is quite clear,
the second type of edge can represent different types of relations. Depending on how we interpret it in the graph, we obtain different CG interpretations, with different separation criteria, i.e.\ different ways of reading conditional independencies from the graph, and different intuitive meanings behind
their edges. The following three interpretations are the best known in the literature. The first interpretation (LWF) was introduced by Lauritzen,
Wermuth and Frydenberg \citep{lw, f} to combine DAGs and undirected graphs (UGs). The second
interpretation (AMP), was introduced by Andersson, Madigan and Perlman, and also combines DAGs and UGs but with a separation criterion
that more closely resembles the one of DAGs \citep{amp}. The third interpretation,
the multivariate regression interpretation (MVR), was introduced by Cox
and Wermuth \citep{cw1, cw2} to combine DAGs and bidirected (covariance) graphs.
Unlike in the other CG interpretations, the bidirected edge in MVR CGs has
a strong intuitive meaning. It can be seen to represent one or more hidden
common causes between the variables connected by it. In other words, in an MVR CG any bidirected
edge $X\leftrightarrow Y$ can be replaced by $X\gets H\to Y$ to obtain a Bayesian network representing
the same independence model over the original variables, i.e. excluding the
new variables H. These variables are called hidden, or latent, and have been
marginalized away in the CG model \citep{s}. See \citep{jv1} for details on the properties of MVR chain graphs.
Latent variables, which are often present in practice, cause several complications. First, causal inference based on structural learning (model selection) algorithms such as the PC algorithm \citep{sgs} may be incorrect. Second, if a distribution is faithful\footnote{A distribution $P$ is faithful to DAG $G$ if any independency in $P$ implies a corresponding $d$-separation property in $G$ \citep{sgs}.} to a DAG, then the distribution obtained by marginalizing on some of the variables may not be faithful to any DAG on the observed variables, i.e., the space of DAGs is not closed under marginalization \citep{cmkr}.
These problems can be solved by exploiting MVR chain graphs. An example of a situation for which CG is useful is if we have
a system containing two genes and two diseases caused by these such that Gene1
is the cause of Disease1, Gene2 is the cause of Disease2, and the diseases are correlated. In this case we might suspect the presence of an
unknown factor inducing the correlation between Disease1 and Disease2, such as
being exposed to a stressful environment. Having such a hidden variable results in the
independence model described above. The MVR CG representing the information
above is shown in Figure \ref{Fig:gene} (a) while the best (inclusion optimal) BN and MN are shown in
Figure \ref{Fig:gene} (b) and (c), respectively. We can now see that it is only the MVR CG that
describes the relations in the system correctly \citep{Sonntag2015}.
\begin{figure}[ht]
\centering
\includegraphics[scale=.4]{images/gene.png}
\caption{A gene and disease example with MVR CG representation, BN representation and MN
representation \citep{Sonntag2015}.} \label{Fig:gene}
\end{figure}
As a result, designing efficient algorithms for learning the structure of MVR chain graphs is an important and desirable task.
Sonntag lists four constraint-based learning algorithms for CGs. All are based on testing if variables
are (conditionally) independent in the data using an independence test, and using this information to deduce the structure of the optimal graph. These algorithms are the PC-like algorithms
\citep{srs, p1, sp}, the answer set programming (ASP) algorithms \citep{p3, sjph}, the LCD algorithm \citep{mxg} and the
CKES algorithm \citep{psn}. The former two have implementations for all three
CG interpretations, while the latter two are only applicable for LWF CGs \citep{s2}.
In this paper, we propose a decomposition approach for recovering the structures of MVR CGs. Our algorithms are natural extensions of the algorithms in \citep{xie}. In particular, the
rule in \citep{xie} for combining local structures into a global skeleton is still applicable,
and no extra care (unlike, for example, in the algorithms of \citep{mxg}) is needed to ensure a valid combination. Moreover, the method for
extending a global skeleton to a Markov equivalence class is exactly the same as that for Bayesian networks.
The paper is organized as follows: Section \ref{defs&concepts} gives notation and definitions. In Section 3, we show a condition for decomposing structural learning
of MVR CGs. Construction of $m$-separation trees to be used for decomposition is discussed in Section \ref{construct-trees}. We propose the
main algorithm and then give an example in Section \ref{main-alg} to illustrate our approach for recovering the global structure
of an MVR CG. Section \ref{complexity} discusses the complexity and advantages of the proposed algorithms. Section \ref{evaluation} describes our evaluation setup. Both Gaussian and discrete networks were used. A comparison with the PC-like algorithm of \citep{sp} was carried out. Both quality of the recovered networks and running time are reported. Finally, we conclude with
some discussion in Section \ref{discussion}. The proofs of our main results and the correctness of the algorithms are given in Appendices A and B.
\section{Definitions and Concepts}\label{defs&concepts}
In this paper we consider graphs containing both directed ($\to$) and bidirected ($\leftrightarrow$) edges and largely use the terminology of \citep{xie, r2}, where the reader can also find further details. Below we briefly list some of the most central concepts used in this paper.
If there is an arrow from $a$ pointing towards $b$, $a$ is said to be a parent
of $b$. The set of parents of $b$ is denoted as $pa(b)$. If there is a bidirected edge between $a$ and $b$, $a$ and $b$ are said to be neighbors. The set of neighbors of a vertex $a$ is denoted as $ne(a)$. The expressions $pa(A)$ and $ne(A)$ denote the collection of
parents and neighbors of vertices in $A$ that are not themselves
elements of $A$. The boundary $bd(A)$ of a subset $A$ of vertices is the set of vertices in $V\setminus A$ that are parents or neighbors to vertices in $A$.
A path of length $n$ from $a$ to $b$ is a sequence $a=a_0,\dots , a_n=b$ of
distinct vertices such that $(a_i\to a_{i+1})\in E$ for all $i=0,\dots ,n-1$. A chain of length $n$ from $a$ to $b$ is a sequence $a=a_0,\dots , a_n=b$ of
distinct vertices such that $(a_i\to a_{i+1})\in E$, or $(a_{i+1}\to a_i)\in E$, or $(a_{i+1}\leftrightarrow a_i)\in E$, for all $i=0,\dots ,n-1$. We say that $u$ is an ancestor of $v$ and $v$
is a descendant of $u$ if there is a path from $u$ to $v$ in $G$.
The set of ancestors of $v$ is denoted as $an(v)$, and we define $An(v) = an(v)\cup v$. We apply this definition to sets: $an(X) = \{\alpha | \alpha \textrm{ is an ancestor of } \beta \textrm{ for some } \beta \in X\}$.
A partially directed cycle in a graph $G$ is a sequence of $n$ distinct vertices $v_1,\dots, v_n (n\ge 3)$,
and $v_{n+1}\equiv v_1$, such that
\begin{itemize}
\item $\forall i (1\le i\le n)$ either $v_i\leftrightarrow v_{i+1}$ or $v_i\to v_{i+1}$, and
\item $\exists j\ (1\le j\le n)$ such that $v_j\to v_{j+1}$.
\end{itemize}
A graph with only undirected edges is called an undirected graph (UG). A graph with only
directed edges and without directed cycles is called a directed acyclic graph (DAG). Acyclic directed mixed graphs, also known as semi-Markov(ian) \citep{pj}
models contain directed ($\rightarrow$) and bidirected
($\leftrightarrow$) edges subject to the restriction that there are no directed cycles \citep{r2,er}. A graph that has no partially directed cycles is called a \textit{chain graph}.
A nonendpoint vertex $\zeta$ on a chain is a \emph{collider} on the chain if the edges preceding and succeeding $\zeta$ on the chain have an arrowhead at $\zeta$, that is, $\to \zeta \gets$, $\leftrightarrow \zeta \leftrightarrow$, $\leftrightarrow \zeta \gets$, or $\to \zeta \leftrightarrow$. A nonendpoint vertex $\zeta$ on a chain which is not a collider is a noncollider on the chain. A chain between vertices $\alpha$ and $\beta$ in a chain graph $G$ is said to be $m$-connecting given a set $Z$ (possibly empty), with $\alpha, \beta \notin Z$, if:
\begin{enumerate}
\item[(i)] every noncollider on the chain is not in $Z$, and
\item[(ii)] every collider on the chain is in $An_G(Z)$.
\end{enumerate}
A chain that is not $m$-connecting given $Z$ is said to be blocked given (or by) $Z$.
If there is no chain $m$-connecting $\alpha$ and $\beta$ given $Z$, then $\alpha$ and $\beta$ are said to be \emph{m-separated} given $Z$. Sets $X$ and $Y$ are $m$-separated given $Z$ if, for every pair $\alpha, \beta$ with $\alpha\in X$ and $\beta \in Y$, $\alpha$ and $\beta$ are $m$-separated given $Z$ ($X$, $Y$, and $Z$ are disjoint sets; $X, Y$ are nonempty). We denote the independence model resulting from applying the $m$-separation criterion to $G$ by $\Im_m(G)$. This is an extension of Pearl's $d$-separation criterion \citep{pearl1} to MVR chain graphs in that in a DAG $D$, a chain is $d$-connecting if and only if it is $m$-connecting.
Two vertices $x$ and $y$ in chain graph $G$ are said to be collider connected if there is a chain from $x$ to $y$ in $G$ on which every non-endpoint vertex is a collider; such a chain is called a collider chain. Note that a single edge trivially forms a collider chain (path), so if $x$ and $y$ are adjacent in a chain graph then they are collider connected. The augmented graph derived from $G$, denoted $(G)^a$, is an undirected graph with the same vertex set as $G$ such that $$c\--d \textrm{ in } (G)^a \Leftrightarrow c \textrm{ and } d \textrm{ are collider connected in } G.$$
Disjoint sets $X, Y\ne \emptyset,$ and $Z$ ($Z$ may be empty) are said to be
$m^\ast$-separated if $X$ and $Y$ are separated by $Z$ in $(G_{an(X\cup Y\cup Z)})^a$. Otherwise $X$ and $Y$ are said to be $m^\ast$-connected
given $Z$. The resulting independence model is denoted by $\Im_{m^\ast}(G)$.
According to \citep[Theorem 3.18]{rs} and \citep{jv1}, for a chain graph $G$ we have: $\Im_m(G)=\Im_{m^\ast}(G)$.
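Since $m$- and $m^\ast$-separation coincide, the $m^\ast$-separation route gives a direct way to test $m$-separation computationally: restrict to the ancestral set, form the augmented graph, and test ordinary undirected separation. The sketch below is our own encoding (the graph representation and function names are assumptions, not from the text); it exploits the fact that in an MVR CG every collider chain is a single edge or passes through one bidirected connected component.

```python
from collections import deque

def ancestors(directed, nodes):
    """Vertices with a directed path into `nodes`, together with `nodes`."""
    rev = {}
    for u, v in directed:                      # edge u -> v
        rev.setdefault(v, set()).add(u)
    seen, stack = set(nodes), list(nodes)
    while stack:
        for u in rev.get(stack.pop(), ()):
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def augmented(vertices, directed, bidirected):
    """Adjacency of (G)^a: c -- d iff c and d are collider connected.

    In an MVR CG a collider chain is a single edge or has the form
    c (->|<->) z_1 <-> ... <-> z_k (<-|<->) d, so it suffices to link
    all vertices sending an arrowhead into one bidirected component."""
    adj = {v: set() for v in vertices}
    for u, v in list(directed) + list(bidirected):
        adj[u].add(v); adj[v].add(u)           # adjacency of G itself
    badj = {v: set() for v in vertices}
    for u, v in bidirected:
        badj[u].add(v); badj[v].add(u)
    done = set()
    for v in vertices:
        if v in done:
            continue
        members, stack = {v}, [v]              # bidirected component of v
        while stack:
            for x in badj[stack.pop()]:
                if x not in members:
                    members.add(x); stack.append(x)
        done |= members
        senders = set(members)
        senders.update(u for u, w in directed if w in members)
        for a in senders:                      # clique-connect the senders
            adj[a].update(b for b in senders if b != a)
    return adj

def m_separated(directed, bidirected, X, Y, Z):
    """True iff X and Y are m*-separated by Z (disjoint vertex sets)."""
    keep = ancestors(directed, set(X) | set(Y) | set(Z))
    adj = augmented(keep,
                    [(u, v) for u, v in directed
                     if u in keep and v in keep],
                    [(u, v) for u, v in bidirected
                     if u in keep and v in keep])
    seen, queue = set(X), deque(X)
    while queue:                               # BFS in (G_An)^a avoiding Z
        v = queue.popleft()
        if v in Y:
            return False
        for w in adj[v]:
            if w not in seen and w not in Z:
                seen.add(w); queue.append(w)
    return True
```

On the gene--disease example of Figure \ref{Fig:gene}, the two genes are $m$-separated marginally, but conditioning on both diseases opens the chain between them, since the diseases are colliders.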
Let $\bar{G}_V = (V, \bar{E}_V)$ denote an undirected graph where $\bar{E}_V$ is a set of undirected edges. An undirected edge between
two vertices $u$ and $v$ is denoted by $(u, v)$. For a subset $A$ of $V$, let $\bar{G}_A= (A, \bar{E}_A)$ be the subgraph induced by $A$
and $\bar{E}_A = \{e\in \bar{E}_V | e\in A\times A\} = \bar{E}_V\cap (A\times A)$. An undirected graph is called complete if any pair of vertices is connected by an edge. For an undirected graph, we say that vertices $u$ and $v$ are separated by a set of vertices $Z$ if each path between $u$ and $v$ passes through $Z$. We say that two distinct vertex sets $X$ and $Y$ are separated by $Z$ if and
only if $Z$ separates every pair of vertices $u$ and $v$ for any $u\in X$ and $v\in Y$. We say that an undirected graph $\bar{G}_V$ is
an undirected independence graph (UIG) for CG $G$ if the fact that a set $Z$ separates $X$ and $Y$ in $\bar{G}_V$ implies that $Z$
$m$-separates $X$ and $Y$ in $G$. Note that the augmented graph derived from CG $G$, $(G)^a$, is an undirected independence graph for $G$. We say that $\bar{G}_V$ can be decomposed into subgraphs $\bar{G}_A$ and $\bar{G}_B$ if
\begin{itemize}
\item[(1)] $A\cup B=V$, and
\item[(2)] $C=A\cap B$ separates $V\setminus A$ and $V\setminus B$ in $\bar{G}_V$.
\end{itemize}
The above decomposition does not require that the separator $C$ be complete, which is required for weak decomposition defined in \citep{l}. In the next section, we show that a
problem of structural learning of CG can also be decomposed into problems for its decomposed subgraphs even if
the separator is not complete.
A triangulated (chordal) graph is an undirected graph in which every cycle of four or more vertices has a chord, that is, an edge that is not part of the cycle but connects two vertices of the cycle (see, for example, Figure \ref{Fig:mvr1}). For an
undirected graph $\bar{G}_V$ which is not triangulated, we can add extra (``fill-in'') edges to it so that it becomes a triangulated
graph, denoted by $\bar{G}_V^t$.
\begin{figure}[ht]
\centering
\includegraphics[scale=.4]{images/mvr1.png}
\caption{(a) An MVR CG $G$. (b) The augmented graph $G^a$, which is also a triangulated graph $G^t$.} \label{Fig:mvr1}
\end{figure}
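One concrete way to produce such fill-in edges is vertex elimination. The sketch below is our own (the minimum-degree heuristic and the function name are our choices; the text only requires \emph{some} triangulation): eliminating a vertex clique-connects its remaining neighbors, and the accumulated fill edges make the graph chordal.

```python
# Elimination-based fill-in (our own sketch): repeatedly remove a
# vertex of minimum current degree and connect its remaining
# neighbors; the union of original and fill edges is chordal.

def triangulate(vertices, edges):
    """Return the edge set of a chordal supergraph, as frozensets."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    fill, remaining = set(), set(vertices)
    while remaining:
        v = min(remaining, key=lambda x: len(adj[x] & remaining))
        nbrs = list(adj[v] & remaining)
        for i, a in enumerate(nbrs):           # clique-connect neighbors
            for b in nbrs[i + 1:]:
                if b not in adj[a]:
                    adj[a].add(b); adj[b].add(a)
                    fill.add(frozenset((a, b)))
        remaining.remove(v)
    return {frozenset(e) for e in edges} | fill
```

On a chordless four-cycle this adds exactly one chord, as in the example of Figure \ref{Fig:mvr1}.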
Let $X\!\perp\!\!\!\perp Y$ denote
the independence of $X$ and $Y$, and $X\!\perp\!\!\!\perp Y|Z$ (or $\langle X,Y | Z\rangle$) the conditional independence of $X$ and $Y$ given $Z$. In this paper, we assume that all independencies of a
probability distribution of variables in $V$ can be checked by $m$-separations of $G$, called the faithfulness assumption \citep{sgs}. The faithfulness assumption means that all independencies and conditional independencies among variables can be represented by $G$.
The global skeleton is the undirected graph obtained by dropping the directions of the edges of the CG. Note that the absence of an
edge $(u, v)$ implies that there is a variable subset $S$ of $V$ such that $u$ and $v$ are independent conditional on $S$, that
is, $u\!\perp\!\!\!\perp v|S$ for some $S\subseteq V\setminus \{u,v\}$ \citep{jv1}. Two MVR CGs over the same variable set are called Markov equivalent if they
induce the same conditional independence restrictions. Two MVR CGs are Markov equivalent if and only if they have the
same global skeleton and the same set of $v$-structures (unshielded colliders) \citep{ws}. An equivalence class of MVR CGs consists of all MVR CGs which
are Markov equivalent, and it is represented as a partially directed graph (i.e., a graph containing directed, undirected, and bidirected edges and no directed cycles) where the directed/bidirected edges represent edges that are common to every MVR CG in it, while the undirected edges represent that any legal orientation of them leads
to a Markov equivalent MVR CG. Therefore the goal of structural learning is to construct a partially directed graph to
represent the equivalence class. A local skeleton for a subset $A$ of variables is an undirected subgraph for $A$ in which
the absence of an edge $(u, v)$ implies that there is a subset $S$ of $A$ such that $u\!\perp\!\!\!\perp v|S$.
Now, we introduce the notion of $m$-separation trees, which is used to facilitate the representation of the decomposition. The concept is similar to the junction tree of cliques and the
independence tree introduced for DAGs as $d$-separation trees in \citep{xie}. Let $C = \{C_1, \dots, C_H \}$ be a collection of distinct variable sets such that for $h = 1,\dots ,H, C_h\subseteq V$.
Let $T$ be a tree where each node corresponds to a distinct variable set in $C$, to be displayed as an oval (see, for example, Figure \ref{Fig:tree1}). The term `node' is used for an $m$-separation tree to distinguish it from
the term `vertex' for a graph in general. An undirected edge $e = (C_i,C_j)$ connecting nodes $C_i$ and $C_j$ in $T$ is labeled with a separator $S = C_i\cap C_j$, which is displayed as a rectangle.
Removing an edge $e$ or, equivalently, removing a separator $S$ from $T$ splits $T$ into two subtrees
$T_1$ and $T_2$ with node sets $C_1$ and $C_2$ respectively. We use $V_i$ to denote the union of the
vertices contained in the nodes of the subtree $T_i$ for $i = 1,2$.
\begin{definition}\label{septree}
A tree $T$ with node set $C$ is said to be an $m$-separation tree for chain graph $G = (V,E)$ if
\begin{itemize}
\item $\cup_{C_i\in C}C_i=V$, and
\item for any separator $S$ in $T$ with $V_1$ and $V_2$ defined as above by removing $S$, we have $\langle V_1\setminus S,V_2\setminus S | S\rangle_G$.
\end{itemize}
\end{definition}
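Given an $m$-separation oracle for $G$, Definition \ref{septree} can be verified mechanically. The following Python sketch (a toy illustration under our own conventions; the oracle and the index-based tree encoding are hypothetical) checks both defining conditions:

```python
def subtree_vertices(nodes, edges, start, blocked_edge):
    """Union of the variables in the subtree reachable from `start`
    without crossing `blocked_edge`."""
    seen, stack = {start}, [start]
    while stack:
        i = stack.pop()
        for a, b in edges:
            if {a, b} == set(blocked_edge):
                continue
            for j in ((b,) if a == i else (a,) if b == i else ()):
                if j not in seen:
                    seen.add(j)
                    stack.append(j)
    return set().union(*(nodes[i] for i in seen))

def is_m_separation_tree(nodes, edges, V, m_separated):
    """Check Definition: node union covers V, and every separator
    S = C_i ∩ C_j m-separates V1 \\ S from V2 \\ S (oracle-based)."""
    if set().union(*nodes) != set(V):
        return False
    for (i, j) in edges:
        S = nodes[i] & nodes[j]
        V1 = subtree_vertices(nodes, edges, i, (i, j))
        V2 = subtree_vertices(nodes, edges, j, (i, j))
        if not m_separated(V1 - S, V2 - S, S):
            return False
    return True
```

For the chain $a\to b\to c$, the two-node tree $\{a,b\}-\{b,c\}$ with separator $\{b\}$ passes the check, since $a$ and $c$ are $m$-separated given $b$.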
\begin{figure}[ht]
\centering
\includegraphics[scale=.4]{images/tree1.png}
\caption{An $m$-separation tree.} \label{Fig:tree1}
\end{figure}
Notice that a separator is defined in terms of a tree whose nodes consist of variable sets, while
the $m$-separator is defined with respect to a chain graph. In general, the two concepts are not related, although for an $m$-separation tree every separator must correspond to an $m$-separator in the underlying MVR chain graph. The definition of $m$-separation trees for MVR chain graphs is similar to that of junction trees of cliques,
see \citep{cdls,l}. In fact, it is not difficult to see that a junction tree
of a chain graph $G$ is also an $m$-separation tree. However, as in \citep{mxg}, we point out two differences here: (a) an $m$-separation tree is defined via $m$-separation, and it does not require that every node be a clique or
that every separator be complete on the augmented graph; (b) junction trees are mostly used as inference engines, while our interest in $m$-separation trees is mainly derived from their power in facilitating the decomposition of structural learning.
A collection of variable sets $C = \{C_1, \dots, C_H \}$ is called a hypergraph on $V$ if each hyperedge $C_h$ is
a nonempty subset of variables and $\cup_{h=1}^HC_h=V$. A hypergraph is reduced if $C_i\not\subseteq C_j$ for $i\ne j$. In this paper only reduced hypergraphs are used, so we simply call them hypergraphs.
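Reducing a hypergraph amounts to discarding every hyperedge contained in another; a one-line Python sketch (the function name is our own) illustrates this:

```python
def reduce_hypergraph(C):
    """Keep only the inclusion-maximal hyperedges of the collection C."""
    sets = [frozenset(c) for c in C]
    # c < d is proper-subset comparison between frozensets
    return [c for c in sets if not any(c < d for d in sets)]
```

For example, $\{\{a,b\},\{b\},\{c,d\}\}$ reduces to $\{\{a,b\},\{c,d\}\}$ because $\{b\}\subseteq\{a,b\}$.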
\section{Construction of \textit{m}-Separation Trees}\label{construct-trees}
As proposed in \citep{xie}, one can construct a $d$-separation tree from observed data, from
domain or prior knowledge of conditional independence relations or from a collection of databases.
However, their arguments are not valid for constructing an $m$-separation tree from domain knowledge or from observed data patterns when latent common causes are present, as in the current setting. In this section, we first extend
Theorem 2 of \citep{xie}, showing that their method for constructing a separation tree
from data remains valid for MVR chain graphs. Then we investigate sufficient conditions for constructing $m$-separation trees from domain or prior knowledge of conditional independence relations or from a collection of databases.
\subsection{Constructing an \textit{m}-Separation Tree from Observed Data}
In several algorithms for structural learning of PGMs, the first step is to construct an undirected independence graph in which
the absence of an edge $(u, v)$ implies $u \perp\!\!\!\perp v | V\setminus\{u,v\}$. To construct such an undirected graph, we can start with a complete undirected graph, and then for each pair of variables $u$ and $v$, an undirected edge $(u, v)$ is removed if $u$ and
$v$ are independent conditional on the set of all other variables \citep{xie}. For normally distributed data, the undirected independence graph can be efficiently constructed by removing an edge $(u, v)$ if and only if the corresponding entry in the concentration matrix (inverse covariance matrix) is zero \citep[Proposition 5.2]{l}. For this purpose, performing a conditional independence test for each pair of random variables using the
partial correlation coefficient can be used. If the $p$-value of the test is smaller than the given threshold, then there will be an edge in the output graph. For discrete data, a test of conditional independence given a large number of discrete variables may
have extremely low power. To cope with this difficulty, a local discovery algorithm called
Max-Min Parents and Children (MMPC) \citep{tas} or the forward selection procedure described in \citep{ed} can be applied.
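The construction just described can be sketched as follows; here `ci_test` is a placeholder for whatever conditional independence test suits the data (e.g., a partial-correlation test in the Gaussian case), and the function name is our own:

```python
from itertools import combinations

def undirected_independence_graph(V, ci_test):
    """Start from the complete graph on V and delete the edge (u, v)
    whenever ci_test accepts u independent of v given V \\ {u, v}."""
    V = list(V)
    edges = set()
    for u, v in combinations(V, 2):
        if not ci_test(u, v, set(V) - {u, v}):
            edges.add(frozenset({u, v}))
    return edges
```

With an oracle for the chain $a\to b\to c$ (where only $a \perp\!\!\!\perp c \mid b$ holds), the construction returns exactly the skeleton $\{a{-}b, b{-}c\}$.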
An $m$-separation tree can be built by constructing a junction tree from an undirected independence graph. In fact, we generalize Theorem 2 of \citep{xie} as follows.
\begin{theorem}\label{thm2}
A junction tree constructed from an undirected independence graph for MVR CG $G$ is an $m$-separation tree for $G$.
\end{theorem}
An $m$-separation tree $T$ requires only that all $m$-separation properties encoded by $T$ hold for the MVR CG $G$; the converse is
not required. Thus it suffices to construct an undirected independence graph that may encode fewer conditional
independencies than the augmented graph; that is, the undirected independence graph may have extra edges
compared with the augmented graph. As \citep{xie} observe for $d$-separation in DAGs, if all nodes of an $m$-separation tree contain only a few variables, ``the null hypothesis of the absence of an undirected edge may be tested statistically at
a larger significance level.''
Since there are standard algorithms for constructing junction trees from UIGs \citep[Chapter 4, Section 4]{cdls}, the construction of separation trees reduces to the construction of
UIGs. In this sense, Theorem \ref{thm2} enables us to exploit various techniques for learning UIGs to serve
our purpose. More suggested methods for learning UIGs from data, in addition to the above mentioned techniques, can be found in \citep{mxg}.
\begin{example}
To construct an $m$-separation tree for MVR CG $G$ in Figure \ref{Fig:mvr1}(a), at first an undirected independence graph
is constructed by starting with a complete graph and removing an edge $(u, v)$ if $u \perp\!\!\!\perp v | V\setminus\{u,v\}$. An undirected graph
obtained in this way is the augmented graph of MVR CG $G$. In fact, we only need to construct an undirected independence
graph that may have extra edges compared with the augmented graph. Next, triangulate the undirected graph and finally obtain
the $m$-separation tree, as shown in Figures \ref{Fig:mvr1}(b) and \ref{Fig:tree1}, respectively.
\end{example}
\subsection{Constructing an \textit{m}-Separation Tree from Domain Knowledge or from Observed Data Patterns}\label{subsec2}
Algorithm 2 of \citep{xie} constructs a $d$-separation tree $T$ from domain knowledge or from observed
data patterns such that a correct skeleton can be constructed by combining subgraphs for nodes of $T$. In this subsection, we propose an approach for constructing an $m$-separation tree from domain knowledge or from
observed data patterns without conditional independence tests. Domain knowledge of variable dependencies can be represented as a collection of variable
sets $C = \{C_1,\dots , C_H \}$, in which variables contained in the same set may associate with each other directly but variables
contained in different sets associate with each other through other variables. This means that two variables that are not
contained in the same set are independent conditionally on all other variables. On the other hand, in an application study, observed data may have a collection of different observed patterns, $C = \{C_1,\dots , C_H \}$, where $C_h$ is the set of observed variables for the $h$th group of individuals. In both cases, the condition to make our algorithms correct for structural learning from a
collection $C$ is that $C$ must contain sufficient data such that parameters of the underlying MVR CG are estimable.
For a DAG, parameters are estimable if, for each variable $u$, there is an observed data pattern $C_h$ in $C$ that contains
both $u$ and its parent set. Thus a collection $C$ of observed patterns has sufficient data for correct structural learning
if there is a pattern $C_h$ in $C$ for each $u$ such that $C_h$ contains both $u$ and its parent set in the underlying DAG. Also, domain knowledge is legitimate if, for each variable $u$, there is a hyperedge $C_h$ in $C$ that contains both $u$ and its parent set \citep{xie}. However, these conditions are not valid in the case of MVR chain graphs. In fact, for MVR CGs domain knowledge is legitimate if for each connected component $\tau$, there is a hyperedge $C_h$ in $C$ that contains both $\tau$ and its parent set $pa_G(\tau)$. Also, a collection $C$ of observed patterns has sufficient data for correct structural learning
if there is a pattern $C_h$ in $C$ for each connected component $\tau$ such that $C_h$ contains both $\tau$ and its parent set $pa_G(\tau)$ in the underlying MVR CG.
\begin{algorithm}
\caption{Construct an $m$-separation tree from a hypergraph}\label{hypergraph}
\SetAlgoLined
\KwIn{a hypergraph $C = \{C_1, \dots, C_H \}$, where each hyperedge $C_h$ is a variable set such that for each connected component $\tau$, there is a hyperedge $C_h$ in $C$ that contains both $\tau$ and its parent set $pa_G(\tau)$.}
\KwOut{$T$, which is an $m$-separation tree for the hypergraph $C$.}
For each hyperedge $C_h$, construct a complete undirected graph $\bar{G}_h$ with the edge set $\bar{E}_h=\{(u,v)\mid u,v\in C_h,\ u\neq v\}$\;
Construct the entire undirected graph $\bar{G}_V=(V,\bar{E})$, where $\bar{E}=\bar{E}_1\cup\dots\cup \bar{E}_H$\;
Construct a junction tree $T$ by triangulating $\bar{G}_V$\;
\end{algorithm}
The correctness of Algorithm \ref{hypergraph} is proven in Appendix B. Note that Algorithm \ref{hypergraph} requires no conditional independence
tests to construct an $m$-separation tree. In this algorithm, the procedure proposed in \citep{bbhp} can be used to construct a minimal triangulated graph. Figure \ref{Fig:hypergraph} illustrates Algorithm \ref{hypergraph}.
\begin{figure}[ht]
\centering
\includegraphics[scale=.45]{images/alg2.png}
\caption{Construction of the $m$-separation tree. (a) An MVR CG. (b) Domain knowledge of associations. (c) The undirected graph and triangulation. (d) The $m$-separation tree $T$.} \label{Fig:hypergraph}
\end{figure}
Guaranteeing the presence of both $\tau$ and its parent set $pa(\tau)$ in at least one hyperedge, as required in Algorithm \ref{hypergraph}, is a strong requirement, which may prevent the use of domain knowledge as a practical source of information for constructing MVR chain graphs. In addition, we remark that answering the question ``how can one obtain this information?'' is beyond the scope of this paper. The two examples that follow show that restricting the hyperedge contents in two natural ways leads to errors.
The example illustrated in Figure \ref{Fig:counterex} shows that, if for each variable $u$ there is a hyperedge $C_h$ in $C$ that contains both $u$ and its parent set, we cannot guarantee the correctness of our algorithm. Note that vertices $a$ and $d$ are separated in the tree $T$ of Figure \ref{Fig:counterex} part (d) by removing vertex $b$, but $a$ and $d$ are not $m$-separated given $b$, as can be verified using Figure \ref{Fig:counterex} part (a).
\begin{figure}[ht]
\centering
\includegraphics[scale=.5]{images/counterex.png}
\caption{Insufficiency of having a hypergraph that contains both $u$ and its parent set for every $u\in V$. (a) An MVR CG. (b) Domain knowledge of associations. (c) The undirected graph constructed by union of complete graphs corresponding to each hyperedge, which is also a triangulated graph. (d) The junction tree $T$. (e) Local skeleton for every node of $T$. (f) The global skeleton and all $v$-structures.} \label{Fig:counterex}
\end{figure}
The example illustrated in Figure \ref{Fig:jvmethod} shows that, if for each variable $u$ there is a hyperedge $C_h$ in $C$ that contains both $u$ and its boundary set, Algorithm \ref{hypergraph} does not necessarily give an $m$-separation tree because, for example, $S=\{a,b\}$ separates $c$ and $d$ in tree $T$ of Figure \ref{Fig:jvmethod} part (d), but $S$ does not $m$-separate $c$ and $d$ in the MVR CG $G$ in Figure \ref{Fig:jvmethod} part (a).
\begin{figure}[ht]
\centering
\includegraphics[scale=.5]{images/jvmethod.png}
\caption{Insufficiency of having a hypergraph that contains both $u$ and its boundary set for every $u\in V$. (a) An MVR CG. (b) Domain knowledge of associations. (c) The undirected graph constructed by union of complete graphs corresponding to each hyperedge, which is also a triangulated graph. (d) The junction tree $T$, which is not an $m$-separation tree.} \label{Fig:jvmethod}
\end{figure}
\section{Decomposition of Structural Learning}\label{main-alg}
Applying the following theorem to structural learning, we can split the problem of searching for $m$-separators and building the skeleton of a CG into small subproblems, one for each node of an $m$-separation tree $T$.
\begin{theorem}\label{thm1}
Let $T$ be an $m$-separation tree for CG $G$. Vertices $u$ and $v$ are $m$-separated by $S\subseteq V$ in $G$ if and
only if (i) $u$ and $v$ are not contained together in any node $C$ of $T$ or (ii) there exists a node $C$ that contains both $u$
and $v$ such that a subset $S'$ of $C$ $m$-separates $u$ and $v$.
\end{theorem}
According to Theorem \ref{thm1}, the problem of searching for an $m$-separator $S$ of $u$ and $v$ among all possible subsets of $V$ is
localized to the subsets of the nodes of an $m$-separation tree that contain $u$ and $v$. For a given $m$-separation tree $T$
with node set $C = \{C_1,\dots , C_H \}$, we can recover the skeleton and all $v$-structures of a CG as follows. First
we construct a local skeleton for every node $C_h$ of $T$ by starting with a complete undirected
subgraph and removing an undirected edge $(u, v)$ if there is a subset $S$ of $C_h$ such that $u$ and $v$ are independent
conditional on $S$. Next, to construct the global skeleton, we combine all these local skeletons and remove
edges that are present in some local skeletons but absent in others. We then determine a $v$-structure
whenever two non-adjacent vertices $u$ and $v$ have a common neighbor in the global skeleton that is not contained
in the $m$-separator of $u$ and $v$. Finally, we orient further undirected edges whenever the orientation creates neither a
partially directed cycle nor a new $v$-structure (see, for example, Figure \ref{Fig:alg1}). This process is formally described in the following algorithm:
\begin{algorithm}[ht]
\caption{A recovery algorithm for MVR chain graphs}\label{alg1}
\SetAlgoLined
\KwIn{a probability distribution $p$ faithful to an
unknown MVR CG $G$.}
\KwOut{the pattern of MVR CG $G$.}
Construct an $m$-separation tree $T$ with a node set $C = \{C_1, \dots, C_H \}$ as discussed in Section \ref{construct-trees}\;
Set $S=\emptyset$\;
\For{$h\gets 1$ \KwTo $H$}{
Start from a complete undirected graph $\bar{G}_h$ with vertex set $C_h$\;
\For{\textrm{each vertex pair $\{u,v\}\subseteq C_h$ }}{\If{$\exists S_{uv}\subseteq C_h \textrm{ such that } u \perp\!\!\!\perp v | S_{uv}$}{
Delete the edge $(u,v)$ in $\bar{G}_h$\;
Add $S_{uv}$ to $S$;
}
}
}
Initialize the edge set $\bar{E}_V$ of $\bar{G}_V$ as the union of all edge sets of $\bar{G}_h, h=1,\dots, H$\;
\For{\textrm{each vertex pair $\{u,v\}$ contained in more than one tree node such that $(u,v)\in \bar{E}_V$}}{
\If{$\exists C_h \textrm{ such that } \{u,v\}\subseteq C_h \textrm{ and } (u,v)\not\in \bar{E}_h$}{
Delete the edge $(u,v)$ in $\bar{G}_V$\;
}
}
\For{\textrm{each $m$-separator $S_{uv}$ in the list $S$}}{\If{\textrm{$u\mathrel{\circ\!-} w\mathrel{-\!\circ} v$ appears in the global skeleton
and $w$ is not in $S_{uv}$}}{
\tcc{$u\mathrel{\circ\!-} w$ means $u\gets w$ or $u-w$. Also, $w\mathrel{-\!\circ} v$ means $w\to v$ or $w-v.$}
Determine a $v$-structure $u\mathrel{\circ\!\!\!\rightarrow} w\mathrel{\leftarrow\!\!\!\circ} v$\;
}
}
\end{algorithm}
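The localized separator search at the heart of Algorithm \ref{alg1}, justified by Theorem \ref{thm1}, can be sketched as follows. Here `ci_test` is again a placeholder for the statistical test, and the string return value for case (i) is our own convention:

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def find_local_separator(u, v, tree_nodes, ci_test):
    """Search for an m-separator of u and v only within the tree nodes
    containing both; by the theorem, this localized search is exhaustive."""
    candidates = [C for C in tree_nodes if u in C and v in C]
    if not candidates:
        return "separated"          # case (i): u, v share no tree node
    for C in candidates:
        for S in powerset(set(C) - {u, v}):
            if ci_test(u, v, set(S)):
                return set(S)       # case (ii): separator found locally
    return None                     # no separator: u and v stay adjacent
```

With the $m$-separation tree $\{a,b\}-\{b,c\}$ for the chain $a\to b\to c$, the search reports $a$ and $c$ as separated without testing (case (i)), returns no separator for the adjacent pair $a,b$, and finds $\{b\}$ when both endpoints share a node.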
\begin{figure}[ht]
\centering
\includegraphics[scale=.4]{images/alg1b.png}
\caption{(a) Local skeletons for every node of the $m$-separation tree in Figure \ref{Fig:tree1}. (b) The global skeleton and all $v$-structures.} \label{Fig:alg1}
\end{figure}
The following algorithm returns an MVR chain graph that contains exactly the minimum set
of bidirected edges for its Markov equivalence
class. For the correctness of lines 2-7 in Algorithm \ref{alg2}, see \citep{sp}.
\begin{algorithm}
\caption{A recovery algorithm for MVR chain graphs with the minimum set of bidirected edges for the equivalence
class}\label{alg2}
\SetAlgoLined
\KwIn{a probability distribution $p$ faithful to an
unknown MVR CG $G$.}
\KwOut{an MVR CG $G'$ s.t.
$G$ and $G'$ are Markov equivalent and $G'$ has exactly the minimum set of bidirected edges for its equivalence
class.}
Call Algorithm \ref{alg1} to construct $G'$, which is the equivalence class of MVR CGs for $G$\;
Apply rules 1-3 in Figure \ref{Fig:rules} while possible\;
\tcc{After this line, the learned graph is the \textit{essential graph} of MVR CG $G$, i.e., it
has the same skeleton as $G$ and contains all and only the arrowheads that
are shared by all MVR CGs in the Markov equivalence class of $G$ \citep{essentialmvrcgs}.}
Let $G'_u$ be the subgraph of $G'$ containing only the
nodes and the undirected edges in $G'$\;
Let $T$ be the junction tree of $G'_u$\;
\tcc{If $G'_u$ is disconnected, the cliques belonging to different connected components can be linked with empty separators, as described in \cite[Theorem 4.8]{Golumbic}.}
Order the cliques $C_1,\dots, C_n$ of $G'_u$ s.t. $C_1$ is the root of $T$ and if $C_i$ is closer to the root than $C_j$ in $T$ then $C_i < C_j$\;
Order the nodes such that if $A\in C_i$, $B\in C_j$, and $C_i < C_j$ then $A < B$\;
Orient the undirected edges in $G'$ according to the ordering obtained in line 6.
\end{algorithm}
\begin{figure}[ht]
\centering
\includegraphics[scale=.4]{images/rules.png}
\caption{The Rules \citep{sp}} \label{Fig:rules}
\end{figure}
According to Theorem \ref{thm1}, we can prove that the global skeleton and all $v$-structures obtained by applying the decomposition in Algorithm \ref{alg1} are correct, that is, they are the same as those obtained from the joint distribution of $V$; see
Appendix A for the details of the proof. Note that separators in an $m$-separation tree may not be complete in the augmented graph.
Thus the decomposition is weaker than the decomposition usually defined for parameter estimation \citep{cdls,l}.
\section{Complexity Analysis and Advantages}\label{complexity}
In this section, we start by comparing our algorithm with the main algorithm in \citep{xie}, which is designed
specifically for DAG structural learning. We choose the DAG-specific algorithm so that both algorithms can take the same separation tree
as input and hence are directly comparable.
In a DAG, all chain components are singletons. Therefore, when the underlying graph is a DAG, requiring a hyperedge that contains both $\tau$ and its parent set for every chain component $\tau$ is equivalent to requiring a hyperedge that contains both $u$ and its parent set for every $u\in V$. Consequently, our algorithm has the same effect and the same complexity as the main algorithm in \citep{xie}.
The same advantages mentioned by \citep{xie} for their BN structural learning algorithm hold for our algorithm when applied to MVR CGs. For the reader's convenience, we list them here.
First, by using the $m$-separation tree, independence tests are performed only conditionally on smaller sets
contained in a node of the $m$-separation tree rather than on the full set of all other variables. Thus our algorithm has
higher power for statistical tests.
Second, the computational complexity can be reduced. This complexity analysis focuses only on the number of
conditional independence tests for constructing the equivalence class. Decomposition of graphs is a computationally
simple task compared to the task of testing conditional independence for a large number of triples of sets of variables. The triangulation of an undirected graph is used in our
algorithms to construct an $m$-separation tree from an undirected independence graph. Although the problem of optimally
triangulating an undirected graph is NP-hard, sub-optimal triangulation methods \citep{bbhp} may be used provided
that the nodes of the obtained tree are small enough to test conditional independencies. Two of the best known
algorithms are lexicographic search and maximum cardinality search, and their complexities are
$O(|V||E|)$ and $O(|V|+ |E|)$, respectively \citep{bbhp}. Thus in our algorithms, the conditional independence tests dominate the algorithmic
complexity.
The complexity of Algorithm \ref{alg1} is $O(Hm^22^m)$, as claimed in \citep[Section 6]{xie}, where $H$ is the number of hyperedges (usually $H \ll |V|$) and $m=\max_h|C_h|$, where $|C_h|$ denotes the number of variables in $C_h$ ($m$ is usually much smaller than $|V|$).
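To make the $O(Hm^2 2^m)$ bound concrete, the back-of-envelope comparison below (our own illustration, with arbitrary example numbers) contrasts the localized count of conditional independence tests with the $O(|V|^2 2^{|V|})$ cost of searching for separators among all subsets of $V$:

```python
def localized_tests_bound(node_sizes):
    """Upper bound on CI tests with an m-separation tree: for each node,
    every vertex pair is tested against every subset of that node."""
    return sum(m * m * 2 ** m for m in node_sizes)

def global_tests_bound(p):
    """Corresponding bound without decomposition, over all of V."""
    return p * p * 2 ** p
```

With $p = 30$ variables split into ten tree nodes of six variables each, the localized bound is $10\cdot 36\cdot 64 = 23{,}040$ tests, versus roughly $9.7\times 10^{11}$ without decomposition.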
\section{Evaluation}\label{evaluation}
In this section, we evaluate the performance of our algorithms in various setups
using simulated and synthetic data sets. We first compare the performance of our algorithm with the PC-like
learning algorithm \citep{sp} by running them
learning algorithm \citep{sp} by running them
on randomly generated MVR chain graphs. (A brief description of the PC-like algorithm is provided at the beginning of Section \ref{discussion}.) We then compare our method with the PC-like algorithm on different discrete Bayesian networks such as \href{http://www.bnlearn.com/bnrepository/}{ASIA, INSURANCE, ALARM, and HAILFINDER} that have
been widely used in evaluating the performance of structural learning algorithms. Empirical simulations show that our algorithm achieves
competitive results with the PC-like learning algorithm; in particular, in the Gaussian case the decomposition-based algorithm outperforms (except in running time) the PC-like algorithm.
Algorithms \ref{alg1}, \ref{alg2}, and the PC-like algorithm have been implemented in the R language. All the results reported here are
based on our R implementation \citep{jv3}.
\subsection{Performance Evaluation on Random MVR Chain Graphs (Gaussian case)}
To investigate the performance of the decomposition-based learning method, we use the same approach that \citep{mxg} used in
evaluating the performance of the LCD algorithm on LWF chain graphs. We run our algorithms and the PC-like algorithm
on randomly generated MVR chain graphs and then we compare the results and report summary error measures in all cases.
\subsubsection{Data Generation Procedure}
First we explain the way in which the random MVR chain graphs and random samples are generated.
Given a vertex set $V$, let $p = |V|$ and let $N$ denote the average degree, i.e., the average number of incident edges (bidirected, incoming, or outgoing) per vertex. We generate a random MVR chain graph on $V$ as
follows:
\begin{itemize}
\item Choose one element, say $k$, of the vector $c=(0.1, 0.2, 0.3, 0.4, 0.5)$ randomly\footnote{In the case of $p=40,50$ we use $c=(0.1,0.2)$.}.
\item Use the randDAG function from the \href{https://cran.r-project.org/web/packages/pcalg/index.html}{pcalg} R package and generate an un-weighted random Erdos-Renyi graph, which is a DAG with $p+(k\times p)$ nodes and $N$ expected number of neighbours per node.
\item Use the AG function from the \href{https://cran.r-project.org/web/packages/ggm/index.html}{ggm} R package and marginalize out $k\times p$ nodes to obtain a random MVR chain graph with $p$ nodes and $N$ expected number of neighbours per node. If the obtained graph is not an MVR chain graph, repeat this procedure until an MVR CG is obtained.
\end{itemize}
The rnorm.cg function from the \href{http://www2.uaem.mx/r-mirror/web/packages/lcd/index.html}{lcd} R package was used to generate the desired number of normal random samples from the canonical DAG \citep{rs} corresponding to the MVR chain graph obtained in the previous step. Notice that faithfulness is not necessarily guaranteed by the current sampling procedure \citep{mxg}.
\subsubsection{Experimental Results for Random MVR Chain Graphs (Gaussian case)}
We evaluate the performance of the decomposition-based and PC-like algorithms in terms of five measurements: (a) the true positive
rate (TPR)\footnote{Also known as sensitivity, recall, and hit rate.}, (b) the false positive rate (FPR)\footnote{Also known as fall-out.}, (c) accuracy (ACC) for the skeleton, (d) the structural Hamming distance (SHD)\footnote{This is the metric described in \citep{Tsamardinos2006} to compare the
structure of the learned and the original graphs.}, and (e) run-time for the pattern recovery algorithms. In short, $TPR=\frac{\textrm{true positive } (TP)}{\textrm{the number of real positive cases in the data } (Pos)}$ is the ratio of the number of correctly identified edges over total number of edges, $FPR=\frac{\textrm{false positive }(FP)}{\textrm{the number of real negative cases in the data }(Neg)}$ is the ratio of the number of incorrectly identified edges over total number of gaps, $ACC=\frac{\textrm{true positive }(TP) +\textrm{ true negative }(TN)}{Pos+Neg}$ and
$SHD$ is the number of legitimate operations needed to change the current pattern to the true one,
where the legitimate operations are: (a) add or delete an edge, and (b) insert, delete, or reverse an edge
orientation. In principle, a large TPR and ACC together with a small FPR and SHD indicate good performance.
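The skeleton error measures defined above can be computed directly from edge sets; a minimal Python sketch (function and argument names are our own, not part of the R implementation) is:

```python
from itertools import combinations

def skeleton_metrics(V, true_edges, learned_edges):
    """TPR, FPR, and ACC of a learned skeleton against the true one."""
    pairs = {frozenset(p) for p in combinations(V, 2)}
    pos = {frozenset(e) for e in true_edges}      # real positives: edges
    neg = pairs - pos                             # real negatives: gaps
    learned = {frozenset(e) for e in learned_edges}
    tp = len(learned & pos)                       # correctly identified edges
    fp = len(learned & neg)                       # incorrectly identified edges
    tn = len(neg - learned)                       # correctly identified gaps
    return tp / len(pos), fp / len(neg), (tp + tn) / len(pairs)
```

For instance, on four vertices with true skeleton $\{a{-}b, b{-}c, c{-}d\}$ and learned skeleton $\{a{-}b, b{-}c, b{-}d\}$, the measures are $TPR = 2/3$, $FPR = 1/3$, and $ACC = 2/3$.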
In our simulation, we change three parameters $p$ (the number of vertices), $n$ (sample size) and
$N$ (expected number of adjacent vertices) as follows:
\begin{itemize}
\item $p\in\{10, 20, 30, 40, 50\}$,
\item $n\in\{300, 1000, 3000, 10000\}$, and
\item $N\in\{2,3,5,8,10\}$.
\end{itemize}
For each $(p,N)$ combination, we first generate 25 random MVR chain graphs. We then generate a
random Gaussian distribution based on each corresponding canonical DAG and draw an independent and identically distributed
(i.i.d.) sample of size $n$ from this distribution for each possible $n$; finally, we remove those columns (if any) that correspond to the hidden variables. For each sample, three different
significance levels $\alpha=0.05/0.01/0.005$ are used to perform the hypothesis tests. For the decomposition-based algorithm we consider two different versions: the first version uses Algorithm \ref{alg1} and the three rules in Algorithm \ref{alg2}, while the second version uses both Algorithms \ref{alg1} and \ref{alg2}. Since the learned graph of the first version may contain some undirected edges, we call it the \textit{essential recovery algorithm}. Removing all directed and bidirected edges from the learned graph results in a chordal graph \citep{sp}. Furthermore, the learned graph has exactly the (unique) minimum set of bidirected edges for its Markov equivalence class \citep{sp}. The second version of the decomposition-based algorithm returns an MVR chain graph that has exactly the
minimum set of bidirected edges for its equivalence
class. A similar approach is used for the PC-like algorithm. We then
compare the results to assess the performance of the decomposition-based algorithm against the PC-like algorithm. The complete plots of the error measures and running times can be found in the \href{https://www.dropbox.com/sh/iynnlwyu8il7m3v/AACk8SyIEn7s-W9NRlLnz0DDa?dl=0}{supplementary document} \citep{jv3}.
From the plots, we infer that: (a) both
algorithms yield better results on sparse graphs $(N = 2,3)$ than on dense graphs $(N = 5,8,10)$, for example see Figures \ref{fig:tprfpracc1} and \ref{fig:shd1}; (b) for both algorithms, typically the TPR and ACC increase with sample size, for example see Figure \ref{fig:tprfpracc1}; (c) for both algorithms, typically the SHD decreases with sample size for sparse graphs $(N = 2,3)$. For $N=5$ the SHD decreases with sample size for the decomposition-based algorithm while the SHD has no clear dependence on the sample size for the PC-like algorithm in this case. Typically, for the PC-like algorithm the SHD increases with sample size for dense graphs $(N = 8,10)$ while the SHD has no clear dependence on the sample size for the decomposition-based algorithm in these cases, for example see Figure \ref{fig:shd1}; (d) a large significance level $(\alpha=0.05)$ typically yields large
TPR, FPR, and SHD, for example see Figures \ref{fig:tprfpracc1} and \ref{fig:shd1}; (e) in almost all cases, the performance of the decomposition-based algorithm, based on all error measures (TPR, FPR, ACC, and SHD), is better than that of the PC-like algorithm, for example see Figures \ref{fig:tprfpracc1} and \ref{fig:shd1}; (f) in most cases, error measures based on $\alpha=0.01$ and $\alpha=0.005$ are very close, for example see Figures \ref{fig:tprfpracc1} and \ref{fig:shd1}. Generally, our empirical results suggest that in order to obtain a better performance, we can choose a small value (say $\alpha=0.005$ or 0.01) for
the significance level of individual tests along with large sample (say $n=3000$ or 10000). However, the optimal value for a desired overall error rate may depend on the sample size, significance level, and the sparsity of the underlying graph.
\begin{figure}
\centering
\includegraphics[scale=.25,page=9]{images/tpr_lcd.pdf}
\includegraphics[scale=.25,page=9]{images/fpr_lcd.pdf}
\includegraphics[scale=.25,page=9]{images/acc_lcd.pdf}
\includegraphics[scale=.25,page=9]{images/tpr_pc.pdf}
\includegraphics[scale=.25,page=9]{images/fpr_pc.pdf}
\includegraphics[scale=.25,page=9]{images/acc_pc.pdf}
\includegraphics[scale=.25,page=12]{images/tpr_lcd.pdf}
\includegraphics[scale=.25,page=12]{images/fpr_lcd.pdf}
\includegraphics[scale=.25,page=12]{images/acc_lcd.pdf}
\includegraphics[scale=.25,page=12]{images/tpr_pc.pdf}
\includegraphics[scale=.25,page=12]{images/fpr_pc.pdf}
\includegraphics[scale=.25,page=12]{images/acc_pc.pdf}
\caption{Error measures of the decomposition-based and PC-like algorithms for randomly generated Gaussian chain graph models:
average over 25 repetitions with 30 variables. The top two rows correspond to $N = 2$ and the bottom two rows to $N = 8$. The three columns give the three error measures TPR, FPR, and ACC, respectively. In each plot, the solid (blue)/dashed (green)/dotted (red) lines correspond to significance
setting respectively. In each plot, the solid (blue)/dashed (green)/dotted (red) lines correspond to significance
levels $\alpha=0.05/0.01/0.005$.}
\label{fig:tprfpracc1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.21,page=9]{images/shd_lcd.pdf}
\includegraphics[scale=.21,page=9]{images/shd_pc.pdf}
\includegraphics[scale=.21,page=9]{images/min_shd_lcd.pdf}
\includegraphics[scale=.21,page=9]{images/min_shd_pc.pdf}
\includegraphics[scale=.21,page=11]{images/shd_lcd.pdf}
\includegraphics[scale=.21,page=11]{images/shd_pc.pdf}
\includegraphics[scale=.21,page=11]{images/min_shd_lcd.pdf}
\includegraphics[scale=.21,page=11]{images/min_shd_pc.pdf}
\includegraphics[scale=.21,page=12]{images/shd_lcd.pdf}
\includegraphics[scale=.21,page=12]{images/shd_pc.pdf}
\includegraphics[scale=.21,page=12]{images/min_shd_lcd.pdf}
\includegraphics[scale=.21,page=12]{images/min_shd_pc.pdf}
\caption{Error measure SHD of the decomposition-based and PC-like algorithms for randomly generated Gaussian chain graph models:
average over 25 repetitions with 30 variables. The first row corresponds to $N = 2$, the second to $N=5$, and the third to $N=8$. The first two columns correspond to the essential recovery and the last two columns to the minimum bidirected recovery. In each plot, the solid (blue)/dashed (green)/dotted (red) lines correspond to significance
levels $\alpha=0.05/0.01/0.005$.}
\label{fig:shd1}
\end{figure}
Considering average running times versus sample sizes (see, for example, Figure \ref{fig:time1}), it can be
seen that: (a) the average run time increases with sample size; (b) the average run times based on $\alpha=0.01$ and $\alpha=0.005$ are very close and in all cases are better than for $\alpha=0.05$, while
choosing $\alpha=0.005$ yields a consistently (albeit slightly) lower average run time across all the settings in
the current simulation; (c) generally, the average run time for the PC-like algorithm is better than that for the decomposition-based algorithm. One possible justification is related to the details of the implementation. The PC algorithm implementation in the pcalg R package is very well optimized, while we have not concentrated on optimizing our implementation of the LCD algorithm; therefore the comparison on run time may be unfair to the new algorithm. For future work, one may consider both optimization of the LCD implementation and instrumentation of the code to allow counting characteristic operations and therefore reducing the dependence of run-time comparison on program optimization. The simulations were run on an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz. An R language package that implements our algorithms is available in the \href{https://www.dropbox.com/sh/iynnlwyu8il7m3v/AACk8SyIEn7s-W9NRlLnz0DDa?dl=0}{supplementary document} \citep{jv3}.
\begin{figure}
\centering
\includegraphics[scale=.21,page=9]{images/time_lcd.pdf}
\includegraphics[scale=.21,page=9]{images/time_pc.pdf}
\includegraphics[scale=.21,page=9]{images/min_time_lcd.pdf}
\includegraphics[scale=.21,page=9]{images/min_time_pc.pdf}
\caption{Running times of the decomposition-based and PC-like algorithms for randomly generated Gaussian chain graph models:
averages over 25 repetitions with 30 variables and N = 2. The first two columns correspond to the essential recovery algorithm, while the last two columns correspond to the minimum bidirected recovery. In each plot, the solid (blue)/dashed (green)/dotted (red) lines correspond to significance
levels $\alpha=0.05/0.01/0.005$.}
\label{fig:time1}
\end{figure}
It is worth noting that, since our implementation of the decomposition-based algorithms is based on the LCD R package, the normal random samples generated from a given MVR chain graph are not guaranteed to be faithful to it. So, one can expect better performance if only faithful probability distributions are considered in the experiments. Also, the LCD R package uses the ${\chi}^2$ test, which is an asymptotic test for $G^2$ \citep{mxg}. Again, one can expect better results if the asymptotic test used in the LCD R package is replaced with an exact test. However, there is a trade-off between accuracy and computational time \citep{mxg}.
\subsection{Performance on Discrete Bayesian Networks}
Bayesian networks are special cases of MVR chain graphs. It is of
interest to see whether the decomposition-based algorithms still work well when the data are actually generated
from a Bayesian network. For this purpose, in this subsection, we perform simulation studies for four well-known Bayesian networks from \href{http://www.bnlearn.com/bnrepository/}{Bayesian Network Repository} (Figures \ref{fig:asia}, \ref{fig:insurance}, \ref{fig:alarm}, and \ref{fig:hailfinder}):
\begin{itemize}
\item ASIA \citep{asia}: with 8 nodes, 8 edges, and 18 parameters, it describes the diagnosis of a patient at a chest clinic who may have just come back from a trip to Asia and may be showing dyspnea. Standard learning algorithms are not able to recover the true structure of the network because of the presence of a functional node (\emph{either}, representing a logical or)\footnote{\href{https://cran.r-project.org/web/packages/bnlearn/bnlearn.pdf}{Package 'bnlearn'}}.
\item INSURANCE \citep{insurance}: with 27 nodes, 52 edges, and 984 parameters, it evaluates car insurance risks.
\item ALARM \citep{alarm}: with 37 nodes, 46 edges and 509 parameters, it was designed by medical experts to provide an alarm message system for intensive care unit patients based on the output of a number of vital signs monitoring devices.
\item HAILFINDER \citep{Hailfinder}: with 56 nodes, 66 edges, and 2656 parameters, it was designed to forecast severe summer hail in northeastern Colorado.
\end{itemize}
We compare the performance of our algorithms against the PC-like algorithm on these Bayesian networks for three different significance levels $(\alpha=0.05/0.01/0.005)$.
The results of all learning methods are summarized in Tables \ref{asia}, \ref{insurance}, \ref{alarm}, and \ref{hailfinder}.
For the decomposition-based methods, all three error measures (TPR, FPR, and SHD) are similar to those of the PC-like algorithms, but the results indicate that the decomposition-based method outperforms the PC-like algorithms as the size of the Bayesian network becomes larger, especially in terms of TPR and SHD.
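For concreteness, the error measures used throughout the comparison can be computed from the adjacency structure of the learned and true graphs roughly as follows. This is a simplified sketch that compares skeletons only and ignores edge marks (the full SHD also counts wrong orientations); the function and variable names are ours:

```python
def skeleton_metrics(true_edges, learned_edges, n_nodes):
    """Compare two skeletons, each given as a set of vertex pairs."""
    true_edges = {frozenset(e) for e in true_edges}
    learned_edges = {frozenset(e) for e in learned_edges}
    all_pairs = n_nodes * (n_nodes - 1) // 2
    tp = len(true_edges & learned_edges)   # correctly recovered edges
    fp = len(learned_edges - true_edges)   # extra edges
    fn = len(true_edges - learned_edges)   # missing edges
    tn = all_pairs - tp - fp - fn          # correctly absent edges
    tpr = tp / len(true_edges) if true_edges else 1.0
    fpr = fp / (tn + fp) if (tn + fp) else 0.0
    acc = (tp + tn) / all_pairs
    shd = fp + fn  # skeleton-level SHD only
    return tpr, fpr, acc, shd
```

For example, a learned skeleton with one extra and one missing edge out of two true edges over three nodes yields TPR 0.5 and SHD 2.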
\section{Discussion and Conclusion}\label{discussion}
In this paper, we presented a computationally feasible algorithm for learning the structure of MVR chain graphs via decomposition. We compared the performance of our algorithm with that of the PC-like algorithm proposed by \citep{sp}, in the Gaussian and discrete cases. The PC-like algorithm is a constraint-based algorithm that learns the structure of the underlying MVR chain graph in four steps: (a) determining the skeleton: the resulting undirected graph in this phase contains an undirected edge $u-v$ iff there is no set $S\subseteq V\setminus\{u,v\}$ such that $u\!\perp\!\!\!\perp v|S$; (b) determining the v-structures (unshielded colliders); (c) orienting some of the undirected/directed edges into directed/bidirected edges according to a set of rules applied iteratively; (d) transforming the graph resulting from the previous step into an MVR CG. The essential graph obtained after step (c) contains all directed and bidirected edges that are present in every MVR CG of the same Markov equivalence class. The decomposition-based algorithm is also a constraint-based algorithm; it follows a divide-and-conquer approach and consists of four steps: (a) determining the skeleton by a divide-and-conquer approach; (b) determining the v-structures (unshielded colliders) with a localized search for $m$-separators; and then continuing with steps (c) and (d) exactly as in the PC-like algorithm. The correctness of both algorithms relies on the assumption that the probability distribution $p$ is faithful
to some MVR CG.
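Step (a) of the PC-like algorithm amounts to an exhaustive search for separating sets. The following is an illustrative, unoptimized sketch (not the implementation used in the experiments); the function name `learn_skeleton` and the oracle `indep(u, v, S)`, standing in for whatever conditional-independence test is used, are ours:

```python
from itertools import combinations

def learn_skeleton(nodes, indep):
    """Return the skeleton edges and the separating sets found.
    indep(u, v, S) must return True iff u is independent of v given S."""
    edges = {frozenset({u, v}) for u, v in combinations(nodes, 2)}
    sepset = {}
    for u, v in combinations(nodes, 2):
        others = [w for w in nodes if w not in (u, v)]
        found = False
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                if indep(u, v, set(S)):
                    # keep edge u-v only if NO separating set exists
                    edges.discard(frozenset({u, v}))
                    sepset[frozenset({u, v})] = set(S)
                    found = True
                    break
            if found:
                break
    return edges, sepset
```

With an oracle for the chain $0\to 1\to 2$, the edge between 0 and 2 is removed with separating set $\{1\}$, while the two true edges survive.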
As with the PC-like algorithm, unless the probability distribution $p$ of the data is faithful to some MVR CG, the learned CG cannot be guaranteed to factorize $p$ properly. Empirical simulations in the Gaussian case show that both algorithms yield good results when the underlying graph is sparse. The decomposition-based algorithm achieves
results competitive with the PC-like learning algorithm in both the Gaussian and discrete cases.
In fact, the decomposition-based method usually outperforms the PC-like algorithm in all four error measures, i.e., TPR, FPR, ACC, and SHD.
Such simulation results confirm that our method is reliable both when latent variables are present (and the underlying graph is an MVR CG) and when there are no such variables (and the underlying graph is a DAG); it only fails when selection bias variables are present. Our algorithm thus allows relaxing half of the causal sufficiency assumption, because only selection bias needs to be represented explicitly. Since our implementation of the decomposition-based algorithm is based on the LCD R package, with a fixed
number of samples one can expect better performance if the asymptotic test used in the LCD R package is replaced with an exact test. However, there is a trade-off between accuracy and computational time. Also, one can expect better results if only faithful probability distributions are considered in the experiments.
The natural continuation of the work presented here would be to develop a learning algorithm with weaker assumptions than the one presented. This could for example be a learning
algorithm that only assumes that the probability distribution satisfies the composition property. It should be mentioned that \citep{psn} developed an algorithm for learning LWF CGs under the composition property. However, \citep{Addendum} proved that the same technique cannot be used for MVR chain graphs.
We believe that our approach is extendable to the structural learning of AMP chain graphs \citep{amp}. So, the natural continuation of the work presented here would be to develop a learning algorithm via decomposition for AMP chain graphs under the faithfulness assumption.
\begin{table}\centering
\begin{tabular}{c|c|c|c|c}
& TPR & FPR&ACC& SHD\\
\midrule
& 0.625& 0.2& 0.75& 9\\
Decomposition-Based essential recovery algorithm & 0.625& 0.2& 0.75& 9\\
&0.625& 0.2& 0.75& 9\\
\midrule
&0.625& 0& 0.893& 6\\
PC-Like essential recovery algorithm &0.625& 0& 0.893& 6 \\
&0.625& 0& 0.893& 6\\
\midrule
&0.625& 0.2& 0.75& 8\\
Decomposition-Based Algorithm with Minimum bidirected Edges & 0.625& 0.2& 0.75& 7 \\
&0.625& 0.2& 0.75& 8 \\
\midrule
&0.625& 0& 0.893& 4\\
PC-Like Algorithm with Minimum bidirected Edges &0.625& 0& 0.893& 4\\
&0.625& 0& 0.893& 4\\
\bottomrule
\end{tabular}
\caption{Results for discrete samples from the ASIA network. The three rows of each block correspond to significance
levels $\alpha=0.05/0.01/0.005$, respectively.}\label{asia}
\end{table}
\begin{table}\centering
\begin{tabular}{c|c|c|c|c}
& TPR & FPR&ACC& SHD\\
\midrule
&0.635& 0.0167& 0.932& 31\\
Decomposition-Based essential recovery algorithm & 0.635& 0.020& 0.926& 32\\
&0.654& 0.0134& 0.937& 28\\
\midrule
&0.558& 0& 0.934& 37\\
PC-Like essential recovery algorithm &0.519& 0& 0.929& 37\\
&0.519& 0& 0.929& 37\\
\midrule
&0.635& 0.0167& 0.932& 30\\
Decomposition-Based Algorithm with Minimum bidirected Edges & 0.635& 0.020& 0.926& 32 \\
&0.654& 0.0134& 0.937& 27 \\
\midrule
&0.558& 0& 0.934& 27\\
PC-Like Algorithm with Minimum bidirected Edges &0.519& 0& 0.929& 29\\
&0.519& 0& 0.929& 29\\
\bottomrule
\end{tabular}
\caption{Results for discrete samples from the INSURANCE network. The three rows of each block correspond to significance
levels $\alpha=0.05/0.01/0.005$, respectively.}\label{insurance}
\end{table}
\begin{table}\centering
\begin{tabular}{c|c|c|c|c}
& TPR & FPR&ACC& SHD\\
\midrule
&0.783& 0.0194& 0.967&34\\
Decomposition-Based essential recovery algorithm &0.783& 0.0161& 0.967&32\\
&0.761& 0.021& 0.964& 36\\
\midrule
&0.457& 0& 0.962& 38\\
PC-Like essential recovery algorithm &0.435& 0& 0.961& 38\\
&0.413& 0& 0.959& 41\\
\midrule
&0.783& 0.0194& 0.967&30\\
Decomposition-Based Algorithm with Minimum bidirected Edges &0.783& 0.0161& 0.967&28 \\
&0.761& 0.021& 0.964& 35\\
\midrule
&0.457& 0& 0.962& 33\\
PC-Like Algorithm with Minimum bidirected Edges &0.435& 0& 0.961& 33\\
&0.413& 0& 0.959& 36\\
\bottomrule
\end{tabular}
\caption{Results for discrete samples from the ALARM network. The three rows of each block correspond to significance
levels $\alpha=0.05/0.01/0.005$, respectively.}\label{alarm}
\end{table}
\begin{table}\centering
\begin{tabular}{c|c|c|c|c}
& TPR & FPR&ACC& SHD\\
\midrule
&0.758& 0.003& 0.986& 26\\
Decomposition-Based essential recovery algorithm &0.742& 0.002& 0.987& 24\\
&0.757& 0.002& 0.988& 22\\
\midrule
&0.457& 0& 0.962& 38\\
PC-Like essential recovery algorithm &0.515& 0.0007& 0.979& 40\\
&0.515& 0.0007& 0.979& 40\\
\midrule
&0.758& 0.003& 0.986& 42\\
Decomposition-Based Algorithm with Minimum bidirected Edges &0.742& 0.002& 0.987&41 \\
&0.757& 0.002& 0.988& 24\\
\midrule
&0.457& 0& 0.962& 38\\
PC-Like Algorithm with Minimum bidirected Edges &0.515& 0.0007& 0.979& 38\\
&0.515& 0.0007& 0.979&39\\
\bottomrule
\end{tabular}
\caption{Results for discrete samples from the HAILFINDER network. The three rows of each block correspond to significance
levels $\alpha=0.05/0.01/0.005$, respectively.}\label{hailfinder}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=.25]{images/asia.pdf}
\caption{\href{http://www.bnlearn.com/bnrepository/}{ASIA (sometimes called LUNG CANCER or CHEST CLINIC)},
Number of nodes: 8,
Number of arcs: 8,
Number of parameters: 18,
Average Markov blanket size: 2.50,
Average degree: 2.00,
Maximum in-degree: 2.}
\label{fig:asia}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{images/insurance.pdf}
\caption{\href{http://www.bnlearn.com/bnrepository/}{INSURANCE},
Number of nodes: 27,
Number of arcs: 52,
Number of parameters: 984,
Average Markov blanket size: 5.19,
Average degree: 3.85,
Maximum in-degree: 3.}
\label{fig:insurance}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{images/alarm.pdf}
\caption{\href{http://www.bnlearn.com/bnrepository/}{ALARM},
Number of nodes: 37,
Number of arcs: 46,
Number of parameters: 509,
Average Markov blanket size: 3.51,
Average degree: 2.49,
Maximum in-degree: 4.}
\label{fig:alarm}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.9]{images/hailfinder.pdf}
\caption{\href{http://www.bnlearn.com/bnrepository/}{HAILFINDER},
Number of nodes: 56,
Number of arcs: 66,
Number of parameters: 2656,
Average Markov blanket size: 3.54,
Average degree: 2.36,
Maximum in-degree: 4.}
\label{fig:hailfinder}
\end{figure}
\section*{Appendix A. Proofs of Theoretical Results}
\begin{lemma}\label{lem1}
Let $\rho$ be a chain from $u$ to $v$, and let $W$ be the set of all vertices on $\rho$ ($W$ may or may not contain $u$ and $v$).
Suppose that the chain $\rho$ is blocked by $S$. If $W\subseteq S$, then the chain $\rho$ is blocked by $W$ and by any set containing $W$.
\end{lemma}
\begin{proof}
Since the blocking of the chain $\rho$ depends on those vertices between $u$ and $v$ that are contained in the $m$-separator,
and since $W$ contains all vertices on $\rho$, $\rho$ is also blocked by $S \cap W = W$ if $\rho$ is blocked by $S$. Since all colliders on $\rho$
have already been activated conditionally on $W$, adding other vertices into the conditional set does not make any
new collider active on $\rho$. This implies that $\rho$ is blocked by any set containing $W$.
\end{proof}
\begin{lemma}\label{lem2}
Let $T$ be an $m$-separation tree for CG $G$, and $K$ be a separator of $T$ that separates $T$ into two
subtrees $T_1$ and $T_2$ with variable sets $V_1$ and $V_2$ respectively. Suppose that $\rho$ is a chain from $u$ to $v$ in $G$ where $u\in V_1\setminus K$ and $v\in V_2\setminus K$. Let $W$ denote the set of all vertices on $\rho$ ($W$ may or may not contain $u$ and $v$). Then the
chain $\rho$ is blocked by $W\cap K$ and by any set containing $W\cap K$.
\end{lemma}
\begin{proof}
Since $u\in V_1\setminus K$ and $v\in V_2\setminus K$, there is a sequence from $s$ (possibly $u$) to $y$ (possibly $v$) in $\rho =(u,\dots,s,t,\dots,x,y,\dots,v)$ such that $s\in V_1\setminus K$, $y\in V_2\setminus K$, and all vertices from $t$ to $x$ are contained in $K$. Let $\rho'$ be the sub-chain of $\rho$ from $s$ to $y$ and $W'$ the vertex set from $t$ to $x$, so $W'\subseteq K$. Since $s\in V_1\setminus K$ and $y\in V_2\setminus K$, we have from the definition of an $m$-separation tree that $K$ $m$-separates $s$ and $y$ in $G$, i.e., $K$ blocks $\rho'$. By Lemma \ref{lem1}, we obtain that $\rho'$ is blocked by $W'(\subseteq K)$ and by any set containing $W'$. Since $W'\subseteq (K\cap W)$, $\rho'$ is blocked by $K\cap W$ and by any set containing $K\cap W$. Thus $\rho(\supseteq \rho')$ is also blocked by them.
\end{proof}
\begin{remark}\label{rem1}
Javidian and Valtorta showed that if we find a separator over $S$ in $(G_{An(u\cup v)})^a$ then it is an $m$-separator in $G$. On the other hand, if there exists an $m$-separator over $S$ in $G$ then there must exist a separator over $S$ in $(G_{An(u\cup v)})^a$ by removing all nodes which are not in $An(u\cup v)$ from it \citep{jv2}.
\end{remark}
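Once the augmented (moral) graph of the relevant ancestral subgraph has been formed, the remark reduces the $m$-separation test to plain vertex separation in an undirected graph, which is an ordinary reachability check avoiding the separator. A sketch of that final step (the adjacency representation and function name are ours):

```python
from collections import deque

def separated(adj, u, v, S):
    """True iff every path from u to v in the undirected graph `adj`
    (dict: vertex -> set of neighbors) passes through the separator S."""
    if u in S or v in S:
        raise ValueError("endpoints may not belong to the separator")
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return False  # found a path that avoids S
        for y in adj.get(x, ()):
            if y not in seen and y not in S:
                seen.add(y)
                queue.append(y)
    return True
```

On the path graph $0-1-2$, the set $\{1\}$ separates $0$ from $2$, while the empty set does not.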
Observations in Remark \ref{rem1} yield the following results.
\begin{lemma}\label{lem3}
Let $u$ and $v$ be two non-adjacent vertices in MVR CG $G$, and let $\rho$ be a chain from $u$ to $v$. If $\rho$ is not contained in
$An(u\cup v)$, then $\rho$ is blocked by any subset S of $an(u\cup v)$.
\end{lemma}
\begin{proof}
Since $\rho \not\subseteq An(u\cup v)$, there is a sequence from $s$ (possibly $u$) to $y$ (possibly $v$) in $\rho=(u,\dots,s,t,\dots,x,y,\dots,v)$ such that $s$ and $y$ are contained in $An(u\cup v)$ and all vertices from $t$ to $x$ are outside of $An(u\cup v)$. Then the edges between $s$ and $t$ and between $x$ and $y$ must be oriented as $s\mathrel{\circ\!\!\!\rightarrow} t$ and $x\mathrel{\leftarrow\!\!\!\circ} y$, otherwise $t$ or $x$ would belong to $an(u\cup v)$. Thus there exists at least one collider between $s$ and $y$ on $\rho$. The middle vertex $w$ of the collider closest to $s$ between $s$ and $y$ is not contained in
$an(u\cup v)$, and no descendant of $w$ is in $an(u\cup v)$, otherwise there would be a (partially) directed cycle. So $\rho$ is blocked
by the collider, and the collider cannot be activated conditionally on any vertex in $S$, where $S\subseteq an(u\cup v)$.
\end{proof}
\begin{lemma}\label{lem4}
Let $T$ be an $m$-separation tree for CG $G$. For any vertex $u$ there exists at least one node of $T$ that contains $u$ and $bd(u)$.
\end{lemma}
\begin{proof}
If $bd(u)$ is empty, the claim is trivial. Otherwise, since no set can separate $u$ from a parent (or neighbor), there must be a node of $T$ that contains $u$ and that parent (or neighbor). If $u$ has only
one element in its boundary, we obtain the lemma. If $u$ has two or more elements in its boundary, suppose for contradiction that no node of $T$ contains $u$ together with all of $bd(u)$. Since all vertices in $V$ appear in $T$, we can then choose two elements $v$ and $w$ of $u$'s boundary that are not contained in a single node but are contained in two different nodes of $T$,
say $\{u,v\}\subseteq C$ and $\{u,w\}\subseteq C'$ respectively. On the chain from $C$ to $C'$ in $T$, all
separators must contain $u$, otherwise they could not separate $C$ from $C'$. However, any separator containing $u$ cannot
separate $v$ and $w$, because $v\mathrel{\circ\!\!\!\rightarrow} u\mathrel{\leftarrow\!\!\!\circ} w$ is an active chain between $v$ and $w$ in $G$. Thus we obtain a contradiction.
\end{proof}
\begin{lemma}\label{lem5}
Let $T$ be an $m$-separation tree for CG $G$ and $C$ a node of $T$. If $u$ and $v$ are two vertices in $C$ that
are non-adjacent in $G$, then there exists a node $C'$ of $T$ containing $u, v$ and a set $S$ such that $S$ $m$-separates $u$ and $v$ in $G$.
\end{lemma}
\begin{proof}
Without loss of generality, we can suppose that $v$ is not a descendant of the vertex $u$ in $G$, i.e., $v\not\in nd(u)$. According to the local Markov property for MVR chain graphs proposed by Javidian and Valtorta in \citep{jv1}, we know that $u\perp\!\!\!\perp [nd(u)\setminus bd(u)]|pa_G(u).$ By Lemma \ref{lem4}, there is a
node $C_1$ of $T$ that contains $u$ and $bd(u)$. If $v\in C_1$, then $S$ defined as the parents of $u$ $m$-separates $u$ from $v$.
If $v\not\in C_1$, choose the node $C_2$ that is closest in $T$ to the node $C_1$ and that contains $u$ and $v$. Suppose that there is at least one parent (or neighbor) $p$ of $u$ that is not contained
in $C_2$. Then there is a separator $K$ connecting $C_2$ toward $C_1$ in $T$ such that $K$ $m$-separates $p$ from all vertices in $C_2\setminus K$. Note that on the chain from $C_1$ to $C_2$ in $T$, all
separators must contain $u$, otherwise they could not separate $C_1$ from $C_2$. So, we have $u\in K$ but $v\not\in K$ (if $v\in K$, then $C_2$ would not be the closest node of $T$ to the node $C_1$). In fact, for every parent (or neighbor) $p'$ of $u$ that is contained in $C_1$ but not in $C_2$, $K$ separates $p'$ from all vertices in $C_2\setminus K$, especially the vertex $v$.
Define $S=(an(u\cup v)\cap C_2)$, which is a subset of $C_2$. We need to show that $u$ and $v$ are $m$-separated by $S$, that is, every chain between $u$
and $v$ in $G$ is blocked by $S$. Let $\rho$ be an arbitrary chain between $u$ and $v$ in $G$.
If $\rho$ is not contained in $An(u\cup v)$, then we obtain from Lemma \ref{lem3} that $\rho$ is blocked by $S$.
When $\rho$ is contained in $An(u\cup v)$, let $x$ be adjacent to $u$ on $\rho$, that is, $\rho =
(u, x, y, \dots , v)$. We consider the three possible orientations of the edge between $u$ and $x$. We now show that $\rho$ is blocked in all three cases.
\begin{itemize}
\item[i:] $u\gets x$, so we know that $x$ is not a collider and we have two possible sub-cases:
\begin{enumerate}
\item $x\in C_2$. In this case the chain $\rho$ is blocked at $x$.
\item $x\not\in C_2$. In this case $K$ $m$-separates $x$ from $v$. By
Lemma \ref{lem2}, we can obtain that the sub-chain $\rho'$ from $x$ to $v$ can be blocked by $W\cap K$ where $W$ denotes the set of
all vertices between $x$ and $v$ (not containing $x$ and $v$) on $\rho'$. Since $S\supseteq (W\cap K)$, we obtain from Lemma \ref{lem2} that $S$ also blocks $\rho'$. Hence the chain $\rho$ is blocked by $S$.
\end{enumerate}
\item[ii:] $u\to x$. We have the following sub-cases:
\begin{enumerate}
\item $x\in an(u)$. This case is impossible because a directed cycle would occur.
\item $x\in an(v)$. This case is impossible because $v$ cannot be a descendant of $u$.
\end{enumerate}
\item[iii:] $u\leftrightarrow x$. We have the following sub-cases:
\begin{enumerate}
\item $x\in an(u)$. This case is impossible because a partially directed cycle would occur.
\item $x\in an(v)$ and $v$ is in the same chain component $\tau$ that contains $u, x$. This is impossible, because in this case we have a partially directed cycle.
\item $x\in an(v)$ and $v$ is not in the same chain component $\tau$ that contains $u, x$. We have the following sub-cases:
\begin{itemize}
\item $x\not\in C_2$. In this case $K$ $m$-separates $x$ from $v$. By
Lemma \ref{lem2}, we can obtain that the sub-chain $\rho'$ from $x$ to $v$ can be blocked by $W\cap K$ where $W$ denotes the set of
all vertices between $x$ and $v$ (not containing $x$ and $v$) on $\rho'$. Since $S\supseteq (W\cap K)$, we obtain from Lemma \ref{lem2} that $S$ also blocks $\rho'$. Hence the chain $\rho$ is blocked by $S$.
\item $x\in C_2$. We have the three following sub-cases:
\begin{itemize}
\item $u\leftrightarrow x\to y$. In this case $x\in S$ blocks the chain. Note that in this case it is possible that $y=v$.
\item $u\leftrightarrow x\gets y$. So, $y$ ($\ne v$; otherwise a directed cycle would occur) is not a collider. If $y\in C_2$ then the chain $\rho$ is blocked at $y$. Otherwise, we have the two following sub-cases:
\begin{itemize}
\item There is a node $C'$ between $C_1$ and $C_2$ that contains $y$ (note that it is possible that $C'=C_1$), so $K$ $m$-separates $y$ from $v$ and the same argument used for case i.2 holds.
\item In this case $K$ $m$-separates $y$ from $p$ ($p\in bd(u)\cap C_1$ and $p\not\in C_2$), which is impossible because the chain $p\mathrel{\circ\!\!\!\rightarrow} u\leftrightarrow x\gets y$ is active (note that $u,x\in K$).
\end{itemize}
\item $u\leftrightarrow x\leftrightarrow y$. If there is an outgoing ($\to$) edge from $y$ ($\ne v$; otherwise a partially directed cycle would occur), then the same argument as in the previous sub-case ($u\leftrightarrow x\gets y$) holds. Otherwise, $y$ is a collider. If $y\not\in C_2$ then the chain $\rho$ is blocked at $y$. If $y\in C_2$, there must be a non-collider vertex on the chain $\rho$ between $y$ and $v$ to prevent a (partially) directed cycle. The same argument as in the previous sub-case ($u\leftrightarrow x\gets y$) holds.
\end{itemize}
\end{itemize}
\end{enumerate}
\end{itemize}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm2}]
From \citep{cdls}, we know that any separator $S$ in junction tree $T$ separates $V_1\setminus S$ and $V_2\setminus S$ in the triangulated graph $\bar{G}_V^t$, where $V_i$ denotes the variable set of the subtree $T_i$ induced by removing the edge with
a separator $S$ attached, for $i = 1, 2$. Since the edge set of $\bar{G}_V^t$ contains that of undirected independence graph $\bar{G}_V$ for $G$, $V_1\setminus S$ and $V_2\setminus S$ are also separated in $\bar{G}_V$. Since $\bar{G}_V$ is an undirected independence graph for $G$, using Definition \ref{septree} we obtain that $T$ is an $m$-separation tree for $G$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm1}]
\noindent ($\Rightarrow$) If condition (i) is the case, nothing remains to prove. Otherwise, Lemma \ref{lem5} implies condition (ii).
\noindent ($\Leftarrow$) Assume that $u$ and $v$ are not contained together in any node $C$ of $T$. Also, assume that $C_1$ and $C_2$ are two nodes of $T$ that contain $u$ and $v$, respectively. Consider that $C_1'$ is the most distant node from $C_1$, between $C_1$ and $C_2$, that contains $u$ and $C_2'$ is the most distant node from $C_2$, between $C_1$ and $C_2$, that contains $v$. Note that it is possible that $C_1'=C_1$ or $C_2'=C_2$. By the condition (i) we know that $C_1'\ne C_2'$. Any separator between $C_1'$ and $C_2'$ satisfies the assumptions of Lemma \ref{lem2}. The sufficiency of condition (i) is given by Lemma \ref{lem2}.
The sufficiency of
condition (ii) is trivial by the definition of $m$-separation.
\end{proof}
\section*{Appendix B. Proofs for Correctness of the Algorithms}
\begin{proof}
[Correctness of Algorithm \ref{hypergraph}] Since an augmented graph for CG $G$ is an undirected independence graph, by the definition of
an undirected independence graph it is enough to show that $\bar{G}_V$ defined in step 3 contains all edges of $(G_V)^a$. It
is obvious that $\bar{E}$ contains all edges obtained by dropping the directions of directed edges in $G$, since no set can
$m$-separate two vertices that are adjacent in $G$.
Now we show that $\bar{E}$ also contains any augmented edge that connects vertices $u$ and $v$ having a collider chain between them, that
is, $(u, v)\in \bar{E}$. Any chain graph yields a directed acyclic graph $D$ of its chain components having $\mathcal{T}$ as a node set and an edge $T_1\to T_2$ whenever there exists in the chain graph $G$ at least one edge $u\rightarrow v$ connecting a node \textit{u} in $T_1$ with a node \textit{v} in $T_2$ \citep{ml2}. So, there is a collider chain between two nodes $u$ and $v$ if and only if there is a chain component $\tau\in \mathcal{T}$ such that
\begin{enumerate}
\item $u,v\in \tau$, or
\item $u\in \tau$ and $v\in pa_G(\tau)$ or vice versa, or
\item $u, v\in pa_G(\tau)$.
\end{enumerate}
Since for each connected component $\tau$ there is a $C_h\in C$ containing both $\tau$ and its parent set $pa_G(\tau)$, in all of the above cases we have a $(u,v)$ edge in step 2. Therefore, $\bar{G}_V$ defined in step 3 contains all edges of $(G_V)^a$.
\end{proof}
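The three cases enumerated in the proof above amount to a simple construction: for every chain component $\tau$, make $\tau\cup pa_G(\tau)$ a clique. A sketch of this step, assuming the chain components and their parent sets have already been computed (the function name and data layout are ours):

```python
from itertools import combinations

def augmented_edges(components, pa):
    """Undirected edges of the augmented graph (G_V)^a.
    components: list of chain-component vertex sets;
    pa: dict mapping frozenset(component) to its parent set in G."""
    edges = set()
    for tau in components:
        # every pair of vertices in tau ∪ pa(tau) becomes adjacent
        block = set(tau) | set(pa[frozenset(tau)])
        for x, y in combinations(sorted(block), 2):
            edges.add(frozenset({x, y}))
    return edges
```

For instance, with $1\to 3\gets 2$ the component $\{3\}$ has parents $\{1,2\}$, so the construction yields the three edges $1-3$, $2-3$, and the "moral" edge $1-2$ between the co-parents.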
\begin{proof}
[Correctness of Algorithm \ref{alg1}] By the sufficiency of Theorem \ref{thm1}, the initializations at steps 2 and 3 for
creating edges guarantee that no edge is created between any two variables which are not in the same node of the
$m$-separation tree. Also, by the sufficiency of Theorem \ref{thm1}, deleting edges at steps 2 and 3 guarantees that any other edge
between two $m$-separated variables can be deleted in some local skeleton. Thus the global skeleton obtained at step 3 is
correct. In a maximal ancestral graph, every missing edge corresponds to at least one independency in the corresponding
independence model \citep{rs}, and MVR CGs are a subclass of maximal ancestral graphs \citep{jv1}. Therefore, according to the necessity of Theorem \ref{thm1}, each augmented edge $(u, v)$ in the undirected independence graph must be deleted at some subgraph over a node of the $m$-separation tree. Furthermore, according to Lemma \ref{lem4}, for every $v$-structure $(u\mathrel{\circ\!\!\!\rightarrow} w\mathrel{\leftarrow\!\!\!\circ} v)$ there is a node in $m$-separation tree $T$ that contains $u, v$ and $w$, and obviously $w\not\in S_{uv}$. Therefore, we can determine all $v$-structures at step 4, which
completes our proof.
\end{proof}
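The combination rule argued for in this proof can be sketched as follows: an edge is created only inside some node of the $m$-separation tree, and it is deleted if any tree node containing both endpoints yields a separator by local search. This is a rough illustration, not the algorithm itself; `local_test`, which stands for the localized separator search within a node, and the data layout are ours:

```python
from itertools import combinations

def combine_local_skeletons(tree_nodes, local_test):
    """tree_nodes: iterable of vertex sets (nodes of the m-separation tree);
    local_test(u, v, C) returns a separating set found within node C,
    or None if the local search finds none."""
    # step 1: candidate edges exist only within some tree node
    candidate = set()
    for C in tree_nodes:
        candidate |= {frozenset({u, v}) for u, v in combinations(C, 2)}
    # step 2: delete an edge if some node containing both endpoints separates them
    skeleton = set()
    for e in candidate:
        u, v = tuple(e)
        if all(local_test(u, v, C) is None
               for C in tree_nodes if u in C and v in C):
            skeleton.add(e)
    return skeleton
```

Note that a pair of vertices never appearing together in a tree node (condition (i) of Theorem \ref{thm1}) is never connected at all, which is exactly why no extra combination work is needed.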
\section*{Acknowledgements}
We are grateful to Professor Jose M. Pe\~{n}a and Dr. Dag Sonntag for providing us with code that helped in the design of the algorithm that we implemented in R.
\section{Introduction}
\begin{figure}[ht]
\centering
\fbox{
\includegraphics[width=.7\linewidth]{images/essentialMVRrecovery.png}}
\caption{\small{The procedure for learning the structure of an essential MVR CG from a faithful distribution.}}
\label{fig:my_label}
\end{figure}
Probabilistic graphical models (PGMs) use graphs, either undirected, directed, bidirected, or mixed, to represent possible dependencies among the variables of a multivariate probability distribution. Two types of graphical representations of distributions are commonly used, namely, Bayesian networks (BNs) and Markov random fields (Markov networks (MNs)), whose graphical parts are, respectively, a directed acyclic graph (DAG) and an undirected graph. Both families encompass the properties of factorization and independencies, but they differ in the set of independencies they can encode and the factorization of the distribution that they induce.
Currently, systems containing both causal and non-causal relationships are mostly modeled with directed acyclic graphs (DAGs). An alternative approach is using chain graphs (CGs). Chain graphs may
have both directed and undirected edges under the constraint that there do not exist any semi-directed cycles \citep{d}. So, CGs may contain two types of edges,
the directed type that corresponds to the causal relationship in DAGs and a
second type of edge representing a symmetric relationship \citep{s2}. In
particular, $X_1$ is a direct cause of $X_2$ only if $X_1\to X_2$ (i.e., $X_1$ is a parent
of $X_2$), and $X_1$ is a (possibly indirect) cause of $X_2$ only if there is a directed
path from $X_1$ to $X_2$ (i.e., $X_1$ is an ancestor of $X_2$). So, while the interpretation of the directed edge in a CG is quite clear,
the second type of edge can represent different types of relations and, depending on how we interpret it in the graph, we say that we have different CG interpretations with different separation criteria, i.e. different ways of reading conditional independencies from the graph, and different intuitive meaning behind
their edges. The three following interpretations are the best known in the literature. The first interpretation (LWF) was introduced by Lauritzen,
Wermuth and Frydenberg \citep{lw, f} to combine DAGs and undirected graphs (UGs). The second
interpretation (AMP), was introduced by Andersson, Madigan and Perlman, and also combines DAGs and UGs but with a separation criterion
that more closely resembles the one of DAGs \citep{amp}. The third interpretation,
the multivariate regression interpretation (MVR), was introduced by Cox
and Wermuth \citep{cw1, cw2} to combine DAGs and bidirected (covariance) graphs.
Unlike in the other CG interpretations, the bidirected edge in MVR CGs has
a strong intuitive meaning. It can be seen to represent one or more hidden
common causes between the variables connected by it. In other words, in an MVR CG any bidirected
edge $X\leftrightarrow Y$ can be replaced by $X\gets H\to Y$ to obtain a Bayesian network representing
the same independence model over the original variables, i.e. excluding the
new variables H. These variables are called hidden, or latent, and have been
marginalized away in the CG model \citep{s}. See \citep{jv1} for details on the properties of MVR chain graphs.
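This replacement is mechanical: each bidirected edge receives its own fresh latent common parent. A small sketch of the transformation (the representation and names are ours; it is illustrative only):

```python
def latent_dag(directed, bidirected):
    """Replace every bidirected edge X <-> Y with X <- H -> Y.
    directed: set of (parent, child) pairs; bidirected: set of vertex pairs.
    Returns the directed edges of the resulting DAG over observed + latent nodes."""
    edges = set(directed)
    for i, (x, y) in enumerate(sorted(map(tuple, map(sorted, bidirected)))):
        h = f"H{i}"        # fresh hidden variable for this bidirected edge
        edges.add((h, x))
        edges.add((h, y))
    return edges
```

Applied to the gene/disease example below, the single bidirected edge Disease1 $\leftrightarrow$ Disease2 becomes a hidden common cause with two outgoing directed edges.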
Latent variables, which are often present in practice, cause several complications. First, causal inference based on structural learning (model selection) algorithms such as the PC algorithm \citep{sgs} may be incorrect. Second, if a distribution is faithful\footnote{A distribution $P$ is faithful to DAG $G$ if any independency in $P$ implies a corresponding $d$-separation property in $G$ \citep{sgs}.} to a DAG, then the distribution obtained by marginalizing on some of the variables may not be faithful to any DAG on the observed variables, i.e., the space of DAGs is not closed under marginalization \citep{cmkr}.
These problems can be solved by exploiting MVR chain graphs. An example of a situation in which a CG is useful is a system containing two genes and two diseases caused by them, such that Gene1
is the cause of Disease1, Gene2 is the cause of Disease2, and the diseases are correlated. In this case we might suspect the presence of an
unknown factor inducing the correlation between Disease1 and Disease2, such as
exposure to a stressful environment. Having such a hidden variable results in the
independence model described above. The MVR CG representing this information
is shown in Figure \ref{Fig:gene} (a), while the best (inclusion optimal) BN and MN are shown in
Figure \ref{Fig:gene} (b) and (c), respectively. We can now see that only the MVR CG
describes the relations in the system correctly \citep{Sonntag2015}.
\begin{figure}[ht]
\centering
\includegraphics[scale=.4]{images/gene.png}
\caption{A gene and disease example with MVR CG representation, BN representation and MN
representation \citep{Sonntag2015}.} \label{Fig:gene}
\end{figure}
As a result, designing efficient algorithms for learning the structure of MVR chain graphs is an important and desirable task.
Sonntag lists four constraint-based learning algorithms for CGs. All are based on testing if variables
are (conditionally) independent in the data using an independence test, and using this information to deduce the structure of the optimal graph. These algorithms are the PC-like algorithms
\citep{srs, p1, sp}, the answer set programming (ASP) algorithms \citep{p3, sjph}, the LCD algorithm \citep{mxg} and the
CKES algorithm \citep{psn}. The former two have implementations for all three
CG interpretations, while the latter two are only applicable for LWF CGs \citep{s2}.
In this paper, we propose a decomposition approach for recovering structures of MVR CGs. Our algorithms are natural extensions of algorithms in \citep{xie}. In particular, the
rule in \citep{xie} for combining local structures into a global skeleton is still applicable
and, unlike for example the algorithms in \citep{mxg}, no additional care is needed to ensure a valid combination. Moreover, the method for
extending a global skeleton to a Markov equivalence class is exactly the same as that for Bayesian networks.
The paper is organized as follows: Section \ref{defs&concepts} gives notation and definitions. Construction of the $m$-separation trees to be used for decomposition is discussed in Section \ref{construct-trees}. In Section \ref{main-alg}, we show a condition for decomposing structural learning
of MVR CGs, propose the main algorithm, and give an example to illustrate our approach for recovering the global structure
of an MVR CG. Section \ref{complexity} discusses the complexity and advantages of the proposed algorithms. Section \ref{evaluation} describes our evaluation setup. Both Gaussian and discrete networks were used. A comparison with the PC-like algorithm of \citep{sp} was carried out. Both quality of the recovered networks and running time are reported. Finally, we conclude with
some discussion in Section \ref{discussion}. The proofs of our main results and the correctness of the algorithms are given in Appendices A and B.
\section{Definitions and Concepts}\label{defs&concepts}
In this paper we consider graphs containing both directed ($\to$) and bidirected ($\leftrightarrow$) edges and largely use the terminology of \citep{xie, r2}, where the reader can also find further details. Below we briefly list some of the most central concepts used in this paper.
If there is an arrow from $a$ pointing towards $b$, $a$ is said to be a parent
of $b$. The set of parents of $b$ is denoted as $pa(b)$. If there is a bidirected edge between $a$ and $b$, $a$ and $b$ are said to be neighbors. The set of neighbors of a vertex $a$ is denoted as $ne(a)$. The expressions $pa(A)$ and $ne(A)$ denote the collection of
parents and neighbors of vertices in $A$ that are not themselves
elements of $A$. The boundary $bd(A)$ of a subset $A$ of vertices is the set of vertices in $V\setminus A$ that are parents or neighbors to vertices in $A$.
A path of length $n$ from $a$ to $b$ is a sequence $a=a_0,\dots , a_n=b$ of
distinct vertices such that $(a_i\to a_{i+1})\in E$, for all $i=0,\dots ,n-1$. A chain of length $n$ from $a$ to $b$ is a sequence $a=a_0,\dots , a_n=b$ of
distinct vertices such that $(a_i\to a_{i+1})\in E$, or $(a_{i+1}\to a_i)\in E$, or $(a_{i+1}\leftrightarrow a_i)\in E$, for all $i=0,\dots ,n-1$. We say that $u$ is an ancestor of $v$ and $v$
is a descendant of $u$ if there is a path from $u$ to $v$ in $G$.
The set of ancestors of $v$ is denoted as $an(v)$, and we define $An(v) = an(v)\cup v$. We apply this definition to sets: $an(X) = \{\alpha | \alpha \textrm{ is an ancestor of } \beta \textrm{ for some } \beta \in X\}$.
A partially directed cycle in a graph $G$ is a sequence of $n$ distinct vertices $v_1,\dots, v_n$ ($n\ge 3$),
with $v_{n+1}\equiv v_1$, such that
\begin{itemize}
\item $\forall i (1\le i\le n)$ either $v_i\leftrightarrow v_{i+1}$ or $v_i\to v_{i+1}$, and
\item $\exists j (1\le j\le n)$ such that $v_j\to v_{j+1}$.
\end{itemize}
A graph with only undirected edges is called an undirected graph (UG). A graph with only
directed edges and without directed cycles is called a directed acyclic graph (DAG). Acyclic directed mixed graphs, also known as semi-Markov(ian) \citep{pj}
models contain directed ($\rightarrow$) and bidirected
($\leftrightarrow$) edges subject to the restriction that there are no directed cycles \citep{r2,er}. A graph that has no partially directed cycles is called a \textit{chain graph}.
A nonendpoint vertex $\zeta$ on a chain is a \emph{collider} on the chain if the edges preceding and succeeding $\zeta$ on the chain both have an arrowhead at $\zeta$, that is, $\to \zeta \gets$, $\leftrightarrow \zeta \leftrightarrow$, $\leftrightarrow \zeta \gets$, or $\to \zeta \leftrightarrow$. A nonendpoint vertex $\zeta$ on a chain that is not a collider is a noncollider on the chain. A chain between vertices $\alpha$ and $\beta$ in a chain graph $G$ is said to be $m$-connecting given a set $Z$ (possibly empty), with $\alpha, \beta \notin Z$, if:
\begin{enumerate}
\item[(i)] every noncollider on the chain is not in $Z$, and
\item[(ii)] every collider on the chain is in $An_G(Z)$.
\end{enumerate}
A chain that is not $m$-connecting given $Z$ is said to be blocked given (or by) $Z$.
If there is no chain $m$-connecting $\alpha$ and $\beta$ given $Z$, then $\alpha$ and $\beta$ are said to be \emph{m-separated} given $Z$. Sets $X$ and $Y$ are $m$-separated given $Z$, if for every pair $\alpha, \beta$, with $\alpha\in X$ and $\beta \in Y$, $\alpha$ and $\beta$ are $m$-separated given $Z$ ($X$, $Y$, and $Z$ are disjoint sets; $X, Y$ are nonempty). We denote the independence model resulting from applying the $m$-separation criterion to $G$ by $\Im_m(G)$. This is an extension of Pearl's $d$-separation criterion \citep{pearl1} to MVR chain graphs in that in a DAG $D$, a chain is $d$-connecting if and only if it is $m$-connecting.
Two vertices $x$ and $y$ in chain graph $G$ are said to be collider connected if there is a chain from $x$ to $y$ in $G$ on which every non-endpoint vertex is a collider; such a chain is called a collider chain. Note that a single edge trivially forms a collider chain (path), so if $x$ and $y$ are adjacent in a chain graph then they are collider connected. The augmented graph derived from $G$, denoted $(G)^a$, is an undirected graph with the same vertex set as $G$ such that $$c\--d \textrm{ in } (G)^a \Leftrightarrow c \textrm{ and } d \textrm{ are collider connected in } G.$$
Disjoint sets $X, Y\ne \emptyset,$ and $Z$ ($Z$ may be empty) are said to be
$m^\ast$-separated given $Z$ if $X$ and $Y$ are separated by $Z$ in $(G_{an(X\cup Y\cup Z)})^a$. Otherwise $X$ and $Y$ are said to be $m^\ast$-connected
given $Z$. The resulting independence model is denoted by $\Im_{m^\ast}(G)$.
According to \citep[Theorem 3.18]{rs} and \citep{jv1}, for any chain graph $G$ we have $\Im_m(G)=\Im_{m^\ast}(G)$.
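As an illustration (not part of the algorithms below), the following Python sketch decides $m$-separation via the $m^\ast$-criterion above: it builds the augmented graph of the ancestral set and then tests ordinary separation. The graph encoding (directed edges as ordered pairs, bidirected edges as unordered pairs) and all function names are our own choices, and the collider-chain test is a simplification that ignores degenerate chains passing through the endpoints themselves.

```python
from collections import deque
from itertools import combinations

def ancestral_set(dir_edges, nodes):
    """An(X): X together with every vertex that has a directed path into X
    (only directed edges matter, per the definition of ancestor above)."""
    result, stack = set(nodes), list(nodes)
    while stack:
        v = stack.pop()
        for (a, b) in dir_edges:
            if b == v and a not in result:
                result.add(a)
                stack.append(a)
    return result

def augmented_edges(dir_edges, bi_edges, verts):
    """Edge set of (G[verts])^a: pairs of collider-connected vertices."""
    dirs = {(a, b) for (a, b) in dir_edges if a in verts and b in verts}
    bis = {frozenset(e) for e in bi_edges if set(e) <= verts}
    into = {x: set() for x in verts}   # into[x]: far ends carrying an arrowhead
    for (a, b) in dirs:
        into[a].add(b)
    for e in bis:
        a, b = tuple(e)
        into[a].add(b)
        into[b].add(a)
    parent = {x: x for x in verts}     # union-find over the bidirected part
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for e in bis:
        a, b = tuple(e)
        parent[find(a)] = find(b)
    edges = set()
    for x, y in combinations(sorted(verts), 2):
        adjacent = (x, y) in dirs or (y, x) in dirs or frozenset((x, y)) in bis
        # collider chain of length >= 2: x *-> v_1 <-> ... <-> v_k <-* y
        chain = any(find(u) == find(w) for u in into[x] for w in into[y])
        if adjacent or chain:
            edges.add(frozenset((x, y)))
    return edges

def m_separated(dir_edges, bi_edges, X, Y, Z):
    """Decide X independent of Y given Z via separation in the augmented
    graph of the ancestral set, as in the criterion above."""
    X, Y, Z = set(X), set(Y), set(Z)
    verts = ancestral_set(dir_edges, X | Y | Z)
    adj = {v: set() for v in verts}
    for e in augmented_edges(dir_edges, bi_edges, verts):
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = set(X), deque(X)     # BFS from X that may not enter Z
    while queue:
        for w in adj[queue.popleft()] - seen:
            if w in Y:
                return False
            if w not in Z:
                seen.add(w)
                queue.append(w)
    return True
```

On the gene--disease example (Gene1 $\to$ Disease1 $\leftrightarrow$ Disease2 $\gets$ Gene2), the sketch reports the two genes as marginally independent but dependent given both diseases.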
Let $\bar{G}_V = (V, \bar{E}_V)$ denote an undirected graph where $\bar{E}_V$ is a set of undirected edges. An undirected edge between
two vertices $u$ and $v$ is denoted by $(u, v)$. For a subset $A$ of $V$, let $\bar{G}_A= (A, \bar{E}_A)$ be the subgraph induced by $A$
and $\bar{E}_A = \{e\in \bar{E}_V | e\in A\times A\} = \bar{E}_V\cap (A\times A)$. An undirected graph is called complete if any pair of vertices is connected by an edge. For an undirected graph, we say that vertices $u$ and $v$ are separated by a set of vertices $Z$ if each path between $u$ and $v$ passes through $Z$. We say that two distinct vertex sets $X$ and $Y$ are separated by $Z$ if and
only if $Z$ separates every pair of vertices $u$ and $v$ for any $u\in X$ and $v\in Y$. We say that an undirected graph $\bar{G}_V$ is
an undirected independence graph (UIG) for CG $G$ if the fact that a set $Z$ separates $X$ and $Y$ in $\bar{G}_V$ implies that $Z$
$m$-separates $X$ and $Y$ in $G$. Note that the augmented graph derived from CG $G$, $(G)^a$, is an undirected independence graph for $G$. We say that $\bar{G}_V$ can be decomposed into subgraphs $\bar{G}_A$ and $\bar{G}_B$ if
\begin{itemize}
\item[(1)] $A\cup B=V$, and
\item[(2)] $C=A\cap B$ separates $V\setminus A$ and $V\setminus B$ in $\bar{G}_V$.
\end{itemize}
The above decomposition does not require that the separator $C$ be complete, which is required for weak decomposition defined in \citep{l}. In the next section, we show that a
problem of structural learning of CG can also be decomposed into problems for its decomposed subgraphs even if
the separator is not complete.
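A decomposition in the above sense can be checked directly. In the sketch below (the helper names are our own; the undirected graph is given as a dictionary of neighbour sets), conditions (1) and (2) are tested with a breadth-first search that is forbidden to enter the separator $C = A\cap B$.

```python
from collections import deque

def separated(adj, X, Y, Z):
    """True when every path from X to Y in the undirected graph adj meets Z."""
    X, Y, Z = set(X), set(Y), set(Z)
    seen, queue = set(X), deque(X)
    while queue:
        for w in adj[queue.popleft()] - seen:
            if w in Y:
                return False
            if w not in Z:
                seen.add(w)
                queue.append(w)
    return True

def is_decomposition(adj, A, B):
    """Conditions (1)-(2): A u B = V, and C = A n B separates V\\A from V\\B."""
    V, A, B = set(adj), set(A), set(B)
    return A | B == V and separated(adj, V - A, V - B, A & B)
```

For the path graph $a - b - c - d$, the pair $(\{a,b,c\},\{b,c,d\})$ is a decomposition with separator $\{b,c\}$, while $(\{a,b\},\{c,d\})$ is not.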
A triangulated (chordal) graph is an undirected graph in which all cycles of four or more vertices have a chord, which is an edge that is not part of the cycle but connects two vertices of the cycle (see, for example, Figure \ref{Fig:mvr1}). For an
undirected graph $\bar{G}_V$ which is not triangulated, we can add extra (``fill-in'') edges to it so that it becomes a triangulated
graph, denoted by $\bar{G}_V^t$.
\begin{figure}[ht]
\centering
\includegraphics[scale=.4]{images/mvr1.png}
\caption{(a) An MVR CG $G$. (b) The augmented graph $G^a$, which is also a triangulated graph $G^t$.} \label{Fig:mvr1}
\end{figure}
Let $X\!\perp\!\!\!\perp Y$ denote
the independence of $X$ and $Y$, and $X\!\perp\!\!\!\perp Y|Z$ (or $\langle X,Y | Z\rangle$) the conditional independence of $X$ and $Y$ given $Z$. In this paper, we assume that all independencies of a
probability distribution of variables in $V$ can be checked by $m$-separations of $G$, called the faithfulness assumption \citep{sgs}. The faithfulness assumption means that all independencies and conditional independencies among variables can be represented by $G$.
The global skeleton is the undirected graph obtained by dropping the directions of all edges of a CG. Note that the absence of an
edge $(u, v)$ implies that there is a variable subset $S$ of $V$ such that $u$ and $v$ are independent conditional on $S$, that
is, $u\!\perp\!\!\!\perp v|S$ for some $S\subseteq V\setminus \{u,v\}$ \citep{jv1}. Two MVR CGs over the same variable set are called Markov equivalent if they
induce the same conditional independence restrictions. Two MVR CGs are Markov equivalent if and only if they have the
same global skeleton and the same set of $v$-structures (unshielded colliders) \citep{ws}. An equivalence class of MVR CGs consists of all MVR CGs which
are Markov equivalent, and it is represented as a partially directed graph (i.e., a graph containing directed, undirected, and bidirected edges and no directed cycles) where the directed/bidirected edges represent edges that are common to every MVR CG in it, while the undirected edges represent that any legal orientation of them leads
to a Markov equivalent MVR CG. Therefore the goal of structural learning is to construct a partially directed graph to
represent the equivalence class. A local skeleton for a subset $A$ of variables is an undirected subgraph for $A$ in which
the absence of an edge $(u, v)$ implies that there is a subset $S$ of $A$ such that $u\!\perp\!\!\!\perp v|S$.
Now, we introduce the notion of $m$-separation trees, which is used to facilitate the representation of the decomposition. The concept is similar to the junction tree of cliques and the
independence tree introduced for DAGs as $d$-separation trees in \citep{xie}. Let $C = \{C_1, \dots, C_H \}$ be a collection of distinct variable sets such that for $h = 1,\dots ,H, C_h\subseteq V$.
Let $T$ be a tree where each node corresponds to a distinct variable set in $C$, to be displayed as an oval (see, for example, Figure \ref{Fig:tree1}). The term `node' is used for an $m$-separation tree to distinguish from
the term `vertex' for a graph in general. An undirected edge $e = (C_i,C_j)$ connecting nodes $C_i$ and $C_j$ in $T$ is labeled with a separator $S = C_i\cap C_j$, which is displayed as a rectangle.
Removing an edge $e$ or, equivalently, removing a separator $S$ from $T$ splits $T$ into two subtrees
$T_1$ and $T_2$ with node sets $C_1$ and $C_2$ respectively. We use $V_i$ to denote the union of the
vertices contained in the nodes of the subtree $T_i$ for $i = 1,2$.
\begin{definition}\label{septree}
A tree $T$ with node set $C$ is said to be an $m$-separation tree for chain graph $G = (V,E)$ if
\begin{itemize}
\item $\cup_{C_i\in C}C_i=V$, and
\item for any separator $S$ in $T$ with $V_1$ and $V_2$ defined as above by removing $S$, we have $\langle V_1\setminus S,V_2\setminus S | S\rangle_G$.
\end{itemize}
\end{definition}
\begin{figure}[ht]
\centering
\includegraphics[scale=.4]{images/tree1.png}
\caption{An $m$-separation tree.} \label{Fig:tree1}
\end{figure}
Notice that a separator is defined in terms of a tree whose nodes consist of variable sets, while
the $m$-separator is defined based on a chain graph. In general, these two concepts are not related, though for an $m$-separation tree each separator must correspond to an $m$-separator in the underlying MVR chain graph. The definition of $m$-separation trees for MVR chain graphs is similar to that of junction trees of cliques,
see \citep{cdls,l}. Actually, it is not difficult to see that a junction tree
of a chain graph $G$ is also an $m$-separation tree. However, as in \citep{mxg}, we point out two differences here: (a) an $m$-separation tree is defined with $m$-separation and it does not require that every node be a clique or
that every separator be complete on the augmented graph; (b) junction trees are mostly used as inference engines, while our interest in $m$-separation trees is mainly derived from their power in facilitating the decomposition of structural learning.
A collection of variable sets $C = \{C_1, \dots, C_H \}$ is said to be a hypergraph on $V$ where each hyperedge $C_h$ is
a nonempty subset of variables, and $\cup_{h=1}^HC_h=V$. A hypergraph is a reduced hypergraph if $C_i\not\subseteq C_j$ for $i\ne j$. In this paper, only reduced hypergraphs are used, and thus simply called hypergraphs.
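Reducing a hypergraph only requires discarding duplicates and non-maximal hyperedges; a minimal sketch (the function name is our own):

```python
def reduce_hypergraph(C):
    """Return the reduced hypergraph: duplicates removed, and every C_i that
    is a proper subset of some C_j dropped, so only maximal hyperedges remain."""
    sets = {frozenset(c) for c in C}          # drop duplicates
    return [set(s) for s in sets
            if not any(s < t for t in sets)]  # keep maximal hyperedges only
```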
\section{Construction of \textit{m}-Separation Trees}\label{construct-trees}
As proposed in \citep{xie}, one can construct a $d$-separation tree from observed data, from
domain or prior knowledge of conditional independence relations or from a collection of databases.
However, their arguments are not valid for constructing an $m$-separation tree from domain knowledge or from observed data patterns when latent common causes are present, as in the current setting. In this section, we first extend
Theorem 2 of \citep{xie}, showing that their method for constructing a separation tree
from data remains valid for MVR chain graphs. Then we investigate sufficient conditions for constructing $m$-separation trees from domain or prior knowledge of conditional independence relations or from a collection of databases.
\subsection{Constructing an \textit{m}-Separation Tree from Observed Data}
In several algorithms for structural learning of PGMs, the first step is to construct an undirected independence graph in which
the absence of an edge $(u, v)$ implies $u \perp\!\!\!\perp v | V\setminus\{u,v\}$. To construct such an undirected graph, we can start with a complete undirected graph, and then for each pair of variables $u$ and $v$, an undirected edge $(u, v)$ is removed if $u$ and
$v$ are independent conditional on the set of all other variables \citep{xie}. For normally distributed data, the undirected independence graph can be efficiently constructed by removing an edge $(u, v)$ if and only if the corresponding entry in the concentration matrix (inverse covariance matrix) is zero \citep[Proposition 5.2]{l}. For this purpose, performing a conditional independence test for each pair of random variables using the
partial correlation coefficient can be used. If the $p$-value of the test is smaller than the given threshold, then there will be an edge on the output graph. For discrete data, a test of conditional independence given a large number of discrete variables may
be of extremely low power. To cope with such difficulty, a local discovery algorithm called
Max-Min Parents and Children (MMPC) \citep{tas} or the forward selection procedure described in \citep{ed} can be applied.
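For the Gaussian case, the zero pattern of the concentration matrix can be read off directly once a covariance matrix is available. The following self-contained sketch (with a toy Gauss--Jordan inverse standing in for a linear-algebra library, and a numerical tolerance standing in for the statistical test described above) illustrates the construction; all names are our own.

```python
def invert(M):
    """Tiny Gauss-Jordan matrix inverse, adequate for the small examples here."""
    n = len(M)
    A = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def uig_from_covariance(Sigma, tol=1e-8):
    """Undirected independence graph of a Gaussian model: the edge (i, j) is
    absent exactly when the partial correlation of i and j given the rest,
    read off the concentration matrix K = Sigma^{-1}, is (numerically) zero."""
    K = invert(Sigma)
    n = len(K)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            pcor = -K[i][j] / (K[i][i] * K[j][j]) ** 0.5
            if abs(pcor) > tol:
                edges.add((i, j))
    return edges
```

For the covariance of the Markov chain $X_0 \to X_1 \to X_2$ with unit innovations, the concentration matrix is tridiagonal and the sketch recovers exactly the two chain edges.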
An $m$-separation tree can be built by constructing a junction tree from an undirected independence graph. In fact, we generalize Theorem 2 of \citep{xie} as follows.
\begin{theorem}\label{thm2}
A junction tree constructed from an undirected independence graph for MVR CG $G$ is an $m$-separation tree for $G$.
\end{theorem}
An $m$-separation tree $T$ only requires that all $m$-separation properties of $T$ also hold for MVR CG $G$, but the reverse is
not required. Thus we only need to construct an undirected independence graph that may encode fewer conditional
independencies than the augmented graph; this means that the undirected independence graph may have extra edges
added to the augmented graph. As \citep{xie} observe for $d$-separation in DAGs, if all nodes of an $m$-separation tree contain only a few variables, ``the null hypothesis of the absence of an undirected edge may be tested statistically at
a larger significance level.''
Since there are standard algorithms for constructing junction trees from UIGs \citep[Chapter 4, Section 4]{cdls}, the construction of separation trees reduces to the construction of
UIGs. In this sense, Theorem \ref{thm2} enables us to exploit various techniques for learning UIGs to serve
our purpose. More suggested methods for learning UIGs from data, in addition to the above mentioned techniques, can be found in \citep{mxg}.
\begin{example}
To construct an $m$-separation tree for MVR CG $G$ in Figure \ref{Fig:mvr1}(a), at first an undirected independence graph
is constructed by starting with a complete graph and removing an edge $(u, v)$ if $u \perp\!\!\!\perp v | V\setminus\{u,v\}$. An undirected graph
obtained in this way is the augmented graph of MVR CG $G$. In fact, we only need to construct an undirected independence
graph which may have extra edges added to the augmented graph. Next, triangulate the undirected graph and finally obtain
the $m$-separation tree, as shown in Figure \ref{Fig:mvr1}(b) and Figure \ref{Fig:tree1}, respectively.
\end{example}
\subsection{Constructing an \textit{m}-Separation Tree from Domain Knowledge or from Observed Data Patterns}\label{subsec2}
Algorithm 2 of \citep{xie} proposes an algorithm for constructing a $d$-separation tree $T$ from domain knowledge or from observed
data patterns such that a correct skeleton can be constructed by combining subgraphs for nodes of $T$. In this subsection, we propose an approach for constructing an $m$-separation tree from domain knowledge or from
observed data patterns without conditional independence tests. Domain knowledge of variable dependencies can be represented as a collection of variable
sets $C = \{C_1,\dots , C_H \}$, in which variables contained in the same set may associate with each other directly but variables
contained in different sets associate with each other through other variables. This means that two variables that are not
contained in the same set are independent conditionally on all other variables. On the other hand, in an application study, observed data may have a collection of different observed patterns, $C = \{C_1,\dots , C_H \}$, where $C_h$ is the set of observed variables for the $h$th group of individuals. In both cases, the condition to make our algorithms correct for structural learning from a
collection $C$ is that $C$ must contain sufficient data such that parameters of the underlying MVR CG are estimable.
For a DAG, parameters are estimable if, for each variable $u$, there is an observed data pattern $C_h$ in $C$ that contains
both $u$ and its parent set. Thus a collection $C$ of observed patterns has sufficient data for correct structural learning
if there is a pattern $C_h$ in $C$ for each $u$ such that $C_h$ contains both $u$ and its parent set in the underlying DAG. Also, domain knowledge is legitimate if, for each variable $u$, there is a hyperedge $C_h$ in $C$ that contains both $u$ and its parent set \citep{xie}. However, these conditions are not valid in the case of MVR chain graphs. In fact, for MVR CGs domain knowledge is legitimate if for each connected component $\tau$, there is a hyperedge $C_h$ in $C$ that contains both $\tau$ and its parent set $pa_G(\tau)$. Also, a collection $C$ of observed patterns has sufficient data for correct structural learning
if there is a pattern $C_h$ in $C$ for each connected component $\tau$ such that $C_h$ contains both $\tau$ and its parent set $pa_G(\tau)$ in the underlying MVR CG.
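The condition for MVR CGs stated above amounts to a simple check. In the sketch below, `components` and `parents` are hypothetical dictionaries mapping each chain component to its vertex set and its parent set, respectively; the function name is our own.

```python
def legitimate_for_mvr(hyperedges, components, parents):
    """Check the MVR CG condition above: for every connected component tau,
    some hyperedge C_h must contain tau together with its parent set."""
    return all(any(components[t] | parents[t] <= set(C_h) for C_h in hyperedges)
               for t in components)
```

For the gene--disease example, the single hyperedge $\{$Gene1, Gene2, Disease1, Disease2$\}$ is legitimate, while the pair $\{$Gene1, Disease1$\}$, $\{$Gene2, Disease2$\}$ is not, since no hyperedge covers the component $\{$Disease1, Disease2$\}$ with its parents.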
\begin{algorithm}
\caption{Construct an $m$-separation tree from a hypergraph}\label{hypergraph}
\SetAlgoLined
\KwIn{a hypergraph $C = \{C_1, \dots, C_H \}$, where each hyperedge $C_h$ is a variable set such that for each connected component $\tau$, there is a hyperedge $C_h$ in $C$ that contains both $\tau$ and its parent set $pa_G(\tau)$.}
\KwOut{$T$, which is an $m$-separation tree for the hypergraph $C$.}
For each hyperedge $C_h$, construct a complete undirected graph $\bar{G}_h$ with the edge set $\bar{E}_h=\{(u,v) \mid u,v\in C_h\}=C_h\times C_h$\;
Construct the entire undirected graph $\bar{G}_V=(V,\bar{E})$, where $\bar{E}=\bar{E}_1\cup\dots\cup \bar{E}_H$\;
Construct a junction tree $T$ by triangulating $\bar{G}_V$\;
\end{algorithm}
The correctness of Algorithm \ref{hypergraph} is proven in Appendix B. Note that we do not need any conditional independence
test in Algorithm \ref{hypergraph} to construct an $m$-separation tree. In this algorithm, we can use the algorithm proposed in \citep{bbhp} to construct a minimal triangulated graph. Figure \ref{Fig:hypergraph} illustrates Algorithm \ref{hypergraph}.
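Lines 1--3 of Algorithm \ref{hypergraph} can be sketched as follows. The greedy min-degree fill-in below stands in for any sub-optimal triangulation heuristic and is not the method of \citep{bbhp}; the junction tree itself would then be assembled from the cliques of the filled graph by standard machinery.

```python
from itertools import combinations

def hyperedges_to_graph(hyperedges):
    """Lines 1-2 of Algorithm 1: the union of one complete graph per hyperedge."""
    adj = {}
    for C_h in hyperedges:
        for v in sorted(C_h):
            adj.setdefault(v, set())
        for u, v in combinations(sorted(C_h), 2):
            adj[u].add(v)
            adj[v].add(u)
    return adj

def triangulate(adj):
    """Greedy min-degree fill-in: repeatedly eliminate a minimum-degree vertex
    and complete its neighbourhood.  Sub-optimal, but optimal triangulation
    is NP-hard.  Returns the filled (triangulated) graph."""
    work = {v: set(ns) for v, ns in adj.items()}
    filled = {v: set(ns) for v, ns in adj.items()}
    while work:
        v = min(work, key=lambda x: len(work[x]))
        nbrs = work.pop(v)
        for a, b in combinations(sorted(nbrs), 2):  # complete the neighbourhood
            work[a].add(b); work[b].add(a)
            filled[a].add(b); filled[b].add(a)
        for n in nbrs:
            work[n].discard(v)
    return filled
```

Running the sketch on the four hyperedges $\{a,b\},\{b,c\},\{c,d\},\{a,d\}$ produces the 4-cycle and then adds one chord, yielding a triangulated graph.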
\begin{figure}[ht]
\centering
\includegraphics[scale=.45]{images/alg2.png}
\caption{Construction of the $m$-separation tree. (a) An MVR CG. (b) Domain knowledge of associations. (c) The undirected graph and triangulation. (d) The $m$-separation tree $T$.} \label{Fig:hypergraph}
\end{figure}
Guaranteeing the presence of both $\tau$ and its parent set $pa(\tau)$ in at least one hyperedge, as required in Algorithm \ref{hypergraph}, is a strong requirement, which may prevent the use of domain knowledge as a practical source of information for constructing MVR chain graphs. In addition, we remark that answering the question ``how can one obtain this information?'' is beyond the scope of this paper. The two examples that follow show that restricting the hyperedge contents in two natural ways leads to errors.
The example illustrated in Figure \ref{Fig:counterex} shows that, if for each variable $u$ there is a hyperedge $C_h$ in $C$ that contains both $u$ and its parent set, we cannot guarantee the correctness of our algorithm. Note that vertices $a$ and $d$ are separated in the tree $T$ of Figure \ref{Fig:counterex} part (d) by removing vertex $b$, but $a$ and $d$ are not $m$-separated given $b$, as can be verified using Figure \ref{Fig:counterex} part (a).
\begin{figure}[ht]
\centering
\includegraphics[scale=.5]{images/counterex.png}
\caption{Insufficiency of having a hypergraph that contains both $u$ and its parent set for every $u\in V$. (a) An MVR CG. (b) Domain knowledge of associations. (c) The undirected graph constructed by union of complete graphs corresponding to each hyperedge, which is also a triangulated graph. (d) The junction tree $T$. (e) Local skeleton for every node of $T$. (f) The global skeleton and all $v$-structures.} \label{Fig:counterex}
\end{figure}
The example illustrated in Figure \ref{Fig:jvmethod} shows that, if for each variable $u$ there is a hyperedge $C_h$ in $C$ that contains both $u$ and its boundary set, Algorithm \ref{hypergraph} does not necessarily give an $m$-separation tree because, for example, $S=\{a,b\}$ separates $c$ and $d$ in tree $T$ of Figure \ref{Fig:jvmethod} part (d), but $S$ does not $m$-separate $c$ and $d$ in the MVR CG $G$ in Figure \ref{Fig:jvmethod} part (a).
\begin{figure}[ht]
\centering
\includegraphics[scale=.5]{images/jvmethod.png}
\caption{Insufficiency of having a hypergraph that contains both $u$ and its boundary set for every $u\in V$. (a) An MVR CG. (b) Domain knowledge of associations. (c) The undirected graph constructed by union of complete graphs corresponding to each hyperedge, which is also a triangulated graph. (d) The junction tree $T$, which is not an $m$-separation tree.} \label{Fig:jvmethod}
\end{figure}
\section{Decomposition of Structural Learning}\label{main-alg}
Applying the following theorem to structural learning, we can split the problem of searching for $m$-separators and building the skeleton of a CG into smaller problems, one for each node of an $m$-separation tree $T$.
\begin{theorem}\label{thm1}
Let $T$ be an $m$-separation tree for CG $G$. Vertices $u$ and $v$ are $m$-separated by $S\subseteq V$ in $G$ if and
only if (i) $u$ and $v$ are not contained together in any node $C$ of $T$ or (ii) there exists a node $C$ that contains both $u$
and $v$ such that a subset $S'$ of $C$ $m$-separates $u$ and $v$.
\end{theorem}
According to Theorem \ref{thm1}, a problem of searching for an $m$-separator $S$ of $u$ and $v$ in all possible subsets of $V$ is
localized to all possible subsets of nodes in an $m$-separation tree that contain $u$ and $v$. For a given $m$-separation tree $T$
with the node set $C = \{C_1,\dots , C_H \}$, we can recover the skeleton and all $v$-structures for a CG as follows. First
we construct a local skeleton for every node $C_h$ of $T$, which is constructed by starting with a complete undirected
subgraph and removing an undirected edge $(u, v)$ if there is a subset $S$ of $C_h$ such that $u$ and $v$ are independent
conditional on $S$. Then, in order to construct the global skeleton, we combine all these local skeletons together and remove
edges that are present in some local skeletons but absent in others. Then we determine a $v$-structure whenever
two non-adjacent vertices $u$ and $v$ have a common neighbor in the global skeleton that is not contained
in the $m$-separator of $u$ and $v$. Finally, we orient further undirected edges as long as doing so creates neither a
partially directed cycle nor a new $v$-structure (see, for example, Figure \ref{Fig:alg1}). This process is formally described in the following algorithm:
\begin{algorithm}[ht]
\caption{A recovery algorithm for MVR chain graphs}\label{alg1}
\SetAlgoLined
\KwIn{a probability distribution $p$ faithful to an
unknown MVR CG $G$.}
\KwOut{the pattern of MVR CG $G$.}
Construct an $m$-separation tree $T$ with a node set $C = \{C_1, \dots, C_H \}$ as discussed in Section \ref{construct-trees}\;
Set $S=\emptyset$\;
\For{$h\gets 1$ \KwTo $H$}{
Start from a complete undirected graph $\bar{G}_h$ with vertex set $C_h$\;
\For{\textrm{each vertex pair $\{u,v\}\subseteq C_h$ }}{\If{$\exists S_{uv}\subseteq C_h \textrm{ such that } u \perp\!\!\!\perp v | S_{uv}$}{
Delete the edge $(u,v)$ in $\bar{G}_h$\;
Add $S_{uv}$ to $S$;
}
}
}
Initialize the edge set $\bar{E}_V$ of $\bar{G}_V$ as the union of all edge sets of $\bar{G}_h, h=1,\dots, H$\;
\For{\textrm{each vertex pair $\{u,v\}$ contained in more than one tree node and $(u,v)\in \bar{E}_V$}}{
\If{$\exists C_h \textrm{ such that } \{u,v\}\subseteq C_h \textrm{ and } \{u,v\}\not\in \bar{E}_h$}{
Delete the edge $(u,v)$ in $\bar{G}_V$\;
}
}
\For{\textrm{each $m$-separator $S_{uv}$ in the list $S$}}{\If{\textrm{$u\mathrel{\circ\!-} w\mathrel{-\!\circ} v$ appears in the global skeleton
and $w$ is not in $S_{uv}$}}{
\tcc{$u\mathrel{\circ\!-} w$ means $u\gets w$ or $u-w$. Also, $w\mathrel{-\!\circ} v$ means $w\to v$ or $w-v.$}
Determine a $v$-structure $u\mathrel{\circ\!\!\!\rightarrow} w\mathrel{\leftarrow\!\!\!\circ} v$\;
}
}
\end{algorithm}
\begin{figure}[ht]
\centering
\includegraphics[scale=.4]{images/alg1b.png}
\caption{(a) Local skeletons for every node of the $m$-separation tree in Figure \ref{Fig:tree1}. (b) The global skeleton and all $v$-structures.} \label{Fig:alg1}
\end{figure}
The following algorithm returns an MVR chain graph that contains exactly the minimum set
of bidirected edges for its Markov equivalence
class. For the correctness of lines 2-7 in Algorithm \ref{alg2}, see \citep{sp}.
\begin{algorithm}
\caption{A recovery algorithm for MVR chain graphs with minimum set of bidirected edges for its equivalence
class}\label{alg2}
\SetAlgoLined
\KwIn{a probability distribution $p$ faithful to an
unknown MVR CG $G$.}
\KwOut{an MVR CG $G'$ s.t.
$G$ and $G'$ are Markov equivalent and $G'$ has exactly the minimum set of bidirected edges for its equivalence
class.}
Call Algorithm \ref{alg1} to construct $G'$, which is the equivalence class of MVR CGs for $G$\;
Apply rules 1-3 in Figure \ref{Fig:rules} while possible\;
\tcc{After this line, the learned graph is the \textit{essential graph} of MVR CG $G$, i.e., it
has the same skeleton as $G$ and contains all and only the arrowheads that
are shared by all MVR CGs in the Markov equivalence class of $G$ \citep{essentialmvrcgs}.}
Let $G'_u$ be the subgraph of $G'$ containing only the
nodes and the undirected edges in $G'$\;
Let $T$ be the junction tree of $G'_u$\;
\tcc{If $G'_u$ is disconnected, the cliques belonging to different connected components can be linked with empty separators, as described in \cite[Theorem 4.8]{Golumbic}.}
Order the cliques $C_1,\cdots , C_n$ of $G'_u$ s.t. $C_1$ is the root of $T$ and if $C_i$ is closer to the root than $C_j$ in $T$ then $C_i < C_j$\;
Order the nodes such that if $A\in C_i$, $B\in C_j$, and $C_i < C_j$ then $A < B$\;
Orient the undirected edges in $G'$ according to the ordering obtained in line 6.
\end{algorithm}
\begin{figure}[ht]
\centering
\includegraphics[scale=.4]{images/rules.png}
\caption{The Rules \citep{sp}} \label{Fig:rules}
\end{figure}
According to Theorem \ref{thm1}, we can prove that the global skeleton and all $v$-structures obtained by applying the decomposition in Algorithm \ref{alg1} are correct, that is, they are the same as those obtained from the joint distribution of $V$; see
Appendix A for the details of the proof. Note that separators in an $m$-separation tree may not be complete in the augmented graph.
Thus the decomposition is weaker than the decomposition usually defined for parameter estimation \citep{cdls,l}.
\section{Complexity Analysis and Advantages}\label{complexity}
In this section, we start by comparing our algorithm with the main algorithm in \citep{xie}, which is designed
specifically for structural learning when the underlying graph structure is a DAG. We make
this choice so that both algorithms can take the same separation tree
as input and hence are directly comparable.
In a DAG, all chain components are singletons. Therefore, when the underlying graph structure is a DAG, requiring a hyperedge that contains both $\tau$ and its parent set for every chain component $\tau$ is equivalent to requiring a hyperedge that contains both $u$ and its parent set for every $u\in V$. Hence our algorithm has the same effect and the same complexity as the main algorithm in \citep{xie}.
The same advantages mentioned by \citep{xie} for their BN structural learning algorithm hold for our algorithm when applied to MVR CGs. For the reader's convenience, we list them here.
First, by using the $m$-separation tree, independence tests are performed only conditionally on smaller sets
contained in a node of the $m$-separation tree rather than on the full set of all other variables. Thus our algorithm has
higher power for statistical tests.
Second, the computational complexity can be reduced. This complexity analysis focuses only on the number of
conditional independence tests for constructing the equivalence class. Decomposition of graphs is a computationally
simple task compared to the task of testing conditional independence for a large number of triples of sets of variables. The triangulation of an undirected graph is used in our
algorithms to construct an $m$-separation tree from an undirected independence graph. Although the problem of optimally
triangulating an undirected graph is NP-hard, sub-optimal triangulation methods \citep{bbhp} may be used provided
that the obtained tree does not contain too large nodes to test conditional independencies. Two of the best known
algorithms are lexicographic search and maximum cardinality search, and their complexities are
$O(|V||E|)$ and $O(|V|+ |E|)$, respectively \citep{bbhp}. Thus in our algorithms, the conditional independence tests dominate the algorithmic
complexity.
The complexity of Algorithm \ref{alg1} is $O(Hm^22^m)$, as claimed in \citep[Section 6]{xie}, where $H$ is the number of hyperedges (usually $H \ll |V|$) and $m=\max_h|C_h|$, with $|C_h|$ denoting the number of variables in $C_h$ ($m$ is usually much less than $|V|$).
\section{Evaluation}\label{evaluation}
In this section, we evaluate the performance of our algorithms in various setups
using simulated and synthetic data sets. We first compare the performance of our algorithm with that of the PC-like
learning algorithm \citep{sp} by running them
on randomly generated MVR chain graphs. (A brief description of the PC-like algorithm is provided at the beginning of Section \ref{discussion}.) We then compare our method with the PC-like algorithm on several discrete Bayesian networks, namely \href{http://www.bnlearn.com/bnrepository/}{ASIA, INSURANCE, ALARM, and HAILFINDER}, that have
been widely used in evaluating the performance of structural learning algorithms. Empirical simulations show that our algorithm achieves
competitive results with the PC-like learning algorithm; in particular, in the Gaussian case the decomposition-based algorithm outperforms the PC-like algorithm (except in running time).
Algorithms \ref{alg1}, \ref{alg2}, and the PC-like algorithm have been implemented in the R language. All the results reported here are
based on our R implementation \citep{jv3}.
\subsection{Performance Evaluation on Random MVR Chain Graphs (Gaussian case)}
To investigate the performance of the decomposition-based learning method, we use the same approach that \citet{mxg} used to
evaluate the performance of the LCD algorithm on LWF chain graphs. We run our algorithms and the PC-like algorithm
on randomly generated MVR chain graphs, compare the results, and report summary error measures in all cases.
\subsubsection{Data Generation Procedure}
First we explain the way in which the random MVR chain graphs and random samples are generated.
Given a vertex set $V$, let $p = |V|$ and let $N$ denote the average degree of edges (bidirected, outgoing, and incoming)
per vertex. We generate a random MVR chain graph on $V$ as
follows:
\begin{itemize}
\item Choose one element, say $k$, of the vector $c=(0.1, 0.2, 0.3, 0.4, 0.5)$ randomly\footnote{In the case of $p=40,50$ we use $c=(0.1,0.2)$.}.
\item Use the randDAG function from the \href{https://cran.r-project.org/web/packages/pcalg/index.html}{pcalg} R package to generate an unweighted random Erd\H{o}s--R\'enyi graph, which is a DAG with $p+(k\times p)$ nodes and $N$ expected neighbours per node.
\item Use the AG function from the \href{https://cran.r-project.org/web/packages/ggm/index.html}{ggm} R package to marginalize out $k\times p$ nodes, obtaining a random MVR chain graph with $p$ nodes and $N$ expected neighbours per node. If the obtained graph is not an MVR chain graph, repeat this procedure until an MVR CG is obtained.
\end{itemize}
The rnorm.cg function from the \href{http://www2.uaem.mx/r-mirror/web/packages/lcd/index.html}{lcd} R package was used to generate the desired number of normal random samples from the canonical DAG \citep{rs} corresponding to the MVR chain graph obtained above. Notice that faithfulness is not necessarily guaranteed by this sampling procedure \citep{mxg}.
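For intuition, the DAG-generation step above can be sketched as follows. This is a simplified Python analogue of the randDAG call; the actual experiments use the pcalg and ggm R packages, and the marginalization step is omitted here:

```python
import random

def random_dag(n_nodes, expected_neighbors, seed=None):
    """Generate a random Erdos-Renyi DAG: each pair (i, j) with i < j is
    joined independently with probability chosen so that the expected
    number of neighbors per node equals `expected_neighbors`.  Orienting
    every edge from lower to higher index guarantees acyclicity."""
    rng = random.Random(seed)
    p_edge = min(1.0, expected_neighbors / (n_nodes - 1))
    return [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
            if rng.random() < p_edge]
```
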
\subsubsection{Experimental Results for Random MVR Chain Graphs (Gaussian case)}
We evaluate the performance of the decomposition-based and PC-like algorithms in terms of five measurements: (a) the true positive
rate (TPR)\footnote{Also known as sensitivity, recall, and hit rate.}, (b) the false positive rate (FPR)\footnote{Also known as fall-out.}, (c) accuracy (ACC) for the skeleton, (d) the structural Hamming distance (SHD)\footnote{This is the metric described in \citep{Tsamardinos2006} to compare the
structure of the learned and the original graphs.}, and (e) run-time for the pattern recovery algorithms. In short, $TPR=\frac{\textrm{true positive } (TP)}{\textrm{the number of real positive cases in the data } (Pos)}$ is the ratio of the number of correctly identified edges to the total number of edges, $FPR=\frac{\textrm{false positive }(FP)}{\textrm{the number of real negative cases in the data }(Neg)}$ is the ratio of the number of incorrectly identified edges to the total number of gaps, $ACC=\frac{\textrm{true positive }(TP) +\textrm{ true negative }(TN)}{Pos+Neg}$, and
$SHD$ is the number of legitimate operations needed to change the current pattern into the true one,
where the legitimate operations are: (a) adding or deleting an edge and (b) inserting, deleting or reversing an edge
orientation. In principle, a large TPR and ACC together with a small FPR and SHD indicate good performance.
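The skeleton measures TPR, FPR, and ACC can be computed directly from the edge sets; a small Python sketch is given below (our experiments use the R implementation \citep{jv3}):

```python
def skeleton_error_measures(true_edges, learned_edges, n_nodes):
    """Compute TPR, FPR, and ACC for a learned skeleton, treating edges
    as unordered pairs.  `true_edges` and `learned_edges` are iterables
    of vertex pairs; `n_nodes` is |V|."""
    true_set = {frozenset(e) for e in true_edges}
    learned_set = {frozenset(e) for e in learned_edges}
    pos = len(true_set)                           # real positives: edges
    neg = n_nodes * (n_nodes - 1) // 2 - pos      # real negatives: gaps
    tp = len(true_set & learned_set)              # correctly identified edges
    fp = len(learned_set - true_set)              # incorrectly identified edges
    tn = neg - fp                                 # correctly identified gaps
    tpr = tp / pos if pos else 1.0
    fpr = fp / neg if neg else 0.0
    acc = (tp + tn) / (pos + neg)
    return tpr, fpr, acc
```
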
In our simulation, we change three parameters $p$ (the number of vertices), $n$ (sample size) and
$N$ (expected number of adjacent vertices) as follows:
\begin{itemize}
\item $p\in\{10, 20, 30, 40, 50\}$,
\item $n\in\{300, 1000, 3000, 10000\}$, and
\item $N\in\{2,3,5,8,10\}$.
\end{itemize}
For each $(p,N)$ combination, we first generate 25 random MVR chain graphs. We then generate a
random Gaussian distribution based on each corresponding canonical DAG, draw an independent and identically distributed
(i.i.d.) sample of size $n$ from this distribution for each possible $n$, and finally remove those columns (if any) that correspond to the hidden variables. For each sample, three different
significance levels $\alpha=0.05/0.01/0.005$ are used to perform the hypothesis tests. For the decomposition-based algorithm we consider two different versions: the first version uses Algorithm \ref{alg1} and the three rules in Algorithm \ref{alg2}, while the second version uses both Algorithms \ref{alg1} and \ref{alg2}. Since the graph learned by the first version may contain some undirected edges, we call it the \textit{essential recovery algorithm}. Removing all directed and bidirected edges from the learned graph results in a chordal graph \citep{sp}, and the learned graph has exactly the (unique) minimum set of bidirected edges for its Markov equivalence class \citep{sp}. The second version of the decomposition-based algorithm returns an MVR chain graph that has exactly the
minimum set of bidirected edges for its equivalence
class. A similar approach is used for the PC-like algorithm. We then
compare the results to assess the performance of the decomposition-based algorithm against the PC-like algorithm. The complete plots of the error measures and running times can be found in the \href{https://www.dropbox.com/sh/iynnlwyu8il7m3v/AACk8SyIEn7s-W9NRlLnz0DDa?dl=0}{supplementary document} \citep{jv3}.
From the plots, we infer that: (a) both
algorithms yield better results on sparse graphs $(N = 2,3)$ than on dense graphs $(N = 5,8,10)$; see, for example, Figures \ref{fig:tprfpracc1} and \ref{fig:shd1}; (b) for both algorithms, the TPR and ACC typically increase with sample size (Figure \ref{fig:tprfpracc1}); (c) for both algorithms, the SHD typically decreases with sample size for sparse graphs $(N = 2,3)$; for $N=5$ the SHD decreases with sample size for the decomposition-based algorithm, while for the PC-like algorithm it shows no clear dependence on the sample size; for dense graphs $(N = 8,10)$ the SHD of the PC-like algorithm typically increases with sample size, while that of the decomposition-based algorithm again shows no clear dependence on the sample size (Figure \ref{fig:shd1}); (d) a large significance level $(\alpha=0.05)$ typically yields large
TPR, FPR, and SHD (Figures \ref{fig:tprfpracc1} and \ref{fig:shd1}); (e) in almost all cases, the decomposition-based algorithm performs better than the PC-like algorithm on all error measures, i.e., TPR, FPR, ACC, and SHD (Figures \ref{fig:tprfpracc1} and \ref{fig:shd1}); (f) in most cases, error measures based on $\alpha=0.01$ and $\alpha=0.005$ are very close (Figures \ref{fig:tprfpracc1} and \ref{fig:shd1}). Overall, our empirical results suggest that, in order to obtain better performance, one can choose a small significance level for the individual tests (say $\alpha=0.005$ or 0.01) together with a large sample size (say $n=3000$ or 10000). However, the optimal value for a desired overall error rate may depend on the sample size, the significance level, and the sparsity of the underlying graph.
\begin{figure}
\centering
\includegraphics[scale=.25,page=9]{images/tpr_lcd.pdf}
\includegraphics[scale=.25,page=9]{images/fpr_lcd.pdf}
\includegraphics[scale=.25,page=9]{images/acc_lcd.pdf}
\includegraphics[scale=.25,page=9]{images/tpr_pc.pdf}
\includegraphics[scale=.25,page=9]{images/fpr_pc.pdf}
\includegraphics[scale=.25,page=9]{images/acc_pc.pdf}
\includegraphics[scale=.25,page=12]{images/tpr_lcd.pdf}
\includegraphics[scale=.25,page=12]{images/fpr_lcd.pdf}
\includegraphics[scale=.25,page=12]{images/acc_lcd.pdf}
\includegraphics[scale=.25,page=12]{images/tpr_pc.pdf}
\includegraphics[scale=.25,page=12]{images/fpr_pc.pdf}
\includegraphics[scale=.25,page=12]{images/acc_pc.pdf}
\caption{Error measures of the decomposition-based and PC-like algorithms for randomly generated Gaussian chain graph models:
averages over 25 repetitions with 30 variables. The first two rows correspond to $N = 2$ and the last two rows to $N = 8$. The three columns give the three error measures TPR, FPR and ACC, respectively. In each plot, the solid (blue)/dashed (green)/dotted (red) lines correspond to significance
levels $\alpha=0.05/0.01/0.005$.}
\label{fig:tprfpracc1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.21,page=9]{images/shd_lcd.pdf}
\includegraphics[scale=.21,page=9]{images/shd_pc.pdf}
\includegraphics[scale=.21,page=9]{images/min_shd_lcd.pdf}
\includegraphics[scale=.21,page=9]{images/min_shd_pc.pdf}
\includegraphics[scale=.21,page=11]{images/shd_lcd.pdf}
\includegraphics[scale=.21,page=11]{images/shd_pc.pdf}
\includegraphics[scale=.21,page=11]{images/min_shd_lcd.pdf}
\includegraphics[scale=.21,page=11]{images/min_shd_pc.pdf}
\includegraphics[scale=.21,page=12]{images/shd_lcd.pdf}
\includegraphics[scale=.21,page=12]{images/shd_pc.pdf}
\includegraphics[scale=.21,page=12]{images/min_shd_lcd.pdf}
\includegraphics[scale=.21,page=12]{images/min_shd_pc.pdf}
\caption{Error measure SHD of the decomposition-based and PC-like algorithms for randomly generated Gaussian chain graph models:
averages over 25 repetitions with 30 variables. The first, second, and third rows correspond to $N=2$, $N=5$, and $N=8$, respectively. The first two columns correspond to the essential recovery and the last two columns to the minimum bidirected recovery. In each plot, the solid (blue)/dashed (green)/dotted (red) lines correspond to significance
levels $\alpha=0.05/0.01/0.005$.}
\label{fig:shd1}
\end{figure}
Considering average running times versus sample sizes (see, e.g., Figure \ref{fig:time1}), it can be
seen that: (a) the average run time increases with sample size; (b) the average run times based on $\alpha=0.01$ and $\alpha=0.005$ are very close and in all cases better than those for $\alpha=0.05$, with
$\alpha=0.005$ yielding a consistently (albeit slightly) lower average run time across all the settings in
the current simulation; (c) generally, the average run time of the PC-like algorithm is better than that of the decomposition-based algorithm. One possible explanation lies in the details of the implementations: the PC algorithm implementation in the pcalg R package is very well optimized, while we have not concentrated on optimizing our implementation of the LCD algorithm, so the run-time comparison may be unfair to the new algorithm. For future work, one may consider both optimizing the LCD implementation and instrumenting the code to count characteristic operations, thereby reducing the dependence of the run-time comparison on program optimization. The simulations were run on an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz. An R language package that implements our algorithms is available in the \href{https://www.dropbox.com/sh/iynnlwyu8il7m3v/AACk8SyIEn7s-W9NRlLnz0DDa?dl=0}{supplementary document} \citep{jv3}.
\begin{figure}
\centering
\includegraphics[scale=.21,page=9]{images/time_lcd.pdf}
\includegraphics[scale=.21,page=9]{images/time_pc.pdf}
\includegraphics[scale=.21,page=9]{images/min_time_lcd.pdf}
\includegraphics[scale=.21,page=9]{images/min_time_pc.pdf}
\caption{Running times of the decomposition-based and PC-like algorithms for randomly generated Gaussian chain graph models:
averages over 25 repetitions with 30 variables and $N = 2$. The first two columns correspond to the essential recovery algorithm and the last two columns to the minimum bidirected recovery. In each plot, the solid (blue)/dashed (green)/dotted (red) lines correspond to significance
levels $\alpha=0.05/0.01/0.005$.}
\label{fig:time1}
\end{figure}
It is worth noting that, since our implementation of the decomposition-based algorithms is based on the LCD R package, the normal random samples generated from a given MVR chain graph are not guaranteed to be faithful to it. One can therefore expect better performance if only faithful probability distributions are considered in the experiments. Also, the LCD R package uses the ${\chi}^2$ test, which is an asymptotic test for $G^2$ \citep{mxg}. Again, one can expect better results by replacing the asymptotic test used in the LCD R package with an exact test; however, there is a trade-off between accuracy and computational time \citep{mxg}.
\subsection{Performance on Discrete Bayesian Networks}
Bayesian networks are special cases of MVR chain graphs, so it is of
interest to see whether the decomposition-based algorithms still work well when the data are actually generated
from a Bayesian network. For this purpose, in this subsection we perform simulation studies for four well-known Bayesian networks from the \href{http://www.bnlearn.com/bnrepository/}{Bayesian Network Repository} (Figures \ref{fig:asia}, \ref{fig:insurance}, \ref{fig:alarm}, and \ref{fig:hailfinder}):
\begin{itemize}
\item ASIA \citep{asia}: with 8 nodes, 8 edges, and 18 parameters, it describes the diagnosis of a patient at a chest clinic who may have just come back from a trip to Asia and may be showing dyspnea. Standard learning algorithms are not able to recover the true structure of the network because of the presence of a functional node (\textit{either}, representing a logical OR)\footnote{\href{https://cran.r-project.org/web/packages/bnlearn/bnlearn.pdf}{Package 'bnlearn'}}.
\item INSURANCE \citep{insurance}: with 27 nodes, 52 edges, and 984 parameters, it evaluates car insurance risks.
\item ALARM \citep{alarm}: with 37 nodes, 46 edges, and 509 parameters, it was designed by medical experts to provide an alarm message system for intensive care unit patients based on the outputs of a number of vital-sign monitoring devices.
\item HAILFINDER \citep{Hailfinder}: with 56 nodes, 66 edges, and 2656 parameters, it was designed to forecast severe summer hail in northeastern Colorado.
\end{itemize}
We compare the performance of our algorithms against the PC-like algorithm on these Bayesian networks for three different significance levels $(\alpha=0.05/0.01/0.005)$.
The results of all learning methods are summarized in Tables \ref{asia}, \ref{insurance}, \ref{alarm}, and \ref{hailfinder}.
For the decomposition-based methods, all three error measures TPR, FPR, and SHD are similar to those of the PC-like algorithm, but the results indicate that the decomposition-based method outperforms the PC-like algorithm as the size of the Bayesian network becomes larger, especially in terms of TPR and SHD.
\section{Discussion and Conclusion}\label{discussion}
In this paper, we presented a computationally feasible algorithm for learning the structure of MVR chain graphs via decomposition, and we compared its performance with that of the PC-like algorithm proposed by \citet{sp} in the Gaussian and discrete cases. The PC-like algorithm is a constraint-based algorithm that learns the structure of the underlying MVR chain graph in four steps: (a) determining the skeleton: the resulting undirected graph in this phase contains an undirected edge $u-v$ iff there is no set $S\subseteq V\setminus\{u,v\}$ such that $u\!\perp\!\!\!\perp v|S$; (b) determining the v-structures (unshielded colliders); (c) orienting some of the undirected/directed edges into directed/bidirected edges according to a set of rules applied iteratively; (d) transforming the graph resulting from the previous step into an MVR CG. The essential recovery graph obtained after step (c) contains all directed and bidirected edges that are present in every MVR CG of the same Markov equivalence class. The decomposition-based algorithm is also a constraint-based algorithm; it follows a divide-and-conquer approach and consists of four steps: (a) determining the skeleton by a divide-and-conquer approach; (b) determining the v-structures (unshielded colliders) with a localized search for $m$-separators; and then continuing with steps (c) and (d) exactly as in the PC-like algorithm. The correctness of both algorithms relies on the assumption that the probability distribution $p$ is faithful
to some MVR CG.
As with the PC-like algorithm, unless the probability distribution $p$ of the data is faithful to some MVR CG, the learned CG cannot be guaranteed to factorize $p$ properly. Empirical simulations in the Gaussian case show that both algorithms yield good results when the underlying graph is sparse. The decomposition-based algorithm achieves
competitive results with the PC-like learning algorithm in both the Gaussian and discrete cases;
in fact, the decomposition-based method usually outperforms the PC-like algorithm on all four error measures, i.e., TPR, FPR, ACC, and SHD.
These simulation results confirm that our method is reliable both when latent variables are present (and the underlying graph is an MVR CG) and when there are no such variables (and the underlying graph is a DAG). The algorithm works reliably when latent variables are present and fails only when selection bias variables are present. Our algorithm thus allows relaxing half of the causal sufficiency assumption, because only selection bias needs to be represented explicitly. Since our implementation of the decomposition-based algorithm is based on the LCD R package, for a fixed
number of samples one can expect better performance if the asymptotic test used in the LCD R package is replaced with an exact test; however, there is a trade-off between accuracy and computational time. One can also expect better results if only faithful probability distributions are considered in the experiments.
The natural continuation of the work presented here would be to develop a learning algorithm with weaker assumptions than the one presented, for example a learning
algorithm that only assumes that the probability distribution satisfies the composition property. It should be mentioned that \citet{psn} developed an algorithm for learning LWF CGs under the composition property; however, \citet{Addendum} proved that the same technique cannot be used for MVR chain graphs.
We also believe that our approach is extendable to the structural learning of AMP chain graphs \citep{amp}; developing a decomposition-based learning algorithm for AMP chain graphs under the faithfulness assumption is another natural continuation of this work.
\begin{table}\centering
\begin{tabular}{c|c|c|c|c}
& TPR & FPR & ACC & SHD\\
\midrule
& 0.625& 0.2& 0.75& 9\\
Decomposition-Based essential recovery algorithm & 0.625& 0.2& 0.75& 9\\
&0.625& 0.2& 0.75& 9\\
\midrule
&0.625& 0& 0.893& 6\\
PC-Like essential recovery algorithm &0.625& 0& 0.893& 6 \\
&0.625& 0& 0.893& 6\\
\midrule
&0.625& 0.2& 0.75& 8\\
Decomposition-Based Algorithm with Minimum bidirected Edges & 0.625& 0.2& 0.75& 7 \\
&0.625& 0.2& 0.75& 8 \\
\midrule
&0.625& 0& 0.893& 4\\
PC-Like Algorithm with Minimum bidirected Edges &0.625& 0& 0.893& 4\\
&0.625& 0& 0.893& 4\\
\bottomrule
\end{tabular}
\caption{Results for discrete samples from the ASIA network. Each row corresponds to the significance
level: $\alpha=0.05/0.01/0.005$ respectively.}\label{asia}
\end{table}
\begin{table}\centering
\begin{tabular}{c|c|c|c|c}
& TPR & FPR & ACC & SHD\\
\midrule
&0.635& 0.0167& 0.932& 31\\
Decomposition-Based essential recovery algorithm & 0.635& 0.020& 0.926& 32\\
&0.654& 0.0134& 0.937& 28\\
\midrule
&0.558& 0& 0.934& 37\\
PC-Like essential recovery algorithm &0.519& 0& 0.929& 37\\
&0.519& 0& 0.929& 37\\
\midrule
&0.635& 0.0167& 0.932& 30\\
Decomposition-Based Algorithm with Minimum bidirected Edges & 0.635& 0.020& 0.926& 32 \\
&0.654& 0.0134& 0.937& 27 \\
\midrule
&0.558& 0& 0.934& 27\\
PC-Like Algorithm with Minimum bidirected Edges &0.519& 0& 0.929& 29\\
&0.519& 0& 0.929& 29\\
\bottomrule
\end{tabular}
\caption{Results for discrete samples from the INSURANCE network. Each row corresponds to the significance
level: $\alpha=0.05/0.01/0.005$ respectively.}\label{insurance}
\end{table}
\begin{table}\centering
\begin{tabular}{c|c|c|c|c}
& TPR & FPR & ACC & SHD\\
\midrule
&0.783& 0.0194& 0.967&34\\
Decomposition-Based essential recovery algorithm &0.783& 0.0161& 0.967&32\\
&0.761& 0.021& 0.964& 36\\
\midrule
&0.457& 0& 0.962& 38\\
PC-Like essential recovery algorithm &0.435& 0& 0.961& 38\\
&0.413& 0& 0.959& 41\\
\midrule
&0.783& 0.0194& 0.967&30\\
Decomposition-Based Algorithm with Minimum bidirected Edges &0.783& 0.0161& 0.967&28 \\
&0.761& 0.021& 0.964& 35\\
\midrule
&0.457& 0& 0.962& 33\\
PC-Like Algorithm with Minimum bidirected Edges &0.435& 0& 0.961& 33\\
&0.413& 0& 0.959& 36\\
\bottomrule
\end{tabular}
\caption{Results for discrete samples from the ALARM network. Each row corresponds to the significance
level: $\alpha=0.05/0.01/0.005$ respectively.}\label{alarm}
\end{table}
\begin{table}\centering
\begin{tabular}{c|c|c|c|c}
& TPR & FPR & ACC & SHD\\
\midrule
&0.758& 0.003& 0.986& 26\\
Decomposition-Based essential recovery algorithm &0.742& 0.002& 0.987& 24\\
&0.757& 0.002& 0.988& 22\\
\midrule
&0.457& 0& 0.962& 38\\
PC-Like essential recovery algorithm &0.515& 0.0007& 0.979& 40\\
&0.515& 0.0007& 0.979& 40\\
\midrule
&0.758& 0.003& 0.986& 42\\
Decomposition-Based Algorithm with Minimum bidirected Edges &0.742& 0.002& 0.987&41 \\
&0.757& 0.002& 0.988& 24\\
\midrule
&0.457& 0& 0.962& 38\\
PC-Like Algorithm with Minimum bidirected Edges &0.515& 0.0007& 0.979& 38\\
&0.515& 0.0007& 0.979&39\\
\bottomrule
\end{tabular}
\caption{Results for discrete samples from the HAILFINDER network. Each row corresponds to the significance
level: $\alpha=0.05/0.01/0.005$ respectively.}\label{hailfinder}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=.25]{images/asia.pdf}
\caption{\href{http://www.bnlearn.com/bnrepository/}{ASIA (sometimes called LUNG CANCER or CHEST CLINIC)},
Number of nodes: 8,
Number of arcs: 8,
Number of parameters: 18,
Average Markov blanket size: 2.50,
Average degree: 2.00,
Maximum in-degree: 2.}
\label{fig:asia}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{images/insurance.pdf}
\caption{\href{http://www.bnlearn.com/bnrepository/}{INSURANCE},
Number of nodes: 27,
Number of arcs: 52,
Number of parameters: 984,
Average Markov blanket size: 5.19,
Average degree: 3.85,
Maximum in-degree: 3.}
\label{fig:insurance}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{images/alarm.pdf}
\caption{\href{http://www.bnlearn.com/bnrepository/}{ALARM},
Number of nodes: 37,
Number of arcs: 46,
Number of parameters: 509,
Average Markov blanket size: 3.51,
Average degree: 2.49,
Maximum in-degree: 4.}
\label{fig:alarm}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.9]{images/hailfinder.pdf}
\caption{\href{http://www.bnlearn.com/bnrepository/}{HAILFINDER},
Number of nodes: 56,
Number of arcs: 66,
Number of parameters: 2656,
Average Markov blanket size: 3.54,
Average degree: 2.36,
Maximum in-degree: 4.}
\label{fig:hailfinder}
\end{figure}
\section*{Appendix A. Proofs of Theoretical Results}
\begin{lemma}\label{lem1}
Let $\rho$ be a chain from $u$ to $v$, and let $W$ be the set of all vertices on $\rho$ ($W$ may or may not contain $u$ and $v$).
Suppose that the chain $\rho$ is blocked by a set $S$. If $W\subseteq S$, then $\rho$ is blocked by $W$ and by any set containing $W$.
\end{lemma}
\begin{proof}
The blocking of the chain $\rho$ depends only on those vertices between $u$ and $v$ that are contained in the $m$-separator.
Since $W$ contains all vertices on $\rho$, if $\rho$ is blocked by $S$ then it is also blocked by $S \cap W = W$. Since all colliders on $\rho$
are already activated conditionally on $W$, adding other vertices to the conditioning set cannot make any
new collider active on $\rho$. This implies that $\rho$ is blocked by any set containing $W$.
\end{proof}
\begin{lemma}\label{lem2}
Let $T$ be an $m$-separation tree for CG $G$, and $K$ be a separator of $T$ that separates $T$ into two
subtrees $T_1$ and $T_2$ with variable sets $V_1$ and $V_2$ respectively. Suppose that $\rho$ is a chain from $u$ to $v$ in $G$ where $u\in V_1\setminus K$ and $v\in V_2\setminus K$. Let $W$ denote the set of all vertices on $\rho$ ($W$ may or may not contain $u$ and $v$). Then the
chain $\rho$ is blocked by $W\cap K$ and by any set containing $W\cap K$.
\end{lemma}
\begin{proof}
Since $u\in V_1\setminus K$ and $v\in V_2\setminus K$, there is a subsequence of $\rho =(u,\dots,s,t,\dots,x,y,\dots,v)$ from $s$ (possibly $u$) to $y$ (possibly $v$) such that $s\in V_1\setminus K$, $y\in V_2\setminus K$, and all vertices from $t$ to $x$ are contained in $K$. Let $\rho'$ be the sub-chain of $\rho$ from $s$ to $y$ and let $W'$ be the set of vertices from $t$ to $x$, so $W'\subseteq K$. Since $s\in V_1\setminus K$ and $y\in V_2\setminus K$, it follows from the definition of an $m$-separation tree that $K$ $m$-separates $s$ and $y$ in $G$, i.e., $K$ blocks $\rho'$. By Lemma \ref{lem1}, $\rho'$ is blocked by $W'(\subseteq K)$ and by any set containing $W'$. Since $W'\subseteq (K\cap W)$, $\rho'$ is blocked by $K\cap W$ and by any set containing $K\cap W$. Thus $\rho(\supseteq \rho')$ is also blocked by them.
\end{proof}
\begin{remark}\label{rem1}
Javidian and Valtorta showed that if we find a separator over $S$ in $(G_{An(u\cup v)})^a$, then it is an $m$-separator in $G$. Conversely, if there exists an $m$-separator over $S$ in $G$, then there must exist a separator over $S$ in $(G_{An(u\cup v)})^a$, obtained by removing from it all nodes that are not in $An(u\cup v)$ \citep{jv2}.
\end{remark}
The observations in Remark \ref{rem1} yield the following results.
\begin{lemma}\label{lem3}
Let $u$ and $v$ be two non-adjacent vertices in MVR CG $G$, and let $\rho$ be a chain from $u$ to $v$. If $\rho$ is not contained in
$An(u\cup v)$, then $\rho$ is blocked by any subset S of $an(u\cup v)$.
\end{lemma}
\begin{proof}
Since $\rho \not\subseteq An(u\cup v)$, there is a subsequence of $\rho=(u,\dots,s,t,\dots,x,y,\dots,v)$ from $s$ (possibly $u$) to $y$ (possibly $v$) such that $s$ and $y$ are contained in $An(u\cup v)$ and all vertices from $t$ to $x$ lie outside $An(u\cup v)$. Then the edges between $s$ and $t$ and between $x$ and $y$ must be oriented as $s\mathrel{\circ\!\!\!\rightarrow} t$ and $x\mathrel{\leftarrow\!\!\!\circ} y$; otherwise $t$ or $x$ would belong to $an(u\cup v)$. Thus there exists at least one collider between $s$ and $y$ on $\rho$. The middle vertex $w$ of the collider closest to $s$ between $s$ and $y$ is not contained in
$an(u\cup v)$, and no descendant of $w$ is in $an(u\cup v)$, since otherwise there would be a (partially) directed cycle. So $\rho$ is blocked
at this collider, and the collider cannot be activated conditionally on any vertex in $S$ where $S\subseteq an(u\cup v)$.
\end{proof}
\begin{lemma}\label{lem4}
Let $T$ be an $m$-separation tree for CG $G$. For any vertex $u$ there exists at least one node of $T$ that contains $u$ and $bd(u)$.
\end{lemma}
\begin{proof}
If $bd(u)$ is empty, the claim is trivial. Otherwise, let $C$ denote the node of $T$ that contains $u$ and the largest number of elements
of $u$'s boundary. Since no set can separate $u$ from a parent (or neighbor), there must be a node of $T$ that contains $u$ and that parent (or neighbor). If $u$ has only
one element in its boundary, the lemma follows. Suppose $u$ has two or more elements in its boundary and, for contradiction, that $C$ does not contain all of $bd(u)$. Since all vertices in $V$ appear in $T$, we can then choose two elements $v$ and $w$ of $u$'s boundary that are not contained in a single node but are contained in two different nodes of $T$,
say $\{u,v\}\subseteq C$ and $\{u,w\}\subseteq C'$. On the chain from $C$ to $C'$ in $T$, all
separators must contain $u$; otherwise they could not separate $C$ from $C'$. However, no separator containing $u$ can
separate $v$ and $w$, because $v\mathrel{\circ\!\!\!\rightarrow} u\mathrel{\leftarrow\!\!\!\circ} w$ is an active chain between $v$ and $w$ in $G$. This is a contradiction.
\end{proof}
\begin{lemma}\label{lem5}
Let $T$ be an $m$-separation tree for CG $G$ and $C$ a node of $T$. If $u$ and $v$ are two vertices in $C$ that
are non-adjacent in $G$, then there exists a node $C'$ of $T$ containing $u, v$ and a set $S$ such that $S$ $m$-separates $u$ and $v$ in $G$.
\end{lemma}
\begin{proof}
Without loss of generality, we can suppose that $v$ is not a descendant of the vertex $u$ in $G$, i.e., $v\in nd(u)$. According to the local Markov property for MVR chain graphs proposed by Javidian and Valtorta in \citep{jv1}, we know that $u\perp\!\!\!\perp [nd(u)\setminus bd(u)]|pa_G(u)$. By Lemma \ref{lem4}, there is a
node $C_1$ of $T$ that contains $u$ and $bd(u)$. If $v\in C_1$, then $S$ defined as the set of parents of $u$ $m$-separates $u$ from $v$.
If $v\not\in C_1$, choose the node $C_2$ of $T$ that is closest to $C_1$ and contains both $u$ and $v$. Suppose that there is at least one parent (or neighbor) $p$ of $u$ that is not contained
in $C_2$. Then there is a separator $K$ connecting $C_2$ toward $C_1$ in $T$ such that $K$ $m$-separates $p$ from all vertices in $C_2\setminus K$. Note that on the chain from $C_1$ to $C_2$ in $T$, all
separators must contain $u$; otherwise they could not separate $C_1$ from $C_2$. So we have $u\in K$ but $v\not\in K$ (if $v\in K$, then $C_2$ would not be the closest node of $T$ to $C_1$). In fact, for every parent (or neighbor) $p'$ of $u$ that is contained in $C_1$ but not in $C_2$, $K$ separates $p'$ from all vertices in $C_2\setminus K$, in particular from the vertex $v$.
Define $S=an(u\cup v)\cap C_2$, which is a subset of $C_2$. We need to show that $u$ and $v$ are $m$-separated by $S$, that is, every chain between $u$
and $v$ in $G$ is blocked by $S$. Let $\rho$ be an arbitrary chain between $u$ and $v$.
If $\rho$ is not contained in $An(u\cup v)$, then we obtain from Lemma \ref{lem3} that $\rho$ is blocked by $S$.
When $\rho$ is contained in $An(u\cup v)$, let $x$ be adjacent to $u$ on $\rho$, that is, $\rho =
(u, x, y, \dots , v)$. We consider the three possible orientations of the edge between $u$ and $x$, and show that $\rho$ is blocked in all three cases.
\begin{itemize}
\item[i:] $u\gets x$, so we know that $x$ is not a collider and we have two possible sub-cases:
\begin{enumerate}
\item $x\in C_2$. In this case the chain $\rho$ is blocked at $x$.
\item $x\not\in C_2$. In this case $K$ $m$-separates $x$ from $v$. By
Lemma \ref{lem2}, the sub-chain $\rho'$ from $x$ to $v$ can be blocked by $W\cap K$, where $W$ denotes the set of
all vertices between $x$ and $v$ (not containing $x$ and $v$) on $\rho'$. Since $S\supseteq (W\cap K)$, we obtain from Lemma \ref{lem2} that $S$ also blocks $\rho'$. Hence the chain $\rho$ is blocked by $S$.
\end{enumerate}
\item[ii:] $u\to x$. We have the following sub-cases:
\begin{enumerate}
\item $x\in an(u)$. This case is impossible because a directed cycle would occur.
\item $x\in an(v)$. This case is impossible because $v$ cannot be a descendant of $u$.
\end{enumerate}
\item[iii:] $u\leftrightarrow x$. We have the following sub-cases:
\begin{enumerate}
\item $x\in an(u)$. This case is impossible because a partially directed cycle would occur.
\item $x\in an(v)$ and $v$ is in the same chain component $\tau$ that contains $u, x$. This is impossible, because in this case we have a partially directed cycle.
\item $x\in an(v)$ and $v$ is not in the same chain component $\tau$ that contains $u, x$. We have the following sub-cases:
\begin{itemize}
\item $x\not\in C_2$. In this case $K$ $m$-separates $x$ from $v$. By
Lemma \ref{lem2}, the sub-chain $\rho'$ from $x$ to $v$ can be blocked by $W\cap K$, where $W$ denotes the set of
all vertices between $x$ and $v$ (not containing $x$ and $v$) on $\rho'$. Since $S\supseteq (W\cap K)$, we obtain from Lemma \ref{lem2} that $S$ also blocks $\rho'$. Hence the chain $\rho$ is blocked by $S$.
\item $x\in C_2$. We have the three following sub-cases:
\begin{itemize}
\item $u\leftrightarrow x\to y$. In this case $x\in S$ blocks the chain. Note that in this case it is possible that $y=v$.
\item $u\leftrightarrow x\gets y$. Here $y$ is not a collider, and $y\ne v$ (otherwise a directed cycle would occur). If $y\in C_2$ then the chain $\rho$ is blocked at $y$. Otherwise, we have the two following sub-cases:
\begin{itemize}
\item There is a node $C'$ between $C_1$ and $C_2$ that contains $y$ (note that it is possible that $C'=C_1$), so $K$ $m$-separates $y$ from $v$ and the same argument used for case i.2 holds.
\item In this case $K$ $m$-separates $y$ from $p$ ($p\in bd(u)\cap C_1$ and $p\not\in C_2$), which is impossible because the chain $p\mathrel{\circ\!\!\!\rightarrow} u\leftrightarrow x\gets y$ is active (note that $u,x\in K$).
\end{itemize}
\item $u\leftrightarrow x\leftrightarrow y$. If there is an outgoing ($\to$) edge from $y$, then $y\ne v$ (otherwise a partially directed cycle would occur) and the same argument as in the previous sub-case ($u\leftrightarrow x\gets y$) holds. Otherwise, $y$ is a collider. If $y\not\in C_2$ then the chain $\rho$ is blocked at $y$. If $y\in C_2$, there must be a non-collider vertex on the chain $\rho$ between $y$ and $v$ to prevent a (partially) directed cycle, and the same argument as in the previous sub-case ($u\leftrightarrow x\gets y$) holds.
\end{itemize}
\end{itemize}
\end{enumerate}
\end{itemize}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm2}]
From \citep{cdls}, we know that any separator $S$ in junction tree $T$ separates $V_1\setminus S$ and $V_2\setminus S$ in the triangulated graph $\bar{G}_V^t$, where $V_i$ denotes the variable set of the subtree $T_i$ induced by removing the edge with
a separator $S$ attached, for $i = 1, 2$. Since the edge set of $\bar{G}_V^t$ contains that of the undirected independence graph $\bar{G}_V$ for $G$, $V_1\setminus S$ and $V_2\setminus S$ are also separated in $\bar{G}_V$. Since $\bar{G}_V$ is an undirected independence graph for $G$, using Definition \ref{septree} we obtain that $T$ is an $m$-separation tree for $G$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm1}]
\noindent ($\Rightarrow$) If condition (i) is the case, nothing remains to prove. Otherwise, Lemma \ref{lem5} implies condition (ii).
\noindent ($\Leftarrow$) Assume that $u$ and $v$ are not contained together in any node $C$ of $T$. Also, assume that $C_1$ and $C_2$ are two nodes of $T$ that contain $u$ and $v$, respectively. Let $C_1'$ be the most distant node from $C_1$, between $C_1$ and $C_2$, that contains $u$, and let $C_2'$ be the most distant node from $C_2$, between $C_1$ and $C_2$, that contains $v$. Note that it is possible that $C_1'=C_1$ or $C_2'=C_2$. By condition (i) we know that $C_1'\ne C_2'$. Any separator between $C_1'$ and $C_2'$ satisfies the assumptions of Lemma \ref{lem2}. The sufficiency of condition (i) is therefore given by Lemma \ref{lem2}.
The sufficiency of
condition (ii) is trivial by the definition of $m$-separation.
\end{proof}
\section*{Appendix B. Proofs for Correctness of the Algorithms}
\begin{proof}
[Correctness of Algorithm \ref{hypergraph}] Since an augmented graph for a CG $G$ is an undirected independence graph, by the definition of
an undirected independence graph it is enough to show that $\bar{G}_V$ defined in step 3 contains all edges of $(G_V)^a$. It
is obvious that $\bar{E}$ contains all edges obtained by dropping the directions of directed edges in $G$, since no set can
$m$-separate two vertices that are adjacent in $G$.
Now we show that $\bar{E}$ also contains any augmented edge that connects two vertices $u$ and $v$ having a collider chain between them, that
is, $(u, v)\in \bar{E}$. Any chain graph yields a directed acyclic graph $D$ of its chain components, having $\mathcal{T}$ as node set and an edge $T_1\to T_2$ whenever there exists in the chain graph $G$ at least one edge $a\rightarrow b$ connecting a node $a$ in $T_1$ with a node $b$ in $T_2$ \citep{ml2}. So, there is a collider chain between two nodes $u$ and $v$ if and only if there is a chain component $\tau\in \mathcal{T}$ such that
\begin{enumerate}
\item $u,v\in \tau$, or
\item $u\in \tau$ and $v\in pa_G(\tau)$ or vice versa, or
\item $u, v\in pa_G(\tau)$.
\end{enumerate}
Since for each chain component $\tau$ there is a $C_h\in C$ containing both $\tau$ and its parent set $pa_G(\tau)$, in all of the above cases the edge $(u,v)$ is added in step 2. Therefore, $\bar{G}_V$ defined in step 3 contains all edges of $(G_V)^a$.
\end{proof}
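The case analysis above shows that the augmented edges are exactly the pairs of vertices lying together in some set $\tau\cup pa_G(\tau)$. A minimal Python sketch of this edge construction (ours, not part of the paper; the chain components and their parent sets are supplied by hand as hypothetical inputs):

```python
from itertools import combinations

def augmented_edges(chain_components, parents):
    """Augmented edges of a chain graph: all pairs inside tau | pa_G(tau),
    taken over every chain component tau (cases 1-3 of the proof).

    chain_components: list of vertex collections (the chain components of G);
    parents: dict mapping a component's index to its parent set pa_G(tau)."""
    edges = set()
    for idx, tau in enumerate(chain_components):
        block = set(tau) | set(parents.get(idx, ()))
        edges.update(combinations(sorted(block), 2))
    return edges

# Toy graph: component {1, 2}, component {3} with 1 -> 3 and 2 -> 3,
# so pa_G({3}) = {1, 2}; the augmented graph is the triangle on {1, 2, 3}.
assert augmented_edges([[1, 2], [3]], {0: set(), 1: {1, 2}}) == {(1, 2), (1, 3), (2, 3)}
```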
\begin{proof}
[Correctness of Algorithm \ref{alg1}] By the sufficiency of Theorem \ref{thm1}, the initializations at steps 2 and 3 for
creating edges guarantee that no edge is created between any two variables that are not in the same node of the
$m$-separation tree. Also, by the sufficiency of Theorem \ref{thm1}, deleting edges at steps 2 and 3 guarantees that any other edge
between two $m$-separated variables can be deleted in some local skeleton. Thus the global skeleton obtained at step 3 is
correct. In a maximal ancestral graph, every missing edge corresponds to at least one independency in the corresponding
independence model \citep{rs}, and MVR CGs are a subclass of maximal ancestral graphs \citep{jv1}. Therefore, according to the necessity of Theorem \ref{thm1}, each augmented edge $(u, v)$ in the undirected independence graph must be deleted in some subgraph over a node of the $m$-separation tree. Furthermore, according to Lemma \ref{lem4}, for every $v$-structure $(u\mathrel{\circ\!\!\!\rightarrow} w\mathrel{\leftarrow\!\!\!\circ} v)$ there is a node in the $m$-separation tree $T$ that contains $u, v$ and $w$, and obviously $w\not\in S_{uv}$. Therefore, we can determine all $v$-structures at step 4, which
completes our proof.
\end{proof}
\section*{Acknowledgements}
We are grateful to Professor Jose M. Pe\~{n}a and Dr. Dag Sonntag for providing us with code that helped in the design of the algorithm that we implemented in R.
\section{Introduction}
Consider the following semilinear stochastic Maxwell equations with additive noise,
\begin{equation}\label{sto_max}
\begin{cases}
\varepsilon {\rm d}{\bf E}-\nabla\times {\bf H}{\rm d}t=-{\bf J}_{e}(t,{\bf x},{\bf E},{\bf H}){\rm d}t-{\bf J}_e^{r}(t,{\bf x})\circ{\rm d}W(t),~ &(t,{\bf x})\in(0,~T]\times D,\\
\mu {\rm d}{\bf H}+\nabla\times {\bf E}{\rm d}t=-{\bf J}_{m}(t,{\bf x},{\bf E},{\bf H}){\rm d}t-{\bf J}_m^{r}(t,{\bf x})\circ{\rm d}W(t),~ &(t,{\bf x})\in(0,~T]\times D,\\
{\bf E}(0,{\bf x})={\bf E}_0({\bf x}),~{\bf H}(0,{\bf x})={\bf H}_0({\bf x}),~&{\bf x}\in D,\\
{\bf n}\times {\bf E}={\bf 0},~&(t,{\bf x})\in(0,~T]\times\partial D,
\end{cases}
\end{equation}
where ${\bf E}$ is the electric field, ${\bf H}$ is the magnetic field, ${\varepsilon}$ denotes the permittivity and $\mu$ denotes the permeability, satisfying $\varepsilon,\mu\in L^{\infty}(D)$ and $\varepsilon,\mu\geq \delta>0$.
Here $\circ$ denotes the Stratonovich integral, $D\subset {\mathbb R}^{3}$ is a bounded domain, $T\in(0,~\infty)$, and ${\bf J}:[0,T]\times D\times {\mathbb R}^3\times{\mathbb R}^3\to{\mathbb R}^3$ is a continuous function satisfying
\begin{eqnarray}
&&|{\bf J}(t,{\bf x},u,v)|\leq L(1+|u|+|v|),\label{bound J}\\
&&|{\bf J}(t,{\bf x},u_1,v_1)-{\bf J}(s,{\bf x},u_2,v_2)|\leq L(|t-s|+|u_1-u_2|+|v_1-v_2|),\label{bound partialJ}
\end{eqnarray}
for all ${\bf x}\in D$ and $u,v,u_1,v_1,u_2,v_2\in{\mathbb R}^3$, for some constant $L>0$. Here $|\cdot|$ denotes the Euclidean norm, ${\bf J}$ stands for either ${\bf J}_e$ or ${\bf J}_m$, and ${\bf J}^r:[0,T]\times D\to {\mathbb R}^3$ is a continuous bounded function,
with ${\bf J}^{r}$ being ${\bf J}^r_e$ or ${\bf J}_m^r$. Throughout this paper, $W(t)$ is a $Q$-Wiener process with respect to a filtered probability space $(\Omega,{\mathcal F},\{{\mathcal F}_{t}\}_{0\leq t\leq T},{\mathbb P})$, with $Q$ being a symmetric, positive definite operator with finite trace on $U=L^2(D)$.
If we denote an orthonormal basis of the space $U$ by $\{e_i\}_{i\in{\mathbb N}}$, then $W(t)$
can be represented as
\begin{equation}
W(t)=\sum_{i=1}^{\infty}Q^{\frac12}e_i\beta_i(t),~t\in[0,~T],
\end{equation}
where $\{\beta_i(t)\}_{i\in{\mathbb N}}$ is a sequence of independent real-valued Brownian motions.
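To make the truncated expansion concrete, the sketch below (our own toy example, not from the paper) samples a truncated $Q$-Wiener process at a fixed point, taking $D=(0,1)$, eigenfunctions $e_i(x)=\sqrt{2}\sin(i\pi x)$ and hypothetical eigenvalues $q_i=i^{-2}$ of $Q$, so that ${\rm tr}(Q)<\infty$; the empirical variance at $(t,x)=(1,1/2)$ is compared with $t\sum_i q_i e_i(1/2)^2\approx\pi^2/4$.

```python
import numpy as np

def sample_W(t, x, n_modes, rng):
    """One sample of the truncated expansion W(t,x) = sum_i sqrt(q_i) e_i(x) beta_i(t),
    with e_i(x) = sqrt(2) sin(i pi x) and q_i = i^{-2}; beta_i(t) ~ N(0, t), independent."""
    i = np.arange(1, n_modes + 1)
    q = i ** -2.0                                  # eigenvalues of Q (finite trace)
    e = np.sqrt(2.0) * np.sin(i * np.pi * x)       # eigenfunctions evaluated at x
    beta = rng.normal(0.0, np.sqrt(t), size=n_modes)
    return float(np.sum(np.sqrt(q) * e * beta))

rng = np.random.default_rng(0)
samples = np.array([sample_W(1.0, 0.5, 50, rng) for _ in range(20000)])
# Var W(1, 1/2) = sum_i q_i e_i(1/2)^2 -> 2 * sum over odd i of i^{-2} = pi^2 / 4
assert abs(samples.var() - np.pi ** 2 / 4) < 0.2
```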
The well-posedness of stochastic Maxwell equations has been investigated via the semigroup approach in \cite{LSY2010,CHJ2018}, via a refined Faedo-Galerkin method and a spectral multiplier theorem in \cite{Hor2017}, and via the stochastically perturbed PDEs approach in \cite{SW2017}. The regularity of the
solution of stochastic Maxwell equations driven by It\^o multiplicative noise is considered in \cite{CHJ2018}, assuming sufficient spatial smoothness of the coefficients and the noise term. The stochastic multi-symplectic structures are investigated in \cite{HJZ2014,CHZ2016} for stochastic Maxwell equations driven by additive noise via different approaches, and in \cite{HJZC2017} for stochastic Maxwell equations driven by multiplicative noise.
The numerical analysis of stochastic Maxwell equations is an active ongoing
research subject. There are now a certain number of papers devoted to this field, but many problems still need to be solved (see e.g. \cite{Zhang2008,BAZC2010,HJZ2014,CHZ2016,HJZC2017,CHJ2018} and references therein). In particular, \cite{HJZ2014} proposes a stochastic multi-symplectic method for stochastic Maxwell equations with additive noise based on a stochastic version of the variational principle, which has the merit of preserving the discrete stochastic multi-symplectic conservation law and the stochastic energy dissipation property. In \cite{CHZ2016}, three different stochastic multi-symplectic methods are compared, and the linear growth property of the energy and the conservation of the divergence are analyzed. In \cite{HJZC2017}, the authors constructed an innovative stochastic multi-symplectic energy-conserving method for three-dimensional stochastic Maxwell equations with multiplicative noise by using a wavelet interpolation technique. For a rigorous convergence analysis of numerical approximations, we refer to the very recent work \cite{CHJ2018}, in which the mean-square convergence of a semi-implicit Euler scheme for stochastic Maxwell equations with multiplicative It\^o noise is investigated. Via the energy estimate technique and a priori estimates on the exact and numerical solutions, the authors show that the method is convergent with order $1/2$.
To the best of our knowledge, however, there has been no work in the literature that considers the infinite-dimensional stochastic Hamiltonian formulation and the stochastic symplecticity of stochastic Maxwell equations.
By introducing two new Hamiltonian functionals
and by utilizing the properties of variational integrals, we rewrite stochastic Maxwell equations \eqref{sto_max} directly in an equivalent infinite-dimensional stochastic Hamiltonian form. As a result, the phase flow of equations \eqref{sto_max}
preserves the symplectic structure $\overline{\omega}(t)=\int_{D}{\rm d}{\bf E}(t,{\bf x})\wedge {\rm d}{\bf H}(t,{\bf x}){\rm d}{\bf x}$ almost surely.
Meanwhile, we present the regularity in the space ${\mathcal D}(M^k)$ ($k\in{\mathbb N}$) of the solution of stochastic Maxwell equations \eqref{sto_max}, where $M$ denotes the Maxwell operator. This regularity, together with the adaptedness to the filtration, yields the H\"older continuity of the solution in the space ${\mathcal D}(M^{k-1})$, both in the mean-square and in the mean sense. Furthermore, the evolution laws of energy and divergence are also investigated via a formal application of the It\^o formula.
It is important to design numerical methods which preserve the intrinsic properties of the original system as much as possible, owing to their superior long-time simulation behavior and stability.
In order to construct stochastic symplectic methods for stochastic Maxwell equations \eqref{sto_max}, we apply a general class of stochastic Runge-Kutta methods to these equations in the temporal direction. By utilizing the structure of the numerical methods and the properties of differential 2-forms, we derive conditions on the coefficients under which the methods preserve the stochastic symplectic structure. The existence and uniqueness of the numerical solution are proved for the general class of stochastic Runge-Kutta methods that are algebraically stable and coercive.
The relevant prerequisite for the mean-square convergence analysis is to establish the regularity in the space ${\mathcal D}(M^k)$ and the H\"older continuity in the space ${\mathcal D}(M^{k-1})$ for the
original system, and also for the temporal stochastic Runge-Kutta semidiscretizations. To deal with the difficulty caused by the interaction of the unbounded operator $M$, the stochastic terms and the complex structure of Runge-Kutta methods,
we make use of the
semigroup approach, which allows the mild solution to be expressed in a form containing a bounded linear semigroup instead of the unbounded differential operator, together with a priori estimates on the operators and the semigroup, as well as the coercivity and algebraic stability of the proposed methods.
These estimates are essential for the error analysis, which allows us to establish optimal mean-square convergence rates (see Theorem 4.3). An immediate consequence of this result is that the order of mean-square convergence is $1$, which answers an open problem posed in \cite{CH2016} for stochastic Maxwell equations driven by additive noise. The analysis holds for
algebraically stable and coercive stochastic Runge-Kutta methods.
Note that symplectic Runge-Kutta methods are automatically algebraically stable; as a consequence, the mean-square convergence order of coercive symplectic Runge-Kutta methods is $1$.
The paper is organized as follows. In Section 2, some preliminaries are collected and an abstract formulation of \eqref{sto_max} is set forth; some properties of stochastic Maxwell equations, including the regularity and the evolution laws of energy and divergence, are also considered. Section 3 is devoted to the stochastic symplecticity of stochastic Maxwell equations. In Section 4, a semi-discrete scheme is proposed and our main results are stated: in Section 4.1 we give conditions guaranteeing that a given stochastic Runge-Kutta method is symplectic; in Section 4.2 we show the existence, uniqueness and regularity of the numerical solution of a general stochastic Runge-Kutta method; Section 4.3 is devoted to the proof of the convergence theorem for stochastic Runge-Kutta methods satisfying the algebraic stability and coercivity conditions.
\section{Preliminaries and framework}
\subsection{Notations}
Throughout the paper, we will use the following notations.
\begin{itemize}
\item[1.] We will work with the real Hilbert space ${\mathbb H}=L^2(D)^3\times L^2(D)^3$, endowed with the inner product
\[
\left\langle \begin{pmatrix}
{\bf E}_1\\{\bf H}_1
\end{pmatrix},~ \begin{pmatrix}
{\bf E}_2\\{\bf H}_2
\end{pmatrix}\right\rangle_{\mathbb H}=\int_{D}(\varepsilon {\bf E}_1\cdot {\bf E}_2
+\mu{\bf H}_1\cdot{\bf H}_2){\rm d}{\bf x}
\]
for all ${\bf E}_1, {\bf H}_1,{\bf E}_2,{\bf H}_2\in L^2(D)^3$, and the norm
\[
\left\|\begin{pmatrix}
{\bf E}\\{\bf H}
\end{pmatrix}\right\|_{\mathbb H}=\left[\int_{D}\left(\varepsilon|{\bf E}|^2+\mu|{\bf H}|^2\right){\rm d}{\bf x}\right]^{1/2},\quad \forall~{\bf E}, {\bf H}\in L^2(D)^3.
\]
\item[2.] We will denote the Maxwell operator by
\begin{equation}\label{M_operator}
M=\begin{pmatrix}
0& \varepsilon^{-1}\nabla\times \\
-\mu^{-1}\nabla\times &0 \\
\end{pmatrix}
\end{equation}
with domain
\begin{equation}
\begin{split}
{\mathcal D}(M)&=\left\{\begin{pmatrix}
{\bf E} \\
{\bf H}
\end{pmatrix}\in {\mathbb H}:~M\begin{pmatrix}
{\bf E} \\
{\bf H}
\end{pmatrix}=\begin{pmatrix}
\varepsilon^{-1} \nabla\times{\bf H}\\
-\mu^{-1}\nabla\times{\bf E}
\end{pmatrix}\in{\mathbb H},~ {\bf n}\times{\bf E}\Big|_{\partial D}={\bf 0} \right\}\\[2mm]
&=H_0({\rm curl},D)\times H({\rm curl},D),
\end{split}
\end{equation}
where the curl-spaces are defined by
\begin{equation*}
\begin{split}
H({\rm curl},D):&=\{ v\in L^2(D)^3:~\nabla\times v\in L^2(D)^3 \},\\[2mm]
H_0({\rm curl},D):&=\{ v\in H({\rm curl},D):~{\bf n}\times v|_{\partial D}={\bf 0} \}.
\end{split}
\end{equation*}
The corresponding graph norm is $\|v\|_{{\mathcal D}(M)}:=\left(\|v\|_{\mathbb H}^2+\|Mv\|_{\mathbb H}^2\right)^{1/2}$.
A frequently used property for Maxwell operator $M$ is:
$
\langle Mu,~u\rangle_{\mathbb H}=0, ~\forall~u\in{\mathcal D}(M).
$
\item[3.] The Maxwell operator $M$ defined in \eqref{M_operator} is closed and skew-adjoint on $\mathbb{H}$, and thus generates a unitary $C_0$-group $S(t)=e^{tM}$ on $\mathbb{H}$ in view of Stone's theorem.
A frequently used estimate for the semigroup is the following (see \cite[Lemma 3.1]{CHJ2018}):
\begin{equation}
\|S(t)-Id\|_{{\mathcal L}({\mathcal D}(M);{\mathbb H})}\leq Ct,
\end{equation}
where the constant $C$ does not depend on $t$.
\item[4.]
We define the space ${\mathcal D}(M^n)$ by the domain of the $n$-th power of operator $M$ for $n\in{\mathbb N}$, with norm
\[
\|u\|_{{\mathcal D}(M^n)}:=\left(\|u\|_{\mathbb H}^2+\|M^n u\|_{\mathbb H}^2\right)^{1/2}.
\]
In fact,
the norm $\|\cdot\|_{{\mathcal D}(M^n)}$ corresponds to the scalar product
\[
\langle u,~v\rangle_{{\mathcal D}(M^n)}=\langle u,~v\rangle_{\mathbb H}
+\langle M^nu,~M^nv\rangle_{\mathbb H}.
\]
Moreover, we know that $\|u\|_{{\mathcal D}(M^n)}\leq C\|u\|_{{\mathcal D}(M^m)}$ for all $u\in{\mathcal D}(M^m)$, $n\leq m$.
\item[5.] Denote by $HS(U,H)$ the Banach space of all Hilbert-Schmidt operators from a separable Hilbert space $U$ to another separable Hilbert space $H$, equipped with the norm
\[
\|\Gamma\|_{HS(U,H)}=\left(\sum_{j=1}^{\infty}\|\Gamma\eta_j\|_{H}^2\right)^{\frac12},
\]
where $\{\eta_j\}_{j\in{\mathbb N}}$ is any orthonormal basis of $U$.
\item[6.] Throughout this paper, $C$ will denote various constants. The same symbol will be used for different constants. When it is necessary to indicate that a constant depends on some parameters, we will use the notation $C(\cdot)$. For instance, $C(T, p)$ is a constant depending on $T$ and $p$.
\end{itemize}
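The semigroup estimate in item 3 has a transparent finite-dimensional analogue: for a real skew-symmetric matrix $M$ one has $\|e^{tM}-Id\|_2\leq t\|M\|_2$. The sketch below (our own toy illustration, not part of the paper's analysis) checks this for the $2\times 2$ rotation generator, whose exponential is known in closed form.

```python
import numpy as np

def rot(t, w):
    # e^{tM} for the skew-symmetric generator M = [[0, w], [-w, 0]]
    c, s = np.cos(w * t), np.sin(w * t)
    return np.array([[c, s], [-s, c]])

w, I = 3.0, np.eye(2)
for t in [0.01, 0.1, 0.5]:
    gap = np.linalg.norm(rot(t, w) - I, 2)   # equals 2|sin(w t / 2)|
    assert gap <= w * t + 1e-12              # ||S(t) - Id|| <= t ||M||
```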
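Likewise, the Hilbert-Schmidt norm of item 5 does not depend on the chosen orthonormal basis $\{\eta_j\}$ and, in finite dimensions, coincides with the Frobenius norm. A small numerical check of both facts (our own sketch, with an arbitrary random matrix standing in for $\Gamma$):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(3, 3))          # Gamma as a finite-dimensional operator

# Standard basis: sum_j ||G eta_j||^2 is the squared Frobenius norm.
hs_std = sum(np.linalg.norm(G[:, j]) ** 2 for j in range(3))

# Any other orthonormal basis (columns of an orthogonal matrix) gives the same value.
Q_orth, _ = np.linalg.qr(rng.normal(size=(3, 3)))
hs_rot = sum(np.linalg.norm(G @ Q_orth[:, j]) ** 2 for j in range(3))

assert abs(hs_std - np.linalg.norm(G, 'fro') ** 2) < 1e-10
assert abs(hs_rot - hs_std) < 1e-10
```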
\subsection{Framework}
We work on the abstract form of stochastic Maxwell equations in infinite dimensional space ${\mathbb H}$:
\begin{equation}\label{sM_equations}
\begin{cases}
{\rm d}u(t)=\left[Mu(t)+F(t,u(t))\right]{\rm d}t+B(t){\rm d}W(t),~t\in(0,~T],\\
u(0)=u_0,
\end{cases}
\end{equation}
where $u(t)=({\bf E}^T(t),{\bf H}^T(t))^T$, $u_0=({\bf E}_0^T,{\bf H}_0^T)^T$. Here $F:[0,~T]\times {\mathbb H}\to{\mathbb H}$ is the Nemytskij operator associated to ${\bf J}_{e}$, ${\bf J}_m$, which is defined by
\begin{equation}\label{F}
F(t,u(t))({\bf x})=\left( \begin{array}{c}
-\varepsilon^{-1}{\bf J}_{e}(t,{\bf x},{\bf E}(t,{\bf x}),{\bf H}(t,{\bf x}))\\
-\mu^{-1}{\bf J}_{m}(t,{\bf x},{\bf E}(t,{\bf x}),{\bf H}(t,{\bf x}))
\end{array} \right), ~t\in[0,T],~{\bf x}\in D,~u(t)\in{\mathbb H}.
\end{equation}
For the diffusion term, we introduce the Nemytskij operator $B:[0,~T]\to HS(U_0,{\mathbb H})$ by
\begin{equation}
(B(t)v)({\bf x})=\left( \begin{array}{c}
-\varepsilon^{-1}{\bf J}_{e}^{r}(t,{\bf x})v({\bf x})\\
-\mu^{-1}{\bf J}_{m}^r(t,{\bf x})v({\bf x})
\end{array} \right),\quad {\bf x}\in D \text{ and } v\in U_0:=Q^{\frac12}U.
\end{equation}
\iffalse
The derivative operators of $F$ is given by
\begin{equation}
F^{\prime}(t,u)(v)({\bf x})=\left( \begin{array}{c}
-\varepsilon^{-1}{\partial_{1} {\bf J}_{e}(t,{\bf x},{\bf E}_1(t,{\bf x}),{\bf H}_1(t,{\bf x}))}{\bf E}_2(t,{\bf x})
-\varepsilon^{-1}{\partial_{2} {\bf J}_{e}(t,{\bf x},{\bf E}_1(t,{\bf x}),{\bf H}_1(t,{\bf x}))}{\bf H}_2(t,{\bf x})\\
-\mu^{-1}\partial_1{\bf J}_{m}(t,{\bf x},{\bf E}_1(t,{\bf x}),{\bf H}_1(t,{\bf x})){\bf E}_2(t,{\bf x})
-\mu^{-1}\partial_1{\bf J}_{m}(t,{\bf x},{\bf E}_1(t,{\bf x}),{\bf H}_1(t,{\bf x})){\bf H}_2(t,{\bf x})
\end{array} \right),
\end{equation}
where
${\bf x}\in D,~u=\left(\begin{array}{c}
{\bf E}_1 \\
{\bf H}_1
\end{array}\right)\in{\mathbb H},~\text{and }
v=\left(\begin{array}{c}
{\bf E}_2 \\
{\bf H}_2
\end{array}\right)\in{\mathbb H}$.
Thanks to \eqref{bound J} and \eqref{bound partialJ}, the operator $F$ satisfies
\begin{eqnarray}
&&\|F(t,u)\|_{\mathbb H} \leq C(1+\|u\|_{\mathbb H}),\label{bound F}\\
&&\|F(t,u)-F(s,v)\|_{\mathbb H}\leq C \big(|t-s|+\|u-v\|_{\mathbb H}\big),\label{Lip F}
\end{eqnarray}
for all $t,s\in[0,T]$, and $u,v\in{\mathbb H}$. Here the positive constant $C$
may depend on $\delta$, the volume $|D|$ of domain $D$, and the constant $L$ in \eqref{bound J} and \eqref{bound partialJ}. In fact,
\begin{equation*}
\begin{split}
\|F(t,u)\|_{\mathbb H}&=\Big(\int_{D}\varepsilon|\varepsilon^{-1}{\bf J}_e|^2
+\mu|\mu^{-1}{\bf J}_m|^2 {\rm d}{\bf x}\Big)^{\frac12}\\
&\leq \delta^{-\frac12}\Big(\int_{D}2L^2(1+|{\bf E}|+|{\bf H}|)^2{\rm d}{\bf x}\Big)^{\frac12}\\
&\leq \delta^{\frac12}\Big[(6L^2|D|)^{\frac12}+\Big(6L^2\delta^{-1}\int_{D}(\varepsilon|{\bf E}|^2+\mu|{\bf H}|^2){\rm d}{\bf x}\Big)^{\frac12} \Big]\\
&\leq C(1+\|u\|_{\mathbb H}),
\end{split}
\end{equation*}
and the proof of \eqref{Lip F} is similar as above.
\fi
\subsubsection{Well-posedness and regularity}
First we present the well-posedness in the Hilbert space ${\mathbb H}$ of the stochastic Maxwell equations \eqref{sM_equations}. From \cite{CHJ2018}, we know that conditions \eqref{bound J} and \eqref{bound partialJ} yield
the linear growth and global Lipschitz properties of the function $F$, i.e., there exists a constant $C$ depending on $\delta$, the volume $|D|$ of the domain $D$ and the constant $L$ in \eqref{bound J} and \eqref{bound partialJ}, such that
\begin{eqnarray}
&&\|F(t,u)\|_{\mathbb H}\leq C\big(1+\|u\|_{\mathbb H}\big),\\
&&\|F(t,u)-F(s,v)\|_{\mathbb H}\leq C\big(|t-s|+\|u-v\|_{\mathbb H}\big),
\end{eqnarray}
for all $t,s\in [0,T]$ and $u,v\in{\mathbb H}$.
The following proposition gives the existence and uniqueness of
the mild solution of equation \eqref{sM_equations}, which has been discussed for example in \cite{LSY2010,SW2017,CHJ2018}.
\begin{proposition}\label{wellposedness thm}
Suppose conditions \eqref{bound J} and \eqref{bound partialJ} are fulfilled, and let $W(t)$, $t\in[0,~T]$ be a $Q$-Wiener process with $Q$ being symmetric, positive definite and with finite trace, and let $u_0$ be an ${\mathcal F}_0$-measurable ${\mathbb H}$-valued random variable satisfying $\|u_0\|_{L^{p}(\Omega;{\mathbb H})}<\infty$ for some $p\geq 2$. Then stochastic Maxwell equations \eqref{sM_equations} have a unique mild solution given by
\begin{equation}\label{mild sol}
u(t)=S(t)u_0+\int_0^t S(t-s)F(s,u(s)){\rm d}s+\int_0^t S(t-s)B(s){\rm d}W(s)\quad {\mathbb P}\text{-}a.s.
\end{equation}
for each $t\in[0,~T]$.
Moreover, there exists a constant $C:=C(p,T,{\rm tr}(Q))\in(0,~\infty)$ such that
\begin{equation}
\sup_{t\in[0,~T]}{\mathbb E}\|u(t)\|^{p}_{ {\mathbb H}}\leq C(1+\|u_0\|^p_{L^{p}(\Omega;{\mathbb H})}).
\end{equation}
\end{proposition}
In order to obtain regularity results for the solution of equation \eqref{sM_equations}, we need stronger assumptions on $F$ and $B$. Namely, in the rest of the paper we assume the following.
\begin{assumption}\label{assum_F}
For an integer $\alpha\in{\mathbb N}$, $F(t,\cdot):~{\mathcal D}(M^{\alpha})\to {\mathcal D}(M^{\alpha})$ is a $C^2$ function with bounded derivatives up to order $2$, for any $t\in[0,T]$.
\end{assumption}
\begin{assumption}\label{assum_B}
For an integer $\beta\in{\mathbb N}$, $B(t)\in HS(U_0,{\mathcal D}(M^{\beta}))$, for any $t\in[0,T]$.
\end{assumption}
We are now in a position to establish the regularity of the solution of stochastic Maxwell equations \eqref{sM_equations} in the $L^{p}(\Omega;{\mathcal D}(M^k))$-norm, which is stated in the following proposition.
\begin{proposition}\label{regularity}
Let Assumptions \ref{assum_F}-\ref{assum_B} be fulfilled with $\alpha=\beta\equiv k$, and suppose that $u_0$ is an ${\mathcal F}_0$-measurable ${\mathbb H}$-valued random variable satisfying $\|u_0\|_{L^{p}(\Omega;{\mathcal D}(M^k))}<\infty$ for some $p\geq 2$. Then the mild solution \eqref{mild sol} satisfies
\begin{eqnarray}
\sup_{t\in[0,T]}{\mathbb E}\|u(t)\|_{{\mathcal D}(M^k)}^{p}\leq C(1+\|u_0\|^{p}_{L^{p}(\Omega;{\mathcal D}(M^k))}),
\end{eqnarray}
where the positive constant $C$ may depend on the coefficients $F$ and $B$, $p$, $T$.
\end{proposition}
\begin{proof}
The proof is similar to that of Proposition 3.1 in \cite{CHJ2018}.
\end{proof}
\begin{proposition}\label{holder}
Under the same assumptions as in Proposition \ref{regularity}, we have, for $0\leq t,s\leq T$,
\begin{eqnarray}
&& {\mathbb E}\|u(t)-u(s)\|_{{\mathcal D}(M^{k-1})}^p\leq C|t-s|^{p/2},\\
&& \|{\mathbb E}(u(t)-u(s))\|_{{\mathcal D}(M^{k-1})}\leq C|t-s|,
\end{eqnarray}
where the positive constant $C$ may depend on $p$, $T$,
and $\|u_0\|_{L^{p}(\Omega;{\mathcal D}(M^k))}$.
\end{proposition}
\begin{proof}
The proof is similar to that of Proposition 3.2 in \cite{CHJ2018}.
\end{proof}
\subsubsection{Physical properties}
In this part, we derive some physical properties of stochastic Maxwell equations \eqref{sM_equations}, including the energy evolution law and divergence evolution law.
Notice that in the deterministic case, if we impose the perfectly electric conducting (PEC) boundary condition ${\bf n}\times {\bf E}={\bf 0}$ on $\partial D$, the Poynting theorem states the relationship satisfied by the electromagnetic energy:
\[
\partial_{t}{\mathcal H}(u(t))=2\langle u(t), F(t,u(t))\rangle_{\mathbb H},
\]
where the energy is ${\mathcal H}(u(t)):=\|u(t)\|^2_{\mathbb H}$.
Now we investigate the energy evolution law for stochastic Maxwell equations \eqref{sM_equations}, which is stated in the following proposition.
\begin{proposition}
Under the same assumptions as in Proposition \ref{wellposedness thm}, we have, for all $t\in[0,T]$,
\begin{equation}\label{energy}
\begin{split}
\mathcal{H}(u(t))=\mathcal{H}(u_0)+\int_0^t\Big(2\langle{u(s)},F(s,u(s))\rangle_{\mathbb H}+\|B(s)\|^2_{HS(U_0,{\mathbb H})}\Big){\rm d}s+2\int_0^t\langle{u(s)},B(s)\rangle_{\mathbb H}{\rm d}W(s), ~{\mathbb P}\text{-}a.s.,
\end{split}
\end{equation}
where $u$ is the solution of \eqref{sM_equations} given by Proposition \ref{wellposedness thm}.
\end{proposition}
\begin{proof}
The proof is based on the formal application of the It\^{o} formula to the functional $$\mathcal{H}(u)=\|u\|^2_{\mathbb H}.$$ Since $\mathcal{H}(u)$ is Fr\'{e}chet differentiable, the derivatives of $\mathcal{H}(u)$ along the directions $\phi$ and $(\phi,\varphi)$ are as follows:
\begin{equation}\label{Fre}
D\mathcal{H}(u)(\phi)=2\langle u,\phi\rangle_{\mathbb H},\quad D^2\mathcal{H}(u)(\phi,\varphi)=2\langle\varphi,\phi\rangle_{\mathbb H}.
\end{equation}
From the It\^{o} formula (see Theorem 4.32 in \cite{PZ2014}), we have
\begin{equation}\label{InfIto}
\begin{split}
\mathcal{H}(u(t))&=\mathcal{H}(u_0)+\int_{0}^{t}\langle D\mathcal{H}(u(s)), B(s){\rm d}W(s)\rangle_{\mathbb H}\\
&\quad+\int_{0}^{t}\langle D\mathcal{H}(u(s)), Mu(s)+F(s,u(s))\rangle_{\mathbb H}{\rm d}s\\[2mm]
&\quad+\frac{1}{2}\int_0^t{\rm Tr}[D^2\mathcal{H}(u(s)) (B(s)Q^{\frac{1}{2}})(B(s)Q^{\frac{1}{2}})^*]{\rm d}s.
\end{split}
\end{equation}
Substituting \eqref{Fre} into \eqref{InfIto} leads to
\begin{equation*}
\begin{split}
\mathcal{H}(u(t))&=\mathcal{H}(u_0)+2\int_0^t\langle{u(s)},Mu(s)+F(s,u(s))\rangle_{\mathbb H}{\rm d}s\\
&+2\int_0^t\langle{u(s)},B(s)\rangle_{\mathbb H}{\rm d}W(s)+\int_0^t\|B(s)\|^2_{HS(U_0,{\mathbb H})}{\rm d}s.
\end{split}
\end{equation*}
By using
\begin{equation*}
\langle Mu,u\rangle_{\mathbb H}=0\quad \forall~u\in {\mathcal D}(M),
\end{equation*}
the proof is completed.
\end{proof}
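As a sanity check of \eqref{energy}, consider the finite-dimensional analogue ${\rm d}u=Mu\,{\rm d}t+B\,{\rm d}W$ with $F=0$ and a skew-symmetric matrix $M$, for which the averaged law reduces to ${\mathbb E}\|u(T)\|^2=\|u_0\|^2+T\|B\|_{HS}^2$. The Monte-Carlo sketch below (our own toy, not the scheme analyzed later in the paper) verifies this with an exponential Euler step; since $e^{{\rm d}t\,M}$ is orthogonal, the step reproduces the energy budget exactly in expectation.

```python
import numpy as np

# Toy check of the averaged energy law E||u(T)||^2 = ||u_0||^2 + T ||B||_HS^2
# for du = M u dt + B dW with F = 0 and skew-symmetric M (a rotation generator).
rng = np.random.default_rng(2)
w, dt, n_steps, n_paths = 1.0, 0.01, 100, 4000
B = 0.1 * np.eye(2)                               # additive-noise operator
u = np.tile([1.0, 0.0], (n_paths, 1))             # u_0 with ||u_0||^2 = 1

c, s = np.cos(w * dt), np.sin(w * dt)
S = np.array([[c, s], [-s, c]])                   # S(dt) = e^{dt M}, orthogonal

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, 2))
    u = (u + dW @ B.T) @ S.T                      # exponential Euler step

energy = np.mean(np.sum(u ** 2, axis=1))
expected = 1.0 + n_steps * dt * np.linalg.norm(B, 'fro') ** 2   # = 1.02
assert abs(energy - expected) < 0.02
```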
\begin{remark}
Comparing the evolution of the averaged energy, i.e., the expectation of equation \eqref{energy}, with the deterministic case, we find that there is one extra term $\int_0^t \|B(s)\|^2_{HS(U_0,{\mathbb H})}{\rm d}s$ in the stochastic case. This is the effect of the additive noise; see also \cite[Theorem 2.1]{CHZ2016}.
\end{remark}
In the deterministic case, it is well known that the electromagnetic field is divergence free if the medium is lossless, i.e., if $F=0$ in the deterministic Maxwell equations.
The following proposition states the divergence evolution law for the stochastic Maxwell equations \eqref{sM_equations}.
\begin{proposition}
Under the assumptions of Proposition \ref{regularity} with $k=1$, the averaged divergence of system \eqref{sto_max} satisfies
\begin{equation}\label{div_EH}
\begin{split}
\mathbb{E}\left({\rm div}(\varepsilon{\bf E}(t))\right)=&\mathbb{E}\left({\rm div}(\varepsilon{\bf E}_0)\right)-\mathbb{E}\left(\int_0^t{\rm div}{\bf J}_e{\rm d}s\right),\\[2mm]
\mathbb{E}\left({\rm div}(\mu{\bf H}(t))\right)=&\mathbb{E}\left({\rm div}(\mu{\bf H}_0)\right)-\mathbb{E}\left(\int_0^t{\rm div}{\bf J}_m{\rm d}s\right),
\end{split}
\end{equation}
where $u=({\bf E}^{T},{\bf H}^{T})^{T}$ is the solution of \eqref{sM_equations} given by Proposition \ref{wellposedness thm}.
\end{proposition}
\begin{proof}
Denote $\Psi({\bf E}(t))={\rm div}(\varepsilon{\bf E}(t))$. Since $\Psi$ is Fr\'{e}chet differentiable, the derivatives of $\Psi$ along the directions $\phi$ and $(\phi,\varphi)$ are
\begin{equation}
D\Psi({\bf E})(\phi)={\rm div}(\varepsilon\phi),\quad D^2\Psi({\bf E})(\phi,\varphi)=0.
\end{equation}
Applying the It\^{o} formula formally to $\Psi({\bf E}(t))$ yields
\begin{equation}\label{div_E}
\begin{split}
\Psi({\bf E}(t))=&\Psi({\bf E}_0)+\int_0^tD\Psi({\bf E}(s))({\rm d}{\bf E})+\frac{1}{2}\int_0^t{\rm Tr}\left[D^2\Psi({\bf E}(s))({\rm d}{\bf E},{\rm d}{\bf E})\right]\\[2mm]
=&\Psi({\bf E}_0)-\int_0^t{\rm div}\left({\bf J}_e{\rm d}s+{\bf J}_e^r{\rm d}W(s)\right)+\int_0^t{\rm div}\left(\nabla\times{\bf H}\right){\rm d}s\\[2mm]
=&\Psi({\bf E}_0)-\int_0^t{\rm div}{\bf J}_e{\rm d}s-\int_0^t{\rm div}\left({\bf J}_e^r{\rm d}W(s)\right),
\end{split}
\end{equation}
where the last equality is due to $\nabla\cdot(\nabla\times {\bf \psi})=0$ for any smooth vector field ${\bf \psi}$ on $D$. In a similar manner, by applying the It\^{o} formula to the functional $\Psi({\bf H}(t))={\rm div}(\mu{\bf H}(t))$, we can get
\begin{equation}\label{div_H}
\begin{split}
\Psi({\bf H}(t))=\Psi({\bf H}_0)-\int_0^t{\rm div}{\bf J}_m{\rm d}s-\int_0^t{\rm div}\left({\bf J}_m^r{\rm d}W(s)\right).
\end{split}
\end{equation}
The result \eqref{div_EH} follows by taking the expectation on both sides of \eqref{div_E} and \eqref{div_H}, respectively.
The proof is thus completed.
\end{proof}
\begin{remark}
If the medium is lossless, i.e., $F=0$, or if the functions ${\bf J}_e$, ${\bf J}_m$ are divergence-free, then the averaged divergences are conserved:
\begin{equation*}
\begin{split}
\mathbb{E}\left({\rm div}\left(\varepsilon{\bf E}(t)\right)\right)=\mathbb{E}\left({\rm div}\left(\varepsilon{\bf E}_0\right)\right),~~
\mathbb{E}\left({\rm div}\left(\mu{\bf H}(t)\right)\right)=\mathbb{E}\left({\rm div}\left(\mu{\bf H}_0\right)\right).
\end{split}
\end{equation*}
\end{remark}
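The vector identity $\nabla\cdot(\nabla\times\psi)=0$ that drives the proof above can be sanity-checked numerically. The following sketch uses a hypothetical quadratic vector field $\psi$, so second-order central differences are exact up to floating-point round-off.

```python
# Numerical sanity check of div(curl(psi)) = 0.  psi is an arbitrary
# quadratic vector field (a hypothetical choice), so central differences
# are exact up to round-off; note curl(psi) = (x, -y, 0), and the
# divergence vanishes only through cancellation of the three terms.

def partial(f, p, i, h=1e-3):
    """Central-difference approximation of the i-th partial derivative at p."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(q1) - f(q2)) / (2 * h)

def curl(F, p):
    """curl F = (dFz/dy - dFy/dz, dFx/dz - dFz/dx, dFy/dx - dFx/dy)."""
    comp = lambda i: (lambda q: F(q)[i])
    return (partial(comp(2), p, 1) - partial(comp(1), p, 2),
            partial(comp(0), p, 2) - partial(comp(2), p, 0),
            partial(comp(1), p, 0) - partial(comp(0), p, 1))

def div(F, p):
    return sum(partial(lambda q, i=i: F(q)[i], p, i) for i in range(3))

def psi(q):
    x, y, z = q
    return (y * z, x * z, 2 * x * y)

residual = div(lambda q: curl(psi, q), [0.3, -0.7, 1.1])
print(abs(residual))  # ~ 0 up to round-off
```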
\section{Symplecticity of stochastic Maxwell equations}
In \cite{CH2016}, the authors introduced the general form of infinite-dimensional stochastic Hamiltonian systems based on a stochastic version of the variational principle, and showed that the phase flow preserves stochastic symplecticity on the phase space.
In this section, we consider the corresponding infinite-dimensional stochastic Hamiltonian system form of stochastic Maxwell equations \eqref{sto_max}. In the sequel, we assume that $\varepsilon$ and $\mu$ are two positive constants in order to obtain the symplecticity.
We rewrite stochastic Maxwell equations \eqref{sto_max} as
\begin{equation}\label{sto_max_1}
\begin{cases}
{\rm d}{\bf E}-\varepsilon^{-1}\nabla\times {\bf H}{\rm d}t=-\varepsilon^{-1}{\bf J}_{e}(t,x,{\bf E},{\bf H}){\rm d}t-\varepsilon^{-1}{\bf J}_e^{r}(t,x)\circ{\rm d}W(t),~ &(t,{\bf x})\in(0,~T]\times D,\\[2mm]
{\rm d}{\bf H}+\mu^{-1}\nabla\times {\bf E}{\rm d}t=-\mu^{-1}{\bf J}_{m}(t,x,{\bf E},{\bf H}){\rm d}t-\mu^{-1}{\bf J}_m^{r}(t,x)\circ{\rm d}W(t),~ &(t,{\bf x})\in(0,~T]\times D.
\end{cases}
\end{equation}
Denote by $G:[0,~T]\times L^{2}(D)^6\to L^{2}(D)^6$ the Nemytskij operator associated with ${\bf J}_{e}$, ${\bf J}_m$, which is defined by
\begin{equation}\label{F}
G(t,u(t))({\bf x})=\left( \begin{array}{c}
\mu^{-1}{\bf J}_{m}(t,{\bf x},{\bf E}(t,{\bf x}),{\bf H}(t,{\bf x}))\\
-\varepsilon^{-1}{\bf J}_{e}(t,{\bf x},{\bf E}(t,{\bf x}),{\bf H}(t,{\bf x}))
\end{array} \right), ~t\in[0,T],~{\bf x}\in D,~u(t)\in{\mathbb H}.
\end{equation}
The following lemma states the integrability condition for the existence of a potential such that $G(t,u)=\frac{\delta \widetilde{\mathcal H}_{1}(t,u)}{\delta u}$, which makes \eqref{sto_max_1} an infinite-dimensional stochastic Hamiltonian system.
To simplify the presentation, we assume that $G$ does not depend explicitly on time $t$; the time dependence causes no substantial difficulty in the analysis but leads to longer formulas.
\begin{lemma}
Let $G:L^{2}(D)^6\to L^{2}(D)^6$ be G\^ateaux differentiable, and let $DG(u)\in {\mathcal L}(L^{2}(D)^6;L^{2}(D)^6)$ be a symmetric operator, i.e.,
\[
\langle DG(u)\phi,~\psi\rangle_{L^{2}(D)^6}=\langle\phi,~DG(u)\psi\rangle_{L^{2}(D)^6},\quad \forall~\phi,\psi\in L^{2}(D)^6,
\]
Then there exists a functional $\widetilde{\mathcal H}_{1}:L^{2}(D)^6\to {\mathbb R}$ such that $$G(u)=\frac{\delta \widetilde{\mathcal H}_{1}(u)}{\delta u},$$ i.e.,
$\frac{\delta \widetilde{\mathcal H}_{1}}{\delta {\bf H}}=-\varepsilon^{-1}{\bf J}_{e}$ and $\frac{\delta \widetilde{\mathcal H}_{1}}{\delta {\bf E}}=\mu^{-1}{\bf J}_{m}$.
\end{lemma}
\begin{proof}
The functional $\widetilde{\mathcal H}_{1}(u)$ can be defined, up to an arbitrary additive constant $C\in{\mathbb R}$, as
\begin{equation}
\widetilde{\mathcal H}_{1}(u)=\int_0^1 \langle u,~G(\lambda u)\rangle_{L^{2}(D)^6}{\rm d}\lambda+C.
\end{equation}
The functional derivative of $\widetilde{\mathcal H}_{1}(u)$ leads to
\begin{align*}
\delta \widetilde{\mathcal H}_{1} (u)(\phi)=&\langle \frac{\delta \widetilde{\mathcal H}_{1}(u)}{\delta u},~\phi\rangle_{L^{2}(D)^6}
=\lim_{\epsilon\to 0}\frac{1}{\epsilon}\Big[\widetilde{\mathcal H}_{1} (u+\epsilon \phi)-\widetilde{\mathcal H}_{1} (u)\Big]\\
=&\lim_{\epsilon\to 0}\frac{1}{\epsilon}\Big[ \int_0^1 \langle u+\epsilon\phi,~G(\lambda u+\epsilon\lambda\phi)\rangle_{L^{2}(D)^6}-\langle u,~G(\lambda u)\rangle_{L^{2}(D)^6}{\rm d}\lambda \Big]\\
=& \int_0^1 \langle u,~\lim_{\epsilon\to 0}\frac{1}{\epsilon}\big[G(\lambda u+\epsilon\lambda\phi)-G(\lambda u)\big]\rangle_{L^{2}(D)^6}{\rm d}\lambda
+\lim_{\epsilon\to 0}\int_0^1 \langle \phi,~G(\lambda u+\epsilon\lambda\phi)\rangle_{L^{2}(D)^6}{\rm d}\lambda,
\end{align*}
where the last step follows from the Lebesgue dominated convergence theorem and the Lipschitz condition \eqref{bound partialJ}. By the definition of the G\^ateaux derivative, we get
\begin{align*}
\langle \frac{\delta \widetilde{\mathcal H}_{1}(u)}{\delta u},~\phi\rangle_{L^{2}(D)^6}
&= \int_0^1 \lambda\langle u,~DG(\lambda u)\phi\rangle_{L^{2}(D)^6}{\rm d}\lambda
+\int_0^1 \langle \phi,~G(\lambda u)\rangle_{L^{2}(D)^6}{\rm d}\lambda \\
&=\langle \int_0^1\Big(\lambda DG(\lambda u)u +G(\lambda u)\Big){\rm d}\lambda,~\phi\rangle_{L^{2}(D)^6},
\end{align*}
where we have used the symmetry property of $DG(u)$. Therefore,
\[
\frac{\delta \widetilde{\mathcal H}_{1}(u)}{\delta u}=\int_0^1\Big(\lambda DG(\lambda u)u +G(\lambda u)\Big){\rm d}\lambda\\
=\int_0^1 \frac{\rm d}{\rm d\lambda}\Big(\lambda G(\lambda u)\Big){\rm d}\lambda=G(u).
\]
Thus we finish the proof.
\end{proof}
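A finite-dimensional sketch of this construction, assuming the hypothetical linear map $G(u)=Au+b$ with symmetric matrix $A$ (so that $DG=A$ is symmetric): a quadrature of $\int_0^1\langle u,G(\lambda u)\rangle\,{\rm d}\lambda$ produces a potential whose gradient recovers $G$.

```python
# Finite-dimensional sketch of the lemma: for G(u) = A u + b with symmetric A
# (so DG = A is symmetric), the potential H1(u) = \int_0^1 <u, G(lam*u)> dlam
# satisfies grad H1 = G.  The matrix A and vector b are hypothetical data.

A = [[2.0, -1.0, 0.5],
     [-1.0, 3.0, 1.0],
     [0.5, 1.0, 1.5]]
b = [0.3, -0.2, 0.7]

def G(u):
    return [sum(A[i][j] * u[j] for j in range(3)) + b[i] for i in range(3)]

def H1(u, n=1000):
    # midpoint quadrature of \int_0^1 <u, G(lam*u)> dlam; exact here,
    # since the integrand is affine in lam
    total = 0.0
    for k in range(n):
        lam = (k + 0.5) / n
        Gu = G([lam * ui for ui in u])
        total += sum(ui * gi for ui, gi in zip(u, Gu)) / n
    return total

def grad(f, u, h=1e-6):
    g = []
    for i in range(3):
        up, um = list(u), list(u)
        up[i] += h
        um[i] -= h
        g.append((f(up) - f(um)) / (2 * h))
    return g

u = [0.4, -1.1, 0.8]
err = max(abs(gi - Gi) for gi, Gi in zip(grad(H1, u), G(u)))
print(err)  # ~ 0: the numerical gradient of H1 matches G
```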
Therefore, system \eqref{sto_max_1} is an infinite-dimensional stochastic Hamiltonian system, which can be written in the form
\begin{equation}
\begin{split}
\begin{bmatrix}
{\rm d}{\bf E}\\[2mm]
{\rm d}{\bf H}
\end{bmatrix}
&=\begin{bmatrix}
0 & Id\\[2mm]
-Id & 0
\end{bmatrix}
\begin{bmatrix}
\mu^{-1}\nabla\times{\bf E}+\mu^{-1}{\bf J}_m\\[2mm]
\varepsilon^{-1}\nabla\times{\bf H}-\varepsilon^{-1}{\bf J}_e
\end{bmatrix}
{\rm d}t
+\begin{bmatrix}
0 & Id\\[2mm]
-Id & 0
\end{bmatrix}
\begin{bmatrix}
\mu^{-1}{\bf J}_m^r \\[2mm]
-\varepsilon^{-1}{\bf J}_e^r
\end{bmatrix}
{\rm d}W(t)\\[2mm]
&={\mathbb J}\begin{bmatrix}
\frac{\delta{\mathcal H}_1}{\delta {\bf E}} \\[2mm]
\frac{\delta{\mathcal H}_1}{\delta {\bf H}}\end{bmatrix}
{\rm d}t
+{\mathbb J}\begin{bmatrix}
\frac{\delta{\mathcal H}_2}{\delta {\bf E}} \\[2mm]
\frac{\delta{\mathcal H}_2}{\delta {\bf H}}\end{bmatrix}
\circ{\rm d}W(t)
\end{split}
\end{equation}
where ${\mathbb J}=\begin{bmatrix} 0 & Id\\ -Id & 0\end{bmatrix}$ is the standard skew-adjoint operator on $L^2(D)^6$ equipped with the standard inner product, and the Hamiltonians are
\[
{\mathcal H}_1=\int_{D}\frac12\Big(\mu^{-1}{\bf E}\cdot \nabla\times{\bf E}
+\varepsilon^{-1}{\bf H}\cdot\nabla\times{\bf H}\Big){\rm d}x+\widetilde{\mathcal H}_{1},
\]
and
\[
{\mathcal H}_2=\int_{D}\Big(\mu^{-1}{\bf J}_{m}^r\cdot{\bf E}-\varepsilon^{-1}{\bf J}_e^r\cdot{\bf H}\Big){\rm d}x.
\]
For simplicity of notation, we denote ${\bf E}_0$, ${\bf H}_0$ by ${\bf e}$, ${\bf h}$, respectively.
The symplectic form for system \eqref{sto_max_1} is given by
\begin{equation}\label{symplectic structure}
\overline{\omega}(t)=\int_{D}{\rm d}{\bf E}(t,{\bf x})\wedge{\rm d}{\bf H}(t,{\bf x}){\rm d}{\bf x},
\end{equation}
where the overbar on $\omega$ is a reminder that the differential 2-form ${\rm d}{\bf E}\wedge {\rm d}{\bf H}$ is integrated over the space. Preservation of the symplectic form \eqref{symplectic structure} means that the spatial integral of the oriented areas of projections onto the coordinate planes $({\bf e},{\bf h})$ is an integral invariant. We say that the phase flow of \eqref{sto_max_1} preserves symplectic structure if and only if
\[
\frac{{\rm d}}{{\rm d}t}\overline{\omega}(t)=0.
\]
\begin{remark}
To avoid confusion, we note that the differentials in \eqref{sto_max_1} and \eqref{symplectic structure} have different meanings. In \eqref{sto_max_1}, ${\bf E}$, ${\bf H}$ are treated as functions of time, and ${\bf e}$, ${\bf h}$ are fixed parameters, while differentiation in \eqref{symplectic structure} is made with respect to the initial data ${\bf e}$, ${\bf h}$.
\end{remark}
We have the following result on the stochastic symplecticity of stochastic Maxwell equations \eqref{sto_max_1}.
\begin{theorem}
The phase flow of stochastic Maxwell equations \eqref{sto_max_1} preserves symplectic structure:
\begin{equation}
\overline{\omega}(t)=\overline{\omega}(0), \quad {\mathbb P}\text{-}a.s.
\end{equation}
\end{theorem}
\begin{proof}
By the change-of-variables formula for differential forms, we have
\begin{equation}\label{omega_def}
\begin{split}
\overline{\omega}(t)=&\int_{D}\Big(\frac{\partial {\bf E}}{\partial {\bf e}}{\rm d}{\bf e}+\frac{\partial{\bf E}}{\partial{\bf h}}{\rm d}{\bf h}\Big)\wedge
\Big(\frac{\partial {\bf H}}{\partial {\bf e}}{\rm d}{\bf e}+\frac{\partial{\bf H}}{\partial{\bf h}}{\rm d}{\bf h}\Big){\rm d}{\bf x}\\[1.5mm]
=&\int_{D}\Big[{\rm d}{\bf e}\wedge \Big(\frac{\partial {\bf E}}{\partial {\bf e}}\Big)^{T}\frac{\partial {\bf H}}{\partial {\bf e}}{\rm d}{\bf e}\Big]{\rm d}{\bf x}
+\int_{D}\Big[{\rm d}{\bf h}\wedge \Big(\frac{\partial {\bf E}}{\partial {\bf h}}\Big)^{T}\frac{\partial {\bf H}}{\partial {\bf h}}{\rm d}{\bf h}\Big]{\rm d}{\bf x}\\[1.5mm]
&+\int_{D}\Big[{\rm d}{\bf e}\wedge
\bigg(\Big(\frac{\partial {\bf E}}{\partial {\bf e}}\Big)^{T}
\frac{\partial{\bf H}}{\partial{\bf h}}
-\Big(\frac{\partial {\bf H}}{\partial {\bf e}}\Big)^{T}
\frac{\partial{\bf E}}{\partial{\bf h}}\bigg)
{\rm d}{\bf h} \Big]{\rm d}{\bf x}.
\end{split}
\end{equation}
We set ${\bf E}_{\bf e}=\frac{\partial {\bf E}}{\partial {\bf e}}$, ${\bf E}_{\bf h}=\frac{\partial {\bf E}}{\partial {\bf h}}$,
${\bf H}_{\bf e}=\frac{\partial {\bf H}}{\partial {\bf e}}$ and ${\bf H}_{\bf h}=\frac{\partial {\bf H}}{\partial {\bf h}}$. Now, thanks to the differentiability of solutions of infinite-dimensional stochastic equations with respect to the initial data (see \cite[Chapter 9]{PZ2014}), we have
\begin{align}
&{\rm d}{\bf E}_{\bf e}=\Big(\varepsilon^{-1}\nabla\times{\bf H}_{\bf e}+\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf E}\delta {\bf H}}{\bf E}_{\bf e}+\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf H}^2}{\bf H}_{\bf e}\Big){\rm d}t,~{\bf E}_{\bf e}(0)=Id,\label{4.4}\\
&{\rm d}{\bf H}_{\bf e}=\Big(-\mu^{-1}\nabla\times{\bf E}_{\bf e}
-\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf E}^2}{\bf E}_{\bf e}-\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf E}\delta{\bf H}}{\bf H}_{\bf e}
\Big){\rm d}t,~{\bf H}_{\bf e}(0)=0,\label{4.5}\\
&{\rm d}{\bf E}_{\bf h}=\Big(\varepsilon^{-1}\nabla\times{\bf H}_{\bf h}
+\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf E}\delta {\bf H}}{\bf E}_{\bf h}+\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf H}^2}{\bf H}_{\bf h}
\Big){\rm d}t,~{\bf E}_{\bf h}(0)=0,\label{4.6}\\
&{\rm d}{\bf H}_{\bf h}=\Big(-\mu^{-1}\nabla\times{\bf E}_{\bf h}
-\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf E}^2}{\bf E}_{\bf h}-\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf E}\delta{\bf H}}{\bf H}_{\bf h}
\Big){\rm d}t,~{\bf H}_{\bf h}(0)=Id.\label{4.7}
\end{align}
From equality \eqref{omega_def}, we get
\begin{equation}\label{4.8}
\begin{split}
\frac{{\rm d}\overline{\omega}(t)}{{\rm d} t}=&
\int_{D}\left[{\rm d}{\bf e}\wedge \frac{\rm d}{{\rm d}t}\bigg(\Big(\frac{\partial {\bf E}}{\partial {\bf e}}\Big)^{T}\frac{\partial {\bf H}}{\partial {\bf e}}\bigg){\rm d}{\bf e}
+{\rm d}{\bf h}\wedge \frac{\rm d}{{\rm d}t}\bigg(\Big(\frac{\partial {\bf E}}{\partial {\bf h}}\Big)^{T}\frac{\partial {\bf H}}{\partial {\bf h}}\bigg){\rm d}{\bf h}\right]{\rm d}{\bf x}\\[2mm]
&+\int_{D}\left[{\rm d}{\bf e}\wedge
\frac{\rm d}{{\rm d}t}\bigg(\Big(\frac{\partial {\bf E}}{\partial {\bf e}}\Big)^{T}
\frac{\partial{\bf H}}{\partial{\bf h}}
-\Big(\frac{\partial {\bf H}}{\partial {\bf e}}\Big)^{T}
\frac{\partial{\bf E}}{\partial{\bf h}}\bigg)
{\rm d}{\bf h} \right]{\rm d}{\bf x}.
\end{split}
\end{equation}
Substituting equations \eqref{4.4}-\eqref{4.7} into the above equality, and using the symmetry of $\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf E}\delta {\bf H}}$, $\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf E}^2}$ and $\frac{\delta^2\widetilde{\mathcal H}_{1}}{\delta {\bf H}^2}$,
we obtain
\begin{align*}
\frac{{\rm d}\overline{\omega}(t)}{{\rm d} t}=&\int_{D}\Big[{\rm d}{\bf e}\wedge \bigg(\varepsilon^{-1}\big(\nabla\times{\bf H}_{\bf e}\big)^{T}{\bf H}_{\bf e}
-\mu^{-1}{\bf E}_{\bf e}^{T}\nabla\times{\bf E}_{\bf e}\bigg){\rm d}{\bf e}\Big]{\rm d}{\bf x}\\
&+\int_{D}\Big[{\rm d}{\bf h}\wedge\bigg(\varepsilon^{-1}\big(\nabla\times{\bf H}_{\bf h}\big)^{T}{\bf H}_{\bf h}
-\mu^{-1}{\bf E}_{\bf h}^{T}\nabla\times{\bf E}_{\bf h}\bigg){\rm d}{\bf h}\Big]{\rm d}{\bf x}\\
&+\int_{D} \Big[{\rm d}{\bf e}\wedge
\bigg(\varepsilon^{-1}\big(\nabla\times{\bf H}_{\bf e}\big)^{T}{\bf H}_{\bf h}
-\mu^{-1}{\bf E}_{\bf e}^{T}\nabla\times{\bf E}_{\bf h}\bigg){\rm d}{\bf h}
\Big] {\rm d}{\bf x}\\
&+\int_{D} \Big[{\rm d}{\bf e}\wedge
\bigg(
\mu^{-1}\big(\nabla\times{\bf E}_{\bf e}\big)^{T}{\bf E}_{\bf h}
-\varepsilon^{-1}{\bf H}_{\bf e}^{T}\nabla\times{\bf H}_{\bf h}\bigg)
{\rm d}{\bf h} \Big] {\rm d}{\bf x}\\
=&\int_{D}\varepsilon^{-1}\bigg[ {\rm d}{\bf e}\wedge\big(\nabla\times{\bf H}_{\bf e}\big)^{T}{\bf H}_{\bf e}{\rm d}{\bf e}+{\rm d}{\bf h}\wedge \big(\nabla\times{\bf H}_{\bf h}\big)^{T}{\bf H}_{\bf h}{\rm d}{\bf h} \\
&\qquad\qquad+{\rm d}{\bf e}\wedge \big(\nabla\times{\bf H}_{\bf e}\big)^{T}{\bf H}_{\bf h}{\rm d}{\bf h}
-{\rm d}{\bf e}\wedge {\bf H}_{\bf e}^{T}\nabla\times{\bf H}_{\bf h}{\rm d}{\bf h}
\bigg]{\rm d}{\bf x}\\
&+\int_{D}\mu^{-1}\bigg[ {\rm d}{\bf e}\wedge\big(\nabla\times{\bf E}_{\bf e}\big)^{T}{\bf E}_{\bf e}{\rm d}{\bf e}+{\rm d}{\bf h}\wedge \big(\nabla\times{\bf E}_{\bf h}\big)^{T}{\bf E}_{\bf h}{\rm d}{\bf h} \\
&\qquad\qquad+{\rm d}{\bf e}\wedge \big(\nabla\times{\bf E}_{\bf e}\big)^{T}{\bf E}_{\bf h}{\rm d}{\bf h}
-{\rm d}{\bf e}\wedge {\bf E}_{\bf e}^{T}\nabla\times{\bf E}_{\bf h}{\rm d}{\bf h}
\bigg]{\rm d}{\bf x}.
\end{align*}
The properties of wedge product lead to
\begin{align}
\frac{{\rm d}\overline{\omega}(t)}{{\rm d} t} =&\int_{D}\varepsilon^{-1}\bigg[ \nabla\times{\bf H}_{\bf e}{\rm d}{\bf e}\wedge{\bf H}_{\bf e}{\rm d}{\bf e}+\nabla\times{\bf H}_{\bf h}{\rm d}{\bf h}\wedge {\bf H}_{\bf h}{\rm d}{\bf h} \nonumber\\
&\qquad\qquad+\nabla\times{\bf H}_{\bf e}{\rm d}{\bf e}\wedge {\bf H}_{\bf h}{\rm d}{\bf h}
-{\bf H}_{\bf e}{\rm d}{\bf e}\wedge \nabla\times{\bf H}_{\bf h}{\rm d}{\bf h}
\bigg]{\rm d}{\bf x}\nonumber\\
&+\int_{D}\mu^{-1}\bigg[ \nabla\times{\bf E}_{\bf e}{\rm d}{\bf e}\wedge{\bf E}_{\bf e}{\rm d}{\bf e}+\nabla\times{\bf E}_{\bf h}{\rm d}{\bf h}\wedge {\bf E}_{\bf h}{\rm d}{\bf h} \nonumber\\
&\qquad\qquad+\nabla\times{\bf E}_{\bf e}{\rm d}{\bf e}\wedge {\bf E}_{\bf h}{\rm d}{\bf h}
-{\bf E}_{\bf e}{\rm d}{\bf e}\wedge \nabla\times{\bf E}_{\bf h}{\rm d}{\bf h}
\bigg]{\rm d}{\bf x}\label{100}\\
=&\int_{D}\varepsilon^{-1}\left({\rm d}\big(\nabla\times {\bf H}\big)\wedge {\rm d}{\bf H}\right)+\mu^{-1}\left({\rm d}\big(\nabla\times{\bf E}\big)\wedge{\rm d}{\bf E}\right){\rm d}{\bf x}\nonumber\\
=&\int_{D}\varepsilon^{-1}\left(\frac{\partial}{\partial x}({\rm d}H_2\wedge {\rm d}H_3)+\frac{\partial}{\partial y}({\rm d}H_3\wedge {\rm d}H_1)+\frac{\partial}{\partial z}({\rm d}H_1\wedge {\rm d}H_2)\right){\rm d}{\bf x}\nonumber\\
&\quad+\int_{D}\mu^{-1}\left(\frac{\partial}{\partial x}({\rm d}E_2\wedge {\rm d}E_3)+\frac{\partial}{\partial y}({\rm d}E_3\wedge {\rm d}E_1)+\frac{\partial}{\partial z}({\rm d}E_1\wedge {\rm d}E_2)\right){\rm d}{\bf x}.\nonumber
\end{align}
Since the last integrands are in divergence form, the zero boundary conditions immediately yield the result. Therefore the proof is completed.
\end{proof}
\section{Stochastic Runge-Kutta semidiscretizations}
In this section, we will study the stochastic Runge-Kutta semidiscretizations for stochastic Maxwell equations and state our main results.
For the time interval $[0,T]$, we introduce the uniform partition $0=t_0<t_1<\ldots<t_{N}=T$ with step size $\tau=T/N$, and set $\Delta W^{n+1}=W(t_{n+1})-W(t_n)$, $n=0,1,\ldots,N-1$. Applying an $s$-stage stochastic Runge-Kutta method, which depends only on the increments of the Wiener process, to \eqref{sM_equations} in the temporal direction, we obtain
\begin{subequations}\label{RK method}
\begin{align}
& U_{ni}=u^{n}+\tau\sum_{j=1}^{s}a_{ij}\big(MU_{nj}+F(t_n+c_j\tau,U_{nj})\big)+\Delta W^{n+1}\sum_{j=1}^{s}\widetilde{a}_{ij}B(t_{n}+c_j\tau),\label{RK method_1}\\
& u^{n+1}=u^{n}+\tau\sum_{i=1}^{s}b_i\big(MU_{ni}+F(t_n+c_i\tau,U_{ni})\big)+\Delta W^{n+1}\sum_{i=1}^{s}\widetilde{b}_{i}B(t_n+c_i\tau),\label{RK method_2}
\end{align}
\end{subequations}
for $i=1,\ldots,s$ and $n=0,\ldots,N-1$. Here $A=\big(a_{ij}\big)_{s\times s}$ and $\widetilde{A}=\big(\widetilde{a}_{ij}\big)_{s\times s}$ are real $s\times s$ matrices, while $b^{T}=(b_1,\ldots,b_s)$ and $\widetilde{b}^{T}=(\widetilde{b}_1,\ldots,\widetilde{b}_s)$ are real vectors.
In order to prove, for fixed $n\in\mathbb{N}$, the existence of a solution of \eqref{RK method_1}-\eqref{RK method_2}, where the implicitness may come from the drift part, we first introduce the concepts of algebraic stability and the coercivity condition for a Runge-Kutta method $(A,b)$.
\begin{definition}\label{stable}
A Runge-Kutta method $(A,b)$ with
$A=\big(a_{ij}\big)_{i,j=1}^{s}$ and $b=\big(b_i\big)_{i=1}^{s}$ is called algebraically stable, if $b_i\geq 0$ for $i=1,\ldots, s$ and
\begin{equation}
{\mathcal M}=\big(m_{ij}\big)_{i,j=1}^{s}\quad\text{with}\quad
m_{ij}=b_ia_{ij}+b_j a_{ji}-b_{i}b_{j}
\end{equation}
is positive semidefinite.
\end{definition}
\begin{definition}\label{coercivity}
We say that a Runge-Kutta matrix $A$ satisfies the coercivity condition if it is invertible and there exist a positive definite diagonal matrix ${\mathcal K}={\rm diag}(k_i)$ and a positive scalar $\alpha$ such that
\begin{equation}\label{coercivity condition}
u^{T}{\mathcal K}A^{-1}u\geq \alpha u^{T}{\mathcal K}u,\quad \text{for all }u\in{\mathbb R}^{s}.
\end{equation}
\end{definition}
The coercivity condition plays an important role in establishing the existence of the numerical solution of a Runge-Kutta method.
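The algebraic stability matrix ${\mathcal M}$ can be computed mechanically. The following sketch evaluates it for three standard tableaux: implicit Euler, the midpoint rule, and (as an extra illustrative example not discussed in the text) the 2-stage Gauss method.

```python
# Compute the algebraic stability matrix M = (b_i a_ij + b_j a_ji - b_i b_j)
# for a few standard Runge-Kutta tableaux and check positive semidefiniteness.
import math

def stability_matrix(A, b):
    s = len(b)
    return [[b[i] * A[i][j] + b[j] * A[j][i] - b[i] * b[j]
             for j in range(s)] for i in range(s)]

M_euler = stability_matrix([[1.0]], [1.0])        # implicit Euler
M_mid = stability_matrix([[0.5]], [1.0])          # midpoint rule
g = math.sqrt(3) / 6
M_gauss = stability_matrix([[0.25, 0.25 - g],
                            [0.25 + g, 0.25]], [0.5, 0.5])  # 2-stage Gauss

print(M_euler, M_mid)  # [[1.0]] [[0.0]]: both positive semidefinite
```

The midpoint and Gauss tableaux give ${\mathcal M}=0$, the borderline case that corresponds to symplecticity.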
To present more clearly the stochastic Runge-Kutta methods \eqref{RK method_1}-\eqref{RK method_2}, we consider two concrete examples.
\begin{example}[Implicit Euler method]
The implicit Euler method is an implicit stochastic Runge-Kutta method with Butcher Tableau given by
\begin{center}
\begin{tabular}{c|c}
{\rm 1} & {\rm 1}\\
\hline & {\rm 1}
\end{tabular}~,
\quad\quad\begin{tabular}{c|c}
{\rm 1} & {\rm 1}\\
\hline & {\rm 1}
\end{tabular}~.
\end{center}
If we apply the implicit Euler method to the stochastic Maxwell equations \eqref{sM_equations}, we obtain the recursion
\begin{align*}
&U_{n1}=u^n+\tau \Big(MU_{n1}+F(t_{n+1},U_{n1})\Big)+\Delta W^{n+1}B(t_{n+1}),\\
&u^{n+1}=u^n+\tau \Big(MU_{n1}+F(t_{n+1},U_{n1})\Big)+\Delta W^{n+1}B(t_{n+1}),
\end{align*}
where we abbreviated $t_{n+1}=t_n+\tau$. Clearly, we have $U_{n1}=u^{n+1}$ and hence we can write the
implicit Euler method compactly as
\begin{equation}\label{im Euler method}
u^{n+1}=u^n+\tau \Big(Mu^{n+1}+F(t_{n+1},u^{n+1})\Big)+\Delta W^{n+1}B(t_{n+1}).
\end{equation}
By introducing operator
\begin{equation}\label{operators dis_euler}
S^{\rm IE}_{\tau}=(Id-{\tau}M)^{-1},
\end{equation}
we can write the equivalent form of the implicit Euler method as
\begin{equation}\label{im Euler un}
u^{n+1}=S^{\rm IE}_{\tau}u^n+\tau S^{\rm IE}_{\tau}F^{n+1}+S^{\rm IE}_{\tau}B^{n+1}\Delta W^{n+1}.
\end{equation}
where $F^{n+1}=F(t_{n+1},u^{n+1})$ and $B^{n+1}=B(t_{n+1})$. Note that the implicit Euler method is algebraically stable with ${\mathcal M}=1$, and satisfies the coercivity condition.
\end{example}
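For a skew-adjoint $M$, the operator $S^{\rm IE}_{\tau}$ is a contraction. A $2\times 2$ skew-symmetric stand-in for the Maxwell operator (the matrix and data are hypothetical) illustrates this:

```python
# One deterministic implicit Euler step u1 = (Id - tau*M)^{-1} u0 with a 2x2
# skew-symmetric stand-in for the Maxwell operator; skew-symmetry makes
# S_tau^IE a strict contraction: ||u1|| = ||u0|| / sqrt(1 + tau^2).
tau = 0.1
# For M = [[0, 1], [-1, 0]], Id - tau*M = [[1, -tau], [tau, 1]], whose
# inverse is [[1, tau], [-tau, 1]] / (1 + tau^2).
det = 1.0 + tau * tau
S = [[1.0 / det, tau / det],
     [-tau / det, 1.0 / det]]

u0 = [1.0, 2.0]
u1 = [S[0][0] * u0[0] + S[0][1] * u0[1],
      S[1][0] * u0[0] + S[1][1] * u0[1]]

norm = lambda v: (v[0] ** 2 + v[1] ** 2) ** 0.5
print(norm(u1) < norm(u0))  # True
```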
\begin{example}[Midpoint method]
The midpoint method is another implicit stochastic Runge-Kutta method, whose Butcher tableau is given by
\begin{center}
\begin{tabular}{c|c}
{\rm 1/2} & {\rm 1/2}\\
\hline & {\rm 1}
\end{tabular}~,
\quad\quad\begin{tabular}{c|c}
{\rm 1/2} & {\rm 1/2}\\
\hline & {\rm 1}
\end{tabular}.
\end{center}
If we apply the midpoint method to the stochastic Maxwell equations \eqref{sM_equations}, we obtain the recursion
\begin{align*}
&U_{n1}=u^n+\frac{\tau}{2} \Big(MU_{n1}+F(t_{n+1/2},U_{n1})\Big)+\frac{\Delta W^{n+1}}{2}B(t_{n+1/2}),\\
&u^{n+1}=u^n+\tau \Big(MU_{n1}+F(t_{n+1/2},U_{n1})\Big)+\Delta W^{n+1}B(t_{n+1/2}),
\end{align*}
where we abbreviated $t_{n+1/2}=t_n+\tau/2$. Clearly, we have $U_{n1}=(u^{n+1}+u^{n})/2$ and hence we can write the
midpoint method compactly as
\begin{equation}\label{midpoint method}
u^{n+1}=u^{n}+\frac{\tau}{2}M(u^{n+1}+u^{n})+\tau F^{n+\frac12}+ B^{n+\frac12}\Delta W^{n+1},
\end{equation}
where $F^{n+\frac12}=F(t_{n+\frac12},(u^n+u^{n+1})/2)$ and $B^{n+\frac12}=B(t_{n+\frac12})$. By introducing operators
\begin{equation}\label{operators dis_Mid}
S^{\rm Mid}_{\tau}=(Id-\frac{\tau}{2}M)^{-1}(Id+\frac{\tau}{2}M), \quad\text{and}\quad T^{\rm Mid}_{\tau}=(Id-\frac{\tau}{2}M)^{-1},
\end{equation} we can write the equivalent form of midpoint method as
\begin{equation}\label{mild un}
u^{n+1}=S^{\rm Mid}_{\tau}u^n+\tau T^{\rm Mid}_{\tau}F^{n+\frac12}+T^{\rm Mid}_{\tau}B^{n+\frac12}\Delta W^{n+1}.
\end{equation}
Note that the midpoint method is algebraically stable with ${\mathcal M}=0$, which implies stochastic symplecticity (see Theorem \ref{symplecticity of RK}), and satisfies the coercivity condition.
\end{example}
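In the same $2\times 2$ skew-symmetric toy model (a hypothetical stand-in for the Maxwell operator $M$), $S^{\rm Mid}_{\tau}$ is the Cayley transform of $\frac{\tau}{2}M$ and preserves the norm exactly, in contrast with the implicit Euler operator:

```python
# The midpoint operator S = (Id - tau/2 M)^{-1}(Id + tau/2 M) is a Cayley
# transform; for skew-symmetric M it is a rotation, hence norm-preserving.
tau = 0.1
h = tau / 2
# M = [[0, 1], [-1, 0]]: (Id - h*M)^{-1} = [[1, h], [-h, 1]] / (1 + h^2).
det = 1.0 + h * h
Ainv = [[1.0 / det, h / det], [-h / det, 1.0 / det]]
Bpl = [[1.0, h], [-h, 1.0]]  # Id + h*M

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = matmul(Ainv, Bpl)
u0 = [1.0, 2.0]
u1 = [S[0][0] * u0[0] + S[0][1] * u0[1],
      S[1][0] * u0[0] + S[1][1] * u0[1]]
norm = lambda v: (v[0] ** 2 + v[1] ** 2) ** 0.5
print(abs(norm(u1) - norm(u0)))  # ~ 0: the norm is preserved
```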
\subsection{Symplectic condition of stochastic Runge-Kutta semidiscretizations}
In this subsection, we analyze the symplecticity condition for the stochastic Runge-Kutta semidiscretizations \eqref{RK method_1}-\eqref{RK method_2}.
\begin{theorem}\label{symplecticity of RK}
Assume that the coefficients $a_{ij},b_i$ of the stochastic Runge-Kutta method \eqref{RK method_1}-\eqref{RK method_2} satisfy
\begin{align}\label{cond00}
m_{ij}=b_ia_{ij}+b_j a_{ji}-b_{i}b_{j}\equiv 0,
\end{align}
for all $i,j=1,2,\cdots,s$. Then the scheme \eqref{RK method_1}-\eqref{RK method_2} is stochastically symplectic, with the discrete stochastic symplectic conservation law, ${\mathbb P}$-a.s.,
\[\bar{\omega}^{n+1}=\int_{D}{\rm d}{\bf E}^{n+1}\wedge {\rm d}{\bf H}^{n+1}{\rm d}{\bf x} =\int_{D}{\rm d}{\bf E}^{n}\wedge {\rm d}{\bf H}^{n}{\rm d}{\bf x}=\bar{\omega}^{n}.\]
\end{theorem}
\begin{proof}
It follows from equations \eqref{RK method_1} and \eqref{RK method_2} that
\begin{subequations}
\begin{align}
&{\rm d}U_{ni}={\rm d}u^n+\tau\sum_{j=1}^{s}a_{ij}M{\rm d}U_{nj}+\tau\sum_{j=1}^{s}a_{ij}{\mathbb J}\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{nj},\label{var_RK_1}\\
&{\rm d}u^{n+1}={\rm d}u^{n}+\tau\sum_{i=1}^{s}b_iM{\rm d}U_{ni}+\tau\sum_{i=1}^{s}b_{i}{\mathbb J}\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{ni},\label{var_RK_2}
\end{align}
\end{subequations}
where we used $F={\mathbb J}\frac{ \delta\widetilde{\mathcal H}_{1} }{\delta u}$.
Therefore, we have
\begin{align}\label{eq1}
&{\rm d}u^{n+1}\wedge {\mathbb J}{\rm d}u^{n+1}-{\rm d}u^{n}\wedge {\mathbb J}{\rm d}u^{n}\nonumber\\
&=\left({\rm d}u^{n}+\tau\sum_{i=1}^{s}b_iM{\rm d}U_{ni}+\tau\sum_{i=1}^{s}b_{i}{\mathbb J}\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{ni}\right)\nonumber\\
&\qquad\wedge {\mathbb J}\left({\rm d}u^{n}+\tau\sum_{i=1}^{s}b_iM{\rm d}U_{ni}+\tau\sum_{i=1}^{s}b_{i}{\mathbb J}\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{ni}\right)-{\rm d}u^{n}\wedge {\mathbb J}{\rm d}u^{n}\nonumber\\
&=\tau\sum_{i=1}^{s}b_i\left({\rm d}u^n\wedge{\mathbb J}M{\rm d}U_{ni}+M{\rm d}U_{ni}\wedge{\mathbb J}{\rm d}u^n\right)\\
&\qquad+\tau\sum_{i=1}^{s}b_{i}\Big( {\rm d}u^n\wedge{\mathbb J}^2\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{ni}+{\mathbb J}\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{ni}\wedge{\mathbb J}{\rm d}u^n \Big)\nonumber\\
&\qquad+\tau^2\sum_{i,j=1}^{s}b_ib_j\Big(M{\rm d}U_{ni}\wedge{\mathbb J}M{\rm d}U_{nj}+
{\mathbb J}\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{ni}\wedge {\mathbb J}^2\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{nj}\Big)\nonumber\\
&\qquad+\tau^2\sum_{i,j=1}^{s}b_ib_j\Big(
M{\rm d}U_{ni}\wedge {\mathbb J}^2\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{nj}+{\mathbb J}\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{ni}\wedge {\mathbb J}M{\rm d}U_{nj}\nonumber
\Big).
\end{align}
From \eqref{var_RK_1}, we have
\begin{equation*}
{\rm d}u^n={\rm d}U_{ni}-\tau\sum_{j=1}^{s}a_{ij}M{\rm d}U_{nj}-\tau\sum_{j=1}^{s}a_{ij}{\mathbb J}\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{nj}.
\end{equation*}
Substituting the above equation into the first and second terms on the right-hand side of \eqref{eq1}, we obtain
\begin{equation}\label{eq2}
\begin{split}
&{\rm d}u^{n+1}\wedge {\mathbb J}{\rm d}u^{n+1}-{\rm d}u^{n}\wedge {\mathbb J}{\rm d}u^{n}\\
&=\tau\sum_{i=1}^{s}b_i\left({\rm d}U_{ni}\wedge{\mathbb J}M{\rm d}U_{ni}+M{\rm d}U_{ni}\wedge{\mathbb J}{\rm d}U_{ni}\right)\\
&\qquad+\tau\sum_{i=1}^{s}b_i\left({\rm d}U_{ni}\wedge{\mathbb J}^2\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{ni}+{\mathbb J}\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{ni}\wedge{\mathbb J}{\rm d}U_{ni}\right)\\
&\qquad+\tau^2\sum_{i,j=1}^{s}\left(b_ib_j-b_ia_{ij}-b_ja_{ji}\right)\left(M{\rm d}U_{ni}\wedge{\mathbb J}M{\rm d}U_{nj}\right)\\
&\qquad+2\tau^2\sum_{i,j=1}^{s}\left(b_ib_j-b_ia_{ij}-b_ja_{ji}\right)\left(M{\rm d}U_{ni}\wedge{\mathbb J}^2\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{nj}\right)\\
&\qquad+\tau^2\sum_{i,j=1}^{s}\left(b_ib_j-b_ia_{ij}-b_ja_{ji}\right)\left({\mathbb J}\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{ni}\wedge{\mathbb J}^2\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}{\rm d}U_{nj}\right).
\end{split}
\end{equation}
From the symmetry of $\frac{\delta^2 \widetilde{\mathcal H}_{1}}{\delta u^2}$, the second term on the right-hand side of \eqref{eq2} vanishes. By the symplectic condition \eqref{cond00}, the third, fourth and fifth terms on the right-hand side of \eqref{eq2} vanish as well. Therefore,
\begin{equation*}
\begin{split}
{\rm d}u^{n+1}\wedge {\mathbb J}{\rm d}u^{n+1}-{\rm d}u^{n}\wedge {\mathbb J}{\rm d}u^{n}
=\tau\sum_{i=1}^{s}b_i\left({\rm d}U_{ni}\wedge{\mathbb J}M{\rm d}U_{ni}+M{\rm d}U_{ni}\wedge{\mathbb J}{\rm d}U_{ni}\right).
\end{split}
\end{equation*}
Recalling $u=\begin{pmatrix}{\bf E}\\ {\bf H}\end{pmatrix}$ and the Maxwell operator $M$ in \eqref{M_operator}, and using the skew-symmetry of ${\mathbb J}$, it yields
\begin{align}
&{\rm d}{\bf E}^{n+1}\wedge {\rm d}{\bf H}^{n+1}-{\rm d}{\bf E}^{n}\wedge {\rm d}{\bf H}^{n}\nonumber\\
&=\frac{1}{2}\left({\rm d}u^{n+1}\wedge {\mathbb J}{\rm d}u^{n+1}-{\rm d}u^{n}\wedge {\mathbb J}{\rm d}u^{n}\right)\\
&=\tau\sum_{i=1}^{s}b_i\left({\rm d}U_{ni}\wedge{\mathbb J}M{\rm d}U_{ni}\right)\nonumber\\
&=-\tau\sum_{i=1}^{s}b_i\left[\mu^{-1}{\rm d}{\bf E}_{ni}\wedge(\nabla\times{\rm d}{\bf E}_{ni})+\varepsilon^{-1}{\rm d}{\bf H}_{ni}\wedge(\nabla\times{\rm d}{\bf H}_{ni})\right].\nonumber
\end{align}
Thereby, arguing as in the last two steps of \eqref{100}, it holds that
\begin{equation*}
\begin{split}
&\int_{D}{\rm d}{\bf E}^{n+1}\wedge {\rm d}{\bf H}^{n+1}{\rm d}{\bf x}-\int_{D}{\rm d}{\bf E}^{n}\wedge {\rm d}{\bf H}^{n}{\rm d}{\bf x}\\
&=-\tau\sum_{i=1}^{s}b_i\int_{D}\left[\mu^{-1}{\rm d}{\bf E}_{ni}\wedge(\nabla\times{\rm d}{\bf E}_{ni})+\varepsilon^{-1}{\rm d}{\bf H}_{ni}\wedge(\nabla\times{\rm d}{\bf H}_{ni})\right]{\rm d}{\bf x}
=0.
\end{split}
\end{equation*}
Thus, the proof is completed.
\end{proof}
\begin{remark}
Note that a symplectic Runge-Kutta method automatically satisfies the algebraic stability condition.
\end{remark}
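The finite-dimensional analogue of the symplectic condition can be checked directly: for a linear Hamiltonian system ${\rm d}u = {\mathbb J} C u\,{\rm d}t$ with symmetric $C$ (a hypothetical $2\times 2$ example), the midpoint step matrix $S$ satisfies $S^{T}{\mathbb J}S={\mathbb J}$.

```python
# Symplecticity of the midpoint step for a linear Hamiltonian system
# du = J C u dt with symmetric C: the step matrix
# S = (I - tau/2 JC)^{-1}(I + tau/2 JC) satisfies S^T J S = J.
tau = 0.05
J = [[0.0, 1.0], [-1.0, 0.0]]
C = [[2.0, 0.5], [0.5, 1.0]]  # symmetric Hessian of the quadratic Hamiltonian

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    d = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[P[1][1] / d, -P[0][1] / d], [-P[1][0] / d, P[0][0] / d]]

L = matmul(J, C)
Im = [[1.0 - tau / 2 * L[0][0], -tau / 2 * L[0][1]],
      [-tau / 2 * L[1][0], 1.0 - tau / 2 * L[1][1]]]
Ip = [[1.0 + tau / 2 * L[0][0], tau / 2 * L[0][1]],
      [tau / 2 * L[1][0], 1.0 + tau / 2 * L[1][1]]]
S = matmul(inv2(Im), Ip)
ST = [[S[0][0], S[1][0]], [S[0][1], S[1][1]]]
STJS = matmul(matmul(ST, J), S)
err = max(abs(STJS[i][j] - J[i][j]) for i in range(2) for j in range(2))
print(err)  # ~ 0: the midpoint step is symplectic
```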
\subsection{Regularity of stochastic Runge-Kutta semidiscretizations}
In this subsection, we present the well-posedness and regularity results for the numerical solution given by the stochastic Runge-Kutta method \eqref{RK method_1}-\eqref{RK method_2} satisfying the algebraic stability and coercivity conditions.
First, we utilize the Kronecker product to rewrite \eqref{RK method_1}-\eqref{RK method_2} in the compact form
\begin{subequations}\label{RK_compact}
\begin{align}
&U_{n}={\bf 1}_{s}\otimes u^n+\tau \big(A\otimes M\big)U_{n}+\tau \big(A\otimes I\big)F^{n}(U_n)+\big(\widetilde{A}\otimes I\big)B^{n}\Delta W^{n+1},\label{RK_compact_1}\\[2mm]
&u^{n+1}=u^{n}+\tau\big(b^{T}\otimes M\big)U_n+\tau\big(b^{T}\otimes I\big)F^{n}(U_n)+\big(\widetilde{b}^{T}\otimes I\big)B^{n}\Delta W^{n+1},\label{RK_compact_2}
\end{align}
\end{subequations}
where ${\bf 1}_{s}=[1,\ldots,1]^{T}$, $I$ is the identity matrix of size $6\times 6$, and
\[
U_{n}=\begin{bmatrix}
U_{n1}\\ U_{n2}\\ \vdots \\U_{ns}
\end{bmatrix},
\qquad
F^{n}(U_n)=\begin{bmatrix}
F(t_n+c_1\tau,U_{n1})\\F(t_n+c_2\tau,U_{n2})\\ \vdots \\F(t_n+c_s\tau,U_{ns})
\end{bmatrix},
\qquad
B^{n}=\begin{bmatrix}
B(t_n+c_1\tau)\\B(t_n+c_2\tau)\\ \vdots \\B(t_n+c_s\tau)
\end{bmatrix}.
\]
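Manipulations of the compact form rely on the mixed-product property of the Kronecker product, $(P\otimes Q)(R\otimes S)=(PR)\otimes(QS)$; in particular $({\mathcal K}A^{-1}\otimes I)(A\otimes M)={\mathcal K}\otimes M$. A quick numerical check on hypothetical $2\times 2$ matrices:

```python
# Check the Kronecker mixed-product identity
# (K A^{-1} (x) I)(A (x) M) = K (x) M on small hypothetical matrices.
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def kron(P, Q):
    # row index (i, k) -> i*p + k, column index (j, l) -> j*q + l
    n, m, p, q = len(P), len(P[0]), len(Q), len(Q[0])
    return [[P[i][j] * Q[k][l] for j in range(m) for l in range(q)]
            for i in range(n) for k in range(p)]

K = [[1.0, 0.0], [0.0, 2.0]]      # diagonal positive definite
A = [[2.0, 1.0], [0.0, 1.0]]      # invertible Runge-Kutta matrix
Ainv = [[0.5, -0.5], [0.0, 1.0]]  # A^{-1}
M = [[0.0, 1.0], [-1.0, 0.0]]     # skew-symmetric stand-in for M
I2 = [[1.0, 0.0], [0.0, 1.0]]

lhs = matmul(kron(matmul(K, Ainv), I2), kron(A, M))
rhs = kron(K, M)
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(4) for j in range(4))
print(err)  # ~ 0: the mixed-product identity holds
```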
Next, we give some useful estimates on the operator $(A\otimes M)$, under the coercivity condition of matrix $A$.
\begin{lemma}\label{est operator}
Let the matrix $A$ satisfy the coercivity condition \eqref{coercivity condition}. Then there exists a constant $C$ such that
\begin{itemize}
\item [(i)]$\|\Big(I_{6s\times 6s}-\tau(A\otimes M)\Big)^{-1}\|_{{\mathcal L}({\mathbb H}^s;{\mathbb H}^s)}\leq C$;
\item [(ii)] $\|I_{6s\times 6s}-\Big(I_{6s\times 6s}-\tau(A\otimes M)\Big)^{-1}\|_{{\mathcal L}(({\mathcal D}(M))^s;{\mathbb H}^s)}\leq C\tau$.
\end{itemize}
\end{lemma}
\begin{proof}
In order to estimate the operator $I_{6s\times 6s}-\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}$, we set $v^{n+1}=\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}v^n$; then $\{v^n\}_{n\in{\mathbb N}}$ is the solution of the discrete system
\begin{equation}\label{5.15}
v^{n+1}=v^{n}+\tau\big(A\otimes M\big)v^{n+1}.
\end{equation}
Since $A$ satisfies the coercivity condition, we apply $\langle v^{n+1},\big({\mathcal K}A^{-1}\otimes I\big)\cdot \rangle_{{\mathbb H}^s}$ to both sides of \eqref{5.15} and get
\begin{equation}\label{5.17}
\begin{split}
\langle v^{n+1},\big({\mathcal K}A^{-1}\otimes I\big)v^{n+1} \rangle_{{\mathbb H}^s}
=&\langle v^{n+1},\big({\mathcal K}A^{-1}\otimes I\big)v^{n} \rangle_{{\mathbb H}^s}\\
&+\tau\langle v^{n+1},\big({\mathcal K}A^{-1}\otimes I\big)\big(A\otimes M\big)v^{n+1} \rangle_{{\mathbb H}^s}.
\end{split}
\end{equation}
Since
\[
\langle v^{n+1},\big({\mathcal K}A^{-1}\otimes I\big)v^{n+1} \rangle_{{\mathbb H}^s}
\geq \alpha \sum_{i=1}^{s}k_i\|v^{n+1,i}\|^2_{\mathbb H}\geq \alpha\min\{k_i\} \|v^{n+1}\|^2_{{\mathbb H}^s}:=\tilde{\alpha}\|v^{n+1}\|^2_{{\mathbb H}^s},
\]
and
\[
\langle v^{n+1},\big({\mathcal K}A^{-1}\otimes I\big)\big(A\otimes M\big)v^{n+1} \rangle_{{\mathbb H}^s}=\langle v^{n+1},\big({\mathcal K}\otimes M\big)v^{n+1} \rangle_{{\mathbb H}^s}=\sum_{i=1}^{s}k_i\langle v^{n+1,i} , Mv^{n+1,i}\rangle_{{\mathbb H}}=0,
\]
we get for \eqref{5.17}
\[
\tilde{\alpha}\|v^{n+1}\|^2_{{\mathbb H}^s}\leq \langle v^{n+1},\big({\mathcal K}A^{-1}\otimes I\big)v^{n} \rangle_{{\mathbb H}^s}
\leq \gamma \|v^{n+1}\|^2_{{\mathbb H}^s}+\frac{C}{\gamma}\|v^n\|^2_{{\mathbb H}^s},
\]
where $C$ depends on $|{\mathcal K}|$ and $|A^{-1}|$.
Taking $\gamma=\tilde{\alpha}/2$ leads to
\[
\|v^{n+1}\|^2_{{\mathbb H}^s}\leq C\|v^{n}\|^2_{{\mathbb H}^s},
\]
where the constant $C$ depends on $\tilde{\alpha}$, $|{\mathcal K}|$ and $|A^{-1}|$. It means that
\begin{equation}\label{5.18}
\|\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}v^{n}\|^2_{{\mathbb H}^s}\leq C\|v^{n}\|^2_{{\mathbb H}^s}.
\end{equation}
This proves the first assertion.
Similarly, we may show that
\[
\|\big(A\otimes M\big)v^{n+1}\|^2_{{\mathbb H}^s}\leq C\|\big(A\otimes M\big)v^{n}\|^2_{{\mathbb H}^s}.
\]
From
\begin{equation}\label{5.16}
\bigg[\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}-I_{6s\times 6s}\bigg]v^n=v^{n+1}-v^n=\tau(A\otimes M)v^{n+1},
\end{equation}
it follows that
\[
\left\|\bigg[\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}-I_{6s\times 6s}\bigg]v^n\right\|_{{\mathbb H}^s}
=\tau\|(A\otimes M)v^{n+1}\|_{{\mathbb H}^s}
\leq C\tau\|(A\otimes M)v^{n}\|_{{\mathbb H}^s},
\]
which leads to the second assertion.
\end{proof}
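The assertions of Lemma \ref{est operator} can be observed on a finite-dimensional surrogate, where a skew-symmetric matrix plays the role of the Maxwell operator $M$ (so that $\langle v,Mv\rangle=0$ and the graph norm of ${\mathcal D}(M)$ is equivalent to the ambient norm). The sketch below, with the two-stage Gauss tableau standing in for an algebraically stable and coercive method $(A,b)$, is an illustration only and not part of the proof.

```python
import numpy as np

# Finite-dimensional illustration of Lemma (est operator): a skew-symmetric
# matrix M stands in for the Maxwell operator, and (A, b) is the two-stage
# Gauss tableau (algebraically stable and coercive).
rng = np.random.default_rng(0)
d = 6
Z = rng.standard_normal((d, d))
M = Z - Z.T                                   # skew-symmetric: <v, Mv> = 0
A = np.array([[1/4, 1/4 - np.sqrt(3)/6],
              [1/4 + np.sqrt(3)/6, 1/4]])
s = A.shape[0]
I = np.eye(s * d)

norms_R, ratios = [], []
for tau in [0.1, 0.05, 0.025, 0.0125]:
    R = np.linalg.inv(I - tau * np.kron(A, M))
    norms_R.append(np.linalg.norm(R, 2))           # assertion (i): bounded in tau
    ratios.append(np.linalg.norm(I - R, 2) / tau)  # assertion (ii): ||I - R|| = O(tau)
    print(f"tau={tau:6.4f}  ||R|| = {norms_R[-1]:.3f}  ||I-R||/tau = {ratios[-1]:.3f}")
```

As $\tau$ is halved, $\|R\|$ stays close to $1$ while $\|I-R\|$ shrinks proportionally to $\tau$, matching assertions (i) and (ii).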
We are now in a position to establish the existence and uniqueness of the numerical solution given by the stochastic Runge-Kutta method \eqref{RK method}.
\begin{theorem}\label{wellposedness-RK}
In addition to the conditions of Proposition \ref{wellposedness thm}, let $B(t)\in HS(U_0,{\mathcal D}(M))$ for any $t\in[0,T]$.
Let the Runge-Kutta method $(A,b)$ be algebraically stable and coercive.
Let $p\geq 2$ and fix $T=t_{N}>0$. Then, for sufficiently small $\tau\leq \tau^{*}$ with $\tau^{*}:= \tau^{*}(\|u_0\|_{\mathbb H},T)$, there exist a unique ${\mathbb H}$-valued $\{{\mathcal F}_{t_n}\}_{0\leq n\leq N}$-adapted discrete solution $\{ u^n; ~n=0,1,\ldots,N\}$ of the scheme \eqref{RK method} and a constant $C:=C(p,T,\sup_{t\in[0,T]}\|B(t)\|_{HS(U,{\mathcal D}(M))})>0$ such that
\begin{align}
\max_{1\leq i\leq s}{\mathbb E}\|U_{ni}\|_{\mathbb H}^{p}&\leq C\big({\mathbb E}\|u^n\|_{\mathbb H}^{p}+\tau\big),\label{bound U_ni}\\[0.6em]
\max_{1\leq n\leq N}{\mathbb E}\|u^n\|^{p}_{\mathbb H}&\leq C\big(1+\|u_0\|^{p}_{L^{p}(\Omega;{\mathbb H})}\big).\label{bound un_RK}
\end{align}
\end{theorem}
\begin{proof}
We only present the proof for $p=2$, since the proof for general $p>2$ is similar.
{\em Step 1: Existence and $\{{\mathcal F}_{t_n}\}_{0\leq n\leq N}$-adaptedness.} Fix a set $\Omega^{'}\subset\Omega$ with ${\mathbb P}(\Omega^{'})=1$ such that $W(t,\omega)\in U$ for all $t\in[0,T]$ and $\omega\in\Omega^{'}$. In the following, let us assume that $\omega\in\Omega^{'}$. The existence of the iterates $\{ u^n; ~n=0,1,\ldots,N\}$
follows from a standard Galerkin method and Brouwer's theorem, in combination with the assertions \eqref{bound U_ni}-\eqref{bound un_RK}.
Define a map
\begin{equation*}
\begin{split}
\Lambda:~{\mathbb H}\times U\to{\mathcal P}({\mathbb H}),
\quad(u^n,\Delta W^{n+1})\to \Lambda(u^n,\Delta W^{n+1}),
\end{split}
\end{equation*}
where ${\mathcal P}({\mathbb H})$ denotes the set of all subsets of ${\mathbb H}$, and $\Lambda(u^n,\Delta W^{n+1})$ is the set of solutions $u^{n+1}$ of \eqref{RK method}. By the closedness of the graph of $\Lambda$ and a selector theorem, there exists a universally and Borel measurable mapping $\lambda_n:~{\mathbb H}\times U\to {\mathbb H}$ such that $\lambda_n(s_1,s_2)\in\Lambda(s_1,s_2)$ for all $(s_1,s_2)\in {\mathbb H}\times U$. Therefore, ${\mathcal F}_{t_{n+1}}$-measurability of $u^{n+1}$ follows from the Doob-Dynkin lemma.
{\em Step 2: proof for \eqref{bound U_ni}.}
From the compact formula \eqref{RK_compact_1} and the invertibility of $A$, we get
\begin{equation}\label{Un}
\begin{split}
U_{n}=&\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big({\bf 1}_{s}\otimes u^n\big)
+\tau\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(A\otimes I\big)F^{n}\\
&+\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\Big(\big(\widetilde{A}\otimes I\big)B^{n}\Delta W^{n+1}\Big).
\end{split}
\end{equation}
Using assertion (i) of Lemma \ref{est operator},
we obtain
\begin{equation}\label{5.7}
\begin{split}
\|U_{n}\|^2_{{\mathbb H}^s}
&\leq C\|{\bf 1}_{s}\otimes u^n
+\tau\big(A\otimes I\big)F^{n}+\big(\widetilde{A}\otimes I\big)B^{n}\Delta W^{n+1}\|^2_{{\mathbb H}^s}\\
&\leq C\|u^n\|^2_{{\mathbb H}}+\tau^2\sum_{i=1}^{s}\|F^{ni}\|^2_{{\mathbb H}}+\sum_{i=1}^{s}\|B^{ni}\Delta W^{n+1}\|^2_{{\mathbb H}}\\
&\leq C\|u^n\|^2_{{\mathbb H}}+C\tau^2\sum_{i=1}^{s}\big(1+\|U_{ni}\|^2_{{\mathbb H}}\big)+\sum_{i=1}^{s}\|B^{ni}\Delta W^{n+1}\|^2_{{\mathbb H}}\\
& \leq C\|u^n\|^2_{{\mathbb H}}+C\tau^2+C\tau^2\|U_{n}\|^2_{{\mathbb H}^s}+\sum_{i=1}^{s}\|B^{ni}\Delta W^{n+1}\|^2_{{\mathbb H}}.
\end{split}
\end{equation}
Taking expectation on both sides of \eqref{5.7}, we have
\begin{equation}
{\mathbb E}\|U_{n}\|^2_{{\mathbb H}^s}\leq
C{\mathbb E} \|u^n\|^2_{{\mathbb H}}+C\tau+C\tau^2{\mathbb E}\|U_{n}\|^2_{{\mathbb H}^s}.
\end{equation}
For a sufficiently small step size (such that $C\tau^2\leq 1/2$), the last term can be absorbed into the left-hand side,
and one gets
\[
{\mathbb E}\|U_{n}\|^2_{{\mathbb H}^s}\leq
C{\mathbb E} \|u^n\|^2_{{\mathbb H}}+C\tau.
\]
Because of the identity $\sum_{i=1}^{s}\|U_{ni}\|^2_{{\mathbb H}}=\|U_{n}\|^2_{{\mathbb H}^s}$,
the proof of \eqref{bound U_ni} is completed.
{\em Step 3: Uniqueness.} The uniqueness of the discrete solution follows from the uniqueness of the internal stages $U_{ni}$, $i=1,\ldots,s$.
Assume that there are two solutions $U_{n}$ and $V_{n}$ satisfying \eqref{RK_compact_1}. Then
\begin{equation}
U_{n}-V_{n}=\tau(A\otimes M)\big(U_n-V_n\big)+\tau(A\otimes I)\big(F^n(U_n)-F^n(V_n)\big),
\end{equation}
which is equivalent to
\begin{equation}
U_{n}-V_{n}=\tau\Big(I_{6s\times 6s}-\tau(A\otimes M)\Big)^{-1}(A\otimes I)\big(F^n(U_n)-F^n(V_n)\big).
\end{equation}
From assertion (i) of Lemma \ref{est operator} and the global Lipschitz property of $F$, it follows that
\begin{equation}
\|U_{n}-V_{n}\|_{{\mathbb H}^s}\leq C\tau \|U_{n}-V_{n}\|_{{\mathbb H}^s}.
\end{equation}
Obviously, when the time step $\tau$ is sufficiently small, the internal stages $U_{ni}$ are unique, and hence the discrete solution $u^{n+1}$ is unique.
{\em Step 4: proof for \eqref{bound un_RK}.}
We start from \eqref{RK method_2} to get
\begin{equation}\label{5.9}
\begin{split}
\|u^{n+1}\|^2_{{\mathbb H}}=&\|u^{n}\|^2_{{\mathbb H}}+\|\tau\sum_{i=1}^{s}b_i\big(MU_{ni}+F^{ni}\big)\|^2_{{\mathbb H}}
+ \|\sum_{i=1}^{s}\widetilde{b}_iB^{ni}\Delta W^{n+1}\|^2_{{\mathbb H}}\\
& +2\langle u^{n},~\tau\sum_{i=1}^{s}b_i\big(MU_{ni}+F^{ni}\big) \rangle_{{\mathbb H}} +2\langle u^{n},~\sum_{i=1}^{s}\widetilde{b}_iB^{ni}\Delta W^{n+1} \rangle_{{\mathbb H}}\\
&+2\langle \tau\sum_{i=1}^{s}b_i\big(MU_{ni}+F^{ni}\big),~ \sum_{i=1}^{s}\widetilde{b}_iB^{ni}\Delta W^{n+1}\rangle_{{\mathbb H}}.
\end{split}
\end{equation}
From \eqref{RK method_1}, we know that
\begin{equation}\label{5.10}
u^{n}=U_{ni}-\tau\sum_{j=1}^{s}a_{ij}\big(MU_{nj}+F^{nj}\big)-\sum_{j=1}^{s}\widetilde{a}_{ij}B^{nj}\Delta W^{n+1},
\end{equation}
and then substitute \eqref{5.10} into the first term of the second line on the right-hand side of \eqref{5.9} to get
\begin{align*}
2\tau&\sum_{i=1}^{s}b_i \langle u^{n},~MU_{ni}+F^{ni} \rangle_{{\mathbb H}}\\
=&
2\tau\sum_{i=1}^{s}b_i \langle U_{ni}, ~ MU_{ni}+F^{ni} \rangle_{{\mathbb H}} -2\tau^2\sum_{i,j=1}^{s}b_ia_{ij}\langle MU_{nj}+F^{nj},~ MU_{ni}+F^{ni} \rangle_{{\mathbb H}}
\\
&-2\tau\sum_{i,j=1}^{s}b_i \widetilde{a}_{ij}\langle B^{nj}\Delta W^{n+1},~
MU_{ni}+F^{ni} \rangle_{{\mathbb H}} \\
=&2\tau\sum_{i=1}^{s}b_i \langle U_{ni}, ~F^{ni} \rangle_{{\mathbb H}} -\tau^2\sum_{i,j=1}^{s}\big(b_ia_{ij}+b_ja_{ji}\big)\langle MU_{nj}+F^{nj},~ MU_{ni}+F^{ni} \rangle_{{\mathbb H}}
\\
&-2\tau\sum_{i,j=1}^{s}b_i \widetilde{a}_{ij}\langle B^{nj}\Delta W^{n+1},~
MU_{ni}+F^{ni} \rangle_{{\mathbb H}}
\end{align*}
where in the last step we have used the fact that $\langle U_{ni},~MU_{ni} \rangle_{{\mathbb H}}=0$.
Combining the above equality together with \eqref{5.9}, we get
\begin{align}\label{5.3}
\|u^{n+1}\|^2_{{\mathbb H}}=&\|u^{n}\|^2_{{\mathbb H}}
+ \|\sum_{i=1}^{s}\widetilde{b}_iB^{ni}\Delta W^{n+1}\|^2_{{\mathbb H}}
+2\tau\sum_{i=1}^{s}b_i \langle U_{ni}, ~F^{ni} \rangle_{{\mathbb H}} \nonumber\\
& +\tau^2\sum_{i,j=1}^{s}\big(b_ib_j-b_ia_{ij}-b_ja_{ji}\big)\langle MU_{nj}+F^{nj},~ MU_{ni}+F^{ni} \rangle_{{\mathbb H}}
\\
&+2\langle u^{n},~\sum_{i=1}^{s}\widetilde{b}_iB^{ni}\Delta W^{n+1} \rangle_{{\mathbb H}}+2\tau\sum_{i,j=1}^{s}\big(b_i\widetilde{b}_j-b_i \widetilde{a}_{ij}\big)\langle B^{nj}\Delta W^{n+1},~
MU_{ni}+F^{ni} \rangle_{{\mathbb H}}.
\nonumber
\end{align}
Since the method $(A,b)$ is algebraically stable, the second line of \eqref{5.3} is nonpositive, and we end up with
\begin{equation}\label{5.4}
\begin{split}
\|u^{n+1}\|^2_{{\mathbb H}}\leq&\|u^{n}\|^2_{{\mathbb H}}
+ \|\sum_{i=1}^{s}\widetilde{b}_iB^{ni}\Delta W^{n+1}\|^2_{{\mathbb H}}
+2\tau\sum_{i=1}^{s}b_i \langle U_{ni}, ~F^{ni} \rangle_{{\mathbb H}} \\
&+2\langle u^{n},~\sum_{i=1}^{s}\widetilde{b}_iB^{ni}\Delta W^{n+1} \rangle_{{\mathbb H}}+2\tau\sum_{i,j=1}^{s}\big(b_i\widetilde{b}_j-b_i \widetilde{a}_{ij}\big)\langle B^{nj}\Delta W^{n+1},~
MU_{ni}+F^{ni} \rangle_{{\mathbb H}}\\
\leq & \|u^{n}\|^2_{{\mathbb H}} +C(1+\tau)\sum_{i=1}^{s}\|B^{ni}\Delta W^{n+1}\|^2_{{\mathbb H}}
+C\tau\sum_{i=1}^{s}\|M(B^{ni}\Delta W^{n+1})\|^2_{{\mathbb H}}\\
&+C\tau\sum_{i=1}^{s}\|U_{ni}\|^2_{{\mathbb H}}
+C\tau\sum_{i=1}^{s}\|F^{ni}\|^2_{{\mathbb H}}+2C\tau\sum_{i=1}^{s}b_i \langle U_{ni}, ~F^{ni} \rangle_{{\mathbb H}}.
\end{split}
\end{equation}
Taking expectation and using the conditions on $F$, $B$ and $Q$ leads to
\begin{equation}
{\mathbb E}\|u^{n+1}\|^2_{{\mathbb H}}\leq
{\mathbb E}\|u^{n}\|^2_{{\mathbb H}}+C\tau+C\tau{\mathbb E}\|U_n\|^2_{{\mathbb H}^{s}}.
\end{equation}
Substituting \eqref{bound U_ni} into the above inequality, we get
\begin{equation}
{\mathbb E}\|u^{n+1}\|^2_{{\mathbb H}} \leq (1+C\tau){\mathbb E}\|u^{n}\|^2_{{\mathbb H}}+C\tau,
\end{equation}
which by Gronwall's inequality yields the boundedness of the numerical solution.
This completes the proof of \eqref{bound un_RK}.
Combining Steps 1-4, we complete the proof.
\end{proof}
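Steps 1 and 3 above can be visualized in finite dimensions: for small $\tau$ the stage map defined by \eqref{Un} is a contraction, so Picard iterations started from different initial guesses converge to the same internal stages. The following sketch uses a skew-symmetric surrogate for $M$, the two-stage Gauss tableau and the globally Lipschitz nonlinearity $F(u)=\sin(u)$; it is an illustration under these assumptions, not the Galerkin argument of the proof.

```python
import numpy as np

# Contraction of the implicit stage map (Steps 1 and 3): for small tau the
# fixed-point iteration for the stages U_n converges, and the limit does not
# depend on the starting guess, i.e. the stages are unique.
rng = np.random.default_rng(1)
d, tau = 4, 0.05
Z = rng.standard_normal((d, d)); M = Z - Z.T        # skew-symmetric surrogate
A = np.array([[1/4, 1/4 - np.sqrt(3)/6],
              [1/4 + np.sqrt(3)/6, 1/4]])
s = A.shape[0]
R = np.linalg.inv(np.eye(s * d) - tau * np.kron(A, M))  # resolvent of the lemma
un = rng.standard_normal(d)
rhs0 = R @ np.kron(np.ones(s), un)   # the noise term is omitted: it does not
                                     # depend on the stages, so it is irrelevant
                                     # for the contraction argument

def stage_map(U):
    # U -> R(1_s (x) u^n) + tau R (A (x) I) F(U), with F(u) = sin(u)
    return rhs0 + tau * R @ np.kron(A, np.eye(d)) @ np.sin(U)

U1, U2 = np.zeros(s * d), rng.standard_normal(s * d)
for _ in range(60):
    U1, U2 = stage_map(U1), stage_map(U2)
gap = np.linalg.norm(U1 - U2)
print("distance between the two fixed-point iterations:", gap)
```

The contraction factor is of size $\tau\|R\|\,|A|\,\mathrm{Lip}(F)$, so after a few dozen iterations the two sequences coincide to machine precision.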
\begin{remark}
Note that for the well-posedness of the stochastic Runge-Kutta method, we require an additional spatial smoothness assumption on the function $B$, which comes from the term $\|M(B^{ni}\Delta W^{n+1})\|^2_{{\mathbb H}}$ and requires $\sup_{t\in[0,T]}\|B(t)\|_{HS(U_0,{\mathcal D}(M))}<\infty$.
\end{remark}
We are now in a position to discuss the regularity in ${\mathcal D}(M^k)$ ($k\in{\mathbb N}$) of the numerical solution given by the stochastic Runge-Kutta method.
\begin{proposition}\label{prop}
Let Assumption \ref{assum_F} and Assumption \ref{assum_B} be fulfilled with $\alpha=k$ and $\beta=k+1$, respectively, and suppose the initial data $u_0\in L^{p}(\Omega;{\mathcal D}(M^k))$ for some $p\geq 2$.
For the solution of \eqref{RK method_1}-\eqref{RK method_2}, there exists a constant $C:=C(p,T,\sup_{t\in[0,T]}\|B(t)\|_{HS(U,{\mathcal D}(M^{k+1}))})>0$ such that
\begin{align}
\max_{1\leq i\leq s}{\mathbb E}\|U_{ni}\|_{{\mathcal D}(M^k)}^{p}&\leq C\big({\mathbb E}\|u^n\|_{{\mathcal D}(M^k)}^{p}+\tau\big),\label{bound U_ni_re}\\[0.6em]
\max_{1\leq n\leq N}{\mathbb E}\|u^n\|^{p}_{{\mathcal D}(M^k)}&\leq C\big(1+\|u_0\|^{p}_{L^{p}(\Omega;{\mathcal D}(M^k))}\big).\label{bound un_RK_re}
\end{align}
\end{proposition}
\begin{proof}
The proof is similar to Steps 2 and 4 of the proof of Theorem \ref{wellposedness-RK}.
\end{proof}
\begin{proposition}\label{holder_RK}
Under the same assumptions as in Proposition \ref{prop}, we have for $0\leq n\leq N-1$,
\begin{align}
& {\mathbb E}\|u^{n+1}-u^{n}\|_{{\mathcal D}(M^{k-1})}^p\leq C\tau^{p/2},\\
& \|{\mathbb E}(u^{n+1}-u^{n})\|_{{\mathcal D}(M^{k-1})}\leq C\tau.
\end{align}
Moreover, if $u^{n+1}$ is replaced by $U_{ni}$, the above estimates still hold.
\end{proposition}
\subsection{Error analysis of stochastic Runge-Kutta semidiscretizations}
Motivated by answering an open problem in \cite[Remark 18]{CH2016} for stochastic Maxwell equations driven by additive noise,
we establish the error analysis in mean-square sense of the stochastic Runge-Kutta method \eqref{RK method} in this part.
Recall that the strong solution of the stochastic Maxwell equations \eqref{sM_equations} is
\begin{equation}\label{strong solution}
u(t_{n+1})=u(t_{n})+\int_{t_{n}}^{t_{n+1}}Mu(t){\rm d}t
+\int_{t_{n}}^{t_{n+1}}F(t,u(t)){\rm d}t
+\int_{t_{n}}^{t_{n+1}}B(t){\rm d}W(t).
\end{equation}
Substituting equation \eqref{Un} into \eqref{RK_compact_2} leads to the following formula for the discrete solution
\begin{equation}\label{RK}
\begin{split}
u^{n+1}=&u^{n}+\tau\big(b^{T}\otimes M\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big({\bf 1}_{s}\otimes u^n\big)\\
&+\tau\big(b^{T}\otimes I\big)F^{n}(U_n)+\tau^2\big(b^{T}\otimes M\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(A\otimes I\big)F^{n}(U_n)\\
&+\big(\widetilde{b}^{T}\otimes I\big)B^{n}\Delta W^{n+1}+\tau\big(b^{T}\otimes M\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\Big(\big(\widetilde{A}\otimes I\big)B^{n}\Delta W^{n+1}\Big).
\end{split}
\end{equation}
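The elimination of the stages leading to \eqref{RK} can be sanity-checked in finite dimensions: the closed one-step formula must reproduce the update obtained by first solving for the stages and then applying \eqref{RK_compact_2}. The sketch below takes $F=0$, $\widetilde{A}=A$ and $\widetilde{b}=b$ for brevity; these choices are illustrative assumptions.

```python
import numpy as np

# Consistency of the closed one-step formula (RK) with the stage-based update:
# both must give the same u^{n+1}. Toy setting: F = 0, Atilde = A, btilde = b.
rng = np.random.default_rng(2)
d, tau = 4, 0.1
Z = rng.standard_normal((d, d)); M = Z - Z.T
A = np.array([[1/4, 1/4 - np.sqrt(3)/6],
              [1/4 + np.sqrt(3)/6, 1/4]])
b = np.array([0.5, 0.5])
At, bt = A, b
s = len(b)
R = np.linalg.inv(np.eye(s * d) - tau * np.kron(A, M))
un = rng.standard_normal(d)
BdW = np.kron(np.ones(s), rng.standard_normal(d))   # stacked noise term B^n dW

# (a) solve for the stages, then update as in (RK_compact_2)
U = R @ (np.kron(np.ones(s), un) + np.kron(At, np.eye(d)) @ BdW)
u_stages = un + tau * np.kron(b, M) @ U + np.kron(bt, np.eye(d)) @ BdW

# (b) the closed one-step formula (RK) with F = 0
u_closed = (un
            + tau * np.kron(b, M) @ (R @ np.kron(np.ones(s), un))
            + np.kron(bt, np.eye(d)) @ BdW
            + tau * np.kron(b, M) @ (R @ (np.kron(At, np.eye(d)) @ BdW)))

print("max |u_stages - u_closed| =", np.max(np.abs(u_stages - u_closed)))
```

The two vectors agree up to round-off, confirming that \eqref{RK} is an exact algebraic reformulation rather than an additional approximation.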
Let $e^{n}=u(t_n)-u^n$. Subtracting \eqref{RK} from \eqref{strong solution}, we obtain
\begin{align}
e^{n+1}=&e^{n}+\underbrace{\int_{t_{n}}^{t_{n+1}}Mu(t){\rm d}t-\tau\big(b^{T}\otimes M\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big({\bf 1}_{s}\otimes u^n\big)}_{I}\nonumber\\
&+\underbrace{\int_{t_{n}}^{t_{n+1}}F(t,u(t)){\rm d}t-\tau\big(b^{T}\otimes I\big)F^{n}(U_n)}_{II_{a}}\nonumber\\
&-\underbrace{\tau^2\big(b^{T}\otimes M\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(A\otimes I\big)F^{n}(U_n)}_{II_{b}}\label{error}\\
&+\underbrace{\int_{t_{n}}^{t_{n+1}}B(t){\rm d}W(t)-\big(\widetilde{b}^{T}\otimes I\big)B^{n}\Delta W^{n+1}}_{III_{a}}\nonumber\\
&-\underbrace{\tau\big(b^{T}\otimes M\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\Big(\big(\widetilde{A}\otimes I\big)B^{n}\Delta W^{n+1}\Big)}_{III_{b}}\nonumber\\
=:&e^{n}+I+II_{a}-II_{b}+III_{a}-III_{b}.\nonumber
\end{align}
Taking the squared ${\mathbb H}$-norm, with $II:=II_{a}-II_{b}$ and $III:=III_{a}-III_{b}$, yields
\begin{equation}
\begin{split}
\|e^{n+1}\|^{2}_{\mathbb H}=&\|e^{n}\|^{2}_{\mathbb H}+\|I\|^{2}_{\mathbb H}+\|II\|^{2}_{\mathbb H}+\|III\|^2_{\mathbb H}+2\langle e^{n},I\rangle_{\mathbb H}+2\langle e^{n},II\rangle_{\mathbb H}+2\langle e^{n},III\rangle_{\mathbb H}\\
&+2\langle I,II\rangle_{\mathbb H}+2\langle I,III\rangle_{\mathbb H}
+2\langle II,III\rangle_{\mathbb H}\\
\leq&(1+\tau) \|e^{n}\|^{2}_{\mathbb H}+3\|I\|^{2}_{\mathbb H}+2\langle e^{n},I\rangle_{\mathbb H}+\Big(3+\frac{C}{\tau}\Big)\|II\|^{2}_{\mathbb H}+3\|III\|^2_{\mathbb H}+2\langle e^{n},III\rangle_{\mathbb H}.
\end{split}
\end{equation}
{\em Step 1. The estimates of the terms $\|I\|^{2}_{\mathbb H}$ and $\langle e^{n},I\rangle_{\mathbb H}$.} From \eqref{error}, we have
\begin{equation}
\begin{split}
I=&\underbrace{\int_{t_n}^{t_{n+1}}\big(Mu(t)-Mu(t_n)\big){\rm d}t}_{I_a}
+\tau Me^{n}\\
&+\underbrace{\tau Mu^{n}-\tau\big(b^{T}\otimes M\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big({\bf 1}_{s}\otimes u^n\big)}_{I_b}.
\end{split}
\end{equation}
From Proposition \ref{holder}, we know that
\[
{\mathbb E}\|I_a\|_{\mathbb H}^2\leq \tau\int_{t_n}^{t_{n+1}}{\mathbb E}\|u(t)-u(t_{n})\|^2_{{\mathcal D}(M)}{\rm d}t
\leq C\tau^3,
\]
and
\[
{\mathbb E}\|{\mathbb E}(I_a|{\mathcal F}_{t_n})\|_{\mathbb H}^2\leq \tau\int_{t_n}^{t_{n+1}}\|{\mathbb E}\big(u(t)-u(t_{n})|{\mathcal F}_{t_n}\big)\|^2_{{\mathcal D}(M)}{\rm d}t
\leq C\tau^4,
\]
where the constant $C$ depends on $T$, $\|B(t)\|_{HS(U,{\mathcal D}(M^2))}$ and $\|u_0\|_{L^2(\Omega,{\mathcal D}(M^2))}$.
From Proposition \ref{regularity} and the property of operator $M$, we know that
\[
\|\tau Me^{n}\|^2_{\mathbb H}=-\tau^2\langle e^{n}, M^2 e^{n} \rangle_{{\mathbb H}}
\leq \tau\|e^n\|^2_{\mathbb H}+C\tau^3 \Big(\|M^2 u(t_n)\|^2_{\mathbb H}+\|M^2 u^n\|^2_{\mathbb H}\Big)\leq \tau\|e^n\|^2_{\mathbb H}+C\tau^3 ,
\]
and
\[
\langle e^{n},\tau Me^{n} \rangle_{{\mathbb H}}=0,
\]
where the constant $C$ depends on $T$ and $\|Q^{\frac12}\|_{HS(U,H^2(D))}$.
Under the assumption $\sum_{i=1}^{s}b_i=1$, we know that
\[
\big(b^{T}\otimes I\big)\big({\bf 1}_{s}\otimes Mu^{n}\big)=(b^{T}{\bf 1}_{s})\otimes(IMu^n)=\Big(\sum_{i=1}^{s}b_i\Big)\otimes (Mu^n)=Mu^{n}.
\]
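The identity above is elementary but admits a quick numerical check; the sketch below verifies it for an arbitrary vector and weights summing to one.

```python
import numpy as np

# Check of (b^T (x) I)(1_s (x) v) = (sum_i b_i) v for weights summing to one.
b = np.array([0.5, 0.5])                  # sum b_i = 1
v = np.array([1.0, -2.0, 3.0])
s, d = len(b), len(v)
lhs = np.kron(b, np.eye(d)) @ np.kron(np.ones(s), v)
print(lhs)                                # coincides with v
```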
Since $b^{T}\otimes M=\big(b^{T}\otimes I\big)\big(I_{s\times s}\otimes M\big)$
and
$\big(I_{s\times s}\otimes M\big)\big(A\otimes M\big)=A\otimes M^2=\big(A\otimes M\big)\big(I_{s\times s}\otimes M\big)$, we have
\begin{equation}
\begin{split}
&\big(b^{T}\otimes M\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big({\bf 1}_{s}\otimes u^n\big)\\
&=\big(b^{T}\otimes I\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(I_{s\times s}\otimes M\big)\big({\bf 1}_{s}\otimes u^n\big)\\
&=\big(b^{T}\otimes I\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big({\bf 1}_{s}\otimes Mu^{n}\big).
\end{split}
\end{equation}
Hence for term $I_b$, we get
\begin{equation}
\begin{split}
I_b=&\tau \big(b^{T}\otimes I\big)\big({\bf 1}_{s}\otimes Mu^{n}\big)
-\tau \big(b^{T}\otimes I\big)\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big({\bf 1}_{s}\otimes Mu^{n}\big)\\
=&\tau\big(b^{T}\otimes I\big)\bigg[I_{6s\times 6s}-\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\bigg]\big({\bf 1}_{s}\otimes Mu^{n}\big).
\end{split}
\end{equation}
By Lemma \ref{est operator},
we get
\begin{equation*}
\begin{split}
\|I_b\|_{\mathbb H}\leq& C\tau \left\|\bigg[I_{6s\times 6s}-\Big(I_{ 6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\bigg]\big({\bf 1}_{s}\otimes Mu^{n}\big)\right\|_{{\mathbb H}^s}\\
\leq& C\tau^2\|(A\otimes M)\big({\bf 1}_{s}\otimes Mu^{n}\big)\|_{{\mathbb H}^s}\\
=& C\tau^2\|(A{\bf 1}_s)\otimes M^2 u^{n}\|_{{\mathbb H}^s}
\leq C\tau^2\|u^n\|_{{\mathcal D}(M^2)},
\end{split}
\end{equation*}
and then
\[
{\mathbb E}\|I_b\|^2_{\mathbb H}\leq C\tau^4{\mathbb E}\|u^n\|_{{\mathcal D}(M^2)}^2\leq C\tau^4.
\]
Therefore,
\[
{\mathbb E}\|I\|^2_{\mathbb H}\leq\tau{\mathbb E}\|e^n\|^2_{\mathbb H}+ C\tau^3,\quad
{\mathbb E}\langle e^n,I\rangle_{{\mathbb H}}={\mathbb E}\langle e^n,{\mathbb E}\big(I_a|{\mathcal F}_{t_n}\big)\rangle_{{\mathbb H}}+{\mathbb E}\langle e^n,I_b\rangle_{{\mathbb H}}\leq \tau{\mathbb E}\|e^n\|^2_{\mathbb H}+C\tau^3.
\]
{\em Step 2. The estimates of the terms $\|II\|_{\mathbb H}$ and $\langle e^n,II\rangle_{{\mathbb H}}$.}
For the term $II_a$, recalling that $\sum_{i=1}^{s}b_{i}=1$, we write
\begin{equation}
\begin{split}
II_a=&\int_{t_n}^{t_{n+1}}\Big(F(t,u(t))-\sum_{i=1}^{s}b_iF(t_n+c_i\tau,U_{ni})\Big){\rm d}t
=\tau\Big(F(t_n,u(t_n))-F(t_n,u^n)\Big)\\
&+\int_{t_n}^{t_{n+1}}\Big(F(t,u(t))-F(t_n,u(t_n))\Big){\rm d}t
+\tau\sum_{i=1}^{s}b_i\Big(F(t_n,u^n)-F(t_n+c_i\tau,U_{ni})\Big).
\end{split}
\end{equation}
From the global Lipschitz property of $F$, we have
\begin{equation}
\begin{split}
\|II_a\|^2_{\mathbb H}\leq C\tau^2\|e^n\|^2_{\mathbb H}+C\tau^4+C\tau\int_{t_n}^{t_{n+1}}
\|u(t)-u(t_n)\|^2_{\mathbb H}{\rm d}t
+C\tau^2\sum_{i=1}^{s}\|U_{ni}-u^n\|^2_{\mathbb H}.
\end{split}
\end{equation}
The assertion (i) of Proposition \ref{holder} and the estimate for $U_{ni}-u^n$ in Proposition \ref{holder_RK} lead to
\[
{\mathbb E}\|II_a\|^2_{\mathbb H}\leq C\tau^2{\mathbb E}\|e^n\|^2_{\mathbb H}+C\tau^3.
\]
The estimate of ${\mathbb E}\|{\mathbb E}(II_a|{\mathcal F}_{t_n})\|^2_{\mathbb H}$ is more technical. As an example, consider the term $$\int_{t_n}^{t_{n+1}}\Big(F(u(t))-F(u(t_n))\Big){\rm d}t$$ in $II_a$, where for ease of presentation we assume that $F$ does not depend explicitly on the time $t$; the time dependence causes no substantial problems in the analysis but only leads to longer formulas.
By Taylor's formula, we have
\begin{equation}
\begin{split}
\int_{t_n}^{t_{n+1}}\Big(F(u(t))-F(u(t_n))\Big){\rm d}t
=&\int_{t_n}^{t_{n+1}}F^{\prime}(u(t_n))\big(u(t)-u(t_n)\big){\rm d}t\\
&+\frac12\int_{t_n}^{t_{n+1}} F^{\prime\prime}(u_{\theta})\Big(u(t)-u(t_{n}),~u(t)-u(t_n)\Big) {\rm d}t,
\end{split}
\end{equation}
where $u_{\theta}$ is some point between $u(t_n)$ and $u(t)$.
The estimate of the second term in the above equation is based on assertion (i) of Proposition \ref{holder}, which gives order $O(\tau^4)$ in the mean-square sense. For the first term, we first apply the conditional expectation,
\begin{equation}\label{4.43}
\begin{split}
{\mathbb E}\left( \int_{t_n}^{t_{n+1}}F^{\prime}(u(t_n))\big(u(t)-u(t_n)\big){\rm d}t\bigg|{\mathcal F}_{t_n}\right)
=\int_{t_n}^{t_{n+1}}F^{\prime}(u(t_n)){\mathbb E}\Big(\big(u(t)-u(t_n)\big)\Big|{\mathcal F}_{t_n}\Big){\rm d}t,
\end{split}
\end{equation}
where the adaptedness of $\{u(t)\}_{t\in[0,T]}$ and the properties of conditional expectation are used. Then by the assertion (ii) of Proposition \ref{holder}, we know that \eqref{4.43} gives order $O(\tau^4)$ in mean-square sense.
Hence, by this approach we can show that
\[{\mathbb E}\|{\mathbb E}(II_a|{\mathcal F}_{t_n})\|^2_{\mathbb H}\leq C\tau^2{\mathbb E}\|e^n\|^2_{\mathbb H} +C\tau^4.\]
For term $II_b$, we have
\begin{equation}
\begin{split}
II_b=&\tau^2 \big(b^{T}\otimes I\big)\big(I_{s\times s}\otimes M\big)\Big(I_{6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(A\otimes I\big)F^{n}(U_n)\\
=&\tau^2\big(b^{T}\otimes I\big)\Big(I_{6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(I_{s\times s}\otimes M\big)\big(A\otimes I\big)F^{n}(U_n)\\
=&\tau^2\big(b^{T}\otimes I\big)\Big(I_{6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(A\otimes I\big)\big(I_{s\times s}\otimes M\big)F^{n}(U_n),
\end{split}
\end{equation}
hence from \eqref{5.18}
\begin{equation}
\begin{split}
\|II_b\|_{\mathbb H}\leq& C\tau^2\|\Big(I_{6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(A\otimes I\big)\big(I_{s\times s}\otimes M\big)F^{n}\|_{{\mathbb H}^s}\\
\leq& C\tau^2\|\big(A\otimes I\big)\big(I_{s\times s}\otimes M\big)F^{n}\|_{{\mathbb H}^s}
\leq C\tau^2\max_{1\leq i\leq s}\|F(t_n+c_i\tau, U_{ni})\|_{{\mathcal D}(M)}\\
\leq& C\tau^2\big(1+\|U_n\|_{({\mathcal D}(M))^s}\big),
\end{split}
\end{equation}
which leads to ${\mathbb E}\|II_b\|^2_{\mathbb H}\leq C\tau^4$.
Therefore,
\[
{\mathbb E}\|II\|^2_{\mathbb H}\leq C\tau^2{\mathbb E}\|e^n\|^2_{\mathbb H}+C\tau^3,
\]
and
\[
{\mathbb E}\langle e^n,II\rangle_{{\mathbb H}}={\mathbb E}\langle e^n,{\mathbb E}\big(II_a|{\mathcal F}_{t_n}\big)\rangle_{{\mathbb H}}-{\mathbb E}\langle e^n,II_b\rangle_{{\mathbb H}}\leq C\tau{\mathbb E}\|e^n\|^2_{\mathbb H}+C\tau^3.
\]
{\em Step 3. The estimate of the term $\|III\|_{\mathbb H}$.}
For the term $III_a$, recalling that $\sum_{i=1}^{s}\widetilde{b}_i=1$, we write
\begin{equation}
\begin{split}
III_a=\int_{t_n}^{t_{n+1}}\Big(B(t)-\sum_{i=1}^{s}\widetilde{b}_iB^{ni}\Big){\rm d}W(t)
=\int_{t_n}^{t_{n+1}}\sum_{i=1}^{s}\widetilde{b}_i\Big(B(t)-B^{ni}\Big){\rm d}W(t),
\end{split}
\end{equation}
hence
\[
{\mathbb E}\|III_a\|^2_{\mathbb H}=\int_{t_n}^{t_{n+1}}\left\|\sum_{i=1}^{s}\widetilde{b}_i\Big(B(t)-B^{ni}\Big)\right\|^2_{HS(U_0,{\mathbb H})}{\rm d}t\leq C\tau^3.
\]
For term $III_b$, similarly to $II_b$, we have
\begin{equation*}
\begin{split}
III_b=&\tau\big(b^{T}\otimes I\big)\big(I_{s\times s}\otimes M\big)\Big(I_{6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\Big(\big(\widetilde{A}\otimes I\big)B^{n}\Delta W^{n+1}\Big)\\
=&\tau\big(b^{T}\otimes I\big)\Big(I_{6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(I_{s\times s}\otimes M\big)\Big(\big(\widetilde{A}\otimes I\big)B^{n}\Delta W^{n+1}\Big)\\
=&\tau\big(b^{T}\otimes I\big)\Big(I_{6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(\widetilde{A}\otimes I\big)\big(I_{s\times s}\otimes M\big)\big(B^{n}\Delta W^{n+1}\big),
\end{split}
\end{equation*}
hence from \eqref{5.18}
\begin{equation}
\begin{split}
{\mathbb E}\|III_b\|^2_{\mathbb H}\leq& C\tau^2\left\|\Big(I_{6s\times 6s}-\tau\big(A\otimes M\big)\Big)^{-1}\big(\widetilde{A}\otimes I\big)\big(I_{s\times s}\otimes M\big)\big(B^{n}\Delta W^{n+1}\big)\right\|^2_{{\mathbb H}^s}\\
\leq & C\tau^2\left\|\big(\widetilde{A}\otimes I\big)\big(I_{s\times s}\otimes M\big)\big(B^{n}\Delta W^{n+1}\big)\right\|^2_{{\mathbb H}^s}\\
\leq & C\tau^3.
\end{split}
\end{equation}
Therefore,
\[
{\mathbb E}\|III\|^2_{\mathbb H}\leq C\tau^3,\quad
{\mathbb E}\langle e^n,III\rangle_{{\mathbb H}}=0.
\]
{\em Step 4. Application of Gronwall's inequality.}
Combining all the estimates in Steps 1-3, we get
\[
{\mathbb E}\|e^{n+1}\|^2_{\mathbb H}\leq (1+C\tau){\mathbb E}\|e^{n}\|^2_{\mathbb H}+C\tau^3,
\]
which by Gronwall's inequality leads to
\[
\sup_{0\leq n\leq N}\Big({\mathbb E}\|e^{n}\|^2_{\mathbb H}\Big)^{\frac12}\leq C\tau.
\]
We summarize the above result in the following theorem.
\begin{theorem}\label{estimate_e_k_RK}
In addition to the conditions of Proposition \ref{prop} with $k=2$, let $\sum_{i=1}^{s}b_i=\sum_{i=1}^{s}\widetilde{b}_i=1$. Then, for the discrete solution of the stochastic Runge-Kutta method \eqref{RK method_1}-\eqref{RK method_2}, we have
\begin{equation}
\begin{split}
\max_{1\leq n\leq N}\big(\mathbb{E}\|u(t_n)-u^n\|_{\mathbb H}^2\big)^{\frac12}\leq C\tau,
\end{split}
\end{equation}
where the positive constant $C$ depends on the Lipschitz coefficients of $F$ and $B$, on $T$, $\|u_0\|_{L^2(\Omega;{\mathcal D}(M^2))}$ and $\sup_{t\in[0,T]}\|B(t)\|_{HS(U,{\mathcal D}(M^2))}$, but is independent of $\tau$ and $n$.
\end{theorem}
We observe that the Butcher tableaux of the implicit Euler method and the midpoint method satisfy the algebraic stability and coercivity conditions; therefore the mean-square convergence order of these two methods is one.
\begin{corollary}
Under the same assumptions as in Theorem \ref{estimate_e_k_RK}, for the implicit Euler method or the midpoint method we have
\begin{equation}
\begin{split}
\max_{1\leq k\leq N}\big(\mathbb{E}\|u(t_k)-u^k\|_{\mathbb H}^2\big)^{\frac12}\leq C\tau,
\end{split}
\end{equation}
where the positive constant $C$ depends on the Lipschitz coefficients of $F$ and $B$, on $T$, $\|u_0\|_{L^2(\Omega;{\mathcal D}(M^2))}$ and $\sup_{t\in[0,T]}\|B(t)\|_{HS(U,{\mathcal D}(M^2))}$, but is independent of $\tau$ and $k$.
\end{corollary}
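The order-one bound can also be observed numerically. The following self-contained sketch applies the stochastic midpoint method ($A=\widetilde{A}=1/2$, $b=\widetilde{b}=1$) to a finite-dimensional surrogate ${\rm d}u=Mu\,{\rm d}t+\sigma\,{\rm d}W$ with a skew-symmetric matrix $M$ and one-dimensional additive noise, and checks that the mean-square error against a fine-grid reference computed on the same Brownian paths decays under refinement; the model, the reference resolution and the sample size are illustrative choices, not data from the paper.

```python
import numpy as np

# Self-convergence test for the stochastic midpoint method applied to the
# additive-noise surrogate du = M u dt + sigma dW with M skew-symmetric.
rng = np.random.default_rng(3)
d, T, n_ref, n_paths = 4, 1.0, 1024, 100
Z = rng.standard_normal((d, d)); M = Z - Z.T
sigma = 0.5 * np.ones(d)
u0 = np.ones(d)

def midpoint(u, dW_seq, tau):
    # one internal stage: U = (I - tau/2 M)^{-1}(u + sigma dW / 2),
    # then u^{n+1} = 2 U - u^n
    S = np.linalg.inv(np.eye(d) - 0.5 * tau * M)
    for dW in dW_seq:
        U = S @ (u + 0.5 * sigma * dW)
        u = 2.0 * U - u
    return u

# one Brownian path per sample, shared between reference and coarse runs
dWs = rng.standard_normal((n_paths, n_ref)) * np.sqrt(T / n_ref)
refs = [midpoint(u0, dWs[p], T / n_ref) for p in range(n_paths)]

errs = []
for n in [64, 128, 256]:                    # coarse step sizes tau = T/n
    m = n_ref // n                          # aggregate fine increments
    se = sum(np.sum((refs[p]
                     - midpoint(u0, dWs[p].reshape(n, m).sum(axis=1), T / n))**2)
             for p in range(n_paths))
    errs.append(np.sqrt(se / n_paths))
print("mean-square errors for tau = T/64, T/128, T/256:", errs)
```

Halving $\tau$ roughly halves the mean-square error, consistent with strong order one.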
\iffalse
\subsection{Error analysis of the Crank-Nicolson method}
First, we establish estimates for the operators $S_{\tau}$, $S(t)$ and $T_{\tau}$.
\begin{lemma}\label{estimate of operators}
For any $k\in{\mathbb N}$ and sufficiently small $\tau$, we have the following estimates:
\begin{itemize}
\item [(i)] $\|S_{\tau}^{k}\|_{{\mathcal L}({\mathcal D}(M),{\mathcal D}(M))}= 1$,
\item [(ii)] $\|S(t_k)-S_{\tau}^{k}\|_{{\mathcal L}({\mathcal D}(M^2),{\mathbb H})}\leq C\tau$,
\item [(iii)] $\|S_{\tau}-Id\|_{{\mathcal L}({\mathcal D}(M),{\mathbb H})}\leq C\tau$,
\item [(iv)] $\|T_{\tau}\|_{{\mathcal L}({\mathcal D}(M),{\mathcal D}(M))}\leq 1$ and $\|T_{\tau}-Id\|_{{\mathcal L}({\mathcal D}(M),{\mathbb H})}\leq C\tau$,
\end{itemize}
where the constant $C$ is independent of $k$ and $\tau$.
\end{lemma}
\begin{proof}
{\em Step 1. Proof of $(i)$.}
As $S_{\tau}$ is the corresponding discrete operator semigroup, i.e., $u^{k}=u^{k-1}+\tau M u^{k-\frac12}$, we have
\[S_{\tau}^{k}u_0=u^{k}=u^{k-1}+\tau M u^{k-\frac12}.\]
Applying $\langle\cdot,~u^{k-\frac12}\rangle_{\mathbb H}$ leads to
\[\frac12 \Big(\|u^{k}\|_{\mathbb H}^{2}-\|u^{k-1}\|_{\mathbb H}^{2}\Big)=0,
\]
hence
\[
\|u^{k}\|_{\mathbb H}=\|u_0\|_{\mathbb H},
\]
which implies assertion $\|S_{\tau}\|_{{\mathcal L}({\mathbb H},{\mathbb H})}= 1$.
Similarly, after applying $\langle\cdot,~(Mu^{k}-Mu^{k-1})\rangle_{\mathbb H}$, we get
\[
0=\frac{\tau}{2}\Big(\|Mu^{k}\|_{\mathbb H}^{2}-\|Mu^{k-1}\|_{\mathbb H}^{2}\Big),
\]
i.e.,
\[
\|Mu^{k}\|_{\mathbb H}=\|Mu_0\|_{\mathbb H},
\]
therefore, we have shown the assertion $(i)$.
{\em Step 2. Proof of $(ii)$.}
As $S(t)$ is the operator semigroup of equation ${\rm d}u(t)=Mu(t){\rm d}t$, $u(0)=u_0\in {\mathcal D}(M^2)$, and $S_{\tau}$ is the corresponding discrete operator semigroup, i.e., $u^{k}=u^{k-1}+\tau M u^{k-\frac12}$, we have
\begin{align}
&S(t_k)u_0=u(t_{k})=u(t_{k-1})+\int_{t_{k-1}}^{t_{k}}M u(s){\rm d}s,\\
&S_{\tau}^{k}u_0=u^{k}=u^{k-1}+\tau M u^{k-\frac12}.
\end{align}
Denote $e^{k}=u(t_k)-u^{k}=\big(S(t_k)-S_{\tau}^{k}\big)u_0$ with $e^0=0$, then
\[
e^{k}-e^{k-1}=\tau Me^{k-\frac12}+\int_{t_{k-1}}^{t_k}\Big(Mu(t)-\frac12\big(Mu(t_{k-1})+Mu(t_{k})\big)\Big){\rm d}t.
\]
Applying $\langle\cdot,~e^{k-\frac12}\rangle_{\mathbb H}$ leads to
\begin{equation*}
\begin{split}
\frac12 \Big(\|e^{k}\|_{\mathbb H}^{2}-\|e^{k-1}\|_{\mathbb H}^{2}\Big)
=&\left\langle \int_{t_{k-1}}^{t_k}Mu(t)-\frac12\big(Mu(t_{k-1})+Mu(t_{k})\big){\rm d}t,~e^{k-\frac12}\right\rangle_{{\mathbb H}}\\
=&-\left\langle\int_{t_{k-1}}^{t_k} u(t)-\frac12\big(u(t_{k-1})+u(t_{k})\big){\rm d}t,~Me^{k-\frac12}\right\rangle_{{\mathbb H}}\\
=&-\left\langle
\int_{t_{k-1}}^{t_k}\int_{t_{k-1}}^{t}\int_{t_{k-1}}^{r}M^2u(\rho){\rm d}\rho{\rm d}r{\rm d}t
,~Me^{k-\frac12}\right\rangle_{{\mathbb H}}\\[1.5mm]
& +\frac12\left\langle
\int_{t_{k-1}}^{t_k}\int_{t_{k-1}}^{t_{k}}\int_{t_{k-1}}^{r}M^2u(\rho){\rm d}\rho{\rm d}r{\rm d}t
,~Me^{k-\frac12}\right\rangle_{{\mathbb H}}\\
\leq &C\tau^{3}\sup_{t}\|M^2u(t)\|_{\mathbb H}\|Mu(t_{k})-Mu^{k}\|_{\mathbb H}\\
&+C\tau^{3}\sup_{t}\|M^2u(t)\|_{\mathbb H}\|Mu(t_{k-1})-Mu^{k-1}\|_{\mathbb H}\\
\leq &C\|u_0\|_{{\mathcal D}(M^2)}^2\tau^3,
\end{split}
\end{equation*}
which by Gronwall's lemma leads to
\[
\|e^k\|_{\mathbb H}^2\leq C\|u_0\|_{{\mathcal D}(M^2)}^2\tau^2.
\]
Therefore, we show the assertion $(ii)$.
{\em Step 3. Proof of $(iii)$.}
As $S_{\tau}$ is the corresponding discrete operator semigroup, i.e., $u^{k}=u^{k-1}+\tau M u^{k-\frac12}$, we have
\[S_{\tau}u_0=u^{1}=u^{0}+\tau M u^{\frac12}.\]
Since $\big(S_{\tau}-Id\big)u_0=u^1-u_0$, we get
\[
\|u^1-u_0\|_{\mathbb H}\leq \frac12\tau\|Mu^{1}\|_{\mathbb H}+\frac12\tau\|Mu_0\|_{\mathbb H}
\leq C\tau \|u_0\|_{{\mathcal D}(M)},
\]
which leads to the assertion $(iii)$.
{\em Step 4. Proof of $(iv)$.}
As $T_{\tau}$ is the corresponding discrete operator semigroup of the following discrete system $u^{k}=u^{k-1}+\frac{\tau}{2} M u^{k}$, we apply $\langle\cdot,~Mu^{k}-Mu^{k-1}\rangle_{{\mathbb H}}$ to get
\[0=\frac12\Big(\|Mu^{k}\|^2_{\mathbb H}-\|Mu^{k-1}\|^2_{\mathbb H}+\|Mu^k-Mu^{k-1}\|_{\mathbb H}^{2}\Big),\]
which leads to
\[\|Mu^k\|_{\mathbb H}\leq \|Mu_0\|_{\mathbb H}.\]
From
$T_{\tau}u_0=u^{1}=u^{0}+\frac{\tau}{2} M u^{1},$
and $\big(T_{\tau}-Id\big)u_0=u^1-u_0$, we get
\[
\|u^1-u_0\|_{\mathbb H}\leq \frac12\tau\|Mu^{1}\|_{\mathbb H}
\leq C\tau \|Mu_0\|_{\mathbb H},
\]
which leads to the assertion $(iv)$.
\end{proof}
\begin{proposition}
Let $p\geq 2$ and fix $T=t_{N}>0$. Suppose $\tau\leq \tau^{*}$ with $\tau^{*}:= \tau^{*}(\|u_0\|_{{\mathcal D}(M)},T)$. There exist a unique ${\mathcal D}(M)$-valued $\{{\mathcal F}_{t_n}\}_{0\leq n\leq N}$-adapted discrete solution $\{ u^n; ~n=0,1,\ldots,N\}$ of the scheme \eqref{CN method} and a constant $C:=C(p,T,\|Q^{\frac12}\|_{HS(U,H^{1}(D))})>0$ such that
\begin{equation}\label{bound un_CN}
\max_{1\leq n\leq N}{\mathbb E}\|u^n\|^{p}_{{\mathcal D}(M)}\leq C(1+\|u_0\|^{p}_{L^{p}(\Omega;{\mathcal D}(M))}).
\end{equation}
\end{proposition}
\begin{proof}
{\em Step 1: Existence and $\{{\mathcal F}_{t_n}\}_{0\leq n\leq N}$-adaptedness.} Fix a set $\Omega^{'}\subset\Omega$, ${\mathbb P}(\Omega^{'})=1$ such that $W(t,\omega)\in U$ for all $t\in[0,T]$ and $\omega\in\Omega^{'}$. In the following, let us assume that $\omega\in\Omega^{'}$. The existence of iterates $\{ u^n; ~n=0,1,\ldots,N\}$
follows from a standard Galerkin method and Brouwer's theorem, in combination with assertion \eqref{bound un_CN}.
Define a map
\begin{equation*}
\begin{split}
\Lambda:~&{\mathcal D}(M)\times U\to{\mathcal P}({\mathcal D}(M))\\
&(u^n,\Delta W^{n+1})\mapsto \Lambda(u^n,\Delta W^{n+1})
\end{split}
\end{equation*}
where ${\mathcal P}({\mathcal D}(M))$ denotes the set of all subsets of ${\mathcal D}(M)$, and $\Lambda(u^n,\Delta W^{n+1})$ is the set of solutions $u^{n+1}$ of \eqref{CN method}. By the closedness of the graph of $\Lambda$ and a selector theorem \cite[Theorem 3.1]{BT1973}, there exists a universally and Borel measurable map $\lambda_n:~{\mathcal D}(M)\times U\to {\mathcal D}(M)$ such that $\lambda_n(s_1,s_2)\in\Lambda(s_1,s_2)$ for all $(s_1,s_2)\in{\mathcal D}(M)\times U$. Therefore, ${\mathcal F}_{t_{n+1}}$-measurability of $u^{n+1}$ follows from the Doob-Dynkin lemma.
{\em Step 2: Proof of \eqref{bound un_CN}.}
From the mild form \eqref{mild CN} of the Crank-Nicolson method, we have that
\begin{equation*}
\begin{split}
u^{k}=S_{\tau}^{k}u_0+\frac{\tau}{2}\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}\Big(F^{j-1}+F^j\Big)+\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}\Big(B^{j-1}+B^{j}\Big)\Delta W^{j}.
\end{split}
\end{equation*}
Applying ${\mathbb E}\|\cdot\|^{p}_{{\mathcal D}(M)}$, we obtain
\begin{equation*}
\begin{split}
{\mathbb E}\|u^{k}\|^{p}_{{\mathcal D}(M)}
\lesssim& {\mathbb E}\|S_{\tau}^{k}u_0\|^{p}_{{\mathcal D}(M)}+{\mathbb E}\|\tau\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}\Big(F^{j-1}+F^j\Big)\|^{p}_{{\mathcal D}(M)}\\
&+{\mathbb E}\|\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}\Big(B^{j-1}+B^{j}\Big)\Delta W^{j}\|^{p}_{{\mathcal D}(M)}\\
\lesssim& {\mathbb E}\|u_0\|^{p}_{{\mathcal D}(M)}+T^{p-1}\tau\sum_{j=0}^{k}\|S_{\tau}^{k-j}T_{\tau}F^{j}\|^{p}_{{\mathcal D}(M)}+\tau\sum_{j=0}^{k}\|S_{\tau}^{k-j}T_{\tau}B^{j}\|^{p}_{HS(U_0,{\mathcal D}(M))}\\
\lesssim& {\mathbb E}\|u_0\|^{p}_{{\mathcal D}(M)}+T^{p}+T\\
\leq &C(1+{\mathbb E}\|u_0\|^{p}_{{\mathcal D}(M)}).
\end{split}
\end{equation*}
{\em Step 3: Uniqueness.}
Assume there is another solution $\{v^{k}\}_{k\in{\mathbb N}}$ for scheme \eqref{CN method}. Thus we get
\[
u^{k}-v^{k}=S_{\tau}\big(u^{k-1}-v^{k-1}\big),
\]
which leads to
\[
\|u^{k}-v^{k}\|_{{\mathcal D}(M)}=\|u^{k-1}-v^{k-1}\|_{{\mathcal D}(M)}=\|u^0-v^0\|_{{\mathcal D}(M)}=0.
\]
Hence $\{u^{k}\}_{k\in{\mathbb N}}$ and $\{v^{k}\}_{k\in{\mathbb N}}$ coincide, which completes the proof.
\end{proof}
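The uniqueness step above relies on $S_{\tau}$ preserving the ${\mathbb H}$-norm when $M$ is skew-adjoint. A minimal numerical sketch (assuming the Cayley-transform form $S_{\tau}=(I-\frac{\tau}{2}M)^{-1}(I+\frac{\tau}{2}M)$ and a concrete $2\times2$ skew-symmetric $M$; these choices are illustrative and not taken from the text):

```python
import math

# Skew-symmetric M = [[0, 1], [-1, 0]], so <Mv, v> = 0.
# tau and v are illustrative choices only.
tau = 0.3
a = tau / 2.0
# I - a*M = [[1, -a], [a, 1]], det = 1 + a^2, so
# (I - a*M)^{-1} = 1/(1+a^2) * [[1, a], [-a, 1]],
# and I + a*M = [[1, a], [-a, 1]].
d = 1.0 + a * a
inv = [[1.0 / d, a / d], [-a / d, 1.0 / d]]
plus = [[1.0, a], [-a, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

S = matmul(inv, plus)          # Cayley transform: an orthogonal matrix
v = [0.7, -1.9]
Sv = matvec(S, v)

norm = lambda w: math.sqrt(w[0] ** 2 + w[1] ** 2)
print(abs(norm(Sv) - norm(v)))  # ~0: S_tau is an isometry
```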
\begin{theorem}\label{estimate_e_k_CN}
Let Assumptions \ref{assump0}--\ref{assump 2} be fulfilled with $n=2$, and suppose that $u_0$ is an ${\mathcal F}_0$-measurable random variable satisfying $\|u_0\|_{L^2(\Omega;{\mathcal D}(M^2))}<\infty$. Then we have
\begin{equation}
\begin{split}
\max_{1\leq k\leq N}\big(\mathbb{E}\|u(t_k)-u^k\|_{\mathbb H}^2\big)^{\frac12}\leq C\tau,
\end{split}
\end{equation}
where the positive constant $C$ depends on the Lipschitz coefficients of $F$ and $B$, on $T$, $\|u_0\|_{L^2(\Omega;{\mathcal D}(M^2))}$ and $\|Q^{\frac12}\|_{HS(U,H^2(D))}$, but is independent of $\tau$ and $k$.
\end{theorem}
\begin{proof}
We now have
\[
u(t_k)=S(t_k)u_0+\int_{0}^{t_k}S(t_k-r)F(r){\rm d}r+\int_{0}^{t_k}S(t_k-r)B(r){\rm d}W(r),
\]
and
\[
u^{k}=S_{\tau}^{k}u_0+\frac{\tau}{2}\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}\Big(F^{j-1}+F^j\Big)+\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}\Big(B^{j-1}+B^{j}\Big)\Delta W^{j}.
\]
We derive
\begin{equation*}
\begin{split}
& {\mathbb E}\|u(t_k)-u^k\|^2_{\mathbb H}\lesssim
{\mathbb E}\|\big(S(t_k)-S_{\tau}^{k}\big)u_0\|^2_{\mathbb H}\\
& +{\mathbb E}\|\sum_{j=1}^{k}\int_{t_{j-1}}^{t_j}\big(S(t_k-r)F(r)-\frac12 S_{\tau}^{k-j}T_{\tau}(F^{j-1}+F^{j})\big){\rm d}r\|^2_{\mathbb H}\\
& +{\mathbb E}\|\sum_{j=1}^{k}\int_{t_{j-1}}^{t_j}\big(S(t_k-r)B(r)-\frac12 S_{\tau}^{k-j}T_{\tau}(B^{j-1}+B^{j})\big){\rm d}W(r)\|^2_{\mathbb H}\\
&\leq \|S(t_k)-S_{\tau}^{k}\|^2_{{\mathcal L}({\mathcal D}(M^2),{\mathbb H})}{\mathbb E}\|u_0\|_{{\mathcal D}(M^2)}^2\\
&+T\sum_{j=1}^{k}\int_{t_{j-1}}^{t_j}\big\|S(t_k-r)F(r)-\frac12 S_{\tau}^{k-j}T_{\tau}(F^{j-1}+F^{j})\big\|^2_{\mathbb H}{\rm d}r\\
&+\sum_{j=1}^{k}\int_{t_{j-1}}^{t_j}\big\|S(t_k-r)B(r)-\frac12 S_{\tau}^{k-j}T_{\tau}(B^{j-1}+B^{j})\big\|^2_{HS(U_0,{\mathbb H})}{\rm d}r\\
&\leq C\tau^2,
\end{split}
\end{equation*}
where we use Lemma \ref{estimate of operators} to get for any $r\in[t_{j-1},t_{j}]$,
\begin{equation*}
\begin{split}
& \big\|S(t_k-r)F(r)-S_{\tau}^{k-j}T_{\tau}(F^{j-1}+F^{j})/2\big\|_{\mathbb H}
\leq \|S(t_k-r)\|_{{\mathcal L}({\mathbb H},{\mathbb H})}\|F(r)-(F^{j-1}+F^{j})/2\|_{\mathbb H}\\
& +\|S(t_k-t_j)\|_{{\mathcal L}({\mathcal D}(M),{\mathcal D}(M))}\|S(t_j-r)-Id\|_{{\mathcal L}({\mathcal D}(M),{\mathbb H})}\|(F^{j-1}+F^{j})/2\|_{{\mathcal D}(M)}\\
&+\|S(t_k-t_j)-S_{\tau}^{k-j}\|_{{\mathcal L}({\mathcal D}(M^2),{\mathbb H})}\|(F^{j-1}+F^{j})/2\|_{{\mathcal D}(M^2)} \\
&+\|S_{\tau}^{k-j}\|_{{\mathcal L}({\mathcal D}(M),{\mathcal D}(M))}\|T_{\tau}-Id\|_{{\mathcal L}({\mathcal D}(M),{\mathbb H})}\|(F^{j-1}+F^{j})/2\|_{{\mathcal D}(M)}\\
&\leq C\tau
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
& \big\|S(t_k-r)B(r)-S_{\tau}^{k-j}T_{\tau}(B^{j-1}+B^j)/2\big\|_{HS(U_0,{\mathbb H})}
\leq \|S(t_k-r)\|_{{\mathcal L}({\mathbb H},{\mathbb H})}\|B(r)-(B^{j-1}+B^j)/2\|_{HS(U_0,{\mathbb H})}\\
& +\|S(t_k-t_j)\|_{{\mathcal L}({\mathcal D}(M),{\mathcal D}(M))}\|S(t_j-r)-Id\|_{{\mathcal L}({\mathcal D}(M),{\mathbb H})}\|(B^{j-1}+B^j)/2\|_{HS(U_0,{\mathcal D}(M))}\\
&+\|S(t_k-t_j)-S_{\tau}^{k-j}\|_{{\mathcal L}({\mathcal D}(M^2),{\mathbb H})}\|(B^{j-1}+B^j)/2\|_{HS(U_0,{\mathcal D}(M^2))} \\
&+\|S_{\tau}^{k-j}\|_{{\mathcal L}({\mathcal D}(M),{\mathcal D}(M))}\|T_{\tau}-Id\|_{{\mathcal L}({\mathcal D}(M),{\mathbb H})}\|(B^{j-1}+B^j)/2\|_{HS(U_0,{\mathcal D}(M))}\\
&\leq C\tau.
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
\fi
\iffalse
\subsection{Error analysis of the midpoint scheme}
Recall that $u^n=\begin{pmatrix}
{\bf E}^n\\{\bf H}^n
\end{pmatrix}$; then scheme \eqref{midpoint method} is equivalent to
\begin{equation}\label{Midpoint scheme}
\begin{split}
&\varepsilon{\bf E}^{n+1}=\varepsilon{\bf E}^n+\tau\nabla\times{\bf H}^{n+\frac12}
-\tau{\bf J}_e(t_{n+\frac12})-{\bf J}_e^r(t_{n+\frac12})\Delta W^{n+1},\\
&\mu{\bf H}^{n+1}=\mu{\bf H}^n-\tau\nabla\times{\bf E}^{n+\frac12}-\tau{\bf J}_m(t_{n+\frac12})-{\bf J}_m^r(t_{n+\frac12})\Delta W^{n+1},\\
&{\bf E}^0={\bf E}_0,~{\bf H}^0={\bf H}_0.
\end{split}
\end{equation}
\begin{proposition}
Symplecticity of discrete solution....
\end{proposition}
In this subsection, we will show that there exists a ${\mathcal D}(M)$-valued $\{{\mathcal F}_{t_n}\}_{0\leq n\leq N}$-adapted discrete solution $\{ u^n; ~n=0,1,\ldots,N\}$ for scheme \eqref{midpoint method} or $\{ {\bf E}^n,{\bf H}^n;~n=0,1,\ldots,N\}$ for scheme \eqref{Midpoint scheme}.
\begin{proposition}
Let $p\geq 2$ and fix $T=t_{N}>0$. Suppose $\tau\leq \tau^{*}$ with $\tau^{*}:= \tau^{*}(\|u_0\|_{{\mathcal D}(M)},T)$. Then there exists a unique ${\mathcal D}(M)$-valued $\{{\mathcal F}_{t_n}\}_{0\leq n\leq N}$-adapted discrete solution $\{ u^n; ~n=0,1,\ldots,N\}$ of the scheme \eqref{midpoint method}, and a constant $C:=C(p,T,\|Q^{\frac12}\|_{HS(U,H^{1}(D))})>0$ such that
\begin{equation}\label{bound un}
\max_{1\leq n\leq N}{\mathbb E}\|u^n\|^{p}_{{\mathcal D}(M)}\leq C(1+\|u_0\|^{p}_{L^{p}(\Omega;{\mathcal D}(M))}).
\end{equation}
\end{proposition}
\begin{proof}
{\em Step 1: Existence and $\{{\mathcal F}_{t_n}\}_{0\leq n\leq N}$-adaptedness.} Fix a set $\Omega^{'}\subset\Omega$ with ${\mathbb P}(\Omega^{'})=1$ such that $W(t,\omega)\in U$ for all $t\in[0,T]$ and $\omega\in\Omega^{'}$. In the following, let us assume that $\omega\in\Omega^{'}$. The existence of the iterates $\{ u^n; ~n=0,1,\ldots,N\}$
follows from a standard Galerkin method and Brouwer's theorem, in combination with the bound \eqref{bound un}.
Define a map
\begin{equation*}
\begin{split}
\Lambda:~&{\mathcal D}(M)\times U\to{\mathcal P}({\mathcal D}(M))\\
&(u^n,\Delta W^{n+1})\mapsto \Lambda(u^n,\Delta W^{n+1})
\end{split}
\end{equation*}
where ${\mathcal P}({\mathcal D}(M))$ denotes the set of all subsets of ${\mathcal D}(M)$, and $\Lambda(u^n,\Delta W^{n+1})$ is the set of solutions $u^{n+1}$ of \eqref{midpoint method}. By the closedness of the graph of $\Lambda$ and a selector theorem \cite[Theorem 3.1]{BT1973}, there exists a universally and Borel measurable map $\lambda_n:~{\mathcal D}(M)\times U\to {\mathcal D}(M)$ such that $\lambda_n(s_1,s_2)\in\Lambda(s_1,s_2)$ for all $(s_1,s_2)\in{\mathcal D}(M)\times U$. Therefore, ${\mathcal F}_{t_{n+1}}$-measurability of $u^{n+1}$ follows from the Doob-Dynkin lemma.
{\em Step 2: Proof of \eqref{bound un}.}
From \eqref{mild un}, we have
\begin{equation*}
\begin{split}
u^{k}=S_{\tau}^{k}u_0+\tau\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}F(t_{j-\frac12})+\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}B(t_{j-\frac12})\Delta W^{j}.
\end{split}
\end{equation*}
Applying ${\mathbb E}\|\cdot\|^{p}_{{\mathcal D}(M)}$, we obtain
\begin{equation*}
\begin{split}
{\mathbb E}\|u^{k}\|^{p}_{{\mathcal D}(M)}
\lesssim& {\mathbb E}\|S_{\tau}^{k}u_0\|^{p}_{{\mathcal D}(M)}+{\mathbb E}\|\tau\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}F(t_{j-\frac12})\|^{p}_{{\mathcal D}(M)}+{\mathbb E}\|\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}B(t_{j-\frac12})\Delta W^{j}\|^{p}_{{\mathcal D}(M)}\\
\lesssim& {\mathbb E}\|u_0\|^{p}_{{\mathcal D}(M)}+T^{p-1}\tau\sum_{j=1}^{k}\|S_{\tau}^{k-j}T_{\tau}F(t_{j-\frac12})\|^{p}_{{\mathcal D}(M)}+\tau\sum_{j=1}^{k}\|S_{\tau}^{k-j}T_{\tau}B(t_{j-\frac12})\|^{p}_{HS(U_0,{\mathcal D}(M))}\\
\lesssim& {\mathbb E}\|u_0\|^{p}_{{\mathcal D}(M)}+T^{p}+T\\
\leq &C(1+{\mathbb E}\|u_0\|^{p}_{{\mathcal D}(M)}).
\end{split}
\end{equation*}
{\em Step 3: Uniqueness.}
Assume there is another solution $\{v^{k}\}_{k\in{\mathbb N}}$ for scheme \eqref{midpoint method}. Thus we get
\[
u^{k}-v^{k}=S_{\tau}\big(u^{k-1}-v^{k-1}\big),
\]
which leads to
\[
\|u^{k}-v^{k}\|_{{\mathcal D}(M)}=\|u^{k-1}-v^{k-1}\|_{{\mathcal D}(M)}=\|u^0-v^0\|_{{\mathcal D}(M)}=0.
\]
Hence $\{u^{k}\}_{k\in{\mathbb N}}$ and $\{v^{k}\}_{k\in{\mathbb N}}$ coincide, which completes the proof.
\end{proof}
In this subsection we investigate the mean-square convergence order of the semidiscretization \eqref{midpoint method} via the truncation error approach.
\begin{theorem}\label{estimate_e_k}
Let Assumptions \ref{assump0}--\ref{assump 2} be fulfilled with $n=2$, and suppose that $u_0$ is an ${\mathcal F}_0$-measurable random variable satisfying $\|u_0\|_{L^2(\Omega;{\mathcal D}(M^2))}<\infty$. Then we have
\begin{equation}
\begin{split}
\max_{1\leq k\leq N}\big(\mathbb{E}\|u(t_k)-u^k\|_{\mathbb H}^2\big)^{\frac12}\leq C\tau,
\end{split}
\end{equation}
where the positive constant $C$ depends on the Lipschitz coefficients of $F$ and $B$, on $T$, $\|u_0\|_{L^2(\Omega;{\mathcal D}(M^2))}$ and $\|Q^{\frac12}\|_{HS(U,H^2(D))}$, but is independent of $\tau$ and $k$.
\end{theorem}
\begin{proof}
We now have
\[
u(t_k)=S(t_k)u_0+\int_{0}^{t_k}S(t_k-r)F(r){\rm d}r+\int_{0}^{t_k}S(t_k-r)B(r){\rm d}W(r),
\]
and
\[
u^{k}=S_{\tau}^{k}u_0+\tau\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}F(t_{j-\frac12})+\sum_{j=1}^{k}S_{\tau}^{k-j}T_{\tau}B(t_{j-\frac12})\Delta W^{j}.
\]
We derive
\begin{equation*}
\begin{split}
& {\mathbb E}\|u(t_k)-u^k\|^2_{\mathbb H}\lesssim
{\mathbb E}\|\big(S(t_k)-S_{\tau}^{k}\big)u_0\|^2_{\mathbb H}\\
& +{\mathbb E}\|\sum_{j=1}^{k}\int_{t_{j-1}}^{t_j}\big(S(t_k-r)F(r)-S_{\tau}^{k-j}T_{\tau}F(t_{j-\frac12})\big){\rm d}r\|^2_{\mathbb H}\\
& +{\mathbb E}\|\sum_{j=1}^{k}\int_{t_{j-1}}^{t_j}\big(S(t_k-r)B(r)-S_{\tau}^{k-j}T_{\tau}B(t_{j-\frac12})\big){\rm d}W(r)\|^2_{\mathbb H}\\
&\leq \|S(t_k)-S_{\tau}^{k}\|^2_{{\mathcal L}({\mathcal D}(M^2),{\mathbb H})}{\mathbb E}\|u_0\|_{{\mathcal D}(M^2)}^2\\
&+T\sum_{j=1}^{k}\int_{t_{j-1}}^{t_j}\big\|S(t_k-r)F(r)-S_{\tau}^{k-j}T_{\tau}F(t_{j-\frac12})\big\|^2_{\mathbb H}{\rm d}r\\
&+\sum_{j=1}^{k}\int_{t_{j-1}}^{t_j}\big\|S(t_k-r)B(r)-S_{\tau}^{k-j}T_{\tau}B(t_{j-\frac12})\big\|^2_{HS(U_0,{\mathbb H})}{\rm d}r\\
&\leq C\tau^2,
\end{split}
\end{equation*}
where we use Lemma \ref{estimate of operators} to get for any $r\in[t_{j-1},t_{j}]$,
\begin{equation*}
\begin{split}
& \big\|S(t_k-r)F(r)-S_{\tau}^{k-j}T_{\tau}F(t_{j-\frac12})\big\|_{\mathbb H}
\leq \|S(t_k-r)\|_{{\mathcal L}({\mathbb H},{\mathbb H})}\|F(r)-F(t_{j-\frac12})\|_{\mathbb H}\\
& +\|S(t_k-t_j)\|_{{\mathcal L}({\mathcal D}(M),{\mathcal D}(M))}\|S(t_j-r)-Id\|_{{\mathcal L}({\mathcal D}(M),{\mathbb H})}\|F(t_{j-\frac12})\|_{{\mathcal D}(M)}\\
&+\|S(t_k-t_j)-S_{\tau}^{k-j}\|_{{\mathcal L}({\mathcal D}(M^2),{\mathbb H})}\|F(t_{j-\frac12})\|_{{\mathcal D}(M^2)} \\
&+\|S_{\tau}^{k-j}\|_{{\mathcal L}({\mathcal D}(M),{\mathcal D}(M))}\|T_{\tau}-Id\|_{{\mathcal L}({\mathcal D}(M),{\mathbb H})}\|F(t_{j-\frac12})\|_{{\mathcal D}(M)}\\
&\leq C\tau
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
& \big\|S(t_k-r)B(r)-S_{\tau}^{k-j}T_{\tau}B(t_{j-\frac12})\big\|_{HS(U_0,{\mathbb H})}
\leq \|S(t_k-r)\|_{{\mathcal L}({\mathbb H},{\mathbb H})}\|B(r)-B(t_{j-\frac12})\|_{HS(U_0,{\mathbb H})}\\
& +\|S(t_k-t_j)\|_{{\mathcal L}({\mathcal D}(M),{\mathcal D}(M))}\|S(t_j-r)-Id\|_{{\mathcal L}({\mathcal D}(M),{\mathbb H})}\|B(t_{j-\frac12})\|_{HS(U_0,{\mathcal D}(M))}\\
&+\|S(t_k-t_j)-S_{\tau}^{k-j}\|_{{\mathcal L}({\mathcal D}(M^2),{\mathbb H})}\|B(t_{j-\frac12})\|_{HS(U_0,{\mathcal D}(M^2))} \\
&+\|S_{\tau}^{k-j}\|_{{\mathcal L}({\mathcal D}(M),{\mathcal D}(M))}\|T_{\tau}-Id\|_{{\mathcal L}({\mathcal D}(M),{\mathbb H})}\|B(t_{j-\frac12})\|_{HS(U_0,{\mathcal D}(M))}\\
&\leq C\tau.
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
\fi
\bibliographystyle{plain}
\section{Introduction}
The discovery of instantons in the 1970s \cite{BPST1975} made clear that topology was a relevant aspect of the dynamics of the low-energy degrees of freedom in QCD \cite{tHooft1976,Witten1979,Veneziano1979},
but it also raised another important issue: if one introduces in the QCD Lagrangian an additional term $\mathscr{L}_\theta = \theta Q$,
where $Q(x)=\frac{g^{2}}{64\pi^{2}}\varepsilon^{\mu\nu\rho\sigma} F_{\mu\nu}^{a}(x)F_{\rho\sigma}^{a}(x)$ is the so-called \emph{topological charge density},
despite the fact that $Q = \partial^\mu K_\mu$, where $K^\mu$ is the so-called
\emph{Chern-Simons current}, its contribution in the quantum theory is non-zero, owing to the existence of configurations with non-trivial topology (such as instantons). This term, usually referred to as the \emph{topological term} or the \emph{$\theta$-term} (from the name of the coefficient that appears in front of it), is particularly interesting since it introduces an explicit breaking of the CP symmetry in QCD (referred to as \emph{strong-CP violation}), absent in the original theory.
So far, however, no violation of the CP symmetry in strong interactions has been observed experimentally, so that the parameter $\theta$ is believed to be zero (or ``practically'' zero), even though it could in principle assume any value in $[0,2\pi)$.
In particular, one can find a relation between the magnitude of the parameter $\theta$ and the neutron electric-dipole moment \cite{Weinberg-book},
$d_N \simeq \frac{M_\pi^2}{M_N^3}\, e \left|\theta \right| \simeq 10^{-16} \left|\theta \right| \; e\cdot\mathrm{cm}$,
where $M_N$ is the neutron mass, while $M_\pi$ is the pion mass. From the experimental data \cite{Neutron-EDM} we know that $d_N<10^{-26}\; e\cdot\mathrm{cm}$, which leads to an upper bound:
\begin{equation}\label{upper bound on theta}
\left|\theta \right| < 10^{-10} .
\end{equation}
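As a trivial numerical sanity check of this bound (using only the two powers of ten quoted above):

```python
# Order-of-magnitude check of |theta| < 10^{-10}: the theoretical estimate
# d_N ~ 10^{-16} |theta| e*cm combined with the experimental bound
# d_N < 10^{-26} e*cm. Both numbers are the ones quoted in the text.
d_N_prefactor = 1e-16   # e*cm per unit of theta (theoretical estimate)
d_N_exp_bound = 1e-26   # e*cm (experimental upper bound)
theta_bound = d_N_exp_bound / d_N_prefactor
print(theta_bound)      # ~1e-10
```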
(More refined relations among the neutron electric dipole moment and the $\theta$ angle were derived by Baluni \cite{Baluni1979}, in the framework of the so-called \emph{bag model}, by Crewther, Di Vecchia, Veneziano, and Witten \cite{CDVW1979}, using Chiral Perturbation Theory, and by many others using different approaches: see Sec. 7.1 of Ref. \cite{VP2009} for a more detailed discussion
and also Ref. \cite{EDM-lattice} for a recent lattice determination.)
This ``fine-tuning'' problem (usually referred to as the \emph{strong-CP problem}) is still an open issue, although possible solutions have been proposed
(the most famous one being that of Peccei and Quinn \cite{PQ1977}, who proposed a mechanism, based on a new $U(1)$ symmetry and involving a new light pseudoscalar particle called \emph{axion} \cite{WW1978}, in order to dynamically rotate away the $\theta$-dependence of the theory).
However, it is nonetheless interesting to study the dependence of QCD on finite $\theta$: the insertion of the topological term with $\theta\neq 0$ in the QCD Lagrangian causes (by virtue of the non-trivial topology) a modification of the partition function of the theory and, therefore, a non-trivial dependence on $\theta$ of the \emph{vacuum energy density} $\epsilon_{vac}(\theta)$, which will be the object of our investigations in this paper.
Let us write explicitly the expression for the partition function of the theory with $N_f$ quark flavours and with the inclusion of the $\theta$-term:
\begin{equation}\label{total partition function}
Z = \int [dA][d\Overline[2]{q} dq]e^{i\int d^4x \mathscr{L}_{tot}} ,
\end{equation}
where $\mathscr{L}_{tot}= -\frac{1}{2}\mathrm{Tr}\left[F_{\mu\nu}F^{\mu\nu}\right] +i\Overline[2]{q}\gamma^\mu D_\mu q-\Overline[2]{q}_R \mathcal{M} q_L -\Overline[2]{q}_L \mathcal{M}^{\dagger} q_R +\theta Q$, with $\mathcal{M}$ a general complex mass matrix for the quarks.
If we now perform a change of the (dummy) fermionic integration variables in
\eqref{total partition function} in the form of a $SU(N_f)_L \otimes SU(N_f)_R \otimes U(1)_A$ transformation,\footnote{Throughout this paper, we shall use the following notations for the left-handed and right-handed quark fields: $q_{L,R} \equiv \frac{1}{2} (1 \pm \gamma_5) q$, with $\gamma_5 \equiv -i\gamma^0\gamma^1\gamma^2\gamma^3$. Moreover, we shall adopt the convention $\varepsilon^{0123} = -\varepsilon_{0123} = +1$ for the (Minkowskian) completely antisymmetric tensor $\varepsilon^{\mu\nu\rho\sigma} (=-\varepsilon_{\mu\nu\rho\sigma})$ which appears in the expression of the topological charge density $Q(x)$.}
\begin{equation} \label{transformation SU(L)_L x SU(L)_R x U(1)_A}
\left\{
\begin{aligned}
q_L & \rightarrow q_L' = \tilde{V}_L q_L = e^{i\alpha} V_L q_L ,\\
q_R & \rightarrow q_R' = \tilde{V}_R q_R = e^{-i\alpha} V_R q_R ,
\end{aligned}
\right.
\end{equation}
where $V_L,~V_R \in SU(N_f)$, we see that, because of the non-invariance of the fermionic functional-integral measure
($[d\Overline[2]{q} dq]\rightarrow [d\Overline[2]{q}' dq']=[d\Overline[2]{q} dq] e^{-i2N_f\alpha \int d^4x \, Q(x)}$)
and of the mass term,
the partition function $Z$ is invariant under the following changes:
\begin{equation}\label{changes in Z}
\left\{
\begin{aligned}
\mathcal{M}&\rightarrow \mathcal{M}'=\tilde{V}_R^{\dagger}\, \mathcal{M} \, \tilde{V}_L , \\
\theta & \rightarrow \theta' = \theta - 2N_f \alpha .
\end{aligned}
\right.
\end{equation}
We immediately notice that, if $\mathcal{M}$ is invertible ($\det\mathcal{M}\neq 0$), we have:
$\arg(\det\mathcal{M}')=\arg(\det\mathcal{M})+2N_f \alpha$,
so that, under the transformation \eqref{transformation SU(L)_L x SU(L)_R x U(1)_A}-\eqref{changes in Z}, the following combination:
\begin{equation}\label{physical theta}
\theta_{phys} \equiv \theta + \arg(\det\mathcal{M})
\end{equation}
stays unchanged. This is the ``physical'' value of the parameter $\theta$:
a non-zero value of $\theta_{phys}$ implies a strong CP-violation and
the upper bound \eqref{upper bound on theta} actually refers to $\theta_{phys}$.
Eqs. \eqref{changes in Z} and \eqref{physical theta} also imply that, if the mass matrix is invertible, then it is possible to move all the dependence on the parameter $\theta$ into the mass term. In fact, performing a transformation \eqref{transformation SU(L)_L x SU(L)_R x U(1)_A}-\eqref{changes in Z} with
$\alpha=\frac{\theta}{2N_f}$, we obtain $\theta'=0$ and $\arg(\det\mathcal{M}') = \theta_{phys}$.
On the other hand, if we take $\mathcal{M}$ to coincide with the \emph{physical} quark-mass matrix $M \equiv \diag(m_1,\ldots, m_{N_f})$, with $m_i \in \mathbb{R}^+ \; \forall i$ (which is always possible, by means of a transformation
\eqref{transformation SU(L)_L x SU(L)_R x U(1)_A}-\eqref{changes in Z}),
we have $\arg(\det M)=0$ and $\theta = \theta_{phys}$. (Of course, if at least one quark is massless, we have $\det {M}=0$ and, in this case, it is possible to rotate away all the dependence on $\theta_{phys}$ from the theory.)
From now on, we shall consider the partition function $Z[\theta]$ in this case
($\mathcal{M} = M$ and $\theta = \theta_{phys}$).
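The invariance of $\theta_{phys}$ under \eqref{transformation SU(L)_L x SU(L)_R x U(1)_A}-\eqref{changes in Z} can also be checked numerically. In the sketch below, the number of flavours, the angle $\alpha$, the value of $\theta$ and the complex quark masses are arbitrary illustrative choices; we take $V_L=V_R=\mathbb{1}$, so that the $U(1)_A$ rotation simply multiplies each mass by $e^{2i\alpha}$ while shifting $\theta$ by $-2N_f\alpha$:

```python
import cmath
import math

# Illustrative choices (not from the text): N_f flavours, an axial angle alpha,
# a bare theta, complex diagonal quark masses, and V_L = V_R = identity.
N_f = 3
alpha = 0.7
theta = 0.3
masses = [0.005 * cmath.exp(0.2j), 0.010 * cmath.exp(-0.1j),
          0.100 * cmath.exp(0.4j)]

def arg_det(ms):
    # arg of the determinant of the diagonal mass matrix
    det = 1 + 0j
    for m in ms:
        det *= m
    return cmath.phase(det)

# U(1)_A rotation: each mass picks up e^{2 i alpha}; theta -> theta - 2 N_f alpha
masses_prime = [m * cmath.exp(2j * alpha) for m in masses]
theta_prime = theta - 2 * N_f * alpha

theta_phys = theta + arg_det(masses)
theta_phys_prime = theta_prime + arg_det(masses_prime)

# theta_phys is unchanged modulo 2*pi
d = (theta_phys - theta_phys_prime) % (2 * math.pi)
mismatch = min(d, 2 * math.pi - d)
print(mismatch)  # ~0
```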
In particular, we are interested in the $\theta$-dependence of the vacuum energy density $\epsilon_{vac}(\theta)$, which is related to the partition function $Z[\theta]$ by the following well-known relation:
\begin{equation}\label{vacuum energy density definition}
Z[\theta]\equiv \frac{1}{\mathcal{N}}e^{-i\Omega \epsilon_{vac}(\theta)} \Rightarrow \epsilon_{vac}(\theta) = \frac{i}{\Omega}\log Z[\theta] + const.,
\end{equation}
where $\mathcal{N}$ is a normalizing constant while $\Omega=VT$ is the considered four-volume (to be sent to infinity at the end).\footnote{The expression \eqref{vacuum energy density definition} refers to the partition function of the theory in Minkowski space-time. It is also common to express it in terms of the partition function $Z_E[\theta]$ of the theory in Euclidean space-time as follows: $\epsilon_{vac}(\theta) = -\frac{1}{\Omega_E}\log Z_E[\theta] + const.$, where $\Omega_E=VT_E$ is the Euclidean four-volume, with Euclidean time $T_E$.}
Since $\theta$ is very small, it makes sense to Taylor-expand the vacuum energy density around $\theta=0$:
\begin{equation}\label{Taylor expansion of vacuum energy density}
\epsilon_{vac}(\theta) = \epsilon_{vac}(0) + \frac{1}{2!} c_2 \theta^2 + \frac{1}{4!} c_4 \theta^4 + \ldots ~,~~~ \text{with:}~~
c_n \equiv \left.\frac{\partial^n \epsilon_{vac}(\theta)}{\partial \theta^n}\right|_{\theta=0} .
\end{equation}
Only even powers of $\theta$ appear in \eqref{Taylor expansion of vacuum energy density} since the coefficients $c_n$ of the odd-power terms vanish by parity-invariance at $\theta=0$. The coefficients $c_n$ of this expansion are related to the correlation functions of the topological charge density at $\theta=0$.
More explicitly, starting from the expression \eqref{vacuum energy density definition} and indicating with $Q_{tot} \equiv \int d^4 x\, Q(x)$ the (total) topological charge, one easily finds that:
\begin{equation}\label{topological susceptibility from vacuum energy density}
c_2 \equiv \left.\frac{\partial^2 \epsilon_{vac}(\theta)}{\partial \theta^2}\right|_{\theta=0} = -\frac{i}{\Omega} \langle Q_{tot}^2 \rangle_{\theta=0} \equiv \chi,
\end{equation}
i.e., the coefficient $c_2$ of the $\theta^2$ term in \eqref{Taylor expansion of vacuum energy density} coincides with the so-called \emph{topological susceptibility} of the theory at $\theta=0$:
$\chi \equiv -\frac{i}{\Omega} \langle Q_{tot}^2 \rangle_{\theta=0} =
-i \int d^4 x \langle T Q(x) Q(0) \rangle_{\theta=0}$.
Concerning the coefficient $c_4$, it turns out to coincide with the \emph{second cumulant} of the probability distribution of the topological charge-density operator $Q$ \cite{VP2009}:
\begin{equation}\label{c4 from vacuum energy density}
c_4 \equiv \left.\frac{\partial^4 \epsilon_{vac}(\theta)}{\partial \theta^4}\right|_{\theta=0} = \frac{i}{\Omega}\left(\langle Q_{tot}^4 \rangle_{\theta=0} - 3 \langle Q_{tot}^2 \rangle^2_{\theta=0}\right) ,
\end{equation}
which is related to the $\eta'-\eta'$ elastic scattering amplitude \cite{Veneziano1979} and to the non-gaussianity of the topological charge distribution \cite{VP2009}.
Therefore, the expansion \eqref{Taylor expansion of vacuum energy density} can be rewritten as:
\begin{equation}\label{susceptibility and c4 in vacuum energy density}
\epsilon_{vac}(\theta)=\epsilon_{vac}(0)+\frac{1}{2}\chi \theta^2 + \frac{1}{24}c_4 \theta^4+\ldots
\end{equation}
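The identification of $\chi$ and $c_4$ with the $\theta$-derivatives at $\theta=0$ can be illustrated numerically with central finite differences (a sketch with arbitrary illustrative values of $\chi$ and $c_4$; for the quartic truncation below, the fourth-difference stencil is exact up to rounding, while the second difference carries the standard $h^2 f^{(4)}/12$ correction, which we subtract):

```python
# Check that theta-derivatives at theta = 0 of
#   eps(theta) = eps0 + chi*theta^2/2 + c4*theta^4/24
# recover chi (2nd derivative) and c4 (4th derivative).
# eps0, chi, c4 are arbitrary illustrative numbers, not physical values.
eps0, chi, c4 = 1.3, -0.04, 0.002
h = 0.5

def eps(t):
    return eps0 + 0.5 * chi * t**2 + c4 * t**4 / 24.0

# Central stencils
d4 = (eps(-2*h) - 4*eps(-h) + 6*eps(0) - 4*eps(h) + eps(2*h)) / h**4
d2 = (eps(-h) - 2*eps(0) + eps(h)) / h**2
# For a quartic, d2 = chi + h^2*c4/12 exactly; subtract the known correction.
chi_est = d2 - h**2 * d4 / 12.0
print(chi_est, d4)  # recover chi and c4 up to rounding
```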
The strategy of this paper consists in computing the dependence on $\theta$ of the vacuum energy density, so as to obtain, exploiting the relations \eqref{topological susceptibility from vacuum energy density} and \eqref{c4 from vacuum energy density}, the expressions of the topological susceptibility $\chi$ and of the second cumulant $c_4$ in terms of the fundamental parameters of the theory,
not directly from the \emph{fundamental} theory (which is anyhow possible using its formulation on the lattice: see the discussion in Sec. 6),
but from some relevant \emph{effective} Lagrangian models.
We shall first consider, in Sec. 2, the \emph{Chiral Effective Lagrangian} in the case of $L$ ($\le N_f$) \emph{light} quark flavours (taken to be massless in the ideal \emph{chiral} limit): the physically relevant cases are $L=2$, with the
quarks $up$ and $down$, and $L=3$, including also the $strange$ quark \cite{Weinberg1967,Weinberg1979,GL1982-1984,GL1985}.
This effective theory describes the low-energy dynamics for the lightest hadronic states in the spectrum of QCD, i.e., the lightest non-flavour-singlet pseudoscalar mesons, which are identified with the $L^2-1$ pseudo-Goldstone bosons originated by the spontaneous breaking of the $SU(L)_L \otimes SU(L)_R$ chiral symmetry.
The results that we shall report in Sec. 2 are already well known in the literature (see, in particular, Refs. \cite{Smilga-book,MC2009,GM2015,GHVV2016}).
However, for the benefit of the reader, we have decided to report here some details of the calculations of $\chi$ and $c_4$ also in this case since this will allow us to introduce the basic notations and the main techniques for performing the calculations in the other cases.
Moreover, this case is an important frame of reference for the other effective models that we shall discuss in the rest of the paper.
In Secs. 3 and 4 we shall consider different effective Lagrangian models which include the flavour-singlet meson field and also implement the $U(1)$ axial anomaly of the fundamental theory.
In recent decades there have been essentially two different ``schools of thought'' on how to address this issue: the first assumes that the dominant fluctuations are semiclassical instantons, while the second is based upon the large-$N_c$ limit of an $SU(N_c)$ gauge theory, and assumes that the dominant fluctuations are not semiclassical but quantum. The two models that we shall consider in Secs. 3 and 4 belong respectively to the first trend (the so-called \emph{Extended (Non-)Linear sigma model} \cite{ELSM1,ELSM2,ELSM3}) and to the second one (the model of Witten, Di Vecchia, Veneziano, \emph{et al.} \cite{WDV1,WDV2,WDV3}).
In Sec. 5, we shall consider another effective Lagrangian model (which
was originally proposed in Ref. \cite{EM1994} and elaborated on in Refs.
\cite{MM2003,EM2011,MM2013}), which is in a sense in-between the \emph{Extended (Non-)Linear sigma model} and the model of Witten, Di Vecchia, Veneziano, \emph{et al.}: for this reason we shall call it the \emph{Interpolating model}.
Finally, in Sec. 6 we shall draw our conclusions, summarizing the analytical results that we have obtained for the topological susceptibility $\chi$ and the second cumulant $c_4$ in the four different frameworks mentioned above and also evaluating numerically our results, so as to critically compare them with each other and with the available lattice results.
\section{The Chiral Effective Lagrangian}
We first consider the Chiral Effective Lagrangian in the case of $L$ light quark flavours: the results that we shall report in this section are already well known in the literature (see, in particular, Refs. \cite{Smilga-book,MC2009,GM2015,GHVV2016}).
However, for the benefit of the reader, we have decided to report here some details of the calculations of $\chi$ and $c_4$ also in this case since this will allow us to introduce the basic notations and the main techniques for performing the calculations in the other cases.
Moreover, this case is an important frame of reference for all the other models that we shall discuss: in fact, if one ``neglects'' the presence of the flavour-singlet meson field and of the $U(1)$ axial anomaly (formally sending its mass to infinity), all the predictions derived in the other models must reduce to those that will be found in this section.
The chiral effective Lagrangian formulation was introduced by Weinberg \cite{Weinberg1967}
and was later elaborated on, becoming one of the most important tools for investigating the dynamics of the effective degrees of freedom in the low-energy regime of QCD \cite{Weinberg1979,GL1982-1984,GL1985}.
The idea pursued by Weinberg \emph{et al.} was to build an effective theory for the lightest hadronic states in the spectrum of the theory, i.e., the lightest pseudoscalar mesons, which are the pseudo-Goldstone bosons originated by the spontaneous breaking of the chiral symmetry.
This purpose can be achieved by writing down all the terms consistent with the symmetries of the fundamental theory, thereby obtaining an ``exact'' theory. However, the number of terms satisfying this requirement is infinite: so, in order to make any definite physical prediction, it is necessary to endow the theory with a power-counting scheme which orders the terms, providing a criterion to decide whether or not to keep a term at a given order.
Such a criterion is the low-energy expansion, or the \emph{$p$-expansion}: it consists in sorting the terms of the Chiral Effective Lagrangian on the basis of their number of derivatives, i.e., for the amplitudes in momentum space, on their order in the momentum-scale $p$.
So, a generic Chiral Effective Lagrangian is written as:
\begin{equation}\label{Generic Chiral Effective Lagrangian}
\mathscr{L}_{eff}= \mathscr{L}_{eff}^{(0)} + \mathscr{L}_{eff}^{(2)} + \mathscr{L}_{eff}^{(4)} + \mathscr{L}_{eff}^{(6)} + \ldots ,
\end{equation}
where $\mathscr{L}_{eff}^{(2n)}$ gathers all the terms of order $p^{2n}$ (i.e., with $2n$ derivatives, the quark-mass matrix $\mathcal{M}$ counting as $p^2$, i.e., as two derivatives), while the odd-power terms are ruled out by Lorentz invariance. The term $\mathscr{L}_{eff}^{(0)}$ turns out to be an irrelevant constant, which can be neglected.
In this paper, we shall make use of the Chiral Effective Lagrangian at the lowest (leading) nontrivial order $\mathcal{O}(p^2)$. Here, we limit ourselves to reporting the final result (for a treatment of the Chiral Effective Lagrangian up to the next-to-leading order $\mathcal{O}(p^4)$, see Ref. \cite{GL1985}):
\begin{equation}\label{Chiral Effective Lagrangian O(p^2)}
\mathscr{L}_{eff}^{(2)}(U,U^{\dagger})= \frac{1}{2}\mathrm{Tr} [\partial_\mu U \partial^\mu U^{\dagger}] + \frac{B_m}{2\sqrt{2}}\mathrm{Tr}\left[\mathcal{M} U +\mathcal{M}^\dagger U^{\dagger}\right] ,
\end{equation}
where:
\begin{itemize}
\item the field $U$, describing only the $L^2-1$ non-flavour-singlet pseudo-Goldstone bosons, is an element of the group $SU(L)$, up to a multiplicative constant. In other words, it can be written as:
\begin{equation}
U\equiv \frac{F_\pi}{\sqrt{2}}\, U'~,~~~ U'\in SU(L) ,
\end{equation}
where $F_\pi$ is the usual \emph{pion decay constant};
\item $\mathcal{M}$ is a complex quark-mass matrix, which, considering the relation \eqref{physical theta} between the coefficient $\theta$ of the topological term and the argument of the determinant of the mass matrix, can be taken to be:
\begin{equation}\label{mass matrix with theta term}
\mathcal{M}=Me^{i\frac{\theta_{phys}}{L}} ,
\end{equation}
where $M=\diag (m_1,\ldots , m_L)$ is the physical (real and diagonal) quark-mass matrix. In this way, we are moving all the dependence on $\theta_{phys}$ into the mass term. In order to simplify the notation, from now on we shall write $\theta$ in place of $\theta_{phys}$;
\item $B_m$ is a constant having the dimension of an energy squared, often written as:
\begin{equation}\label{B_m definition}
B_m = 2 F_\pi B ,
\end{equation}
where $B$ is a constant, carrying the dimension of an energy, which relates the masses of the \emph{up} and \emph{down} quarks to the pion mass through $M_\pi^2 = B(m_u + m_d)$.
\end{itemize}
We can rewrite the Chiral Effective Lagrangian
\eqref{Chiral Effective Lagrangian O(p^2)} as:
\begin{equation}\label{Chiral Effective Lagrangian O(p^2) bis}
\mathscr{L}_{eff}^{(2)}(U,U^{\dagger})= \frac{1}{2}\mathrm{Tr} [\partial_\mu U \partial^\mu U^{\dagger}] - V(U,U^\dagger) ,
\end{equation}
where the potential $V$ is given by:
\begin{equation}\label{Eff ch Lagr: potential}
V(U,U^\dagger) = -\frac{B_m}{2\sqrt{2}}\mathrm{Tr}\left[\mathcal{M}U +\mathcal{M}^{\dagger} U^{\dagger}\right] = -\frac{B_m}{\sqrt{2}} \re\left[\mathrm{Tr}\left(Me^{i\theta /L}U\right)\right] .
\end{equation}
We shall use the fact that, up to a $\theta$-independent constant, the vacuum energy density $\epsilon_{vac}(\theta)$ coincides with the \emph{minimum} of the potential $V$ over field configurations constant with respect to the space-time coordinates $x$ (see Refs. \cite{Smilga-book,LM1992} and references therein):
\begin{equation}\label{vacuum energy density = V_min}
\epsilon_{vac}(\theta) \simeq V_{min}(\theta) + const.
\end{equation}
Given that we are considering a diagonal $M=\diag(m_1,\ldots , m_L)$, it is reasonable to look for the minimum of the potential with an ansatz in which the field $U$ is also diagonal.
Since, in this case, $U=\frac{F_\pi}{\sqrt{2}}\, U'$ with $U'$ an element of $SU(L)$, we set:
\begin{equation}\label{Diagonal form of U}
U = \frac{F_\pi}{\sqrt{2}}\diag \left( e^{i\alpha_1},\ldots , e^{i\alpha_L}\right) ,
\end{equation}
where the $\alpha_j$ are constant phases, satisfying the constraint:
\begin{equation}\label{Constraint of the determinant of U}
\det U' = e^{i\sum_j \alpha_j} = 1 \Longrightarrow \sum\limits_{j=1}^L \alpha_j = 0 .
\end{equation}
Substituting the explicit expressions for $M$ and $U$ into Eq. \eqref{Eff ch Lagr: potential}, we find:
\begin{equation}\label{Eff ch Lagr: explicit potential}
V = -\frac{F_\pi B_m}{2} \sum\limits_{j=1}^L m_j \cos\phi_j ,
\end{equation}
where we have defined $\phi_j \equiv \frac{\theta}{L}+\alpha_j$.
Starting from Eq. \eqref{Constraint of the determinant of U}, we see that the phases $\phi_j$ must satisfy the constraint:
\begin{equation}\label{Constraint on the phi}
\sum\limits_{j=1}^L \phi_j = \sum\limits_{j=1}^L \left(\frac{\theta}{L}+\alpha_j\right) = \theta .
\end{equation}
It is now convenient to consider separately the special case $L=2$ and the general case $L \ge 2$: the former can be easily solved exactly, for any values of $\theta$ and of the quark masses, whereas the latter cannot in general be solved exactly (in ``closed form''), and only an approximate solution can be derived.
\subsection{A special case: $L=2$}
In this case, it is easy to find the explicit expressions of the phases $\phi_1$ and $\phi_2$ which minimize the potential \eqref{Eff ch Lagr: explicit potential}, with the constraint \eqref{Constraint on the phi}:
\begin{equation}\label{Solution of the minimization in simple model L=2}
\phi_1 = \arctan\left(\frac{m_2\sin\theta}{m_1+m_2\cos\theta}\right) ~,~~~
\phi_2=\theta - \phi_1 .
\end{equation}
Substituting \eqref{Solution of the minimization in simple model L=2} in \eqref{Eff ch Lagr: explicit potential}, the following expression for the minimum of the potential is found:
\begin{equation}\label{Eff ch Lagr: final form of the potential for L=2}
V(\theta)=\epsilon_{vac}(\theta)=-\frac{F_\pi B_m}{2}\sqrt{m_1^2 + m_2^2 + 2m_1 m_2 \cos\theta} .
\end{equation}
In the end, we are able to find the expressions for the topological susceptibility $\chi$ and the second cumulant $c_4$ \cite{Smilga-book,MC2009,GM2015,GHVV2016}:
\begin{equation}\label{Eff ch Lagr: topological susceptibility for L=2}
\chi=\left.\frac{\partial^2 \epsilon_{vac}(\theta)}{\partial\theta^2}\right|_{\theta=0} = \frac{F_\pi B_m}{2}\left(\frac{1}{m_1}+\frac{1}{m_2}\right)^{-1} ,
\end{equation}
\begin{equation}\label{Eff ch Lagr: second cumulant for L=2}
c_4 = \left.\frac{\partial^4 \epsilon_{vac}(\theta)}{\partial\theta^4}\right|_{\theta=0} = -\frac{F_\pi B_m}{2}\left(\frac{1}{m_1^3} + \frac{1}{m_2^3}\right)\left(\frac{1}{m_1} + \frac{1}{m_2}\right)^{-4} .
\end{equation}
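As a sanity check (ours, not part of the paper's derivation), the exact $L=2$ results above can be verified numerically. The sketch below, in which $F_\pi$, $B_m$, $m_1$ and $m_2$ are arbitrary placeholder values, checks the stationarity of $\phi_1$, the closed form of the minimum, and the expression for $\chi$ via a central finite difference.

```python
import math

# Arbitrary placeholder parameters (NOT fitted or physical values).
F_pi, B_m = 0.092, 5.0
m1, m2 = 0.002, 0.005

def V(phi1, theta):
    """Potential with phi2 = theta - phi1 already imposed by the constraint."""
    return -0.5 * F_pi * B_m * (m1 * math.cos(phi1) + m2 * math.cos(theta - phi1))

def phi1_min(theta):
    """Claimed minimizing phase phi_1 = arctan(m2 sin(theta)/(m1 + m2 cos(theta)))."""
    return math.atan2(m2 * math.sin(theta), m1 + m2 * math.cos(theta))

def V_min_closed(theta):
    """Claimed closed form of the minimum of the potential."""
    return -0.5 * F_pi * B_m * math.sqrt(m1**2 + m2**2 + 2*m1*m2*math.cos(theta))

theta = 0.7
p1 = phi1_min(theta)
# Stationarity: dV/dphi1 is proportional to m1 sin(phi1) - m2 sin(theta - phi1).
stationarity = m1*math.sin(p1) - m2*math.sin(theta - p1)
closed_form_gap = V(p1, theta) - V_min_closed(theta)

# chi from a second central difference of V_min_closed at theta = 0.
h = 1e-3
chi_num = (V_min_closed(h) - 2*V_min_closed(0.0) + V_min_closed(-h)) / h**2
chi_formula = 0.5 * F_pi * B_m * m1 * m2 / (m1 + m2)
```

Both the stationarity condition and the agreement between the finite-difference curvature and the analytic $\chi$ hold to numerical precision for these placeholder values.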
\subsection{The more general case: $L\ge 2$}
In the more general case $L\ge 2$, an exact analytical solution as in the previous case is not available. However, since our final purpose is to obtain the expressions for $\chi$ and $c_4$, which are by definition evaluated at $\theta=0$, we can Taylor-expand the potential around $\theta=0$.
If we set $\theta=0$, it is easy to show that the form of the field $U$ which minimizes the potential is $U=\frac{F_\pi}{\sqrt{2}}\mathbf{I}$.
We can thus implement a Taylor expansion of the potential \eqref{Eff ch Lagr: explicit potential} considering both $\theta\ll 1$ and $\phi_i \ll 1$ $\forall i$.
After some calculations, the following expression for the phases $\phi_i$ which minimize the potential \eqref{Eff ch Lagr: explicit potential}, with the constraint \eqref{Constraint on the phi}, is found:
\begin{equation}\label{Eff ch Lagr: final form of phi}
\phi_i=\frac{\bar{m}}{m_i}\theta + \frac{1}{6} \frac{\bar{m}}{m_i} \left[\left(\frac{\bar{m}}{m_i}\right)^2 - \sum\limits_{j=1}^L\left(\frac{\bar{m}}{m_j}\right)^3\right]\theta^3+\mathcal{O}(\theta^5) ,
\end{equation}
where we have defined:
\begin{equation}\label{m-bar}
\bar{m}\equiv\left(\sum\limits_{i=1}^L \frac{1}{m_i} \right)^{-1} .
\end{equation}
Finally, inserting \eqref{Eff ch Lagr: final form of phi} in
\eqref{Eff ch Lagr: explicit potential}, we find:
\begin{equation}\label{Eff ch Lagr: final form of the potential L>2}
V(\theta)=\epsilon_{vac}(\theta)= const. + \frac{1}{2} \left[ \frac{F_\pi B_m\bar{m}}{2} \right] \theta^2 + \frac{1}{24} \left[ -\frac{F_\pi B_m\bar{m}}{2}\sum_{j=1}^L\left(\frac{\bar{m}}{m_j}\right)^3 \right] \theta^4 + \ldots
\end{equation}
From this expression, we extract the final results for the topological susceptibility and for the second cumulant \cite{Smilga-book,MC2009,GM2015,GHVV2016}:
\begin{equation}\label{Eff ch Lagr: topological susceptibility for L>2}
\chi=\frac{F_\pi B_m\bar{m}}{2}=\frac{F_\pi B_m}{2}\left(\sum\limits_{j=1}^L \frac{1}{m_j}\right)^{-1} ,
\end{equation}
\begin{equation}\label{Eff ch Lagr: second cumulant for L>2}
c_4=-\frac{F_\pi B_m\bar{m}}{2}\sum\limits_{j=1}^L\left(\frac{\bar{m}}{m_j}\right)^3 = -\frac{F_\pi B_m}{2}\left(\sum\limits_{j=1}^L \frac{1}{m_j}\right)^{-4}\sum\limits_{j=1}^L\frac{1}{m_j^3} .
\end{equation}
These expressions correctly reduce to \eqref{Eff ch Lagr: topological susceptibility for L=2}-\eqref{Eff ch Lagr: second cumulant for L=2} if the number of light flavours considered is set to $L=2$.
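The general-$L$ expressions can also be cross-checked numerically without the Taylor expansion. At the minimum with the constraint $\sum_j \phi_j = \theta$, the Lagrange condition reads $m_i \sin\phi_i = \lambda$ with a common multiplier $\lambda$, which can be solved by bisection. The sketch below (our check, with arbitrary placeholder values for $F_\pi$, $B_m$ and the masses) compares the curvature of $V_{min}(\theta)$ at $\theta=0$ with $\chi = F_\pi B_m \bar{m}/2$ for $L=3$.

```python
import math

# Arbitrary placeholder parameters (NOT fitted or physical values).
F_pi, B_m = 0.092, 5.0
masses = [0.002, 0.005, 0.1]            # hierarchical, "u, d, s"-like
mbar = 1.0 / sum(1.0/m for m in masses)

def V_min(theta):
    """Minimize V = -(F_pi B_m / 2) sum_j m_j cos(phi_j) with sum_j phi_j = theta,
    via bisection on the Lagrange condition m_j sin(phi_j) = lam."""
    lo, hi = -min(masses), min(masses)   # keeps |lam/m_j| <= 1 for every j
    for _ in range(200):
        lam = 0.5*(lo + hi)
        if sum(math.asin(lam/m) for m in masses) < theta:
            lo = lam
        else:
            hi = lam
    lam = 0.5*(lo + hi)
    phis = [math.asin(lam/m) for m in masses]
    return -0.5*F_pi*B_m*sum(m*math.cos(p) for m, p in zip(masses, phis))

h = 1e-3
chi_num = (V_min(h) - 2*V_min(0.0) + V_min(-h)) / h**2
chi_formula = 0.5 * F_pi * B_m * mbar
```

The sum $\sum_j \arcsin(\lambda/m_j)$ is monotonic in $\lambda$, so the bisection converges to the unique small-$\theta$ minimum.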
In this respect, we also want to observe that, if one of the quark masses, say $m_L$, is much larger than the other masses $m_1,\ldots,m_{L-1}$, we can formally take the limit $m_L \to \infty$ in the expressions
\eqref{Eff ch Lagr: topological susceptibility for L>2} and
\eqref{Eff ch Lagr: second cumulant for L>2} for $\chi^{(L)}$ and $c_4^{(L)}$,
which then reduce to $\chi^{(L-1)}$ and $c_4^{(L-1)}$, respectively.
In the real-world case, for example, the mass of the \emph{strange} quark,
$m_s$, is much larger than the masses $m_u$ and $m_d$ of the \emph{up} and
\emph{down} quarks: for this reason, in Sec. 6 we shall evaluate numerically
the expressions \eqref{Eff ch Lagr: topological susceptibility for L>2} and
\eqref{Eff ch Lagr: second cumulant for L>2} both for the case $L=2$, with
only the quarks \emph{up} and \emph{down}, and for the case $L=3$, where also
the \emph{strange} quark is taken into account.
\subsection{Considerations on the results}
We recall that, if at least one quark is massless, the partition function of the theory (and thus the vacuum energy density) is independent of $\theta$: since the topological susceptibility and the second cumulant are derivatives of the vacuum energy density with respect to $\theta$, we expect both $\chi$ and $c_4$ to tend to zero when one of the quark masses is sent to zero. It is easy to check that the expressions
\eqref{Eff ch Lagr: topological susceptibility for L>2} and
\eqref{Eff ch Lagr: second cumulant for L>2} satisfy this property;
in fact, considering a certain quark mass, say $m_i$, tending to zero, we have:
\begin{equation}\label{Eff ch Lagr: chiral limit of chi and c4}
\chi \simeq \frac{F_\pi B_m m_i}{2} ~,~~~
c_4 \simeq -\frac{F_\pi B_m m_i}{2} ~,~~~{\rm for}~~ m_i \to 0 .
\end{equation}
Similarly, if we take $m_1 = \ldots = m_L \equiv m$, we find that:
\begin{equation}\label{Eff ch Lagr: chiral limit of chi and c4 - bis}
\chi \simeq \frac{F_\pi B_m m}{2L} ~,~~~
c_4 \simeq -\frac{F_\pi B_m m}{2L^3} ~,~~~{\rm for}~~ m \to 0 .
\end{equation}
The result found for the topological susceptibility $\chi$ in this limit is in agreement with what is predicted by the relevant (flavour-singlet) Ward-Takahashi identities \cite{Crewther1977-1979}.
In the next sections, we shall consider different effective Lagrangian models which include the flavour-singlet meson field and also implement the $U(1)$ axial anomaly of the fundamental theory.
As we have said in the Introduction, in the last decades there have been essentially two different ``schools of thought'' debating how to address this issue: the first assumes that the dominant fluctuations are semiclassical instantons, while the second, based upon the large-$N_c$ limit of an $SU(N_c)$ gauge theory, assumes that the dominant fluctuations are not semiclassical but quantum.
The model that we shall consider in Sec. 3 (the so-called \emph{Extended (Non-)Linear sigma model}) belongs to the first trend, while the model of Witten, Di Vecchia, Veneziano, \emph{et al.}, that we shall consider in Sec. 4, belongs to the second one.
\section{The ``Extended (Non-)Linear sigma model''}
The first effective Lagrangian model with the inclusion of the flavour-singlet meson field that we consider was originally proposed in Refs. \cite{ELSM1} to study the chiral dynamics at $T=0$, and later used in many different contexts (e.g., at non-zero temperature, around the chiral transition): in particular, 't Hooft (see Refs. \cite{ELSM2,ELSM3} and references therein) argued that it reproduces, in terms of an effective theory, the $U(1)$ axial breaking caused by instantons in the fundamental theory.
For brevity, from now on we shall refer to it as the \emph{Extended Linear sigma} ($EL_\sigma$) \emph{model}.
This model is described by the following Lagrangian:
\begin{equation}\label{'t Hooft effective Lagrangian}
\mathscr{L}(U,U^\dagger)= \mathscr{L}_0(U,U^\dagger) + \frac{B_m}{2\sqrt{2}}\mathrm{Tr}\left[\mathcal{M} U +\mathcal{M}^\dagger U^{\dagger}\right] + \mathscr{L}_I(U,U^\dagger) ,
\end{equation}
where $\mathscr{L}_0(U,U^\dagger)$ is the Lagrangian of the so-called \emph{Linear sigma model}, originally proposed in Ref. \cite{GL1960} but later elaborated on and extended:
\begin{equation}\label{Lagrangian of sigma model}
\begin{split}
\mathscr{L}_0(U,U^{\dagger})& = \frac{1}{2}\mathrm{Tr} [\partial_\mu U \partial^\mu U^{\dagger}] - V_0(U,U^{\dagger}) ,\\
V_0(U,U^{\dagger})&=\frac{1}{4}\lambda_\pi^2\mathrm{Tr} [( UU^{\dagger}-\rho_\pi \mathbf{I} )^2] + \frac{1}{4}\lambda_\pi^{'2}\left[\mathrm{Tr}(UU^\dagger)\right]^2 ,
\end{split}
\end{equation}
while $\mathscr{L}_I(U,U^\dagger)$ is the term which is claimed to describe, in terms of the effective variables, the $2L$-fermion interaction vertex generated by instantons. Its form is:
\begin{equation}\label{instanton term in the Lagrangian}
\mathscr{L}_I(U,U^\dagger) = \kappa (\det U + \det U^\dagger ) ,
\end{equation}
where $\kappa$ is a constant which (according to 't Hooft) is expected to be proportional to the typical instanton factor $e^{-8\pi^2/g^2}$ \cite{tHooft1976}.
In this model, the mesonic effective fields are represented by an $L\times L$ complex matrix $U_{ij}$ which can be written, in terms of the quark fields, as:
\begin{equation}\label{Fundamental field U in terms of quark fields}
U_{ij}\sim \Overline[2]{q}_j \left(\frac{1+\gamma_5}{2}\right) q_i = \Overline[2]{q}_{jR}q_{iL} ,
\end{equation}
up to a multiplicative constant.
Under a chiral transformation \eqref{transformation SU(L)_L x SU(L)_R x U(1)_A}
the field $U$ transforms as:
\begin{equation}\label{chiral U transformation}
U \rightarrow \tilde{V}_L U \tilde{V}_R^\dagger ,
\end{equation}
and, as a consequence, the determinant of the field $U$ varies as:
\begin{equation}\label{chiral variation of detU}
\det U \rightarrow \det (\tilde{V}_L) \det (\tilde{V}_R)^* \det U .
\end{equation}
Therefore, the term \eqref{instanton term in the Lagrangian} is invariant under $SU(L)_L\otimes SU(L)_R \otimes U(1)_V$, while under a $U(1)_A$ transformation, $U \to e^{i2\alpha} U$, it varies as:
\begin{equation}\label{axial transformation of 't Hooft term}
\kappa (\det U + \det U^\dagger ) \rightarrow \kappa(e^{i2L\alpha} \det U + e^{-i2L\alpha} \det U^\dagger) .
\end{equation}
When using this model in our work, we have found it more convenient to put the mass matrix in the real diagonal form $M=\diag (m_1,\ldots , m_L)$, by performing a $U(1)_A$ rotation of the field $U$ with $\alpha=-\frac{\theta}{2L}$, that is:
\begin{equation}\label{theta rotation of U to make the mass term real}
U\rightarrow e^{-i\theta /L}\, U .
\end{equation}
After this rotation, the Lagrangian \eqref{'t Hooft effective Lagrangian} is modified as:
\begin{equation}\label{'t Hooft effective Lagrangian after rotation}
\mathscr{L}(U,U^\dagger) = \mathscr{L}_0(U,U^\dagger) + \frac{B_m}{2\sqrt{2}}\mathrm{Tr}\left[M(U + U^{\dagger})\right] + \kappa(e^{-i\theta}\det U + e^{i\theta} \det U^\dagger ) .
\end{equation}
For what concerns the potential $V_0(U,U^\dagger)$ appearing in Eq.
\eqref{Lagrangian of sigma model}, we remind that the parameter $\rho_\pi$
is responsible for the fate of the chiral symmetry $SU(L)_L \otimes SU(L)_R$.
In particular, if (as it happens at $T=0$) $\rho_\pi>0$, then the vacuum
expectation value $\Overline[2]{U}$ of the mesonic field $U$ (i.e., the value
of $U$ for which the potential is at the minimum) is
(even in the chiral limit $M=0$) different from zero and of the form
$\Overline[2]{U}|_{\rho_\pi>0} = v\,\mathbf{I}$,
meaning that the chiral symmetry is spontaneously broken down to the vectorial
$SU(L)_V$ subgroup.
If we are interested in describing only the low-energy dynamics of the effective pseudoscalar degrees of freedom (that is, of the Goldstone [or \emph{would-be-}Goldstone] bosons), we can decouple the massive scalar fields by letting $\lambda_\pi^2\rightarrow \infty$: in this way we implement the \emph{static limit} for the scalar fields, giving them infinite mass. In this limit, looking at the potential term in \eqref{Lagrangian of sigma model}, we are enforcing the constraint $UU^\dagger=\rho_\pi \, \mathbf{I} \equiv\frac{F_\pi^2}{2} \, \mathbf{I}$, which implies $\mathrm{Tr} (UU^\dagger )=const.$: the term proportional to $\lambda_\pi^{'2}$ is therefore just an irrelevant constant, which can be dropped.
So, we shall neglect the scalar degrees of freedom and consider:
\begin{equation}\label{U form for T<Tc}
U=\frac{F_\pi}{\sqrt{2}}\, U' ~,~~~ U' \in U(L) .
\end{equation}
In this way, the Lagrangian of the model reduces to:
\begin{equation}\label{extended non-linear sigma model}
\mathscr{L}=\frac{1}{2}\mathrm{Tr} \left[\partial_\mu U \partial^\mu U^\dagger\right]
- V(U, U^\dagger) ,
\end{equation}
where the potential $V$ is (apart from a trivial constant):
\begin{equation}\label{ELsm: potential}
V(U,U^\dagger)=-\frac{B_m}{2\sqrt{2}}\mathrm{Tr}\left[M(U + U^{\dagger})\right] -\kappa(e^{-i\theta}\det U + e^{i\theta} \det U^\dagger ) .
\end{equation}
For brevity, from now on we shall refer to it as the \emph{Extended Non-Linear sigma} ($ENL_\sigma$) \emph{model}.
Setting $M$ in the usual diagonal form and $U$ as in \eqref{Diagonal form of U}
(but without the constraint \eqref{Constraint of the determinant of U} since
now $U'$ belongs to $U(L)$), we find:
\begin{equation}\label{ELsm: explicit potential}
V(\vec{\alpha})=-\frac{F_\pi B_m}{2}\sum\limits_{j=1}^L m_j \cos\alpha_j - 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L \cos\left(\theta -\sum\limits_{j=1}^L \alpha_j\right) .
\end{equation}
The minimization equation is, therefore:
\begin{equation}\label{Minimization equation in ELsm}
\frac{\partial V(\vec{\alpha})}{\partial \alpha_i}= \frac{F_\pi B_m}{2}m_i \sin\alpha_i -2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L \sin\left(\theta -\sum\limits_{j=1}^L \alpha_j\right) = 0 .
\end{equation}
Again, as in the previous section, if we set $\theta=0$ the solution of the equation is $\alpha_j=0$ $\forall j$; we can thus consider both $\theta\ll 1$ and $\alpha_j\ll 1$ $\forall j$. Moreover, from \eqref{ELsm: explicit potential} we see that the change $\theta \rightarrow -\theta$ is equivalent to the change $\alpha_j \rightarrow -\alpha_j$ $\forall j$: we can therefore expand the phases $\alpha_j$ in powers of $\theta$, as in the previous section, keeping only the odd-power terms. So, we set:
\begin{equation}\label{ELsm: expansion in theta of the phases alpha}
\alpha_i=A_i \theta + C_i \theta^3 +\ldots ,
\end{equation}
where the coefficients $A_i$ and $C_i$ have to be determined from the minimization condition. Inserting \eqref{ELsm: expansion in theta of the phases alpha} in \eqref{Minimization equation in ELsm} and expanding up to $\theta^3$, we have:
\begin{equation}\label{ELsm: expansion in theta of the explicit potential}
\begin{aligned}
\frac{\partial V(\vec{\alpha})}{\partial \alpha_i} & =\left[\frac{F_\pi B_m m_i}{2}A_i - 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L\left(1-\sum_j A_j\right)\right] \theta \\
& + \left[\frac{F_\pi B_m m_i}{2} \left( C_i -\frac{1}{6}A_i^3 \right) + 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L \sum_j C_j \right. \\
&\left. + 2\kappa \left(\frac{F_\pi}{\sqrt{2}}\right)^L \frac{1}{6}\left(1-\sum_j A_j\right)^3\right]\theta^3 +\ldots= 0 .
\end{aligned}
\end{equation}
Requiring that these equalities are satisfied order by order in $\theta$,
we derive the following expressions for the coefficients $A_i$ and $C_i$:
\begin{equation}\label{ELsm: linear order coefficient}
A_i= \frac{2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L}{\frac{F_\pi B_m \bar{m}}{2} + 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L} \,\frac{\bar{m}}{m_i} ,
\end{equation}
\begin{equation}\label{ELsm: cubic order coefficient}
\begin{aligned}
C_i=& \frac{1}{6} \frac{2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L}{\left(\frac{F_\pi B_m \bar{m}}{2} + 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L\right)^4} \,\frac{\bar{m}}{m_i} \\
\times & \left\{\frac{F_\pi B_m \bar{m}}{2}\left[\left(2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L\right)^2\left(\frac{\bar{m}}{m_i}\right)^2-\left(\frac{F_\pi B_m \bar{m}}{2}\right)^2\right] \right.\\ + & \left. \left(2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L\right)^3\left[\left(\frac{\bar{m}}{m_i}\right)^2-\sum_j\left(\frac{\bar{m}}{m_j}\right)^3\right] \right\} ,
\end{aligned}
\end{equation}
with $\bar{m}$ defined in Eq. \eqref{m-bar}.
Substituting the form \eqref{ELsm: expansion in theta of the phases alpha} in \eqref{ELsm: explicit potential} and expanding up to the order $\theta^4$, we find:
\begin{equation}\label{ELsm: theta dependence of the potential}
\begin{aligned}
V(\theta) & = const. + \frac{1}{2}\left[\frac{F_\pi B_m}{2}\sum_j m_j A_j^2 + 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L (1-\sum_j A_j)^2 \right] \theta^2 \\
& + \frac{1}{24} \left[24\frac{F_\pi B_m}{2}\sum_j m_j A_j C_j - \frac{F_\pi B_m}{2}\sum_j m_j A_j^4 \right. \\
& - \left. 48\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L (1-\sum_j A_j)\sum_j C_j - 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L(1-\sum_j A_j)^4\right] \theta^4 + \ldots
\end{aligned}
\end{equation}
Finally, substituting the relations \eqref{ELsm: linear order coefficient} and \eqref{ELsm: cubic order coefficient} into \eqref{ELsm: theta dependence of the potential}, we can directly read, inside the square brackets, the expressions of the topological susceptibility and of the second cumulant. We report here the final results:
\begin{equation}\label{ELsm: topological susceptibility}
\chi=\frac{F_\pi B_m \bar{m}}{2} \frac{2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L}{\frac{F_\pi B_m \bar{m}}{2} + 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L} ~,
\end{equation}
\begin{equation}\label{ELsm: second cumulant}
\begin{aligned}
c_4=&-\frac{F_\pi B_m \bar{m}}{2} \frac{2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L}{\left(\frac{F_\pi B_m \bar{m}}{2} + 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L\right)^4} \\ &\times \left[\left(2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L\right)^3\sum_j\left(\frac{\bar{m}}{m_j}\right)^3 + \left(\frac{F_\pi B_m \bar{m}}{2}\right)^3\right] .
\end{aligned}
\end{equation}
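As in the previous section, these $ENL_\sigma$ results can be cross-checked numerically (our check, with arbitrary placeholder parameters): the minimization equation $(F_\pi B_m/2)\, m_i \sin\alpha_i = G \sin(\theta - \sum_j \alpha_j)$, with $G \equiv 2\kappa(F_\pi/\sqrt{2})^L$, can be solved by bisection in $\psi \equiv \theta - \sum_j \alpha_j$, and the curvature of $V_{min}(\theta)$ at $\theta=0$ compared with Eq. \eqref{ELsm: topological susceptibility}.

```python
import math

# Arbitrary placeholder parameters (NOT fitted or physical values).
F_pi, B_m, kappa, L = 0.092, 5.0, 1.0, 3
masses = [0.002, 0.005, 0.1]
c = 0.5 * F_pi * B_m
G = 2.0 * kappa * (F_pi / math.sqrt(2.0))**L
mbar = 1.0 / sum(1.0/m for m in masses)

def V_min(theta):
    """Solve c m_i sin(alpha_i) = G sin(psi) with psi = theta - sum_j alpha_j,
    by bisection on psi + sum_i alpha_i(psi) = theta (monotonic in psi)."""
    def alphas(psi):
        return [math.asin(G * math.sin(psi) / (c * m)) for m in masses]
    lo, hi = min(0.0, theta), max(0.0, theta)
    for _ in range(200):
        psi = 0.5*(lo + hi)
        if psi + sum(alphas(psi)) < theta:
            lo = psi
        else:
            hi = psi
    psi = 0.5*(lo + hi)
    return (-c * sum(m*math.cos(a) for m, a in zip(masses, alphas(psi)))
            - G * math.cos(psi))

h = 1e-3
chi_num = (V_min(h) - 2*V_min(0.0) + V_min(-h)) / h**2
chi_formula = c * mbar * G / (c * mbar + G)
```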
\subsection{Considerations on the results}
First of all, we notice that, if we take the (formal) limit $\kappa\rightarrow\infty$, the expressions for the topological susceptibility and for the second cumulant obtained in the $ENL_\sigma$ model reduce precisely to those
found in the previous section using the Chiral Effective Lagrangian.
To explain this fact, it is sufficient to observe that the flavour-singlet squared mass receives a contribution from the term proportional to $\kappa$ in the Lagrangian: indeed, writing $U = ({F_\pi}/{\sqrt{2}})U'$ with $U' = e^{i\sqrt{\frac{2}{L}}\frac{S_\pi}{F_\pi}} \tilde{U}'$, $\tilde{U}' \in SU(L)$ [see Eq. \eqref{U form for T<Tc}], the term \eqref{instanton term in the Lagrangian} gives
$M_{S_\pi}^2 = \frac{2L}{F_\pi^2}\, 2\kappa \left(\frac{F_\pi}{\sqrt{2}}\right)^L$
in the chiral limit of zero quark masses.
So, implementing the limit $\kappa\rightarrow\infty$, we are sending the flavour-singlet mass to infinity, decoupling it from the theory, which thus reduces to the Chiral Effective Lagrangian discussed in the previous section.
We also remark that (assuming that the parameter $\kappa$ is independent of
the quark masses or, at least, that it has a finite non-vanishing value in the
chiral limit) the expressions \eqref{ELsm: topological susceptibility} and
\eqref{ELsm: second cumulant} have the right behaviour
\eqref{Eff ch Lagr: chiral limit of chi and c4}, in the chiral limit
$m_i \to 0$, or \eqref{Eff ch Lagr: chiral limit of chi and c4 - bis},
in the chiral limit $m_1 = \ldots = m_L \equiv m \to 0$, as predicted
by the relevant (flavour-singlet) Ward-Takahashi identities \cite{Crewther1977-1979}.
If, on the contrary, we take the \emph{infinite quark-mass limit}, by sending all $m_j\rightarrow\infty$ (which results in $\bar{m}\rightarrow\infty$),\footnote{This limit is clearly a bit stretched since, from the beginning, we have based all the discussion on the existence of $L$ light quarks. Nevertheless, it is interesting to formally investigate the trend of the results also in this limit.}
we find that (assuming, again, that the parameter $\kappa$ is independent of
the quark masses or, at least, that it has a finite, non-divergent value in the
infinite quark-mass limit) the expressions \eqref{ELsm: topological susceptibility} and \eqref{ELsm: second cumulant} become:
\begin{equation}\label{ELsm: pure-gauge limit of chi and c4}
\chi \to 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L ~,~~~
c_4 \to - 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L .
\end{equation}
In this way, we are implementing the static limit for the quarks, so that the theory should reduce to a pure Yang-Mills one.
Indeed, the results \eqref{ELsm: pure-gauge limit of chi and c4} are in agreement with the $\theta$ dependence of the vacuum energy density expected in a pure-gauge theory as derived in an \emph{instanton-gas model} \cite{CDG1978}.
In fact, in this case one finds that:
\begin{equation}\label{theta dependence of vacuum energy in presence of instantons}
\epsilon_{vac}(\theta)\simeq \: const. - K \cos\theta
= const. + \frac{1}{2}K\theta^2 - \frac{1}{24}K\theta^4 + \ldots ,
\end{equation}
that, by virtue of Eq. \eqref{susceptibility and c4 in vacuum energy density},
leads to the relation $\chi = - c_4 = K$, which, taking
$K = 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L$, is satisfied by the
results \eqref{ELsm: pure-gauge limit of chi and c4}.
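The relation $\chi = -c_4 = K$ for the instanton-gas form of $\epsilon_{vac}(\theta)$ follows from the Taylor expansion of $-K\cos\theta$; a minimal numerical confirmation (ours, with $K$ an arbitrary placeholder) via central finite differences is:

```python
import math

K = 0.7  # arbitrary placeholder value

def eps(theta):
    """Instanton-gas vacuum energy density, up to a constant."""
    return -K * math.cos(theta)

h = 0.05
# Second and fourth central differences at theta = 0.
chi_num = (eps(h) - 2*eps(0.0) + eps(-h)) / h**2
c4_num = (eps(2*h) - 4*eps(h) + 6*eps(0.0) - 4*eps(-h) + eps(-2*h)) / h**4
```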
\section{The effective Lagrangian model of Witten, Di Vecchia, Veneziano, \emph{et al.}}
A different chiral effective Lagrangian, with the inclusion of the flavour-singlet meson field, which implements the $U(1)$ axial anomaly of the fundamental theory, was proposed by Witten, Di Vecchia, Veneziano, \emph{et al.} \cite{WDV1,WDV2,WDV3}: for brevity, in the following we shall refer to this model as the \emph{WDV model}. Even though this model was derived and fully justified in the framework of the $1/N_c$ expansion (i.e., in the limit $N_c\rightarrow\infty$), the numerical results obtained using the $WDV$ model with $N_c=3$ are quite consistent with the real-world (experimental) values. This model is described by the Lagrangian (see Ref. \cite{WDV2} for a complete derivation):
\begin{equation}\label{Witten effective Lagrangian with Q}
\begin{split}
\mathscr{L}(U,U^\dagger,Q) &= \mathscr{L}_0(U,U^\dagger)
+ \frac{B_m}{2\sqrt{2}}\mathrm{Tr}[M(U +U^{\dagger})] \\
& + \frac{i}{2}Q(x) \mathrm{Tr}\left[\log U -\log U^\dagger\right]
+ \frac{1}{2A}Q^2(x) + \theta Q(x) ,
\end{split}
\end{equation}
where $\mathscr{L}_0(U,U^\dagger)$ is the Lagrangian of the
\emph{Linear sigma model}, reported in Eq. \eqref{Lagrangian of sigma model};
$Q(x)$ is the topological charge density and is introduced here as an \emph{auxiliary} field, while $A$ is a parameter which (at least in the large-$N_c$ limit) can be identified with the topological susceptibility in the pure Yang-Mills theory
($A = -i \int d^4 x \langle T Q(x) Q(0) \rangle\vert_{YM}$).
One immediately sees that the ``anomalous'' term $\mathscr{L}_{anom} \equiv \frac{i}{2}Q(x) \mathrm{Tr}\left[\log U -\log U^\dagger\right]$ in Eq. \eqref{Witten effective Lagrangian with Q} is invariant under $SU(L)_L\otimes SU(L)_R \otimes U(1)_V$, while under a $U(1)_A$ transformation, $U \to e^{i2\alpha} U$, it transforms as:
\begin{equation}\label{effective Lagrangian transformation under axial U(1)}
\mathscr{L}_{anom} \rightarrow \mathscr{L}_{anom} - 2L\alpha Q ,
\end{equation}
so correctly reproducing the $U(1)$ axial anomaly of the fundamental theory.\footnote{We recall here the criticism by Crewther (see also the third Ref. \cite{Crewther1977-1979}), Witten \cite{WDV1}, Di Vecchia and Veneziano \cite{WDV2} to the ``anomalous'' term \eqref{instanton term in the Lagrangian} of the $EL_\sigma$ model, which apparently does not correctly reproduce the $U(1)$ axial anomaly of the fundamental theory and, moreover, is inconsistent with the $1/N_c$ expansion.}
Depending on what one is investigating, it may be convenient to integrate out the auxiliary field $Q(x)$ using its equation of motion, i.e.,
\begin{equation}\label{Q equation of motion}
Q(x) = -A\left[\theta + \frac{i}{2}\mathrm{Tr}\left(\log U - \log U^\dagger\right)\right] .
\end{equation}
After the substitution, we are left with:
\begin{equation}\label{Witten effective Lagrangian after having integrated out Q}
\mathscr{L}(U,U^\dagger) = \mathscr{L}_0(U,U^\dagger)
+ \frac{B_m}{2\sqrt{2}}\mathrm{Tr}[M(U +U^{\dagger})]
- \frac{A}{2}\left[\theta + \frac{i}{2}\mathrm{Tr}(\log U - \log U^\dagger)\right]^2 .
\end{equation}
As we have done in the previous section for the $EL_\sigma$ model, we shall neglect the scalar degrees of freedom (retaining only the low-energy dynamics of the effective pseudoscalar degrees of freedom),
by taking the formal limit $\lambda_\pi^2\rightarrow \infty$ (i.e., by taking
the limit of infinite mass for the scalar fields), which, as we have shown,
implies the constraint \eqref{U form for T<Tc} for the matrix field $U$.
In this way, the Lagrangian of the model reduces to:
\begin{equation}\label{WDV model}
\mathscr{L}=\frac{1}{2}\mathrm{Tr} \left[\partial_\mu U \partial^\mu U^\dagger\right]
- V(U, U^\dagger) ,
\end{equation}
where the potential $V$ is (apart from a trivial constant):
\begin{equation}\label{WDV: potential}
V(U, U^\dagger) = -\frac{B_m}{2\sqrt{2}}\mathrm{Tr}\left[M(U+U^\dagger)\right]
+ \frac{A}{2}\left[\theta + \frac{i}{2}\mathrm{Tr} (\log U - \log U^\dagger )\right]^2 .
\end{equation}
Setting $M$ in the usual diagonal form and $U$ as in \eqref{Diagonal form of U}
(but without the constraint \eqref{Constraint of the determinant of U}),
we find the following expression for the potential:
\begin{equation}\label{WDV: explicit potential}
V(\vec{\alpha})=-\frac{F_\pi B_m}{2}\sum\limits_{j=1}^L m_j \cos\alpha_j + \frac{A}{2}\left(\theta - \sum\limits_{j=1}^L \alpha_j\right)^2 .
\end{equation}
Therefore, the minimization equation is:
\begin{equation}\label{Minimization equation in WDV}
\frac{\partial V(\vec{\alpha})}{\partial \alpha_i} = \frac{F_\pi B_m}{2} m_i \sin \alpha_i - A\left(\theta - \sum_j \alpha_j\right) = 0 .
\end{equation}
As usual, since we are interested in the limit of small $\theta$ and, therefore,
also of small phases $\alpha_i$ (in fact, $\theta=0$ implies that $\alpha_i=0$
$\forall i$), we can Taylor-expand the sine in Eq. \eqref{Minimization equation in WDV} up to the third order in the phases:
\begin{equation}\label{WDV: potential up to the third order in the fields}
\frac{\partial V(\vec{\alpha})}{\partial \alpha_i} \simeq \frac{F_\pi B_m}{2} m_i \left(\alpha_i - \frac{\alpha_i^3}{6} + \ldots \right) - A\left(\theta - \sum_j \alpha_j\right)=0 ,
\end{equation}
and, moreover, observing that in \eqref{WDV: explicit potential} the change $\theta\rightarrow -\theta$ corresponds to the change $\alpha_j \rightarrow -\alpha_j$ $\forall j$, we can use for each phase $\alpha_i$ the following expansion in $\theta$:
\begin{equation}\label{WDV: guess for the alphas}
\alpha_i = A_i \theta + C_i \theta^3 + \ldots
\end{equation}
Inserting the expressions \eqref{WDV: guess for the alphas} into Eq.
\eqref{WDV: potential up to the third order in the fields}, we find that:
\begin{equation}\label{WDV: expansion in theta of the explicit potential}
\begin{aligned}
\frac{\partial V(\vec{\alpha})}{\partial \alpha_i}(\theta) & =\left[\frac{F_\pi B_m m_i}{2}A_i - A \left(1-\sum_j A_j\right)\right] \theta \\
& + \left[\frac{F_\pi B_m m_i}{2} \left(C_i - \frac{1}{6} A_i^3\right) + A \sum_j C_j \right]\theta^3 +\ldots= 0 .
\end{aligned}
\end{equation}
Requiring that these equalities are satisfied order by order in $\theta$, we find the following expressions for the coefficients $A_i$ and $C_i$:
\begin{equation}\label{WDV: linear order coefficient of the alphas}
A_i = \frac{A}{\frac{F_\pi B_m \bar{m}}{2} + A} \frac{\bar{m}}{m_i} ,
\end{equation}
\begin{equation}\label{WDV: cubic order coefficient of the alphas}
C_i = \frac{1}{6}\left(\frac{A}{\frac{F_\pi B_m \bar{m}}{2} + A}\right)^3 \frac{\bar{m}}{m_i} \left[ \left(\frac{\bar{m}}{m_i}\right)^2 - \frac{A}{\frac{F_\pi B_m \bar{m}}{2} + A} \sum_j\left(\frac{\bar{m}}{m_j}\right)^3 \right] ,
\end{equation}
with $\bar{m}$ defined in Eq. \eqref{m-bar}.
Finally, Taylor-expanding the potential \eqref{WDV: explicit potential} up to the fourth order in the phases,
\begin{equation}\label{WDV: Taylor expansion of the potential up to the fourth order in the alphas}
V(\vec{\alpha})\simeq const. + \frac{F_\pi B_m}{4}\sum\limits_{j=1}^L m_j\left(\bar{\alpha}_j^2 - \frac{\bar{\alpha}_j^4}{12} + \ldots \right) + \frac{A}{2}\left(\theta - \sum\limits_{j=1}^L \bar{\alpha}_j\right)^2 ,
\end{equation}
and inserting the form \eqref{WDV: guess for the alphas}, with the expressions
\eqref{WDV: linear order coefficient of the alphas} and
\eqref{WDV: cubic order coefficient of the alphas} for the coefficients
$A_i$ and $C_i$ into Eq. \eqref{WDV: Taylor expansion of the potential up to the fourth order in the alphas}, we find:
\begin{equation}\label{WDV: theta dependence of the potential}
\begin{aligned}
V(\theta)=& \: const. + \frac{1}{2}\chi \theta^2 +
\frac{1}{24} c_4 \theta^4 + \ldots ,
\end{aligned}
\end{equation}
with the following expressions for the topological susceptibility $\chi$
and the second cumulant $c_4$ in this model:
\begin{equation}\label{WDV: topological susceptibility}
\chi=\frac{F_\pi B_m \bar{m}}{2}\frac{A}{\frac{F_\pi B_m \bar{m}}{2} + A} ,
\end{equation}
\begin{equation}\label{WDV: second cumulant}
c_4 = -\frac{F_\pi B_m \bar{m}}{2}\left(\frac{A}{\frac{F_\pi B_m \bar{m}}{2} + A}\right)^4 \sum\limits_{j=1}^L \left(\frac{\bar{m}}{m_j}\right)^3 .
\end{equation}
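The expressions \eqref{WDV: topological susceptibility}-\eqref{WDV: second cumulant} can also be checked without any Taylor expansion: for given parameter values one can minimize the potential \eqref{WDV: explicit potential} numerically at fixed $\theta$ and extract $\chi$ and $c_4$ from finite differences of the vacuum energy. A minimal sketch, with arbitrary illustrative parameters and again assuming $1/\bar{m}=\sum_j 1/m_j$ for Eq. \eqref{m-bar}:

```python
import math

# Illustrative values: k_i = (F_pi*B_m/2)*m_i and the anomaly parameter A.
FB2, A = 0.8, 0.7
m = [1.0, 2.0, 5.0]
k = [FB2 * mi for mi in m]

def vacuum_energy(theta):
    """Minimize V = -sum_i k_i*cos(a_i) + (A/2)*(theta - sum_i a_i)^2.
    At the minimum k_i*sin(a_i) = t, with t = A*(theta - sum_i a_i)."""
    def f(t):  # monotonically increasing in t
        return t / A + sum(math.asin(t / ki) for ki in k) - theta
    lo, hi = -0.999 * min(k), 0.999 * min(k)
    for _ in range(200):  # bisection for the root of f
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    return -sum(ki * math.cos(math.asin(t / ki)) for ki in k) + t * t / (2.0 * A)

# Extract chi and c4 from E(theta) ~ E0 + chi*theta^2/2 + c4*theta^4/24
h = 0.01
d1 = vacuum_energy(h) - vacuum_energy(0.0)
d2 = vacuum_energy(2.0 * h) - vacuum_energy(0.0)
chi_num = (16.0 * d1 - d2) / (6.0 * h ** 2)
c4_num = 2.0 * (d2 - 4.0 * d1) / h ** 4

# Analytical predictions of Eqs. (WDV: topological susceptibility/second cumulant)
mbar = 1.0 / sum(1.0 / mi for mi in m)   # assumed form of Eq. (m-bar)
X = FB2 * mbar
chi_an = X * A / (X + A)
c4_an = -X * (A / (X + A)) ** 4 * sum((mbar / mi) ** 3 for mi in m)
```

The finite-difference values agree with the analytical formulas to the accuracy of the stencil.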
\subsection{Considerations on the results}
First, we notice that the result \eqref{WDV: topological susceptibility} was already known in the literature \cite{WDV2}, but it was obtained there by studying the two-point correlation function of the topological charge density operator $Q(x)$ rather than by means of the $\theta$ expansion of the vacuum energy density; the result \eqref{WDV: second cumulant}, instead, has been derived for the first time in this paper.
If we consider the (formal) limit $A\rightarrow\infty$, the results \eqref{WDV: topological susceptibility}-\eqref{WDV: second cumulant} obtained in the $WDV$ model reduce precisely to those found in the framework of the Chiral Effective Lagrangian in Sec. 2.
The reason is similar to the one discussed in the previous section for the $ENL_\sigma$ model: since the anomalous term proportional to $A$ in the Lagrangian \eqref{WDV model}-\eqref{WDV: potential} is quadratic in the flavour-singlet field
[using $U = ({F_\pi}/{\sqrt{2}})U'$ with $U' = e^{i\sqrt{\frac{2}{L}}\frac{S_\pi}{F_\pi}} \tilde{U}'$, $\tilde{U}' \in SU(L)$, see Eq. \eqref{U form for T<Tc}, it gives $M_{S_\pi}^2 = \frac{2LA}{F_\pi^2}$ in the chiral limit of zero quark masses \ldots], this limit corresponds to sending the flavour-singlet mass to infinity, thus decoupling it from the theory, which then reduces to the $SU(L)$ Chiral Effective Lagrangian discussed in Sec. 2.
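This decoupling can be made quantitative with a quick numerical sketch. The parameter values below are arbitrary and purely illustrative, and $\bar{m}$ is assumed to be the harmonic combination $1/\bar{m}=\sum_j 1/m_j$ of Eq. \eqref{m-bar}: as $A$ grows, Eqs. \eqref{WDV: topological susceptibility}-\eqref{WDV: second cumulant} approach the Chiral-Effective-Lagrangian values $\chi \to F_\pi B_m \bar{m}/2$ and $c_4 \to -(F_\pi B_m \bar{m}/2)\sum_j(\bar{m}/m_j)^3$.

```python
# Illustrative values: FB2 stands for F_pi*B_m/2, m are quark masses.
FB2 = 0.8
m = [1.0, 2.0, 5.0]
mbar = 1.0 / sum(1.0 / mj for mj in m)   # assumed form of Eq. (m-bar)
X = FB2 * mbar                           # = F_pi*B_m*mbar/2
S = sum((mbar / mj) ** 3 for mj in m)

def chi_wdv(A):
    # Eq. (WDV: topological susceptibility)
    return X * A / (X + A)

def c4_wdv(A):
    # Eq. (WDV: second cumulant)
    return -X * (A / (X + A)) ** 4 * S
```

For large $A$ the two functions converge to the Sec. 2 values $X$ and $-X S$, respectively.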
For what concerns the topological susceptibility, we also observe that the result \eqref{WDV: topological susceptibility} coincides with the result \eqref{ELsm: topological susceptibility} found in the $ENL_\sigma$ model provided that the following substitution is implemented:
\begin{equation}\label{Substitution to go from WDV to ELsm and vice versa}
A \longleftrightarrow 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L .
\end{equation}
This correspondence also applies to the expression for the flavour-singlet squared mass $M_{S_\pi}^2$.
Remarkably, this is not so for the second cumulant: indeed, even after such substitution, the result \eqref{WDV: second cumulant} does not turn into \eqref{ELsm: second cumulant}. This is due to the difference between the anomalous terms in Eqs. \eqref{WDV: potential}-\eqref{WDV: explicit potential} and \eqref{ELsm: potential}-\eqref{ELsm: explicit potential}:
while the anomalous term in Eqs. \eqref{WDV: potential}-\eqref{WDV: explicit potential} is purely quadratic in the combination $\theta - \frac{\sqrt{2L}}{F_\pi}S_\pi$ (or: $\theta - \sum_j \alpha_j$), the anomalous term in Eqs.
\eqref{ELsm: potential}-\eqref{ELsm: explicit potential} is the cosine of such a combination.
We also remark that the expressions \eqref{WDV: topological susceptibility} and
\eqref{WDV: second cumulant} have the right behaviour
\eqref{Eff ch Lagr: chiral limit of chi and c4}, in the chiral limit
$m_i \to 0$, or \eqref{Eff ch Lagr: chiral limit of chi and c4 - bis},
in the chiral limit $m_1 = \ldots = m_L \equiv m \to 0$, as predicted
by the relevant (flavour-singlet) Ward-Takahashi identities \cite{Crewther1977-1979}.
Instead, if we take the infinite quark-mass limit, sending all the quark masses $m_j\rightarrow\infty$ (so that $\bar{m}\rightarrow\infty$), we find that:
\begin{equation}\label{WDV: pure-gauge limit of chi and c4}
\chi \to A ~,~~~ c_4 \to 0 .
\end{equation}
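The limit \eqref{WDV: pure-gauge limit of chi and c4} is easily reproduced numerically: rescaling all the quark masses by a common large factor drives $\chi$ to $A$ and $c_4$ to zero. The values below are arbitrary and illustrative, with $\bar{m}$ again assumed harmonic as in Eq. \eqref{m-bar}:

```python
# Illustrative F_pi*B_m/2 and anomaly parameter A.
FB2, A = 0.8, 0.7

def chi_c4(scale):
    """chi and c4 of Eqs. (WDV: topological susceptibility/second cumulant)
    with all quark masses rescaled by 'scale'."""
    m = [scale * mj for mj in (1.0, 2.0, 5.0)]
    mbar = 1.0 / sum(1.0 / mj for mj in m)   # assumed form of Eq. (m-bar)
    X = FB2 * mbar
    S = sum((mbar / mj) ** 3 for mj in m)
    return X * A / (X + A), -X * (A / (X + A)) ** 4 * S
```

Note that the ratios $\bar{m}/m_j$ are scale-invariant, so $c_4 \sim -A^4 S/(F_\pi B_m \bar{m}/2)^3$ is suppressed by three powers of the heavy mass scale.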
As we have already observed in the previous section, this limit is meant to ``freeze'' the dynamics of the quarks, reducing the model to a pure Yang-Mills one. So, we expect that in this limit the topological susceptibility coincides with that of the pure-gauge theory: this is exactly what happens in our case.
The second cumulant, instead, vanishes in this infinite quark-mass limit. This is due to the fact that the $WDV$ model is built considering only the leading terms in the $1/N_c$ expansion and, so, while it contains the term
$\frac{1}{2A}Q^2$ [see Eq. \eqref{Witten effective Lagrangian with Q}],
it does not also contain a term proportional to $Q^4$, which would contribute to the pure-gauge value of the second cumulant $c_4$:
indeed, this kind of term is of next-to-leading order in $1/N_c$ (for a detailed discussion on the next-to-leading terms, see Ref. \cite{DNPV1981}).
\section{An ``Interpolating model'' with the inclusion of a U(1) axial condensate}
In this section, we shall consider another effective Lagrangian model (which
was originally proposed in Refs. \cite{EM1994} and elaborated on in Refs.
\cite{MM2003,EM2011,MM2013}), which is in a sense in-between the $EL_\sigma$ model and the $WDV$ model: for this reason we shall call it the \emph{Interpolating model}.
Indeed, in this model the $U(1)$ axial anomaly is implemented, as in the $WDV$ model \eqref{Witten effective Lagrangian with Q}, by properly introducing the auxiliary field $Q$, so that it correctly satisfies the transformation property \eqref{effective Lagrangian transformation under axial U(1)} under the chiral group.
Moreover, it also includes an interaction term proportional to the determinant of the mesonic field $U$, which is similar to the interaction term \eqref{instanton term in the Lagrangian} in the $EL_\sigma$ model, assuming that there is another $U(1)_A$-breaking condensate (in addition to the usual quark-antiquark chiral condensate $\langle \bar{q}q \rangle$).
This extra $U(1)$ chiral condensate has the form
$C_{U(1)} = \langle {\cal O}_{U(1)} \rangle$,
where, for a theory with $L$ light quark flavors, ${\cal O}_{U(1)}$ is a
$2L$-quark local operator that has the chiral transformation properties of
\cite{tHooft1976,KM1970,Kunihiro2009}
${\cal O}_{U(1)} \sim \displaystyle{{\det_{st}}(\bar{q}_{sR}q_{tL})
+ {\det_{st}}(\bar{q}_{sL}q_{tR}) }$,
where $s,t = 1, \dots ,L$ are flavor indices. The color indices (not
explicitly indicated) are arranged in such a way that
(i) ${\cal O}_{U(1)}$ is a color singlet, and (ii)
$C_{U(1)} = \langle {\cal O}_{U(1)} \rangle$ is a \emph{genuine} $2L$-quark
condensate, i.e., it has no \emph{disconnected} part proportional to some
power of the quark-antiquark chiral condensate $\langle \bar{q} q \rangle$;
the explicit form of the condensate for the cases $L=2$ and $L=3$ is
discussed in detail in the Appendix A of Ref. \cite{EM2011} (see also Ref. \cite{DM1995}).
The effective Lagrangian of the \emph{Interpolating} model is written in terms
of the topological charge density $Q$, the mesonic field
$U_{ij} \sim \bar{q}_{jR} q_{iL}$ (up to a multiplicative constant),
and the new field variable $X \sim {\det} \left( \bar{q}_{sR} q_{tL} \right)$
(up to a multiplicative constant), associated with the $U(1)$ axial
condensate:
\begin{equation}\label{Interpolating model Lagrangian with Q}
\begin{split}
\mathscr{L}(U,U^\dagger , X,& X^\dagger , Q)
=\frac{1}{2}\mathrm{Tr} [\partial_\mu U \partial^\mu U^{\dagger}]
+ \frac{1}{2}\partial_\mu X \partial^\mu X^{\dagger} \\
&-V_0(U,U^\dagger , X,X^\dagger)
+ \frac{i}{2}\omega_1 Q(x) \mathrm{Tr}\left[\log U -\log U^\dagger\right] \\
&+ \frac{i}{2}(1-\omega_1)Q(x)\left[\log X -\log X^\dagger\right]
+ \frac{1}{2A}Q^2(x) +\theta Q(x) ,
\end{split}
\end{equation}
where
\begin{equation}\label{potential of the interpolating model}
\begin{split}
V_0(U,U^\dagger , X,X^\dagger) &= \frac{1}{4}\lambda_\pi^2\mathrm{Tr} [( UU^{\dagger}-\rho_\pi \mathbf{I} )^2] + \frac{1}{4}\lambda_\pi^{'2}\left[\mathrm{Tr}(UU^\dagger)\right]^2 \\ & +\frac{1}{4}\lambda_X^2 [XX^\dagger - \rho_X]^2 - \frac{B_m}{2\sqrt{2}}\mathrm{Tr}[M(U + U^{\dagger})] \\& - \frac{\kappa_1}{2\sqrt{2}}[X^\dagger \det U + X\det U^\dagger ] .
\end{split}
\end{equation}
Since under a chiral $U(L)_L\otimes U(L)_R$ transformation \eqref{transformation SU(L)_L x SU(L)_R x U(1)_A} the field $X$ transforms exactly as $\det U$ [see Eq. \eqref{chiral variation of detU}], i.e.,
\begin{equation}\label{trasfX}
X \rightarrow \det (\tilde{V}_L) \det (\tilde{V}_R)^* X ,
\end{equation}
[i.e., $X$ is invariant under $SU(L)_L\otimes SU(L)_R\otimes U(1)_V$,
while, under a $U(1)$ axial transformation, $X\rightarrow e^{i2L\alpha}X$],
we have that, in the chiral limit $M=0$, the effective Lagrangian
\eqref{Interpolating model Lagrangian with Q}
is invariant under $SU(L)_L\otimes SU(L)_R \otimes U(1)_V$,
while under a $U(1)$ axial transformation, it correctly transforms as
in Eq. \eqref{effective Lagrangian transformation under axial U(1)}.
As in the case of the $WDV$ model, the auxiliary field $Q(x)$ in \eqref{Interpolating model Lagrangian with Q} can be integrated out using its equation of motion:
\begin{equation}
Q(x)=-A\left\{\theta + \frac{i}{2}\left[\omega_1 \mathrm{Tr} (\log U - \log U^\dagger )+ (1-\omega_1)(\log X - \log X^\dagger) \right]\right\} .
\end{equation}
After the substitution, we obtain:
\begin{equation}\label{Interpolating model Lagrangian without Q}
\mathscr{L}(U,U^\dagger , X,X^\dagger)=\frac{1}{2}\mathrm{Tr} [\partial_\mu U \partial^\mu U^{\dagger}] + \frac{1}{2}\partial_\mu X \partial^\mu X^{\dagger}
-\tilde{V}(U,U^\dagger , X,X^\dagger) ,
\end{equation}
where
\begin{equation}\label{potential of the interpolating model after having integrated out Q}
\begin{split}
\tilde{V}(U,U^\dagger &, X,X^\dagger)=V_0(U,U^\dagger , X,X^\dagger) \\
&+\frac{A}{2}\left\{\theta + \frac{i}{2}\left[\omega_1 \mathrm{Tr} (\log U - \log U^\dagger)+ (1-\omega_1)(\log X - \log X^\dagger )\right]\right\}^2 .
\end{split}
\end{equation}
Let us now briefly focus on the interaction term between $U$ and $X$ in Eqs.
\eqref{Interpolating model Lagrangian with Q}-\eqref{potential of the interpolating model}:
\begin{equation}\label{interaction term between X e U}
\mathscr{L}_{int}=\frac{\kappa_1}{2\sqrt{2}}[X^\dagger \det U + X\det U^\dagger ] .
\end{equation}
This term has a form very similar to the ``instantonic'' term \eqref{instanton term in the Lagrangian} of the $EL_\sigma$ model, but, unlike the latter, it is invariant under the entire chiral group $U(L)_L \otimes U(L)_R$.\footnote{Assuming that the field $X$ has a non-zero vacuum expectation value $\Overline[2]{X}$ (which is the case if the parameter $\rho_X$ in the potential \eqref{potential of the interpolating model} is positive: see also Eq. \eqref{Parametrization of the field X} below\dots) and expanding $\det U = (F_\pi/\sqrt{2})^L e^{i\sqrt{2L}{S_\pi}/{F_\pi}}$ and $X=\Overline[2]{X}e^{i{S_X}/{\Overline[2]{X}}}$ in powers of the (pseudoscalar) excitations $S_\pi$ and $S_X$, one finds that $\mathscr{L}_{int}$ is quadratic at the leading order in the fields: considering for simplicity the chiral limit $M=0$ (and $\theta=0$), this term and the ``anomalous'' term (the last term in Eqs. \eqref{potential of the interpolating model after having integrated out Q} and \eqref{Interpolating: potential}) generate a squared-mass matrix for the fields $S_\pi$ and $S_X$, whose eigenstates are two different non-zero-mass singlets, called $\eta'$ and $\eta_X$ (see the original Refs. \cite{EM1994,MM2003,EM2011} for more details). This is what happens at $T=0$. Instead, at non-zero temperature, above the chiral transition, where $\Overline[2]{U}=0$ (and $U$ is thus ``linearized''), assuming that $\Overline[2]{X}$ is still different from zero (and, moreover, $\omega_1=0$; see Ref. \cite{MM2013}), one finds that, expanding in the fields:
\begin{equation}\label{Instantonic term in the Interpolating model}
\mathscr{L}_{int} = \kappa [\det U + \det U^\dagger ] + \ldots ~,~~~ \text{with:}~~ \kappa\equiv \frac{\kappa_1\Overline[2]{X}}{2\sqrt{2}} .
\end{equation}
In this case, therefore, the leading-order term in the fields has exactly the same form as the ``instantonic'' term \eqref{instanton term in the Lagrangian}: the dots in Eq. \eqref{Instantonic term in the Interpolating model} stand for
higher-order interaction terms containing also $S_X$.}
As usual, proceeding as we have done in the previous sections for the
$EL_\sigma$ model and the $WDV$ model, we shall neglect the
scalar degrees of freedom (retaining only the low-energy dynamics of the
effective pseudoscalar degrees of freedom), by taking the formal limits
$\lambda_\pi^2\rightarrow \infty$ and $\lambda_X^2\rightarrow \infty$
(i.e., by taking the limit of infinite mass for the scalar fields), which,
in addition to the constraint \eqref{U form for T<Tc} for the matrix field $U$,
also implies the analogous constraint
$XX^\dagger = \rho_X \equiv\frac{F_X^2}{2}$ for the $X$ field, i.e.,
\begin{equation}\label{Parametrization of the field X}
X=\frac{F_X}{\sqrt{2}}\, e^{i\beta} ,
\end{equation}
having introduced the decay constant $F_X$ of the field $X$, analogous to the
decay constant $F_\pi$ of the pions.
In this way, the Lagrangian of the model reduces to:
\begin{equation}\label{Interpolating model Lagrangian}
\mathscr{L}(U,U^\dagger , X,X^\dagger)=\frac{1}{2}\mathrm{Tr} [\partial_\mu U \partial^\mu U^{\dagger}] + \frac{1}{2}\partial_\mu X \partial^\mu X^{\dagger}
-V(U,U^\dagger , X,X^\dagger) ,
\end{equation}
where the potential $V$ is (apart from a trivial constant):
\begin{equation}\label{Interpolating: potential}
\begin{aligned}
V(U,U^\dagger,X,X^\dagger)&= - \frac{B_m}{2\sqrt{2}}\mathrm{Tr}[M(U+ U^{\dagger})] - \frac{\kappa_1}{2\sqrt{2}}[X^\dagger \det U + X \det U^\dagger ] \\
& +\frac{A}{2}\left\{\theta + \frac{i}{2}\left[\omega_1 \mathrm{Tr} (\log U - \log U^\dagger) + (1-\omega_1)(\log X - \log X^\dagger)\right]\right\}^2 .
\end{aligned}
\end{equation}
Setting $M$ in the usual diagonal form, $U$ as in Eq. \eqref{Diagonal form of U}
(but without the constraint \eqref{Constraint of the determinant of U}) and
the analogous parametrization \eqref{Parametrization of the field X}
for the field $X$, where the phase $\beta$ (exactly as the phases $\alpha_j$)
is constant with respect to $x$, we find the following expression for the
potential:
\begin{equation}\label{Interpolating: explicit potential}
\begin{aligned}
V(\vec{\alpha},\beta)= &-\frac{F_\pi B_m}{2}\sum\limits_{j=1}^L m_j \cos\alpha_j - c \cos\left(\beta-\sum\limits_{j=1}^L \alpha_j\right) \\
& +\frac{A}{2}\left[\omega_1\sum\limits_{j=1}^L \alpha_j + (1-\omega_1)\beta - \theta\right]^2 ,
\end{aligned}
\end{equation}
where we have defined:
\begin{equation}\label{Interpolating: definition of c}
c\equiv \kappa_1\frac{F_X}{2}\left(\frac{F_\pi}{\sqrt{2}}\right)^L .
\end{equation}
In order to find the minimum of the potential, we have to solve the following system of minimization equations:
\begin{equation}\label{Minimization equations in the interpolating model}
\frac{\partial V(\vec{\alpha}, \beta)}{\partial \alpha_i} = 0 \quad \forall i=1,\ldots,L ~,~~~
\frac{\partial V(\vec{\alpha}, \beta)}{\partial \beta} = 0 ,
\end{equation}
which, after a slight rearrangement, read as follows:
\begin{equation}\label{Minimization equations in the interpolating model rearranged}
\left\{
\begin{aligned}
&\frac{F_\pi B_m}{2}m_i\sin\alpha_i +A \left[\omega_1\sum\limits_{j=1}^L \alpha_j + (1-\omega_1)\beta - \theta\right]=0 ,\\
\\
&c\sin\left(\beta-\sum_j\alpha_j\right) + A(1-\omega_1) \left[\omega_1\sum\limits_{j=1}^L \alpha_j + (1-\omega_1)\beta - \theta\right]=0 .
\end{aligned}
\right.
\end{equation}
It is easy to check that, in the case $\theta=0$, setting $\beta=0$ and $\alpha_j=0$ $\forall j$ yields the minimum of the potential. So, if we consider the case
$\theta\ll 1$, we are allowed to use for the phases $\alpha_i$ and $\beta$
the following Taylor expansion in powers of $\theta$:
\begin{equation}\label{Interpolating: expansion in theta of all the phases}
\alpha_i=A_i \theta + B_i \theta^2 + C_i \theta^3 + \ldots ~,~~~
\beta = W \theta + Y \theta^2 + Z \theta^3 + \ldots
\end{equation}
The coefficients $A_i$, $B_i$, $C_i$, $W$, $Y$, $Z$ have to be determined by solving (order by order in $\theta$) the system \eqref{Minimization equations in the interpolating model rearranged}.
Looking at the equations \eqref{Minimization equations in the interpolating model rearranged}, it is easy to see that the change $\theta \rightarrow -\theta$ corresponds to the changes $\alpha_j \rightarrow -\alpha_j$ $\forall j$ and $\beta \rightarrow -\beta$, and, as a consequence, the coefficients of the even powers of $\theta$ in the expansions \eqref{Interpolating: expansion in theta of all the phases} must vanish:
\begin{equation}\label{Interpolating: quadratic coefficients of the expansions are equal to zero}
Y=0 ~,~~~ B_i=0 \quad \forall i .
\end{equation}
Concerning the coefficients of the odd powers of $\theta$, the following expressions are found:
\begin{equation}\label{Interpolating: linear coefficient of beta}
W=\frac{A \left[ 1+\frac{F_\pi B_m\bar{m}}{2c}(1-\omega_1) \right]}
{\frac{F_\pi B_m\bar{m}}{2}\left(1+\frac{A(1-\omega_1)^2}{c}\right)+A} ,
\end{equation}
\begin{equation}\label{Interpolating: linear coefficient of alphas}
A_i= \frac{A}{\frac{F_\pi B_m\bar{m}}{2}\left(1+\frac{A(1-\omega_1)^2}{c}\right)+A} \frac{\bar{m}}{m_i} ,
\end{equation}
\begin{equation}\label{Interpolating: cubic coefficient of beta}
\begin{aligned}
Z=&\frac{1}{6}\left(\frac{F_\pi B_m\bar{m}}{2}\right) A^3 \left[ \frac{F_\pi B_m\bar{m}}{2}\left(1+\frac{A(1-\omega_1)^2}{c}\right)+A \right]^{-4} \\
&\times \left[\left(\frac{F_\pi B_m\bar{m}}{2}\right)^2\frac{(1-\omega_1)^3}{c^3}\left(\frac{F_\pi B_m\bar{m}}{2} + A\omega_1 \right) \right. \\
& + \left. \left( 1 - \frac{A\omega_1(1-\omega_1)}{c} \right) \sum_j\left(\frac{\bar{m}}{m_j}\right)^3\right] ,
\end{aligned}
\end{equation}
and:
\begin{equation}\label{Interpolating: cubic coefficient of alphas}
\begin{aligned}
C_i = \frac{1}{6}& \left[ \frac{A}{\frac{F_\pi B_m\bar{m}}{2}\left(1+\frac{A(1-\omega_1)^2}{c}\right)+A} \right]^3 \frac{\bar{m}}{m_i} \\
\times & \left\{ \left(\frac{\bar{m}}{m_i}\right)^2 - \frac{A \left[\sum_j\left(\frac{\bar{m}}{m_j}\right)^3 + \left(\frac{F_\pi B_m\bar{m}}{2c}\right)^3 (1-\omega_1)^4 \right]}{\frac{F_\pi B_m\bar{m}}{2}\left(1+\frac{A(1-\omega_1)^2}{c}\right)+A} \right\} ,
\end{aligned}
\end{equation}
with $\bar{m}$ defined in Eq. \eqref{m-bar}.
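As in the $WDV$ case, the linear-order coefficients \eqref{Interpolating: linear coefficient of beta}-\eqref{Interpolating: linear coefficient of alphas} can be checked numerically against the linearized system \eqref{Minimization equations in the interpolating model rearranged} (i.e., with $\sin x \simeq x$). The sketch below uses arbitrary illustrative parameter values and assumes $1/\bar{m}=\sum_j 1/m_j$ for Eq. \eqref{m-bar}:

```python
# Illustrative parameter values: F_pi*B_m/2, A, c, omega_1.
FB2, A, c, w1 = 0.8, 0.7, 0.5, 0.4
m = [1.0, 2.0, 5.0]
mbar = 1.0 / sum(1.0 / mj for mj in m)   # assumed form of Eq. (m-bar)
X = FB2 * mbar                           # = F_pi*B_m*mbar/2
D = X * (1.0 + A * (1.0 - w1) ** 2 / c) + A   # common denominator

W = A * (1.0 + (X / c) * (1.0 - w1)) / D      # linear coefficient of beta
Ai = [(A / D) * mbar / mi for mi in m]        # linear coefficients of alpha_i

# Common bracket [omega_1*sum(A_j) + (1-omega_1)*W - 1] at O(theta)
G = w1 * sum(Ai) + (1.0 - w1) * W - 1.0
res_alpha = [FB2 * mi * ai + A * G for mi, ai in zip(m, Ai)]
res_beta = c * (W - sum(Ai)) + A * (1.0 - w1) * G
```

All residuals vanish to machine precision, confirming the quoted $W$ and $A_i$.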
Substituting the expressions \eqref{Interpolating: expansion in theta of all the phases} (with $B_i=Y=0$) into Eq. \eqref{Interpolating: explicit potential}
and expanding the potential up to the fourth order in $\theta$, we find:
\begin{equation}\label{Interpolating: potential up to the fourth order in theta}
\begin{aligned}
V(\theta)& \simeq const. {+} \frac{1}{2}\left\{\frac{F_\pi B_m}{2}\sum_j m_j A_j^2{+}c(W{-}\sum_j A_j)^2
{+} A\Big[\omega_1\sum_j A_j{+}(1{-}\omega_1)W{-}1\Big]^2\right\}\theta^2 \\
& {+} \frac{1}{24}\left\{24\,\frac{F_\pi B_m}{2}\sum_j m_j A_j C_j
{-} \frac{F_\pi B_m}{2}\sum_j m_j A_j^4{+}24\,c\,(W{-}\sum_j A_j)(Z{-}\sum_j C_j) \right. \\
& \left. {-} c\:(W{-}\sum_j A_j)^4 {+} 24A\Big[\omega_1\sum_j A_j{+}(1{-}\omega_1)W{-}1\Big] \Big[\omega_1\sum_j C_j{+}(1{-}\omega_1)Z\Big]\right\}\theta^4 + \ldots ,
\end{aligned}
\end{equation}
from which, after inserting the expressions \eqref{Interpolating: linear coefficient of beta}-\eqref{Interpolating: cubic coefficient of alphas}, we obtain the following expressions for the topological susceptibility $\chi$ and the second cumulant $c_4$ in this model:
\begin{equation}\label{Interpolating: topological susceptibility}
\chi=\frac{F_\pi B_m\bar{m}}{2} \:
\frac{A}{\frac{F_\pi B_m\bar{m}}{2}\left(1+\frac{A(1-\omega_1)^2}{c}\right)+A} ,
\end{equation}
\begin{equation}\label{Interpolating: second cumulant}
\begin{aligned}
c_4=-&\frac{F_\pi B_m\bar{m}}{2}\left[\frac{A}{\frac{F_\pi B_m\bar{m}}{2}\left(1+\frac{A(1-\omega_1)^2}{c}\right)+A}\right]^4 \\ & \times \left[\sum\limits_{j=1}^L\left(\frac{\bar{m}}{m_j}\right)^3+\left(\frac{F_\pi B_m\bar{m}}{2c}\right)^3(1-\omega_1)^4\right] .
\end{aligned}
\end{equation}
\subsection{Considerations on the results}
We first notice that the result \eqref{Interpolating: topological susceptibility} was originally found in Ref. \cite{EM1994}, but once again it was obtained by a different approach, i.e., by directly studying the two-point function of the field $Q(x)$. On the contrary, the result \eqref{Interpolating: second cumulant} has been derived in this paper for the first time.
Moreover, we notice that, if $\omega_1 \neq 1$, the topological susceptibility obtained in this \emph{Interpolating} model is smaller than the one obtained in the $WDV$ model, due to the positive (assuming $c>0$: see Refs. \cite{EM2011,MM2013}) corrective factor in the denominator.
If, instead, we set $\omega_1 = 1$ (which, as we shall comment in the next section, represents the most natural choice at $T=0$), the results for both $\chi$ and $c_4$ coincide precisely with those of the $WDV$ model (independently of the other parameters $\kappa_1$ and $F_X$ of the model). The explanation of this fact lies in the potential \eqref{Interpolating: explicit potential}: for $\omega_1=1$, the minimum of $V(\vec{\alpha},\beta)$ requires $\beta=\sum_j \alpha_j$, so that the cosine in the second term is equal to one. In this way, the potential \eqref{Interpolating: explicit potential} coincides, apart from a $\theta$-independent constant, with the potential \eqref{WDV: explicit potential} of the $WDV$ model: so, the final results for the topological susceptibility and for the second cumulant in the \emph{Interpolating} model with $\omega_1=1$ are indeed expected to coincide with those of the $WDV$ model.
\section{Conclusions: summary and analysis of the results}
In this conclusive section, we shall summarize the analytical results that we have found for the topological susceptibility $\chi$ and the second cumulant $c_4$ in the various cases that we have considered.
Moreover, we shall also report numerical estimates for these quantities, obtained both for $L=2$ and $L=3$ in the case of the Chiral Effective Lagrangian (see the discussion at the end of Sec. 2), and for $L=3$ in the other cases (effective Lagrangian models with the inclusion of the flavour-singlet meson field).\footnote{As discussed in detail in Ref. \cite{Veneziano1979},
when including the flavour-singlet meson field in the effective Lagrangian,
we must consider the case $L=3$, if we want to have a realistic description of
the physical world (at least at $T=0$): this is essentially due to
the fact that (see below) the value of $B m_s$, while being considerably larger
than $B m_u$ and $B m_d$, is comparable to (or even smaller than) the anomalous
contribution proportional to $2A/F_\pi^2$ in the meson squared mass matrix\dots}
For our numerical computations, the following values of the known parameters have been used:
\begin{itemize}
\item $A=\left(180\pm 5 \; \text{MeV}\right)^4$ (see Ref. \cite{DGP2005} and references therein).
\item $F_\pi=92.2 \pm 0.2$ MeV (see Ref. \cite{PDG2016-2017}, where the value of $f_\pi = \sqrt{2}F_\pi$ is reported).
\item For what concerns the parameter $B_m$, we shall rewrite it, making use of the relation \eqref{B_m definition}, in terms of the quantity $B$, which directly relates the quark masses to the light pseudoscalar meson masses. In particular, the following relations hold at leading order in chiral perturbation theory:
\begin{equation}\label{formulae for B m_i}
\begin{aligned}
B m_u &= M_{\pi^0}^2 - \frac{1}{2}\left(M_{K^0}^2-M_{K^+}^2 + M_{\pi^+}^2\right) , \\
B m_d &= \frac{1}{2}\left(M_{K^0}^2-M_{K^+}^2 + M_{\pi^+}^2\right) , \\
B m_s &= \frac{1}{2}\left(M_{K^0}^2+M_{K^+}^2 - M_{\pi^+}^2\right) . \\
\end{aligned}
\end{equation}
So, these expressions can be numerically evaluated using the known values for the masses of the mesons $\pi^+$, $\pi^0$, $K^+$, $K^0$ \cite{PDG2016-2017}:
\begin{equation}\label{Meson masses values}
\begin{aligned}
& M_{\pi^+} = 139.57061(24) \; \text{MeV} , \\
& M_{\pi^0} = 134.9770(5) \; \text{MeV} , \\
& M_{K^+} = 493.677(16) \; \text{MeV} , \\
& M_{K^0} = 497.611(13) \; \text{MeV} .
\end{aligned}
\end{equation}
\item For what concerns the quantity $\kappa$, its value is not known \emph{a priori}. A possible way to evaluate it numerically is to make use of the relation among $\kappa$, $F_\pi$ and the meson masses, obtained within the $ENL_\sigma$ model in the case $L=3$:
\begin{equation}\label{relation for kappa}
M^2_{\eta'} + M^2_\eta - M_{K^0}^2 - M_{K^+}^2 = 6\kappa \frac{F_\pi}{\sqrt{2}}.
\end{equation}
Substituting the experimental values of the meson masses (in addition to those given in \eqref{Meson masses values} we need $M_\eta=547.862 \pm 0.017$ MeV and $M_{\eta'}=957.78 \pm 0.06$ MeV \cite{PDG2016-2017}), we find for this parameter the value $\kappa=1856.38\pm 4.04$ MeV.
\end{itemize}
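The parameter values listed above can be reproduced with a few lines of code; the sketch below evaluates Eqs. \eqref{formulae for B m_i} and \eqref{relation for kappa} from the quoted central values (the errors are not propagated here):

```python
# PDG central values (MeV) quoted in the text.
M_pip, M_pi0 = 139.57061, 134.9770
M_Kp, M_K0 = 493.677, 497.611
M_eta, M_etap = 547.862, 957.78
F_pi = 92.2

# Eq. (formulae for B m_i): B times the quark masses, in MeV^2.
B_mu = M_pi0 ** 2 - 0.5 * (M_K0 ** 2 - M_Kp ** 2 + M_pip ** 2)
B_md = 0.5 * (M_K0 ** 2 - M_Kp ** 2 + M_pip ** 2)
B_ms = 0.5 * (M_K0 ** 2 + M_Kp ** 2 - M_pip ** 2)

# Eq. (relation for kappa), solved for kappa (MeV).
kappa = (M_etap ** 2 + M_eta ** 2 - M_K0 ** 2 - M_Kp ** 2) / (6.0 * F_pi / 2 ** 0.5)
```

The central value $\kappa \simeq 1856.4$ MeV of the text is recovered.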
The values of all the parameters listed above allow us to evaluate numerically all the results coming from the Chiral Effective Lagrangian at the leading order $\mathcal{O}(p^2)$, the $ENL_\sigma$ model and the $WDV$ model. The situation of the \emph{Interpolating} model is more complicated: since very little is known about its specific parameters, the results found in this model cannot be fully evaluated numerically. In particular:
\begin{itemize}
\item For what concerns the parameter $F_X$, only an upper bound is known
for it \cite{EM1994,MM2003,EM2011}: $\left|F_X\right| \leq 20$ MeV.
\item For what concerns the parameter $\kappa_1$ (which was named ``$c_1$'' in the original papers), we cannot say much, apart from the fact that (assuming $F_X \neq 0$) it cannot be zero (see Ref. \cite{EM2011} for a detailed discussion of the role of this parameter).
\item Finally, concerning the parameter $\omega_1$, we observe that the Lagrangian of the $WDV$ model is obtained from that of the \emph{Interpolating} model by choosing $\omega_1=1$ (and then letting $F_X \to 0$). At low temperatures, one expects the deviations from the $WDV$ Lagrangian to be small, in some sense, and therefore $\omega_1$ should not differ much from unity near $T=0$ (on the other hand, $\omega_1$ must necessarily be taken equal to zero above the chiral transition temperature, in order to avoid a singular behaviour of the anomalous term \cite{EM1994,MM2013}).
Therefore, $\omega_1=1$ seems to be the most natural choice for $T=0$: with this choice, all the numerical values coincide with those of the $WDV$ model, regardless of the values of the other (unknown) parameters of the model, i.e., $\kappa_1$ and $F_X$.
\end{itemize}
The following two subsections summarize both the analytical and the numerical results. [We recall that $\bar{m}$ is defined in Eq. \eqref{m-bar}.]
\subsection{Topological susceptibility}
\begin{itemize}
\item \emph{Chiral Effective Lagrangian $\mathcal{O}(p^2)$}:
\begin{eqnarray}\label{Eff Ch Lagr: numerical topological susceptibility}
\chi =& \frac{F_\pi B_m\bar{m}}{2} \nonumber \\
\chi^{(L=2)} =& \left(77.25\pm 0.08 \; \text{MeV}\right)^4 \nonumber \\
\chi^{(L=3)} =& \left(76.91\pm 0.08 \; \text{MeV}\right)^4
\end{eqnarray}
\item \emph{ENL$_{\sigma}$ model}:
\begin{eqnarray}\label{ELsm: numerical topological susceptibility}
\chi =& \frac{F_\pi B_m \bar{m}}{2} \frac{2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L}{\frac{F_\pi B_m \bar{m}}{2} + 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L} \nonumber \\
\chi^{(L=3)} =& \left(76.271\pm 0.085 \; \text{MeV}\right)^4
\end{eqnarray}
\item \emph{WDV model}:
\begin{eqnarray}\label{WDV: numerical topological susceptibility}
\chi =& \frac{F_\pi B_m \bar{m}}{2}\frac{A}{\frac{F_\pi B_m \bar{m}}{2} + A} \nonumber \\
\chi^{(L=3)} =& \left(76.283\pm 0.106 \; \text{MeV}\right)^4
\end{eqnarray}
\item \emph{Interpolating model}:
\begin{eqnarray}\label{IM: numerical topological susceptibility}
\chi =& \frac{F_\pi B_m\bar{m}}{2} \: \frac{A}{\frac{F_\pi B_m\bar{m}}{2}\left(1+\frac{A(1-\omega_1)^2}{c}\right)+A} \nonumber \\
\chi^{(L=3)}_{(\omega_1=1)} =& \left(76.283\pm 0.106 \; \text{MeV}\right)^4
\end{eqnarray}
\end{itemize}
\subsection{Second cumulant}
\begin{itemize}
\item \emph{Chiral Effective Lagrangian $\mathcal{O}(p^2)$}:
\begin{eqnarray}\label{Eff Ch Lagr: numerical second cumulant}
c_4 =& -\frac{F_\pi B_m\bar{m}}{2}\sum\limits_{j=1}^L\left(\frac{\bar{m}}{m_j}\right)^3 \nonumber \\
c_4^{(L=2)} =& -\left(11.05 \pm 0.49\right)\times 10^6 \; \text{MeV}^4 \nonumber \\
c_4^{(L=3)} =& -\left(10.30 \pm 0.46\right)\times 10^6 \; \text{MeV}^4
\end{eqnarray}
\item \emph{ENL$_{\sigma}$ model}:
\begin{eqnarray}\label{ELsm: numerical second cumulant}
c_4 =& -\frac{F_\pi B_m \bar{m}}{2} \frac{2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L}{\left(\frac{F_\pi B_m \bar{m}}{2} + 2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L\right)^4} \nonumber \\
&\times \left[\left(2\kappa\left(\frac{F_\pi}{\sqrt{2}}\right)^L\right)^3\sum_j\left(\frac{\bar{m}}{m_j}\right)^3 + \left(\frac{F_\pi B_m \bar{m}}{2}\right)^3\right] \nonumber \\
c_4^{(L=3)} =& -\left(9.007 \pm 0.426\right)\times 10^6 \; \text{MeV}^4
\end{eqnarray}
\item \emph{WDV model}:
\begin{eqnarray}\label{WDV: numerical second cumulant}
c_4 =& -\frac{F_\pi B_m \bar{m}}{2}\left(\frac{A}{\frac{F_\pi B_m \bar{m}}{2} + A}\right)^4 \sum\limits_{j=1}^L \left(\frac{\bar{m}}{m_j}\right)^3 \nonumber \\
c_4^{(L=3)} =& -\left(9.030 \pm 0.134\right)\times 10^6 \; \text{MeV}^4
\end{eqnarray}
\item \emph{Interpolating model}:
\begin{eqnarray}\label{IM: numerical second cumulant}
c_4 =& -\frac{F_\pi B_m\bar{m}}{2}\left(\frac{A}{\frac{F_\pi B_m\bar{m}}{2}\left(1+\frac{A(1-\omega_1)^2}{c}\right)+A}\right)^4 \nonumber \\
& \times \left[\sum\limits_{j=1}^L\left(\frac{\bar{m}}{m_j}\right)^3+\left(\frac{F_\pi B_m\bar{m}}{2c}\right)^3(1-\omega_1)^4\right] \nonumber \\
c_{4~(\omega_1=1)}^{(L=3)} =& -\left(9.030 \pm 0.134\right)\times 10^6 \; \text{MeV}^4
\end{eqnarray}
\end{itemize}
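The central values quoted above can be reproduced with the short numerical sketch below. It evaluates the analytical formulas from the PDG masses, identifying $F_\pi B_m \bar{m}/2$ with $F_\pi^2\,\big(\sum_j 1/(B m_j)\big)^{-1}$ (an assumption about Eqs. \eqref{B_m definition} and \eqref{m-bar} that the numbers bear out); the quoted errors are not propagated:

```python
# PDG central values (MeV) and the pure-gauge topological susceptibility A.
M_pip, M_pi0 = 139.57061, 134.9770
M_Kp, M_K0 = 493.677, 497.611
M_eta, M_etap = 547.862, 957.78
F_pi, A = 92.2, 180.0 ** 4

Bm = [M_pi0 ** 2 - 0.5 * (M_K0 ** 2 - M_Kp ** 2 + M_pip ** 2),  # B*m_u
      0.5 * (M_K0 ** 2 - M_Kp ** 2 + M_pip ** 2),               # B*m_d
      0.5 * (M_K0 ** 2 + M_Kp ** 2 - M_pip ** 2)]               # B*m_s
Bmbar = 1.0 / sum(1.0 / x for x in Bm)    # harmonic combination (Eq. (m-bar))
X = F_pi ** 2 * Bmbar                     # identified with F_pi*B_m*mbar/2
S = sum((Bmbar / x) ** 3 for x in Bm)     # sum_j (mbar/m_j)^3

chi_chpt, c4_chpt = X, -X * S             # Chiral Effective Lagrangian O(p^2)
chi_wdv = X * A / (X + A)                 # WDV (= Interpolating with omega_1=1)
c4_wdv = -X * (A / (X + A)) ** 4 * S

kappa = (M_etap ** 2 + M_eta ** 2 - M_K0 ** 2 - M_Kp ** 2) / (6.0 * F_pi / 2 ** 0.5)
K = 2.0 * kappa * (F_pi / 2 ** 0.5) ** 3  # ENL_sigma anomaly strength
chi_enl = X * K / (X + K)
c4_enl = -X * K * (K ** 3 * S + X ** 3) / (X + K) ** 4
```

Taking fourth roots of the $\chi$'s and dividing the $c_4$'s by $10^6$ reproduces the $L=3$ central values listed in the two subsections above.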
Let us make some remarks on these results.
We observe that, within the present accuracy, there are no significant numerical differences between the results found in the $ENL_\sigma$ model and those found in the $WDV$ model (or in the \emph{Interpolating} model with $\omega_1=1$),
even if the theoretical expressions for the topological susceptibility and the second cumulant are in principle different (even considering the correspondence \eqref{Substitution to go from WDV to ELsm and vice versa}: see the discussion in Sec. 4.1).
On the contrary, the numerical results found in the $ENL_\sigma$ model, the $WDV$ model, and the \emph{Interpolating} model with $\omega_1=1$ differ appreciably from those found using the Chiral Effective Lagrangian at order $\mathcal{O}(p^2)$.
In this respect, we recall that in Refs. \cite{MC2009,GM2015,GHVV2016} the next-to-leading-order (NLO) corrections to the topological susceptibility in the framework of the Chiral Effective Lagrangian have also been computed, and they turn out to be of the order of a percent for physical quark masses. Starting from our results, we can estimate the size of the corrections, caused by the presence of the flavour singlet, to the numerical values obtained using the Chiral Effective Lagrangian $\mathcal{O}(p^2)$, so as to compare them with the NLO corrections: for what concerns the topological susceptibility, these corrections are of the order of a few percent and thus comparable with the NLO ones; for what concerns the second cumulant, instead, the corrections are considerably larger, being about 12\%.
\subsection{Comparison of the results with the literature}
In the end, let us make a comparison between the above-reported numerical estimates and the available lattice results in the literature.
We first consider the topological susceptibility. The value of the topological susceptibility in \emph{full} QCD has been measured through Monte Carlo simulations on the lattice. We report here two recent results, obtained with $L=2+1$ light flavours with physical quark masses:
\begin{equation}\label{lattice results for topological susceptibility}
\begin{split}
\chi^{1/4} = 73(9) \; \text{MeV} ~\qquad \text{(see Ref. \cite{chi-lattice_1})};\\
\chi^{1/4} = 75.6(2.0) \; \text{MeV} \quad \text{(see Ref. \cite{chi-lattice_2})},
\end{split}
\end{equation}
where, for the second value, the error in parentheses has been obtained by adding in quadrature the statistical error (1.8) and the systematic error (0.9).
These results are in perfect agreement (within the large errors) with all those found in our work.
In figure 1, the numerical values obtained for the topological susceptibility in our work are reported together with the lattice results.
\begin{figure}[!ht]
\begin{center}
\begin{minipage}[!ht]{8 cm}
\centering
\includegraphics[width=7.8 cm]{u1theta_T_0_fig1a.eps}
\end{minipage}
\begin{minipage}[!ht]{7.2 cm}
\centering
\includegraphics[width=7.2 cm]{u1theta_T_0_fig1b.eps}
\end{minipage}
\end{center}
\caption{On the left, the two lattice results
\eqref{lattice results for topological susceptibility} for the topological
susceptibility (in the \emph{full} theory with quarks) and the three
theoretical estimates for $L=3$, reported in Eqs.
\eqref{Eff Ch Lagr: numerical topological susceptibility},
\eqref{ELsm: numerical topological susceptibility}, and
\eqref{WDV: numerical topological susceptibility}-\eqref{IM: numerical topological susceptibility},
are shown (from left to right). On the right, only the three theoretical estimates are shown (in a different scale), so as to better compare them with each other.}
\end{figure}
With the help of this figure, we see that the numerical value obtained using the Chiral Effective Lagrangian $\mathcal{O}(p^2)$ (the first point in the figure on the right) is clearly detached from those of the $ENL_\sigma$ model and of the $WDV$ (or \emph{Interpolating}) model (the second and third points in the figure on the right, respectively). Moreover, these last two values are evidently compatible within the uncertainties.
Let us now move to the second cumulant. In lattice simulations, a quantity which is linked to the second cumulant is usually measured rather than the second cumulant itself, due to a simpler definition on the lattice. We report here the definition of this quantity, usually called $b_2$ (a more detailed description of this parameter can be found in Ref. \cite{VP2009}):
\begin{equation}\label{b2 definition}
b_2 \equiv \frac{c_4}{12\chi} = -\frac{\langle Q_{tot}^4\rangle_{\theta=0}- 3 \langle Q_{tot}^2\rangle^2_{\theta=0}}{12 \langle Q_{tot}^2\rangle_{\theta=0}} .
\end{equation}
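As an illustration, $b_2$ can be estimated directly from $\theta=0$ Monte Carlo samples of the total topological charge; the following numpy sketch (ours, not a lattice code) uses the dilute instanton-gas limit, where $b_2 = -1/12$ as recalled later in the text, as a sanity check:

```python
import numpy as np

def b2_estimate(q_samples):
    """Estimate b2 = c4/(12*chi) from theta=0 samples of the total
    topological charge Q_tot (the volume factors cancel in the ratio)."""
    q = np.asarray(q_samples, dtype=float)
    q2 = np.mean(q**2)
    q4 = np.mean(q**4)
    # c4 ~ -(<Q^4> - 3<Q^2>^2); chi ~ <Q^2>
    return -(q4 - 3.0 * q2**2) / (12.0 * q2)

# Sanity check: in a dilute instanton-gas picture, Q = N+ - N- with N+/- Poisson
# distributed, so all even cumulants equal 2*lambda and b2 -> -1/12, the
# pure-gauge value recalled later in the text.
rng = np.random.default_rng(0)
q = rng.poisson(2.0, 1_000_000) - rng.poisson(2.0, 1_000_000)
print(b2_estimate(q))  # close to -1/12 ~ -0.083, up to sampling noise
```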
All the lattice determinations of this parameter at $T=0$ are obtained, to date, in $SU(N_c)$ pure-gauge frameworks, considering $N_c\geq3$: it must be taken into account that our final results have been obtained in a \emph{full} QCD framework. There are, in the literature, a number of results for $b_2$ at $N_c = 3$, obtained using different approaches (see Ref. \cite{VP2009} and references therein):
\begin{equation}\label{lattice results for b2 (1)}
\begin{aligned}
b_2&=-0.023(7) \quad \text{(cooling method)};\\
b_2&=-0.024(6) \quad \text{(heating method)};\\
b_2&=-0.025(9) \quad \text{(overlap method)};
\end{aligned}
\end{equation}
while more recent results are:
\begin{equation}\label{lattice results for b2 (2)}
\begin{split}
b_2 & = -0.026(3) \qquad \text{(see Ref. \cite{b2-lattice_1})};\\
b_2 & = -0.0216(15) \quad \text{(see Ref. \cite{b2-lattice_2})}.
\end{split}
\end{equation}
Starting from our results for the topological susceptibility and for the second cumulant in the various cases described, we find:
\begin{itemize}
\item \emph{Chiral Effective Lagrangian $\mathcal{O}(p^2)$}:
\begin{eqnarray}\label{Eff Ch Lagr: numerical b_2}
b_2^{(L=2)} =& -0.026(1) \nonumber \\
b_2^{(L=3)} =& -0.025(1)
\end{eqnarray}
\item \emph{ENL$_{\sigma}$ model}:
\begin{equation}\label{ELsm: numerical b_2}
b_2^{(L=3)}=-0.0222(1)
\end{equation}
\item \emph{WDV model}:
\begin{equation}\label{WDV: numerical b_2}
b_2^{(L=3)}=-0.0222(4)
\end{equation}
\item \emph{Interpolating model}:
\begin{equation}\label{IM: numerical b_2}
b_{2~(\omega_1=1)}^{(L=3)}=-0.0222(4)
\end{equation}
\end{itemize}
In figure 2, these theoretical estimates for $b_2$ (for the \emph{full}
theory with $L=3$) are reported together with the above-mentioned
lattice (pure-gauge) results.
\begin{figure}[!ht]
\begin{center}
\centering
\includegraphics[width=12 cm]{u1theta_T_0_fig2.eps}
\end{center}
\caption{The five lattice (pure-gauge) results, reported in Eqs.
\eqref{lattice results for b2 (1)} and \eqref{lattice results for b2 (2)},
and the three theoretical estimates for the \emph{full} theory with $L=3$,
reported in Eqs. \eqref{Eff Ch Lagr: numerical b_2},
\eqref{ELsm: numerical b_2}, and \eqref{WDV: numerical b_2}-\eqref{IM: numerical b_2}, are shown (from left to right).}
\end{figure}
We notice that the lattice (pure-gauge) results turn out to be compatible (in
almost all cases) with our theoretical estimates: this global accordance is
quite impressive, considering that our results have been derived in \emph{full}
QCD rather than in a pure Yang-Mills theory.
We also recall that, on the basis of the results obtained in Secs.
3.1, 4.1, and 5.1, the value of the ratio $b_2$ tends, in the infinite
quark-mass limit, to the pure-gauge value $b_2^{(YM)} = -\frac{1}{12} \simeq -0.083$ (also obtained using a pure-gauge instanton-gas model) in the $ENL_\sigma$
model [see Eq. \eqref{ELsm: pure-gauge limit of chi and c4}], while it tends
to the pure-gauge value $b_2^{(YM)} = 0$ in the $WDV$ model (and in the
\emph{Interpolating} model with $\omega_1 = 1$)
[see Eq. \eqref{WDV: pure-gauge limit of chi and c4}]: therefore, we see
that both the lattice pure-gauge data and our \emph{full}-QCD theoretical
estimates lie in between these two different values and (considering the errors) they disagree with both of them, even if they are considerably closer to the
second one.
It will be interesting to see whether future, more precise lattice data
(including also the effects of quarks with physical masses) will confirm
(or not) this curious coincidence.
\newpage
\renewcommand{\Large}{\large}
\section{Architecture}
We implemented HNAs as two joined networks, as depicted in figure~\ref{fig:architecture}: a backbone network, which learns distributions over the training data, and an observer network whose output modulates the activations of the backbone network's neurons. The backbone is itself composed of two parts: an optimized prior, and a posterior
network that integrates the prior with the information received from the observer. Only the observer network receives information specific to the desired output. The combination in the backbone of the prior and the specific information encoded by the observer produces an approximation of the desired output. Interestingly, without the measurement made by the observer network, the output of the backbone network alone represents an undifferentiated state, i.e.\ a distribution over all possible outputs. Furthermore, because the output of the observer network is directly connected to all neurons inside the backbone network, these neurons are entangled. In other words, their activations are synchronized in a way that causes the output of the network to collapse into values representing a more restricted set of possible outputs. Because the role of the backbone network is to explicitly encode the space of shared variations among examples, it allows the specific information of the observer to be encoded in a very low-dimensional space. For this paper we used a single neuron with a sine activation as the output of the observer network, reducing all the specific information about the input to a single, bounded and continuous dimension.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{figures/arch.pdf}
\caption{The HNA used in this work}
\label{fig:architecture}
\end{figure}
Figure~\ref{fig:architecture} shows the architecture that we used to implement our HNAs. The prior can easily be conditioned (for example on the class label) by having one distribution for every condition. Here we used Gaussian mixtures as priors and optimized the parameters of each mixture during training. Every hidden layer in the backbone, as well as the output layer, receives the output of the previous layer concatenated with a skip connection to the observer's output. These skip connections are expanded to match the size of the previous hidden layer. Therefore, the activation of each layer of the backbone network is computed using the following formula:
\begin{equation}
h_{n} = f(W_{n} \cdot (h_{n-1} | s_{n})) + b_{n}
\end{equation}
where $h_{n}$ is the activation of the $n^{th}$ layer, $f$ is an activation function, $s_{n}$ is the $n^{th}$ skip connection (of the same size as $h_{n-1}$), $W_{n}$ is a parameter matrix of size $N \times 2N$, $b_{n}$ is a bias parameter, $|$ denotes concatenation and $\cdot$ denotes matrix multiplication. Thanks to the skip connections, the integration of the prior with the observer's measurement is performed throughout the network. Every layer in the backbone network has its activation modified by the output of the observer, both directly and through the activation of the previous layer.
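As an illustration, one backbone layer can be sketched in a few lines of numpy; this follows the formula above literally (the weights, sizes and the sine non-linearity used here are stand-ins for illustration, not the authors' code):

```python
import numpy as np

def backbone_layer(h_prev, observer_out, W, b, f=np.sin):
    """One backbone layer: h_n = f(W_n . (h_{n-1} | s_n)) + b_n, as in the text.

    h_prev:       activation of the previous layer, shape (N,)
    observer_out: scalar output of the observer network
    W:            weight matrix of shape (N, 2N)
    b:            bias vector of shape (N,)
    """
    # Expand the observer's scalar output to match the size of h_{n-1} ...
    s = np.full_like(h_prev, observer_out)
    # ... concatenate, apply the affine map and the non-linearity, add the bias.
    return f(W @ np.concatenate([h_prev, s])) + b

N = 128
rng = np.random.default_rng(0)
h = backbone_layer(rng.normal(size=N), 0.7,
                   rng.normal(size=(N, 2 * N)) / np.sqrt(2 * N),
                   np.zeros(N))
print(h.shape)  # (128,)
```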
\section{Biomedical Datasets}
\subsection{The Cancer Genome Atlas dataset}
Recent years have seen a large increase in the availability of RNA sequencing data \citep{tcga,gtex}. Despite constant improvements in devices and protocols, RNA sequencing remains a complex process. Samples are often collected by different experimenters in different conditions. These factors introduce an element of uncertainty that is very difficult to quantify. This renders RNA sequencing, and therefore the gene expressions that are derived from the sequencing, susceptible to noise. Another issue, inherent to the cost and availability of biological data, is that it can often be impossible to obtain a large number of RNA sequencing samples for a given condition.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/tcga.pdf}
\caption{PCA of cancer gene expression samples for: (A) bladder cancer samples (BLCA) when training examples and generated samples are reduced using the first 2 principal components of training examples. Round dots represent training examples, triangles represent samples generated using a FSS method. The color represents the activation of the observer. (B) shows the same graph for kidney cancers (KIPAN). (C) Distributions of the 3 cancer subtypes composing KIPAN. The last row shows how well generated samples follow the distribution of training examples when principal components are taken individually for BLCA (D) and KIPAN (E).}
\label{fig:tcga}
\end{figure}
To investigate how well HGNs can model cancer gene expression data, we trained an HGN on the TCGA dataset \citep{tcga}. This dataset contains \numprint{13246} samples from 37 cancers. The number of samples for each cancer type ranges from 45 to \numprint{1212}. Every sample contains the measured expressions of \numprint{20531} genes. We used the same HGN architecture as before, but with a layer size of 300 neurons and 10 layers in the backbone network. Here the network receives gene expressions instead of images and cancer types instead of image classes. Since gene expressions cannot be plotted and visually evaluated, we elected to reduce the dimensionality of both training examples and reconstructions using a PCA (figure~\ref{fig:tcga}). We chose PCA because it is a linear transformation; therefore, any non-linear behavior observed after the dimensionality reduction is due entirely to the network. We computed, for every cancer type, a PCA transformation on the training examples, and used that transformation to reduce both training examples and reconstructions. Reconstructions, therefore, are not only presented from the `point of view' of training examples, but they also have no influence on the PCA computation. The transformation is thus not biased toward accurately representing reconstructions.
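The evaluation protocol of this paragraph, fitting the PCA on training examples only and then projecting both training examples and reconstructions with that fixed transform, can be sketched as follows (a numpy-only illustration; the actual PCA implementation used by the authors is not specified):

```python
import numpy as np

def pca_fit(X_train, k=2):
    """Fit a k-component PCA on training examples only (mean + principal axes)."""
    mu = X_train.mean(axis=0)
    # SVD of the centred data; the rows of Vt are the principal axes.
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_project(X, mu, axes):
    """Project any samples (training or generated) with the *training* transform."""
    return (X - mu) @ axes.T

rng = np.random.default_rng(0)
train = rng.normal(size=(100, 50))      # stand-ins for gene-expression vectors
generated = rng.normal(size=(30, 50))   # stand-ins for HGN reconstructions
mu, axes = pca_fit(train, k=2)
print(pca_project(generated, mu, axes).shape)  # (30, 2)
```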
Figure~\ref{fig:tcga} shows how well the distribution of generated gene expressions models the distribution of training samples when both are reduced to their first 4 principal components. Results on KIPAN are particularly interesting. This cancer type is made of 3 distinct subtypes. Despite the heterogeneity of the samples, the HGN was capable of modeling the 3 subtypes, and of generating intermediate samples between these subtypes. These results strongly suggest that HGNs can accurately model RNA sequencing data. Figure~\ref{fig:tcga} also sheds light on the inner functioning of HGNs, as it shows the trajectory followed in the input space as the observer value increases. Through the interaction with the backbone network, the 1-dimensional manifold learned by the observer is embedded in the higher-dimensional holographic representation of the backbone. The result is then projected onto the input space.
\subsection{IEDB}
MHC-I associated peptides (MIPs) are small sections of proteins that are presented at the surface of cells by the MHC-I molecules. MIPs present an image of the inner functioning of the cell to the immune system and thus play an essential role in the identification of cancerous and virus-infected cells. Given their central role in immunity, being able to reliably predict the binding affinity of MIPs to MHC-I alleles has been identified as a milestone for the development of virus and cancer vaccines \citep{mhcvac,Backert2015}. Modern high throughput studies routinely generate tens of thousands of MIPs whose binding affinity to the subjects' MHC-I molecules have to be assessed \citep{mhcdset}. Given the impossibility of experimentally measuring such a high number of affinities in the lab, machine learning affinity predictors are now the norm \citep{Backert2015,Granados2014,Granados2012,Laumont2016,Pearson2016}. MHC-I molecules are however encoded by extremely polymorphic genes in the human species ($\sim$\numprint{13324} alleles) \citep{robinson2014ipd}. This makes it impossible to gather tens of thousands of training examples for every single allele and modern predictors suffer from a lack of precision regarding rare alleles.
We previously showed on the Olivetti dataset that HGNs are capable of learning rich representations from a small number of training examples. Here we tested whether this extends to regression models as well. Our Holographic Regression Network (HRN) shares the same architecture as the HGNs. However, the network takes the label of the MHC-I allele instead of the class label, and the sequence of the MIP in amino acids, encoded using word embeddings \citep{bengio2003neural}, instead of the image. Most MIPs vary from 8 to 11 amino acids in length. To account for this difference, we used a default size of 11 and padded the remaining slots with zeros when needed. The output of the network is the predicted affinity of the MIP sequence for the MHC-I allele. We trained our network on the IEDB database \citep{iedb}, which contains \numprint{185157} measurements of affinities of MIPs to MHC-I alleles, for 6 species including humans. We normalized all the measurements to [0, 1] by transforming them using the following formula:
\begin{equation}
X = \log_2 (V+2)
\end{equation}
where $V$ is the vector of measurements, and then dividing by the maximum value of $X$. IEDB also contains several MIP to MHC-I allele associations with underspecified binding measurements that only give an upper or lower bound. We elected to keep these examples and use the lower or upper bound as the target value. We evaluated our model on an 80/20 random split of the data: we trained our model on 80\% of the data and tested on the remaining 20\%.
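A minimal sketch of this normalization (the function and variable names are ours):

```python
import numpy as np

def normalize_affinities(V):
    """Normalize raw binding measurements as described in the text:
    X = log2(V + 2), then divide by the maximum value of X."""
    X = np.log2(np.asarray(V, dtype=float) + 2.0)
    return X / X.max()

# Toy measurements; the largest one maps to exactly 1, the rest to (0, 1).
v = np.array([0.0, 10.0, 5000.0, 50000.0])
print(normalize_affinities(v))
```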
Our model achieved performance comparable to that reported for the state-of-the-art model NetMHC 4.0 \citep{andreatta2015gapped} (see table~\ref{tbl:iedb}). The HRN shows a higher Pearson correlation coefficient (PCC) than NetMHC 4.0 on all lengths except the 11-mers. Furthermore, our results also show very good performance on the alleles that were removed in the study of \citet{andreatta2015gapped} for having too few examples (fewer than 20 MIPs). These results show that the ability of HGNs to generalize from very few examples extends to HRNs as well. This suggests that HNAs can improve the performance of regression models, especially when the number of training examples is small.
\begin{table}[h]
\centering
\begin{tabular}{@{}lll@{}}
\toprule
& NetMHC4.0 & HRN\\
\midrule
8-mer & 0.717 & \textbf{0.761}\\
9-mer & 0.717 & \textbf{0.761}\\
10-mer & 0.744 & \textbf{0.747}\\
11-mer & \textbf{0.706} & 0.691\\
\midrule
All lengths, human MIPs only & - & 0.769\\
All lengths & - & 0.760\\
All lengths, MHC-I alleles with less than 20 ex. & - & 0.953*\\
\bottomrule
\end{tabular}
\caption{Pearson correlation coefficient (PCC) NetMHC4.0 vs HRN. NetMHC4.0 results are taken from \citet{andreatta2015gapped}.\\$*$Training set contains 160 examples, testing 19 examples.}
\label{tbl:iedb}
\end{table}
\section{Conclusion}
In this work, we have introduced HNAs, a new framework for deep learning capable of deriving holographic representations from training sets. We have shown that HNAs can generalize from very few examples, can be used for both generative models and state-of-the-art regression models, and are highly noise resistant. These characteristics make HNAs particularly well-suited for biological applications where the available training data is limited or noisy. We also showed that, in the context of HNAs, the choice of activation function can dramatically influence convergence. Our results show that sine activations consistently outperform several more widely used activation functions. Whether these results extend to other types of architectures remains to be investigated. Our experiments also suggest that HNAs are easy to train. All the networks in this work follow the same basic recipe, and we used the exact same architecture for all tasks on image datasets. Finally, a very interesting aspect of HNAs is their ability to project all examples into a single bounded dimension. This strongly suggests that they could also be used as effective dimensionality reduction methods.
\section{Holographic Generative Networks}
Generative networks try to learn the distribution of training examples in order to generate new samples. Because HNAs explicitly learn priors over the training data, they should perform well as generative networks. We designed Holographic Generative Networks (HGNs) as auto-encoders. Here the information received by the observer is the image to be reconstructed, while the backbone network receives random samples drawn from Gaussian mixtures conditioned on the image class. To ensure the reproducibility of our results, and to assess the ease of training of HGNs, we used a single architecture for all experiments in this section regardless of the dataset: MNIST \citep{mnist}, Fashion-MNIST \citep{fashionmnist}, SVHN \citep{svhn}, Olivetti \citep{olivetti}. All networks minimized an MSE reconstruction loss and were trained using Adagrad \citep{duchi2011adaptive} with a learning rate of 0.01. Layers were initialized using Glorot initialization \citep{glorot2010understanding}, Gaussian mixture parameters were initialized using a uniform distribution bounded in [0, 1], and each mixture contained 16 1-dimensional Gaussians. We used a layer size of 128, with 2 hidden layers for the observer network and 4 for the backbone. All networks were implemented and trained using the deep learning framework Mariana \citep{mariana}.
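Sampling the per-class prior described above is straightforward; a minimal numpy sketch, assuming (the text does not fully specify the factorization) that each latent dimension is drawn independently from the class's 16-component 1-dimensional Gaussian mixture:

```python
import numpy as np

def sample_mixture(means, stds, weights, size, rng):
    """Sample `size` i.i.d. values from a 1-D Gaussian mixture
    (16 components in the paper), one value per latent dimension."""
    comp = rng.choice(len(means), size=size, p=weights)  # pick a component per draw
    return rng.normal(means[comp], stds[comp])           # sample that Gaussian

rng = np.random.default_rng(0)
means = rng.uniform(0, 1, 16)        # mixture parameters initialized in [0, 1]
stds = rng.uniform(0.01, 1, 16)      # (stand-in initialization)
weights = np.full(16, 1 / 16)
z = sample_mixture(means, stds, weights, size=128, rng=rng)
print(z.shape)  # (128,)
```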
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/nonlin.pdf}
\caption{Impact of non-linearities on the convergence of HGNs. Sine type activations outperform more widely used non-linearities}
\label{fig:nonlin}
\end{figure}
Because HNAs heavily rely on the modulation of the backbone network by the observer’s output, the backbone representations have to be rich and flexible enough to allow for a wide set of possible modulations. To explore the impact of activation functions on HNAs, we compared the convergence performance of networks using the following non-linearities: sigmoid, tanh, relu \citep{relu}, lrelu \citep{lrelu}, and sine. For networks using sine activations, we used a regular sine activation for all layers but the last one, for which we used a sine normalized to [0, 1] to match the target values. As shown in figure~\ref{fig:nonlin}, HGNs using sine-type activation functions consistently converge better than the others. Surprisingly, the performance of non-sine activation functions is very dataset dependent. We also found that multiplying the output of the observer by an arbitrary value (here 10) can improve convergence (figure~\ref{fig:nonlin}); we refer to this non-linearity as `sin10'. As shown in figure~\ref{fig:nonlin}, HGNs using sin10 converge better than those using a regular sine function, the only exception being the Olivetti dataset, on which both networks showed very similar performances. These results suggest that HNAs benefit from the non-linear processing performed by sine-type functions.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/recons.pdf}
\caption{HGN generated images. The first row shows generated images using an FSS method. The second row shows the closest images in the training set calculated using an euclidean distance. The third row shows the observer activation.}
\label{fig:recons}
\end{figure}
Sampling from a trained HGN is trivial. After fixing the class label, the whole generation process is controlled by the single output of the observer. Since this value is bounded between [-1, 1], our sampling method simply generates images using an ascending vector of $N$ evenly spaced values between [-1, 1]. We call this method of sampling Full Spectrum Sampling (FSS). It has the advantage of generating samples showing the whole spectrum of what the model can generate with arbitrary precision. Furthermore, because all elements are arranged along a single dimension, we found that examples will generally be arranged in a ``semantic'' way, with similar images corresponding to similar observer activations. This is exemplified in figure~\ref{fig:recons} where we can see the gradual evolution of generated outputs that occur on all datasets.
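FSS as described reduces to a one-line sweep of the observer value; a hedged sketch, where `generate_fn` stands in for a trained HGN decoder with the class already fixed:

```python
import numpy as np

def full_spectrum_sampling(generate_fn, n_samples):
    """Full Spectrum Sampling: sweep the observer's bounded output.

    generate_fn: maps an observer value in [-1, 1] (with the class label
                 assumed fixed inside the closure) to one generated sample.
    """
    observer_values = np.linspace(-1.0, 1.0, n_samples)
    return [generate_fn(v) for v in observer_values], observer_values

# Toy stand-in for a trained backbone: any function of the observer value works.
samples, values = full_spectrum_sampling(lambda v: np.sin(10 * v), n_samples=5)
print(values)  # 5 evenly spaced observer values in [-1, 1]
```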
As shown in figure~\ref{fig:recons}, the same architecture was able to generate realistic images for all datasets. Some of the generated images are, however, a little blurry. Our hypothesis is that the blur is a consequence of the architecture not having enough capacity to effectively model the distribution of training examples. This is supported by the fact that generations on the Olivetti dataset are much sharper. Of all datasets, the Olivetti dataset is the most challenging for generative models because of the small number of samples. This dataset contains 10 images for each of the 40 subjects. The small number of examples makes it easy to overfit and harder to generalize. Nonetheless, the HGN was able to generate smooth interpolations between facial expressions, as shown in figure~\ref{fig:recons}. Using the same architecture as for the other datasets, we were able to generate 100 images for each subject from the 10 available in the training set. Because the model was conditioned on the image class, here the subject, it learned a different movement pattern for each class. These movements include continuous changes of facial expressions and 3D rotations, as shown in figure~\ref{fig:recons}. On Olivetti, we found that adding a small fixed amount of stochastic Gaussian noise can slightly improve the smoothness and quality of the interpolation. Interestingly, SVHN generations show no side digits. This is due to the lack of correlation between the center digit and the side digits. This behavior is related to the noise reduction performed by HNAs, an aspect explored further in the next section.
\subsection{Demonstration of Noise Resistance}
An interesting feature of HNAs is their ability to filter out noise from the training set. This generalization property is especially important when the signal to noise ratio cannot be as directly assessed as with images, or when getting noise free examples is virtually impossible.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/noiseres1.png}
\caption{Comparison of denoising performances of an HGN, a standard AE, and a VAE tested on a corrupted synthetic toy dataset.}
\label{fig:noiseres1}
\end{figure}
To illustrate the nature of the noise filtering, we generated a synthetic dataset from three concentric crescents, each one corresponding to a single class. For each class, we generated 1000 points and added increasing amounts of normal noise. We then trained an HGN to reproduce the points of each class. Here it is important to note that, contrary to the standard denoising auto-encoder (dAE) setup, where the network receives noisy inputs but the costs are computed on clean targets \citep{dae1,dae2}, here both the input and the target are identical and thus share the same noise. This setup therefore mimics a real-life situation where the true target cannot be known. We can see in figure~\ref{fig:noiseres1} that the HGN was able to remove a significant part of the noise in the reconstructions. As expected, the capacity of the network to extract the underlying distributions decreases as we increase the noise standard deviation. However, at high levels of noise (std=0.08), the network is still able to extract clear, distinct distributions for each class. At very high levels of noise (std=0.16), the network extracted important features of the original data such as the bell shape and the 3 separate classes. These results show that HGNs are capable of retrieving information about underlying distributions from very corrupted examples. Figure~\ref{fig:noiseres1} also shows the results obtained by a standard AE with relu units and an architecture similar to the backbone network of the HGN, and by a variational autoencoder (VAE). We used a standard VAE implementation \citep{vae}. The inference model is Gaussian with a diagonal covariance, and the generative model is a Bernoulli vector:
\begin{align}
z \sim q_{\phi}(z | x) &= \mathcal{N}(\mu_{\phi}(x), \Sigma_{\phi}(x)^2)\\
x \sim p_{\theta}(x | z) &= \mathcal{B}(\pi_{\theta}(z))
\end{align}
We report results with the prior $p(z) = \mathcal{N}(0, \mathtt{I})$, as we have not found that learning the prior during training helps the model perform the denoising operation. Under these conditions, the results in figure~\ref{fig:noiseres1} display a typical failure mode of VAEs when trained on low-dimensionality inputs (as hinted also in \citet{vaefail1}). As expected, the standard AE overfitted the training set and was unable to distinguish the true input signal from the noise.
We further designed an experiment on the Olivetti and MNIST datasets mimicking the variability encountered in real life data capture. In both cases we corrupted images using the following formula:
\begin{equation}
\widetilde{X} = \mathtt{max}(0, X + \mathcal{N}(0, \Sigma^2))
\end{equation}
The clipping to 0 for negative values simulates a situation where the signal goes below the detection threshold. The corruption was performed only once on the whole dataset, not at every training step. This simulates the real life situation where only a finite number of noisy training examples are available.
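This corruption step can be sketched directly from the formula (a hypothetical helper name; as described, it is applied once to the whole dataset, not at every training step):

```python
import numpy as np

def corrupt_once(X, sigma, seed=0):
    """Corrupt the whole dataset once: add Gaussian noise of std `sigma`,
    then clip negative values to 0 to simulate a detection threshold."""
    rng = np.random.default_rng(seed)
    return np.maximum(0.0, X + rng.normal(0.0, sigma, size=X.shape))

images = np.random.default_rng(1).uniform(0, 1, size=(10, 28 * 28))
noisy = corrupt_once(images, sigma=0.32)
print(noisy.min() >= 0.0)  # True: no values below the detection threshold
```

Because the seed is fixed, repeated calls return the same corrupted dataset, matching the "corrupted only once" protocol.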
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/noise_1c.pdf}
\caption{Comparison of denoising performances when models are trained on corrupted data of a single class of: (A) MNIST and (B) Olivetti. Reconstructions obtained with a standard AE, a dAE, a VAE and a HGN.}
\label{fig:noise_1c}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/noise_unlab.pdf}
\caption{Comparison of denoising performances when models are trained on corrupted unlabeled examples. Reconstructions obtained with a VAE and a HGN.}
\label{fig:noise_unlab}
\end{figure}
Using these corrupted datasets, we compared the denoising performance of the HGN to that of other models. Because standard implementations of AEs, dAEs and VAEs are not conditioned on the class, we trained all models either on a single class for each dataset (figure~\ref{fig:noise_1c}), or on unlabeled datasets by putting all examples under the same class (figure~\ref{fig:noise_unlab}). As shown in figure~\ref{fig:noise_1c}, at low noise levels on MNIST (std=0.08), both the AE and the dAE appear to have overfitted the data. The reconstructions of the VAE and the HGN are both noiseless and look similar to each other. At higher noise levels (std=0.32), the dAE was able to remove more of the noise than the AE. The VAE output collapsed towards the average image. This illustrates a failure mode of VAEs: natural images rarely have a multivariate Gaussian structure, so unless $z$ is lossless, which is not going to be the case in practice, the best-fitting distribution for the generative model will average multiple outputs \citep{vaefail2,elbo}. The HGN reconstructions were not affected by the noise increase.
On the Olivetti dataset, training on a single class means training on 10 images. Figure~\ref{fig:noise_1c} therefore shows the whole training set. At low noise levels (std=0.08), the AE and dAE show similar outputs, and neither was able to perform any noise reduction. The VAE again collapsed on the average face. The HGN reconstructions, however, are significantly less noisy than the inputs and also show variability in the face angle. At high noise levels (std=0.32), the HGN was the only model that did not overfit, and it was able to remove a major part of the noise.
Figure~\ref{fig:noise_unlab} shows the comparison of an HGN and a VAE trained on an unlabeled, corrupted version of Olivetti. At low noise levels (std=0.08), both models generated faces that are less noisy than the inputs. Interestingly, some of the faces generated by the HGN do not reconstruct the inputs, such as the face of the third woman from the left. This behavior suggests that the model relies more on the general representation in the backbone than on the single value received from the observer. The more we increase the noise level, the bigger the difference between the two models. At very high noise levels (std=0.32 and std=0.64), the HGN was better able to extract a face structure than the VAE, and could still generate outputs showing some variability. In conclusion, despite being presented with very corrupted inputs from very small datasets, HGNs did not overfit and were able to filter out a major part of the noise. These results show that HGNs are highly noise resistant.
\section{Introduction}
The success of deep learning algorithms during the last decade can be attributed to their ability to learn relevant latent representations \citep{bengio2013representation}. However, despite these recent advances, generalization from few and potentially noisy examples remains an open problem. Most deep learning architectures have many more parameters than the number of examples they are trained upon and are therefore particularly sensitive to noise and overfitting on small datasets. These limitations can make deep learning algorithms hard to apply to real life situations where data is scarce and noisy. These problems could in theory be overcome if we were able to effectively learn relevant, smooth, latent representations from few examples and despite the presence of noise.
Manifold learning is an emergent property of some deep learning algorithms. A class of algorithms that aims at deriving latent, lower-dimensional manifolds from the training set is that of Autoencoders (AEs) with bottleneck layers \citep{reprlearning}. Since these networks reconstruct their input using fewer units than the input dimensionality, it is assumed that the bottleneck representation `summarises' the input. More recently, the application of variational approaches to auto-encoders has led to the introduction of Variational Autoencoders (VAEs) \citep{vae}. VAEs address the problem of manifold learning by constraining the parameter distribution to a simpler distribution family than the true distribution. This approach forces the latent space to be dense, enabling straightforward sampling for generation.
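To make the bottleneck idea concrete, here is a minimal sketch of our own (a toy illustration, not the architecture evaluated in this paper): a linear autoencoder with a 2-unit bottleneck trained by plain gradient descent, so that 20-dimensional inputs must be summarised by two numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))               # toy training set
W_enc = rng.normal(scale=0.1, size=(20, 2))  # encoder: 20 -> 2 bottleneck
W_dec = rng.normal(scale=0.1, size=(2, 20))  # decoder: 2 -> 20

def recon_error(X, W_enc, W_dec):
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

lr = 0.05
loss0 = recon_error(X, W_enc, W_dec)
for _ in range(300):
    Z = X @ W_enc                    # bottleneck codes
    R = Z @ W_dec - X                # reconstruction residual
    W_dec -= lr * Z.T @ R / len(X)   # gradient steps (up to a constant factor)
    W_enc -= lr * X.T @ (R @ W_dec.T) / len(X)
loss1 = recon_error(X, W_enc, W_dec)
print(loss0, "->", loss1)            # reconstruction error decreases
```

For a linear network this bottleneck recovers (at best) the top two principal components of the data; the nonlinear networks discussed here generalize this idea.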
In our approach, manifold learning is also a consequence of the network's training. We force the network to generalize by placing an extremely severe bottleneck over the information received from training examples. Here, we project all training examples into a single bounded dimension. As with VAEs, we also combine the input information with an optimized prior. However, we treat the prior as a separate input to the network.
Because the network has very little information from the training examples, it must complement it with an accurate general representation of the training set. Because these representations are continuous, multi-dimensional, and represent the whole training set, we call them `Holographic Representations' and the architectures capable of generating them `Holographic Neural Architectures' (HNAs).
In this work, we introduce the general framework of HNAs as well as an implementation of the concept. We tested our models on several datasets: MNIST \citep{mnist}, Fashion-MNIST \citep{fashionmnist}, SVHN \citep{svhn}, Olivetti \citep{olivetti}, TCGA \citep{tcga} and IEDB \citep{iedb}. We show that HNAs are inherently highly resistant to input noise. We report that HNAs can be used to build noise resistant generative networks even on very small datasets, as well as state-of-the-art regression models. Finally, we also explore the effects of different activation functions on the performances of HNAs. Our results show that sine-type activations outperform more widely used non-linearities.
\section{Introduction}
The bivariate formula
\begin{equation}\label{GF1}
\sum_{k=1}^{\infty}\frac{k}{k^4-a^2k^2-b^4}
=\frac{1}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}(5k^2-a^2)}{k\binom{2k}{k}}
\cdot\frac{\prod_{j=1}^{k-1}((j^2-a^2)^2+4b^4)}{\prod_{j=1}^{k} (j^4-a^2j^2-b^4)}
\end{equation}
was first conjectured by H. Cohen and then proved independently by T. Rivoal \cite[Theorem 1.1]{Ri04} and D. M. Bradley \cite[Theorem 1]{Br08}, who reduced it to the finite combinatorial identity
\begin{equation}\label{ID1}
\sum_{k=1}^n\binom{2k}{k}
\frac{(5k^2-a^2)\prod_{j=1}^{k-1}((n^2-j^2)(n^2+j^2-a^2))}{
\prod_{j=1}^{k} (n^2+(n-j)^2-a^2)(n^2+(n+j)^2-a^2)}=\frac{2}{n^2-a^2}
\end{equation}
and by Kh. and T. Hessami Pilehrood \cite[Theorem 1]{PP08} by applying Wilf--Zeilberger theory. Since the left-hand side of \eqref{GF1} can be written as the generating function of $\zeta(3+2r+4s)$,
$$\sum_{r=0}^{\infty}\sum_{s=0}^{\infty}\binom{r+s}{r}\zeta(3+2r+4s)a^{2r}b^{4s},$$
it follows that, by extracting the coefficients for $(r,s)=(0,0)$
and $(r,s)=(1,0)$, we obtain the Ap\'ery-like identities
\begin{equation}\label{S12}
\zeta(3)=\frac{5}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k^3\binom{2k}{k}}
\quad\mbox{and}\quad
\zeta(5)=\frac{1}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{\binom{2k}{k}}
\left(\frac{4}{k^5}-\frac{5H_{k-1}(2)}{k^3}\right)
\end{equation}
where $H_n(s)=\sum_{j=1}^n\frac{1}{j^s}$ is the {\sl harmonic sum} of weight $s$. For more details about Ap\'ery-like series see also \cite{AG99,BB97,PP08,Ch17}.
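Both series in \eqref{S12} converge at a geometric rate (the terms decay roughly like $4^{-k}$), so they are easy to check numerically. The following Python snippet (an illustrative sanity check, not part of the argument) compares the partial sums with reference values of $\zeta(3)$ and $\zeta(5)$:

```python
from math import comb

ZETA3 = 1.2020569031595942854    # reference value of zeta(3)
ZETA5 = 1.0369277551433699263    # reference value of zeta(5)

def apery_zeta3(N=60):
    # zeta(3) = (5/2) * sum_{k>=1} (-1)^(k-1) / (k^3 * C(2k,k))
    return 2.5 * sum((-1) ** (k - 1) / (k ** 3 * comb(2 * k, k))
                     for k in range(1, N + 1))

def apery_zeta5(N=60):
    # zeta(5) = (1/2) * sum_{k>=1} (-1)^(k-1)/C(2k,k) * (4/k^5 - 5*H_{k-1}(2)/k^3)
    H2, total = 0.0, 0.0         # H2 carries the harmonic sum H_{k-1}(2)
    for k in range(1, N + 1):
        total += (-1) ** (k - 1) / comb(2 * k, k) * (4 / k ** 5 - 5 * H2 / k ** 3)
        H2 += 1 / k ** 2
    return total / 2

print(abs(apery_zeta3() - ZETA3), abs(apery_zeta5() - ZETA5))  # both tiny
```

Already $N=60$ terms give both values to full double precision.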
Here we consider a similar bivariate formula
\begin{equation}\label{GF2}
\sum_{k=1}^{\infty}\frac{1}{k^2-ak-b^2}
=\sum_{k=1}^{\infty}\frac{(3k-a)}{k\binom{2k}{k}}
\cdot\frac{\prod_{j=1}^{k-1}(j^2-a^2-4b^2)}{\prod_{j=1}^{k} (j^2-aj-b^2)}
\end{equation}
where the left-hand side is the generating function of $\zeta(2+r+2s)$,
$$\sum_{r=0}^{\infty}\sum_{s=0}^{\infty}
\binom{r+s}{r}\zeta(2+r+2s)a^rb^{2s}.$$
For $a=0$, \eqref{GF2} yields a formula due to D. H. Bailey, J. M. Borwein, and D. M. Bradley,
$$\sum_{s=0}^{\infty}
\zeta(2+2s)b^{2s}=\sum_{k=1}^{\infty}\frac{1}{k^2-b^2}
=3\sum_{k=1}^{\infty}\frac{1}{\binom{2k}{k}}
\cdot\frac{\prod_{j=1}^{k-1}(j^2-4b^2)}{\prod_{j=1}^{k} (j^2-b^2)}$$
which appeared in \cite[Theorem 1.1]{BBB06}.
Moreover, for $(r,s)=(1,0)$ and $(r,s)=(0,1)$, we get the Ap\'ery-like identities
\begin{align}\label{S34}
\zeta(3)=\sum_{k=1}^{\infty}\frac{1}{\binom{2k}{k}}
\left(\frac{2}{k^3}+\frac{3H_{k-1}(1)}{k^2}\right)\quad\mbox{and}\quad
\zeta(4)=3\sum_{k=1}^{\infty}\frac{1}{\binom{2k}{k}}
\left(\frac{1}{k^4}-\frac{3H_{k-1}(2)}{k^2}\right).
\end{align}
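The two series in \eqref{S34} also converge geometrically, so they can be checked numerically as well. The snippet below (again an illustrative check only) compares the partial sums against $\zeta(3)$ and $\zeta(4)=\pi^4/90$:

```python
from math import comb, pi

ZETA3 = 1.2020569031595942854    # reference value of zeta(3)
ZETA4 = pi ** 4 / 90             # zeta(4) in closed form

def zeta3_series(N=80):
    # zeta(3) = sum_{k>=1} (2/k^3 + 3*H_{k-1}(1)/k^2) / C(2k,k)
    H1, total = 0.0, 0.0         # H1 carries H_{k-1}(1)
    for k in range(1, N + 1):
        total += (2 / k ** 3 + 3 * H1 / k ** 2) / comb(2 * k, k)
        H1 += 1 / k
    return total

def zeta4_series(N=80):
    # zeta(4) = 3 * sum_{k>=1} (1/k^4 - 3*H_{k-1}(2)/k^2) / C(2k,k)
    H2, total = 0.0, 0.0         # H2 carries H_{k-1}(2)
    for k in range(1, N + 1):
        total += (1 / k ** 4 - 3 * H2 / k ** 2) / comb(2 * k, k)
        H2 += 1 / k ** 2
    return 3 * total

print(abs(zeta3_series() - ZETA3), abs(zeta4_series() - ZETA4))
```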
Replacing $a$ by $2a$ and then letting $x^2 = a^2 + b^2$
in \eqref{GF2} we find the equivalent identity
$$\sum_{k=1}^{\infty}\frac{1}{(k-a)^2-x^2}
=\sum_{k=1}^{\infty}\frac{(3k-2a)}{k\binom{2k}{k}}
\cdot\frac{\prod_{j=1}^{k-1}(j^2-4x^2)}{\prod_{j=1}^{k} ((j-a)^2-x^2)}$$
which has been proved by Kh. and T. Hessami Pilehrood \cite[(24)]{PP12a}.
Again, in the same spirit as what was done for \eqref{GF1}, our proof of the identity \eqref{GF2} reduces to showing the following new finite identity
\begin{equation}\label{ID2}
\sum_{k=1}^n
\binom{2k}{k}\frac{3k-2n+a}{k^2-a^2}\cdot\prod_{j=1}^{k-1}
\frac{(j-n)(j-n+a)}{j^2-a^2}=\frac{2}{n-a}.
\end{equation}
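Since \eqref{ID2} is an identity of rational functions of $a$, it can be tested exactly in rational arithmetic; the Python sketch below (illustrative only) evaluates the left-hand side for non-integer rational values of $a$ and compares it with $2/(n-a)$:

```python
from fractions import Fraction
from math import comb

def lhs_ID2(n, a):
    # Exact evaluation of the left-hand side of the finite identity (ID2).
    total, prod = Fraction(0), Fraction(1)   # prod = product over j = 1..k-1
    for k in range(1, n + 1):
        total += comb(2 * k, k) * (3 * k - 2 * n + a) / (k ** 2 - a ** 2) * prod
        prod *= (k - n) * (k - n + a) / (k ** 2 - a ** 2)
    return total

for n in range(1, 9):
    for a in (Fraction(1, 3), Fraction(-2, 7)):  # generic non-integer values of a
        assert lhs_ID2(n, a) == Fraction(2) / (n - a)
print("identity (ID2) verified for n = 1..8")
```

Non-integer values of $a$ keep every factor $k^2-a^2$ nonzero, so no cancellation issues arise.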
In \cite[Theorem 4.2]{Ta10} the author established that for any prime $p>5$,
\begin{align}\label{CO1}
&\sum_{k=1}^{p-1}\frac{1}{k}\binom{2k}{k}
\equiv -\frac{8H_{p-1}(1)}{3}\pmod{p^4},\\
\label{CO2}
&\sum_{k=1}^{p-1}\frac{(-1)^{k}}{k^2}\binom{2k}{k}
\equiv \frac{4}{5}\left(\frac{H_{p-1}(1)}{p}+2pH_{p-1}(3)\right)\pmod{p^4}.
\end{align}
Thanks to the finite identities \eqref{ID1} and \eqref{ID2}, we managed to improve congruence \eqref{CO1} and to show several other congruences. The main results are as follows: for any prime $p>5$,
\begin{align}
\label{CO4b}
&\sum_{k=1}^{p-1}\frac{1}{k^3}\binom{2k}{k}\equiv
-\frac{2H_{p-1}(1)}{p^2}\pmod{p^2},\\
\label{CO4c}
&\sum_{k=1}^{p-1}\binom{2k}{k}\frac{H_k(2)}{k}
\equiv \frac{2H_{p-1}(1)}{3p^2}\pmod{p^2}.
\end{align}
These congruences are known modulo $p$ (see \cite[Theorem 2]{PP12}), and they confirm modulo $p^2$ the following conjecture of Z.-W. Sun: for each prime $p>7$,
\begin{align*}
&\sum_{k=1}^{p-1}\frac{1}{k^3}\binom{2k}{k}\equiv
-\frac{2H_{p-1}(1)}{p^2}-\frac{13H_{p-1}(3)}{27}\pmod{p^4},\\
&\sum_{k=1}^{p-1}\binom{2k}{k}\frac{H_k(2)}{k}
\equiv \frac{2H_{p-1}(1)}{3p^2}-\frac{38H_{p-1}(3)}{81}\pmod{p^3}.
\end{align*}
The first congruence appeared in \cite[Conjecture 1.1]{SunZW11} and the second one in \cite[Conjecture 5.1]{SunZW15}.
\section{Preliminaries concerning multiple harmonic sums}
We define the {\sl multiple harmonic sum} as
$$H_n(s_1,\dots,s_r)=\sum_{1\leq k_1<k_2<\cdots<k_r\leq n}\frac{1}{k_1^{s_1} k_2^{s_2}\cdots k_r^{s_r}}$$
where $n\geq r>0$ and each $s_i$ is a positive integer. The sum $s_1+s_2+\dots+s_r$ is the weight of the multiple sum. Furthermore, by $\{s_1, s_2, \dots, s_j\}^m$ we denote the
sequence of length $mj$ with $m$ repetitions of $(s_1, s_2,\dots, s_j)$.
\noindent By \cite[Theorem 5.1]{SunZH00}, for any prime $p>s+2$ we have
\begin{align*}
H_{p-1}(s)\equiv
\begin{cases}
\displaystyle -\frac{s(s+1)}{2(s+2)}\,p^2\,B_{p-s-2} \pmod{p^3} &\mbox{if $s$ is odd,}\vspace{1mm}\\
\displaystyle \frac{s}{s+1}\,p\,B_{p-s-1} \pmod{p^2} &\mbox{if $s$ is even.}
\end{cases}
\end{align*}
where $B_n$ denotes the $n$-th Bernoulli number.
\noindent Let $p>5$ be a prime, then by \cite[Theorem 2.1]{Ta10},
\begin{equation}\label{ta}
H_{p-1}(2)\equiv - \frac{2H_{p-1}(1)}{p}-\frac{pH_{p-1}(3)}{3}\pmod{p^4}.
\end{equation}
Moreover, by \cite[Lemma 3]{PP12},
$$H_{p-1}(1,2)\equiv -\frac{3H_{p-1}(1)}{p^2}-\frac{5H_{p-1}(3)}{12}\pmod{p^3}$$
and by \cite[Proposition 3.7]{Za08} and \cite[Theorem 4.5]{PPT14}
$$H_{p-1}(1,1,2)\equiv -\frac{11H_{p-1}(3)}{12p}\pmod{p^2}\;,\;
H_{p-1}(1,1,1,2)\equiv -\frac{5H_{p-1}(3)}{6p^2}\pmod{p}.$$
Finally, by \cite[Theorem 3.2]{Za08},
$$H_{p-1}(2,2)\equiv \frac{H_{p-1}(3)}{3p}\;,\;
H_{p-1}(1,3)\equiv \frac{3H_{p-1}(3)}{4p} \pmod{p^2}$$
and by \cite[Theorem 3.5]{Za08},
$$H_{p-1}(2,1,2)\equiv 0\;,\;
H_{p-1}(1,2,2)\equiv \frac{5H_{p-1}(3)}{4p^2}\;,\;
H_{p-1}(1,1,3)\equiv -\frac{5H_{p-1}(3)}{12p^2} \pmod{p}.$$
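All of the congruences above can be tested for a specific prime with exact rational arithmetic. The following Python sketch (an illustration added here, not part of the cited sources) computes multiple harmonic sums and verifies \eqref{ta} and the congruence for $H_{p-1}(1,2)$ at $p=7$, after multiplying by a suitable power of $p$ so that all denominators become coprime to $p$:

```python
from fractions import Fraction

def H(n, *s):
    # Multiple harmonic sum H_n(s_1,...,s_r): sum over 1 <= k_1 < ... < k_r <= n
    # of 1/(k_1^{s_1} ... k_r^{s_r}); recursion on the outermost index k_r.
    if not s:
        return Fraction(1)
    *head, last = s
    return sum((H(k - 1, *head) / k ** last for k in range(len(s), n + 1)),
               Fraction(0))

p = 7  # any prime p > 5 works here
H1, H2, H3 = H(p - 1, 1), H(p - 1, 2), H(p - 1, 3)

# Check H_{p-1}(2) == -2H_{p-1}(1)/p - p*H_{p-1}(3)/3 (mod p^4), i.e. Eq. (ta):
# multiplying the difference by p clears p from the denominator.
d = p * (H2 + 2 * H1 / p + p * H3 / 3)
assert d.numerator % p ** 5 == 0

# Check H_{p-1}(1,2) == -3H_{p-1}(1)/p^2 - 5H_{p-1}(3)/12 (mod p^3).
d = p ** 2 * (H(p - 1, 1, 2) + 3 * H1 / p ** 2 + 5 * H3 / 12)
assert d.numerator % p ** 5 == 0
print("harmonic-sum congruences verified for p =", p)
```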
\section{Proofs of the generating function \eqref{GF2} and the related combinatorial identity \eqref{ID2}}
By partial fraction decomposition with respect to $b^2$, we get
$$\frac{\prod_{j=1}^{k-1}(j^2-a^2-4b^2)}{\prod_{j=1}^{k} (j^2-aj-b^2)}=
\sum_{n=1}^k\frac{C_{n,k}(a)}{n^2-an-b^2}$$
where
$$C_{n,k}(a)=\frac{\prod_{j=1}^{k-1} (j^2-(a-2n)^2)}{\prod_{j=1,j\not=n}^{k} (j-n)(j+n-a)}.$$
Hence, by inverting the summations order, the identity \eqref{GF2} can be written as
$$\sum_{n=1}^{\infty}\frac{1}{n^2-an-b^2}
=\sum_{k=1}^{\infty}\frac{(3k-a)}{k\binom{2k}{k}}
\sum_{n=1}^k\frac{C_{n,k}(a)}{n^2-an-b^2}
=\sum_{n=1}^{\infty}\frac{1}{n^2-an-b^2}
\sum_{k=n}^{\infty}\frac{(3k-a)C_{n,k}(a)}{k\binom{2k}{k}}.$$
It follows that \eqref{GF2} holds as soon as
\begin{equation}\label{x1}
1=\sum_{k=n}^{\infty}\frac{(3k-a)C_{n,k}(a)}{k\binom{2k}{k}}=
\sum_{k=n}^{\infty}\frac{(3k-a)}{k\binom{2k}{k}}\cdot\frac{\prod_{j=1}^{k-1} (j^2-(a-2n)^2)}{\prod_{j=1,j\not=n}^{k} (j-n)(j+n-a)}.
\end{equation}
Following the same approach as in \cite{Ri04} for the proof of \eqref{GF1}, the above formula is equivalent to the finite combinatorial identity
\begin{equation}\label{x2}
\sum_{k=1}^n
\binom{2k}{k}(3k-a)\,
\frac{\prod_{j=1}^{k-1}(j-n)(j+n-a)}{\prod_{j=1}^{k}(j^2-(a-2n)^2)}=\frac{2}{a-n}.
\end{equation}
Both identities \eqref{x1} and \eqref{x2} are consequences of the next theorem after setting $z=2n-a$.
\begin{theorem} For any positive integer $n$,
\begin{equation}\label{y2}
\sum_{k=1}^n
\binom{2k}{k}(3k-2n+z)\,
\frac{\prod_{j=1}^{k-1}(j-n)(j-n+z)}{\prod_{j=1}^{k} (j^2-z^2)}=\frac{2}{n-z},
\end{equation}
and
\begin{equation}\label{y1}
\sum_{k=n}^{\infty}\frac{(3k-2n+z)}{k\binom{2k}{k}}\cdot\frac{\prod_{j=1}^{k-1} (j^2-z^2)}{\prod_{j=1,j\not=n}^{k} (j-n)(j-n+z)}=1.
\end{equation}
\end{theorem}
\begin{proof}
Let
$$F(n,k)=\binom{2k}{k}(3k-2n+z)
\frac{\prod_{j=0}^{k-1}(j-n)(j-n+z)}{\prod_{j=1}^{k}(j^2-z^2)},
$$
and
$$G(n,k)=\frac{k(k^2-z^2)F(n,k)}{(2n-3k-z)(n+1-k)(n+1-k-z)}.$$
Then $(F,G)$ is a Wilf-Zeilberger pair, or WZ pair, which means that they satisfy the relation
$$F(n+1,k)-F(n,k)=G(n,k+1)-G(n,k).$$
In order to prove \eqref{y2}, it suffices to show that $S_n:=\sum_{k=1}^{n}F(n,k)=2n$; indeed, since the product in $F(n,k)$ starts at $j=0$, $F(n,k)$ equals $n(n-z)$ times the $k$-th summand in \eqref{y2}. Now
$S_1=F(1,1)=2$.
Moreover,
$$S_{n+1}-S_n=\sum_{k=1}^{n+1}F(n+1,k)-\sum_{k=1}^{n+1}F(n,k)=G(n,n+2)-G(n,1)=2$$
because $F(n,n+1)=G(n,n+2)=0$ and $G(n,1)=-2$.
In a similar way, we show \eqref{y1} by considering the WZ pair given by
$$F(n,k)=\frac{(3k-2n+z)}{k\binom{2k}{k}}\cdot\frac{\prod_{j=1}^{k-1} (j^2-z^2)}{\prod_{j=1,j\not=n}^{k} (j-n)(j-n+z)},
$$
and
$$G(n,k)=\frac{2(2k-1)(k-n)F(n,k)}{n(2n-3k-z)(n-z)}.$$
\end{proof}
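The WZ relation $F(n+1,k)-F(n,k)=G(n,k+1)-G(n,k)$ can be checked mechanically. The Python snippet below (an added sanity check) verifies it in exact rational arithmetic for a generic non-integer value of $z$; we restrict to $k\leq n-1$ so that no factor in the denominator of $G$ vanishes:

```python
from fractions import Fraction
from math import comb

def F(n, k, z):
    num = comb(2 * k, k) * (3 * k - 2 * n + z)
    for j in range(k):                  # the product in F starts at j = 0
        num *= (j - n) * (j - n + z)
    den = Fraction(1)
    for j in range(1, k + 1):
        den *= j ** 2 - z ** 2
    return num / den

def G(n, k, z):
    return (k * (k ** 2 - z ** 2) * F(n, k, z)
            / ((2 * n - 3 * k - z) * (n + 1 - k) * (n + 1 - k - z)))

z = Fraction(1, 5)        # a generic non-integer value keeps every factor nonzero
for n in range(2, 6):
    for k in range(1, n):  # k <= n-1, so G(n, k+1) is well defined
        assert F(n + 1, k, z) - F(n, k, z) == G(n, k + 1, z) - G(n, k, z)
assert all(G(n, 1, z) == -2 for n in range(1, 6))  # the boundary value used above
print("WZ relation verified")
```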
\section{More binomial identities}
Here we collect a few identities, apparently new, involving the binomial coefficients $\binom{2k}{k}$ and $\binom{n+k}{k}$ which will play a crucial role in the next sections.
\begin{theorem} For any positive integer $n$,
\begin{align}\label{Id0}
&\frac{3}{2}\sum_{k=1}^{n} \frac{1}{k}\binom{2k}{k}=
\sum_{k=1}^{n}\frac{1}{k}\binom{n+k}{k}+H_{n}(1)\\
\label{Id1}
&\sum_{k=1}^n\binom{2k}{k}\left(\frac{3H_{k}(1)}{2k}-\frac{1}{k^2}\right)
=\sum_{k=1}^{n}\binom{n+k}{k}\frac{H_k(1)}{k}-H_n(2)\\
\label{Id2}
&\sum_{k=1}^n\binom{2k}{k}\left(\frac{3H_{k}(2)}{k}-\frac{1}{2k^3}\right)
=\sum_{k=1}^{n}\binom{n+k}{k}\frac{H_k(2)+H_n(2)}{k}
+H_n(2)H_n(1)-H_n(1,2)
\end{align}
\end{theorem}
\begin{proof}
Let us consider the WZ pair
$$F(n,k)=\frac{1}{k}\binom{n+k}{k}\quad\mbox{and}\quad
G(n,k)=\frac{k}{(n+1)^2}\binom{n+k}{k}
$$
then
\begin{align*}
S_{n+1}-S_n&=F(n+1,n+1)+\sum_{k=1}^{n}(G(n,k+1)-G(n,k))\\
&=F(n+1,n+1)+G(n,n+1)-G(n,1)\\
&=\frac{3/2}{n+1}\binom{2(n+1)}{n+1}-\frac{1}{n+1}
\end{align*}
where $S_n:=\sum_{k=1}^{n}F(n,k)$. Thus
\begin{align*}
S_n=\frac{3}{2}\sum_{k=1}^n\frac{1}{k}\binom{2k}{k}-H_n(1)
\end{align*}
and we may conclude that \eqref{Id0} holds.
\noindent Now let $S_n^{(1)}:=\sum_{k=1}^{n}F(n,k)H_k(1)$ then
\begin{align*}
S_{n+1}^{(1)}-S_n^{(1)}&=F(n+1,n+1)H_{n+1}(1)\\
&\qquad+\sum_{k=1}^{n}\left(G(n,k+1)H_{k}(1)-G(n,k)\left(H_{k-1}(1)+\frac{1}{k}\right)\right)\\
&=F(n+1,n+1)H_{n+1}(1)+G(n,n+1)H_{n}(1)-\sum_{k=1}^{n}\frac{G(n,k)}{k}\\
&=\binom{2(n+1)}{n+1}\left(\frac{3H_{n+1}(1)}{2(n+1)}-\frac{1}{(n+1)^2}\right)+\frac{1}{(n+1)^2}
\end{align*}
where we used $\sum_{k=1}^n\binom{n+k}{k}=\frac{1}{2}\binom{2(n+1)}{n+1}-1$. Hence we find that
$$S_n^{(1)}=\sum_{k=1}^n\binom{2k}{k}\left(\frac{3H_{k}(1)}{2k}-\frac{1}{k^2}\right)+H_n(2)$$
which implies \eqref{Id1}.
\noindent Let $S_n^{(2)}:=\sum_{k=1}^{n}F(n,k)H_k(2)$ then
\begin{align*}
S_{n+1}^{(2)}-S_n^{(2)}&=F(n+1,n+1)H_{n+1}(2)\\
&\qquad+\sum_{k=1}^{n}\left(G(n,k+1)H_{k}(2)-G(n,k)\left(H_{k-1}(2)+\frac{1}{k^2}\right)\right)\\
&=F(n+1,n+1)H_{n+1}(2)+G(n,n+1)H_{n}(2)-\sum_{k=1}^{n}\frac{G(n,k)}{k^2}\\
&=\binom{2(n+1)}{n+1}\left(\frac{3H_{n+1}(2)}{2(n+1)}-\frac{1}{2(n+1)^3}\right)-\frac{S_n}{(n+1)^2}
\end{align*}
where we applied
$$
\sum_{k=1}^{n}\frac{G(n,k)}{k^2}=\frac{1}{(n+1)^2}\sum_{k=1}^{n}F(n,k)=\frac{S_n}{(n+1)^2}.$$ Therefore
\begin{align*}
S_n^{(2)}&=\sum_{k=1}^n\binom{2k}{k}\left(\frac{3H_{k}(2)}{2k}-\frac{1}{2k^3}\right)-\sum_{k=1}^n\frac{S_{k-1}}{k^2}\\
&=\sum_{k=1}^n\binom{2k}{k}\left(\frac{3H_{k}(2)}{2k}-\frac{1}{2k^3}\right)-\frac{3}{2}\sum_{k=1}^n\frac{1}{k^2}\sum_{j=1}^{k-1}\frac{1}{j}\binom{2j}{j}+H_n(1,2)\\
&=\sum_{k=1}^n\binom{2k}{k}\left(\frac{3H_{k}(2)}{2k}-\frac{1}{2k^3}\right)-\frac{3}{2}\sum_{j=1}^{n }\frac{1}{j}\binom{2j}{j}(H_n(2)-H_j(2))+H_n(1,2)\\
&=\sum_{k=1}^n\binom{2k}{k}\left(\frac{3H_{k}(2)}{k}-\frac{1}{2k^3}\right)-\frac{3H_n(2)}{2}\sum_{k=1}^{n}\frac{1}{k}\binom{2k}{k}+H_n(1,2)\\
\end{align*}
and the proof of \eqref{Id2} is complete.
\end{proof}
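Since \eqref{Id0}--\eqref{Id2} are identities between rational numbers for each fixed $n$, they can be verified exactly; the following Python snippet (illustrative only) checks them for $n=1,\dots,8$:

```python
from fractions import Fraction
from math import comb

def check(n):
    H1, H2 = [Fraction(0)], [Fraction(0)]      # H1[k] = H_k(1), H2[k] = H_k(2)
    for k in range(1, n + 1):
        H1.append(H1[-1] + Fraction(1, k))
        H2.append(H2[-1] + Fraction(1, k ** 2))
    H12 = sum((H1[k - 1] / k ** 2 for k in range(2, n + 1)), Fraction(0))  # H_n(1,2)

    lhs0 = Fraction(3, 2) * sum((Fraction(comb(2 * k, k), k)
                                 for k in range(1, n + 1)), Fraction(0))
    rhs0 = sum((Fraction(comb(n + k, k), k) for k in range(1, n + 1)), Fraction(0)) + H1[n]
    assert lhs0 == rhs0                                                    # (Id0)

    lhs1 = sum((comb(2 * k, k) * (3 * H1[k] / (2 * k) - Fraction(1, k ** 2))
                for k in range(1, n + 1)), Fraction(0))
    rhs1 = sum((comb(n + k, k) * H1[k] / k for k in range(1, n + 1)), Fraction(0)) - H2[n]
    assert lhs1 == rhs1                                                    # (Id1)

    lhs2 = sum((comb(2 * k, k) * (3 * H2[k] / k - Fraction(1, 2 * k ** 3))
                for k in range(1, n + 1)), Fraction(0))
    rhs2 = (sum((comb(n + k, k) * (H2[k] + H2[n]) / k for k in range(1, n + 1)), Fraction(0))
            + H2[n] * H1[n] - H12)
    assert lhs2 == rhs2                                                    # (Id2)

for n in range(1, 9):
    check(n)
print("identities (Id0)-(Id2) verified for n = 1..8")
```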
\section{Proofs of the main supercongruences}
\begin{theorem} For any prime $p>3$,
\begin{equation}\label{CO5}
\sum_{k=1}^{p-1} \frac{1}{k}\binom{2k}{k}\equiv
-\frac{8H_{p-1}(1)}{3}-\frac{5p^2H_{p-1}(3)}{3}\pmod{p^5}\\
\end{equation}
Moreover, for any prime $p>5$,
\begin{align*}
&\sum_{k=1}^{p-1}\frac{1}{k^3}\binom{2k}{k}\equiv
-\frac{2H_{p-1}(1)}{p^2}\pmod{p^2},\\
&\sum_{k=1}^{p-1}\binom{2k}{k}\frac{H_k(2)}{k}
\equiv \frac{2H_{p-1}(1)}{3p^2}\pmod{p^2}.
\end{align*}
\end{theorem}
\begin{proof} We first note that
\begin{equation}\label{nkk}
\binom{p-1+k}{k}=\frac{p}{k}\binom{p+k-1}{k-1}=
\frac{p}{k}\prod_{j=1}^{k-1}\left(1+\frac{p}{j}\right)
=\frac{1}{k}\sum_{j=0}^{k-1}p^{j+1}H_{k-1}(\{1\}^{j}).
\end{equation}
Therefore, by \eqref{Id0} with $n=p-1$ we obtain the desired congruence \eqref{CO5},
\begin{align*}
\sum_{k=1}^{p-1} \frac{1}{k}\binom{2k}{k}&=\frac{2}{3}\left(H_{p-1}(1)+\sum_{j=0}^{p-2}p^{j+1}H_{p-1}(\{1\}^{j},2) \right)\\
&\equiv \frac{2}{3}\left(H_{p-1}(1)+pH_{p-1}(2)+p^2 H_{p-1}(1,2)\right.\\
&\qquad\qquad \left. +p^3 H_{p-1}(1,1,2)+p^4 H_{p-1}(1,1,1,2)\right)\\
&\equiv-\frac{8H_{p-1}(1)}{3}-\frac{5p^2H_{p-1}(3)}{3}\pmod{p^5}.
\end{align*}
By letting $z=2n$ in \eqref{y2} we have
$$\sum_{k=1}^n
\binom{2k}{k}\frac{k}{k^2-4n^2}\,
\prod_{j=1}^{k-1}\frac{j^2-n^2}{j^2-4n^2}=-\frac{2}{3n}.$$
Let $n=p>5$ be a prime and move the $p$-th term of the sum to the right-hand side,
$$\sum_{k=1}^{p-1}
\frac{1}{k}\binom{2k}{k}\frac{1}{1-\frac{4p^2}{k^2}}\,
\prod_{j=1}^{k-1}\frac{1-\frac{p^2}{j^2}}{1-\frac{4p^2}{j^2}}=\frac{2}{3p}\left(
\frac{1}{2}\binom{2p}{p}\prod_{j=1}^{p-1}\frac{1-\frac{p^2}{j^2}}{1-\frac{4p^2}{j^2}}-1\right).$$
The left-hand side modulo $p^4$ is congruent to
$$\sum_{k=1}^{p-1}
\frac{1}{k}\binom{2k}{k}\left(1+\frac{4p^2}{k^2}\right)\,
\prod_{j=1}^{k-1}\left(1+\frac{3p^2}{j^2}\right)\equiv
\sum_{k=1}^{p-1}
\frac{1}{k}\binom{2k}{k}+p^2\sum_{k=1}^{p-1}
\binom{2k}{k}\left(\frac{1}{k^3}+\frac{3H_k(2)}{k}\right).
$$
On the other hand, by \cite[Theorem 2.4]{Ta10},
\begin{equation}\label{ppp}
\frac{1}{2}\binom{2p}{p}\equiv 1+2pH_{p-1}(1)+\frac{2p^3H_{p-1}(3)}{3}
\equiv 1-p^2H_{p-1}(2)-\frac{p^4H_{p-1}(4)}{2}
\pmod{p^6},
\end{equation}
the right-hand side is
\begin{align*}
\frac{1}{2}\binom{2p}{p}\prod_{j=1}^{p-1}\frac{1-\frac{p^2}{j^2}}{1-\frac{4p^2}{j^2}}&\equiv \frac{1}{2}\binom{2p}{p}\prod_{j=1}^{p-1}\left(1+\frac{3p^2}{j^2}+\frac{12p^4}{j^4}\right)\\
&\equiv \left(1-p^2H_{p-1}(2)-\frac{p^4H_{p-1}(4)}{2}\right)
\\&\qquad \cdot\left(1+3p^2H_{p-1}(2)+12p^4H_{p-1}(4)+9p^4H_{p-1}(2,2)\right)\\
&\equiv1+2p^2H_{p-1}(2)+p^4\left(\frac{17H_{p-1}(4)}{2}+3H_{p-1}(2,2)\right)\\
&\equiv1+2p^2H_{p-1}(2)
\pmod{p^5}.\end{align*}
Here we used that $2H_{p-1}(2,2)=(H_{p-1}(2))^2-H_{p-1}(4)\equiv 0 \pmod{p}$.
Finally, by \eqref{CO5},
\begin{equation}\label{D1}
\sum_{k=1}^{p-1}
\binom{2k}{k}\left(\frac{1}{k^3}+\frac{3H_k(2)}{k}\right)\equiv
\frac{8H_{p-1}(1)}{3p^2}+\frac{5H_{p-1}(3)}{3}+\frac{4pH_{p-1}(2)}{3}
\equiv 0 \pmod{p^2}.
\end{equation}
where we used \eqref{ta}.
\noindent By \eqref{Id2}, with $n=p-1$, we have that
\begin{align}\label{D2}
\sum_{k=1}^{p-1}\binom{2k}{k}\left(\frac{3H_{k}(2)}{k}-\frac{1}{2k^3}\right)
&=p\sum_{k=1}^{p-1}\prod_{j=1}^{k-1}\left(1+\frac{p}{j}\right)
\frac{H_k(2)+H_{p-1}(2)}{k^2}\nonumber\\
&\qquad
+H_{p-1}(2)H_{p-1}(1)-H_{p-1}(1,2)\nonumber\\
&\equiv p\sum_{k=1}^{p-1}\frac{H_k(2)}{k^2}-H_{p-1}(1,2)
\nonumber\\
&=pH_{p-1}(2,2)+pH_{p-1}(4) -H_{p-1}(1,2)\nonumber\\
&\equiv -H_{p-1}(1,2)\equiv\frac{3H_{p-1}(1)}{p^2}\pmod{p^2}.
\end{align}
The proof is complete as soon as we suitably combine congruences \eqref{D1} and \eqref{D2}.
\end{proof}
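As a concrete sanity check (not part of the proof), the supercongruences can be tested for a specific prime with exact rational arithmetic, checking that the numerator of the difference is divisible by the required power of $p$ while the denominator stays coprime to $p$:

```python
from fractions import Fraction
from math import comb

p = 7  # any prime p > 5
H1 = sum((Fraction(1, k) for k in range(1, p)), Fraction(0))
H3 = sum((Fraction(1, k ** 3) for k in range(1, p)), Fraction(0))

# (CO5): sum_{k<p} C(2k,k)/k == -8*H_{p-1}(1)/3 - 5*p^2*H_{p-1}(3)/3  (mod p^5)
d_CO5 = (sum((Fraction(comb(2 * k, k), k) for k in range(1, p)), Fraction(0))
         + 8 * H1 / 3 + 5 * p ** 2 * H3 / 3)
assert d_CO5.denominator % p != 0 and d_CO5.numerator % p ** 5 == 0

# (CO4b): sum_{k<p} C(2k,k)/k^3 == -2*H_{p-1}(1)/p^2  (mod p^2)
# (by Wolstenholme, p^2 divides the numerator of H_{p-1}(1), so no p survives
# in the denominator of d_CO4b)
d_CO4b = (sum((Fraction(comb(2 * k, k), k ** 3) for k in range(1, p)), Fraction(0))
          + 2 * H1 / p ** 2)
assert d_CO4b.denominator % p != 0 and d_CO4b.numerator % p ** 2 == 0
print("supercongruences (CO5) and (CO4b) hold for p =", p)
```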
\section{Finale: two Ap\'ery-like congruences}
The following congruences are related to the second series in \eqref{S12} and to the first series in \eqref{S34}.
\begin{theorem} For any prime $p>3$,
\begin{align}\label{CO5b}
&\sum_{k=1}^{p-1}\binom{2k}{k}\left(\frac{2}{k^2}-\frac{3H_k(1)}{k}\right)
\equiv \frac{2H_{p-1}(1)}{p}+3pH_{p-1}(3)\pmod{p^4},\\
\label{CO5c}
&\sum_{k=1}^{p-1}(-1)^{k}\binom{2k}{k}\left(\frac{4}{k^4}+\frac{5H_k(2)}{k^2}\right)
\equiv -H_{p-1}(4) \pmod{p^2}.
\end{align}
\end{theorem}
\begin{proof} By \eqref{Id1} with $n=p-1$ and by $\eqref{nkk}$
\begin{align*}
\sum_{k=1}^{p-1}\binom{2k}{k}\left(\frac{3H_{k}(1)}{2k}-\frac{1}{k^2}\right)
&=\sum_{k=1}^{p-1}\frac{H_k(1)}{k^2}\sum_{j=0}^{k-1}p^{j+1}H_{k-1}(\{1\}^{j})-H_{p-1}(2)\\
&\equiv p\sum_{k=1}^{p-1}\frac{H_{k-1}(1)+\frac{1}{k}}{k^2}\left(1+pH_{k-1}(1)+p^2H_{k-1}(1,1)\right)-H_{p-1}(2)\\
&\equiv -H_{p-1}(2)+pH_{p-1}(1,2)+pH_{p-1}(3)\\
&\quad +2p^2H_{p-1}(1,1,2)+p^2H_{p-1}(2,2)
+p^2H_{p-1}(1,3)\\
&\quad+3p^3H_{p-1}(1,1,1,2)+p^3H_{p-1}(2,1,2)+p^3H_{p-1}(1,2,2)\\
&\quad +p^3 H_{p-1}(1,1,3)\\
&\equiv -\frac{H_{p-1}(1)}{p}-\frac{3pH_{p-1}(3)}{2}\pmod{p^4}
\end{align*}
where at the last step we applied the results mentioned in the preliminaries.
By comparing the coefficient of $a^2$ in the expansion of both sides of \eqref{ID1} at $a=0$ we have
$$\sum_{k=1}^n
\binom{2k}{k}\frac{5k^2}{4n^4+k^4}
\prod_{j=1}^{k-1}
\frac{n^4-j^4}{4n^4+j^4}
\left(
\frac{1}{5k^2}+\sum_{j=1}^{k-1}\frac{1}{n^2+j^2}
-2\sum_{j=1}^{k}\frac{2n^2+j^2}{4n^4+j^4}
\right)
=-\frac{2}{n^4}.$$
Let $n=p>3$ be a prime and move the $p$-th term of the sum on the left to the right-hand side. Then the left-hand side modulo $p^2$ is congruent to
$$\sum_{k=1}^{p-1}
(-1)^{k-1}\binom{2k}{k}\frac{5}{k^2}
\left(
\frac{1}{5k^2}+H_{k-1}(2)
-2H_{k}(2)
\right)=\sum_{k=1}^{p-1}(-1)^k\binom{2k}{k}\left(\frac{4}{k^4}+\frac{5H_k(2)}{k^2}\right).$$
The right-hand side multiplied by $p^4$ is
\begin{equation}\label{rhs}
-2-\binom{2p}{p}
\prod_{j=1}^{p-1}
\frac{p^4-j^4}{4p^4+j^4}
\left(
\frac{1}{5}+p^2\sum_{j=1}^{p-1}\frac{1}{p^2+j^2}
-2p^2\sum_{j=1}^{p}\frac{2p^2+j^2}{4p^4+j^4}
\right)
\end{equation}
and it remains to verify that it is congruent to $-p^4H_{p-1}(4)$ modulo $p^6$.
We note that
\begin{align*}
&\prod_{j=1}^{p-1}
\frac{p^4-j^4}{4p^4+j^4}\equiv 1-5p^4H_{p-1}(4)
\pmod{p^6},\\
&p^2\sum_{j=1}^{p-1}\frac{1}{p^2+j^2}\equiv p^2H_{p-1}(2)-p^4H_{p-1}(4)
\pmod{p^6},\\
&2p^2\sum_{j=1}^{p}\frac{2p^2+j^2}{4p^4+j^4}
\equiv\frac{6}{5}+2p^2H_{p-1}(2)+4p^4H_{p-1}(4)
\pmod{p^6}.
\end{align*}
Hence, by \eqref{ppp}, \eqref{rhs} simplifies to
$$
-2+2\left( 1-p^2H_{p-1}(2)-\frac{p^4H_{p-1}(4)}{2}\right)
\left( 1+p^2H_{p-1}(2)\right)
\equiv -p^4H_{p-1}(4)\pmod{p^6}$$
and the proof is finished.
\end{proof}
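These two congruences can likewise be tested numerically for small primes. The following Python sketch (an added check) verifies \eqref{CO5b} modulo $p^4$ and \eqref{CO5c} modulo $p^2$ for $p=5,7,11,13$:

```python
from fractions import Fraction
from math import comb

def check_prime(p):
    H1, H2 = [Fraction(0)], [Fraction(0)]      # H1[k] = H_k(1), H2[k] = H_k(2)
    for k in range(1, p):
        H1.append(H1[-1] + Fraction(1, k))
        H2.append(H2[-1] + Fraction(1, k ** 2))
    H3 = sum((Fraction(1, k ** 3) for k in range(1, p)), Fraction(0))
    H4 = sum((Fraction(1, k ** 4) for k in range(1, p)), Fraction(0))

    # (CO5b): sum C(2k,k)(2/k^2 - 3H_k(1)/k) == 2H_{p-1}(1)/p + 3p*H_{p-1}(3) (mod p^4)
    d = (sum((comb(2 * k, k) * (Fraction(2, k ** 2) - 3 * H1[k] / k)
              for k in range(1, p)), Fraction(0))
         - 2 * H1[p - 1] / p - 3 * p * H3)
    assert d.denominator % p != 0 and d.numerator % p ** 4 == 0

    # (CO5c): sum (-1)^k C(2k,k)(4/k^4 + 5H_k(2)/k^2) == -H_{p-1}(4)  (mod p^2)
    d = (sum(((-1) ** k * comb(2 * k, k) * (Fraction(4, k ** 4) + 5 * H2[k] / k ** 2)
              for k in range(1, p)), Fraction(0))
         + H4)
    assert d.denominator % p != 0 and d.numerator % p ** 2 == 0
    return True

for p in (5, 7, 11, 13):
    assert check_prime(p)
print("congruences (CO5b) and (CO5c) verified for p = 5, 7, 11, 13")
```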
\section{Introduction}
Deep learning has proven to be extremely successful in a wide variety of tasks \citep{krizhevsky2012imagenet,lecun2015deep,mnih2015human,silver2016mastering,wu2016google}. Despite its tremendous success, the reasons behind the good generalization of these methods to unseen data are not fully understood (and, arguably, remain somewhat of a mystery to this day). Initially, this success was mostly attributed to the special deep architecture of these models. However, in the past few years, it has been widely noted that the architecture is only part of the story, and, in fact, the optimization algorithms used to train these models, typically stochastic gradient descent (SGD) and its variants, play a key role in learning parameters that generalize well.
In particular, it has been observed that since these deep models are \emph{highly over-parameterized}, they have a lot of capacity, and can fit to virtually any (even random) set of data points \citep{zhang2016understanding}. In other words, highly over-parameterized models can ``interpolate'' the data, so much so that this regime has been called the ``interpolating regime'' \citep{mababe2017}. In fact, on a given dataset, the loss function often has (uncountably infinitely) many \emph{global} minima, which can have drastically different generalization properties, and it is not hard to construct ``trivial'' global minima that do not generalize. Which minimum among all the possible minima we pick in practice is determined by the optimization algorithm that we use for training the model.
Even though it may seem at first that, because of the non-convexity of the loss function, the stochastic descent algorithms may get stuck in local minima or saddle points, in practice they almost always achieve a global minimum \citep{kawaguchi2016deep,zhang2016understanding,lee2016gradient}, which perhaps can also be justified by the fact that these models are highly over-parameterized. What is even more interesting is that not only do these stochastic descent algorithms converge to global minima, but they converge to ``special'' ones that generalize well, even in the absence of any explicit regularization or early stopping \citep{zhang2016understanding}. Furthermore, it has been observed that even among the common optimization algorithms, namely SGD or its variants (AdaGrad \citep{duchi2011adaptive}, RMSProp \citep{tieleman2012lecture}, Adam \citep{kingma2014adam}, etc.), there is a discrepancy in the solutions achieved by different algorithms and their generalization capabilities \citep{wilson2017marginal}, which again highlights the important role of the optimization algorithm in generalization.
There have been many attempts in recent years to explain the behavior and properties of these stochastic optimization algorithms, and many interesting insights have been obtained \citep{achille2017emergence,chaudhari2018stochastic,shwartz2017opening,soltanolkotabi2017theoretical}. In particular, it has been argued that the optimization algorithms perform an \emph{implicit regularization} \citep{neyshabur2017geometry,ma2017implicit,gunasekar2017implicit,gunasekar2018characterizing,soudry2017implicit,gunasekar2018implicit} while optimizing the loss function, which is perhaps why the solution generalizes well.
Despite this recent progress, most results explaining the behavior of the optimization algorithm, even for SGD, are limited to linear or very simplistic models. Therefore, a general characterization of the behavior of stochastic descent algorithms for more general models would be of great interest.
\subsection{Our Contribution}
In this paper, we present an alternative explanation of the behavior of SGD, and more generally, the stochastic mirror descent (SMD) family of algorithms, which includes SGD as a special case. We do so by obtaining a fundamental identity for such algorithms (see Lemmas~\ref{lem:sum} and \ref{lem:sum_SMD}). Using these identities, we show that for general nonlinear models and general loss functions, when the step size is sufficiently small, SMD (and therefore also SGD) is the optimal solution of a certain minimax filtering (or online learning) problem. The minimax formulation is inspired by, and rooted in, $H^{\infty}$ filtering theory, which was originally developed in the 1990s in the context of robust control theory \citep{hassibi1999indefinite,simon2006optimal,hassibi1996h}, and we generalize several results from this literature, e.g., \citep{hassibi1994hoo,kivinen2006p}. Furthermore, we show that many properties recently proven in the learning/optimization literature, such as the implicit regularization of SMD in the over-parameterized linear case---when convergence happens---\citep{gunasekar2018characterizing}, naturally follow from this theory. The theory also allows us to establish new results, such as the convergence (in a deterministic sense) of SMD in the over-parameterized linear case. We also use the theory developed in this paper to provide some speculative arguments into why SMD (and SGD) may have similar convergence and implicit regularization properties in the so-called ``highly over-parameterized'' nonlinear setting (where the number of parameters far exceeds the number of data points) common to deep learning.
In an attempt to make the paper easier to follow, we first describe the main ideas and results in a simpler setting, namely, SGD on the square loss of linear models, in Section~\ref{sec:warmup}, and mention the connections to $H^{\infty}$ theory. The full results, for SMD on a general class of loss functions and for general nonlinear models, are presented in Section~\ref{sec:SMD}. We demonstrate some implications of this theory, such as deterministic convergence and implicit regularization, in Section~\ref{sec:implications}, and we finally conclude with some remarks in Section~\ref{sec:conclusion}. Most of the formal proofs are relegated to the appendix.
\section{Preliminaries}
Denote the training dataset by $\{(x_i,y_i): i=1,\dots,n\}$, where $x_i\in\mathbb{R}^d$ are the inputs, and $y_i\in\mathbb{R}$ are the labels. We assume that the data is generated through a (possibly nonlinear) model $f_i(w)=f(x_i,w)$ with some parameter vector $w\in\mathbb{R}^m$, plus some noise $v_i$, i.e., $y_i=f(x_i,w)+v_i$ for $i=1,\dots,n$. The noise can be due to actual measurement error, or it can be due to modeling error (if the model $f(x_i,\cdot)$ is not rich enough to fully represent the data), or it can be a combination of both. As a result, we do not make any assumptions on the noise (such as stationarity, whiteness, Gaussianity, etc.).
Since typical deep models have a lot of capacity and are highly over-parameterized, we are particularly interested in the over-parameterized (so-called interpolating) regime, i.e., when $m>n$. In this case, there are many parameter vectors $w$ (in fact, uncountably infinitely many) that are consistent with the observations. We denote the set of these parameter vectors by
\begin{equation}
\mathcal{W} = \left\{w\in\mathbb{R}^m\mid y_i=f(x_i,w),\ i=1,\dots,n\right\}.
\end{equation}
(Note the absence of the noise term, since in this regime we can fully interpolate the data.) The set $\mathcal{W}$ is typically an ($m-n$)-dimensional manifold and depends only on the training data $\{(x_i,y_i): i=1,\dots,n\}$ and nonlinear model $f(\cdot,\cdot)$.
The total loss on the training set (empirical risk) can be denoted by
$L(w)=\sum_{i=1}^n L_i(w)$, where $L_i(\cdot)$ is the loss on the individual data point $i$. We assume that the loss $L_i(\cdot)$ depends only on the residual, i.e., the difference between the prediction and the true label. In other words,
\begin{equation}
L_i(w) = l(y_i-f(x_i,w)) ,
\end{equation}
where $l(\cdot)$ can be any nonnegative differentiable function with $l(0)=0$. Typical examples of $l(\cdot)$ include square ($l_2$) loss, Huber loss, etc.
We remark that, in the interpolating regime, every parameter vector in the set $\mathcal{W}$ renders each individual loss zero, i.e., $L_i(w) = 0$, for all $w\in\mathcal{W}$.
\section{Warm-up: Revisiting SGD on Square Loss of Linear Models}\label{sec:warmup}
In this section, we describe the main ideas and results in a simple setting, i.e., stochastic gradient descent (SGD) for the square loss of a linear model, and we revisit some of the results from $H^{\infty}$ theory \citep{hassibi1999indefinite,simon2006optimal}. In this case, the data model is $y_i = x_i^Tw+v_i$, $i=1,\ldots, n$ (where there is no assumption on $v_i$) and the loss function is $L_i(w) = \frac{1}{2}(y_i-x_i^Tw)^2$.
Assuming the data is indexed randomly, the SGD updates are defined as $w_i = w_{i-1}-\eta\nabla L_i(w_{i-1})$, where $\eta>0$ is the step size or learning rate.\footnote{For the sake of simplicity of presentation, we present the results for constant step size. We show in the appendix that all the results extend to the case of time-varying step-size.} The update in this case can be expressed as
\begin{equation}\label{eq:SGD}
w_i = w_{i-1}+\eta \left(y_i-x_i^Tw_{i-1}\right)x_i ,
\end{equation}
for $i\geq 1$ (for $i>n$, we can either cycle through the data, or select them at random).
\begin{remark}
We should point out that, when the step size $\eta$ is fixed, the SGD recursions have no hope of converging, unless there exists a weight vector $w$ which perfectly interpolates the data $\{(x_i,y_i): i=1,\dots,n\}$. The reason being that, if this is not the case, for any estimated weight vector in SGD there will exist at least one data point that has a nonzero instantaneous gradient and that will therefore move the estimate by a non-vanishing amount.\footnote{Of course, one may get convergence by having a vanishing step size $\eta_i\rightarrow 0$. However, in this case, convergence is not surprising---since, effectively, after a while the weights are no longer being updated---and the more interesting question is ``what'' the recursion converges to.} It is for this reason that the results on the convergence of SGD and SMD (Sections~\ref{sec:warmup_convergence} and \ref{sec:implications}) pertain to the interpolating regime.
\end{remark}
\subsection{Conservation of Uncertainty}
Prior to the $i$-th step of any optimization algorithm, we have two sources of uncertainty: our uncertainty about the unknown parameter vector $w$, which we can represent by $w-w_{i-1}$, and our uncertainty about the $i$-th data point $(x_i,y_i)$, which we can represent by the noise $v_i$. After the $i$-th step, the uncertainty about $w$ is transformed to $w-w_i$. But what about the uncertainty in $v_i$? What is it transformed to? In fact, we will view any optimization algorithm as one which redistributes the uncertainties at time $i-1$ to new uncertainties at time $i$. The two uncertainties, or error terms, we will consider are $e_i$ and $e_{p,i}$, defined as follows.
\begin{equation}
e_i := y_i-x_i^Tw_{i-1} , \text{ and } e_{p,i} := x_i^Tw-x_i^Tw_{i-1} .
\end{equation}
$e_i$ is often referred to as the {\em innovations} and is the error in predicting $y_i$, given the input $x_i$. $e_{p,i}$ is sometimes called the {\em prediction error}, since it is the error in predicting the noiseless output $x_i^Tw$, i.e., in predicting what the best output of the model is. In the absence of noise, $e_i$ and $e_{p,i}$ coincide.
One can show that SGD transforms the uncertainties in the fashion specified by the following lemma, which was first noted in \citep{hassibi1996h}.
\begin{lemma}\label{lem:equality}
For any parameter $w$ and noise values $\{v_i\}$ that satisfy $y_i =x_i^Tw+v_i$ for $i=1,\dots,n$, and for any step size $\eta>0$, the following relation holds for the SGD iterates $\{w_i\}$ given in Eq.~\eqref{eq:SGD}
\begin{equation}\label{eq:equality}
\|w-w_{i-1}\|^2 + \eta v_i^2 = \|w-w_i\|^2 + \eta\left(1-\eta\|x_i\|^2\right)e_i^2 + \eta e_{p,i}^2,\quad \forall i\geq 1.
\end{equation}
\end{lemma}
As illustrated in Figure~\ref{fig:SGD}, this means that each step of SGD can be thought of as a lossless transformation of the input uncertainties to the output uncertainties, with the specified coefficients.
Once one knows this result, proving it is straightforward. To see that, note that we can write $v_i=y_i-x_i^Tw$ as $v_i=(y_i-x_i^Tw_{i-1})-(x_i^Tw-x_i^Tw_{i-1})$. Multiplying both sides by $\sqrt{\eta}$, we have
\begin{equation}\label{eq:SGDequality_proof_eq1}
\sqrt{\eta}v_i=\sqrt{\eta}(y_i-x_i^Tw_{i-1})-\sqrt{\eta}(x_i^Tw-x_i^Tw_{i-1}) .
\end{equation}
On the other hand, subtracting both sides of the update rule~\eqref{eq:SGD} from $w$ yields
\begin{equation}\label{eq:SGDequality_proof_eq2}
w-w_i = (w-w_{i-1})-\eta\left(y_i-x_i^Tw_{i-1}\right)x_i.
\end{equation}
Squaring both sides of \eqref{eq:SGDequality_proof_eq1} and \eqref{eq:SGDequality_proof_eq2}, and subtracting the results leads to Equation~\eqref{eq:equality}.
A nice property of Equation~\eqref{eq:equality} is that, if we sum over all $i=1,\dots,T$, the terms $\|w-w_i\|^2$ and $\|w-w_{i-1}\|^2$ on different sides cancel out telescopically, leading to the following important lemma.
\begin{lemma}\label{lem:sum}
For any parameter $w$ and noise values $\{v_i\}$ that satisfy $y_i =x_i^Tw+v_i$ for $i=1,\dots,n$, any initialization $w_0$, any step size $\eta>0$, and any number of steps $T\geq 1$, the following relation holds for the SGD iterates $\{w_i\}$ given in Eq.~\eqref{eq:SGD}
\begin{empheq}[box=\fbox]{equation}\label{eq:sum}
\|w-w_{0}\|^2 + \eta \sum_{i=1}^T v_i^2 = \|w-w_T\|^2 + \eta\sum_{i=1}^T\left(1-\eta\|x_i\|^2\right)e_i^2 + \eta \sum_{i=1}^Te_{p,i}^2 .
\end{empheq}
\end{lemma}
As we will show next, this identity captures most properties of SGD, and implies several important results in a very transparent fashion. For this reason, this relation can be viewed as a ``fundamental identity'' for SGD.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\columnwidth]{SGD.png}
\end{center}
\caption{Illustration of Lemma~\ref{lem:equality}. Each step of SGD can be viewed as a transformation of the uncertainties with the right coefficients.}\label{fig:SGD}
\end{figure}
\subsection{Minimax Optimality of SGD}
For a given horizon $T$, consider the following minimax problem:
\begin{equation}
\min_{\{w_i\}}~\max_{w,\{v_i\}}~\frac{\|w-w_T\|^2+\eta\sum_{i=1}^Te_{p,i}^2}{\|w-w_0\|^2+\eta\sum_{i=1}^Tv_i^2} .
\label{hinf}
\end{equation}
This minimax problem is motivated by the theory of $H^\infty$ control and estimation \citep{francis,hassibi1999indefinite,basarbernhard}. The denominator of the cost function can be interpreted as the {\em energy of the uncertainties} and consists of two terms, $\|w-w_0\|^2$, the energy of our uncertainty of the unknown weight vector at the beginning of learning when we have not yet observed the data, and $\sum_{i=1}^Tv_i^2$, the energy of the uncertainty in the measurements. The numerator denotes the energy of the estimation errors in an {\em online setting}. The first term, $\|w-w_T\|^2$, is the energy of our uncertainty of the unknown weight vector after we have observed $T$ data points, and the second term, $\sum_{i=1}^Te_{p,i}^2 = \sum_{i=1}^T(x_i^Tw-x_i^Tw_{i-1})^2$, is the energy of the prediction error, i.e., how well we can predict the true uncorrupted output $x_i^Tw$ using measurements up to time $i-1$. The parameter $\eta$ weighs the two energy terms relative to each other. In this minimax problem, nature has access to the unknown weight vector $w$ and the noise sequence $v_i$ and would like to maximize the energy gain from the uncertainties to prediction errors (so that the estimator behaves poorly), whereas the estimator attempts to minimize the energy gain. Such an estimator is referred to as $H^\infty$-optimal and is robust because it safeguards against the worst-case noise. It is also conservative---for the exact same reason.\footnote{The setting described is somewhat similar to the setting of online learning, where one considers the relative performance of an online learner who needs to predict, compared to a clairvoyant one who has access to the entire data set \citep{onlineschwatrz,onlinehazan}. In online learning, the relative performance is described as a difference, rather than as a ratio in $H^\infty$ theory, and is referred to as {\em regret}.}
\begin{theorem} \label{thm:SGDminimax}
For any initialization $w_0$, any
step size
$0<\eta\leq\min_i\frac{1}{\|x_i\|^2}$, and any number of steps $T\geq 1$, the stochastic gradient descent iterates $\{w_i\}$ given in Eq.~\eqref{eq:SGD} are the optimal solution to the minimax problem (\ref{hinf}). Furthermore, the optimal minimax value (achieved by SGD) is $1$.
\end{theorem}
This theorem explains the observed robustness and conservatism of SGD. Despite the conservativeness of safeguarding against the worst-case disturbance, this choice may actually be the rational thing to do in situations where we do not have much knowledge about the disturbances, which is the case in many machine learning tasks.
Theorem~\ref{thm:SGDminimax} holds for any horizon $T\geq 1$. A variation of this result, i.e., when $T\to\infty$ and without the $\|w-w_T\|^2$ term in the numerator, was first shown in \citep{hassibi1994hoo,hassibi1996h}. In that case, the ratio $\frac{\eta\sum_{i=1}^{\infty}e_{p,i}^2}{\|w-w_0\|^2+\eta\sum_{i=1}^{\infty}v_i^2}$ in the minimax problem is in fact the \emph{$H^{\infty}$ norm} of the transfer operator that maps the unknown disturbances $(w-w_0,\{\sqrt{\eta}v_i\})$ to the prediction errors $\{\sqrt{\eta}e_{p,i}\}$.
We end this section with a stochastic interpretation of SGD \citep{hassibi1996h}. Assume that the true weight vector has a normal distribution with mean $w_0$ and covariance matrix $\eta I$, and that the noise $v_i$ are iid standard normal. Then SGD solves
\begin{equation}
\min_{\{w_i\}} \mathbb{E} \exp\left(\frac{1}{2}\cdot\left(\|w-w_T\|^2+\eta\sum_{i=1}^T(x_i^Tw-x_i^Tw_{i-1})^2\right)\right),
\label{risk-sense}
\end{equation}
and no exponent larger than $\frac{1}{2}$ is possible, in the sense that no estimator can keep the expected cost finite. This means that, in the Gaussian setting, SGD minimizes the expected value of an {\em exponential} quadratic cost. The algorithm is thus very adverse to large estimation errors, as they are penalized exponentially larger than moderate ones.
\subsection{Convergence and Implicit Regularization}\label{sec:warmup_convergence}
The over-parameterized (interpolating) linear regression regime is a simple but instructive setting, recently considered in some papers \citep{gunasekar2018characterizing,zhang2016understanding}. In this setting, we can show that, for sufficiently small step, i.e. $0<\eta\leq\min_i\frac{1}{\|x_i\|^2}$, SGD always converges to a special solution among all the solutions $\mathcal{W}$, in particular to the one with the smallest $l_2$ distance from $w_0$. In other words, if, for example, initialized at zero, SGD implicitly regularizes the solution according to an $l_2$ norm. This result follows directly from Lemma~\ref{lem:sum}.
To see that, note that in the interpolating case the $v_i$ are zero,
and we have $e_i=y_i-x_i^Tw_{i-1}=x_i^Tw-x_i^Tw_{i-1}=e_{p,i}$. Hence, identity \eqref{eq:sum} reduces to
\begin{equation}\label{eq:equality_noiseless_sum}
\|w-w_{0}\|^2 = \|w-w_{T}\|^2 + \eta\sum_{i=1}^T\left(2-\eta\|x_i\|^2\right)e_i^2 ,
\end{equation}
for all $w\in\mathcal{W}$. By dropping the $\|w-w_T\|^2$ term and taking $T\to\infty$, we have $\eta\sum_{i=1}^\infty\left(2-\eta\|x_i\|^2\right)e_i^2 \leq \|w-w_{0}\|^2$, which implies that, for $0<\eta<\min_i\frac{2}{\|x_i\|^2}$, we must have $e_i\to 0$ as $i\to\infty$. When $e_i=y_i-x_i^Tw_{i-1}$ goes to zero, the updates in \eqref{eq:SGD} vanish and we get convergence, i.e., $w\to w_{\infty}$. Further, again because $e_i\to 0$, all the data points are being fit, which means $w_\infty\in\mathcal{W}$.
Moreover, it is again very straightforward to see from \eqref{eq:equality_noiseless_sum} that the solution converged to is the one with minimum Euclidean norm from the initial point. To see that, notice that the summation term in Eq.~\eqref{eq:equality_noiseless_sum} is \emph{independent of $w$} (it depends only on $x_i,y_i$ and $w_0$). Therefore, by taking $T\to\infty$ and minimizing both sides with respect to $w\in\mathcal{W}$, we get
\begin{equation}
w_{\infty}=\argmin_{w\in\mathcal{W}}\|w-w_0\| .
\end{equation}
Once again, this also implies that if SGD is initialized at the origin, i.e., $w_0=0$, then it converges to the minimum-$l_2$-norm solution, among all the solutions.
\section{Main Result: General Characterization of Stochastic Mirror Descent}\label{sec:SMD}
Stochastic Mirror Descent (SMD) \citep{nemirovskii1983problem,beck2003mirror,cesa2012mirror,zhou2017stochastic} is one of the most widely used families of algorithms for stochastic optimization, which includes SGD as a special case. In this section, we provide a characterization of the behavior of general SMD, on \emph{general} loss functions and \emph{general} nonlinear models, in terms of a fundamental identity and minimax optimality.
For any strictly convex and differentiable potential $\psi(\cdot)$, the corresponding SMD updates are defined as
\begin{equation}
w_i = \argmin_w\ \eta w^T\nabla L_i(w_{i-1})+D_{\psi}(w,w_{i-1}) ,
\end{equation}
where
\begin{equation}
D_{\psi}(w,w_{i-1})=\psi(w)-\psi(w_{i-1})-\nabla\psi(w_{i-1})^T (w-w_{i-1})
\end{equation}
is the Bregman divergence with respect to the potential function $\psi(\cdot)$. Note that $D_\psi(\cdot,\cdot)$ is non-negative, convex in its first argument, and that, due to strict convexity, $D_\psi(w,w') = 0$ iff $w=w'$. Moreover, the updates can be equivalently written as
\begin{equation}\label{eq:SMD}
\nabla\psi(w_i)=\nabla\psi(w_{i-1})-\eta\nabla L_i(w_{i-1}) ,
\end{equation}
which are uniquely defined because of the invertibility of $\nabla\psi$ (again, implied by the strict convexity of $\psi(\cdot)$). In other words, stochastic mirror descent can be thought of as transforming the variable $w$, with a \emph{mirror map} $\nabla\psi(\cdot)$, and performing the SGD update on the new variable. For this reason, $\nabla\psi(w)$ is often referred to as the \emph{dual} variable, while $w$ is the \emph{primal} variable.
Different choices of the potential function $\psi(\cdot)$ yield different optimization algorithms, which, as we will see, result in different implicit regularizations. To name a few examples: For the potential function $\psi(w)=\frac{1}{2}\|w\|^2$, the Bregman divergence is $D_{\psi}(w,w')=\frac{1}{2}\|w-w'\|^2$, and the update rule reduces to that of SGD. For $\psi(w)=\sum_j w_j\log w_j$, the Bregman divergence becomes the unnormalized relative entropy (Kullback-Leibler divergence) $D_{\psi}(w,w')=\sum_j w_j\log\frac{w_j}{w'_j}-\sum_j w_j + \sum_j w'_j$, which corresponds to the exponentiated gradient descent (aka the exponential weights) algorithm.
Other examples include $\psi(w)=\frac{1}{2}\|w\|_Q^2=\frac{1}{2}w^TQw$ for a positive definite matrix $Q$, which yields $D_{\psi}(w,w')=\frac{1}{2}(w-w')^TQ(w-w')$, and the $q$-norm squared $\psi(w)=\frac{1}{2}\|w\|_q^2$, which with $\frac{1}{p}+\frac{1}{q}=1$ yields the $p$-norm algorithms \citep{grove2001general,gentile2003robustness}.
In order to derive an equivalent ``conservation law'' for SMD, similar to the identity \eqref{eq:equality}, we first need to define a new measure for the difference between the parameter vectors $w$ and $w'$ according to the loss function $L_i(\cdot)$. To that end, let us define
\begin{equation}\label{eq:D_Li}
D_{L_i}(w,w'):=L_i(w)-L_i(w')-\nabla L_i(w')^T(w-w'),
\end{equation}
which is defined in a similar way to a Bregman divergence for the loss function.\footnote{It is easy to verify that for linear models and quadratic loss we obtain $D_{L_i}(w,w') = (x_i^Tw-x_i^Tw')^2$.} The difference though is that, unlike the potential function of the Bregman divergence, the loss function $L_i(\cdot) = \ell(y_i-f(x_i,\cdot))$ need not be convex, even when $\ell(\cdot)$ is, due to the nonlinearity of $f(\cdot,\cdot)$. As a result, $D_{L_i}(w,w')$ is not necessarily non-negative. The following result, which is the general counterpart of Lemma~\ref{lem:equality}, states the identity that characterizes SMD updates in the general setting.
\begin{lemma}\label{lem:equality_SMD}
For any (nonlinear) model $f(\cdot,\cdot)$, any differentiable loss $l(\cdot)$, any parameter $w$ and noise values $\{v_i\}$ that satisfy $y_i =f(x_i,w)+v_i$ for $i=1,\dots,n$, and any step size $\eta>0$, the following relation holds for the SMD iterates $\{w_i\}$ given in Eq.~\eqref{eq:SMD}
\begin{equation}\label{eq:equality_SMD}
D_{\psi}(w,w_{i-1})+\eta l(v_i) = D_{\psi}(w,w_i) +E_i(w_i,w_{i-1}) +\eta D_{L_i}(w,w_{i-1}) ,
\end{equation}
for all $i\geq 1$, where
\begin{equation}
E_i(w_i,w_{i-1}) := D_{\psi}(w_i,w_{i-1})-\eta D_{L_i}(w_i,w_{i-1})+\eta L_i(w_i) .
\end{equation}
\end{lemma}
The proof is provided in Appendix~\ref{sec:proof_of_equality_SMD}. Note that $E_i(w_i,w_{i-1})$ is not a function of $w$. Furthermore, even though it does not have to be nonnegative in general, for $\eta$ sufficiently small, it becomes nonnegative, because the Bregman divergence $D_{\psi}(.,.)$ is nonnegative.
Summing Equation~\eqref{eq:equality_SMD} over all $i=1,\dots,T$ leads to the following identity, which is the general counterpart of Lemma~\ref{lem:sum}.
\begin{lemma}\label{lem:sum_SMD}
For any (nonlinear) model $f(\cdot,\cdot)$, any differentiable loss $l(\cdot)$, any parameter $w$ and noise values $\{v_i\}$ that satisfy $y_i =f(x_i,w)+v_i$ for $i=1,\dots,n$, any initialization $w_0$, any step size $\eta>0$, and any number of steps $T\geq 1$, the following relation holds for the SMD iterates $\{w_i\}$ given in Eq.~\eqref{eq:SMD}
\begin{empheq}[box=\fbox]{equation}\label{eq:sum_SMD}
D_{\psi}(w,w_{0})+\eta \sum_{i=1}^T l(v_i) = D_{\psi}(w,w_T) +\sum_{i=1}^T \left( E_i(w_i,w_{i-1}) +\eta D_{L_i}(w,w_{i-1}) \right) .
\end{empheq}
\end{lemma}
We should reiterate that Lemma~\ref{lem:sum_SMD} is a fundamental property of SMD, which allows one to prove many important results, in a direct way.
In particular, in this setting, we can show that SMD is minimax optimal in a manner that generalizes Theorem~\ref{thm:SGDminimax} of Section~\ref{sec:warmup}, in the following 3 ways: 1) General potential $\psi(\cdot)$, 2) General model $f(\cdot,\cdot)$, and 3) General loss function $l(\cdot)$. The result is as follows.
\begin{theorem}\label{thm:SMD}
Consider any (nonlinear) model $f(\cdot,\cdot)$, any non-negative differentiable loss $l(\cdot)$ with the property $l(0)=l'(0)=0$, and any initialization $w_0$. For sufficiently small step size, i.e., for any $\eta>0$ for which $\psi(w)-\eta L_i(w)$ is convex for all $i$, and for any number of steps $T\geq 1$, the SMD iterates $\{w_i\}$ given by Eq.~\eqref{eq:SMD}, w.r.t. any strictly convex potential $\psi(\cdot)$, is the optimal solution to the following minimization problem
\begin{equation}
\min_{\{w_i\}}\max_{w,\{v_i\}}\frac{D_{\psi}(w,w_T)+\eta\sum_{i=1}^{T}D_{L_i}(w,w_{i-1})}{D_{\psi}(w,w_0)+\eta\sum_{i=1}^{T}l(v_i)} .
\end{equation}
Furthermore, the optimal value (achieved by SMD) is $1$.
\end{theorem}
The proof is provided in Appendix~\ref{sec:proof_of_thm_SMD}.
For the case of square loss and a linear model, the result reduces to the following form.
\begin{corollary}\label{cor:SMD_squareloss_linear}
For $L_i(w)=\frac{1}{2}(y_i-x_i^Tw)^2$, for any initialization $w_0$, any sufficiently small step size, i.e., $0<\eta\leq\frac{\alpha}{\|x_i\|^2}$, and any number of steps $T\geq 1$, the SMD iterates $\{w_i\}$ given by Eq.~\eqref{eq:SMD}, w.r.t. any $\alpha$-strongly convex potential $\psi(\cdot)$, is the optimal solution to
\begin{equation}
\min_{\{w_i\}}\max_{w,\{v_i\}}\frac{D_{\psi}(w,w_T)+\frac{\eta}{2}\sum_{i=1}^{T}e_{p,i}^2}{D_{\psi}(w,w_0)+\frac{\eta}{2}\sum_{i=1}^{T} v_i^2} .
\end{equation}
The optimal value (achieved by SMD) is $1$.
\end{corollary}
We should remark that Theorem~\ref{thm:SMD} and Corollary~\ref{cor:SMD_squareloss_linear} generalize several known results in the literature. In particular, as mentioned in Section~\ref{sec:warmup}, the result of \citep{hassibi1994hoo} is a special case of Corollary~\ref{cor:SMD_squareloss_linear} for $\psi(w)=\frac{1}{2}\|w\|^2$. Furthermore, our result generalizes the result of \citep{kivinen2006p}, which is the special case for the $p$-norm algorithms, again, with square loss and a linear model. Another interesting connection to the literature is that it was shown in \citep{hassibi1995h} that SGD is \emph{locally} minimax optimal, with respect to the $H^{\infty}$ norm. Strictly speaking, our result is not a generalization of that result; however, Theorem~\ref{thm:SMD} can be interpreted as SGD/SMD being \emph{globally} minimax optimal, but with respect to different metrics in the numerator and denominator. Namely, the uncertainty about the weight vector $w$ is measured by the Bregman divergence of the potential, the uncertainty about the noise by the loss, and the prediction error by the ``Bregman-divergence-like'' expression of the loss.
\section{Convergence and Implicit Regularization in Over-Parameterized Models}\label{sec:implications}
In this section, we show some of the implications of the theory developed in the previous section. In particular, we show convergence and implicit regularization, in the over-parameterized (so-called interpolating) regime, for general SMD algorithms. We first consider the linear interpolating case, which has been studied in the literature, and show that the known results follow naturally from our Lemma~\ref{lem:sum_SMD}. Further, we shall obtain some {\em new} convergence results. Finally, we discuss the implications for nonlinear models, and argue that the same results hold {\em qualitatively} in highly-overparameterized settings, which is the typical scenario in deep learning.
\subsection{Over-Parameterized Linear Models}
In this setting, the $v_i$ are zero, $\mathcal{W} = \left\{w\mid y_i=x_i^Tw,\ i=1,\dots,n\right\}$, and $L_i(w)=l(y_i-x_i^Tw)$, with any differentiable loss $l(\cdot)$. Therefore, Eq.~\eqref{eq:sum_SMD} reduces to
\begin{equation}\label{eq:sum_SMD_noiseless}
D_{\psi}(w,w_{0}) = D_{\psi}(w,w_T) +\sum_{i=1}^T \left( E_i(w_i,w_{i-1}) +\eta D_{L_i}(w,w_{i-1}) \right) ,
\end{equation}
for all $w\in\mathcal{W}$, where
\begin{align}
D_{L_i}(w,w_{i-1}) &=L_i(w)-L_i(w_{i-1})-\nabla L_i(w_{i-1})^T(w-w_{i-1})\\
&= 0-l(y_i-x_i^Tw_{i-1})+l'(y_i-x_i^Tw_{i-1})x_i^T(w-w_{i-1})\\
&= -l(y_i-x_i^Tw_{i-1})+l'(y_i-x_i^Tw_{i-1})(y_i-x_i^Tw_{i-1})
\end{align}
which is notably \emph{independent of $w$}. As a result, we can easily minimize both sides of Eq.~\eqref{eq:sum_SMD_noiseless} with respect to $w\in\mathcal{W}$, which
leads to the following result.
\begin{proposition}\label{prop:ImpReg_SMD}
For any differentiable loss $l(\cdot)$, any initialization $w_0$, and any step size $\eta$, consider the SMD iterates given in Eq.~\eqref{eq:SMD} with respect to any strictly convex potential $\psi(\cdot)$. If the iterates converge to a solution $w_{\infty}\in\mathcal{W}$, then
\begin{equation}
w_{\infty}=\argmin_{w\in\mathcal{W}} D_{\psi}(w,w_0) .
\end{equation}
\end{proposition}
\begin{remark}
In particular, for the initialization $w_0=\argmin_{w\in\mathbb{R}^m} \psi(w)$, if the iterates converge to a solution $w_{\infty}\in\mathcal{W}$, then
\begin{equation}\label{eq:minimum_potential}
w_{\infty}=\argmin_{w\in\mathcal{W}} \psi(w) .
\end{equation}
\end{remark}
An equivalent form of Proposition~\ref{prop:ImpReg_SMD} has been shown recently in, e.g., \citep{gunasekar2018characterizing}.\footnote{To be precise, the authors in \citep{gunasekar2018characterizing} assume convergence to a global minimizer of the loss function $L(w)=\sum_{i=1}^n l(y_i-x_i^Tw)$, which with their assumption of the loss function $l(\cdot)$ having a unique finite root is equivalent to assuming convergence to a point $w_{\infty}\in\mathcal{W}$.} Other implicit regularization results have been shown in \citep{gunasekar2018implicit,soudry2017implicit} for classification problems, which are not discussed here.
Note that the result of \citep{gunasekar2018characterizing} does not say anything about \emph{whether the algorithm converges or not}. However, our fundamental identity of SMD (Lemma~\ref{lem:sum_SMD}) allows us to also establish convergence to the regularized point, for some common cases, which will be shown next.
What Proposition~\ref{prop:ImpReg_SMD} says is that depending on the choice of the potential function $\psi(\cdot)$, the optimization algorithm can perform an implicit regularization without any explicit regularization term. In other words, for any desired regularizer, if one chooses a potential function that approximates the regularizer, we can run the optimization without explicit regularization, and if it converges to a solution, the solution must be the one with the minimum potential.
In principle, one can choose the potential function in SMD for {\em any} desired convex regularization. For example, we can find the maximum entropy solution by taking the potential to be the negative entropy. Another illustrative example follows.
\noindent {\bf Example [Compressed Sensing]:} In compressed sensing, one seeks the sparsest solution to an under-determined (over-parameterized) system of linear equations. The surrogate convex problem one solves is:
\begin{equation}
\begin{array} {cl} \min & \|w\|_1 \\ \mbox{subject to} & y_i = x_i^Tw,~~~i=1,\ldots n \end{array}
\end{equation}
One cannot choose $\psi(w) = \|w\|_1$, since it is neither differentiable nor strictly convex. However, $\psi(w) = \|w\|_{1+\epsilon}$, for any $\epsilon>0$, can be used. Figure 4 shows a compressed sensing example, with $n=50$, $m=100$, and sparsity $k=10$. SMD was used with a step size of $\eta = 0.001$ and the potential function was $\psi(\cdot) = \|\cdot\|_{1.1}$. SMD converged to the true sparse solution after around 10,000 iterations. On this example, it was an order of magnitude faster than standard $l_1$ optimization.
\begin{figure}[thbp]
\begin{center}
\includegraphics[scale=.4]{Plot_notitle.png}
\end{center}
\label{fig:cs}
\caption{The training loss and actual error of stochastic mirror descent for compressed sensing. SMD recovers the actual sparse signal.}
\end{figure}
Next we establish \emph{convergence to the regularized point} for the convex case.
\begin{proposition}\label{prop:convergence_SMD}
Consider the following two cases.
\begin{enumerate}[label=(\roman*)]
\item $l(\cdot)$ is differentiable and convex and has a unique root at 0, $\psi(\cdot)$ is strictly convex, and $\eta>0$ is such that $\psi-\eta L_i$ is convex for all $i$.
\item $l(\cdot)$ is differentiable and quasi-convex, $l'(\cdot)$ is zero only at zero, $\psi(\cdot)$ is $\alpha$-strongly convex, and $0<\eta\leq\min_i\frac{\alpha |y_i-x_i^Tw_{i-1}|}{\|x_i\|^2|l'(y_i-x_i^Tw_{i-1})|}$.
\end{enumerate}
If either (i) or (ii) holds, then for any $w_0$, the SMD iterates given in Eq.~\eqref{eq:SMD} converge to
\begin{equation}
w_{\infty}=\argmin_{w\in\mathcal{W}} D_{\psi}(w,w_0) .
\end{equation}
\end{proposition}
The proof is provided in Appendix~\ref{sec:proof_of_prop_convergence_SMD}.
\subsection{Discussion of Highly Over-Parameterized Nonlinear Models}
Let us consider the highly-overparameterized nonlinear model
\begin{equation}
y_i = f(x_i,w),~~~i=1,\ldots,n,~~~~w\in \mathbb{R}^m
\end{equation}
where by highly-overparameterized we mean $m\gg n$. Since the model is highly over-parameterized, it is assumed that we can perfectly interpolate the data points $(x_i,y_i)$ so that the noise $v_i$ is zero. In this case, the set of parameter vectors that interpolate the data is given by
$\mathcal{W} = \{w \in \mathbb{R}^m\mid y_i = f(x_i,w),\ i=1,\ldots, n\}$,
and Eq.~\eqref{eq:sum_SMD}, again, reduces to
\begin{equation}
D_{\psi}(w,w_{0}) = D_{\psi}(w,w_T) +\sum_{i=1}^T \left( E_i(w_i,w_{i-1}) +\eta D_{L_i}(w,w_{i-1}) \right) ,
\label{eq:nonlinearnov}
\end{equation}
for all $w\in\mathcal{W}$. Our proofs of convergence and implicit regularization for SGD and SMD in the linear case relied on two facts: (i) $D_{L_i}(w,w_{i-1})$ was non-negative (this allowed us to show convergence), and (ii) $D_{L_i}(w,w_{i-1})$ was independent of $w$ (this allowed us to show implicit regularization). Unfortunately, neither of these hold in the nonlinear case.
However, they do hold in a {\em local} sense. In other words, (i) $D_{L_i}(w,w_{i-1})\geq 0$ for $w_{i-1}$ ``close enough'' to $w$ (see Figure~\ref{fig:local}), and (ii) $D_{L_i}(w,w_{i-1})$ is weakly dependent on $w$ for $w_{i-1}$ ``close enough.'' (Both statements can be made precise.)
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=.3]{local.jpg}
\end{center}
\caption{Non-negativity of $D_{L_i}(w,w_{i-1})$ for $w_{i-1}$ ``close enough'' to $w$.}
\label{fig:local}
\end{figure}
Now define
\begin{equation}
w_* = \mbox{arg}\min_{w\in{\cal W}}D_\psi(w,w_0) .
\end{equation}
Then one can show the following result.
\begin{theorem}
There exists an $\epsilon>0$, such that if $\|w_*-w_0\|<\epsilon$, then for sufficiently small step size $\eta>0$:
\begin{enumerate}
\item SMD iterates converge to a point $w_\infty\in{\cal W}$
\item $\|w_\infty-w_*\| = o(\epsilon)$
\end{enumerate}
\end{theorem}
This shows that if the initial condition is close enough, then we have convergence to a point $w_\infty$ that interpolates the data, and that $w_\infty$ is an order of magnitude closer to $w_*$ (the implicitly regularized solution) than the initial $w_0$ was. At first glance, this result seems rather dissatisfying. It relies on $w_0$ being close to the manifold ${\cal W}$ which appears hard to guarantee. We would now like to argue that in deep learning $w_0$ being close to ${\cal W}$ is often the case.
In the highly-overparameterized regime, $m\gg n$, and so the dimension of the manifold $\mathcal{W}$ is $m-n$, which is very large. Now if the $x_i$ are sufficiently random, then the tangent space to ${\cal W}$ at $w_*$ will be a randomly oriented affine subspace of dimension $m-n$. This means that any randomly chosen $w_0$ will whp have a very large component when projected onto $\mathcal{W}$. In particular, it can be shown that $\|w_*-w_0\|^2 = O(\frac{n}{m})\cdot\|y-f(x,w)\|^2$, where $y = \mbox{vec}(y_i,i=1,\ldots,n)$ and $f(x,w) = \mbox{vec}(f(x_i,w),i=1,\ldots,n)$. Thus, we may expect that, when $m\gg n$, the distance of any randomly chosen $w_0$ to $\mathcal{W}$ will be small and so SMD will converge to a point on $\mathcal{W}$ that approximately performs implicit regularization.
The gist of the argument is that (i) When $m\gg n$, any random initial condition is ``close'' to the $n-m$ dimensional solution manifold ${\cal W}$, (ii) when $w_0$ is ``close'' to $w_*$, then SMD converges to a point $w_\infty\in{\cal W}$, (iii) $w_\infty$ is ``an order of magnitude closer'' to $w_*$ than $w_0$ was, and (iv) thus, when highly overparamatrized, SMD converges to a point that exhibits implicit regularization.
Of course, this was a very heuristic argument that merits a much more careful analysis. But it is suggestive of the fact that SGD and SMD, when performed on highly-overparameterized nonlinear models, as occurs in deep learning, may exhibit implicit regularization.
\section{Concluding Remarks}\label{sec:conclusion}
We should remark that all the results stated throughout the paper extend to the case of time-varying step size $\eta_i$, with minimal modification. In particular, it is easy to show that in this case, the identity (the counterpart of Eq.~\eqref{eq:sum_SMD}) becomes
\begin{equation}
D_{\psi}(w,w_{0})+ \sum_{i=1}^T \eta_i l(v_i) = D_{\psi}(w,w_T) +\sum_{i=1}^T \left( E_i(w_i,w_{i-1}) +\eta_i D_{L_i}(w,w_{i-1}) \right) ,
\end{equation}
where $E_i(w_i,w_{i-1}) = D_{\psi}(w_i,w_{i-1})-\eta_i D_{L_i}(w_i,w_{i-1})+\eta_i L_i(w_i)$.
As a consequence, our main result will be the same as in Theorem~\ref{thm:SMD}, with the only difference that the small-step-size condition in this case is the convexity of $\psi(w)-\eta_iL_i(w)$ for all $i$, and the SMD with time-varying step size will be the optimal solution to the following minimax problem
\begin{equation}
\min_{\{w_i\}}\max_{w,\{v_i\}}\frac{D_{\psi}(w,w_T)+\sum_{i=1}^{T}\eta_iD_{L_i}(w,w_{i-1})}{D_{\psi}(w,w_0)+\sum_{i=1}^{T}\eta_il(v_i)} .
\end{equation}
Similarly, the convergence and implicit regularization results can be proven under the same conditions (See Appendix~\ref{sec:timevar} for more details on the time-varying case).
This paper opens up a variety of important directions for future work. Most of the analysis developed here is general, in terms of the \emph{model}, the \emph{loss function}, and the \emph{potential function}. Therefore, it would be interesting to study the implications of this theory for specific classes of models (such as different neural networks), specific losses, and specific mirror maps (which induce different regularization biases). Something for future work.
|
{
"timestamp": "2019-01-21T02:02:33",
"yymm": "1806",
"arxiv_id": "1806.00952",
"language": "en",
"url": "https://arxiv.org/abs/1806.00952"
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.